Why is Bloody Sunday so called? During an unauthorised civil rights march in Londonderry, Northern Ireland, on the afternoon of January 30, 1972, soldiers of the Parachute Regiment fired on unarmed protestors, killing 13 and wounding a further 13, one of whom subsequently died. The day became known as Bloody Sunday. Why had it come to this? The Troubles, as the struggle over Northern Ireland between the Protestant unionists and the Catholic nationalists was known, had broken out four years earlier. British soldiers had been deployed to the province to restore order. By 1972, the security situation had deteriorated significantly.
A MiddleWeb Blog What are the what-ifs of history? And how can puzzling over alternate histories help our students become more critical thinkers? The social studies and history teachers of Twitter recently discussed these questions, thanks to Sarah @WinchesterTeach and the awesome #sschat. In our own middle school classroom, we’ve always liked to include the “what ifs” of historical events in our teaching. For example:
▶ Our American expansion essay ends with “Explain what the United States would look like if the events you described had never happened. Tell the reader whether things would have been better or worse.”
▶ And the “so what” part of the conclusion for our causes of the Civil War essay asks students to: “Explain what the impact on the Civil War would have been had these events been dealt with differently.”
Image: Alternate US history map – Kenzer & Co.
Thinking about what-ifs helps students see the relationship between cause and effect; it allows them to see history as a dynamic series of choices, shows them that perspective is an important aspect of analyzing history, and shocks them out of the complacency of accepting history as some kind of destiny or a series of inevitable outcomes. Here are some themes from our recent #sschat: Many participants suggested their own “what-ifs” – turning points that could have changed history. Participants discussed some of the issues that can arise when using alternate reality scenarios in history – namely, that students can sometimes remember the what-if as having actually happened, rather than the events as they actually occurred. Several participants also made the point that the perspective from which the history is being related is hugely important when discussing both the alternate AND the historical scenarios. Teachers agreed that context is key when discussing these alternate scenarios: when students have a solid grasp of events as they actually occurred, they are more easily able to imagine the what-if scenario.
Why we use “what if” scenarios: In our classroom, we’re always looking for ways to connect the past that we study to the present we inhabit. “What if” scenarios are another opportunity for us to do this as history/social studies teachers. We can more easily imagine our own historical role in the present by considering these “what-if” scenarios; they remind us that nothing is certain to happen, that our choices are not pre-determined, and that while many things in history may seem to have been inevitable, people in the past were people, just like us, just like our students. What are some what-ifs of history that intrigue you? In what ways do you have your students think about alternate histories?
Sandstorms in Peru have revealed mysterious designs believed to have been etched into the desert thousands of years ago. The newly exposed geoglyphs, discovered last week by a pilot flying over the region, include a snake nearly 200 feet long as well as a bird and some llama-like creatures, reports the Independent. Other designs, known as the Nazca Lines, were first spotted in the desert region from the air in 1939, the Daily Mail notes, and archaeologists are now working to confirm that the new lines are genuine. The Nazca Lines—described by UNESCO as "a unique and magnificent artistic achievement that is unrivaled" elsewhere in the prehistoric world—are thought to have been etched into the desert by the Nazca people sometime before AD 500, and they include long lines and geometric shapes as well as the shapes of animals and plants. Many scholars believe they were used for astronomic rituals, but one of Peru's top experts tells El Comercio that the new finds confirm the relationship between "the ancient people who occupied this arid desert with rain and worship of water." (Another prehistoric find was made in Spain recently—the oldest human poop ever discovered.)
For feathered dinosaurs, it appears that fashion came before function. A new study of a dinosaur fossil found in northeast China has revealed that the dinosaur, Beipiaosaurus, not only had the soft downy feathers that have been spotted in other fossils, it also had a more primitive type of feather that appears to have been used only for peacock-like displays. These primitive feathers don’t cover the dinosaur’s entire body; they’re found only on the creature’s head, neck, and tail. The filaments couldn’t have generated lift, so they’re not flight worthy, and they’re too sparse to have retained the creature’s body heat. [Lead researcher Xing] Xu and his colleagues therefore speculate that the filaments served as display structures, just as many similarly placed feathers do on modern birds [Science News].
The feathers detected on Beipiaosaurus, which lived in the Cretaceous Period, have a very basic structure. The modern-day feathers sported by birds are elaborate constructions with numerous fibers that branch out from a central filament and hook together. This arrangement is so complicated that many scientists theorize it could have evolved only once…. But paleontologists have proposed that a variety of simpler structures — including peculiar, branched structures colloquially called “dinofuzz” — evolved before feathers [Science News]. The new discovery reveals an even earlier piece of the evolutionary puzzle: the proto-feathers that Beipiaosaurus sported on its head, neck, and tail are long filaments without any branches. The results also suggest that the first feathered dinosaurs evolved earlier than we thought, researchers reported in the Proceedings of the National Academy of Sciences [subscription required]. Similar quill-like structures have been found on Psittacosaurus, or “Parrot Lizard,” as well as some pterosaurs. The researchers therefore suspect the common ancestor of these creatures — along with Beipiaosaurus, which lived 125 million years ago — had the early feathers too [Discovery News].
The first rudimentary feathers may have appeared in the Middle Triassic Period about 235 million years ago, Xu suggests. The display feathers could have been used in mating rituals, Xu says, or in fights to defend territory. “Most previous studies suggest that insulation might have been the primary function for the first feathers, but our discovery supports that display represents one of the earliest functions for feathers,” Xu said, adding that “flight function appears very late in feather evolution.” The discovery negates the prior theory that feathers and flight co-evolved. It instead indicates pterosaurs, birds and other fliers recruited already existing feathers for flight [Discovery News].
Related content:
80beats: “Bizarre” and Fluffy Dino May Have Used Feathers to Attract Mates
The Loom: Shake Your Jurassic Tail Feather
80beats: What Color Were Feathered Dinosaurs and Prehistoric Birds?
DISCOVER: The Dragons of Liaoning, a tour of China’s rich fossil beds
Image: Zhao Chuang and Xing Lida
In a new study, scientists were able to restore partial hearing to deaf gerbils by implanting human embryonic stem cells in their ears. The breakthrough offers hope that one day a similar treatment may be developed to cure hearing loss in humans.
One cause of hearing loss is auditory neuropathy, the impairment of the auditory neurons that normally transmit sound signals from the ear to the brain. Researchers at the University of Sheffield in the UK sought to restore hearing in 18 gerbils whose auditory nerves had been experimentally damaged, by replacing the nerves with new ones derived from human embryonic stem cells. The undifferentiated embryonic stem cells were first subjected to chemicals to induce them into becoming auditory neurons. These new auditory neurons were then placed into the gerbils’ ears. Ten weeks later, many of the transplanted cells had grown fibers that reached the brainstem, where several relay centers necessary for hearing are found.
To see if those fibers helped the gerbils to hear, the researchers played sounds to the gerbils at increasing volume, and used electrodes to determine what volume was needed to evoke brain activity. The gerbils showed improved hearing ten weeks after receiving the stem cells, with a 46 percent increase in sensitivity. The improvement, however, was far from consistent. A third responded exceptionally well, with some regaining 90 percent of their hearing, while another third showed almost no recovery at all. The study was published recently in Nature.
What would a 46 percent improvement mean for a hearing-impaired human? Dr. Marcelo Rivolta, who led the study, told the BBC, “It would mean going from being so deaf that you wouldn’t be able to hear a lorry or truck in the street to the point where you would be able to hear a conversation.” The researchers acknowledge, however, that the technical hurdles to making that happen are formidable. The location in the human ear where the stem cells need to be placed is extremely small, making the operation very difficult.
Over 275 million people worldwide have some form of hearing loss. Of these, only about 10 percent lose their hearing due to auditory neuropathy. The vast majority of hearing loss is due not to auditory nerve impairment but to damage to another type of cell found in the inner ear. These cells have hairs attached to them that vibrate in response to sound entering the ear. These hair cells act as a kind of microphone for the ear, transforming sound frequencies into neuronal signals that are sent to the brain’s auditory cortex, where speech, music, and all other sounds are perceived.
Previous work had already shown that human embryonic stem cells can be induced to become auditory nerve cells, but this is the first time the differentiated cells had been successfully implanted. Interestingly, the differentiation factors induced the embryonic stem cells to produce not only auditory neurons but hair cells as well. The ability to produce these cells in the lab is a positive sign that, in the future, replacing the impaired hair cells of deaf people could ameliorate deafness in the vast majority of cases. Unfortunately, hair cells require a very specific and precise orientation in the inner ear to function properly. Placing those cells correctly within the ear would be a phenomenal technical challenge, probably beyond our current capabilities.
However, cochlear implants require a functional auditory nerve to work, so the ability to implant a new auditory nerve would open the door for a subset of people for whom cochlear implants were not an option. Dr. Paul Colville-Nash, Program Manager for stem cell, developmental biology and regenerative medicine at the Medical Research Council – the UK’s equivalent to the US National Institutes of Health – said in a press release: “This is promising research that demonstrates further proof-of-concept that stem cells have the potential to treat a range of human diseases that currently have no effective cures. While any new treatment is likely to take years to reach the clinic, this study clearly demonstrates that investment in UK stem cell research and regenerative medicine is beginning to bear fruit.” Stem cell research may be close to delivering on the medical promise that so many of us are hoping for. Just this year stem cells were used to grow new teeth and improve the vision of patients. Again, it will undoubtedly be years before the auditory nerve procedure benefits the hearing impaired. But when it does, it will be music to their ears, and to the field of regenerative medicine as a whole.
Presentation on theme: "Structure & Formation of the Solar System"— Presentation transcript:
1 Structure & Formation of the Solar System
What is the Solar System? The Sun and everything gravitationally bound to it. There is a certain order to the Solar System. This gives us information on its formation. Build a Solar System: the planets to scale, with a portion of the Sun visible in the background.
2 Part 1: Structure of the Solar System
All the planets orbit the Sun in the same direction. All the planets orbit within nearly the same plane, like a disk. There are two types of planets: solid, rocky, small planets close to the Sun (like Earth), and gaseous, large planets far from the Sun (like Jupiter).
3 The Sun
The Sun is a star. It is completely gaseous. It emits light and heat through nuclear fusion in its core. It is by far the largest object in the Solar System: 700 times more massive than all of the other objects in the Solar System put together. It is composed mostly of hydrogen and helium gas and traces of many other elements. The Sun spins on its axis counter-clockwise.
4 The Planets
In order of increasing distance from the Sun: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto.
5 The Planets
All the planets orbit in the same direction: counter-clockwise as seen from above Earth's North Pole. All the planets spin counter-clockwise too, except for Venus, Uranus and Pluto.
6 The Inner or Terrestrial Planets
Mercury, Venus, Earth and Mars share certain characteristics: all are rocky bodies; all have solid surfaces; except for Mercury, all have at least a thin atmosphere. They are called terrestrial planets because of their resemblance to Earth. Pluto is going to be dealt with separately.
7 The Outer or Jovian Planets
Jupiter, Saturn, Uranus and Neptune share certain characteristics: all are large, gaseous bodies; all have very thick atmospheres, with possibly liquid interiors and solid cores; all have rings. They are called jovian planets because of their resemblance to Jupiter.
8 Pluto
Pluto is unlike any of the terrestrial or jovian planets. It is much farther from the Sun than the terrestrial planets and much smaller than any jovian planet. Its composition is thought to be a mixture of ice and rock. It is similar to some of the satellites of the jovian planets and to some asteroids. There has been some discussion among astronomers whether Pluto should be considered a planet at all.
9 Satellites (Moons)
Most of the planets have satellites. Most of the satellites orbit in the equatorial plane of the planet, and most orbit counter-clockwise. The jovian planets have more than a dozen satellites each. Ganymede, Callisto, Io, and Europa are four of Jupiter's largest satellites. These were discovered by Galileo Galilei and together are called the Galilean satellites of Jupiter.
10 Comets and Asteroids
The Solar System is filled with millions of smaller bodies. Comets are composed of ice and rock; asteroids are composed of rock and/or metal. There is also dust in space, which can be seen in meteor showers.
11 Part 2: Solar System Formation
Our Milky Way Galaxy is filled with cold, dark clouds of gas and dust. These clouds are mostly hydrogen and helium, with dust containing mostly iron, rock, and ice. The Solar System is thought to have formed from a huge, slowly rotating cloud about 4.5 billion years ago. A nearby passing star or stellar explosion may have caused the cloud to collapse.
12 Collapsing Gas Clouds
As the cloud collapsed, the original slow spin began to speed up. This caused the cloud to flatten into a disk shape. The gravitational pull of the cloud caused it to shrink further and caused most of the material to fall towards the core, forming a large bulge.
13 Collapsing Gas Clouds?
In the Great Nebula of the constellation Orion are huge clouds of gas and dust. Among these clouds the Hubble Space Telescope observed lumps and knots that appear to be new stars and planets being formed.
14 Planets in Formation?
Around the star Beta Pictoris a large disk of dust and gas has been observed. The light from the star is much brighter than the disk, so it had to be blocked for the disk to appear clearly. Disks have been seen around other stars too, including Vega.
15 Birth of the Sun
As material falls in towards the disk it collides with other material, heats up, and melts. The increasing mass of the core also increases the gravitational pull and causes more material to be pulled in. When the mass is large enough and temperatures high enough, nuclear fusion reactions begin in the core and a star is born!
16 Heating and Condensation of the Solar Nebula
The heat from the Sun prevents ices from reforming on the dust grains in the region near the Sun. Ices condensed only in the outer parts of the Solar nebula. In the inner portion of the disk only materials like iron and silicates (rock) can condense into solids; slowly they form clumps of material. In the outer portion of the disk much more material can condense as solids, including ice. This extra material allows clumps to grow larger and faster.
17 Gravity does the job
Within the disk, particles of material are constantly colliding with one another. If the collisions are not too violent, material may stick together. In the outer parts of the Solar Nebula the planets become large enough to have a significant gravitational pull and collect gas around them. Planets in the inner nebula cannot grow enough to collect much gas. Eventually most, but not all, of the material was swept up by the planets.
18 The Last of the Planetesimals
The remaining material exists today as comets, which were flung out to a region far beyond Pluto called the Oort cloud, and asteroids, mostly between Mars and Jupiter (the Asteroid Belt) and beyond Pluto (the Kuiper Belt).
Many jellyfish species can cause mild to extreme reactions if they sting humans. Jellyfish are aquatic creatures classified in the phylum Cnidaria along with other sea creatures such as corals and sea anemones. The body of a jellyfish is about 95% water and 5% solid matter, and it lacks the elaborate body systems found in most animals. The solid matter of the creature is composed of three layers:
- The epidermis, which is the outer layer.
- The mesoglea, which is a thick jelly-like middle layer.
- The gastrodermis, which is the inner layer.
The jellyfish is capable of stinging using its tentacles. Stings by jellyfish can be treated in a variety of ways, and it is advised to contact medical personnel when administering first aid measures. Removing the tentacles with objects such as tweezers or sticks is advisable to avoid contact with bare skin. Vinegar is mostly used to neutralize the venom, and it can be substituted with sea water or baking soda. Gently shaving the affected area has the effect of getting rid of remaining nematocysts.
6. Lion's Mane jellyfish
This jellyfish species is recognized as the largest of its kind. Its habitat range includes the northern Pacific and Atlantic Oceans, up to the Arctic Ocean. The maximum diameter of its bell is six feet and seven inches, while the largest specimen ever recorded was 120 feet in length. The tentacles of this jellyfish can be as long as 100 feet and are used for predation. However, stings from the lion's mane jellyfish are not fatal.
5. Cannonball jellyfish
The cannonball jellyfish also goes by the name of cabbage head jellyfish. It is distinguished by a cannonball shape and a dome-shaped bell. The species has been recorded in the mid-west Atlantic Ocean and the east-central and northwest Pacific Ocean. It feeds mainly on zooplankton, including veligers. The cannonball jellyfish produces toxins, and its sting can lead to cardiac ailments in humans.
4. Moon jellyfish
The moon jellyfish (Aurelia aurita) is a translucent jellyfish species inhabiting the world’s oceans. The species grows to between 10-16 inches in diameter. They are notable for their exquisite coloring. The moon jellyfish is carnivorous, using its tentacles to hunt prey, primarily plankton and other small creatures. This species lives only for a few months, most likely for a maximum of six.
3. Sea Nettle
The sea nettle species of jellyfish prefer the open waters of the Pacific, Atlantic, and Indian Oceans. They vary in physical characteristics depending on their habitat, but they can be distinguished by their golden-brown bell, which can be as long as three feet. Trailing behind the bell are tentacles which can reach a length of 15 feet. The sea nettles use stinging cells while hunting, and these are very painful to humans.
2. Box jellyfish
Several species of the box jellyfish have been identified as having lethal venom. The species inhabit tropical and subtropical oceans, but the dangerous ones mainly prefer the Indo-Pacific region. The box jellyfish rely on the poison from their tentacles to hunt and defend themselves. Some species of the box jellyfish have been blamed for human deaths, while other species have no effect on humans.
1. Irukandji jellyfish
Scuba divers and snorkelers are perhaps the most cautious about Irukandji jellyfish, as they are the most venomous of their kind. Populations of this jellyfish exist in the marine waters of the United States and Australia. It is a small creature at only 0.06 cubic inch, making it hard to spot. The Irukandji jellyfish is responsible for Irukandji syndrome, which manifests as headache, nausea, muscle and abdominal pain, hypertension, backache, vomiting, chest pains, and pulmonary edema. The syndrome can lead to death if untreated.
What if I told you there’s no reason we couldn’t set up a small base on the moon by 2022 without breaking the bank? The endeavor would cost about $10 billion, which is cheaper than one U.S. aircraft carrier. Some of the greatest scientists and professionals in the space business already have a plan. NASA’s Chris McKay, an astrobiologist, wrote about it in a special issue of the New Space journal, published just a few weeks ago.
Before we get into the details, let’s ask ourselves: Why the moon? Although scientists (and NASA) don’t find it all that exciting, the moon is a great starting point for further exploration. Furthermore, building a lunar base would provide us with the real-world experience that may prove invaluable for future projects on other planets like Mars, which NASA plans to reach by 2030. The main reason the moon is not a part of NASA’s plan is simply the agency’s crimped budget. NASA’s leaders say they can afford only one or the other: the moon or Mars. If McKay and his colleagues are correct, though, the U.S. government might be able to pull off both trips. All it takes is a change of perspective and ingenuity.
“The big takeaway,” McKay says, “is that new technologies, some of which have nothing to do with space — such as self-driving cars and waste-recycling toilets — are going to be incredibly useful in space, and are driving down the cost of a moon base to the point where it might be easy to do.” The document outlines a series of innovations — already existing and in development — that work together toward the common goal of building the first permanent lunar base.
One such innovation is the proposed use of virtual reality during the preparation and planning phase. A lunar VR environment enriched with real-world scientific data would be used as a simulation in which the 3D-printed structures could be modeled and tested against the thermal and environmental factors present on the moon’s surface. This would provide scientists and engineers with vital information necessary to solve structural problems before they happen for real. 3D printing would also considerably reduce repair and replacement costs on the lunar base, because small components could be easily replaced on-site.
To bring in robots, supplies, astronauts and habitats, SpaceX’s Falcon 9 rockets and the upcoming Falcon Heavy would be used. Speaking of habitats, a modified, radiation-resistant version of Bigelow Aerospace’s inflatable habitat seems the most probable candidate for the role. Those habitats could be packed into the rockets’ cargo bays and expanded after reaching their destination.
Image: Bigelow Aerospace’s inflatable habitat.
The first station would probably be built on the outer rim of one of the craters at the moon’s north pole. The poles receive much more sunlight than the rest of the moon (where nights can last up to 15 days), so solar-powered equipment will get enough light to function properly. Furthermore, all that energy could provide power for robots that would excavate the large amounts of ice detected within the craters. Water gathered that way could then be used for life support, as well as for providing oxygen, or it could be processed into rocket fuel, which would be sold or stored for refueling spacecraft.
Image: This is what mining on the moon might look like. (National Aeronautics and Space Administration)
After rockets bring in supplies and gear, and robots unpack the habitats and establish the perimeter for mining operations, astronauts would start arriving. Here’s how the process is envisioned in the document:
“Just imagine a small lunar base at one of the lunar poles operated by NASA or an international consortium and modeled according to the U.S. Antarctic Station at the South Pole. The crew of about 10 people would consist of a mixture of staff and field scientists. Personnel rotations might be three times a year. The main activity would be supporting field research selected by peer-reviewed proposals. Graduate students doing fieldwork for their thesis research would dominate the activity. No one lives at the base permanently but there is always a crew present. The base is heavily supported by autonomous and remotely operated robotic devices.”
It continues: “The activities at this moon base would be focusing on science, as is the case in the Antarctic. It could provide an official U.S. government presence on the moon, and its motivation would be rooted in U.S. national policy — again as are the U.S. Antarctic bases. A lunar base would provide a range of technologies and programmatic precedents supporting a long-term NASA research base on Mars.”
If NASA takes these arguments to heart, affordable lunar bases may be a step toward the first permanent lunar settlements. From then on, anything could happen. In time, the moon could be terraformed, and hundreds of years from now, an entirely new human society may evolve, unfettered by issues we face on Earth. If this sounds like sci-fi, remember that not long ago, 90% of modern technology belonged to that category. What do you think about colonizing the moon? Please let me know in the comment section below.
Early Color Television: Chromatic Television Laboratories
In 1951 Dr. Ernest O. Lawrence of the University of California, Berkeley proposed a single-gun color CRT using vertical stripes of red, blue and green phosphors on the screen. Behind these stripes were vertical wires which could be charged with electrical energy to deflect the electron beam to each of the stripes, thereby creating a color picture. However, very high power RF (around 50 watts) had to be applied to the deflecting wires, and RF radiation from the tube caused interference with the receiver circuits. The university set up "Chromatic Television Laboratories" to commercially develop the system, in partnership with Paramount Pictures, which provided development funding.
In 1951 Chromatic Television Laboratories began experimenting with using the CBS field sequential system with their Chromatron tube rather than the color wheel that had been used in all previous field sequential systems (information courtesy of Ed Reitan). Because the high power RF switching was done at a much lower rate than with the later NTSC system, the interference problem was minimized. Chromatic Television Laboratories built prototype PDF 22-4 Chromatron CRTs in 1952 and 1953, with a display area of 14 by 11 inches.
In 1953 the coronation of Queen Elizabeth II was televised in color, using an experimental field sequential system developed by Pye and Chromatic Television Laboratories. A field sequential camera was used and the signal was broadcast over a UHF channel to Great Ormond Street Children's Hospital in London. Receivers used the Chromatron CRT. Here is a New York Times article from June 3, 1953 describing Paramount's involvement (courtesy of John Pinckney). Here are articles that appeared in Billboard about CTL.
The university eventually abandoned its interest in the Chromatron, but Paramount continued development through the 50s and early 60s, possibly as a system for displaying film during editing, which meant that the RF interference did not present a problem. Paramount also attempted to perfect the tube for use in receivers, but the RF interference problem was never solved. See this article in Radio-TV Experimenter (courtesy of Wayne Bretl). In 1966, Sony made a few prototype sets using their own version of the Chromatron. Sony later used a similar concept in their very successful Trinitron tube.
Electronics/Digital to Analog & Analog to Digital Converters
Signals in the real world tend to be analog. For example, the water level in a tank or the speed of a car as measured by a tacho-generator. In order to process them with a digital circuit, we need to convert them to digital signals. Conversely, once the digital signals are processed, they must often be converted back to an analog signal. An example would be processing an audio signal digitally and sending it to a speaker. The speaker requires an analog signal.
An Analog to Digital Converter (ADC) takes an analog input signal and converts the input, through a mathematical function, into a digital output signal. While there are many ways of implementing an ADC, there are three conceptual steps that occur.
- The signal is sampled.
- The sampled signal is quantized.
- The quantized signal is digitally coded.
By sampling we turn a continuous-time function which may take on infinitely many values at different times into a discretised function that may take on infinitely many values at different discrete indices. Sampling is generally done with a sample-and-hold circuit (simple experiments can be done using a capacitor and a switch). To be able to reconstruct the signal we must consider the sampling theorem, which says that the sampling frequency must be at least twice the highest frequency we're expecting.
Quantization is the process of taking a continuous voltage signal and mapping it to a discrete number of voltage levels. The number of voltage levels affects the quantization noise that occurs. Since digital computers are binary in nature, the number of quantization levels is usually a power of 2, i.e., 2^n, where n is the number of quantization bits. The signal may be amplified or attenuated before going into the ADC, so that the maximum and minimum voltage levels give the best compromise between resolution of the signal levels and minimization of clipping.
Coding is the process of converting the quantized signals into a digital representation. This coding is performed by giving each quantization level a unique label. For instance, if four bits are used, the lowest level may be (in binary) 0000, the next highest level 0001, and so on. (A code sketch of these three steps appears at the end of this section.)
A Digital to Analog Converter (DAC) takes a digital signal and converts it, through a mathematical function, into an analog signal. Again, the DAC may be implemented in a number of ways, but conceptually it contains two steps.
- Convert each time step of the digital signal into an "impulse" with the appropriate energy. In a real system, this could be accomplished by creating short pulses that have the same voltage, but whose total power is modified by changing the pulse length. This pulse train produces a signal whose frequency response is periodic (and theoretically extends to infinity).
- Apply a low-pass filter to the time sequence of impulses. This removes all of the high-frequency periodicities, leaving only the original signal.
A DAC can also appear inside an ADC: in a counter-type converter, a counter drives a DAC until the DAC output matches the analog input, and the counter's value, held in a register such as a shift register, is then the digital representation of the signal.
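To make the sample/quantize/code pipeline concrete, here is a minimal sketch in Java (not part of the original module; the ±1.0 V full-scale range, the 4-bit width, the sample values, and the class name are all illustrative assumptions):

public class AdcSketch {
    public static void main(String[] args) {
        int bits = 4;                    // n quantization bits
        int levels = 1 << bits;         // 2^n quantization levels
        double vMin = -1.0, vMax = 1.0; // assumed full-scale input range, in volts

        // "Sampled" values, as a sample-and-hold circuit might capture them
        double[] samples = {-0.95, -0.30, 0.02, 0.47, 0.99};

        for (double v : samples) {
            // Quantize: map the continuous voltage onto one of 2^n discrete levels
            int level = (int) ((v - vMin) / (vMax - vMin) * (levels - 1) + 0.5);
            level = Math.max(0, Math.min(levels - 1, level)); // clip out-of-range inputs

            // Code: give each quantization level a unique binary label
            String code = String.format("%" + bits + "s",
                    Integer.toBinaryString(level)).replace(' ', '0');

            System.out.printf("%+.2f V -> level %2d -> code %s%n", v, level, code);
        }
    }
}

Running the sketch prints one line per sample, for example "+0.47 V -> level 11 -> code 1011"; a real ADC performs the same mapping in hardware, with the sample-and-hold supplying a stable input voltage during each conversion.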
There’s No Such Thing as a Trade Imbalance
At no point in a typical retail exchange do either you or the store owner have a trade imbalance, because the value of goods and money being exchanged are equal. The store owner, having given a thing of value to you, is now in possession of a piece of paper that symbolizes the value of debt that society owes him in the form of goods and services. (Money is meaningless except as a measure of how many goods and services are owed.) The store owner holds onto the money you gave him for a little while and then uses it to purchase goods and services for himself.
National trade works in a similar way. Nations keep track of all the trades they make in their balance of payments. The two primary accounts in the balance of payments are:
Current account: The current account measures the amount of consumable goods entering or leaving a country. (It’s what people are talking about when they discuss trade deficits and surpluses.) These goods may include food, cars, machinery, customer service, employment, or anything else being purchased. A current account deficit means a nation imports more goods than it exports; likewise, a current account surplus means a nation exports more than it imports.
Capital account: The capital account consists of investments one nation makes in another nation’s economy, such as the value of new business start-ups, the value of stock and bond purchases, and even the transfer of money related to imports and exports.
So when Nation A exports goods to Nation B, it does so with the expectation that the currency Nation B gives it will later be traded for a greater amount of resources than Nation B gave it this time. In other words, the whole process of exporting is an investment. Here’s a more personal example: If a person tried to buy something from you by using some type of money that you couldn’t spend or convert into a usable type of money, would you still sell to that person? Of course not.
An increase in one of these accounts always results in a decrease in the other. So when a nation has a current account deficit, it also has a capital account surplus (a worked example follows below). A nation can sustain a current account deficit as long as the people of other nations are confident that they’ll be able to use the currency they receive for their exports to purchase other goods and services from the importing nation or other nations interested in the importing nation’s currency.
The real issue is whether or not the value of the nation’s exports will increase over time relative to the value of its imports. In other words, a nation will want to know whether all the money it’s spending will boost the total value of its productivity in a manner that will allow it to meet its export obligations later (because other nations now hold its currency) while still maintaining enough production to meet domestic demand, and whether corporations are treating imports as capital investments (hence, a capital account surplus) or mere consumption.
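As a worked illustration (the numbers are invented for this example, not taken from the text): suppose Nation A exports $400 billion of goods and imports $500 billion. Its current account then shows a $100 billion deficit ($400B − $500B = −$100B). The extra $100 billion of A’s currency now held abroad doesn’t vanish; foreigners use it to buy A’s stocks, bonds, businesses, and other assets, which registers as a $100 billion capital account surplus. The two accounts offset, so the overall balance of payments still balances.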
Ancient supernovae buffeted Earth’s biology with radiation dose, researcher says
LAWRENCE — Research published in April provided "slam dunk" evidence of two prehistoric supernovae exploding about 300 light years from Earth. Now, a follow-up investigation based on computer modeling shows those supernovae likely exposed biology on our planet to a long-lasting gust of cosmic radiation, which also affected the atmosphere.
"I was surprised to see as much effect as there was," said Adrian Melott, professor of physics at the University of Kansas, who co-authored the new paper appearing in The Astrophysical Journal Letters, a peer-reviewed express scientific journal that allows astrophysicists to rapidly publish short notices of significant original research. "I was expecting there to be very little effect at all," he said. "The supernovae were pretty far away — more than 300 light years — that's really not very close."
According to Melott, initially the two stars that exploded 1.7 to 3.2 million and 6.5 to 8.7 million years ago each would have caused blue light in the night sky brilliant enough to disrupt animals' sleep patterns for a few weeks. But their major effect would have come from radiation, which the KU astrophysicist said would have packed doses equivalent to one CT scan per year for every creature inhabiting land or the shallower parts of the ocean.
"The big thing turns out to be the cosmic rays," Melott said. "The really high-energy ones are pretty rare. They get increased by quite a lot here — for a few hundred to thousands of years, by a factor of a few hundred. The high-energy cosmic rays are the ones that can penetrate the atmosphere. They tear up molecules, they can rip electrons off atoms, and that goes on right down to the ground level. Normally that happens only at high altitude."
Melott's collaborators on the research are Brian Thomas and Emily Engler of Washburn University, Michael Kachelrieß of the Institutt for fysikk in Norway, Andrew Overholt of MidAmerica Nazarene University and Dimitry Semikoz of the Observatoire de Paris and Moscow Engineering Physics Institute.
The boosted exposure to cosmic rays from supernovae could have had "substantial effects on the terrestrial atmosphere and biota," the authors write. For instance, the research suggested the supernovae might have caused a 20-fold increase in irradiation by muons at ground level on Earth. "A muon is a cousin of the electron, a couple of hundred times heavier than the electron — they penetrate hundreds of meters of rock," Melott said. "Normally there are lots of them hitting us on the ground. They mostly just go through us, but because of their large numbers contribute about 1/6 of our normal radiation dose. So if there were 20 times as many, you're in the ballpark of tripling the radiation dose."
Melott said the uptick in radiation from muons would have been high enough to boost the mutation rate and frequency of cancer, "but not enormously. Still, if you increased the mutation rate you might speed up evolution." Indeed, a minor mass extinction around 2.59 million years ago may be connected in part to boosted cosmic rays that could have helped to cool Earth's climate.
The new research results show that the cosmic rays ionize the Earth's atmosphere in the troposphere — the lowest level of the atmosphere — to a level eight times higher than normal. This would have caused an increase in cloud-to-ground lightning. "There was climate change around this time," Melott said.
"Africa dried out, and a lot of the forest turned into savannah. Around this time and afterwards, we started having glaciations — ice ages — over and over again, and it's not clear why that started to happen. It's controversial, but maybe cosmic rays had something to do with it." NASA's Exobiology and Evolutionary Biology program supported the research, and computation time was provided by the High Performance Computing Environment at Washburn University. Brendan M. Lynch
How long does it take to hatch the eggs of dinosaurs? The question may seem unnecessary, but it’s interesting to think a bit about it. Despite what many books and articles keep repeating, it’s wrong to think of dinosaurs as if they were extinct. They live on in the form of birds and – as we all see very clearly – they are pretty successful. And we know that the eggs of birds usually hatch within the first weeks after they are laid.
Now, let’s have a look at what we traditionally mean when we speak of dinosaurs. We would probably expect that the Mesozoic bird relatives didn’t need a long time to break out of their eggshells either. In fact, that’s what a recent study actually suggested. At the beginning of August 2016, Scott A. Lee from the University of Toledo published an article in which he attempted to estimate how long it might take to hatch the eggs of non-bird dinosaurs. Lee based his research on what is known about the embryos of birds (the living dinosaurs) and crocodiles (the dinosaurs’ closest living relatives). He concluded that “the incubation times vary from about 28 days for [the early paravian theropod] Archaeopteryx lithographica to about 76 days for [the large titanosaur sauropod] Alamosaurus sanjuanensis“.
However, a new study published by a team of scientists led by Gregory M. Erickson from Florida State University suggests that the incubation was much slower. Erickson and his colleagues studied the incremental lines of von Ebner in the teeth of embryonic non-bird dinosaurs. In particular, the researchers studied teeth belonging to embryos of the ceratopsian Protoceratops andrewsi from the Upper Cretaceous of Mongolia and the hadrosaurid (duck-billed dinosaur) Hypacrosaurus stebingeri from the Upper Cretaceous of Alberta, Canada.
The lines of von Ebner are growth lines that form daily. To get the incubation time, we can count these lines in the teeth of near-term embryos. Naturally, it’s not as simple as it sounds. First, we need to have near-term embryos, and these are rare. Second, we need to know when the embryos establish functional teeth. Fortunately, that timing is well known: as the authors say, in crocodiles the teeth appear “between 42% and 52% of the total incubation period”.
To infer the incubation time for Protoceratops and Hypacrosaurus, Erickson and his colleagues followed the more conservative scenario. If the hatchling teeth began formation at 42% of incubation time, it took the Protoceratops eggs about 83.16 days (almost 3 months) to hatch. The incubation time of the Hypacrosaurus embryos was even longer – about 171.47 days (almost 6 months). We could speculate that the embryos of larger dinosaurs, such as titanosaur sauropods, needed an even longer time. Nevertheless, any far-reaching conclusions based on these results, such as the potential impact of the long incubation time on the extinction of non-bird dinosaurs at the end of the Cretaceous, would certainly be premature.
Featured image © Sinclair Stammers, Science Photo Library. Picture of the skeleton of Protoceratops andrewsi by FunkMonk. CC BY 2.0.
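To see the arithmetic (derived here from the numbers above, not spelled out this way in the paper): if tooth formation begins at 42% of incubation, the daily von Ebner lines record only the final 58% of incubation, so incubation time ≈ line count ÷ 0.58. The reported 83.16 days for Protoceratops would then correspond to roughly 48 daily lines (48.2 ÷ 0.58 ≈ 83.2), and the 171.47 days for Hypacrosaurus to roughly 99 lines (99.4 ÷ 0.58 ≈ 171.4).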
The Java input/output (I/O) facilities provide a simple, standardized API for reading and writing character and byte data from various data sources. In this article, we'll explore the I/O classes, interfaces, and operations provided by the Java platform. Let's start by taking a look at Java streams.
All of Java's I/O facilities are based on streams that represent flowing sequences of characters or bytes. Java's I/O streams provide standardized ways to read and write data. Any object representing a mutable data source in Java exposes methods for reading and writing its data as a stream. The java.io package is the main package for most stream-oriented I/O classes. This package presents two abstract classes, InputStream and OutputStream. All other stream-oriented I/O classes extend these base classes. The java.io package exposes a number of classes and interfaces that provide useful abstractions on top of the character and byte reading and writing operations defined by InputStream and OutputStream. For example, the ObjectInputStream class provides methods that allow you to read data from a stream as a Java object, and the ObjectOutputStream class provides methods that allow you to write data to a stream as a Java object.
Optimized reading and writing
JDK 1.1 added a collection of reader and writer classes that provide more useful abstractions and improved I/O performance compared with the existing stream classes. For instance, the BufferedReader and BufferedWriter classes are provided to read text from and write text to character-based input streams and output streams. The BufferedReader class buffers characters to more efficiently read characters, arrays, and lines. The BufferedWriter class buffers characters to more efficiently write characters, arrays, and strings. The size of the buffer used by the BufferedReader and BufferedWriter classes can be set as desired. Reader and writer classes provided by the Java I/O framework include the LineNumberReader class, the CharArrayReader class, the FileReader class, the FilterReader class, the PushbackReader class, the PipedReader class, and the StringReader class, among others. These classes are wrappers on top of the InputStream and OutputStream classes and thus provide methods that are similar to InputStream and OutputStream. However, these classes provide more efficient and useful abstractions for reading and writing specific objects, such as files, character arrays, and strings.
An input stream is typically opened for you automatically when it is retrieved from the corresponding data source object or when you construct one of the reader objects. For example, to open the input stream for a file, we pass the name of the file into a java.io.FileReader object's constructor as follows:
java.io.FileReader fileReader = new java.io.FileReader("/home/me/myfile.txt");
To read the next available character of data from a FileReader's underlying input stream, use the read method with no parameters. The snippet in Listing A reads text from a file, one character at a time, and writes it to System.out. To read characters from an input stream into a char array, use the read method with a char array parameter. The length of the array determines the maximum number of characters to read. Listing B demonstrates this technique.
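Listings A and B themselves are not reproduced here; the following is a sketch of what they might plausibly contain, based on the descriptions above (the file path and class name are illustrative):

import java.io.FileReader;
import java.io.IOException;

public class ReadExamples {
    public static void main(String[] args) throws IOException {
        // Listing A (sketch): read one character at a time and echo to System.out.
        // read() returns the next character as an int, or -1 at end of stream.
        FileReader fileReader = new FileReader("/home/me/myfile.txt");
        int c;
        while ((c = fileReader.read()) != -1) {
            System.out.print((char) c);
        }
        fileReader.close();

        // Listing B (sketch): read characters into a char array; the array's
        // length caps how many characters a single call will read.
        FileReader reader = new FileReader("/home/me/myfile.txt");
        char[] buffer = new char[1024];
        int charCount = reader.read(buffer); // number of characters read, or -1
        System.out.println("Read " + charCount + " characters");
        reader.close();
    }
}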
To close an input stream and release any system resources used by the stream, use the close method as follows:
fileReader.close();
Like an input stream, an output stream is typically opened for you automatically when it is retrieved from the corresponding data source object or when you construct one of the writer objects. For example, to open the output stream for a file, we pass the name of the file into a java.io.FileWriter object's constructor as follows:
java.io.FileWriter fileWriter = new java.io.FileWriter("/home/me/out.txt");
To write one specified character to an output stream, use the write method with one int parameter. The int parameter represents the character to write:
int aChar = (int)'X';
fileWriter.write(aChar);
To write a specific number of characters from a specified char array starting at a given offset to an output stream, use the write method with a char array parameter, an int offset parameter, and an int length parameter as shown in the following example:
fileWriter.write(buffer, 0, charCount);
To close an output stream and release any system resources associated with the stream, use the close method, like this:
fileWriter.close();
To force any buffered data out of an output stream, use the flush method as follows:
fileWriter.flush();
Putting it all together: we can use what we have learned to read from one file and simultaneously write to another, as demonstrated in Listing C (sketched below). The Java I/O facilities provide a simple and standardized API for reading and writing character and byte data from various data sources. Experience obtained while working with Java streams for one type of data source can be carried over to any other type of data source exposed by Java. In our next article, we will begin to explore the networking and remote communications frameworks of the Java platform. We will extend our discussion of Java streams to these environments and demonstrate how remote data sources can be opened, written to, and read from in much the same manner as a local data source, such as a file.
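For completeness, here is a sketch of the kind of copy loop that Listing C demonstrates (again, the file paths and class name are illustrative, not the article's own listing):

import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class CopyExample {
    public static void main(String[] args) throws IOException {
        FileReader fileReader = new FileReader("/home/me/myfile.txt");
        FileWriter fileWriter = new FileWriter("/home/me/out.txt");
        char[] buffer = new char[1024];
        int charCount;
        // Read a buffer full of characters, then write exactly that many out,
        // until read returns -1 at end of stream.
        while ((charCount = fileReader.read(buffer)) != -1) {
            fileWriter.write(buffer, 0, charCount);
        }
        fileWriter.flush();
        fileWriter.close();
        fileReader.close();
    }
}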
Exponents: Quotient Rule
More Lessons for Grade 7 Math
Videos, worksheets, and solutions to help Grade 7 students learn about exponent rules. The Quotient Rule for exponents states that when we divide two powers with the same base, we can subtract the exponents: a^m ÷ a^n = a^(m−n).
Videos on this topic:
How Do You Divide Two Numbers With Exponents?
Dividing Powers - Exponent Rule
Exponent Rules, Dividing
Quotient Rule and Zero Exponent
You can use the free Mathway calculator and problem solver below to practice Algebra or other math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations.
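A quick worked example of the quotient rule: 3^6 ÷ 3^2 = 3^(6−2) = 3^4 = 81, which checks out directly, since 729 ÷ 9 = 81. Dividing a power by itself also shows where the zero exponent comes from: 5^3 ÷ 5^3 = 5^(3−3) = 5^0, and 125 ÷ 125 = 1, so 5^0 = 1.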
Despite the lack of a wiring diagram it was possible to guess at various principles behind the functioning of collections of neurons. Given that the fact that the brain is made up of neurons was only discovered in 1911 by Ramón y Cajal, it is remarkable that by 1943 people were speculating on how neural networks might function. Some of the earliest work was done by McCulloch and Pitts, who showed how idealised neurons could be put together to form circuits that performed simple logic functions. This was such an influential idea that Von Neumann even made use of neuron-like delay logic elements in his EDVAC report, and many later pioneering computers made use of neuron-like circuit elements. At this time it really did seem that the structure of the brain had much to tell us about ordinary programmable computers, let alone intelligent learning machines.
Normally we think of computers as the product of hard engineering: electronics, Boolean logic, flow diagrams. And yet in the earliest days the pioneers actually thought there was a direct connection between what they were doing and the structure of the brain. Minsky must have been strongly influenced by this feeling that computers and brains were the same sort of thing, because his thesis was on what we now call “neural networks”. In those days you didn’t simulate such machines using general purpose computers – you built them using whatever electronics came to hand. In 1951 Minsky built a large machine, the first randomly wired neural network learning machine (called SNARC, for Stochastic Neural Analog Reinforcement Calculator), based on the reinforcement of simulated synaptic transmission coefficients.
After getting his PhD in 1954 he was lucky enough to be offered a Harvard Fellowship. He had started to think about alternative approaches to AI, but he was still troubled by the inability to see the neural structures that would tell him so much about how the brain is organised. So he invented a new type of microscope – the confocal scanning microscope. Because the basic operation of the microscope was electronic, he also attempted some of the first image processing using a computer – the SEAC at the Bureau of Standards. Not with much success, however, because the memory wasn’t large enough to hold a detailed image and process it.
MIT AI Lab
In 1959 Minsky and John McCarthy founded what became the MIT Artificial Intelligence Laboratory, which in time became one of the main centres of AI research in the world. The lab attracted some of the most talented people in computer science and AI. Minsky continued to work on neural network schemes, but increasingly his ideas shifted to the symbolic approach to AI, and to robotics in particular. The difference between the two approaches is subtle, but essentially the neural network approach assumes that the problem really is to build something that can learn and then train it to do what you want, whereas the symbolic approach attempts to program the solution from the word go.
In the early days of AI the neural network approach seemed to be having more success. Indeed there was almost a hysteria surrounding the development of one particular type of neural network – the perceptron. Rosenblatt invented the single-neuron perceptron in 1958 and went on to prove some very powerful theorems about what it could learn. These theorems were a sort of guarantee that if something was learnable then the perceptron would learn it.
The AI community at the time oversold the idea, with demonstrations and outlandish claims for what could be done with one single perceptron. Then the bubble burst. Minsky had met Seymour Papert, and they were both thinking about the problem of working out exactly what a perceptron could do. The shocking truth revealed in “Perceptrons”, the book they wrote together, was that there really were some very simple things that a perceptron cannot learn. In particular, concepts such as “odd” and “even” are beyond a perceptron, no matter how big it is or how long you give it to learn. The perceptron book effectively discouraged any further work in the field, simply because no funding organisation would give grants to what now looked like crackpot AI research. For some 10 years, until the start of the 80s, the neural network approach to AI was effectively dead. A few places, mainly psychology labs and neurology labs, still worked on the problem, but progress was very slow. What started the revival was the discovery that multi-layer networks could be trained, and that they could solve the problems that Minsky and Papert had proved impossible for a perceptron.
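To see the limitation concretely, here is a minimal Java sketch, written for this page rather than taken from it, of a single perceptron trained with Rosenblatt's rule on two-bit parity, the simplest odd-versus-even problem (the learning rate, epoch cap, and class name are arbitrary choices):

public class PerceptronParity {
    public static void main(String[] args) {
        double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int[] parity = {0, 1, 1, 0}; // 1 = odd number of set bits, 0 = even
        double w1 = 0, w2 = 0, bias = 0, rate = 0.1;

        for (int epoch = 1; epoch <= 1000; epoch++) {
            int errors = 0;
            for (int i = 0; i < 4; i++) {
                // Threshold unit: fire if the weighted sum exceeds zero
                int out = (w1 * inputs[i][0] + w2 * inputs[i][1] + bias) > 0 ? 1 : 0;
                int err = parity[i] - out;
                if (err != 0) {
                    errors++;
                    // Rosenblatt's rule: nudge the weights toward the correct answer
                    w1 += rate * err * inputs[i][0];
                    w2 += rate * err * inputs[i][1];
                    bias += rate * err;
                }
            }
            if (errors == 0) {
                System.out.println("Converged after " + epoch + " epochs");
                return;
            }
        }
        System.out.println("Never converged: parity is not linearly separable");
    }
}

A single perceptron draws one straight line through the input plane, and no line separates the odd inputs {01, 10} from the even inputs {00, 11}, so the loop never reaches zero errors; adding a hidden layer removes the restriction, which is exactly the multi-layer result that revived the field.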
Over the last 80 years the global mean temperature has been rising. Is that the "El Niño" occurrence, the ozone layer depletion, or are we headed into another era where the overall climate of Earth is warmer? Explain your answer.
Clear evidence exists from a variety of sources (including archaeological studies) that El Niños have been present for thousands, and some indicators suggest maybe millions, of years. However, it has been hypothesized that warmer global sea surface temperatures can enhance the El Niño phenomenon, and it is also true that El Niños have been more frequent and intense in recent decades. Whether El Niño occurrence changes with climate change is a major research question.
The ozone hole does not directly affect air temperatures in the troposphere, the layer of the atmosphere closest to the surface, although changes in circulation over Antarctica related to the ozone hole appear to be changing surface temperature patterns over that continent. Ozone is actually a greenhouse gas, and so are CFCs, meaning that their presence in the troposphere contributes slightly to the heightened greenhouse effect. The main greenhouse gas responsible for present-day and anticipated global warming, however, is carbon dioxide produced by burning of fossil fuels for electricity, heating, and transportation.
The black-tailed jackrabbit (Lepus californicus) is found throughout the southwestern United States into Mexico, as far east as Missouri, north into Washington, Idaho, Colorado and Nebraska, and west to California and Baja California. Black-tailed jackrabbits inhabit desert scrubland, prairies, farmlands, and dunes. They favor arid regions and areas of short-grass rangeland from sea level to about 3,800 m. Many different vegetation types are used, including sagebrush-creosote bush, mesquite-snakeweed and juniper-big sagebrush. They also frequent agricultural areas, where they can impact fruit and grain crops.
Black-tailed jackrabbits measure 47-63 cm from nose to rump, the tail is between 50-112 mm, and the ears are 10-13 cm long. As they are true hares, black-tailed jackrabbits are lankier and leaner than rabbits, have longer ears and legs, and the leverets are born fully-furred and open-eyed. Black-tailed jackrabbits possess a characteristic black stripe down the center of the back, a black rump patch, and a tail that is black dorsally. Both sexes look alike, but the female is the larger of the two.
Black-tailed jackrabbit males and females leap after, chase, and behave aggressively towards each other during a brief courtship phase before mating. Breeding season extends from December through September in Arizona and from late January to August in California and Kansas. Females produce 3 or 4 litters annually with 1-6 leverets (generally 3 or 4) after a 41-47 day gestation period. The young are precocial; females only nurse their offspring for 2-3 days and are not seen with their young after that time. Lifespan in captivity is 5-6 years, but rabbits in the wild often die much sooner due to predation, disease, or problems associated with overpopulation.
As with all hares, blacktails rely on speed and camouflage (along with the characteristic "freeze" behavior) for their defense. When flushed from cover, a blacktail can spring 20 feet at a bound and reach top speeds of 30-35 mph over a zigzag course. Black-tailed jackrabbits do not generally occupy burrows; rather, they dig shallow depressions in the earth in which to lie. Black-tailed jackrabbits are mainly unsociable but are driven to common food sources in periods of drought. They are inactive during the hot afternoon hours and are mainly nocturnal, resting under bushes by day. Home ranges in California average 20 ha (depending on population density), with females having larger ranges than males.
Grasses and herbaceous matter are the preferred foods of this species, but twigs and young bark of woody plants are the staple food when other plants are not available. Sagebrush and cacti are also taken. Jackrabbits eat almost constantly and consume large quantities relative to their size; 15 jackrabbits eat as much as one large grazing cow in one day. Black-tailed jackrabbits do not require much water and obtain nearly all the water they need from the plant material they consume.
As with many other Lepus species, the black-tailed jackrabbit has been widely used as food by humans, especially by Native Americans. Their fur is neither durable nor valuable, but it has been extensively used in the manufacture of felt and as trimming and lining for garments and gloves. Due to the removal of natural predators, such as coyote and kit fox, by European settlers, black-tailed jackrabbit populations have undergone incredible population explosions in which crops, orchards, and rangelands have suffered. They do considerable damage to farms, forest plantations, and young trees.
Population numbers of black-tailed jackrabbits are sometimes quite high despite attempts at culling their populations by ranchers and farmers. Population densities often reach 470 animals per square km, with densities as high as 1,500 animals per square km having been recorded. Large herding attempts have netted as many as 6,000 hares at a time. As with many hares, populations undergo drastic fluctuations, with population numbers peaking every 6 to 10 years. In some years more than 90 percent of western populations die from tularemia, which may or may not be related to the population cycling phenomenon. Because of their incredible fecundity, black-tailed jackrabbit numbers quickly recover from these kinds of die-offs. Black-tailed jackrabbit populations are not threatened in general, though extensive habitat destruction may reduce suitable habitat. (Wilson and Ruff, 1999) Jackrabbits obtained their name from early settlers of the Southwest who, noting the animal's extraordinarily long ears, dubbed it "jackass rabbit." This name was later shortened to jackrabbit. The species has 8 named subspecies. (Wilson and Ruff, 1999) Liz Ballenger (author), University of Michigan-Ann Arbor.
Flux, J.E.C. and R. Angermann. 1990. The hares and jackrabbits. In: J.A. Chapman and J.E.C. Flux (eds.), Rabbits, Hares and Pikas: Status Survey and Conservation Action Plan. Information Press, Oxford, U.K.
Grzimek's Encyclopedia of Mammals.
Nowak, R.M. and J.L. Paradiso. 1983. Walker's Mammals of the World. 4th edition. Johns Hopkins University Press, Baltimore, MD.
Rue, L.L. 1967. Pictorial Guide to the Mammals of North America. Thomas Y. Crowell Company, New York.
Wilson, D.E. and S. Ruff. 1999. The Smithsonian Book of North American Mammals. Smithsonian Institution Press, Washington, D.C.
Reptiles are among the planet's oldest creatures—crocodilians, for instance, have been terrorizing smaller animals for approximately 200 million years. But the majority of reptiles are unable to internally regulate their body temperature, and so they live in temperate and tropical climates. Some bear live young, but typically reptiles lay eggs from which miniature adults hatch. Survivors from the time of the dinosaurs, the crocodilians are a fierce family that includes crocodiles, alligators and caimans. The inspiration for many a phobia, snakes are a diverse family with many unique adaptations. A hard carapace, or shell, provides a measure of protection for the turtles, which are often water-dwelling reptiles. One of only two species of venomous lizards on the planet, the Gila monster faces serious threats to its habitat.
- Try this experiment to find out if you're a water waster.
- How sweet is this activity? It's an introduction to the rock cycle using chocolate!
- How are people affecting your local environment? How is our planet changing? Join the "citizen science" movement, and you can help discover the answers. Citizen science is a form of open collaboration in which members of the public participate in the scientific process to address real-world problems. Volunteers can work with scientists to identify research questions, collect and analyze data, interpret results, make new discoveries, develop technologies and applications, as well as solve complex problems.
- Trying to "see" what is beneath the surface of the Earth is one of the jobs of a geologist. Rather than digging up vast tracts of land to expose an oil field or to find some coal-bearing strata, core samples can be taken and analyzed to determine the likely composition of the Earth's interior. In this activity, students model core sampling techniques to find out what sort of layers are in a cupcake.
- Learn how soil scientists observe and record data and how that information is useful to farmers, builders, and others in order to use the land appropriately.
- Prepare a kit in case of natural hazards or a disaster. This list from FEMA and the Red Cross will have you prepared for almost any emergency!
- A fossil is any evidence of past life preserved in a geologic context, such as within rock or sediment. This activity allows you to explore the process used by paleontologists — scientists who study fossils to understand ancient landscapes, climate, and life on Earth — to find and identify fossils.
- This activity gives your students a glimpse of the difficulty of seafloor surveying, as well as the challenges the JOIDES Resolution faces during each expedition. Your students will also learn about latitude and longitude and plotting coordinates.
- The ocean is the key element in Earth's hydrologic cycle (water cycle). Students will construct a simple model of the hydrologic cycle to help them visualize and understand the movement of liquid water and heat.
- An instructional unit on caves for grades K-3: five short chapters, with follow-up activities and lessons.
During the life cycle of Rubus spectabilis, both sexual and asexual reproduction occur. Sexual reproduction involves seeds, whereas asexual reproduction involves layering, sprouting from rhizomes, and basal sprouting. Sexual Reproduction: Flowers are pollinated by insects and hummingbirds. When pollination is successful, drupelets are formed. These drupelets are eaten by animals, which pass the seeds through the digestive tract to new sites. Most of these seeds lie dormant for many years, creating a large seed bank. Under natural conditions these seeds are stimulated to germinate by disturbances such as wind, fire, and human activity. Within several years after germination, seedling growth approaches 20 to 30 cm per year. [Figures: flowers of Rubus spectabilis and the formation of drupelets. Photos courtesy of Pat Breen, Oregon State Univ.] Asexual Reproduction: There are three processes this organism can use for asexual reproduction: layering, sprouting from rhizomes, and basal sprouting. Layering occurs when the stem is pinned to the soil by an object, such as a large tree branch, and buds on the upper side of the stem form new aerial shoots, while adventitious roots form on the lower surface. Basal sprouting occurs when buds located near the base of the stem or in the root crown re-establish Salmonberries that have been destroyed or damaged. Sprouting from rhizomes occurs when rhizomes grow within several feet of the soil surface and form dense, interwoven mats. Rhizomes are capable of producing buds every half to one inch, meaning a single network can contain hundreds of thousands of buds per acre. This is the general life cycle of Salmonberries. Starting with the plant flowering, it is pollinated and fruit production begins. As the fruit ripens, seeds are produced. These seeds, which are taken in by animals and distributed elsewhere, begin to germinate to form seedlings. The seedlings become young plants, and growth and maturation take place to produce adult Salmonberries. The adult plants age and eventually die, new plants grow in their place, and the life cycle starts over again.
Just Ask Antoine!
- ebulliometry. ebulliometric.
- Determination of the average molecular weight of a dissolved substance from the boiling point elevation of the solution.
- EDTA. ethylenediaminetetraacetic acid; versene.
- A polydentate ligand that tightly complexes certain metal ions. EDTA is used as a blood preservative because it complexes free calcium ion (which promotes blood clotting). EDTA's ability to bind lead ions makes it useful as an antidote for lead poisoning.
- effective nuclear charge. (Zeff) Compare with atomic number.
- The nuclear charge experienced by an electron when other electrons are shielding the nucleus.
- efflorescent. efflorescence; efflorescing. Compare with deliquescent and hygroscopic.
- Efflorescent substances lose water of crystallization to the air. The loss of water changes the crystal structure, often producing a powdery crust.
- effusion. effuse. Compare with diffusion and diffraction.
- Gas molecules in a container escape from tiny pinholes into a vacuum with the same average velocity they have inside the container. They also move in straight-line trajectories through the pinhole.
- electric charge. charge.
- A property used to explain attractions and repulsions between certain objects. Two types of charge are possible: negative and positive. Objects with different charges attract; objects with the same charge repel each other.
- electric current. current; electrical current.
- A flow of electric charges. The SI unit of electric current is the ampere.
- electric dipole. dipole.
- An object whose centers of positive and negative charge do not coincide. For example, a hydrogen chloride (HCl) molecule is an electric dipole because the bonding electrons are on average closer to the chlorine atom than to the hydrogen, producing a partial positive charge on the H end and a partial negative charge on the Cl end.
- electric dipole moment. (µ) dipole moment.
- A measure of the degree of polarity of a polar molecule. The dipole moment is a vector with magnitude equal to charge separation times the distance between the centers of positive and negative charge. Chemists point the vector from the positive to the negative pole; physicists point it the opposite way. Dipole moments are often expressed in units called debyes.
- electric field.
- A field of forces that act on any electric charge placed within it. The stronger the field, the stronger the force that acts on the charge. For example, the positive charge on an atomic nucleus creates an electric field that traps electrons.
- electrical conductivity. conductivity; electric conductivity; electrical conductance; conductance. Compare with resistance.
- A measure of how easily an electric current can pass through a material. The conductivity is the reciprocal of the resistance. The SI unit of conductance is the siemens.
- electrical resistance. resistance. Compare with conductivity.
- The ability of a material to oppose the flow of an electric current, converting electrical energy into heat. The SI unit of resistance is the ohm.
- electrochemical cell. electric cell.
- A device that uses a redox reaction to produce electricity, or a device that uses electricity to drive a redox reaction in the desired direction.
- electrode.
- An electrically conducting surface that allows electrons to be transferred between reactants in an electrochemical cell.
- electrolytic cell.
- A device that uses electricity from an external source to drive a redox reaction.
- electrolysis.
- The process of driving a redox reaction in the reverse direction by passage of an electric current through the reaction mixture.
- electrolyte.
- A substance that dissociates fully or partially into ions when dissolved in a solvent, producing a solution that conducts electricity. See strong electrolyte, weak electrolyte.
- electromagnetic radiation. electromagnetic wave.
- A wave that involves perpendicular oscillations in the electric and magnetic fields, moving at a speed of 2.99792458 × 10⁸ m/s in a vacuum away from the source. Gamma rays, x-rays, ultraviolet light, visible light, infrared radiation, and radio waves are all electromagnetic waves.
- electron. (e⁻) Compare with proton and neutron.
- A fundamental constituent of matter, having a negative charge of 1.602 176 462 × 10⁻¹⁹ coulombs ± 0.000 000 063 × 10⁻¹⁹ coulombs and a mass of 9.109 381 88 × 10⁻³¹ kg ± 0.000 000 72 × 10⁻³¹ kg [1998 CODATA values].
- electron affinity.
- The enthalpy change for the addition of one electron to an atom or ion in the gaseous state. For example, the electron affinity of hydrogen is the ΔH of the reaction H(g) + e⁻ → H⁻(g), with ΔH = −73 kJ/mol.
- electron configuration. electronic configuration.
- A list showing how many electrons are in each orbital or subshell. There are several notations. The subshell notation lists subshells in order of increasing energy, with the number of electrons in each subshell indicated as a superscript. For example, 1s² 2s² 2p³ means "2 electrons in the 1s subshell, 2 electrons in the 2s subshell, and 3 electrons in the 2p subshell."
- electronegativity. Compare with ionization energy and electron affinity.
- Electronegativity is a measure of the attraction an atom has for bonding electrons. Bonds between atoms with different electronegativities are polar, with the bonding electrons spending more time on average around the atom with the higher electronegativity.
- electron volt.
- The energy required to move an electron through a potential difference of 1 volt. An electron volt is equivalent to 1.6 × 10⁻¹⁹ J.
- electrorefining.
- Electrorefining is a method for purifying a metal using electrolysis. An electric current is passed between a sample of the impure metal and a cathode when both are immersed in a solution that contains cations of the metal. Metal is stripped off the impure sample and deposited in pure form on the cathode.
- element. Compare with compound and mixture.
- An element is a substance composed of atoms with identical atomic number. The older definition of element (an element is a pure substance that can't be decomposed chemically) was made obsolete by the discovery of isotopes.
- element symbol.
- An international abbreviation for element names, usually consisting of the first one or two distinctive letters in the element name. Some symbols are abbreviations for ancient names.
- elementary reaction. Compare with net chemical reaction.
- A reaction that occurs in a single step. Equations for elementary reactions show the actual molecules, atoms, and ions that react on a molecular level.
- emission spectrum. emission spectra. Compare with absorption spectrum.
- A plot of relative intensity of emitted radiation as a function of wavelength or frequency.
- emollient.
- A substance added to a formulation that gives it softening ability. For example, oils that can soften skin are added as emollients in some skin creams.
- empirical formula. simplest formula. Compare with molecular formula.
- Empirical formulas show which elements are present in a compound, with their mole ratios indicated as subscripts.
For example, the empirical formula of glucose is CH2O, which means that for every mole of carbon in the compound there are 2 moles of hydrogen and one mole of oxygen.
- empirical temperature.
- A property that is the same for any two systems that are in thermodynamic equilibrium with each other.
- emulsion. Compare with colloid.
- A colloid formed from tiny liquid droplets suspended in another, immiscible liquid. Milk is an example of an emulsion.
- enantiomer. enantiomeric. Compare with diastereomer.
- Two molecules that are nonsuperimposable mirror images of each other. One enantiomer rotates plane-polarized light to the left; the other rotates it to the right.
- endothermic. endothermic reaction; endothermic process. Compare with exothermic.
- A process that absorbs heat. The enthalpy change for an endothermic process has a positive sign.
- endpoint. end point. Compare with equivalence point.
- The experimental estimate of the equivalence point in a titration.
- energy. Compare with heat and work.
- Energy is an abstract property associated with the capacity to do work.
- enkephalin.
- Enkephalins are molecules produced naturally by the central nervous system to numb pain. Enkephalins lock into receptors on the surface of a nerve cell and open ion channels. Ions flow into the cell and the distribution of charge on either side of the cell membrane becomes such that the nerve cell cannot fire.
- enthalpy. (H) enthalpy change. Compare with heat.
- Enthalpy (H) is defined so that changes in enthalpy (ΔH) are equal to the heat absorbed or released by a process running at constant pressure. While changes in enthalpy can be measured using calorimetry, absolute values of enthalpy usually cannot be determined. Enthalpy is formally defined as H = U + PV, where U is the internal energy, P is the pressure, and V is the volume.
- enthalpy of atomization. (ΔHat) atomization enthalpy; heat of atomization.
- The change in enthalpy that occurs when one mole of a compound is converted into gaseous atoms. All bonds in the compound are broken in atomization and none are formed, so enthalpies of atomization are always positive.
- enthalpy of combustion. (ΔHc) heat of combustion.
- The change in enthalpy when one mole of compound is completely combusted. All carbon in the compound is converted to CO2(g), all hydrogen to H2O(l), all sulfur to SO2(g), and all nitrogen to N2(g).
- enthalpy of fusion. (ΔHfus) heat of fusion; molar heat of fusion; molar enthalpy of fusion.
- The change in enthalpy when one mole of solid melts to form one mole of liquid. Enthalpies of fusion are always positive because melting involves overcoming some of the intermolecular attractions in the solid.
- enthalpy of hydration. (ΔHhyd) hydration enthalpy; heat of hydration.
- The change in enthalpy for the process A(g) → A(aq), where the concentration of A in the aqueous solution approaches zero. Enthalpies of hydration for ions are always negative because strong ion-water attractions are formed when the gas-phase ion is surrounded by water.
- enthalpy of neutralization. heat of neutralization.
- The heat released by an acid-base neutralization reaction running at constant pressure.
- enthalpy of reaction. (ΔHrxn) heat of reaction.
- The heat absorbed or released by a chemical reaction running at constant pressure.
- enthalpy of solution. (ΔHsoln) heat of solution. Compare with integral enthalpy of solution.
- The heat absorbed or released when a solute is dissolved in a solvent.
The heat of solution depends on the nature of the solute and on its concentration in the final solution.
- enthalpy of sublimation. (ΔHsub) heat of sublimation.
- The change in enthalpy when one mole of solid vaporizes to form one mole of gas. Enthalpies of sublimation are always positive because vaporization involves overcoming most of the intermolecular attractions in the solid.
- enthalpy of vaporization. (ΔHvap) heat of vaporization.
- The change in enthalpy when one mole of liquid evaporates to form one mole of gas. Enthalpies of vaporization are always positive because vaporization involves overcoming most of the intermolecular attractions in the liquid.
- entropy. (S)
- Entropy is a measure of energy dispersal. Any spontaneous change disperses energy and increases entropy overall. For example, when water evaporates, the internal energy of the water is dispersed with the water vapor produced, corresponding to an increase in entropy.
- environmental chemistry. chemical ecology.
- The study of natural and man-made substances in the environment, including the detection, monitoring, transport, and chemical transformation of chemical substances in air, water, and soil.
- enzyme.
- Proteins or protein-based molecules that speed up chemical reactions occurring in living things. Enzymes act as catalysts for a single reaction, converting a specific set of reactants (called substrates) into specific products. Without enzymes life as we know it would be impossible.
- equilibrium constant. (K, Keq) equilibrium constant expression; law of mass action. Compare with reaction quotient.
- The product of the concentrations of the products, divided by the product of the concentrations of the reactants, for a chemical reaction at equilibrium. For example, the equilibrium constant for A + B = C + D is equal to [C][D] / ([A][B]), where the square brackets indicate equilibrium concentrations. Each concentration is raised to a power equal to its stoichiometric coefficient in the expression; the equilibrium constant for A + 2B = 3C is equal to [C]³ / ([A][B]²). For gas-phase reactions, partial pressures can be used in the equilibrium constant expression in place of concentrations. (A short numerical sketch of this expression appears just after the end of this glossary.)
- equivalence point. Compare with end point.
- The equivalence point is the point in a titration when enough titrant has been added to react completely with the analyte.
- equivalent. Compare with normality.
- 1. The amount of substance that gains or loses one mole of electrons in a redox reaction. 2. The amount of substance that releases or accepts one mole of hydrogen ions in a neutralization reaction. 3. The amount of electrolyte that carries one mole of positive or negative charge; for example, 1 mole of Ba²⁺(aq) is 2 equivalents of Ba²⁺(aq).
- ester.
- An ester is a compound formed from an acid and an alcohol. In esters of carboxylic acids, the -COOH group and the -OH group lose a water and become a -COO- linkage: R-COOH + R'-OH = R-COO-R' + H2O, where R and R' represent organic groups.
- ethanol. (CH3CH2OH) ethyl alcohol; grain alcohol.
- A colorless, flammable liquid produced by fermentation of sugars. Ethanol is the alcohol found in alcoholic beverages.
- ethyl. (-Et, -CH2CH3) ethyl group.
- A molecular fragment produced by removing a hydrogen atom from ethane (CH3-CH3). For example, ethyl chloride is CH3-CH2-Cl.
- ethyl acetate. (CH3COOCH2CH3)
- A flammable liquid with a fruity odor, used in flavorings and as a solvent.
- eutectic point. eutectic temperature; eutectic composition.
- The composition and the melting point of a eutectic mixture. For example, the eutectic point of a mixture of NaCl and water occurs at 23.3% NaCl (by mass) and -21.1°C. That means that the lowest possible temperature at which a liquid NaCl solution can exist is -21.1°C; below the eutectic point the solution will freeze into a mixture of ice and salt crystals.
- eutectic mixture.
- A mixture of two or more substances with a melting point lower than that of any other mixture of the same substances.
- evaporation. vaporization.
- Conversion of a liquid into a gas.
- evaporate.
- To convert a liquid into a gas.
- excited state. Compare with ground state.
- An atom or molecule that has absorbed energy is said to be in an excited state. Excited states tend to have short lifetimes; they lose energy either through collisions or by emitting photons to "relax" back down to their ground states.
- excitotoxin.
- An excitotoxin is a toxic molecule that stimulates nerve cells so much that they are damaged or killed. Domoic acid and glutamate are examples of excitotoxins.
- exothermic. exothermic reaction; exothermic process. Compare with endothermic.
- A process that releases heat. The enthalpy change for an exothermic process is negative. Examples of exothermic processes are combustion reactions and neutralization reactions.
- experiment.
- An experiment is direct observation under controlled conditions. Most experiments involve carefully changing one variable and observing the effect on another variable (for example, changing the temperature of a water sample and recording the change in volume that results).
- experimental yield. actual yield. Compare with theoretical yield and percent yield.
- The measured amount of product produced in a chemical reaction.
- extensive property. extensive; extensive properties. Compare with intensive property.
- A property that changes when the amount of matter in a sample changes. Examples are mass, volume, length, and charge.
- extraction.
- A technique for separating components in a mixture that have different solubilities. For example, caffeine can be separated from coffee beans by washing the beans with supercritical fluid carbon dioxide; the caffeine dissolves in the carbon dioxide but flavor compounds do not. Vanillin can be extracted from vanilla beans by shaking the beans with an organic solvent, like ethanol.
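The equilibrium constant entry above is easy to check numerically. The following is a minimal Python sketch, not part of the original glossary; the reaction A + 2B = 3C and the equilibrium concentrations used below are illustrative assumptions, not measured data.

    # Sketch: evaluating an equilibrium-constant expression, K, from
    # equilibrium concentrations. Coefficients are signed: negative
    # for reactants, positive for products, so each concentration is
    # raised to a power equal to its stoichiometric coefficient.

    def equilibrium_constant(concentrations, coefficients):
        """Return K for the given equilibrium concentrations (mol/L)."""
        k = 1.0
        for species, coeff in coefficients.items():
            k *= concentrations[species] ** coeff
        return k

    # Hypothetical equilibrium concentrations for A + 2B = 3C:
    conc = {"A": 0.50, "B": 0.20, "C": 0.30}
    coeffs = {"A": -1, "B": -2, "C": 3}
    print(equilibrium_constant(conc, coeffs))  # [C]³/([A][B]²) = 1.35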
The human mind is the most complex entity that exists in the known universe. The mind is defined as a collection of neurons, or nerve cells, that continuously receive input from the outside world, process that information, and then send it to other neurons in order to give rise to the conscious and unconscious states experienced every day by humans all around the planet. There are roughly 100 billion neurons, making tens of thousands of connections every second (Schwartz and Begley 105). Due to our still primitive understanding of the mind, many believe in the solely left-brain individual and the solely right-brain individual. Furthermore, others are convinced that the neuronal connections in the brain are fixed and can never be changed. Still more people assume that the brain and thought are completely separate; they think that the outside world must always influence what goes on inside the mind. However, thousands and thousands of neuronal connections in the mind are influenced every single day by thought. The age-old adage "mind over matter" undoubtedly got it right; thought, along with experience, physically changes the structure and function of our brains. Neuroplasticity, as this concept is known, allows billions to change their inner world every day, making possible a life that would otherwise be inconceivable were it not for the incredibly "plastic" brain. Many believe that the left and right sides of the brain are completely separate entities, meaning that they do not "communicate" or influence one another. According to this theory, the brain is hard-wired for specific areas to perform specific functions. This is not the case. Different parts of the brain influence one another all of the time, causing innumerable "plastic" changes in the mind. As psychoanalyst Norman Doidge says, the right is normally the "artistic" and "imaginative" brain that controls spatial recognition, while the left is "the verbal domain" (260). However, lateralization in the brain allows these to interact and to physically change the neuronal connections in the brain. The experiments done by Roger Sperry in the 1960s on epileptic patients with severed corpora callosa show the importance of communication between the left and right sides of the brain. In extreme cases of epilepsy, the patient's corpus callosum is surgically severed in order to prevent the spread of a seizure from one hemisphere of the brain to the other (Gibb 89). In the experiments, Sperry flashed a picture of an object, such as a fork, into the left visual field of the patient, so that it would be processed in the right side of the brain. He then asked the patient what he saw, but the answer was difficult to formulate, since the language processes are in the left side of the brain (Gibb 90). The right hemisphere is unable to communicate with the left, so the patient cannot name the object. In some cases, the patient made up a word such as "spanner" in order to compensate (Gibb 90). Roger Sperry's experiments show that the left depends on the right, and vice versa, meaning the two are indeed connected. Undoubtedly, the left and right hemispheres of the brain are a single communicating unit influencing one another all of the time. Despite the evident interactivity displayed, the severed corpus callosum patients do not display the power that the left and right sides of the brain have in making "plastic" changes. The story of a girl born with only half of a brain illustrates how neuroplasticity makes some lives possible and all lives better.
Michelle's tale begins inside the womb, when complications caused her to be born with only the right half of her brain (Doidge 258). Despite having only half of a brain, Michelle is able to pray, read, watch movies, and love (functions of both the left and right brain), all thanks to the fact that her right brain took over the functions of her left brain (Doidge 259). She is able to read, comprehend, and discuss important issues even though she has no left hemisphere, which is said to be the verbal area of the brain. As Norman Doidge says, "it's hard to imagine a better illustration or indeed a greater test of human neuroplasticity" (Doidge 259). The right side of Michelle's brain had to take over the functions of the left side and also had to economize its "own" functions (Doidge 259). This further shows the great power of neuroplasticity, since the right side of the brain adapted, changed, and evolved language and spatial functions without any input from the left side of the brain. Michelle's story confirms that the brain is not hard-wired for specific areas to perform specific functions; she can perform functions of both hemispheres with only half of a brain. Although Michelle is very capable, she does have some physical and processing limitations, which might seem to suggest that neuroplastic changes are not taking place. In fact, the exact opposite is true. Her disfigured wrist, which is bent and twisted, is one sign of the missing hemisphere. Michelle has trouble seeing objects in her right visual field, since she lacks the left side of the brain that would normally process this visual input (Doidge 261). On the surface, it seems that Michelle's brain has failed to make the plastic changes necessary for her to process visual input in her right visual field; but, when examined more closely, one can easily see the fantastic neuroplastic changes her brain has made. Because of her limited vision, Michelle has hypersensitive hearing that allows her to hear what she cannot see, such as her brothers attempting to steal her French fries (Doidge 261). The area for hearing on Michelle's brain map has actually partially taken over the area for processing vision, causing hypersensitive hearing. The neurons for sight physically changed their structure and function in order to process auditory input, another sign of the brain's ability to adjust. The fact that Michelle lacks some functions does not show that the brain is rigid, as some would quickly assume. Instead, it shows the amazing adaptability of the human brain. Michelle's life was enhanced by the ability of her brain to respond to and change with environmental stimuli. The examination of phantom limb patients further confirms that neuroplasticity improves lives by enabling the brain to change in response to sensory input from the outside world. The inputs received from the outside world rewire the brain quite concretely, because specific neurons in the brain are connected to the neurons that take in the senses all over the body. Neurologist V. S. Ramachandran describes a phantom limb as "an arm or leg that lingers indefinitely in the minds of patients long after it has been lost in an accident or removed by a surgeon" (22). Simply put, the sensation of the limb is still felt by the patient even after the limb is lost. When body parts distinct from the lost limb are stimulated, the sensation is also felt in the phantom limb, revealing the nature of neuroplasticity.
A certain area of the brain receives certain specific sensory input. For example, there is an area for the genitals, which lies next to the area for the feet, which lies next to the area for the trunk (Ramachandran and Blakeslee 26). This cortical representation is known as a brain map. When someone loses a limb, the brain map will reorganize, meaning that the neurons for the genitals will take over the area for the feet and cause sensation to the genitals to be felt in the feet, even though the feet are not physically present (Ramachandran and Blakeslee 36). Ramachandran believes that when a part of the body is lost, "its surviving brain map hungers for incoming stimulation and releases nerve growth factors that invite neurons from nearby maps to send little sprouts to them" (Doidge 183). This process culminates in the existence of a phantom limb and provides empirical evidence that sensory input influences the physical connections made by neurons, further supporting neuroplasticity. But how is neuroplasticity helping the affected phantom limb patients? World-renowned neurologist Oliver Sacks says that many patients with phantom limbs experience persistent phantom pain (Sacks, The Man Who 69). Furthermore, Dr. Herta Flor of Humboldt University in Berlin, Germany, and her team found that there is a direct positive correlation between the amount of pain experienced by phantom limb patients and the amount of cortical reorganization (Flor et al. 482). Simply put, the more changes the neurons made, the more pain the patients experienced. It might seem as though neuroplasticity causes only pain and discomfort to the patient. However, when one investigates the procedure for getting rid of phantom limbs, neuroplasticity shows how immensely it improves the lives of phantom limb patients. Many phantom limb patients have phantoms that they experience as paralyzed, immovable. Ramachandran attributes this to the fact that many amputees had their limbs in casts or slings for an extended period of time prior to amputation, which caused the brain to believe that the limb was frozen because it never received feedback that the limb was moving (Doidge 185). Thus, the brain reorganized its motor neurons, convincing itself that the limb was paralyzed. Then, once the limb is amputated, the brain still believes that the limb is immobile. Since the brain is certainly not receiving any input to tell it otherwise, a paralyzed phantom limb results (Doidge 185). The good news is that the brain is malleable enough to reverse this situation through a simple therapy technique developed by Ramachandran. He designed a mirror box that fools the brain into believing that the patient still has both arms, even if one has been amputated. The patient inserts the good arm into the box and can actually "see" the phantom move, as well as feel it move (Doidge 186). Over several weeks, the patient uses the box to fool the brain into thinking the phantom limb is moving, without even having an arm to move. The paralysis is unlearned by stimulating a plastic change, which rewires the brain map (Doidge 187). Ramachandran's first patient to use this device, Philip, had been thrown from a motorcycle a decade before, damaging the nerves in his arm and leaving him with an immovable, yet present, arm (Doidge 187). He eventually opted to have his arm surgically removed, but he was left with a "frozen" phantom limb (Doidge 187).
After four weeks of using the mirror box, Philip's phantom not only became permanently unfrozen, but actually disappeared (Doidge 187). The amazing neuroplastic brain had, yet again, improved the life of another unfortunate person. Stroke patients are another group helped by the neuroplastic brain. A stroke occurs when a blood vessel going to the brain is either blocked or ruptured, causing blood flow to that particular area of the brain to stop, which, in effect, kills those brain cells (Stroke page #?). These dead brain cells oftentimes cause paralysis in the body part that corresponds to the area of the brain where the dead cells are located. In the early 1990s, behavioral neuroscientist Edward Taub began to work with stroke patients in order to try to restore some movement to their paralyzed limbs (Schwartz and Begley 187). He did so by using a technique called constraint-induced movement (CI) therapy, in which patients restrain their working arm and are only able to use their impaired limb (Taub 347). He began with patients who were in the top quartile of stroke patients in their ability to move their affected limbs, meaning they were able to "extend their wrist a minimum of twenty degrees and to flex each finger a minimum of ten degrees" (Schwartz and Begley 189). Two weeks after therapy, the patients had regained considerable use of their seemingly paralyzed limbs (Wolf et al. 2104). More importantly, though, the patients could complete daily tasks 97% more effectively after just one month of training (Schwartz and Begley 191). Taub went on to study patients in the second and third quartiles as well, patients with more restricted use of their affected limbs. He found that CI therapy worked for them, but not nearly as well as for those who began with higher-functioning limbs (Schwartz and Begley 192). So, what is the origin of this improvement? If neuroplastic changes were taking place, then one should be able to empirically measure those changes. It is not enough just to see people moving their limbs to conclude that neurons are physically changing their structure or that brain maps are reorganizing. Most importantly, one must see the changes in the neurons to say that neuroplasticity is acting, especially considering that the patients in the second and third quartiles did not improve as much as those in the first quartile. In 1998, the first visual evidence of neuronal changes appeared in a study by Joachim Liepert and Cornelius Weiller of Friedrich Schiller University in Jena, Germany (Schwartz and Begley 192). They evaluated six chronic stroke patients, who had undergone constraint-induced movement therapy, before and after they received Taub's treatment (Liepert and Hamzei 710-711). All six patients improved in motor function (Schwartz and Begley 192). But, more importantly, "following CI therapy, the formerly shrunken cortical representation of the affected limb was reversed" and "an increase of excitability of the neuronal networks in the damaged hemispheres" was found (Schwartz and Begley 192-193). An expansion of the brain maps of the affected limbs was seen using brain-imaging techniques. Also, an increase in electrical activity was shown in the damaged brain areas, which corresponded to the damaged limbs. Furthermore, the physical changes in neuronal connections thought to be occurring during CI therapy were actually seen for the first time, demonstrating that neuroplasticity had altered the brain.
To further understand how neuroplasticity can save people, as it did for the stroke patients, one must investigate "mental force" and its use in helping the obsessive-compulsive. Mental force is the willful, effortful use of one's thoughts to amend the neuronal circuits in the brain. Psychiatrist Jeffrey M. Schwartz, who coined the term, was able to harness this mental force to create a four-step method to treat Obsessive-Compulsive Disorder (OCD). OCD is an anxiety disorder characterized by recurrent, unwanted thoughts, known as obsessions, and repetitive behaviors, referred to as compulsions (NIMH). In the brain of someone afflicted with OCD, there are two neuronal pathways that can be taken: one that leads to a compulsion or obsession, and one that leads to behavior that takes the person away from the obsession or compulsion. Dr. Schwartz created a process with four steps (relabel, reattribute, refocus, and revalue) in order to improve the lives of those with OCD. The Four-Step Method, as he called it, changes the brain, making it easier for the afflicted patient to follow the neuronal pathway to behavior that is not obsessive or compulsive (Schwartz and Begley 87). Relabeling is when patients recognize that the "obsessive thoughts and compulsive urges…are inherently false and misleading" (Schwartz and Begley 80). The afflicted "relabel" the thoughts and urges as a mere confabulation of the mind, a symptom of OCD. The next step, "reattribution," holds that these obsessions are due to faulty brain wiring and that the obsession is not the actual "self" (Schwartz and Begley 81). The third step, refocusing, is the most important one, because it lays down the circuit for non-obsessive, non-compulsive behavior. When patients feel the need to act on an urge, they focus their attention on something that is not compulsive. For instance, if someone has the urge to constantly wash his hands, he will focus his attention on gardening rather than act on the urge (Schwartz and Begley 83). Every time a compulsion arises, the person must refocus his attention away from it, and over time this becomes easier and easier until the "good" pathway is followed every time. This shows the great impact of neuroplasticity, since the neuronal pathways are actually changed: the person goes from following the diseased pathway to following the healthy one. Revalue, the final step, "means quickly recognizing the disturbing thoughts as senseless, as false, as errant brain signals not even worth the gray matter they rode in on, let alone acting on" (Schwartz and Begley 88). The afflicted must step outside of themselves and see their thoughts as nothing more than the disease. The Four-Step Method developed by Dr. Schwartz shows how thought leads to neuroplastic changes in the brain that improve the life of someone with OCD. The immense power of neuroplasticity improves the lives of billions every day. By understanding that the two halves of the brain are constantly communicating, one can more easily see the enormous changes that take place inside the brain of someone who has only half of a brain, such as Michelle. Although Michelle's case is a rare one, there are many more people with phantom limbs who also experience the power of the brain to change its neuronal connections, which leads to a more normal and pain-free life.
Still more people are debilitated annually by stroke, but the brain is there to assist again, reorganizing its maps in order to regain use of formerly paralyzed limbs. Finally, there are those who are inundated with obsessive and compulsive thoughts and who must use mental force to change the plastic brain, making their lives certainly more enjoyable. In a world without a plastic brain, none of these amazing stories of recovery would be imaginable. However, one does not need to experience a stroke to see the neuroplastic brain in action. Someone might just be experiencing depressive thoughts, and all she needs to do is change her thought pattern in order to change the neuronal connections to follow the "good" pathway. Neuroplasticity is not all about miraculous stories of recovery from debilitating diseases. It is simply that every individual has, inside himself or herself, the ability to change his or her own individual world, one neuron at a time. Plastic means that the brain contains "changeable, malleable, [and] modifiable" aspects (Doidge xix). Lateralization refers to the fact that the brain exists as two hemispheres, the left and right, that act separately yet still interact with one another. The corpus callosum is a thick, flat bundle of nerve fibers that connects the two hemispheres of the brain (Gibb 89). A brain map refers to the idea that a certain area of the brain receives certain specific sensory input. Cortical refers to the cerebral cortex of the brain, which is the wrinkled part that resembles a walnut; it is split into four areas: parietal, temporal, occipital, and frontal.
Stay up to date with the latest information on the Wuhan coronavirus (now known as COVID-19) outbreak, made available by the WHO. COVID-19 is still spreading, affecting mostly those living in China, with some recent outbreaks in other countries. Most individuals who get infected by the virus experience mild illness and recover, but for other people it can be more severe. The best way to take care of your health during this period and protect other people is by doing the following:
1. Wash your hands frequently
Alcohol-based hand sanitizers are very useful for regularly and thoroughly cleaning your hands. Better yet, wash your hands thoroughly with soap under running water.
Why? Washing your hands with soap and water, or using an alcohol-based hand rub, kills viruses that may be on your hands.
2. Maintain social distancing
Always keep a minimum distance of 1 meter (3 feet) between yourself and any person who is sneezing or coughing.
Why? When a person coughs or sneezes, they unknowingly spray tiny liquid droplets from their mouth or nose, which may contain the virus. If you stand too close to them, you can breathe in the droplets, including the COVID-19 virus if the person sneezing or coughing has the disease.
3. Avoid touching your eyes, nose and mouth
Why? Our hands touch many surfaces, such as doorknobs, elevator buttons, stair railings, and handles, where we can easily pick up viruses. Once your hands are contaminated, they can quickly transfer the virus to your nose, eyes, or mouth. From there, the virus can travel into your body and make you sick.
4. Practice respiratory hygiene
Make sure you, and every person around you, follow good respiratory hygiene. This means that you must cover your mouth and nose with a tissue or your bent elbow when you sneeze or cough. If you use a tissue, dispose of it immediately.
Why? Using a bent elbow or a tissue protects the people around you from contracting viruses such as cold, flu, and COVID-19.
Stay at home if you begin to feel unwell. If you develop a fever, cough, and difficulty breathing, seek immediate medical attention and place a call to the emergency health service in your country. Follow the directions provided by your local health authority.
Why? Your national and local authorities will always have the most up-to-date information on the situation in your area. Calling in advance when you feel any symptoms will allow your doctor or health care provider to refer you to the correct health facility quickly. This will also help to protect you and prevent the spread of viruses and other infections.
Stay up to date with information and follow the advice offered by your healthcare provider. Check for advice and information from your healthcare provider, as your national and local public health authorities will always share information on how you can protect yourself and others from contracting COVID-19.
Why? The WHO makes sure that your national and local authorities always have the most up-to-date information on whether or not COVID-19 is spreading in your area. They are in the best position to advise on what people in your area must do, or avoid, to protect themselves.
Protective measures for people who are currently in, or have recently visited (within the past 14 days), places where COVID-19 has been reported: Follow the guidance outlined above. Stay at home and take care of yourself if you begin to feel ill, even with mild symptoms such as a slight runny nose or headache, until you recover.
Why? Avoiding contact with other people and avoiding visits to medical facilities will allow those facilities to operate more efficiently and will help protect you and other people from possibly getting COVID-19 and other viruses.
If you have already developed a fever, cough, and problems breathing, seek medical advice promptly, as these may be the result of a respiratory infection or another serious medical condition. Call in advance and tell your provider about any recent travel or recent contact with travelers.
Why? Calling in advance allows your health care provider to direct you to the appropriate health facility swiftly. This also helps to prevent any likely spread of COVID-19 and other viruses.
We hope that you found this article helpful. Please do not hesitate to leave a comment below.
Linguistic and conceptual development are two important facets of the language learning process. Acquiring a new word requires the child to recognize a conceptual unit and a linguistic unit, and to create a connection between them. Primary school teachers are often bewildered by the relationship between the teaching of language and the teaching of concepts. Should they teach language and conceptual development separately? Should they teach language first and watch children form concepts out of the language? Or should they always combine the two? A related question teachers face is whether a Russian speaker perceives reality differently from, let's say, a Dutch speaker. Does language shape our thoughts and change the way we think? The idea that the words and grammar we use cause differing perceptions of experience has long been a point of contention for linguists. That said, before the child learns to speak, he is already using symbolic thought, in which one object stands for another. Evidence for the role of language comes from studies of blind and deaf children: their conceptual development is delayed by between one and four years, yet in the end they acquire almost the same concepts as other children in spite of their language handicap. Interestingly, language gives us the very structure by which we think. As Noam Chomsky argued, the greatest difference between animals and human beings is marked by the latter's capacity to use language. No matter in which culture a human child is brought up, the nuances, meaning, and social knowledge of the language are imbibed by the child. Do explore "universal grammar" to learn more. Concepts, on the other hand, are presentations. As such, they are the basis, foundation, or grounds for representations of our knowledge of things and objectives. Concepts are visualized because they do not exist in the physical, material world. An important point to note here: explaining the progression of logic in a (computer) program is possible only if the reader can correctly visualize (imagine) it in his mind. Together with images, we use language to build our ideas and perceptions, even within our imagination. When our minds decipher what we conceive or perceive, they are bound by these concepts.
Language and Concepts vs. the Actual Object in Reality
Language has its limitations, and we will need to create a common human language and express it in all prevalent languages. I have written a few articles on the limitations of language and on how to build a universal language that would include mathematical language, qualitative language, and causal language. As human beings, we need to make sure that we understand the limitations of language, which is our own invention. Reality and existence are there; language can only help create a picture of them, while imagining and understanding them is the effort of every human being, and the reward is a resolved state. Every single human being can be resolved and live in alignment, always in the abundance that Nature already provided as a prerequisite for human evolution. Words are just tools to help you imagine an object in your imagination. Human beings invented language, and that is its utility. You may call it the embodied spirit, a Japanese speaker may call it Seishin, a Spanish speaker may call it Espiritu, and in Hindi you may call it Aatman; the object is the same, and being able to understand it is your own effort, and that is what is important. Language is a construct of the human race. Reality existed before we started using and learning language.
So it has its limitations. What exists is existence, whether it is given a name or not. It is ever-present, holistic, with everything included and no construction or destruction. Just ever-present. It is too simple, and we need to unlearn a lot to look at it in all its simplicity. Everything just is. Human beings look at it as growing or decaying and think of one as positive and the other as negative. The baby is born, grows, and dies; that is the rule of the physical body. The soul, or life atom, observes and understands the world through this adopted body. Nonexistence is an abstract concept that represents nothing for human beings to understand. We can only understand by observing what exists and finding the laws that govern such existence. We can satiate our inquisitiveness by knowing and understanding all that exists. Nothing more is required, and nothing less will suffice. All elements in existence are complete in themselves and do their part in the whole.
Language is there to explain reality
Reality is there, and human beings can use language to explain that reality, understand that reality, and experience that reality. Subjective or objective is a relational matter based on the point of reference. Existence is holistic, and neither the observer, nor what is observed, nor the feelings generated in observing can be of any use individually. Holistically, we human beings exist as a race and need to understand this in order to be in order. Everything else (the animals, plants, and materials) knows its role in the bigger system and is in order.
Popular Science: The Ordovician struggle for a solid base in a sea of soft sediment, focusing on the conulariids of the Prague Basin
The Ordovician is the second period of the Paleozoic. It was codified in 1960 at the 21st International Geological Congress in Copenhagen, although the proposal for its definition dates back to the latter half of the 19th century. At the time, the English geologist Charles Lapworth, who was studying index fossils, proposed the creation of a new geological period that would include rocks younger than the Cambrian ones of North Wales and older than the Silurian ones of South Wales. Lapworth correctly recognized the significant differences in the faunas of the two periods and included in the Ordovician those organisms that were not typical of either. The Ordovician was one of the coldest periods of the Phanerozoic, the eon that began in the Cambrian and continues to this day. Low global temperatures were caused by the position of much of the continent of Gondwana around the South Pole. At the end of the period, one of the largest waves of mass extinction in the history of life on Earth took place. In many areas of the Ordovician ocean, the bottom was formed primarily by soft sediments. In this environment, living organisms had to fight for every bit of solid ground to which they could attach, and many groups of animals and plants used other organisms for this purpose. A large number of different organisms known from the fossil record overcame the inhospitable soft environment in this way. Among the many groups that served as a solid foundation were the conulariids. These were animals with fourfold symmetry belonging to the cnidarians. Conulariids lived from the late Ediacaran to the Triassic, when they disappeared after the largest known mass extinction wave. They subsisted by filtering seawater, their tentacles picking up organic detritus from the water column. Their mouths were covered with four triangular flaps that could hide the animal inside its shell-like structure, which was composed of chitinophosphatic and organic microlamellae. Jana Bruthansová found over 200 individuals carrying sessile organisms. Sometimes only attachment scars have been preserved on the conulariid shell; other times whole epibionts (organisms that live on the surface of another living organism) have been. The fossils come from the Ordovician rocks of the Prague Basin. Most of the finds belong to the Letná and Zahořany formations of the Sandbian and Katian stages, which corresponds to the time of greatest conulariid diversity in this period. From some finds it is evident that the epibiont attached only after the death of the individual. The sessile organisms belong to several groups of animals. The most common epibionts on the bodies of the conulariids include brachiopods (the family Craniidae), bryozoans, and edrioasteroids (echinoderms); less often there are also monoplacophorans and the taxonomically problematic genus Sphenothallus, which is of conulariid affinity. An interesting outcome of the research is also the preference for several genera of conulariids over others. Epibionts were most often found on the outside of shells of the species Anaconularia anomala and of the genus Archaeoconularia. Finds inside the shell, or sitting on the inside of the closable flaps, are very rare and belong to the monoplacophorans. These did not reach the shell interior until after the death of the conulariid.
Of the approximately 5,000 described conulariids from the Ordovician of the Prague Basin, only 4% exhibit at least some indication of a sessile animal. The results were compared with findings of epibionts on conulariids from the area of today's Morocco. At that time, today's Czech Republic lay relatively close to that region, as is evident primarily from finds of very similar trilobite faunas of the same age in both areas. In both studies, a similar proportion of the dominant groups of epibionts was found on conulariid shells. In addition, a certain targeting by sessile animals, mostly in the larval stage, was found on individuals of the genus Archaeoconularia: the larvae of brachiopods and edrioasteroids, for example, would likely seek the most suitable places to attach, the best being near the centre of the sides of the shell, which offered the largest space for epibionts to grow. Overall, however, attachment appears to have occurred in random places rather than in pre-selected ones. The study of interactions between animals in the paleontological record is an interesting subfield of the discipline, in which processes observable in the modern animal kingdom are applied to the past. During the Ordovician period there was an observable increase in the number of species, as individual organisms tried to adapt as well as possible to the environmental conditions of the time, and some animals skilfully used other organisms on which to perch. The scientific article is a contribution to the GAČR project (18-05935S): From the past to the present: fossil versus recent shells of marine animals as a substrate for colonization and bioerosion. Co-researcher from the Department of Geology and Palaeontology, Faculty of Science, Charles University: doc. RNDr. Katarína Holcová, CSc. Bruthansová, Jana, and H. V. Iten. "Invertebrate epibionts on Ordovician conulariids from the Prague Basin (Czech Republic, Bohemia)." Palaeogeography, Palaeoclimatology, Palaeoecology 558 (2020): 109963.
Basic Guidelines For English Spellings
augend (noun): The number to which another is added. Compare with addend.
- 'We can begin with the combination in which both the addend and the augend are 0's but the carry bit from the previous column is a 1.'
- 'For instance, the message + 5 (add five) carries an implicit assumption that the augend is the present value of the number receiving the message.'
- 'The results of these additions are divided by the number of augends added.'
Origin: late 19th century, via German from Latin augendus, gerundive of augere 'to increase'.
By Jeffrey Mays
Teachers and students agree that lab days are everyone's favorite days in science class. They are a unique feature of the science classroom: a day of activity and social interaction when exciting behaviors of the natural world are witnessed. But how can this experience be replicated when learning is occurring online? Many homeschool co-ops are facing the challenge of how to conduct effective lab experiments online. So what is the solution? Is it for the teacher to do the experiment while students watch on the screen? No. The solution is for students to do much, if not all, of the experiment at home, with instructor supervision taking place via video conference. Just because lectures have to be delivered from a computer screen doesn't mean that lab experiments must work that way too. Student involvement in experiments is non-negotiable—and that might require parental engagement, or even working with other families in the community. While some accommodations must be made, experiments in an online learning context can still be effective. If you are searching for a solution, consider the following resources available from Novare Science: The Student Lab Report Handbook – although this book becomes most important starting in 9th grade, it contains useful information for middle school students about keeping a lab journal, accounting for and analyzing experimental error, and understanding the difference between accuracy and precision. In high school, students should read chapters 1-6 of this book at the time of their first experiment. Teaching Science So That Students Learn Science – chapter 9 of this little book contains excellent material on lab work and the importance of writing lab reports. Experiment Manuals – each Novare science textbook has an accompanying experiment manual, either sold as a separate book or included in the digital resources. For physics-based topics, there are separate student instructions with an explanation of the procedure, as well as material for the teacher/parent covering learning objectives, materials lists, and pre-lab discussion points. Chemistry and biology experiment books are different in that every student is intended to have their own copy along with the teacher. Preparing in advance for your science lab is absolutely necessary—including both gathering supplies and studying for the upcoming lab. Novare labs are not paint-by-numbers affairs! They are designed to bring students into the daily experience and teach the skills of real scientists; a seat-of-the-pants approach is anything but scientific. Our own Scholé Academy has found ways to execute lab experiments such that students at home are effectively engaged and receive quality lab experience and instruction. One of our science instructors at Scholé Academy, Dr. Kathryn Morton, offered some general insights, which I've adapted below, on how to conduct experiments in the online learning context:
- If the teacher has a smartphone or any video filming capability, he or she should conduct the lab in advance and record it, especially demonstrating how to do any difficult parts. This will also help the teacher know what to expect when the students perform the lab at home.
- Whether with a single student or a class, at least one parent should be involved with every experiment. Not only are there safety concerns—such as when mixing chemicals or working with a flame or electrical source—but frequently an extra pair of hands is needed, and sometimes a team of three or four is necessary.
Parents can coach their student(s) on the need for the extreme attention to detail that science demands. They can also provide guidance when it comes to taking accurate measurements and deciding what to note in students' lab journals.
- Provide students and families with materials lists well in advance so they are able to procure equipment. Don't be afraid to substitute less expensive items if budgets are a concern—for example, a polypropylene graduated cylinder for $3 works just as well as a glass one for $6.
- Discuss the lab in a session prior to conducting the experiment. Build anticipation and discuss safety issues, proper lab protocol, and any modifications that will be required for the procedure. You can also use this time to build any necessary data tables together.
- Have the students prepare any required chemistry solutions prior to the lab, ensuring they make enough for any subsequent experiments.
- On the day of the lab, expect students to show up on time with equipment gathered and an appropriate place cleared off for conducting the experiment. Read the procedure aloud and present any videos you want to show them demonstrating technique. Then let them get started. If you have access to a business Zoom account with the right features, you can separate students into individual breakout rooms and monitor them individually.
- Assume that the experiment will take more than the time allotted and that students will have to complete the lab after class is over and the cameras go off.
Adaptation will vary from experiment to experiment and from one science discipline to another. The Pendulum Experiment in Introductory Physics and Accelerated Studies in Physics and Chemistry, for example, is very easy to conduct at home; you may need to be creative on frog dissection day. Don't feel bad about referring students to videos on YouTube that they should watch beforehand. You may need to decide on certain experiments you will do using the method above, and supplement with additional experiments recorded and watched after hours. Online experiments may require considerable accommodation, but they can still be fun and significantly enhance students' learning opportunities.
Seizures or convulsions are associated with abnormal electrical activity of the brain. They have an impact on major systems of the body and can be fatal if not treated. Seizures are classified predominantly by their site of occurrence and the affected organ or system.
Types of Convulsions
Generalized or tonic-clonic seizures: In most cases, generalized seizures are called tonic-clonic seizures because they involve the entire body. In common parlance, they are also referred to as epileptic attacks. Patients experience changes in sensations such as touch, taste, smell, and vision. Hallucinations or auras are also experienced, as these seizures begin to influence a person's emotional balance.
Focal or partial seizures: These seizures are caused by disturbed electrical activity that is localized to one part of the brain. They often involve the temporal region of the brain, leading to loss of memory and balance in extreme cases.
Petit mal seizures: These are temporary, and their effects are usually limited to about 20 seconds. They generate brief muscle spasms caused by electrical imbalances in the brain.
Epilepsy: This type of seizure is closely related to generalized seizures. Factors associated with the onset of epilepsy may include preexisting conditions such as ischemic heart disease, Alzheimer's disease, meningitis, and encephalitis.
Fever-induced convulsions: These seizures occur predominantly in children, infants, and toddlers. The initial phase of these convulsions is very intense and causes much discomfort to the child, but they usually subside within a few hours. Most fever-induced convulsions are triggered by viral illnesses and ear infections.
Most convulsions or seizures are characterized by classic muscle-spasm symptoms, including vigorous shaking and frothing, with prolonged effects such as unconsciousness (blackout). Since the predominant causes of convulsions relate to the electrophysiology of the brain, neurological symptoms such as confusion, hallucination, dementia, drooling, lack of bladder control, and sudden loss of balance may also be noticed. Convulsions also affect a person's emotions: many people report sudden aggression, depression, mood swings, panic, or extreme laughter and joy for a temporary period of time. Warning signs often appear before a seizure, such as dizziness, sensitivity to light, vertigo, and nausea. Seizures can also occur as a result of withdrawal from drugs such as barbiturates, Valium, or other benzodiazepines. Drug abuse and alcohol abuse, along with preexisting health complications such as end-stage renal disease, renal failure, and congenital heart disease, can indicate a high risk of seizure onset. Seizures also occur in conditions such as Stevens-Johnson syndrome, a disease occurring in children. In addition to these clinical causes, seizures can result from severe brain injury, shock, or even, during athletic events, extreme adrenaline levels in the blood.
Diagnosis and Treatment
Epilepsy is diagnosed through a meticulous examination of the patient's history. Various biochemical tests, such as sodium levels, SGOT, SGPT, and blood glucose levels, are analyzed. An electroencephalogram is done to study the electrophysiology of the brain. In some cases, neurologists recommend MRI and CT scans to look for abnormalities or to identify any trauma to the brain or spinal cord.
Seizure treatment depends on the underlying cause; anti-epileptic drugs such as sodium channel blockers and GABA transaminase inhibitors are commonly recommended. Epileptic seizures are commonly traced to brain injury or family history. About 0.5% to 2% of the population is likely to suffer an epileptic seizure at some point in time. When the delicate balance of electrical activity in the brain is disturbed, a person suffers seizures. When there are more than a couple of episodes of seizures, the condition is epilepsy. Status epilepticus refers to continuous or intermittent seizure activity lasting more than 5 minutes without recovery of consciousness. In a typical epileptic seizure, neuronal activity is disrupted, bringing on convulsions, muscle spasms, and possible loss of consciousness. Each person has a different threshold of resistance to seizures. An inherited neurological disorder can lead to electrical instability that causes epileptic seizures. Those dependent on alcohol or drugs may experience seizures during withdrawal. Rarely, a brain tumor is the cause of epilepsy. Brain injury is a possible cause of epilepsy; this can be due to a birth defect, head injury, or infection such as meningitis. Sometimes a person may experience idiopathic epilepsy, where there is no clear cause for the seizures. A diagnosis of epilepsy can be made with investigative tests such as an EEG, CT scan, or MRI scan. Anti-epilepsy drugs (AEDs) can control the seizures, though there is no cure. These medications help the patient lead a better quality of life. AEDs are prescribed after studying the person's type of seizures, general health, age, and gender. These medications must be taken in prescribed doses to maintain the desired level in the body and prevent further seizures. When possible triggers for epileptic seizures have been identified, the patient must try to avoid them. These triggers can range from emotional disturbance to lack of sleep. The Vagus Nerve Stimulator (VNS) has been approved by the FDA for the treatment of epilepsy. The VNS is surgically implanted into the chest, near the collarbone. It is a small device, much like a pacemaker, that sends weak electrical impulses to the brain through the vagus nerve. These electrical signals help prevent the sudden electrical bursts in the brain that trigger an epileptic attack. Seizures are conditions in which abnormal functioning of the brain leads to uncontrollable muscle spasms and altered levels of consciousness and behavior. This is usually traced to abnormal electrical discharge within the brain. Seizures may be localized or affect the whole body. Seizures are classified into three types based on the severity of the attack and the response:
- Grand mal - In this type of seizure, the whole body is racked with convulsions. There can be loss of consciousness or coma.
- Petit mal - Only a part of the body is affected by this seizure.
- Absence - A type of seizure where the affected person is in a stupor and cannot be roused.
Seizures can occur due to poisoning, drug overdose, head injury, or medical conditions such as hypoglycemia or neurological abnormality. Fever, a brain tumor, or other vascular problems can also trigger a seizure. If the brain experiences a sudden lack of oxygen, it can lead to a seizure. Febrile seizures are usually noticed when an infant or small child has a high fever, greater than 102 degrees F.
The child loses consciousness and experiences uncontrolled shaking of the body. Typically this seizure lasts for a minute or two. Seizures of this kind are not to be mistaken for epilepsy. Though they can be terrifying, febrile seizures must be handled with care. Place the child on the ground or in a safe place. Do not restrain movements; wait for the seizure to subside. Do not attempt to feed the child immediately after a febrile seizure. Most seizures are self-limiting. What is essential is to ensure that the person does not get injured during a seizure. Follow seizure first aid, and call a doctor at once if you notice labored breathing or a bluish pallor. Epilepsy is a medical condition characterized by a marked pattern of chronic seizures. Various tests, such as a spinal tap, head CT scan or MRI, and EEG (electroencephalogram), can help in identifying the cause of the seizures.
Your child grabs a toy from a playmate, tears erupt, and you immediately tell him to say, "I'm sorry." Does this sound familiar? Although apologizing might seem to defuse the situation, these words alone don't fix hurt feelings, nor do they help your child understand how or why he caused those hurt feelings. Gradually, children develop empathy by gaining an understanding of human actions and reactions. Below are some ways you can work with your child to help him develop that understanding. Help your child identify his emotions. Before your child can manage his emotions in a positive way, he needs to be able to identify them. If he becomes frustrated and throws a toy, ask "What's going on? How are you feeling right now? How can I help?" This helps your child to feel understood and heard. Encourage your child to brainstorm a solution. If your child upsets a friend by knocking over his block tower, first ask "How do you think your friend feels?" Your child will likely say, "He is sad. He's crying." Encourage him to brainstorm a solution by asking, "What should we do to make him feel better?" He might want to apologize by giving his friend a hug, by helping him build a new tower, or by drawing a picture for him. Although he may not readily want to say "I'm sorry," any sincere gesture from your child is appropriate. Set a good example. Modeling positive behavior is a great way to teach your child conflict management techniques. For example, if you yelled at your child for climbing on furniture, and he started to cry, say "I understand that you got scared when I yelled, and I'm sorry. I don't want you to get hurt. I should have spoken to you in a calmer voice." Read books to reinforce conflict resolution. With your child, read "It's Okay to Make Mistakes" by Todd Parr. Afterward, remind him of the conflict that occurred in the story and prompt him to tell you how the character fixed the problem. Talk about things that we can do when we hurt someone's feelings. Other great books that focus on resolving conflict are "How to Grow a Friend" by Sara Gillingham and "Have You Filled a Bucket Today?" by Carol McCloud. Our Links to Learning curriculum promotes students' social and emotional development, which is necessary for understanding and feeling empathy toward others, verbalizing wants and needs, and fostering friendships. Teachers use problem-solving activities, games, and books to reinforce peer interaction skills, character education, and classroom etiquette. It takes time for children to express their emotions in a positive way and to feel sorry for how their actions affect others. By fostering positive social-emotional skills in the preschool years, children are more likely to deal with conflict successfully in elementary school and beyond.
Washington, DC–A joint study between Carnegie and the Woods Hole Oceanographic Institution has determined that the average temperature of Earth's mantle beneath ocean basins is about 110 degrees Fahrenheit (60 Celsius) higher than previously thought, due to water present in deep minerals. The results are published in Science. Earth's mantle, the layer just beneath the crust, is the source of most of the magma that erupts at volcanoes. Minerals that make up the mantle contain small amounts of water, not as a liquid, but as individual molecules in the minerals' atomic structure. Mid-ocean ridges, volcanic undersea mountain ranges, are formed when these mantle minerals exceed their melting point, become partially molten, and produce magma that ascends to the surface. As the magmas cool, they form basalt, the most common rock on Earth and the basis of oceanic crust. At these oceanic ridges, the basalt can be three to four miles thick. Studying these undersea ranges can teach scientists about what is happening in the mantle, and about the Earth's subsurface geochemistry. One longstanding question has been a measurement of what's called the mantle's potential temperature. Potential temperature is a quantification of the average temperature of a dynamic system if every part of it were theoretically brought to the same pressure. Determining the potential temperature of a mantle system allows scientists to better understand flow pathways and conductivity beneath the Earth's crust. The potential temperature of an area of the mantle can be more closely estimated by knowing the melting point of the mantle rocks that eventually erupt as magma and then cool to form the oceanic crust. In damp conditions, the melting point of peridotite, which melts to form the bulk of mid-ocean ridge basalts, is dramatically lower than in dry conditions, regardless of pressure. This means that the depth at which the mantle rocks start to melt and well up to the surface will be different if the peridotite contains water, and beneath the oceanic crust, the upper mantle is thought to contain small amounts of water–between 50 and 200 parts per million in the minerals of mantle rock. So lead author Emily Sarafian of Woods Hole, Carnegie's Erik Hauri, and their team set out to use lab experiments to determine the melting point of peridotite under mantle-like pressures in the presence of known amounts of water. "Small amounts of water have a big effect on melting temperature, and this is the first time experiments have ever been conducted to determine precisely how the mantle's melting temperature depends on such small amounts of water," Hauri said. They found that the potential temperature of the mantle beneath the oceanic crust is hotter than had previously been estimated. "These results may change our understanding of the mantle's viscosity and how it influences some tectonic plate movements," Sarafian added. The study's other co-authors are Glenn Gaetani and Adam Sarafian, also of Woods Hole. This research was funded by the National Science Foundation and the Woods Hole Oceanographic Institution's Deep Ocean Exploration Institute. The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S. Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research.
Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science. Story Source: Materials provided by Scienmag
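For readers who want the formula behind the term, a hedged aside: the following is the standard textbook definition from mantle petrology, not something spelled out in the press release above. The potential temperature $T_p$ is found by extrapolating the actual temperature $T$ at depth $z$ back to surface pressure along an adiabat,

$$T_p = T \exp\!\left(-\int_0^z \frac{\alpha g}{c_P}\, dz'\right),$$

where $\alpha$ is the thermal expansivity, $g$ the gravitational acceleration, and $c_P$ the specific heat at constant pressure. With typical upper-mantle values this adiabatic correction amounts to roughly 0.3 to 0.5 degrees Celsius per kilometer of depth, which is why temperatures inferred from erupted melts must be corrected before different mantle regions can be compared.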
How do you formulate arguments? How do I write an argument?
- At the beginning you always state the thesis (assertion).
- Then you substantiate it with as many arguments as possible.
- At the end you write a summary in which you briefly restate the most important points.
How should arguments be constructed? 10 tips for better arguments and more persuasiveness:
- Argumentation tip 1: Make the topic relevant.
- Argumentation tip 2: The thesis must be clear and precise.
- Argumentation tip 3: Justify the thesis with facts.
- Argumentation tip 4: The deeper the reason, the better.
How do you write a good introduction to a discussion? The introduction should mention the author of the text, the title, the date, and the type of text, and then briefly describe what the text is about. In the main part you return to your thesis argumentatively and select arguments, reasons, and examples.
What goes into the introduction of a discussion? The introduction to any discussion should include:
- 1) An introduction to the central question or thesis.
- 2) Something to awaken the reader's interest and explain why the topic is important.
- 3) A brief explanation of how you will proceed.
What makes a good discussion? Name your arguments and explain them with suitable examples, as briefly and precisely as possible. The reader should be led elegantly through your text and should be able to recognize a logical structure. If the arguments and the structure are well laid out, you will have a good and understandable discussion.
What does a good discussion look like? A discussion is an opinion on a topic or an open question; it is therefore a written form of argument. Think about the topic and weigh the reasons and counter-reasons. Try to substantiate and illustrate your train of thought with examples.
Should a school uniform be introduced? (Discussion) Studies show that uniforms are only accepted if the students have a say in choosing them. A uniform cannot conceal anyone's problem areas, which is uncomfortable for some, and school uniforms will certainly remain a subject of mockery at German schools.
What is the hourglass principle in a discussion? With the hourglass principle, the arguments of the opposing position are presented in descending order of strength, and those of your own position in ascending order.
What is a discussion? The discussion is an essayistic text form in which you work out your own point of view on a question, derive a factual judgment, and justify it argumentatively with evidence and examples.
What is an antithesis in a discussion? The antithesis (counter-thesis) begins with the strongest argument and appropriate examples. If you yourself support a thesis, you start with your weakest argument and the corresponding examples and name your strongest argument at the end; this way it will be better remembered by the reader.
What exactly does discussing mean? Discussing means dealing with a problem or a question. You engage critically with a thesis with the help of arguments for or against it and draw a conclusion. You know this from arguing; the written form of argumentation is called a discussion.
What is the difference between a discussion and an opinion? Comparing the free discussion of problems and issues with the statement of opinion, the fundamental difference is this: in contrast to the former, the statement does not require "a multi-perspective discussion" (Fritzsche 1994, p. 124, italics by the author).
What is an introduction?
The word introduction denotes: an introductory chapter of a text (for fictional texts see Prologue in literature), the first paragraphs of a shorter text such as an article, or the beginning of a speech (see Exordium in rhetoric).
What does the introduction include? The introduction of your term paper makes it easier for your readership to get started. You present your topic, explain your goal, and give an overview of the structure of your paper. The length of your introduction is roughly 10% of the total length of the term paper.
How do you write an introduction to an informational text? Pay attention to who the text is addressed to and try to address precisely this group of readers (other students, adults, newspaper or blog readers). In the introduction the topic is presented and an overview is given.
How do you write an introduction to a characterization? In the introduction you name the title of the text from which the character to be characterized comes, the author, the type of text, the date of publication, and the central topic. You also state at this point which figure will be characterized in your text.
How do you write an introduction to a story? At the beginning you give an overview of the situation. The reader needs to find out what the main character is called and what concerns them. In the main part you tell the experience in several steps, increasing the tension up to the climax.
How do I start writing a story? At the beginning of a story, you introduce the characters, their surroundings, and the conflict in which they become entangled in the course of the story. You do this either by letting the characters speak and act themselves or by telling about them.
How do you write an introduction in the 8th grade? Here it is your task to reproduce the content of the text in two or three short sentences, completely neutrally and without judgment. Briefly summarize the most important facts so that the introduction is well-rounded.
How do you start writing a story? The story begins with a brief introduction. It usually describes the important people and circumstances of the story, the location of the action, and the time in which the story takes place. The narrative perspective is also established in the introduction.
How do you start a short story? The beginning of a story should draw the reader into the text and entice him to read on. The most important task of the first few sentences is to arouse the reader's curiosity. However, the author must not make promises that the story does not keep.
What's the best way to start writing a novel? A good novel builds tension right from the start: the central conflict that will play the leading role in the book is briefly touched on on the first page. So grab your readers by the collar and take them on a thrilling journey through your book.
What do I have to do to write a book? If you want to write a book, it is best to start by collecting ideas uncritically. Then, with a bit of distance, sort them: throw out the bad ideas and continue working with the good ones. Bad ideas are the stories that don't even interest you yourself.
How do I go about writing a book?
- 9 tips for writing and publishing a book.
- Write a good book. Do you want to publish?
- Describe your book idea in one sentence. Learn to explain your book idea in just a few sentences.
- Novel or non-fiction.
- Research publishers carefully.
- Literary agencies.
- Writing groups and networks.
In this article we are going to demystify secondary dominant chords and the confusion that often accompanies this music theory topic. Even if you've never heard the term "secondary dominant chords" before, you've probably encountered them regardless of what kind of music you like to play. That's because secondary dominant chords are present in all types of music – jazz, classical, rock, folk, pop, etc. Understanding these chords will improve your theory knowledge, harmonic analysis, composition skills, and transcription abilities. Secondary Dominant Chords: What Are They? Let's start with diatonic chords. Diatonic chords are the chords which result when we build a chord on each note of a major scale. Below are the diatonic chords, and their Roman numeral names, in the key of C major. These Roman numerals represent a formula which is the same in every major key (i.e., the 'I' chord will always be major, the 'ii' chord will always be minor, etc.). OK, now let's break down what a secondary dominant chord is. First of all, secondary dominant chords are dominant chords, and dominant chords are 7th chords (a major triad with a minor 7th on top). If we make 7th chords out of all the diatonic chords above, we get only one dominant chord – G7, the 'V7' chord. And what do dominant chords do? They resolve to their 'I' chord. Dominant chords want to move in a 'V to I' resolution. So dominant chords function as the 'V7' of a 'I' chord, and they pull to that 'I' chord. Now we've reviewed what a dominant chord is, but what is meant by the term secondary? 'Secondary' refers to the fact that secondary dominant chords come from outside of the key. So a secondary dominant chord is, by definition, any dominant chord that is not diatonic to the key. Look at the chord progression below: Do you see the dominant chord that does not fit in the key of C major? That's right, the D7 chord. It's a secondary dominant. Secondary Dominant Chords: How Do They Work? Now let's understand how secondary dominant chords work. In a nutshell, a secondary dominant chord is borrowed from another key. So when you see a secondary dominant chord you have to ask yourself, "This secondary dominant is the 'V7' of what chord?" Looking at the chord progression above, ask yourself, "D7 is the 'V7' of what chord?" The answer is that D7 is the 'V7' of G. And lo and behold, which chord comes after the D7 chord? Well, G7, of course. So the secondary dominant (D7) is a chord from outside the key that brings us to a chord inside the key (G7). Lastly, we refer to this D7 chord as a "V7/V" (read "five-seven of five") chord. Secondary Dominant Chords: How Does This Info Help Me? Understanding secondary dominant chords raises your musical awareness and understanding. You now know what a secondary dominant chord is, how to label it (with Roman numerals), how it functions, and why it is used. Practice playing the progressions above to get a sense of what secondary dominants sound like.
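To make the "V7 of" mechanics concrete, here is a minimal sketch in Python. It is my own illustration, not from the article; the note-name spellings and the helper function are assumptions for demonstration. It finds the secondary dominant of any scale degree by taking the root a perfect fifth above the target chord's root:

```python
# A minimal sketch (illustrative, not from the article): find "V7/degree"
# for any scale degree of a major key, using pitch classes 0-11 (C = 0).
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # major-scale intervals in semitones

def secondary_dominant(key_root: int, degree: int) -> str:
    """Return the dominant 7th chord that resolves to the given scale
    degree (1-7) of a major key, i.e. the 'V7/degree' chord."""
    target = (key_root + MAJOR_SCALE[degree - 1]) % 12  # root of the target chord
    dominant = (target + 7) % 12                        # a perfect fifth above it
    return NOTE_NAMES[dominant] + "7"

# In C major (root 0), the V7/V is D7, matching the progression discussed above:
print(secondary_dominant(0, 5))  # -> "D7"
```

Running it for other degrees works the same way: `secondary_dominant(0, 2)` gives A7, the V7/ii that pulls toward D minor.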
It’s a little embarrassing to admit, but sometimes I lose track of just how the common terms in genetics all fit together. I learned them late in life and never used them to make a living, and now I pay the price when they don’t stick. What’s the difference between a chromosome and a strand of DNA? A gene and a genome? What do you call those three-letter sets in a DNA diagram, and what do they do? As I said, embarrassing. So here I’m going to pull out the English teacher deep in my bones and connect some of the units of written language—words, sentences, books—to the names of genetic units. Maybe my genetic picture will stay a little clearer a little longer. Maybe for the reader also. Let’s start small. The spiraling rungs on diagrams of a DNA (deoxyribonucleic acid) molecule are each marked with two of four specific letters: A, C, G, and T. The four DNA letters stand for the four nucleotides—Adenine, Cytosine, Guanine, and Thymine—that make up DNA. Like the letters of the full alphabet, these letters—or rather the four molecules they indicate—are the smallest building blocks of their language. In DNA, combinations of the letters for the four nucleotides make up the three-letter codons that are DNA’s version of words. Each three-letter codon/word specifies one amino acid. And many codons are “synonyms,” in that several different codons refer to the same amino acid, because there are many more codons than there are amino acids. The codons are “read” by a ribosome, a cellular reader/assembly-machine that adds the required amino acid to the chain of amino acids that will form a protein. Groups of these codons make up a gene, much as words make up a sentence. The genes/sentences are long because most proteins are complex; human proteins consist of anywhere from several hundred to several thousand amino acid molecules. The gene/sentence for red hair says something like “Put this together with that and that and that….” Genes also include a codon at the start that says “Start the gene here” and another at the end that says “Stop here; gene complete.” Within the gene, however, no actual spaces separate the codons, but since all codons are triplets, it’s always clear where codons themselves begin and end. (We leave spaces between words when we write, but we didn’t always. Writing in the ancient world often lacked such spaces. As long as one could read slowly and figurethewordsoutspacesweren’tessential.) So, to recap. The four nucleotides are basic components much like the letters of our alphabet. Groups of three nucleotides spell out codons that can be thought of as words, which in this case stand for actual amino acid molecules. And a sequence of codons/amino acids forms a gene that resembles a sentence in a protein recipe for some aspect of the organism. Finally there are chromosomes and genomes. A molecule of DNA is very long, a continuous strand of anywhere from a couple of hundred to more than a thousand genes, many of them about related aspects of the organism. Each DNA molecule is called a chromosome which, because its genes concern similar aspects of the body, can be compared to a chapter in a book. But it is a strange book in that each chapter appears twice, in anticipation of the day when the molecule/chapter reproduces itself. Each human cell contains 23 such paired chromosomes, duplicate copies of the assembly instructions for an entire human being.
Only the chromosome pair that determines sex contains chromosomes that are different from each other about half the time: females have two identical X chromosomes, while males carry one X and one Y chromosome. Finally, our genome is like the book itself, the totality of all our genes on all our chromosomes. The book might be called Me And Us. Your genome book is almost exactly like mine except for about one tenth of one percent of our 20,000 genes. That’s similar to two copies of the same long book that differ in only a few sentences. Simplified though the comparison is, it’s startling what genetics and written language have in common. Keep in mind that writing is a recent human invention, while DNA and the other units of genetics have been forming life for almost four billion years. Yet both are built from the smallest building blocks, then the groupings created from those blocks, then the meaningful statements/instructions/recipes coded in the groupings, and finally the conversion of the code into organic construction/action/speech.
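As a playful postscript to the reading analogy, a few lines of Python can mimic how a ribosome-like reader walks a gene three letters at a time, honoring the start and stop "punctuation." This is my own sketch, not part of the original essay, and the tiny codon table below is only a fragment of the real 64-entry table (which, strictly speaking, is read from mRNA rather than DNA):

```python
# A toy "reader" for the words-and-sentences analogy (illustrative only).
CODON_TABLE = {
    "ATG": "Met (start)",            # the "start the gene here" codon
    "TTT": "Phe", "TTC": "Phe",      # synonyms: two spellings, one amino acid
    "GGA": "Gly",
    "TAA": "stop", "TAG": "stop", "TGA": "stop",  # "stop here; gene complete"
}

def translate(dna: str) -> list[str]:
    """Read a gene codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):   # no spaces needed: every word is 3 letters
        word = CODON_TABLE.get(dna[i:i + 3], "?")
        if word == "stop":
            break
        protein.append(word)
    return protein

print(translate("ATGTTTGGATAA"))  # -> ['Met (start)', 'Phe', 'Gly']
```

Because every word is exactly three letters long, the reader never needs spaces, which is precisely the point made above about codons.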
Music is usually thought of as an emotive art form. People participate in music individually, communally, or in performance to communicate ideas, to express feelings, and even to experience an emotional release of some kind. Skilled performers are able to evoke desired feelings from even the most passive of listeners. It is right to think of music in this way, if a bit simplistic. Despite the importance of emotion in music, the best musicians do not allow their inner emotional states to dictate the quality or emotive content of a performance. The uninitiated reader might be surprised or even dismayed to find out that the communication of happiness, sadness, anger, and any other feeling whether intense or subdued can and usually is programmed by the performer, who uses various musical devices to create desired types of musical expression—and stir up certain reactions in the listener—regardless of his or her emotional state during the performance. The need for this ability in vocal and dramatic music is obvious—otherwise how can one hope to perform a happy role like that of Papageno after receiving news of a loved one’s serious illness, or how could a joyful newlywed sing a stereotypical country song with the requisite lament? Even when performing absolute music, expressive devices must be planned to some extent or another, lest the performer fail to communicate any feeling to the listener except that of his or her own performance anxiety. In an important sense, every musician—even the instrumentalist—must to some degree become an actor. While this practice of “programming expression” might sound complicated, it usually isn’t. In most cases, following a few simple rules will enable instrumentalists to find the appropriate expressive devices for a given piece. My students are quite accustomed to hearing directives such as these: - Emphasize longer notes over shorter ones, and allow series of shorter notes to lead to and from longer ones. - Crescendo slightly during the first half of the phrase, and diminuendo slightly in the second half. - Push the tempo ahead slightly in the first half of the phrase, and pull back slightly in the second half. - Overdo all of the above devices in the practice room, as the presence of one’s instructor, accompanist, or audience will usually have a moderating effect. - Plan to take breaths in the places where the music “breathes” or pauses, not simply where one feels like breathing. Honestly, following the above five rules and observing all of the written expressive markings will go a long way toward creating the optimal expressive effect for just about any piece. Similarly, execution (especially tuning) can be boiled down to a few easily-remembered rules: - Major thirds must be lowered, minor thirds raised, and perfect fifths raised. (Other chord tones have rules governing their needed adjustments as well, but these three are the most vital to know.) - Brass players must learn the overtone series charts for their instruments, and the tuning tendencies of each partial. The fifth (must be raised), sixth (must be lowered), and seventh (must be raised very much; unusable on brass instruments except trombone) are perhaps the most important to know well. - The above two sets of tuning rules will in some cases either compound, thus increasing the needed adjustment, or negate one another, eliminating the need for any tuning adjustments. - Tuning rules should be applied both harmonically in ensembles, and melodically within one’s own playing. 
- When playing with piano, the perfect intonation that is theoretically possible in other types of ensembles cannot be achieved, due to the compromises inherent in piano tuning. Besides, given that the brass player can adjust pitch during performance and the pianist can't, the responsibility for matching the piano rests with the brass player, even if this requires negating other tuning rules.
- With regard to articulation, to achieve a given type of attack the tongue stroke will be softer in the lower register and harder in the upper register. (I recognize that the latter suggestion is contrary to "received wisdom," but I have often found it to be the case.)
- The most important element of good legato tonguing or slurring is the maintenance of constant airflow—and thus constant buzz—through the duration of the passage.
These rules for expression and execution are starting to sound like quite a bit to remember, and this isn't even a comprehensive list! In practice, though, remembering them is not all that difficult, and doing so ultimately saves a lot of effort wasted in trial-and-error attempts to figure out how to execute a passage or improve its emotive effect. Still, as helpful as these rules are, they must always bow to what I am calling here "The Rule of Rules in Music" or just "The Ultimate Rule." Here it is: If it sounds good, it is good. The advantage of having "usually-applicable" rules for effecting expressive devices or technical execution is that much of the guesswork is removed from musical interpretation and performance. However, sometimes the rules don't work, and students are often stumped in these cases. Perhaps two or more of these rules conflict with each other, or maybe a particular piece contains unusual compositional devices or requires extended techniques. Perhaps doing the thing that usually works simply sounds bad in a certain piece. In these cases, the regularly-applied rules must be modified or discarded and the "Rule of Rules" applied. Experiment until you find an approach that yields a desirable sound. If you are a student, trust that your teacher will give you some guidance, but be willing to experiment between lessons and see if a departure from the usual approach leads to a better result. A good teacher will appreciate your willingness to think, experiment, and search for creative solutions to expressive or technical difficulties, even when some correction is needed. "If it sounds good, it is good." Whatever formulas musicians might devise to improve the technical or emotive aspects of performance, these must ultimately give way to the "Rule of Rules." Too simplistic? I don't think so. In fact, I think it stands as a partial but legitimate application of a nearly two-millennia-old directive, one which holds particular importance for Christian musicians like myself but might be at least appreciated by others: Finally, brothers, whatever is true, whatever is honorable, whatever is just, whatever is pure, whatever is lovely, whatever is commendable, if there is any excellence, if there is anything worthy of praise, think about these things. (Philippians 4:8) We as musicians are in the business of creating beautiful sounds, sounds which stir listeners' emotions, engage their minds, and in the best music even point in a small way to the beauty, order, goodness, and excellence of the Creator. And yet we too often become so tangled in minutiae that we obsess over rules and forget the most important things. Beautiful sounds. Edify the listener. Glorify God.
“If it sounds good, it is good.”
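As a numerical footnote to the tuning rules above (my own sketch, not the author's): the directives to lower major thirds and raise minor thirds and perfect fifths come from comparing pure just-intonation ratios with equal temperament, and a few lines of Python make the size of each adjustment explicit:

```python
# Compare pure (just-intonation) intervals with equal temperament, in cents.
# 1200 * log2(ratio) converts a frequency ratio to cents (100 cents = 1 semitone).
import math

def cents(ratio: float) -> float:
    return 1200 * math.log2(ratio)

for name, ratio, equal_tempered in [("major third", 5 / 4, 400),
                                    ("minor third", 6 / 5, 300),
                                    ("perfect fifth", 3 / 2, 700)]:
    print(f"{name}: adjust by {cents(ratio) - equal_tempered:+.1f} cents")

# major third:   adjust by -13.7 cents  (lower it, as the rule says)
# minor third:   adjust by +15.6 cents  (raise it)
# perfect fifth: adjust by +2.0 cents   (raise it slightly)
```

The signs and magnitudes line up with the rules given earlier: the major third needs the largest correction downward, while the fifth needs only a slight nudge upward.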
This webpage has been designed to provide readers a concise overview of some of the key characteristics of pathogens known to cause human infections and illnesses. The word 'pathogen' originates from the Greek pathos, meaning suffering, and gen, meaning to give birth to. Thus, a pathogen is a generic term for any organism capable of producing disease, such as a virus, bacterium, or other microorganism. Throughout history, and even today, pathogens have been responsible for massive numbers of casualties and have had far-reaching effects on afflicted populations (Figure 1). Of particular note in modern times is hepatitis B virus, which is known to have infected hundreds of millions of humans globally and continues to threaten lives, along with HIV and the notorious influenza virus. While they are relatively small in size, many pathogens can replicate to high concentrations and cause the death of the invaded host within a few days if not treated appropriately. This is why it is particularly important to understand the dangers these microorganisms pose to human health and life, and the pathways that make up the biological processes associated with their modes of infection. While social advances such as food safety, hygiene, and water treatment have reduced the threat from some pathogens, and medical advances have safeguarded against infection through the use of vaccination, antibiotics, and antifungal drugs, pathogens continue to threaten human life. In addition, although the body contains many natural lines of defense against pathogens, some possess specific strategies for exploiting weak points in the body. For instance, bacteria such as the Gram-negative Pseudomonas aeruginosa are able to resist destruction by secreting enzymes known as elastases that prevent host-cell proteins from mounting an immune response against the bacteria. Similarly, other pathogens are capable of altering their physical structure or rapidly changing their surface antigens so that immune responses generated in the past are no longer protective against reinfection. Other pathogens are able to hide within host cells or even mimic host-cell surface structures to escape a particular host response. All these mechanisms and more are described in detail and will make more sense as you read on, with diagrams created to guide you through the intricate processes. Furthermore, each profile aims to provide detailed information on the pathogen's physical structure, morphology, features used for diagnosis, the types of infections or diseases it causes, pathogenicity, virulence factors, health risks, and how the pathogen can be controlled. How the body copes with these infections and how pathogens exploit the defense systems of the human host will also be a recurring theme in each profile.
Renewable energy Tenleytown
Renewable energy is the future of energy technology. It is clean, safe, and sustainable. Types of renewable energy include solar, wind, water, and biomass, and the technology is used to generate electricity, heat, or motive power. Our online course can take you through the basics of the types of renewable energy and the technology used to generate it. You'll learn about:
- Solar Energy: Solar panels collect sunlight and convert it into electricity.
- Wind Energy: Wind turbines harness the power of the wind to generate electricity.
- Water Energy: Hydroelectric dams capture the energy of moving water to generate electricity.
- Biomass Energy: Biomass power plants burn organic material to generate electricity.
Renewable energy is a critical part of the fight against climate change. Burning fossil fuels like coal, oil, and natural gas releases greenhouse gases into the atmosphere, trapping heat and raising the Earth's temperature. This global warming can lead to more extreme weather, droughts, floods, and hurricanes, and it threatens the habitats of plants and animals around the world. Renewable energy doesn't produce greenhouse gases, so it's a vital part of the solution to climate change. And as renewable energy technology gets more advanced and less expensive, it's becoming an increasingly viable option for individuals, businesses, and governments. There are many types of renewable energy, but they all have one thing in common: they derive from natural processes that are continually replenished. That means we can never "run out" of renewable energy sources, unlike fossil fuels.
Renewable energy facts:
- It is a clean energy source that does not produce greenhouse gases or other pollutants.
- It is a sustainable source of energy that can be used indefinitely.
- It is a renewable source of energy that can be replenished.
If you are ready to become a renewable energy expert, sign up for our class today.
Ovarian cancer (Ovarian Cancer) - Genes BRCA1, BRCA2, MLH1, MSH2 and TP53. Ovarian cancer is a disease that affects women, in which certain cells in the ovaries become abnormal and multiply uncontrollably to form a tumor. In about 90% of cases, ovarian cancer occurs after age 40, and most cases occur after age 60. The most common form of ovarian cancer starts in the epithelial cells of the fimbriae at the end of one of the fallopian tubes, from which cancer cells subsequently migrate to the ovary. However, such cancer can also arise in the epithelial cells on the surface of the ovary or in peritoneal epithelial cells. This latter form, called primary peritoneal cancer, resembles epithelial ovarian cancer in its origin, symptoms, progression, and treatment, and it often spreads to the ovaries, even in their absence. Because cancers that begin their development in the ovaries, fallopian tubes, and peritoneum are so similar and spread easily from one of these structures to another, they are often difficult to distinguish. In approximately 10% of cases, ovarian cancer develops not in epithelial cells but in germ cells or granulosa cells. In its early stages, ovarian cancer is usually asymptomatic. As it progresses, signs and symptoms may include pain or a feeling of heaviness in the pelvis or lower abdomen, bloating, early satiety when eating, back pain, vaginal bleeding between menstrual periods or after menopause, or changes in urinary or bowel habits. However, these changes can occur as part of many different disorders; having one or more of these symptoms does not mean that a woman has ovarian cancer. In some cases, cancerous tumors can become metastatic. If ovarian cancer spreads, cancerous tumors appear most frequently in the abdominal cavity or on the surfaces of nearby organs such as the bladder or colon. Because it is usually diagnosed at an advanced stage, ovarian cancer can be difficult to treat; however, when diagnosed and treated early, the five-year survival rate is high. This process is due to mutations in critical genes that control growth and cell division or the repair of damaged DNA, allowing cells to grow and divide uncontrollably and develop into a tumor. These critical genes are TP53, BRCA1, BRCA2, MLH1 and MSH2. TP53, located on the short arm of chromosome 17 (17p13.1), encodes a protein called p53 that acts as a tumor suppressor. This protein is located in the nucleus of cells throughout the body, where it binds directly to DNA. When DNA in a cell is damaged by agents such as toxic chemicals, radiation or ultraviolet (UV) sunlight, this protein plays a critical role in determining whether the DNA is repaired or the damaged cell undergoes apoptosis. To date, 261 mutations have been described in the TP53 gene: missense mutations (176), splice-site mutations (27), regulatory mutations (2), small deletions (30), small insertions (12), small indels (5), gross deletions (8), and complex rearrangements (1). Somatic TP53 mutations are common in ovarian cancer, occurring in about half of ovarian tumors. Most of these mutations change amino acids in the p53 protein, which reduces or eliminates its tumor suppressor function. Because the altered protein is less able to regulate cell growth and division, a cancerous tumor can develop. BRCA1, located on the long arm of chromosome 17 (17q21), and BRCA2, located on the long arm of chromosome 13 (13q12.3), encode proteins that act as tumor suppressors.
These proteins are involved in repairing damaged DNA. Breaks in DNA can be caused by natural and medical radiation or other environmental exposures, and they also occur when chromosomes exchange genetic material in preparation for cell division. By helping to repair DNA, the BRCA1 and BRCA2 proteins play a critical role in maintaining the stability of a cell's genetic information. The BRCA1 and BRCA2 proteins are also believed to regulate the activity of other genes and to play an essential role in embryonic development. To perform these functions, they interact with many other proteins, including tumor suppressors and other proteins that regulate cell division. To date, 1,424 mutations have been described in the BRCA1 gene: missense mutations (468), splice-site mutations (117), regulatory mutations (6), small deletions (434), small insertions (151), small indels (25), gross deletions (171), gross insertions/duplications (32), complex rearrangements (19), and repeat variations (1). Likewise, 1,165 mutations have been described to date in the BRCA2 gene: missense mutations (378), splice-site mutations (85), regulatory mutations (1), small deletions (461), small insertions (166), small indels (25), gross deletions (32), gross insertions/duplications (10), and complex rearrangements (7). Mutations in these genes impair DNA repair, allowing potentially deleterious mutations in DNA to persist. As these defects accumulate, they can cause cells to grow and divide uncontrollably and form a tumor. Germline mutations are involved in more than one fifth of ovarian cancer cases, and between 65% and 85% of these mutations are in the BRCA1 or BRCA2 gene. These gene mutations are described as "high-penetrance mutations" because they are associated with a high risk of developing ovarian cancer. Compared with a lifetime risk of 1.6% for ovarian cancer among women in the general population, the lifetime risk in women with a BRCA1 mutation is 30% to 60%, and the lifetime risk in women with a BRCA2 mutation is 12% to 25%. Men with mutations in these genes also have a higher risk of developing various forms of cancer. The MLH1 gene, located on the short arm of chromosome 3 (3p21.3), and MSH2, located on the short arm of chromosome 2 (2p21), encode proteins that play an essential role in DNA repair. These proteins help repair errors that occur when DNA is copied in preparation for cell division. Repairs are made by removing the section of DNA that contains errors and replacing it with a corrected DNA sequence. A significantly increased risk of ovarian cancer is also a feature of certain rare genetic syndromes, including Lynch syndrome. Lynch syndrome is most often associated with mutations in the MSH2 or MLH1 gene and accounts for between 10% and 15% of hereditary ovarian cancers. Other rare genetic syndromes may also be associated with an increased risk of ovarian cancer. To date, 864 mutations have been described in the MLH1 gene: missense mutations (283), splice-site mutations (146), regulatory mutations (8), small deletions (193), small insertions (80), small indels (17), gross deletions (114), gross insertions/duplications (15), and complex rearrangements (8).
Meanwhile, 839 mutations have been described to date in the MSH2 gene: missense mutations (263), splice-site mutations (91), regulatory mutations (2), small deletions (170), small insertions (83), small indels (13), gross deletions (189), gross insertions/duplications (20), and complex rearrangements (8). Mutations in either gene may allow cells to grow and divide without control, leading to the development of a cancerous tumor. Like BRCA1 and BRCA2, these genes are considered "high penetrance" because mutations in them greatly increase a person's likelihood of developing cancer. Germline mutations in dozens of other genes have been identified as potential risk factors for ovarian cancer. These genes are described as "low penetrance" or "moderate penetrance" because changes in each of them appear to make only a small or moderate contribution to the overall risk of ovarian cancer. Some of these genes encode proteins that interact with the proteins encoded by the BRCA1 or BRCA2 genes. In addition to genetic changes, many personal and environmental factors have been identified that contribute to a woman's risk of developing ovarian cancer. These factors include age, ethnicity, and hormonal and reproductive factors. A history of ovarian cancer in closely related family members is also an important risk factor, especially if the cancer occurred in early adulthood. Most cases of ovarian cancer are sporadic and are not due to inherited genetic factors; these cancers are associated with somatic mutations acquired during a person's life. In general, a cancer predisposition due to a germline mutation is inherited in an autosomal dominant pattern, which means that one copy of the altered gene in each cell is sufficient to increase the probability of developing cancer. Although ovarian cancer occurs only in women, the mutated gene can be inherited from the mother or the father. It is important to note that people inherit a greater chance of developing cancer, not the disease itself; not all people who inherit mutations in these genes develop cancer. Tests at IVAMI: at IVAMI we detect mutations associated with ovarian cancer by complete PCR amplification of the exons of the TP53, BRCA1, BRCA2, MLH1 and MSH2 genes, followed by sequencing. Recommended samples: because in most cases the mutations are somatic rather than inherited, we recommend sending a tissue biopsy. For inherited mutations, send blood drawn with EDTA for separating blood leukocytes, or a card impregnated with dried blood (IVAMI can mail the card for depositing the blood sample).
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection. John Holland introduced genetic algorithms in the 1960s, building on the concepts of Darwin's theory of evolution; his student David Goldberg later extended and popularized GAs with his 1989 book.

In a genetic algorithm, a population of candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem is evolved toward better solutions. Each candidate solution has a set of properties (its chromosomes or genotype) which can be mutated and altered; traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible.

The evolution usually starts from a population of randomly generated individuals, and is an iterative process, with the population in each iteration called a generation. In each generation, the fitness of every individual in the population is evaluated; the fitness is usually the value of the objective function in the optimization problem being solved. The more fit individuals are stochastically selected from the current population, and each individual's genome is modified (recombined and possibly randomly mutated) to form a new generation. The new generation of candidate solutions is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.

A typical genetic algorithm requires:
- a genetic representation of the solution domain,
- a fitness function to evaluate the solution domain.

A standard representation of each candidate solution is as an array of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Variable-length representations may also be used, but crossover implementation is more complex in this case. Tree-like representations are explored in genetic programming and graph-form representations are explored in evolutionary programming; a mix of both linear chromosomes and trees is explored in gene expression programming.

Once the genetic representation and the fitness function are defined, a GA proceeds to initialize a population of solutions and then to improve it through repetitive application of the mutation, crossover, inversion and selection operators. The population size depends on the nature of the problem, but typically contains several hundreds or thousands of possible solutions. Often, the initial population is generated randomly, allowing the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found. During each successive generation, a portion of the existing population is selected to breed a new generation.
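To make the generational loop just described concrete, here is a minimal sketch in Python. It is illustrative rather than canonical: it assumes a bit-string encoding, fitness-proportionate (roulette-wheel) selection, single-point crossover, and bit-flip mutation, and the name evolve and its parameters are ours, not from the text.

```python
import random

def evolve(fitness, genome_length=32, pop_size=100, generations=200,
           crossover_rate=0.9, mutation_rate=0.01):
    """Evolve a population of bit strings toward higher fitness."""
    # Initial population: randomly generated candidate solutions.
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in population]
        # Roulette-wheel selection weights; assumes non-negative fitness.
        weights = [s + 1e-9 for s in scores]
        next_gen = []
        while len(next_gen) < pop_size:
            # Stochastically select two parents (possibly the same one).
            mom, dad = random.choices(population, weights=weights, k=2)
            if random.random() < crossover_rate:
                # Single-point crossover.
                cut = random.randrange(1, genome_length)
                child = mom[:cut] + dad[cut:]
            else:
                child = mom[:]
            # Independent bit-flip mutation on each gene.
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)
```

Real implementations differ mainly in the selection scheme, the operators, and the termination test, all of which are discussed below.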
Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as the former process may be very time-consuming.

The fitness function is defined over the genetic representation and measures the quality of the represented solution. The fitness function is always problem dependent. For instance, in the knapsack problem one wants to maximize the total value of objects that can be put in a knapsack of some fixed capacity. A representation of a solution might be an array of bits, where each bit represents a different object, and the value of the bit (0 or 1) represents whether or not the object is in the knapsack. Not every such representation is valid, as the size of objects may exceed the capacity of the knapsack. The fitness of the solution is the sum of values of all objects in the knapsack if the representation is valid, or 0 otherwise. In some problems, it is hard or even impossible to define the fitness expression; in these cases, a simulation may be used to determine the fitness function value of a phenotype (e.g. computational fluid dynamics is used to determine the air resistance of a vehicle whose shape is encoded as the phenotype), or even interactive genetic algorithms are used.

For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using the above methods of crossover and mutation, a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. Although reproduction methods based on the use of two parents are more "biology-inspired", some research suggests that more than two "parents" generate higher-quality chromosomes.

These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally, the average fitness of the population will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions. These less fit solutions ensure genetic diversity within the genetic pool of the parents and therefore ensure the genetic diversity of the subsequent generation of children.

Opinion is divided over the importance of crossover versus mutation. There are many references in Fogel (2006) that support the importance of mutation-based search. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction, or migration in genetic algorithms.

It is worth tuning parameters such as the mutation probability, crossover probability and population size to find reasonable settings for the problem class being worked on. A very small mutation rate may lead to genetic drift (which is non-ergodic in nature). A recombination rate that is too high may lead to premature convergence of the genetic algorithm. A mutation rate that is too high may lead to loss of good solutions, unless elitist selection is employed.
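As a concrete instance, the knapsack fitness function described above can be written directly against that bit-string representation; the values, weights, and capacity below are made-up illustration data, and the sketch plugs into the evolve function outlined earlier.

```python
VALUES = [60, 100, 120, 75, 50]   # hypothetical value of each object
WEIGHTS = [10, 20, 30, 15, 5]     # hypothetical weight of each object
CAPACITY = 50                     # hypothetical knapsack capacity

def knapsack_fitness(bits):
    """Bit i says whether object i is packed; over-capacity packs score 0."""
    weight = sum(w for w, b in zip(WEIGHTS, bits) if b)
    if weight > CAPACITY:
        return 0  # invalid representation
    return sum(v for v, b in zip(VALUES, bits) if b)

# Usage with the evolve() sketch above; the genome length must equal
# the number of objects:
# best = evolve(knapsack_fitness, genome_length=len(VALUES))
```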
In addition to the main operators above, other heuristics may be employed to make the calculation faster or more robust. The speciation heuristic penalizes crossover between candidate solutions that are too similar; this encourages population diversity and helps prevent premature convergence to a less optimal solution.

This generational process is repeated until a termination condition has been reached. Common terminating conditions are:
- A solution is found that satisfies minimum criteria
- A fixed number of generations is reached
- The allocated budget (computation time/money) is reached
- The highest-ranking solution's fitness has reached a plateau such that successive iterations no longer produce better results
- Manual inspection
- Combinations of the above

The building block hypothesis

Genetic algorithms are simple to implement, but their behavior is difficult to understand. In particular, it is difficult to understand why these algorithms frequently succeed at generating solutions of high fitness when applied to practical problems. The building block hypothesis (BBH) consists of:
- A description of a heuristic that performs adaptation by identifying and recombining "building blocks", i.e. low-order, low defining-length schemata with above-average fitness.
- A hypothesis that a genetic algorithm performs adaptation by implicitly and efficiently implementing this heuristic.

Goldberg describes the heuristic as follows:
- "Short, low-order, and highly fit schemata are sampled, recombined [crossed over], and resampled to form strings of potentially higher fitness. In a way, by working with these particular schemata [the building blocks], we have reduced the complexity of our problem; instead of building high-performance strings by trying every conceivable combination, we construct better and better strings from the best partial solutions of past samplings."
- "Because highly fit schemata of low defining length and low order play such an important role in the action of genetic algorithms, we have already given them a special name: building blocks. Just as a child creates magnificent fortresses through the arrangement of simple blocks of wood, so does a genetic algorithm seek near-optimal performance through the juxtaposition of short, low-order, high-performance schemata, or building blocks."

Despite the lack of consensus regarding the validity of the building-block hypothesis, it has been consistently evaluated and used as a reference throughout the years. Many estimation of distribution algorithms, for example, have been proposed in an attempt to provide an environment in which the hypothesis would hold. Although good results have been reported for some classes of problems, skepticism concerning the generality and/or practicality of the building-block hypothesis as an explanation for GA efficiency still remains. Indeed, there is a reasonable amount of work that attempts to understand its limitations from the perspective of estimation of distribution algorithms.

There are limitations to the use of a genetic algorithm compared to alternative optimization algorithms:
- Repeated fitness function evaluation for complex problems is often the most prohibitive and limiting segment of artificial evolutionary algorithms. Finding the optimal solution to complex high-dimensional, multimodal problems often requires very expensive fitness function evaluations.
In real-world problems such as structural optimization, a single function evaluation may require several hours to several days of complete simulation. Typical optimization methods cannot deal with such problems. In these cases, it may be necessary to forgo an exact evaluation and use an approximated fitness that is computationally efficient. Amalgamating approximate models may be one of the most promising approaches for convincingly applying GAs to complex real-life problems.
- Genetic algorithms do not scale well with complexity. That is, where the number of elements exposed to mutation is large, there is often an exponential increase in search-space size. This makes it extremely difficult to use the technique on problems such as designing an engine, a house, or a plane. In order to make such problems tractable to evolutionary search, they must be broken down into the simplest representation possible. Hence we typically see evolutionary algorithms encoding designs for fan blades instead of engines, building shapes instead of detailed construction plans, and airfoils instead of whole aircraft designs. The second problem of complexity is the issue of how to protect parts that have evolved to represent good solutions from further destructive mutation, particularly when their fitness assessment requires them to combine well with other parts.
- The "better" solution is only better in comparison to other solutions. As a result, the stopping criterion is not clear in every problem.
- In many problems, GAs have a tendency to converge towards local optima or even arbitrary points rather than the global optimum of the problem. This means that the algorithm does not "know how" to sacrifice short-term fitness to gain longer-term fitness. The likelihood of this occurring depends on the shape of the fitness landscape: certain problems may provide an easy ascent towards a global optimum, others may make it easier for the search to find local optima. This problem may be alleviated by using a different fitness function, increasing the rate of mutation, or using selection techniques that maintain a diverse population of solutions, although the No Free Lunch theorem proves that there is no general solution to this problem. A common technique to maintain diversity is to impose a "niche penalty", wherein any group of individuals of sufficient similarity (niche radius) has a penalty added, which reduces the representation of that group in subsequent generations, permitting other (less similar) individuals to be maintained in the population. This trick, however, may not be effective, depending on the landscape of the problem. Another possible technique is simply to replace part of the population with randomly generated individuals when most of the population is too similar to each other. Diversity is important in genetic algorithms (and genetic programming) because crossing over a homogeneous population does not yield new solutions. In evolution strategies and evolutionary programming, diversity is not essential because of a greater reliance on mutation.
- Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data.
Several methods have been proposed to remedy this by increasing genetic diversity and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants). Again, evolution strategies and evolutionary programming can be implemented with a so-called "comma strategy" in which parents are not maintained and new parents are selected only from offspring. This can be more effective on dynamic problems.
- GAs cannot effectively solve problems in which the only fitness measure is a single right/wrong measure (like decision problems), as there is no way to converge on the solution (no hill to climb). In these cases, a random search may find a solution as quickly as a GA. However, if the situation allows the success/failure trial to be repeated giving (possibly) different results, then the ratio of successes to failures provides a suitable fitness measure.
- For specific optimization problems and problem instances, other optimization algorithms may be more efficient than genetic algorithms in terms of speed of convergence. Alternative and complementary algorithms include evolution strategies, evolutionary programming, simulated annealing, Gaussian adaptation, hill climbing, and swarm intelligence (e.g. ant colony optimization, particle swarm optimization) and methods based on integer linear programming. The suitability of genetic algorithms depends on the amount of knowledge of the problem; well-known problems often have better, more specialized approaches.

Variants

The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating-point representations. The floating-point representation is natural to evolution strategies and evolutionary programming. The notion of real-valued genetic algorithms has been offered, but it is really a misnomer because it does not really represent the building block theory that was proposed by John Henry Holland in the 1970s. This theory is not without support, though, based on theoretical and experimental results (see below). The basic algorithm performs crossover and mutation at the bit level. Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.

When bit-string representations of integers are used, Gray coding is often employed. In this way, small changes in the integer can be readily effected through mutations or crossovers (a small conversion sketch follows below). This has been found to help prevent premature convergence at so-called Hamming walls, in which too many simultaneous mutations (or crossover events) must occur in order to change the chromosome to a better solution.

Other approaches involve using arrays of real-valued numbers instead of bit strings to represent chromosomes. Results from the theory of schemata suggest that in general the smaller the alphabet, the better the performance, but it was initially surprising to researchers that good results were obtained from using real-valued chromosomes.
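Here is the small Gray-coding sketch promised above: the two standard conversions between plain binary and reflected Gray code. Consecutive integers differ in exactly one bit under Gray coding, which is exactly what lets a single mutation step the encoded integer past a would-be Hamming wall.

```python
def binary_to_gray(n: int) -> int:
    # Reflected binary (Gray) code: XOR the number with itself shifted
    # right by one bit.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert the encoding by XOR-folding successive right shifts.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent integers map to codewords differing in exactly one bit.
assert all(bin(binary_to_gray(i) ^ binary_to_gray(i + 1)).count("1") == 1
           for i in range(100))
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(100))
```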
The good results obtained from real-valued chromosomes were explained by viewing the set of real values in a finite population of chromosomes as forming a virtual alphabet (when selection and recombination are dominant) with a much lower cardinality than would be expected from a floating-point representation.

An expansion of the problem domain accessible to genetic algorithms can be obtained through more complex encoding of the solution pools by concatenating several types of heterogeneously encoded genes into one chromosome. This particular approach allows for solving optimization problems that require vastly disparate definition domains for the problem parameters. For instance, in problems of cascaded controller tuning, the internal loop controller structure can belong to a conventional regulator of three parameters, whereas the external loop could implement a linguistic controller (such as a fuzzy system) which has an inherently different description. This particular form of encoding requires a specialized crossover mechanism that recombines the chromosome by section, and it is a useful tool for the modelling and simulation of complex adaptive systems, especially evolution processes.

A practical variant of the general process of constructing a new population is to allow the best organism(s) from the current generation to carry over to the next, unaltered. This strategy is known as elitist selection and guarantees that the solution quality obtained by the GA will not decrease from one generation to the next (a sketch follows below).

Parallel implementations of genetic algorithms come in two flavors. Coarse-grained parallel genetic algorithms assume a population on each of the computer nodes and migration of individuals among the nodes. Fine-grained parallel genetic algorithms assume an individual on each processor node which acts with neighboring individuals for selection and reproduction. Other variants, like genetic algorithms for online optimization problems, introduce time-dependence or noise in the fitness function.

Genetic algorithms with adaptive parameters (adaptive genetic algorithms, AGAs) are another significant and promising variant of genetic algorithms. The probabilities of crossover (pc) and mutation (pm) greatly determine the degree of solution accuracy and the convergence speed that genetic algorithms can obtain. Instead of using fixed values of pc and pm, AGAs utilize the population information in each generation and adaptively adjust pc and pm in order to maintain population diversity as well as to sustain convergence capacity. In AGA (adaptive genetic algorithm), the adjustment of pc and pm depends on the fitness values of the solutions. In CAGA (clustering-based adaptive genetic algorithm), clustering analysis is used to judge the optimization state of the population, and the adjustment of pc and pm depends on that state.

It can be quite effective to combine GA with other optimization methods. GA tends to be quite good at finding generally good global solutions but quite inefficient at finding the last few mutations needed to reach the absolute optimum. Other techniques (such as simple hill climbing) are quite efficient at finding the absolute optimum in a limited region. Alternating GA and hill climbing can improve the efficiency of GA while overcoming the lack of robustness of hill climbing.
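Returning to elitist selection, mentioned above: a minimal sketch of how it can be layered onto a generational step. The helper names (breed, elite_k) are ours; breed stands in for whatever selection-plus-crossover-plus-mutation routine produces one new child.

```python
def next_generation_with_elitism(population, fitness, breed, elite_k=2):
    # Copy the top elite_k individuals into the next generation
    # unchanged, so the best fitness found so far can never decrease.
    ranked = sorted(population, key=fitness, reverse=True)
    elite = [ind[:] for ind in ranked[:elite_k]]
    # Fill the remainder of the population with newly bred children.
    children = [breed(population) for _ in range(len(population) - elite_k)]
    return elite + children
```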
In the natural case, the rules of genetic variation may have a different meaning. For instance – provided that steps are stored in consecutive order – crossing over may sum a number of steps from maternal DNA and a number of steps from paternal DNA, and so on. This is like adding vectors that are more likely to follow a ridge in the phenotypic landscape. Thus, the efficiency of the process may be increased by many orders of magnitude. Moreover, the inversion operator has the opportunity to place steps in consecutive order, or any other suitable order, in favour of survival or efficiency.

A variation in which the population as a whole is evolved rather than its individual members is known as gene pool recombination.

A number of variations have been developed to attempt to improve performance of GAs on problems with a high degree of fitness epistasis, i.e. where the fitness of a solution consists of interacting subsets of its variables. Such algorithms aim to learn (before exploiting) these beneficial phenotypic interactions. As such, they are aligned with the building block hypothesis in adaptively reducing disruptive recombination. Prominent examples of this approach include the mGA, GEMGA and LLGA.

Problem domains

Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems.

As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as mixing, i.e. mutation in combination with crossover, is designed to move the population away from local optima that a traditional hill climbing algorithm might get stuck in. Observe that commonly used crossover operators cannot change any uniform population. Mutation alone can provide ergodicity of the overall genetic algorithm process (seen as a Markov chain).

Examples of problems solved by genetic algorithms include: mirrors designed to funnel sunlight to a solar collector, antennae designed to pick up radio signals in space, walking methods for computer figures, and optimal design of aerodynamic bodies in complex flowfields.

In his Algorithm Design Manual, Skiena advises against genetic algorithms for any task:

[I]t is quite unnatural to model applications in terms of genetic operators like mutation and crossover on bit strings. The pseudobiology adds another level of complexity between you and your problem. Second, genetic algorithms take a very long time on nontrivial problems. [...] [T]he analogy with evolution—where significant progress require [sic] millions of years—can be quite appropriate. I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs.
— Steven Skiena, The Algorithm Design Manual, p. 267

History

In 1950, Alan Turing proposed a "learning machine" which would parallel the principles of evolution. Computer simulation of evolution started as early as 1954 with the work of Nils Aall Barricelli, who was using the computer at the Institute for Advanced Study in Princeton, New Jersey. His 1954 publication was not widely noticed. Starting in 1957, the Australian quantitative geneticist Alex Fraser published a series of papers on simulation of artificial selection of organisms with multiple loci controlling a measurable trait.
From these beginnings, computer simulation of evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern genetic algorithms. In addition, Hans-Joachim Bremermann published a series of papers in the 1960s that also adopted a population of solutions to optimization problems, undergoing recombination, mutation, and selection. Bremermann's research also included the elements of modern genetic algorithms. Other noteworthy early pioneers include Richard Friedberg, George Friedman, and Michael Conrad. Many early papers are reprinted by Fogel (1998).

Although Barricelli, in work he reported in 1963, had simulated the evolution of the ability to play a simple game, artificial evolution became a widely recognized optimization method as a result of the work of Ingo Rechenberg and Hans-Paul Schwefel in the 1960s and early 1970s – Rechenberg's group was able to solve complex engineering problems through evolution strategies. Another approach was the evolutionary programming technique of Lawrence J. Fogel, which was proposed for generating artificial intelligence. Evolutionary programming originally used finite state machines for predicting environments, and used variation and selection to optimize the predictive logics.

Genetic algorithms in particular became popular through the work of John Holland in the early 1970s, and particularly his book Adaptation in Natural and Artificial Systems (1975). His work originated with studies of cellular automata, conducted by Holland and his students at the University of Michigan. Holland introduced a formalized framework for predicting the quality of the next generation, known as Holland's Schema Theorem. Research in GAs remained largely theoretical until the mid-1980s, when the First International Conference on Genetic Algorithms was held in Pittsburgh, Pennsylvania.

In the late 1980s, General Electric started selling the world's first genetic algorithm product, a mainframe-based toolkit designed for industrial processes. In 1989, Axcelis, Inc. released Evolver, the world's first commercial GA product for desktop computers. The New York Times technology writer John Markoff wrote about Evolver in 1990, and it remained the only interactive commercial genetic algorithm until 1995. Evolver was sold to Palisade in 1997, translated into several languages, and is currently in its 6th version.

Related techniques

Genetic algorithms are a sub-field of evolutionary algorithms, which in turn are a sub-field of evolutionary computing.
- Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in the real-value domain. They use self-adaptation to adjust control parameters of the search. De-randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
- Evolutionary programming (EP) involves populations of solutions with primarily mutation and selection and arbitrary representations. EP algorithms use self-adaptation to adjust parameters, and can include other variation operations such as combining information from multiple parents.
- Estimation of distribution algorithms (EDA) substitute traditional reproduction operators with model-guided operators.
Such models are learned from the population by employing machine learning techniques and are represented as probabilistic graphical models, from which new solutions can be sampled or generated via guided crossover.
- Gene expression programming (GEP) also uses populations of computer programs. These complex computer programs are encoded in simpler linear chromosomes of fixed length, which are afterwards expressed as expression trees. Expression trees or computer programs evolve because the chromosomes undergo mutation and recombination in a manner similar to the canonical GA. But thanks to the special organization of GEP chromosomes, these genetic modifications always result in valid computer programs.
- Genetic programming (GP) is a related technique popularized by John Koza in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list structures typical of genetic algorithms.
- Grouping genetic algorithm (GGA) is an evolution of the GA where the focus is shifted from individual items, as in classical GAs, to groups or subsets of items. The idea behind this GA evolution, proposed by Emanuel Falkenauer, is that solving some complex problems, a.k.a. clustering or partitioning problems, where a set of items must be split into disjoint groups of items in an optimal way, would better be achieved by making characteristics of the groups of items equivalent to genes. These kinds of problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles, etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies chromosomes that are in general of variable length, and special genetic operators that manipulate whole groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of Martello and Toth is arguably the best technique to date.
- Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.

Swarm intelligence is a sub-field of evolutionary computing.
- Ant colony optimization (ACO) uses many ants (or agents) equipped with a pheromone model to traverse the solution space and find locally productive areas; it is sometimes classed as a model-based or estimation-of-distribution method.
- Particle swarm optimization (PSO) is a computational method for multi-parameter optimization which also uses a population-based approach. A population (swarm) of candidate solutions (particles) moves in the search space, and the movement of the particles is influenced both by their own best known position and the swarm's global best known position. Like genetic algorithms, the PSO method depends on information sharing among population members. In some problems the PSO is often more computationally efficient than GAs, especially in unconstrained problems with continuous variables.

Other evolutionary computing algorithms

Evolutionary computation is a sub-field of the metaheuristic methods.
- Memetic algorithm (MA), often called hybrid genetic algorithm among others, is a population-based method in which solutions are also subject to local improvement phases. The idea of memetic algorithms comes from memes, which, unlike genes, can adapt themselves.
In some problem areas they are shown to be more efficient than traditional evolutionary algorithms.
- Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly, bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their environment, with the aim of discovering how they adapt. Its basic concept is that in a heterogeneous environment, no single individual fits the whole environment, so one needs to reason at the population level. It is also believed BAs could be successfully applied to complex positioning problems (antennas for cell phones, urban planning, and so on) or data mining.
- Cultural algorithm (CA) consists of a population component almost identical to that of the genetic algorithm and, in addition, a knowledge component called the belief space.
- Differential search algorithm (DS) is inspired by the migration of superorganisms.
- Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is intended for the maximisation of manufacturing yield of signal processing systems. It may also be used for ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of efficiency. Its efficiency is defined as information divided by the work needed to get the information. Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed such that valleys between peaks may disappear. Therefore it has a certain "ambition" to avoid local peaks in the fitness landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA may maximise the disorder (average information) of the Gaussian while simultaneously keeping the mean fitness constant.

Other metaheuristic methods

Metaheuristic methods broadly fall within stochastic optimisation methods.
- Simulated annealing (SA) is a related global optimization technique that traverses the search space by testing random mutations on an individual solution. A mutation that increases fitness is always accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a relatively high rate of mutation and decreasing it over time along a given schedule.
- Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by testing mutations of an individual solution. While simulated annealing generates only one mutated solution, tabu search generates many mutated solutions and moves to the solution with the lowest energy of those generated. In order to prevent cycling and encourage greater movement through the solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a solution that contains elements of the tabu list, which is updated as the solution traverses the solution space.
- Extremal optimization (EO): unlike GAs, which work with a population of candidate solutions, EO evolves a single solution and makes local modifications to the worst components. This requires that a suitable representation be selected which permits individual solution components to be assigned a quality measure ("fitness").
The governing principle behind this algorithm is that of emergent improvement through selectively removing low-quality components and replacing them with a randomly selected component. This is decidedly at odds with a GA that selects good solutions in an attempt to make better solutions.

Other stochastic optimisation methods
- The cross-entropy (CE) method generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration.
- Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for reactive search include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and metaheuristics.

See also
- List of genetic algorithm applications
- Genetic algorithms in signal processing (a.k.a. particle filters)
- Propagation of schema
- Universal Darwinism
- Learning classifier system
- Rule-based machine learning

References
- Mitchell 1996, p. 2.
- Sadeghi, Javad; Sadeghi, Saeid; Niaki, Seyed Taghi Akhavan (2014-07-10). "Optimizing a hybrid vendor-managed inventory and transportation problem with fuzzy demand: An improved particle swarm optimization algorithm". Information Sciences. 272: 126–144. doi:10.1016/j.ins.2014.02.075. ISSN 0020-0255.
- Whitley 1994, p. 66.
- Eiben, A. E. et al. (1994). "Genetic algorithms with multi-parent recombination". PPSN III: Proceedings of the International Conference on Evolutionary Computation. The Third Conference on Parallel Problem Solving from Nature: 78–87. ISBN 3-540-58484-6.
- Ting, Chuan-Kang (2005). "On the Mean Convergence Time of Multi-parent Genetic Algorithms Without Selection". Advances in Artificial Life: 403–412. ISBN 978-3-540-28848-0.
- Akbari, R.; Ziarati, K. (2010). "A multilevel evolutionary algorithm for optimizing numerical functions". IJIEC. 2 (2011): 419–430.
- Deb, Kalyanmoy; Spears, William M. (1997). "C6.2: Speciation methods" (PDF). Handbook of Evolutionary Computation. Institute of Physics Publishing.
- Shir, Ofer M. (2012). "Niching in Evolutionary Algorithms". In Rozenberg, Grzegorz; Bäck, Thomas; Kok, Joost N. Handbook of Natural Computing. Springer Berlin Heidelberg. pp. 1035–1069. doi:10.1007/978-3-540-92910-9_32. ISBN 9783540929093.
- Goldberg 1989, p. 41.
- Harik, Georges R.; Lobo, Fernando G.; Sastry, Kumara (1 January 2006). Linkage Learning via Probabilistic Modeling in the Extended Compact Genetic Algorithm (ECGA). Scalable Optimization Via Probabilistic Modeling. Studies in Computational Intelligence. 33. pp. 39–61. doi:10.1007/978-3-540-34954-9_3. ISBN 978-3-540-34953-2.
- Pelikan, Martin; Goldberg, David E.; Cantú-Paz, Erick (1 January 1999). BOA: The Bayesian Optimization Algorithm. Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1. Gecco'99. pp. 525–532. ISBN 9781558606111.
- Coffin, David; Smith, Robert E. (1 January 2008). Linkage Learning in Estimation of Distribution Algorithms. Linkage in Evolutionary Computation. Studies in Computational Intelligence. 157. pp. 141–156. doi:10.1007/978-3-540-85068-7_7. ISBN 978-3-540-85067-0.
- Echegoyen, Carlos; Mendiburu, Alexander; Santana, Roberto; Lozano, Jose A. (8 November 2012).
"On the Taxonomy of Optimization Problems Under Estimation of Distribution Algorithms". Evolutionary Computation. 21 (3): 471–495. doi:10.1162/EVCO_a_00095. ISSN 1063-6560. PMID 23136917. - Sadowski, Krzysztof L.; Bosman, Peter A.N.; Thierens, Dirk (1 January 2013). On the Usefulness of Linkage Processing for Solving MAX-SAT. Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation. Gecco '13. pp. 853–860. doi:10.1145/2463372.2463474. hdl:1874/290291. ISBN 9781450319638. - Taherdangkoo, Mohammad; Paziresh, Mahsa; Yazdi, Mehran; Bagheri, Mohammad Hadi (19 November 2012). "An efficient algorithm for function optimization: modified stem cells algorithm". Central European Journal of Engineering. 3 (1): 36–50. doi:10.2478/s13531-012-0047-8. - Wolpert, D.H., Macready, W.G., 1995. No Free Lunch Theorems for Optimisation. Santa Fe Institute, SFI-TR-05-010, Santa Fe. - Goldberg, David E. (1991). The theory of virtual alphabets. Parallel Problem Solving from Nature, Lecture Notes in Computer Science. Lecture Notes in Computer Science. 496. pp. 13–22. doi:10.1007/BFb0029726. ISBN 978-3-540-54148-6. - Janikow, C. Z.; Michalewicz, Z. (1991). "An Experimental Comparison of Binary and Floating Point Representations in Genetic Algorithms" (PDF). Proceedings of the Fourth International Conference on Genetic Algorithms: 31–36. Retrieved 2 July 2013. - Patrascu, M.; Stancu, A.F.; Pop, F. (2014). "HELGA: a heterogeneous encoding lifelike genetic algorithm for population evolution modeling and simulation". Soft Computing. 18 (12): 2565–2576. doi:10.1007/s00500-014-1401-y. - Baluja, Shumeet; Caruana, Rich (1995). Removing the genetics from the standard genetic algorithm (PDF). ICML. - Srinivas, M.; Patnaik, L. (1994). "Adaptive probabilities of crossover and mutation in genetic algorithms" (PDF). IEEE Transactions on System, Man and Cybernetics. 24 (4): 656–667. doi:10.1109/21.286385. - Zhang, J.; Chung, H.; Lo, W. L. (2007). "Clustering-Based Adaptive Crossover and Mutation Probabilities for Genetic Algorithms". IEEE Transactions on Evolutionary Computation. 11 (3): 326–335. doi:10.1109/TEVC.2006.880727. - See for instance Evolution-in-a-nutshell or example in travelling salesman problem, in particular the use of an edge recombination operator. - Goldberg, D. E.; Korb, B.; Deb, K. (1989). "Messy Genetic Algorithms : Motivation Analysis, and First Results". Complex Systems. 5 (3): 493–530. - Gene expression: The missing link in evolutionary computation - Harik, G. (1997). Learning linkage to efficiently solve problems of bounded difficulty using genetic algorithms (PhD). Dept. Computer Science, University of Michigan, Ann Arbour. - Tomoiagă B, Chindriş M, Sumper A, Sudria-Andreu A, Villafafila-Robles R. Pareto Optimal Reconfiguration of Power Distribution Systems Using a Genetic Algorithm Based on NSGA-II. Energies. 2013; 6(3):1439-1455. - Gross, Bill. "A solar energy system that tracks the sun". TED. Retrieved 20 November 2013. - Hornby, G. S.; Linden, D. S.; Lohn, J. D., Automated Antenna Design with Evolutionary Algorithms (PDF) - "Flexible Muscle-Based Locomotion for Bipedal Creatures". - Evans, B.; Walton, S.P. (December 2017). "Aerodynamic optimisation of a hypersonic reentry vehicle based on solution of the Boltzmann–BGK equation and evolutionary optimisation". Applied Mathematical Modelling. 52: 215–240. doi:10.1016/j.apm.2017.07.024. ISSN 0307-904X. - Skiena, Steven (2010). The Algorithm Design Manual (2nd ed.). Springer Science+Business Media. ISBN 978-1-849-96720-4. 
- Turing, Alan M. (October 1950). "Computing machinery and intelligence". Mind. LIX (238): 433–460. doi:10.1093/mind/LIX.236.433.
- Barricelli, Nils Aall (1954). "Esempi numerici di processi di evoluzione". Methodos: 45–68.
- Barricelli, Nils Aall (1957). "Symbiogenetic evolution processes realized by artificial methods". Methodos: 143–182.
- Fraser, Alex (1957). "Simulation of genetic systems by automatic digital computers. I. Introduction". Aust. J. Biol. Sci. 10 (4): 484–491. doi:10.1071/BI9570484.
- Fraser, Alex; Burnell, Donald (1970). Computer Models in Genetics. New York: McGraw-Hill. ISBN 978-0-07-021904-5.
- Crosby, Jack L. (1973). Computer Simulation in Genetics. London: John Wiley & Sons. ISBN 978-0-471-18880-3.
- "UC Berkeley's Hans Bremermann, professor emeritus and pioneer in mathematical biology, has died at 69" (obituary, 27 February 1996).
- Fogel, David B., ed. (1998). Evolutionary Computation: The Fossil Record. New York: IEEE Press. ISBN 978-0-7803-3481-6.
- Barricelli, Nils Aall (1963). "Numerical testing of evolution theories. Part II. Preliminary tests of performance, symbiogenesis and terrestrial life". Acta Biotheoretica. 16 (16): 99–126. doi:10.1007/BF01556602.
- Rechenberg, Ingo (1973). Evolutionsstrategie. Stuttgart: Holzmann-Froboog. ISBN 978-3-7728-0373-4.
- Schwefel, Hans-Paul (1974). Numerische Optimierung von Computer-Modellen (PhD thesis).
- Schwefel, Hans-Paul (1977). Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie: mit einer vergleichenden Einführung in die Hill-Climbing- und Zufallsstrategie. Basel; Stuttgart: Birkhäuser. ISBN 978-3-7643-0876-6.
- Schwefel, Hans-Paul (1981). Numerical optimization of computer models (translation of the 1977 Numerische Optimierung von Computor-Modellen mittels der Evolutionsstrategie). Chichester; New York: Wiley. ISBN 978-0-471-09988-8.
- Aldawoodi, Namir (2008). An Approach to Designing an Unmanned Helicopter Autopilot Using Genetic Algorithms and Simulated Annealing. ProQuest. p. 99. ISBN 978-0549773498 – via Google Books.
- Markoff, John (29 August 1990). "What's the Best Answer? It's Survival of the Fittest". New York Times. Retrieved 2016-07-13.
- Ruggiero, Murray A. (2009-08-01). "Fifteen years and counting". Futuresmag.com. Retrieved 2013-08-07.
- Evolver: Sophisticated Optimization for Spreadsheets. Palisade. Retrieved 2013-08-07.
- Cohoon, J.; et al. (2002-11-26). Evolutionary algorithms for the physical design of VLSI circuits (PDF). Advances in Evolutionary Computing: Theory and Applications. Springer, pp. 683–712, 2003. ISBN 978-3-540-43330-9.
- Pelikan, Martin; Goldberg, David E.; Cantú-Paz, Erick (1 January 1999). BOA: The Bayesian Optimization Algorithm. Proceedings of the 1st Annual Conference on Genetic and Evolutionary Computation - Volume 1. Gecco'99. pp. 525–532. ISBN 9781558606111.
- Pelikan, Martin (2005). Hierarchical Bayesian optimization algorithm: toward a new generation of evolutionary algorithms (1st ed.). Berlin [u.a.]: Springer. ISBN 978-3-540-23774-7.
- Thierens, Dirk (11 September 2010). The Linkage Tree Genetic Algorithm. Parallel Problem Solving from Nature, PPSN XI. pp. 264–273. doi:10.1007/978-3-642-15844-5_27. ISBN 978-3-642-15843-8.
- Ferreira, C. "Gene Expression Programming: A New Adaptive Algorithm for Solving Problems" (PDF). Complex Systems. 13 (2): 87–129.
- Falkenauer, Emanuel (1997). Genetic Algorithms and Grouping Problems. Chichester, England: John Wiley & Sons Ltd. ISBN 978-0-471-97150-4.
- Zlochin, Mark; Birattari, Mauro; Meuleau, Nicolas; Dorigo, Marco (1 October 2004). "Model-Based Search for Combinatorial Optimization: A Critical Survey". Annals of Operations Research. 131 (1–4): 373–395. CiteSeerX 10.1.1.3.427. doi:10.1023/B:ANOR.0000039526.52305.af. ISSN 0254-5330.
- Hassan, Rania; Cohanim, Babak; de Weck, Olivier; Venter, Gerhard (2005). "A comparison of particle swarm optimization and the genetic algorithm".
- Baudry, Benoit; Fleurey, Franck; Jézéquel, Jean-Marc; Le Traon, Yves (March–April 2005). "Automatic Test Case Optimization: A Bacteriologic Algorithm" (PDF). IEEE Software. 22 (2): 76–82. doi:10.1109/MS.2005.30. Retrieved 9 August 2009.
- Civicioglu, P. (2012). "Transforming Geocentric Cartesian Coordinates to Geodetic Coordinates by Using Differential Search Algorithm". Computers & Geosciences. 46: 229–247. doi:10.1016/j.cageo.2011.12.011.
- Kjellström, G. (December 1991). "On the Efficiency of Gaussian Adaptation". Journal of Optimization Theory and Applications. 71 (3): 589–597. doi:10.1007/BF00941405.
- Banzhaf, Wolfgang; Nordin, Peter; Keller, Robert; Francone, Frank (1998). Genetic Programming – An Introduction. San Francisco, CA: Morgan Kaufmann. ISBN 978-1558605107.
- Bies, Robert R.; Muldoon, Matthew F.; Pollock, Bruce G.; Manuck, Steven; Smith, Gwenn; Sale, Mark E. (2006). "A Genetic Algorithm-Based, Hybrid Machine Learning Approach to Model Selection". Journal of Pharmacokinetics and Pharmacodynamics: 196–221.
- Cha, Sung-Hyuk; Tappert, Charles C. (2009). "A Genetic Algorithm for Constructing Compact Binary Decision Trees". Journal of Pattern Recognition Research. 4 (1): 1–13. CiteSeerX 10.1.1.154.8314. doi:10.13176/11.44.
- Fraser, Alex S. (1957). "Simulation of Genetic Systems by Automatic Digital Computers. I. Introduction". Australian Journal of Biological Sciences. 10 (4): 484–491. doi:10.1071/BI9570484.
- Goldberg, David (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley Professional. ISBN 978-0201157673.
- Goldberg, David (2002). The Design of Innovation: Lessons from and for Competent Genetic Algorithms. Norwell, MA: Kluwer Academic Publishers. ISBN 978-1402070983.
- Fogel, David (2006). Evolutionary Computation: Toward a New Philosophy of Machine Intelligence (3rd ed.). Piscataway, NJ: IEEE Press. ISBN 978-0471669517.
- Holland, John (1992). Adaptation in Natural and Artificial Systems. Cambridge, MA: MIT Press. ISBN 978-0262581110.
- Koza, John (1992). Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press. ISBN 978-0262111706.
- Michalewicz, Zbigniew (1996). Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag. ISBN 978-3540606765.
- Mitchell, Melanie (1996). An Introduction to Genetic Algorithms. Cambridge, MA: MIT Press. ISBN 9780585030944.
- Poli, R.; Langdon, W. B.; McPhee, N. F. (2008). A Field Guide to Genetic Programming. Lulu.com, freely available from the internet. ISBN 978-1-4092-0073-4.
- Rechenberg, Ingo (1994). Evolutionsstrategie '94. Stuttgart: Fromman-Holzboog.
- Schmitt, Lothar M.; Nehaniv, Chrystopher L.; Fujii, Robert H. (1998). "Linear analysis of genetic algorithms". Theoretical Computer Science. 208: 111–148.
- Schmitt, Lothar M. (2001). "Theory of Genetic Algorithms". Theoretical Computer Science. 259: 1–61.
- Schmitt, Lothar M. (2004). "Theory of Genetic Algorithms II: models for genetic operators over the string-tensor representation of populations and convergence to global optima for arbitrary fitness function under scaling". Theoretical Computer Science. 310: 181–231.
- Schwefel, Hans-Paul (1974). Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977).
- Vose, Michael (1999). The Simple Genetic Algorithm: Foundations and Theory. Cambridge, MA: MIT Press. ISBN 978-0262220583.
- Whitley, Darrell (1994). "A genetic algorithm tutorial" (PDF). Statistics and Computing. 4 (2): 65–85. CiteSeerX 10.1.1.184.3999. doi:10.1007/BF00175354.
- Hingston, Philip; Barone, Luigi; Michalewicz, Zbigniew (2008). Design by Evolution: Advances in Evolutionary Design. Springer. ISBN 978-3540741091.
- Eiben, Agoston; Smith, James (2003). Introduction to Evolutionary Computing. Springer. ISBN 978-3540401841.

External links
- Genetic Algorithms – Computer programs that "evolve" in ways that resemble natural selection can solve complex problems even their creators do not fully understand. An excellent introduction to GAs by John Holland, with an application to the Prisoner's Dilemma.
- An online interactive Genetic Algorithm tutorial for a reader to practise or learn how a GA works: learn step by step or watch global convergence in batch, change the population size, crossover rates/bounds, mutation rates/bounds and selection mechanisms, and add constraints.
- A Genetic Algorithm Tutorial by Darrell Whitley, Computer Science Department, Colorado State University. An excellent tutorial with lots of theory.
- "Essentials of Metaheuristics", 2009 (225 p). Free open text by Sean Luke.
- Global Optimization Algorithms – Theory and Application
- Genetic Algorithms in Python. Tutorial with the intuition behind GAs and a Python implementation.
- A genetic algorithm that evolves to solve the prisoner's dilemma. Written by Robert Axelrod.
Loro Parque's World Population Clock, based on estimates by the United Nations' Department of Economic and Social Affairs, this week reached the historic figure of 7,700 million people. If this population growth trend continues, there will be more than 8,000 million people by 2023 and 10,000 million by 2056. That means more and more inhabitants, but also more endangered species.

The Loro Parque Foundation warns that the enormous pressure of the growing population is driving animals out of their habitats. For example, it is estimated that in Africa, before the Europeans arrived, there could have been over 29 million elephants. By 1935, however, the population had dropped to 10 million, and it now stands at less than 440,000, according to a 2012 study conducted by the International Union for Conservation of Nature. The same happened with blue whales, whose population in Antarctica fell, in less than a century, from 340,000 to just over 1,000 specimens. Fortunately, thanks to international protection, the population of this species is slowly recovering. However, some cetaceans, such as the Mexican vaquita or Gulf porpoise, have not been able to improve their numbers and are on the verge of extinction, with fewer than 50 specimens registered.

At this point in time, United Nations estimates show that 57 per cent of the world's population already lives in cities, far from contact with nature and animals. By 2050, that percentage is estimated to exceed 80 per cent, making contact with nature even scarcer, with many people never having the opportunity to bond with wild animals. Asia is the most populous continent on the planet, with 4,478 million people and a density of 144 people per square kilometre, followed by Africa with 1,246 million and Europe with 739 million. Population densities in Europe and the Americas do not exceed 30 people per square kilometre, yet the enormous amount of infrastructure and agricultural use has fragmented and reduced natural habitats.

This problem of overpopulation affects everyone, as resource depletion, deforestation and pollution are just a sample of its consequences. For this reason, the role of wildlife conservation centres such as Loro Parque is more important than ever: they are needed to maintain living contact between animals and the public. The mission of modern zoos is therefore to fight to preserve endangered species, to increase scientific knowledge about animal species in order to protect them, and to inspire love and protection of animals in all their visitors. Thus, in an increasingly populated and urban world, zoos are the embassy of animals and nature.
A Creole language is a stable, full-fledged language that originated from a pidgin. A French Creole is a Creole language based on the French language, more specifically on a 17th-century koiné French extant in Paris, the French Atlantic harbors, and the nascent French colonies. French-based Creole languages are spoken by millions of people worldwide, primarily in the Americas and in the Indian Ocean. Haitian Creole, or Kreyòl ayisyen, spoken primarily in Haiti, is the largest French-derived language in the world, with a total of 12 million fluent speakers; it is also the most widely spoken Creole language in the world. French is its precursor language, with some indigenous Amerindian languages providing substrate input. Some words also derive from English and from Spanish.

Permaculture is a branch of ecological design, ecological engineering, and environmental design that develops sustainable architecture and self-maintained agricultural systems modeled on natural ecosystems. The core tenets of permaculture are:
- Care of the earth: provision for all life systems to continue and multiply. This is the first principle, because without a healthy earth, humans cannot flourish.
- Care of the people: provision for people to access the resources necessary for their existence.
- Return of surplus: reinvesting surpluses back into the system to provide for the first two ethics. This includes returning waste back into the system to recycle it into a useful input/output.

Creole Permaculture Courses

In Sadhana Forest Haiti we have come to understand that such valuable knowledge should be shared widely. We do not believe that language should be a barrier to acquiring this knowledge, so we have trained permaculture teachers who speak the local language of Haiti, Creole. We were surprised by the huge interest in the course and the number of people who came to participate and learn. We believe that empowering the local population with knowledge of how to use their land in a productive and sustainable way is very important for them and for generations to come. We hope to host Creole Permaculture courses regularly. The courses are always offered free of charge. To find out more about why we don't charge for activities such as these, go here: "Gift Economy".
A team of astronomers led by George Becker at the University of California, Riverside, has made a surprising discovery: 12.5 billion years ago, the most opaque place in the universe contained relatively little matter.

It has long been known that the universe is filled with a web-like network of dark matter and gas. This "cosmic web" accounts for most of the matter in the universe, whereas galaxies like our own Milky Way make up only a small fraction. Today, the gas between galaxies is almost totally transparent because it is kept ionized (electrons detached from their atoms) by an energetic bath of ultraviolet radiation.

Over a decade ago, astronomers noticed that in the very distant past — roughly 12.5 billion years ago, or about 1 billion years after the Big Bang — the gas in deep space was not only highly opaque to ultraviolet light, but its transparency varied widely from place to place, obscuring much of the light emitted by distant galaxies. Then a few years ago, a team led by Becker, then at the University of Cambridge, found that these differences in opacity were so large that either the amount of gas itself, or more likely the radiation in which it is immersed, must vary substantially from place to place.

"Today, we live in a fairly homogeneous universe," said Becker, an expert on the intergalactic medium, which includes dark matter and the gas that permeates the space between galaxies. "If you look in any direction you find, on average, roughly the same number of galaxies and similar properties for the gas between galaxies, the so-called intergalactic gas. At that early time, however, the gas in deep space looked very different from one region of the universe to another."

To find out what created these differences, the team of University of California astronomers from the Riverside, Santa Barbara, and Los Angeles campuses turned to one of the largest telescopes in the world: the Subaru telescope on the summit of Mauna Kea in Hawaii. Using its powerful camera, the team looked for galaxies in a vast region, roughly 300 million light years in size, where they knew the intergalactic gas was extremely opaque. For the cosmic web, more opacity normally means more gas, and hence more galaxies. But the team found the opposite: this region contained far fewer galaxies than average. Because the gas in deep space is kept transparent by the ultraviolet light from galaxies, fewer galaxies nearby might make it murkier.

"Normally it doesn't matter how many galaxies are nearby; the ultraviolet light that keeps the gas in deep space transparent often comes from galaxies that are extremely far away. That's true for most of cosmic history, anyway," said Becker, an assistant professor in the Department of Physics and Astronomy. "At this very early time, it looks like the UV light can't travel very far, and so a patch of the universe with few galaxies in it will look much darker than one with plenty of galaxies around."

This discovery, reported in the August 2018 issue of the Astrophysical Journal, may eventually shed light on another phase in cosmic history. In the first billion years after the Big Bang, ultraviolet light from the first galaxies filled the universe and permanently transformed the gas in deep space. Astronomers believe that this occurred earlier in regions with more galaxies, meaning the large fluctuations in intergalactic radiation inferred by Becker and his team may be a relic of this patchy process, and could offer clues to how and when it occurred.
“There is still a lot we don’t know about when the first galaxies formed and how they altered their surroundings,” Becker said. By studying both galaxies and the gas in deep space, astronomers hope to get closer to understanding how this intergalactic ecosystem took shape in the early universe. Publication: George D. Becker, et al., “Evidence for Large-scale Fluctuations in the Metagalactic Ionizing Background Near Redshift Six,” ApJ, 2018; doi:10.3847/1538-4357/aacc73
Dopamine is a chemical that is naturally present in the human body. It is a neurotransmitter, which means it sends signals from the body to the brain. Dopamine plays a role in regulating a person's movements and emotional responses. The right balance of dopamine is important for both physical and emotional health. Dopamine levels affect vital brain functions, including mood, sleep, memory, learning, concentration, and motor control. A deficiency in dopamine is linked to certain medical conditions, including depression and Parkinson's disease. A dopamine deficiency can be due to a drop in the amount of dopamine the body makes, or to a problem with the receptors in the brain. The signs of a dopamine deficiency depend on the underlying cause. For example, a person with Parkinson's disease will experience very different symptoms from someone whose dopamine levels are low because of substance use. Some signs and symptoms of a dopamine deficiency include:
- muscle cramps, spasms, or tremors
- aches and pains
- stiffness in the muscles
- loss of balance
- difficulty eating and swallowing
- weight loss or weight gain
- gastroesophageal reflux disease (GERD)
- frequent pneumonia
- trouble sleeping or disturbed sleep
- low energy
- an inability to focus
- moving or speaking more slowly than usual
- feeling fatigued
- feeling demotivated
- feeling inexplicably sad or tearful
- mood swings
- feeling hopeless
- having low self-esteem
- feeling guilt-ridden
- feeling anxious
- suicidal thoughts or thoughts of self-harm
- low sex drive
- lack of insight or self-awareness
Low dopamine is associated with various mental health problems but does not directly cause those conditions. The disorders most often associated with a dopamine deficiency include:
- psychosis, including hallucinations or delusions
- Parkinson's disease
In Parkinson's disease, nerve cells are lost in a particular part of the brain, and dopamine is lost in the same region. Drug misuse is also thought to affect dopamine levels. Studies have shown that repeated drug use can raise the thresholds needed for dopamine cells to activate and signal. The damage caused by drug addiction means that these thresholds stay higher, making it harder for a person to feel dopamine's positive effects. People who misuse drugs have also shown significant reductions in dopamine D2 receptors and in dopamine release. Diets high in sugar and saturated fat can suppress dopamine, and a lack of protein in a person's diet may mean they do not get enough l-tyrosine, an amino acid that helps the body produce dopamine. One study of interest showed that people who are obese and carry a certain gene are also more likely to be deficient in dopamine. There is no reliable way to measure dopamine levels directly in a person's brain, but there are some indirect ways of assessing a dopamine imbalance. Doctors can measure the density of dopamine transporters, which correlates positively with the number of nerve cells that use dopamine. This procedure involves injecting a radioactive substance that binds to dopamine transporters, which doctors can then measure with a camera. A doctor may also review a person's symptoms, lifestyle factors, and medical history to decide whether they have a condition linked to low dopamine levels. Treatment of a dopamine deficiency depends on finding the underlying cause.
When a person is diagnosed with a mental health condition, such as depression or schizophrenia, a doctor may prescribe medications to help with the symptoms. These medications can include antidepressants and mood stabilizers. Ropinirole and pramipexole can increase dopamine levels and are often prescribed to treat Parkinson's disease. Levodopa is typically the first treatment prescribed when Parkinson's is diagnosed. Other treatments for a dopamine deficiency can include:
- changes in diet and lifestyle
- physical therapy for muscle stiffness and movement problems
Supplements such as vitamin D, magnesium, and omega-3 essential fatty acids may also help raise dopamine levels, but further research is needed on whether this is effective. Activities that make a person feel comfortable and secure are also thought to increase dopamine levels. These may include exercise, massage therapy, and meditation.
Dopamine vs. serotonin
Dopamine and serotonin are both naturally occurring chemicals in the body that play roles in a person's mood and wellbeing. Serotonin influences a person's mood and emotions, as well as their sleep patterns, appetite, body temperature, and hormonal activity, such as the menstrual cycle. Some experts suggest that low serotonin levels contribute to depression, but the relationship between serotonin and depression and other mood disorders is complex, and a serotonin imbalance alone is unlikely to cause depression. In addition, dopamine affects how a person moves, whereas serotonin has no direct role in movement.
Dopamine deficiency can significantly affect a person's quality of life, both physically and mentally. Many mental health disorders are related to low dopamine levels, and several medical problems, including Parkinson's disease, have also been linked to low dopamine. There is limited evidence that diet and lifestyle affect the dopamine levels a person creates and transmits in their body. Some medications and some therapies can help relieve symptoms, but a person who is worried about their dopamine levels should always speak to a doctor first.
It is impossible to imagine a world where we are separated. Workplace diversity is highly beneficial for everyone: it increases productivity, marketing opportunities, and creativity among employees. Cultural awareness and exposure to different opinions make you more flexible and successful. But if diversity is so beneficial, why do people still experience these career struggles? Racism and sexism have often stood in the way of someone's promotion or higher salary, and they remain as relevant as ever for many job seekers.
The job search process is not the only place where someone may be subjected to unfair conditions. Applicant tracking systems often filter out a perfect CV just because it lacks the right keywords. To avoid this problem, one can look for "write my resume for me" assistance from the best online resume writing services, which polish your application. Help from qualified writers can increase your chances of landing an interview and getting a job.
How do racism and sexism end up in the workplace? How can you recognize them and take appropriate action to combat injustice in the job market? For some people, the topic is a slippery slope because it is hard to prove that prejudice was the reason for someone's rejection. It is important to recognize the issue and address it for the benefit of everyone.
Some Important Definitions
- Discrimination is intentional or unintentional exclusion, denial of benefits, or imposition of burdens on a person based on prejudice. It also describes forms of micro-aggression (verbal or behavioral), including stereotyping or hostile attitudes toward someone based on their gender, sex, or race.
- Racism is a form of discrimination based on a person's race or ethnicity.
- Sexism is a form of discrimination based on a person's sex or gender.
A Brief History of the Fight for Equal Rights
The first visible participation of women in the workforce coincided with the Industrial Revolution. In 1765, the earliest working women's society, the Daughters of Liberty, was established. Within four years, women in America were barred from keeping their earnings and owning property. Women could still work, but they had nothing to sustain themselves. For several decades, women had to strike and challenge authority to demand recognition in many occupations.
Racism and sexism have a long history of going hand in hand. Women standing up for other discriminated communities began with the abolition movement in the 1830s. It was about shared experiences and striving for equal opportunities with their male counterparts. It is hard to imagine that only about a hundred years ago the Equal Rights Amendment was first proposed. The fight for equality, as history shows, is a dance: you take one step forward and two steps back. The reason women still have to fight for their rights is that old stereotypes die hard. Women are expected to choose family over career, they are perceived as less qualified, and we all know the harmful stories about women "being too emotional." There is still a lot of work to do toward diversity and equal rights, even though the fight has continued for a century.
How Do Racism and Sexism Impact Women?
Lack of Self-Actualization
One of the basic human needs is to self-actualize and build a meaningful career. A hostile environment and the exclusion of women hold them back from professional growth. Four in ten (42%) working women in the US experience sexism every day. Many women don't land a job, with recruiters giving no specific reason.
Emotional, Physical, or Mental Health Issues
A hostile workplace creates a stressful environment that can cause:
- low self-esteem;
- isolation from colleagues;
- isolation from a peer group, friends, or family;
- mental health and substance abuse issues;
- feelings of paranoia, anger, or fear.
The psychological and well-being effects of sexism and racism are discussed less because of the stigma around them. However, they are among the most common reasons why some women leave their career fields (for instance, tech).
The most evident effect of these prejudices is seen in salary differences. The pay gap is an economic disadvantage for many women of color, especially if they are the providers for their families. White women also experience a significant wage gap in comparison to their male counterparts in the same positions.
Lack of Representation
The glass ceiling benefits only one group of people. Many women don't pursue higher positions because they feel discouraged from doing so in the first place. The lack of diversity in CEO positions and higher-level jobs harms industries: it results in failing businesses and missed opportunities to expand and strengthen enterprises. Diversity and positive representation only benefit businesses.
How to Combat Discrimination?
It is important to find a support group that can help you stand up for yourself or your colleagues. It is crucial to have proof of harassment or discrimination, and being precise is a must. Every person has a right to work in a safe, discrimination-free environment. You also have a right to stand up for yourself and your colleagues and demand fair judgment. At the same time, you should know the federal laws that promise you protection from race and gender discrimination. You can file a grievance, picket or protest your employer, or file a complaint with a government agency, for instance the Equal Employment Opportunity Commission (EEOC).
Racism and sexism are frowned upon by many, but both are still present in workplaces. Every day, women work twice as hard for their accomplishments and success, and they deserve recognition. It is important to demand accountability and break the cycle of unfair treatment of women in workplaces.
Major and minor scales are made up of a pattern of whole steps and half steps (or tones and semitones).
Formulas for Major and minor Scales
Here are the formulas for creating Major and minor scales. You can start on any pitch, follow these patterns, and play either a Major or a minor scale.
Major Scale Formula
If you play all of the white keys starting on C, you play a C Major scale (C D E F G A B C). If we note the spacing of the pitches as either whole steps or half steps, we get the pattern W W H W W W H.
Next, start on a 'G' and see if you can use the formula W W H W W W H to play a G Major scale.
Then start on a 'D' and see if you can use the same formula W W H W W W H to play a D Major scale.
Minor Scale Formula
You can also create a minor scale by playing all of the white keys, this time starting on A. An A minor scale (A B C D E F G A) gives the formula W H W W H W W.
See if you can start on an 'E' and play an E minor scale using our formula W H W W H W W.
Next, see if you can create a 'B' minor scale using the formula W H W W H W W.
Practicing Playing Scales
Try starting on different pitches on the piano and use the formulas to create Major and minor scales.
Online Game of Major Scales by MusicTechTeacher
Online Help with Creating Scales using Whole and Half Steps – by Music Fundamentals – Great explanation
YOUTUBE semitones/tones, half/whole steps and major scales
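If you like to experiment, here is a short Python sketch (not part of the original lesson) that builds a scale from any starting pitch by walking a whole/half-step pattern along the chromatic scale. It uses sharp note names only, so flat keys will print enharmonic spellings (for example, A# instead of Bb).

```python
# Build major and minor scales by walking a whole/half-step pattern.
# A whole step (W) is 2 semitones; a half step (H) is 1 semitone.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

MAJOR = ["W", "W", "H", "W", "W", "W", "H"]
MINOR = ["W", "H", "W", "W", "H", "W", "W"]

def build_scale(root, pattern):
    steps = {"W": 2, "H": 1}
    index = CHROMATIC.index(root)
    scale = [root]
    for step in pattern:
        index = (index + steps[step]) % len(CHROMATIC)
        scale.append(CHROMATIC[index])
    return scale

print(build_scale("C", MAJOR))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B', 'C']
print(build_scale("G", MAJOR))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#', 'G']
print(build_scale("A", MINOR))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'A']
```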
Bipolar disorder, also known as manic-depressive illness, is a brain disorder that causes unusual shifts in mood, energy, activity levels, and the ability to carry out day-to-day tasks. Symptoms of bipolar disorder are severe. They are different from the normal ups and downs that everyone goes through from time to time. Bipolar disorder symptoms can result in damaged relationships, poor job or school performance, and even suicide. Two main features characterize people who live with bipolar disorder: intensity and oscillation (ups and downs). People living with bipolar disorder often experience two intense emotional states, known as mania and depression. A manic state can be identified by feelings of extreme irritability and/or euphoria, along with several other symptoms occurring during the same week, such as agitation, surges of energy, reduced need for sleep, talkativeness, pleasure-seeking, and increased risk-taking behavior. On the other side, when individuals experience symptoms of depression, they feel extremely sad and hopeless and lose energy. Not everyone's symptoms are the same, and the severity of mania and depression can vary.
Doubles are taught in maths classes in elementary schools, typically during the first and second years. Learning doubles facts is an important step for elementary students because maths is a cumulative subject: maths is easier to learn when students understand and know basic maths facts. There are several common activities teachers use to teach doubles to pupils.
Flash cards are a common way to learn all types of maths facts, including doubles. Doubles are simply sums of two like numbers, such as 1 plus 1. Flash cards are a fun way for children to learn maths facts, and they can also be sent home for pupils to practice on their own or with their parents.
Many teachers use the game Around the World for learning doubles. This is a flash card game played by one pupil standing and working his way around the room from desk to desk. He stands by one desk and the teacher shows the flash card. If he answers first, he moves to the next desk. If the pupil at the desk answers first, the standing pupil sits down, and the seated pupil stands up and moves on to the next desk.
Many teachers use songs as activities to learn doubles. A common doubles song goes: 0 + 0 = 0 oh, 1 + 1 = 2 ooo, 2 + 2 = 4 more, 3 + 3 = 6 kicks, 4 + 4 = 8 that's great, 5 + 5 = 10 again, 6 + 6 = 12 that's great, 7 + 7 = 14 let's lean, 8 + 8 = 16 really keen, 9 + 9 = 18 jelly bean and 10 + 10 = 20 that's plenty. Another common song used to learn doubles is "The Ants Go Marching." This is done by dividing the room in half. When the first verse is sung, have one child from each side of the room walk to the front of the room together; this shows the double of one. Continue with each verse, adding more children each time, and have the children count how many are at the front of the room after each verse.
Another common activity for learning doubles is to assign a picture to each number from 1 to 10: 1 could be a ball, 2 could be the legs on a person, 3 the sides of a triangle, and so on. To learn the double of each number, two pictures of the item are drawn and the child adds up the items to find the double. For example, the number 3 is illustrated by drawing two triangles; the sides of both triangles are added up to find the double of 3.
Memories influence our behaviour for better or worse. A traumatic incident, experienced once, can darken our lives for ever more. Drug or alcohol addiction – driven by remembered rewards – can render the idea of “normal life” impossible. So what if there was a therapy that could rapidly diminish the impact such memories have over us? It sounds like science fiction, or mind control. But in the last decade scientists have investigated the process of memory reconsolidation to erase established memories of trauma or signals such as drug paraphernalia or locations associated with compulsive drug taking. The resultant amnesia is permanent, and typically requires only a single treatment, effectively replacing the dysfunctional memory with a clean slate. Are memories set in stone? Initially, when a memory is formed it is fragile and susceptible to disruption, similar to memories that fail to form following a night of heavy drinking. But once a memory becomes stabilised, or consolidated, it is in an established state that can be recollected and mentally re-experienced. In the laboratory, rats are used as a model allowing the examination of learning and memory formation. Rats quickly learn to fear a sound that is present when a brief electrical shock occurs. Similarly, rats will perform specific responses when a light is illuminated indicating availability of addictive drugs such as cocaine or heroin, and will prefer an environment associated with addictive drugs compared to a neutral environment. Previously, attempts were made to extinguish a maladaptive memory by repeatedly presenting signals associated with fear or drugs with no outcome (such as physical trauma or a “hit” of heroin), a technique known as extinction. But the original memory is not erased; instead, a neutral memory forms in parallel. That means the maladaptive memory can return to control behaviour following re-exposure to certain prompts or environments. Researchers using laboratory rats found that consolidated memories are rendered transiently unstable (therefore temporarily susceptible to disruption) following retrieval if the outcome is unexpected – a sound no longer leads to a shock, a light no longer leads to cocaine. A restabilisation process – known as reconsolidation – allows existing memories to be updated, but this short burst of new information is not salient enough to change the memory completely. But the destabilised memory becomes susceptible to amnesia-inducing treatments once again. Amnesic agents have been shown to wipe the original memory to the extent that rats will no longer display fearful behaviours to the sound associated with electric shocks, respond for drugs or show preference to drug-associated environments. The reminder session is crucial: rats treated with amnesic pharmaceuticals in absence of the brief initial memory retrieval session continued to show fear or drug seeking responses. From lab rats to clinical trials In humans, the distressing memories underpinning post-traumatic stress disorder (PTSD) can occur following experiences of life-threatening incidents such as military combat, assault, serious accidents or terrorist attacks. Disruption of memory reconsolidation may provide the “magic bullet” to erase these damaging memories. 
Trials have demonstrated that administering propranolol after script-driven reenactment of traumatic events in a clinical setting, used to recall the memory, diminished the memory's emotional component in PTSD patients, resulting in an enduring reduction in psycho-physiological responses. Put simply, the emotional impact of the traumatic experience was decreased.
The use of certain amnesic agents such as the drug MK-801 may be limited outside of the laboratory. Preclinical laboratory studies have frequently used amnesia-inducing drugs in rats that can have undesirable side effects in humans, such as hallucinations.
In a recent study, researchers used a novel behavioural procedure that combined a brief reminder session of memories associated with drugs of abuse, destabilising the existing memory, followed by repeated presentation of drug-associated cues in the absence of drug rewards. This was initially carried out in laboratory rats that were trained to self-administer heroin. Rats that underwent memory retrieval shortly followed by extinction showed decreased responses to drug-associated signals, whereas responding returned in the rats that had only extinction trials. This retrieval-extinction procedure was then used in abstinent human heroin addicts with identical results: persistent reductions in responses such as craving when presented with drug cues.
Similarly, a functional brain imaging study was conducted with human subjects. Participants were repeatedly exposed to a photo of a neutral environment containing a lamp that was lit either in red or blue. One of the coloured images (i.e., the red lamp) led to an electrical shock, so participants learnt to associate one image with fear while the other (i.e., the blue lamp) remained neutral. This study demonstrated diminished neural activity in the amygdala, a brain region involved in the encoding and storage of fearful memories, in the group that had the fear memory recalled and then underwent extinction ten minutes later. The decreased amygdala activity was not observed in subjects for whom extinction followed recall after a prolonged delay. So the extinction treatment had to occur soon after the memory was recalled; otherwise, the treatment had no effect.
A spotless mind?
Memory reconsolidation may prove useful in treating drug addiction. Showing an addict a syringe and then extinguishing that memory by not giving the patient access to drugs may break the associative link between the stimulus and the rewarding drug. Similarly, administering anxiety-decreasing drugs in conjunction with recall of traumatic experiences may persistently disrupt a fear memory and may give PTSD patients genuine remission, allowing an escape from traumatic memories.
Of course, ethical implications underpin the selective removal of memories. In the case of alleviating traumatic memories in PTSD or reducing drug craving it has great benefits, but what if we could simply forget a relationship that ended badly? Our memories, good or bad, form part of our identities, and simply removing aspects of our character may have serious consequences.
Further reading: Explainer: what is forgetting?
What does the Earth's orbit around the Sun look like? What causes the seasons? How high is the Sun over the year at different latitudes? And how would the situation change if the Earth's axis had a different tilt?
What is solar time, and how long is the daytime at different places on the Earth? What time is it currently in different cities on the Earth? View an interactive map!
How do the moon phases that we can see from the Earth originate? Explore the mechanisms of the solar and lunar eclipses! Why does an eclipse not occur every month? When will solar or lunar eclipses occur in the future, and where will they be observable? Where on the Earth do the high tide and low tide occur?
Take an interactive "walk" through the Solar System. (c) Martin Vézina
Can you imagine the distances between the planets and the dimensions of the planets? Experience a map application that presents these enormous distances and dimensions in a familiar environment.
The app Earth Space Lab is designed especially for teaching the topic of the Earth as a planet at grammar or elementary schools (geography, physics). The app consists of individual learning objects that can be used independently.
This app was created by Václav Černík ([email protected]) and is based on his diploma thesis at the Faculty of Science, Charles University in 2017. Any comments can be sent to e-mail [email protected].
Background image (c) Can Stock Photo / onyxprj
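As a rough illustration of the day-length question above, here is a small Python sketch (not part of the app) using the standard sunrise-equation approximation: the solar declination is estimated from the day of the year, and the half-day arc H follows from cos H = -tan(latitude) * tan(declination).

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    # Approximate solar declination (degrees) for the given day of year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    lat = math.radians(latitude_deg)
    dec = math.radians(decl)
    # Sunrise equation: cos(H) = -tan(lat) * tan(decl).
    cos_h = -math.tan(lat) * math.tan(dec)
    if cos_h <= -1.0:
        return 24.0  # polar day: the sun never sets
    if cos_h >= 1.0:
        return 0.0   # polar night: the sun never rises
    # H is the half-day arc in degrees; 15 degrees of arc = 1 hour.
    h = math.degrees(math.acos(cos_h))
    return 2.0 * h / 15.0

# Day length around the June solstice (~day 172) at a few latitudes:
for lat in [0, 30, 50, 70]:
    print(lat, round(day_length_hours(lat, 172), 1))
```

At the equator this prints roughly 12 hours, at 50 degrees north about 16 hours, and at 70 degrees north it reports polar day, which matches the behavior the app lets students explore interactively.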
The organisms discussed in this chapter are short, pleomorphic Gram-negative rods that can exhibit bipolar staining. They are catalase positive, oxidase negative, and microaerophilic or facultatively anaerobic. Most have animals as their natural hosts, but they can produce serious disease in humans. The genus Yersinia includes Yersinia pestis, the cause of plague; Yersinia enterocolitica, an important cause of human diarrheal disease; and several others considered nonpathogenic for humans. Pasteurella are primarily animal pathogens but Pasteurella multocida can also produce human disease. YERSINIA PESTIS AND PLAGUE Plague is an infection of wild rodents transmitted from one rodent to another and occasionally from rodents to humans by the bites of fleas. Serious infection often results, which in previous centuries produced pandemics of “black death” with millions of fatalities. The ability of this organism to be transmitted by aerosol and the severity and high mortality associated with pneumonic plague make Y pestis a potential biological weapon. Morphology and Identification Y pestis is a Gram-negative rod that exhibits striking bipolar staining with special stains such as Wright, Giemsa, Wayson, or methylene blue (Figure 19-1). It is nonmotile. It grows as a facultative anaerobe on many bacteriologic media. Growth is more rapid in media containing blood or tissue fluids and fastest at 30°C. In cultures on blood agar at 37°C, colonies may be very small at 24 hours. A virulent inoculum, derived from infected tissue, produces gray and viscous colonies, but after passage in the laboratory, the colonies become irregular and rough. The organism has little biochemical activity, and this is somewhat variable. Yersinia pestis (arrows) in blood, Wright-Giemsa stain. Some of the Yersinia pestis have bipolar staining, which gives them a hairpin-like appearance. Original magnification ×1000. (Courtesy of K Gage, Plague Section, Centers for Disease Control and Prevention, Ft. Collins, CO.) All yersiniae possess lipopolysaccharides that have endotoxic activity when released. Y pestis and Y enterocolitica also produce antigens and toxins that act as virulence factors. They have type III secretion systems that consist of a membrane-spanning complex that allows the bacteria to inject proteins directly into cytoplasm of the host cells. The virulent yersiniae produce V and W antigens, which are encoded by genes on a plasmid of approximately 70 kb. This is essential for virulence; the V and W antigens yield the requirement for calcium for growth at 37°C. Compared with the other pathogenic yersiniae, Y pestis has gained additional plasmids. pPCP1 is a 9.5-kb plasmid that contains genes that yield plasminogen-activating protease that has temperature-dependent coagulase activity (20°–28°C, the temperature of the flea) and fibrinolytic activity (35°–37°C, the temperature of the host). This factor is involved in dissemination of the organism from the flea bite ...
Imagine the inner tube of your bicycle wheel is deflated. Imagine holding it in your hands; notice how limp and flexible it feels. You can bend it and twist it in any direction you like. Now imagine the same inner tube filled with water. Try to flex it now and feel the rigidity that has suddenly appeared. Water, which normally seems quite yielding, is very difficult to compress. When contained, water provides tremendous resistance to being squeezed. This is the basis of hydraulic systems: fluids such as water, when contained or constrained, resist compression and transfer forces placed upon them into areas of lesser resistance.
As was mentioned when we looked at our connective tissues, our tissues are filled with a variety of extracellular substances such as collagen fibers, elastin, etc. What wasn't described were the fluids that flow around everything. These fluids are called our "ground substances." Sometimes they are called "cement substances," and they are widely distributed throughout our connective and supporting tissues. Ground substances act very much like the water in the inner tube analogy: they provide strength and support to the tissues. But they do much more than that.
Ground substances are the non-fibrous portion of our extracellular matrix (the stuff outside the cells of our bodies) in which the other components are held in place. They are made up of various proteins, water, and glycosaminoglycans. Water can make up sixty to seventy percent of the ground substances, and it is attracted there by the GAGs. One of the most important GAGs is hyaluronic acid (HA). Various researchers have estimated that HA can attract and bind one thousand to eight thousand times its volume of water. Another estimate suggests each HA protein in the extracellular matrix has fifteen thousand molecules of water associated with it! Another important kind of GAG is chondroitin-sulfate. When GAGs combine with proteins they are called "proteoglycans," and it is in this form that they attach to water molecules and hydrate our tissues. The proteoglycans are very malleable and move about freely. However, being made largely of water, they also resist compression tremendously.
With water as a principal component of our ground substances, we can see why the ground substances are an excellent lubricant between fibrils, allowing them to move freely past each other. Water gives our tissues a spring-like ability, allowing them to return to their original shapes once pressure has ceased. This is crucial to our tissues' ability to withstand stresses; however, a cyclic loading and unloading of the tissue is important to maintaining health. One study found that the alternation of loading and unloading of pressure on the tissue, as long as it is not excessive, maintains cartilage health. The fluid in our joints (called "synovial fluid") is also a lubricant, and it too is made up substantially of GAGs. HA and two kinds of chondroitin-sulfates are essential to keeping our joints working properly.
When the extracellular matrix is well hydrated, cells, nutrients, and other components of the matrix can move about freely. Toxins and waste products can migrate out of the matrix into the blood or lymphatic system to be removed from the body. The ground substances, which are also formed by the fibroblasts (remember, fibroblasts also produce collagen), are also helpful in resisting the spread of infection and are part of our immune system barrier.
Unfortunately, as we age, the body's ability to create HA and other GAGs diminishes. We have fewer fibroblasts available to us, and those we do have produce less HA. As a consequence, the extracellular matrix becomes filled more and more with fibers. As these fibers come closer together, they generate cross-links that bind them to each other. As a result, our tissues become stiffer, less elastic, and less open to the flow of the other components in our matrix. Toxins and waste products become trapped in the matrix and cannot get out, but harmful bacteria can migrate around more easily. Fortunately, exercise like yoga, along with massage, both of which stress the extracellular matrix, can help us maintain the number of fibroblasts and keep them functioning properly. This helps keep the matrix hydrated, open, and strong.
We need these fluids everywhere in the body. The fluid of the eye is made up mostly of ground substances. Our skin needs HA to remain soft. Recently, cosmetic surgeons have been using HA injections, instead of collagen, as a soft tissue filler to increase the size of lips or remove skin wrinkles. The effects, however, last only six to twelve months. Chondroitin is an often-used supplement intended to help increase lubrication of the joints. However, injections and supplements are very inefficient ways to hydrate the body. More effective is to coax the body to increase its own production.
Ground substances can be fluidic or gel-like, and under certain conditions they change from one to the other. When they are gel-like they provide more stability, but they are less open to the passage of materials through the matrix. When they are fluid they have less rigidity, but more openness to the flow of materials. Compression of the tissues, via yoga and other means, can temporarily transform the ground substance from gel to fluid. During the fluid state, toxins and wastes can be transported out of the matrix. Once again, yoga is an excellent way to detoxify the body.
- That's a mouthful, which is easy to gag upon when trying to pronounce. So let's just call these GAGs for short.
- To be more current, we could call this hyaluronan.
- Called "ama" in yoga.
- One study showed that oral ingestion of chondroitin-sulfate resulted in only a five percent absorption rate, which meant that large doses were required to have any effect.
Archaeologists at the University of York say they may have found one of the earliest examples of a crayon: a 10,000-year-old elongated piece of ochre with a sharpened end. The tool was found near an ancient lake in North Yorkshire, a landscape with a rich Mesolithic archaeological record. The find might help archaeologists better understand how prehistoric hunter-gatherers worked with pigments. Just 22mm long and 7mm wide, the object bears grooves and scratches on its surface, as the scientists note in their study, published in the Journal of Archaeological Science: Reports. These lines are possible traces of someone rubbing the object against granular surfaces, which would yield red marks. Its sharpened end also suggests that the piece was used as a kind of drawing or coloring tool. The archaeologists likewise found a small ochre pebble with deep striations, which they believe was used to harvest red pigment powder. "Color was a very significant part of hunter-gatherer life and ochre gives you a very vibrant red color," Dr. Andy Needham, the lead author of the study, said in a press release. "It is very important in the Mesolithic period and seems to be used in a number of ways."
Studying phage, a primitive class of virus that infects bacteria by injecting its genomic DNA into host cells, researchers have gained insight into the driving force behind this poorly understood injection process, which has been proposed in the past to occur through the release of pressure accumulated within the viral particle itself.
Almost all phages (also known as bacteriophages) are formed of a capsid structure, or head, in which the viral genome is packaged during morphogenesis, and a tail structure that ensures the attachment of the phage to the host bacterium. A common feature of phages is that during infection, only their genome is transferred to the bacterial host's cytoplasm, whereas the capsid and tail remain bound to the cell surface. This situation is very different from that found in most eukaryotic viruses, including those that infect humans, in that the envelope of these viruses fuses with the host plasma membrane so that the genome is delivered without directly contacting the membrane.
Phage nucleic acid transport poses a fascinating biophysical problem: transport is unidirectional and linear, and it concerns a single molecule whose length may be 50 times that of the bacterium. The driving force for DNA transport is still poorly defined. It was hypothesized that the internal pressure built up during packaging of the DNA in the phage capsid was responsible for DNA ejection. This pressure results from the condensation of the DNA during morphogenesis; for example, another group recently showed that the pressure at the final stage of encapsulation for a particular bacteriophage reached a value of 60 atmospheres, which is close to ten times the pressure inside a bottle of champagne.
In the new work reported this week, researchers have evaluated whether the energy thus stored is sufficient to permit phage DNA ejection, or only to initiate that process. The researchers used fluorescently labeled phage DNA to investigate in real time (and with a time resolution of 750 milliseconds) the dynamics of DNA ejection from single phages. The ejected DNA was measured at different stages of the ejection process after being stretched by an applied hydrodynamic flow.
The study demonstrated that DNA release is not an all-or-none process, but rather is unexpectedly complex. DNA release occurred at a very high rate, reaching 75,000 base pairs of DNA per second, but in a stepwise fashion. Pausing times were observed during ejection, and ejection was transiently arrested at definite positions of the genome in close proximity to genetically defined physical interruptions in the DNA. The authors discuss the relevance of this stepwise ejection to the transfer of phage DNA in vivo.
Source: Eurekalert & others
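For a rough sense of scale (my arithmetic, not the study's): at the reported peak rate of 75,000 base pairs per second, ejecting a genome of about 50,000 base pairs (a typical phage genome scale; the study's phage may differ) would take well under a second if it ran continuously, which the observed pauses show it does not. A minimal sketch:

```python
# Back-of-the-envelope estimate (illustrative only): how long a
# continuous ejection at the reported peak rate would take.
GENOME_BP = 50_000            # assumed genome size, for illustration
PEAK_RATE_BP_PER_S = 75_000   # peak ejection rate reported in the study

print(GENOME_BP / PEAK_RATE_BP_PER_S)  # ~0.67 seconds, ignoring pauses
```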
Use effective verbs#
Verbs carry the action in a sentence, and they make your content come alive for users. To make the biggest impact with your writing, use strong, simple, action verbs. See the following sections for specific guidelines.
- Use action-oriented verbs
- Avoid nouns built from verbs
- Use the simplest tense
- Use helping verbs accurately
- Use single-word verbs
- Don't use verbs as nouns or adjectives
- Don't use nonverbs as verbs
- Use transitive verbs transitively, not intransitively
- Don't humanize inanimate objects

Use action-oriented verbs#
Verbs are supposed to carry the action in a sentence. However, when you use verbs like be, have, make, or do (and their variants), or when you use gerunds (-ing words), nouns carry the action and weaken the meaning. Shift the focus from nouns to verbs by replacing weak verbs and gerunds with strong, action-oriented verbs. Relying on verbs rather than nouns usually makes sentences shorter, clearer, and more direct.

| Use | Avoid |
| --- | --- |
| Rackspace leads the industry. | Rackspace is the industry leader. |
| Role-Based Access Control (RBAC) restricts service access to authorized users. | Role-Based Access Control (RBAC) is a method of restricting service access to authorized users. |
| If the node can't access the Internet, the installation process fails. | If the node doesn't have Internet access, the installation process fails. |
| To create a server, specify a name, flavor, and image. | You create a server by specifying a name, flavor, and image. |
| When you create a server, ... | When creating a server, ... |

Avoid nouns built from verbs#
Many nouns are built from verbs, for example, description and explanation. Such nouns are called nominalizations. Sentences that include a nominalization and a verb can often be simplified by changing the nominalization back into a verb and omitting the existing verb (as shown in the following examples).

| Use | Avoid |
| --- | --- |
| The following table describes each of the products. | The following table provides a description of each of these products. |
| Install the product by completing the following tasks. | Perform the installation of the product by completing the following tasks. |
| The program encrypts user IDs and passwords. | The program enables the encryption of user IDs and passwords. |

Use the simplest tense#
Simple verbs, such as verbs in the present tense, are easier to read and understand than complex verbs, such as verbs in the progressive or perfect tense, or verbs combined with helping verbs (such as can, may, might, must, and should).

| Use | Avoid |
| --- | --- |
| Before you perform this task, complete the prerequisites. | Before you perform this task, you should have completed the prerequisites. |
| To start, three ports are open: ssh, http, and https. | To start, you are going to have three ports open: ssh, http, and https. |
| If you use a Red Hat distribution, iptables works a little differently. | If you are using a Red Hat distribution, iptables works a little differently. |

Use helping verbs accurately#
If you need to use the following helping verbs, use them accurately and consistently:
- Can: Use can to indicate the ability to perform an action.
- May: Use may to indicate permission.
- Might: Use might to indicate probability or possibility.
- Must: You can use must to indicate the necessity of an action. However, in general, use the imperative mood, which implies the subject you and doesn't require must but still indicates necessity.
- Should: Use should to tell users what they ought to do. Because should implies uncertainty, avoid using it unless you explain further.
| Example |
| --- |
| You can customize Cloud Queues to achieve a wide range of performance, durability, availability, and efficiency goals. |
| If you need space, you may uninstall the program. |
| A service might expose endpoints in different regions. |
| The worker must delete the message when work is done. |
| To avoid losing a claim in the middle of processing a message, clients should periodically renew claims during long-running batches of work. |

Use single-word verbs#
When possible, use single-word verbs rather than phrasal verbs (verbs followed by prepositions or adverbs). For example, use omit rather than leave out, or shorten start up to start. One-word verbs are easier to understand and to translate. If you must use a phrasal verb, keep the parts of the verb together unless that changes the meaning of the sentence. Some acceptable phrasal verbs are back up, log in, set up, shut down, and work around.
Don't turn a phrasal verb into a single-word verb. For example, don't use login, setup, or workaround as verbs. These single-word terms should be used only as nouns or adjectives.

| Use | Avoid |
| --- | --- |
| Determine the type of encryption (32-bit or 64-bit) that your computer uses. | Figure out the type of encryption (32-bit or 64-bit) that your computer uses. |
| Click the link. | Click on the link. |
| You can safely back up a database by using Rackspace Cloud Backup. | You can safely back a database up by using Rackspace Cloud Backup. |

Don't use verbs as nouns or adjectives#
If a word is defined in the dictionary as a verb, don't use it as a noun or adjective. Some verbs that are commonly misused as nouns or adjectives are configure, compile, debug, and install.

| Use | Avoid |
| --- | --- |
| After installation is completed, you can configure the product. | When you complete the install, you can begin the configure. |
| After rubygems is compiled, the following message appears at the bottom of the output text. | When the compile process is finished, the following message appears at the bottom of the output text. |

Don't use nonverbs as verbs#
Don't use nouns or adjectives as verbs, and don't add verb suffixes to abbreviations, nouns, or conjunctions.

| Use | Avoid |
| --- | --- |
| You can reorganize the table space. | You can REORG the table space. |
| Verify the change by using the ping command to contact the server. | Verify the change by pinging the server. |
| Some databases and search engines insert the AND operator between adjacent words in a keyword search. | Some databases and search engines AND adjacent words in a keyword search. |
| Navigate to the new directory. | CD to the new directory. |

Use transitive verbs transitively, not intransitively#
Transitive verbs, such as display and complete, require a direct object. Intransitive verbs don't require a direct object. Be sure to use each type of verb correctly. To avoid using a transitive verb intransitively, you can make it passive if the performer of the action is understood or not important.

| Use | Avoid |
| --- | --- |
| The product displays the available servers in the right pane. *Or:* The available servers are displayed in the right pane. | The available servers display in the right pane. |
| After the installation is completed, ensure that the FTP services are running. | After the installation completes, ensure that the FTP services are running. |

Don't humanize inanimate objects#
Be careful not to ascribe human feelings, motivations, and actions to inanimate objects. For example, a software program doesn't know, need, remember, see, think, understand, or want. However, it can detect, record, require, store, check, calculate, and process.
The following anthropomorphic verbs are acceptable in the computer industry: accept, calculate, deny, detect, interact, interpret, listen, refuse, read, and write.

| Use | Avoid |
| --- | --- |
| Mission-critical web-based applications and workloads require an HA solution. | Mission-critical web-based applications and workloads need an HA solution. |
| The software stores your security profile and uses it the next time you log in. | The software remembers your security profile and uses it the next time you log in. |
Boston Massacre Trials:
1. What is odd about the defense and prosecution teams?
2. Why do they not hold the trials right away?
3. What verdict do they reach for Thomas Preston, and why?
4. Do the trials seem fair? Why/why not?
5. How might the trials have affected the reputation of John Adams?
CD: Do you agree with the verdicts? Do you think the trials would have gone any differently in today's times? Why/why not?
24. I can explain the causes of the Boston Tea Party and its effects.
27. I can explain the First Continental Congress and its impact.
Was the First Continental Congress effective?
First Continental Congress
What caused it? What happened at it? What does it set in motion? Is it rebellion or revolution now? How do you know? Does it move us closer to war or further away?
29. I can explain the Declaration of Independence in my own words and analyze its impact.
--Why do countries declare independence?
--What are the two big philosophical ideas in the Declaration of Independence?
--What grievances did the colonists have with King George?
--How does the Declaration of Independence make an argument for independence?
--How do declarations of independence from other countries compare with the US Declaration of Independence?
The United Nations' Intergovernmental Panel on Climate Change publishes a report on the consensus view of climate change science about every five to seven years. The first findings of the IPCC's Fifth Assessment Report (AR5) were released on Sept. 27, 2013, in the form of the Summary for Policymakers report and a draft of IPCC Working Group 1's Physical Science Basis. The IPCC does not perform new science; instead, it authors a report that summarizes the established understanding of the world's climate science community. The report includes not only observations of the real world but also the results of climate model projections of how the Earth will respond as a system to rising greenhouse gas concentrations in the atmosphere.
The IPCC's AR5 relies on the Coupled Model Intercomparison Project Phase 5 (CMIP5), an international effort among the climate modeling community to coordinate climate change experiments. These visualizations represent the mean output of how certain groups of CMIP5 models responded to four different scenarios defined by the IPCC, called Representative Concentration Pathways (RCPs). These four RCPs (2.6, 4.5, 6, and 8.5) represent a wide range of potential worldwide greenhouse gas emissions and sequestration scenarios for the coming century. The pathways are numbered based on the expected watts per square meter each scenario would produce; this is essentially a measure of how much heat energy is being trapped by the climate system. The pathways are partly based on the ultimate concentrations of carbon dioxide and other greenhouse gases. The current carbon dioxide concentration in the atmosphere is around 400 parts per million, up from less than 300 parts per million at the end of the 19th century. The carbon dioxide concentrations in the year 2100 for each RCP are:
RCP 2.6: 421 ppm
RCP 4.5: 538 ppm
RCP 6: 670 ppm
RCP 8.5: 936 ppm
Each visualization represents the mean output of a different number of models for each RCP, because data from all models in the CMIP5 project was not available in the same format for visualization for each RCP. All of the models compare a projection of temperatures and precipitation from 2006-2099 to a baseline historical average from 1971-2000. Thus, the values shown for each year represent that year's departure from the observed average global surface temperature over 1971-2000. The IPCC report used 1986-2005 as a baseline period, making its reported anomalies slightly different from those shown in the visualizations.
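To make the baseline arithmetic concrete, here is a hypothetical Python sketch (not from the IPCC or NASA tooling) of how an anomaly series like the one described is computed: subtract the baseline-period mean from each projected year. The temperature values are invented placeholders.

```python
# Hypothetical sketch: computing anomalies against a 1971-2000 baseline,
# as the visualizations described above do. All values are placeholders.
import numpy as np

years = np.arange(2006, 2100)
# Invented projected global mean surface temperatures (K), for illustration.
projected = 287.0 + 0.03 * (years - 2006) + np.random.normal(0, 0.1, years.size)

baseline_mean = 286.9  # stand-in for the observed 1971-2000 average

anomalies = projected - baseline_mean
print(anomalies[:5])   # each value is that year's departure from the baseline
```

Swapping in a 1986-2005 baseline mean would shift every anomaly by a constant, which is exactly why the IPCC's reported anomalies differ slightly from those in the visualizations.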
Anatomy, Head and Neck: Adenoids
The adenoids are a rectangular mass of lymphatic tissue in the nasopharynx. Meyer first described this mucosa-associated lymphoid tissue in 1868. The adenoids are midline structures situated on the roof and posterior wall of the nasopharynx. They form part of the Waldeyer ring, whose components include the adenoids, the palatine tonsils, and the lingual tonsils. They are present from the seventh month of gestation and typically grow until age 5. Adenoid tissue can be found extending to the eustachian tube opening and the fossa of Rosenmuller. The fossa of Rosenmuller is on the lateral wall of the nasopharynx, just behind the cartilage of the eustachian tube. These lymphoid masses have an important immunologic function, and their hypertrophy can pose a risk for disease in children. The adenoids, with other lymphatic tissue in the nasopharynx, are the first line of defense against ingested or inhaled pathogens.
Structure and Function
Adenoids are pyramidal in shape, with the apex of the pyramid directed toward the nasal septum and the base sitting between the roof and the posterior wall of the nasopharynx. Their surface is composed of respiratory epithelium. Histologically, the lymphoid tissue of the adenoids divides into four lobes with seromucous glands interposed throughout the substance of the tissue. As a portion of the Waldeyer ring, the adenoids form part of the lymphoid tissue that serves as a defense against potential pathogens in the pharynx. Adenoids, in conjunction with the lingual and palatine tonsils, are involved in the development of T cells and B cells. On the surface, adenoid tissue has specialized antigen-capture cells (ACC), M cells, which take up pathogenic antigens and then alert the underlying B cells. Activation of B cells leads to their proliferation in areas called germinal centers; this helps in producing IgA immunoglobulins. Through this mechanism, the adenoids aid in the development of immunologic memory throughout childhood. Recent scientific literature has provided some evidence that adenoids also produce T lymphocytes (cellular immunity), like the thymus gland. The adenoids can function as a bacterial reservoir for the nasal cavity and are implicated in the pathogenesis of chronic rhinosinusitis.
The prenatal development of the head during embryogenesis involves the neurocranium and the viscerocranium. The human face develops as part of the viscerocranium throughout the fourth to tenth weeks of fetal life. The adenoids are formed by the fusion of two lateral primordia during embryological development. Dysfunctional facial development may cause craniofacial deformities such as cleft palate and cleft lip.
The arterial supply of the adenoids comes from the basisphenoid artery, the ascending pharyngeal artery, the ascending palatine artery, the pharyngeal branch of the maxillary artery, the tonsillar branch of the facial artery, and the artery of the pterygoid canal. The venous drainage of the adenoids is through the pharyngeal plexus. The pharyngeal plexus and the pterygoid plexus communicate, eventually draining into the facial and internal jugular veins. The lymphatic drainage of the adenoids is through the pharyngomaxillary space lymph nodes and the retropharyngeal lymph nodes. The nervous supply to the adenoids is via the pharyngeal plexus, which contains fibers of cranial nerves IX, X, and XI.
The innervation of the adenoids originates from the vagus (X) and glossopharyngeal (IX) nerves. The muscles found in the nasopharynx include the levator palatini and the pharyngeal constrictors. The superior pharyngeal constrictor muscle forms the superior aspect of the lateral walls of the nasopharynx.
Adenoids decrease in size with age, typically atrophying completely by the teenage years. Persistence of adenoid tissue into adulthood is an uncommon clinical finding. Nonetheless, a disease process of the adenoids requires investigation in those presenting with symptoms of nasal obstruction. Immunocompromised patients, such as those diagnosed with human immunodeficiency virus (HIV) and organ transplant recipients, can exhibit adenoid hypertrophy. This finding is thought to be caused by regressed adenoid tissue reproliferating in response to infections.
Adenoid tissue may separate into two parts in some individuals. This variant can occur through two means: by a fissure extending from the pharyngeal bursa or by a median fold passing toward the nasal septum from the pharyngeal bursa.
Adenoidectomy: An adenoidectomy is the surgical excision of adenoid tissue. Primary indications for adenoidectomy include otitis media with effusion of at least 3 months' duration, chronic adenoiditis, obstructive sleep apnea lasting 3 months or longer, and recurrent upper respiratory infections. Patients who have undergone adenoidectomy require special consideration for hemorrhage as a potential (though rare) complication. The basisphenoid artery supplies a portion of the nasopharyngeal tonsil and can be a source of bleeding post-operatively.
Adenoid hypertrophy (AH): Impaired mucociliary clearance has been implicated as playing a role in adenoid hypertrophy, a condition typically seen in children. An enlarged adenoid may block breathing and cause snoring or obstructive sleep apnea. Adenoid hypertrophy can also lead to comorbid conditions such as serous otitis and sinusitis. AH is more frequent in children with allergic diseases, the most common allergen being house dust. Other risk factors for developing AH include cigarette smoke exposure and allergic rhinitis. In a child with these risk factors, AH should be a consideration during a routine examination. Adenoid size can be assessed through flexible nasal endoscopy, where it is graded on a scale of I to IV. This scale represents the percentage of the posterior choana blocked by the adenoid tissue, with grade IV representing the highest level of obstruction. While adenoidectomy remains a common surgical treatment for adenoid hypertrophy, intra-nasal steroids are an option as a non-surgical treatment regimen.
Adenoiditis: Adenoiditis refers to inflammation of adenoid tissue secondary to an infection. Those affected may present with nasal obstruction, rhinorrhea, mouth breathing, and cold-like symptoms. Adenoiditis may occur on its own or in combination with acute or chronic rhinosinusitis. Common pathogens leading to adenoiditis are often the same as those implicated in rhinosinusitis and include Streptococcus pneumoniae, Haemophilus influenzae, and Moraxella catarrhalis. Nasal endoscopy showing purulent secretions on the adenoids can help confirm adenoiditis.
Adenoid facies: Adenoid facies describes the differences in physical characteristics seen in children with adenoid hypertrophy.
The belief is that adenoid hypertrophy obstructs proper nasal breathing, leading to mouth breathing and problems with dental and maxillofacial development. Adenoid facies characteristically demonstrates narrowed maxillary and dental arches, upper lip incompetence, retropositioning of the mandibular incisors and hyoid bone, and a posteriorly rotated mandible. There is also a downward tongue displacement and a lower position of the mandible. Removal of excessive adenoid tissue allows for the recovery of proper craniofacial growth.
Temperate forest food web

A food web shows how energy is transferred from one living thing to another. The sun is the ultimate source of energy; if there were no sun, every living thing would die. In a temperate forest food web, the plants absorb sunlight and turn it into glucose, a sugar, in a process called photosynthesis. Then consumers like squirrels, birds, mice and rabbits eat the seeds and plants. These animals are in turn eaten by consumers that don't eat plants, such as foxes and hawks, while the decomposers break down organic material. If one of the animals were to die out, it would unbalance the whole web. For instance, in this temperate forest food web, if the rabbits were to die out, the hawks might die out without any food to eat.
Python comes with several useful built-in functions to work with collections. We saw sorted in the previous section, and you surely know len, which returns the number of items in a collection. Here, I will present lesser-known functions that can be truly useful.

List comprehension and generator expressions

List comprehension enables you to transform a list into another list. For example, to create the list of the first 10 even numbers, we write evens = [n*2 for n in range(10)]. Starting from Python 3, range does not return a list but a range object. So, to be precise, here we transform a range into a list. By adding an if clause, list comprehension can also be used to filter the source list. Here is another way to create the list of the first 10 even numbers: evens = [n for n in range(20) if n%2 == 0].

There are situations where the created list is only iterated once, typically in a for loop or as an argument of a function which expects an iterator. In these cases, you can use a generator expression instead of a list comprehension. To create a generator expression, simply replace the surrounding brackets by parentheses: evens = (n*2 for n in range(10)). A generator expression does not create the full list in memory, but creates an iterator object. This iterator is lazy and will create the items on demand. When used in a for loop, a new item is generated at each iteration. A generator expression can possibly save a lot of memory and be more efficient. Generator expressions can only be iterated once and cannot be accessed by index. Nevertheless, there are many cases where they can be used instead of a list comprehension.

Tests on collections with any and all

We start with two very useful functions: any and all. They deserve greater fame, because they enable compact and readable code. Both take a single argument which must be an iterator, so they can handle lists, generator expressions, sets, dictionaries... any returns True if at least one element of the collection is true. Obviously, all returns True if all the elements of the collection are true. E.g.:

any([True,False,False]) # True
any([False,False,False]) # False
all([True,True,True]) # True
all([True,True,False]) # False

You might think that lists of booleans are not so common and that adding two built-in functions just for them is not very useful. However, with list comprehension you can easily transform an existing list into a list of booleans. Consider a list of characters: with all you can write:

# Transform a list of characters into a list of booleans.
# Each boolean represents the status of the character.
if all([character.isDead for character in characterList]):
    lostGame()

instead of:

lost = True
for character in characterList:
    if not character.isDead:
        lost = False
        break
if lost:
    lostGame()

any and all are efficient, because they will stop the evaluation as soon as possible: any stops on the first true item, and conversely, all stops on the first false item. In the following exercise, you have to convert the code of hasEven by removing the loop and by using any. You will probably also need to use a generator expression.
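A minimal sketch of that conversion, assuming hasEven takes a list of integers and returns True when at least one of them is even (the loop version below is one plausible form of the exercise's starting code, not necessarily the exact one):

# Loop-based version, as the exercise might present it
def hasEven(numbers):
    for n in numbers:
        if n % 2 == 0:
            return True
    return False

# Loop-free version: any() consumes a lazy generator expression
# and stops as soon as it finds the first even number.
def hasEven(numbers):
    return any(n % 2 == 0 for n in numbers)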
Reducing collections with sum, min and max

In functional programming terminology, reducing a collection means iterating the collection in order to create a single value. sum, min and max process number collections, but with generator expressions their scope is much wider. For example:

myScore = sum(chest.value for chest in chestList)
opponentStrength = max(character.strength for character in opponentList)

The call to sum uses a generator expression. When you pass a generator expression to a single-argument function, you can omit the surrounding parentheses. min and max do not support empty collections and will raise a ValueError exception when they receive one. sum returns 0 for an empty collection.

Combined iterations with zip

We now come to zip. With zip you can iterate several iterators at the same time: list(zip([1,2,3],['a','b','c'])) == [(1,'a'),(2,'b'),(3,'c')]. zip returns an iterator, so I made a list call in the previous expression for the sake of accuracy. zip works with iterators of different lengths and stops at the end of the shortest iterator. Therefore, the length of the returned iterator is the length of the shortest parameter. The name zip is a bit confusing; it has nothing to do with data compression. It refers to the ubiquitous fastener. A zipper takes two rows of teeth and binds the corresponding teeth. In a somewhat similar way, the zip function takes two lists and binds their items into pairs.

zip at work

At first glance, the use cases for zip seem less easy to find than for any and all, but zip comes in handy in many situations. Let's look at some examples. zip can be used to combine time series. Consider that you have two temperature sensors in a room, each one taking a measure every minute. At the end of the day, you have two lists of temperatures and you would like to build the list of mean temperatures.

dayMeanTemperatures = [] # will contain the mean temperature for today
# dayTemperatures is a method of Sensor which returns the list of temperatures for the current day
for temp1, temp2 in zip(sensor1.dayTemperatures(), sensor2.dayTemperatures()):
    dayMeanTemperatures.append((temp1 + temp2) / 2)

zip can be used with an arbitrary number of iterators, so this example can be generalized to more than two sensors. Let's come back to a game-oriented scenario. Consider the list path=['A','B','C','D'] representing an ordered sequence, for example a path returned by a path-finding algorithm. We would like to build the list of the edges which compose this path: [(A,B),(B,C),(C,D)]. This suspiciously looks like the result of a zip call, but can we use zip to build it? If we take the first items of the pairs we get [A,B,C] and the second items give us [B,C,D]. So, we can write zip(path, path[1:]). Since zip stops at the end of the shorter iterator, we finally have:

path=['A','B','C','D']
edges = zip(path,path[1:])

A last example for the mathematically oriented readers. With zip, you can calculate the dot product of two vectors:

# vector1 and vector2 are two lists representing two vectors
# a first version with a for loop
dotProductLoop = 0
for v1Value, v2Value in zip(vector1,vector2):
    dotProductLoop += v1Value * v2Value
# and a second version with sum
dotProductSum = sum(value1*value2 for value1,value2 in zip(vector1,vector2))

As I said before, zip is not limited to 2 iterators and works with an arbitrary number of iterators. Also, zip does not build a new collection, it returns an iterator that you can use in a for loop. If you want to reuse the result several times, you can build a list: edges = list(zip(path, path[1:])).

Hands-on session

In the following exercise you have to implement the pairs and evenOdd functions, which take a single length argument and return a list of pairs. For the pairs function, the nth returned pair is (n, n+1), so pairs(3) returns [(0,1),(1,2),(2,3)]. For the evenOdd function, the nth returned pair is (2n, 2n+1), so evenOdd(3) returns [(0,1),(2,3),(4,5)]. A possible solution is sketched below.
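One way to implement these, sketched with zip and range (an illustration consistent with the examples above, not the course's reference solution):

# pairs(3) == [(0, 1), (1, 2), (2, 3)]
def pairs(length):
    return list(zip(range(length), range(1, length + 1)))

# evenOdd(3) == [(0, 1), (2, 3), (4, 5)]
def evenOdd(length):
    evens = range(0, 2 * length, 2)  # 0, 2, 4, ...
    odds = range(1, 2 * length, 2)   # 1, 3, 5, ...
    return list(zip(evens, odds))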
To implement these two functions, you only need the functions presented in this section, such as zip and range. Another handy built-in is enumerate: the enumerate function iterates through a collection and yields the index and the item at the same time. With enumerate you can write:

for index, character in enumerate(characterList):
    doSomethingWithIndexAndCharacter()

instead of the more error-prone:

index = 0
for character in characterList:
    doSomethingWithIndexAndCharacter()
    index += 1

The goal of the next exercise is to implement
Characteristics of tropical savannas

The climatic pattern is one of the defining characteristics of tropical savannas worldwide. In this activity, we'll explore the climatic features of savanna regions.

Savanna climates

Many of you studying this unit will be living in northern Australia or have visited here, and would be familiar with the weather being hot and dry or hot and wet depending on the time of year. This weather pattern is shown in Figure 2, where the rainfall and temperature are graphed for Darwin and Daly Waters in the Northern Territory. Have you thought about whether these climate patterns are typical of savanna climates globally? Open the attached figure and refer to:
- Hutley L.B. and Setterfield S.A. (2008, in press) Savannas. In S.E. Jørgensen (ed.) Encyclopaedia of Ecology, Elsevier, Amsterdam.

A monsoonal climate with very distinct wet and dry seasons is typical of savanna ecosystems worldwide. Savanna environments are characterised by a rainy period with warm to hot conditions, followed by a virtually rainless dry period with warm to cool conditions. Most commonly there is a single alternation of seasons (eg Weipa, Coen and Cooktown); however, it can be bimodal (eg Bouake, Africa). Mean monthly temperatures vary from 20–35°C in the warm months to 10–25°C in the cool months. Savanna climates can vary greatly with respect to rainfall and the length of the dry season. Annual rainfall varies from 500 mm (eg Barra, Africa and Daly Waters, Australia) to over 1600 mm. The rainfall in Darwin is at the higher end of the rainfall range for savanna climates. The length of the dry season ranges from 3-4 months (eg Goiania, S. America) to 6-8 months (eg Nyala, Africa).

Geology and soils of savanna regions

The soils of the savanna regions of Australia, Africa, South America and India are similar in their underlying geology. These soils are generally low in nutrients. Why? Africa, South America, Australia and India were once united, forming the old continent of Gondwana. Many of the geological and soil features of these regions date from the time they formed one continent. The effects of continental drift as well as climatic changes during the Pleistocene created distinct geological features in each continent. Present-day landforms reflect common Gondwana characteristics as well as the unique individual history of each continent since the Cretaceous. See the following animated graphic for a view of how these continents may have been located 115 million years ago, and their subsequent break-up. The soils are therefore formed from ancient landscapes, and their poor nutrient status is partly a reflection of the extremely long history of weathering and nutrient leaching. The poorest soils (oxisols and ultisols) are those derived from the oldest deposits, since these have been subjected to weathering and leaching for the longest times. These are common throughout the world's tropical savanna regions.

As we have discussed, the savannas occur in tropical regions where there is a transition between abundant rain and short-term drought within a one-year period. To cope with such conditions, savanna organisms have developed a wide range of morphological, physiological and behavioural adaptations. The seasonality of the climate also imposes a seasonal response in growth and senescence in the vegetation, particularly the grasses.
The following is a photographic time sequence from dry season to wet season and back to the dry season in the savannas around Darwin. Look at the photographic time sequence from dry season (November) to wet season (March). What do you notice about the differences in the amount of plant growth that occurs during the wet season and dry season? Is this typical of savannas worldwide?

Hutley L.B. and Setterfield S.A. (2008, in press) Savannas. In S.E. Jørgensen (ed.) Encyclopaedia of Ecology, Elsevier, Amsterdam.

These pictures show the massive differences in the amount of plant growth that occur between the wet and dry seasons, which is typical of savannas worldwide. In savannas, the ground flora in particular changes dramatically throughout the year, with annual plants and aboveground stems of perennial herbaceous species dying off during the dry season (May to October). At the onset of the wet season (November), many perennial herbaceous species sprout new leaves, and the seeds of annual species germinate and new seedlings establish. The wet season is the main period of flowering and growth of these species. As you may expect, there are significant variations in both the quantity and quality of plant material during the wet and dry seasons. Both quantity and quality are dependent on the total amount and seasonal distribution of rainfall, and on the availability of nutrients, particularly nitrogen and phosphorus (Frost et al. 1986).

The savanna vegetation exhibits a wide range of vegetative and floral phenological behaviour. Growth and flowering may occur in both the wet season and the dry season. The wet season is the main period for growth and flowering of the herbaceous species in the savanna woodland and forests. The dry season and the build-up to the wet are important periods of growth and flowering in the woody vegetation. Figure 2 of Setterfield and Williams (1996, link below) shows the timing of major reproductive events for the most common evergreen trees (E. miniata and E. tetrodonta) in Kakadu. Figure 9 of Williams et al. (1997, link below) shows the main pattern of leaf growth and phenology of 5 common canopy species in Kakadu NP. The common evergreen species are E. miniata, E. tetrodonta and E. porrecta; the two deciduous trees are Erythrophleum chlorostachys and Terminalia ferdinandiana. The phenology of other savanna species is also documented, and this data set provides a comprehensive examination of the patterns of leaf growth through the seasons. Like the reproductive phenology, there are clear and regular rhythms in the leaf phenology. Refer to the following:
- Williams, R. J., B. A. Myers, W. J. Muller, G. A. Duff, and D. Eamus. 1997. Leaf phenology of woody species in a northern Australian tropical savanna. Journal of Biogeography.
- Setterfield, S. A., and R. J. Williams. 1996. Patterns of flowering and seed production in Eucalyptus miniata and E. tetrodonta in a tropical savanna woodland, northern Australia. Australian Journal of Botany 44: 107-122.
- Sarmiento, G., and M. Monasterio. 1983. Life forms and phenology. Pages 93-94 in F. Bourliere, ed. Tropical Savannas. Elsevier, Amsterdam.

List 5 major features about the timing and duration of leaf production and reproduction in Australian savanna trees. Are these distinct leaf and reproductive patterns exhibited by savanna species elsewhere in the world? Some major points apparent are:
- Reproductive phenology is strongly seasonal in the main tropical Eucalypts.
- Flowering and seed production occur during an 8-month period in the dry season in the main tropical Eucalypts.
- The dominant Eucalyptus species are able to grow during the 6-month dry season.
- The late dry season, the build-up to the wet season, and the early wet season are all important periods of growth for the 5 species.
- The late wet season and the transition from wet to dry season are characterised by low levels of growth.
- The timing of the major growth phases can differ between species and between years (compare the growth phases for Erythrophleum chlorostachys and Terminalia ferdinandiana).

Distinct phenological patterns such as these have been identified in savannas around the world. On the basis of the timing of growth and flowering, Sarmiento and Monasterio (1983) established phenological groups. They formed 15 phenological groups based on:
- Their life-history (perennial or annual)
- Whether the species assimilated carbon all year, or had a rest period
- Growth period
- Timing of flowering in relation to the wet season

Sarmiento and Monasterio (1983) emphasize that there is a "wide range of phenological strategies apparent" and that "in spite of the sharp seasonality of the vegetation, every period in the year appears to be favourable at least to the accomplishment of certain phenophases in one or another group of plant species."

Two common images of savannas are herbivory by large, native ungulates, particularly in Africa, and the widespread grazing by domestic herds, particularly cattle. Large herbivore diversity and abundance are much higher in Africa than in Australia, Asia or South America. More than 40 large wild herbivore species have been described in African savannas. In contrast, only 6 species of macropod marsupial have been considered as large herbivorous mammals in the Australian savannas, and only three species of ungulates are regarded as native South American savanna inhabitants. Domestic animals, particularly cattle, buffaloes, sheep and goats, are now the dominant large herbivores in most savannas. The more neglected group of savanna herbivores is the invertebrates, particularly grasshoppers, caterpillars, ants and termites. In Australia, insects assume a number of ecological roles (herbivory, seed predation) that are played by vertebrates elsewhere. We will discuss the role of mammal and insect herbivores in more detail in Topic 3: Determinants.

Fauna of Australian savannas

For a description of the fauna of northern Australia, refer to the Tropical Savannas CRC website.
The exponent calculator is pretty straightforward. Input the base, then the exponent, and get the answer below. It also handles negative exponents. This is the same operation that you might see typed out as "X ^ Y," where X is the base and Y is the exponent. An exponent is an operation in which the exponent designates the number of times the base is multiplied by itself. Some examples:
• 3 ^ 4 = 3 * 3 * 3 * 3 = 81
• 4 ^ 3 = 4 * 4 * 4 = 64
• 2 ^ 3 = 2 * 2 * 2 = 8
• 2 ^ -4 = 0.5 * 0.5 * 0.5 * 0.5 = 0.0625
• 4 ^ -1 = 0.25
When we take the negative exponent of a number, that's the same thing as multiplying the reciprocal of the base by itself that many times. Thus 4 times itself "negative one times" means the fraction 1/4th, or 0.25. Funny things happen at the edge cases. For instance, any nonzero base raised to the exponent of 0 yields 1. Zero raised to a negative exponent is often described as infinity, but it's really one of those undefined problems related to dividing by zero. Finally, one tricky aspect is negative numbers raised to positive exponents. Check out this sequence:
• -2 ^ 2 = 4
• -2 ^ 3 = -8
• -2 ^ 4 = 16
• -2 ^ 5 = -32
The rule in multiplication is that like signs produce a positive result, opposite signs produce a negative result. So look at "-2 ^ 3":
• -2 ^ 3 = -2 * -2 * -2
• -2 * -2 = 4
• 4 * -2 = -8
When the exponent is even, the result is positive, but when the exponent is odd, the result is negative. The same even/odd pattern occurs with negative bases raised to negative exponents, producing alternating positive and negative results. Exponents come up all the time in various science fields. Take biology: say we have a Petri dish of bacteria which reproduce through fission (splitting in two), taking an hour to do so. Starting with 2 bacteria, in ten hours we get over a thousand (2 × 2^10 = 2,048). Exponential growth is very important to understand because its explosive rise always indicates something unsustainable; our bacteria experiment would level off after a while because there was no more room in the dish, for instance. Exponents are fun to play with and chart, using graphing calculators and the like. They also come in handy for applications ranging from engineering to physics.
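These rules are easy to check in Python, whose ** operator performs exponentiation (a quick sketch mirroring the examples above):

# Positive exponents: repeated multiplication of the base
print(3 ** 4)     # 81
print(2 ** 3)     # 8

# Negative exponents: repeated multiplication of the reciprocal
print(2 ** -4)    # 0.0625
print(4 ** -1)    # 0.25

# Negative bases: even exponents give positive results, odd give negative
print((-2) ** 2)  # 4
print((-2) ** 3)  # -8

# Exponential growth: 2 bacteria doubling every hour for 10 hours
print(2 * 2 ** 10)  # 2048 -- over a thousand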
About 160 million light years away in the constellation of Hydra, spiral galaxy NGC 3393 has been keeping a billion-year-old secret. It might have a poker face, but it has a pair of black holes up its sleeve… Using information obtained through NASA's Chandra X-ray Observatory combined with Hubble Space Telescope imaging, scientists have uncovered the first evidence that NGC 3393 is harboring twin supermassive black holes. Residing only 490 light years apart, the duo may have been the product of a "minor merger" – where a small and a large galaxy met. Although the hypothesis of two black holes within one galaxy isn't new, it has been difficult to prove, because two galaxies that have combined their material can end up as a rather ordinary-looking spiral. "The current picture of galaxy evolution advocates co-evolution of galaxies and their nuclear massive black holes, through accretion and galactic merging." says G. Fabbiano, lead author of a recent Nature paper. "Pairs of quasars, each with a massive black hole at the centre of its galaxy, have separations of 6,000 to 300,000 light years and exemplify the first stages of this gravitational interaction." If scientific calculations are correct, a smaller galaxy should have contained a smaller mass black hole. This leaves us with an odd situation. If both of these newly discovered black holes have similar mass, shouldn't the merging pair also be of similar mass? If so, how could a minor merger be the answer? "The final stages of the black-hole merging process, through binary black holes and final collapse into a single black hole with gravitational wave emission, are consistent with the sub-light-year separation inferred from the optical spectra and light-variability of two such quasars. The double active nuclei of a few nearby galaxies with disrupted morphology and intense star formation demonstrate the importance of major mergers of equal-mass spiral galaxies in this evolution." says Fabbiano. "Minor mergers of a spiral galaxy with a smaller companion should be a more common occurrence, evolving into spiral galaxies with active massive black-hole pairs, but have hitherto not been seen. The regular spiral morphology and predominantly old circum-nuclear stellar population of this galaxy, and the closeness of the black holes embedded in the bulge, provide a hitherto missing observational point to the study of galaxy/black hole evolution." Lay down your bets, gentlemen… It seems the game changes each time it is played! Original Story Source: Chandra News. For Further Reading: A close nuclear black-hole pair in the spiral galaxy NGC 3393.
Tanzania in 1990
Dates of Independence: Tanganyika, 9 December 1961; Zanzibar, 10 December 1963
Population: 23.1 million
Effects of South African destabilisation, 1980-88 (Source: United Nations Economic Commission for Africa)
- 25 000 indirect war-related deaths
- 75 000 Mozambican refugees (1988)
- excess defence spending US$500 million, including 4000 troops in Mozambique
- export trade losses US$75 million
The United Republic of Tanzania was formed in 1964 when the newly independent territories of Tanganyika and Zanzibar merged into one nation. Prior to the colonial period, political organisation ranged from centralised chiefdoms to small-scale chiefless societies. In the nineteenth century, long-distance trade increased, particularly in ivory and slaves, and traders from the coast spread Arab culture and Islamic beliefs inland. From 1884 to 1919 mainland Tanzania was a German colony, part of German East Africa. Railways were built, German settlers were encouraged and new cash crops were introduced. From 1919 to 1961 Tanganyika was a British Trust Territory, and from 1890 to 1963 Zanzibar was a British Protectorate. Under the British, extensive white settlement was not encouraged and development of any kind was slow and uneven. The Tanganyika African National Union (TANU) was formed in 1954 and gained mass support during the succeeding years, leading rapidly and peacefully to independence. Tanzania became a Republic under the Presidency of Julius Nyerere. Tanzania's reputation as a radical and anti-capitalist regime, though it has deeper roots, goes back to the Arusha Declaration of 1967 with its assertion of egalitarianism and national self-reliance. But economic problems are acute for such a large and poorly resourced country.
We picked up The Tiny Seed by Eric Carle at the library and the girls are really enjoying it. You know the Carle style, with the amazing pictures and fun text. This one also has lots of great information about how a seed travels, what can happen to seeds to make them not grow into plants and then finally how a seed grows into a BIG plant. After we read the book we headed outside to hunt for seeds. When we saw the big flowers in our backyard Sam instantly got excited that they were like the ones in the book. Then we saw the seed pods and had a lot of fun pulling them off and opening them up. We tested out whether or not the wind would carry them away like in the book, but the wind failed to move these seeds. Sam still wanted to look for more so we looked more closely around our yard and found some pinecones. We broke these apart and found the seeds inside here too. It was a fun little backyard adventure and a great way to complement the Carle book. To expand on this more at home or in the classroom you could:
- Gather more seeds and allow your child some time to craft with the seeds.
- Bring more pods, pinecones and other seeds into the classroom or house and really let them break them apart and examine the seeds up close.
- On a windy day really compare which seeds move the fastest and which drop straight down.
- For seeds that don't blow in the wind you could experiment with how else they might get moved around; do they stick to clothing, float on water, could an animal or human eat them, etc.
- If your child shows interest in edible seeds take a look around your kitchen with them to see where else you can find seeds; grains, rice, corn, sunflower seeds, peanuts, etc.
- For long-term learning plant some seeds in a clear container, making sure to put at least one seed right up against the edge of the container so that you can easily watch it change and grow over time.
Have fun with seeds!

The Nitty Gritty! I get asked all the time how I have time to think and plan the activities that I do with the girls. The fun part for me is that I don't really plan at all. Like for today, Sam found the book at the library and we really enjoyed it. So I thought about how we could expand on that. The outside part of our learning only lasted probably 10-15 mins. Sometimes what we do lasts much longer but I never force the girls' interest and try my best to follow their lead with what they want to do. When the pull of the swings takes over and they run off I don't yell for them to come back. I always want them to have fun with our learning and if it doesn't last that long I am okay with it. So, don't ever feel like a failure if your outdoor learning does not last that long. Even the shortest of moments can make a big impact on your little one!
THE COSMIC MICROWAVE RADIATION BACKGROUND

PART ONE

Now we come to a different kind of astronomy, to a story that could not have been told a decade ago. We will be dealing not with observations of light emitted in the last few hundred million years from galaxies more or less like our own, but with observations of a diffuse background of radio static left over from near the beginning of the universe. The setting also changes, to the roofs of university physics buildings, to balloons or rockets flying above the earth's atmosphere, and to the fields of northern New Jersey. In 1964 the Bell Telephone Laboratory was in possession of an unusual radio antenna on Crawford Hill at Holmdel, New Jersey. The antenna had been built for communication via the Echo satellite, but its characteristics—a 20-foot horn reflector with ultralow noise—made it a promising instrument for radio astronomy. A pair of radio astronomers, Arno A. Penzias and Robert W. Wilson, set out to use the antenna to measure the intensity of the radio waves emitted from our galaxy at high galactic latitudes, i.e., out of the plane of the Milky Way. This kind of measurement is very difficult. The radio waves from our galaxy, as from most astronomical sources, are best described as a sort of noise, much like the "static" one hears on a radio set during a thunderstorm. This radio noise is not easily distinguished from the inevitable electrical noise that is produced by the random motions of electrons within the radio antenna structure and the amplifier circuits, or from the radio noise picked up by the antenna from the earth's atmosphere. The problem is not so serious when one is studying a relatively "small" source of radio noise, like a star or a distant galaxy. In this case one can switch the antenna beam back and forth between the source and the neighboring empty sky; any spurious noise coming from the antenna structure, amplifier circuits, or the earth's atmosphere will be about the same whether the antenna is pointed at the source or the nearby sky, so it would cancel out when the two are compared. However, Penzias and Wilson were intending to measure the radio noise coming from our own galaxy—in effect, from the sky itself. It was therefore crucially important to identify any electrical noise that might be produced within their receiving system. Previous tests of this system had in fact revealed a little more noise than could be accounted for, but it seemed likely that this discrepancy was due to a slight excess of electrical noise in the amplifier circuits. In order to eliminate such problems, Penzias and Wilson made use of a device known as a "cold load"—the power coming from the antenna was compared with the power produced by an artificial source cooled with liquid helium, about four degrees above absolute zero. The electrical noise in the amplifier circuits would be the same in both cases, and would therefore cancel out in the comparison, allowing a direct measurement of the power coming from the antenna. The antenna power measured in this way would consist only of contributions from the antenna structure, from the earth's atmosphere, and from any astronomical sources of radio waves. Penzias and Wilson expected that very little electrical noise would be produced within the antenna structure.
However, in order to check this assumption, they started their observations at a relatively short wavelength of 7.35 centimeters, where the radio noise from our galaxy should have been negligible. Some radio noise could naturally be expected at this wavelength from our earth's atmosphere, but this would have a characteristic dependence on direction: it would be proportional to the thickness of atmosphere along the direction in which the antenna was pointed—less toward the zenith, more toward the horizon. It was expected that, after subtraction of an atmospheric term with this characteristic dependence on direction, there would be essentially no antenna power left over, and this would confirm that the electrical noise produced within the antenna structure was indeed negligible. They would then be able to go on to study the galaxy itself at a longer wavelength, around 21 centimeters, where the galactic radio noise was expected to be appreciable. (Incidentally, radio waves with wavelengths like 7.35 centimeters or 21 centimeters, and up to 1 meter, are known as "microwave radiation." This is because these wavelengths are shorter than those of the VHF band used by radar at the beginning of World War II.) To their surprise, Penzias and Wilson found in the spring of 1964 that they were receiving a sizable amount of microwave noise at 7.35 centimeters that was independent of direction. They also found that this "static" did not vary with the time of day or, as the year went on, with the season. It did not seem that it could be coming from our galaxy; if it were, then the great galaxy M31 in Andromeda, which is in most respects similar to our own, would presumably also be radiating strongly at 7.35 centimeters, and this microwave noise would already have been observed. Above all, the lack of any variation of the observed microwave noise with direction indicated very strongly that these radio waves, if real, were not coming from the Milky Way, but from a much larger volume of the universe. Clearly, it was necessary to reconsider whether the antenna itself might be producing more electrical noise than expected. In particular, it was known that a pair of pigeons had been roosting in the antenna throat. The pigeons were caught; mailed to the Bell Laboratories Whippany site; released; found back in the antenna at Holmdel a few days later; caught again; and finally discouraged by more decisive means. However, in the course of their tenancy, the pigeons had coated the antenna throat with what Penzias delicately calls "a white dielectric material," and this material might at room temperature be a source of electrical noise. In early 1965 it became possible to dismantle the antenna throat and clean out the mess, but this, and all other efforts, produced only a very small decrease in the observed noise level. The mystery remained: Where was this microwave noise coming from?
Glycoproteins are proteins that contain covalently attached sugar residues. The hydrophilic and polar characteristics of sugars may dramatically change the chemical characteristics of the protein to which they are attached. The addition of sugars is often required for a glycoprotein to function properly and reach its ultimate destination in the cell or organism. Glycoproteins are frequently present at the surface of cells, where they function as membrane proteins or as part of the extracellular matrix. These cell surface glycoproteins play a critical role in cell–cell interactions and the mechanisms of infection by bacteria and viruses. There are three types of glycoproteins based on their structure and the mechanism of synthesis: N-linked glycoproteins, O-linked glycoproteins, and nonenzymatically glycosylated glycoproteins.

N-linked glycoproteins are synthesized and modified within two membrane-bound organelles in the cell, the rough endoplasmic reticulum and the Golgi apparatus. The protein component of the glycoprotein is assembled on the surface of the rough endoplasmic reticulum by the sequential addition of amino acids, creating a linear polymer of amino acids called a polypeptide. Twenty different amino acids can be used for the synthesis of polypeptides. The specific order of the amino acids in the polypeptide is critical to its function and is referred to as the amino acid sequence. One of the twenty amino acids used for the synthesis of polypeptides, asparagine (C4H8N2O3), is essential for the synthesis of N-linked glycoproteins. N-linked glycoproteins have carbohydrates attached to the R side chain of asparagine residues within a polypeptide. The carbohydrate is always located in amino acid sequences where the asparagine is followed by some other amino acid and then a serine or threonine residue (-Asn-Xaa-Ser/Thr). Carbohydrate is not attached to the polypeptide one sugar at a time. Rather, a large preformed carbohydrate containing fourteen or more sugar residues is attached to the asparagine as the protein is being translated in the rough endoplasmic reticulum. The carbohydrate on the glycoprotein is then modified by enzymes that remove some sugars and attach others as the newly formed glycoprotein moves from the rough endoplasmic reticulum to the Golgi apparatus and other locations in the cell. Many N-linked glycoproteins eventually become part of the cell membrane or are secreted by the cell.

O-linked glycoproteins are usually synthesized by the addition of sugar residues to the hydroxyl side chain of serine or threonine residues in polypeptides in the Golgi apparatus. Unlike N-linked glycoproteins, O-linked glycoproteins are synthesized by the addition of a single sugar residue at a time. Many O-linked glycoproteins are secreted by the cell to become a part of the extracellular matrix that surrounds it.

Nonenzymatic glycosylation, or glycation, creates glycoproteins by the chemical addition of sugars to polypeptides. Since this type of glycosylation is nonenzymatic, the factors that control glycosylation are simply time and the concentration of sugar. Older proteins are more glycosylated, and people with higher circulating levels of glucose experience higher levels of nonenzymatic glycosylation. This is the basis of the glycosylated hemoglobin A1c diagnostic test used for the monitoring and long-term maintenance of blood sugar levels in diabetics.
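Returning to the -Asn-Xaa-Ser/Thr rule for N-linked sites: the pattern is simple enough to scan for with a few lines of Python (a sketch only; the sequence is invented for illustration, and real glycosylation-site prediction weighs context that this simple pattern ignores):

import re

# Scan a polypeptide (one-letter amino acid codes) for potential
# N-glycosylation sites: Asn (N), then any amino acid (Xaa),
# then Ser (S) or Thr (T).
sequence = "MKNASGTNVSLLNQTK"  # hypothetical example sequence

# A lookahead is used so that overlapping matches are also reported.
for match in re.finditer(r"(?=(N[A-Z][ST]))", sequence):
    print(match.start(), match.group(1))
# Prints: 2 NAS, 7 NVS, 12 NQT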
William Lynch (Lynch law) Captain William Lynch (1742–1820) was a man from Pittsylvania County, Virginia, who claimed to be the source of the terms "lynch law" and "lynching". He is not the William Lynch who allegedly made the William Lynch speech in 1712, as the date on this apocryphal speech precedes Lynch's birth by thirty years. The term "Lynch's Law" was used as early as 1782 by a prominent Virginian named Charles Lynch to describe his actions in suppressing a suspected Loyalist uprising in 1780 during the American Revolutionary War. The suspects were given a summary trial at an informal court; sentences handed down included whipping, property seizure, coerced pledges of allegiance, and conscription into the military. Charles Lynch's extralegal actions were retroactively legitimized by the Virginia General Assembly in 1782. In 1811, Captain William Lynch claimed that the phrase "Lynch's Law", by then famous, actually came from a 1780 compact signed by him and his neighbours in Pittsylvania County, Virginia, to uphold their own brand of law independent of legal authority. The obscurity of the Pittsylvania County compact compared to the well-known actions of Charles Lynch casts doubt on it being the source of the phrase. According to the American National Biography: What was purported to be the text of the Pittsylvania agreement was later printed in the Southern Literary Messenger (2 [May 1836]: 389). However, the Pittsylvania County alliance, if it was formed at all, was so obscure compared to the well-known suppression of the uprising in southwestern Virginia that Charles Lynch's use of the phrase makes it seem most probable that it was derived from his actions, not from William Lynch's. - Brent Tarter. "Lynch, Charles". American National Biography Online, February 2000. - Christopher Waldrep, The Many Faces of Judge Lynch: Extralegal Violence and Punishment in America, Macmillan, 2002, p. 21.
In the Netherlands, land is used intensively to produce food and non-food crops such as flower bulbs. The climate favours fungal infection, which increases the need for fungicide in potato and fruit production. Consequently, annual pesticide use is high compared to other European countries. Non-target organisms, including plants, fungi and insects, are damaged by pesticides as a result of spray drift, leading to a loss of biodiversity in rural areas. Since 1991, policy in the Netherlands has achieved a reduction in pesticide use of more than 50 per cent. The focus is now on reducing spray drift and its effects. Best-practice techniques, such as leaving unsprayed, crop-free borders around fields and using sprayers that have low-drift nozzles, can be effective. Much research has focused on the impact of these measures on water contamination. However, Dutch researchers have now estimated the side-effects of pesticide drift on terrestrial biodiversity and the potential to reduce these effects in the future. The researchers modelled the relationship between distance from pesticide-treated fields and the amounts of pesticide deposited, via drift, in non-target areas. To measure the impact spray drift had on non-target species, they examined the percentage of areas in which the EC50 (the concentration of a pesticide at which 50 per cent of the organisms are adversely affected) was exceeded. Using these models they analysed three drift scenarios: (i) the recent past (1998); (ii) the present (2005); and (iii) the near future (2010). It was demonstrated that in the recent past (1998) the EC50 was exceeded for herbicides in 59 per cent of areas adjacent to treated fields, affecting non-target plants. Insecticides and fungicides affected non-target insects and fungi in almost 30 per cent of these areas, at the EC50 level. In 2005, herbicides still affected non-target species in 41 per cent of areas adjacent to treated fields, despite the use of low-drift nozzles and the introduction of unsprayed borders. In the future scenario (2010), if non-crop borders were increased to 2.25 m for potatoes (compared to 1.5 m in 2005) and 1 m for other crops (compared to 0.5 m in 2005), it was predicted that pesticide impacts could be cut to zero. A new Thematic Strategy on the Sustainable Use of Pesticides has been adopted, and a European policy framework directive for sustainable pesticide use is under development. This study suggests that increasing unsprayed buffer zones around crops is critical to the success of any new strategy to prevent the harmful impact of pesticides.
Math 009 Mutinies
Lecture on mutinies throughout maritime history, ranging from the infamous Mutiny on the Bounty to HMS Hermione, with a specific focus on mutinies that occurred in local regions.
Instructor: PowerPoint Presentation
Students: Journals, writing implements
- Batavia Wreck
- HMS Bounty
- HMS Hermione
- Spithead and Nore
- Press Gang
All of Maritime History began with the observation of the oceans and rivers by man. Further specific lectures can be assigned module letters as the need arises.
Seafaring Lore and Legend, Peter D. Jeans 2007
Introduce the terms and concepts via PowerPoint, using images and bulleted lists to convey the information. Dialogue with the students in a question-and-answer format. Explain the subject matter and resource materials, with an eye on multimedia and hands-on instruction when materials are available.
- Bullet list of intended topics for this module
Direction on how the instructor can conclude the module
A "short term memory". A Cache is a (relatively) tiny amount of (relatively) fast memory, which is used to accelerate access to (relatively) slow memory by temporarily storing frequently accessed parts of the data from the slow memory in the fast memory. This is called "caching". Accesses to data are transparently checked for whether the requested location has been stored in the Cache, ie whether it is "cached". If so, the request can be satisfied from the fast memory, providing a significant time savings. This is called a "cache hit". Failure to do so is called a "cache miss". F.ex, in the case of a HardDisk Cache in RAM, a cache hit is about 1,000 faster than having to actually access the HardDisk. In the case of a CPU's memory Cache, the disparity is not quite so huge, but a cache miss is still 30 - 100 times slower than a cache hit. Because the clock speed of CPUs is increasing at much faster rates than that of DRAM, current CPUs have up to 3 layers of Caches, each slower (and therefor cheaper) but much larger than the previous level. This way, only about 1-3% of all memory accesses actually have to be served directly from DRAM.
Sometimes called rabbit fever, tularemia is caused by the Francisella tularensis bacteria. It is spread to humans through the bites of infected insects—most often, ticks, mosquitoes, and deerflies. It can also be passed to people by direct contact with infected animals, including rabbits, cats, hares, and muskrats. Your child can get tularemia by consuming contaminated food or water, eating inadequately cooked meat, or breathing in the bacteria. It cannot be transmitted from person to person. Symptoms generally begin after an incubation period of 3 to 5 days, though this period can be as long as 21 days. According to the Centers for Disease Control and Prevention, there are about 200 human cases of tularemia reported per year in the United States, mostly in rural regions. Most cases occur during the summer months, coinciding with tick season.

Signs and Symptoms
Tularemia can cause illnesses that vary depending on how the infection was spread. Most commonly, a painful ulcer develops in the skin at the site of the insect bite, with tender enlarged lymph glands in the groin or armpits. Sometimes the glands may enlarge with no apparent bite. Infection from food or water begins in the mouth with a severe sore throat, mouth sores, and enlargement of the neck lymph glands. With this form of the illness, your child may develop vomiting, diarrhea, and abdominal pain. Illness from inhalation of the bacteria mainly results in fever, chills, muscle aches, and a dry cough. When the infection enters through the eyes, it results in swollen and red eyes with tender lymph glands in front of the ears. In many cases, tularemia is seen as a combination of several of these symptoms.

When to Call Your Pediatrician
Call your pediatrician immediately if your child develops an illness that could be a sign of tularemia, especially if he has a high fever, chills, a skin ulcer, or enlarged lymph glands. Prompt treatment is very important with this infection.

How Is the Diagnosis Made?
Your pediatrician will take samples of your child's blood and have them tested in the laboratory for antibodies to tularemia. Sometimes the bacteria can be grown from the blood or infected sites. The doctor will treat your child with an antibiotic such as streptomycin or gentamicin. Treatment usually lasts for a 10-day period, although sometimes longer for more serious cases. Early treatment of the infection is important.

What Is the Prognosis?
When children are treated with the appropriate antibiotics, their infection will quickly clear up, although relapses occasionally occur. If the infection goes completely untreated, however, it can be life threatening in some cases.

Prevention
You can protect your child from the bites that cause tularemia by making sure he wears protective clothing. Also, inspect your child frequently for ticks and remove any that may have attached themselves to his skin or scalp. The use of insect repellents, particularly those that contain the chemical DEET, is also recommended. Use gloves, masks, and goggles when skinning or dressing wild animals. Other preventive measures include:
- Instruct your child not to handle sick or dead animals.
- Make sure all meat is cooked thoroughly before feeding it to your youngster.
- Ensure that drinking water comes from an uncontaminated source.
A vaccine is not available to protect against tularemia, although interest in vaccine development has been growing since concerns have been raised about the use of the F tularensis bacteria as a bioterrorist weapon.
The organism could be spread through an airborne route and breathed in, in which case the resulting infection would need to be treated quickly with antibiotics.
NOVA scienceNOW: Fuel Cells

Write the definition of energy on the board. (Energy is the capacity to do work.) Make a two-column chart on the board similar to the one below and have students brainstorm examples for each form of energy.

Form of Energy: Example
Chemical: food being metabolized or fuel being burned
Light: light from a light bulb

A fuel cell is a type of battery and has the same parts as a household battery. From the Web or in a textbook, find a diagram of a battery for students to look at. Ask students how they think a battery generates electricity. Discuss the following: A battery converts chemical energy into electrical energy. Batteries have two electrodes—an electron donor and an electron acceptor. The anode (the electron donor) is made of a material that gives up electrons easily, and the cathode (the electron acceptor) is made of a material that accepts electrons easily. The anode and cathode are surrounded by a mix of chemicals (called electrolytes) that help produce an electric charge. Different kinds of batteries use different electrolytes. Ask students to describe characteristics important for batteries (e.g., long-lasting, inexpensive, light in weight, safe, and easy to use).

Similar to household batteries, fuel cells power electrical circuits. Remind students that electricity is the flow of electrons and that a circuit provides a path for electrons to flow from the anode to the cathode. On the board, draw a simple circuit that includes a battery, bulb, and wires. (Alternatively, find a circuit diagram on the Web or in a textbook.) Trace how electrons flow through the circuit. (Electrons leave the battery's negative, electron-donating terminal [anode], travel through the wires toward the positive, electron-accepting terminal [cathode]. Along the way, they pass through the filament in the bulb.) Have students annotate the diagram to show where energy is being converted from one form to another.

Part of the Circuit: Energy Conversion Taking Place
Battery: chemical to electrical
Bulb filament: electrical to light, and to heat as a waste product

Fuel cells use hydrogen and oxygen to generate electricity. Ask, "What is the most common substance that contains hydrogen and oxygen?" (Water) Ask students to name some gases that are presently used as a fuel source (e.g., propane, natural gas) and the precautions that must be taken when using them as a fuel. (They require proper handling and storage to prevent leaks, fires, and explosions.) As an extension for students studying chemistry, have them locate hydrogen and oxygen on the periodic table and state their atomic mass and number. Draw students' attention to the number of electrons in the outer shell of both elements and discuss how these electrons influence their reactivity. (Both gases are highly reactive. To achieve a more stable atomic state, hydrogen readily donates its electron and oxygen readily accepts two electrons.) Discuss how hydrogen and oxygen bond covalently to produce water.

Have students list ways a fuel cell and household battery are alike and different. (Batteries and fuel cells both have anodes and cathodes and produce electricity. However, their chemicals differ.) Print the NOVA scienceNOW fuel cell diagram (pbs.org/wgbh/nova/sciencenow/3210/01-fcw.html) and discuss with students how it works. Divide the class into teams and ask them to construct model fuel cells that include an anode, cathode, proton-exchange membrane (conducts positively charged ions and blocks electrons), and catalyst (material that facilitates the reaction between oxygen and hydrogen).
Supply teams with common materials to make their models (e.g., foam sheets, cardboard, plastic wrap, foil, pipe cleaners, and string). Have teams display their models and explain how their fuel cells work.

Have students visit the clickable fuel-cell car on the NOVA scienceNOW Web site (pbs.org/wgbh/nova/sciencenow/3210/01.html), or download the printable version of the car. Ask student pairs to identify the energy conversions that occur in a fuel-cell car and share their list of conversions. Draw a two-column chart on the board and have students brainstorm the pros and cons of hydrogen fuel cells, including where they would be most and least viable. (Providing a reliable supply of hydrogen and oxygen for small-scale or mobile uses, such as cars, necessitates solving a host of storage and distribution issues related to these reactive gases. These include developing a system of high-pressure tanks and pipelines, finding convenient ways to fill a fuel cell's gas tanks, and minimizing the risk of burns and explosions. Producing hydrogen and oxygen on-site avoids many of the challenges associated with transporting and storing these gases.) Assign students a place where fuel cells could be used: a car, train, home, apartment complex, or factory. Ask them to create an advertising poster that promotes the use of fuel cells as an energy source for their assigned location.

Robert Krulwich made the statement that there is plenty of hydrogen on Earth, but that it is always stuck to other stuff (i.e., other atoms). "It's in the foods we eat, the fuels we burn, the beverages we drink, and the plastics and plant materials we use to construct our world. In fact, hydrogen is so chemically reactive that it does not naturally occur as a pure element on Earth." To give students a sense of how challenging it is to produce hydrogen and oxygen by splitting water, have them do one of the electrolysis activities suggested in the Links and Books section. These include:
- Collect oxygen and hydrogen using a battery or DC power supply to provide the energy.
- Collect oxygen and hydrogen using a hand-crank generator to provide the energy.
Students will experience—and likely be surprised by—just how much energy is required to produce a small amount of gas.

Links and Books
History of Fuel Cells: Explains how different fuel cells work and offers historical information about each type, including proton-exchange membranes.
Fuel Cells 2000: Offers information on fuel-cell basics and includes a section on hydrogen fuel.
Scientific American Frontiers—Electrolysis activity: Presents an electrolysis activity recommended for grades 9-12.
Energy by Marek Walisiewicz. Dorling Kindersley, 2002. Focuses on the future of energy technology, including hydrogen fuel cells.
Jack Challoner. Dorling Kindersley, 1998. Provides an overview of energy and how it is used.
Status: In operation Studying how the solar wind affects the Earth, Cluster spacecraft are making the most detailed investigation yet of how the Sun and Earth interact. Cluster is a constellation of four spacecraft flying in formation around Earth. They relay the most detailed information ever about how the solar wind affects our planet in three dimensions. The solar wind (the perpetual stream of subatomic particles given out by the Sun) can damage communications satellites and power stations on Earth. The original operational lifetime of the Cluster mission ran from February 2001 to December 2005. However, in February 2005, ESA approved a mission extension from December 2005 to December 2009. The four Cluster spacecraft have spent several years passing in and out of our planet's magnetic field. Their mission is to complete the most detailed investigation ever made into the ways in which the Sun and Earth interact. The Sun emits the solar wind, which is a thin, hot, ionised gas that carries particles and magnetic fields outward from the Sun. The Earth is shielded from the full blast by its magnetosphere, the region around our planet controlled by its magnetic field. Some solar wind descends into Earth's upper atmosphere through the polar cusps, funnel-like openings in the magnetosphere at the poles. These energetic particles excite atoms and molecules in the upper atmosphere to create the Northern and Southern Lights (the auroras). The part of a planetary magnetosphere that is pushed in the direction of the solar wind is known as the magnetotail. Cluster will determine the physical processes involved in the interaction between the solar wind and the magnetosphere by visiting key regions like the polar cusps and the magnetotail. The four Cluster spacecraft map the plasma structures contained in these regions in three dimensions. The simultaneous four-point measurements also allow close studies of plasma quantities in both space and time. During periods of high solar activity (which cycles every 11 years), the solar wind can be particularly energetic. This can have a dramatic effect on human activities, disrupting electrical power and telecommunications or causing serious problems in the operation of satellites, especially those in geostationary orbit. Subtle changes to the weather on Earth also occur during these times. Watching the effects of this increased activity during these periods is one of the main tasks of Cluster. Understanding the interaction between the solar wind and the magnetosphere, and how the plasma levels of the magnetosphere are affected, is important. Cluster will help us to prepare for the effects of sudden bursts of solar energy here on Earth. The Cluster spacecraft resemble giant 'Lego' sets, assembled from thousands of individual blocks. Each one is shaped like a giant disc, 1.3 metres high and 2.9 metres wide, with a cylinder in the centre. Six spherical fuel tanks are attached to the outside of this central cylinder. The fuel they carry accounts for more than half the launch weight of each spacecraft. Most of the fuel is consumed soon after launch and in complex manoeuvres to reach their operational orbits. Each spacecraft also carries eight thrusters for smaller changes of orbit. Around the central cylinder is the main equipment platform. Electrical power comes from six curved solar panels attached around the outside of the platform. Five batteries are used for power supply during the four-hour-long eclipses when the spacecraft enter Earth's shadow.
Rod-shaped booms open out once Cluster reaches orbit. There are two antennae for communications, two sensors, and four wire booms that operate when the spacecraft begins to spin. These measure the changing electrical and magnetic fields around each spacecraft.

At each launch, two Cluster satellites were placed in an elliptical orbit whose height varied from 200 to 18 000 kilometres above Earth. The two satellites of each launch were then released one after the other and used their own on-board propulsion systems to reach the final operational orbit (19 000 to 119 000 kilometres from the planet). The first pair of Cluster satellites lifted off on 16 July 2000, the second pair one month later. Staggering the launches in this way reduced the number of mission-control staff needed at the European Space Operations Centre (ESOC) in Darmstadt, Germany. Once the booster reached the correct altitude after liftoff, the Fregat payload-assist module and its two Cluster spacecraft were released. The Fregat main engine fired almost immediately to achieve a circular orbit approximately 200 kilometres high. About an hour later, the Fregat engine fired again to inject the spacecraft into an elliptical orbit, and the two satellites were released one after the other. The main engine of each Cluster spacecraft then performed six major manoeuvres, consuming the large amount of on-board fuel (about half of each satellite's launch mass).

The Cluster mission was first proposed in November 1982, when the idea was developed into a proposal to study the 'cusp' and 'magnetotail' regions of the Earth's magnetosphere with a polar-orbiting mission. By 1996, Cluster was ready for launch. The mission was expected to benefit from a 'free' launch on the first test flight of the newly developed Ariane-5 booster. After several minor delays, Ariane-501 lifted off from Kourou, French Guiana on 4 June 1996, carrying its payload of four Cluster satellites. Unfortunately, intense aerodynamic loads resulted in the break-up of the launcher and the initiation of its automatic destruct system. To recover some of the unique science of the mission, ESA decided to build a fifth Cluster satellite (named 'Phoenix'), equipped with flight spares of the experiments and subsystems prepared for the original mission. Phoenix was expected to be fully integrated and tested by mid-1997, opening the way for a launch later that year. However, awareness grew that the scientific objectives of the Cluster mission could not be met by a single spacecraft, and there were proposals to rebuild three or four full-size Cluster spacecraft alongside Phoenix. After a preliminary study, it was decided that a Soyuz rocket could launch a pair of Cluster spacecraft, although the very eccentric orbit required a new upper stage. Two test flights of this new upper stage were successfully carried out at the beginning of 2000, and about six months later Cluster was launched by a Soyuz-Fregat launcher from Baikonur Cosmodrome, Kazakhstan.

On 10 February 2005, the ESA Science Programme Committee unanimously approved the extension of the Cluster mission, pushing back the end date from December 2005 to December 2009. This extension allowed the first simultaneous measurements of space plasmas at both small and large scales, and the sampling of geospace regions never before crossed by four spacecraft flying in close formation. In October 2009 the mission was extended until the end of 2012.
Prime contractor for the original (lost) Cluster and replacement Cluster satellites was Dornier Satellitensysteme GmbH (now Astrium), Friedrichshafen, Germany, the leader of an industrial consortium involving 35 major contractors from all of the ESA member countries and the United States. Each spacecraft carries an identical set of 11 instruments to investigate charged particles and electric and magnetic fields. These were built by European and American instrument teams led by Principal Investigators. The Cluster scientific community includes the ESA Project Scientist, 11 Principal Investigators, and more than 250 Co-Investigators from ESA Member States, the United States, Canada, China, the Czech Republic, Hungary, India, Israel, Japan, and Russia.

Last update: 14 February 2013
September 16, 2012

To discuss the Classic and Romantic elements in Beethoven, one must first understand the definitions of "Classicism" and "Romanticism". There is only one historical period that embraces the two styles, with two tendencies, one more classicizing, the other more romanticizing (Blume). On this account, there is a single era that may be called the "Classic-Romantic" period. Beethoven is the figure who stands between the Classic and Romantic styles, the composer who links the two together and serves as the transition between them. In Classic music, one can find humanity, the creative and individual personality, space for imagination from the audience's point of view, and music for music's sake – the "beauty". In Romantic music, by contrast, the audience is given a passive role, since the composer has already predetermined almost everything (with detailed dynamic, tempo, and expression markings, and so on), and the rise of the conductor means that the interpretation of orchestral music is delivered to the audience ready-made.

In Beethoven's music, one can see essential Classical and Romantic elements combined. In rhythm, meter, and tempo, one can still find the differentiation of one rhythm from another (a Classical element) as well as passages where one rhythm grows smoothly into the next. Beethoven still uses the regular eight-bar period (a Classical element), especially in earlier works, but he likes to distort and veil it by fragmentation and erosion (e.g. String Quartet Op.131, No.5). He used sonata form, but transformed it in innovative ways. In tonality and harmony, Beethoven uses many more minor keys than other "Classical" composers like Haydn and Mozart. This is exceptional, since "Classical" composers seldom used minor tonality, or reserved it for very special moments. The folk-dance element is essential in the Classical style, since it represents a sense of universality, a common element known to everybody; both Beethoven and Haydn use it a great deal, e.g. in Beethoven's String Quartet Op.130, No.13 (IV. Danza alla tedesca: an allemande-style dance in triple meter and binary form) and Haydn's Symphony No.88 (III. Minuetto & Trio, with bagpipe sound and folk-music elements). Beethoven is truly a Classical composer; together with Haydn and Mozart, he binds the essential musical elements of the time coherently together, sharing a common musical language drawn from the stylistic character of the period (Rosen). Beethoven, even though he adds many innovative means to his compositions, still adheres to the basic Classical language and keeps the sonata form, only pushing the boundaries of Classic music to a larger and wider extent. For instance, in Symphony No.3, sonata form is found in the first movement, with the important devices of the form (i.e. two main themes, exposition, development, and recapitulation). It is also found in the first movement of Symphony No.5. However, one immediately notices that the form is combined with dissonance, violent accents, and surprising changes of harmony. Rosen points out that Haydn usually uses monothematic material or two themes of similar character in his symphonies, while Beethoven likes to use two contrasting themes with different characters and dynamics, or variations of one single theme, in his symphonies (and in other genres such as piano works and string quartets). Indeed, Beethoven learned from Haydn how to develop motives and sustain drama in cyclic form in symphonies. However, Haydn,
like other "Classic" composers, builds up the climax gradually on the way to the end, whilst Beethoven builds the dramatic climax at the very center of the whole piece, creating a genuine symmetry. Before Beethoven, symphonies were usually in three-movement form (fast-slow-fast) or in the Italian manner (Plantinga). But Beethoven would add one more movement and change the usual "Minuet and Trio" (the third movement of four, or the second of three) into a "scherzo" to sustain drama and power. By contrast, Haydn retained the Minuet and Trio throughout his symphonic works. Compared with Beethoven, Haydn's use of tonalities and forms is much simpler. One must say that "simpler" does not mean "more naïve" or "inferior". Haydn tries to maintain the essence of the early Classic style – elegance, charm, grace, and simplicity (Blume) – by using the most basic functional harmony and tonalities so that the audience understands very clearly, though at times he surprises them with sudden rests and forte passages. Beethoven, on the other hand, likes to arouse the audience's interest with surprising modulations: in the first movement of the "Pathétique" Sonata, for example, C minor should modulate to E-flat major, but E-flat minor is presented instead. He also uses thundering basses (in broken octaves or chords), and the violent contrast between piano and forte is common in his works, e.g. in the opening of the first movement of Symphony No.5. If Beethoven is the figure who stands between the Classic and Romantic styles, Haydn is the one between the late Baroque and early Classic. His Symphonies No.7 "Le Midi" and No.73 "La Chasse" still carry reminiscences of the late Baroque concertante style. In Symphony No.73, for example, one can still find a harpsichord part as well as the influence of Italian comic opera.

The change in historical and social background also contributed to the change in Beethoven's musical style during the composer's lifetime (Plantinga). Beethoven became a truly Romantic composer after he moved to Vienna in 1792. Around the 1790s, the Industrial Revolution and the French Revolution had an enormous impact in Europe – the blossoming of commerce and industrialization and the rise of a cultivated middle class. Public concerts gradually became common. The ideas of the French Revolution (equality, liberty, and fraternity) and of the Enlightenment thinkers were instilled in Beethoven by Neefe, his teacher in Bonn. Apart from the increase in public concerts, the change in the patronage system affected the musical scene most. The court and nobles no longer supported composers as master to servant; instead, individual nobles and the bourgeoisie supported composers, who were no longer "servants" but "individual" and "independent" musicians. Beethoven was one of this kind: free to compose anything he wanted, he would even argue with publishers about his rights over publishing and editions (even though Haydn served the Esterházy family, he too was quite free to compose and had an orchestra to experiment with). In this true sense, Beethoven bears the "Romantic" essence.

All in all, there is only one period in Beethoven, with two different musical styles – Classic and Romantic – as it is hard to draw a line separating the two. What matters more is deciding in which context and against which background we can understand Beethoven most fully. That is what we should be concerned with when studying Beethoven and his music.
Friedrich Blume. Classic and Romantic Music: A Comprehensive Survey.
Leon Plantinga. Romantic Music: A History of Musical Style in Nineteenth-Century Europe.
Charles Rosen. The Classical Style: Haydn, Mozart, Beethoven.
Step away from the villages and idyllic beaches of Hawaii, and you may think you’ve been transported to the moon. Walking along the lava flows of the Kilauea volcano, the landscape changes from a lush tropical paradise to one that’s bleak and desolate, the ground gray and rippled with hardened lava. That’s how Christelle Wauthier, assistant professor in the Department of Geosciences and the Institute for CyberScience at Penn State, describes it, anyway. Wauthier has been studying Kilauea volcano for several years and is getting ready to start a new project at Penn State -- one using a radar imaging technique that researchers call interferometric synthetic aperture radar (InSAR) to try to peer below its surface and learn more about why the volcano is so volatile. Kilauea is the most active of the five volcanoes that make up the island of Hawaii. It’s been erupting continuously since 1983, so far spewing 3.5 cubic kilometers of lava onto the surrounding landscape. The lava usually flows southward, but last year an eruption started creeping east toward the nearby village of Pahoa. The flow was inconsistent -- advancing anywhere from 10 yards to one-quarter mile a day -- but it was enough to cause evacuations and lots of anxiety for the residents of the small village. Wauthier says the volcano’s recent brush with the island’s inhabitants reinforced the importance of studying not just what’s happening on the surface of the volcano, but also what’s going on below. “The volcano has been erupting for 31 years, so obviously there’s a lot of magma coming from below,” said Wauthier. “There’s lots of magma moving up and out, so one of the questions we’re asking is where are all these magma sources and how do they relate to each other?” One of the keys to answering this question is found in the deformations happening on the surface of Kilauea. While a deformation is simply a change on the volcano’s exterior, what it implies goes much deeper -- there has to be something below the surface causing the change. And without X-ray glasses to diagnose what’s happening, Wauthier uses InSAR to try to piece together what might be going on. “InSAR is a remote-sensing technique that combines radar data taken from satellites to create images that show subtle movements in the ground’s surface,” said Wauthier. “In this case, the movements we’re studying are deformations on Kilauea.” To begin the process, Wauthier gathers satellite data from archived databases. She looks for information about changes in elevation from before and after a “natural hazard event” -- an eruption or earthquake, for example. Wauthier then uses this data to create two images: one from before the natural hazard event and one from after. This shows how the event changed the ground's surface. The two pictures can then be combined to create a single, much more comprehensive InSAR image called an interferogram, which uses color to represent movement. Wauthier says that while InSAR images can certainly be created from two images, she also uses a time-series approach called Multi-Temporal (MT)-InSAR when enough radar images are available. This technique uses multiple images instead of two. “This approach is much more accurate, but it also requires much more data and computing power,” Wauthier said. 
"The powerful computer clusters and IT facilities available through the Institute for CyberScience here at Penn State are tremendously helpful by providing the necessary computing power and efficiency." After Wauthier creates the InSAR images, she can begin to use them to predict what might be happening underneath Kilauea. She uses an approach called inverse modeling to estimate what caused the deformation. "Basically, we use what's happening on the surface of the volcano to find a 'best fit model' for what's happening underground," said Wauthier. "For example, if we know the ground rose here but sank over there, we'll come up with a best guess for the type of magma process -- like a magma reservoir or intrusion -- that's below." But magma processes aren't the only things that could be affecting Kilauea's volatility. The southern flank of the volcano is moving away from the island, and Wauthier says this could also be influencing the volcano's magma plumbing system and activity. Wauthier says that although the flank is slipping seaward at an average speed of 6 to 10 centimeters a year, earthquakes in the past have caused more drastic movement and have even generated tsunamis. Remote-sensing technologies like InSAR are important because they allow researchers like Wauthier to do important research without physically being on location. (Although when you're studying the Hawaiian landscape, you might want to be.) Wauthier says she would like to return to Hawaii one day, but in the meantime, she hopes the project will help uncover information that could help the people of Hawaii as well as other scientists at the U.S. Geological Survey Hawaiian Volcano Observatory. Having a better understanding of Kilauea would help researchers better grasp the behavior of other ocean island volcanoes. "Ideally, we'd like to get a much better picture of the underground magma systems and how they interact with the flank slip," she said. "The flank instabilities can cause earthquakes and tsunamis, so we'd like to be able to understand and forecast those better. Hopefully, the more we know about these natural hazards, the more we can help people anticipate and mitigate their risks."
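To make the interferogram step described above concrete, here is a minimal, self-contained sketch (our illustration on synthetic data, under simplifying assumptions; this is not Wauthier's actual processing chain) of how a wrapped-phase interferogram is formed from two co-registered single-look complex (SLC) SAR images using NumPy:

```python
import numpy as np

def interferogram(slc_before, slc_after):
    """Form a wrapped-phase interferogram from two co-registered
    single-look complex (SLC) SAR images of the same scene."""
    # Multiplying one image by the complex conjugate of the other cancels
    # the common scene phase and leaves the phase *difference* between
    # the two acquisitions.
    cross = slc_after * np.conj(slc_before)
    return np.angle(cross)   # wrapped phase in (-pi, pi]

# Toy demo: a synthetic 100x100 scene with a smooth "uplift" signal.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:100, 0:100]
deformation_phase = 2 * np.pi * np.exp(-((x - 50)**2 + (y - 50)**2) / 500)
before = np.exp(1j * rng.uniform(0, 2 * np.pi, (100, 100)))  # random scene phase
after = before * np.exp(1j * deformation_phase)              # same scene, shifted
fringes = interferogram(before, after)
print(fringes.shape, fringes.min(), fringes.max())
```

Each full phase cycle (one color fringe in a real interferogram) corresponds to about half a radar wavelength of ground motion along the satellite's line of sight, which is what makes the technique sensitive to centimeter-scale deformation.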
Writing newspaper articles Study the following guidelines for writing a newspaper article carefully. Begin with the basic information and try to answer these questions: – Who or what are you writing about? – What happened? – Why did this happen? – When did it happen? – How and where did the events take place? Write from an observer's point of view, without the personal pronouns "I" or "me". It will be fact that an apartment building has burned down, but only opinion as to how the fire started. Eyewitness reports may be used to add weight to theories. If a statement cannot be checked as fact, it may be reported in the following manner: "According to a witness at the scene, the driver appeared to lose control of the car." Every phrase, sentence, or paragraph should flow from the preceding one and carry the reader smoothly from one thought or event to the next. Some useful words to aid the transition are: also, thus, since, likewise, however, another, meanwhile, accordingly, subsequently, furthermore, etc.
Today automobiles are identified as one of the major causes of air pollution and energy depletion. Vehicles are also known for their low efficiency – in other words, high energy loss. Hybrid electric vehicles have the capacity to overcome these problems and bring about a new revolution. Any vehicle that combines two or more sources of power that can directly or indirectly provide propulsion is a hybrid. The gasoline-electric hybrid vehicle is precisely such a crossbreed between a gasoline-powered vehicle and an electric vehicle. Hybrid vehicles run on both a rechargeable battery and gasoline: in a hybrid drivetrain, power from the gasoline engine and power from a set of batteries together drive an electric motor. Hybrid electric vehicles have fewer adverse effects, and the batteries used in them (rechargeable nickel-metal hydride, lead-acid, and lithium-polymer batteries) can be disposed of without posing toxic hazards to the environment. A notable feature of hybrid vehicles is that the DC machine can run both as a motor and as a generator. In addition, hybrid vehicles make use of a system that recovers power from the momentum of the vehicle when braking (regenerative braking). Hybrid vehicles are more complicated machines than their gasoline-powered counterparts, but their higher efficiency and lower emissions make them the technology of a new era. Other aspects of these hybrid vehicles, i.e., performance, manufacturing cost, and environmental impact, are discussed in this paper.
- Overall Length : 1245 mm
- Overall Width : 851 mm
- Overall Height : 1400 mm
- Wheelbase : 1016 mm
- Ground Clearance : 184 mm
- Seating Capacity : 1
- Unladen Weight : 100 Kg
- Laden Weight : 200 Kg
- Type : 2 Stroke, Air Cooled
- Number of Cylinder(s) : 1
- Engine Capacity : 50 cc
- Maximum Output : 1.75 BHP @ 4500 RPM
- Maximum Torque : 3 N-m @ 3500 RPM
- Type : A.C. Type
- Phase : 1φ
- Power : 1 HP
- RPM : 1450 RPM
- Voltage : 230 V
- Engine : Chain Transmission
- Motor : Belt Driven
- Rear : Compression Coil Springs
- Front : Hydraulic Shock Absorbers
- Fuel Tank : 3 litres
- Complexity of design : Average
- Design Cycle : 60 Days
- Development Cycle : 60 Days (interlaced with design)
- Design and Development time : 95 days (combined)
- Fuel economy of vehicle : 40 Kmpl
- Average run per charge : N/A (50 Km – approximated)
- Average time for charge : 6 Hrs (between consecutive runs)
- Weight of vehicle : 100 Kg
- Control Type : Fully Automatic/Manual
- Body dimensions : 1245 x 851 x 1400 mm
- Fuel Type : Gasoline / Electric
- Engine Power : 1.75 BHP
- Electric Traction Power : 1 H.P.
- Seating Capacity : 1 Person (expandable to 2)
- Fuel carrying capacity : 3 liters
By using a hybrid electric engine we can not only reduce environmental pollution but also reduce the wastage of depleting fossil fuels. Though we are using an IC engine of about 30-45% efficiency, the motors and generators we use are about 70-85% efficient. This may bring the overall efficiency up to 50-70% (see the rough estimate sketched below). We can also recover the energy normally wasted in braking, which amounts to about 12-15% of the total. By taking the above measures we can increase the mileage from a mere 12-20 km/l to about 25-30 km/l, even with the air conditioner running. This makes hybrid electric vehicles a sensation.
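As a quick illustration of how those efficiency figures combine, here is a toy calculation (our sketch, not the paper's analysis; the 50/50 power split between the engine and the electric path is an assumption, and real drivetrain losses are more complex):

```python
ICE_EFF = (0.30, 0.45)    # IC engine efficiency range quoted above
MOTOR_EFF = (0.70, 0.85)  # electric motor/generator efficiency range
REGEN = (0.12, 0.15)      # fraction of drive energy recoverable from braking

def blended_efficiency(ice, motor, electric_share=0.5):
    """Crude blended efficiency when the engine and the electric path
    each supply part of the propulsion energy (assumed 50/50 split)."""
    return (1 - electric_share) * ice + electric_share * motor

low = blended_efficiency(ICE_EFF[0], MOTOR_EFF[0])
high = blended_efficiency(ICE_EFF[1], MOTOR_EFF[1])
print(f"Blended drivetrain efficiency: {low:.0%} to {high:.0%}")
# -> 50% to 65%, consistent with the 50-70% claimed above; regenerative
#    braking can return a further 12-15% of the energy normally lost.
```

On the same logic, mileage scales roughly with efficiency, which is how a 12-20 km/l baseline stretches toward the 25-30 km/l cited above.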
Worksheet #12: Simplifying Radicals
In this radical worksheet, students simplify radicals by identifying the root and rationalizing the denominator. They complete 40 multiple-choice answers. (9th-10th Math)

Combining Square Roots with Addition and Subtraction
Familiarize young mathematicians with the radical concept of square roots using this set of skills practice problems. Building on prior knowledge about combining like terms in algebraic expressions, this worksheet helps students learn... (7th-10th Math, CCSS: Adaptable)

Simplifying Radical Fractions
Rationalizing denominators of radical fractions is one of those skills that pulls together understanding of many different concepts. By carefully scaffolding from easy to hard examples and explaining each example step-by-step, this video... (9 mins, 9th-12th Math, CCSS: Designed)

Algebra 1: Simplifying Radicals
After discussing the difference between rational and irrational numbers, the class practices simplifying, adding, and subtracting radical expressions. They play a game which involves passing around cards until they have matching hands of... (8th-9th Math, CCSS: Adaptable)

Free Falling: Working with Radicals
Look out, radical falling! The lesson plan shows the class how to evaluate formulas that include radicals, involving escape velocity, sight distance, and free-fall times. Pupils then discuss what occupations may use radicals in their... (9th-12th Math, CCSS: Adaptable)
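To ground these resources, here is one worked example of the core skill they all drill – simplifying a radical and rationalizing the denominator (our example, not one drawn from the worksheet):

```latex
\[
\frac{6}{\sqrt{12}}
= \frac{6}{2\sqrt{3}}
= \frac{3}{\sqrt{3}}
= \frac{3}{\sqrt{3}} \cdot \frac{\sqrt{3}}{\sqrt{3}}
= \frac{3\sqrt{3}}{3}
= \sqrt{3}
\]
```

The two moves – extracting the perfect-square factor from under the radical, then multiplying by a clever form of 1 to clear the radical from the denominator – are exactly what the problems above practice.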
Concussion: A traumatic injury to soft tissue, usually the brain, as a result of a violent blow, shaking, or spinning. A brain concussion can cause immediate but temporary impairment of brain functions, such as thinking, vision, equilibrium, and consciousness. After a person has had a concussion, he or she is at increased risk for recurrence. Moreover, after a person has had several concussions, less of a blow can cause injury, and the person can require more time to recover.
EDUC 7106 PBL Instructional Unit – Marion Bush
TECHNOLOGY INTEGRATION FOR MEANINGFUL CLASSROOM USE
Lesson Plan A
Lesson Title: Analyze the School's Cafeteria
Related Lesson: Nutritious Lunches Menu
Grade Level: 9th Grade Girls
Unit: School Nutrition
Goals
Content Standards
Health Education
HE H.S.5: Students will demonstrate the ability to use decision-making skills to enhance health.
Description: Students will use decision-making skills to identify, apply, and maintain health-enhancing behaviors. High school students will apply comprehensive decision-making processes in a variety of situations to enable them to collaborate with others to improve their quality of lives now and in the future.
Elements:
e. Analyze the potential short-term and long-term impact of each decision on self and others.
Examples: Analyze the consequences of the excessive eating of unhealthy foods. Analyze the consequences of using illegal drugs for oneself, for one's family, and for the community.
f. Justify the health-enhancing choices when making decisions.
Examples: Justify the benefits of eating healthy foods and beverages over less healthy foods and beverages. Justify the reasons for not using performance-enhancing drugs.
HE H.S.6: Students will demonstrate the ability to use goal-setting skills to enhance health.
Description: Students will use goal-setting skills to identify, apply, and maintain health-enhancing behaviors. High school students will construct short-term and long-term health goals based on personal needs. In addition, they will design, implement, and evaluate critical steps to achieve these goals.
Elements:
a. Evaluate personal health and health practices.
Examples: Evaluate the pros and cons of various fad diet plans. Assess your personal physical activity level.
ISTE NETS-S Technology Standards: Creativity and Innovation; Communication and Collaboration; Research and Information Fluency; Critical Thinking, Problem-Solving and Decision Making; Digital Citizenship; Technology Operations and Concepts
Instructional Objectives: Students will create a menu for the county school Food Service Coordinator with suggestions on what types of foods to serve in all the school system's cafeterias, using Microsoft Publisher.
Action
Preparation – Before Class
- Set up technology equipment to show the videos on television introducing the students to the "Nutrition Unit."
- Prepare and print the KWL charts that will be passed out to the students.
- Prepare and print the video notes papers.
- Prepare and print the graphic organizers.
- Prepare and print the surveys.
- Prepare and print groups, food group, nutrient, group member role, and assignment schedule and timeline.
- Print and prepare the weight comparison chart.
- Prepare and print the rubrics and the expectations of the unit.
- Prepare Internet safety rules.
- Prepare folders to keep students' assignments in until the next class.
During Class Time – Instructional Activities and Materials
25 Minutes: The teacher will introduce the Nutrition unit. Remind students to write their name and the date on all handouts and assignments. Pass out and go over with the students the KWL chart, video notes paper, rubrics, surveys, and their group number, food group, nutrient, group member role, and assignment schedule timeline. Review the Internet safety rules that are posted on the wall. Remind the students to use their time wisely. The teacher will explain to the students the importance of taking notes from the videos. Students will complete the KWL chart based on what they know, what they want to know, and what they have learned about Nutrition. (Materials: KWL chart, rubrics, video notes paper, surveys, group number/food group/nutrient/group member role/assignment schedule timeline)
50 Minutes: View the following 4 videos: The Food Guide Pyramid (2:56), http://www.youtube.com/watch?v=D6WUzEbzdiA&feature=related; site for the following videos: Understanding Food – Nutrition – Part 1/2 (9:55); Understanding Food – Nutrition – Part 2/2 (8:33); How to Read a Nutrition Label (1:57), http://www.neok12.com/Health-Nutrition.htm. The students will take notes while watching the videos and will discuss the videos after viewing them. Students will use the Internet to look up obesity, nutrition, calories, diet, and nutrients. (Materials: laptop, television, computer, Internet, video websites)
15 Minutes: Let students comment on the videos and completed writing assignments, and introduce the problem. Problem: There are so many unhealthy and obese students in the nation's schools. It has become such a big problem that First Lady Michelle Obama has pushed for more nutritious foods to be served and available to students on campus. The county is changing to wheat breads and cutting out fried foods, sweets, and vending machines. The county's dietitian has asked the students to create a menu of nutritious foods that should be available to students on the school campus. Answer students' questions. Homework: surveys should be completed. Summarize or give a closing of the unit. Take up the KWL chart and video notes paper.
Monitor – Ongoing Assessment
The assessment will be based on the teacher's observation of students working. It will also be based on the students' cooperation, participation, and time management.
Accommodations have been made for students with:
- Disabilities (wheelchair, hearing impaired, and blind): all materials are within reach to accommodate students in wheelchairs; visual aids, auditory aids along with a transcript, collaboration tools, worksheets, and lecture notes are provided for hearing-impaired and blind students.
- Second-language learners: an interpreter is provided for students who do not speak or understand English.
- Gifted students: give them leadership roles and encourage them to assist other students if needed.
- Visual, auditory, and kinesthetic learners: include visual aids, auditory aids, collaboration tools, worksheets, and lecture notes.
- Special education students with IEPs: provide help from gifted students, lecture notes, and extended time if needed.
- Students with no access to technology at home: they will be able to complete assignments at school in the computer lab.
- Cultural differences: no information will be included that would offend any students.
- Different intelligences (Gardner's multiple intelligences): include learning activities that require creating, remembering, producing, communicating, comparing, organizing, and designing.
Students who are absent from school may make up their work in the mornings or during any free time they may have after school.
Backup Plan: In case the Internet is down, I will have pictures of the food pyramid, different types of food, and vending machines.
Biology and Pathophysiology The body absorbs 10% (1 to 2 mg) of the iron encountered in dietary sources each day, but has no efficient means of rapidly eliminating excess iron other than loss of blood. Iron absorption is regulated in the GI tract at the initial part of the small intestine called the duodenum, which lies just beyond the stomach in the digestive tract (Murray 2003; Heli 2011; Geissler 2011). Following absorption, iron is normally bound to specific storage or transport proteins when not in use; this limits the possibility of excess free iron catalyzing the generation of damaging free radicals. Iron travels through the bloodstream bound to transferrin (an iron transport protein). Cells that require iron (e.g., red blood cells) express a transferrin receptor on their surface, which captures circulating transferrin and pulls it into the cell, causing it to release the bound iron. Iron in excess of what is needed to satisfy metabolic demand is stored bound to the iron storage protein ferritin (Geissler 2011; Fisher 2007). Both ferritin and transferrin are used as blood markers to monitor iron load (see Diagnosis below). Iron overload results from an elevated total body iron pool. There are primary (inherited) and secondary (acquired) causes of iron overload; many involve dysregulation of iron absorption from the gut. However, iron overload secondary to repeated blood transfusions can occur in patients with certain types of anemia (Pietrangelo 2010; Heli 2011). Despite its many important metabolic roles, iron is a potent free-radical generator. Damaging reactive oxygen species are constantly produced during cellular energy generation. Antioxidant enzymes (e.g., superoxide dismutase and catalase) normally eliminate these pro-oxidant compounds, sparing cells from oxidative damage. Iron, however, can readily convert these reactive oxygen species into damaging hydroxyl radicals that are not cleared by antioxidant enzymes. Hydroxyl radicals can damage DNA and cellular proteins, as well as decrease the integrity of cellular membranes (Marx 1996; Emerit 2001; Heli 2011). Iron balance (homeostasis) in humans is predominantly controlled by limiting intestinal absorption and by efficient recycling of the body pool, because virtually no iron is excreted (Heli 2011). Iron is unique among dietary nutrients in that both iron deficiency and iron excess are relatively common health concerns; in fact, the difference between iron deficiency and overload is a question of a few milligrams of iron (Heli 2011; Cogswell 2009; Fleming 2001). Iron balance is regulated by the peptide hormone hepcidin (Pigeon 2001). Hepcidin, produced by the liver in response to high iron stores or inflammation, travels through the bloodstream to the intestines, where it reduces iron absorption. It is thought that both genetic and acquired causes of iron overload may share a common mechanism of low hepcidin production (Siddique 2012). Normal iron absorption (1-2 mg/day) and dysregulated iron absorption differ by only a few milligrams each day, yet this is sufficient to outpace iron loss - approximately 1 mg/day in adult men - which occurs very slowly through the sloughing of gastrointestinal and skin cells (Heli 2011; Murray 2003). As the total body iron pool rises, its levels exceed the capacity of the iron storage and transport proteins (ferritin and transferrin, respectively) to keep it safely bound (Brissot 2012). Increased levels of non-transferrin-bound iron in the blood can enter cells, thus increasing free cellular iron levels.
It is this free iron that is available for generating free radicals within cells, and is responsible for the cellular and tissue toxicities characteristic of iron overload (Brissot 2012).
Up to 30% of people who witness a traumatic event then go on to experience some of the symptoms of post-traumatic stress disorder (PTSD). These symptoms can vary widely between individuals. A person with PTSD will often relive the traumatic event through nightmares and flashbacks, and have feelings of isolation, irritability and guilt. They may also have problems sleeping, such as insomnia, and may find concentrating difficult. The symptoms are often severe and persistent enough to have a significant impact on the person’s day-to-day life. The symptoms of PTSD usually develop during the first month after a person witnesses a traumatic event. However, in a minority of cases (less than 15%), there may be a delay of months or even years before symptoms start to appear. Some people with PTSD experience long periods when their symptoms are less noticeable. This is known as symptom remission. These periods are often followed by an increase in symptoms. Other people with PTSD have severe symptoms that are constant. Re-experiencing is the most typical symptom of PTSD. A person will involuntarily and vividly relive the traumatic event in the form of flashbacks, nightmares or repetitive and distressing images or sensations. Being reminded of the traumatic event can evoke distressing memories and cause considerable anguish. Trying to avoid being reminded of the traumatic event is another key symptom of PTSD. Reminders can take the form of people, situations or circumstances that resemble or are associated with the event. Many people with PTSD will try to push memories of the event out of their mind. They do not like thinking or talking about the event in detail. Some people repeatedly ask themselves questions that prevent them from coming to terms with the event. For example, they may wonder why the event happened to them and whether it could have been prevented. Hyperarousal (feeling ‘on edge’) Someone with PTSD may be very anxious and find it difficult to relax. They may be constantly aware of threats and easily startled. This state of mind is known as hyperarousal. Irritability, angry outbursts, sleeping problems and difficulty concentrating are also common. Some people with PTSD deal with their feelings by trying not to feel anything at all. This is known as emotional numbing. They may feel detached or isolated from others, or guilty. Someone with PTSD can often seem deep in thought and withdrawn. They may also give up pursuing the activities that they used to enjoy. Other possible symptoms of PTSD include: - depression, anxiety and phobias - drug misuse or alcohol misuse - sweating, shaking, headaches, dizziness, chest pains and stomach upsets PTSD sometimes leads to the breakdown of relationships and causes work-related problems.
Informational (nonfiction), 99 words, Level F (Grade 1), Lexile 300L
What are cascarones? How are they used? In the book Cascarones, students will learn how to make these brightly painted eggshells and how they are used in celebrations. The book uses detailed, colorful photographs; high-frequency words; and repetitive sentence patterns to support readers. The book can also be used to teach students how to make inferences and draw conclusions as well as how to recognize and use proper nouns.
Teach the Objectives
- Connect to prior knowledge to better understand text
- Make Inferences/Draw Conclusions: Make inferences and draw conclusions
- Rhyme: Discriminate rhyming words; identify and produce rhyming words
- Grammar and Mechanics – Proper Nouns: Recognize and use proper nouns
- Alphabetical Order: Place words in alphabetical order
Think, Collaborate, Discuss
Promote higher-order thinking for small groups or whole class
For most people, carbon dioxide is a transparent gas that causes the earth's temperature to rise. It is a product of many industries, of our mobility, and even of our breathing. Yet there are now ideas, concepts, and technologies that remove CO2 from the atmosphere and use it to synthesize valuable fuels and products. Not only would that make our flights to holiday destinations CO2-neutral; we could also counteract climate change and revolutionize the chemical industry and the energy sector.

How do we get CO2 from the air?
Extracting the most prominent greenhouse gas from the atmosphere and using it to synthesize raw materials for the chemical industry? That sounds utopian, but the technology has existed for some years now. Climeworks1, Carbon Engineering2 and Global Thermostat3 are three companies that use cyclical processes to obtain CO2 directly from the air. With the help of a filter material that bonds to CO2 at room temperature and releases it at a higher temperature, CO2 can be cyclically filtered out of the air and turned into stone4,5 or used as fertilizer or as a raw material for fuels6 and chemical products7. This process is known as Direct Air Capture: the capture of CO2 from the atmosphere, where CO2 is present at a low concentration of around 420 ppm. Another process is post-combustion carbon capture, in which CO2 is filtered from the exhaust gases of combustion8. From a kinetic standpoint, it is easier to obtain CO2 from exhaust gases because the CO2 concentration there is much higher, around 15%. The disadvantage is that other gases and particles can damage the filter.

Flying without shame to a holiday destination
The first idea behind this is that we can finally lower the rising CO2 concentration. We will most likely not be able to keep the increase in global temperature below 1.5°C, the goal set in Paris in 20159, even if we were to stop emitting CO2 tomorrow10. By removing CO2 from the atmosphere, we not only create negative emissions but could also revolutionize the chemical industry and the mobility sector. Alongside batteries and hydrogen for cars, trucks, and trains, synthetic fuels made from CO2 captured from the air could be a solution for sustainable mobility. Even though the energy density of batteries will increase11 and the price of hydrogen fuel cells will decrease, these alternatives do not yet seem achievable for the shipping and aviation sectors. We will have to keep using the conventional combustion engine in these areas for a while. Is there really no alternative without 'flying shame'? We will have to accept the nitrogen oxides and particles released during combustion and the low efficiency of combustion engines, but the fuel itself can be made CO2-neutral. Such CO2-neutral fuels are called synthetic fuels and can be made from biomass, for example. Another possibility is the conversion of carbon dioxide and water into carbon monoxide and hydrogen with the help of renewable energy. This can be done with electrolysis12, with an electrochemical conversion13, or with a direct conversion at high pressure and high temperature, which can be driven by solar energy. The mixture of carbon monoxide and hydrogen is called syngas and is an important gas in the chemical industry: it is the starting product for several synthetic processes, including synthetic fuels and plastics. The hydrogen would also be produced in a sustainable manner, whereas today around 95% of hydrogen is produced from natural gas14.
The sustainable production of hydrogen from water by electrolysis is not yet carried out on a large scale.

Thinking big: Power-to-X
By combining Direct Air Capture with the conversion of carbon dioxide into valuable products, the carbon cycle can be closed and fuels and plastics can be made in a CO2-neutral way15. Different sectors would become linked. This concept is known as Power-to-X16. The X in the name can refer to hydrogen, fuel, methane, or another chemically or physically stored, energetically valuable product. An important sector, which you will see change more and more over the next few years, is the energy sector. As the share of renewable energy increases, fluctuations in energy production will also increase, since the sun does not always shine, nor does the wind blow at a constant speed. At moments when a surplus of electricity is produced, the excess can be converted into other forms, such as hydrogen, synthetic fuels, or methane for the mobility sector or the chemical industry.

An economic analysis of CO2 from Direct Air Capture for the mobility sector
These are a few of the innovative technologies that could slow, or even counteract, climate change. How far they will grow in the next few years depends heavily on political conditions. Currently the cost of capturing one ton of CO2 is around 500 euros17. That means the carbon for one liter of diesel, which requires about 3 kg of CO2, costs around 1.5 euros – and that is only the cost of the carbon source, not of the synthesis process itself (see the quick estimate sketched at the end of this article). Since the price of one liter of diesel is currently around 1 euro, this route to synthetic fuels is not yet profitable. Carbon capture is still a very young technology, so there is a real chance that this price will drop18, and policy could support the technology by, for example, introducing a carbon tax. The price of conventional fuels would then rise, creating an economic stimulus for synthetic fuels.

Can we as people tinker with the climate?
The acceptance of society will also have a large impact on the evolution of these technologies. Direct Air Capture is a form of geo-engineering: large-scale intervention in the climate to counteract climate change19. The ethical-philosophical question is: 'Can we as humans play with the climate?' If the answer is yes, then it is clear that carbon capture can only have a (positive) influence on the climate when it is done on a large scale. Not everyone is convinced that these "CO2 vacuums", as they are also called, will be the solution, because they too require raw materials, energy, and land. Although the global consequences of carbon capture are not fully known, the IPCC (Intergovernmental Panel on Climate Change) concluded in 2018 that technologies creating negative CO2 emissions will be necessary to reach the goal of a maximum of 1.5°C warming10. So there really are solutions for tackling the climate problem, and it is clear that reducing CO2 emissions alone will not be enough to achieve our climate goals. How far carbon capture and synthetic fuels will contribute to a circular carbon economy and lead to a Power-to-X system is unclear and will depend on political conditions, the economic development of these technologies, and the motivation of society – that is, on all of us.

Original article in Dutch available on yera.be, translated by Karen Aerts.
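The quick estimate referenced above: a minimal sanity check of the quoted figures (500 euros per tonne of captured CO2 and roughly 3 kg of CO2 per liter of synthetic diesel; both taken from the article, everything else assumed):

```python
# Cost of the carbon feedstock alone for one liter of synthetic diesel,
# using the figures quoted in the article above.
CAPTURE_COST_EUR_PER_TONNE = 500.0   # Direct Air Capture cost per tonne CO2
CO2_PER_LITER_DIESEL_KG = 3.0        # CO2 needed per liter of diesel

cost_per_kg = CAPTURE_COST_EUR_PER_TONNE / 1000.0          # EUR per kg CO2
carbon_cost_per_liter = cost_per_kg * CO2_PER_LITER_DIESEL_KG
print(f"Carbon feedstock cost: {carbon_cost_per_liter:.2f} EUR per liter")
# -> 1.50 EUR per liter, before any synthesis costs, versus a pump price
#    of roughly 1 EUR per liter for conventional diesel.
```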
April 22, 2020, will be the 50th anniversary of Earth Day, which is held to call attention to various environmental problems. It also represents the birth of the modern environmental movement.

How Did Earth Day Start?
Fifty years ago, 20 million Americans, or about 10 percent of the population of the United States, held massive rallies and protests in auditoriums, parks, and streets to protest the ongoing deterioration of the environment and demand greater protection for it. Gaylord Nelson, a US Senator representing Wisconsin, established Earth Day after seeing the devastation caused by an oil spill in Santa Barbara in 1969. Nelson wanted to channel the energy of the anti-war movement to raise awareness about pollution and push the nation's lawmakers to pass laws and create federal organizations that would protect the environment. To that end, Senator Nelson decided to hold a nationwide "teach-in" about the environment. He talked Pete McCloskey, a Republican in the US House of Representatives, into joining him, and he recruited Denis Hayes, a 25-year-old at Harvard University, to be the national coordinator. Hayes eventually assembled 85 people to promote events across the country. The three men chose April 22 as the date for their events. The events on April 22, 1970, proved successful and united various groups to fight for environmental protection. These included groups fighting to solve problems such as toxic waste sites, oil spills, and air pollution, but they also included businessmen, labor leaders, farmers, Republicans, and Democrats. By the year's end, President Nixon had established the Environmental Protection Agency (EPA), and Congress had passed the Clean Air Act; the Clean Water and Endangered Species Acts followed within a few years. Twenty years later, Earth Day became a global event that mobilized people in 141 countries. Roughly 200 million people took part in events that helped lead to the United Nations Earth Summit in Rio de Janeiro in 1992. Today, over a billion people take part in environmental awareness activities every April 22.

How Is It Celebrated?
In previous years, people took part in a variety of events that could include environmental protests, film festivals, or educational demonstrations. The events could be held anywhere from cities to parks to farms. Many events would have an overarching focus such as air pollution, endangered species, or climate change. This year's focus will be climate action, and the various events will educate people about climate change and what they can do to combat it. Due to the pandemic, many events will be held online, so people can participate safely while maintaining social distance. This year's events will fall into the following categories:
- Citizen Science
- Presentation or Film Screening
- Planning Committee
- Artists for the Earth
- Other Online Event
Environmental clean-ups and similar events will generally be postponed until later in the year. For example, the Anacostia Watershed Society in the Washington DC area traditionally holds a cleanup along the Anacostia River. This year, the actual cleanup is currently scheduled for October, and the Society will host a Virtual Earth Week this month. Citizen science may sound like a new idea, but it's actually the revival of a centuries-old tradition in which curious civilians would, for example, count the bird species that turned up in a given area. Modern citizen scientists typically send the resulting data over the internet.
This April will see the inauguration of a massive citizen science project called Earth Challenge 2020 that will be overseen by organizations including the US Department of State and the Wilson Center. Citizen scientists will use a mobile app to document environmental changes in their communities. What Does Climate Action Involve? Climate action describes anything that can reduce a person’s carbon footprint or the total greenhouse gas emissions caused by that person’s actions. Events, products, and organizations also generate greenhouse gas emissions and thus also have carbon footprints. Greenhouse gas emissions have two sources: direct and indirect. An example of a direct source of GHG emissions would be the exhaust produced by one’s car. Indirect sources would include the energy needed to drill for the oil used in the car, the energy used to power the factory that made the car, and the fuel involved in transporting the gas and the car to its user. In addition to cars, other items that contribute to greenhouse gas emissions include planes, heating and air conditioning, electronics, and laundry machines. If it uses fossil fuels, it contributes to greenhouse gas emissions. Even food contributes to GHG emissions, especially if it was imported from somewhere hundreds or thousands of miles away. Transportation, however, is the main source of GHG emissions. Planes are particularly environmentally unfriendly because they require enormous amounts of fuel. Thus, an easy way to reduce one’s carbon footprint is to not take a plane. Teleconferencing instead of traveling for business also reduces GHG emissions, which most people are having to do now anyway. While environmental organizations typically encourage the use of public transportation over driving, the current pandemic has caused many states to restrict its use. Some states have “stay at home” orders that require people to remain at home except for such essential purposes as buying groceries or going to the doctor. You can still cycle or walk to your destination, which will reduce the amount of GHG emissions by a pound for every mile that you don’t drive. When it’s time to get a new car, consider a hybrid or electric car. The US Department of Energy maintains a website that will help you find the electric vehicle chargers in your area. You just have to enter your ZIP Code or city’s name. Are There Other Ways to Help the Environment? Yes, there are many eco-friendly actions you can take. If you aren’t sure where to start, the EPA has a calculator on their website that enables you to determine the largest source of greenhouse gas emissions produced by your household. It divides the sources into three categories: Home, Transportation, and Waste. You can use the results as a starting point. One easy way to go green is to replace old-fashioned and inefficient incandescent light bulbs with CFL or LED bulbs. While they are admittedly more expensive than incandescent bulbs, they last much longer. Some LED bulbs can last for over 20 years, while an incandescent bulb lasts only a year or two. There are also a variety of ways to help conserve water. For example, finding and repairing all of the leaks within a residence can save the average household about 10,000 gallons of water per year. Some authorities estimate that leaks waste one trillion gallons of water throughout the nation each year. 
Other ways to go green include the following: - Wash your car at a carwash rather than at home; professional car washes use half as much water - Launder clothes in cold water - Avoid using disposable plastics, especially single-use items like straws or bags - Don’t use disposable water bottles or coffee cups, bring a reusable container everywhere you go - Buy locally produced food whenever possible - Use non-toxic and eco-friendly cleaning products - Look for Energy Star products when it’s time to replace something like a refrigerator or air conditioner and make sure your old ones are properly disposed of The Webb Insurance Group will be doing our part on Earth Day and every day. We hope everyone gets to celebrate this holiday for its good cause, even from home. If you need further guidance about any of your insurance needs, contact the Webb Insurance Group. We hope you enjoy these tips!
When a child knows the proper sounds of the alphabet letters, he or she can use those sounds to sound out or decode a word. This skill is essential for successful phonics instruction later on. The more accurately the sounds are taught to children, the easier it will be for them to learn to read and spell. Study the videos and chart on this page to learn the correct pronunciations of the letter sounds. There are 26 letters in the English alphabet, each of which has a name and at least one sound. It is the sounds of these letters (not their names) that we blend together to form words. NOTE: At this point, it is much more important for your child to know the sounds of the letters than their names. Knowledge of the letter names will be very useful for spelling, but we are not there yet! Reading precedes spelling! There are over one million words in the English language, and at least 600,000 of them can be sounded out phonetically. The five most common vowel sounds are also known as the short vowels: A (as in apple), E (as in egg), I (as in it), O (as in odd), and U (as in up). All the vowel sounds are continuant sounds, said "long and loud," which means that you draw them out for two full seconds. The consonants are the other 21 letters in the alphabet aside from the vowels: B, C, D, F, G, H, J, K, L, M, N, P, Q, R, S, T, V, W, X, Y, and Z. There are two types of consonant sounds: stop sounds and continuant sounds.
- Stop sounds are also called "quick and quiet" sounds. Letters making these sounds are: B, C, D, G, H, J, K, P, and T. They have a sharp ending, with the sound stopping abruptly.
- Continuant sounds are also called "long and loud" sounds. Letters making these sounds are: F, L, M, N, Q, R, S, V, W, X, Y, and Z. Hold these sounds out for two full seconds.
We have a special way of writing the letter sounds, so that you (the adult) know when you should say the name of the letter and when you should say the letter sound. The stop (quick and quiet) sounds are written as a single letter between two slashes. For example, /b/ or /g/. Because these sounds are quieter and short, you may have to say them multiple times for children to hear. So we will sometimes instruct you to say "/b/ /b/ /b/," meaning you should make the /b/ sound three times in quick succession. The continuant (long and loud) sounds are usually written as three letters between two slashes. For example: /mmm/ or /zzz/. This is to remind you that continuant sounds should be held for two full seconds. Many of our Phonemic Awareness games require you to say two sounds or word parts with a pause in between. We write that pause with a bullet mark (•). One bullet mark represents a half-second pause. So, "/mmm/ • /at/" is the word mat split by a half-second pause. Likewise, "/d/ • • • /og/" is the word dog with a 1.5-second pause in the middle. 5. Easier and Harder Sounds The continuant (long and loud) sounds – F, L, M, N, Q, R, S, V, W, X, Y, and Z – are easier for children to hear than the stop (quick and quiet) sounds. NOTE: A lot of children get confused because the lower-case letters b and d look so similar. As you start using phoneme cards (with individual letters) in the Phonemic Awareness games, we strongly recommend that you not show the d card until much later. Let the child develop a deep familiarity with the letter b; until then, you can reference d as simply "not b." It is much better to simply separate out the introduction of these two letters.
6. Sound Pronunciation Chart
Print out this sound pronunciation chart to use as a reference when teaching your child. It will remind you of the proper pronunciations of the letter sounds. Watch this short video for a refresher on the letter pronunciations!
8. Letter Sounds with Kids
If you are careful to model the correct phoneme pronunciations for your children, they will absorb that knowledge and have a head start on being able to sound out words. As your child nears the end of our Phonemic Awareness curriculum, quiz her occasionally on the letter sounds, as in this video below.
Strange cousins: molecular alternatives to DNA, RNA offer new insight into life's origins

Living systems owe their existence to a pair of information-carrying molecules: DNA and RNA. These fundamental chemical forms possess two features essential for life: they display heredity—meaning they can encode and pass on genetic information—and they can adapt over time through processes of Darwinian evolution. A long-debated question is whether heredity and evolution could be performed by molecules other than DNA and RNA. John Chaput, a researcher at ASU's Biodesign Institute who recently published an article in Nature Chemistry describing the evolution of threose nucleic acids, joined a multidisciplinary team of scientists from England, Belgium, and Denmark to extend these properties to other so-called xeno-nucleic acids, or XNAs. The group demonstrates for the first time that six of these unnatural nucleic acid polymers are capable of sharing information with DNA. One of these XNAs, a molecule referred to as anhydrohexitol nucleic acid or HNA, was capable of undergoing directed evolution and folding into biologically useful forms. Their results appear in the current issue of Science. The work sheds new light on questions concerning the origins of life and provides a range of practical applications for molecular medicine that were not previously available. Nucleic acid aptamers, which have been engineered through in vitro selection to bind with various molecules, act in a manner similar to antibodies—latching onto their targets with high affinity and specificity. "This could be great for building new types of diagnostics and new types of biosensors," Chaput says, pointing out that XNAs are hardier molecules, not recognized by the natural enzymes that tend to degrade DNA and RNA. New therapeutics may also arise from experimental xenobiology. Both RNA and DNA embed data in their sequences of four nucleotides—information vital for conferring hereditary traits and for supplying the coded recipe essential for building proteins from the 20 naturally occurring amino acids. Exactly how (and when) this system got its start, however, remains one of the most intriguing and hotly contested areas of biology. According to one hypothesis, the simpler RNA molecule preceded DNA as the original informational conduit. The RNA-world hypothesis proposes that the earliest examples of life were based on RNA and simple proteins. Because of RNA's great versatility—it is not only capable of carrying genetic information but also of catalyzing chemical reactions like an enzyme—it is believed by many to have supported pre-cellular life. Nevertheless, the spontaneous arrival of RNA through a sequence of purely random mixing events of primitive chemicals was, at the very least, an unlikely occurrence. "This is a big question," Chaput says. "If the RNA world existed, how did it come into existence? Was it spontaneously produced, or was it the product of something that was even simpler than RNA?" This pre-RNA-world hypothesis has been gaining ground, largely through investigations into XNAs, which provide plausible alternatives to the current biological regime and could have acted as chemical stepping-stones to the eventual emergence of life. The current research strengthens the case that something like this may have taken place. Threose nucleic acid, or TNA, for example, is one candidate for this critical intermediary role.
“TNA does some interesting things,” Chaput says, noting the molecule’s capacity to bind with RNA through antiparallel Watson-Crick base pairing. “This property provides a model for how XNAs could have transferred information from the pre-RNA world to the RNA world.” Nucleic acid molecules, including DNA and RNA, consist of three chemical components: a sugar group, a phosphate backbone and combinations of the four bases. By tinkering with these structural elements, researchers can engineer XNA molecules with unique properties. However, in order for any of these exotic molecules to have acted as a precursor to RNA in the pre-biotic epoch, they would need to have been able to transfer and recover their information from RNA. To do this, specialized enzymes, known as polymerases, are required. Nature has made DNA and RNA polymerases capable of reading, transcribing and reverse transcribing normal nucleic acid sequences. For XNA molecules, however, no naturally occurring polymerases exist. So the group, led by Phil Holliger at the MRC in England, painstakingly evolved synthetic polymerases that could copy DNA into XNA and other polymerases that could copy XNA back into DNA. In the end, polymerases were discovered that transcribe and reverse-transcribe six different genetic systems: HNA, CeNA, LNA, ANA, FANA and TNA. The experiments demonstrated that DNA sequences could be rendered into various XNAs when the polymerases were fed the appropriate XNA substrates. Using these enzymes as tools for molecular evolution, the team evolved the first example of an HNA aptamer through iterative rounds of selection and amplification. Starting from a large pool of DNA sequences, a synthetic polymerase was used to copy the DNA library into HNA. The pool of HNA molecules was then incubated with an arbitrary target. The small fraction of molecules that bound the target were separated from the unbound pool, reverse transcribed back into DNA with a second synthetic enzyme and amplified by PCR. After many repeated rounds, HNAs were generated that bound HIV trans-activating response RNA (TAR) and hen egg lysozyme (HEL), which were used as binding targets. “This is a synthetic Darwinian process,” Chaput says. “The same thing happens inside our cells, but this is done in vitro.” The method for producing XNA polymerases draws on the path-breaking work of Holliger, one of the lead authors of the current study. The elegant technique uses cell-like synthetic compartments of water/oil emulsion to conduct directed evolution of enzymes, particularly polymerases. By isolating self-replication reactions from each other, the process greatly improves the accuracy and efficiency of polymerase evolution and replication. “What nobody had really done before,” Chaput says, “is to take those technologies and apply them to unnatural nucleic acids.” Chaput also underlines the importance of an international collaboration for carrying out this type of research, particularly for the laborious effort of assembling the triphosphate substrates needed for each of the six XNA systems used in the study: “What happened here is that a community of scientists came together and organized around this idea that we could find polymerases that could be used to open up biology to unnatural polymers.
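The selection-and-amplification loop described above is, at heart, an iterative algorithm, and a toy simulation can make its logic concrete. The Python sketch below is purely illustrative: the "affinity" function, the motif it scores against, and all the numbers are invented stand-ins, not the actual chemistry or the MRC group's protocol.

import random

random.seed(1)
BASES = "ACGT"
MOTIF = "GATTACA"  # hypothetical binding motif; a stand-in for a real target like TAR

def random_seq(length=20):
    return "".join(random.choice(BASES) for _ in range(length))

def affinity(seq):
    # Toy affinity: the best count of positions matching the motif in any window.
    return max(
        sum(a == b for a, b in zip(seq[i:i + len(MOTIF)], MOTIF))
        for i in range(len(seq) - len(MOTIF) + 1)
    )

def mutate(seq, rate=0.02):
    # Error-prone amplification: each base has a small chance of miscopying.
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

pool = [random_seq() for _ in range(1000)]
for rnd in range(1, 11):
    pool.sort(key=affinity, reverse=True)
    bound = pool[:50]  # the fraction that "bound" the target is kept...
    print(f"round {rnd}: best affinity {affinity(bound[0])}/{len(MOTIF)}")
    # ...and "amplified by PCR" back up to full pool size, with copying errors
    pool = [mutate(random.choice(bound)) for _ in range(1000)]

Each round enriches the pool for sequences that score well against the target, which is the essence of in vitro selection whether the polymer is DNA, HNA or any other XNA.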
It would have been a tour de force for any lab to try to synthesize all the triphosphates, as none of these reagents are commercially available.” The study advances the case for a pre-RNA world, while revealing a new class of XNA aptamers capable of fulfilling myriad useful roles. Although many questions surrounding the origins of life persist, Chaput is optimistic that solutions are coming into view: “Further down the road, through research like this, I think we’ll have enough information to begin to put the pieces of the puzzle together.” The research group consisted of investigators from the Medical Research Council (MRC) Laboratory of Molecular Biology, Cambridge, led by Philipp Holliger; Katholieke Universiteit Leuven, Belgium, led by Piet Herdewijn; the Nucleic Acid Center, Department of Physics and Chemistry, University of Southern Denmark, led by Jesper Wengel; and the Biodesign Institute at Arizona State University, led by John Chaput. In addition to his appointment at the Biodesign Institute, John Chaput is an associate professor in the Department of Chemistry and Biochemistry in the College of Liberal Arts & Sciences. The original article was written by Richard Harth, science writer at The Biodesign Institute. Source: Arizona State University. Published on 22nd April 2012.
Rocking and Rolling. Fostering Curiosity in Infants and Toddlers Cody, a 9-month-old in Ms. Angela’s infant room, crawls over to an interesting object: it is silver, made of metal wires, and easy to grasp. He tries to put it in his mouth. It doesn’t fit, but the metal feels cool against his lips. He holds this strange object out to Ms. Angela, catching her attention. He opens his eyes wide and shakes the silver object. Ms. Angela smiles and nods. “Yes, I see it,” she says. “It’s called a whisk. You can mix ingredients with a whisk.” Cody shakes it again and tries once more to put it in his mouth. In the toddler room across the hall, the children are excited about a visitor coming today. Their teacher, Mr. Geoff, will be bringing his pet rabbit, Sherman. Their other teacher, Ms. Amy, is asking the children what they already know about rabbits: “Soft!,” “Eat carrots!,” and “They have ears!” Then she asks the children what they want to know about rabbits: “Have babies?,” “Do they bite?,” and “What do they play?” Ms. Amy writes down all their questions and then takes out a book that she says will help them find answers before Sherman arrives. She starts to read. Curiosity is the desire for knowledge (Markey & Loewenstein 2014). The opening vignettes highlight how curiosity drives children’s learning (von Stumm, Hell, & Chamorro-Premuzic 2011). But what does curiosity look like before a child can talk? In babies, we might observe - expressions of wondering or questioning (raised eyebrows, for example); - steady gazes between what’s being observed and a trusted information source (parent, teacher, or older child); - vocalizations (of excitement, or a rising, questioning intonation); - or pointing (as if to say, “What is that?” or “What is happening here?”). To express curiosity, toddlers may - ask a question or repeat an action over and over again; - take your hand and show you what they are curious about; - or persist in trying different ways of gathering information about an object or activity. Tips for Nurturing Curiosity in Your Early Childhood Classroom Think About It - When was the last time you felt curious about something? What did you do in response to your feelings of curiosity? - How did your family, friends, or teachers support your curiosity (or not) as you were growing up? - How did your family respond when you asked questions as a child? How do you respond when children ask you questions in the early childhood classroom? - What activities already occur in your classroom that encourage children’s curiosity? What materials do you have available that can foster wondering? - How do you identify and document children’s wonderings in your daily practices? - Document children’s curiosity in different formats. Snap photos of the way infants gaze at and grasp seashells that you have offered for exploration, and laminate and post the photos at their height. Try taking a video of how toddlers adjust the height of a ramp to see whether the ball will roll farther or faster—watch this video together later in the day and talk about what children discovered. You can also refer to these recordings later for your own inspiration and ideas for how to build on children’s knowledge and curiosities. - Share with the children stories that focus on the power of curiosity. For babies, consider the board books Gossie Plays Hide and Seek, by Olivier Dunrea, or Press Here, by Hervé Tullet.
For toddlers, try stories like Windows, by Julia Denos, The Thingamabob, by Il Sung Na, and What’s Next, by Timothy Knapman. You can use these books as a jumping-off point for activities that nurture curiosity. In The Thingamabob, a curious elephant spends much of the book wondering about what a funny red object is (it’s an umbrella). After reading the story, you might introduce an unusual object to the children—like a cake decorating tool—and allow them to explore it and wonder what it might be or do, before you demonstrate its function. - As you plan activities, identify three “I wonder . . . ” or “I’m curious about . . . ” statements you might use with children each day. For example, if you are planning an exploration with flashlights you might consider asking questions like, “I wonder what will happen if I put my hand in front of the light? I’m holding the flashlight in my hand, but I wonder where the light is shining? Where did my shadow go?” It is a good habit to share and model your own curiosity with the children in your group. There are several different kinds of curiosity (Berlyne 1978). Curiosity can be motivated by a desire for knowledge or information—wondering how a door opens on a busy box or why some objects float and others sink. There is also the curiosity that is driven by a desire to entertain ourselves—like wondering what will happen if we pour water into sand. Still another type of curiosity is driven by the pleasure that comes from mastery—imagine watching a child patiently stacking unevenly shaped rocks into a tower, sometimes failing and sometimes succeeding but all the while maintaining curiosity. Think of curiosity as being prompted by an essential knowledge gap between what we currently know and what we wish to know; we feel intensely motivated to fill this gap with desired knowledge (Loewenstein 1994). When individuals feel curious, they “engage in persistent information-seeking behavior” (Shin & Kim 2019, 854), and all children are curious. Perhaps this is why emerging research shows a connection between higher curiosity in children and higher reading and math scores at kindergarten (Shah et al. 2018). Promoting curiosity in the classroom Each type of curiosity can be nurtured in the early childhood classroom, and each can foster a child’s early learning. Here are some key practices that early childhood educators can try to build on a child’s natural desire to explore and learn. (Also see “Tips for Nurturing Curiosity in Your Early Childhood Classroom” above for reflection questions and suggested activities.) - Practice the 5 Ws of wondering. Look for opportunities to model asking questions and wondering together by asking who, what, where, when, and why (as well as how) questions. Toddlers are able to ask what questions at about 24 months, then where (26–32 months), who (36–40 months), and finally when, why, and how questions (42–49 months). - Use “I wonder . . . ” statements. Find ways to incorporate “I wonder” statements into your discussions and activities: “I wonder what will happen next in this story,” “I wonder why the ball rolled farther on that ramp,” “I wonder how he’s feeling,” and “I wonder where they’ll pour the cement.” By modeling “I wonder” statements, you are showing your own curiosity and encouraging children to engage with you in finding a solution. You may find children begin to share their own wonderings with you—which creates important opportunities to implement an emergent curriculum driven by their curiosity.
Imagine a group of three toddlers running up to show you a cicada casing (empty shell) that they found on a tree trunk. You might use this as a learning opportunity in the classroom. You can introduce new vocabulary, such as cicada, casing, and life cycle, and teach them about the distinctive loud buzzing sound cicadas make. You may also follow up on this conversation with some age-appropriate books on insects to continue the children’s wonderings. - Document children’s wonderings. For toddlers (ages 2 years and up), make it a practice to ask children what they are curious about when you introduce a new material, object, or experience. Document their questions and wonderings on a flipchart and find answers to their questions during this planned exploration. For younger toddlers and babies, use the “sportscasting,” or play-by-play technique, to describe what babies seem to be asking or curious about: “You’re wondering how that top pops up. Watch how I turn the handle—here, you can help me try.” - Point out changes. Identifying changes and patterns in the world around us sparks a child’s desire to “figure out” how things work. “Do you see how this leaf looks different than this leaf over here? What do you see that’s different? Would you like to touch them both?” - Allow children to try and fail. Rather than offering a solution to every problem, share an observation and ask a question: “The block tower keeps falling down. Why do you think that happens? What can we do to make it stay up?” Of course, the teacher knows that putting the giant rectangle block on the very top of the block tower means it will come crashing down. But toddlers do not yet understand balance and weight distribution. Letting children experiment in this way builds their problem-solving skills and grows knowledge about the physical world by harnessing their curiosity about the event. - Follow the children’s lead. Every child is different, and what sparks curiosity will vary from child to child. See what captivates a child’s or group’s interest and suggest, “Let’s learn about this together!” One class, after hearing an informational text about bats, became fascinated by these flying mammals. Their teacher created a “bat cave” out of a child-sized table covered by a blanket. The children would “fly” around the classroom and then return to the cave where they would pretend to hang upside-down. Curiosity can be a source of motivation, learning, and joy for all of us—not just children. In fact, the inborn drive to be curious and seek out knowledge has been shown to activate the reward center of an adult’s brain—seeking knowledge to satisfy our curiosity feels good (Kang et al. 2009). By creating classrooms that celebrate curiosity, we can nurture children’s internal pursuit of knowledge, their pleasure in discovery, and their emerging understanding of the world around them. Most of all, by sharing in their curiosity, we build stronger relationships with children—the kind of relationships where they can better grow and thrive. Rocking & Rolling is written by infant and toddler specialists and contributed by ZERO TO THREE, a nonprofit organization working to promote the health and development of infants and toddlers by translating research and knowledge into a range of practical tools and resources for use by the adults who influence the lives of young children. Berlyne, D.E. 1978. “Curiosity and Learning.” Motivation and Emotion 2 (2): 97–175. Kang, M.J., M. Hsu, I.M. Krajbich, G. Loewenstein, S.M. McClure, J.T.Y. Wang, & C.F. 
Camerer. 2009. “The Wick in the Candle of Learning: Epistemic Curiosity Activates Reward Circuitry and Enhances Memory.” Psychological Science 20 (8): 963–73. Markey, A., & G. Loewenstein. 2014. “Curiosity.” In International Handbook of Emotions in Education, eds. R. Pekrun & L. Linnenbrink-Garcia, 246–64. London and New York: Routledge. Shah, P.E., H.M. Weeks, B. Richards, & N. Kaciroti. 2018. “Early Childhood Curiosity and Kindergarten Reading and Math Academic Achievement.” Pediatric Research 84 (3): 380–86. Shin, D.D., & S. Kim. 2019. “Homo Curious: Curious or Interested?” Educational Psychology Review 31: 853–74. von Stumm, S., B. Hell, & T. Chamorro-Premuzic. 2011. “The Hungry Mind: Intellectual Curiosity Is the Third Pillar of Academic Performance.” Perspectives on Psychological Science 6 (6): 574–88. Copyright © 2020 by the National Association for the Education of Young Children. See Permissions and Reprints online at NAEYC.org/resources/permissions. Rebecca Parlakian serves as the senior director of programs at ZERO TO THREE, managing a portfolio of privately and federally funded projects designed to support the healthy development of infants, toddlers, and their families. In this role, Rebecca has developed parenting resources and professional curricula, and she provides professional development across the United States. She also serves as adjunct faculty at George Washington University’s Graduate School of Education. [email protected]
An antinuclear antibody (ANA) test measures the amount and pattern of antibodies in your blood that work against your own body (autoimmune reaction). The body's immune system normally attacks and destroys foreign substances such as bacteria and viruses. But in disorders known as autoimmune diseases, the immune system attacks and destroys the body's normal tissues. When a person has an autoimmune disease, the immune system produces antibodies that attach to the body's own cells as though they were foreign substances, often causing them to be damaged or destroyed. Rheumatoid arthritis and systemic lupus erythematosus are examples of autoimmune diseases. An ANA test is used along with your symptoms, physical examination, and other tests to find an autoimmune disease. Why It Is Done An antinuclear antibody (ANA) test is done to help identify problems with the immune system, such as autoimmune diseases. How To Prepare You do not need to do anything before you have this test. Talk to your doctor about any concerns you have regarding the need for the test, its risks, how it will be done, or what the results will mean. To help you understand the importance of this test, fill out the medical test information form. How It Is Done The health professional drawing blood will: - Wrap an elastic band around your upper arm to stop the flow of blood. This makes the veins below the band larger so it is easier to put a needle into the vein. - Clean the needle site with alcohol. - Put the needle into the vein. More than one needle stick may be needed. - Attach a tube to the needle to fill it with blood. - Remove the band from your arm when enough blood is collected. - Apply a gauze pad or cotton ball over the needle site as the needle is removed. - Put pressure on the site and then put on a bandage. How It Feels The blood sample is taken from a vein in your arm. An elastic band is wrapped around your upper arm. It may feel tight. You may feel nothing at all from the needle, or you may feel a quick sting or pinch. Risks There is very little chance of a problem from having a blood sample taken from a vein. - You may get a small bruise at the site. You can lower the chance of bruising by keeping pressure on the site for several minutes. - In rare cases, the vein may become swollen after the blood sample is taken. This problem is called phlebitis. A warm compress can be used several times a day to treat this. Results An antinuclear antibody (ANA) test measures the amount and pattern of antibodies in your blood that work against your own body (autoimmune reaction). If there are more antibodies in the blood than normal, the test is positive. When the test is positive, most labs do other tests right away to look for the cause. These tests can find out which antibodies are in the blood in higher amounts than normal. Sometimes ANA test results can be abnormal even when a person is healthy. A positive ANA test may be caused by: - Autoimmune connective tissue diseases. Examples include: - Rheumatoid arthritis. More than one-third of people with rheumatoid arthritis have a positive ANA test. - Systemic lupus erythematosus (SLE). Almost all people with SLE have a positive ANA test. But most people with a positive ANA test do not have SLE. - Sjögren's syndrome. - Juvenile idiopathic arthritis. - Raynaud's syndrome. - Autoimmune diseases of other organs. - Medicines, such as those used to treat high blood pressure, heart disease, and tuberculosis (TB). - Viral infections.
What Affects the Test Reasons you may not be able to have the test or why the results may not be helpful include: - Taking medicine. Many medicines can change the results of this test. Be sure to tell your doctor about all the non-prescription and prescription medicines you take. - A virus. Viral illness can cause an ANA test to be positive and later return to normal. What To Think About - Autoimmune diseases can't be diagnosed by the results of the ANA test alone. A complete medical history, physical examination, and the results of other tests are used with the ANA test to help identify autoimmune diseases, such as systemic lupus erythematosus (SLE) or rheumatoid arthritis. - Some healthy people can have an increased amount of ANA in their blood. For instance, this can happen in some people with a family history of autoimmune disease. The higher the ANA level is, though, the more likely it is that the person has an autoimmune disease. - ANA levels can increase as a person ages. Current as of: August 5, 2020 Author: Healthwise Staff Medical Review: Anne C. Poinier MD - Internal Medicine Brian D. O'Brien MD - Internal Medicine Martin J. Gabica MD - Family Medicine Kathleen Romito MD - Family Medicine Adam Husney MD - Family Medicine
When you are reading and writing files, you might run into problems with whitespace. These errors can be hard to debug because spaces, tabs, and newlines are normally invisible:

>>> s = '1 2\t 3\n 4'
>>> print(s)
1 2	 3
 4

The built-in function repr can help. It takes any object as an argument and returns a string representation of the object. For strings, it represents whitespace characters with backslash sequences:

>>> print(repr(s))
'1 2\t 3\n 4'

This can be helpful for debugging. One other problem you might run into is that different systems use different characters to indicate the end of a line. Some systems use a newline, represented \n. Others use a return character, represented \r. Some use both. If you move files between different systems, these inconsistencies might cause problems. For most systems, there are applications to convert from one format to another. You can find them (and read more about this issue) at Wikipedia.org/wiki/Newline. Or, of course, you could write one yourself.
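In that spirit, here is a minimal sketch of such a converter in Python. It normalizes Windows (\r\n) and old Mac (\r) line endings to Unix (\n); the filenames are hypothetical, and repr is used to inspect the invisible characters before converting.

def normalize_newlines(text):
    # Order matters: replace \r\n first so the \r pass doesn't double-convert it.
    return text.replace("\r\n", "\n").replace("\r", "\n")

# newline="" tells Python not to translate line endings for us while reading.
with open("data.txt", newline="") as f:
    raw = f.read()

print(repr(raw[:40]))  # peek at the raw whitespace before cleaning

with open("data_unix.txt", "w", newline="\n") as f:
    f.write(normalize_newlines(raw))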
Patellofemoral Pain Syndrome The knee is the largest joint in the human body, and proper function and health of the knees are required to perform most everyday activities. The knee is made up of the lower end of the femur (thighbone), the patella (kneecap), and the upper end of the tibia (shinbone). Articular cartilage, which is a smooth substance that protects the bones and allows them to move freely, covers the ends of the three bones and acts as the main "shock absorber". Between the femur and the tibia are two C-shaped cushioning wedges known as the menisci, which act as the secondary "shock absorbers". Large ligaments (tough bands of tissue) help hold the femur and the tibia together in order to stabilize the joint by preventing excessive movement. The joint is lined by the synovial membrane, a thin lining that releases fluid to lubricate the cartilage, reducing the friction within the knee joint and providing nutrition to the cartilage. All of these components work together to facilitate proper function of the knee. As the knee bends and straightens, the patella slides up, down, and side to side, tilting and rotating along a groove in the femur called the trochlear groove. Repetitive abrasion on any surface of the patella or the femur exerts stress on the soft tissues of the patellofemoral joint and can lead to bruising or wear of the articular cartilage within the knee joint. Patellofemoral pain syndrome (PFPS), often referred to as "runner's knee", is a common condition in which an individual feels pain in the front of the knee, either under or around the patella (kneecap). It primarily occurs in teenagers and athletes involved in sports and activities that require significant use of the knees. There are many factors that can contribute to patellofemoral pain, such as overuse of the knee, improper rotation or alignment of the hip and knee joints, muscular weakness or tightness, tightness of the ligaments around the kneecap, or flat feet. Individuals who experience patellofemoral pain syndrome may have the following symptoms: - Mild to severe pain around the kneecap, especially when sitting with bent knees for a prolonged period of time (also known as "theater ache"), squatting, jumping or going up/down the stairs - Occasional buckling of the knee or the sensation that the knee is "giving way" - Sensation that the knee joints are catching, locking, or grinding when walking or moving the knee Patellofemoral pain syndrome can be diagnosed with a combination of patient history, physical examination, and imaging studies. The physician may examine the knee to assess its motion, stability, and overall strength. In some cases, the physician may order X-rays or an MRI scan to determine the extent of damage and rule out any structural damage to the knee and the tissues connected to it. The treatment approach for patellofemoral pain syndrome depends on many factors, including the severity of the condition. Appropriate non-operative treatment will relieve most symptoms and is always the first method of treatment. Operative treatments are not commonly required and should only be considered after trying more conservative approaches. - Rest - It is advised to decrease or completely stop the activity that makes the pain worse. A great way to stay active while allowing the symptoms to subside is to switch to low-impact, cross-training activities such as biking or swimming.
- Ice - Placing ice (with a barrier such as a towel) on the most painful areas of the knee for up to 30 minutes (less if the skin becomes numb) three to four times a day can greatly soothe the pain and keep the swelling down. - Medication - Over-the-counter anti-inflammatory medication such as ibuprofen and naproxen can help reduce pain and swelling. - Durable Medical Equipment - The physician might prescribe orthotic devices such as shoe inserts or special footwear to relieve pressure, support the arch, and absorb impact. The physician or physical therapist may recommend a knee sleeve or brace for some time to support the joint and facilitate the position of the kneecap during the healing process. - Surgery - Surgery is not commonly required and should only be considered if more conservative treatments have failed to reduce the symptoms. The specific type of surgery depends on the exact nature and severity of the patellofemoral pain syndrome and should be discussed with the physician extensively when the need is identified. Recovery from patellofemoral pain syndrome can be an extensive process and depends greatly on the chosen method of treatment. Non-operative recovery usually takes weeks or months. Activities that require heavy use of the knee need to be eased into gradually. In order to reach pre-injury activity level, the patient needs to build strength and flexibility in the muscles around the core, hips, and knees. Recovery post-operatively can take much longer than non-operative recovery. Each patient is unique, and their recovery will depend on the treatment method prescribed by the physician. CALL 911 IMMEDIATELY IF YOU ARE HAVING A MEDICAL EMERGENCY! The information provided on this website, or through links to other sites, is for patient education purposes only and NOT a substitute for professional medical care. This website contains general, non-exhaustive information about common conditions and treatments and should not be used in the place of a visit or the advice of your physician or healthcare provider. If you think you may be suffering from any medical condition, you should seek immediate medical attention. Reliance on the information appearing on this site and any linked sites is solely at your own risk.
In the evening, the daytime bustle of the human world quiets down and many animals stir and begin to speak, some flowers open only at night, and the sky can be dramatic. Moths are one nocturnal aspect to consider inviting into a garden. Types of Moths Moths come in a dazzling variety, from minute to gigantic. Many are intricately patterned and some have brilliant colors. Moths aren’t exclusively nocturnal or attracted to lights, some can’t fly, and not all caterpillars eat green plants. Many exhibit remarkable behaviors. Moths and butterflies make up the order Lepidoptera (scale-winged), but there are a vast number of moths: more than 10,000 species north of Mexico versus about 765 butterflies (and a mere 460 birds that have been seen in the Carolinas). There actually isn’t a clear dividing line between moths and butterflies. As with familiar plants, many moths are partially or wholly referred to by their scientific name, though some have whimsical common names, such as The Asteroid and the inconsolable underwing. There is potential to make scientific discoveries with moths, since little is known about many species (such as their larval foods and distribution) and inaccuracies are common. Information is also needed about what eats them. There are related citizen science projects, such as Caterpillars Count! and Firefly Watch. And National Moth Week is in July each year. Moths fly all year in our area of North Carolina, even when the temperature is in the 30s. The richest seasons seem to be late spring, early summer, and late summer. Many come out shortly after sunset, but various species fly at different hours. Besides being observed on flowers, moths can be attracted by lights and by painting bait on trees. UV light also attracts moths and can reveal caterpillars. Freshly metamorphosed females can be used to attract males of specific species. Other insects can also be attracted by lights, ranging from delicate, dragonfly-like adult antlions (and sometimes actual dragonflies) to stag beetles and superlative Hercules beetles. The Peterson Guide to Eastern Moths and the Princeton Guide to Caterpillars cover several hundred species and their larval foods. The Golden Guide to Butterflies and Moths is simple but covers many species. There are also websites and Facebook groups. How to Attract Moths to the Garden An important consideration for moths is planting larval food plants. A diversity of native plants is best. Or plant for specific moths, though a moth’s preferred food might vary geographically. Also note that some adult moths don’t eat. Geometers, or inchworms, often eat a wide variety of trees. Black cherries feed many inchworms, tentworms, Io moths, and others. Blooming peaches, apples, and similar fruits are beautiful, and the famous pepper-and-salt geometer moth plus many others eat apple foliage. Many feed on hickories, oaks, tuliptrees, birches, locusts, elms, and blueberries. Caterpillars sometimes defoliate plants, but the plant usually recovers. Droppings (frass) can reveal caterpillars hidden in the treetops, such as big green luna moth caterpillars in sweetgums. American persimmon is a host plant for some moths, and the fruit attracts late-season butterflies, wasps, and birds during the day, and opossums, raccoons, foxes, and deer at night, as well as being tasty for us. Some moths like vegetables.
Five-spotted hawkmoths and Carolina sphinxes are large, fast-flying moths that can eat tomatoes as caterpillars (being tomato and tobacco hornworms, respectively), plus there are squash vine borers and day-flying wasp mimics. Catalpa sphinx moths feed on showy catalpa trees as hornworms, while pandorus, hog, and other sphinxes can eat Virginia creeper. I’ve seen snowberry and hummingbird clearwing hornworms eating Japanese honeysuckle and ornamental Viburnums, but they probably prefer native species. I was surprised to find a rustic sphinx hornworm eating Chinese privet. I would like to see a Cynthia silkmoth, a species introduced along with Ailanthus for sericulture. Weeds such as plantains feed great leopard moths and woolly bears (Isabella tiger moths). Sometimes moths drink dew or water from hands, though care should be taken with skin chemicals if insects are handled. Flowers attractive to butterflies may also attract moths. One September I visited the Atlanta area, where ubiquitous yellow lantanas attracted red-orange Gulf fritillary butterflies (which are rare around the Triangle); after sunset they were replaced by hummingbird moths. Buttonbushes, usually found near water but able to grow in upland gardens, are very attractive to butterflies and attract moths and other insects at night, as well as hunting green treefrogs. Other flowers include Abelias, butterfly bushes, milkweeds, ornamental Nicotianas, and phlox. Yarrow is both a nectar and host plant for moths, though moth-pollinated flowers tend to be pale, fragrant, and dense. Many moth caterpillars spin cocoons in which to pupate, while others dig into the ground, such as hickory horned devils, rosy maple moths, and oakworms. Some even bore into wood, such as various dagger moths. Fallen leaves are important because some species attach their cocoons to leaves that fall in the autumn, including tuliptree and polyphemus moths, while the cocoons of related promethea and cecropia moths remain on the twigs. A nocturnal garden could also include housing for bats, owls, and even chimney swifts. Nocturnal Southern flying squirrels live in hollows, reportedly peering out if their tree is tapped. Overwintering lepidopterans might move into your nighttime garden as well. Featured image – Luna moth/Michael Pollock Michael Pollock is a freelance writer who gardens in Durham. He has written for publications such as Carolina Gardener, The News & Observer’s Durham News, Chatham County Line, and Carrboro Free Press.
Learning stage: Stage 4, Stage 5, Stage 6 A rock is a combination of mineral particles of one kind (such as quartzite) or more than one kind (such as granite). These particles combine through crystallisation of molten magma (igneous rocks), settling of particles (sedimentary rocks), or heat and pressure applied to pre-existing rocks (metamorphic rocks); unlike a mineral, a rock has no set chemical composition or atomic structure. - Molten rock is called magma when it is inside the earth and lava when it is released from a volcano. - Fossils are only found in sedimentary rocks. - Some rocks like pumice can float on water. - The oldest minerals in the world are found in Jack Hills in Western Australia. The mineral is called zircon and it is 4.374 billion years old! - Name an example of a sedimentary rock and describe the way it formed. - Where do igneous rocks form? Give an example of a felsic and a mafic igneous rock. - What are the differences between a foliated and non-foliated metamorphic rock? - Select a metamorphic rock and describe how it formed from its parent rock. A series of three posters showing the main ways that sedimentary, igneous and metamorphic rocks are classified. They can be used to help identify rocks alongside a dichotomous key activity.
What is patent foramen ovale? A foramen ovale is a hole in the heart. The small hole naturally exists in babies who are still in the womb for fetal circulation. It should close soon after birth. If it doesn’t close, the condition is called patent foramen ovale (PFO). PFOs are common. They occur in roughly one out of every four people. If you have no other heart conditions or complications, treatment for PFO is unnecessary. While a fetus develops in the womb, a small opening exists between the two upper chambers of the heart called the atria. This opening is called the foramen ovale. The purpose of the foramen ovale is to help circulate blood through the heart. A fetus doesn’t use their own lungs to oxygenate their blood. They rely on their mother’s circulation to provide oxygen to their blood from the placenta. The foramen ovale helps blood circulate more quickly in the absence of lung function. When your baby is born and their lungs begin to work, the pressure inside their heart usually causes the foramen ovale to close. Sometimes it may not happen for a year or two. In some people, the closure may never happen at all, resulting in PFO. In the majority of cases, PFO causes no symptoms. In very rare cases, an infant with PFO could have a blue tint to their skin when crying or passing stool. This is called cyanosis. It usually only occurs if the baby has both PFO and another heart condition. Most of the time, there’s no need to pursue the diagnosis of a PFO. However, if your doctor feels a diagnosis is necessary, they may recommend an echocardiogram. This technique uses sound waves to get an image of your heart. If your doctor can’t see the hole on a standard echocardiogram, they may perform a bubble test. In this test, they inject a saltwater solution during the echocardiogram. Your doctor then watches to see if bubbles pass between the two chambers of your heart. In most cases, people with PFO have no symptoms or complications. PFO is usually not a concern unless you have other heart conditions. PFO and strokes There is some evidence that adults with PFO may have a higher risk of stroke. But this is still controversial, and research is ongoing. An ischemic stroke occurs when part of the brain is denied blood. This may happen if a clot becomes trapped in one of the arteries of your brain. Strokes can be minor or very serious. Small blood clots may pass through the PFO and get stuck in the arteries of the brain in some people. However, most people with PFO won’t have a stroke. PFO and migraines There may be a connection between PFO and migraines. Migraines are very severe headaches that can be accompanied by blurred vision, shimmering lights, and blind spots. Some people who have had a PFO surgically corrected report a reduction in migraines. In most cases of PFO, no treatment is necessary. A PFO can be closed by a catheterization procedure. In this procedure, your surgeon inserts a plug into the hole using a long tube called a catheter that is usually inserted at your groin. A PFO can be closed surgically by making a small incision, and then stitching the hole closed. Sometimes a doctor can repair the PFO surgically if another heart procedure is being done. Adults with PFO who’ve had blood clots or strokes may need surgery to close the hole. Medication to thin blood and prevent clots from forming may also be prescribed instead of surgery. The outlook for people with PFO is excellent. Most people will never even realize they have a PFO. 
Although stroke and migraines are possible complications of PFO, they aren’t common. If you need surgery for a PFO, you should expect to recover fully and live a normal and healthy life.
There is more to education than teaching reading, writing, and math. Knowledge goes far beyond the realm of traditional subjects. As a child grows and develops, it's important for them to have a creative outlet. Many people will preach about teaching children more math and science, but there is serious value in creativity. Having creative outlets helps children open their minds and explore self-expression. Motor Skills and Coordination A major effect creative outlets can have on early childhood development is on motor skills and overall coordination. Certain arts and crafts, such as painting or drawing, can greatly improve children's physical development. When children are able to manipulate and handle tools such as paintbrushes at an early age, they quickly learn how to define their preferences, like using their right or left hand, and connect with the world around them. Improving Social Behavior By placing children in a creative place such as an art room or dance studio, they are not only around others, they are also working with others. This kind of creative environment helps children learn how to interact and socialize. It helps them cover the basics of social learning in a fun environment where they can freely express themselves. They'll also learn early on how to respect others' self-expression. Expressing Their Feelings Growing up, it can be hard to process and express emotions in a healthy way. Many children are prone to throwing tantrums or crying at the drop of a hat. By introducing creativity early in their education, they are given a productive outlet to express their feelings and emotions. This helps greatly with their emotional development, and they will carry these coping skills well into adulthood. New Way of Thinking Fostering mental growth and development is especially essential during early education. The skills children learn in early education are what they will carry with them throughout their entire educational career. By giving them more creative outlets, children develop new ways of thinking. They develop problem-solving skills, find new solutions, and think well outside of the box. Creativity encourages them to find new ideas and create their own path.
Take on this challenging weather word scramble! Your child must use his logic and his knowledge of weather systems to figure out each word. Put your child's memory and geographic knowledge to the test with this challenging exercise, where she'll list off the 50 states in alphabetical order. Your fifth grader will learn impressive words like "tenacious" and "strident" in this vocabulary builder worksheet. Learn some new words with this worksheet. Reinforce known and new words with your child using this vocabulary worksheet. Learn words like "recitation," "incorporate," and more. Words like "antagonist" and "transient" are confusing for adults and kids. Fifth graders will learn these words and more in this vocabulary worksheet. This vocabulary list includes words like "appreciate" and "petulant." Build your child's vocabulary with this vocabulary list. Help your fifth grader learn the words like "famished" and "industrious" with this vocabulary worksheet. Learn to identify and use these words and more. Teach your fifth grader words like ancient, option, and achievement with this vocabulary worksheet. Learn the meanings and spellings of these words and more. Encourage your fifth grader to grow his vocabulary with this word-focused worksheet. Kids will learn new words, then write them into sentences.
Training artificial intelligence with artificial X-rays Artificial intelligence (AI) holds real potential for improving both the speed and accuracy of medical diagnostics. But before clinicians can harness the power of AI to identify conditions in images such as X-rays, they have to ‘teach’ the algorithms what to look for. Identifying rare pathologies in medical images has presented a persistent challenge for researchers, because of the scarcity of images that can be used to train AI systems in a supervised learning setting. Professor Shahrokh Valaee and his team have designed a new approach: using machine learning to create computer-generated X-rays to augment AI training sets. “In a sense, we are using machine learning to do machine learning,” says Valaee, a professor in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE) at the University of Toronto. “We are creating simulated X-rays that reflect certain rare conditions so that we can combine them with real X-rays to have a sufficiently large database to train the neural networks to identify these conditions in other X-rays.” Valaee is a member of the Machine Intelligence in Medicine Lab (MIMLab), a group of physicians, scientists and engineering researchers who are combining their expertise in image processing, artificial intelligence and medicine to solve medical challenges. “AI has the potential to help in a myriad of ways in the field of medicine,” says Valaee. “But to do this we need a lot of data — the thousands of labelled images we need to make these systems work just don’t exist for some rare conditions.” To create these artificial X-rays, the team uses an AI technique called a deep convolutional generative adversarial network (DCGAN) to generate and continually improve the simulated images. GANs are a type of algorithm made up of two networks: one that generates the images and the other that tries to discriminate synthetic images from real images. The two networks are trained to the point that the discriminator cannot differentiate real images from synthesized ones. Once a sufficient number of artificial X-rays are created, they are combined with real X-rays to train a deep convolutional neural network, which then classifies the images as either normal or identifies a number of conditions. “We’ve been able to show that artificial data generated by deep convolutional GANs can be used to augment real datasets,” says Valaee. “This provides a greater quantity of data for training and improves the performance of these systems in identifying rare conditions.” The MIMLab compared the accuracy of their augmented dataset to the original dataset when fed through their AI system and found that classification accuracy improved by 20 per cent for common conditions. For some rare conditions, accuracy improved up to about 40 per cent — and because the synthesized X-rays are not from real individuals, the dataset can be readily available to researchers outside the hospital premises without violating privacy concerns. “It’s exciting because we’ve been able to overcome a hurdle in applying artificial intelligence to medicine by showing that these augmented datasets help to improve classification accuracy,” says Valaee. “Deep learning only works if the volume of training data is large enough and this is one way to ensure we have neural networks that can classify images with high precision.”
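The article describes the DCGAN setup only in prose, so here is a minimal, hedged sketch of the idea in PyTorch. Everything below is generic and illustrative: the 64x64 single-channel image size, the layer widths, and the hyperparameters are common DCGAN defaults, not the MIMLab's actual architecture, and the random tensor stands in for a batch of real X-rays.

import torch
import torch.nn as nn

# Generator: maps a 100-dimensional noise vector to a 64x64 grayscale image.
netG = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),   # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),   # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),     # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),      # 32x32
    nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                               # 64x64
)

# Discriminator: scores an image as real (toward 1) or synthetic (toward 0).
netD = nn.Sequential(
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),                          # 32x32
    nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2, True),     # 16x16
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),   # 8x8
    nn.Conv2d(128, 1, 8, 1, 0), nn.Sigmoid(),                                    # 1x1 score
)

criterion = nn.BCELoss()
optD = torch.optim.Adam(netD.parameters(), lr=2e-4, betas=(0.5, 0.999))
optG = torch.optim.Adam(netG.parameters(), lr=2e-4, betas=(0.5, 0.999))

real = torch.randn(16, 1, 64, 64)    # stand-in for a batch of real, labelled X-rays
noise = torch.randn(16, 100, 1, 1)

# Discriminator step: push real images toward label 1 and fakes toward label 0.
optD.zero_grad()
fake = netG(noise)
lossD = criterion(netD(real).view(-1), torch.ones(16)) + \
        criterion(netD(fake.detach()).view(-1), torch.zeros(16))
lossD.backward()
optD.step()

# Generator step: adjust the generator so its fakes are scored as "real".
optG.zero_grad()
lossG = criterion(netD(fake).view(-1), torch.ones(16))
lossG.backward()
optG.step()

Looped over many batches, the two losses push against each other until the discriminator can no longer tell the synthetic images apart, at which point the generated X-rays can be pooled with real ones to train the downstream classifier the article describes.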
Materials provided by University of Toronto Faculty of Applied Science & Engineering.
Taking care of the inside of the mouth (the teeth and gums) is called dental care. Dental care includes brushing, flossing and seeing your dentist or dental hygienist regularly to keep your teeth healthy, along with eating healthy foods such as vegetables, fruits, dairy products and whole grains. - Bad Breath: Bad breath is also known as halitosis. Dental problems such as gum disease, dry mouth, oral cancer and bacteria on the tongue can cause bad breath. If you suffer from chronic bad breath, consult your dentist. - Loss of Tooth: A sticky substance known as plaque mixes with the food that we consume and produces an acid that attacks tooth enamel, causing tooth decay and, eventually, tooth loss. - Periodontal Diseases: Periodontal disease is also known as gum disease. It affects the gums surrounding the teeth and is one of the main causes of tooth loss among adults. - Dental Cancer: Oral cancer is a severe and serious disease that sometimes causes death. It can be noticed from common symptoms like ulcers and discoloration of tissue. This kind of dental disease affects oral structures such as the mouth, lips and throat. Studies say that tobacco users have a high chance of getting oral cancer. - Mouth Sores: There are different types of sores, like fever blisters, cold sores, ulcers, thrush and canker sores. Mouth sores can be irritating and can make eating uncomfortable. - Sensitivity Problem: If there is pain or discomfort in your teeth while consuming sweets, cold drinks or ice cream, this is called tooth sensitivity. - Organ Diseases: Many dental problems are said to cause diseases of other organs: - An infection in a wisdom tooth can cause heart disease. - Problems with the front teeth are one of the causes of kidney problems. - Removal of some bad teeth can cause severe arthritis. - Microbes originating from infected teeth and gums can affect our organs very badly. FREE DENTAL CARE: Professional dental care is costly, but there are services that provide free dental care: the National Health Service (NHS) and the Health Service Executive (HSE). National Health Service (NHS): The National Health Service provides tax-funded health services to the public, such as: - Hospital Services: free medical facilities, including medications, surgery and admission fees. - Primary Care: under primary care, dentists, pharmacists and opticians provide an independent service to the public. - Community Services: these services include child welfare clinics, vaccination, ambulance services, maternity services and environmental health services. Health Service Executive (HSE): The Health Service Executive also runs free schemes and services for dental care: - Payment Arrangements: the Primary Care Reimbursement Service (PCRS), part of the HSE, handles payments to dentists, general practitioners and community pharmacies. - Dental Treatment Services Scheme (DTSS): under the DTSS, clinical treatments, including dentures, are arranged; all expenditure under the DTSS is paid by the PCRS.
From an article in Rise Earth: Scientists who analysed a ‘hobbit’-sized skull found in Indonesia back in 2004 have claimed that it is not the skull of a modern human. The fossil was discovered in Indonesia and named Homo floresiensis, or ‘hobbit’, but its species was not known. Now researchers at the Department of Anatomical Sciences at Stony Brook University claim the shape of the skull is consistent with a scaled-down human ancestor but not with modern humans, Science Daily reports. Karen Baab said: ‘The overall shape of the skull, particularly the part that surrounds the brain, looks similar to fossils more than 1.5 million years older from Africa and Eurasia, rather than modern humans, even though Homo floresiensis is documented from 17,000 to 95,000 years ago.’ The researchers believe their findings counter one scientific theory that says the creature was a diminutive human that had suffered microcephaly, which leads to a smaller cranium. They concluded that the skull had not suffered microcephaly because the differences between its right and left sides were not as great as would be expected in that case. Dr Baab acknowledged, however, that the controversy over the evolutionary origins of the ‘hobbit’ will continue. The results of the study correspond with findings made about the rest of the creature’s skeleton. A range of primitive features have been documented in both the upper and lower limbs of Homo floresiensis, highlighting the many ways that these hominins were unlike modern humans.
As we discussed in chapter 13, fertilization in humans happens in the oviducts. For this to happen, the sperm need to arrive in the oviducts when there is an egg there. Sperm can stay alive in the female reproductive tract for 3-5 days. An egg needs to be fertilized within about 12 hours of ovulation; and while some fast-swimming sperm can reach the egg within an hour, many will take a day or more to swim that far. Based on these sperm swimming and egg survival times, the most likely timing for vaginal sex to occur to achieve fertilization is from 1-3 days prior to ovulation. Note: while there are birth control methods that take advantage of this timing, there are a LOT of babies conceived by people thinking that it is a “safe” time to have sex. Keep in mind that the drive to have sex is increased for both males and females during times of high fertility. So if you are trying to convince yourself you are “safe” from pregnancy, remember there are evolutionary drivers for reproduction that may be greater than your ability to calculate pregnancy risk. In a typical ejaculate there are about 100 million sperm. When these sperm are ejaculated in semen into a vagina, they begin swimming toward the cervix, through the cervix, through the uterus and into the oviducts. This is a perilous journey for the sperm. Many never make it through the cervix, some are attacked by immune cells in the uterus, and roughly half of those that remain enter the empty oviduct (remember, in general only one follicle in one ovary matures per menstrual cycle). Out of the 100 million-plus contenders only several dozen sperm actually reach the egg. When sperm and egg meet in the oviduct, the acrosome, a cap on the head of the sperm, releases enzymes that help the sperm swim through the jelly-like coating of the egg. Once through this layer, the sperm fuses with the cell membrane of the egg; the membrane then undergoes chemical changes, blocking other sperm. Only the genetic material from the sperm enters the egg (mitochondria and all other parts of the sperm remain outside the egg). At this point the egg completes meiosis II. The genetic material from the sperm fuses with the genetic material from the egg and a fertilized egg, or zygote, is formed.
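Because the fertile-window reasoning above is just timing arithmetic, a short sketch can make it concrete. This is a toy Python calculation using the viability figures quoted in the text (sperm viable up to 5 days, the egg for about 12 hours); the ovulation date is hypothetical, and, as the passage itself warns, arithmetic like this is not a dependable way to avoid pregnancy.

from datetime import date, timedelta

def fertile_window(ovulation_day, sperm_life_days=5):
    # Sperm deposited up to sperm_life_days before ovulation may still be viable;
    # the egg survives only about 12 hours, so the window closes on ovulation day.
    return ovulation_day - timedelta(days=sperm_life_days), ovulation_day

opens, closes = fertile_window(date(2024, 6, 14))  # hypothetical ovulation date
print(f"Sex between {opens} and {closes} could plausibly result in fertilization,")
print("with 1-3 days before ovulation being the most likely timing.")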