With this goal in mind – read often and from a wide variety of sources – we are spending time each week exploring different reading bins in the classroom. My reading group is keen to explore and share picture books. This week we explored our Rhyme and Repetition bin using this format:
1. Explore the bin
2. Spend time reading some self-selected titles
3. Talk about what we noticed
4. Write a reflection
I brought out this bin and students predicted that many of these titles would have rhyming parts and selections of text that repeated (the title of the bin made this prediction a pretty easy one :-)). I read a few pages of a few books to model exactly that. We identified how often the ending words rhymed and how sentence structures or specific phrases repeated. Students then helped me spread the books out on the carpet, and every child spent fifteen minutes reading a variety of titles from this set of books.
We then gathered back at the carpet and shared what we had noticed focusing on this question:
Our list definitely included the rhyming and the repeating, but students started with the fun aspect of the stories, pointing out that they were often silly, far-fetched and funny. It was clear that the word play brought a lightness to the books. One student even commented that the authors would have to work very hard to make all the words work together.
I then asked students to take just five minutes and write their own reflections about the books they read from this bin. This student was a big fan of these titles! He writes: “I noticed that they (meaning the authors) were worked very hard, They are the best in the world. They are very funny.”
Today during independent reading, some students returned to this bin. It’s all about exposing students to new titles and genres to broaden their reading choices. Each week, I plan to introduce a different bin of books and follow a similar process. It’s a great opportunity to work on our reading stamina and increase our knowledge of book choices.
A team of astronomers led by Joseph Hennawi of the Max Planck Institute for Astronomy, using the W.M. Keck Observatory in Hawaii, has discovered the first quadruple quasar: four quasars at approximately the same redshift (z ~ 2), located in close proximity on the sky. The online article1 from the Max Planck Institute is titled “Quasar quartet puzzles scientists” with the subtitle “Astronomers must rethink models about the development of large-scale cosmic structures.” This is the first known group of four quasars with the same redshift found in the same location on the sky. A research paper has been accepted for publication in the journal Science and a preprint is now available.2
The quartet resides in one of the most massive structures ever discovered in the distant universe, and is surrounded by a giant nebula of cool dense gas. Either the discovery is a one-in-ten-million coincidence, or cosmologists need to rethink their models of quasar evolution and the formation of the most massive cosmic structures.1
The logic goes as follows. Quasars constitute a very brief phase in the evolution of a galaxy, lasting only about 10 million years. They are superluminous because their brightness is driven by matter falling into the supermassive black hole at their centre.
During this phase, they are the most luminous objects in the Universe, shining hundreds of times brighter than their host galaxies, which themselves contain hundreds of billions of stars. But these hyper-luminous episodes last only a tiny fraction of a galaxy’s lifetime, which is why astronomers need to be very lucky to catch any given galaxy in the act.1
As a result, it has been calculated that there was only a 1-in-10-million chance of seeing four nearly identical quasars all in the same nebula. They are rare. How did they form so early in the Universe, i.e. so soon after the alleged big bang? So not only was it lucky to see them; how did they form at all at that epoch in the history of the Universe? And why is the density of galaxies at that redshift, in that region of space, so high, much higher than the standard model would predict?
“There are several hundred times more galaxies in this region than you would expect to see at these distances,” explains J. Xavier Prochaska, professor at the University of California Santa Cruz and the principal investigator of the Keck observations.1
According to their redshifts (z ~ 2) and the usual Hubble law, these objects are observed at a distance of about 10 billion light-years, which means, according to the standard model, that they are being observed at a stage of their evolution only about 4 billion years after the big bang. How did this happen? How did they grow to be so massive so soon? Not only that, how did all the observed galaxies in the group (which they call a proto-cluster, because it is supposed to be so distant that it is observed early in the age of the Universe) evolve to this state so soon in the evolution of the Universe?
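For a rough check of these figures (a minimal sketch, assuming the Planck 2018 cosmology bundled with astropy rather than whatever parameters the paper itself adopts), the quoted “about 10 billion light-years” corresponds to the light-travel time to z ~ 2:

```python
# Back-of-the-envelope check of the standard-model figures quoted above,
# assuming astropy's built-in Planck 2018 cosmology (the paper may use
# slightly different parameters).
from astropy.cosmology import Planck18 as cosmo
import astropy.units as u

z = 2.0
print(cosmo.lookback_time(z))   # ~10.5 Gyr light-travel time, i.e. "~10 billion light-years"
print(cosmo.age(z))             # Universe ~3.3 Gyr old at emission, a few billion years after the big bang
print(cosmo.comoving_distance(z).to(u.lyr))  # present-day (comoving) distance, ~1.7e10 light-years
```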
The distances they give are based on the standard Hubble-law interpretation. The quasars have redshifts z ~ 2, but if those redshifts are not due to the expansion of the Universe and instead, as Halton Arp has suggested, are intrinsic redshifts, then this cluster of galaxies, including the quasars, is not so distant after all. If that were the case, it would change the distance figures they quote (see the figure caption above), and the nebula would not be one million light-years across but much less. It would also mean that the quasars are not so superluminous, since their luminosity is also calculated from their Hubble-law distance. So, without subscribing to the big bang model, that would solve some of their dilemmas. Nevertheless, Halton Arp's concept, with quasars being ejected from the hearts of active galaxies, is quite a different scenario anyway.
But clearly the discovery of this quartet of quasars is another big bang headache (emphasis added):
Hennawi explains “if you discover something which, according to current scientific wisdom, should be extremely improbable, you can come to one of two conclusions: either you just got very lucky, or you need to modify your theory.”1
Yes, that is right. The theory is wrong, but it does not need modification; it needs to be discarded.
As such, the discovery of the first quadruple quasar may force cosmologists to rethink their models of quasar evolution and the formation of the most massive structures in the universe.1
I note that the quasar quartet all have redshifts very close to one of the quantised Karlsson values, zK = 1.96. The idea is that the Karlsson redshift is intrinsic to the quasar (not due to expansion of the Universe), and hence any remaining component of a Hubble-law, distance-determining redshift would be very small indeed. This fact alone would solve the dilemmas here.
I put this to my friend Dr Chris Fulton, who last published a paper with Halton Arp on quasar-galaxy associations,3 and with whom I have been collaborating for many years on this subject.4,5 I asked Chris to have a look at the online NED database to see if there were any possible candidate galaxies that could have been the parent galaxy from which these quasars might have been ejected. Note that the symbols QSO and AGN both indicate quasars. After reading the research paper, and in reference to its Fig. 1b, shown left, Chris wrote (my emphasis added),
The f/g quasar and the three AGNs are in striking alignment, so I would expect the parent to be somewhere along that line, though not necessarily between AGN1 and AGN3, and at a lower redshift, z < 0.5. NED shows a plethora of galaxies with known redshifts (z) within 30′ of position 08h41m+39d21m, 80 of them to be exact, and there are many QSO candidates as shown in the attached list. The Max Planck article2 all but states, correctly, that the standard big bang model is in serious trouble. What else but an ejection from a central source would form a straight line of such massive objects at such great separations from one another?
But if the quasars were ejected from the heart of an active parent galaxy (or galaxies) then the standard model would be falsified. The standard big bang explanation is that all matter came from the big bang and galaxies formed from accretion of matter, and then grew by mergers of galaxies. No ejection of young new matter from AGNs is possible in the standard big bang.
I conclude then that this is further evidence against the standard big bang model. A far better explanation is that God created with a real great light show where He ejected newly-born galaxies out of the hearts of active parent galaxies.
- Quasar quartet puzzles scientists, May 15, 2015
- J.F. Hennawi, J.X. Prochaska, S. Cantalupo, F. Arrigoni-Battaia, Quasar Quartet Embedded in Giant Nebula Reveals Rare Massive Structure in Distant Universe, May 14, 2015, arXiv.org preprint 1505.03786v1.
- C.C. Fulton and H.C. Arp, The 2dF redshift survey. I. Physical association and periodicity in quasar families, Ap J 754:134-143, 2012.
- J.G. Hartnett, Quasar-galaxy associations.
- J.G. Hartnett, Quasar redshifts blast big bang.
Celebrating 100 years of Eddington's eclipse
May 29th marked an important centenary for the world of physics: on that day in 1919, Cambridge astronomer Arthur Eddington helped organise expeditions to two continents to take what are now some of the most famous photographs we have. The results sent the scientific world into turmoil. Newton’s laws of gravity, which had stood unshaken for hundreds of years, were overturned by a young German scientist called Albert Einstein. To hear how it happened, Izzie Clarke headed over to the Institute of Astronomy at the University of Cambridge to see space scientist Carolin Crawford.
Carolin - The dominant law of gravity was, of course, Newton's laws of gravity - which he had devised, and which proved a very accurate description of the way objects moved on the earth, and how the planets move round the sun. It was only during the middle part of the 19th century that it was clear that there was one thing it didn't quite account for, and that was the way that Mercury's orbit moved round the sun. But Einstein was the first one that could account for everything that Newton could account for - that we saw in space and on Earth - but could also justify what was happening to Mercury, due to an extra curvature of space-time in the proximity of the sun.
Izzie - This idea of space-time was at the heart of Einstein's theory: that space and time can be considered as one entity. I know, it's quite a lot to get your head around, but bear with me. Say you need to pop to the shops. You could say that they're 10 minutes away - or equally a few kilometres away. You can describe that journey in distance and time, because you know how fast you walk. There's a similar thing with space-time: that both space and time are interchangeable because you know the speed of light, and the speed of light is the same everywhere. And what Einstein then said that is so different from Newton is that this space-time could be distorted by massive objects like our sun - that they bend the shape of space, which creates that key difference in Einstein's theory of gravity.
Carolin - Light likes to travel in a straight line. But if you had the light from a distant star, a light ray, and it just grazed the surface of the sun - because the sun is the nearest large mass we have around - it would just deflect that light a little bit and cause a tiny shift in the apparent position of the stars, but it was so small it wasn't practically observable. The difference was that Einstein made a prediction that when you take into account the curvature of space you actually double the amount of that deflection, which brings it into the realms of observability; and it also provides a very nice discrimination between what Newton says and what Einstein says, if you could measure this deflection.
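To put numbers on that doubling (a quick sketch using the textbook grazing-incidence formula and astropy’s solar constants; not part of the interview):

```python
# Rough calculation of the light-deflection angles discussed above,
# using astropy's physical constants: alpha = 4GM/(c^2 R) for a ray
# grazing the solar limb.
import numpy as np
from astropy.constants import G, M_sun, R_sun, c

rad_to_arcsec = 180.0 / np.pi * 3600.0

alpha_einstein = (4 * G * M_sun / (c**2 * R_sun)).decompose().value * rad_to_arcsec
alpha_newton = alpha_einstein / 2.0   # the older, "Newtonian" prediction is half as large

print(f"Einstein: {alpha_einstein:.2f} arcsec")  # ~1.75 arcsec at the solar limb
print(f"Newton:   {alpha_newton:.2f} arcsec")    # ~0.87 arcsec
```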
Izzie - But how can you measure the deflection of light from a distant object if your own giant fireball, i.e. our sun, is in the way? It would be impossible to distinguish the light from the two sources. The idea was proposed that pictures of distant stars could be taken during an eclipse, where the Moon blocks the light from our sun.
Carolin - So it wasn't a new idea to make this measurement, but the exciting thing was there was a particularly good eclipse coming up on 29th May 1919. It was good because it was of long duration, about six minutes or so, which gives you plenty of time to take your images. And also, quite unusually, the sun would be right in front of a very bright nearby star cluster - it's called the Hyades, in the constellation of Taurus - which meant that during the eclipse the sun would lie in a region surrounded by fairly bright stars, which would enable the observations. So Arthur Eddington, who was the director of the observatories here at Cambridge at the time, was one of the few people who had studied the theory closely enough to fully understand Einstein's ideas. And it was Arthur Eddington and also particularly Frank Dyson, who was the Astronomer Royal at the time, who realised that this was a particularly momentous eclipse for doing this. What they decided to do was to make two expeditions. There was one that was led by Andrew Crommelin from the Royal Greenwich Observatory which took equipment to Sobral in northern Brazil, and they would catch the start of the eclipse. And then the path would move right across the Atlantic Ocean and on the other side you'd have Sir Arthur Eddington and his small team who would do the same observations on an island off the coast of West Africa. They're carrying out the same experiment in both locations, and the ideal thing about having two locations, of course, is that you're never quite sure of the weather; and indeed, both expeditions had problems. In Principe, off the west coast of Africa, Eddington had terrible weather; and so they took plenty of images, but in most of them there’s too much cloud in the way. And in Sobral, in Brazil, they had problems with the equipment - there were vibrations, which just ended up blurring some of the images. And in fact the really important data from Brazil were from a sort of backup telescope they'd just taken as a spare. But the true and precise measurements don't happen until they come back to the UK, and then the results are announced in November 1919 at a very special occasion at the Royal Society in London.
Izzie - And what did they find?
Carolin - They found that their results vindicated Einstein's predictions of what should happen under his theory of gravity, rather than Newton’s.
Izzie - And how important was that finding, and what did that do for the field of physics?
Carolin - Well, it has revolutionised physics. I mean, at the time it was hugely important because very few people had really taken notice of Einstein's theories, and this idea of the whole curvature of light is quite a conceptual leap. And, to be quite honest, for a lot of scientists - and we have this in notes and letters - there is a resistance to having to use a more complicated theory. You know, if Newton's laws were sort of good enough, why not use those? But the point is, once this announcement is made, Einstein's relativity is proven as the better description of what happens on earth and in space - and at that point Einstein becomes the celebrity genius.
Izzie - And so Einstein's theory of general relativity was accepted: that what we perceive as the force of gravity in fact arises from that all-important curvature of space and time.
Carolin - Relativity is part of our general understanding of physics, so it's crucial to how we use physics. Now it may not make much difference here on earth, because Newton's laws are good enough. Where it becomes important is in more extreme situations - and so the closest thing to earth that people might run into every day is of course your GPS satellites. If you didn't take into account the relativistic corrections for the fact that they're travelling in a reduced gravity field - and also faster compared to the surface of the earth - they would start to give you inaccurate results, ending up about 10 km out per day. That's an immediate result that people might be able to relate to. I will say though, it's incredibly important for astronomers, because relativity gives the only good description of what happens where you have very large masses - and in astronomy, of course, you're involving the largest masses possible.
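As a rough check of that figure (a sketch only; the orbital radius and Earth parameters below are nominal values assumed for illustration, not numbers from the interview):

```python
# Order-of-magnitude check of the "~10 km per day" GPS figure mentioned above.
import math

GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8        # speed of light, m/s
r_earth = 6.371e6       # mean Earth radius, m
r_gps = 2.6560e7        # nominal GPS orbit radius, m
day = 86400.0           # seconds per day

v = math.sqrt(GM / r_gps)                                 # orbital speed, ~3.9 km/s

dt_gravity = GM / c**2 * (1/r_earth - 1/r_gps) * day      # weaker gravity: clock runs fast, ~ +45.7 us/day
dt_velocity = -v**2 / (2 * c**2) * day                    # orbital speed: clock runs slow, ~ -7.2 us/day
dt_net = dt_gravity + dt_velocity                         # net ~ +38.5 us/day

print(f"net clock offset : {dt_net*1e6:.1f} microseconds/day")
print(f"ranging error    : {dt_net*c/1000:.1f} km/day")   # ~11-12 km/day, i.e. "about 10 km out"
```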
What a Humidity Gauge Measures
Humidity, a measure of water vapor in the air, is one of the variables measured in basic meteorology. There are actually several different kinds of humidity, but what most people mean when they talk about "humidity" is relative humidity. Relative humidity is defined by Perry's Chemical Engineers' Handbook as "the ratio of the partial pressure of water vapor in the mixture to the saturated vapor pressure of water at a prescribed temperature."
In other words, relative humidity is an indirect way of measuring how much water vapor is in the air at a given time versus how much water vapor the air can hold at maximum. It is expressed as a percentage. When the relative humidity reaches 100 percent, water vapor in the air begins to condense back into liquid water as dew, fog, or rain.
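In code, the definition above looks like this (a minimal sketch; the Magnus approximation used for the saturation vapor pressure is a common empirical fit whose coefficients vary slightly between references):

```python
# Relative humidity as the ratio of actual vapor pressure to saturation
# vapor pressure, expressed as a percentage.
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure over water in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity(vapor_pressure_hpa, t_celsius):
    """Relative humidity in percent."""
    return 100.0 * vapor_pressure_hpa / saturation_vapor_pressure(t_celsius)

print(relative_humidity(vapor_pressure_hpa=14.0, t_celsius=25.0))  # roughly 44%
```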
Relative humidity is useful to know because it gives an idea of how "wet" the air feels. Low relative humidity can lead to dry skin, itchiness, and thirstiness. High relative humidity makes cold temperatures feel colder and hot temperatures hotter. When the weather is very hot, high humidity impairs the body's ability to cool down by sweating. Relative humidity also has an impact on delicate machinery such as computer circuit boards and on the development of microorganisms and fungi. Inside the home, high humidity makes mildew more likely to develop, while low humidity facilitates the spread of the flu virus.
For all of these reasons, and more, it is useful to be aware of the relative humidity. Any instrument used to measure humidity is referred to as a hygrometer, or humidity gauge.
Cooled Mirror Dew Point Hygrometer
One of the most precise and modern types of hygrometer is called a "cooled mirror dew point hygrometer." A mirror is chilled, which causes condensation to form on its surface. The higher the relative humidity, the more condensation forms. This is measured using an optical sensor that detects the droplets distorting the smooth surface of the mirror. These hygrometers are electronic devices that require special expertise to build.
The first known hygrometer was invented about 500 years ago by Leonardo da Vinci. He came up with the idea of weighing a ball of wool, whose weight would change depending on the moisture in the air. This was not a very effective design, and it would be a long time before relative humidity could be measured accurately.
A little over 200 years ago, a scientist named Horace Bénédict de Saussure invented a hygrometer involving a strand of human or animal hair. Depending on the relative humidity, the hair would shrink or grow in length by a very small amount, growing longer in high humidity and shrinking in low humidity. When the hair is placed under tension, this change can be measured. These so-called "hair hygrometers" are still used today.
The most well-known type of hygrometer is called a "psychrometer." (Psychrometry is a field of engineering concerned with the properties of mixtures of gas and vapor. "Psychro" is a Greek root that means "cold.") A psychrometer works by using two thermometers in tandem. One of the thermometers is continuously kept wet by being covered with something like a wet cloth. As the water evaporates from the cloth, it absorbs energy, lowering the temperature in the immediate vicinity. (It's the same reason that your swimsuit feels cold after you get out of a swimming pool or hot tub.) This temperature drop is measured by the wet thermometer, which records a lower temperature than it otherwise would.
The other thermometer stays dry and is used as a reference. It measures the actual temperature of the air. The relative humidity can then be calculated by measuring the difference in temperature readings between these two thermometers. If the temperature difference is low, then the relative humidity must be high, because it means less water is able to evaporate from the cloth covering the wet thermometer, which in turn means that the air already has a lot of water in it. Likewise, if the temperature difference is high, then the relative humidity must be low, since more water is able to evaporate from the cloth.
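A minimal sketch of that wet-bulb/dry-bulb calculation, assuming the standard psychrometer formula with a ventilated-instrument coefficient of about 6.62e-4 per °C and the same Magnus fit as above (real instruments need their own calibration):

```python
# Relative humidity from dry-bulb and wet-bulb temperatures using the
# psychrometer formula; coefficient and saturation-pressure fit are the
# assumed values described in the lead-in.
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure over water in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def psychrometer_rh(t_dry, t_wet, pressure_hpa=1013.25, coeff=6.62e-4):
    """Relative humidity (%) from dry-bulb and wet-bulb temperatures (deg C)."""
    e_actual = saturation_vapor_pressure(t_wet) - coeff * pressure_hpa * (t_dry - t_wet)
    return 100.0 * e_actual / saturation_vapor_pressure(t_dry)

# Small wet/dry difference -> high humidity; large difference -> low humidity.
print(f"{psychrometer_rh(t_dry=25.0, t_wet=23.0):.0f}%")   # roughly 84%
print(f"{psychrometer_rh(t_dry=25.0, t_wet=15.0):.0f}%")   # roughly 33%
```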
Psychrometers are only effective if they are very precisely calibrated, and they must be recalibrated frequently.
- "Teacher's Weather Sourcebook"; Tom Konvicka; 1999
- "Water Vapor Measurement"; Pieter R. Wiederhold; 1997
- "Perry's Chemical Engineers' Handbook (7th Edition)"; R.H. Perry and D.W. Green; 2007 |
Paleontologists have made an exciting discovery near the city of Ganzhou, in southern China – the near-complete fossil remains of a skull at a site that dates back to the Cretaceous period. The skull belongs to a dinosaur that has been scientifically designated Qianzhousaurus sinensis, a long-snouted species belonging to the same family as Tyrannosaurus rex (Tyrannosauridae), which would have lived in Asia between 66 and 72 million years ago.
Nicknamed “Pinocchio rex” by researchers, this creature would have measured about 9 m from snout to tail, with an elongated skull and long teeth compared with the deeper, more powerful jaws and thick teeth of a conventional Tyrannosaurus. From all of this, they have theorised that although Q. sinensis lived alongside deeper-snouted tyrannosaurs during the Cretaceous period, it would most likely have hunted different prey and not been in direct competition with them.
“This is a different breed of tyrannosaur. It has the familiar toothy grin of Tyrannosaurus rex, but its snout was much longer and it had a row of horns on its nose. It might have looked a little comical, but it would have been as deadly as any other tyrannosaur, and maybe even a little faster and stealthier.”
Following the discovery, the palaeontologists have created a new branch of the tyrannosaur family for specimens with very long snouts, and they expect more dinosaurs to be added to the group as excavations in Asia continue to identify new species. The lead author of the paper, Professor Junchang Lü from the Institute of Geology, Chinese Academy of Geological Sciences, explained as follows:
“The new discovery is very important. Along with Alioramus from Mongolia, it shows that the long-snouted tyrannosaurids were widely distributed in Asia. Although we are only starting to learn about them, the long-snouted tyrannosaurs were apparently one of the main groups of predatory dinosaurs in Asia.”
Source: Nature Communications – A new clade of Asian Late Cretaceous long-snouted tyrannosaurids
Source: Sci-News – Qianzhousaurus sinensis: Long-Snouted Tyrannosaur Discovered in China
Some Japanese people are endowed with a unique power to digest carbohydrates in seaweed, thanks to their gut microbes. The accidental finding–French scientists were studying enzymes that digest red algae when a genetic database revealed that the same gene could be found in some humans–hints at regional differences in our intestinal bacteria that may have allowed different groups to adapt to their local diets. And it’s just the latest example of nutritional advantages derived from microbes, which give us the ability to digest foods whose nutrients would otherwise be lost to us and make essential vitamins and amino acids that our bodies can’t.
As I wrote in a feature on our microbial menagerie in 2008,
New ultrafast DNA-sequencing technologies allow scientists to study the genetic makeup of entire microbial communities, each of which may contain hundreds or thousands of different species. For the first time, microbiologists can compare genetic snapshots of all the microbes inhabiting people who differ by age, origin, and health status. By analyzing the functions of those microbes’ genes, they can figure out the main roles the organisms play in our bodies.
In the new study, published today in the journal Nature, researchers searched for the gene within bacteria living in the guts of 18 North Americans and 13 Japanese people. They found it in 5 of the Japanese subjects but none of the Americans. The gene was probably transferred to human gut microbes when people ate seaweed – and the microbes that live on it. According to a piece in Nature,
Although gene transfer to gut microbes is suspected in other cases, this is the first clear-cut example in which a gut microbe has gained a new biological niche by snatching genes from an ingested bacterium, says Mirjam Czjzek, a chemist at the Pierre and Marie Curie University in Paris, one of the two researchers who led the study. “Probably there are many more examples,” she says. “It’s only because of this exotic niche and the very rare specificity of this enzyme that we were able to pinpoint where it came from.”
As our food becomes increasingly sterile, our exposure to this genetic treasure chest is dwindling, Justin L. Sonnenburg, a Stanford University microbiologist, told the journal. “We’ve gone to great lengths in the developed world to decrease the microbial burden of food, and in doing so we have decreased food-borne illness,” says Sonnenburg, who wrote a commentary in Nature accompanying the study. “This is good, but it comes at a cost. We’ve eradicated this potentially beneficial microbial component.”
MIT’s New Invention Harvests Energy from Changes in Ambient Temperature
When the two sides of a material are at different temperatures, the temperature gradient causes charge carriers to move from the hotter side to the cooler side. This conversion of a temperature gradient into electricity, and vice versa, is called the thermoelectric effect.
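As a minimal illustration of the effect (the ~200 µV/K Seebeck coefficient below is a typical bismuth-telluride value assumed purely for scale, not a figure from the MIT work):

```python
# Open-circuit thermoelectric (Seebeck) voltage is roughly the Seebeck
# coefficient times the temperature difference across the material.
seebeck_uV_per_K = 200.0   # typical Bi2Te3 value, microvolts per kelvin (assumed for illustration)
delta_T = 10.0             # temperature difference, kelvin

voltage_mV = seebeck_uV_per_K * delta_T / 1000.0
print(f"open-circuit voltage ~ {voltage_mV:.1f} mV")   # ~2 mV for a single junction
```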
In this era of renewable energy sources, researchers have been developing thermoelectric devices that tap energy from thermal fluctuations. Recently, MIT researchers invented a device that, instead of needing two different temperatures at the same time, takes advantage of changes in ambient temperature during the day-night cycle to generate electrical power.
The New Concept
The thermal resonator, as the inventors call the new energy-harvesting device, is the first of its kind. “We basically invented this concept out of whole cloth. We’ve built the first thermal resonator. It’s something that can sit on a desk and generate energy out of what seems like nothing. We are surrounded by temperature fluctuations of all different frequencies all of the time. These are an untapped source of energy,” said Michael Strano, one of the inventors.
The results and details of the study were published in the journal Nature Communications by graduate student Anton Cottrill, Carbon P. Dubbs Professor of Chemical Engineering Michael Strano, and seven others in MIT’s Department of Chemical Engineering. According to the authors, the thermal resonator could supply power to remote sensing systems for years without the need for other power sources or batteries.
Advantages Over Other Energy-harvesting Devices
Although the pilot version of the thermal resonator generates power that is relatively lower than other major renewable sources, researchers say that it has the following advantages:
- It does not need direct sunlight. It generates energy from ambient temperature changes, even in the shade. That means it is unaffected by short-term changes in cloud cover, wind conditions, or other environmental conditions.
- Its location and installation are not complicated. It can be situated in the shade, such as below a solar panel. This allows it to gather energy that is otherwise wasted around solar panels, making the installation more efficient.
- It performs three times better than a commercial pyroelectric material. A thermal resonator was shown to generate three times more power per unit area than a similarly sized pyroelectric device available on the market. A pyroelectric device is an established way of converting thermal fluctuations into electricity.
The Concept behind Thermal Resonator
The key to the thermoelectric effect of the first-ever thermal resonator is the design of the material and a material property called thermal effusivity. Thermal effusivity describes how fast a material can gain or lose heat to its environment. It can be thought of as a combination of two other thermal properties of a material – thermal conductivity and heat capacity.
A material’s thermal conductivity describes how fast heat can spread throughout the material, while its heat capacity describes the amount of heat it can store per unit volume. In most cases, these two properties can’t both be high. For instance, ceramic materials can store a large amount of heat, but heat tends to spread slowly through them.
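Thermal effusivity combines these two properties as e = sqrt(k · ρ · c_p). A small sketch with rough textbook values (assumed for illustration, not taken from the MIT paper):

```python
# Thermal effusivity e = sqrt(k * rho * c_p), as described above.
import math

def effusivity(k, rho, cp):
    """Thermal effusivity in W*s^0.5/(m^2*K).

    k   : thermal conductivity, W/(m*K)
    rho : density, kg/m^3
    cp  : specific heat capacity, J/(kg*K)
    """
    return math.sqrt(k * rho * cp)

print(f"copper : {effusivity(400.0, 8960.0, 385.0):,.0f}")   # high conductivity, ~37,000
print(f"ceramic: {effusivity(1.5, 2500.0, 800.0):,.0f}")     # stores heat but spreads it slowly, ~1,700
```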
In order to create electricity from temperature fluctuations, the MIT research team decided to optimize the thermal effusivity of a material by tweaking its composition and structure. The researchers settled on a metal foam made of copper or nickel. To further increase its thermal conductivity, the foam was coated with a layer of graphene. Lastly, it was infused with octadecane, a wax-like phase-change material that solidifies or liquefies within a specific temperature range.
“The phase-change material stores the heat and the graphene give you very fast conduction,” explains Cottrill, the study’s lead author.
With this structure, the high-thermal-conductivity part of the thermal resonator gains heat quickly. This heat then transfers slowly to the phase-change material, which stores it. In this way, one part always lags behind the other, creating a persistent thermal gradient that can be converted into electricity. According to Strano, combining metal foam, graphene, and octadecane makes up “the highest thermal effusivity material in the literature to date.”
The study shows that with just a 10-degree-Celsius temperature difference between night and day, the thermal resonator can generate 350 millivolts of potential and 1.3 milliwatts of power, which is enough to supply small environmental sensors or communications systems.
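For a sense of scale, if that 1.3 milliwatts could be sustained around the clock (an assumption made purely for illustration), the daily energy harvest would be modest:

```python
# Rough sense of scale for the quoted figures, assuming the 1.3 mW output
# could be sustained over a full day.
power_w = 1.3e-3            # quoted power, watts
seconds_per_day = 86400

energy_joules = power_w * seconds_per_day
print(f"{energy_joules:.0f} J/day  (~{energy_joules / 3600 * 1000:.0f} mWh)")  # ~112 J, ~31 mWh per day
```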
The thermal resonator is not limited to harnessing energy from fluctuations in ambient temperature during the day-night cycle. With the right tuning of its properties, it could also harvest other kinds of temperature fluctuations, such as the on-and-off cycles of refrigerator motors or industrial machines.
It could also be used in landers or rovers to provide low-power but long-lasting energy sources, according to Volodymyr Koman, an MIT postdoc and co-author of the study.
The Untapped Energy
“We’re surrounded by temperature variations and fluctuations, but they haven’t been well-characterized in the environment,” Strano says. These temperature variations are “untapped energy” because there was no known way to harness them until this new MIT invention. Pyroelectric devices have been used to harvest energy from thermal cycles, but the thermal resonator is the first device that “can be tuned to respond to specific periods of temperature variations, such as the diurnal cycle.”
The thermal resonator could also be used as a complementary energy source, so that if one energy source fails, sensor networks will keep running. “They want orthogonal energy sources, if one part fails, you’ll have this additional mechanism to give power, even if it’s just enough to send out an emergency message,” Cottrill says.
As the image above illustrates, my colleagues and I at Griffith University have been able to photograph the shadow of an atom for the first time – the culmination of five years of work by our team.
The image, and attendant paper, are published today in the journal Nature Communications.
So, in a nutshell, how did we get the image? The following analogy might help.
On a sunny day at the beach, your shadow is a constant companion. Holding your hand up will block the bright sun, but a few rays will still penetrate the thinner parts of your fingers.
If we were to take a closer look using a microscope we would see dark strands of tightly wound DNA in the nucleolus (composed of proteins and nucleic acids found within the nucleus) of the skin cells. Looking closer still, we might wonder: how small can something be and still cast a shadow?
The picture leading this article shows the shadow cast in a laser beam by a single Ytterbium atom suspended in empty space. At Griffith University, we have pioneered the use of Fresnel lenses (a type of lens with a large aperture and short focal length, producing an ultra-high-resolution microscope) to capture high-resolution images of atoms.
Our lens is like a smaller version of the lenses used in lighthouses – both have many separate segments all working in concert to focus the light.
The figure above shows how a laser beam (orange) passing by a single atom (blue) leaves a dark shadow in its wake, with the actual picture of the single atom shadow shown on the right end.
Since a single atom casts a very small shadow, our advances allowed us to be the first to take a picture of this effect. The size of the shadow is set by the wavelength of light, which is about a thousand times larger than the actual atom.
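A quick sense of those scales (the ~370 nm wavelength of the usual ytterbium-ion resonance and the ~0.3 nm atomic diameter below are generic values assumed for illustration, not figures quoted from the article):

```python
# Comparing the wavelength that sets the shadow size with a rough atomic diameter.
wavelength_m = 369.5e-9     # assumed imaging wavelength, ~370 nm (typical Yb+ resonance line)
atom_diameter_m = 0.3e-9    # rough atomic diameter, a few angstroms

print(f"shadow/atom size ratio ~ {wavelength_m / atom_diameter_m:.0f}")  # ~1000
```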
We hold the Ytterbium atom in empty space by removing one of its electrons and using high voltage electricity to fix its position. Ytterbium was chosen because we could build lasers of the right colour to be strongly absorbed by the atom.
Our work has implications for research ranging from quantum computing to microbiology. In quantum computing, light is the most effective method for communication, while atoms are often better for performing calculations.
In observing the shadow from a single atom we have shown how to improve the input efficiency in a quantum computer. Single atoms have well-understood light absorption properties. We used this knowledge to predict how dark the shadow should be for a given amount of light.
Since Dutch scientist Antonie van Leeuwenhoek’s first observations of red blood cells in 1674, absorption microscopy has played a prominent role in biology. X-ray and ultraviolet light are very useful for imaging cells but can also damage them at high dosages.
By knowing how much light is required to achieve a particular image quality, our work will be useful to predict when a little damaging light is enough to take a good image.
We’re pleased to be the first to capture a snap of the long shadow from a single atom’s dark side.
Farmers often asked us for seed, but we weren’t quite sure what to expect when we suggested—to the farmers’ union in Nampula, Mozambique—that they organize a fair in which the members could come together and exchange seed. We thought they might only be interested in “improved” varieties.
However, when we arrived at the place the fair was to be held, it was clear that the farmers had picked up the idea. They had constructed temporary shades with grass roofing and the scene was bustling with activity. Songs, dances and other activities were performed. Many seeds were on display on the reed mats—many more than what farmers usually say they produce when asked what they grow (maize, cassava, cowpeas, peanuts and rice). Virtually all material was exchanged.
Since the beginning of agriculture, humans have been spreading seeds and other planting material with the purpose of improving the productivity of native varieties or introducing new crops. Wars and conquests led to new foods and species being introduced in conquered places (for example, buckwheat came from China to Europe with Genghis Khan’s army). Conquerors returned home with exotic species and foods.
Where climatic conditions allowed them to grow, edible species spread. Today a number of global crops exist far from their zone of origin, such as corn, potato, soy, wheat, tomato, rice and cassava.
Historically, the market provided farmers with those seeds they didn’t have because of unproductive crops, pests or natural disasters. Farmers also looked in markets for seeds that were better appreciated, for taste or production traits, than the ones they were already cultivating.
Seed fairs, with methodical planning and careful selection of participants, create even greater potential than traditional markets for seed exchange to occur. Seed fairs have gained popularity and are being organized in Latin America, Africa and Asia. Mostly the organizers of the fairs are concerned about farmers’ lack of access to seed and about loss of on-farm diversity.
Generally speaking, seed fairs try to meet one or more of the following objectives:
- Improve (timely) access to seed for farmers;
- Contribute to conserving and managing plant genetic resources and maintain or improve agrobiodiversity;
- Raise awareness of the importance of biodiversity and Plant Genetic Resources;
- Strengthen the position of (small scale) farmers and their communities within the agricultural complex.
In order to analyse the potential and diversity of seed fairs, and to identify potential pitfalls, we will discuss seed fairs within the context of seed development and genetic diversity.
The importance of seeds
We need seed to guarantee our food for tomorrow. However, whether a seed will produce a crop or not depends on many different things: there should be water (not too much and not too little), light and warmth (not too much and not too little), nutrients available, and soil that is neither too acid nor too salty. The time of planting should be right, and pests and diseases should not destroy the plant, and so on. What is too much and what is too little depends on each seed. Farming is basically trying to balance all these factors. And as no crop cycle is the same, we need a large number of seeds with different characteristics in order to guarantee a minimum of (food) production.
After harvesting we still demand more. We have preferences for taste, texture, color and cooking qualities of crops, and we want to be able to store and process our harvest. Farmers keep all these preferences in mind when selecting seed for the next crop. All these requirements are stored in our seeds, which were domesticated from wild species over the centuries.
The importance of genetic diversity
With unpredictable and varied growing conditions from one year to another, there is no such thing as a perfect seed for one site, let alone for a variety of different locations. In order to at least produce something in any potentially occurring condition, our ancestor farmers developed and maintained different varieties. Some perform better in dry conditions, others in wet conditions, and yet others are better resistant against a particular virus or bacteria (and so on).
High genetic diversity is necessary for a crop to be stable over time. The importance of genetic diversity may best be illustrated by events that have resulted from its lack—like the Irish potato famine, the corn blight crisis in the USA in 1970 or more recently the problems with diseases in banana. In all the above cases, the root cause of each crisis was fatal uniformity of the genetic material, making the crop extremely vulnerable to diseases.
So-called ‘modernization’ in agriculture, which started in the late nineteenth century and which accelerated enormously since the onset of the green revolution, is the main force behind the great loss in the diversity of plant genetic resources. With the promotion and adoption of modern varieties, landraces are disappearing in farmers’ fields at a rather alarming rate (Mooney 1990). The losses are being reinforced by the ubiquitous promotion and subsidizing of chemical fertilizers and pesticides, encouraging farmers to abandon ecological rationality and adopt a market-oriented rationale. Rapid changes in the environment (habitat loss and changes in climate) further compound this situation.
How serious are the losses? Again, who can say? We do not know the characteristics of varieties now extinct. But we remember the sobering story of a wheat race collected in 1948 by Jack Harlan in Turkey. Arriving in the U.S., it was given the plant introduction number 178383. No name was deemed necessary. Harlan described it thus:
It is a miserable-looking wheat, tall, thin-stemmed, lodges badly, is susceptible to leaf rust, lacks winter hardiness . . . and has poor baking qualities. Understandably, no one paid any attention to it for some 15 years. Suddenly, stripe rust became serious in the northwestern states and P.I. 178383 turned out to be resistant to four races of stripe rust, 35 races of common bunt, ten races of dwarf bunt and to have good tolerance to flag smut and snow mould.
Harlan's miserable wheat is now used in all breeding programs in the northwestern states of the U.S. and saves farmers millions of dollars each year. Can we safely lose thousands of varieties of wheat today with the assurance that we will not need them in the future?
The encroachment of modern varieties is less in more challenging environments (like mountainous areas or areas with harsh conditions) compared to lowlands and more moderate climate zones.
Genetic diversity is crucial, as is the conservation of this diversity. With the growing loss of local varieties, we stand to lose the genes that allow us to better adapt to environmental changes, social challenges (like hunger) and even human diseases.
At the level of the agriculture ecosystem of small-scale producers, adaptability is a very important ecological attribute. Farmers demonstrate adaptability in the wealth of different seeds that they manage in their productive systems, and in the genetic heterogeneity present in each variety. As a result, farmers’ systems show a high degree of resilience and ecological stability in the face of challenges such as climatic changes, the appearance of pests and natural disasters.
In essence, we can say that the seed saves valuable information, which small-scale agro-ecosystems of family farmers utilize to keep their enterprise sustainable. As such, family farmers conserve a much greater biodiversity than the genetically homogeneous modern systems of the green revolution and corporate agriculture.
Losing more than Plant Genetic Resources (PGR)
The loss of PGR and biodiversity in itself is very serious, but we are losing more than just genes.
Seeds are constantly renewing themselves and, in the process, certain characteristics may appear or disappear, so they need to be managed. This means that the most useful seeds (depending on the needs of the user, whether farmer or plant breeder) are kept for future use. In a way, the seeds being used by a community are a reflection of that community’s history and culture. By maintaining crop diversity and actually experimenting and developing new varieties, farmers are de facto conserving biodiversity. With the loss of seeds, this knowledge and the culture in which that knowledge is embedded are also being lost.
Modernization has not only introduced new seeds but also a new farming model whereby the intricate knowledge of farmers, with their seed selection and improvement methods, becomes obsolete and is lost. Farmers become increasingly dependent on breeding institutions and companies for their seed and on extensionists for advice on the use of seed and of the chemical inputs necessary for successful growing of the seed (Keep in mind that the performance of modern or improved varieties is directly linked to the use of chemical fertilizers and pesticides).
Genetically Modified Organisms (GMOs) are the ultimate symbol of scientific advance in the field of the improvement of seeds. However, rural systems can suffer many negative impacts from this advance. With the introduction of GMOs, local seed systems are damaged by degrading the local varieties. This introduction can also result in elimination of the internal mechanisms of knowledge transfer among the farmers, who must then depend on the external knowledge of technicians.
Why would farmers look for seed?
Farmers look for seed for several different reasons. First, peasants look for seeds they used to have but lost for some reason. One such reason, for instance, can be poor production, which obliged the family to eat or sell what they saved. For certain crops (maize for instance), modern varieties are more difficult to conserve. In Mozambique, for example, local varieties of maize have grains with a harder skin and far fewer problems with post-harvest losses than modern varieties like the Matuba variety. In such cases, farmers often opt to sell (or eat) before losing the crop. It should also be noted that, by definition, landraces are well-adapted to the local environment. Thus, it can be expected that modern varieties are more easily lost by farmers than landraces.
Second, farmers constantly look for seeds with certain characteristics (e.g. early maturing, good taste or disease resistance) that they can integrate into their farming system to improve their production in terms of security, quality or quantity. A certain level of genetic diversity is maintained in local varieties. For example, uniformity (a desirable trait for mechanized farming and for plant breeders wanting to identify their varieties) is not really relevant for a crop that will not be harvested mechanically. Genetic diversity is important to avoid major epidemics of diseases and pests (among other reasons). Climate change, and the more extreme weather patterns that result, might challenge production systems. As a result, farmers are challenged to respond by adapting their production system and thus their seed.
Third, often farmers are curious. They may not necessarily set out to acquire a new variety, but might encounter planting material that catches their interest, in a similar way as consumers that go to a shopping mall might come home with unplanned purchases.
Farmers also acquire different varieties in order to deliberately experiment and improve their seed stock (see for example Van der Ploeg, 1993, on Peruvian farmers in the Andes). They manage diversity, and carefully select and subsequently integrate newly generated seed into their system. As mentioned, the loss of genetic diversity and the encroachment of modern varieties results in the loss of specific farmers’ knowledge and skills.
Farmers’ exchange, selection and breeding activities are in danger of being pushed into illegality, as is already the case in Europe. Commercial companies are taking an increasing role in plant breeding and seed production in developed countries, supported by so-called Plant Breeders’ Rights (PBR) and (in the case of GMOs) by the much more restrictive protection of Intellectual Property Rights (see, for example, the case of Association Kokopelli).
Where do farmers get their seed?
Basically, farmers acquire their seeds (and other planting material) in three ways: 1) they produce seeds themselves; 2) they barter, exchange or borrow with other farmers; or 3) they buy seeds in the market. These days, a fourth way is through extension agencies and companies that “freely” distribute seed, promoting “improved” varieties or certain crops like cotton or potato. In developing countries most seed is produced by farmers themselves, rather than by plant breeders.
Even in a country like Cuba, where until the early nineties the focus was on large-scale high-input/ high-output farming, an informal seed system, operated directly by and for farmers, continues to exist. The maintenance of wide variability and adaptation is traditionally carried out in small plots where farmers conserve in vivo those plants considered useful to the household. Through the informal system, the production of seeds of the basic staples of the Cuban diet has continued in many parts of the country. These genetic resources have provided a basis for plant breeders selecting commercial genotypes.
Ríos and Wright (1999)
Small-scale farmers in developing countries do not restrict themselves to seeds from one source, and often they use both local varieties (or landraces) and seeds originating from the institutional plant breeding system. What seed they use depends on the crop, use of the seed (e.g. for sale or for home consumption), availability (often markets do not have the required seeds at the right time) and accessibility (farmers often consider seed produced by the commercial system expensive).
Two approaches to plant breeding
Crop characteristics constantly change due to cross-pollination. Therefore seeds need to be managed through plant breeding activities to maintain or acquire desired traits. Generally speaking, there are two approaches to plant breeding.
Modern plant breeding presumes that a farmer is more or less able to control all the different crop requirements. Furthermore, generally speaking, the breeding focuses on one single objective, most often yield maximization. It thus presumes that whatever the plant requires can be catered for via, for example, irrigation and drainage, fertilizers, pesticides, greenhouses etc. It is also frequently presumed that farmers have the means to invest in all these inputs before growing a crop.
The conventional plant breeding approach is based on developments in industrial countries and addresses relatively uniform environments and market-oriented agriculture. The main objectives are maximization of yield and broad adaptability. Seeds produced in this way, however, don’t come cheap. In order to recover the investment needed to produce a new variety, the seed needs to be cultivated on large areas of land (at least 100,000 ha according to Hardon, 2004). Furthermore, genetically uniform varieties are required, not for agricultural reasons, but to be able to define the seed’s identity in comparison with other varieties.
In contrast, most of the seed in developing countries is produced by small-scale farmers. Their approach is different from plant breeders. Farmers accept the ever-changing conditions as a given and aim to manage the insecurities by maintaining diversity (among and within crops). For small-scale farmers, the priority objective is much more to avoid unnecessary risks and improve yield security over time rather than to maximize yields immediately.
Farmers [commenting on seed fairs in Tanzania] emphasized that more efforts should be directed towards local crop landraces that thrive well in semi-arid conditions, without forgetting those collected crops that are particularly important during harsh weather (FAO, 2006).
The way farmers breed their crops is also different. The breeding process, rather than being strictly controlled, deliberately allows cross pollination from neighbouring fields or from wild varieties to occur.
Farmers approach their crop in the context of their livelihood, and thus many characteristics are important—rather than just the yield of a particular crop. For example, they actually eat what they produce, so taste is also highly relevant. Selection and breeding thus becomes a juggling act between many interdependent factors. These are just a few: pest resistance; labor requirement; secondary yields (like leaves for food or fodder); how easy the crop is to conserve; drought resistance; sensitivity to water logging; early vs. late maturation; taste; production in poor soils without fertilizer; and capacity to withstand strong winds. To illustrate this point, a Mozambican commented, after visiting an exhibition in Brazil on the different uses of cassava, that she was really impressed by all the different ways cassava was used and processed, but at the same time surprised that apparently in Brazil they don’t eat the cassava leaves (whereas in Mozambique the leaves are used for preparing a dish called Mathapa or M’boa) (personal communication, Leopoldina Dias, 2005).
The type of seeds resulting from the two approaches can be broadly described as follows:
For many communities, seed traditionally represents much more than a means to produce a crop, and using seed to make money or recover costs is unheard of. Traditionally, seeds are for sharing, not for sale.
According to Hardon (2004 and 2009), plant breeders are increasingly aware that breeding primarily for increased yields in more favorable environments has led to associated problems. To name a few:
- An increased inequality between wealthier and small-scale resource-poor farmers;
- New pest and disease problems due to genetic uniformity;
- Huge losses of PGR;
- Insufficient attention to culturally determined preferences.
As a result, since the 1990s, Participatory Plant Breeding (PPB) and Participatory Variety Selection (PVS) have become more popular (Note, however, that participation can have many different meanings—from asking opinions to actually having farmers leading the selection and/or breeding process). PPB and PVS have been especially used as a crop improvement strategy (but not exclusively) in response to the need for impact in non-commercial crops and in very unpredictable, stressed production environments (Sperling et al., 2001). PPB and PVS activities have been successful according to quite a few documented cases, thereby confirming farmers’ skills and capacity in plant selection, breeding and management (see for example SEARICE, Proceedings of the International Workshop on Participatory Plant Breeding Valuation, 2007). Apart from the “technical” advantages of developing varieties better adapted to farmers’ needs and skills, farmers following the PPB approach are much faster than traditional plant breeding institutions in developing new lines/seeds [as clearly shown in the Community Biodiversity Development and Conservation (CBDC) and Biodiversity Use and Conservation in Asia Programme (BUCAP) experiences in all five countries (Bhutan, Vietnam, Laos, Thailand and the Philippines) where the program is implemented, as well as in the seed project of the Association des Organisations Professionnelles Paysannes (AOPP) in Mali] (Noray and Coulibaly, 2009). In the context of rapidly changing weather patterns, an argument can easily be made for relying increasingly on farmers’ capacities and less on centralized institutions.
According to Ignacio Nori, Regional Program Coordinator for SEARICE (personal communication, 2010), complementary work is actually the most ideal, where plant breeding institutions generate and distribute pre-breeding materials to farmers based on the breeding objectives of farmers, while farmers do the selection from early-generation materials. A big problem is that plant breeding institutions usually do not want to release segregating materials (seeds that plant breeders start with in the process of developing new lines/varieties of seeds) to farmers. This is partly due to a lack of awareness as to the capacity of farmers to do plant breeding and selection. It is also due in part to Intellectual Property Right (IPR) laws, where ownership of new varieties is used as incentive for research and varietal development.
Seeds as part of an agricultural system
For an enhanced understanding of farmers’ behavior, consider their use of seeds as part of an agricultural system with many interrelated components. To assess the economic vulnerability of such a system, one should analyze the risk and uncertainty within the system components.
According to Fraser, Mabee and Figge (cited by Vander Vennet, 2010) the vulnerability of a system is largely determined by three factors:
- The wealth of a system (The bigger the wealth the more buffer capacity is available to cushion shocks to the system);
- Ability to control or influence external forces;
- Diversity of the system.
In assessing the vulnerability of small-scale farming systems, it becomes clear that the conventional plant breeding approach, with its tendency to uniformity, economies of scale and dependency on external (chemical) inputs, in most cases increases economic vulnerability: it increases dependence on external forces without any leverage to control those forces, while at the same time reducing the diversity of the system. As small-scale farmers already start from a relatively poor system, the economic sustainability of the system can be seriously undermined by an increasing reliance on modern varieties.
Seed fairs have gained popularity and are being organized in Latin America, Africa and Asia. Generally speaking, there are two types of seed fairs. A first type is mainly concerned with conserving agro-biodiversity and promoting landraces. This type came about as a response to the loss of diversity and the realization that modern varieties are not adapted to the farming systems of small scale farmers.
A second type emerged as an attempt to provide alternative seed options (other than handouts) to farmers affected by devastating crop losses. This second type of fair usually makes use of a voucher system to allow even the poorest farmers access to seed.
In general, as mentioned earlier, seed fairs are conducted to meet one or more of the following objectives:
- Improve timely access to seed for farmers;
- Contribute to conserving and managing plant genetic resources and maintain or improve agrobiodiversity;
- Raise awareness of the importance of biodiversity and PGR;
- Strengthen the position of (small-scale) farmers and their communities within the agricultural complex.
One of the central assumptions regarding seed fairs is that, by growing their diverse crops, farmers are in fact actively conserving both diversity and the specific knowledge required for appreciating and maintaining that diversity. Diversity needs to be applied to be useful. If the wealth of seed diversity were to be confined to seed banks only, it would be of little use.
By creating a special occasion (i.e. the seed fair), farmers’ access to each other’s seed is facilitated. At the same time, an event dedicated to seeds and displaying the wealth of seed diversity contributes to a greater awareness of the importance of diverse plant genetic resources (PGR). Last but not least, seed fairs bring different stakeholders in PGR together.
Whether these objectives are achieved, however, depends on how the fair is organized. Consequently, the following aspects need to be considered when organizing a seed fair.
As FAO (2006) observes: “Unlike more formal agricultural fairs, which farmers attend as passive spectators of others’ materials and technology, a seed diversity fair gives farmers the opportunity of meeting to discuss and demonstrate not only their own seeds, but also their local practices and knowledge that are linked to specific seed varieties, storage methods, processing techniques and use.”
However, this is not necessarily the case in every instance. Success depends on who is participating in the seed fair and how the seed fair is organized.
Participants in a seed fair vary. Often, organizations and institutions concerned about loss of PGR are involved in the organization of seed fairs. This means that, in addition to farmers’ families, scientists (breeders, agronomists) and extensionists participate too. Traders and seed companies are sometimes also invited.
Farmers should participate, but which farmers? This depends on a number of issues.
Not all farmers can participate at a fair. Thus, participants have to be selected, although curious community members may also show up at the fairground. Selecting the participants can help promote the quality and diversity displayed at the fair. In selecting farmers, try to ensure that each participating farmer represents a group of farmers. This will help maximize the outreach and impact of the fair, and will also ensure that a wide diversity of seeds is presented (farmers representing other farmers are more likely to bring seeds other than just their own to the fair).
A selection process, when based on previously established criteria (like the role of a farmer in the community, the number of different varieties a farmer plants, etc), can also serve as a way to improve democratic mechanisms and transparency within farmers’ communities and organizations, and can reflect the importance of seed and biodiversity for farmers.
Make sure you guarantee, through the selection process, the participation of women. If no active steps are taken to facilitate and promote the participation of women, it is often the men of the community who will dominate events organized by outsiders. For seed fairs, this would mean a lost opportunity for several reasons. In many societies, for instance, women are actually the main farmers. Moreover, the diversity of offerings in a seed fair could be reduced if women are not invited to participate. In some societies, women and men traditionally are responsible for different crops (Smith, 1996). For example, agricultural production among the Soninke in western Africa was traditionally crop specific according to gender. Women cultivated rice, indigo, cotton, and groundnuts, while men grew millet, sorghum, maize, and tobacco (Pollet and Winter, 1978; Weigel, 1980; Smith, J. 1996). In other societies, women are responsible for the selection and conservation of seeds (Martínez and Bakker, 2006). Men are often responsible for the marketing and the family cash crops. As a result, the men may be more oriented towards modern varieties, cultivate fewer landraces and generally contribute less seed diversity to seed fairs than women.
Regardless of the gender of participants, knowledge of crop varieties within a community is important. FAO (2006) reported that “a surprising number of farmers often know very little about the different crop varieties that are being used by other farmers living in the same rural community. Research greatly increases farmers’ and researchers’ awareness of crop diversity within rural communities, and there is a need to strengthen this process further through the use of community events. Seed fairs are a good way of achieving this.”
In some instances, it might be helpful to include farmers from outside the local community. For example, a farmers’ union in Mozambique felt that the seed fairs held at a community level were of little interest because most people brought the same seed (varieties). To tackle this problem, the farmers’ union devised a scheme whereby farmers from different regions participated at the same fair. This effectively increased the seed offerings as well as the diversity and exchange (Bakker and Martínez, 2009).
The Mozambican experience seems to be confirmed by that of the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) in Zimbabwe. At a seed fair where modern varieties were offered and seed vouchers were distributed, ICRISAT noted:
Participating farmers recognized that they could still obtain seed of many local varieties outside the fair. But high-quality [modern varieties] maize seed was harder to find. Farmers wanted to purchase commercial maize seed and complained about the limited choice of commercial varieties on offer. In contrast to NGO assumptions, many sought hybrid maize seed instead of the open-pollinated seed offered by traders linked with specific NGOs. (Rohrbach and Mazvimavi, 2006)
In general, bringing farmers together from different regions to a seed fair helps to increase the diversity, not only of seeds, but also of experiences with crops (e.g. ways and timing of planting) and thus facilitates richer exchanges among participants.
Children and students
Seed fair organizers in Cuba, from Instituto Nacional de Ciências Agrícolas (INCA), realized that when you organize a seed fair as a community activity (in which you aim for the participation of men and women), consideration should be given to the children (Dueñas et al. 2005). Otherwise, women are less likely to be present and to actively participate because, in most cases, they are held responsible for taking care of the children. Once the researchers realized the importance of children, they saw the seed fair as an opportunity to raise awareness and educate children on issues such as love for nature, diversity, protection of PGR, importance of healthy food and other issues directly related to the communities where they live. At the same time, planning activities for the children allowed the parents to conduct their own business at the fair.
Consciously including the children of peasants or school students changes the dynamic of seed fairs. While adults might focus on acquiring certain seeds, the activities targeting the children are more educational and formative.
The main objectives when working with youngsters are:
- To sensitize and to appreciate agricultural diversity of the rural community and ways of protecting this diversity;
- To conserve rural traditions and to strengthen the rural identity of the children and youths to avoid their future emigration to other places;
- To enrich local food culture by introducing the consumption of locally produced vegetables, fruits and grains; and
- To link the school and the students to the environmental actions of the community.
Work with children and adolescents can be carried out in different ways. For example, in Cuba (Dueñas et al. 2005), facilitators that were trained in participatory techniques conducted specific workshops with youth groups. The activities included traditional games, participatory dynamics and techniques for reflection and development of knowledge about agro-diversity and the importance of vegetables for a healthy diet.
In the CBDC-BUCAP project (in Bhutan, Thailand, Laos, Vietnam and Philippines), diversity seed fairs were organized within a wider context in which teachers in rural areas were encouraged to systematically include activities for students that highlighted the importance of crop diversity and of the maintenance of these native/indigenous varieties in local production. In Thailand and in the Mekong Delta, students (school children or college students) participate as the key actors in organizing seed/biodiversity fairs. Such participation is usually part of the learning requirements for courses on PGR conservation and development.
When developing activities targeted at children, time events so that you can make use of the knowledge generated through the fair’s activities. To this end, aim to develop close relations between the organizers of the fairs, the educational centers of the town, the community organizations and individuals such as researchers, local extension agents and government officials.
Scientists and researchers
The presence of scientists can positively and/or negatively contribute to a seed fair. A lot depends on the attitude of the scientists.
Often scientists and extensionists have a tendency (and are expected) to explain to farmers how things should be done. Frequently, scientists come to a community event armed with information and materials to provide what (according to them) should be the solution to a problem.
If scientists taking part in a seed fair do not recognize the value of small-farmer systems, they will typically collect material from farmers and at the same time distribute the “improved” varieties they developed at their institutions. This only reinforces the vertical relationship between scientists and farmers and does nothing to enhance agro-biodiversity or on-site conservation of agro-diversity. Scientists can, in fact, reinforce farmers’ dependency.
Sometimes seed fairs are used by formal breeding institutions as a means of obtaining participatory input from farmers in the selection of varieties/lines developed (as in the case of INCA in Cuba, see Ríos and Wright, 1999). This is usually done as part of an effort to increase seed diversification using varieties developed and introduced at the breeding institutes. As a result, the primary role of the breeders is reinforced, and the farmers’ role is reduced to assisting breeders in developing their selection criteria by identifying the most promising lines.
On a more positive note, seed fairs are great opportunities for scientists to study farmers’ practices, knowledge and PGRs. Seed fairs can also facilitate access to certain rare varieties (see FAO, 2006). In this way, seed fairs help raise awareness among scientists about the importance of agro-biodiversity, and can open their eyes to farmers’ capacity to breed varieties as shown in the CBDC-BUCAP-organised seed fairs.
The challenge [for scientists] is not to find ways to integrate, in modern management practices, knowledge, innovations and practices of indigenous and local communities. Rather, it is to define, in collaboration with indigenous and local communities, which modern tools may be of help to them, and how these tools might be used, to strengthen and develop their own strategy for conservation and sustainable use of biological diversity, fully respecting their intellectual and cultural integrity and their own vision of development (UNEP 1994:4 cited by Gonzales, 2000).
If the right approach is taken, the participation of scientists can be useful to help farmers improve their selection methods and possibly increase diversity, as shown in an example from Peru:
The participation of the university facilitated a wider exchange of knowledge about the breeding of seed diversity and generated interest in the diversity of knowledge about the culture of the seed possessed by the Quispillacta comuneros. Through the exhibition of its plant germplasm collection, particularly a number of “lost” ecotypes, the university attracted the attention of the farmer participants. (Gonzales, 2000)
Traders and seed companies (and the modern seed varieties they often bring)
A seed fair that offers direct seed handouts is not ideal. The idea of conducting seed fairs with vouchers has been promoted as an improvement on this model. Seed fairs are supposed to offer farmers greater choice of seed to replenish their stocks.
However, farmers manage to keep more seed than often thought:
Assessments of what is later [after the distribution of fresh seeds] planted reveal a multiplicity of seed sources, including stocks saved despite the worst disasters. Supplies of certain seed crops may be limited, but most farmers are generally able to save some seed from a previous harvest, and trade between households is common (Rohrbach and Mazvimavi, 2006).
This does not mean that farmers are uninterested in easy (cheap) access to modern varieties. Seed fairs with subsidized access (vouchers) represent an opportunity to access modern varieties (see Rohrbach and Mazvimavi, 2006). And after a debilitating drought or other disaster, farmers will be looking for modern varieties; these are the most likely to have been lost because they are less adapted to local farming conditions.
One further has to keep in mind that seed fairs utilizing vouchers normally occur in a context where farmers are more or less used to frequent support in the form of seed handouts and, thus, may well expect that the seed fair will not be the last time they receive support. Adapting their behaviour to this context, these farmers can be expected to be more inclined to access modern varieties, their risks being mitigated and their access facilitated. In fact, at some of these fairs the setup is such that farmers must first "buy" modern varieties of maize before they can look for other seeds:
Maize seed accounted for almost 80% of the total quantity of seed sold at seed fairs from the nine districts surveyed. This high volume and amount partly reflects the links between NGOs and agrodealers (Mazvimavi et al., 2008).
While open pollinated varieties have been introduced as an option allowing farmers to save money on seed by not purchasing fresh seed each season, most farmers seem to recognize the yield advantages offered by hybrids. Most are willing to continue to pay for this seed each year; though if an NGO is willing to provide this seed for free (perhaps through a voucher) farmers are even happier (Mazvimavi et al., 2008).
Participation of traders and seed companies at seed fairs facilitates access to modern seed varieties. This can make it more worthwhile, economically, for farmers to acquire and cultivate broadly accepted modern varieties of various staple crops (as ICRISAT experience in Zimbabwe shows).
However, it is not easy to ensure participation of seed companies and seed traders at the fairs, as they look for guarantees to make sure their investment (in time and transport costs) is worthwhile; they don’t like to return home with a couple of tons of unsold seed. Organizers, therefore, tend to accommodate traders and companies by negotiating seed quantities and prices, beforehand, with a limited number (1 or 2) of traders. This can end up limiting the diversity of seeds available:
Due to the nature of the relationship between NGOs and a few agro-dealers, most fairs were dominated by the supply of either a single variety of open-pollinated maize seed (generally ZM 521), or a single variety of hybrid maize seed. In effect, most farmers did not have a choice of what type of maize seed to purchase (Mazvimavi et al., 2008).
If markets are not functioning properly, the use of subsidized seed (by vouchers)--combined with the presence of traders/seed companies--might be a way to facilitate access to modern varieties of seed. However, according to Rohrbach and Mazvimavi (2006), while these seed fairs may increase seed choices compared to fairs with direct hand-outs, there is no evidence they contribute to improvements in agro-biodiversity:
While it is sometimes argued that the injection of cash through vouchers would stimulate the local economy, the cited evaluation found that sales of maize seed accounted for more than 90% of the total value of seed sold at the fairs. By inference, the vast majority of seed investment left the local community and ended up in the hands of urban-based agro-dealers and seed companies (Mazvimavi, 2008).
Furthermore, the modern varieties are more likely to be lost again (as they are less adapted than landraces), so if you promote them at seed fairs you might be perpetuating a vicious cycle.
In order to facilitate access to seed, seed should be inexpensive. At seed fairs in Zimbabwe where the voucher system was used and traders participated, prices were at least two times higher than in local markets, even though costs were partially offset by vouchers for poor farmers (Mazvimavi, 2008):
The development gains often attributed to seed fairs [with a voucher system] compared to seed handouts (eg, increasing community incomes, promoting local seed production and improving agro-biodiversity) appear to be overestimated. Seed fairs facilitate community seed trade. But they may be monetizing a traditional obligation to share limited seed stocks. There may be more seed on the informal community market, but its accessibility to poorer households may be diminished unless vouchers continue to be provided. The fairs appear to be inflating local seed prices and they do little to strengthen the stocking of seed in local retail shops (Mazvimavi, 2008)
By contrast, in seed fairs organized by the provincial farmers’ union in Nampula, in which only farmers participated, participants appreciated that prices were low compared to the local markets (Bakker and Martínez, 2009).
So unless you are dealing with modern varieties that fill a unique niche and/or are well-adapted to the area and farmers are genuinely free to choose, it seems unnecessary (and costly) to have traders and seed companies at seed fairs. The participation of seed companies and seed traders does nothing to strengthen the position of farmers in terms of reducing reliance on external resources. Furthermore, as the experience in Nampula, Mozambique shows, farmers explicitly appreciate having a marketing forum of their own (Bakker and Martínez, 2009).
When holding a seed fair, take into account the availability of farmers. It is generally better to hold a fair in the slack (usually the dry) season, after harvest and before land preparation for the next rainy season.
Do not try to hold a seed fair very late in the dry season. Rains can start early and they can be late, but farmers need to be prepared for the earliest rain. Farmers in many different countries complain that the rainy season is becoming more unpredictable, both in timing and in amount of rain, a clear indication of changing climates. For each day’s delay in planting after the rain, a farmer loses an estimated 1.5% of the potential harvest. Early rains can come as much as two months before the “normal” rainy season. Therefore, depending on the crop, in areas with very irregular rainfall, farmers tend to sow the seed before it actually rains, which means that, when rain is insufficient or stops, the farmers have to plant again. If the fair is held late in the dry season, farmers might already have planted much of their seed.
At the same time, a seed fair should not be held too soon after harvest. Planning is easier when the fair deals with only one type of seed (e.g. tomatoes), like the seed fairs held in Cuba by INCA (Ríos and Wright, 1999), but not all crops are harvested at the same time. Harvests also depend on the time of planting, which can vary considerably from one year to the next. Consider the main staple and commercial crops in Northern Mozambique: peanuts can be harvested from February to April, maize and cowpeas from April until June, and rice and sorghum from June until August. Pigeon peas and cassava are ready even later, and are harvested in August and September. Seed must be well-dried to maintain its quality, so sufficient time should be allowed for proper drying of the harvest.
Many organizations try to schedule a seed fair so that it can be integrated into existing local cultural festivities or other events (such as World Food Day) that take place more or less around the time of the planned seed fair. This may be a way to firmly establish seed fairs within the community and to ensure better participation, but take care that the main focus of the seed fair is not lost.
In general, depending on the seasons, a seed fair is best held between two and three months before the “normal” start of the rainy season, in order to guarantee timely access and availability of good quality seeds and high levels of participation from farmers.
Organization of the seed fair
“Unlike more formal agricultural fairs, which farmers attend as passive spectators of others’ materials and technology, a seed diversity fair gives farmers the opportunity of meeting to discuss and demonstrate not only their own seeds, but also their local practices and knowledge that are linked to specific seed varieties, storage, processing and use.” (FAO, 2006)
The value of a seed fair—its benefit to farmers as an occasion for the exchange of seeds and ideas—is not determined solely by the planner’s technical knowledge (e.g. of how to enhance diversity and knowledge development). Success also depends on the organizers’ recognition and appreciation of the connection between seed diversity and cultural norms influencing the exchange of seeds and knowledge.
The organizers (external organizations, farmers’ organizations or communities) of a seed fair should start discussing the fair with the communities involved preferably two months or more before the envisioned date. Discussion should include the objectives, the distribution of responsibilities, the selection (process) of the participants (including gender) according to the objectives, timing, location, guests to invite, cultural activities and logistics.
For farmers to become active participants (as opposed to passive visitors), they must assume the lead in organizing the fair, perhaps by having the community or farmers’ organization select an organizing committee. When farmers do the organizing, dynamics change.
Involving farmers in organizing the seed fair is a way to include the community’s own culture and identity. In most cases, this results in the integration of cultural activities (songs, dance, theatre) conducted by local groups. The activities strengthen the connection to the local community and at the same time serve as a way of providing messages and opportunities for reflection regarding seed, diversity, organization etc.
The invitation of guests (for example local authorities) is a strategic issue. Seed fairs are a very suitable platform for bringing various stakeholders engaged in PGR conservation together. Thus the fair can serve as a meeting point for policy makers, district administrators, researchers, extension agents, seed companies, local government officials, representatives of religious institutions, students and farmers. Through the concrete activity of the fair they can better understand and appreciate the value of PGR and the role of farmers. When local government becomes interested, seed fairs can be integrated in annual activity planning, which could help to ensure continuity of the activity (see Katwal and Wangdi, undated). By inviting media (e.g. local radio), the messages of the fairs can be spread even further.
The location is important, as it should be a place where farmers feel comfortable and that facilitates exchange. Usually it is best to let the community identify the area. For example, in Mozambique the fairs are usually held under a couple of trees, with the area demarcated with grass fences, where the participants display their seed on mats. For the officials, who are invariably (and rightly) invited, tables and chairs are available. At the FAO-promoted fairs in Tanzania, a bit more effort was invested into setup of the fair ground, with “all the displays arranged on well-constructed tables in temporary huts,” but still making sure that “everything was open and easy to see.” (FAO, 2006)
Rotating the seed fairs among communities involved will stimulate wider participation and exchange between communities.
At many seed fairs, farmers (or farmers’ groups) are customarily rewarded for the diversity, quality and/or quantity of the seed presented at the fair, with awards given according to pre-established criteria. In Mozambique, symbolic presents were also offered to the local authorities as appreciation for their support and participation.
If the seed fair is well set up, it can become a means for farmers’ communities to show the richness of their culture and their wealth of knowledge, so often ignored by officials, extensionists and scientists. This, in turn, can boost the self-confidence of farmers.
Context of the seed fair
Holding a seed fair can be a practical way to initiate discussion and exchange on issues like access to seed, biodiversity and sustainable agriculture. Although seed fairs do improve access to seed and the maintenance of agrodiversity, by themselves they are unlikely to reverse trends towards seed uniformity and farmers’ increasing dependency on external inputs (seed, chemical fertilizers and pesticides).
Seed fairs, for example, don’t address directly the issue of seed quality in terms of genetic improvement and of guaranteeing that seeds are free from viruses and have good germination. Yet, seed quality issues are important considering that formal breeding institutions have developed seed quality standards that are being used (via legislation for example) to promote modern varieties (which can impede the distribution of landraces).
It is, thus, not surprising that most seed fairs are organized within the context of a wider program of development alternatives for small-scale farmers. Examples of such development programs have been PPB projects (CBDC-BUCAP project in Asia), the promotion of local production and the set up of community seed banks in Ceará, Brazil (Pinheiro and Peixoto, 2004), the promotion of sustainable and/or organic agriculture in Costa Rica (Greenheck, 2010) and the development of farmer-to-farmer approaches for agricultural innovation in Nampula, Mozambique (Bakker and Martínez, 2009).
These approaches have several things in common:
- An emphasis on farmers’ skills, knowledge and involvement;
- The realization that small-scale farming systems have specific seed requirements; and
- A recognition of the key role small-scale farmers play in maintaining agro-biodiversity.
The sustainability of seed fairs depends to a large extent on how well the activity becomes integrated in the community, and on the attitude of local officials to seed fairs. Although seed fairs, as such, are not necessarily an existing local activity, there are many links to local culture such as informal seed exchanges, the festivities related to the agricultural calendar, and a recognition of the importance of diversity.
There are a number of “success stories.” In Costa Rica, a local NGO forum took up the idea of seed fairs (Greenheck, 2010). In Mozambique the concept of seed fairs has spread from Nampula province to other provinces through the national farmers’ union (UNAC).
Sustainability can be enhanced by including local authorities in seed fairs, but this depends on the relationship that farmers’ organizations have with local authorities. For example, in Bhutan, the annual plans of local agricultural offices have begun to feature seed fairs as part of their activities, with religious institutions and schools also participating in the fairs (Katwal and Wangdi, undated). But in Mozambique, where agricultural extension offices receive money from seed companies (Monsanto), government officials tried to convince the farmers’ organizations to include companies selling modern varieties of seed, fertilizers and herbicides at the fair.
It does not cost much to organize a seed fair within a community (without a subsidized seed distribution component), and in fact if a community picks up the idea, it can easily be self-sustaining. All that is needed is a space with some shade and some food. When the fairs are bigger, with farmers from different communities involved, they become more expensive as logistics get more complicated (transport is usually the biggest expense). But if the fair is organized by farmers and their organizations, costs will remain limited.
Different types of activities are covered by the term seed fair. One type takes place in the context of on-farm biodiversity management and the strengthening of farmers’ position, while the other takes place in a context of seed distribution following calamitous growing conditions (droughts, floods). While the benefits of the latter, which has mainly been compared to direct seed handouts, appear to be overestimated, the first type seems to play an important role within its context.
Dueñas, F. et. al. 2005. “As crianças e as feiras de agrobiodiversidade: uma vivência em Cuba.” Agriculturas (LEISA Brasil), 2(1): 30-33.
Greenheck, F. M. 2010. Reviving Traditional Seed Exchange and Cultural Knowledge in Rural Costa Rica.
Katwal, T. B. and N. Wangdi. 2009. “Mainstreaming Local Partners into Agro-biodiversity Conservation, RNR RDC Wengkhar, Bhutan” 2pp. Excerpt from CBDC-BUCAP 2009 Annual Report.
Martínez, F. Z. and N. Bakker. 2006. Memorias da etapa do diagnósticos e aproximação as zonas agrícolas (cooperativas e associações) de Nampula. UGCAN-Oxfam Bélgica,16p.
Mazvimavi K, Rohrbach D, Pedzisa T and Musitini T. 2008. A review of seed fair operations and impacts in Zimbabwe. Global Theme on Agroecosystems Report no. 40. International Crops Research Institute for the Semi-Arid Tropics. Bulawayo, Zimbabwe: 36 pp. Web:
Noray, S. and Y. Coulibaly. 2009. Évaluation finale interne projet “Produire des semences de céréales en milieu paysan au Mali”. Oxfam Solidarité (unpublished), 81p.
Ríos, H. and J. Wright. 1999. “Early attempts at stimulating seed flows in Cuba”, ILEIA Magazine, 15:(3-4): 38-39.
Rohrbach, D. and K. Mazvimavi. 2006. “Do seed fairs improve food security and strengthen rural markets?” Protracted Relief Program for Zimbabwe. Briefing note nr. 3 Department for International Development (DFID) and International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), Bulawayo, Zimbabwe, 4p. Web:
SEARICE. 2007. Valuing Participatory Plant Breeding: A review of tools and methods. International Development Research Centre, Report of Proceedings of the International Workshop on Participatory Plant Breeding. Southeast Asia Regional Initiatives for Community Empowerment (SEARICE), Viet Nam, February 23-25.
Smith, J. 1996. Gender, environment, and development concerns in irrigated rice schemes in West Africa. Senior Honors Thesis, Wisconsin University, 61p. web: minds.wisconsin.edu/.
Sperling, L. and others. 2001. “A framework for analyzing participatory plant breeding approaches and results.” In: Hodlé, J., J. Lançon and G. Trouche, Sélection participative, Montpellier, 5-6 septembre 2001, pp. 206-219.
Van der Ploeg, J.D. 1993. “Potatoes and knowledge.” In an anthropological critique of development. The Growth of Ignorance. Ed. M. Hobart, London and New York: Routledge.
Vander Vennet, B. et al. 2010. “Sustainability of specialized and mixed farm systems taking risk into account.” Paper prepared for: International Society for Ecological Economics (ISEE), 11th Biennial Conference: Advancing Sustainability in a Time of Crisis, 22-25 August 2010, Oldenburg and Bremen, Germany, 18 p.
If the midweek hump has you in a contemplative mood, this stunning image of Earth, as pictured by the Cassini spacecraft from Saturn, 898 million miles (1.44 billion kilometers) away, may offer a little context. The Earth and the Moon appear as seemingly insignificant specks from the perspective of the spacecraft in its orbit around the gas giant, the second biggest planet in the Solar System. But as it turns out, Cassini is actually talking us up.
The image, taken on Friday, represents the first time that Cassini–Huygens (to give the spacecraft its full name) has captured the Earth and Moon as distinguishable from each other using its highest-resolution wide-angle camera. Both the Earth and the Moon actually appear larger than they should due to the long exposure used to capture as much light as possible. At Cassini's distance from Earth, a single pixel of the wide-angle camera covers a distance of 53,800 miles (86,600 km) across, whereas the diameter of Earth is only about 7,900 miles (12,700 km). If a single flashlight could emit as much light as the Earth reflects, it would look just the same to Cassini if you pointed it in its direction.
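To see why Earth covers only a fraction of a pixel at this range, here is a minimal back-of-the-envelope sketch in Python. The distances and sizes are the figures quoted above; the small-angle arithmetic itself is generic and not tied to any NASA software.

```python
# Rough check of the figures quoted above: how many wide-angle-camera pixels
# does Earth span, as seen from Cassini at Saturn?

cassini_to_earth_km = 1.44e9   # 898 million miles, per the article
pixel_footprint_km = 86_600    # ground distance covered by one pixel at that range
earth_diameter_km = 12_700

# Small-angle approximation: angular size ~= physical size / distance
pixel_angle_rad = pixel_footprint_km / cassini_to_earth_km
earth_angle_rad = earth_diameter_km / cassini_to_earth_km

earth_in_pixels = earth_angle_rad / pixel_angle_rad  # equivalently 12_700 / 86_600

print(f"One pixel subtends ~{pixel_angle_rad * 206265:.0f} arcseconds")
print(f"Earth subtends     ~{earth_angle_rad * 206265:.1f} arcseconds")
print(f"Earth spans about {earth_in_pixels:.2f} of a pixel")
# -> roughly 0.15 of a pixel, which is why the long exposure makes it look bigger
```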
This was a rare opportunity to capture such an image. Normally, the Earth's proximity to the Sun rules out such pictures being taken, as the Sun puts the extremely sensitive sensors of such cameras at risk. But in this case, the Sun had moved behind Saturn from Cassini's point of view, hence Saturn itself appearing as a dark mass to the left of the image. This is only the third image of Earth captured from the outer solar system where the gas giants reside.
NASA also says that this is the first time that people on Earth had advance warning that a photo of the planet was to be taken from interplanetary range. At NASA's invitation, 20,000 people around the world are thought to have photographed themselves smiling and waving at Saturn to coincide with the event.
"We can't see individual continents or people in this portrait of Earth, but this pale blue dot is a succinct summary of who we were on July 19," said Cassini project scientist Linda Spilker. "Cassini's picture reminds us how tiny our home planet is in the vastness of space, and also testifies to the ingenuity of the citizens of this tiny planet to send a robotic spacecraft so far away from home to study Saturn and take a look-back photo of Earth."
The image will form part of a mosaic of Saturn's rings which NASA scientists are currently piecing together. This is set to be released some time in the coming weeks.
The image was joined by another of planet Earth taken on the same day by the MESSENGER spacecraft from a distance of 61 million miles (98 million kilometers) from its orbit about Mercury.
Launched in 1997, Cassini arrived at Saturn in 2004, commencing its 4-year mission to study the planet, its rings, and its natural satellites. Its second mission, Cassini Equinox, extended its observation of the Saturn system by 2 years. Its third and current mission, Cassini Solstice, will run until 2017, allowing observation of a complete seasonal cycle on the planet since its arrival. The mission will come to an end with a series of close flybys of the planet passing inside its rings. Cassini will be destroyed shortly afterwards, when it plummets into Saturn itself.
Generally speaking, when particles of light, called photons, strike the semiconductor material in a solar panel, electricity is produced. This process is called photovoltaics (PV). The resulting electricity can be used to charge batteries, power direct-current devices or be converted into the alternating current that powers our utility grid.
For our purposes, we will discuss the third purpose: creating electricity to reduce our monthly power bill. An array of solar panels can become a secondary source of power for a home or business. This is achieved through an arrangement with the local utility company called “net metering”. When the array produces more power than is consumed, the meter runs backwards, and the customer pays only the difference between consumption and production.
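As a minimal illustration of the billing arithmetic described above, the sketch below works out a monthly net-metering bill in Python. The rate per kilowatt-hour and the treatment of excess credit vary by utility; the figures here are hypothetical, not drawn from any particular tariff.

```python
# Minimal sketch of net-metering billing: the customer pays for net consumption.
# Rate and billing period are illustrative assumptions, not real utility figures.

def net_metering_bill(consumed_kwh: float, produced_kwh: float,
                      rate_per_kwh: float = 0.13) -> float:
    """Return the amount owed; a negative value represents a credit."""
    net_kwh = consumed_kwh - produced_kwh
    return net_kwh * rate_per_kwh

# Example month: the household uses 900 kWh while the array produces 650 kWh.
print(f"Bill: ${net_metering_bill(900, 650):.2f}")   # pays for the 250 kWh difference
print(f"Bill: ${net_metering_bill(500, 650):.2f}")   # negative: 150 kWh credited
```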
A typical solar array feeds DC power to an inverter which feeds AC power to the circuit breaker panel. Briefly stated, when the sun shines the system works. Solar science does the work. A PV solar system requires little to no maintenance or interaction with the user. |
Imagine Antarctica. Imagine an island, with mountains, peaks, ridges, and valleys. Imagine further that a thick layer of ice covers not only the surface of the island that lies above the sea but also an extensive portion of the perimeter that is beneath the sea. On average, its surface stands higher above sea level than that of any other continent. In winter, the sea freezes because temperatures drop to less than -80 degrees Celsius (-112 degrees Fahrenheit), and the island’s area grows to about 10 million square miles. In summer, when some of the ice melts, the ice cover remains on average more than a mile thick, although the overall surface area of the island shrinks to about five million square miles. Even in summer, however, the island is still larger than Europe or Australia. It is Antarctica, and it is impossible to imagine.
So let us instead consider an island that is a large glacier with a thick cover of ice that extends outward, well beyond its land area. The island is shaped roughly like an infinity symbol, with the right (east) side much larger than the left (west). The west side is really a peninsula and archipelago that share a common bedrock, but this is invisible because of the ice cover. What we can see is that even at the perimeter, where there is no land above sea level, there is ice. In some places, the ice reaches down, well beneath the water surface, all the way to the bedrock.
This situation is unstable, because in principle, the mass of ice that is beneath the sea and in continuous contact with liquid water should eventually melt. When it does, this initially leaves an overhanging shelf of ice over the water at the island’s perimeter. Being less dense than water, this shelf will want to float up and, given enough time, will eventually break away from the more interior ice that is pinned to land above sea level. Indeed, about 40 percent of Antarctica’s perimeter consists of such ice shelves; in another 40 percent of the perimeter, the ice cover reaches all the way down to the bedrock.
Island, ice and sea have coexisted for millennia in an uncomfortable equilibrium. In particular, the sea temperatures have not grown sufficiently warm to erode the ice edge irreversibly. Furthermore, the mass of ice on the surface has remained relatively constant, with the seasonal flows of water out to sea in the summer being replaced by deposits of ice in winter. The ice shelves have not thinned sufficiently to become so weak that they would snap and float away out to sea. This was all before the one-degree Celsius warming of the Earth’s surface since around 1980.
Currently, the warmer seawater is eroding the island’s submerged perimeter of ice; simultaneously, the warmer air is melting the ice cover at such an accelerated rate that it cannot be entirely replaced in the winters. As humans continue to pour more carbon dioxide (CO2) from the burning of fossil fuels into the air and more methane (CH4) from operations such as fracking, the intensified greenhouse effect and continued warming will further accelerate the erosion of both the surface and the edge of the island. Once both kinds of erosion become irreversible, meaning that no net ice is replaced, the ice mass will shrink and the island will become more and more bare, in a process that will accelerate out of control until the ice appears suddenly to vanish.
This is more or less the story that Eric Rignot and his colleagues reported about West Antarctica in a Geophysical Research Letters article that was accepted for publication on May 12, 2014. In particular, when they used satellite-based radar interferometry to map the edges of a series of glaciers that drain into a large bay called the Amundsen Sea Embayment, and combined their data with the results of other kinds of surveys, they discovered that between 1992 and 2011:
*Thwaites Glacier retreated 8.7 miles (14 km) at its core and zero to six miles (1 to 9 km) at its edges,
*Haynes Glacier retreated 6 miles (10 km) at its edges,
*Smith/Kohler Glacier retreated about 22 miles (35 km), and its ice shelf is barely pinned to the surface.
*Pine Island Glacier retreated 19 miles (31 km) at its center and snapped and detached from the ground.
All these retreats occurred mostly between 2005 and 2009. The authors note that they must have had a common cause and that the most reasonable explanation is the general warming of the ocean. They further explain that there is no natural land mass to prevent the movement of the massive glaciers out to sea. They conclude:
“The retreat is proceeding along fast-flowing, accelerating sectors that are thinning, become bound to reach floatation and un-ground from the bed. We find no major bed obstacle upstream of the 2011 grounding lines that would prevent further retreat of the grounding lines farther south. We conclude that this sector of West Antarctica is undergoing a marine ice sheet instability that will significantly contribute to sea level rise in decades to come.”
In other words, the disappearance of West Antarctic ice is well under way, and it is irreversible.
It is notable that this research was done under difficult circumstances. For example, the authors write that, since 2001, the ERS-2 satellite has operated without its gyroscopes, and “This made it difficult to control the antenna pointing….” They further observe that “In July 2011, ERS-2 terminated its mission after 16 years of services, far exceeding its planned operational lifespan.” In addition, they make a point of acknowledging “two anonymous reviewers for their comments.” Possibly, the report was delayed, and some of its more frightening arguments had to be removed before publication.
In a later publication for the general public, Rignot stressed that the estimate of 200 years for the Amundsen Sea collapse, which has been repeated again and again in the press, is based on the melting continuing at its current rate. This we know to be impossible, because the melting is an exponential process that has been accelerating all along and will continue to accelerate even more. The acceleration is driven, among other things, by an accelerated warming of the atmosphere and sea surface, continued expansion of the ozone hole, strengthening of currents that bring greater masses of warm waters from the tropics to Antarctica, weakening of the ice shelves due to accelerated melting of the surface ice, weakening of the attachment of the ice below sea level due to accelerated erosion, and decreasing reflectivity of the Earth.
With regard to climate change, again and again, exponential processes have been treated as if they would develop linearly, despite scientists knowing quite well that they would not. Consider, for example, a storm that is approaching your house from six miles away. The storm is currently moving at five miles per hour, but it is expected to double its speed with every new mile. Do you make sure to have cover within one hour and 12 minutes, or within about 24 minutes? Again and again, scientists have done the equivalent of feigning surprise when their timelines, based on a completely bogus linearity, have turned out to be too long. Things have gone much too far for us to continue to play such numbers games.
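A minimal sketch of the arithmetic behind that comparison, assuming the first mile is covered at 5 mph and each subsequent mile at twice the previous speed: the doubling case gives roughly 24 minutes against 72 minutes for the constant-speed case. The exact figure shifts a little depending on when the doubling is taken to begin, but the order-of-magnitude difference is the point.

```python
# Storm example: 6 miles away, 5 mph, speed doubling with every new mile.
distance_miles = 6
initial_speed_mph = 5.0

# Linear assumption: constant 5 mph the whole way
linear_minutes = distance_miles / initial_speed_mph * 60          # 72 minutes

# Exponential assumption: each mile covered at double the previous mile's speed
speed = initial_speed_mph
exp_hours = 0.0
for _ in range(distance_miles):
    exp_hours += 1.0 / speed   # time to cover this one mile
    speed *= 2.0

print(f"Assuming constant speed: {linear_minutes:.0f} minutes")
print(f"Assuming doubling speed: {exp_hours * 60:.0f} minutes")   # about 24 minutes
```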
Rignot blames carbon emissions, which have tripled since the Kyoto Protocol, for the current state of affairs, and he categorically says that the collapse of the ice cover from “the Amundsen sea sector of West Antarctica [is] unstoppable, with major consequences – it will mean that sea levels will rise one metre [more than 3 feet] worldwide. What’s more, its disappearance will likely trigger the collapse of the rest of the West Antarctic ice sheet, which comes with a sea level rise of between three and five metres [10 to more than 16 feet]. Such an event will displace millions of people worldwide.”
The sea-level rise of 10 to 16 feet will come in decades, rather than 200 years. It will submerge essentially every port city in the world, including Guangzhou, Mumbai, Shanghai, Ho Chi Minh City, Kolkata, Osaka-Kobe, Alexandria, New York, New Orleans, Miami, and indeed all of South Florida. This will likely displace over 300 million people, many of them in countries that have equated development with movement of the majority of their populations to low-elevation coastal zones in port cities. The displacement and homelessness from the changes in sea level might be the least of humanity’s problems. Being a geophysicist, Rignot does not address the possible effects of the changes in ocean salinity on sea life, but one can expect that such a huge influx of fresh water into the oceans will cause radical changes in the areas of high primary productivity (i.e. that feed those at the lowest levels in the food chain) and result in massive fish kills. Other changes of the oceans are likely to ensue, the details of which we cannot begin to imagine. One of these, for certain, will be a change in the ocean currents.
The report on Antarctica by Rignot and colleagues has been characterized as a “holy shit” and “point of no return” moment for humanity, but it is a report on data that were collected years ago about a process that accelerates with each passing year. What does one call a holy-shit-point-of-no-return moment that happened three years ago?
Dady Chery is the co-Editor-in-Chief at News Junkie Post.
The positive connection between games and online learning
By Mitch Weisburgh, cofounder of Games4Ed
Game-based learning has the potential to drastically improve the way children are taught.
Games have particular qualities that let them engage hard-to-reach students in a way lessons cannot. Researchers have begun to explore the intrinsic qualities of games that make them promising learning tools, and anecdotal evidence is available everywhere.
I personally know a student who struggled in history until Assassin’s Creed sparked his interest in the French revolution; he is now an honors history student. I know many students who spend hours playing Minecraft and many hours more learning new skills and techniques on YouTube, which they then apply to Minecraft. Clearly, a good game is a powerful motivator for learning. It engages the mind and the passions simultaneously, with obvious results. But why, and how, does this work, and how can we harness it in schools?
Who uses games? 99% of boys, 94% of girls, and 62% of teachers play video games.
Games foster ideal conditions for learning
There is a sweet spot for learning that lies between what a person can do without help, and what they can only accomplish with help. Lev Vygotsky coined the term zone of proximal development to describe this spot. In the zone of proximal development, the lesson is neither so easy that the student is bored, nor so difficult that he gives up.
Teachers use their training and skill to create lessons that fall into their students’ zone of proximal development, but Plass, Homer, and Kinzer show in Playful Learning: An Integrated Design Framework that successful games tend to aim toward this same zone. The tantalizing opportunity provided by games is a lesson that measures player skill, and then delivers an appropriate response automatically.
Gamers beware, however. According to Tobias et al, when the game mechanics become complex, the zone of proximal development is overshot and learning can be inhibited. Game designers “need to be mindful of the cognitive load imposed on players” to learn to play.
Games encourage growth
Games relate to another key aspect of learning. Carol Dweck pioneered the idea that individuals who see themselves as evolving through hard work and dedication will grow their abilities, while those who see their talents as fixed traits will not. She called this the growth mindset paradigm, laid out in her book Mindset: The New Psychology of Success. Games reinforce the growth mindset through their treatment of failure.
Games that support a growth mindset allow for “graceful failure” by embedding low-stakes failure into the game mechanics. These games encourage balanced risk-taking and exploration. A player who fails at a well-made game immediately tries again, and when the player eventually succeeds, the idea of growth through practice is reinforced. Kris Mueller, an eighth grade teacher writing for Edutopia, wrote: “A well-designed game leads players through carefully-leveled tasks that prepare them to succeed in bigger challenges.”
Games improve spatial skills
There are literally hundreds of research and pseudo-research papers on games. A meta-analysis of more than 100 studies, Effects of video-game play on information processing: a meta-analytic investigation, found that studies generally agreed: games improve visual processing, visual-spatial manipulation of images, and auditory processing. The analysis, undertaken by Powers, Brooks, Aldrich, Palladino, and Alfieri, attributed much of the improvement to video games demanding that players interpret, mentally transform, manipulate, and relate dynamic changing images.
Games have significant value for education because the skills cultivated by games are widely applicable outside of games. Tobias, Fletcher, and Chen showed this in a review of 95 studies, Digital Games as Educational Technology: Promise and Challenges in the Use of Games to Teach (to be published later in 2015). They found “evidence of near and far transfer in applying learning from games to external tasks.”
Specifically, action games, often called First Person Shooter (FPS) games, improve attention, mental rotation, task switching, speed of processing, sensitivity to inputs from the environment, resistance to distraction, and flexibility in allocating cognitive as well as perceptual resources. Not only did people learn these skills from video games, there was a significant ability to transfer that learning to other activities.
Games are linked to STEM achievement and greater creativity
Spatial skills "can be trained with video games (primarily action games) in a relatively brief period," and these skills "last over an extended period of time." More excitingly, the improvement in visual-spatial skills is related to other, more scholarly, improvements. The Benefits of Playing Video Games (Granic, Lobel, and Engels in American Psychologist, January, 2014) noted that those learning these skills from video games show increased efficiency of neural processing. Improvements in spatial skills predict achievement in science, technology, engineering, and mathematics.
There are also links between playing video games and enhanced creativity, although we do not yet know the exact nature of the connection. Perhaps games enhance players’ creativity, or creative people tend to play video games, or some combination is at work.
Games foster engagement
One of the most important factors related to learning is time on task. It is highly related to proficiency and can be used to predict math proficiency to the nearest tenth of a grade placement. Yet, students are found to be thinking about topics entirely unrelated to academics a full 40% of the time while in classrooms. In fact, on average, high school students are less engaged while in classrooms than anywhere else.
In the Handbook of Positive Psychology in Schools, Shernoff and Csikszentmihalyi make two points that relate directly to the need for increased engagement. They found that enjoyment and interest during high school classes are significant predictors of student success in college, and that this engagement is a rarity in US schools.
High engagement is observed when students focus on mastering a task according to self-set standards or a self-imposed desire for improvement. You’ll remember that those standards are linked to the growth mindset outlined by Dweck. Engagement (enjoyment and interest) is represented by heightened concentration and effort in skill-building activities along with spontaneous enjoyment from intrinsic interest and continued motivation.
This relationship between time spent and skill applies to video games as well. The more time spent playing educational games, the greater the gain in skills and knowledge. Unlike class time, however, video games are great at capturing and holding attention. The average gamer spends 13 hours a week playing games.
It is not clear whether the positive effects of game-based learning stem from greater time spent learning, or increased efficiency in learning, or both. It is clear, however, that more time is spent learning when educational games are used than when they are not. Tobias et al report that those who learn using games, “tend to spend more time on them than do comparison groups.”
What makes an optimal learning environment?
Shernoff and Csikszentmihalyi propose conditions for an optimal learning environment which match strikingly with the benefits of educational gaming. An optimal learning environment:
- presents challenging and relevant activities that allow students to feel confident and in control
- promotes both concentration and enjoyment
- is intrinsically satisfying in the short term while building a foundation of skills and interests
- involves both intellect and feeling
- requires effort and yet feels like play
Their research shows that video games may foster this environment. Students using a video game approach made considerably greater learning gains than those in a traditional classroom, and showed a higher level of engagement.
Shernoff provides an example: a full semester college course, Dynamic Systems and Control.
A college course was designed around a video game in which students race a virtual car around a track for homework and lab exercises. The students reported a higher level of interest, engagement, and flow, and the video game was able to maintain “the high level of rigor inherent to the challenging engineering course while adding the perception of feeling active, creative, and in control characteristic of flow activities. The students who interacted with the video game also demonstrated greater depth of knowledge and better performance in the course.”
SRI, in research on GlassLab STEM games for K12, found that, for average students, learning achievement increases by 12 percent when game-based learning augments traditional instruction, and if the "game" is a simulation, achievement increases by 25 percent.
The research so far points to the tremendous value of games in education, and marks signposts for differentiating "good" and "bad" games. Yet there is still little knowledge on the most effective ways to produce games "that reliably yield pre-specified instructional objectives." Also, it's hard to know in advance if students will master a specific standard through X hours playing any one game.
A combination of games and other instructional methods has been shown to be especially effective. “Integrating games into the curriculum improves transfer from games to school learning tasks.”
Games, combined with other instructional strategies, may be the solution to Bloom’s two-sigma problem.
Effects of video-game play on information processing: A meta-analytic investigation Powers, Brooks, Aldrich, Palladino, Alfieri; Psychonomic Society, 22 March, 2013
Digital Games as Educational Technology: Promise and Challenges in the Use of Games to Teach Tobias, Fletcher, Chen; Educational Technology, due in September or October 2015
Playful Learning: An Integrated Design Framework Plass, Homer, Kinzer; Games for Learning Institute; December, 2014
Flow in Schools Revisited Shernoff, Csikszentmihalyi, Handbook of Positive Psychology in Schools, Second Edition, Routledge, Taylor & Francis Group
Engagement and Positive Youth Development: Creating Optimal Learning Environments David J Shernoff, APA Educational Psychology Handbook, Chapter 8
Independent Research and Evaluation on GlassLab Games and Assessments, SRI, 2012, http://ww2.kqed.org/mindshift/2014/06/27/games-in-the-classroom-what-the-research-says/
The Benefits of Playing Video Games, Granic, Lobel, Engels; American Psychologist, January, 2014
Mitch Weisburgh is the cofounder of Games4Ed. |
Sea cucumbers are a class of echinoderms, the Holothuroidea. They have a longish body and leathery skin. Sea cucumbers live on the floor of the ocean. Most sea cucumbers are scavengers. There are about 1500 species of sea cucumbers. Sea cucumbers have a unique respiratory system, and effective defences against predators. They are eaten as food in China.
Like all echinoderms, sea cucumbers have an endoskeleton just below the skin, calcareous structures that are usually reduced to isolated ossicles joined by connective tissue. These can sometimes be enlarged to flattened plates, forming an armour. In pelagic species the skeleton is absent.
Overview
A remarkable feature of these animals is the collagen which forms their body wall. This can be loosened and tightened at will. If the animal wants to squeeze through a small gap, it can undo the collagen connections, and pour into the space. To keep itself safe in these cracks, the sea cucumber hooks up all its collagen fibres to make its body firm again.
The animals have an internal respiratory tree which floats in the internal watery cavity. At the rear, water is pumped in and out of the cloaca, so gaseous exchange takes place with the respiratory tree in the gut (p. 80).
Defence
Some species of coral reef sea cucumbers defend themselves by expelling sticky cuvierian tubules to entangle potential predators. These tubules are attached to the respiratory tree in the gut. When startled, these cucumbers may expel the tubules through a tear in the wall of the cloaca. In effect, this squirts sticky threads all over a predator. Replacement tubules grow back in one-and-a-half to five weeks, depending on the species. The release of these tubules can also be accompanied by the discharge of a toxic chemical known as holothurin, which has similar properties to soap. This chemical can kill any animal in the vicinity and is one more way in which these sedentary animals can defend themselves. Other cucumbers, lacking this device, can split their intestinal wall, and spew out their gut and respiratory tree. They regenerate them later. Zoologists who have witnessed this believe it to be an impressive deterrent. "The mess one individual can make must be seen to be believed" (p. 81).
The existence of these defences explains why the holothurians were able to do without the strong skeleton of their ancestors.
Feeding
Highly modified tube feet around the mouth are always present. These are branched and retractile tentacles, much larger than the regular tube feet. Sea cucumbers have between ten and thirty such tentacles, depending on the species. There is a ring of larger ossicles round the mouth and oesophagus to which the muscles of the tube feet are attached. With their sticky tentacles the animal collects detritus and small organisms.
References
- Pelagic sea cucumber: Information from Answers.com
- Reich, Mike (2006). "Cambrian holothurians? – The early fossil record and evolution of Holothuroidea". In Lefebvre B., David B., Nardin E. & Poty E. (eds), Journées Georges Ubaghs (Dijon, France: Université de Bourgogne): 36–37. http://www.geobiologie.uni-goettingen.de/people/mreich/pdf/PDFs/POST_Dijon_Seegurken1.pdf.
- Piper, Ross (2007). Extraordinary animals: an encyclopedia of curious and unusual animals. Greenwood Press. ISBN 0313339228.
- A cloaca is a joint anus and sexual opening
- Nichols D. 1962. Echinoderms. Hutchinson, London. ISBN 0-09-065994-5
- Flammang, Patrick; Ribesse, Jerome & Jangoux, Michel (2002-12-01). "Biomechanics of adhesion in sea cucumber cuvierian tubules (echinodermata, holothuroidea)". Integrative and Comparative Biology 42 (6): 1107–1115. doi:10.1093/icb/42.6.1107. http://icb.oxfordjournals.org/cgi/content/abstract/42/6/1107. Retrieved 2007-10-03.
- Barnes, Robert D. (1982). Invertebrate zoology. Philadelphia, PA: Holt-Saunders. pp. 981–997. ISBN 0-03-056747-5. |
Photosynthesis is a unique process that allows independent colonization of the land by plants and of the oceans by phytoplankton. Although the photosynthesis process is well understood in plants, we are still unlocking the mechanisms evolved by phytoplankton to achieve extremely efficient photosynthesis. Here, we combine biochemical, structural and in vivo physiological studies to unravel the structure of the plastid in diatoms, prominent marine eukaryotes. Biochemical and immunolocalization analyses reveal segregation of photosynthetic complexes in the loosely stacked thylakoid membranes typical of diatoms. Separation of photosystems within subdomains minimizes their physical contacts, as required for improved light utilization. Chloroplast 3D reconstruction and in vivo spectroscopy show that these subdomains are interconnected, ensuring fast equilibration of electron carriers for efficient optimum photosynthesis. Thus, diatoms and plants have converged towards a similar functional distribution of the photosystems although via different thylakoid architectures, which likely evolved independently in the land and the ocean.
Photosynthesis is a unique process that converts sunlight energy into organic matter on Earth, feeding almost the entire food chain. Photosynthesis is accomplished on the land, which is dominated by plants, and in the ocean, which is mostly colonized by phytoplankton. In eukaryotes, this process occurs in a specialized organelle: the plastid. Plant photosynthetic plastids (chloroplasts) are derived from a cyanobacterium-like organism via primary endosymbiosis, whereas the majority of phytoplankton plastids are derived from a red eukaryotic microalga via secondary endosymbiosis. Their different phylogenetic origins have led to distinct structural plastid designs. Differences can be observed at the level of the envelope, the membrane system surrounding the stromal space, and of the photosynthetic membrane network, the thylakoids. Primary plastids contain a two-membrane envelope, whereas secondary plastids generally have four envelope membranes1. Primary plastids also contain differentiated thylakoid domains that segregate the components of the photosynthetic electron flow chain: the two photosystems (PS), which perform light photochemical conversion, and the cytochrome b6f, which catalyses electron exchanges between the two PSs. PSII is mostly located in the appressed grana stacks, PSI is mainly found in the non-appressed stroma lamellae, whereas the cytochrome b6f is more homogeneously distributed2. The lateral heterogeneity and the consequent physical confinement of the PSs prevent energy withdrawal from PSII by PSI via the thermodynamically favourable energy transfer (energy spillover)2. However, this segregation imposes a need for long-range diffusion of intermediary electron carriers (plastoquinones, plastocyanins or soluble cytochromes) between the two domains. Restricted diffusion within the crowded thylakoid membranes and/or in the narrow luminal space limits the maximum rate of photosynthetic electron flow in some conditions3,4.
No thylakoid subdomains are visible in secondary plastids, where available electron micrographs show loose stacks of mostly three thylakoids (sometimes two or four) with few anastomoses in some cases5,6. Moreover, while the membrane distribution of a few complexes (PSI and the light harvesting complex, Fucoxanthin Chlorophyll Protein-FCP)6 is known, no complete picture of the arrangement of the photosynthetic machinery is available to date. Overall, the mechanisms ensuring optimum light absorption and downstream electron flow are still undetermined in secondary plastids, although the organisms containing these plastids are believed to be responsible for ∼20% of the global oxygen production7. Here, we combine functional, biochemical and immunolocalization analyses with 3D imaging in the diatom Phaeodactylum tricornutum, to reveal a sophisticated thylakoid membrane network that orchestrates photosynthetic light absorption and utilization. We show that segregation of the PSs in specific thylakoid subdomains within a functionally seamless space allows balanced light capture without restraining electron flow for optimal photosynthetic activity.
Energy spillover in P. tricornutum
The reported loose thylakoid structure of diatoms should promote random distribution of PSI and PSII, thereby favouring PSII to PSI energy spillover via physical contacts between the complexes2. Indeed, spillover has previously been reported upon poisoning PSII (refs 8, 9) in red algae, considered to be the ancestors of secondary plastids and, more recently, in dinoflagellates (Symbiodinium)10, which are derived from secondary endosymbiosis. We tested this hypothesis by measuring changes in PSI activity upon inhibition of PSII in P. tricornutum. We reasoned that if PSI and PSII are in physical contact (Fig. 1a), inhibition of PSII photochemistry should increase the utilization of PSII-absorbed light by PSI, thus enhancing PSI activity. Conversely, no change in activity is expected if PSI and PSII are separated and do not share their excitation energy, similar to plants (Fig. 1b).
We found that inhibition of PSII with 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) plus hydroxylamine (HA, Fig. 1c) did not appreciably accelerate PSI activity in P. tricornutum cells (Fig. 1d–f). This was revealed by the lack of significant changes in the oxidation rate of P700 (the primary donor to PSI, Fig. 1d) and of its cytochrome electron donors (Fig. 1e, see Methods), that is, of the overall pool of PSI donors (Fig. 1f). Similar results were obtained under different light intensities (Supplementary Fig. 2), indicating that if present11, spillover is of very limited amplitude in P. tricornutum. This finding is in line with earlier reports in other diatoms (Cyclotella meneghiniana)12, where absence of spillover can be deduced based on fluorescence lifetime analysis.
Segregation of photosynthetic complexes in P. tricornutum
Thus, either (i) lipid or biochemical barriers prevent energy exchange between adjacent PSs or (ii) PSI and PSII are physically segregated in different thylakoid domains. To distinguish between the two possibilities, we immunolocalized the two PSs in cells prepared using the Tokuyasu protocol13, a method that ensures optimum antibody accessibility while preserving membrane structures (Supplementary Fig. 3). We localized PSI using two different antibodies against a core subunit (PsaA, Fig. 2a) and a more peripheral subunit of the complex (PsaC, Supplementary Fig. 4a). We preferentially found this complex in the external, ‘peripheral’ stromal-facing thylakoid membranes (Fig. 2d, green sectors), in agreement with earlier results6. On the other hand, we mainly located PSII in the ‘core’ thylakoid membranes (Fig. 2d, violet sectors) using two different antibodies (PsbA, Fig. 2b and PsbC, Supplementary Fig. 4b). We also immunolocalized the cytochrome b6f complex (using the PetA antibody, Fig. 2c), finding that its distribution was similar to that of PSI.
A statistical analysis of 258 micrographs (Principal Component Analysis, Fig. 2e,f, Supplementary Fig. 4d,e and Supplementary Tables 1 and 2) indicated that the barycentres of the PSI, PSII and cyt b6f complex distributions do not localize in the same thylakoid compartments. This analysis confirmed the preferential ‘core’ location of PSII (black squares in Fig. 2e,f and Supplementary Fig. 4d,e) and the ‘peripheral’ location of PSI (red circles in Fig. 2e,f and Supplementary Fig. 4d,e). Conversely, while the cyt b6f complex is more concentrated in the peripheral membranes (cyan triangles in Fig. 2e,f and Supplementary Fig. 4d,e), its distribution is more homogeneous than that of PSI and PSII. Overall, this non-homogeneous distribution of the photosynthetic complexes is reminiscent of previous results in plants2 and in green algae14.
We complemented the immunolocalization analyses with biochemical fractionation. In plant thylakoids15, PSI, which is located in the stromal-exposed thylakoid lamellae, is more prone to solubilization by detergents than PSII, which is buried in the appressed membranes of the grana. We investigated the detergent accessibility of PSs in chloroplasts isolated from P. tricornutum cells by exposing them to increasing concentrations of the mild detergent digitonin and analysed the solubilized supernatant and pellet fractions for the presence of the PSs and cyt b6f by immunoblotting (Supplementary Fig. 5). As shown in Fig. 2g, solubilization of PSI and cytochrome b6f requires a lower detergent concentration than for PSII, suggesting that PSI and cyt b6f are located in the stroma-accessible thylakoids while PSII is in the less accessible membranes of the diatom chloroplasts, in agreement with the immunolocalization results.
Functional consequence of photosynthetic complex segregation
The segregation of PSI and PSII in different thylakoid sub-compartments should confine the two PSs in slow diffusion domains, as observed in plants3,4. We tested this hypothesis using a functional approach3. We compared the theoretical (Kth) and experimental (Kexp) equilibrium constants between PSI and its electron donors (cytochromes c6 and cytochrome f, see Methods). Kth was deduced from the redox potentials of cyt c6 (the soluble electron donor to PSI, 349 mV)16 and P700 (the primary electron donor to PSI, 420 mV)17. Kexp was calculated (equation (2), see Methods) from an ‘equilibration plot’ (Fig. 3), which shows the relationship between oxidized P700 (Fig. 3a) and oxidized c-type cytochromes (cyt, Fig. 3b) during dark re-reduction after illumination (Fig. 3a–c). Kexp should be equal to Kth in the absence of diffusion domains, but Kexp will be less than Kth if electron flow is limited by diffusion domains3,4. In this second case, the redox state of P700 and cyt in each domain will depend on their relative stoichiometry. During the reduction process that follows the light offset, complete reduction of P700+ and a partial reduction of cyt+ is expected in the compartments with a low P700/cyt stoichiometry. Conversely, a large fraction of P700 will still be oxidized in domains with a high P700/cyt stoichiometry. Because the equilibration plot averages the local redox states of P700 and cyt of all the different domains, the concomitant presence of P700+ (in high P700/cyt domains) and of reduced cyt c (in low P700/cyt domains) translates into a Kexp estimate lower than the Kth value. We generated several equilibration plots (Fig. 3c) by poisoning photosynthetic electron flow (induced by saturating illumination) with increasing concentrations of DCMU (see also Fig. 3d), and found that diffusion was restricted (Kexp<Kth) when PSII generates more than 150 electrons per second (Fig. 3c, blue and green data points). However, Kexp=Kth (diffusion is no longer restricted) when the PSII rate is less than 100 electrons per second (Fig. 3c, dark red, red, orange and pink points). Thus, the compartmentalization of PSI and PSII in different thylakoid domains also generates diffusion domains in P. tricornutum, similar to plants. However, their equilibration time, 10 ms (corresponding to 100 electrons per second), is much faster than in plants (∼150 ms, corresponding to ∼7 electrons per second)3.
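To make the diffusion-domain argument concrete, here is a minimal numerical sketch (not taken from the paper; the domain compositions and numbers of oxidizing equivalents are invented for illustration). Each domain is placed at internal redox equilibrium with K = 16; averaging two domains of opposite stoichiometry then yields an apparent equilibrium constant well below 16, exactly the signature described above.

```python
import numpy as np

K_TH = 16.0  # theoretical equilibrium constant between P700 and the cyt pool

def holes_on_p700(p700_tot, cyt_tot, holes, k=K_TH, iters=60):
    """Number of oxidizing equivalents sitting on P700 when one domain is at
    internal redox equilibrium (K = [P700][cyt+] / ([P700+][cyt]))."""
    lo, hi = max(0.0, holes - cyt_tot), min(p700_tot, holes)
    f = lambda x: (p700_tot - x) * (holes - x) - k * x * (cyt_tot - holes + x)
    for _ in range(iters):                     # simple bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Two isolated domains with opposite P700:cyt stoichiometries, each still
# holding two oxidizing equivalents during the dark relaxation (values invented).
domains = [dict(p700=1.0, cyt=5.0, holes=2.0),   # low  P700/cyt ratio
           dict(p700=5.0, cyt=1.0, holes=2.0)]   # high P700/cyt ratio

p700_ox = cyt_ox = p700_tot = cyt_tot = 0.0
for d in domains:
    x = holes_on_p700(d["p700"], d["cyt"], d["holes"])
    p700_ox, cyt_ox = p700_ox + x, cyt_ox + d["holes"] - x
    p700_tot, cyt_tot = p700_tot + d["p700"], cyt_tot + d["cyt"]

p = p700_ox / p700_tot            # globally averaged oxidized P700 fraction
c = cyt_ox / cyt_tot              # globally averaged oxidized cyt fraction
k_exp = ((1 - p) * c) / (p * (1 - c))
print(f"apparent K = {k_exp:.1f}, theoretical K = {K_TH}")   # apparent K << 16
```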
Chloroplast structure in P. tricornutum cells
To explain the fast equilibration time of redox carriers in diatoms, we re-examined the TEM micrographs of samples prepared with the Tokuyasu protocol. By preserving the membrane structures, this technique allows additional features of the P. tricornutum thylakoids to be observed. We identified regions where thylakoid membranes are apparently interconnected (Supplementary Fig. 6a,b) or where they abruptly ‘disappear’ in cross-sections (Supplementary Fig. 6c, yellow circles), as if they tilt out of the micrograph plane. These features suggest the existence of a more complex 3D thylakoid network than the simple layout of three loosely juxtaposed thylakoids that is often presented in the case of secondary plastids. We collected 600 images of ultrathin sections using focused ion beam scanning electron microscopy (FIB-SEM) to reconstruct the 3D structure of a P. tricornutum cell (Supplementary Movie 1). By segmenting the 3D volume, we identified the organelles and their contacts (Fig. 4a). The mitochondrion (red) appears as a continuous network sitting on the chloroplast (green) with physical contacts between the two organelles (Fig. 4b). This mitochondrial localization likely facilitates energetic exchange between the two organelles, as recently reported18. Contact points are also seen between the chloroplast and nucleus (Fig. 4c), as expected since, in secondary plastids, the outer membrane of the chloroplast envelope is connected to the nuclear ER owing to their evolutionary history19. These contacts could possibly mediate exchanges between the two compartments, including redox signalling20 as recently proposed in plants via the formation of transient connections between the chloroplasts and the nucleus, the stromules21.
The 3D structure of the photosynthetic membranes (Fig. 5a–d) confirmed the presence of parallel layers of stacked thylakoids (purple), but also revealed the presence of connections (Fig. 5b–d, yellow circles) between them. Although the resolution of these images (4 nm pixel, see Methods) does not allow us to distinguish the individual thylakoid membranes, we could nonetheless distinguish the connections from plastoglobules, chloroplast lipoprotein particles often observed between the photosynthetic membranes, which appear as globular particles in our 3D reconstruction (Fig. 5c, red circles).
Our 3D FIB-SEM reconstruction of the P. tricornutum plastid thus suggests the existence of an intricate thylakoid network, at variance with previous hypotheses suggesting that the photosynthetic membranes of secondary endosymbiotic plastids are loosely structured22. The compartmentalization of the PSs in the peripheral and core thylakoid membranes (Fig. 2) is compatible with the hypothesis that the core membranes are enriched in monogalactosyldiacylglycerol, since this lipid favours the stability and function of the dimeric PSII complex23,24. The observed organization of the PSs in the thylakoids accounts for optimum partitioning of absorbed light in low and high light conditions. Limited spillover prevents unbalanced light capture by PSI and PSII, which have similar absorption spectra in diatoms, unlike plants22. This may explain why state transitions, the migration of the light harvesting complexes between PSII and PSI to optimize low light capture in plants25, have been reported to not exist in diatoms26. Limited spillover in diatoms could also explain the high capacity of PSII to thermally dissipate excess light through non-photochemical quenching27. Indeed, non-photochemical quenching is not expected if the surplus energy in PSII were to be dissipated via spillover to PSI, as in red algae9.
Our results suggest that Viridiplantae (including plants and green algae) and diatoms have achieved a similar functional topology of the PSs to optimize photosynthetic light utilization. However, this functional equivalence is achieved with different thylakoid architectures, which likely evolved independently in primary and secondary plastids, and differently affect the electron flow capacity. While PS confinement constrains electron flow in plants, possibly limiting photosynthesis, no such limitation is observed in diatoms, where the less structured thylakoids allow very fast redox equilibration between the two PSs. Indeed, the presence of connections between thylakoid layers should facilitate diffusion of cyt c6 between the cyt b6f complexes in the core membranes and the PSI in the peripheral ones, and the diffusion of plastoquinones from PSII in the core membranes towards the cyt b6f complexes in the peripheral regions. Overall, the faster diffusion of the soluble electron carriers would promote fast redox equilibration between the photosynthetic complexes in the diatom even at very high electron flux, unlike plants.
We propose that these features, along with the tight interactions between organelles for efficient energetic exchange18, provide the most adapted framework for high photosynthetic efficiency and acclimation capacity to the ever-changing ocean environment. Indeed, the less ‘rigid’ structure of secondary plastids could make the establishment of physical contacts between PSs possible under conditions where substantial protection of PSII is needed. Consistent with this idea, red microalgae can develop sustained spillover to protect PSII in high light9, while the symbiotic alga Symbiodinium triggers PSII spillover in response to temperature stress10. In the latter case, occurrence of topological changes favouring physical contacts between the two PSs has been proposed to account for the enhancement of spillover10. On the other hand, accumulation of PSI in specific thylakoid domains has been reported in P. tricornutum cells exposed to a particular light regime (prolonged far red light illumination), possibly to segregate it from PSII (ref. 28).
Similar structural features have been reported in green algae. In Chlamydomonas, where the number of stacks can vary from 2 to 15, with a median of 3 thylakoids29,30, connections between thylakoids also appear in cryo-tomograms29. While no spillover exists between PSII and PSI in this alga31, recent data have shown that exposure to different light qualities induces major structural changes in the thylakoids (as revealed by SANS, Small Angle Neutron Scattering), triggering changes in the harvesting capacity of PSII (ref. 32).
Phaeodactylum tricornutum cultivation
The P. tricornutum Pt1 strain (CCAP 1055/3) was obtained from the Culture Collection of Algae and Protozoa, Scottish Marine institute, UK. Cells were grown in the ESAW (Enriched Seawater Artificial Water) medium33, in 50 ml flasks in a growth cabinet (Certomat BS-1, Sartorius Stedim, Germany), at 19 °C, a light intensity of 20 μmol photon m−2 s−1, a 12-h light/12-h dark photoperiod and shaking at 100 r.p.m. Cells were collected in exponential phase, concentrated to a density of 2 × 107 cells per ml and used for experimental characterization.
Spectroscopic analysis
Spectroscopic analysis was performed on intact cells at 20 °C, using a JTS-10 spectrophotometer (Biologic, France). To assess energy spillover from PSII to PSI, redox changes of P700 and of its electron donor pool were monitored. Because of the high equilibrium constant between P700 and its electron donor pool, one needs to estimate the redox states of both P700 and of this pool to quantify the whole amount of electrons that is delivered to PSI. In diatoms, a c-type cytochrome acts as the electron donor to PSI, equivalent to plastocyanin in plants. This cytochrome will be referred to as cytochrome c6 (ref. 34), instead of cytochrome cx (ref. 35) or cytochrome c6A (ref. 36) as used in other publications. Cyt c6 and cytochrome f of the cyt b6f complex have very similar absorption features. It is thus not possible to distinguish them spectroscopically. We therefore define ‘cyt’ as the pool of cyt c6+cyt f. Cyt redox changes were calculated as ΔA554 − 0.4 × ΔA520 − 0.4 × ΔA566, where ΔA554, ΔA520 and ΔA566 are the absorption difference signals at 554, 520 and 566 nm, respectively18. P700 redox changes were measured at 705 nm. To rule out any possible contribution of fluorescence emission at this wavelength, experiments were repeated at 820 nm (where P700+ is still detected but chlorophyll fluorescence is not measured). Similar results were obtained at both wavelengths, indicating that the interference between P700 redox changes and chlorophyll fluorescence emission was negligible.
The oxidation kinetics of P700, of cyt and of the total pool of electron donors to PSI result from concomitant electron injection by PSII and withdrawal by PSI. Inhibiting PSII activity with DCMU also modifies the rate of electron injection into cyt+ and P700+. This translates into an increase of the net oxidation rate of P700 and of cyt, which could be misinterpreted as an increase of the PSI activity. Therefore, to calculate the true PSI oxidation rates, we evaluate the reduction rate of this electron donor pool as the slope (SD) of signal relaxation upon switching the light off (Supplementary Fig. 1b). This rate was added to the net oxidation rate, which we estimate from the slope in the light (SL). The sum (SL+SD) provides the absolute oxidation rate (see Supplementary Fig. 1 for an example in the case of the total PSI electron donor pool).
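A small sketch of this rate bookkeeping (synthetic numbers; the −0.4 deconvolution weights follow the reconstructed expression above and should be treated as an assumption):

```python
import numpy as np

def cyt_signal(dA554, dA520, dA566):
    """Deconvoluted c-type cytochrome redox signal; the -0.4 weights follow the
    reconstructed expression above and are an assumption, not verified values."""
    return dA554 - 0.4 * dA520 - 0.4 * dA566

def absolute_oxidation_rate(t, signal, light_off_time):
    """S_L (net slope while the light is on) + S_D (relaxation slope measured
    after the light is switched off) = absolute oxidation rate."""
    in_light = t < light_off_time
    s_L = np.polyfit(t[in_light], signal[in_light], 1)[0]
    s_D = -np.polyfit(t[~in_light], signal[~in_light], 1)[0]
    return s_L + s_D

# Synthetic check: oxidation at 100 units/s opposed by reduction at 60 units/s in
# the light, pure reduction in the dark (all numbers invented for illustration).
t = np.linspace(0.0, 0.008, 400)
sig = np.where(t < 0.004, 40.0 * t, 40.0 * 0.004 - 60.0 * (t - 0.004))
print(absolute_oxidation_rate(t, sig, 0.004))   # ~100
```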
Inhibition of PSII activity by DCMU and HA was probed by following changes in chlorophyll emission from F0 (minimum fluorescence level, in which QA, the primary quinone acceptor of PSII, is oxidized) to the Fm level, in which QA is fully reduced because PSII is blocked. As shown in Fig. 1c and Supplementary Fig. 2a–c, light reduces QA in DCMU-poisoned samples but this inhibitor alone is not sufficient to fully reduce QA in the short time (4 ms) employed in our tests to measure oxidation of P700 and of cyt. This is particularly evident in low light (for example, Supplementary Fig. 2a, red circles), because at low photon flux the rate of QA reduction is diminished. Since full reduction of QA is needed to induce spillover, DCMU alone could not be sufficient to probe the occurrence of energy spillover in our experimental conditions. On the other hand, a complete reduction of QA is observed in the presence of HA, because this inhibitor prevents re-oxidation of reduced QA in PSII (ref. 37). By ensuring QA reduction (Fm level) at the beginning of illumination (Fig. 1c, Supplementary Fig. 2a–c green triangles), HA and DCMU ensure optimum conditions to test the occurrence of spillover.
To assess the existence of restricted diffusion domains, we compared the theoretical equilibrium constant (Kth) between PSI and its electron donors (cyt) with the experimental one (Kexp), following previous approaches in plants3 and bacteria38. In P. tricornutum we calculate a value of 16 for Kth, based on the redox potentials of cyt c6 (349 mV)16 and of P700 (420 mV)17. To evaluate Kexp the following equation was used to relate redox changes of P700 and of cyt in an ‘equilibration plot’ (Fig. 3c).
where [cyt], [cyt+], [P700] and [P700+] represent the concentrations of the reduced and oxidized forms of the cyt and P700 pools.
From equation (1), the relationship between the relative amounts of oxidized P700 and of cyt can be derived (equation (2)); likely forms of both equations are sketched below.
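The inline equations were lost from this copy of the text. Based on the surrounding definitions, they presumably take the following forms (a reconstruction, not the published typography; the numerical check uses RT/F ≈ 25.7 mV):

```latex
% Equation (1): equilibrium between P700 and the cyt pool (assumed form)
K \;=\; \frac{[\mathrm{P700}]\,[\mathrm{cyt^{+}}]}{[\mathrm{P700^{+}}]\,[\mathrm{cyt}]}

% Equation (2): oxidized P700 fraction as a function of the oxidized cyt fraction c
\frac{[\mathrm{P700^{+}}]}{[\mathrm{P700}]+[\mathrm{P700^{+}}]}
  \;=\; \frac{c}{c + K\,(1-c)},
\qquad
c \;=\; \frac{[\mathrm{cyt^{+}}]}{[\mathrm{cyt}]+[\mathrm{cyt^{+}}]}

% Consistency check on K_th from the quoted midpoint potentials
K_{\mathrm{th}} \;=\; \exp\!\left(\frac{F\,(E_{\mathrm{P700}}-E_{\mathrm{cyt}\,c_6})}{RT}\right)
  \;=\; \exp\!\left(\frac{(420-349)\ \mathrm{mV}}{25.7\ \mathrm{mV}}\right) \;\approx\; 16
```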
Experiments were performed under light saturated conditions, in which photosynthesis is limited by electron flow itself rather than by other factors (that is, light harvesting by PSI and PSII). In these conditions, it is possible to quantify the rate of equilibration between the diffusion domains from measurements of photosynthetic electron flow.
Finally, the P700/cyt stoichiometry was calculated in intact cells exposed to a saturating single turnover laser flash. This flash generates 1 turnover per PSI, leading to oxidation of 1 cyt per PSI. The amount of oxidized cytochrome was quantified 300 μs after the flash (that is, when P700 is fully re-reduced by the cytochromes), and compared to the amount of cyt oxidized in continuous light in the presence of DCMU (40 μM). Because the flash (which generates one positive charge per PSI) oxidizes 33% of the total oxidizable cyt pool (per PSI), we conclude that the c-type cytochromes (cyt c6+cyt f)/PSI ratio is ∼3.
Chloroplast purification
An original protocol was developed to purify intact chloroplasts from P. tricornutum. Cells were collected by centrifugation at 5,000g, 10 min, 4 °C. The pellet was resuspended gently in 10 ml of isolation buffer (0.5 M Sorbitol; 50 mM Hepes-KOH; 6 mM EDTA; 5 mM MgCl2; 10 mM KCl; 1 mM MnCl2; 1% (w/v) Poly Vinyl Pyrrolidone 40 [K30]; 0.5% BSA; 0.1% cysteine, pH 7.2–7.5) and passed slowly through a French Press at 90 MPa. Ten millilitres of the isolation buffer were added to the mixture of broken cells on ice in the dark before centrifugation at 300g for 8 min to remove intact cells and cell debris. The supernatant was collected and subjected to centrifugation at 2,000g for 10 min at 4 °C. The pellet containing the chloroplasts was gently resuspended with a soft paint-brush in 2 ml of washing buffer (0.5 M Sorbitol; 30 mM Hepes-KOH; 6 mM EDTA; 5 mM MgCl2; 10 mM KCl; 1 mM MnCl2; 1% PVP 40 [K30]; 0.1% BSA, pH 7.2–7.5) and loaded on a discontinuous Percoll gradient (10, 20, 30%) in the same buffer. After centrifugation (SW41Ti rotor) at 10,000g for 35 min, the chloroplast fraction was recovered in the 20% Percoll layer of the gradient, diluted in the washing buffer (without BSA) and subjected to centrifugation at 14,000g for 10 min at 4 °C. Chloroplasts were resuspended in washing buffer and intactness was tested with a Clark electrode (Hansatech, UK) using sodium ferricyanide (1.5 mM) as an electron acceptor. Oxygen evolution in saturating light was measured before and after an osmotic shock (induced by incubation for 5 min in the washing buffer without sorbitol). The ratio between the two rates was used to evaluate intactness, which was approximately 70% in our case.
Membrane solubilization and immunoblot analysis
To differentially solubilize the two thylakoid compartments (core and peripheral), chloroplasts were incubated at a final chlorophyll concentration of 0.2 mg ml−1 for 10 min at 4 °C with digitonin (C56H92O29, Sigma Aldrich) at increasing final concentrations (0.1, 0.2, 0.5 and 1%). Samples were subjected to centrifugation at 100,000g for 5 min (rotor TLA-100), supernatants were collected and pellets were resuspended in the same volume of washing buffer without sorbitol. Samples (1.4 μg chlorophyll) were loaded onto 4–20% polyacrylamide SDS gels and blotted onto nitrocellulose membranes. PSI (antisera against the PsaA and PsaC subunits of photosystem I; Agrisera, Se, catalogue numbers AS06172 and AS10939, respectively), PSII (antisera against the PsbA and PsbC core subunits; Agrisera, Se, catalogue numbers AS05084 and AS111787, respectively) and cytochrome b6f (antiserum against PetA; Agrisera, Se, catalogue number AS06119) were detected by ECL using a CCD (charge-coupled device) imager (ChemiDoc MP Imaging, Bio-Rad, USA). Antibodies were used at a dilution of 1/10,000 (PsaA, PsaC, PsbA and PsbC) or 1/2,000 (PetA) (Supplementary Fig. 5).
Sample preparation for immunolocalization
Cells of P. tricornutum were fixed in a double-strength fixative (4% (w/v) formaldehyde, 0.4% (v/v) glutaraldehyde) in PHEM buffer (PIPES 60 mM, HEPES 25 mM, EGTA 10 mM, MgCl2 2 mM; pH 7.0) in an equal volume to the culture medium (ESAW), and then diluted into a standard strength fixative (2% (w/v) formaldehyde (EMS, USA) and 0.2% (v/v) glutaraldehyde (EMS, USA)). After 15 min, the fixative was replaced with fresh standard strength fixative and fixation proceeded for 30 min at 20 °C, under agitation. Cells were washed three times with 50 mM glycine in PHEM buffer and after centrifugation were embedded in 12% gelatin in PHEM. The gelatin-embedded blocks were cryo-protected in 2.3 M sucrose in rotating vials at 4 °C (16 h). Sample vitrification was achieved in liquid nitrogen following the plunge-freezing technique13. Thin sections (80 nm) were prepared at −110 °C with a diamond knife (Diatome, Switzerland). Ribbons were picked up with a drop of 1% (w/v) methylcellulose/1.15 M sucrose in PHEM buffer. Sections were thawed and transferred to Formvar carbon-coated nickel grids.
Immunolabelling was performed using an automated system (Leica microsystems EM IGL). Samples were post-fixed with 2% glutaraldehyde in PBS, pH 7.4, for 5 min and finally washed (three times in PBS, pH 7.4, for 2 min and six times with deionized water for 2 min). Six-nanometre gold-conjugated goat anti-rabbit secondary antibodies (Aurion, Wageningen, the Netherlands, catalogue number: 806.011, dilution 1/5) were used to detect PsaA and PetA. Goat anti-rabbit gold ultra-small 1.4 nm secondary antibodies (Aurion, Wageningen, the Netherlands, catalogue number 800.011, dilution 1/20) were used to detect PsbA, PsaC and PsbC, and sections were enhanced with silver (Aurion R-Gent SE-EM) for 25 min and again washed with deionized water (six times for 2 min). For observation, grids were incubated for 5 min on 2% uranyl acetate (pH 7.0) and transferred to a mixture of 1.6% methyl cellulose and 0.4% uranyl acetate on ice; the excess of the viscous solution was drained away and the grids were left to dry. Grids were imaged in a Tecnai 12 electron microscope (FEI, USA), using an Orius CCD camera (Gatan, USA). Primary antibodies were used at a dilution of 1/50. Gold particle counting for statistical analysis was done manually. First, the total number of labels (11,932) was assessed and then particles were attributed to various compartments. If gold particles were uncertainly located (3,995), they were not considered for further analysis.
Principal components analysis
Principal components analysis (PCA) reduces the dimensionality of a data set39 and detects possible groupings within it40. We performed PCA on our immunolabelling data considering four possible subcellular compartments for the antibodies against PSI, PSII and the cytochrome b6f complex: the internal (core) and external (peripheral) thylakoid membranes, as well as the pyrenoid and the envelope, to account for possible aspecific labelling. This led to a 4-dimension localization space (core, peripheral, pyrenoid and envelope) of 258 images from four independent cultures, where values represent the number of immunolabels in a given localization. For data analysis, we first normalized the localization space of each of the 258 images. To do so, for each localization, we calculated the number of gold particles for a given image minus the average number of particles in that localization (for example, for x(i,j), we obtained x1(i,j)=x(i,j)−mean(column j)). This value was then normalized by the s.d. in the same localization (for example, x1(i,j)/s.d.(column j) in the above considered case).
To represent the distribution of these normalized dimensional data for the 258 images, the direction (a four-dimensional vector) giving the largest possible variance of the distribution (that is, accounts for as much of the variability in the data as possible) was selected as the direction for the first principal component. Then, the direction (another four-dimensional vector) orthogonal to the previous one(s) giving the largest possible variance of the distribution was selected as the direction for the second principal component. The repetition of this procedure automatically selects vectors representing the scatter of the distribution from major ones to minor ones. Based on singular value decomposition, PCA is a principal axis rotation of the original variables that preserves the variation in the data. Therefore, the total variance of the original variables is equal to the total variance of the principal components. The principal component coefficients correspond to the percentage of explained variance. All statistical analysis was done with the R software41.
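A compact sketch of the normalization and PCA steps described above (the original analysis was done in R; this Python version with simulated counts is only meant to show the computation):

```python
import numpy as np

# One row per micrograph, one column per compartment
# (core, peripheral, pyrenoid, envelope); counts simulated for illustration.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=[8, 3, 1, 1], size=(258, 4)).astype(float)

# Column-wise z-score normalization, as described in the text.
z = (counts - counts.mean(axis=0)) / counts.std(axis=0)

# PCA by singular value decomposition of the normalized matrix.
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = u * s                               # coordinates of each image on the PCs
explained = s**2 / np.sum(s**2)              # fraction of variance per component
print("explained variance fractions:", np.round(explained, 3))
```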
Logistic regression
Logistic regression was used to describe the data and to explain the relationship between one dependent binary variable and one or more independent variables. The two major assumptions are: (i) that the outcome must be discrete, that is, the dependent variable should be dichotomous in nature and (ii) there should be no high intercorrelations (as demonstrated42, the assumption is met for values less than 0.9) among the predictors.
We use a dose–response relationship model where the predictors are the multiple continuous variables, that is, the numbers of immunolabels in the different localizations (core, peripheral, pyrenoid and envelope). Since probabilities have a limited range and regression models could predict off-scale values below zero or above 1, it makes more sense to model the probabilities of getting a given antibody on a transformed scale; this is what is done in logistic regression analysis43. A linear model for the transformed probabilities can be set up as logit(p) = log(p/(1−p)) = α0 + Σi αi xi, in which logit(p) is the log odds. Each xi is the number of gold beads in the localization i and statistics about the coefficients αi will provide insight about the impact of localization i on the probability of getting a given antibody. The analysis of deviance table and the Akaike information criterion allow the identification of the relevant predictors44.
The table of correlations shows that there are no strong intercorrelations between the variables (Supplementary Note 1). Starting from a complete model (Supplementary Note 1) and based on the P values of the variable coefficients (Pr(>|z|)), we see that we can recursively delete the two variables envelope (env.) and pyrenoid (pyr.) without significantly reducing the Akaike information criterion45, which is a common measure of the relative quality of a statistical model for a given set of data. The final model demonstrates that the relevant variables for predicting the antibody are the numbers of immunolabels in the core (P value = 2e−03) and peripheral (per.; P value < 8e−08) areas. A bootstrap procedure allows the evaluation of the average percentage of wrong predictions (18%, Supplementary Table 2).
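The variable-selection logic can be sketched as follows (simulated data standing in for the immunolabelling table; the column choice, coefficients and the statsmodels workflow are illustrative, not the authors' R code):

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-in for the labelling table: gold-bead counts per compartment
# (core, peripheral, pyrenoid, envelope) and a binary antibody identity.
rng = np.random.default_rng(1)
X = rng.poisson(lam=[6, 4, 1, 1], size=(258, 4)).astype(float)
logit_p = 0.4 * X[:, 0] - 0.5 * X[:, 1]        # only core and peripheral matter here
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

full = sm.Logit(y, sm.add_constant(X)).fit(disp=False)            # all four predictors
reduced = sm.Logit(y, sm.add_constant(X[:, :2])).fit(disp=False)  # core + peripheral only

print("AIC, full model        :", round(full.aic, 1))
print("AIC, core + peripheral :", round(reduced.aic, 1))
print("p-values, full model   :", np.round(full.pvalues, 4))
```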
FIB-SEM and 3D reconstruction
P. tricornutum cells were fixed in 0.1 M cacodylate buffer (Sigma-Aldrich), pH 7.4, containing 2.5% glutaraldehyde (TAAB), 2% formaldehyde (Polysciences) for 1 h at 20 °C and prepared according to a modified protocol from (https://ncmir.ucsd.edu/sbem-protocol). FIB tomography was performed with a Zeiss NVision 40 dual-beam microscope. In this technique, the Durcupan (Sigma-Aldrich) resin-embedded cells of P. tricornutum were cut in cross-section, slice by slice, with a Ga+ ion beam (of 700 nA at 30 kV). After a thin slice was removed with the ion beam, the newly exposed surface was imaged in SEM at 5 kV using the in-column EsB backscatter detector. For each slice, a thickness of 4 nm was removed, and the SEM images were recorded with a pixel size of 4 nm. The image stack was then registered by cross-correlation using the StackReg plugin in the Fiji software.
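The slice-to-slice alignment step can be illustrated with a bare-bones phase-correlation routine (a stand-in for the StackReg plugin, written from scratch so that no plugin-specific API is implied):

```python
import numpy as np

def registration_shift(ref, img):
    """Estimate the integer (dy, dx) translation to apply to `img` so that it
    overlays `ref`, using phase correlation in Fourier space."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:      # wrap large positive shifts to negative offsets
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def register_stack(stack):
    """Align every slice to the previous (already aligned) one."""
    aligned = [stack[0]]
    for img in stack[1:]:
        dy, dx = registration_shift(aligned[-1], img)
        aligned.append(np.roll(img, shift=(dy, dx), axis=(0, 1)))
    return np.stack(aligned)

# Tiny self-check: a slice shifted by (3, -2) needs a (-3, 2) roll to realign.
ref = np.zeros((64, 64)); ref[20:30, 20:30] = 1.0
shifted = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(registration_shift(ref, shifted))   # -> (-3, 2)
```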
For 3D reconstruction, a stack of 600 images was analysed with FIJI ImageJ software and projected in three dimensions (x, y, z axes) using the AVIZO (FEI, USA) and CHIMERA software packages (https://www.cgl.ucsf.edu/chimera/, UCSF, USA). Experiments were also performed at higher resolution (voxel size 2 nm, Supplementary movie 2). However, no significant improvement of the resolution was observed in these conditions. This likely stems from the fact that under these imaging conditions, the recorded backscatter signals emerge primarily from an area <5 nm across and <10 nm thick, setting an empirical lower limit for pixel size and slice thickness46. Moreover, the higher electron dose per surface unit might also enhance electron beam fluctuations during the acquisition and/or thermal damage to the sample, thus further limiting the imaging resolution.
The authors declare that all data supporting the findings of this study are available within the manuscript and its supplementary files or are available from the corresponding authors on request.
How to cite this article: Flori, S. et al. Plastid thylakoid architecture optimizes photosynthesis in diatoms. Nat. Commun. 8, 15885 doi: 10.1038/ncomms15885 (2017).
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The authors are grateful to Pierre Joliot (Institut de Biologie Physico Chimique, Paris, France), Arthur Grossman (The Carnegie Institution, Stanford, USA) and Chris Bowler (Ecole Normale Supérieure, Paris, France) for critically reading the manuscript. We thank the Marie Curie Initial Training Network Accliphot (FP7-PEOPLE-2012-ITN; 316427 to S.F., G.F.), the HFSP (HFSP0052 to G.F.), GRAL (ANR-10-Labx-49-01 to B.G., C.M., L.F.E., D.P., C.B., G.S.), the DRF impulsion FIB-Bio program (to P.-H.J., B.G., C.M., L.F.E., D.F., G.S., G.F.), the University of Konstanz (to A.S., C.R.B., P.G.K.) and a stipend from the Graduate School of Chemical Biology (KoRS-CB to A.S.). This work used platforms from ScopeM (ETH Zurich) and the Grenoble Instruct Centre (ISBG: UMS 3518 CNRS-CEA-UJF-EMBL) with support from FRISBI (ANR-10-INSB-05-02) within the Grenoble Partnership for Structural Biology (PSB). |
The Tritone Paradox was discovered by Deutsch in 1986, first reported at a meeting of the Acoustical Society of America (Deutsch, 1986)1, and first published in Deutsch, Music Perception (1986)2.
The basic pattern that produces this illusion consists of two computer-produced tones that are related by a half-octave. (This interval is called a tritone). When one tone of a pair is played, followed by the second, some people hear an ascending pattern. But other people, on listening to the identical pair of tones, hear a descending pattern instead. This experience can be particularly astonishing to a group of musicians who are all quite certain of their judgments, and yet disagree completely as to whether such a pair of tones is moving up or down in pitch.
The Tritone Paradox has another curious feature. In general, when a melody is played in one key, and it is then transposed to a different key, the perceived relations between the tones are unchanged. The notion that a melody might change shape when it is transposed from one key to another seems as paradoxical as the notion that a circle might turn into a square when it is shifted to a different position in space.
But the Tritone Paradox violates this rule. When one of these tone pairs is played (such as C followed by F#) a listener might hear a descending pattern. Yet when a different tone pair is played (such as G# followed by D), the same listener hears an ascending pattern instead. (Another listener might hear the C-F# pattern as ascending and hear the G#-D pattern as descending.) 1 - 3
The following sound demonstration presents six tritone pairs. When you listen to each pair of tones, decide whether it forms an ascending pattern or a descending one. This demonstration works best when you play the tones to a group of listeners. After presenting each tritone pair, ask the listeners for a show of hands (‘Do you hear this pattern as ascending?’ ‘Do you hear it as descending?’). You will most probably find that the listeners disagree amongst themselves as to which pair of tones is ascending, and which is descending in pitch. This is particularly surprising when the demonstration is played to a group of musicians who are all certain of their judgments.
Listen to six examples of Deutsch's Tritone Paradox
The tones that are employed to create the Tritone Paradox are so constructed that their note names (C, C#, D and so on) are clearly defined, but they are ambiguous with respect to which octave they are in. For example, one tone might clearly be a C, but in principle it could be middle C, or the C an octave above, or the C an octave below. This ambiguity is built into the tones themselves. So when someone is asked to judge, for example, whether the pair of tones D-G# is ascending or descending in pitch, there is literally no right or wrong answer. Whether the tones appear to move up or down in pitch depends entirely on the mind of the listener. (Ambiguous tones such as these were used by Roger Shepard and Jean-Claude Risset to create illusions of endlessly ascending or descending pitches.)
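As an illustration of how such octave-ambiguous tones can be built (a generic Shepard-style construction, not Deutsch's exact synthesis parameters; the envelope centre and width below are guesses):

```python
import numpy as np

SR = 44100
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def ambiguous_tone(pitch_class, duration=0.5, centre=523.25, sigma_oct=1.0):
    """Octave-ambiguous (Shepard-style) tone: octave-spaced sinusoids under a
    fixed bell-shaped spectral envelope, so the note name is well defined but
    the octave is not. Envelope centre/width are illustrative guesses."""
    t = np.arange(int(SR * duration)) / SR
    base = 261.63 * 2 ** (NOTE_NAMES.index(pitch_class) / 12)
    tone = np.zeros_like(t)
    for k in range(-4, 5):                     # partials spread over several octaves
        f = base * 2.0 ** k
        if 20.0 < f < SR / 2:
            w = np.exp(-(np.log2(f / centre)) ** 2 / (2 * sigma_oct ** 2))
            tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

def tritone_pair(first, second):
    """Two ambiguous tones a half-octave apart, separated by a short silence."""
    gap = np.zeros(int(0.25 * SR))
    return np.concatenate([ambiguous_tone(first), gap, ambiguous_tone(second)])

pair = tritone_pair('C', 'F#')   # ascending or descending? Listeners disagree.
# To listen: scipy.io.wavfile.write('tritone.wav', SR, (pair * 32767).astype(np.int16))
```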
The way that any one listener hears the Tritone Paradox depends on the names of the notes that are played. The musical scale is created by dividing the octave into twelve semitone steps, and each tone is given a name: C, C#, D, D#, E, F, F#, G, G#, A, A# and B. The entire scale, as it ascends in height, consists of the repeating occurrence of this succession of note names across octaves. So when you move up a piano keyboard in semitone steps beginning on C, you go first to C#, then D, then D#, and so on, until you get to A#, then B, and then C again. At this point you have reached an octave, and you begin all over, repeating the same series of note names in the next octave up the keyboard.
Because all Cs sound in a sense equivalent, as do all C#s, all Ds, and so on, we can think of pitch as varying both along a simple dimension of height and also along a circular dimension of pitch class - a term that musicians use to describe note names. So, for example, all Cs are in pitch class C, all C#s are in pitch class C#, and all Ds are in pitch class D.
Figure 1. The pitch class circle. This corresponds to the twelve pitch classes within the octave. In experiments on the Tritone Paradox, pairs of tones are played that are opposite each other along the circle, such as C-F#, or G#-D.
Let us suppose that listeners mentally arrange pitch classes as a circular map, like a clockface, as shown in Figure 1. To explain different listeners' perceptions of the Tritone Paradox, I conjectured that one person might orient his or her clockface so that C is in the 12 o'clock position, C# is in the 1 o'clock position, and so on around the circle. This listener would tend to hear the pattern C-F# (as well as B-F, and C#-G) as descending, and the pattern F#-C (as well as F-B and G-C#) as ascending. But another person might orient his or her clockface so that F# is in the 12 o'clock position, G is in the 1 o'clock position, and so on. This listener would instead tend to hear the pattern C-F# (as well as B-F, and C#-G) as ascending, and the pattern F#-C (as well as F-B and G-C#) as descending. In other words, differences between listeners in perception of the Tritone Paradox could be due to differences in the way they orient their maps of the pitch class circle.3
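One simple way to encode this conjecture (a toy model, not the scoring procedure used in the experiments) is to give each pitch class a height that falls off with angular distance from the listener's peak of the circle, and compare the heights of the two tones:

```python
import numpy as np

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def perceived_height(pitch_class, peak):
    """Toy encoding of a listener's circle orientation: classes near the
    listener's peak are heard as higher, classes opposite it as lower."""
    step = NOTE_NAMES.index(pitch_class) - NOTE_NAMES.index(peak)
    return np.cos(2 * np.pi * step / 12)

def judge(first, second, peak):
    """Predicted judgment for a tritone pair under that orientation."""
    return ('descending' if perceived_height(first, peak) > perceived_height(second, peak)
            else 'ascending')

print(judge('C', 'F#', peak='C'))    # descending for a listener with C at the top
print(judge('C', 'F#', peak='F#'))   # ascending for a listener with F# at the top
```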
In one experiment, I played many such pairs of tones to a group of subjects, and they judged in each case whether they heard an ascending or a descending pattern. I then plotted the proportion of times that each subject heard a descending pattern, as a function of the pitch class of the first tone of the pair. The results supported my conjecture - the judgments of most subjects varied systematically depending on the positions of the tones along the pitch class circle: Tones in one region of the circle tended to be heard as higher, and tones in the opposite region as lower.
Figure 2. Perception of the Tritone Paradox by a subject who perceived the illusion in a pronounced fashion. The upper figure shows the orientation of the pitch class circle with respect to height, derived from the judgments of the subject shown in the graph. This subject’s peak pitch classes were G# and A.
In addition, the orientation of the pitch class circle varied strikingly from one subject to another. To illustrate these differences, the judgments of two subjects are shown in Figures 2 and 3. Both these subjects heard the Tritone Paradox in a very pronounced fashion, but quite differently from each other. The subject whose judgments are shown in Figure 2 heard tone pairs C#-G, D-G#, D#-A and E-A# as ascending, but F#-C, G-C#, G#-D, A-D#, A#-E, and B-F as descending. In contrast, the subject whose judgments are shown in Figure 3 heard tone pairs B-F, C-F#, C#-G, D-G#, D#-A, and E-A# as descending, and F#-C, G-C#, G#-D, and A-D# as ascending. So for the most part when the first subject heard an ascending pattern the second subject heard a descending one, and vice versa. The upper parts of the figures show the two orientations of the pitch class circle with respect to height which were derived from the judgments of these subjects. For the first subject, pitch classes G# and A stood at the top of the circle, but for the second subject, C# and D stood in this position instead. To further illustrate the differences between listeners in perception of the Tritone Paradox, the judgments of four more subjects are shown in Figure 4.
Figure 3. Perception of the Tritone Paradox by another subject who perceived it in a pronounced fashion, but quite differently from the first subject. The upper figure shows the orientation of the pitch class circle with respect to height, derived from the judgments of the subject shown in the graph. This subject’s peak pitch classes were C# and D.
Another surprising consequence of the Tritone Paradox concerns absolute pitch - the ability to name a note in the absence of a reference note. This ability is generally considered to be very rare. But the Tritone Paradox shows that the large majority of people possess an implicit form of absolute pitch, since on listening to this pattern they hear tones as higher or as lower depending simply on their pitch classes, or note names.
Figure 4. Perception of the Tritone Paradox by four more subjects.
Why do people orient their maps of the pitch class circle in different ways? I conjectured that the answer might lie in the speech patterns that we hear. When people from other countries visited my laboratory in California, they often heard this pattern differently from native Californians. And when I demonstrated the effect to audiences in other countries, they appeared to differ from one country to another in what they heard.
So on the basis of these observations, I compared two groups of subjects. One group had grown up in California, and the other group had grown up in the south of England. As shown in Figure 5, these two groups differed strikingly in how they heard the Tritone Paradox: Frequently when a Californian subject heard a pattern as ascending, a subject from the south of England heard the identical pattern as descending, and vice versa4.
In another study, my colleagues and I found a significant correspondence between the pitch range of a person's speaking voice and how he or she perceived this pattern. This study provided a further indication that speech patterns influence the way the Tritone Paradox is heard5. Further, in an experiment described in the page entitled The pitch of speech in two Chinese villages my colleagues and I found that the pitch ranges of speech clustered within a linguistic community, but differed across communities. This is in line with the conjecture that the individual develops a pitch class template that is derived from the pitch ranges of speech to which he or she is most frequently exposed. This template then influences the pitch range of his or her own speech, and also influences how he or she hears the Tritone Paradox. The 2013 book chapter 6 provides further information and discussion of this effect.
Figure 5. Distributions of peak pitch classes in two groups of subjects. One group had grown up in the south of England, and the other group had grown up in California. The two groups heard the Tritone Paradox in strikingly different ways.
Other studies have uncovered regional differences within the U.S. and Canada in the perception of the Tritone Paradox6. Because there are regional dialects within the U.S., it seems that speech patterns are likely to lie at the root of these differences also. It even appears that the way a person hears the Tritone Paradox is related, not only to the geographical region in which he or she had grown up, but also to the regions in which his or her parents had grown up. In one study we found that, among subjects who had grown up in the area of Youngstown, Ohio, the perceptions of those whose parents had also grown up in Youngstown differed significantly from those whose parents had grown up elsewhere in the U.S.7. In a further study, I found a significant correlation between the way children and their mothers heard the Tritone Paradox 8,9. This correlation was obtained even though the children had all been born and raised in California, whereas their mothers had grown up in many different geographical regions, both inside and outside the U.S.
Another study examined what happens when an individual had been exposed to one language in infancy and later acquired a different language. My colleagues and I tested subjects who had been born in Vietnam and now reside in California. One group had arrived in the U.S. as adults and spoke perfect Vietnamese but little English. The second group had arrived in the U.S. as infants or young children; they spoke perfect English, but most of them were not fluent speakers of Vietnamese. Figure 6 shows the distributions of peak pitch classes in the two Vietnamese groups combined, together with those of the subjects who had been born and raised in California and spoke only English. The two Vietnamese groups did not differ statistically in how they heard the Tritone Paradox, but both differed statistically from the native speakers of Californian English.10 This leads to the conclusion that the speech to which we were exposed as children influences the way we hear the Tritone Paradox as adults.
Figure 6. Distributions of peak pitch classes among subjects who had been born in Vietnam and whose first language was Vietnamese, and among subjects who were native speakers of Californian English.
References 11-15 provide further information about the Tritone Paradox. The sound patterns for a full experiment on the illusion, together with instructions as to how to score the answers, are published in the compact disc 'Musical Illusions and Paradoxes.'
This strange illusion has implications for the relationship between speech and music. Philosophers and composers have argued for centuries that strong linkages must exist between these two forms of communication. Indeed, many composers, in their search for expressivity, have incorporated into their music features that are characteristic of spoken language. The Tritone Paradox shows that the speech patterns to which we have been exposed can indeed influence how music is perceived.
1. Deutsch, D. An auditory paradox. Journal of the Acoustical Society of America, 1986, 80, s93.
2. Deutsch, D. A musical paradox. Music Perception, 1986, 3, 275-280.
3. Deutsch, D. Paradoxes of musical pitch. Scientific American, 1992, 267, 88-95.
4. Deutsch, D. The tritone paradox: An influence of language on music perception. Music Perception, 1991, 8, 335-347.
5. Deutsch, D., North, T. and Ray, L. The tritone paradox: Correlate with the listener's vocal range for speech. Music Perception, 1990, 7, 371-384.
6. Deutsch, D. The processing of pitch combinations. In D. Deutsch (Ed.), The psychology of music, 3rd Edition, 2013, 249-325, San Diego: Elsevier.
7. Ragozzine, F. and Deutsch, D. A regional difference in perception of the tritone paradox within the United States. Music Perception, 1994, 12, 213-225.
8. Deutsch, D. Mothers and their children hear a musical illusion in strikingly similar ways. Invited lay language paper presented at the 131st meeting of the Acoustical Society of America, 1996, May, Indianapolis.
9. Deutsch, D. Mothers and their offspring perceive the tritone paradox in closely similar ways. Archives of Acoustics, 2007, 32, 3-14.
10. Deutsch, D., Henthorn, T. and Dolson, M. Speech patterns heard early in life influence later perception of the tritone paradox. Music Perception, 2004, 21, 357-372.
11. Deutsch, D. The tritone paradox: A link between music and speech. Current Directions in Psychological Science, 1997, 6, 174-180.
12. Deutsch, D. The tritone paradox: Some further geographical correlates. Music Perception, 1994, 12, 125-136.
13. Deutsch, D. Some new pitch paradoxes and their implications. In Auditory Processing of Complex Sounds. Philosophical Transactions of the Royal Society, Series B, 1992, 336, 391-397.
14. Deutsch, D., Kuyper, W. L. and Fisher, Y. The tritone paradox: Its presence and form of distribution in a general population. Music Perception, 1987, 5, 79-92.
15. Deutsch, D. The tritone paradox: Effects of spectral variables. Perception & Psychophysics, 1987, 41, 563-575.
Reedbeds are a type of wetland or swamp habitat that are dominated by stands of the common reed Phragmites communis, with the water table at or above ground level for the majority of the year.
- Site: Freshwater, brackish or tidal waters
- Main species: Common reed
- UK: 5000 ha
- NI: 3,228 ha
- Reedbeds form on the margins of water bodies, along lowland and upland streams, estuaries, reservoirs, clay pits, sewage treatment works, industrial lagoons & as successional habitats on fens & bogs (NIEA, 2005).
- Reedbeds may be defined as species-poor stands of herbaceous vegetation, dominated by reeds, other large grasses or tussock-forming sedges, typically dominated by one or a few species (NCC, 1990; Fossit, 2000; NIEA, 2005).
- Reedbeds in Northern Ireland correspond to the NVC plant community: S4 Phragmites australis swamp & reedbeds.
- Reedbeds tend to occur as discrete stands but may also be a component of a mosaic of other habitat types such as lakes, fen, wet woodland & coastal & floodplain grazing marsh.
- To be classified as a reedbed priority habitat, the following criteria must be fulfilled (a minimal check is sketched after this list):
– tall herbaceous wetland vegetation with >30% cover of Phragmites
– reedbed area >0.5 ha
– reedbed width over the whole area of at least 5 metres (NIEA, 2005)
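A minimal sketch of how these criteria could be checked for a candidate site is shown below; the function name and inputs are illustrative only, and the thresholds are taken directly from the list above.

```python
def is_reedbed_priority_habitat(phragmites_cover_pct: float,
                                area_ha: float,
                                min_width_m: float) -> bool:
    """Apply the three NIEA (2005) criteria listed above. The numeric thresholds
    come from the text; the function itself is only an illustration."""
    return (phragmites_cover_pct > 30      # tall herbaceous vegetation with >30% Phragmites cover
            and area_ha > 0.5              # reedbed area greater than 0.5 ha
            and min_width_m >= 5)          # at least 5 m wide over the whole area

print(is_reedbed_priority_habitat(45, 1.2, 8))   # True
print(is_reedbed_priority_habitat(25, 1.2, 8))   # False: Phragmites cover too low
```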
Current Status in UK & Northern Ireland
- The area of the reedbed habitat was estimated at 5000 ha (UK Biodiversity Steering Group, 1998), but in 2002 the resource in England & Wales was estimated at 12,400 ha, with an estimated additional 1,138 ha in Scotland (UK Biodiversity Action Plan online report, 2002; NI Habitat Action Plan, 2005).
- The NICS 2000 estimated that there are 3,228 ha (0.2% of land area) of reedbeds in Northern Ireland. This figure, however, may include stands of less than 0.5 ha in area or less than 5 m in width. Studies are required to assess the true extent of this habitat.
- Reedbeds in NI tend to be unmanaged but conservation management could increase their biodiversity.
- The NICS 2000 estimated that there are 3 ha of Reedbeds in the Antrim Coast & Glens AONB & Sperrins AONB, compared to 67 ha in Binevenagh AONB.
- In NI reedbeds tend to be associated with lowland wetlands around large lakes & inter-drumlin wetlands. For example, Portmore Lough & Blackers Rock (Lough Neagh) have several large stands (>10 ha) around them. The Bann Estuary has documented Reedbeds. Upper Lough Erne also has extensive reedbeds. An estimated 40 sites in Armagh & Down contain stands greater than 2 ha (Shaw et al., 1996). Larne Lough contains a Phragmites australis reedbed.
- Reedbeds are threatened by drainage, which can result in desiccation and invasion by scrub & drier vegetation. They are also threatened by fly tipping (building rubble, agricultural & domestic waste), industrial & urban development, eutrophication, acidification, nitrogen enrichment & climate change.
- Bann Estuary
- ECOS (stands of <1 ha)
- Larne Lough |
We haven't had much snow for the first part of the winter, but in case you're wondering how snowflakes are formed, here you go.
A snowflake is formed when a water droplet freezes onto some particle in the atmosphere. That particle can be anything from pollen to dust in the clouds. From this an ice crystal is created. As the ice crystal falls to the ground, more water vapor freezes onto the primary crystal and the snowflake will grow in size. This is how the six arms of the snowflake are formed.
It's often said that no two snowflakes are the same. This is because each snowflake follows a slightly different path on its journey through the atmosphere and finally to the ground. Along that path, a number of different factors come into play. Temperature is the biggest factor in the crystal's formation, although humidity also affects the final look of the snowflake.
For example, when a snowflake forms at temperatures in the middle 20s (Fahrenheit), the crystal will look long and needle-like. However, flakes that form at temperatures in the single digits (F) will tend to appear flat and plate-like.
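Purely as an illustration of the simplified rule described above, the two temperature bands can be written as a tiny lookup; the exact cutoffs are assumptions, and humidity is ignored.

```python
def crystal_habit(temp_f: float) -> str:
    """Very simplified mapping from formation temperature (deg F) to crystal shape,
    following the two examples in the text. Real snowflake growth also depends
    strongly on humidity (supersaturation)."""
    if 23 <= temp_f <= 27:          # "middle 20s" -> long, needle-like crystals
        return "needle-like"
    elif 0 <= temp_f <= 9:          # "single digits" -> flat, plate-like crystals
        return "plate-like"
    else:
        return "other (depends on temperature and humidity)"

print(crystal_habit(25))  # needle-like
print(crystal_habit(5))   # plate-like
```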
In Howard Gardner’s Frames of Mind, he proposes that there are seven main areas in which all people have special skills; he calls them intelligences. His research at Harvard University was in response to the work that Alfred Binet had done in France around 1900. Binet’s work led to the formation of an intelligence test; we are all familiar with the “intelligence quotient,” or “IQ,” the way that intelligence is measured on his test.
This type of IQ test was used as the basis of another one with which most of us are familiar: the Scholastic Aptitude Test (SAT), which is taken by most college-bound high school students.
Both of these tests look predominantly at two types of intelligences: verbal and math. If a person does well on these, s/he is considered “intelligent,” and is a candidate for one of the better colleges or universities. But what about everyone else? How many of you who are reading these words have used the phrase “not good at taking tests,” when talking either about yourself or your child?
The Multiple Intelligences (MI) theory proposes that there are other measures of intelligence beside these two. I offer this information to you so that you can understand that while many teachers have some knowledge of MI theory, most of our schools are not fully set up to use it to the advantage of all students.
That being the case, perhaps you can either (1) become involved in helping your child’s teachers and school provide a more balanced program that develops the intelligences not already emphasized in the curriculum, or (2) find activities outside of the school environment in which your child can develop his dominant areas of intelligence.
You should also know that MI theory posits that each of us has, to some degree or another, all of these intelligences. Some of them are simply more developed than others. Furthermore, we are all able to improve our ability in each of these areas.
Howard Gardner stresses that the intelligences are equal in their importance. In alphabetical order, they are:
Bodily-kinesthetic: using one’s body to solve problems and express ideas and feelings. Actors, athletes, and dancers use their whole bodies in this way, much the same way that craftspeople, sculptors, and mechanics use their hands.
These questions can determine if an adult has a strength in Bodily-Kinesthetic Intelligence:
- Do you regularly participate in a sport or some physical activity?
- Is it difficult to sit still for long periods of time?
- Do you enjoy working with your hands in creating things?
- Do you find that ideas and solutions to problems come to you while you are exercising or doing some sort of physical activity?
- Do you enjoy spending your free time outdoors?
- Do you speak with your hands or other body gestures?
- Do you learn more about things by touching them?
- Do you enjoy thrilling amusement park rides such as the roller coaster and other activities like this?
- Do you think of yourself as being well-coordinated?
- In order to learn a new skill, do you have to practice it to learn it, rather than read about it or see it in a video?
These are some questions to determine if children may be exhibiting a well-developing Bodily-Kinesthetic Intelligence. Does your child:
- excel in more than one sport?
- move various body parts when required to sit still for long periods of time?
- have the ability to mimic others’ body movements?
- enjoy taking things apart and putting them back together?
- have a hard time keeping hands off objects?
- enjoy running, jumping, or other physical activities?
- show skill in activities that require fine-motor coordination, such as origami, making paper airplanes, building models, finger-painting, clay, or knitting?
- use his body well to express himself?
Interpersonal: perceiving the moods, feelings, and needs of others. Exemplars include salespeople, teachers, counselors, and those in what we have come to call the helping professions.
These questions can determine if an adult has a strength in Interpersonal Intelligence:
- Have people always come to you for advice?
- Have you always preferred group sports to solo sports?
- Do you usually prefer talking to other people about a problem, rather than figuring it out on your own?
- Do you have at least three close friends?
- Do you prefer social activities over individual pursuits?
- Do you enjoy teaching others what you can do well?
- Are you considered to be a leader, either by yourself or others?
- Do you feel comfortable in a crowd?
- Do you prefer to spend your time with others than alone?
These are some questions to determine if children may be exhibiting a well-developing Interpersonal Intelligence. Does your child:
- enjoy socializing with friends?
- seem to be a natural leader?
- empathize easily with others, which leads him to give advice to friends who come to him with problems?
- seem to be street-smart?
- enjoy belonging to organizations?
- enjoy teaching other kids – either peers or younger ones?
- have two or more close friends?
- serve as a magnet for social activities with others?
Intrapersonal: turning inward with a well-developed self-knowledge and using it successfully to navigate oneself through the world.
These questions can determine if an adult has a strength in Intrapersonal Intelligence:
- Do you regularly spend time alone meditating, reflecting, or thinking about important life questions?
- Have you attended counseling sessions or personal growth seminars to learn more about yourself?
- Do you have a hobby or interest that you keep to yourself?
- Have you set goals for yourself regularly?
- Do you have a realistic view of your strengths and weaknesses?
- Would you prefer spending time by yourself rather than with many people around you?
- Do you keep a diary or journal to record the events of your inner life?
- Are you either self-employed or have you given serious consideration to starting your own business?
These are some questions to determine if children may be exhibiting a well-developing Intrapersonal Intelligence. Does your child:
- show a sense of independence or a strong will?
- have a realistic sense of her abilities and weaknesses?
- do well when left alone to play or study?
- “march to the beat of a different drummer” in living and learning?
- have a hobby or interest she doesn’t talk about much?
- have a good sense of self-direction?
- prefer working alone to working with others?
- accurately express how he is feeling?
- learn from failures and successes?
- have good self-esteem?
Linguistic: using words, either orally or written, in an effective manner. This intelligence is associated with storytellers, politicians, comedians, and writers.
These questions can determine if an adult has a strength in Linguistic Intelligence:
- Have you always enjoyed books and given them importance?
- Do you hear words in your head before you speak or write them?
- Do you enjoy talk shows more than television or movies?
- Do you enjoy word games, puns, rhymes, tongue-twisters, and poetry?
- Do you have a highly developed vocabulary and enjoy knowing words that other people do not know?
- In your own education, did you enjoy subjects related to words and ideas, such as English and social studies, more than math and science?
- Have you enjoyed learning to read or speak other languages?
- In your speech, do you refer to information that you have read or heard about?
- Have you been praised, recognized, or paid for your writing?
These are some questions to determine if children may be exhibiting a well-developing Linguistic Intelligence. Does your child:
- write better than average for her age?
- enjoy telling stories and jokes?
- have a good memory for names, places, dates, and other information?
- enjoy word games, either visually or aurally?
- enjoy reading books?
- spell better than other children the same age?
- appreciate rhymes, puns, tongue twisters?
- enjoy books on tape without needing to see the book itself?
- enjoy hearing stories without seeing the book?
- have an excellent vocabulary for his age?
- communicate thoughts, feelings, and ideas well?
Logical-Mathematical: understanding and using numbers effectively, as well as having good powers to reason well. Exemplars are mathematicians, scientists, computer programmers, and accountants.
These questions can determine if an adult has a strength in Logical-Mathematical Intelligence:
- Have you always done math in your head easily?
- When you were in school, were math and/or science your best subjects?
- Do you enjoy playing games that require logical thinking?
- Do you set up “what if” experiments in the course of doing jobs around the house or at work?
- Do you look for logical sequences and patterns, with the belief that almost everything has a logical explanation?
- Do you read science periodicals or keep track of the latest scientific developments?
- Do you like finding logical flaws in things that people say and do?
- Do you feel the need to have things measured, categorized, analyzed, or quantified in some way?
- Do you think in clear, abstract, wordless, imageless concepts?
These are some questions to determine if children may be exhibiting a well-developing Logical-Mathematical Intelligence. Does your child:
- demonstrate curiosity about how things work?
- have fun with numbers?
- enjoy math at school?
- enjoy math and/or computer games?
- play and enjoy strategy games such as chess and checkers, brain teasers, or logic puzzles?
- easily put things into categories?
- like to do experiments, either at school when assigned or on her own?
- show an interest in visiting natural history or discovery-type museums and exhibits?
Musical: relating in a wide range of ways to music. This can take many forms, as a performer, composer, critic, and music-lover.
These questions can determine if an adult has a highly developed Musical Intelligence:
- Do you have a pleasant singing voice?
- Can you tell when a musician plays a note off-key?
- Do you frequently listen to music?
- Do you play a musical instrument?
- Was it easy for you to learn to play a musical instrument?
- Do you think your life would not be as rewarding without music?
- Do you usually have music going through your mind?
- Can you keep time to music?
- Do you know the tunes to many different songs or musical selections?
- Can you usually sing back a melody accurately after you hear a new selection only once or twice?
These are some questions to determine if children may be exhibiting a well-developing Musical Intelligence. Does your child:
- tell you when she recognizes that music is off-key?
- easily remember song melodies and sing them?
- have a pleasant singing voice, either alone or in a chorus?
- play a musical instrument?
- speak or move in a rhythmical way?
- hum or whistle to himself?
- tap on the tabletop or desktop while working?
- show sensitivity to noises in the environment?
- respond emotionally to music she hears?
Naturalist Intelligence: excellent at recognizing and classifying both the animal and plant kingdoms, as well as showing understanding of natural phenomena.
These questions can determine if an adult has a strength in Naturalist Intelligence:
- Do you like to spend time in nature?
- Do you belong to a volunteer group related to nature?
- Do you enjoy having animals around the house?
- Are you involved in a hobby that involves nature, such as bird watching?
- Can you easily tell the differences among species of flora and fauna?
- Do you read books or magazines, or watch television shows or movies that feature nature?
- On vacation, do you prefer natural settings to cultural attractions?
- Do you enjoy visiting zoos, aquariums, or other places where the natural world is studied?
- Do you enjoy working in your garden?
These are some questions to determine if children may be exhibiting a well-developing Naturalist Intelligence. Does your child:
- talk about favorite pets or preferred natural spots?
- enjoy nature preserves, the zoo, or natural history museum?
- show sensitivity to natural formations? (Note that in urban environments, this type of “formation” can include cultural icons.)
- like to play in water?
- hang around the pet in school or at home?
- enjoy studying environment, nature, plants, and animals?
- speak out about animal rights and earth preservation?
- collect bugs, flowers, leaves, or other natural things to show to others?
Spatial: perceiving the visual-spatial world in an accurate way, so as to be able to work in it effectively. The people who do this cover a wide range of fields that, upon first glance, do not seem to have much in common. Compare, for example, hunters, sailors, engineers, inventors, and surgeons to interior decorators, architects, painters, and sculptors.
These questions can determine if an adult has a strength in Spatial Intelligence:
- Have you always been able to reproduce clear images in your mind, even when your eyes are closed or the objects are not in front of you?
- Are you sensitive to color?
- Do you take a lot of photographs or home movies?
- Do you enjoy jigsaw and other visual puzzles?
- Do you have vivid dreams?
- Do you usually have an easy time getting around, even if it’s your first time in a new place?
- Do you enjoy drawing or doodling?
- Was geometry easier for you than algebra?
- Do you have an easy time reading maps and translating their information into reality?
- Do you enjoy books and magazines with many illustrations, photos, and design elements?
These are some questions to determine if children may be exhibiting a well-developing Spatial Intelligence. Does your child:
- recall visual details in objects?
- have an easy time learning to read and understand maps and charts in books?
- daydream a lot?
- enjoy the visual arts?
- demonstrate ability in using art materials and creating drawings, sculptures, or other three-dimensional objects?
- enjoy visual presentations such as videos, television, and movies?
- get a lot of information from illustrations in books she reads?
- scribble, doodle, or draw on all available surfaces?
The Naturalist intelligence, described above as the ability to recognize plant and animal species in the environment, was not part of Gardner’s original seven; it does not appear in Frames of Mind, but Gardner added it after this original research.
Howard Gardner’s books on this topic are Frames of Mind, Multiple Intelligences: The Theory in Practice, and Multiple Intelligences: New Horizons in Theory and Practice.
In addition, Thomas Armstrong continues the work in his Multiple Intelligences in the Classroom. To get a sense of your child’s areas of strength, go to www.familyeducation.com, where you can find a page entitled Test Your Child’s Talents, which is based on Armstrong’s book.
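As a rough illustration of how a checklist-style self-assessment like the ones above might be tallied, here is a minimal sketch. The yes/no answers are invented, and simply counting “yes” responses is an assumption for illustration, not Armstrong’s or Gardner’s actual scoring method.

```python
# Minimal sketch: tally "yes" answers per intelligence and report the strongest areas.
# The answers below are made-up examples; a real assessment would use the full
# question lists given in the article.
answers = {
    "Bodily-Kinesthetic":   [True, False, True, True],
    "Interpersonal":        [True, True, False, False],
    "Intrapersonal":        [False, True, True, True],
    "Linguistic":           [True, True, True, False],
    "Logical-Mathematical": [False, False, True, False],
    "Musical":              [True, False, False, False],
    "Naturalist":           [True, True, False, True],
    "Spatial":              [False, True, True, True],
}

scores = {name: sum(yes_answers) for name, yes_answers in answers.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score} 'yes' answers")
```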
This article has been incorporated and expanded in Teach Your Children Well: A Teacher’s Advice for Parents
This article is reprinted with the author’s permission. |
by TeachThought Staff
The best lessons, books, and materials in the world won’t get students excited about learning and willing to work hard if they’re not motivated.
Motivation, both intrinsic and extrinsic, is a key factor in the success of students at all stages of their education, and teachers can play a pivotal role in providing and encouraging that motivation in their students. Of course that’s much easier said than done, as all students are motivated differently and it takes time and a lot of effort to learn to get a classroom full of kids enthusiastic about learning, working hard, and pushing themselves to excel.
Even the most well-intentioned and educated teachers sometimes lack the skills to keep kids on track, so whether you’re a new teacher or an experienced one, try using these methods to motivate your students and to encourage them to live up to their true potential.
21 Simple Ideas To Improve Student Motivation
1. Give students a sense of control.
While guidance from a teacher is important to keeping kids on task and motivated, allowing students to have some choice and control over what happens in the classroom is actually one of the best ways to keep them engaged. For example, allowing students to choose the type of assignment they do or which problems to work on can give them a sense of control that may just motivate them to do more.
2. Define the objectives.
It can be very frustrating for students to complete an assignment or even to behave in class if there aren’t clearly defined objectives. Students want and need to know what is expected of them in order to stay motivated to work. At the beginning of the year, lay out clear objectives, rules, and expectations of students so that there is no confusion and students have goals to work towards.
3. Create a threat-free environment.
While students do need to understand that there are consequences to their actions, far more motivating for students than threats are positive reinforcements. When teachers create a safe, supportive environment for students, affirming their belief in a student’s abilities rather than laying out the consequences of not doing things, students are much more likely to get and stay motivated to do their work. At the end of the day, students will fulfill the expectations that the adults around them communicate, so focus on can, not can’t.
4. Change your scenery.
A classroom is a great place for learning, but sitting at a desk day in and day out can make school start to seem a bit dull for some students. To renew interest in the subject matter or just in learning in general, give your students a chance to get out of the classroom. Take field trips, bring in speakers, or even just head to the library for some research. The brain loves novelty and a new setting can be just what some students need to stay motivated to learn.
5. Vary your teaching methods.
Not all students will respond to lessons in the same way. For some, hands-on experiences may be the best. Others may love to read books quietly or to work in groups. In order to keep all students motivated, mix up your lessons so that students with different preferences will each get time focused on the things they like best. Doing so will help students stay engaged and pay attention.
6. Use positive competition.
Competition in the classroom isn’t always a bad thing, and in some cases can motivate students to try harder and work to excel. Work to foster a friendly spirit of competition in your classroom, perhaps through group games related to the material or other opportunities for students to show off their knowledge.
7. Offer rewards.
Everyone likes getting rewards, and offering your students the chance to earn them is an excellent source of motivation. Things like pizza parties, watching movies, or even something as simple as a sticker on a paper can make students work harder and really aim to achieve. Consider the personalities and needs of your students to determine appropriate rewards for your class.
8. Give students responsibility.
Assigning students classroom jobs is a great way to build a community and to give students a sense of motivation. Most students will see classroom jobs as a privilege rather than a burden and will work hard to ensure that they, and other students, are meeting expectations. It can also be useful to allow students to take turns leading activities or helping out so that each feels important and valued.
9. Allow students to work together.
While not all students will jump at the chance to work in groups, many will find it fun to try to solve problems, do experiments, and work on projects with other students. The social interaction can get them excited about things in the classroom and students can motivate one another to reach a goal. Teachers need to ensure that groups are balanced and fair, however, so that some students aren’t doing more work than others.
10. Give praise when earned.
There is no other form of motivation that works quite as well as encouragement. Even as adults we crave recognition and praise, and students at any age are no exception. Teachers can give students a bounty of motivation by rewarding success publicly, giving praise for a job well done, and sharing exemplary work.
11. Encourage self-reflection.
Most kids want to succeed; they just need help figuring out what they need to do in order to get there. One way to motivate your students is to get them to take a hard look at themselves and determine their own strengths and weaknesses. Students are often much more motivated by creating these kinds of critiques of themselves than by having a teacher do it for them, as it makes them feel in charge of creating their own objectives and goals.
12. Be excited.
One of the best ways to get your students motivated is to share your enthusiasm. When you’re excited about teaching, they’ll be much more excited about learning. It’s that simple.
13. Know your students.
Getting to know your students is about more than just memorizing their names. Students need to know that their teacher has a genuine interest in them and cares about them and their success. When students feel appreciated it creates a safe learning environment and motivates them to work harder, as they want to get praise and good feedback from someone they feel knows and respects them as individuals.
14. Harness student interests.
Knowing your students also has some other benefits, namely that it allows you to relate classroom material to things that students are interested in or have experienced. Teachers can use these interests to make things more interesting and relatable to students, keeping students motivated for longer.
15. Help students find intrinsic motivation.
It can be great to help students get motivated, but at the end of the day they need to be able to generate their own motivation. Helping students find their own personal reasons for doing class work and working hard, whether because they find material interesting, want to go to college, or just love to learn, is one of the most powerful gifts you can give them.
16. Manage student anxiety.
Some students find the prospect of not doing well so anxiety-inducing that it becomes a self-fulfilling prophecy. For these students, teachers may find that they are most motivated by learning that struggling with a subject isn’t the end of the world. Offer support no matter what the end result is and ensure that students don’t feel so overwhelmed by expectations that they just give up.
17. Make goals high but attainable.
If you’re not pushing your students to do more than the bare minimum, most won’t seek to push themselves on their own. Students like to be challenged and will work to achieve high expectations so long as they believe those goals to be within their reach, so don’t be afraid to push students to get more out of them.
18. Give feedback and offer chances to improve.
Students who struggle with class work can sometimes feel frustrated and get down on themselves, draining motivation. In these situations it’s critical that teachers help students to learn exactly where they went wrong and how they can improve next time. Figuring out a method to get where students want to be can also help them to stay motivated to work hard.
19. Track progress.
It can be hard for students to see just how far they’ve come, especially with subjects that are difficult for them. Tracking can come in handy in the classroom, not only for teachers but also for students. Teachers can use this as a way to motivate students, allowing them to see visually just how much they are learning and improving as the year goes on.
20. Make things fun.
Not all class work needs to be a game or a good time, but students who see school as a place where they can have fun will be more motivated to pay attention and do the work that’s required of them than those who regard it as a chore. Adding fun activities into your school day can help students who struggle to stay engaged and make the classroom a much more friendly place for all students.
21. Provide opportunities for success.
Students, even the best ones, can become frustrated and demotivated when they feel like they’re struggling or not getting the recognition that other students are. Make sure that all students get a chance to play to their strengths and feel included and valued. It can make a world of difference in their motivation.
This is a cross-post from onlinecollegecourses.com |
Planting your Acorns
Plant your acorns as soon as possible! They have been refrigerated since October and will begin to grow as soon as they encounter warmer temperatures and moist soil.
Students will be able to:
- Describe the steps in planting an acorn.
- Assume responsibility for the care of an acorn.
Time: 30 minutes
For each student:
For the class:
- Permanent marking pen for labeling
- Plastic tray
Distribute acorns. Have students make observations of the acorns (color, length of sprout, size) and record/sketch information in their journals. Demonstrate how to plant an acorn. Distribute deepots and assist as students plant their acorns.
If the acorns have not begun to sprout, have each student poke a hole in the soil of the deepot with his/her finger. The hole should begin at the edge of the pot and end near the middle. Help students identify the pointy end of the acorn. Explain that both the root and shoot grow from this end. Instruct students to place the acorns in the holes, pointy side first, and to push the acorns into the dirt until they are just below the surface.
If the acorns have begun to sprout, be careful not to damage their roots. Instruct students to remove an inch of soil from their deepots, and to use their fingers (or a pencil if the root is very long) to make a narrow hole for the root. Tell students to carefully place the acorns into their pots with the root extending down into the hole, and to gently refill their pots with the removed soil so that the acorn is fully covered.
Use a permanent marker to write student names on the craft sticks and insert them into the pots. Gather the pots on the tray and take the tray outside for watering.
To water, put the pots outside on a grassy or mulched area. This will absorb the runoff which will contain traces of fertilizer. Add water until it flows out of the bottoms of the pots and the soil looks evenly wet. Let the pots drain for 10-15 minutes. Place the pots on the plastic tray to contain additional drips.
Back in the classroom, place the tray on a sunny windowsill in the classroom.
Distribute the Oak Seedling Adoption Certificates. Read and discuss the responsibility involved in caring for the seedlings. You may want to read the pledge together and then have each student sign his certificate. |
- Biome: China’s mountain forest
- Climate: 50-260 inches of rain each year; humid most of the year because of the rainfall; a warm temperate zone that is hot and rainy in summer, cold and dry in winter
- Location: Sichuan, Shaanxi, and Gansu, in China
- Organisms: some of the organisms that can be found there are bamboo, giant pandas, red pandas, golden monkeys, tragopans, musk deer, golden cats, and snow leopards
- Water features: lakes and waterfalls
- Land features: mountains, snow, trees & bamboo
- Description: Bamboo is a grass whose contents are about half water. It grows over 100 feet tall, growing high to reach the sunlight.
- Habitat: Bamboo is found in different climate zones, from hot tropical regions to cold mountains.
- Description: Bamboo rats measure 15-25 cm long, with a tail 6-8 cm long. They weigh 500-760 grams (1.2-1.7 pounds).
- Habitat: They live in grassy areas, forests, and sometimes in gardens.
- Food: Bamboo rats eat bamboo, tea bushes, sugar cane, and tapioca.
- Predators: These include humans and pandas.
Status: We shouldn’t be concerned about bamboo rats becoming scarce; there are so many rats that communities don’t know how to control the problem. Bamboo rats have ruined local stocks of yarn for weaving and finished goods like blankets and clothing; they also spread diseases, contaminate water, and bite people while they are sleeping.
Second level: Giant Pandas
- Description: A black and white relative of bears. They measure 28-30 inches and weigh 220-330 lbs; males are usually 10 percent larger than females.
- Habitat: China’s rainforests
- Predators: Golden cats, musk deer, and humans
- Food: Herbs, tree bark, bamboo rats, and bamboo; they usually get their water from bamboo, but sometimes get it by licking the snow.
When a panda is born it is pink, blind, and helpless, and about the size of a rat. A panda mother is 900 times heavier than her newborn cub. At two years old, the cub’s first task is to find its own home.
Status and people: Giant pandas are endangered animals. People can help by not hunting pandas and by not burning down their habitat.
Adaptations: The giant panda has a sixth thumb that allows it to grasp bamboo better for eating. It also helps the panda pull the shoots and leaves off of bamboo stems. It is a very important adaptation for the panda to be able to eat so much bamboo.
**Questions**
1. What is one of the panda’s predators? Any of: golden cats, snow leopards, musk deer, or humans.
2. What is the climate in the rainforest? Mostly rainy and humid.
3. What are some of the water or land features that the rainforest has? Mountains, snow, trees, bamboo, lakes, and waterfalls.
4. How tall can bamboo be? Up to 100 feet tall.
5. What two ways do pandas get their water? From bamboo and by licking the snow.
Beginning and Intermediate Algebra Open Course Ware (MOOC):
This course covers a range of algebraic topics: Setting up and solving linear equations, graphing, finding linear relations, solving systems of equations, working with polynomials, factoring, working with rational and radical expressions, solving rational and radical equations, solving quadratic equations, and working with functions. More importantly, this course is intended to provide you with a solid foundation for the rest of your math courses. As such, emphasis will be placed on mathematical reasoning, not just memorizing procedures and formulas.
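As a small illustration of one listed topic, solving quadratic equations, the sketch below applies the quadratic formula; it is a generic example, not part of the course materials.

```python
import cmath

def solve_quadratic(a: float, b: float, c: float):
    """Return the two roots of a*x**2 + b*x + c = 0 using the quadratic formula.
    cmath is used so complex roots are handled as well as real ones."""
    if a == 0:
        raise ValueError("not a quadratic equation (a must be nonzero)")
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(solve_quadratic(1, -3, 2))   # roots 2 and 1
print(solve_quadratic(1, 0, 1))    # complex roots +i and -i
```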
Tutorials in all things related to beginning algebra.
Pre-algebra, algebra, and pre-calculus lessons
Purplemath's algebra lessons are written with the student in mind. These lessons emphasize the practicalities rather than the technicalities, demonstrating dependable techniques, warning of likely "trick" questions, and pointing out common mistakes.
These math flashcards are engaging and almost game-like: all in all, a great way to master basic skills for lifelong math confidence. Students can repeat them over and over until they are satisfied with their score. Available in Spanish and English.
Learning Express Library:
This adult learning center gives free access to math skills practice tests, math skills e-books, and basic math tutorials.
Math is Power 4 U:
This site provides almost 5,000 free mini-lessons and example videos with no ads. The videos are organized by course and topics ranging from number sense to nursing and statistics.
Solving Word Problems:
Solving arithmetic problems can be difficult because the wording of the questions is frequently confusing. Don't despair! Use your experiences in Real Life to help you make sense of the nonsense.
Video lectures for almost all math topics, including trigonometry, geometry, engineering, statistics, calculus, linear algebra, and more.
All kinds of bees, along with other stinging insects such as wasps, belong to the phylum Arthropoda and, within it, to the insect order Hymenoptera. Bees are very important in many different ecosystems because they help to pollinate many different kinds of plants that are food to other animals.
Bees have long been feared by humans for their ability to sting and inject venom into nearly any animal. The amount of pain felt can vary greatly depending on the type of bee that stings you. The sting of a bee feels much like a pinpointed pinch, and the pain is usually very localized.
Almost immediately after the sting, you will notice that the area around it turns red, and it may become raised and sore to the touch. These symptoms can last up to 48 hours, dying down during the second half of that period. Depending on how your body reacts, some individuals develop a raised area more than 12 inches in diameter. In cases like these, you could be experiencing an allergic reaction, and it is recommended that you seek medical help as soon as possible to make sure the situation does not worsen.
The average person can withstand roughly ten bee stings for every pound of body weight, as long as they do not have any adverse reactions to the venom. By that estimate, on the order of 1,000 stings could kill an average-sized adult, while around 500 stings could potentially kill a small child. To receive that many stings, you would likely have to upset a nest of some kind and be swarmed by a large number of bees. Honey bees die after they sting because their barbed stinger stays behind, but many related insects, such as wasps and hornets, do not die after stinging and can sting you several times.
If you are allergic to a bee sting, you should notice an allergic reaction within the first hour after being stung. Most have been led to believe that it may take several stings for you to notice an allergic reaction, but this is not necessarily true. Individuals that are allergic to bee stings will likely notice the effects after the first sting, and within the first hour after being stung.
Having an allergy to bee stings can put a person into anaphylactic shock, a very dangerous condition that can kill. Young children and elderly adults are most at risk from an allergic reaction to bee stings, even though their initial reactions are often less severe.
Other symptoms that are commonly associated with a bee sting will include itching at the site of the sting, and a bacterial skin infection. Although these infections are quite uncommon, you could notice one developing in the first 24 hours after being stung. Some infections have been known to take as long as 36 hours to set in.
Allergic reactions are characterized by the appearance of hives, which are raised bumps all over the body that can be very itchy. You may also notice that your mouth or throat swells considerably, which can inhibit breathing and requires immediate medical attention. You may also experience nausea, vomiting, and chest pain during an allergic reaction. If you experience any of these symptoms, or difficulty breathing or unconsciousness, it is important to seek help from a medical professional as soon as possible, especially if it is your first time being stung by a bee.
Luckily, the reactions to a bee sting are usually very easy to identify, and someone who knows they are allergic should be able to start toward medical treatment as soon as they realize they have been stung. Although bee stings are generally not very dangerous, they can become so if an allergic reaction is left untreated and the individual goes into anaphylactic shock, which can result in death.
Seismology is the study of earthquakes and seismic waves that move through and around the earth.
What Are Seismic Waves?
Seismic waves are the waves of energy caused by the sudden breaking of rock within the earth or an explosion. They are the energy that travels through the earth and is recorded on seismographs.
Types of Seismic Waves
There are several different kinds of seismic waves, and they all move in different ways. The two main types of waves are body waves and surface waves.
Body waves can travel through the earth's inner layers, but surface waves can only move along the surface of the planet like ripples on water. Earthquakes radiate seismic energy as both body and surface waves.
1. Name and describe in detail the two types of body waves.
2. Name and describe in detail the two types of surface waves.
An earthquake is a sudden movement of rocks, caused by buildup of stress within the Earth, especially when adjacent plates move in different directions. When the stress exceeds the strength of rocks at a fault, an earthquake occurs, releasing energy partly in the form of earth vibrations (seismic waves) and partly by producing a sudden offset of the rock.
Areas tens-to-hundreds of kilometers from an earthquake are at risk from seismic waves, which can transmit energy through the earth for thousands of kilometers. Most earthquakes occur along the edge of the oceanic and continental plates.
10. Define a fault and describe three types of faults.
Geologists have used properties of P-waves and S-waves to predict the composition of Earth’s interior. They believe that Earth consists of three main zones: the crust, the mantle, and the core. They believe the core consists of a liquid outer core and a solid inner core.
P-waves and S-waves travel through various rock materials at different velocities.
S-waves cannot pass through molten (liquid) rock. If Earth’s composition were that of a uniform solid, the velocities of P-waves and S-waves would increase steadily with depth, because increasing pressure beneath the surface increases the elastic properties of the rock, which in turn increases wave velocities. However, the interior rock composition is not uniform; it changes with depth, so earthquake wave velocity does not increase smoothly, as shown in the graph below.
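Because P-waves travel faster than S-waves, the delay between their arrivals at a station grows with distance from the earthquake. The sketch below illustrates this relationship using assumed average crustal velocities of about 6 km/s for P-waves and 3.5 km/s for S-waves; these are illustrative values, not numbers read from the graph.

```python
def distance_from_sp_lag(lag_seconds: float,
                         vp_km_s: float = 6.0,
                         vs_km_s: float = 3.5) -> float:
    """Estimate distance (km) to an earthquake from the S-minus-P arrival-time lag.
    Assumes straight-line travel at constant average velocities, so it is only a
    rough classroom approximation."""
    # lag = d/vs - d/vp  =>  d = lag / (1/vs - 1/vp)
    return lag_seconds / (1.0 / vs_km_s - 1.0 / vp_km_s)

print(round(distance_from_sp_lag(10.0)))  # about 84 km for a 10-second lag
```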
Use the graph and information from the website to answer the following questions:
3. How fast do P-waves move in the crust?
4. How fast do S-waves move in the crust?
5. What happens to S-waves approximately 2900 km below Earth’s surface? Why?
6. Using only data on P-waves, how could you determine the depth of the boundary between the mantle and the outer core?
7. How does P-wave speed indicate that the inner core is composed of solid rock?
8. S-waves can travel through solid rock, and the inner core is solid. Why then are no S-waves found in the inner core?
9. Which is likely to be a more distinct transition: from the mantle to the outer core, or from the outer core to the inner core?
One of man’s most urgent requirements is food. In contemplating virtually any hypothetical survival situation, the mind immediately turns to thoughts of food. Unless the situation occurs in an arid environment, even water, which is more important to maintaining body functions, will usually follow food in our initial thoughts. The survivor must remember that the three essentials of survival—water, food, and shelter—are prioritized according to the estimate of the actual situation. This estimate must not only be timely but accurate as well. We can live for weeks without food, but it may take days or weeks to determine what is safe to eat and to trap animals in the area. Therefore, you need to begin food gathering in the earliest stages of survival, as your endurance will decrease daily. Some situations may well dictate that shelter precede both food and water.
Using Animals for Food in the Wild
Unless you have the chance to take large game, concentrate your efforts on the smaller animals. They are more abundant and easier to prepare. You need not know all the animal species that are suitable as food; relatively few are poisonous, and those make a smaller list to remember. However, it is important to learn the habits and behavioral patterns of classes of animals. For example, animals that are excellent choices for trapping are those that inhabit a particular range and occupy a den or nest, those that have somewhat fixed feeding areas, and those that have trails leading from one area to another. Larger, herding animals, such as elk or caribou, roam vast areas and are somewhat more difficult to trap. Also, you must understand the food choices of a particular species to select the proper bait.
You can, with relatively few exceptions, eat anything that crawls, swims, walks, or flies. You must first overcome your natural aversion to a particular food source. Historically, people in starvation situations have resorted to eating everything imaginable for nourishment. A person who ignores an otherwise healthy food source due to a personal bias, or because he feels it is unappetizing, is risking his own survival. Although it may prove difficult at first, you must eat what is available to maintain your health. Some classes of animals and insects may be eaten raw if necessary, but you should thoroughly cook all food sources whenever possible to avoid illness.
Insects as Food in the Wild
The most abundant and most easily caught life-forms on earth are insects. Many insects provide 65 to 80 percent protein, compared to 20 percent for beef. This fact makes insects an important, if not overly appetizing, food source. Insects to avoid include all adults that sting or bite, hairy or brightly colored insects, and caterpillars and insects that have a pungent odor. Also avoid spiders and common disease carriers such as ticks, flies, and mosquitoes.
Rotting logs lying on the ground are excellent places to look for a variety of insects including ants, termites, beetles, and grubs, which are beetle larvae. Do not overlook insect nests on or in the ground. Grassy areas, such as fields, are good areas to search because the insects are easily seen. Stones, boards, or other materials lying on the ground provide the insects with good nesting sites. Check these sites. Insect larvae are also edible. Insects that have a hard outer shell such as beetles and grasshoppers will have parasites. Cook them before eating. Remove any wings and barbed legs also. You can eat most soft-shelled insects raw. The taste varies from one species to another. Wood grubs are bland, but some species of ants store honey in their bodies, giving them a sweet taste. You can grind a collection of insects into a paste. You can mix them with edible vegetation. You can cook them to improve their taste.
Worms as Food in the Wild
Worms (Annelida) are an excellent protein source. Dig for them in damp humus soil and in the root balls of grass clumps, or watch for them on the ground after a rain. After capturing them, drop them into clean, potable water for about 15 minutes. The worms will naturally purge or wash themselves out, after which you can eat them raw.
Crustaceans as Food in the Wild
Freshwater shrimp range in size from 0.25 centimeter (1/16 inch) up to 2.5 centimeters (1 inch). They can form rather large colonies in mats of floating algae or in mud bottoms of ponds and lakes.
Crayfish are akin to marine lobsters and crabs. You can distinguish them by their hard exoskeleton and five pairs of legs, the front pair having oversized pincers. Crayfish are active at night, but you can locate them in the daytime by looking under and around stones in streams. You can also find them by looking in the soft mud near the chimney-like breathing holes of their nests. You can catch crayfish by tying bits of offal or internal organs to a string. When the crayfish grabs the bait, pull it to shore before it has a chance to release the bait.
You can find saltwater lobsters, crabs, and shrimp from the surf’s edge out to water 10 meters (33 feet) deep. Shrimp may come to a light at night where you can scoop them up with a net. You can catch lobsters and crabs with a baited trap or a baited hook. Crabs will come to bait placed at the edge of the surf, where you can trap or net them. Lobsters and crabs are nocturnal and caught best at night.
NOTE: You must cook all freshwater crustaceans, mollusks, and fish. Fresh water tends to harbor many dangerous organisms (see Chapter 6), animal and human contaminants, and possibly agricultural and industrial pollutants.
Mollusks as Food in the Wild
This class includes octopuses and freshwater and saltwater shellfish such as snails, clams, mussels, bivalves, periwinkles, and chitons; barnacles and sea urchins (see picture above), though not strictly mollusks, are gathered in much the same way. You find bivalves similar to our freshwater mussel, and terrestrial and aquatic snails, worldwide under all water conditions. River snails or freshwater periwinkles are plentiful in rivers, streams, and lakes of northern coniferous forests. These snails may be pencil-point or globular in shape.
In fresh water, look for mollusks in the shallows, especially in water with a sandy or muddy bottom. Look for the narrow trails they leave in the mud or for the dark elliptical slit of their open valves.
Near the sea, look in the tidal pools and the wet sand. Rocks along beaches or extending as reefs into deeper water often bear clinging shellfish. Snails and limpets cling to rocks and seaweed from the low water mark upward. Chitons adhere tightly to rocks above the surf line.
Mussels usually form dense colonies in rock pools, on logs, or at the base of boulders.
CAUTION – Mussels may be poisonous in tropical zones during the summer! If a noticeable red tide has occurred within 72 hours, do not eat any fish or shellfish from that water source.
Steam, boil, or bake mollusks in the shell. They make excellent stews in combination with greens and tubers.
CAUTION – Do not eat shellfish that are not covered by water at high tide!
Fish as Food in the Wild
Fish represent a good source of protein and fat. They offer some distinct advantages to the survivor or evader. They are usually more abundant than mammal wildlife, and the ways to get them are silent. To be successful at catching fish, you must know their habits. For instance, fish tend to feed heavily before a storm. Fish are not likely to feed after a storm when the water is muddy and swollen. Light often attracts fish at night. When there is a heavy current, fish will rest in places where there is an eddy, such as near rocks. Fish will also gather where there are deep pools, under overhanging brush, and in and around submerged foliage, logs, or other objects that offer them shelter.
There are no poisonous freshwater fish. However, catfish have sharp, needlelike spines on their dorsal and pectoral fins; these can inflict painful puncture wounds that quickly become infected.
Cook all freshwater fish to kill parasites. As a precaution, also cook saltwater fish caught within a reef or within the influence of a freshwater source. Any marine life obtained farther out in the sea will not contain parasites because of the saltwater environment. You can eat these raw.
Most fish encountered are edible. The organs of some species are always poisonous to man; other fish can become toxic because of elements in their diets. Ciguatera is a form of human poisoning caused by the consumption of subtropical and tropical marine fish which have accumulated naturally occurring toxins through their diet. These toxins build up in the fish’s tissues. The toxins are known to originate from several algae species that are common to ciguatera-endemic regions in the lower latitudes. Cooking does not eliminate the toxins; neither does drying, smoking, or marinating. Marine fish most commonly implicated in ciguatera poisoning include the barracudas, jacks, mackerel, triggerfish, snappers, and groupers. Many other species of warm-water fishes harbor ciguatera toxins. The occurrence of toxic fish is sporadic, and not all fish of a given species or from a given locality will be toxic. This sporadic occurrence explains why red snapper and grouper remain coveted fish off the shores of Florida and the East Coast: although they are restaurant and fishermen’s favorites and common fish-market choices, they were also associated with about 100 cases of food poisoning in May 1988 in Palm Beach County, Florida. The poisonings resulted in a statewide warning against eating hogfish, grouper, red snapper, amberjack, and barracuda caught at the Dry Tortuga Bank. A major outbreak of ciguatera occurred in Puerto Rico between April and June 1981, prompting a ban on the sale of barracuda, amberjack, and blackjack. Other examples of poisonous saltwater fish are the porcupine fish, cowfish, thorn fish, oilfish, and puffer (see pictures above).
Amphibians as Food in the Wild
Frogs are easily found around bodies of fresh water. Frogs seldom move from the safety of the water’s edge. At the first sign of danger, they plunge into the water and bury themselves in the mud and debris. Frogs are characterized by smooth, moist skin. There are few poisonous species of frogs. Avoid any brightly colored frog or one that has a distinct “X” mark on its back as well as all tree frogs. Do not confuse toads with frogs. Toads may be recognized by their dry, “warty” or bumpy skin. They are usually found on land in drier environments. Several species of toads secrete a poisonous substance through their skin as a defense against attack. Therefore, to avoid poisoning, do not handle or eat toads.
Do not eat salamanders; only about 25 percent of all salamanders are edible, so it is not worth the risk of selecting a poisonous variety. Salamanders are found around the water. They are characterized by smooth, moist skin and have only four toes on each foot.
Reptiles as Food in the Wild
Reptiles are a good protein source and relatively easy to catch. Thorough cooking and hand washing are imperative with reptiles. All reptiles are considered to be carriers of salmonella, which exists naturally on their skin. Turtles and snakes are especially known to infect man. If you are in an undernourished state and your immune system is weak, salmonella can be deadly. Cook food thoroughly and be especially fastidious about washing your hands after handling any reptile. Lizards are plentiful in most parts of the world. They may be recognized by their dry, scaly skin. They have five toes on each foot. The only poisonous ones are the Gila monster and the Mexican beaded lizard. Care must be taken when handling and preparing the iguana and the monitor lizard, as they commonly harbor salmonella bacteria in their mouths and teeth. The tail meat is the best tasting and easiest to prepare.
Turtles are a very good source of meat. There are actually seven different flavors of meat in each snapping turtle. Most of the meat will come from the front and rear shoulder area, although a large turtle may have some on its neck. The box turtle is a commonly encountered turtle that you should not eat (see picture above). It feeds on poisonous mushrooms and may build up a highly toxic poison in its flesh. Cooking does not destroy this toxin. Also avoid the hawksbill turtle (see picture above), found in the Atlantic Ocean, because of its poisonous thorax gland. Poisonous snakes, alligators, crocodiles, and large sea turtles present obvious hazards to the survivor.
Birds as Food in the Wild
All species of birds are edible, although the flavor will vary considerably. The only poisonous bird is the Pitohui, native only to New Guinea. You may skin fish-eating birds to improve their taste. As with any wild animal, you must understand birds’ common habits to have a realistic chance of capturing them. You can take pigeons, as well as some other species, from their roost at night by hand. During the nesting season, some species will not leave the nest even when approached. Knowing where and when the birds nest makes catching them easier. Birds tend to have regular flyways going from the roost to a feeding area, to water, and so forth. Careful observation should reveal where these flyways are and indicate good areas for catching birds in nets stretched across the flyways. Roosting sites and waterholes are some of the most promising areas for trapping or snaring.
Nesting birds present another food source—eggs. Remove all but two or three eggs from the clutch, marking the ones that you leave. The bird will continue to lay more eggs to fill the clutch. Continue removing the fresh eggs, leaving the ones you marked.
Mammals as Food in the Wild
Mammals are excellent protein sources and, for Americans, the tastiest food source. There are some drawbacks to obtaining mammals. In a hostile environment, the enemy may detect any traps or snares placed on land. The amount of injury an animal can inflict is in direct proportion to its size. All mammals have teeth and nearly all will bite in self-defense. Even a squirrel can inflict a serious wound and any bite presents a serious risk of infection. Also, any mother can be extremely aggressive in defense of her young. Any animal with no route of escape will fight when cornered.
All mammals are edible; however, the polar bear and bearded seal have toxic levels of vitamin A in their livers. The platypus, native to Australia and Tasmania, is an egg-laying, semiaquatic mammal that has poisonous claws on its hind legs. Scavenging mammals, such as the opossum, may carry diseases. |
Ancestor of horses, rhinos may have originated in India
Working at the edge of a coal mine in India, a team of Johns Hopkins researchers and colleagues has filled in a major gap in science's understanding of the evolution of a group of animals that includes horses and rhinos. That group likely originated on the subcontinent when it was still an island headed swiftly for collision with Asia, the researchers reported recently in the online journal Nature Communications.
Modern horses, rhinos, and tapirs belong to a biological group, or order, called Perissodactyla. Also known as "odd-toed ungulates," animals in the order have, as their name implies, an uneven number of toes on their hind feet and a distinctive digestive system. Though paleontologists had found remains of Perissodactyla from as far back as the beginnings of the Eocene epoch, about 56 million years ago, their earlier evolution remained a mystery, says Ken Rose, a professor of functional anatomy and evolution at the Johns Hopkins School of Medicine.
Rose and his research team have for years been excavating mammal fossils in the Bighorn Basin of Wyoming, but in 2001 he and Indian colleagues began exploring Eocene sediments in Western India because it had been proposed that perissodactyls and some other mammal groups might have originated there. In an open-pit coal mine northeast of Mumbai, they uncovered a rich vein of ancient bones. Rose says he and his collaborators obtained funding from the National Geographic Society to send a research team to the mine site at Gujarat in the far western part of India for two weeks at a time once every year or two over the last decade.
The mine yielded what Rose says was a treasure trove of teeth and bones for the researchers to comb through back in their home laboratories. Of these, more than 200 fossils turned out to belong to an animal dubbed Cambaytherium thewissi, about which little had been known. The researchers dated the fossils to about 54.5 million years ago, making them slightly younger than the oldest known Perissodactyla remains, but, Rose says, the finding provides a window into what a common ancestor of all Perissodactyla would have looked like. "Many of Cambaytherium's features, like the teeth, the number of sacral vertebrae, and the bones of the hands and feet, are intermediate between Perissodactyla and more primitive animals," Rose says. "This is the closest thing we've found to a common ancestor of the Perissodactyla order."
Cambaytherium and other finds from the Gujarat coal mine also provide tantalizing clues about India's separation from Madagascar, lonely migration, and eventual collision with the continent of Asia as the Earth's plates shifted, Rose says. In 1990, two researchers, David Krause and Mary Maas of Stony Brook University, published a paper suggesting that several groups of mammals that appear at the beginning of the Eocene, including primates and odd- and even-toed ungulates, might have evolved in India while it was isolated. Cambaytherium is the first concrete evidence to support that idea, Rose says. But, he adds, "it's not a simple story."
"Around Cambaytherium's time, we think India was an island, but it also had primates and a rodent similar to those living in Europe at the time," he says. "One possible explanation is that India passed close by the Arabian Peninsula or the Horn of Africa, and there was a land bridge that allowed the animals to migrate. But Cambaytherium is unique and suggests that India was indeed isolated for a while."
Rose says his team was "very fortunate that we discovered the site and that the mining company allowed us to work there," although, he adds, "it was frustrating to know that countless fossils were being chewed up by heavy mining equipment." When coal extraction was finished, the miners covered the site, he says. His team has now found other mines in the area to continue digging.
Other authors on the study from Johns Hopkins were Katrina E. Jones and Heather E. Ahrens.
This study was funded by the National Geographic Society, Belgian Science Policy Office, National Science Foundation, and Wadia Institute of Himalayan Geology. |
Variscite, a phosphate mineral, hydrated aluminum phosphate (AlPO4·2H2O), which is valued as a semiprecious gemstone and an ornamental material. Both variscite and strengite, a similar mineral in which iron replaces aluminum in the crystal structure, occur as glassy nodules, veins, or crusts in near-surface deposits: variscite is produced by the action of phosphate-rich waters on aluminous rocks, and strengite by alteration of iron-containing phosphates. Variscite is usually green; strengite, red. Variscite occurs in Germany, Austria, the Czech Republic, Congo (Kinshasa), and Australia, and in commercially important quantities near Fairfield, Utah, U.S. It also occurs with apatite on islands where phosphatic solutions from guano (seafowl excrement) have altered aluminous igneous rocks. Strengite deposits are known in Germany, Portugal, Sweden, and the United States. For detailed physical properties, see phosphate mineral (table).
Hepatitis B: Introduction
Hepatitis B is a form of hepatitis, a group of serious diseases that cause inflammation of the liver. Hepatitis B is an infectious form of hepatitis that is caused by the hepatitis B virus. Hepatitis B is one of the most common forms of hepatitis.
The liver is a vital organ, and normal functioning of the liver is crucial to health and life. Hepatitis B can result in complications of the liver, such as cirrhosis of the liver and liver failure. These complications reduce the liver's ability to do its vital job of helping the body to fight infection, stop bleeding, clear the blood of toxins, store energy, produce healthy blood, digest food, and remove waste. Having hepatitis B also increases the risk of developing liver cancer.
The hepatitis B virus is spread by having contact with the blood, semen, and vaginal secretions of a person infected with the hepatitis B virus. High-risk activities include having unprotected sexual activity, having multiple sexual partners, sharing contaminated needles, or getting a tattoo or body piercing using unsterilized needles. A baby born vaginally to an infected woman can also contract a hepatitis B infection. Any person who comes into frequent contact with blood, such as a healthcare worker, is also at risk for hepatitis B.
In some people with early hepatitis B infection, there may be no symptoms. General symptoms common to hepatitis B include flu-like symptoms, fever, fatigue, muscle aches and jaundice, a yellowing of the skin and whites of the eyes. Complications can be serious, even life-threatening, and include the development of cirrhosis and liver failure. For more information about additional symptoms and complications, refer to symptoms of hepatitis B.
Making a diagnosis of hepatitis B includes performing a complete medical evaluation and history and physical examination. This includes questions about risk factors for contracting hepatitis B, such as having unprotected sex, sharing needles, or having tattoos or body piercings that were made with unsterilized needles.
Diagnostic blood tests include tests that can check for the antibodies that the body makes to fight hepatitis B and the hepatitis B surface antigen test. Blood tests can also be done that help to determine how likely it is that a person is infectious and will spread hepatitis B.
Liver function tests are blood tests that can help to determine the level of severity of hepatitis B by checking level of functioning of the liver and if there is any damage to the liver. Imaging tests that create a picture of the liver include an ultrasound, CT, and/or a nuclear liver scan.
It is possible that a diagnosis of hepatitis B can be missed or delayed because symptoms can be vague or there may be no symptoms in some people. In addition, symptoms of hepatitis B can be similar to symptoms of other diseases and conditions. For more information about diseases and conditions that can mimic hepatitis B, refer to misdiagnosis of hepatitis B.
There is no cure for hepatitis B. Treatment includes rest, ensuring good nutrition, and antiviral medications in some cases. For serious cases in which liver damage or liver failure has occurred, hospitalization may be necessary. Treatment in the hospital may include medications, further diagnostic testing, and possibly a liver transplant. For more information, refer to treatment of hepatitis B.
Hepatitis B: Viral liver infection spread by sex or body fluids.
More detailed information about the symptoms, causes, and treatments of Hepatitis B is available below.
Hepatitis B: Symptoms
In many cases there are no symptoms in the early stages of hepatitis B infection. Many children and some adults do not develop any symptoms until complications, such as cirrhosis, develop.
Symptoms of hepatitis B can include flu-like symptoms, fever, headache, nausea, muscle aches and weakness, and jaundice, a yellowing of the skin and whites of the eyes.
Hepatitis B: Treatments
The most effective treatment plan for hepatitis B uses a multifaceted approach. Treatment plans are individualized to best fit the patient's age, medical history, and type and stage of the disease. The goal of treatment is to stop or lessen damage to the liver and to minimize and quickly treat any complications, such as cirrhosis of the liver.
Most adults recover from hepatitis B.
Hepatitis B: Misdiagnosis
A diagnosis of hepatitis B may be overlooked or delayed because there may be no symptoms, especially in children. In addition, symptoms such as flu-like symptoms, fever, poor appetite, fatigue, and weakness may be similar to symptoms of other diseases and conditions, such as cirrhosis of the liver, flu, gallstones, and peptic ulcer.
Treatments for Hepatitis B
Treatment of Hepatitis B depends upon whether the infection is acute or chronic, and the severity of the illness. Treatments include:
- Avoidance of alcohol and medications that may worsen hepatic function, or that rely on the liver for metabolism
- Liver transplant
Misdiagnosis and Hepatitis B
Chronic digestive conditions often misdiagnosed: When diagnosing chronic symptoms of the digestive tract, there are a variety of conditions that may be misdiagnosed. The best known, irritable bowel syndrome, is over-diagnosed.
Intestinal bacteria disorder may be a hidden cause: One of the lesser known causes of diarrhea is an imbalance of bacteria in the gut, sometimes called intestinal imbalance.
Antibiotics often cause diarrhea: The use of antibiotics is very likely to cause some level of diarrhea in patients, because antibiotics kill off not only harmful bacteria but also the normal bacteria of the gut.
Food poisoning may actually be an infectious disease: Many people who come down with "stomach symptoms" like diarrhea assume that it's "something I ate" (i.e. food poisoning). In fact, it's more likely to be an infectious diarrheal illness.
Mesenteric adenitis misdiagnosed as appendicitis in children: Because appendicitis is one of the more feared conditions for a child with abdominal pain, mesenteric adenitis is sometimes misdiagnosed as appendicitis.
Celiac disease often fails to be diagnosed as the cause of chronic digestive symptoms: One of the most common chronic digestive conditions is celiac disease, a malabsorption disorder with a variety of symptoms.
Chronic liver disease often undiagnosed: One study reported that 50% of patients with a chronic liver disease remain undiagnosed by their primary physician.
Chronic digestive diseases hard to diagnose: There is an inherent difficulty in diagnosing the various types of chronic digestive diseases. Some of the better known possibilities are peptic ulcer and conditions of the colon.
Definitions of Hepatitis B:
An acute (sometimes fatal) form of viral hepatitis caused by a DNA virus that tends to persist in the blood serum and is transmitted by sexual contact, by transfusion, or by ingestion of contaminated blood or other bodily fluids.
- (Source - WordNet 2.1)
Just like other object-oriented languages, Pony has classes. A class is declared with the keyword class, and it has to have a name that starts with a capital letter, like this:
  class Wombat
Do all types start with a capital letter? Yes! And nothing else starts with a capital letter. So when you see a name in Pony code, you will instantly know whether it's a type or not.
What goes in a class?
A class is composed of fields, constructors, and functions.
Fields: These are just like fields in C structs or fields in classes in C++, C#, Java, Python, Ruby, or basically any language, really. There are three kinds of fields: var, let and embed fields. A var field can be assigned to over and over again, but a let field is assigned to in the constructor and never again. Embed fields will be covered in more detail in the documentation on variables.
class Wombat
  let name: String
  var _hunger_level: U64
A Wombat has a name, which is a String, and a _hunger_level, which is a U64 (an unsigned 64-bit integer).
What does the leading underscore mean? It means something is private. A private field can only be accessed by code in the same type. A private constructor, function, or behaviour can only be accessed by code in the same package. We'll talk more about packages later.
Pony constructors have names. Other than that, they are just like constructors in other languages. They can have parameters, and they always return a new instance of the type. Since they have names, you can have more than one constructor for a type.
Constructors are introduced with the new keyword.
class Wombat
  let name: String
  var _hunger_level: U64

  new create(name': String) =>
    name = name'
    _hunger_level = 0

  new hungry(name': String, hunger': U64) =>
    name = name'
    _hunger_level = hunger'
Here, we have two constructors: one that creates a Wombat that isn't hungry, and another that creates a Wombat that might be hungry or might not.
What's with the single quote thing, i.e. name'? You can use single quotes in parameter and local variable names. In mathematics, it's called a prime, and it's used to say "another one of these, but not the same one". Basically, it's just convenient.
Every constructor has to set every field in an object. If it doesn't, the compiler will give you an error. Since there is no null in Pony, we can't do what Java, C# and many other languages do and just assign either null or zero to every field before the constructor runs, and since we don't want random crashes, we don't leave fields undefined (unlike C or C++).
Sometimes it's convenient to set a field the same way for all constructors.
class Wombat
  let name: String
  var _hunger_level: U64
  var _thirst_level: U64 = 1

  new create(name': String) =>
    name = name'
    _hunger_level = 0

  new hungry(name': String, hunger': U64) =>
    name = name'
    _hunger_level = hunger'
Now every Wombat begins a little bit thirsty, regardless of which constructor is called.
Functions in Pony are like methods in Java, C#, C++, Ruby, Python, or pretty much any other object oriented language. They are introduced with the keyword fun. They can have parameters, like constructors do, and they can also have a result type (if no result type is given, it defaults to None).
class Wombat
  let name: String
  var _hunger_level: U64
  var _thirst_level: U64 = 1

  new create(name': String) =>
    name = name'
    _hunger_level = 0

  new hungry(name': String, hunger': U64) =>
    name = name'
    _hunger_level = hunger'

  fun hunger(): U64 => _hunger_level

  fun ref set_hunger(to: U64 = 0): U64 => _hunger_level = to
The first function, hunger, is pretty straightforward. It has a result type of U64, and it returns _hunger_level, which is a U64. The only thing a bit different here is that no return keyword is used. This is because the result of a function is the result of the last expression in the function, in this case the value of _hunger_level.
Is there a return keyword in Pony? Yes. It's used to return "early" from a function, i.e. to return something right away and not keep running until the last expression.
The second function, set_hunger, introduces a bunch of new concepts all at once. Let's go through them one by one.
- The ref keyword right after fun
This is a reference capability. In this case, it means the receiver, i.e. the object on which the set_hunger function is being called, has to be a ref type. A ref type is a reference type, meaning that the object is mutable. We need this because we are writing a new value to the _hunger_level field.
What's the receiver reference capability of the hunger method? The default receiver reference capability if none is specified is box, which means "I need to be able to read from this, but I won't write to it".
What would happen if we left the ref keyword off the set_hunger method? The compiler would give you an error. It would see you were trying to modify a field and complain about it.
- The = 0 after the parameter to
This is a default argument. It means that if you don't include that argument at the call site, you will get the default argument. In this case, to will be zero if you don't specify it.
- What does the function return?
It returns the old value of _hunger_level.
Wait, seriously? The old value? Yes. In Pony, assignment is an expression rather than a statement. That means it has a result. This is true of a lot of languages, but they tend to return the new value. In other words, given a = b, in most languages, the value of that is the value of b. But in Pony, the value of that is the old value of a.
...why? It's called a "destructive read", and it lets you do awesome things with a capabilities-secure type system. We'll talk about that more later. For now, we'll just mention that you can also use it to implement a swap operation. In most languages, to swap the values of a and b you need to do something like:
var temp = a
a = b
b = temp
In Pony, you can just do:
a = b = a
Finalisers are special functions. They are named _final, take no parameters, and have a receiver reference capability of box. In other words, the definition of a finaliser must be fun _final().
The finaliser of an object is called before the object is collected by the GC. Functions may still be called on an object after its finalisation, but only from within another finaliser. Messages cannot be sent from within a finaliser.
Finalisers are usually used to clean up resources allocated in C code, like file handles, network sockets, etc.
What about inheritance?
In some object-oriented languages, a type can inherit from another type, like how in Java something can extend something else. Pony doesn't do that. Instead, Pony prefers composition to inheritance. In other words, instead of getting code reuse by saying something is something else, you get it by saying something has something else.
On the other hand, Pony has a powerful trait system (similar to Java 8 interfaces that can have default implementations) and a powerful interface system (similar to Go interfaces, i.e. structurally typed).
We'll talk about all that stuff in detail later.
By now it shouldn't be very surprising to learn that Pony is written in ASCII. ASCII is a standard text encoding that uses English characters and symbols, and almost every programming language in existence defines source code as a subset of it.
A Pony type, whether it's a class, actor, trait, interface, primitive, or type alias, must start with an uppercase letter. Any method or variable, including parameters and fields, must start with a lowercase letter (possibly after a leading underscore, which marks private or special methods — behaviours, constructors, and functions). In all cases underscores in a row or at the end of a name are not allowed, but otherwise any combination of letters and numbers is legal.
In fact, numbers may use single underscores inside as a separator too! But only valid variable names can end in primes. |
Definition of plasma in English:
1 The colorless fluid part of blood, lymph, or milk, in which corpuscles or fat globules are suspended.
- Instead, blood is often separated into its three main components; red blood cells, plasma, and platelets.
- The use of smaller VTS in humans leads to reduced concentrations of polymorphonuclear cells and cytokines in both plasma and bronchoalveolar lavage fluid.
- It is composed of: red corpuscles, white cells, platelets, and blood plasma.
1.1 Plasma taken from donors or blood donated by donors for administering in transfusions.
- Patients with IgA deficiency need to be informed about the possibility of having a serious reaction to plasma or blood transfusions, because of antibodies to IgA.
- Another area of concern is China, where a policy of re-injecting blood donors with plasma to allow them to donate more frequently has infected hundreds of thousands.
- Every batch of immunoglobulin is manufactured from the pooled plasma of many blood donors, so attention has focused on its potential infective risks.
2 An ionized gas consisting of positive ions and free electrons in proportions resulting in more or less no overall electric charge, typically at low pressures (as in the upper atmosphere and in fluorescent lamps) or at very high temperatures (as in stars and nuclear fusion reactors).
- This expansion of the atmosphere significantly increases the number of microscopic collisions between the satellite and the gases and plasma of the upper atmosphere.
- Research on nuclear fusion in the 1940s shifted the focus of plasma research from the stars to laboratories on Earth.
- The photons can break apart, or ionize, molecules and atoms of the atmosphere into protons and electrons, producing plasma.
2.1 An analogous substance consisting of mobile charged particles (such as a molten salt or the electrons within a metal).
- However, shielded metal arc welding, plasma arc, and electron beam welding processes can be used.
- Due to its lower flame temperature and particle velocity compared with plasma spraying, flame spraying produces a less dense coating having lower adhesion strength.
- The team grew the nano-needles by saturating droplets of molten gold with zinc oxide plasma.
4 another term for cytoplasm or protoplasm.
- Analysis of homozygous germline clones can be employed to reveal the role of pleiotropic genes in pole plasm formation.
- First, we have checked that in K + buffer plasma and mitochondrial potentials were dissipated.
- Scattered throughout the plasma in cells are organelles called mitochondria.
- Example sentences for the derivative "plasmatic"
- The results showed that the patients of group 1 presented low plasmatic levels of vitamin E and that the patients of group 2 presented significantly lower levels of vitamin E after 2 or 4 cycles of cisplatin than before treatment.
- A small set of membrane proteins, directly energized through the hydrolysis of MgATP, MgGTP or MgPP i, constitutes the basic framework for establishing distinct chemical milieus in the plasmatic and extraplasmatic compartments.
- Fixation was used in this experiment since it allows rosette particles to partition into the plasmatic fracture face, like the other particles (double ring) of the exocytotic site.
- Example sentences for the derivative "plasmic"
- A derivative of Japan's long samurai-manga tradition (especially Lone Wolf and Cub), it has the requisite tangled storyline and some thrilling, plasmic exchanges rendered with a prodigious brush.
- The crimson puddles stretch into rivers, glisten and clot into islands of plasmic banks.
- After fish oil-based lipid infusion, a rapid increase in free plasmic eicosapentaenoic acid and docosahexaenoic acid levels was noted, rising to an average of approximately 35 and 65 [mu] M, respectively.
Early 18th century (in the sense 'mold, shape'): from late Latin, literally 'mold', from Greek plasma, from plassein 'to shape'.
Words that rhyme with plasma: miasma
Before we consider their role in allergies, there are many fascinating things we should know about pollen grains. First, pollen, like sheep, is a collective noun, so we never say or write "pollens", although a surprising number of professional people make that mistake.
All of the plants that are grouped together as Flowering Plants or Angiosperms produce pollen as part of their reproductive process. Pollen grains are tiny, often roughly spherical structures that contain and transport male sex cells of flowering plants. The familiar flowers that decorate your table or garden all have a similar structure with bright colours and showy petals. These features of flowers have been designed by nature to attract insects that aid in the transfer of the male component of reproduction (pollen) to the respective female organ (stigma).
But not all flowering plants depend on insect visitors; in fact the primary culprits of pollen allergies are the best examples of wind pollination. All of the trees and shrubs that cause spring allergy are wind-pollinated. Their flowers have been stripped down to the bare minimum and are often grouped together in long dangling structures (anthers) that expose the pollen grains to the wind. During pollination, the wind blows pollen off the anthers and carries it for various distances eventually to land on some surface (soil, lakes, nose and eyes of humans), but only a very few will find their way to a receptive female stigma.
Wind pollinated species compensate for this less precise transfer of sex cells by sending clouds of pollen into the air, and because of this, individuals are more often allergic to these species. Individuals can have allergic reactions to insect pollinating species, however similar symptoms to allergy (sneezing or wheezing) can occur in response to the aroma of a flower or plant.
Can you see a pollen grain?
Yes and no. With the aid of a compound microscope, the pollen grains of different plant types can be differentiated allowing scientists to study the number and types of pollen grains released into the air. Masses of pollen are visible to the naked eye on the end of a stamen of a tulip or other flowers. But the naked eye cannot distinguish an individual pollen grain; it is far too small.
How many pollen grains float through the air?
That depends on the type. For example, the white birch is one of the most allergenic taxa. Their flowers, called catkins, are long, dark, pendulous, worm-like structures on the ends of the branches. Each catkin can produce roughly 2 to 5 million pollen grains hence a typical tree will produce and release to the air roughly 2,000,000,000 pollen grains per season. In a typical residential area roughly 30 years old, the frequency of white birch is 45 trees for every kilometre of street. Therefore it is not surprising to find concentration of white birch pollen as high as 4000 grains per cubic metre of air at the height of its flowering period. Other trees such as oak and pine can also reach concentration of 1000 to 2000 at their peak, however grass concentrations are generally lower, reaching roughly 200 grains per cubic metre of air.
How long is the flowering season?
Trees flower in the springtime when temperature is increasing. For each type of tree, the flowering period is defined by specific conditions, which usually occur at approximately the same time each year, lasting roughly 2 weeks and the peak pollinating period (time when there are the maximum concentration of pollen in the air) lasting only a few days. The tree season in New Zealand is relatively short compared with Europe, where the birch season is several months long. Here it is a month or less.
Tree pollen is, therefore, less of an issue compared with grass pollen. Grass allergy is a severe problem because its season goes from August/September through to March. This makes New Zealand’s pollen season a nine-month nasal marathon!
Many people allergic to grass are allergic to more than one species creating a long protracted suffering period.
As a further complication, pollen concentrations in any flowering period vary on a daily basis in response to the various weather conditions. Pollen release is favourable on warm, dry, windy days whereas rain washes the air clean of pollen. Due to the biology of the plants, pollination usually occurs in the morning. Pollen concentrations are typically lowest at roughly 6 am increasing to the peak at 12 noon and decreasing through the afternoon and evening. See our Pollen Calendar for specific high risk times for different species of trees, grasses and weeds.
So what is it about pollen grains that make them allergenic?
Pollen grains carry on their exterior coat 30-40 different proteins that are required by the female parts of the flower to identify which pollen grains are a suitable match for pollination. When pollen grains are breathed into the nasal passages or contact the membranes of the eye, they release these proteins to the mucous membranes just as they would onto the surface of the receptive female stigma. This exposes the proteins to the immune system in the blood vessels of the mucous membranes.
The immune system is designed to rid the body of "foreign" proteins, and this usually occurs on a daily basis without any notice at all. However, for some people, for reasons that are still undiscovered, the immune system does not discard some of these pollen proteins through the usual route, but instead produces a special class of antibodies, IgE antibodies. The IgE antibodies bind to specialised cells called mast cells and upon contact with the pollen protein, signal the mast cell to release its contents.
One of the chemicals released in this process is histamine, which is responsible for producing the symptoms of allergy, e.g. swelling, redness, itchiness and secretion of mucous. All of these symptoms can occur when the immune system recognises one or more of these pollen proteins and produces IgE antibody to it. Some proteins are more likely to become allergenic than others, and some pollen types carry proteins that are more allergenic. For example, pine is a prolific pollen producer, but very few people are allergic to the pollen proteins, whereas ragweed, which produces less pollen, has proteins that are very allergenic.
How can pollen be avoided?
We know that pollen concentrations vary in both space and time. Learn to identify the plants that you are allergic to (there are many books to help you), find out where they like to live and know at what time of year they are pollinating, then STAY AWAY. The highest concentrations of pollen are within 10 metres of the plant and concentrations drop quickly as you move farther away, so you can significantly reduce your exposure to pollen by removing yourself physically from the plant when it is pollinating. Also, keep your windows closed during this time and stay indoors, especially in the morning hours. Before and after the pollination period, the plant should pose no harm to you (unless you have a contact type of plant allergy), so you can take walks in the woods at these times. Pollination is one of nature's wonders - learning about it helps us to cope with the bad luck of being allergic.
Based on information supplied by Christine Rogers |
Meteorologists use anemometers to measure wind speed in one area. With this data, they can determine how quickly a storm, or weather system, will travel to other areas.
Build Your Own Weather Tool!
Use the materials and follow the directions below.
Five 3-ounce paper cups
One straight pin
Pencil (with eraser)
Two straight plastic straws
Watch with a second hand
- Punch one hole in each of four paper cups, about ½" below the rim. Color the outside of one of the cups.
- In the fifth cup, punch four evenly spaced holes about ¼" below the rim.
- Push a straw through the hole of the colored cup. Fold down the tip of the straw inside the cup, and staple it to the cup on the side opposite the hole.
- Push the straw through two opposite holes in the four-hole cup. Attach another cup to the opposite end of the straw. Make sure that the second cup faces the opposite direction from the first cup.
- Repeat the above step with the other two cups and straw.
- Position the four cups so that they face the same direction, clockwise or counterclockwise. Make sure the cups are all the same distance from the center.
- Poke a hole in the bottom of the center cup. Push the eraser end of the pencil through the hole.
- Push the pin through the intersection of the two straws. Then push it into the eraser as far as possible.
- With a friend, take the anemometer outside to an open area where the wind is blowing.
- While one of you times exactly one minute on the watch, the other counts how many times the colored cup goes by in one minute. This is the number of revolutions per minute (RPM).
- Convert your answer for RPM to miles per hour (MPH) using this formula (a short conversion sketch follows this list): RPM X 0.2142 = MPH
- Record this number on your Weather Data Sheet (PDF).
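As a quick illustration of that conversion step, here is a minimal sketch in Python. It simply applies the 0.2142 factor stated in the activity; the function name and the example count of 95 revolutions are made up for demonstration, and a real anemometer's factor depends on the radius of its cup circle.

def rpm_to_mph(rpm, factor=0.2142):
    # Convert anemometer revolutions per minute to miles per hour
    # using the fixed conversion factor given in the activity.
    return rpm * factor

# Example: the colored cup passed 95 times in one minute.
print(rpm_to_mph(95))  # roughly 20.3 MPH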
Difference between Tsunami and Earthquake
In recent years hazardous phenomena have begun to take a larger role on the worldwide stage due to global climate warming, but also due to volcanic activity and the movement of the earth's plates. Tsunamis and earthquakes are much more destructive and much more frequent than before. Scientists have been developing methods to anticipate such events for many years, but the value of such research is limited when warning comes down to a matter of a few minutes, which may not be enough time to find a safe spot. In some cases, protection measures cannot measure up to the destructive effects of tsunamis and earthquakes. Still, trying to fight back defines us as human beings, and new solutions are explored every day to increase our safety.
A tsunami is a series of very tall waves of water caused by the displacement of a body of water. Tsunami waves can reach heights of 2 meters (6.6 ft) to 14 meters (46 ft) or more. In the open ocean a tsunami can travel at up to 800 kilometers (500 miles) per hour, but it slows down upon approaching the coastline to about 80 kilometers (50 miles) per hour, which still has a devastating impact. Almost 80% of tsunami activity has been recorded in the Pacific Ocean, but the risk also exists inland wherever there are lakes, active volcanoes, or frequent earthquakes. An earthquake is generated when the earth's crust experiences a strong release of energy as a result of tectonic movement. Earthquake magnitude is measured on two different scales: Richter and Mercalli. Earthquakes of magnitude greater than 7 can cause serious damage to buildings, which can fall apart within seconds and collapse over the people inside. Many earthquakes happen in the Pacific Ring of Fire – a hot spot for about 81% of large-scale earthquakes. If the epicenter of the earthquake – its point of origin – is located on the ocean floor, it can lead to a tsunami.
Tsunamis are caused by earthquakes, volcanic eruptions, underwater explosions triggered by detonations, ocean floor plate movement. Tropical cyclones can also generate giant tidal waves. Earthquakes are caused by collision of tectonic plates, by volcanic activity or by underground explosions in mines. Teletsunamis are tidal waves generated by earthquakes causing waves which travel across the oceans. Megatsunamis are massive waves. The best example is probably a 1958 landslide in Alaska which created a wave measured at 524 meters (more than 1,700 feet) high.
Tsunami’s can sweep everything on the coast line from houses to cars, trees, drowning people and animals. If the wave speed is very big the tidal wave can go beyond the coastal line flooding the city and destroying everything in its way. Earthquakes have caused deep craters, fatal damage to constructions, many losses of human lives in just seconds wherever they happened in the world. Japanese have developed even a rolling system for their blocks of flats to diminish the impact of earthquakes.
Similarities and Differences
- Tsunamis are series of high tidal waves which can sweep people and constructions in seconds. Earthquakes are a result of tectonic plates’ movement which releases a wave of energy out to the surface.
- Tsunamis and earthquakes can be caused by volcanic activity or underwater explosions.
- Both tsunamis and earthquakes have destructive consequences. |
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action.
Peripheral Blood Stem Cell Harvesting In Children
WHAT YOU SHOULD KNOW:
- Peripheral blood stem cell (PBSC) harvesting is a procedure that removes stem cells from your child's blood. Stem cells are created in your child's bone marrow. Bone marrow is soft, spongy tissue inside bones. Stem cells may become healthy cells that replace cells that are damaged from sickness. Stem cells removed from your child's blood may be put back into your child or someone else. Before the procedure, your child's caregiver will test his blood. He also will give your child medicine to increase the number of stem cells in his blood.
- Your child's blood will go through a tube into a machine that removes the stem cells. His blood is then returned to your child's body. Your child might have PBSC harvesting if he has chemotherapy (chemo) cancer treatment, which kills or damages many blood cells. New stem cells may help your child grow healthy blood cells to replace these damaged cells. Your child also may donate stem cells to a sick family member or someone else. New stem cells may help your child or someone else make healthier blood cells. Healthy blood cells may help your child or someone else recover faster after chemo. Stem cells also may help treat diseases such as cancer and bleeding problems.
Your child's medicines are:
- Keep a current list of your child's medicines: Include the amounts, and when, how, and why they are taken. Bring the list and the medicines in their containers to follow-up visits. Carry your child's medicine list with you in case of an emergency. Throw away old medicine lists. Give vitamins, herbs, or food supplements only as directed.
- Give your child's medicine as directed: Call your child's healthcare provider if you think the medicine is not working as expected. Tell him if your child is allergic to any medicine. Ask before you change or stop giving your child his medicines.
- Antibiotics: This medicine is given to help prevent or treat an infection caused by bacteria.
- Pain medicine: Your child may need medicine to take away or decrease pain. Know how often your child should get the medicine and how much. Watch for signs of pain in your child. Tell caregivers if his pain continues or gets worse. To prevent falls, stay with your child to help him get out of bed.
Ask for more information about where and when to take your child for follow-up visits:
For continuing care, treatments, or home services for your child, ask for information.
Returning to school or previous activity:
Ask your child's caregiver when it is okay for your child to return to school or normal daily activities.
CONTACT A CAREGIVER IF:
- Your child feels sick to his stomach or throws up.
- Your child feels dizzy, weak, or has the chills.
- Your child has pain that does not go away, even with medicine.
- Your child has less energy or sleeps more than usual.
- Your child is more upset or cries more than usual.
- Your child has a fever (high body temperature).
- You have questions or concerns about your child's procedure, medicine, or care.
SEEK CARE IMMEDIATELY IF:
- Your child has a seizure (uncontrolled shaking).
- Your child faints.
- Your child has trouble breathing.
- Your child complains of pain in his chest.
Always consult your healthcare provider to ensure the information displayed on this page applies to your personal circumstances. |
Recently, I applied to a fellowship with Math for America, a program dedicated to improving mathematics education in U.S. public schools by recruiting, training, and retaining highly qualified secondary school math teachers. In my quest to get the fellowship, I started to fiddle around with some math ideas that made me curious. One of those is the idea of factoring a polynomial, and specifically, how we teach it.
For one, I'm not a fan of FOIL (first-outside-inside-last) for a plethora of reasons. While I think it's handy to have an acronym that reminds students of a procedure, it only works in a very special case. In this case, FOIL works only for multiplying a binomial by another binomial. Does FOIL lead students toward understanding multiplication of all types of polynomials, or understanding why the distributive property works even with variables? I'm not so sure.
We do know that there are alternate ways of approaching multiplication of binomials, but I'd like to focus on using the geometric method of multiplication because, well, because I can.
Multiplication and Factoring Using Areas
Multiplying (x + 2) by (x + 3) can be represented like so (see Figure 1):
This makes the following operations look rather simple:
(x + 2)(x + 3)
x² + 2x + 3x + 6
x² + 5x + 6
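If you want a quick way to check an expansion like this (or reverse it), a computer algebra system does it in a couple of lines. This is just an optional sketch using the SymPy library; it is not part of the original lesson.

from sympy import symbols, expand, factor

x = symbols('x')
quadratic = expand((x + 2)*(x + 3))  # gives x**2 + 5*x + 6
print(quadratic)
print(factor(quadratic))             # gives (x + 2)*(x + 3)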
Using the area method for multiplying binomials also makes factoring an easy task. We can visualize the squares and rectangles in this shape while thinking to ourselves, "Which two numbers have a sum of 5 (second term) and a product of 6 (third term)?" If we look carefully, students have another method for understanding why we get the terms we do after multiplying the binomials shown. How does this relate to trinomials? Let's see.
Multiplication and Factoring of Cubes
Let's take the last example and multiply it by (x + 4). Like so (see Figure 2):
(x + 2)(x + 3)(x + 4)
(x² + 5x + 6)(x + 4)
x³ + 9x² + 26x + 24
Or geometrically (see Figure 2): This has awesome implications for finding both the surface area and volume of this figure. Since we already figured out the "face" of this cube earlier (x² + 5x + 6), we're basically multiplying that face by the length of x and by the length of 4. This yields:
x(x² + 5x + 6) + 4(x² + 5x + 6)
. . .
x³ + 9x² + 26x + 24
Factoring The Cube
Once we find the quadrinomial, the cube gives us a hint for finding the lengths that created the quadrinomial. One would only need to figure out which three numbers give us a sum of 9 (second term) and a product of 24 (last term). These numbers are 2, 3, and 4, so we'll get (x + 2)(x + 3)(x + 4).
Is this much better than using the cubic formula? Absolutely, especially for our students.
What about a quadrinomial like 2x³ - 11x² + 12x + 9? We can try to determine all the real roots of this polynomial, or we can take a look at the second and last terms. The simplest combination for a product of 9 is multiplying 3, 3, and 1. The first term's coefficient, 2, makes getting a -11 tricky. We can't get -11 from the set of numbers without considering the first coefficient.
Yet, as we've seen with the other cubes (see Figure 3):
We will see that the term of 2x will multiply with any lengths that aren't associated with it. Thus, (2x · 3) + (2x · 3) will give us 6x + 6x, or 12x. Since we needed to get -11x, this means the remaining 1x is positive and the 12x is actually -12x.
Therefore, our quadrinomial gets factored to (x - 3)(x - 3)(2x + 1) or (x - 3)²(2x + 1).
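Purely as an optional check on the reasoning above (again a SymPy sketch, not part of the original argument), the factorization can be verified in both directions:

from sympy import symbols, expand, factor

x = symbols('x')
poly = 2*x**3 - 11*x**2 + 12*x + 9
print(factor(poly))                    # gives (x - 3)**2*(2*x + 1)
print(expand((x - 3)**2*(2*x + 1)))    # gives 2*x**3 - 11*x**2 + 12*x + 9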
Word of Caution
I'm still exploring this for other cases (looking at x³ + 27, for example), and what imaginary numbers would look like using this model. These methods work great as a way to draw students in, but some special cases will probably require more space than I could lend it here.
Also, this was my rebellion against the cubic formula which, as many mathematicians know, makes little sense to introduce in the classroom.
Let me know what you think in the comments below. |
The key word here is 'possible'. Regardless of whether or not it is ever discovered, given the modern understanding of the relation between space and time (relativity), it is more than probable that a method of transportation across dimensions is available, given our resources.
Now whether or not humanity is -clever- enough to develop these concepts in the allotted time is up for debate. After all, instantaneous space maneuvering is already an observable phenomenon, but unfortunately, it requires an unprecedented amount of gravitational force (e.g. mass) in a proportionally small volume of space.
So the theory of light-speed travel came into play. However, strangely enough, this theory seems even less probable than the theorized 'wormhole'. And then there remains the issue of time relativity: increasing velocity toward the speed of light slows one's perception of "time" closer and closer to a standstill relative to slower-moving observers.
The closest star to us, besides our sun, is Alpha Centauri, which is four light years away. Light travels just below 5.9 trillion miles a year, meaning Alpha Centauri is roughly 23.6 trillion miles away. It would take several hundred thousand years to get there by means of our current shuttles. That is just one star of 200+ billion in our galaxy, the Milky Way, which is between 100 and 120 thousand light years across. The nearest major galaxy to us is Andromeda, which is 2.3 million light years away. We haven't even put a person on Mars. Mars at its closest is 33.9 million miles away, and 250 million at its furthest. The only way we will be able to travel across our galaxy, let alone to another galaxy, is if they ever find a way of making the Alcubierre Drive, or something similar, possible. As for now, it will be hundreds if not thousands of years before that is possibly a reality.
The expansion of the tropical belt towards the North Pole may be due to human-made factors like black carbon and tropospheric ozone, says a new study.
In the Northern Hemisphere, the main cause appears to be black carbon and tropospheric ozone pollution, whereas in the Southern Hemisphere, the cause of tropical expansion was earlier linked to the depletion of stratospheric ozone.
The lead author of the study, Robert J. Allen, notes that continuing expansion will have large-scale impacts on atmospheric circulation worldwide.
“If the tropics are moving poleward, then the subtropics will become even drier,” Allen said. “If a poleward displacement of the mid-latitude storm tracks also occurs, this will shift mid-latitude precipitation poleward, impacting regional agriculture, economy, and society.”
According to recent observations, the tropics have been widening by 0.7 degrees of latitude every decade, with global warming responsible for some, but not all, of the tropical expansion.
“Both black carbon and tropospheric ozone warm the tropics by absorbing solar radiation,” Allen explained. “Because they are short-lived pollutants, with lifetimes of one-two weeks, their concentrations remain highest near the sources: the Northern Hemisphere low- to mid-latitudes. It’s the heating of the mid-latitudes that pushes the boundaries of the tropics poleward.”
Black carbon aerosols are tiny particles of carbon created from the burning of biomass, and incomplete fossil fuel combustion, such as in diesel engines. Tropospheric ozone is a pollutant generated from volatile organic compounds (VOCs) reacting with sunlight.
“Greenhouse gases do contribute to the tropical expansion in the Northern Hemisphere,” Allen said. “But our work shows that black carbon and tropospheric ozone are the main drivers here. We need to implement more stringent policies to curtail their emissions, which would not only help mitigate global warming and improve human health, but could also lessen the regional impacts of changes in large-scale atmospheric circulation in the Northern Hemisphere.”
As the tropics spread further, they also carry wind and precipitation patterns with them, potentially drying out the tropics relative to their current state.
“For example, the southern portions of the United States may get drier if the storm systems move further north than they were 30 years ago,” he said. “Indeed, some climate models have been showing a steady drying of the subtropics, accompanied by an increase in precipitation in higher mid-latitudes. The expansion of the tropical belt that we attribute to black carbon and tropospheric ozone in our work is consistent with the poleward displacement of precipitation seen in these models.”
We should implement policies to reduce the emissions of greenhouse gases, tropospheric ozone, and black carbon that are driving the tropical expansion before the tropics expand even more.
The Aztecs, who referred to themselves as the Mexica, extended throughout much of central Mexico and existed from the 14th century until the 16th century when they were conquered by Spanish conquistadors led by Hernan Cortés. However, to understand the Aztec Empire, it is first important to understand their connections to the other Mesoamerican people that came before them and the influences that these people had on the Aztec civilization. One of the main connections between the Aztec and the other societies of Mesoamerica can best be seen in art.
The Aztec Empire is famous for many of its features including the amazing art and artistic objects that the Aztec people created. At its core, Aztec art was heavily influenced by the religious and cultural practices of the Aztec people. With that said, the Aztec religion and culture were based on earlier Mesoamerican civilizations, and thus Aztec art shared many similarities with the rest of Mesoamerica.
For instance, the Aztec considered themselves to be the successors to the earlier Toltec. In fact, the Aztec admired the Toltec for many different aspects, including: art, architecture, craftsmanship and culture. Some historians have questioned whether or not the Aztec people were the descendants of the earlier Toltec society, but this suggestion has also been made about other earlier Mesoamerican civilizations, including the Teotihuacan. Regardless, the Toltec language was Nahuatl, which was the same as the Aztec. As well, the Nahuatl word for Toltec, in the Aztec society, came to mean ‘artisan’ in reference to their view that the Toltec were the height of culture, art and design in Mesoamerica.
Aztec art is seen in many of the objects and structures that the Aztec people used on a daily basis. For example, Aztec clothing, pottery, jewelry, temples, and weapons contained artistic styles. More specifically, the Aztec were known to use bright colors and vivid imagery to convey their culture and religion on these objects. Common materials used to create these objects included: feathers (especially from the quetzal bird), shells, gold, silver, glass beads, and other gemstones.
As stated above, Aztec religion and gods were central to Aztec art. As such, much of the surviving Aztec art is based on different Aztec gods. For instance, the ‘Tlaloc Vessel’ is a ceramic pot that was discovered in the ruins of the Templo Mayor (Aztec Temple) in Tenochtitlan. Historians believe that the pot dates from around 1470. It shows a depiction of the Aztec god Tlaloc. Tlaloc was an important god in Aztec religion. In Nahuatl, the Aztec language, Tlaloc translates to ‘earth’ and modern historians interpret the name as meaning ‘he who is made of earth’. The Aztecs considered him to be the god of rain, earthly fertility and water. He was a popular god throughout the Aztec Empire and widely recognized as a ‘giver of life’. The ‘Tlaloc Vessel’ is significant in Aztec art because it shows the craftsmanship of the Aztec people, as well as their use of bright colors.
Symbolism was another important aspect of Aztec art. For instance, the natural world featured prominently in different pieces of Aztec art. Several common examples include: jaguars, frogs or toads, eagles, shells, serpents and more. More specifically, in the ruins of the Templo Mayor, a pair of frog statues was discovered which historians have referred to as the ‘Frog Altar’. The sculptures are said to have been created for the god Tlaloc and are meant to represent water, for which Tlaloc was related. As stated above, the serpent was another important symbol in Aztec art. This is best seen in different representations of the god Quetzalcoatl. Quetzalcoatl, whose name means ‘feathered serpent’, was another main god of the Aztec and played a significant role in Aztec history. For instance, he was considered the god of wind and wisdom or learning. Quetzalcoatl was an important god throughout Mesoamerican history and societies and was not just related to the Aztecs. For example, there is evidence of the celebration of Quetzalcoatl by the Teotihuacan people near the 1st century AD. Furthermore, a ‘feathered serpent’ was an important figure of many different Mesoamerican cultures in the centuries that followed. These other cultures referred to him in other names, but the imagery of a feathered serpent was always constant.
Some of the most beautiful Aztec art that remains today are the different mosaics. These are often created with many small pieces of stone, shells or glass and generally depict different Aztec gods or important figures. The ‘Mosaic Skull of Tezcatlipoca’ from the British Museum is one of the best examples of this. Tezcatlipoca was a significant god in Aztec religion. His name is translated as ‘smoking mirror’ in the Nahuatl language of the Aztec and he is often associated with several different concepts, including: the night sky, night winds, hurricanes, the north, jaguars, obsidian, and war. In Aztec tradition Tezcatlipoca was considered to be an opposite and rival to Quetzalcoatl. The ‘Mosaic Skull of Tezcatlipoca’ was made from an actual human skull and had the back portion removed to allow it to be worn as a mask. For example, it had deer-skin straps to allow it to be worn, along with a jaw that was hinged so it could be moved. The surface of the skull was decorated in several different types of materials, including: blue turquoise and black lignite. Iron pyrite and white shells were used for the eyes while the nose was covered in a red oyster shell. The skull was likely worn in ceremonies honoring the god Tezcatlipoca. Overall, the skull showed the Aztecs artistic skill and the importance of gods in Aztec daily life.
Another important factor in Aztec art was the different surviving Aztec codices. These are books containing Aztec writing that were created before, during and after the arrival of Europeans during the Age of Exploration. The codices are important to our modern understanding of the Aztec because they are some of the best first-hand accounts of Aztec history. The codices were not books in the same sense as we understand them today. Instead, they were more like long, folded sheets that were made out of deer skin. As well, the Aztec had no known alphabetic written language, and instead displayed their ideas in glyphs or pictures. This means that the Aztec wrote using images that represented the different words or themes they wished to express. Most of the surviving Aztec codices are from the timeframe around the European colonization of central Mexico, with very few remaining from before the arrival of European explorers. For example, the Florentine Codex is one of the best examples of an Aztec codex. It was created by the Spanish Franciscan friar Bernardino de Sahagún from about 1545 until 1590. Sahagún worked with different Nahua men from the region to research and organize his findings in the Florentine Codex. In all, the work ended up filling twelve books totaling over 2400 pages. It also included over 2000 pictograms drawn by Mesoamerican artists that depict the history and life of the Aztec people. While Sahagún titled his work ‘The Universal History of the Things of New Spain’, it is more commonly known today as the Florentine Codex because it is currently located in Florence, Italy. These surviving codices display the Aztec artistic representation of different aspects of their life, such as: cultural traditions, religious traditions, gods, ceremonies, historical events and more. |
Ghana has a rich history of education, with a strong emphasis on traditional teaching methods that prioritize rote memorization and textbook learning. However, with the rapid advancement of technology, it is becoming increasingly clear that traditional teaching methods may not be enough to prepare students for the challenges of the modern world. One of the most important tools for modern learning is the smartphone, a device that has transformed the way we access information, communicate, and learn. In this article, we will explore the reasons why Ghana should allow SHS students to use smartphones and other tech tools in class and the benefits this could bring.
Access to Information:
One of the most important benefits of allowing smartphones in the classroom is the increased access to information that they provide. With a smartphone, students can instantly access a wealth of educational resources, including videos, podcasts, e-books, and online courses. This can help students deepen their understanding of a wide range of topics, from science and mathematics to literature and history. Additionally, smartphones can be used to research topics in real-time, allowing students to quickly find answers to their questions and supplement their learning.
Collaboration and Communication:
Another key benefit of allowing smartphones in the classroom is the enhanced collaboration and communication that they enable. Smartphones can be used to facilitate group work, allowing students to share ideas and work together on projects in real time. Additionally, smartphones can be used to communicate with teachers and classmates, whether it is to ask questions, share feedback, or receive support. This can help create a more collaborative and engaging learning environment, one where students can learn from each other and benefit from the collective knowledge of their peers. Some may argue that students may misuse this opportunity and use it as an avenue to engage in various indiscipline acts. Adolescents are naturally curious, adventurous, and explorative.
Questions I always ask opponents of the use of smartphones at the SHS level are: How has the decision to ban smartphones helped in terms of discipline at the SHS so far? Do we have a 100% success rate of SHS discipline? One interesting fact is these same students who are not allowed to use these devices at school are the same students posting nudes on social media and engaging in insubordination toward the elderly. I have realized from experience in handling adolescents that the more you hide these things from them, the more they look for them elsewhere. That is why I believe it should be introduced but with the necessary guidelines and policies.
To Be at Par with the Trending World
With a smartphone, students can access a wide range of educational apps and tools, many of which are designed to adapt to the needs of the student. This can help students learn at their own pace, with content that is tailored to their individual strengths and weaknesses. One important point is that the world is moving at a fast pace toward a state where tech is used for virtually everything. The traditional teaching methods that rely solely on chalkboard illustrations need to change.
During my SHS days as a math student, the only method I experienced was the teacher solving a bunch of questions on the board, and we were to follow the procedure and use it to solve other similar questions. This is still in practice in most schools in Ghana and other countries. The result of this is that students gain some sort of procedural fluency without any conceptual understanding.
I have had the opportunity to observe how high school students learn in Canada, and the approach is worthy of emulation. First of all, the class was filled with iPads, laptops, and smartphones. The students knew when to use them and when to put them away, indicating that there were guidelines and rules for the use of these devices. Tech was present in every aspect of the activities. I wish I could share pictures of how organised the class was in terms of using these devices, but I cannot do that because I would need consent from the students, teachers, principal, and parents, and apparently they do not allow that. That is how serious they are about privacy-related issues. I was very surprised when a teacher teaching how to graph quadratic functions asked the students to take out their iPads and smartphones. The teacher used tools such as Desmos to teach the quadratic graphs and GeoGebra for other aspects of math.
Image of Desmos Tool (Graph of Y=X²)
And in real-life situations, these students will be arguably better suited for the world of work than those who were solely exposed to the traditional methods. This is because these tech tools used in the Canadian classrooms serve as a base point for them when it comes to using the engineer’s AutoCAD Civil 3D, the accountant’s Tally software, or the business marketer’s CRM software, to name a few. These tools are used in industry, and the world is moving in the direction of technology. This is why I think policymakers in Ghana’s education system should rescind their decision to restrain high school students from using these devices in class.
In conclusion, there are many compelling reasons why Ghana should allow SHS students to use smartphones in class, from increased access to information and enhanced collaboration to keeping pace with a world that is moving toward technology. Smartphones can help create a more modern and effective learning environment that better prepares students for the challenges of the modern world. While there are certainly challenges and risks associated with allowing smartphones in the classroom, with the right policies and guidelines in place, these can be minimized and managed. Ultimately, the benefits of allowing smartphones in the classroom far outweigh the risks, and it is time for Ghana to embrace this important tool for modern learning. |
Windsor Castle Built
The Normans (1066 - 1215) built the first castles in the motte and bailey style, and later stone castles for better protection.
The Normans invaded England in 1066 and after killing England's King, they set about taking over the whole country. In order to do this, they needed to build defences to protect themselves while they advanced across the rest of the country.
The Normans built motte and bailey castles to begin with. These castles were quick to build, using just earth and timber.
Later, once William the Conqueror, the leader of the Normans, had firmly established his rule in England, the Normans built huge stone keep castles. They were built to last a long time and many can still be seen today.
The layout of the stone castles remained very similar to the wooden castles. The motte and bailey became the keep and bailey.
Windsor Castle was the first in a series of nine castles that England's King William built around London. |
Scientists announced that an experimental gene therapy may have cured babies of a rare genetic disorder, commonly known as “bubble boy disease,” which causes male babies to be born with little or no immune system.
The genetic disorder called X-linked severe combined immunodeficiency (XSCID) and colloquially referred to as “bubble boy” disease, is caused by mutations in a gene on the X chromosome called IL2RG. The mutation causes male babies to be born without the capability of producing immune cells, making those affected highly susceptible to life-threatening infections by viruses, bacteria, and fungi. Even catching a common cold can be fatal. Unless properly diagnosed and placed in a sterile environment, most individuals born with XSCID die within 2 years.
The rare condition that affects only 40 to 100 babies each year in the United States became widely known to the public from news and a 1980s movie about David, nicknamed the Boy in the Bubble. After David’s brother died of the same disease, doctors placed him in a plastic isolation unit that sheltered him from the outside world. He basically lived in a plastic bubble for nearly 13 years until he died in 1984 following an unsuccessful bone marrow transplant — at the time, his only chance at restarting his immune system.
Bone marrow transplant is still the most effective therapy for XSCID, however, the procedure is risky and requires a match from a sibling. In 1990, a form of SCID became the first human disease treated by gene therapy when scientists transferred a normal gene into the defective white blood cells of two young girls. These patients are still alive today and continue to participate in on-going studies. However, XSCID, the version that only affects males, has proved a lot more difficult to treat — until now.
Researchers at St. Jude Children’s Research Hospital and the National Institute of Health (NIH) performed an experimental therapy on 8 children, aged 2 months to 14 months, who could not find a donor match for their bone marrow transplant. The research team engineered a lentivirus vector from a de-activated HIV virus, which also included insulators that blocked the activation of certain genes in order to prevent leukaemia — a side effect of a previous gene therapy experiment. Bone marrow was collected from the infants, who then received 2 days of low-dose busulfan chemotherapy in order to make space for new cells to grow. The bone marrow with the engineered virus was then reinfused into the baby boys.
Within 3 months, immune cells were present in the blood of all but one patient, who had to undergo a second dose of therapy. All three main types of immune cells (T-cells, B-cells, and natural killer cells) were produced. What’s more, most patients responded to vaccination and now seem to be living a normal life.
“A diagnosis of X-linked severe combined immunodeficiency can be traumatic for families,” said Anthony S. Fauci, director of National Institute of Health (NIH)’s National Institute of Allergy and Infectious Diseases (NIAID). “These exciting new results suggest that gene therapy may be an effective treatment option for infants with this extremely serious condition, particularly those who lack an optimal donor for stem cell transplant. This advance offers them the hope of developing a wholly functional immune system and the chance to live a full, healthy life.”
Although it may be still early to claim that this procedure is a cure, the fact that all three types of immune cells were restored suggests that this may be the case. It has now been more than two and a half years since the treatment began and there is no indication of leukaemia in the patients. The researchers are now still closely monitoring the children to see how durable the treatment is and to catch any signs of long-term side effects.
“The broad scope of immune function that our gene therapy approach has restored to infants with X-SCID — as well as to older children and young adults in our study at NIH — is unprecedented,” said Harry Malech, chief of the Genetic Immunotherapy Section in NIAID’s Laboratory of Clinical Immunology and Microbiology. “These encouraging results would not have been possible without the efforts of my good friend and collaborator, the late Brian Sorrentino, who was instrumental in developing this treatment and bringing it into clinical trials.”
The researchers at St. Jude say they might use the same strategy for other genetic disorders, such as sickle cell disease. |
Learn Something New Every Day – Lecture 20 – Death Investigators
Oxygen-deprived blood is a dark red colour. When it is well oxygenated, it is a brighter, more vibrant red.
Venous blood may look blue because of light diffusion through skin, and livor mortis (lividity) makes the pooling of blood in a dead body look purple/blue for the same reason. So you’d expect someone suffering from carbon monoxide poisoning (where the blood fills with carbon monoxide and so has no room for oxygen) to have deep, dark red blood, right?
Not so. Their blood is cherry red (and victims of carbon monoxide poisonings take on a cherry red skin colour making them easy to diagnose).
This is because haemoglobin binds much more strongly to CO than to O2, so as far as the blood is concerned it’s fully loaded (and therefore bright red), but oxygen can’t get a look-in and so the cells start to die.
‘Trails of Evidence: How Forensic Science Works’ is a Great Courses DVD lecture series |
Learning other languages and understanding other cultures is a 21st Century skill that is vital to success in the global environment. Language education and cultural awareness not only contribute to students' career and college readiness; they also help develop the individual as they take on a new and more invigorating view of the world.
At the Elementary level, we are pleased to offer before and after school Spanish or French through Fun Fluency. Students will be clustered into a K-2 or 3-5 level as they explore learning a new language through engaging lessons throughout the year. Please visit the Fun Fluency website for more information about classes offered at each elementary school.
At the Middle School level, incoming 6th-grade students select a World Language that will be a part of their daily schedule. Typically, students will begin at the Introductory level and move on to the Developing level in 7th grade and the Transitional level in 8th grade (some exceptions may apply). South Middle School offers French, Italian, and Spanish; and Thomas Middle School offers French, German, and Spanish.
The videos below feature District 25 students explaining why learning a language is important to them, how they selected a language, and other fun facts about learning a new language!
World Languages Coordinator
World Language Letter to 5th Grade
World Language FAQ
South Middle School Languages
Thomas Middle School Languages
22-23 Fun Fluency Flyers:
The American Council of Teaching Foreign Languages (ACTFL) explains how students advance toward greater proficiency. The District 25 World Language teachers use the World-Readiness Standards for Learning Languages to develop high quality and authentic curriculum for the various proficiency levels. The national standards focus on five goals known as The Five C's: Communication, Cultures, Connections, Comparisons, and Communities.
Other standards that impact foreign language curriculum design are:
The Illinois Learning Standards for Foreign Languages
- State Goal 28: Communication
- State Goal 29: Culture and Geography
- State Goal 30: Connections and Applications
Curriculum: The curriculum for each year of study is divided into four thematic units. Each unit is guided by essential questions, allowing students to self-assess using "can-do" statements. Students are exposed to authentic materials throughout the unit and have a variety of learning experiences. Students are assessed throughout the unit on their ability to use the language and each unit culminates in a series of performance tasks designed to show what students know and can do with the language.
Modes of Communication: Learning a language involves communicating through listening, speaking, reading, and writing. The modes of communication explain how the individual skills are used; each mode is described here: Modes of Communication |
White-nose syndrome is a disease caused by a pathogen called Pseudogymnoascus destructans: a cold-loving fungus introduced from Europe that grows on the skin of bats when their body temperature drops during hibernation. Lesions on the wings of infected bats affect their ability to retain water, forcing them to wake up to rehydrate. These frequent disturbances eat away at fat stores, causing the bats to starve before spring arrives. This disease was first detected in Canada in 2010, and by 2015 it had caused a 94 per cent overall decline in hibernating myotis bats in Nova Scotia, New Brunswick, Ontario and Quebec. The sudden and dramatic impact of white-nose syndrome led several vulnerable bat species to be added to the federal Endangered list in 2014. In response, Wildlife Preservation Canada and the Hunter Foundation funded a feasibility assessment to determine whether captive management could address this growing threat. The research explored several key questions: Can we find suitable numbers of bats to establish a conservation breeding program? Do existing facilities like zoos and wildlife rehabilitation centres have the infrastructure and expertise to keep large bat colonies and reintroduce them to the wild? And if successful reintroductions are possible, will it actually mitigate the effects of white-nose syndrome? |
Ancient Greek painting, from the most ancient times until the decline of Hellas, can be considered an art closely connected with other types of art. The unity of depiction and material pattern in painting, like that of sound and word in poetry, reveals an essential feature of ancient Greek culture, namely its universalism. As Tarnas noted (1991), the ancients of the so-called classical period valued a unified cosmos capable of prevailing over destructive chaos (p. 36). Therefore, it is hardly possible to judge ancient Greek painting by vases and wall paintings alone. The examples of vase painting that have survived to modern times are linked with the names of such artists as Exekias, Klytios, and Polygnotus (Thomas, 1988, p. 92). That is why the famous work by Polygnotus named Helen abducted by Theseus stands at the center of attention in the current essay. According to Hynson (2006), the artist Polygnotus is often considered one of the most talented representatives of red-figure vase painting of the late classical era (p. 87). The painting is dated approximately 430 – 420 B.C. and belongs to the red-figure technique. It is currently part of the exhibition at the National Archaeological Museum, Athens, Greece.
Firstly, the content of the painting is traditional for the time of its creation. At first glance, the painting opens up an entire mythological story. Showing four characters significant in the mythological consciousness of the ancients (Helen, Theseus, Pherios, Phoiba), the painting may be called a preface to the history of the Trojan War. From this point of view, the picture maintains a typical feature of ancient Greek painting, that is, the depiction of mythological gods and heroes, as Tarnas showed in his study (1991, p. 52). All four characters are depicted in motion and appear relatively symmetric and realistic. It is possible to reconstruct real people, their behavior, their clothes and gestures, and the traits of their characters, although the painting is not overwhelmed with excessive details. This is, so to say, the realistic intention of the artist, which may even have some didactic connotations and which to some extent contrasts with the ornament underneath the painting itself, an ornament that undoubtedly has a merely decorative function with no appeal to history or mythology.
Secondly, the structure of the painting is rather monolithic, with a united foreground and background. It makes no use of perspective either. Looking at the painting, it is difficult to judge the position of the painter when he was working on it. However, one possible explanation for this neglect of perspective, which does not bring any disproportion into the painting, is the round shape of the vase. The entire space of the painting can be divided into two parts, each of which includes a pair of characters arranged as if each figure attends only to the other member of the pair. Thus Theseus is moving forward and pushing Helen, who is looking back at him and also moving forward. Pherios is slightly bowed toward the chariot and looks at Phoiba, who looks back at him. As for the depiction of the human body, Polygnotus worked out the muscles well, especially the males’. The legs and arms of Theseus and Pherios have distinct patterns, while the arms of the females are rather plump. It is worth noticing that the faces of all the characters have relatively sharp features, including pointed noses. Conversely, the eyes are not as distinct as they might have been, so the facial expressions are unclear and the painting does not belong to portraiture in the contemporary meaning of the word. As for painting technique, the artist uses thin lines that bring the painting close to carving.
To sum up, Polygnotus in his Helen abducted by Theseus uses a popular plot telling of the beginning of the Trojan War. He is confident in depicting the details of the human body and clothes, so that looking at his painting more than two thousand years later it is still possible to build a portrait of the ancients. Using thin, pointed lines, Polygnotus depicts healthy people: men with distinct muscles, a chariot and a spear, and beautiful women. Thus the artist shares the view of ideal people that dominated in the classical period in ancient Greece. |
Telescopes are one of the oldest pieces of technology still in use today. But how do they actually work? In this post, we’ll take a look at the basics of how telescopes work and what they’re used for. We’ll also explore some of the different types of telescopes available on the market. So, if you’re interested in learning more about this amazing technology, keep reading!
What Are Telescopes And What Do Telescopes Do?
Telescopes are devices using a set of lenses or a combination of curved mirrors and lenses, which magnify distant objects and make them appear closer. They are used in astronomy and astrophysics to gather information about distant planets, stars, galaxies, and other astronomical objects. There are different types of telescopes, including refracting telescopes, reflecting telescopes, and catadioptric (compound) telescopes. Telescopes can also be classified based on their size, with large telescopes collecting more light than small ones. Camera stores usually carry a variety of telescopes for different purposes.
What Are The Steps In The Making Of A Telescope?
The history of the telescope spans centuries and there have been many advances in telescope technology over time. Today, telescopes are used for a variety of purposes, from amateur astronomy to professional research. But how are these amazing instruments made? Here is a look at the steps involved in the making of a telescope.
First, the optics for the telescope must be created. This involves grinding and polishing lenses or mirrors to the correct shape. Once the optics are complete, they are placed into the telescope tube. The tube is then aligned so that the optics are pointing in the right direction.
Next, a mount is attached to the telescope tube. The mount helps to keep the telescope steady while in use. Finally, any additional accessories, such as eyepieces or finder scopes, are added.
Once all of these steps are complete, the telescope is ready to be used! With a little practice, anyone can learn to point the telescope in the right direction and get a great view of the night sky.
Telescopes: How Do They Work?
Have you ever wondered how telescopes work? If so, you’re not alone! Telescopes are one of the most popular items sold in camera stores, and yet many people don’t actually know how they work.
In simple terms, a telescope is basically just a very powerful set of binoculars. The lenses in a telescope are much larger than those in a pair of binoculars, which allows them to gather more light. This allows you to see things that are much farther away than you could with the naked eye.
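If you want to put rough numbers on that idea, two standard textbook optics relationships are often used (this is a general sketch, not something stated in this article): the magnification is approximately

magnification = focal length of the objective / focal length of the eyepiece

and the light-gathering power grows with the square of the aperture diameter, so a telescope with a 200 mm objective collects roughly (200 / 100)² = 4 times as much light as one with a 100 mm objective.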
There are two different types of telescopes: refracting and reflecting. While reflecting telescopes use mirrors to focus light, refracting telescopes employ lenses. Astronomical objects like stars and planets can be observed with either type of telescope.
If you’re interested in learning more about how telescopes work, there are plenty of resources available online and in libraries. With a little bit of research, you’ll be an expert on telescopes in no time!
What Is The Function Of A Refracting Telescope And How Is It Different From Catadioptric Telescopes?
Telescopes come in many different shapes and sizes, each designed for a specific purpose. But what exactly is a refracting telescope, and how does it differ from other types of telescopes?
A refracting telescope is a type of optical telescope that uses a lens to gather and focus light. The lens is usually located at the front of the telescope, and the light is then transmitted to an eyepiece or camera at the back. This design is simple and effective, but it has one major downside: chromatic aberration. This occurs when different colors of light are focused at different points, resulting in a fuzzy or distorted image. To combat this problem, many refracting telescopes use multiple lenses made of different types of glass.
Catadioptric telescopes, on the other hand, use both lenses and mirrors to gather and focus light. This design is more complex, but it eliminates chromatic aberration. In addition, catadioptric telescopes often have a longer focal length than refracting Telescopes, making them ideal for astronomical observation. However, they are generally not suitable for terrestrial viewing due to their long focal length and narrow field of view. So if you’re looking for a telescope to help you stargaze, be sure to head to your local camera store and pick up a catadioptric telescope!
What Is A Reflecting Telescope? How Is A Reflecting Telescope Different From Refracting Telescope?
Telescopes come in many different shapes and sizes, but they all have one common goal: to allow us to see distant objects in greater detail. There are two main types of telescopes: reflecting and refracting. Reflecting telescopes use mirrors to gather and focus light while refracting telescopes use lenses. Both types of telescopes can produce stunning images, but each has its own advantages and disadvantages.
Reflecting telescopes are typically larger and heavier than refracting telescopes, but they often provide a wider field of view and a more detailed image. Refracting telescopes, on the other hand, are typically smaller and lighter, making them more portable. They also don’t require as much maintenance as reflecting telescopes. Ultimately, the type of telescope you choose depends on your specific needs and preferences.
What Are The Common Types Of Mounts Generally Used For Telescopes?
Of the four types of telescope mounts, the Dobsonian is the simplest and most popular. It uses a friction fit to keep the telescope in place, which makes it easy to use and adjust. The equatorial mount is another common type, and it uses gears to track objects in the night sky. The German Equatorial Mount (GEM) is a variation of the equatorial mount that is often used for astrophotography.
Finally, the computerized mount is the most advanced option, and it uses motors to track objects automatically. No matter which type of mount you choose, be sure to visit the Diamonds Camera website for all your telescope needs.
Telescopes are one of the most important pieces of equipment in astronomy. They allow us to see things that we can’t see with our eyes alone. But how do they work? And what do they do? In this post, we have discussed the basic explanation of how telescopes actually work and what their main purposes are. Thanks for reading! |
Dr. Seuss was the pen name of Theodor Seuss Geisel, who wrote more than 60 published books. Some of these books had themes of waste and pollution. You can use other Seuss books as jumping-off points for science lessons or themes of food safety.
The waste of natural resources is the theme of "The Lorax." The story is about an entrepreneur called the Once-ler, who chops down Truffula trees to make sweater-type garments called Thneeds. The business thrives but the environment suffers. A creature named the Lorax warns of the environmental disaster that will come if the Once-ler continues to pursue his unsustainable manufacturing practices. "The Lorax" is an effective story to weave into lessons about reuse. One activity to pursue, if you have not yet implemented it, is to create areas for recycling in the classroom with bins for juice boxes, paper and glass. Encourage students to bring their lunches in lunch boxes and reusable bags instead of paper bags. Asking children to bring items that have been recycled into the classroom is another way to interest students in recycling.
10 Apples Up on Top
"10 Apples Up On Top" is about animals that balance apples on their heads. To fit in with the theme, you can plant apples seeds in foam cups. You will need cups for each student, dirt and apple seeds. Ask the children to fill the cups with dirt and press the apple seeds into the cups. Water the seeds and keep the dirt moist. Chart the growth of the seeds throughout the month.
Horton Hears a Who
"Horton Hears a Who" is a story about an elephant that finds a dust speck containing tiny creatures called Whos. Horton becomes their champion for survival despite opposition from the other animals in the jungle. This story offers an interesting opportunity to see what creatures and matter create dust. You will need microscopes, flat glass slides, cover slides, pipettes and dust collected from around the classroom. Preparing the slides involves placing dust on a flat slide, piping a drop of water on the slide and putting the cover slide over the water-held dust. Put the prepared slide under the microscope and record on a chart what is seen in the microscope.
Horton Hatches the Egg
"Horton Hatches the Egg" is a story about Horton the Elephant, who is tricked by a bird named Mayzie into sitting on her egg while she flies off to have fun. The story follows the tribulations of the elephant as he keeps his word to Daisy “one hundred percent” to sit on the egg. The story could be a start to studying the incubation of an egg and the life cycle of a baby chick, as well as other newborn animals.
Gerald McBoing Boing
"Gerald McBoing Boing" is a story about a little boy who speaks in noises instead of words. The story can be used to explore the properties of sound. Students can find out how the larynx works by a simple experiment that requires no equipment but themselves. Ask the children to put their hand midway and firmly on their throats and say, “Ahhh!” very loudly. They should feel the vibration of the sounds on their hands.
Green Eggs and Ham
"Green Eggs and Ham" is about an unnamed protagonist who is persuaded by a creature named Sam into eating eggs and ham that are green. This story can be expanded to the theme of food safety. Buy enough bananas for each child to have one, and ask the children to place the fruit into plastic bags. Label bags with each child's name. Let the bananas sit on a counter for a few days and record the results on a chart day by day for a week. Dispose the experiments into the garbage or a compost bin.
- Jupiterimages/BananaStock/Getty Images |
ABOUT THIS BOOK
This book is for educators, film lovers, and anyone who sees how important media literacy is in our current culture. The goal of this book is to help teachers integrate The Third Dimension of literacy into their existing curricula so that students are better able to analyze what they watch and responsibly create audio-visual media.
What is the Third Dimension of Literacy? Consider the first dimension of literacy a word, and the second dimension of literacy the stringing together of words to form a complete thought. The third dimension of literacy is the visualization of that complete thought by adding audio and video. So viewing audio-visual media becomes the third dimension of reading and making audio-visual media becomes the third dimension of writing. From an educational standpoint the classic essay transforms into a documentary film and the short story or creative writing project becomes a narrative, silent or experimental film.
Take Two Film Academy has taught thousands of students how to responsibly create and consume audio-visual media. Drawing on years of expertise, this book is a practical how-to guide for educators interested in bringing the Third Dimension of Literacy into the classroom.
- 8x8 softcover
- 168 full-color pages
- Printed on premium materials
Table of Contents:
Part 1: Fundamentals
Chapter 1 - What is the Third Dimension of Literacy?
Understanding filmmaking as the next dimension of literacy; the Take Two Method for Project-Based Learning; and what to expect from this book.
Chapter 2 - Preparation
The importance of buy-in from administration; the equipment, software, and resources you will need and how to set up your classroom; and how to get your students organized and respectful for filmmaking.
Chapter 3 - Lesson Plans
How to create customized lesson plans for both narrative/silent film and documentary film projects.
Chapter 4 - Resources, Software, and Safeguards
Screenwriting and editing software options; understanding intellectual property, including where to find royalty-free resources for use in films; distinguishing between credible and non-credible sources; and a note on cyber-bullying.
Part 2: Pre-Production
Chapter 5 - Viewing: The New Reading
Passive versus active media consumption, and how to keep students engaged during viewings.
Chapter 6 - Documentary Film: The New Essay
Understanding the different types of documentaries and the steps required to produce them, from creating your rubric to script-writing.
Chapter 7 - Narrative and Silent Film: The New Creative Writing Assignment
Understanding what makes a good narrative film and the steps required to make one, from storyboarding to screenplay.
Part 3: Production
Chapter 8 - The Grammar of Filmmaking
How to use the grammar of filmmaking to get the right shot for the right moment and effect, including an educational production game.
Chapter 9 - Documentary Production
What it takes to produce a documentary film, from recording voice-over to recording interviews.
Chapter 10 - Narrative Production
A guide to student roles during production, including an educational game; tips for shooting and for recording audio; and how to back up your students' work.
Part 4: Post-Production
Chapter 11 - Editing
Key features of editing software; managing your class workflow; backing up data and watching dailies; and an overview of typical editing tools.
Part 5: Maximize Your Impact
Chapter 12 - Sharing and Community Impact
An overview of hosting your screening and the importance of sharing the film. |
Fun Classroom Activities
The 20 enjoyable, interactive classroom activities that are included will help your students understand the text in amusing ways. Fun Classroom Activities include group projects, games, critical thinking activities, brainstorming sessions, writing poems, drawing or sketching, and more that will allow your students to interact with each other, be creative, and ultimately grasp key concepts from the text by "doing" rather than simply studying.
1. Endless Personalization
Have the students create an identity of what member of the Endless family they would be. They can choose one of the identities revealed in this book, or come up with a new one. Make sure they know to choose a new name that begins with D.
2. Puppet Show
Split the class into groups and assign each group one portion of this story. Have each group create a puppet show of their section to present to the...
This section contains 730 words (approx. 3 pages at 300 words per page) |
Your computer’s Central Processing Unit (CPU) and Graphics Processing Unit (GPU) interact every moment you’re using your computer to deliver you a crisp and responsive visual interface. Read on to better understand how they work together.
Photo by sskennel.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-drive grouping of Q&A web sites.
SuperUser reader Sathya posed the question:
Here you can see a screenshot of a small C++ program called Triangle.exe with a rotating triangle based on the OpenGL API.
Admittedly a very basic example, but I think it’s applicable to other graphics card operations.
I was just curious and wanted to know the whole process from double clicking on Triangle.exe under Windows XP until I can see the triangle rotating on the monitor. What happens, how do CPU (which first handles the .exe) and GPU (which finally outputs the triangle on the screen) interact?
I guess involved in displaying this rotating triangle is primarily the following hardware/software among others:
- System Memory (RAM)
- Video memory
- LCD display
- Operating System
- DirectX/OpenGL API
- Nvidia Driver
Can anyone explain the process, maybe with some sort of flow chart for illustration?
It should not be a complex explanation that covers every single step (guess that would go beyond the scope), but an explanation an intermediate IT guy can follow.
I’m pretty sure a lot of people that would even call themselves IT professionals could not describe this process correctly.
Although multiple community members answered the question, Oliver Salzburg went the extra mile and answered it not only with a detailed response but excellent accompanying graphics.
Image by JasonC, available as wallpaper here.
I decided to write a bit about the programming aspect and how components talk to each other. Maybe it’ll shed some light on certain areas.
What does it take to even have that single image, that you posted in your question, drawn on the screen?
There are many ways to draw a triangle on the screen. For simplicity, let’s assume no vertex buffers were used. (A vertex buffer is an area of memory where you store coordinates.) Let’s assume the program simply told the graphics processing pipeline about every single vertex (a vertex is just a coordinate in space) in a row.
But, before we can draw anything, we first have to run some scaffolding. We’ll see why later:
// Clear The Screen And The Depth Buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Reset The Current Modelview Matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Drawing Using Triangles
glBegin(GL_TRIANGLES);
    // Red
    glColor3f(1.0f,0.0f,0.0f);
    // Top Of Triangle (Front)
    glVertex3f( 0.0f, 1.0f, 0.0f);
    // Green
    glColor3f(0.0f,1.0f,0.0f);
    // Left Of Triangle (Front)
    glVertex3f(-1.0f,-1.0f, 1.0f);
    // Blue
    glColor3f(0.0f,0.0f,1.0f);
    // Right Of Triangle (Front)
    glVertex3f( 1.0f,-1.0f, 1.0f);
// Done Drawing
glEnd();
So what did that do?
When you write a program that wants to use the graphics card, you’ll usually pick some kind of interface to the driver. Some well known interfaces to the driver are:
For this example we’ll stick with OpenGL. Now, your interface to the driver is what gives you all the tools you need to make your program talk to the graphics card (or the driver, which then talks to the card).
This interface is bound to give you certain tools. These tools take the shape of an API which you can call from your program.
That API is what we see being used in the example above. Let’s take a closer look.
Before you can really do any actual drawing, you’ll have to perform a setup. You have to define your viewport (the area that will actually be rendered), your perspective (the camera into your world), what anti-aliasing you will be using (to smooth out the edges of your triangle)…
But we won’t look at any of that. We’ll just take a peek at the stuff you’ll have to do every frame. Like:
Clearing the screen
The graphics pipeline is not going to clear the screen for you every frame. You’ll have to tell it. Why? This is why:
If you don’t clear the screen, you’ll simply draw over it every frame. That’s why we call glClear with the GL_COLOR_BUFFER_BIT set. The other bit (GL_DEPTH_BUFFER_BIT) tells OpenGL to clear the depth buffer. This buffer is used to determine which pixels are in front of (or behind) other pixels.
Transformation is the part where we take all the input coordinates (the vertices of our triangle) and apply our ModelView matrix. This is the matrix that explains how our model (the vertices) are rotated, scaled, and translated (moved).
Next, we apply our Projection matrix. This moves all coordinates so that they face our camera correctly.
Now we transform once more, with our Viewport matrix. We do this to scale our model to the size of our monitor. Now we have a set of vertices that are ready to be rendered!
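To make the order of those steps concrete, the whole chain can be written out as matrix math (a standard description of the classic fixed-function pipeline, not a quote from this answer). For a model-space vertex v, a ModelView matrix M, a Projection matrix P, and a viewport transform Vp:

v_clip = P * M * v
v_ndc = v_clip / w_clip      (the perspective divide)
v_window = Vp(v_ndc)

Every vertex you submit goes through the same 4x4 matrix multiplications, which is exactly the kind of uniform, highly parallel arithmetic the GPU is built to do quickly.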
We’ll come back to transformation a bit later.
To draw a triangle, we can simply tell OpenGL to start a new list of triangles by calling glBegin with GL_TRIANGLES.
There are also other forms you can draw. Like a triangle strip or a triangle fan. These are primarily optimizations, as they require less communication between the CPU and the GPU to draw the same amount of triangles.
After that, we can provide a list of sets of 3 vertices which should make up each triangle. Every triangle uses 3 coordinates (as we’re in 3D-space). Additionally, I also provide a color for each vertex, by calling glColor3f before calling glVertex3f.

The shade between the 3 vertices (the 3 corners of the triangle) is calculated by OpenGL automatically. It will interpolate the color over the whole face of the polygon.
Now, when you click the window, the application only has to capture the window message that signals the click. Then you can run any action in your program you want.
This gets a lot more difficult once you want to start interacting with your 3D scene.
You first have to clearly know at which pixel the user clicked the window. Then, taking your perspective into account, you can calculate the direction of a ray, from the point of the mouse click into your scene. You can then calculate if any object in your scene intersects with that ray. Now you know if the user clicked an object.
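To make that picking idea a bit more concrete, here is a minimal sketch of how such a ray is often built with the classic utility function gluUnProject (this is an illustrative sketch, not code from Triangle.exe; it assumes <GL/gl.h> and <GL/glu.h> are included, the variable names mouseX/mouseY are made up, and error checking is omitted):

// Grab the matrices and viewport that were used for rendering.
GLdouble modelview[16], projection[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
glGetIntegerv(GL_VIEWPORT, viewport);

// OpenGL window coordinates start at the bottom-left, while mouse
// coordinates usually start at the top-left, so flip the Y value.
GLdouble winX = (GLdouble)mouseX;
GLdouble winY = (GLdouble)(viewport[3] - mouseY);

// Unproject the click once at the near plane (winZ = 0.0) and once at
// the far plane (winZ = 1.0) to get two points in world space.
GLdouble nearX, nearY, nearZ, farX, farY, farZ;
gluUnProject(winX, winY, 0.0, modelview, projection, viewport, &nearX, &nearY, &nearZ);
gluUnProject(winX, winY, 1.0, modelview, projection, viewport, &farX, &farY, &farZ);

// The picking ray starts at the near point and points toward the far point.
// Intersecting this ray with the objects in your scene tells you what was clicked.
GLdouble dirX = farX - nearX, dirY = farY - nearY, dirZ = farZ - nearZ;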
So, how do you make it rotate?
I am aware of two types of transformations that are generally applied:
- Matrix-based transformation
- Bone-based transformation
The difference is that bones affect single vertices. Matrices always affect all drawn vertices in the same way. Let’s look at an example.
Earlier, we loaded our identity matrix before drawing our triangle. The identity matrix is one that simply provides no transformation at all. So, whatever I draw, is only affected by my perspective. So, the triangle will not be rotated at all.
If I want to rotate it now, I could either do the math myself (on the CPU) and simply call glVertex3f with other coordinates (that are rotated). Or I could let the GPU do all the work, by calling glRotatef:
// Rotate The Triangle On The Y axis
glRotatef(amount,0.0f,1.0f,0.0f);
amount is, of course, just a fixed value. If you want to animate, you’ll have to keep track of amount and increase it every frame.
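For completeness, here is a minimal sketch of what that per-frame bookkeeping might look like (illustrative only; drawTriangle and drawFrame are made-up names, and the window/double-buffering glue is left out):

static float amount = 0.0f;   // rotation angle in degrees, kept between frames

void drawFrame()
{
    // Clear The Screen And The Depth Buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Reset The Current Modelview Matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Everything drawn after this call is rotated around the Y axis.
    glRotatef(amount, 0.0f, 1.0f, 0.0f);

    drawTriangle();   // the glBegin/glColor3f/glVertex3f/glEnd calls from earlier

    amount += 1.0f;   // increase the angle a little every frame
    if (amount >= 360.0f)
        amount -= 360.0f;   // keep the value from growing forever
}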
So, wait, what happened to all the matrix talk earlier?
In this simple example, we don’t have to care about matrices. We simply call glRotatef and it takes care of all that for us.
glRotate produces a rotation of angle degrees around the vector (x, y, z). The current matrix (see glMatrixMode) is multiplied by a rotation matrix, with the product replacing the current matrix, as if glMultMatrix were called with the following matrix as its argument:
x²(1−c)+c      xy(1−c)−zs     xz(1−c)+ys     0
yx(1−c)+zs     y²(1−c)+c      yz(1−c)−xs     0
xz(1−c)−ys     yz(1−c)+xs     z²(1−c)+c      0
0              0              0              1

where c = cos(angle), s = sin(angle), and (x, y, z) is the normalized rotation axis.
Well, thanks for that!
What becomes obvious is, there’s a lot of talk to OpenGL. But it’s not telling us anything. Where is the communication?
The only thing that OpenGL is telling us in this example is when it’s done. Every operation will take a certain amount of time. Some operation take incredibly long, others are incredibly quick.
Sending a vertex to the GPU will be so fast, I wouldn’t even know how to express it. Sending thousands of vertices from the CPU to the GPU, every single frame, is, most likely, no issue at all.
Clearing the screen can take a millisecond or worse (keep in mind, you usually only have about 16 milliseconds of time to draw each frame), depending on how large your viewport is. To clear it, OpenGL has to draw every single pixel in the color you want to clear to, that could be millions of pixels.
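(In case you are wondering where that 16 millisecond figure comes from: it is simply the frame budget at a typical 60 Hz refresh rate, 1000 ms / 60 frames ≈ 16.7 ms per frame; at 30 frames per second you would have roughly 33 ms instead.)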
Other than that, we can pretty much only ask OpenGL about the capabilities of our graphics adapter (max resolution, max anti-aliasing, max color depth, …).
But we can also fill a texture with pixels that each have a specific color. Each pixel thus holds a value and the texture is a giant “file” filled with data. We can load that into the graphics card (by creating a texture buffer), then load a shader, tell that shader to use our texture as an input and run some extremely heavy calculations on our “file”.
We can then “render” the result of our computation (in the form of new colors) into a new texture.
That’s how you can make the GPU work for you in other ways. I assume CUDA performs similarly in that respect, but I never had the opportunity to work with it.
We really only slightly touched the whole subject. 3D graphics programming is a hell of a beast.
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. |
Measurement of the LANGUAGE PIE
It has been a long time since intellectuals, scholars and Western linguists, unable to discover the real roots of the languages, came up with the theory of a Proto-Indo-European language: according to them, a language that has already disappeared and from which, again according to them, all European languages descend, giving the same credit to natural offspring and vehicular languages alike. The following example is one of many suggesting that this is likely not true; the empirical roots of the PIE language are a "scientific" fairy tale, which adds more mystery to the whole account. The truth is that all written and spoken words are products of the graphical and lexical transformation of primitive words which carry a particular concept in themselves, and there are no common PIE roots of the kind linguists have filled their books with, defocusing our attention from the simple truth.
As modern linguistics has already shown, at the basis of primitive words, and of verbs above all, lies a c-v-c (consonant-vowel-consonant) sound system. In the Albanian language almost all verbs obey this rule, e.g. 'catch, step, get, increase, see, play' (kap, hap, marr, rris, shoh, loz), or an even simpler system, 'eat, drink, do, stay' (ha, pi, bej, rri), that is, a combination of a consonant with a vowel. One of the most ancient primitive words of the Albanian language, created when the language itself was born, is the verb “mas” (measure), which is just as often pronounced “mat”, especially in lexical forms of the past, the participle/infinitive, names, and surnames. What is measuring (matja) itself? Measurement is an action performed by humans in order to LEARN about an object or phenomenon: in the vast majority of cases we evaluate its dimensions, mass and all the other measurable physical characteristics that allow us to distinguish it from other objects or phenomena. Measurement is also done for people; for example, when we meet an unknown person, besides measuring (evaluating) his stature and appearance, we also learn his intellectual level while measuring (estimating) it. So measurement (matja) is to learn (mesuar). It is the latter that has been taken ready-made from Albanian as a graphical concept to provide the English term for measurement/mass (matjes/mases):
pamesu = pa + mesu
The verb MAT (or MAS) of Albanian has formed innumerable lexical forms, not only in Albanian itself but in other “foreign” languages too, which are merely forgotten spoken Albanian idioms adapted to the written language and later separated off as distinct new languages:
"Others" Europeans, the English seem to have properly used the Albanian language and its expressions to build their vocabulary, whose words being written with Latin letters as well as Albanian , do not create confusion as "Greek" does on the graphical aspect. For example to use pjekuri(mature) the English uses the word mature:
Because such they were not, and for this ORA (time) has come to tell the truth; the time has made us cautious (MATUR), ready to discover our divine language history and to take credit for our past. |
Eastern Yellow Wagtail
Members of this diverse group make up more than half of the bird species worldwide. Most are small. However their brains are relatively large and their learning abilities are greater than those of most other birds. Passerine birds are divided into two suborders, the suboscines and the oscines. Oscines are capable of more complex song, and are considered the true songbirds. In Washington, the tyrant flycatchers are the only suboscines; the remaining 27 families are oscines.
The wagtails and pipits are small to medium-sized open-country ground-dwellers. Pipits are found worldwide, but wagtails are generally restricted to the Old World. Wagtails are often brightly colored with high-contrast patterns, and pipits are typically cryptically colored and patterned. Wagtails are sexually dimorphic: males and females have different plumage. Pipits are generally not sexually dimorphic, that is, males and females look alike. Wagtails and most pipits bob or wag their tails and bob their heads as they walk along the ground. Most are long-distance migrants. They eat primarily insects and other invertebrates, which they take from the ground, but they also eat seeds and fruit. They are monogamous, and both parents help tend the young.
Two records (late July, mid-September), both from Ocean Shores (Grays Harbor County).
North American Range Map
Conservation status lists referenced: Federal Endangered Species List, Audubon/American Bird Conservancy Watch List, State Endangered Species List, Audubon Washington Vulnerable Birds List |
Artificial Intelligence (also known as AI) is a branch of computer science that focuses on developing computer systems that have the ability to think, work, and react like humans and to perform tasks that normally require human intelligence. This involves accurately and efficiently processing large amounts of data and making decisions or predictions based on that data. AI can also be used to create self-learning systems that can learn from data and improve their performance over time, and more and more businesses are using AI as a way to streamline their operations. Let’s discuss and give some examples of how AI is currently being used. Keep in mind that these examples of AI are only a few of its many uses. These uses also often overlap in one app or piece of technology.
Image recognition is a process in which a computer system is able to recognize and identify objects. These objects include people, places, writing, and actions in both still images and videos. With image recognition, a user can take a picture of an object, and AI will tell them who or what the object is. Google Lens, CamFind, and Amazon Rekognition are apps that currently use image recognition technology. While this can be used to benefit the user, in some cases, apps can use AI with user photos in ways their users may not expect. Benefits, unfortunately, sometimes come with negative consequences as well.
Natural Language Processing and Speech Recognition
Natural language processing and speech recognition work hand-in-hand to enable machines to understand, interpret, and generate human language. These technologies are integrated in many home devices that allow you to talk to the machine to turn on your lights and other appliances in your home without lifting a finger and to use Siri or Cortana to answer questions and fulfill requests. Natural language processing and speech recognition are also used in automated customer service and dictation software.
Machine Learning uses algorithms that enable a machine to learn from provided data and make predictions based on that data. In this manner, systems access data and use it to automatically learn and improve without being explicitly programmed. One app that uses machine learning is Netflix. Netflix recommends content based on the content the user already watched through machine learning. Netflix also uses it to identify and block fraudulent accounts, to detect and reduce piracy, and to target potential customers with personalized ads.
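To make the idea of learning from data a bit more concrete, here is a minimal, self-contained sketch (purely illustrative; it is not how Netflix or any particular app works) of one of the simplest machine learning techniques: fitting a straight line to example data with least squares and then using the fitted line to make a prediction.

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // Example training data: input values (x) and observed outcomes (y).
    std::vector<double> x = {1, 2, 3, 4, 5};
    std::vector<double> y = {2.1, 3.9, 6.2, 8.1, 9.8};

    // "Learning" here means estimating the slope a and intercept b of
    // y = a*x + b from the data, using ordinary least squares.
    double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    const double n = static_cast<double>(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        sumX  += x[i];
        sumY  += y[i];
        sumXY += x[i] * y[i];
        sumXX += x[i] * x[i];
    }
    const double a = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX); // slope
    const double b = (sumY - a * sumX) / n;                                 // intercept

    // "Prediction" means applying the learned model to input it has not seen.
    const double newX = 6;
    std::cout << "Predicted y for x = " << newX << ": " << a * newX + b << "\n";
    return 0;
}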
Predictive analysis uses data mining and statistical analysis to identify patterns and trends in data and then to predict future outcomes and behaviors. It can be used to make predictions about customer behavior and market trends. It can also be used to predict the success of a product or service, the likelihood of an event occurring, or the outcome of a particular decision. Weather apps use predictive analysis for more accurate forecasting, financial apps for identifying trends in the market and offering investment advice, and healthcare apps for identifying health risks and suggesting preventive measures.
Robotics is a branch of technology that deals with the design, construction, operation, and use of robots, as well as the computer systems for their control, sensory feedback, and information processing. These technologies are used to develop machines that can substitute for humans and replicate human actions. Starship Technologies, for example, offers a mobile app that allows users to order food and other items, which are then delivered by robots.
During the decision making process, a machine makes a choice between two or more alternatives. It involves gathering information and assessing the available options to make the best decision. Shopping apps like Amazon, Ebay, and Wish use decision making to make personalized product recommendations to customers and to suggest alternatives if the user’s first choice is not available. Health and fitness apps such as MyFitnessPal and Fitbit also use decision making to track and analyze user activity and dietary habits. They then suggest personalized plans to help the user reach their health and fitness goals.
Artificial Intelligence is still an emerging technology, and we’ve only touched the surface of what might eventually be possible. It has and will continue to change how we interact with the physical world, to take over mundane tasks, to optimize operations in many fields, and to help us make better decisions.
Matraex is a premier software and app development company based in Boise, Idaho. Do you have any questions regarding app development? Matraex would like to become your go-to source for answers so you can be an informed consumer. Feel free to contact us, call us at (208) 344-1115, send us a message on our website, or post a question on our Google Business Profile. We’d love to hear from you.
Sign up to receive answers to your questions delivered directly to your inbox! |
There is no doubt that the world is facing serious environmental problems, from trash and chemicals in the ocean to toxic fumes in the air. However, a new study led by Xiao-Peng Song and Matthew Hansen of the University of Maryland gives us some good environmental news for a change, there are more trees on the planet than they were 30 years ago.
The researchers used data taken by satellites from 1982 to 2016 and found that, despite ongoing deforestation and forest fires, the world’s tree cover actually increased by 2.24 million square kilometers, an area the size of Texas and Alaska combined. Unfortunately, the report revealed some bad news as well: although overall tree cover increased, researchers also noticed an extreme die-off in tropical forests, the Earth’s most diverse ecosystems.
“The results of this study reflect a human-dominated Earth system. Direct human action on landscapes is found over large areas on every continent, from intensification and extensification of agriculture to increases in forestry and urban land uses, with implications for the maintenance of ecosystem services,” the researchers wrote.
“A global net gain in tree canopy contradicts current understanding of long-term forest area change; the Food and Agriculture Organization of the United Nations (FAO) reported a net forest loss between 1990 and 2015. However, our gross tree canopy loss estimate (−1.33 million square kilometers, −4.2%) agrees in magnitude with the FAO’s estimate of net forest area change (−1.29 million square kilometers, −3%), despite differences in the time period covered and definition of forest,” the study said.
The researchers also warned that:
“Expansion of the agricultural frontier is the primary driver of deforestation in the tropics. The ‘arc of deforestation’ along the southeastern edge of the Amazon has been well-documented. Clearing of natural vegetation for export-oriented industrial agriculture also prevailed in the Cerrado and the Gran Chaco. Spatially clustered hotspots of deforestation are also found in Queensland, Australia, and in Southeast Asia—including Myanmar, Vietnam, Cambodia and Indonesia—diminishing the already scarce primary forests of the region. In sub-Saharan Africa, tree cover loss was pervasive across the Congolian rainforests and the Miombo woodlands, historically related to smallholder agriculture and increasingly to commodity crop cultivation. Forests in boreal Canada, eastern Alaska and central Siberia exhibited large patches of tree canopy loss and short vegetation gain, similar to the tropics. However, these are the result of persistent disturbances from wildfires and subsequent recovery of natural vegetation.”
This study shows that humans can make a huge impact on the environment with just a small amount of effort, but there is still much work to be done.
Charts provided by Mongabay illustrate the findings of the study.
Hemophilia affects mostly males, as it is an X-chromosome-linked condition. It occurs in 1 in 5,000 male births in the U.S., and approximately 400 babies are born with hemophilia each year. Roughly 400,000 people worldwide are living with hemophilia, about 20,000 of them in the United States alone. All races and economic groups are affected equally. People with hemophilia who have access to factor replacement therapy have a normal life expectancy.
Types of Hemophilia
Bleeding disorders are treated differently depending on what protein is missing in the blood. Hemophilia is one of the most common bleeding disorders and is classified as follows:
- Hemophilia A – Also called classic hemophilia, it is 4 times more common than hemophilia B, and it occurs when factor VIII levels are deficient.
- Hemophilia B – Also called Christmas disease, it occurs when factor IX levels are deficient.
- Hemophilia C – A type of hemophilia that occurs when factor XI levels are deficient.
- Acquired hemophilia – A person can develop hemophilia as a result of illness, medications, or pregnancy. Acquired hemophilia is extremely rare and usually resolves itself with proper diagnosis and treatment.
A person with hemophilia can bleed inside or outside of the body. People with hemophilia do not bleed more than people without hemophilia; they just bleed longer. The most common types of bleeds are into the joints and muscles. Other symptoms include:
- Nose bleeds
- Prolonged bleeding from minor cuts
- Bleeding that stops and resumes after stopping for only a short time
- Blood in the urine
- Blood in the stool
- Large bruises
- Easy bruising (unexplained bruising)
- Excessive bleeding with dental work or tooth extraction
- Heavy periods and/or periods lasting more than 7 days
Body Mapping is a method developed especially for musicians by Barbara and William Conable. The aim of this method is to avoid pain and prevent injuries, such as tendonitis, carpal tunnel syndrome, and ganglion cysts, that can result from the constant practice of a musical instrument.
Body Mapping is important for the practice of any instrument as it focuses on the quality of movement in the body and not on a specific technique of an instrument. Body Mapping is a tool to identify restricted movements, which limit our body and our musical performances. It is important to note that this method does not replace medical advice.
Based on neurophysiology, Body Mapping uses the concept of “body maps”, which are representations of the body in the brain that influence, among many things, how we play an instrument. If we have an accurate body map then the movement of the body will be fluid, balanced, and free of tension. However, if our body map is not precise, the motion will be rigid and limiting, which can produce pain or injury.
In Body Mapping we identify our body maps (our ideas about our bodies) and, if necessary, we transform them through an understanding of how the body is anatomically designed for playing music. However, we must understand that Body Mapping is more than theoretical information; it is an experience.
Barbara Conable is the founder of Andover Educators, an organization of professional musicians committed to the development of music through the teaching of Body Mapping.
Find great books for preschool, elementary, and middle school children and teens along with ideas of ways to teach with them in the classroom across the curriculum.
We'll all agree that picture books have an important role in every classroom. The wonderful combination of visual and textual story that picture books offer is a valuable literary experience. I use picture books to introduce themes or areas of study throughout the curriculum from the preschool level to high school.
Think about it. If you start a unit of study on conservation by reading aloud a novel like Jean George's Everglades, it's going to take about three weeks to get through it. Better to use that kind of experience for sustaining interest while the research is going on. If you start with a reading from the textbook, you'll seldom get them motivated for much. Movies and TV shows are all right, but they take us out of the print world for starters.
Begin with a picture book such as Dave Bouchard's The Elders Are Watching and you've touched the poet in their souls and made them think about the earth with which we've been entrusted. And, you've done that in about twelve minutes of enjoyable reading with gorgeous pictures. Furthermore, when you turn the kids loose to do their own investigations, some kids are going to have to go to simpler texts than others. By starting with a thin, non-threatening book, you've already validated the sources for all those less able readers.
So picture books are a treasure, but they do present their own particular set of problems, don't they? First of all there's the problem of how to physically share the book. If you stand up in front of a group of children to read the short text on the first page, you can't turn the page without hearing choruses of, "I can't see! Let me see!" You can stop and slowly walk around the room letting each child look at the picture but, by the time you get back up there and finally turn the page, the mood has been broken. Few kids can even remember what the story said so far.
Alternatively, if you flash the picture quickly while turning it from side to side, they may remember the text but nobody except the kids in the very front have a clue as to what the picture shows. Some teachers have developed the skill of turning themselves upside down so that they can read the text while simultaneously showing the pictures but this contortion is often ludicrous and hardly enjoyable for the teacher and may not be comfortable for viewers either. Reading the text while bent over the book means reading the print upside down and seldom does that make for fluency.
What's a teacher to do? If it's a book where appreciation necessitates viewing text and illustration together such as Where the Wild Things Are, I like to sit in a low chair and gather the children of any age on the floor as close to me as possible. Kindergarten kids are used to that setup but even for eighth graders, chairs just get in the way. Some teachers prefer a story circle but that's not really optimal for picture book sharing. I like them close and all facing the same way -- toward the book and toward me.
I let them look closely at the cover and make oral observations about what they see and think. You can carry the prediction thing too far but some predictions usually occur and we discuss them briefly. I can then read the text and turn the book over for them to quickly absorb the illustrations as we go without letting it take too long. If the book is really good, that brief viewing is still not enough and I encourage the kids to take longer careful looks after the first go-through. Often they see things I've missed and enrich future readings of it for me as well as others. They know that the book will be accessible for as long as they need it.
Many times, however, the text and illustrations can be viewed separately on the first encounter. Books by Chris Van Allsburg, for instance, fit this category. In those cases, I stand in front of the class and read the text as if there were no pictures. I tell the kids before I start that I first want to concentrate on the words in this book and that they'll get plenty of time to look carefully later. After the reading and discussion, individuals or small groups can look at the illustrations and reread the text.
Other times, I introduce a picture book without reading it aloud, just describing it briefly or telling the kids what attracts me to the book. Then I structure discussion groups to deal with it. Often, I've assembled ten or more picture books that effectively deal with different aspects of a theme, genre or subject.
Three children of any age and one picture book can be set up with these roles: one to concentrate on the text, reading it aloud; one to concentrate on the illustrations, pointing out details as the book is read; and the third to point out what the other two miss. They make notes as they go and then move to a different picture book, sharing their observations when I call the class back together as a whole group. We enter their observations on charts for each book and then suggest that every child should revisit one of the picture books to better understand it. Many times kids who were not particularly thrilled by a book have missed some detail which was important and the whole group discussion brings it out. This activity has the added effect of teaching kids to make careful observations of a picture book and not whip through it without heeding its special contributions.
Author/illustrator studies of picture book creators can lead to observations about their techniques and styles, influences of other artists in the fine arts field as well as that of illustration, and discussions about topics, symbols and objects that recur in their work. Learning a bit about the author's life can sometimes help us understand his or her books better. I think it's always important to help the kids discover what the author is attempting to do in any book. What he or she wants us to understand or ponder is really the deepest level of a book -- theme, in a literary sense, if you will.
With older kids, it's sometimes possible to find a picture book that gets at things like genre, theme, climax, anti-climax, prologue, epilogue, passage of time and literary techniques such as flashback, foreshadowing, cliff-hangers, irony and satire which can be more difficult to identify in longer works. Where the Wild Things Are, for instance, is a fantasy which gives us pictorial foreshadowing. There's a drawing of a wild thing on the wall of the house before Max's adventure begins. It's a hint of the plot to come. The climax is right there in the wild rumpus. The anti-climax is clearly identifiable as Max finds himself back in his room with his supper waiting for him, still hot, which indicates that the whole adventure may have occurred in a moment or two, contrary to the earlier equation Max makes between time and distance.
Other books by Sendak and almost any work of Van Allsburg and Anthony Browne are good places to look for such subtleties and conventions.
A few other things to keep in mind when using picture books in the classroom:
Laissez-faire is an economic and political philosophy. It is from a French phrase that means to "leave alone". It means that government does not interfere with business and economy. Finance and trade decisions are left for the private individual to make. It is the belief that unregulated competition in business represents the best path to progress. Supporters claim that a free and unregulated market creates a natural balance between supply and demand. The phrase is supposed to have come from the 18th century. In a meeting between the French finance minister Colbert and a businessman named Le Gendre, Colbert asked how the government could help commerce. Le Gendre replied "Let us do what we want to do".
In Ancient China, there were three schools of political thought. Taoism believed in almost no economic interference by the government. Legalism included the belief that the state should have the maximum power. They created the traditional Chinese bureaucratic empire. Confucianism was split between these two extremes, although it was closer to Legalism than to Taoism.
During the 19th century, laissez-faire developed as a social and economic philosophy. It was believed that government involvement in business was harmful at worst and ineffective at best. Socially, it was believed that government intervention to help the poor was harmful because it made them lazy and dependent on the government. Economically, there was debate at this time in Europe and the United States over whether free trade or tariffs promoted the most economic growth. Up until the 1840s, protectionism was favored over laissez-faire. In Britain, the Corn Laws placed high tariffs on imported corn to protect British farmers and landowners.
- "laissez faire". Merriam-Webster. Retrieved 5 November 2016.
- "Laissez Faire Capitalism". Importance of Philosophy. Retrieved 5 November 2016.
- "Laissez Faire". Investopedia, LLC. Retrieved 5 November 2016.
- "Why did The Economist favour free trade?". The Economist Newspaper Limited. 6 September 2013. Retrieved 5 November 2016.
- Michael Scaife, History: Modern British and European (London: Letts Educational, 2004), p. 32 |
A gage R&R study helps you evaluate:
- Repeatability—How much variability in the measurement system is caused by the measurement device.
- Reproducibility—How much variability in the measurement system is caused by differences between operators.
- Whether your measurement system variability is small compared with the process variability.
- Whether your measurement system is capable of distinguishing between different parts.
For example, several operators measure the diameter of screws to ensure that they meet specifications. A gage R&R study (Stat > Quality Tools > Gage Study) indicates whether the inspectors are consistent in their measurements of the same part (repeatability) and whether the variation between inspectors is consistent (reproducibility).
Examples of repeatability and reproducibility
Repeatability and reproducibility are the two components of precision in a measurement system. To assess the repeatability and reproducibility, use a gage R&R study (Stat > Quality Tools > Gage Study).
Repeatability is the variation due to the measurement device. It is the variation that is observed when the same operator measures the same part many times, using the same gage, under the same conditions.
Operator 1 measures a single part with Gage A 20 times, and then measures the same part with Gage B.
Reproducibility is the variation due to the measurement system. It is the variation that is observed when different operators measure the same part many times, using the same gage, under the same conditions.
Operators 1, 2, and 3 measure the same part 20 times with the same gage.
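To make the distinction concrete, here is a simplified numerical sketch of the two variance components, using made-up readings for two operators, two parts, and three repeats. It is only an illustration of the idea; a real study should use the ANOVA-based gage R&R analysis (Stat > Quality Tools > Gage Study) rather than this shortcut.

```python
# Simplified sketch of repeatability vs. reproducibility (illustrative data only).
import statistics as st

# readings[operator][part] -> repeated measurements with the same gage
readings = {
    "Operator 1": {"Part A": [10.1, 10.2, 10.1], "Part B": [12.0, 12.1, 12.0]},
    "Operator 2": {"Part A": [10.4, 10.5, 10.4], "Part B": [12.3, 12.2, 12.3]},
}

# Repeatability: variation when the same operator re-measures the same part
within_cell = [st.variance(reps) for parts in readings.values() for reps in parts.values()]
repeatability_var = st.mean(within_cell)

# Reproducibility: variation between operators measuring the same parts
operator_means = [st.mean([x for reps in parts.values() for x in reps])
                  for parts in readings.values()]
reproducibility_var = st.variance(operator_means)

print(f"repeatability variance  ~ {repeatability_var:.4f}")
print(f"reproducibility variance ~ {reproducibility_var:.4f}")
```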
Once you graduate from the simple, passive components that are resistors, capacitors, and inductors, it's time to step on up to the wonderful world of semiconductors. One of the most widely used semiconductor components is the diode.
In this tutorial we'll cover:
- What is a diode!?
- Theory of diode operation
- Important diode properties
- Different types of diodes
- What diodes look like
- Typical diode applications
Some of the concepts in this tutorial build on previous electronics knowledge. Before jumping into this tutorial consider reading (at least skimming) these first:
- What is a Circuit?
- Voltage, Current, Resistance, and Ohm's Law
- What is Electricity?
- Series and Parallel Circuits
The key function of an ideal diode is to control the direction of current-flow. Current passing through a diode can only go in one direction, called the forward direction. Current trying to flow the reverse direction is blocked. They're like the one-way valve of electronics.
If the voltage across a diode is negative, no current can flow*, and the ideal diode looks like an open circuit. In such a situation, the diode is said to be off or reverse biased.
As long as the voltage across the diode isn't negative, it'll "turn on" and conduct current. Ideally* a diode would act like a short circuit (0V across it) if it was conducting current. When a diode is conducting current it's forward biased (electronics jargon for "on").
Ideal Diode Characteristics

| Operation Mode | On (Forward biased) | Off (Reverse biased) |
| --- | --- | --- |
| Diode looks like | Short circuit | Open circuit |
Every diode has two terminals -- connections on each end of the component -- and those terminals are polarized, meaning the two terminals are distinctly different. It's important not to mix the connections on a diode up. The positive end of a diode is called the anode, and the negative end is called the cathode. Current can flow from the anode end to the cathode, but not the other direction. If you forget which way current flows through a diode, try to remember the mnemonic ACID: "anode current in diode" (also anode cathode is diode).
The circuit symbol of a standard diode is a triangle butting up against a line. As we'll cover in the later in this tutorial, there are a variety of diode types, but usually their circuit symbol will look something like this:
The terminal entering the flat edge of the triangle represents the anode. Current flows in the direction that the triangle/arrow is pointing, but it can't go the other way.
Above are a couple simple diode circuit examples. On the left, diode D1 is forward biased and allowing current to flow through the circuit. In essence it looks like a short circuit. On the right, diode D2 is reverse biased. Current cannot flow through the circuit, and it essentially looks like an open circuit.
*Caveat! Asterisk! Not-entirely-true... Unfortunately, there's no such thing as an ideal diode. But don't worry! Diodes really are real, they've just got a few characteristics which make them operate as a little less than our ideal model...
Real Diode Characteristics
Ideally, diodes will block any and all current flowing the reverse direction, or just act like a short-circuit if current flow is forward. Unfortunately, actual diode behavior isn't quite ideal. Diodes do consume some amount of power when conducting forward current, and they won't block out all reverse current. Real-world diodes are a bit more complicated, and they all have unique characteristics which define how they actually operate.
The most important diode characteristic is its current-voltage (i-v) relationship. This defines what the current running through a component is, given what voltage is measured across it. Resistors, for example, have a simple, linear i-v relationship...Ohm's Law. The i-v curve of a diode, though, is entirely non-linear. It looks something like this:
Depending on the voltage applied across it, a diode will operate in one of three regions:
- Forward bias: When the voltage across the diode is positive the diode is "on" and current can run through. The voltage should be greater than the forward voltage (VF) in order for the current to be anything significant.
- Reverse bias: This is the "off" mode of the diode, where the voltage is less than VF but greater than -VBR. In this mode current flow is (mostly) blocked, and the diode is off. A very small amount of current (on the order of nA) -- called reverse saturation current -- is able to flow in reverse through the diode.
- Breakdown: When the voltage applied across the diode is very large and negative, lots of current will be able to flow in the reverse direction, from cathode to anode.
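For readers who want to put numbers on that curve, the forward and reverse-bias regions (though not breakdown) are commonly approximated by the Shockley diode equation, i = Is * (exp(v / (n * Vt)) - 1). The sketch below uses an assumed saturation current and ideality factor, not values from any particular part:

```python
# Shockley diode equation sketch: i = Is * (exp(v / (n * Vt)) - 1)
# Is and n below are assumed, illustrative values; breakdown is not modeled.
import math

I_S = 1e-12      # saturation current in amps (assumed)
N = 1.5          # ideality factor (assumed)
V_T = 0.02585    # thermal voltage at room temperature, in volts

def diode_current(v):
    """Approximate diode current (amps) at a given voltage (volts)."""
    return I_S * (math.exp(v / (N * V_T)) - 1)

for v in (-5.0, 0.0, 0.3, 0.5, 0.6, 0.7):
    print(f"v = {v:5.2f} V  ->  i = {diode_current(v): .3e} A")
```

Note how the reverse-bias current stays pinned near the tiny saturation current, while the forward current grows exponentially once the voltage approaches the forward voltage.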
In order to "turn on" and conduct current in the forward direction, a diode requires a certain amount of positive voltage to be applied across it. The typical voltage required to turn the diode on is called the forward voltage (VF). It might also be called either the cut-in voltage or on-voltage.
As we know from the i-v curve, the current through and voltage across a diode are interdependent. More current means more voltage, less voltage means less current. Once the voltage gets to about the forward voltage rating, though, large increases in current should still only mean a very small increase in voltage. If a diode is fully conducting, it can usually be assumed that the voltage across it is the forward voltage rating.
A specific diode's VF depends on what semiconductor material it's made out of. Typically, a silicon diode will have a VF around 0.6-1V. A germanium-based diode might be lower, around 0.3V. The type of diode also has some importance in defining the forward voltage drop; light-emitting diodes can have a much larger VF, while Schottky diodes are designed specifically to have a much lower-than-usual forward voltage.
If a large enough negative voltage is applied to the diode, it will give in and allow current to flow in the reverse direction. This large negative voltage is called the breakdown voltage. Some diodes are actually designed to operate in the breakdown region, but for most normal diodes it's not very healthy for them to be subjected to large negative voltages.
For normal diodes this breakdown voltage is around -50V to -100V, or even more negative.
All of the above characteristics should be detailed in the datasheet for every diode. For example, this datasheet for a 1N4148 diode lists the maximum forward voltage (1V) and the breakdown voltage (100V) (among a lot of other information):
A datasheet might even present you with a very familiar looking current-voltage graph, to further detail how the diode behaves. This graph from the diode's datasheet enlarges the curvy, forward-region part of the i-v curve. Notice how more current requires more voltage:
That chart points out another important diode characteristic -- the maximum forward current. Just like any component, diodes can only dissipate so much power before they blow. All diodes should list maximum current, reverse voltage, and power dissipation. If a diode is subjected to more voltage or current than it can handle, expect it to heat up (or worse: melt, smoke, ...).
Some diodes are well-suited to high currents -- 1A or more -- others like the 1N4148 small-signal diode shown above may only be suited for around 200mA.
That 1N4148 is just a tiny sampling of all the different kinds of diodes there are out there. Next we'll explore what an amazing variety of diodes there are and what purpose each type serves.
Types of Diodes
Standard signal diodes are among the most basic, average, no-frills members of the diode family. They usually have a medium-high forward voltage drop and a low maximum current rating. A common example of a signal diode is the 1N4148.
Very general purpose, it's got a typical forward voltage drop of 0.72V and a 300mA maximum forward current rating.
A rectifier or power diode is a standard diode with a much higher maximum current rating. This higher current rating usually comes at the cost of a larger forward voltage. The 1N4001 is an example of a power diode.
The 1N4001 has a current rating of 1A and a forward voltage of 1.1V.
And, of course, most diode types come in surface-mount varieties as well. You'll notice that every diode has some way (no matter how tiny or hard to see) to indicate which of the two pins is the cathode.
Light-Emitting Diodes (LEDs!)
The flashiest member of the diode family must be the light-emitting diode (LED). These diodes quite literally light up when a positive voltage is applied.
Like normal diodes, LEDs only allow current through one direction. They also have a forward voltage rating, which is the voltage required for them to light up. The VF rating of an LED is usually larger than that of a normal diode (1.2~3V), and it depends on the color the LED emits. For example, the rated forward voltage of a Super Bright Blue LED is around 3.3V, while that of the equal size Super Bright Red LED is only 2.2V.
You'll obviously most-often find LEDs in lighting applications. They're blinky and fun! But more than that, their high efficiency has led to widespread use in street lights, displays, backlighting, and much more. Other LEDs emit a light that is not visible to the human eye, like infrared LEDs, which are the backbone of most remote controls. Another common use of LEDs is in optically isolating a dangerous high-voltage system from a lower-voltage circuit. Opto-isolators pair an infrared LED with a photosensor, which allows current to flow when it detects light from the LED. Below is an example circuit of an opto-isolator. Note how the schematic symbol for the diode varies from the normal diode. LED symbols add a couple arrows extending out from the symbol.
Another very common diode is the Schottky diode.
The semiconductor composition of a Schottky diode is slightly different from a normal diode, and this results in a much smaller forward voltage drop, which is usually between 0.15V and 0.45V. They'll still have a very large breakdown voltage though.
Schottky diodes are especially useful in limiting losses, when every last bit of voltage must be spared. They're unique enough to get a circuit symbol of their own, with a couple bends on the end of the cathode-line.
Zener diodes are the weird outcast of the diode family. They're usually used to intentionally conduct reverse current.
Zeners are designed to have a very precise breakdown voltage, called the zener breakdown or zener voltage. When enough current runs in reverse through the zener, the voltage drop across it will hold steady at the breakdown voltage.
Taking advantage of their breakdown property, Zener diodes are often used to create a known reference voltage at exactly their Zener voltage. They can be used as a voltage regulator for small loads, but they're not really made to regulate voltage to circuits that will pull significant amounts of current.
Zeners are special enough to get their own circuit symbol, with wavy ends on the cathode-line. The symbol might even define what, exactly, the diode's zener voltage is. Here's a 3.3V zener diode acting to create a solid 3.3V voltage reference:
Photodiodes are specially constructed diodes, which capture energy from photons of light (see Physics, quantum) to generate electrical current. Kind of operating as an anti-LED.
Solar cells are the main beneficiary of photodiode technology. But these diodes can also be used to detect light, or even communicate optically.
For such a simple component, diodes have a huge range of uses. You'll find a diode of some type in just about every circuit. They could be featured in anything from a small-signal digital logic to a high voltage power conversion circuit. Let's explore some of these applications.
A rectifier is a circuit that converts alternating current (AC) to direct current (DC). This conversion is critical for all sorts of household electronics. AC signals come out of your house's wall outlets, but DC is what powers most computers and other microelectronics.
Current in AC circuits literally alternates -- quickly switches between running in the positive and negative directions -- but current in a DC signal only runs in one direction. So to convert from AC to DC you just need to make sure current can't run in the negative direction. Sounds like a job for DIODES!
A half-wave rectifier can be made out of just a single diode. If an AC signal, like a sine wave for example, is sent through a diode any negative component to the signal is clipped out.
A full-wave bridge rectifier uses four diodes to convert those negative humps in the AC signal into positive humps.
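Here is a small numerical sketch of both ideas. The 0.7V forward drop is a typical assumed value for silicon diodes, and the input is a made-up AC waveform:

```python
# Toy rectifier sketch: clip a sine wave the way idealized diodes would.
# A 0.7 V forward drop per diode is assumed (typical for silicon).
import math

V_F = 0.7
ac_input = [10 * math.sin(2 * math.pi * t / 20) for t in range(40)]  # fake AC signal

# Half-wave: one diode, so the negative half is simply clipped out
half_wave = [max(v - V_F, 0.0) for v in ac_input]

# Full-wave bridge: negative humps are folded positive; current crosses two diodes
full_wave = [max(abs(v) - 2 * V_F, 0.0) for v in ac_input]

print(f"half-wave output range: {min(half_wave):.1f} .. {max(half_wave):.1f} V")
print(f"full-wave output range: {min(full_wave):.1f} .. {max(full_wave):.1f} V")
```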
These circuits are a critical component in AC-to-DC power supplies, which turn the wall outlet's 120/240VAC signal into 3.3V, 5V, 12V, etc. DC signals. If you tore apart a wall-wart, you'd most likely see a handful of diodes in there, rectifying it up.
Ever stick a battery in the wrong way? Or switch up the red and black power wires? If so, a diode might be to thank for your circuit still being alive. A diode placed in series with the positive side of the power supply is called a reverse protection diode. It ensures that current can only flow in the positive direction, and the power supply only applies a positive voltage to your circuit.
This diode application is useful when a power supply connector isn't polarized, making it easy to mess up and accidentally connect the negative supply to the positive of the input circuit.
The drawback of a reverse protection diode is that it'll induce some voltage loss because of the forward voltage drop. This makes Schottky diodes an excellent choice for reverse protection diodes.
Forget transistors! Simple digital logic gates, like the AND or the OR, can be built out of diodes.
For example, a diode two-input OR gate can be constructed out of two diodes with shared cathode nodes. The output of the logic circuit is also located at that node. Whenever either input (or both) is a logic 1 (high/5V) the output becomes a logic 1 as well. When both inputs are a logic 0 (low/0V), the output is pulled low through the resistor.
An AND gate is constructed in a similar manner. The anodes of both diodes are connected together, which is where the output of the circuit is located. Both inputs must be a logic 1, forcing current to run towards the output pin and pull it high as well. If either of the inputs is low, current from the 5V supply runs through that diode to the low input, pulling the output low.
For both logic gates, more inputs can be added by adding just a single diode.
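The sketch below is a tiny behavioral model of those two gates, assuming 5V logic, a 0.7V diode drop, and ideal pull resistors. It also shows a real quirk of diode logic: the output levels are degraded by one diode drop.

```python
# Behavioral sketch of diode-resistor logic (assumed 5 V supply, 0.7 V diode drop).
V_F, V_SUPPLY = 0.7, 5.0

def diode_or(*inputs):
    # Output is pulled low through a resistor; the highest input wins via its diode.
    return max(max(inputs) - V_F, 0.0)

def diode_and(*inputs):
    # Output is pulled high through a resistor; the lowest input drags it down.
    return min(min(inputs) + V_F, V_SUPPLY)

for a in (0.0, 5.0):
    for b in (0.0, 5.0):
        print(f"A={a:.0f}V B={b:.0f}V  ->  OR={diode_or(a, b):.1f}V  AND={diode_and(a, b):.1f}V")
```

Note the imperfect output levels (around 4.3V for a "high" OR output and 0.7V for a "low" AND output); that degradation is one reason transistors, not diodes, do the heavy lifting in real logic chips.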
Flyback Diodes and Voltage Spike Suppression
Diodes are very often used to limit potential damage from unexpected large spikes in voltage. Transient-voltage-suppression (TVS) diodes are specialty diodes, kind of like zener diodes -- lowish breakdown voltages (often around 20V) -- but with very large power ratings (often in the range of kilowatts). They're designed to shunt currents and absorb energy when voltages exceed their breakdown voltage.
Flyback diodes do a similar job of suppressing voltage spikes, specifically those induced by an inductive component, like a motor. When current through an inductor suddenly changes, a voltage spike is created, possibly a very large, negative spike. A flyback diode placed across the inductive load, will give that negative voltage signal a safe path to discharge, actually looping over-and-over through the inductor and diode until it eventually dies out.
That's just a handful of applications for this amazing little semiconductor component.
Now that your current is flowing in the right direction, it's time to put your new knowledge to good use. Whether you're looking for a starting point or just stocking up, we've got an Inventor's Kit as well as individual diodes to choose from.
Resources and Going Further
Now that you've gotten a handle on diodes, maybe you'd like to further explore more semiconductors:
- Or learn about integrated circuits, like:
  - 555 Timers
  - Operational Amplifiers
  - Shift Registers
Or discover some of the other common electronic components:
Hybrid embryos are made by combining an animal egg and a human cell. They are used to overcome the shortage in human eggs available for research.
In May 2008 the UK Government passed a law allowing the use of hybrid embryos in research. They can be used to produce embryonic stem cells to study cell reprogramming and development. They can help us research treatment for diseases such as Parkinson’s. It is hoped that in the future this work will lead to new sources of embryonic stem cells and therapies for major diseases.
Why do this?
To study stem cells, scientists need to have access to enough of them. But this has been difficult because most have been harvested from donated human embryos. The embryos are destroyed in the process. Some people object to this, and there have been legal barriers and a shortage of donors. Hybrid embryos offer a method to create new stem cells for research without needing to use donor embryos.
Why is this technology controversial?
The use of these hybrid or admix embryos has appalled some people and excited others. Some see this technique as crossing the boundary between humans and animals, creating embryos that are no longer fully human. Members of different faiths have condemned the technology on moral grounds. However, a survey of members of the public by the Human Fertility and Embryology Authority (HFEA) showed that, when the technology and its potential benefits were described to them, 61% were in favour of it.
BBC Religion and Ethics article on Human Animal Hybrids
The creation and uses of hybrid embryos
By Oli Usher, on 29 October 2014
Scientists at UCL have developed a new way of changing information stored in quantum bits – a vital technology for ensuring computers continue to increase in power over the next century.
Classical computer architecture is coming close to its limits. The ever increasing power of computer chips rests in part on making the circuits inside them ever smaller – but these are now so small, just a few atoms across, that there is not much further to go.
Quantum computing, in which the 1s and 0s of binary code that computers process are replaced by values that can be both 0 and 1 at the same time, is a promising technology for further improving computer performance. In theory, quantum computers should be able to carry out multiple operations in parallel.
Classical computers represent 0s and 1s with circuits which are either open or closed – essentially, many billions of microscopic switches. These 0s and 1s are known as ‘bits’. If they are to become a viable replacement for classical computers, quantum computers need to have a chip technology which can physically encode bits which are both 1 and 0 at the same time, using the ability of quantum systems to exist in several states at once (known as quantum superposition). These quantum bits are known as ‘qubits’.
“One way of creating a qubit is to encode the information using the spin in a particle,” explains Gary Wolfowicz, the study’s lead author and a PhD researcher in the London Centre for Nanotechnology at UCL. “The direction of a particle’s spin, and hence its magnetic orientation, follows quantum principles, and can exist in both states at once. The challenge is building a system in which the spin is quite stable, and so doesn’t change on its own, but still easy to modify when you want to manipulate it.”
The team experimented with a material that is already used in classical computer design: the silicon wafers which integrated circuits are etched onto. As in classical computer chips, the silicon had a small amount of a different element – in this case, antimony – dispersed through it. Since antimony atoms have one extra electron in their outer shells, this normally creates a sea of unbound electrons that can move throughout the silicon, the key property that makes them behave as semiconductors and allows transistors to be made out of them. The team’s technique departs from classical computing in what happens next: they immobilise these electrons by cooling down the silicon, then use the antimony atoms as their qubits, encoding information in their spin.
Using a technique that is already widely used in the construction of computer chips is a great advantage. Techniques for creating silicon wafers are now very advanced, meaning that they are extremely pure and have very few defects. This gives silicon an advantage over more novel materials – for example, this purity means the spins in silicon can keep their quantum state for up to a few hours, so the information encoded in them is long-lasting.
“Spin is a magnetic phenomenon, so the easiest way to change its orientation is to apply an oscillating magnetic field whose frequency resonates with the particle’s spin,” Wolfowicz says. “Unfortunately, it is very difficult to apply such magnetic fields locally to individual quantum bits within a processor, but this is necessary in order to be able to control the states of different qubits.” Electric fields are much easier to apply locally – you merely need to apply a voltage to a tiny wire close to your qubit, without the need for large and heat-wasting coils. The trouble is that electric fields do not directly affect spins.
The team's solution was to use the electric field to pull at the electrons, moving them slightly further away from the antimony nuclei, changing their resonance frequency. This makes it possible to subject the entire silicon crystal to an oscillating magnetic field, but antimony atoms that are being tugged at by the electric field don't respond. "It's like a sergeant barking the same orders at a whole platoon of spins," said Prof John Morton, who leads the Quantum Spin Dynamics group at UCL, "but we give some of them earplugs."
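To get a rough feel for that selectivity, here is a toy numerical illustration using the standard Rabi flipping formula. The drive strength and the Stark-induced detuning are arbitrary made-up numbers, not the experiment's actual parameters:

```python
# Toy illustration: a spin driven on resonance flips, while a Stark-shifted
# (detuned) spin barely responds to the same global magnetic drive.
# All numbers are arbitrary and for illustration only.
import math

omega_rabi = 1.0           # drive strength (arbitrary units)
detunings = [0.0, 10.0]    # 0 = on resonance; 10 = shifted by the local electric field
t = math.pi / omega_rabi   # drive just long enough to fully flip a resonant spin

for delta in detunings:
    omega_eff = math.sqrt(omega_rabi**2 + delta**2)
    # Rabi's formula for the probability that the spin has flipped after time t
    p_flip = (omega_rabi**2 / omega_eff**2) * math.sin(omega_eff * t / 2) ** 2
    print(f"detuning = {delta:4.1f}  ->  flip probability = {p_flip:.4f}")
```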
This is the first practical demonstration for spin qubits in silicon of how to use electric fields to "switch" the qubit response to the magnetic field on and off. The team has not yet managed to narrow down the effect to a single atom, but combined with recent demonstrations of control of single spins by collaborators at UNSW, this represents a major step towards selective control of qubits and, more importantly, a scalable silicon-based quantum computer.
- The research appears in a paper entitled "Conditional control of donor nuclear spins in silicon using Stark shifts", published in the journal Physical Review Letters
We have, of course, two ears which jointly allow us to localize the direction of incoming sound. But our ears are much more than simple point hearing devices; there must be some more advanced mechanisms contained within. Why? Because we can tell whether a sound is coming from our back or our front.
Imagine the case in which our ears were replaced with two simple mono microphones of appropriate sensitivity. Any sound would be picked up by the two microphones, though at different delays. Knowing the speed of sound, it is simple to calculate the distance of the source from each microphone and hence the intersections (loci). Note the word intersections. With two point microphones, there are two intersections (in a 2-D scenario) and infinite intersections (dispersed in a circle in a 3-D scenario).
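A small numerical sketch makes the ambiguity easy to see: with only two point microphones, a source in front and its mirror image behind produce exactly the same arrival-time difference. (The microphone spacing and source positions below are made-up illustrative values.)

```python
# Front-back ambiguity with two point microphones (illustrative geometry).
import math

SPEED_OF_SOUND = 343.0                           # metres per second
left_mic, right_mic = (-0.09, 0.0), (0.09, 0.0)  # two mics ~18 cm apart on the x-axis

def arrival_time_difference(source):
    """Difference in arrival time (seconds) between the left and right microphones."""
    return (math.dist(source, left_mic) - math.dist(source, right_mic)) / SPEED_OF_SOUND

front_source = (0.5, 1.0)    # half a metre to the right, one metre in front
back_source = (0.5, -1.0)    # its mirror image, one metre behind

print(arrival_time_difference(front_source))  # identical to...
print(arrival_time_difference(back_source))   # ...this, so the two are indistinguishable
```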
If our ears were only point microphones, we would not be able to differentiate sounds coming from behind us.
But of course, we can. That is because our ears are directionally sensitive, or at least, structures in our ears serve as directional filters.
One interesting result of this line of thought is that with headphones, it is not possible to duplicate full directional sound. Headphones are merely two point sources of sound; there will be ambiguity of direction, leading us to confuse front-back sounds.
Are travelers at risk for the bird flu? What are the symptoms associated with this illness? ABCNEWS.com asked Dr. William Schaffner to answer questions about the risks the disease poses to the U.S. population. Schaffner is an infectious disease specialist and the chairman of the Department of Preventive Medicine at Vanderbilt University in Nashville, Tenn.
Question: What is bird flu?
Answer: Influenza comes in a variety of forms and this is a type of influenza that is for all intents and purposes confined to birds.
This virus is found principally in Southeast Asia but it has also infected flocks of migratory water fowl, and their migration patterns extend into Eastern Europe. That's important because it opens up the possibility that this virus could be transported by migratory water fowl and get into the poultry populations of Eastern Europe.
Although bird-related influenza has been known for years, a single strain that could spread geographically so extensively is a new phenomenon.
Question: How is it contracted?
Answer: Influenza is what we call a respiratory virus. When we breathe out, there are microscopic secretions that can contain a virus. And if you're close to an individual and in turn inhale those secretions, you can get an infection.
Also, if I get these secretions on my hands I can touch someone else and perhaps inoculate them.
Influenza has the capacity to spread rather rapidly in enclosed spaces, and remember: This is a wintertime virus, as most respiratory viruses are.
Eating an infected chicken won't give you the infection; that's not a risk. The problem is that you don't want an infected flock around that could spread it to more chickens or humans.
Question: What are the symptoms?
Answer: All influenza manifests itself pretty much the same way: You feel poorly. You develop a fever, general aches and pains, you lose your appetite and your energy, and importantly you develop a cough. We're talking about adults here – in children they may cough a little less and have abdominal pain.
Influenza viruses can range – as with most infectious diseases – from relatively mild to severe and overwhelming, and because humans have not had experience with this sort of influenza virus before, it is anticipated that basically anyone on the planet is susceptible and therefore illness would be rather severe.
This anticipation of avian flu is born out of these few early cases in humans, where about half the people have died. That's a frightening thought.
Question: Is there a cure?
Answer: Yes, fortunately we have antiviral drugs and the most common one is Tamiflu. It has to be administered early and it is effective in shortening the course of infection and bringing it more rapidly to a close.
Of course there's also supportive care, keeping up your liquids, perhaps taking medication for the fever, going to bed for a few days, and seriously ill people would have to be admitted to the hospital.
Question: How do you know if you are at risk for avian flu?
Answer: There is a notification network around the world run by the World Health Organization that will let us know if the bird flu has developed the capacity to get into humans and has spread.
When that happens, we will be tracking this bird flu. So it's not a matter of someone in Peoria, Ill., or New Mexico becoming ill and thinking, "Gee I might have the bird flu." It doesn't happen in isolation like that. We will know it's coming.
The Centers for Disease Control would let us know and the various state health departments would be involved. This happens every year with regular influenza.
Question: Is there an added risk for those who frequently travel abroad?
Answer: The concern is for that very small proportion of international travelers who find themselves out in the agricultural areas of China, Vietnam, Thailand and the like.
In those circumstances, they will find that those farmers are using precautions and they should be careful. Spend as little time as needed there, don't get close to the chickens and certainly wash yourself very quickly when you leave. If you have soiled garments or shoes, handle them carefully and get them cleaned.
This is not to be a major concern until early cases are discovered and start to spread, and when that happens we would hear a lot just as we did during the era of SARS. They can travel from one country to another and when they land they can spread the disease. There's also some spread on airplanes because one of the things we know is you can start exhaling the infective virus 24 or 48 hours before you get sick, so you can be completely healthy and transmit the virus.
Question: How concerned do you think Americans should be?
Answer: There should be a degree of concern, but free-floating anxiety doesn't help us very much.
I regard this a little bit like the levees in New Orleans. There has to be a sense in the population that the national influenza preparedness plan be completed and that we actually engage in it. Now, this costs money and so the average person can let their member of Congress know or send a note to the White House that says you are concerned about the bird flu.
It's a little bit like hurricanes in New Orleans – we don't know when it's going to come but we know it will come, so let's spend some money. I think it's worrisome that in this era there are proposals to reduce the CDC's budget; that seems awkward as we're trying to prepare ourselves for pandemic flu. The CDC is the lead agency that would activate the whole public health system in our response.
Spirometry measures how much air you breathe in and out and how fast you blow it out. This is measured two ways: peak expiratory flow rate (PEFR) and forced expiratory volume in 1 second (FEV1).
PEFR is the fastest rate at which you can blow air out of your lungs. FEV1 refers to the amount of air you can blow out in 1 second.
During the test, a technician will ask you to take a deep breath in. Then, you'll blow as hard as you can into a tube connected to a small machine. The machine is called a spirometer.
Your doctor may have you inhale a medicine that helps open your airways. He or she will want to see whether the medicine changes or improves the test results.
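As a hypothetical illustration of that before-and-after comparison, the sketch below computes how much FEV1 improves after the airway-opening medicine. The 12% and 200 mL threshold is a commonly cited rule of thumb for a significant bronchodilator response (it is not taken from this article), and interpretation is always up to the clinician.

```python
# Hypothetical before/after FEV1 comparison for a bronchodilator test.
# The 12% / 200 mL threshold is a commonly cited rule of thumb, used here for illustration.
def bronchodilator_response(fev1_before_l, fev1_after_l):
    change_l = fev1_after_l - fev1_before_l
    change_pct = 100.0 * change_l / fev1_before_l
    significant = change_pct >= 12.0 and change_l >= 0.200
    return change_l, change_pct, significant

change_l, change_pct, significant = bronchodilator_response(2.10, 2.45)
print(f"FEV1 improved by {change_l * 1000:.0f} mL ({change_pct:.1f}%); "
      f"significant response: {significant}")
```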
Spirometry helps check for conditions that affect how much air you can breathe in, such as pulmonary fibrosis (scarring of the lung tissue). The test also helps detect diseases that affect how fast you can breathe air out, like asthma and COPD (chronic obstructive pulmonary disease).
Lung Volume Measurement
Body plethysmography (pleth-iz-MOG-re-fe) is a test that measures how much air is present in your lungs when you take a deep breath. It also measures how much air remains in your lungs after you breathe out fully.
During the test, you sit inside a glass booth and breathe into a tube that's attached to a computer.
For other lung function tests, you might breathe in nitrogen or helium gas and then blow it out. The gas you breathe out is measured to show how much air your lungs can hold.
Lung volume measurement can help diagnose pulmonary fibrosis or a stiff or weak chest wall.
Lung Diffusion Capacity
This test measures how well oxygen passes from your lungs to your bloodstream. During this test, you breathe in a type of gas through a tube. You hold your breath for a brief moment and then blow out the gas.
Abnormal test results may suggest loss of lung tissue, emphysema (a type of COPD), very bad scarring of the lung tissue, or problems with blood flow through the body's arteries.
Tests To Measure Oxygen Level
Pulse oximetry and arterial blood gas tests show how much oxygen is in your blood. During pulse oximetry, a small sensor is attached to your finger or ear. The sensor uses light to estimate how much oxygen is in your blood. This test is painless and no needles are used.
For an arterial blood gas test, a blood sample is taken from an artery, usually in your wrist. The sample is sent to a laboratory, where its oxygen level is measured. You may feel some discomfort during an arterial blood gas test because a needle is used to take the blood sample.
Testing in Infants and Young Children
Spirometry and other measures of lung function usually can be done for children older than 6 years, if they can follow directions well. Spirometry might be tried in children as young as 5 years. However, technicians who have special training with young children may need to do the testing.
Instead of spirometry, a growing number of medical centers measure respiratory system resistance. This is another way to test lung function in young children.
The child wears nose clips and has his or her cheeks supported with an adult's hands. The child breathes in and out quietly on a mouthpiece, while the technician measures changes in pressure at the mouth. During these lung function tests, parents can help comfort their children and encourage them to cooperate.
Very young children (younger than 2 years) may need an infant lung function test. This requires special equipment and medical staff. This type of test is available only at a few medical centers.
The doctor gives the child medicine to help him or her sleep through the test. A technician places a mask over the child's nose and mouth and a vest around the child's chest.
The mask and vest are attached to a lung function machine. The machine gently pushes air into the child's lungs through the mask. As the child exhales, the vest slightly squeezes his or her chest. This helps push more air out of the lungs. The exhaled air is then measured.
In children younger than 5 years, doctors likely will use signs and symptoms, medical history, and a physical exam to diagnose lung problems.
Doctors can use pulse oximetry and arterial blood gas tests for children of all ages.
Although East Asia has been subjected to the challenges and influences of Western civilization, including pressures of modernization and of capitalism, it did not succumb, as did other regions of Asia, to the colonization efforts of the West. Contacts between East Asia and the West resulted, however, in political, economic, military, and ideological conditions that have contributed to mass emigration, through displacement and recruitment, from China, Japan, and Korea.
The United States of America, emerging as a world power in the late nineteenth century and as an increasingly dominant world power in the twentieth, became one of the main destinations of East Asian emigrants. This historical pattern has resulted in an impressive literature. Asian immigration has followed a pattern of labor shortages followed by legal restrictions. Recruited as laborers during different periods (the Chinese, 1850-1882; the Japanese, 1885-1924; the Koreans, 1903-1905), East Asians often came first to work in the fields of Hawaii or in the western areas of the United States mainland. Many of the Hawaiian Asians later moved to the mainland, but as a result of Asian immigration, Hawaii in the 1990 census was the only state of the union where Asian Americans constituted the majority of the population. Until after World War II, Asian Americans were subjected to many discriminatory practices and laws. Their property rights and civil rights were often limited or violated, and...
Influenza is primarily a disease of birds. Most emerging infectious diseases in humans started out as diseases of animals, what are called zoonoses. We worry about zoonoses for that reason. It is one of the hardwired tendencies of any species to think of their own survival first — that’s natural — but humans are only one species amongst many. And while we worry about viruses we might catch from animals, the animals are also getting sick. It’s not just influenza we share with birds. Birds suffer from other diseases they can pass on to humans, too, and one of these is West Nile virus (WNV) infection, one of a group of insect-borne encephalitis viruses that infect both birds and humans. Since arriving in North America in 1999, WNV has killed almost 1000 people but done considerably more damage to the bird population. A new paper in Nature tries to estimate how much damage.
West Nile virus or a similar disease could wipe out many of the U.S.’s backyard birds, profoundly changing some of the country’s most familiar wildlife and ecosystems.
That's the finding of a new analysis of 26 years of data from the national Breeding Bird Survey – data that reveal the dramatic effects of the 1999 arrival of West Nile virus in the U.S.
Lead author Shannon LaDeau of the Smithsonian Migratory Bird Center and her colleagues found that species that thrive near humans suffered extremely high death rates from the disease.
Up to 45 percent of crows died after the virus arrived, with robins, chickadees, and eastern bluebirds not far behind.
Some of these populations had been increasing before the virus hit, which is a good indication that West Nile caused the declines, the authors write.
The disease may not completely wipe out bird populations on its own, the scientists add, but it is an alarming addition to existing population threats such as climate change and habitat loss.
“They’re our backyard species, and we haven’t been watching them as much as we’re watching the other species, because people consider them safe,” LaDeau told National Geographic News. (National Geographic News)
The virus is spread from bird to bird — and from bird to human — by mosquitoes. You might wonder how a mosquito can bite a feather covered bird, but there are small areas around the eye and elsewhere where the insect can and does gain access to the bird’s bloodstream. If that mosquito bites us, we can be infected, but this requires a “bridge species” of mosquito that bites both humans and birds and most mosquitoes don’t do that. Enough do, however, so 12,000 human cases of WNV have appeared in 44 of the 50 states and bird infections in every state except Alaska and Hawaii (map and 2006 case counts here). Human cases occur where there are infected birds because person to person transfer of the virus by a mosquito is thought not to happen because the level of virus in the human blood stream is not high enough.
But it is the effect on 20 common species of backyard birds that is the subject of the Nature paper. Disentangling the effects of the virus from the many other factors that affect bird populations is not an easy task:
To do so, they designed species-specific predictive models based on knowledge of the prevalence of the virus, exposure to mosquitoes and overall mortality for 20 different bird species, each species representing a specific combination of urban (human) association and susceptibility to the virus. The model was applied to 26 years of population data for six geographical regions to construct probability distributions for the expected abundance of each bird species in a given region before and after the arrival of the virus. (Carsten Rahbek News and Views, Nature)
The most affected birds, by their estimate, were the “peri-domestic” (backyard) species common to cities and suburban environments. One of the hardest hit was the American robin, but other species, including crows, chickadees, and bluebirds also experienced ten year population lows after the large outbreaks of 2002 – 2003. It was originally thought that crows and jays were the main victims of WNV, but that is probably because these birds are large and their corpses visible. We now know that many bird species, probably well over 100, are also infected, although many are apparently quite unaffected by the virus and may be a significant reservoir.
This is one of the first studies to gauge the effects of this bird infection on common species in North America, and even in the small number of species examined in the paper there are many uncertainties. WNV is a nasty human disease with a significant case fatality ratio and frequently debilitating aftereffects in survivors.
Maybe we don't care that much about other species. But there are some good reasons why we should.
This lesson plan was created by members of Historica Canada’s teacher community. Historica Canada does not take responsibility for the accuracy or availability of any links herein, and the views reflected in these learning tools may not necessarily reflect those of Historica Canada. We welcome feedback regarding the content that may be linked to or included in these learning tools; email us at [email protected].
The teacher will ask the students to research how different groups of Canadians were affected by Canada’s participation in the Second World War. Students will then create a journal that follows the life of a fictitious Canadian before, during and after the war. Although the students will be creating a piece of fiction, the journals they write must be historically accurate and make reference to historical events. The teacher will assign, or allow students to choose, their character from a list of brief character descriptions. To complete the task, students will have to research the relevant information, and then develop and record the plausible actions and thoughts of their historical character in a three-part journal. The journal will be written in the first person and follow their character before, during and after the Second World War. Finally, student journals will be presented in class and then bound and placed on display in the school library. By exploring the many different experiences of Canadians during the war, students will develop a sense of historical empathy and an appreciation of the diversity of perspective in Canada’s history.
Students may choose one of the following characters. Once they have developed a persona for their character, they will conduct research to enable them to write a journal from the perspective of their character. The teacher may wish to add a local dimension to the following list. In addition, a student may wish to base her/his character’s bio on the student's own heritage group. Both of these amendments will make the project more meaningful to the students and their wider learning community. Suggested list of Canadians:
- Soldier in Europe
– Light Infantry
- Member of the Royal Canadian Air Force stationed in Britain
- Female nurse in Europe
- Soldier in Hong Kong
- A pacifist (religious reasons)
- The mother of a soldier in Europe
- War widow
- Japanese Canadian living in Vancouver
- Jewish Canadian
- Canadian of German heritage
- French Canadian conscripted
- Metis soldier
- Aboriginal soldier
- Woman working at a Canadian munitions factory
- Saskatchewan farmer conscripted
- A deserter from the army
Students will gain an understanding and empathy with the Canadian war experience by researching primary sources and then creating personal narratives.
Time Allowance: 4 – 5, 80-minute periods. This will include 2 research periods, 2 periods to write the journal entries and one period to present work to the class.
Preparatory Phase, Prior to Presenting the Scenario
1. Present the historical context. This may include a brief outline of the causes of the Second World War, the players involved, and a timeline highlighting Canada’s military participation in the war, as well as significant battles and statistics. Students should also be introduced to the question of why Canada entered the war and the various aspects of that debate.
2. Students then explore the idea of how Canada’s involvement in the war is remembered. This can be done through a series of photos that include battlefields, as well as Canadian memorial sites both in Europe and in Canada. The military-themed Heritage Minutes could also be used in this exploration of war and the Canadian memory.
3. Students may then reflect on memory in general, and consider their own memories of events in their own lifetime. For example, the teacher may ask them to write about their memory of the first day of school in as much detail as possible (how they felt, what they were wearing, describe the school, their friends, parents/guardian, teacher). Students will then share their memory with the class and discover that, although there are similarities, each one of them will have a unique perspective of this event in their lives. Would it be possible for them to agree on a common memory that could then be recorded in history books? What is the difference between personal memory and national memory? Whose memory becomes national memory?
4. Present the scenario “Memories of the War:” In commemoration of the Second World War, your school library would like to publish a collection of journal accounts written by Canadians involved in, or affected by, Canada’s participation in the war. However, because actual journals are hard to find, the school librarian has enlisted the help of your class to write a historically accurate collection of journal entries that reflect the unique perspectives of a wide range of Canadian experiences during the Second World War. This “Memories of the War” collection will be showcased in the library – on the front counter – during the week of Remembrance Day.
5. Hand out the assignment as outlined in the attachment below.
Students will need paper on which to write their journals. They may wish to soak the paper in tea or carefully burn the edges in order to give the journal a more authentic old look.
Worksheet: Memories of War Assignment
Students may wish to use the following resources:
Class history text
‘Canada: A People’s History’ DVD and text series
The Canadian Encyclopedia
Supporting documents for this Learning Tool: Memories of War Assignment (152 KB)
1. A person with dyslexia may become exhausted far more quickly when reading, using up to 20 times more energy than a typical reader. Allow extra time for tasks to compensate, or have a scribe/writing partner to help.
2. Everyone with dyslexia has a special gift, as their dyslexic brain will work amazingly well in a different way to a typical brain.
3. It’s worth finding out what this special gift is. Ask what the person is good at or likes doing. They may like building 3D models or be great at problem solving.
4. Children with dyslexia may have very creative brains and see the bigger picture.
5. Reading a flat 2D page may be difficult for someone with dyslexia who may see their world in 3D images, or moving pictures.
6. Holding a book up in the air, or tilting the page may be easier to read, find something that can be used to give a book a slant on the desk.
7. Dyslexia is a language-accessing problem: manipulating words may be difficult, and remembering some (trigger) words may be hard. Make a note of trigger words and find ways to remember them.
8. Try multi-sensory games to build memory hooks for a dyslexic learner to use (see, hear, touch).
9. Don’t give up!
10. Many people with dyslexia go on to be successful entrepreneurs, due to their amazing "thinking out of the box" skills.
Try a practice set to see how sorting a simple list works in a Word document. On a blank Word document, enter the list of names shown below. Be certain that when you type DaleAnn and JonLuc you do not separate them with a space between each proper name. For the purposes of this exercise, you want Word to interpret these names as one word.
Now let's sort the list.
• Select the list of names carefully to avoid picking up the line above or the line below your list of names. One technique that will help to control your selection is to place your insertion point in front of the B in Brian. Hold your Shift key down and left-click with your mouse directly after the R in Anger. Your entire list should be selected.
• Select from the main menu Table | Sort
• From the Sort By drop down, select Word 2 to sort by the last name.
• Accept the default for type "Text"
• Accept the default for an Ascending Sort
• Click on OK
Your list will now be sorted by the last name alphabetically beginning with Anger. Had you selected descending order, the list would be sorted by the last name beginning with Thile.
Data consistency can have a very big impact on how Word interprets the sort command you are executing. Here is what happens when you separate Jon Luc and Dale Ann with a space. Word now gives you a choice of sorting with Word 1, Word 2 or Word 3 in the "Sort by" drop down box. However, you get inconsistent results whether you chose Word 2 or Word 3. When you choose Word 2, Word sorts Dale Ann between Anger and Brown and sorts Jon Luc between Grier and Marshall because Ann and Luc are the 2nd word for those two names. When you choose Word 3, it doesn't change the sort order other than to put Dale Ann Bradley and Jon Luc Ponte at the end of the list because those are the only two names that contain a third word. Try doing each of these activities, so that you can see what happens.
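For readers who want to see the same logic outside of Word, here is a minimal Python sketch. The name list is hypothetical (the worksheet's exact names are not reproduced here); it sorts by the second word, just as the "Sort by: Word 2" option does, and shows how an extra space changes the result.

```python
# Hypothetical sample list; not the exact names from the worksheet above.
names = ["Brian Marshall", "DaleAnn Bradley", "JonLuc Ponte", "Chris Thile", "Darol Anger"]

# Sort by "Word 2" (the last name), as Word does with the Sort By drop-down.
by_last_name = sorted(names, key=lambda full: full.split()[1].lower())
print(by_last_name)   # begins with 'Darol Anger', ends with 'Chris Thile'

# If "DaleAnn" is typed as "Dale Ann", word 2 becomes "Ann" instead of "Bradley",
# reproducing the inconsistency described above.
inconsistent = ["Brian Marshall", "Dale Ann Bradley", "Jon Luc Ponte", "Chris Thile", "Darol Anger"]
print(sorted(inconsistent, key=lambda full: full.split()[1].lower()))
```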
The more you practice with Word features, the more quickly you will become competent.
Valid Versus a Well-Formed XML Document
Part of the XML For Dummies Cheat Sheet
In XML, a valid document must conform to the rules in its DTD (Document Type Definition) or schema, which defines what elements can appear in the document and how elements may nest within one another. A well-formed document, by contrast, only has to follow XML's basic syntax rules; but if a document isn't well-formed it doesn't get far in the XML world, so you need to play by these rules whenever you create an XML document. A well-formed document must have these components (a quick way to check this in code follows the list):
All beginning and ending tags match up. In other words, opening and closing parts must always contain the same name in the same case: <tag> . . . </tag> or <TAG> . . . </TAG>, but not <tag> . . . </TAG>.
Empty elements follow special XML syntax, for example, <empty_element/>.
All attribute values occur within single or double quotation marks: <element id="value"> or <element id='value'>. |
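As a rough illustration of the difference, a minimal Python sketch using the standard library parser can check well-formedness; the sample strings below are made up for this example, and checking validity against a DTD or schema would require an additional tool.

```python
import xml.etree.ElementTree as ET

well_formed = '<note id="1"><to>Tove</to><from>Jani</from><empty_element/></note>'
not_well_formed = '<note><to>Tove</TO></note>'   # opening and closing tags differ in case

for doc in (well_formed, not_well_formed):
    try:
        ET.fromstring(doc)              # parses only if the document is well-formed
        print("well-formed:", doc)
    except ET.ParseError as err:
        print("not well-formed:", err)  # e.g. "mismatched tag"
```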
Ancient languages hold a treasure trove of information about the culture, politics and commerce of millennia past. Yet, reconstructing them to reveal clues into human history can require decades of painstaking work. Now, scientists at the University of California, Berkeley, have created an automated “time machine,” of sorts, that will greatly accelerate and improve the process of reconstructing hundreds of ancestral languages.
In a compelling example of how “big data” and machine learning are beginning to make a significant impact on all facets of knowledge, researchers from UC Berkeley and the University of British Columbia have created a computer program that can rapidly reconstruct “proto-languages” — the linguistic ancestors from which all modern languages have evolved. These earliest-known languages include Proto-Indo-European, Proto-Afroasiatic and, in this case, Proto-Austronesian, which gave rise to languages spoken in Southeast Asia, parts of continental Asia, Australasia and the Pacific.
Picky eater fish clean up seaweeds from coral reefs
Using underwater video cameras to record fish feeding on South Pacific coral reefs, scientists have found that herbivorous fish can be picky eaters — a trait that could spell trouble for endangered reef systems.
In a study done at the Fiji Islands, the researchers learnt that just four species of herbivorous fish were primarily responsible for removing common and potentially harmful seaweeds on reefs — and that each type of seaweed is eaten by a different fish species. The research demonstrates that particular species, and certain mixes of species, are potentially critical to the health of reef systems.
Related research also showed that even small marine protected areas — locations where fishing is forbidden — can encourage reef recovery.
“Of the nearly 30 species of bigger herbivores on the reef, there were four that were doing almost all of the feeding on the seven species of seaweeds that we studied,” said Mark Hay, a professor in the School of Biology at the Georgia Institute of Technology.
Carbon sponge could soak up coal emissions
Emissions from coal power stations could be drastically reduced by a new, energy-efficient material that adsorbs large amounts of carbon dioxide, then releases it when exposed to sunlight.
In a study published Feb. 11 in Angewandte Chemie, Monash University and CSIRO scientists for the first time discovered a photosensitive metal organic framework (MOF) — a class of materials known for their exceptional capacity to store gases. This has created a powerful and cost-effective new tool to capture and store, or potentially recycle, carbon dioxide.
By utilising sunlight to release the stored carbon, the new material overcomes the problems of expense and inefficiency associated with current, energy-intensive methods of carbon capture. Current technologies use liquid capture materials that are then heated in a prolonged process to release the carbon dioxide for storage.
Associate Professor Bradley Ladewig of the Monash Department of Chemical Engineering said the MOF was an exciting development in emissions reduction technology. |
From the moment they are born, children are learning how to control their small and large muscles. While there is a pattern to this motor development, no two children develop on the exact same schedule. Here are some tips to keep in mind while planning a motor curriculum for your classroom.
When discussing motor development, it is important to note that children develop from head to toe and from the inside out; meaning, most infants will learn to control their head and neck first, then arms and finally their legs and feet. Also, this means that children will gain control of their trunk before learning to control their hands and fingers effectively.
When planning activities for infants, toddlers and preschoolers, be sure to keep these simple concepts in mind. Keep a balance of appropriate gross motor activities with fine motor games and challenges. Know the learning level of the children in your care, as well as their special needs and general temperament before attempting any gross or fine motor activity.
Appropriate Infant Activities
Infant Motor Development: 0 to 6 months: By the time a child is six months old, she will learn to lift and control her head and neck, roll from her back to her stomach and may even be learning to sit up independently and reach for objects. This month by month guide will help you understand what development can be expected, and ways to help young infants reach these milestones.
Infant Motor Development: 7 to 12 months: By the time a child celebrates his first birthday, he will learn to sit independently, crawl, pull to a stand and may even be ready to take his first steps! Tremendous gross motor development takes place between the seventh and twelfth month of life, and caregivers should be ready to help facilitate this rapid growth. The tips included here will help you plan appropriate activities to guide both fine and gross motor development for older infants.
What's Up With Tummy Time? Many infants will cry in protest when placed on their stomachs to play, but tummy time is something parents and caregivers can't afford to skip. Bright Hub author Dr. Anne Zachry, a pediatric occupational therapist, outlines the importance of this play time and provides tips to the types of games and activities infants should be engaged in while on their tummies.
Infant Toys for Fine and Gross Motor Development: Most parents and teachers can appreciate the benefits of homemade toys and materials. Not only are they cost effective, but they also allow caregivers to control the types of materials their little ones are exposed to. These quick and inexpensive ideas will get little ones moving and help develop both large and small muscles.
The Importance of Crawling: Is it really important for babies to learn to crawl before they walk? Many cognitive connections are made when infants learn to crawl, including helping the left and right brain work together. This fascinating look at the relationship between crawling and other developmental milestones help you understand the importance of this skill.
Activities for Busy Toddlers
Toddler Gross Motor Games: Toddlers may still be a little wobbly on their feet, but walking, bending and hopping are skills many toddlers have begun to perfect. Simple games will help toddlers practice these new skills, as well as build gross motor strength. Get those little legs moving with these fun activities.
Toddler Fine Motor Games: Many toddlers are just beginning to gain control of their small muscles, including hands and fingers. Providing plenty of opportunities in your classroom for fine motor development will help toddlers meet these small muscle milestones. Try water play, boxes and clothespins for fun ways to help facilitate fine motor development.
Exercise Games for Toddlers: Get those little bodies moving, hopping and running! Exercise can help toddlers understand the importance of keeping healthy, as well as provide plenty of gross motor skill development. These fun and easy exercises will keep toddlers engaged and those large muscles pumping.
Assessing Motor Skills in Early Childhood: Identifying delays and differences early and putting the appropriate interventions in place can make a world of difference to a child with significant gross and fine motor development issues. The Peabody Motor Development Scale is one tool occupational therapists use to identify and assess motor delays. Learn more about this test and what to expect if you are caring for a child with motor delays with this informative guide.
Fine & Gross Motor Development for Preschoolers
Preschool Physical Fitness Theme: Introduce your preschool class to the benefits of daily exercise with this fun theme. These cross-curricular ideas include activities for circle time, reading, math, music and art. Planning and implementing these ideas will also help you take stock of and assess the gross and fine motor development of the preschoolers in your classroom.
Galloping: An Important Preschool Locomotor Skill: Can your preschoolers gallop? Can you? Galloping involves the use of both the left and right hemispheres of the brain, control of the trunk, legs and feet, as well as the development of a good sense of balance. Try some of the games outlined here to help children learn this important gross motor skill.
Preschool Obstacle Course: What better way to challenge your preschooler's gross motor skills than with a fun obstacle course? Include activities for crawling, walking, climbing, jumping and stretching in your indoor or outdoor obstacle course. The tips here can help you plan a course for optimum gross motor development.
Outdoor Preschool Active Movement Games: When children play outdoors, they are able to stretch and move their large muscles more than they would be able to do inside of a classroom. Gross motor development is one of the main reasons preschoolers need outdoor play each and every day. These fun movement games will help you plan a solid outdoor curriculum and get your preschoolers moving!
A List of Fine Motor Activities for the Preschool Classroom: Fine motor skills are important for learning how to write, zip a coat, tie a shoe, and many other school readiness activities. There are many preschool activities that can help facilitate development of the small muscles, including painting, turning book pages, playing with dough and manipulating puzzles. Keep this handy list in your classroom and refer to it whenever your fine motor curriculum needs a boost.
Preschool Writing Activities for Fine Motor Development: Learning to write is a huge accomplishment for preschoolers. Getting little hands ready for this high level skill will take some time, and the activities in this article will help you. Learn the importance of repetition and reward, as well as some simple games you can plan for your preschool fine motor curriculum.
The Learning Environment
As shown here, children learn both fine and gross motor skills through a variety of methods. When teaching these skills it is important for preschool students to learn through play. Whether the outcome is to throw a ball, hold a pencil, stand on one foot, or read there must be constant interaction between the teacher and student and teacher-modeling for success. Provide consistent feedback to these little ones so they know when they are improving, and most importantly, give praise and have fun! |
In Astrobiology as presently discussed, the word “habitable” is different to ‘Earth-like’. A habitable planet is defined as one on which liquid water is possible on the surface and thus some kind of Life-As-We-Know-It (LAWKI) is possible. In popular imagination the word ‘habitable’ means ‘Earth-like enough for humans’ – and that’s a much tighter set of constraints. Life can survive in a much broader environmental range than ‘unprotected’ humans. Even most of planet Earth is inimical to human life without technological adaptations, but that’s another discussion.
Clearly there’s some preconditions to Earth-like. Similar size, similar gravity, similar water/land ratio, similar insolation levels, and similar atmosphere mix. For example, we can probably live comfortably in a pressure range between half and twice current levels, if the gas mix remains the same. More oxygen, and the pressure minimum goes down, or more of some other mixing gas, and the pressure range we can live in goes up. Presently there’s no hard data on how a planet’s atmospheric pressure varies with its other features, so I’ll leave it for future discoveries to inform us. On Earth the pressure has varied between maybe half to maybe ~1.5 times current levels. Even in early Dinosaur times, oxygen was once so low we’d’ve found it hard to breathe. Yet animals did survive so maybe we could tweak our own biology and survive too.
Two possibly essential features have had some interesting recent results.
(1) Water delivery. Earth-like means significant amounts of water. At least oceans of some depth and water in the mantle to keep the geophysical wheels lubricated. Sean Raymond & Andre Izidoro posted this preprint:
…which simulated the delivery of water rich planetesimals to the Asteroid Belt and the Inner Planets, during the formation of Jupiter and Saturn. They concluded that such water delivery was a pretty generic process of Giant Planet formation. But just how frequent are Jupiter-analogs? The latest work indicates that about 6% of Sun-like stars (i.e. FGK stars, about 20% of stars) have Gas Giants between 3-7 AU. Red dwarfs, the most abundant type of stars, have about half that frequency of Gas Giants.
Sean Raymond gives a popular account of his paper here: Where did Earth’s (and the asteroid belt’s) water come from?
(2) Plate tectonics powers geochemical cycles on Earth, keeping elements from being buried in the oceans by erosion. A new study suggesting it's possible for planets around 1/3 of stars appeared this week: Stellar Chemical Clues As To The Rarity of Exoplanetary Tectonics
The basic idea is that tectonics is driven by the ability of certain crustal mineral mixes to increase in density as they’re buried and transformed in the mantle. This pushes old crust down, allowing new crust to erupt. It’s a balance between the tendency to float and the tendency to sink. Tectonics needs both. Thus the right mix of minerals is required, though it’s a pretty broad range. In about 1/3 of the stellar chemical composition range, a planetary crust wouldn’t float, while for another 1/3 the crust won’t sink. And the middling range combines sinking and floating in the right way.
If we're looking at *just* Sun-like stars, then we get a frequency of 'Earth-like' mixes of ~3/50 x 1/3 = 1/50, i.e. about 1 in 50 Sun-like stars probably has a more 'Earth-like' planet. And thus about 1 in 250 stars in general, or roughly 400 million planets in our Galaxy of 100 billion stars. Of course all the data suggests that *every* star has at least a planet and a significant fraction of those sit in the "Goldilocks Zone" of just the right insolation. But they'll be *different* even if they're warm enough to be 'habitable'.
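A quick back-of-envelope check of those numbers, using only the rough figures quoted above; treat it as an order-of-magnitude sketch, not a measurement.

```python
# All values are the rough fractions quoted in this post, not new data.
sunlike_fraction = 0.20          # FGK stars as a share of all stars
gas_giant_fraction = 0.06        # Sun-like stars with a gas giant at 3-7 AU
tectonics_fraction = 1.0 / 3.0   # stars whose crust composition can sustain tectonics

per_sunlike = gas_giant_fraction * tectonics_fraction   # ~1/50
per_star = per_sunlike * sunlike_fraction                # ~1/250
galaxy_count = per_star * 100e9                          # ~4e8 planets in 100 billion stars
print(per_sunlike, per_star, galaxy_count)
```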
Darwin's Theory of Evolution by Natural Selection
Darwin's theory was based on four observations:
• Individuals within a species differ from each other– there is variation.
• Offspring resemble their parents – characteristics are inherited.
• Far more offspring are generally produced than survive to maturity – most organisms die young from predation, disease and competition.
• Populations are usually fairly constant in size.
Darwin realised that the organisms that die young were not random, but were selected by their characteristics.
He concluded that individuals that were better adapted to their environment compete better than the others, survive longer and reproduce more, so passing on more of their successful genes to the next generation.
Darwin's Theory of Evolution by Natural Selection
- Darwin used the analogy of selective breeding (or artificial selection) to explain natural selection.
- In selective breeding, desirable characteristics are chosen by humans, and only those individuals with the best characteristics are used for breeding.
- In this way species can be changed over a period of time.
- All domesticated species of animal and plant have been selectively bred like this, often for thousands of years, so that most of the animals and plants we are most familiar with are not really natural and are nothing like their wild relatives (if any exist).
Summary of Natural Selection
1. There is genetic variation in a characteristic within a population
2. Individuals with characteristics that make them less well adapted to their environment will die young from predation, disease or competition.
3. Individuals with characteristics that make them well adapted to their environment will survive and reproduce.
4. The allele frequency will change in each generation.
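To make point 4 concrete, here is a minimal sketch of the standard one-locus, two-allele selection equation. The fitness values are illustrative assumptions, not taken from any dataset.

```python
# One locus, alleles A and a; genotype fitnesses are assumed for illustration.
def next_allele_freq(p, w_AA=1.0, w_Aa=0.9, w_aa=0.8):
    q = 1.0 - p
    mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    # Standard single-locus selection equation: frequency of A after selection.
    return (p * p * w_AA + p * q * w_Aa) / mean_w

p = 0.1                      # starting frequency of allele A
for generation in range(20):
    p = next_allele_freq(p)
print(round(p, 3))           # the favoured allele's frequency rises each generation
```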
Types of Natural Selection
- Populations change over time as their environment changes.
- These changes can be recorded as changing histograms of a particular phenotype (which of course is due to changes in the underlying alleles).
- Occurs when one extreme phenotype (e.g. tallest) is favoured over the other extreme (e.g. shortest).
- This happens when the environment changes in a particular way.
- "Environment" includes biotic as well as abiotic factors, so organisms evolve in response to each other. e.g. if predators run faster there is selective pressure for prey to run faster, or if one tree species grows taller, there is selective pressure for other to grow tall.
- Most environments do change (e.g. due to migration of new species, or natural catastrophes, or climate change, or to sea level change, or continental drift, etc.), so directional selection is common.
Types of Natural Selection 2
Disruptive (or Diverging) Selection.
- This occurs when both extremes of phenotype are selected over intermediate types.
- For example in a population of finches, birds with large and small beaks feed on large and small seeds respectively and both do well, but birds with intermediate beaks have no advantage, and are selected against.
Stabilising (or Normalising) Selection.
- This occurs when the intermediate phenotype is selected over extreme phenotypes, and tends to occur when the environment doesn't change much.
- For example birds’ eggs and human babies of intermediate birth weight are most likely to survive.
- Natural selection doesn't have to cause a directional change, and if an environment doesn't change there is no pressure for a well-adapted species to change.
- Fossils suggest that many species remain unchanged for long periods of geological time.
The Origin of New Species – Speciation
New species arise when one existing species splits into two reproductively isolated populations that go their separate ways. This most commonly happens when the two populations become physically separated from each other (allopatric speciation):
1. Start with an interbreeding population of one species.
2. The population becomes divided by a physical barrier such as water, mountains, desert, or just a large distance. This can happen when some of the population migrates or is dispersed, or when the geography changes catastrophically (e.g. earthquakes, volcanoes, floods) or gradually (erosion, continental drift). The populations must be reproductively isolated, so that there is no gene flow between the groups.
3. If the environments (abiotic or biotic) are different in the two places (and they almost certainly will be), then different characteristics will be selected by natural selection and the two populations will evolve differently. Even if the environments are similar, the populations may still change by random genetic drift, especially if the population is small. The allele frequencies in the two populations will become different.
The Origin of New Species – Speciation 2
4. Much later, if the barrier is now removed and the two populations meet again, they are now so different that they can no longer interbreed. They therefore remain reproductively isolated and are two distinct species. They may both be different from the original species, if it still exists elsewhere. |
(Gr. iodes, violet) Discovered by Courtois in 1811, iodine, a halogen, occurs sparingly in the form of iodides: in sea water, from which it is assimilated by seaweeds; in Chilean saltpeter and nitrate-bearing earth, known as caliche; in brines from old sea deposits; and in brackish waters from oil and salt wells.
Ultrapure iodine can be obtained from the reaction of potassium iodide with copper sulfate. Several other methods of isolating the element are known.
Iodine is a bluish-black, lustrous solid, volatilizing at ordinary temperatures into a blue-violet gas with an irritating odor; it forms compounds with many elements, but is less active than the other halogens, which displace it from iodides. Iodine exhibits some metallic-like properties. It dissolves readily in chloroform, carbon tetrachloride, or carbon disulfide to form beautiful purple solutions. It is only slightly soluble in water.
Thirty isotopes are recognized. Only one stable isotope, 127I is found in nature. The artificial radioisotope 131I, with a half-life of 8 days, has been used in treating the thyroid gland. The most common compounds are the iodides of sodium and potassium (KI) and the iodates (KIO3). Lack of iodine is the cause of goiter.
Iodine compounds are important in organic chemistry and very useful in medicine. Iodides, and thyroxine which contains iodine, are used internally in medicine, and as a solution of KI and iodine in alcohol is used for external wounds. Potassium iodide finds use in photography. The deep blue color with starch solution is characteristic of the free element.
Care should be taken in handling and using iodine, as contact with the skin can cause lesions; iodine vapor is intensely irritating to the eyes and mucous membranes. The maximum allowable concentration of iodine in air should not exceed 1 mg/m3 (8-hour time-weighted average, 40-hour week).
Sources: CRC Handbook of Chemistry and Physics and the American Chemical Society.
Pluto’s mysterious floating hills
The hills are likely miniature versions of the larger jumbled mountains on Sputnik Planum’s western border.
The nitrogen ice glaciers on Pluto appear to carry an intriguing cargo: numerous isolated hills that may be fragments of water ice from Pluto’s surrounding uplands. These hills individually measure one to several miles across, according to images and data from NASA’s New Horizons mission.
The hills, which are in the vast ice plain informally named Sputnik Planum within Pluto’s “heart,” are likely miniature versions of the larger jumbled mountains on Sputnik Planum’s western border. They are yet another example of Pluto’s fascinating and abundant geological activity.
Because water ice is less dense than nitrogen-dominated ice, scientists believe these water ice hills are floating in a sea of frozen nitrogen and move over time like icebergs in Earth’s Arctic Ocean. The hills are likely fragments of the rugged uplands that have broken away and are being carried by the nitrogen glaciers into Sputnik Planum. “Chains” of the drifting hills are formed along the flow paths of the glaciers. When the hills enter the cellular terrain of central Sputnik Planum, they become subject to the convective motions of the nitrogen ice and are pushed to the edges of the cells, where the hills cluster in groups reaching up to 12 miles (20 kilometers) across.
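The iceberg analogy can be made quantitative with a simple buoyancy estimate: the submerged fraction of a floating block equals the ratio of its density to the density of the fluid it floats in. The densities below are typical laboratory values assumed for illustration, not figures from the mission team.

```python
# Iceberg-style buoyancy estimate with assumed, commonly quoted densities.
rho_water_ice = 0.92      # g/cm^3, water ice
rho_nitrogen_ice = 1.03   # g/cm^3, solid nitrogen near Pluto surface temperatures

submerged_fraction = rho_water_ice / rho_nitrogen_ice
print(f"~{submerged_fraction:.0%} submerged, ~{1 - submerged_fraction:.0%} standing above the surface")
```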
At the northern end of the image, the feature informally named Challenger Colles — honoring the crew of the lost space shuttle Challenger — appears to be an especially large accumulation of these hills, measuring 37 by 22 miles (60 by 35km). This feature is located near the boundary with the uplands, away from the cellular terrain, and may represent a location where hills have been “beached” due to the nitrogen ice being especially shallow.
The image shows the inset in context next to a larger view that covers most of Pluto’s encounter hemisphere. The inset was obtained by New Horizons’ Multispectral Visible Imaging Camera (MVIC) instrument. North is up; illumination is from the top-left of the image. The image resolution is about 1,050 feet (320 meters) per pixel. The image measures a little over 300 miles (500km) long and about 210 miles (340km) wide. It was obtained at a range of approximately 9,950 miles (16,000km) from Pluto, about 12 minutes before New Horizons’ closest approach to Pluto on July 14, 2015. |
A Glow-in-the-Dark World Beneath the Sea
Who's the brightest of them all? Ocean dwellers in the know will tell you it's the many species of "glow-in-the-dark" fish that inhabit the deepest depths of the ocean. Most fish that live in the deepest depths are able to give off their own glow, a process called bioluminescence. In fact, in the sea, bioluminescence is everywhere – in fish, sea slugs, squid, jellyfish and many other deep-sea dwellers.

Camouflage. Some fish, like the lantern fish or lampfish, from the family Myctophidae, use photophores as camouflage by producing a counter-illumination. Their photophores are located on their undersides, heads and tails. They spend their days deep in the ocean, but often migrate to shallower depths at night. They often form schools of hundreds of thousands of fish, and deep in the ocean they are preyed upon by tuna, bonito, albacore, dolphin fish and others; but they are invisible from below because their glowing undersides match the light of the sunlit or moonlit sea surface. At night in shallower water, their dark unlighted backs blend into the darkness of the deep water and they are invisible from above, keeping them safe from predators such as sea birds, penguins and seals.
What is Bioluminescence?
Bioluminescence is a glow that is the result of a chemical reaction within the tissues. This reaction takes place when a special enzyme and a special protein inside the cells are exposed to oxygen and water at a temperature of about 75 degrees Fahrenheit (25 degrees Celsius). The result is energy that gives off light and creates patches of bioluminescent tissue, photophores, which are kidney-shaped organs arranged in distinct groups on the organism. It's the same process that causes a firefly to light up in the summer night.
Almost all marine bioluminescence is blue in color. There are two reasons for this: First, blue-green light travels furthest in the water. Second, most organisms are sensitive only to blue light – they lack the visual ability to absorb longer red or shorter ultraviolet light.
Photophores help fish survive in the deep sea in several ways:
Some fish and shrimp are able to confuse their predators by leaving behind clouds of luminescent or glowing bacteria. A common North Atlantic shrimp takes a more direct approach. When confronted by an enemy it spews out a glowing cloud of bioluminescent plankton, then takes its chance to flip away from the startled predator.
Lure. Some fish use their glow as a lure to attract their dinner. For example, bioluminescent bacteria live in pockets under some fish's eyes, making the eyes look like tiny headlights, which attract smaller fish. When a predator swims by, however, the fish closes its eyes.
The female angler fish eats fish and shrimp that are attracted by what looks like a fishing rod growing out of the top of her head with a light at the end. She also attracts prey by vibrating the lure, much like a fisherman working a colorful lure.
Identification and attraction. Fish may use their glow to signal other fish, and the distinct groupings of photophores may help them to identify others in their species. Some experts believe fish use bioluminescence in the same fashion as a peacock displaying his colors. And what female wouldn't be attracted to a glowing mate?
A Light Show Under the Sea
Today few creatures on earth are as interesting and unique as fish. They have survived in an environment completely different from ours – in water that is often very dark and very deep, where sea life is sometimes a flashing, glowing, flickering show of lights.
NOTE: Some glow-in-the-dark fish are available to the aquarist. However, these are colored artificially to improve salability. For example, the "Painted" Glass Fish is not a natural color morph. These fish are subjected to a torturous dip in a chemical bath to strip their protective slime coat, painted with fluorescent colored paint, and then placed into an irritant to expedite the re-generation of the slime coat. The color eventually fades (usually within 6 months) from those few specimens that survive long enough for this to happen. Legitimate aquarists do not condone the practice.
Unfortunately, laws preventing cruel treatment of animals only apply directly to mammals, and never apply to amphibians or fish. |
Ultraviolet sterilization (UV) is a process to eliminate biological contamination, namely parasites, fungi and bacteria. Two types of sterilizer are commercially available, both tube-shaped. Generally the one containing a wet bulb, where the water passes directly over the UV bulb, is cheaper.
The other type available has a protective quartz sleeve around the bulb (dry bulb). The latter has the advantage of easier cleaning, since debris and slime will eventually settle on the bulb or quartz sleeve. Both work on the same principle.
UV sterilization exposes the contaminants to a lethal dose of energy in the form of light. The UV light alters the DNA of the pathogens, virtually gluing DNA molecules together. The changed cell structure prevents the organism from reproducing itself (sterilization), thereby eliminating it.
Owners of central multi-tank filtration systems or expensive reef set-ups should consider a UV sterilizer.
The effectiveness of UV sterilization depends on the exposure time and the light/energy intensity. Generally a dose of about 36,000 microwatt-seconds per square centimeter will kill or damage the common pathogens in an aquarium; lower doses can still successfully remove most of them. (1 microwatt is one millionth of a watt, so 36,000 microwatts correspond to 0.036 W.)
In general, the effectiveness of the sterilizer is based solely on the flow rate of the water.
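The link between flow rate and dose can be sketched with a simple calculation: each parcel of water is exposed for roughly the chamber volume divided by the flow rate, and the dose is the intensity multiplied by that exposure time. Every number below is an assumed, illustrative value rather than a manufacturer specification.

```python
# All figures are assumptions chosen only to illustrate the flow-rate effect.
intensity_uW_cm2 = 30000.0   # lamp intensity at the water surface, microwatts per cm^2
chamber_volume_L = 0.5       # irradiated volume inside the sterilizer tube
flow_L_per_hour = 400.0      # pump flow through the unit

exposure_s = chamber_volume_L / flow_L_per_hour * 3600.0        # seconds of exposure
dose_uWs_cm2 = intensity_uW_cm2 * exposure_s                    # microwatt-seconds per cm^2
print(f"{exposure_s:.1f} s exposure -> {dose_uWs_cm2:,.0f} microwatt-seconds/cm^2")
# Halving the flow rate doubles the exposure time, and therefore doubles the dose.
```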
Hard water will result in mineral build up on the bulb/quartz surface, reducing its effectiveness. These minerals can also protect the pathogens from the energy/light source, allowing them to pass the system unharmed. This effect is also known as shadowing.
UV bulbs don’t simply burn out, but will gradually lose their efficiency over time, by as much as 60% in one year. The general recommendation is to replace the bulb every 6 months.
Some medications are rendered useless when exposed to UV light, especially antibiotics. The UV light should be turned off while medicating the tank.
Some reef critters depend on microscopic organisms as a food source. These organisms grow and reproduce freely but will be destroyed if they pass through the UV sterilizer tube.
UV light is not only damaging to pathogens in the water but also harmful to the human eye. Avoid any direct or indirect eye contact with the light.
UV Light Close-up
Light is characterized by its wavelength, expressed in nanometers (nm). UV is defined as electromagnetic radiation with a wavelength from 10 to 400 nm. The natural source of UV light is the sun. A mercury vapor lamp artificially creates UV at different wavelengths as follows:
- UVA 315 – 400 nm usually found in black light or tanning equipment
- UVB 280 – 315 nm causes sunburn
- UVC 200 – 280 nm damaging to exposed cells; this is the germicidal range used in sterilizers
Radio Waves from Space
by SpaceHike.comMore articles in Telescopes
Karl Jansky discovered radio static coming from the Milky Way in 1932, and this was the beginning of radio astronomy. The British scientist, Stanley Hey, heard strong radio outbursts from the Sun in 1942, and in 1949, the first radio sources outside our solar system were detected by radio astronomers in Australia.
With the assistance of radio astronomy, some of the most explosive and energetic objects in the universe have been discovered. These include radiation from the Big Bang, supernova remnants, and super-massive black holes. Radio telescopes also have the ability to find molecules in space; these molecules are the raw materials for the beginnings of new planets and new life. The remnants of a supernova create radio waves when high-speed electrons become trapped in magnetic fields. This type of emission is called synchrotron radiation, and it is strongest at the longer wavelengths. It is forbidden for anyone to broadcast on the wavelengths used by scientists to study the universe, yet radio telescopes face an enemy of increasing power: radio pollution, which comes from cell phones, to name only one source.
For scientists to study radio waves, the waves need to hit the inside of a large dish, which reflects and focuses them onto an antenna. The antenna produces electrical signals that are sent to a computer. The computer stores these signals and then converts them into electronic images. There is more to this process than simply listening to radio static.
Japan has a dish that is 45 meters in diameter, covering more than ten times the area of a tennis court, at the Nobeyama Radio Observatory. The telescope's surface is smooth and has been formed to an accuracy of less than the width of a blade of grass. The precision of the surface allows the dish to focus radiation; it is so precise that it can focus millimeter-wavelength radiation from gas molecules in the space between the stars.
The drawback to radio telescopes is the fuzzy view compared to optical telescopes. This results from radio waves being so much longer than light waves. Scientists compensate for this by synchronizing several smaller telescopes. The Very Large Array has 27 dishes that can be moved along three railroad tracks; the maximum separation is approximately 36 km. The Very Long Baseline Array, which stretches across the United States, provides an even sharper image than the Hubble Space Telescope.
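The fuzziness and the benefit of long baselines both follow from the diffraction limit, roughly the wavelength divided by the aperture (or baseline). The sketch below uses the 45 m dish and the 21 cm hydrogen line mentioned in this article, plus an assumed continent-scale baseline of about 8,000 km for comparison.

```python
import math

# Diffraction-limited angular resolution: theta ~ wavelength / aperture (radians),
# converted here to arcseconds. Baseline and mirror values marked "assumed" are
# illustrative, not measurements from this article.
def resolution_arcsec(wavelength_m, aperture_m):
    return math.degrees(wavelength_m / aperture_m) * 3600

print(resolution_arcsec(0.21, 45.0))      # 45 m dish at 21 cm: roughly 960 arcsec
print(resolution_arcsec(0.21, 8.0e6))     # assumed ~8,000 km baseline: roughly 0.005 arcsec
print(resolution_arcsec(550e-9, 2.4))     # a 2.4 m optical mirror, for comparison: ~0.05 arcsec
```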
A single line of telescopes leaves gaps that may distort the final radio picture. A solution was suggested by Martin Ryle in the 1950s: rather than taking snapshot views that were full of holes, the telescopes would observe the same radio source for twelve hours. As the Earth rotates, each telescope is carried around the others in a slow half-circle, filling in the gaps and synthesizing the view of a much larger telescope.
Harvard scientists found a 21 cm signal sent out by hydrogen in the Milky Way in 1951, and the first quasar was discovered in 1963. The first interstellar molecule, hydroxyl, was discovered in 1963 by its radiation wavelength. The first pulsar was found in 1967 by Tony Hewish and Jocelyn Bell Burnell. The Cosmic Background Explorer measured the cosmic background radiation ripples in 1992.
1. Couper, Heather and Nigel Henbest. Space Encyclopedia DK Publishing, Inc.: NY 1999
2. Editors. Secrets of the Universe. International Master Publishing: US. 1999 |
The more greenhouse gases that are in the atmosphere, the more heat gets trapped, and as the temperature rises, the result is climate change. Scientists predict that as the climate changes, the spread of diseases will increase, agricultural production will decline, and extreme weather such as floods and tornadoes will become common.
Carbon dioxide is one of the most problematic greenhouse gases as far as climate change goes. The amount of carbon in the atmosphere increases as fossil fuels are burned, so the use of coal, oil, and gas for heat and transportation means that carbon dioxide is released into the atmosphere in excessive amounts, at very fast rates, and the Earth does not have the capacity to absorb it. The effects on the climate are now becoming obvious. The northern hemisphere is warmer than it has been at any point in the past 1000 years, natural disasters including hurricanes and floods are increasing, and changes in lake and river levels mean that food supplies are threatened.
In order to stall or turn back global warming, we need to dramatically reduce the amount of fossil fuels we burn, and ensure that carbon sinks are protected. Unfortunately, by developing the Alberta tar sands, oil companies are doing precisely the opposite.
Tar sands development is the single largest contributor to the growth of climate-changing emissions in Canada: it accounts for 40 million tonnes of CO2 emissions per year and means that thousands of hectares of ancient Boreal Forest are clearcut and destroyed. These numbers are increasing: by 2011 the tar sands are expected to emit 80 million tonnes of CO2. Note that these numbers only take into account the production of oil from the tar sands; once tar sands oil is burned as fuel, it creates further “end-use” emissions.
Canada made an international commitment to meeting GHG emissions reduction targets outlined in the Kyoto Protocol – the goal was to reduce emissions to six per cent below 1990 levels by 2010. Unfortunately, Canada has been unsuccessful at achieving even this small number so far. As of 2004 emissions levels had significantly increased. In order to meet the targets, emissions must go down by 280 million tonnes per year. If the tar sands continue to operate as predicted, there is no hope of accomplishing this.
Why do the tar sands cause so many emissions?
The oil that is being sought from the tar sands is literally stuck in tar, and it is very difficult to separate the two. Huge industrial machines are needed to dig the mineable tar sands out of the earth, and these burn a lot of fuel. Roughly two tonnes of tar sands must be moved to produce a single barrel of oil, emitting about 35 kg of CO2 equivalent and making oil from the tar sands the most energy-intensive type of oil available.
If the tar sands are located deeper than 100 metres from the Earth’s surface, and cannot be mined, they are extracted by a process called steam-assisted gravity drainage (SAGD), which creates even more emissions than mining: 55 kg of CO2 per single barrel of oil. In SAGD operations, steam is injected into the tar sands to make it flow, and then it is pumped to the surface. Heating the water for the steam greatly increases the amount of fossil fuels that are burned.
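To see how the per-barrel figures quoted above translate into annual totals, here is a small sketch. The production rate used is an assumed round number chosen only for illustration; it is not a figure from this article, and end-use emissions are not included.

```python
# Per-barrel production emissions quoted in the text: ~35 kg CO2e (mining), ~55 kg CO2e (SAGD).
# The production volume is an assumption for illustration only.
KG_PER_TONNE = 1000
barrels_per_day = 1_000_000   # assumed production rate

for method, kg_per_barrel in [("mining", 35), ("SAGD", 55)]:
    annual_tonnes = barrels_per_day * 365 * kg_per_barrel / KG_PER_TONNE
    print(f"{method}: ~{annual_tonnes / 1e6:.1f} million tonnes CO2e per year")
```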
As mentioned above, bitumen is the heaviest and worst quality oil available. It has to be processed and refined heavily to be turned into synthetic crude oil, which involves further use of steam and energy. |
Homo sapiens arose in Africa at least 300,000 years ago and left to colonize the globe. Scientists think there were several dispersals from Africa, not all equally successful. Last week's report of a human jaw showed some members of our species had reached Israel by 177,000 to 194,000 years ago.
Now comes a discovery in India of stone tools, showing a style that has been associated elsewhere with our species. They were fashioned from 385,000 years ago to 172,000 years ago, showing evidence of continuity and development over that time. That starting point is a lot earlier than scientists generally think Homo sapiens left Africa.
This tool style has also been attributed to Neanderthals and possibly other species. So it's impossible to say whether the tools were made by Homo sapiens or some evolutionary cousin, say researchers who reported the finding Wednesday in the journal Nature.

Nowhere is there any indication that modern humans are associated with the tools at Attirampakkam (ATM). In fact, there are no hominin remains whatsoever at ATM. The Indian subcontinent has always had a paucity of human remains, the most notable of which is the Narmada Homo erectus cranium. In Europe, the Levallois technology, which also shows up at ATM, is almost exclusively associated with Neandertals, and in the Levant, at the Skhul, Qafzeh, Tabun and Amud sites in the Mt. Carmel region, it is used by both Neandertals and early moderns. Consequently, while someone was using early Middle Palaeolithic tools at ATM, we don't know who.
What is incredibly striking about the ATM site is that, from 385 ky down to 172 ky, there is continuous technological evolution, beginning with what the describers call the “terminal Acheulean,” through a punch flake industry with points, to knives. Another point made by the authors is that this Middle Palaeolithic industry appears and flourishes during a time where other sites in India are still using Acheulean technology. This suggests “that spatial variability among Palaeolithic cultural sequences is larger than previously thought.” From the Nature article1:
The behavioural transformations that mark the advent of the Indian Middle Palaeolithic at ATM are summarized by the following diagnostic features: the obsolescence of Acheulian large-flake reduction sequences, with a directional shift towards smaller tool components; the adoption and continuance of Levallois recurrent and preferential strategies; a gradual intensification of blade reduction; and an increased use of finer grained quartzite during phase II than during phase I. A gradual discontinuation of biface use—which becomes definite at ATM after approximately 172 ± 41 ka—has been reported at other Middle Palaeolithic and Middle Stone Age sites worldwide (see Supplementary Information and references therein).

This is an astounding find for so many reasons. It establishes the appearance of the Levallois technology, one we know originated in Africa, outside Africa over 150 thousand years earlier than we thought. This is a technology that was considered to have originated some 400 thousand years ago. We may now have to revise this date further into the past.
It shows a clear progression of human cultural evolution, from the late Acheulean hand axes, through Middle Palaeolithic Levallois technology, to a more blade-oriented technology. Sadly, it appears as though the site began to fall into disuse and the sequence stops cold around 74 kya. This is not surprising since it coincides with the Toba supervolcano eruption in Indonesia. Nonetheless, as the authors point out, it suggests multiple migrations out of Africa, first of archaic Homo sapiens and then of modern Homo sapiens such as those at Misliya, and that these groups interacted with the archaic hominins they encountered along the way.
1Akhilesh, K., Pappu, S., Rajapara, H. M., Gunnell, Y., Shukla, A. D., & Singhvi, A. K. (2018). Early Middle Palaeolithic culture in India around 385–172 ka reframes Out of Africa models. Nature, 554, 97. https://doi.org/10.1038/nature25444 |
What is Bank Rate?
The bank rate is the rate at which the central bank of a country (in India, the RBI is the central bank) lends money to commercial banks against their securities. In other words, the bank rate is the rate at which banks borrow short-term funds from the RBI. The newer terms base rate and prime rate have largely replaced the term bank rate in everyday use. When the RBI hikes the bank rate, it not only directly or indirectly affects the interest rates on deposits, bond issues and mortgages, but also raises or lowers the EMI (equated monthly installment) that borrowers pay.
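As an illustration of the EMI effect, the standard EMI formula can be applied before and after a hypothetical rate hike. The loan size, rates and term below are assumptions chosen only to show the direction of the effect; they are not RBI figures.

```python
# Standard EMI formula: EMI = P * r * (1 + r)**n / ((1 + r)**n - 1),
# where r is the monthly rate and n the number of monthly installments.
def emi(principal, annual_rate_pct, years):
    r = annual_rate_pct / 100 / 12
    n = years * 12
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

loan = 2_000_000   # assumed 20-lakh home loan over 20 years
print(round(emi(loan, 8.5, 20)))   # EMI before a hike, roughly 17,350
print(round(emi(loan, 9.0, 20)))   # EMI after banks pass on a 50-basis-point hike, roughly 18,000
```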
How does change in Bank Rate affect Inflation?
The RBI (Reserve Bank of India) can regulate the level of economic activity by managing the bank rate. When unemployment is high, the RBI can lower the bank rate, which helps to expand the economy by lowering the cost of funds for borrowers. Conversely, when inflation is higher than the desired level, the RBI can raise the bank rate, which helps to rein in the economy. Therefore, if the RBI hikes the bank rate, it will almost certainly compel the commercial banks to hike their lending rates.
“I have a dream…” the famous words spoken by Dr. Martin Luther King Jr. on August 28, 1963 still impacts humanity as we pause in annual remembrance of this civil rights martyr. PA has MLK day off to commemorate this federal holiday.
Born on January 15, 1929 in Atlanta, Georgia, King was a predominant social activist, protestant minister, and leader of the civil rights movement. Two hundred fifty thousand gathered in Washington, D.C. at the Lincoln Memorial to join the crusade by marching on Washington and hearing his famous “I Have a Dream” speech. His words reawakened the American conscience to the importance of equality and freedom among men of every race and creed.
Dr. Arthur Hippler, Chairman of the Religion Department, talks in PA moral theology classes about the importance of King’s Letter from Birmingham Jail. King was imprisoned for demonstrating in defiance of a court injunction and wrote the letter during the Easter season, responding to objections from local clergymen. They were criticizing him for breaking the law, and King wanted to convince the rabbi, the Episcopalians and the Catholic bishop to develop different ways of thinking.
“I really appreciate the [letter from] Birmingham Jail as it has real substance to it and it explains thoroughly our rights and duties given by God. It is also something that is often ignored or people don’t hear about it. King gives value to our human dignity and he ensures that people regardless of color can protect and respect each other,” Hippler stated.
He also noted, “In his letter he [drew] freely from Plato and Socrates as they were great thinkers. King sought common tradition as a great treasure trove of human thinking, destiny, fairness and dignity.”
King also organized the Montgomery Bus Boycott which was a protest against segregation on public transport. “Boycotts are used to influence behavior in positive ways as it mobilizes public opinion and exercises economic pressure in a society,” Hippler added.
It’s not just upper school students who learn to appreciate King in PA classrooms. Lower school students made art projects of Dr. King and shared some of the things they were grateful for. Annaliese, a pre-kindergartener, says she has a dream that “everyone has food and water,” and Stella, another pre-kindergartener, says she has a dream that “everyone would be thankful for the things they have in life.”
PA celebrates King’s life in many ways. As Dr. Flanders notes, “King’s movement worked for the full vindication of the equal humanity of all, regardless of skin color.”
An asteroid identified as 2021 NY1 is traveling through space at 21,000 MPH. This month it will pass the Earth, and the National Aeronautics and Space Administration (NASA) has flagged it as potentially hazardous. The asteroid has been classified as a “Near-Earth Object.”
NASA is now tracking the asteroid regarding its future trajectory. The asteroid is between 427 and 984 feet wide and is most likely to pass the Earth on Sept. 22. This will be the day when the asteroid will be at its closest point with the Earth as it will come within 930,487 miles of the planet. It will approach the Earth less than 1.3 times the distance from Earth to the Sun.
According to SpaceReference.org, the 2021 NY1 asteroid orbits the Sun every 1,400 days or 3.83 years. “Based on its brightness and the way it reflects light, 2021 NY1 is probably between 0.127 to 0.284 kilometers in diameter, making it a small to average asteroid, very roughly comparable in size to a school bus or smaller,” it said.
Analysts suggest that the next time 2021 NY1 will come this close to the Earth will be on September 23, 2105.
Near-Earth Objects are comets and asteroids that the gravitational attraction of nearby planets has nudged into orbits that allow them to enter the Earth’s neighborhood.
According to NASA’s official Near-Earth Objects’ website, cosmic objects are mainly composed of “water ice with embedded dust particles, comets originally formed in the cold outer planetary system while most of the rocky asteroids formed in the warmer inner solar system between the orbits of Mars and Jupiter. The scientific interest in comets and asteroids is due largely to their status as the relatively unchanged remnant debris from the solar system formation process some 4.6 billion years ago.” |
In today's world, rapid changes are taking place and the impact of these changes on education and teachers is great.
Education today requires that a child be shaped into a person capable of contributing effectively to society and to the community of the world. Children living in the world today need to be taught to understand, not just to remember. A sense of responsibility for knowledge must be developed; without responsibility, learning without wisdom can be dangerous.
Children should be raised to "learn how to learn." They need to enjoy the adventure of designing things and the experience of performing, and still be able to cope with everything else that is part of the work.
The art of teaching, like the art of healing, must be adapted to each individual child. A teacher must learn what holds students back, and must be able to understand and respond openly to questions such as: why do children not go to school with the same interest and enthusiasm with which they play? Why do they want to throw away books about nature and birds, yet run after butterflies in the garden?
Much of today's teaching and learning takes place outside the school. A great deal of information about the world and other people is picked up day to day from a variety of sources, but schools, colleges and universities help integrate these pieces of information into a sound education.
All satisfactory formal education must achieve at least three minimum goals:
(i) First, education must provide the student with the basic knowledge and basic skills that he or she will need as a working member of the community.
(ii) Second, there is a social purpose. Education should strive to integrate students into the society in which they will work, establish the ethical and moral norms that govern their decisions, and give them a sense of social responsibility.
(iii) Third, there is a cultural goal. Education should help the student become more independent, develop inner resources, and lead a rich and rewarding life. |
Coastal impacts, adaptation, and vulnerabilities: a technical input to the 2013 National Climate Assessment
The coast has long provided communities with a multitude of benefits including an abundance of natural resources that sustain economies, societies, and ecosystems. Coasts provide natural harbors for commerce, trade, and transportation; beaches and shorelines that attract residents and tourists; and wetlands and estuaries that are critical for fisheries and water resources. Coastal ecosystems provide critical functions to cycle and move nutrients, store carbon, detoxify wastes, and purify air and water. These areas also mitigate floods and buffer against coastal storms that bring high winds and salt water inland and erode the shore. Coastal regions are critical in the development, transportation, and processing of oil and natural gas resources and, more recently, are being explored as a source of energy captured from wind and waves. The many benefits and opportunities provided in coastal areas have strengthened our economic reliance on coastal resources. Consequently, the high demands placed on the coastal environment will increase commensurately with human activity. Because 35 U.S. states, commonwealths, and territories have coastlines that border the oceans or Great Lakes, impacts to coastline systems will reverberate through social, economic, and natural systems across the U.S.
Impacts on coastal systems are among the most costly and most certain consequences of a warming climate (Nicholls et al., 2007). The warming atmosphere is expected to accelerate sea-level rise as a result of the decline of glaciers and ice sheets and the thermal expansion of sea water. As mean sea level rises, coastal shorelines will retreat and low-lying areas will tend to be inundated more frequently, if not permanently, by the advancing sea. As atmospheric temperature increases and rainfall patterns change, soil moisture and runoff to the coast are likely to be altered. An increase in the intensity of climatic extremes such as storms and heat spells, coupled with other impacts of climate change and the effects of human development, could affect the sustainability of many existing coastal communities and natural resources.
This report, one of a series of technical inputs for the third NCA conducted under the auspices of the U.S. Global Change Research Program, examines the known effects and relationships of climate change variables on the coasts of the U.S. It describes the impacts on natural and human systems, including several major sectors of the U.S. economy, and the progress and challenges to planning and implementing adaptation options. Below we present the key findings from each chapter of the report, beginning with the following key findings from Chapter 1: Introduction and Context.
Title: Coastal impacts, adaptation, and vulnerabilities: a technical input to the 2013 National Climate Assessment
Publisher location: Washington, D.C.
Contributing office(s): Office of Associate Director-Climate and Land Use Change
Description: xxx, 185 p.
Larger work title: National Climate Assessment regional technical input reports
|
Bolts are cylindrical locking elements, supplied either as plain cylinders or with a collar. They are therefore well suited to securing and locking functions. Steel or stainless steel is usually chosen as the material.
Today, bolts are safe and indispensable connecting elements in industry and in model making, as well as safety elements that are produced in thousands of different shapes and sizes. In this article we look at the different bolt shapes and pin bolts, how they are constructed and which materials are used: plastic bolts, aluminium bolts, steel bolts, and the very high quality stainless steel bolts, which are available in two different alloys, V2A and V4A.

The manufacturing industry in particular requires metal bolts in thousands of designs; they perform safety functions on machine parts and assemblies every day and must carry high loads. In model making, metal bolts are installed in many different positions so that components can be connected by a kind of bolted joint that allows axial rotation without permitting radial misalignment. There are of course other types of bolts, but not all of them can be covered here; this article concentrates on bolts with a locking or securing function, that is, bolts that lock or that can themselves be locked. Every bolt installed in such a standard part has a very specific structure, matched exactly to the inner workings of the part so that its function remains reliable.

As a rule, industry does not use bolts made purely of plastic for these tasks. Such bolts could be made of polyamide or another polymer, and although the load-bearing capacity of modern plastics has increased considerably, it is not sufficient for securing functions: the shear forces are extremely high in relation to the bolt cross-section, and the result would be bolt breakage. Because machines and machine parts demand maximum safety, plastic bolts are ruled out for this purpose. The remaining materials can be ranked by strength. Aluminium bolts are considerably more load-bearing and are used for light locking duties on machines where weight still matters. Plain steel bolts can carry much higher loads, since steel is available in defined strength classes. Where the highest strength and shear resistance must be combined with corrosion resistance, stainless steel bolts are the right choice. They come in two alloys that can be selected according to requirements: V2A is usually sufficient when the bolt is exposed to ordinary water or moisture, while V4A, the highest grade, also resists alkalis and acids and can be used in the chemical industry without problems.
The bolt that engages in the component
A bolt installed as part of a standard detent (grid) element must perform a specific function, and several different materials are used in such an element. The housing body, which enables the locking function together with the bolt, can be made of metal: aluminium, steel, or stainless steel as V2A or V4A. On such a base body there is always a head that can be lifted away from the screwed-on component so that the bolt can be released. These heads are not heavily loaded, so they can be made of various plastics or of metal such as steel, stainless steel or aluminium; the variations are almost unlimited. Inside the housing of the standard part the bolt can move up and down, that is, axially, and by virtue of its construction it can also rotate radially, which is of little consequence for such a detent element.

The base body is made of metal and can be produced either as a casting or as a turned part. Producing a cast base body follows the investment casting process: a pattern of the body is first made in wax, the wax part is dipped in a ceramic slurry, and in a subsequent firing process at high temperature the slurry layer hardens as the water evaporates. In the same step the wax melts out, leaving a solid shell. This shell forms the mould and can then be filled with liquid metal such as steel or stainless steel. Once the casting has cooled, the shell is broken away, and the base body can be finish-machined with the appropriate holes and fits for the bolt.

The bolt itself is never a casting but always a turned part, produced on CNC lathes. A bar loader connected to the machine feeds in the raw material; once the correct metal bar stock is loaded and the machine is set up, production of the bolts can begin. The first piece of bar is drawn into the machine and clamped in the chuck, the contours of the bolt are turned, and after the machining cycle the bolt is parted off and falls into a collecting container. When enough bolts have been produced, they are gathered for assembly into locking bolts.

Finally, consider the head of the bolt body and how it is made. Assuming a plastic head part is used, it is produced in a completely different process and mounted on the bolt body later: a two-part metal mould is milled, clamped in an injection moulding machine, and filled with plastic under high pressure. When the plastic has cooled, the mould opens and the bolt head can be removed. In the last step the metal base body is assembled with the head, and the bolt that is to snap into place is integrated into the base body. The finished component is then ready as a standard part with a bolt as its detent element and can be installed. |
Digital Citizenship at Collins Hill
What is Digital Citizenship?
Digital citizenship is the quality of the habits, actions, and technology consumption patterns that affect individuals and communities. Basically, it is the set of behaviors and attitudes we exhibit when using the technological tools vital to our digital lives. Being a good digital citizen has many aspects, such as recognizing your digital footprint, staying safe online, and communicating effectively in our technological age.
How does Collins Hill teach digital citizenship to its community?
Collins Hill takes a multi-faceted approach to teaching our community about digital citizenship. First, we use curriculum developed by Common Sense Media to teach students about digital citizenship in our advisement classes. Second, beginning in January 2020, every month will have a different digital citizenship focus, which we will approach in two ways. We will send out a short description of the topic to the community via our school website, social media channels, and school newsletter, and we hope parents will use it to speak with their students about what it means to be a good digital citizen at school and at home. In addition, weekly digital citizenship tips and bits of information will be included in our daily announcements and our scrolling news announcements.
Resources for Parents |
Please note that you can find the download button below each document; this worksheet includes video lessons and the answer key. Does a homonym have to be both a homograph and a homophone, or can it be just one or the other? The term homonym can refer to both homographs and homophones. Homographs (homo meaning same, graph meaning to write) are words that are spelled alike but not always pronounced the same. Homophones (homo meaning same, phone meaning sound) are words that are pronounced the same but differ in spelling and meaning. Homonyms are words that share both spelling and pronunciation but have different meanings, for example bear (an animal) and bear (to withstand or hold up), or can (a metal container) and can (to be able). Using a simple chart can help clarify the differences between homographs, homophones and homonyms. To heighten interest, all of the sentences in the exercises are quotes from various authors' writings in books and magazine articles published over the years. Exercises include circling the homophones in each sentence and writing which version of to, too or two best completes each sentence. If you found these worksheets useful, please also check out the Context Clues, Punctuation Marks, and Gender of Nouns worksheets with answer keys.
You may also often see homographs, words that are spelled the same but pronounced differently. Practise some homophones by choosing the right option and writing the words on the empty lines. Another great article about homophones is available as a visual explanation of homophones. |
Poultry farmers could soon be the source of much more than buffalo wings and omelets. Chicken byproducts could be used to make biodegradable plastics and cheap energy, two new studies find.
Many types of animal waste and plants, including corn and soybeans, have been proposed as alternative sources of plastics and fuel, and demand for them is on the rise.
So one researcher has turned to agricultural waste, such as poultry feathers and eggs that didn’t pass inspection, which are currently used in low-value animal feed or simply thrown away, to develop more environmentally friendly plastics.
“Twelve percent of all plastic packaging ends up in landfills because only a fraction is recycled,” said Virginia Tech researcher Justin Barone, who is heading up the agricultural waste effort. “Once in a landfill, it doesn’t biodegrade. The challenge is, how can we create a simpler plastic bag or a bottle that will biodegrade?”
Today, packaging adds 29 million tons of non-biodegradable plastic waste to landfills every year, according to the U.S. Environmental Protection Agency.
Plastics from biomass (animal waste and plant materials), like some recently developed to dissolve in seawater, are made the same way as petroleum-based plastics, are actually cheaper to manufacture and meet or exceed most performance standards. But they lack the same water resistance or longevity as conventional plastics, said Barone, who presented his research at the March 29 American Chemical Society National Meeting in Chicago.
Adding polymers created with keratin, a protein that makes hair, nails and feathers strong, may improve the strength and longevity of the plastics made from chicken feathers and eggs. Other modifications to the polymer, such as adding chicken fat as a lubricant, should help the polymer to be processed faster and smell better.
Another scientist has developed a furnace system that converts poultry litter into a fuel that can be used to heat chicken houses.
The fuel, made from poultry waste and rice hulls and wood shavings once used as chicken bedding, can be gathered from hen houses, stored on-site, and put into a heat-generating furnace, reducing farmers’ energy costs by as much as 80 percent.
While the fuel would reduce greenhouse gas emissions, it does produce an ash that could hurt sensitive watersheds if dumped there, said Tom Costello of the University of Arkansas, who led work to develop the furnace.
|
This story is from the category Computing Power
Date posted: 20/02/2014
New technology to capture the kinetic energy of our everyday movements, such as walking, and to convert it into electrical energy has come a step closer thanks to research to be published in the International Journal Biomechatronics and Biomedical Robotics.
Researchers have for many years attempted to harvest energy from our everyday movements to allow us to trickle-charge electronic devices while walking, without the need for expensive and cumbersome gadgets such as solar panels or hand-cranked chargers. Lightweight devices, however, can generate only a few millivolts from our low-frequency movements. This is not sufficient to drive electrons through a semiconductor diode so that a direct current can be tapped off and used to charge a device, even a low-power medical implant.
Now, Jiayang Song and Kean Aw of The University of Auckland, New Zealand, have built an energy harvester consisting of a snake-shaped strip of silicone (polydimethylsiloxane), which acts as a flexible cantilever that bends back and forth with body movements. The cantilever is attached to a conducting metal coil with a strong neodymium (NdFeB) magnet inside, all enclosed in a polymer casing. When a conductor moves through a magnetic field, a current is induced in the conductor; this has been the basis of electrical generation in power stations, dynamos and other such systems since the discovery of the effect in the nineteenth century. Using a powerful magnet and a conducting coil with many turns means a higher voltage can be produced.
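As a rough illustration of why a strong magnet and a many-turn coil raise the output voltage, the sketch below applies Faraday's law for a sinusoidally varying flux, where the peak EMF is approximately N·B·A·2πf. Every number here is an assumption chosen only to land in the tens-of-millivolt range described in the article; none of them are values reported by the researchers.

```python
import math

# Rough Faraday's-law estimate for a coil oscillating through a magnet's field.
# All values below are assumed, illustrative figures; none are from the paper.
N = 1000       # number of turns in the coil (assumed)
B = 0.3        # tesla, flux density near a small NdFeB magnet (assumed)
A = 1.0e-5     # m^2, effective coil cross-section (assumed)
f = 2.0        # Hz, roughly the frequency of a walking motion (assumed)

emf_peak = N * B * A * 2 * math.pi * f   # peak EMF for a sinusoidal flux change
print(f"peak EMF ~ {emf_peak * 1000:.0f} mV")   # a few tens of millivolts
```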
In order to extract the electricity generated, there is a need to include special circuitry that takes only the positive voltage and passes it along to a rechargeable battery. In previous work, this circuitry includes a rectifying diode that allows current to flow in one positive direction only and blocks the reverse, negative, current. Unfortunately, the development of kinetic chargers has been stymied by current diode technology that requires a voltage of around 200 millivolts to drive a current.
Song and Aw have now side-stepped this obstacle by using a tiny electrical transformer and a capacitor, which acts like a microelectronic battery. Their charger, weighing just a few grams, oscillates, wiggling the coil back and forth through the neodymium magnet's field, and produces 40 millivolts. The transformer captures this voltage and the capacitor stores up the charge in fractions of a second. Once the capacitor is full, it discharges, sending a positive pulse to the rechargeable battery and thus acting as its own rectifier.
The team concedes that this is just the first step towards a viable trickle charger that could be used to keep medical devices, monitors and sensors trickle charged while a person goes about their normal lives without the need for access to a power supply. The system might be even more useful if it were embedded in an implanted medical device to prolong battery life without the need for repeated surgical intervention to replace a discharged battery. This could be a boon for children requiring a future generation of implanted, electronic diagnostic and therapeutic units.
See the full story via external site: www.eurekalert.org |
Intelligent character recognition (ICR) recognizes letters and numbers by analyzing features like lines, line intersections, and closed loops. It combines this feature analysis with traditional pixel-based processing to achieve high accuracy character recognition.
For example, an “O” is a closed loop, but a “C” is an open loop. These features are compared to vector-like representations of a character, rather than pixel-based representations. Because intelligent character recognition looks at features instead of pixels, it works well on multiple fonts and with handprinted characters.
Intelligent character recognition is an advance in a technology known as optical character recognition (OCR).
How Traditional Character Recognition Works
Traditional OCR uses a “matrix matching” algorithm to identify characters through pattern recognition. The character captured on the document's image is compared, pixel by pixel, with a stored example of each known character. By comparing the matrix of pixels in the image with the stored example, the software determines that the character is, for instance, a “G”.
Seems like a good approach – but beware of the pitfalls! Because it is comparing text to stored examples pixel by pixel, the text must be very similar. Even if there are hundreds of examples stored for a single character, problems often arise when matching text on poor quality images or using uncommon fonts.
How Intelligent Character Recognition Uses Features
ICR decomposes characters into their component features rather than comparing pixels against known examples. Instead of pixels, it works with features such as lines, line intersections, and closed loops. Matching on how a character is drawn is often easier for software, since the margin of error is smaller, and feature detection is less susceptible to errors caused by random pixelization.
Now you know why intelligent character recognition is an improvement over standard OCR.
How Does Intelligent Character Recognition Work?
Intelligent character recognition engines work by combining traditional and feature-based OCR techniques. The results of both algorithms are combined to produce the best matching result. Each character is given a “confidence score,” which reflects how closely the character's pixels, its features, or a combination of the two match the stored models.
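The article does not spell out how the two scores are merged, but a minimal sketch of one plausible blending scheme looks like this; the weights, candidate characters, and scores below are invented purely for illustration.

```python
# Minimal sketch of blending per-character confidence scores from matrix
# matching and feature analysis.  Weights and scores are illustrative only.
def blend(matrix_scores, feature_scores, w_matrix=0.4, w_feature=0.6):
    combined = {}
    for ch, conf in matrix_scores.items():
        combined[ch] = combined.get(ch, 0.0) + w_matrix * conf
    for ch, conf in feature_scores.items():
        combined[ch] = combined.get(ch, 0.0) + w_feature * conf
    # Return the character with the highest blended confidence.
    return max(combined.items(), key=lambda kv: kv[1])

# Pixel matching slightly prefers "C", feature analysis clearly prefers "G":
print(blend({"C": 0.55, "G": 0.50}, {"G": 0.80, "C": 0.30}))  # -> ('G', ~0.68)
```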
Even with this blended approach the typical OCR villains are on the attack: poor document quality, multiple font types, and different font sizes.
What is this character? Is it a “G”, a “C”, a “0”, or is it even a character at all?
Intelligent character recognition must make a decision and it may not make sense within the context of the word or sentence. If a human can’t read the character, then OCR will certainly have trouble.
OCR Post-Processing to the Rescue
Without additional context, character recognition errors are understandable. Even if the character isn't discernible, a human knows “ballboy” is an indie band from Scotland and “bollboy” is just gibberish.
The most common post-processing done by OCR engines is basic spell correction. Often, errors from poor recognition result in small spelling mistakes. All commercial OCR engines compare results with a lexicon of common words and attempt to make logical replacements.
But what about proper nouns and other important words that aren’t in the lexicon of common words?
Here’s where intelligent character recognition really shines. There are two easy ways to identify incorrect characters:
- The easiest way is to import custom lexicons for words related to your organization or industry. You may have medical terms or even customer / company information that you need to match against. Using a custom lexicon will provide even better chances at finding the right match.
- Another method for improving OCR character accuracy is something called “fuzzy matching.” Fuzzy matching is a method of providing weighted thresholds to characters and allowing the software to substitute characters based on likely good replacements. For example, the software would be allowed to try an “o” when a “0” provides a bad result, or an “l” instead of a “1”, and so on, as sketched below.
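A minimal sketch of fuzzy matching against a custom lexicon is shown below. The confusion table, lexicon entries, and single-substitution strategy are simplifications invented for illustration; commercial engines use far richer weighting.

```python
# Simplified sketch: try likely OCR character substitutions and accept a
# candidate only if it appears in a custom lexicon.  Illustrative only.
CONFUSIONS = {"0": "o", "o": "0", "1": "l", "l": "1", "5": "s", "8": "B"}

def fuzzy_correct(token, lexicon):
    if token.lower() in lexicon:
        return token                      # already a known word
    for i, ch in enumerate(token):        # try single-character swaps
        repl = CONFUSIONS.get(ch)
        if repl:
            candidate = token[:i] + repl + token[i + 1:]
            if candidate.lower() in lexicon:
                return candidate
    return token                          # no better match found

# A hypothetical custom lexicon of organization- or industry-specific terms:
lexicon = {"invoice", "hemoglobin", "ballboy"}
print(fuzzy_correct("inv0ice", lexicon))      # -> "invoice"
print(fuzzy_correct("hemog1obin", lexicon))   # -> "hemoglobin"
```

In practice, any substitution would also be gated by the character's confidence score rather than applied unconditionally. |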
This video shows how to calculate the circumference of a given circle. The video first describes the circumference, or perimeter, of a circle as the distance around the outside of the circle. To find the circumference we need either the radius or the diameter of the circle. When you know the diameter, the formula for the circumference, denoted by 'C', is pi times the diameter, where 'd' is the diameter and pi is a constant with an approximate value of 3.14. When you are given the radius, the formula is pi times 2 times r, where 'r' is the radius. The speaker notes that both formulas are the same, because 2 times the radius equals the diameter.

He then works an example where the diameter is given as 10 meters and you are asked to find the exact circumference. Using C = pi times d, the value is 10pi meters. He explains that when asked for the exact circumference you should leave pi as it is rather than substituting its approximate value of 3.14. In another example, the radius of the circle is given as 4 miles. Using C = pi times 2 times r, the value becomes 2 times pi times 4, which is 8pi, so the circumference of this circle is 8pi miles.

A final example asks for the approximate circumference, with the radius given as 15.3 mm and pi taken as 3.14. Using C = pi times 2 times the radius, that is, 2 times 15.3 times 3.14, the value works out to 96.084, so the approximate circumference of this circle is about 96 mm. He signs off by noting that a part two video is available for learning more about the circumference of other circles.
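The same three calculations from the video can be reproduced in a few lines; the code below is just a restatement of C = pi times d (or C = 2 times pi times r), with 3.14 used where the video asks for an approximate answer.

```python
import math

def circumference_from_diameter(d):
    return math.pi * d          # C = pi * d

def circumference_from_radius(r):
    return 2 * math.pi * r      # C = 2 * pi * r

print(circumference_from_diameter(10))   # exact answer: 10*pi, about 31.42 m
print(circumference_from_radius(4))      # exact answer: 8*pi, about 25.13 miles
print(round(2 * 3.14 * 15.3, 3))         # 96.084, roughly 96 mm as in the video
```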
|
A common parameter of interest in meteorological and air quality systems is the wind direction. It is often measured by some type of device which gives an output proportional to the direction on a compass. This is called the "polar coordinate" system. Very accurate spontaneous measurements can be made in this way. However, very serious errors can and will occur when doing averages and sigmas (standard deviations) in the polar system.
The reason for these problems is simple. In a 360 degree system 360° = North = 0°. Consider the following two-sample average: Sample 1, 350° (10° West of North) and Sample 2, 10° (10° East of North). Averaging, (350 + 10) / 2 = 180. Intuition would suggest (correctly) that the average should be North, but the answer comes out South, as wrong as possible. This is called "wrap-around."
Another problem with doing a straight average on wind direction is that no account is taken of speed. Suppose the wind is calm for 1/2 hour, and the wind comes in at 5 mph from the South. Clearly, the hourly average, using straight averaging, will be South-East, but the only wind was from the South. This problem is called the "unweighted direction problem."
The use of a 0-540° wind sensor will improve the situation, but will not solve the problem. Every time the sensor switches mode, which may be several times a day, serious errors will occur.
The above problems can be solved by translating the wind speed and wind direction into an X-Y (Cartesian) coordinate system. Each wind observation is converted into a vector, that is, an easterly and a northerly component of speed, and the data are accumulated in vector form. When the time comes for an output report, the accumulated components are averaged and the result is transformed back to polar coordinates, as sketched below.
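A minimal sketch of this vector-averaging approach is shown below, using the meteorological convention that direction is the compass bearing the wind blows from. The function and variable names are invented for illustration.

```python
import math

def vector_average(speeds, directions_deg):
    """Average wind observations as vectors, weighting direction by speed
    and avoiding the 360/0 degree wrap-around problem."""
    # Convention: direction is the compass bearing the wind blows FROM.
    u = [-s * math.sin(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    v = [-s * math.cos(math.radians(d)) for s, d in zip(speeds, directions_deg)]
    u_mean, v_mean = sum(u) / len(u), sum(v) / len(v)
    speed = math.hypot(u_mean, v_mean)
    direction = math.degrees(math.atan2(-u_mean, -v_mean)) % 360.0
    return speed, direction

# The two examples from the text:
spd, deg = vector_average([5, 5], [350, 10])
print(round(spd, 2), round(deg, 1) % 360)   # 4.92 0.0  -> North, not South
spd, deg = vector_average([0, 5], [0, 180])
print(round(spd, 2), round(deg, 1) % 360)   # 2.5 180.0 -> South at half the speed
```

Note that this resultant-vector average is weighted by speed, so a calm sample contributes nothing to the averaged direction. |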
The birth of radiologic technology began on Nov. 8, 1895, when German physicist Wilhelm Conrad Roentgen discovered the x-ray. Hailed as a medical miracle, scientists and physicians started using x-rays in the clinical setting soon after, and its use skyrocketed in the early 20th century. Fast-forward 120 years and health care providers still rely on the x-ray to detect bone fractures, find foreign objects in the body and identify lung disease.
The x-ray is a true pioneer. It paved the way for advanced medical imaging procedures like computed tomography, magnetic resonance imaging, nuclear medicine and ultrasound. In addition, radiation therapy used in cancer treatments is a descendant of the x-ray. Quite simply, the x-ray changed health care and continues to be a key player in patient care on a global scale.
An important part of the x-ray’s history includes the radiologic technologists who perform medical imaging and radiation therapy procedures. Since the early stages of the x-ray, radiologic technologists have worked to establish patient-safety protocols, patient positioning techniques, equipment processes and radiation safety guidelines. Their contributions to medical imaging and radiation therapy are a vital piece of the x-ray’s story.
In honor of this year’s National Radiologic Technology Week, Nov. 8-14, the American Society of Radiologic Technologists is highlighting the x-ray’s birthday and its profound effect on patient care with a “Discovering the Inside Story” timeline. The infographic tracks the history of medical imaging and radiation therapy using exhibits and resources featured in the newly-opened ASRT Museum and Archives, the only museum in the world devoted to telling the story of the radiologic technology profession.
Learn more about the ASRT Museum and Archives and schedule a tour here. |
There are 4 basic Language Skills
This article introduces each of these skills and explains what they are. There are also links to more in-depth articles on each of the skills.
Grouping Skills Together
These can be grouped in different ways.
We can talk about the oral skills (listening and speaking) or the written skills (reading and writing).
We can also group them by the direction of communication: receiving (listening or reading) and producing (speaking or writing).
In general, we learn these skills in this order: listening, speaking, reading, writing. A child will listen to the language around them and then begin to utter a few words. These develop into fuller utterances (i.e. spoken sentences). With the help of an adult the child will begin to read simple texts and finally produce written texts themselves.
Learners of English, of course, pick up the four skills in more or less the same order. Remember, however, that the skills are not isolated; it is almost impossible to develop one skill without also developing the others.
Listening is not only hearing but also understanding what is being said. In general there are two kinds of listening: active, as when we are in a face-to-face conversation or on the phone; and passive, as when we watch television or listen to the radio.
Within this skill area there are also sub-skills which need to be learned. These include learning to “hear” the boundaries between words; learning to understand what a change in intonation or stress means and so on.
See the main article, Listening.
As with listening, speaking can be active or passive. Active speaking is when we speak on the phone or face to face and there is interaction between the speaker and listener. Passive speaking is when we speak with no interruptions or feedback from others e.g. giving a speech or a teacher droning on and on and on!
Sub-skills here include pronunciation as well as using stress and intonation in the correct way. There are also more semantic skills such as how to choose the correct word and building an argument, etc.
See the main article, Speaking.
Reading is well developed in most societies. Sub-skills here include deciphering the script (e.g. the Roman alphabet or Cyrillic or Chinese characters), recognizing vocabulary and picking out key words in the text. Here a knowledge of syntax comes into play and also the ability to transfer what is written into real-life knowledge.
There are also important reading sub-skills such as skimming, reading for gist, reading for detail and so on. These all have to be taught to students.
See the main article, Reading.
Sub-skills here include spelling and punctuation, using the correct vocabulary and of course using the correct style whether that be formal, poetic or whatever the occasion demands, from a shopping list to wedding vows.
These days, writing involves not only the physical ability to use a pen but also the ability to use a keyboard or keypad.
See the main article, Writing. |
Peach trees produce abundant, sticky sap that can ooze out of the tree for a variety of reasons. The sap hardens into gummy yellow to orange nodules on the bark. Sap oozing can be caused by insects, diseases, injury or a natural response to environmental conditions in dwarf peaches. Preventative measures are best because not all the problems can be controlled after they occur.
American plum borer larvae cause the most damage when they drill into the crotches of branches and graft unions. Spray with carbaryl if they are present. Pacific flathead borers attack trunks and branches damaged by sunburn or other injury. Peach twig borers and shothole borers make small holes with limited sap oozing. Peachtree borers can kill a tree by boring into the lower trunk. Dig them out of the tree with a pocketknife. Take care not to remove too much wood or you will kill the tree.
Oozing sap without borer holes is more likely caused by canker diseases. Bacterial canker and cytospora cause irregularly shaped brown cankers with amber-colored gum oozing from the margins. Cytospora canker may also have orange threadlike structures exuding from the canker. Avoid the diseases by planting in deep, well-drained soil, delaying pruning until late winter, pruning in dry weather and fertilizing with a micronutrient solution in spring. Remove diseased wood.
Injury, whether mechanical or environmental, can cause sap to run from the tree. Use tree wraps on young trees or whitewash trunks to prevent sunburn. Adequate water and phosphorous fertilizer also help prevent damage. When mowing or using a weed trimmer, avoid damaging the trunks of trees. Cutting the bark all the way around the tree will kill it, but any injury can cause sap to flow and create an entry point for disease or insects.
Genetic dwarf peach trees are often grafted onto more vigorous rootstock to ensure their survival. This rootstock will take in more moisture than the top requires. When not all of the moisture in the tree can be transpired from the leaves, it will be expelled from the branches as sap or gum. More sap will be present if the tree is planted in heavy clay soil. The problem will correct itself as the tree grows.
|
Every day an average of two children 14 years old or younger die from unintentional drowning. For every child who dies from drowning, another five receive emergency department care for nonfatal submersion injuries. Nonfatal drowning injuries can cause severe brain damage that may result in long-term disabilities, such as memory problems, learning disabilities, and permanent loss of basic functioning or a permanent vegetative state.
Who is most at risk? Drowning is a leading cause of unintentional injury death worldwide, with the highest rates among children. Drowning is still the leading cause of injury death among children between one and four years old. Children most commonly drown in home swimming pools.
What can increase the risk for drowning? The main factors that affect drowning risk are lack of swimming ability; lack of barriers to prevent unsupervised water access; lack of close supervision while swimming; location; failure to wear life jackets; alcohol use; and seizure disorders. Drowning doesn’t only occur during the summer months; it can happen quickly and quietly anywhere there is water, including bathtubs and buckets. For those with seizure disorders, drowning is the most common cause of unintentional injury death, with the bathtub the most common site.
To prevent drowning, all parents and children should learn survival swimming skills. Research has shown that participation in formal swimming lessons can reduce the risk of drowning among children one to four years old. Environmental protections such as pool fencing and lifeguards should be in place.
A four-sided isolation fence at least four feet high, separating the pool from the house and yard, reduces a child’s risk of drowning by half. The fence should be hard to climb, e.g., not chain-link, with a self-closing and self-latching gate that opens outward with latches that are out of reach of children. However, even when children have had formal swimming lessons, constant and careful supervision when they are in the water is important.
Alcohol use should be avoided while swimming, boating, water skiing, or supervising children. Alcohol influences balance, coordination and judgment, and its effects are heightened by sun exposure and heat. Life jackets should be used by all boaters and weaker swimmers. All caregivers and supervisors should have training in cardiopulmonary resuscitation (CPR). Seconds count. CPR performed by bystanders has been shown to save lives and improve outcomes in drowning victims. The quicker CPR is started, the better the chance of recovery. In the time it takes for paramedics to arrive, your CPR skills could save a life.
Additional tips to prevent drowning are as follows:
•Designate a responsible adult to watch children while in the bath and all children swimming or playing in or around water.
•If supervising preschool-aged children, you should be close enough to reach the child at all times.
•Because drowning occurs quickly and quietly, adults should not be involved in any other distracting activity, such as reading, talking on the phone, or grilling while supervising children, even if lifeguards are present.
•Always swim with a buddy.
•Select swimming sites that have lifeguards when possible.
•Avoid using air-filled or foam toys instead of life jackets. These toys are not safety devices.
With education and improved behavior, maybe then there will be one fewer injured child. |