Children with dyslexia often struggle with reading, writing, and spelling, despite receiving an appropriate education and demonstrating intellectual ability in other areas. New neurological research has found that these children's difficulties with written language may be linked to structural differences within an important information highway in the brain known to play a role in oral language. The findings are published in the June 2010 issue of Elsevier's Cortex.

Vanderbilt University researchers Sheryl Rimrodt and Laurie Cutting and colleagues at Johns Hopkins University and Kennedy Krieger Institute used an emerging MRI technique, called diffusion tensor imaging (DTI), to discover evidence linking dyslexia to structural differences in an important bundle of white matter in the left-hemisphere language network. White matter is made up of fibers that can be thought of as the wiring that allows communication between brain cells; the left-hemisphere language network is made up of bundles of these fibers and contains branches that extend from the back of the brain (including vision cells) to the front parts that are responsible for articulation and speech.

"When you are reading, you are essentially saying things out loud in your head," said Cutting. "If you have decreased integrity of white matter in this area, the front and back parts of your brain are not talking to one another. This would affect reading, because you need both to act as a cohesive unit."

Rimrodt and Cutting used the DTI technique to map the course of an important white matter bundle in this network and discovered that it ran through a frontal brain region known to be less well organised in the dyslexic brain. They also found that fibers in that frontal part of the tract were oriented differently in dyslexia. Rimrodt said, "To find a convergence of MRI evidence that goes beyond identifying a region of the brain that differs in dyslexia to linking that to an identifiable structure and beginning to explore physical characteristics of the region is very exciting. It brings us a little bit closer to understanding how dyslexia happens."

More information: Cortex is available online at www.sciencedirect.com/science/journal/00109452
Large areas of Histosols are found in the circumpolar region of the northern hemisphere, as well as in Southeast Asia. Histosols are formed in 'organic material' with physical, chemical and mechanical properties that differ strongly from those of mineral soil materials. Organic soil material accumulates in conditions where plant matter is produced by a climax vegetation and where decomposition is slow.

Histosols are soils having a histic or folic horizon (wet or dry organic horizons, respectively) that is:
- either 10 cm or more thick if overlying a lithic or paralithic contact, or
- 40 cm or more thick and starting within 30 cm from the soil surface.

Histosols do not have an andic or vitric horizon that starts within 30 cm from the soil surface.

Photo captions: Highland peat in western Ireland (Ombri-Sapric Histosol); strongly decomposed peat under cultivated grassland, the Netherlands.
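The thickness criteria above amount to a simple decision rule. Here is a minimal Python sketch of that rule, using hypothetical field measurements as inputs; it is an illustration of the logic described above, not an official soil-classification key.

```python
def is_histosol(organic_thickness_cm, starts_at_cm,
                rests_on_lithic_contact, andic_or_vitric_within_30cm):
    """Illustrative check of the Histosol criteria described above.

    organic_thickness_cm: thickness of the histic or folic horizon (cm)
    starts_at_cm: depth below the surface at which that horizon starts (cm)
    rests_on_lithic_contact: True if the horizon overlies a lithic or
        paralithic contact
    andic_or_vitric_within_30cm: True if an andic or vitric horizon
        starts within 30 cm of the soil surface
    """
    # Exclusion: no andic or vitric horizon starting within 30 cm
    if andic_or_vitric_within_30cm:
        return False

    # Case 1: 10 cm or more of organic material over a lithic or
    # paralithic contact
    if rests_on_lithic_contact and organic_thickness_cm >= 10:
        return True

    # Case 2: 40 cm or more of organic material starting within 30 cm
    # of the soil surface
    return organic_thickness_cm >= 40 and starts_at_cm <= 30


# Example: 50 cm of peat starting at the surface, no contact beneath,
# no andic/vitric horizon -> Histosol under this simplified rule
print(is_histosol(50, 0, False, False))   # True
```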
A gas can be compressed; this means that when we exert an increasing pressure on a cylinder filled with gas, the volume of that unit of air will decrease. Liquids, like water, are as good as incompressible. Take a cylinder filled with water, put pressure on the top, and the water volume will hardly change with increasing pressure.

What is the result of this compressibility? Density changes. With increasing pressure a gas will have a density increase as well. With increasing pressure a liquid will not have a density increase; with a liquid, density will not change.

Now look at the formulas used in basic aerodynamics (Bernoulli: total pressure = static + dynamic pressure). As you should know, dynamic pressure depends on density. And since air is a gas, hence compressible, density will change. It will increase with increasing pressure. But we never make this correction; most of the time we use a density value for air in "not moving" conditions. On the ground, for example, it's 1.225 kg/m**3. But when air is moving and being brought to a sudden stop, the density of the air will slightly increase because it will be compressed. So we make an error. We use a value for density to calculate dynamic pressure, but this value depends on the value of dynamic pressure itself. So I hope you understand something is wrong here. Dynamic pressure depends on density and density depends on dynamic pressure... so we can't really use the formulas anymore. The higher the speed of the air, the more the real density (the density of "air under pressure" or "moving air being brought to a stop") will exceed the density we use, and the larger the error.

Now there have been clever engineers who calculated the errors caused by compressibility: at what speeds are the errors in density between "not moving air" and "air under pressure being stopped" large enough to create a noticeable error? That's when we start to use compressibility correction factors. These factors allow us to use the "static" density of air, and they depend on the speed of the air of course.

Hope this helps?

PS: The formula of Bernoulli can only be used for incompressible fluids; that's a basic requirement. However, at low speeds and low altitudes, the errors created are so small we can neglect them. It's far easier to use Bernoulli than the alternatives available (thank god...). But at high speeds/high altitude we have to make corrections... but still easier than the alternatives.

Last edited by BraceBrace; 12th Oct 2004 at 13:52.
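To put rough numbers on the size of that error, here is a small Python sketch (my own illustration, not from the original post) comparing the incompressible dynamic pressure 0.5*rho*V^2 with the subsonic compressible impact pressure from the standard isentropic flow relation, at assumed sea-level standard conditions.

```python
import math

# Sea-level standard atmosphere (assumed values)
RHO = 1.225          # density, kg/m^3
P = 101325.0         # static pressure, Pa
GAMMA = 1.4          # ratio of specific heats for air
R = 287.05           # specific gas constant, J/(kg K)
T = 288.15           # temperature, K
A = math.sqrt(GAMMA * R * T)   # speed of sound, ~340 m/s

def q_incompressible(v):
    """Dynamic pressure from Bernoulli with constant density."""
    return 0.5 * RHO * v**2

def q_compressible(v):
    """Subsonic impact pressure (pitot minus static) from the isentropic
    relation, which accounts for the density rise as the air is stopped."""
    m = v / A
    return P * ((1 + (GAMMA - 1) / 2 * m**2) ** (GAMMA / (GAMMA - 1)) - 1)

for kt in (100, 250, 400):
    v = kt * 0.5144                      # knots -> m/s
    qi, qc = q_incompressible(v), q_compressible(v)
    err = (qc - qi) / qc * 100
    print(f"{kt:3d} kt: incompressible {qi:8.0f} Pa, "
          f"compressible {qc:8.0f} Pa, error {err:4.1f}%")
```

With these assumed conditions, the error is under one percent at 100 kt but approaches ten percent at 400 kt, which is roughly the regime where compressibility correction factors become worthwhile.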
The term ‘laser cutting’ sounds pretty technical. Though the process might be a bit difficult to understand initially, the technology is indeed innovative and augments manufacturing in the industrial realm. As the term suggests, the technology makes use of a laser beam to cut metal sheets and plates, piping materials and metals used for structural purposes with precision. The method is usually used in large scale manufacturing units, and the laser cuts the material either by burning, melting or blowing it away.

Understanding Laser Cutting

Though we often use the term laser, most of us do not know that the word is an acronym that stands for ‘light amplification by stimulated emission of radiation’. It can thus be understood that the technology essentially makes use of stimulated emission, which gets manifested as a low divergence beam. This beam is then made to reflect through various surfaces. Now, one may ask how a laser beam is generated. It is done by stimulating a ‘lasing’ material with the help of electrical discharges or lamps within a closed container. When this technology is used to cut metal, it is called laser cutting.

The Process Involved

The process is device based, and to begin with, a computer directs a high power laser depending on the material that has to be cut. The lasing material then has to be stimulated with the help of a reflector, and the light beam gets transmitted internally. Mirrors are usually used to reflect the light onto the lens. Once the laser beam reaches the material, it melts or burns the metal to produce a surface finish of premium quality. Though this is the basic underlying principle, several methods like vaporization, melt and blow, scribing and thermal stress cracking are used to cut different metal-based surfaces. The cutters involved are referred to as industrial laser cutters.

Here, it may be noted that custom laser cutting has gained good market recognition because it can cut metal at a great speed.
- It is also a much neater and cleaner process as compared to the traditional methods of cutting.
- Moreover, some rare and hard metals cannot be cut with the help of traditional ways of cutting. Laser cutting comes in handy on such occasions.
- It is a cost effective method of cutting high-precision parts. Further, time and energy get saved as individual parts need not be cut manually.

The good part about high quality laser cutters is that they have the capacity to bore holes ranging from a few millimeters in diameter to several feet wide. They can cut objects made out of steel, copper, aluminum, brass and several others. However, laser cutters can only cut metals within a certain thickness range. If the thickness of the metal sheet is either less or more than the limit of the cutter, it wouldn’t be able to cut the sheet. Despite this drawback, custom laser cutting’s numerous applications, especially the ability to cut something to a precise measurement, have made it popular in different industries.
The Hints section contains additional direction for students engaging with this task. Students often possess the method but without any sense of why it's valid. Hopefully this activity will not only help them to explore this method's validity but will also encourage in them a questioning attitude and a desire to establish the validity of other common methods as they meet them.

There are a large number of values displayed on the sheet called 'Patterns with differences' (see the tab at the bottom of the work area on Interactive Number Patterns 2). Students need to be given the time to understand all parts of the display. Below the formula box are three rows. These display the values of the quadratic, linear and constant terms respectively. Change the terms in the formula box to help students grasp what each row shows, and verify that the three values added together match the blue values. Look at the two rows of differences above the formula box and verify that the data showing is the correct difference for the blue values, then use the slider to change the red values, allowing students to verify the new values displayed within the sheet.

The constant term contributes nothing to the first difference, the linear term contributes to the first difference but not the second, and the coefficient, or multiple, of the quadratic term is the only coefficient which influences the second difference. A short computational check of this is sketched below.
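As a quick illustration of that last point (my own example, not part of the original task materials), the following Python snippet tabulates a quadratic sequence together with its first and second differences; the second difference comes out constant and equal to twice the quadratic coefficient, while the constant term never appears in either difference row.

```python
def differences(seq):
    """Return the list of gaps between consecutive terms."""
    return [b - a for a, b in zip(seq, seq[1:])]

# Quadratic formula a*n^2 + b*n + c with illustrative coefficients
a, b, c = 3, 5, 7
terms = [a * n**2 + b * n + c for n in range(1, 9)]

first = differences(terms)    # depends on a and b, but not on c
second = differences(first)   # depends only on a: always 2*a

print("terms: ", terms)
print("first: ", first)
print("second:", second)      # [6, 6, 6, ...] i.e. 2*a
```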
The red flour beetle is known as an invasive species that eats any kind of wheat, and scientists have used these beetles to study specifically how invasive species spread. However, a professor at Colorado State University and a professor at the University of Colorado are studying them as a tool for conservation.

Ruth Hufbauer, a CSU professor in the Department of Bioagricultural Sciences and Pest Management, and Brett Melbourne, a CU assistant professor in the Department of Ecology and Evolutionary Biology, are using red flour beetle populations to study the factors that cause a population of a species to dwindle or expand. The study was released in the Proceedings of the National Academy of Sciences.

Hufbauer said the beetles are better for research because they can reproduce quickly and she can use large numbers of them ethically.

“I got involved in questions that would be really difficult to do in nature, to ask and to address in nature with experiments, because it would be unethical to just release things out into nature, and also there’s just so many challenges with working outside,” Hufbauer said. “We think carefully about the design of these experiments so that they can give us general answers that we can then apply to many other species in the wild, such as tigers or trees.”

Melbourne said that the beetles act as “model species” for the research without putting wildlife at risk.

“For example, you would not want to purposely introduce an invasive species to the wild to see what happens, or try different ways to make a species go extinct. So, we can use the beetle in artificial ecosystems in the laboratory to answer these difficult questions,” Melbourne wrote in an email to the Collegian.

According to Hufbauer, what has been done historically to help a declining population of animals is to bring more individuals into the population. While this helps, Hufbauer was interested in seeing whether it was merely the numbers or the genetics that help a group survive.

“I see the beetles as a model the way a wind engineer (would) build a scale model of a building and put it in a wind tunnel and blow the wind at it,” Hufbauer said. “You don’t want to do that with the actual building, because that’s way too expensive. But if you can build a model that represents enough of the real world system, then you can learn a lot from it.”

Using natural populations of beetles allows Hufbauer to manipulate the systems while adding genetically similar individuals. She said the results of the study were that genetics matter: adding just a few beetles with a different genetic background reduced the extinction rate and increased adaptation.

Melbourne and Hufbauer are continuing to research how genetic and ecological processes combine to determine how a species goes extinct or expands its range.

“I think that for each species that’s endangered or threatened, or any individual population that we’re worried about, we’ll need to have data on that,” Hufbauer said. “But by doing large replicated experiments like this in the lab, we can understand the mechanisms that are underlying this.”

Collegian Science Reporter Seth Bodine can be reached at [email protected] or on Twitter @sbodine120.
The top of each listing shows the constellation name and its symbol. Immediately below that are a few lines of important information as follows:

Pronunciation: Shows the correct pronunciation of the constellation in phonic form. Clicking on the "Play Audio" button will play an audio file of the correct pronunciation.

Abbreviation: The standard abbreviation for the constellation name. This is the name that often appears on star charts and planispheres.

Genitive: This is the genitive form of the constellation name. The genitive is used to indicate possession. In the case of a constellation, the genitive form of the name is used when referring to the brightest stars in the constellation. These stars are listed alphabetically using the Greek alphabet. For example, the first star in the constellation Hercules is known as Alpha Herculis. The next star is Beta Herculis, followed by Gamma Herculis and so on.

Right Ascension: The right ascension is the angular distance of the object eastward along the celestial equator from the First Point of Aries (the vernal equinox). Right ascension is one unit of measure for locating an object in the sky and, because the sky appears to rotate once in 24 hours, it is indicated in hours rather than degrees.

Declination: The declination is the angular distance of the object in the sky from the celestial equator. Declination is the second unit of measure for locating an object in the sky and is indicated in degrees.

Area in Square Degrees: This is the total area that the constellation occupies in the sky, expressed in units of square degrees. The celestial sphere of the sky is divided into 360 equal parts. One of these parts equals one degree.

Crosses Meridian: This is the date and time that the constellation crosses the meridian. The meridian is an imaginary circle drawn through the North and South poles of the celestial sphere.

The next section shows a drawing of the constellation as well as a listing of all objects of interest. Please note that these images only show the brightest stars in the constellation. Because of this, the images may not represent the entire picture that the constellation suggests. Extremely dark skies are needed to see all of the faint stars that complete the entire picture. The listing next to the image includes Messier Objects and named stars. An * will appear in a column if the information is unavailable or does not exist.

Object Number: Reference numbers on the drawings are used to locate the objects on the list. Messier objects are shown with a blue reference number in the list. Stars are shown with a green reference number.

Type/Translation: This column gives information about each object. For Messier objects, the Type/Translation column will show the kind of object. This is usually a galaxy, star cluster, or nebula. For stars, the Type/Translation column will show the translation of the star's name. The names of most stars are derived from words in ancient languages such as Greek, Roman, or Latin.

Vmag: This is the visual magnitude of the object. Visual Magnitude is a scale used by astronomers to measure the brightness of a star or other celestial object. Visual magnitude measures only the visible light from the object. On this scale, bright objects have a lower number than dim objects.

You may click on the Return to Top of Page link to return to the menu at the top of the page. The constellations are arranged by the months in which they are best visible. You may use the menu at the top to navigate forward to the next month or backward to the previous month.
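For readers who want to work with these coordinates numerically, here is a small Python helper (my own addition, not part of the listing format) that converts a right ascension given in hours, minutes, and seconds into degrees, using the fact that 24 hours of right ascension correspond to a full 360 degree circle, so 1 hour equals 15 degrees.

```python
def ra_to_degrees(hours, minutes=0, seconds=0.0):
    """Convert right ascension from hours/minutes/seconds to degrees.
    24 h of right ascension = 360 degrees, so 1 h = 15 degrees."""
    return (hours + minutes / 60 + seconds / 3600) * 15.0

# Example: an object at RA 17h 15m (roughly within Hercules)
print(ra_to_degrees(17, 15))  # 258.75 degrees
```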
Due: 19/05 (Set: 17/05)
Title: Plan your article
Time: 20-25 minutes

Familiarise yourself with all that follows and develop and plan the following: i) argument, ii) content, iii) structure and chronology, iv) effective rhetorical devices for an argument of your own choosing.

Aim for the appropriate use of lexical and grammatical features for a given context, dependent upon audience (who), topic (what), purpose (why) and location (where).
- Give/respond to information;
- Cite evidence and use quotations;
- Include rhetorical devices;
- Select, organise and emphasise facts, ideas and key points.

Writing types and purposes provide you with the opportunity to communicate your personal view(s).

Write to persuade: Persuade the reader of the statement that…

Article (newspaper – broadsheet/tabloid, magazine, web log)

Use of: i) a clear/apt/original headline/title, ii) a strapline (caption), iii) subheadings, iv) an introductory (overview) paragraph, v) effectively/fluently sequenced paragraphs.

Paragraph One: Respond to the statement.
- What is your initial reaction to the statement?
- Why do you think what you do?
- Remember: Keep to one clear viewpoint.

Paragraph Two: Use statistics to support your response.
- Use the results of an imagined survey to support your ideas.
- Consider a percentage of people affected by the statement.
- Remember: Keep the numbers realistic so that your work is sophisticated.

Paragraph Three: Use an anecdote to make your response more personal.
- Tell the story of someone who has been affected in some way by the focus of your statement.
- Consider the positive/negative impacts on their life.

Paragraph Four: Use an expert to make your response credible.
- Use your expert to give clear reasons for your viewpoint.
- Who do they work for?
- What research have they conducted?
- What observations have they made?

Aiming for the top?
- A one-word sentence
- A one-word or one-sentence paragraph
- Additional statistics/experts/anecdotes.
While it’s known that a mother’s diet can affect her unborn child’s development in utero, a new study found that her eating habits around the time of conception can also alter her child’s lifelong risk of cancer.

The new study, published in the journal Genome, concluded that a gene affecting a person’s risk of cancer can be permanently altered in utero depending on a mother’s diet. While a child’s genes are directly inherited from his parents, how the genes are expressed is controlled through modifications to the DNA, which occur during embryonic and fetal development, according to the report. Modifications can occur when gene regions are tagged with chemical compounds called methyl groups that silence genes. The compounds require specific nutrients, which means that a mother’s eating habits before and during pregnancy can permanently affect the “setting” of these tags, the report said.

Researchers at the Baylor College of Medicine, the Children’s Nutrition Research Center at Baylor and Texas Children’s Hospital, the London School of Hygiene & Tropical Medicine in London, and the MRC Unit The Gambia split into two groups and targeted specific regions of the genome called metastable epialleles that are particularly sensitive to these effects. The research groups both found that the tumor suppressor gene VTRNA2-1, which helps prevent cells from becoming cancerous, was the most sensitive to the environment created by the mother around the time of conception.

“There are around 20,000 genes in the human genome,” study leader Dr. Robert Waterland, an associate professor of pediatrics and nutrition at Baylor, said, according to the report. “So for our two groups, taking different approaches, to identify this same gene as the top epiallele was like both of us digging into different sides of a gigantic haystack containing 20,000 needles… and finding the exact same needle.”

Typically, aside from genes on sex chromosomes, mammals inherit two copies of all genes, which function equally. Researchers found that VTRNA2-1 belongs to a special class of genes that are expressed from only the maternal or paternal copy. These genes are labeled imprinted genes because they are imprinted with epigenetic marks inherited from either the sperm or egg, the report said. What further sets VTRNA2-1 apart is that it is the first example of an imprinted metastable epiallele.

“Our results show that the methylation marks that regulate VTRNA2-1 imprinting are lost in some people, and that this ‘loss of imprinting’ is determined by maternal nutrition around the time of conception,” Andrew Prentice, professor at the London School of Hygiene & Tropical Medicine and head of the MRC International Nutrition Group, said, according to the report. “These are large changes in gene methylation that affect a substantial subset of individuals.”

Three previous studies showed that an increase in these methylation marks is a risk factor for acute myeloid leukemia, lung and esophageal cancer. However, a decrease in these marks, that is, VTRNA2-1 loss of imprinting, led to individuals with a double dose of the anti-cancer gene, according to the report.

“The potential implications are enormous,” Prentice told The Guardian. “In this particular example, the gene involved is really crucial – it lies at the center of the immune system so it affects our susceptibility to viral infection. At the very beginning of fetal growth, the way it is labeled is going to affect the baby’s health for the rest of its life,” he said.
“If a mother’s diet is poor then it causes a whole lot of damage to the genome which has a shotgun effect, so a baby might have possible adverse outcomes,” Prentice told the news site. “This general phenomenon might explain preterm births, problems in pregnancy, brain defects, or why some babies are born too small.”

“We could potentially clean up a lot of adverse pregnancy outcomes by getting the diet right,” he told The Guardian.

Researchers also showed that the loss of VTRNA2-1 imprinting affects all cells of the body and is stable from childhood to adulthood. Researchers say more studies are under way to test whether methylation at VTRNA2-1 can be used as a screening test to predict risk of cancer.
All spectroscopy involves the separation of light into its individual wavelengths (or energies, or frequencies). How can one do that? In the optical, there are two main methods.

One relies upon refraction of light as it passes through a piece of glass or plastic. Remember what happens when light moves from air to glass (or vice versa)? Since the refractive index of glass is different than that of air, the light bends. There's an equation which relates the angles of the light relative to the normal vector, before and after it enters the glass. Something about Snell ...

Q: What is the relationship between the angles θ1 and θ2?

Now, this isn't enough on its own to help us form a spectrum. We need to pick a medium which not only refracts light, but refracts it by a degree which depends on the wavelength of the light. In other words, we need a dispersive medium in which the index of refraction depends on the wavelength, n = n(λ). In common materials, the index of refraction is larger for short wavelengths. That means blue light is bent through a larger angle than red light, so white light is spread out into a spectrum.

Q: Suppose you are interested in taking the spectrum of a really, really faint galaxy, one for which every photon counts. Do you see any drawback to using a prism of glass?

Right. Refractive optics require light to pass through some material, which can cause some photons to scatter or be absorbed, and so be lost. The other main method of forming a spectrum can avoid this problem, because one can use a REFLECTION off a polished surface, thus preserving more photons. This second method involves diffraction gratings.

For simplicity's sake, in the diagrams below, I'll show some gratings used in a transmissive, rather than reflective, manner. It just makes things easier to see. In most astronomical applications, though, light will bounce off a grating rather than going through it.

So, a diffraction grating is a device with many grooves or little openings through which light rays can pass. Let's shine light through these openings and onto a detector, over on the right-hand side of this diagram. Consider one location on the detector, marked with an "X" in the diagram. Now, a light ray coming from directly across from the "X" needs to travel a distance L to reach that spot on the detector. But light rays passing through an opening to one side -- a distance d or angle θ from the exact center of the grating -- must travel a slightly longer distance to reach the "X".

Q: How much extra distance must the second ray travel?

That extra distance is d sin(θ). Now, if this extra distance is equal to an integral number of wavelengths of the light, then the second ray will interfere constructively with the central ray. That means that light striking the "X" on the detector will be particularly bright. In other words, we'll see a bright spot on the detector at position "X" if

      d sin(θ) = m λ      for some integer m = 0, 1, 2, ...

But the same is true if we look at things from a slightly different point of view. If we shine light of wavelength λ through a diffraction grating with spacing d between grooves, and the light travels a distance L before it reaches our detector, then we should see a bright spot at the very center of the detector -- straight across from the entrance, at "X"; but we should also see bright spots at off-center locations for which the angle θ satisfies this criterion. Note that diffraction gratings will send light of a given wavelength to several locations at once, one for each value of the integer m (each "order" of the spectrum), and the angle for each order depends on the wavelength.

There are a number of ways that people have arranged optics and prisms and gratings to produce spectra. Some create the spectrum of a single object; others produce spectra for hundreds of objects at once.
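As a quick numerical illustration of the grating criterion d sin(θ) = m λ (my own example, with assumed values for the groove spacing and wavelength), the snippet below computes the angles of the first few bright spots for green light falling on a grating with 600 grooves per millimetre.

```python
import math

lines_per_mm = 600                  # assumed grating: 600 grooves per mm
d = 1e-3 / lines_per_mm             # groove spacing in metres (~1.67 microns)
wavelength = 550e-9                 # green light, 550 nm

# d * sin(theta) = m * wavelength  ->  theta = arcsin(m * wavelength / d)
for m in range(0, 4):
    s = m * wavelength / d
    if s > 1:
        print(f"order m={m}: no solution (m*lambda exceeds d)")
        break
    theta = math.degrees(math.asin(s))
    print(f"order m={m}: bright spot at {theta:5.1f} degrees")
```

Because the angle depends on the wavelength, each order spreads white light out into a spectrum, with longer (redder) wavelengths deflected through larger angles than shorter (bluer) ones.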
The spectrum produced in this way has a familiar form: a long, thin rectangle, due to the shape of the slit through which light passed. Absorption and emission lines appear as narrow vertical bars.

Image by Maurice Gavin, from the wpo-amateur spectroscopy web site.

If you've ever heard of the Henry Draper (HD) catalog, or of the Harvard College Observatory computers, led by Annie Jump Cannon and Williamina Fleming, who created it, then you've used spectra created in this way. This sort of "slitless spectroscopy" can create some cool images:

Image courtesy of Jim Ferreira

The problem with some spectrographs is that they spread the light from a single object out into a very long, thin shape ... but that doesn't match the shape of our detectors. Most CCDs and other devices are roughly square, or slightly rectangular. Is there any way to make a long, thin line fit onto a roughly square detector?

If we mount a spectrograph to the back of a big telescope, then as the telescope tracks and points, the spectrograph -- and all its optics and detectors -- will tilt from side to side. That can ruin the delicate optical alignment of some devices, degrading the quality of the measurements.

Q: Is there some way to separate the spectroscope from the telescope?

Sure! Use an optical fiber to carry the light of one object from the telescope to a spectrograph which sits motionless on the floor. Or in another room. Here are pictures showing the fibers for the SDSS spectrograph being plugged into a metal plate, into which holes have been drilled at EXACTLY the positions of some stars and galaxies of interest.

Image courtesy of the Sloan Digital Sky Survey
Image courtesy of the Sloan Digital Sky Survey and BOSS

Now, a single fiber allows one to carry the light of a single object to the spectrograph -- but all the light of that object is mixed together. If we are studying a relatively nearby galaxy, that means that we mix up the light from the central nucleus and from the outer spiral arms or disk. What if we could pack fibers very close together, so that one fiber measured the nucleus, but other fibers collected light from the disk or arms?

Image courtesy of Jeremy Allington-Smith, University of Durham

Well, one doesn't have to use fibers -- there are several methods for sending light from closely packed regions in the focal plane to a spectrograph:

Image courtesy of the Integral Field Spectroscopy Wiki

What are the advantages of spectroscopy over simple imaging? Well, in a simplified way, an image tells you "how bright is this star?" But what can you learn from spectroscopy?

    Imaging                     Spectroscopy
    --------------------------------------------------------
    How bright is it?

So, if spectroscopy is so great, why don't ALL astronomers ALWAYS use spectrographs?

Q: Is there any advantage for a simple imaging camera over a spectroscopic camera attached to the same telescope?

Well, yes. If you split up the light from an object into a spectrum, then you are spreading out the light from the object across your detector. That means that the number of photons which strike each little section of your detector is much smaller ... which means that the SIGNAL-TO-NOISE must decrease.

One of the most common measurements one can make of a spectrum is the equivalent width of a line. In the spectrum above,

Copyright © Michael Richmond. This work is licensed under a Creative Commons License.
WASHINGTON— For such small creatures, hummingbirds certainly have racked up an outsized list of unique claims to fame. They are the smallest birds and the smallest warm-blooded animals on Earth. They have the fastest heart and the fastest metabolism of any vertebrate. They are the only birds that can fly backward. And scientists reported on Thursday that they also have a complicated evolutionary history. Researchers constructed the family tree of these nectar-eating birds using genetic information from most of the world's 338 hummingbird species and their closest relatives. They said hummingbirds can be divided into nine groups, with differences in size, habitat, feeding strategy and body shape. The common ancestor to all species in existence today lived about 22 million years ago in South America, several million years after hummingbirds were known to be flourishing in Europe, they said. Today's hummingbirds are found only in the Americas. They boast a unique set of capabilities, said University of New Mexico ornithologist Christopher Witt, one of the scientists in the study published in the journal Current Biology. “They can hover stationarily or move in any direction with precision, even in a strong wind. They also have the highest rate of energy consumption per gram of any animal,” Witt said. “They have sparkling colors that are breathtaking when seen under perfect lighting conditions. This combination of speed, agility and beauty is unmatched in nature,” Witt added. Hummingbirds come in a spectacular range of colors, with males more colorful than females. They often have green feathers on the body, with the head coming in “virtually every color you can imagine: gold, red, blue, purple, magenta, often iridescent,” said biologist Jimmy McGuire of the University of California, Berkeley, who led the study. Their name derives from the humming sound produced by the rapid flapping of their wings. The largest hummingbirds flap about 15 times per second, while the smallest approach 80 times per second, Witt said. Hummingbirds consume mostly flower nectar, and have long, slender bills and lengthy, specialized tongues to collect this sweet treat. But because the nectar is almost devoid of protein, they also eat small insects. 'Operating on the Extremes' “They have to constantly feed because they're powering this system that has such great energy requirements. Many of these hummingbirds go into torpor [dormancy] at night so that they don't starve to death overnight, which is pretty cool. They're just operating on the extremes,” McGuire said. While hummingbirds now live only in the New World - North America, Central America, South America and the Caribbean - their oldest fossils were unearthed in Europe. That indicates hummingbirds once enjoyed a much larger range and disappeared in the Old World for unknown reasons, the researchers said. The discovery of fossils in Germany of the oldest known hummingbirds - 30 million years old - was announced in 2004. “The fossil record for hummingbirds, and other small birds, is so poor that we really don't know when European hummingbirds disappeared. It could have been 30 million years ago, or it could have been a few thousand years ago,” Witt said. The hummingbird evolutionary lineage split from a related group of small birds called swifts and treeswifts about 42 million years ago - most likely in Europe or Asia - and by 22 million years ago the ancestral species of modern hummingbirds was in South America, the researchers said. 
Hummingbirds found their way to South America probably after crossing a land bridge that once connected Siberia to Alaska, the researchers said. Once in South America, they expanded into new ecological niches and evolved new species, then spread back to North America about 12 million years ago and into the Caribbean about five million years ago, the researchers said. The biggest threat to hummingbirds is loss of habitat thanks to human activities. If people were not around, they “would just continue on their merry way evolving new species,” McGuire said. The smallest species today, and the smallest bird in existence, is the bee hummingbird of Cuba, which measures about two inches long (five cm) and weighs 1.6 to 1.9 grams. The largest is the giant hummingbird of South America, which measures about eight inches (20 cm) and weighs about 20 grams.
Sunday 12 March 2006

Definition: Alkylating agents are so named because of their ability to add alkyl groups to many electronegative groups under conditions present in cells. Cisplatin and carboplatin, as well as oxaliplatin, are alkylating agents. Other agents are mechlorethamine, cyclophosphamide and chlorambucil. They work by chemically modifying a cell's DNA.

Alkylating agents stop tumour growth by cross-linking guanine nucleobases in DNA double-helix strands - directly attacking DNA. This makes the strands unable to uncoil and separate. As this is necessary in DNA replication, the cells can no longer divide. These drugs act mainly nonspecifically; some of them require conversion into active substances in vivo (e.g. cyclophosphamide). Since cancer cells generally divide more rapidly than do healthy cells, they are more sensitive to DNA damage, and alkylating agents are used clinically to treat a variety of tumours.

Dialkylating agents can react with two different 7-N-guanine residues, and if these are in different strands of DNA the result is cross-linkage of the DNA strands, which prevents uncoiling of the DNA double helix. If the two guanine residues are in the same strand the result is called limpet attachment of the drug molecule to the DNA. Monoalkylating agents can react only with one 7-N of guanine. Limpet attachment and monoalkylation do not prevent the separation of the two DNA strands of the double helix but do prevent vital DNA-processing enzymes from accessing the DNA. The final result is inhibition of cell growth or stimulation of apoptosis, cell suicide.

Examples include:
- ethyleneimines and methylmelamines - hexamethylmelamine or altretamine
- mechlorethamine (mustine)
- uramustine (uracil mustard)

Platinum-based chemotherapeutic drugs (termed platinum analogues) act in a similar manner. These agents don't have an alkyl group, but nevertheless damage DNA. They permanently coordinate to DNA to interfere with DNA repair, so they are sometimes described as "alkylating-like". These agents also bind at N7 of guanine.

Nonclassical alkylating agents

Certain alkylating agents are sometimes described as "nonclassical". There is not a perfect consensus on which items are included in this category, but generally they include:

The discovery of alkylating agents was a consequence of the chemical warfare of World Wars I and II that spawned the modern era of cancer chemotherapy. For example, observations made by physicians treating mustard-gas victims of a World War II tragedy led to important clinical insights. A German bombing raid on the coastal waters of Italy in December 1943 resulted in the sinking of an American ship that contained mustard-gas bombs. This gas, when mixed with fuel oil, dispersed on the surface of the water; men exposed to this mixture soon showed lymphotoxic symptoms. Although sulphur mustards were first used in chemical warfare, it was the more stable nitrogen mustards that were developed for cancer chemotherapy.

Research before and during World War II led to an appreciation of the biological effects of the nitrogen mustards. In 1946, Alfred Gilman and Frederick Phillips correctly determined that the toxic effects were due to alkylation. They reported that the side effects of exposure to nitrogen mustard - nausea, vomiting and myelosuppression - resembled those from exposure to X-rays.
As these specific organ toxicities are related to the high proliferative rates of these tissues (epithelia of the gastrointestinal tract and bone-marrow cells), it seems likely that cancers such as leukaemias and lymphomas, which also have high proliferative rates compared with most normal tissues, might be particularly susceptible to these agents. Indeed, nitrogen mustards caused remissions when they were used to treat lymphomas, and this marked the beginning of the modern era in cancer chemotherapy.

So, it was established that cancers with high proliferative rates could be treated with alkylating agents, such as the nitrogen mustards, and that their selectivity was dependent on quantitative differences in the rates of division between cancer and normal cells: side effects were most often associated with normal tissues that shared these characteristics. Over the next two decades, a wide range of alkylating agents were synthesized in an attempt to control their inherent chemical reactivities. Temozolomide, which is a monoalkylation drug, methylates guanine residues in DNA following a DNA-facilitated rearrangement. The most potent and efficacious agents, however, such as chlorambucil and melphalan, were found to crosslink the two complementary strands of DNA, rather than just alkylating one strand. More recently, imaginative approaches to targeting specific cancer types by site-specific delivery or metabolic activation have been used.
Overexposure to the sun's ultraviolet rays (UV rays) is what causes skin cancer, and it can be prevented by simply applying the correct amount of sunscreen when you know you will be exposed to the sun. Just as dangerous as the sun are UV rays from artificial sources like tanning beds or sunlamps. The three main types of skin cancer are basal cell cancer, squamous cell cancer, and melanoma. In the Skin Cancer Section of this website, we talk a bit about the ABCDE system as a guide to detect signs of skin cancer in moles or growths on the skin. People with skin types that burn easily and do not tan are at the highest risk for skin cancers; anyone who has had severe or many sunburns is also at high risk.

Sun Damage

The sunlight that reaches the earth has ultraviolet A and ultraviolet B (UVA and UVB) rays. These UV rays are the main cause of damage to the skin. UVA and UVB rays affect the skin's sensitivity to sun exposure in different ways.
Dracaena, genus of more than 100 species of plants in the asparagus family (Asparagaceae). Members of the genus are native primarily to the Old World tropics, especially Africa, and one species is endemic to South America. Several Dracaena species are cultivated as houseplants for their ornamental foliage and are noted as effective air cleaners that remove chemicals, such as formaldehyde, from the air indoors. The genus is fairly diverse. Most species have short ringed stalks and narrow sword-shaped leaves, though some resemble trees with crowns of leaves. The small flowers are typically red, yellow, or green and produce berrylike fruit with one to three seeds. Lucky bamboo (Dracaena braunii) and corn plant (D. fragrans), with yellow leaf edges or white stripes, are common houseplants. Dragon trees, notably D. draco from the Canary Islands, can grow more than 18 metres (60 feet) tall and 6 metres (20 feet) wide. The trunk contains a red gum, called dragon’s blood, valued for its medicinal properties. A number of Dracaena species are listed as endangered on the IUCN Red List of Threatened Species because of overharvesting and habitat loss.
A team of British and Czech scientists have developed the most powerful pulse laser in the world. Their laser has an average power of 1000 watts, making it ten times more powerful than any laser of its kind when you consider its output power over time.

Lasers come in two types. First there are continuous lasers, which, as the name implies, can fire continuously. Then there are pulse lasers, which fire in short bursts. Some pulse lasers, like the Texas Petawatt Laser in Austin, can achieve peak power outputs of more than a trillion watts. They achieve this by building up their power and releasing it all at once in a very short burst. However, the long charge cycles mean that the lasers can only emit high-powered beams a few times per day, so their average power is still rather low.

The new laser, called Bivoj, can fire many more times than other lasers can, giving it a much higher average power. The high average power Bivoj can achieve does come with some tradeoffs. It doesn't have as high a peak power output as other lasers, but the teams that built it, the Central Laser Facility in Britain and HiLASE in the Czech Republic, believe the tradeoff is worth it. Creating a high-power laser that can fire more often gives it many applications in both science and industry. The team plans to spend the next few months exploring the laser's potential, and hopes to explore commercial options later this year.
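To see why "average power" and "peak power" can differ so wildly, here is a small Python sketch with illustrative numbers (assumed for the example, not Bivoj's published specifications): a pulsed laser's average power is the energy per pulse times the repetition rate, while its peak power is roughly the pulse energy divided by the pulse duration.

```python
# Illustrative pulsed-laser parameters (assumed, not actual Bivoj specs)
pulse_energy_j = 100.0        # energy delivered in one pulse, joules
rep_rate_hz = 10.0            # pulses fired per second
pulse_duration_s = 10e-9      # length of each pulse, seconds

average_power_w = pulse_energy_j * rep_rate_hz       # energy per second
peak_power_w = pulse_energy_j / pulse_duration_s     # power during a pulse

print(f"average power: {average_power_w:,.0f} W")    # 1,000 W
print(f"peak power:    {peak_power_w:,.0f} W")       # 10,000,000,000 W

# A laser that fires one enormous pulse per day has a huge peak power
# but a tiny average power, which is the trade-off described above.
```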
Lake Malawi National Park is at the southern end of Lake Malawi. It has a unique collection of fish species endemic to the lake. The lake draws researchers from all over the world, keen to learn more about the unique diversity found at the park.

5. The Lake and its Surrounding Geography -

The 9,400 hectare Lake Malawi National Park is located at the southern end of the great expanse of Lake Malawi. The park has a land area around Cape Maclear and Monkey Bay, part of Lake Malawi itself, and islands that are about 100 meters off shore, according to Malawi's department of tourism. Scenery around Lake Malawi National Park comprises wooded rocky mountains that slope to the lake shore, sandy coves, and beaches. The park is also surrounded by heritage sites such as the rock of the ethnic face scars, and graves of early missionaries who settled there in the 1870s.

4. Natural History -

In 1980, Lake Malawi National Park was designated a national park. Before that, the mainland area of the park had been managed as a forest reserve from 1935, according to a study by the Conservation and Sustainable Development. Due to the lake having fish species never found anywhere else in the world, Lake Malawi National Park was designated a World Heritage Site by UNESCO in 1984. From the 1840s, Salim-Bin Abdullah, a Swahili-Arab slave trader, set up his slave trading headquarters on Lake Malawi's shore. The southern shores of Lake Malawi also served as slave trade routes into Tete province and the Zambezi valley in Mozambique, according to UNESCO.

3. Research, Education, and Tourism -

Due to Lake Malawi National Park's unique biodiversity, it attracts research scientists and students keen to learn about the unique fish species in the lake. According to a World Bank study, there are an estimated 500 to 1000 fish species in Lake Malawi belonging to 11 fish families. These include the endemic Cichlidae freshwater fish family, making up over 90 percent of the fish species in the lake. Lake Malawi National Park is also a vibrant tourist destination. Aquatic recreational activities in the lake's waters like snorkeling, scuba diving, yachting, sail boarding, swimming, and kayaking are available. At the park's land area tourists can hike, bird-watch, mountain bike, and walk on trails around the lake. There are also sports like golf and volleyball available to tourists.

2. Habitats and Biodiversity -

Lake Malawi National Park has diverse habitats. These include rocky shorelines, sandy beaches, wooded hillsides, swamps, and lagoons. There also are granitic hills that rise steeply from the lake's shore, and several sandy bays, according to UNESCO. Lake Malawi also has 30 percent of all known cichlid fish species in the world. Underwater habitats that are sandy, weedy, rocky, and have reed beds and algae support the diverse fish species in the lake. The park is also rich in fauna. Birds like the collared palm thrush, marabou stork, palm swift, spinetail, and reptiles like the monitor lizard are found there. Lake Malawi National Park also has three antelope species (the kudu, suni, and nyala), as well as baboons and hyraxes.

1. Environmental Threats and Conservation Efforts -

Under the laws of Malawi, Lake Malawi National Park is a protected site with a management plan for its resources instituted. But there are potential future threats cited, from human activities like overfishing, firewood collection from the park, pollution of the lake by boats, and siltation due to deforestation.
Also, agriculture-related activities like pesticide and fertilizer usage in adjacent farmlands have been cited by conservation experts as likely to affect the lake's biodiversity. Such runoff promotes the growth of blue-green algae, harmful to the fragile aquatic life in Lake Malawi. Though Lake Malawi National Park's terrestrial and underwater habitats are in good condition, there needs to be a plan prioritizing protection of its resources against increased human population and activities, according to UNESCO.
Recently thousands of dead and decaying pigs were pulled from rivers in Shanghai and Jiaxing, China. Apparently farmers dumped the animals into the water after the pigs became ill, and porcine circovirus was subsequently detected in the pig carcasses and in the water. Porcine circoviruses are small, icosahedral viruses that were discovered in 1974 as contaminants of a porcine kidney cell line. They were later called circoviruses when their genome was found to be a circular, single-stranded DNA molecule. Upon entry into cells, the viral ssDNA genome enters the nucleus, where it is made double-stranded by host enzymes. It is then transcribed by host RNA polymerase II to form mRNAs that are translated into viral proteins. There is some evidence that circoviruses might have evolved from a plant virus that switched hosts and then recombined with a picorna-like virus.
Brains are complicated. We all know that. Like an entangled bunch of wires. Still, over the years, neuroscientists have been able to map out several brain regions and their functions in behavior and physiology. Pretty impressive, but when it comes down to the precise determination of which brain cells are responsible for what function, many things are still a mystery.

Traditionally, two methods were used for this kind of research: pharmaceuticals (chemical compounds, drugs) and electrodes. While pharmaceuticals can target specific cell receptors to trigger specific neurons, they are not temporally precise. It takes time for them to work after administration, and once they work, the drugs can stay in the system for a certain amount of time. In most cases, activation actually lasts too long. It would be so much easier if cells could be turned on and off like a light switch. Therefore, many scientists have started using electrodes. The problem with electrodes, however, is that they are often not spatially precise enough. So in many cases, neurons in the vicinity of the targeted neurons are also activated.

The light-bulb moment

In the last decade a new method was invented to actually activate or even inhibit neurons with light. Just like turning on a light switch, literally. Scientists have found a way to insert light-sensitive receptor proteins (originally found in algae) into mammalian neurons, making them sensitive to activation by light of specific wavelengths. This offers a temporal and spatial precision that was previously unprecedented. It's not surprising this technique was named Method of the Year by Nature in 2010. We are talking optogenetics here.

"Optogenetics is a technology that allows targeted, fast control of precisely defined events in biological systems as complex as freely moving mammals." (Karl Deisseroth, Nature Methods, 8, 26-29.)

Optogenetics has already undergone rapid developments in the last few years. While first tested ex vivo, soon it was possible to use it in live animals, first restrained and now in animals implanted with fiber optics that are able to freely move around in their cages. Initially, blue light was used; now new receptors have been developed that are sensitive to other wavelengths of color.

Of course optogenetics holds great promise for behavioral research as well. Not only can we now more specifically determine the actual effects of stimulation of certain neurons on behavior, it also refines operant conditioning tasks and preference/avoidance tests. This is where Noldus comes in: a perfect challenge for EthoVision XT and Trial & Hardware Control.

The way we do neuroscience

Garret Stuber (www.stuberlab.org) honored our headquarters (Wageningen, The Netherlands) with a visit a while back, to present his lab's work on optogenetics. As he puts it: "Now it is cutting edge technology, relatively new, and still being refined. But in 5 years, this will be the way we do neuroscience."

Addiction and neuropsychiatric disorders

At the Stuber Lab, neural circuits that are involved in addiction and neuropsychiatric disorders are studied. They are interested in learning how activation or inactivation of certain neurons influences behavior, by performing several behavioral tests. Stuber says: "It is difficult to do these kinds of tests without automated triggers. It doesn't work as well if you have to manually turn on the laser for optogenetic stimulation each time the animal is at the location or performs the behavior you specified.
Therefore, Trial & Hardware Control is basically essential for these kinds of tests."

Real-time place preference

As Stuber describes it, at their lab they often like to start with a straightforward real-time place preference test. In this test the animal is placed in a rectangular open field with two distinct sides. One of these sides is paired with optogenetic stimulation. Depending on whether this stimulation is activating or deactivating, and which neurons are affected, this can be a rewarding or aversive stimulus for the animal. If in further sessions the animal spends more time on the stimulated side, it is fair to conclude the stimulus had a rewarding effect. If the animal avoids this side, the stimulus was aversive. In practice, EthoVision XT Trial & Hardware Control is programmed to pair a zone (one side) with a trigger. This command goes through the IO-box and turns on the laser for optogenetic stimulation.

"The great thing about Trial & Hardware Control is that you can create a protocol more complex than just one pairing of animal location (or behavior) to a trigger. You can create multiple output signals, for example define five different zones and pair these with five different frequencies of optogenetic stimulation. This way you test out what the optimal frequency of stimulation is. This is important because some neurons are responsible for different actions, depending on their firing rate and thus the frequency with which they are stimulated."

Anxiolytic and anxiogenic stimuli

Stuber also uses optogenetics to study the possible anxiolytic or anxiogenic functioning of neurons. He describes a 15 minute off-on-off stimulus test, in which the animal is placed in an elevated plus maze. EthoVision XT Trial & Hardware Control can be programmed to turn on optogenetic stimulation between minutes 6 and 10 of the test. During the first and last five minutes of the test the animal is not stimulated. The behavior of the animal is video tracked. If the animal spends significantly more time in the open arms of the plus maze during the stimulation phase, stimulation of the neurons is thought to have an anxiolytic effect.

Other possibilities for optogenetics in behavioral research

A very valuable and yet straightforward test is that of open field locomotion in response to optogenetic stimulation. Velocity, movement, and rotation are interesting behaviors that can be easily assessed with video tracking, and that are useful indicators of neuron functioning. Stuber also mentions that Trial & Hardware Control can be a great asset to automate operant tests in which the behavior of the animal is detected (e.g. a nose poke or lever push) and subsequently triggers optogenetic stimulation. This can be used for either rewarding or aversive learning.

A "bright" future for optogenetics

While it has undergone rapid developments, there are a lot more possibilities in the future of optogenetic research. For example, while optogenetics is often used to stimulate neurons, it can also be used to disrupt neuron signaling, effectively turning them off. And instead of just stimulating or deactivating neurons, optogenetics can also be used to record the activity of neurons. Stuber comments that there is already a lot of proof of principle, but that actual studies that incorporate this in behavioral testing are not published yet. It does show great promise for the future, as it is already possible to visualize neuron activity and even discriminate firing rate according to the colored light intensity.
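Coming back to the real-time place preference protocol described earlier, the closed-loop logic behind it is simple enough to sketch in a few lines of Python. The functions below (get_animal_position, set_laser) are hypothetical placeholders, not the actual EthoVision XT or Trial & Hardware Control API; the sketch only illustrates the "animal in zone, laser on" pairing.

```python
import random
import time

# Placeholder I/O: in a real setup these would talk to the video tracker
# and to the digital output that gates the stimulation laser. They are
# simulated here so the sketch runs on its own.
def get_animal_position():
    """Return a simulated (x, y) position in a 100 x 100 cm arena."""
    return random.uniform(0, 100), random.uniform(0, 100)

def set_laser(on):
    """Stand-in for switching the optogenetic laser via a TTL output."""
    print("laser", "ON" if on else "OFF")

def in_zone(pos, zone):
    """True if position (x, y) lies inside the rectangle (x1, y1, x2, y2)."""
    x, y = pos
    x1, y1, x2, y2 = zone
    return x1 <= x <= x2 and y1 <= y <= y2

STIMULATED_SIDE = (0, 0, 50, 100)   # one half of the arena, in cm

def run_trial(duration_s=10, poll_hz=30):
    """Closed loop: laser on whenever the animal is on the paired side."""
    end = time.time() + duration_s
    laser_on = False
    while time.time() < end:
        want_on = in_zone(get_animal_position(), STIMULATED_SIDE)
        if want_on != laser_on:        # switch only when the state changes
            set_laser(want_on)
            laser_on = want_on
        time.sleep(1.0 / poll_hz)
    set_laser(False)                    # always end with the laser off

if __name__ == "__main__":
    run_trial()
```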
“It would be great to combine it all; stimulating one type of neuron, while recording the effect it has on the other type of neuron, imaging the firing rate, and while tracking the location and behavior of the animal,” says Stuber.

Thank you, Garret Stuber and the Stuber Lab (www.stuberlab.org), for your help with this blog post! Go to www.stuberlab.org for more information on the work of Garret Stuber and his colleagues.

Some recent publications that might interest you:
- Stamatakis, A.M.; Stuber, G.D. (2012). Activation of lateral habenula inputs to the ventral midbrain promotes behavioral avoidance. Nature Neuroscience, 15(8), 1105-1107.
- Van Zessen, R.; Phillips, J.L.; Budygin, E.A.; Stuber, G.D. (2012). Activation of VTA GABA neurons disrupts reward consumption. Neuron, 73, 1184–1194.
- Sparta, D.R.; Stamatakis, A.M.; Phillips, J.L.; Hovelsø, N.; van Zessen, R.; Stuber, G.D. (2011). Construction of implantable optical fibers for long-term optogenetic manipulation of neural circuits. Nature Protocols, 7, 12–23.
- Stuber, G.D.; Britt, J.P.; Bonci, A. (2011). Optogenetic modulation of neural circuits that underlie reward seeking. Biological Psychiatry, 71, 1061-1067.
- Stuber, G.D.; Sparta, D.R.; Stamatakis, A.M.; van Leeuwen, W.; Harjoprajitno, J.E.; Cho, S.; Tye, K.M.; Kempadoo, K.A.; Zhang, F.; Deisseroth, K.; Bonci, A. (2011). Amygdala to nucleus accumbens excitatory transmission facilitates reward seeking. Nature, 475, 377-380.
Biomass System Design

Biomass encompasses a variety of materials that includes wood, agricultural residues and both human and animal waste. These materials can be used for heating buildings and, to a lesser extent, for producing power or a combination of heat and power. With biomass systems there needs to be more operator interaction than with other forms of renewable energy such as solar or wind. Operators of biomass systems will have to order and/or deliver fuel, remove ash, and maintain all the moving parts. While this seems like it may be a lot of maintenance, in actuality it requires no more than a few minutes a day plus a few hours per year for an annual inspection and cleaning. This small amount of extra care may turn some people off to the idea of a biomass system versus a solar or wind option; however, unlike their clean energy counterparts, biomass systems have the great advantage of dispatchability. This means that the system is controllable and provides heating when it is needed. The one big disadvantage to this is that fuel needs to be purchased, delivered, and stored. Additionally, biomass combustion produces emissions that have to be monitored to ensure that they comply with government regulations.

Parts of a Biomass System

There are several key components to a biomass system, which include the following:
- Fuel storage and handling or conveying
- Fire suppression systems
- Exhaust controls
- System controls
- Automatic ash handling (optional feature)
- Back up boiler
- Heat distribution system

No matter the biomass system, all require storage for the fuel as well as a way of handling the fuel. Wood chips or pellets are often stored in silos or a bunker, with an automated system that moves the fuel from the storage area to the combustion area. It is generally recommended that storage areas hold a minimum of 3 days of fuel. The day hopper is the last part of the fuel handling system and controls the rate at which the fuel is delivered to the boiler. In log or pellet systems the heat created by the boiler can be used to directly heat the air, or it can be used to heat water, which acts as the medium by which the heat is delivered. Fire suppression systems are useful in preventing the fire from the combustor traveling back up the conveyor system where the fuel is being stored. This system can include temperature sensors and water-delivery or control systems to put out fires before they spread throughout the entire system.

How Does a Biomass System Work?

Biomass systems typically use direct combustion to produce heat. In this type of combustion the biomass is burned to produce hot gas, which then is either used to directly heat the building or fed into a boiler to create hot water or steam. In the boiler system the steam can be used to transfer the heat to the building.
It’s generally accepted that the contemporary world is more technologically advanced than the ancient one. The Etruscans may have dreamed of space travel, but they were unable to transport themselves to Schenectady, New York, let alone the moon. Yet we can’t be too smug. Sure, we carry the Internets in our pockets and heat our meals in seconds, but we can’t touch ancient Rome when it comes to concrete. Throughout the Mediterranean basin, there are ancient harbors constructed with 2,000-year-old Roman concrete that remain more or less in perfect functioning condition. And as we gaze upon the remnants of the ancient world, we see aqueducts, roads and buildings that have survived remarkably well over time. When we compare these structures with our own, we find contemporary concrete sadly lacking. Roman concrete was superior to our own, and now scientists understand why: the secret to Roman concrete lies in its unique mineral formulation and production technique. As the researchers explain in a press release outlining their findings, “The Romans made concrete by mixing lime and volcanic rock. For underwater structures, lime and volcanic ash were mixed to form mortar, and this mortar and volcanic tuff were packed into wooden forms. The seawater instantly triggered a hot chemical reaction. The lime was hydrated — incorporating water molecules into its structure — and reacted with the ash to cement the whole mixture together.” The Portland cement formula crucially lacks the lime and volcanic ash mixture. As a result, it doesn’t bind quite as well as the Roman concrete, researchers found. It is this inferior binding property that explains why structures made of Portland cement tend to weaken and crack after a few decades of use, Jackson says.
System that uses electromagnetic echoes to detect and locate objects. It can also measure precisely the distance (range) to an object and the speed at which the object is moving toward or away from the observing unit. Radar (the name is derived from radio detecting and ranging) originated in the experimental work of Heinrich Hertz in the late 1880s. During World War II British and U.S. researchers developed a high-powered microwave radar system for military use. Radar is used today in identification and monitoring of artificial satellites in Earth orbit, as a navigational aid for airplanes and marine vessels, for air traffic control around major airports, for monitoring local weather systems, and for spotting speeders. This entry comes from Encyclopædia Britannica Concise. For the full entry on radar, visit Britannica.com.
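To make the two measurements mentioned in the entry concrete (range from the echo's round-trip time, speed from the Doppler shift of the return), here is a small illustrative sketch. The delay, Doppler shift, and carrier frequency in the example are invented values, not figures from the entry.

```python
# Illustrative sketch (not part of the encyclopedia entry): the two basic
# radar measurements. A radar finds range from the round-trip echo delay
# and radial speed from the Doppler shift of the returned signal.

C = 3.0e8  # speed of light, m/s

def radar_range(echo_delay_s):
    """Range in metres: the pulse travels out and back, hence the factor 1/2."""
    return C * echo_delay_s / 2

def radial_speed(doppler_shift_hz, carrier_freq_hz):
    """Speed toward (+) or away from (-) the radar, from the Doppler shift."""
    wavelength = C / carrier_freq_hz
    return doppler_shift_hz * wavelength / 2

# A 100-microsecond echo delay corresponds to a target about 15 km away,
# and a +1 kHz Doppler shift at a 10 GHz carrier to about 15 m/s closing speed.
print(radar_range(100e-6))            # 15000.0 m
print(radial_speed(1000.0, 10e9))     # 15.0 m/s
```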
A bottle with ethanol vapor is touched with a spark, which blows the cork off the bottle. This simulates how a car’s combustion engine works. This is the same way a spark plug works. The spark ignites the fuel in the chamber and causes an explosion, which increases the pressure in the bottle and blows off the cork. The ethanol vapor explosion demonstrates that things expand when they get hot. However, instead of just being heated, the ethanol actually burns. Ethanol is a kind of alcohol made from plants. When something burns, it changes from one chemical to another. When ethanol burns, it mixes with oxygen in the air to make water vapor and carbon dioxide. A little bit of ethanol is put inside a bottle and then the top is sealed with a cork. Two screws are in the sides of the bottle so that their points almost touch. The bottle must first be shaken so the ethanol evaporates and mixes thoroughly with the oxygen inside. This makes it burn faster. To ignite the mixture, we need some energy, which we get from a high voltage sparker. (See the section on Tesla Coils for how the sparker works.) Eventually, a spark jumps between the points of the screws inside the bottle. The spark ignites the ethanol, which burns very quickly. As it burns, it changes to the hot gases carbon dioxide and water vapor. The heat of the burning makes them expand, causing an explosion. This blows the cork off the bottle. You may know that some cars can run on ethanol instead of gasoline. A car engine works the same way as the exploding bottle. Gasoline or ethanol comes into the engine and mixes with oxygen. A spark from the spark plug lights the mixture, causing an explosion. This forces the piston out, like the cork in the bottle. The pistons are attached to the wheels, which turn to make the car go. The explosion in the bottle is like a one-cylinder car engine.
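For readers who want the chemistry written out, the complete combustion of ethanol described above can be summarized by the standard balanced equation below (this is textbook chemistry, not something stated in the demonstration write-up):

```latex
\mathrm{C_2H_5OH} + 3\,\mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CO_2} + 3\,\mathrm{H_2O} + \text{heat}
```

Each molecule of ethanol needs three molecules of oxygen, which is one reason shaking the bottle to mix the vapor thoroughly with air makes the burn so much faster.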
As always, you can contact our office to answer any questions or concerns. The ear is made up of three sections: the outer ear, middle ear and inner ear. Each of these areas is susceptible to infections, which can be painful. Young children have a greater tendency to get earaches. While most ear pain resolves itself in a matter of days, you should get a physical examination to understand the type of infection, prevent it from spreading and obtain treatment to help alleviate the pain. Outer Ear Infection (Otitis Externa) Also known as Swimmer’s Ear, outer ear infections result from an inflammation, often bacterial, in the outer ear. Generally, they happen when water, sand or dirt gets into the ear canal. Moisture in the air or swimming makes the ear more susceptible to this type of ear infection. Symptoms include: severe pain, itching, redness and swelling in the outer ear. There also may be some fluid drainage. Often the pain is worse when chewing or when you pull on the ear. To reduce pain and prevent other long-term effects on the ear, be sure to see a doctor. Complications from untreated otitis externa may include hearing loss, recurring ear infections and bone and cartilage damage. Typically, your doctor will prescribe eardrops that block bacterial growth. In more severe cases, your doctor may also prescribe an antibiotic and pain medication. Most outer ear infections resolve in seven to 10 days. Middle Ear Infection (Otitis Media) Middle ear infections can be caused by either bacterial or viral infection. These infections may be triggered by airborne or foodborne allergies, infections elsewhere in the body, nutritional deficiencies or a blocked Eustachian tube. In chronic cases, a thick, glue-like fluid may be discharged from the middle ear. Treatment is contingent on the cause of the infection and ranges from analgesic eardrops, medications to the surgical insertion of a tube to drain fluid from the middle ear or an adenoidectomy. Inner Ear Infection (Otitis Interna) Also known as labyrinthitis, inner ear infections are most commonly caused by other infections in the body, particularly sinus, throat or tooth infections. Symptoms include dizziness, fever, nausea, vomiting, hearing loss and tinnitus. Always seek medical attention if you think you may have an inner ear infection. If you suspect you or your child may have an ear infection, please contact our office and schedule an appointment with one of our otolaryngologists.
When you get general anesthesia, you're "put under," which means that you're totally unconscious and immobilized. You "go to sleep" and don't feel, sense or remember anything that happens after the drugs begin to work on your system. It's not completely clear exactly how general anesthetics work, but the current accepted theory is that they affect the spinal cord (which is why you end up immobile), the brain stem reticular activating system (which explains the unconsciousness) and the cerebral cortex (which results in changes in electrical activity on an electroencephalogram). Major, complex surgeries that require a long period of time to perform typically require general anesthesia. Patients may be under for just a few hours for a knee replacement, or as many as six hours for something more complicated, such as heart bypass surgery. If you're preparing for a surgery requiring general anesthesia, you'll typically meet with the anesthesiologist to give him or her your medical history. This is important because people with certain conditions might require special care under anesthesia -- a patient with low blood pressure might need to be medicated with ephedrine, for example. Patients who are heavy drinkers or drug users also tend to react differently to anesthesia. During this meeting, you'll be instructed not to eat for several hours before surgery. It's possible for someone under general anesthesia to aspirate, or breathe in, the contents of the stomach. When you're under general anesthesia, you'll be wearing a breathing mask or breathing tube, because the muscles become too relaxed to keep your airways open. Several different things are continuously monitored while you're under -- pulse oximetry (oxygen level in the blood), heart rate, blood pressure, respiratory rate, carbon dioxide exhalation levels, temperature, the concentration of the anesthetic and brain activity. There's also an alarm that goes off if your oxygen level drops below a certain point. There are four stages of general anesthesia:
- During the first stage, induction, the patient is given medication and may start to feel its effects but hasn't yet fallen unconscious.
- Next, patients go through a stage of excitement. They may twitch and have irregular breathing patterns or heart rates. Patients in this stage don't remember any of this happening because they're unconscious. This stage is very short and progresses rapidly to stage three.
- During stage three, the muscles relax, breathing becomes regular and the patient is considered fully anesthetized.
- Stage four anesthesia isn't a part of the regular process. This is when a patient has received an overdose of drugs, which can result in heart or breathing stoppage, brain damage or death if swift action isn't taken.
We'll look at the drugs administered during general anesthesia, as well as recovery, next.
Introduction and Basic Principles of Energy Sustainability
The generation and use of energy is central to the maintenance of organization. Life itself is a state of organization maintained by the continual use of sources of energy. Human civilization has reached the state it has by the widespread use of energy, and for the large fraction of the world that aspires to a higher standard of living, more energy will be required for them to achieve it. Therefore, I embrace the idea that we need energy, and probably need much more of it than we currently have. We should never waste energy, and should always seek to use energy as efficiently as possible and practical, but energy itself will always be needed. This weblog is about the use of thorium as an energy source of sufficient magnitude for thousands of years of future energy needs. Thorium, if used efficiently, can be converted to energy far more easily and safely than any other energy source of comparable magnitude, including nuclear fusion and uranium fission. Briefly, my basic principles are:
1. Nuclear reactions (changes in the binding energy of nuclei) release about a million times more energy than chemical reactions (changes in the binding energy of electrons); therefore, it is logical to pursue nuclear reactions as dense sources of energy.
2. Changing the binding energy of the nucleus with uncharged particles (neutrons inducing fission) is much easier than changing the nuclear state with charged particles (fusion), because fission does not contend with electrostatic repulsion as fusion does.
3. Naturally occurring fissile material (uranium-235) will not sustain us for millennia due to its scarcity. We must fission fertile isotopes (uranium-238, thorium-232), which are abundant, in order to sustain energy production for millennia. Fertile isotopes such as U-238 and Th-232 basically require 2 neutrons to fission (one to convert, one to fission), and require fission reactions that generate more than 2 neutrons per absorption in a fissile nucleus.
4. For maximum safety, nuclear reactions should proceed in a thermal (slowed-down) neutron spectrum, because only thermal reactors can be designed to be in their most critical configuration, where any alteration to the reactor configuration (whether through accident or intention) leads to fewer nuclear reactions, not more. Thermal reactors also afford more options for achieving negative temperature coefficients of reactivity (which are the basic measurement of the safety of a nuclear reactor). Reactors that require neutrons that have not been slowed significantly from their initial energy (fast-spectrum reactors) can always be altered in some fashion, either through accident or intention, into a more critical configuration that could be dangerously uncontrollable because of the increased reactivity of the fuel. Basically, any fast-spectrum reactor that is barely critical will be extremely supercritical if its neutrons are moderated in some way.
5. “Burning” uranium-238 produces a fissile isotope (plutonium-239) that “burns” inefficiently in a thermal (slowed-down) neutron spectrum and does not produce enough neutrons to sustain the consumption of uranium-238. “Burning” thorium-232 produces a fissile isotope (uranium-233) that burns efficiently in a thermal neutron spectrum and produces enough neutrons to sustain the consumption of thorium. Therefore, thorium is a preferable fuel, if used in a neutronically efficient reactor. (A rough neutron-economy sketch illustrating this point follows the full list below.)
6. Achieving high neutronic efficiency in solid-fueled nuclear reactors is difficult because the fuel sustains radiation damage, the fuel retains gaseous xenon (which is a strong neutron poison), and solid fuel is difficult to reprocess because it must be converted to a liquid stream before it is reprocessed.
7. Fluid-fuel reactors can continuously strip xenon and adjust the concentration of fuel and fission products while operating. More importantly, they have an inherently strong negative temperature coefficient of reactivity, which leads to inherent safety and vastly simplified control. Furthermore, decay heat from fission products can be passively removed (in case of an accident) by draining the core fluid into a passively cooled configuration.
8. Liquid-fluoride reactors have all the advantages of a fluid-fueled reactor; in addition, they are chemically stable across a large temperature range and are impervious to radiation damage due to the ionic nature of their chemical bonds. They can dissolve sufficient amounts of nuclear fuel (thorium, uranium) in the form of tetrafluorides in a neutronically inert carrier salt (lithium-7 fluoride–beryllium fluoride). This leads to the capability for high-temperature, low-pressure operation, no fuel damage, and no danger of fuel precipitation and concentration.
9. The liquid-fluoride reactor is very neutronically efficient due to its lack of core internals and neutron absorbers; it does not need “burnable poisons” to control reactivity because reactivity can continuously be added. The reactor can achieve the conversion ratio (1.0) needed to “burn” thorium, and has superior operational, safety, and development characteristics.
10. Liquid-fluoride reactors can retain actinides while discharging only fission products, which will decay to background levels of radiation in ~300 years and do not require long-duration (>10,000 year) geologic burial.
11. A liquid-fluoride reactor operating only on thorium and using a “start charge” of pure U-233 will produce almost no transuranic isotopes. This is because neutron capture in U-233 (which occurs about 10% of the time) will produce U-234, which will further absorb another neutron to produce U-235, which is fissile. U-235 will fission about 85% of the time in a thermal-neutron spectrum, and when it doesn’t it will produce U-236. U-236 will further absorb another neutron to produce Np-237, which will be removed by the fluorination system. But the production rate of Np-237 will be exceedingly low because of all the fission “off-ramps” in its production.
12. We must build thousands of thorium reactors to displace coal, oil, natural gas, and uranium as energy sources. This would be impractical if liquid-fluoride reactors were as difficult to build as pressurized water reactors. But they will be much simpler and smaller for several reasons. They will operate at a higher power density (leading to a smaller core), they will not need refueling shutdowns (eliminating the complicated refueling equipment), they will operate at ambient pressure and have no pressurized water in the core (shrinking the containment vessel dramatically), they will not require the complicated emergency core cooling systems and their backups that solid-core reactors require (because of their passive approach to decay heat removal), and their power conversion system will be much smaller and more power-dense (since in a closed-cycle gas turbine you can vary both initial cycle pressure and overall pressure ratio). In short, these plants will be much smaller, much simpler, much, much safer, and more secure.
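To make principle 5 above more concrete, here is a minimal back-of-the-envelope sketch of the "neutrons per absorption" argument. It is not from the original post: the cross-sections and neutron yields are rough textbook thermal (0.025 eV) values quoted from memory, so treat the output as illustrative only. The point is simply that the reproduction factor η must comfortably exceed 2 for a thermal breeder, which U-233 manages and Pu-239 barely does.

```python
# Illustrative sketch of thermal-spectrum neutron economy (not from the post).
# eta = neutrons emitted per neutron absorbed in the fissile nucleus:
#   eta = nu * sigma_fission / (sigma_fission + sigma_capture)
# A breeder needs eta > 2: one neutron to continue the chain reaction,
# one to convert a fertile nucleus (Th-232 or U-238), plus margin for losses.
# The cross-sections (barns) and nu below are approximate textbook thermal values.

fissile = {
    #  name      nu    sigma_f  sigma_capture
    "U-233":  (2.49,   531.0,    46.0),
    "U-235":  (2.44,   585.0,    99.0),
    "Pu-239": (2.88,   748.0,   271.0),
}

for name, (nu, sig_f, sig_c) in fissile.items():
    eta = nu * sig_f / (sig_f + sig_c)
    print(f"{name}: eta ~ {eta:.2f}")

# Typical output: U-233 ~2.29, U-235 ~2.09, Pu-239 ~2.11.
# Only U-233 leaves a comfortable margin above 2 once parasitic absorption
# and leakage are accounted for, which is the core of the thorium argument.
```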
That said, I am not an apologist for the nuclear industry. I think that a fundamental mistake was made when thorium was overlooked as the prime nuclear fuel in favor of uranium, and this blog is an attempt to explain my position on that topic. In such a position, I think I stand in some good company. Dr. Alvin Weinberg, former director of the Oak Ridge National Laboratory and inventor of the pressurized-water reactor (he holds the patent) said in 1970: The achievement of a cheap, reliable, and safe breeder remains the primary task of the nuclear energy community. (In expressing this view, I suppose I betray a continuing frustration at the slow progress of fusion research, even though the Russian success with the tokamak has quickened the pace.) Actually not much has changed in this regard in 25 years. Even during World War II, many people realized that the breeder was central. It is only now, with burner reactors doing so well, that the world generally has mobilized around the great aim of the breeder. As all readers of Nuclear Applications & Technology know, the prevailing view holds that the LMFBR is the proper path to ubiquitous, permanent energy. It is no secret that I, as well as many of my colleagues at ORNL, have always felt differently. When the idea of the breeder was first suggested in 1943, the rapid and efficient recycle of the partially spent core was regarded as the main problem. Nothing that has happened in the ensuing quarter-century has fundamentally changed this. The successful breeder will be the one that can deal with the spent core most rationally—either by achieving extremely long burnup, or by greatly simplifying the entire recycle step. We at Oak Ridge have always been intrigued by this latter possibility. It explains our long commitment to liquid-fueled reactors-first, the aqueous homogeneous and now, the molten salt. The molten-salt system has been worked on, mainly at Oak Ridge, for about 22 years. For the first 10 years, our work was aimed at building a nuclear aircraft power plant. The first molten-salt reactor, the Aircraft Reactor Experiment, was described in a series of papers from Oak Ridge that appeared in the November 1957 issue of Nuclear Science and Engineering. The present series of papers reports the status of molten-salt systems, and particularly the experience we have had with the Molten-Salt Reactor Experiment (MSRE). The tone of optimism that pervades these papers is hard to suppress. And indeed, the enthusiasm displayed here is no longer confined to Oak Ridge. There are now several groups working vigorously on molten salts outside Oak Ridge. The enthusiasm of these groups is not confined to MSRE, nor even to the molten-salt breeder. For we now realize that molten-salt reactors comprise an entire spectrum of embodiments that parallels the more conventional solid-fueled systems. Thus molten-salt reactors can be converters as well as breeders; and they can be fueled with either 239Pu or 233U or 235U. However, we are aware that many difficulties remain, especially before the most advanced embodiment, the Molten-Salt Breeder, becomes a reality. Not all of these difficulties are technical.
I have faith that with continued enlightened support of the US Atomic Energy Commission, and with the open-minded, sympathetic attention of the nuclear community that these papers should encourage, molten-salt reactors will find an important niche in the unfolding nuclear energy enterprise. Weinberg’s faith in the AEC was unjustified, for just a few years later they moved to kill the liquid-fluoride reactor in favor of the liquid-metal fast breeder. I think this was (and is) a mistake, for only in the liquid-fluoride reactor can we find the safety, economy, and efficiency needed to unlock the potential of thorium energy for tens of thousands of years.
Concerted evolution is the tendency of the different genes in a gene family or cluster to evolve in concert. This means that each gene locus in the family comes to have the same genetic variant. The alpha globin genes of primates illustrate the principle: all primates have two alpha globins; we can therefore assume that the common ancestor of primates had two alpha globin genes. The sequence of each alpha globin gene differs between primate species; in the great apes any two species differ by about 2.5 amino acid substitutions in each gene. If one gene accumulates about 2.5 amino acid changes in the time between two species, then two different genes (alpha1 and alpha2) which have been separated for maybe 300 million years should have accumulated many more changes if they have been evolving independently. The conclusion is that they have not evolved independently; they have evolved in concert.
Figure: a phylogeny of the human globin genes. The genes multiplied by gene duplications, and the dates on the figure are the times of the duplications as inferred from the molecular clock. In fact, as the text explains, the inferred dates are untrustworthy. From Jeffreys et al. (1983).
Why does concerted evolution take place?
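To put rough numbers on the argument above (the 6-million-year great ape divergence time used here is an illustrative assumption, not a figure from the text): if about 2.5 substitutions per gene accumulate over roughly 6 million years of separation between species, then two gene copies that had been diverging independently for about 300 million years would be expected to differ by on the order of

```latex
2.5 \times \frac{300\ \text{Myr}}{6\ \text{Myr}} \approx 125\ \text{substitutions per gene},
```

far more than the handful of differences actually observed between alpha1 and alpha2. This is the quantitative gap that the hypothesis of concerted evolution is meant to explain.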
Driving Safety for Teenagers
Learning how to drive is an important step your child will take in their life. Driving is an opportunity for them to become more independent. However, most parents are frightened at the idea of their teen getting on the road for the first time on their own. You probably envision all the different scenarios that could happen: what if they’re not paying attention, what if they text and drive, or what if they get into an accident? You should make your child understand that driving is a privilege. It is a skill that they must know how to use in a responsible manner. Parents should lay out ground rules that their teen must always follow, at least until you feel comfortable with their driving skills. Below we’ve provided some insight on how to keep your child safe on the road.
What are the Most Important Driving Rules I Should Teach my Teen?
- They should always buckle up whenever they drive, no matter how slow they’re going or how close the destination is. Explain to your child just how important seatbelt use is and the consequences they could face if they don’t wear one. Statistics tell us that seat belts saved an estimated 14,955 lives in 2017 alone, and 47% of passenger vehicle occupants killed in 2017 were unrestrained. A seat belt is not something your teen can afford to go without.
- Tell your child to always stay focused on the road. Driving distracted dramatically increases the risk of accidents. Discourage them from using their phone, eating, playing loud music, or doing their makeup while behind the wheel.
- Discourage your child from playing any loud music while driving, as it could be a significant distraction. If your teen is just starting out, don’t allow them to play music at all.
- Strongly enforce a no texting and driving rule. Texting and driving is a huge problem in the country, and most teens admit to doing it despite knowing the risks. Texting and driving can quickly turn into a life-or-death situation because drivers have reduced reaction time while being distracted by their phone. You can be a good example for your child by staying focused on the road and not texting while driving yourself.
- Your child must follow the speed limit because speeding makes them more likely to cause an accident. Excessive speeding is the cause of many teen-related car accidents. Your child has little experience with getting a feel for their driving speed. When your child drives too fast, they have less time to stop or react. This poses a danger to both them and others. Maintaining a steady speed helps your child stop safely on the road. Let your child know when they are driving too fast, so they learn how to monitor their own speed.
- Set some ground rules for your child to follow once they get their license. This means giving them a time in the evening after which they are not to use the car. Bar them from driving with friends, and allow them to use the car only for essentials such as going to school, running errands, and emergencies. Setting these rules gives your child a disciplined approach to driving.
Inexperience behind the wheel increases the risk of causing an accident. Your child’s reflexes are not yet developed to a level where they can act immediately during an emergency. They have also not acquired the skills that an experienced driver has, such as awareness of and interaction with the flow of traffic. Many teens shrug off their parents’ worries by claiming they know how to drive. This may be true, but inexperience automatically increases their risk of causing or being in an accident.
There’s a reason why most insurance companies charge higher rates for car insurance for teens and young adults.
Why is Good Posture Important to Driving?
Having the right posture ensures that your child drives safely and effectively. Sitting up straight in the driver’s seat helps them see the road ahead of them. Do not allow them to recline the seat into a position that could interfere with their field of vision. Some teens think driving in a reclined position looks cooler, but this is just an accident waiting to happen. They should also hold the steering wheel in the 3 and 9 o’clock positions. Failure to do this can result in their hands flying into their face when the airbags deploy.
What Distance Should my Child Keep Between Them and Other Cars?
Maintaining ample distance between one’s car and others is crucial to avoiding accidents. Discourage your child from tailgating, which even experienced drivers continue to do. Talk to them about how to avoid and control road rage, as it is inevitable that they’ll eventually become frustrated with someone on the road. Your child should maintain a following distance of at least two seconds behind the vehicle in front of them. When your child is driving at higher speeds, they should be even further away from the vehicle in front of them. This decreases their risk of causing an accident, even at higher speeds.
What is Defensive Driving, and why Should my Child Learn how to Drive This Way?
Defensive driving means:
- Your child has only one thing to stay focused on while driving, and it is the road. Driving is a cerebral task that requires anticipation of many different variables on the road. These variables include road conditions, speed, traffic laws, road markings, directions, and awareness of other cars around them. Staying focused on the road ensures safe driving.
- Always remaining alert is also very important. It ensures that your child can react to difficult scenarios on the road. These scenarios may include the driver of the car in front slamming on the brakes at the last minute. Handling this requires keeping a good distance away and knowing how to brake appropriately to prevent a potential rear-end collision.
- Drivers should plan ahead and expect the unexpected. You never know what could suddenly happen on the road. Don’t expect other drivers to do what you think they should be doing.
Poor weather conditions are scary for any driver, regardless of experience. However, for those with no experience, such conditions are especially scary because they do not know how to properly handle them. Rain, hail, and snow can significantly reduce traction, which makes it important to slow down. If your child speeds in these conditions, they will likely lose control of their car and crash. They should also keep their headlights on, regardless of the time of day, to remain visible to other motorists. If you can, have them practice driving in poor weather conditions while in a controlled environment such as a vacant parking lot. Tell your child to avoid driving if weather conditions appear too dangerous, even if they feel like their plans are important. It’s always better to be safe than sorry.
What Should my Child do When They are in an Accident?
Your child should take several steps to ensure their safety and the safety of others in the case of an accident. If there are injuries, call 9-1-1 immediately. Someone must take photos of the vehicles and collect other relevant information, such as the car insurance information for all parties involved.
File a police report, which can be done by calling the police to come to the scene. You and your child can also go to your local police station to file the report if no police were called. Call your insurance company as well and send them photos that show the extent of the damage. If you are not at the scene, your child should call you as soon as they can. Drivers must exchange information that includes phone numbers, license plates, and insurance information.
How Should my Child Approach Alcohol and Driving?
In addition to discouraging underage drinking, emphasize to your child how important it is to drive sober. Have them understand that one drink can impair their judgement and reflexes significantly. They should also understand that the legal blood alcohol content limit for anyone under 21 is zero. If your child does end up drinking, they should have a plan to get home safely. Set up a ground rule that they are always allowed to call you for a ride if they drink too much, even if they’re not supposed to be drinking. Set a good example for them by not drinking and driving yourself. Teens who witness their parents drinking and driving will think it is acceptable for them to do so themselves.
Early massive galaxies—those that formed in the three billion years following the Big Bang—should have contained large amounts of cold hydrogen gas, the fuel required to make stars. But scientists observing the early Universe with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Hubble Space Telescope have spotted something strange: half a dozen early massive galaxies that ran out of fuel. The results of the research are published today in Nature. Known as “quenched” galaxies—or galaxies that have shut down star formation—the six galaxies selected for observation from the REsolving QUIEscent Magnified galaxies at high redshift, or the REQUIEM survey, are inconsistent with what astronomers expect of the early Universe. “The most massive galaxies in the Universe lived fast and furious, creating their stars in a remarkably short amount of time. Gas, the fuel of star formation, should be plentiful at these early times in the Universe,” said Kate Whitaker, lead author on the study, and assistant professor of astronomy at the University of Massachusetts, Amherst. “We originally believed that these quenched galaxies hit the brakes just a few billion years after the Big Bang. In our new research, we’ve concluded that early galaxies didn’t actually put the brakes on, but rather, they were running on empty.” To better understand how the galaxies formed and died, the team observed them using Hubble, which revealed details about the stars residing in the galaxies. Concurrent observations with ALMA revealed the galaxies’ continuum emission—a tracer of dust—at millimeter wavelengths, allowing the team to infer the amount of gas in the galaxies. The use of the two telescopes is by careful design, as the purpose of REQUIEM is to use strong gravitational lensing as a natural telescope to observe dormant galaxies with higher spatial resolution. This, in turn, gives scientists a clear view of galaxies’ internal goings-on, a task often impossible with those running on empty. “If a galaxy isn’t making many new stars it gets very faint very fast so it is difficult or impossible to observe them in detail with any individual telescope. REQUIEM solves this by studying galaxies that are gravitationally lensed, meaning their light gets stretched and magnified as it bends and warps around other galaxies much closer to the Milky Way,” said Justin Spilker, a co-author on the new study, and a NASA Hubble postdoctoral fellow at the University of Texas at Austin. “In this way, gravitational lensing, combined with the resolving power and sensitivity of Hubble and ALMA, acts as a natural telescope and makes these dying galaxies appear bigger and brighter than they are in reality, allowing us to see what’s going on and what isn’t.” The new observations showed that the cessation of star formation in the six target galaxies was not caused by a sudden inefficiency in the conversion of cold gas to stars. Instead, it was the result of the depletion or removal of the gas reservoirs in the galaxies.
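As a rough illustration of how a millimeter dust detection can be turned into a gas estimate (this is a generic simplification for readers, not the actual analysis published by the REQUIEM team, and the numbers are invented), one common shortcut is to scale an inferred dust mass by an assumed gas-to-dust ratio:

```python
# Illustrative sketch only: inferring a cold gas mass from a dust mass using
# an assumed gas-to-dust ratio. The dust mass and ratio below are example
# numbers; the published REQUIEM analysis is more sophisticated than this.

def gas_mass_from_dust(dust_mass_msun, gas_to_dust_ratio=100.0):
    """Cold gas mass (solar masses) from a dust mass and an assumed ratio."""
    return dust_mass_msun * gas_to_dust_ratio

# e.g. a dust mass of 1e7 solar masses with a Milky-Way-like ratio of ~100
# implies ~1e9 solar masses of gas, a very small reservoir for a galaxy
# that has already built ~1e11 solar masses of stars.
print(f"{gas_mass_from_dust(1e7):.1e}")  # 1.0e+09
```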
“We don’t yet understand why this happens, but possible explanations could be that either the primary gas supply fueling the galaxy is cut off, or perhaps a supermassive black hole is injecting energy that keeps the gas in the galaxy hot,” said Christina Williams, an astronomer at the University of Arizona and co-author on the research. “Essentially, this means that the galaxies are unable to refill the fuel tank, and thus, unable to restart the engine on star production.” The study also represents a number of important firsts in the measurement of early massive galaxies, synthesizing information that will guide future studies of the early Universe for years to come. “These are the first measurements of the cold dust continuum of distant dormant galaxies, and in fact, the first measurements of this kind outside the local Universe,” said Whitaker, adding that the new study has allowed scientists to see how much gas individual dead galaxies have. “We were able to probe the fuel of star formation in these early massive galaxies deep enough to take the first measurements of the gas tank reading, giving us a critically missing viewpoint of the cold gas properties of these galaxies.” Although the team now knows that these galaxies are running on empty and that something is keeping them from refilling the tank and from forming new stars, the study represents just the first in a series of inquiries into what made early massive galaxies go, or not. “We still have so much to learn about why the most massive galaxies formed so early in the Universe and why they shut down their star formation when so much cold gas was readily available to them,” said Whitaker. “The mere fact that these massive beasts of the cosmos formed 100 billion stars within about a billion years and then suddenly shut down their star formation is a mystery we would all love to solve, and REQUIEM has provided the first clue.” “Quenching of star formation from a lack of inflowing gas to galaxies,” K. Whitaker et al., 2021 Sept. 23, Nature, https://www.nature.com/articles/s41586-021-03806-7, preprint: [https://public.nrao.edu/wp-content/uploads/2021/09/Whitaker_Galaxies_Nature_Preprint.pdf] A complementary press release has been published by STScI at: https://hubblesite.org/contents/news-releases/2021/news-2021-039
Globular clusters once ruled the Milky Way. Back in the old days, back when our Galaxy first formed, perhaps thousands of globular clusters roamed our Galaxy. Today, there are less than 200 left. Over the eons, many globular clusters were destroyed by repeated fateful encounters with each other or the Galactic center. Surviving relics are older than any Earth fossil, older than any other structures in our Galaxy, and limit the universe itself in raw age. There are few, if any, young globular clusters in our Milky Way Galaxy because conditions are not ripe for more to form. The featured video shows what it might look like to go from the Earth to the globular cluster Terzan 5, ending with a picture of the cluster taken with the Hubble Space Telescope. This star cluster has been found to contain not only stars formed in the early days of our Milky Way Galaxy, but also, quite surprisingly, others that formed in a separate burst of star formation about 7 billion years later.
A limiting factor in the development of reading skills in students, and an impediment to classroom management for teachers, is the limited ability of many children to read independently for significant periods of time. A valuable strategy to deal with both issues is increasing reading stamina. Building reading stamina supports students to read for an extended time, trains muscle memory and supports increased attention. Teaching a procedure for independent reading enables students to read without direct supervision, enabling the teacher to work with a group (teach), or work with a student (teach, monitor or assess). Roles and responsibilities of the teacher and students can be clearly outlined for students on a T-Chart. Both the student and teacher have responsibilities in the process.
Students:
- Collect enough books
- Sit … (at my desk/on my own/etc.)
- Start reading right away
- Read the whole time
- Think about what I’m reading
Teacher responsibilities include working with a student or working with a group.
How to build reading stamina
- Set a purpose for the reading. Explain how practice reading helps us to become better readers. Support students to make a chart which may contain some or all of the following prompts for improving reading:
- My phrasing and fluency supports meaning making and sounds like talking.
- I use the punctuation to regulate reading phrasing and fluency.
- I adapt the pitch, tone, stress and volume of my voice to match meaning.
- I problem solve quickly and efficiently.
- I self-correct if needed.
- I think about meaning.
- Teach the procedure (above) which outlines student and teacher responsibilities.
- Start with a manageable time for all students (i.e. a time which can be achieved by all students).
- Instruct students to follow the procedure for independent reading for the designated time. Monitor.
- Gather students together. Invite discussion (paired/group/class) about the reading. Provide feedback on the application of the procedure. Troubleshoot where needed.
- Gradually extend the reading time and continue to monitor.
- Continue to share, provide feedback and troubleshoot as reading time continues to increase and reading stamina grows. Begin to gradually withdraw perceived supervision to build independence.
- When students are able to sustain the reading for the desired time, begin taking guided reading or guided reading/reciprocal teaching. Independent reading then provides another option for supporting independent learning.
Immediate benefits for students and teachers
Feedback from a teacher at one of our recent workshops demonstrates the quick and valuable gains that can be made using this strategy. I attended your Toowoomba session last Thursday (26 April) and yesterday tried stamina reading with my 2/3 class. I did as you suggested and discussed the importance of reading and used that verse about the more you read the better you get etc, talked about what good readers do and then did the T chart ‘my job your job’. I allowed them to choose 2 books – their home reader and a guided reader from their box. When I asked the children how long they thought that they could do it for 1 child said half an hour (and he probably could have) and some others said 10 or 15 minutes. They were really confident about being able to do it. We settled on 5 minutes to start with. It was lovely to see every head down reading for the whole time. We shared something we read with a partner at the end.
After complimenting the class on their great reading with stamina I asked them if they had enjoyed it and all said they had because it was so easy to concentrate on their reading. Thank you for this great idea for making the most of independent reading time. Regards Donna (Donna Gray, Highfields State School, 3rd May) Fact sheet and workshops The Reading Stamina Fact Sheet can be downloaded and used as a guide to the implementation of Reading Stamina in the classroom. A more in-depth examination and discussion of this topic is a component of the following professional development workshops. - Reading Stamina (online short course) - P-3 Reading in the Australian Curriculum - Prep Reading & Writing Have you implemented reading stamina in your classroom / school? What was the impact? We’d love to hear your comments!
Credit: M. Abkarian and H.A. Stone
Speech and singing spread saliva droplets, a phenomenon that has attracted much attention in the current context of the Covid-19 pandemic. Scientists from the CNRS, l’université de Montpellier, and Princeton University* sought to shed light on what takes place during conversations. A first study published in PNAS revealed that the direction and distance of airflow generated when speaking depend on the sounds produced. For example, the accumulation of plosive consonants, such as the “P” in “PaPa,” produces a conical airflow that can travel up to 2 metres in 30 seconds. These results also emphasize that the time of exposure during a conversation influences the risk of contamination as much as distance does. A second study published on 2 October in the journal Physical Review Fluids describes the mechanism that produces microscopic droplets during speech: saliva filaments form on the lips for the consonants P and B, for example, and are then extended and fragmented in the form of droplets. This research is being continued with the Metropolitan Opera Orchestra (“MET Orchestra”) in New York, as part of a project to identify the safest conditions for continuing this prestigious orchestra’s activity. *The French scientists work at the Centre for Structural Biology (CNRS/Université de Montpellier/Inserm) and the Alexander Grothendieck Institute of Montpellier (CNRS/Université de Montpellier).
Milk fever is a disorder mainly of dairy cows close to calving. It is a metabolic disease caused by a low blood calcium level (hypocalcaemia). Between 3% and 10% of cows in dairying districts are affected each year, with much higher percentages occurring on some properties. Jersey cows that are mature and fat and graze lush, clover-dominant pasture before calving are most susceptible. Losses are due to deaths (about one in 20 affected cows dies), a reduction in the productive lifespan of each affected cow of about three years, and reduction in milk production following each milk fever episode, as well as costs of prevention and treatment. In typical cases cows show some initial excitement or agitation and a tremor in muscles of the head and limbs. Then they stagger and go down to a “sitting” position, often with a ‘kink’ in the neck, and finally lie flat on their side before circulatory collapse, coma and death. A dry muzzle, staring eyes, cold legs and ears, constipation and drowsiness are seen after going down. The heart beat becomes weaker and faster. The body temperature falls below normal, especially in cold, wet, windy weather. These signs are due mainly to lowered blood calcium levels. Sometimes there are additional signs due to complicating factors. Bloat is common in cows unable to “sit up” because the gas in the rumen is unable to escape. Pneumonia and exposure may affect cows left out in bad weather. About 80% of cases occur within one day of calving because milk and colostrum production drain calcium (and other substances) from the blood, and some cows are unable to replace the calcium quickly enough. High producers are more susceptible because the fall in their blood calcium level is greater. Selecting cows for high production may, therefore, increase the problem with milk fever. Some individual cow families or breeds (for example, Jerseys) are more susceptible than others. Age is important. Heifers are rarely affected. Old cows increase in susceptibility up to the fifth or sixth calving because they produce more milk and are less able to replace blood calcium quickly. The feeding management of dry cows in the 2 weeks before calving is very important, because it affects both the amount of calcium available to replace blood calcium and the efficiency with which the available calcium can be used. When the amount of calcium in the diet is greater than is needed, the efficiency of absorbing calcium from the intestine and the efficiency of transferring calcium from the skeleton both become very sluggish and the chance of milk fever is greatly increased. Also, grazing pastures in Southern Australia in winter and spring results in alkaline blood, which creates conditions unfavourable for the availability of calcium in the body and predisposes the cow to milk fever. Feeding hay prior to calving and restricting access to green feed results in acidic blood, which favours calcium mobilisation from bone and improves calcium absorption from the intestines, both of which are important factors in preventing the occurrence of milk fever. Fat cows are at a greater risk than thin cows. This is partly because their feed and calcium intake has been higher and partly because fat cows produce more milk at calving time. Some cows get milk fever several days or even weeks before or after calving. This is usually due to the feed, especially the dietary calcium, being insufficient to meet the heavy demand due to the rapidly growing foetus or milk production in early lactation.
In early lactation, cows should receive as much calcium as possible, and clover-dominant pastures are therefore desirable. They will help to prevent grass tetany as well as milk fever. Treatment should be given as soon as possible. Use 300 ml, or more, of a 40% solution of calcium borogluconate or, preferably, a combined mineral solution such as “three-in-one” or “four-in-one”. Often 600 ml may be required. The combined solutions contain additional ingredients such as magnesium, phosphorus and dextrose (for energy), which may also be at low levels in the blood while cows have milk fever. Packets of solution together with an injection kit are best kept on hand for emergencies. All equipment should be kept sterile to avoid abscess formation at the site of injection. Injection of the solution by farmers should be in several places under the skin on the neck or behind the shoulder, unless the cow is in a coma or there are other reasons for desiring a quick response. Injection into a vein should be left to a veterinarian as it can cause sudden death if not carried out properly. Veterinary assistance is also advisable if there is not a quick response to treatment, because other problems may also be present. Cows that are “flat out” should be propped up into a normal resting position to relieve bloat. If weather conditions are bad, or the response to treatment is slow, transfer the cows to shelter to prevent exposure and other complications. Provide feed and water. Rugging helps. Some cows that have been comatose may have regurgitated and inhaled rumen content into the lungs. If there is ruminal material around the nose, one should be suspicious that this may have happened, and intensive antibiotic treatment should be commenced as soon as possible as inhalation pneumonia is often fatal. Recovered cows should not be milked for 24 hours; then the amount of milk taken should be gradually increased over the next 2-3 days. Management of the diet can be a valuable aid in preventing milk fever. Cows should be kept on a low calcium diet while they are dry (not lactating). This stimulates their calcium regulatory system to keep the blood levels normal by mobilising the body stores of calcium from the bone. When the demand for calcium increases at calving, calcium can be mobilised much more rapidly from bone than from the feed, therefore preventing milk fever. With cows at greater risk – Jersey cows of mature age and in forward to fat condition – green feed should be restricted and plenty of hay fed for at least 1-2 weeks before calving. Neither should contain a high percentage of clover or capeweed. If it is necessary to improve the body condition of cows in order to improve milking performance, feeds high in energy but low in calcium may be used, for example cereal grain or oaten hay. Cereal grain is also high in phosphorus content, and this is of additional value. Cows close to calving should be kept in a handy paddock to enable frequent observation and early detection of milk fever. On the point of calving, and afterwards, the available feed and calcium should be unrestricted. Calcium feed supplements may be helpful at this point, but should not be given earlier. Where dietary management is inadequate, other methods are sometimes used. Vitamin D3 given by injection 2-8 days before calving may be useful. As the calving date is often difficult to predict, repeated treatments are sometimes necessary.
A common treatment used to prevent milk fever is the injection of calcium borogluconate just before or just after calving. Some cows are given more than one treatment. This is quite successful because the calcium provides a reservoir to increase blood calcium just at the time it is needed for milk and colostrum. The danger is that it may not last long enough and milk fever may still occur before the calcium-regulating mechanism of the cow is working efficiently. Drenching cows with a calcium/magnesium infusion the day before and then twice daily for 1 to 2 days after calving has considerably reduced the incidence of milk fever in some herds where other methods alone have been unsatisfactory. Cows that have required injections to treat milk fever will benefit from the drench to help prevent relapses. This Information Note was originally developed by the Animal Health Bureau, Attwood, and the previous version was published in September 1998.
The ancient Greeks are often credited with building the foundations upon which all western cultures are built, and this impressive accolade stems from their innovative contributions to a wide range of human activities, from sports to medicine, architecture to democracy. Like any other culture before or since, the Greeks learnt from the past, adapted good ideas they came across when they met other cultures, and developed their own brand new ideas. Here are just some of the ways ancient Greek inventions have uniquely contributed to world culture, many of which are still going strong today:
- Human Sculpture
- Jury System
- Mechanical Devices
- Mathematical Reasoning
- Medicine
- Olympic Games
Columns & Stadiums
Just about any city in the western world today has examples of Greek architecture on its streets, especially in its biggest and most important public buildings. Perhaps the most common features invented by the Greeks still around today are the Doric, Ionic, and Corinthian columns which hold up roofs and adorn facades in theatres, courthouses, and government buildings across the globe. The Greeks used these architectural orders primarily for their temples, many of which are still standing today despite earthquake, fire, and cannon shots - the Parthenon, completed in 432 BCE, is the biggest and most famous example. The colonnaded stoa to protect walkers from the elements, the gymnasium with baths and training fields, the semi-circular theatre with rising rows of seats, and the banked rectangular stadium for sports are just some of the features of Greek architecture that any modern city would seem strange indeed without.
Human Sculpture in Art
Greek innovations in art are perhaps seen most clearly in figure sculpture. Previous and contemporary ancient cultures had represented the human figure in a simple standing and rather static pose, so that the people represented often looked as lifeless as the stone from which they were carved. Greek sculptors, though, inched towards a more dynamic result. In the Archaic period the stance becomes a little more relaxed, the elbows a little more bent, and both tension and movement are thus suggested. By the Classical period statues have broken away from all convention and become sensuous, writhing figures that seem about to jump off the plinth. Greek sculpture and art, in general, began a preoccupation with proportion, poise, and the idealised perfection of the human body that was continued by the Romans and would go on to influence Renaissance art and many sculptors thereafter.
Democracy & Jury System in Law
One of the big ideas of the Greeks was that ordinary citizens should have an equal say in not just who governed them but also how they governed. Even more importantly, that input was to be direct and in person. Consequently, in some Greek city-states, 5th-4th-century BCE Athens being the most famous example, citizens (defined then as free males over 18) could actively participate in government by attending the public assembly to speak, listen, and vote on issues of the day. The Athenian assembly had a physical capacity of 6,000 people, and one can imagine that on many days only the most enthusiastic of the demos (people) would have turned up, but when the big issues were on the table the place was packed. A simple majority vote won the day and was calculated by a show of hands.
On top of this already startling idea of direct democracy, all citizens could, and indeed were expected to, participate in government by serving as magistrates, jurors, and in any official post they were capable of holding. Further, anyone seen to abuse their public position, which was usually only for a temporary term anyway, could be kicked out of the city in the secret vote known as ostracism. Part and parcel of the democratic apparatus was the jury system - the idea that those accused of crimes were judged by their peers. Nowadays a jury system usually consists of twelve people, but in ancient Athens it was the entire assembly, and each member was picked at random using a machine known as the kleroterion. This device randomly dispensed tokens, and if you got a black one then you had to do jury service that day. The system made sure that nobody knew who would be the jurors that day and so could not bribe anyone to influence their decision. In a carefully considered system that thought of everything, jurors were even compensated for their expenses.
Engineering & Mechanical Devices
The Romans might have grabbed all the accolades for best ancient engineers, but the Greeks did have their own mechanical devices which allowed them to move massive chunks of marble using the block and tackle, winch, and crane for their huge temples and city walls. They created tunnels in mountains such as the one-kilometre tunnel in Samos, built in the 6th century BCE. Aqueducts were another area where the Greeks were not lacking in imagination and design, and so they shifted water to where it was most needed; watermills, too, were used to harness nature's power. Perhaps the area of greatest innovation, though, was in the small-scale production of mechanical devices. The legendary figure of Daedalus, architect of King Minos' labyrinth, was credited with creating life-like automata and all manner of mechanical wonders. Daedalus may never have existed, but the legends around him indicate a Greek love of all things magically mechanical. Handy Greek devices included the portable sundial of Parmenion made from rings (c. 400-330 BCE), the water alarm clock credited to Plato (c. 428- c. 424 BCE) which used water dropping through various clay vessels which eventually caused air pressure to sound off a whistle-hole, Timosthenes' 3rd-century BCE anemoscope to measure the wind direction, and the 3rd-century BCE hydraulic organ of Ktesibios. Then there was the odometer which measured land distances using a wheel and cogs, the suspended battering ram to provide more punch when breaking down enemy gates, and the flamethrower with a bellows at one end and a cauldron of flammable liquid at the other which the Boeotians used to such good effect in the Peloponnesian War.
Mathematical Reasoning & Geometry
Other cultures had shown a keen interest in mathematics, but perhaps the Greeks' unique contribution to the field was the effort to apply the subject to practical and everyday problems. Indeed, for the Greeks, the subject of maths was inseparable from philosophy, geometry, astronomy, and science in general. The great achievement in the field was the emphasis on deductive reasoning, that is, forming a logically certain conclusion from a chain of reasoned statements. Thales of Miletus, for example, crunched his numbers to accurately predict the solar eclipse of May 28, 585 BCE, and he is credited with calculating the height of the pyramids based on the length of their shadow.
Undoubtedly, the most famous Greek mathematician is Pythagoras (c. 571- c. 497 BCE) with his geometric theorem which still carries his name - that in a right triangle the square of the hypotenuse is equal to the squares of the short sides added together. The early Greeks considered illness a divine punishment, but from the 5th century BCE a more scientific approach was taken, and both diagnosis and cure became a lot more useful to the patient. Symptoms and cures were carefully observed, tested, and recorded. Diet, lifestyle, and constitution were all recognised as contributing factors to disease. Treatises were written, most famously by the 5th-4th-century BCE founder of western medicine Hippocrates. A better understanding of the human body was achieved. Observation of badly wounded soldiers showed, for example, the differences between arteries and veins, although dissection of humans would only come in Hellenistic times. Medicines were perfected using herbs; celery was known to have anti-inflammatory properties, egg-white was good for sealing wounds, while opium could provide pain relief or work as an anaesthetic. While it is true that surgery was avoided and there were still many wacky explanations floating about, not to mention a still strong connection to religion, Greek doctors had begun the long road of medical enquiry which is still being pursued to this day. Sporting competitions had already been seen in the Minoan and Mycenaean civilisations of the Bronze Age Aegean, but it was in Archaic Greece that a sporting event would be born which became so popular and so important that it was even used as a reference for the calendar. The first Olympic Games were held in mid-July in 776 BCE at Olympia in honour of the Greek god Zeus. Every four years, thereafter, athletes and spectators gathered from across the Greek world to perform great sporting deeds and win favour with the gods. The last ancient Olympics would be in 393 CE, after an incredible run of 293 consecutive Olympiads. There was a widely respected truce in all conflicts to allow participants and spectators to travel in safety to Olympia. At first, there was only one event, the stadion - a foot race of one circuit of the stadium (about 192 m) in which some 45,000 all-male spectators gathered to cheer on their favourite. The event got bigger and bigger over the years with longer footraces added to the repertoire and new events held such as the discus, boxing, pentathlon, wrestling, chariot racing, and even competitions for trumpeters and heralds. Specially trained judges supervised the events and dished out fines to anyone breaking the rules. The winners received a crown of olive leaves, instant glory, perhaps some cash put up by their hometown, and even immortality, especially for the winners of the stadion whose name was given to that particular games. The Olympic Games were revived in 1896 CE and, of course, are still going strong, even if they have another thousand years to go to match the longevity of their ancient version. The great Greek thinkers attacked all of the questions that have ever puzzled humanity. Figures such as Socrates, Plato, and Aristotle in the 5th and 4th century BCE endlessly questioned and debated where we come from, how we have developed, where we are going to, and should we even be bothering to think about it all in the first place. The Greeks had a branch of philosophy to suit all tastes from the grin-and-bear-it Stoics to the live for the minute, live simply and live happily Epicureans. 
In the 6th century BCE, Anaximander provides the first surviving textual reference of western philosophy, and he considered that “the boundless” was responsible for the elements - so we have still not made very much progress since that statement. Collectively, all of these thinkers illustrate one common factor: the Greeks' desire to answer all questions no matter their difficulty. Neither were Greek philosophers limited to theoretical answers, as many were also physicists, biologists, astronomers, and mathematicians. Perhaps the Greek approach and contribution to philosophy in general is best summarised by Parmenides and his belief that, as the senses cannot be trusted, we must apply our minds to cut through the haze of superstition and myth and use whatever tools are at our disposal to find the answers we are looking for. We may not have found many more solutions since the Greek thinkers provided theirs, but their unbounded spirit of enquiry is perhaps their greatest and most lasting contribution to western thought.

Science & Astronomy

As in the field of philosophy, Greek scientists were keen to find solutions which explained the world around them. All manner of theories were proposed, tested and debated, even rejected by many. That the earth was a globe, that the world revolved around the sun and not vice versa, that the Milky Way was composed of stars, that humanity had evolved from other animals were just some of the ideas the Greek thinkers floated around for contemplation. Archimedes (287-212 BCE) in his bath discovered displacement and cried “Eureka!”, Aristotle (384-322 BCE) developed logic and classified the natural world, and Eratosthenes (276-195 BCE) calculated the circumference of the globe from the shadows cast by objects at two different latitudes. Once again, though, it was not the individual discoveries that were important, it was the general belief that all things can be explained by deductive reasoning and the careful examination of available evidence.

It was the ancient Athenians who invented theatre performance in the 6th century BCE. Perhaps originating from either the recital of epic poems set to music or rituals involving music, dance and masks to honour the god of wine Dionysos, Greek tragedies were first performed at religious festivals, and from these came the spin-off genre of Greek comedy plays. Performed by professional actors in purpose-built open-air theatres, Greek plays were popular and free. Not merely a fleeting pastime, many of the classic plays were studied as a staple part of the education curriculum. In the tragedies, people were engrossed in the twists presented on familiar tales from Greek mythology and the no-win situations for the heroic but doomed characters. The cast might have been very limited, but the chorus group added some musical oomph to the proceedings. When comedy came along, there was fun in seeing familiar politicians, philosophers, and foreigners lampooned, and playwrights became ever more ambitious in their presentations, with all-singing, all-dancing chorus lines, outlandish costumes, and special effects such as actors dangling from hidden wires above the beautifully crafted sets. As in many other fields, the entertainment industry of today owes a great debt to the ancient Greeks.
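As a small illustration of the deductive method just described, here is a sketch of Eratosthenes' circumference estimate in Python. The 7.2° shadow angle and 5,000-stadia distance are the figures traditionally reported in later sources rather than anything stated in this article, and the metric length of a stadion is itself uncertain, so the output is only indicative.

```python
# Classical figures traditionally reported for Eratosthenes' experiment
# (a textbook reconstruction, not numbers taken from this article).
shadow_angle_deg = 7.2        # angle of the noon shadow at Alexandria on the solstice
distance_stadia = 5000        # assumed Syene-to-Alexandria distance

# The shadow angle equals the latitude difference between the two cities,
# so the full circle is the measured distance scaled up by 360 / angle.
circumference_stadia = distance_stadia * 360 / shadow_angle_deg   # = 250,000 stadia

# The metric length of a stadion is uncertain; two common estimates:
for stadion_m in (157.5, 185.0):
    print(f"stadion = {stadion_m} m -> circumference ≈ "
          f"{circumference_stadia * stadion_m / 1000:,.0f} km")

# Modern equatorial circumference for comparison: about 40,075 km.
```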
DRAM is an acronym that stands for dynamic random access memory. DRAM frequency is the speed at which data can be transferred between the DRAM and the CPU. The higher the frequency, the faster the data transfer. DRAM frequency is measured in MHz (megahertz) or GHz (gigahertz). The most common DRAM frequencies are 800MHz, 1066MHz, 1333MHz, 1600MHz, 1866MHz, and 2133MHz. DRAM frequency can be increased by overclocking, which is the process of setting the DRAM to run at a higher speed than its factory-specified speed. Overclocking can improve system performance, but it also increases the risk of hardware damage. For this reason, overclockers should be sure to follow all safety precautions when overclocking their DRAM.

What is DRAM frequency?

DRAM frequency is the speed at which a DRAM device can relay information to and from a computer processor. The frequency is measured in MHz, and the higher the number, the faster the DRAM. Most computer systems use DDR3 or DDR4 DRAM, which typically run at memory clock frequencies of around 800MHz and 1600MHz respectively. Because DDR stands for double data rate, data is transferred on both edges of that clock, so a module with an 800MHz clock is rated DDR3-1600, meaning 1600 million transfers per second (MT/s). Multiplying the transfer rate by the 8 bytes moved per transfer on a standard 64-bit memory bus gives the peak bandwidth: 12800 MB/s for DDR3-1600, which is why such modules are also labelled PC3-12800. Higher-rated modules scale accordingly; DDR4-3200, for example, reaches roughly 25600 MB/s per channel. However, it should be noted that RAM speed is only one factor in overall system performance. Processor speed, bus speed, and data storage speed are all important as well.

What should your DRAM frequency be?

DRAM frequency relates to the speed at which data can be transferred between the DRAM and the CPU. The CPU can only process data so fast, and so a high DRAM frequency will not always result in a significant increase in performance. However, a low DRAM frequency can bottleneck the CPU, causing delays and reducing overall performance. As a result, it is important to choose a DRAM frequency that is appropriate for your system. For most users, a frequency of 1600MHz will be more than sufficient. However, if you are using demanding applications or if you plan on overclocking your system, you may need to choose a higher frequency. Ultimately, the best way to determine the ideal DRAM frequency for your system is to experiment and see what provides the best balance of performance and stability.

What is the use of DRAM frequency?

The DRAM frequency is the number of times per second that the data in DRAM can be accessed. The higher the DRAM frequency, the faster the data can be accessed. DRAM frequency is measured in MHz (megahertz). The most common DRAM frequencies are 800MHz, 1066MHz, and 1200MHz. Higher frequencies offer a significant performance advantage over lower frequencies. For example, a 1200MHz DRAM frequency will allow the data in DRAM to be accessed 50% faster than an 800MHz DRAM frequency. When choosing a DRAM frequency, it is important to consider the capabilities of both the CPU and motherboard. The CPU must be able to support the desired frequency, and the motherboard must be able to provide sufficient power to the memory modules. Many motherboards have multiple memory slots, allowing for different memory configurations. For example, a motherboard with four slots may be able to support two modules of DDR3-1600 in dual-channel mode or four modules of DDR3-1333 in single-channel mode. The choice of memory configuration will depend on the desired performance level.

Is it safe to increase DRAM frequency?

As computers become more powerful, the demand for faster DRAM chips increases.
While it is possible to increase the frequency of DRAM chips, there are some potential risks involved. One risk is that the increased frequency can cause signal integrity issues, resulting in data corruption. In addition, higher frequencies can place a strain on the power delivery system, potentially leading to component failure. As a result, care must be taken when increasing DRAM frequency, and it is important to consult with manufacturers to ensure compatibility. Overall, while there are some risks involved, increasing DRAM frequency can be safe if proper precautions are taken. Should I adjust DRAM frequency? When it comes to optimizing your PC’s performance, there are a lot of factors to consider. One of the most important is DRAM frequency. DRAM (Dynamic Random-Access Memory) is the main type of memory used in PCs, and it plays a vital role in everything from gaming to web browsing. Theoretically, higher DRAM frequencies should result in better performance. However, in practice, the differences can be hard to notice. As a result, many users choose to run their DRAM at lower frequencies in order to save power and extend its lifespan. If you’re looking for every last bit of performance, then you may want to experiment with different settings. However, if you’re happy with your current setup, there’s no need to change anything. What should my RAM frequency be? If you’re a PC gamer, then you know that one of the most important parts of your rig is the Random Access Memory, or RAM. Your RAM is responsible for storing data temporarily so that it can be accessed quickly by the CPU. This is essential for gaming, as it ensures that information can be processed quickly and smoothly. So, what’s the ideal RAM frequency for gaming? Many experts believe that the sweet spot for RAM frequency is around 3200 MHz. Anything higher may offer marginal benefits, but it will also consume more power and generate more heat. So, if you’re looking to get the most out of your gaming rig, aim for a RAM frequency of 3200 MHz. You’ll be glad you did. What is the best DRAM frequency for gaming? When it comes to gaming, every little bit counts. That’s why many gamers choose to upgrade their computers with high-performance components, including DRAM with a high frequency. But is a higher frequency always better? The truth is that the ideal DRAM frequency for gaming depends on a variety of factors, including the type of game you’re playing and your computer’s overall capabilities. For some games, a higher frequency can lead to improved performance, while for others it makes no difference at all. Ultimately, it’s important to experiment with different settings to figure out what works best for you. That said, if you’re looking for a starting point, most experts recommend running DRAM at a frequency of 2133 MHz or higher. With that in mind, it’s important to choose a motherboard that supports high-frequency DRAM, as not all do. Fortunately, there are plenty of options out there, so you should be able to find something that meets your needs. Happy gaming! What DRAM frequency should I use DDR4? DRAM frequency is one of the most important factors to consider when choosing DDR4 memory for your PC. The frequency, or speed, of DDR4 memory is measured in megahertz (MHz) and denotes the number of transfers per second that can be made between the RAM and the CPU. The higher the frequency, the faster the data transfer rate and the better overall performance you can expect from your system. 
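To make the naming and speed arithmetic concrete, here is a minimal Python sketch; the module ratings are illustrative examples, and the 8-bytes-per-transfer figure assumes a standard 64-bit memory bus.

```python
def ddr_summary(transfer_rate_mts: float, bus_width_bits: int = 64) -> dict:
    """Derive clock and peak bandwidth from a DDR transfer rate.

    transfer_rate_mts: rated speed in mega-transfers per second (e.g. 3200 for DDR4-3200).
    bus_width_bits: memory bus width; 64 bits (8 bytes) is the usual desktop case.
    """
    io_clock_mhz = transfer_rate_mts / 2            # double data rate: two transfers per clock
    bytes_per_transfer = bus_width_bits // 8
    peak_bandwidth_mbs = transfer_rate_mts * bytes_per_transfer   # MB/s per channel
    return {
        "I/O clock (MHz)": io_clock_mhz,
        "peak bandwidth (MB/s)": peak_bandwidth_mbs,
    }

# Illustrative module ratings, not specific product recommendations
for name, rate in [("DDR3-1600", 1600), ("DDR4-2133", 2133), ("DDR4-3200", 3200)]:
    print(name, ddr_summary(rate))
```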
When it comes to DRAM frequency, GPUs are typically more sensitive than CPUs. For example, a GPU-intensive game may benefit from a higher frequency, while a CPU-heavy task like video editing may not see as much of a performance boost. Ultimately, it’s up to you to decide what DRAM frequency is best for your needs. If you’re not sure, err on the side of caution and choose a lower frequency. You can always upgrade later if you find that your system could use a little more speed. Is 3200MHz RAM good? As any PC gamer will tell you, RAM is important for achieving good performance. The faster your RAM, the better your gaming experience will be. So, is 3200MHz RAM good? Yes. 3200MHz RAM is fast enough to provide a significant boost to your gaming performance. If you’re looking to get the most out of your gaming rig, 3200MHz RAM is a great option. Is 2400mhz RAM good for gaming? Many gamers believe that faster is always better when it comes to RAM speed. However, there is no clear consensus on what the ideal speed for gaming is. Some experts recommend a minimum of 2400MHz, while others argue that anything over 1600MHz is overkill. Ultimately, the RAM speed that is best for gaming depends on the specific requirements of the game being played. Some games are more demanding than others, and will require faster RAM speeds in order to run smoothly. In general, however, 2400MHz should be more than sufficient for most gaming needs. Is higher RAM frequency better? RAM, or random access memory, is an essential component of any computer. It is used to store information that can be quickly accessed by the processor. The speed of RAM is measured in MHz, or millions of cycles per second. A higher MHz rating generally indicates better performance. However, other factors such as the type of RAM and the quality of the components can also affect performance. In general, faster RAM will provide a noticeable boost in speed, particularly when multitasking or using demanding applications. For most users, opting for faster RAM will be worth the investment. DRAM frequency is one of the most important factors in determining the overall performance of a computer system. By increasing the frequency, you can improve the overall performance of your computer. However, it is important to make sure that your computer’s motherboard and other components are able to support the increased frequency. For most users, a DRAM frequency of 3200MHz should be more than sufficient. However, if you are a power user or gamer, you may want to consider a higher frequency. Ultimately, the choice is up to you.
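Frequency also interacts with CAS latency, which is one reason a higher-clocked kit is not automatically faster in practice. The sketch below uses the common rule-of-thumb conversion to "true latency" in nanoseconds; the kit timings are hypothetical examples, not measurements.

```python
def true_latency_ns(transfer_rate_mts: float, cas_latency_cycles: int) -> float:
    """Approximate first-word latency in nanoseconds.

    A DDR clock cycle lasts 2000 / transfer_rate_mts nanoseconds, and the CAS
    latency is quoted in clock cycles, so latency_ns = cycles * 2000 / MT/s.
    """
    return cas_latency_cycles * 2000.0 / transfer_rate_mts

# Hypothetical kits: a faster clock with a looser CAS latency can end up with
# nearly the same real-world latency as a slower, tighter kit.
for rate, cl in [(2400, 15), (3200, 16), (3600, 18)]:
    print(f"DDR4-{rate} CL{cl}: ~{true_latency_ns(rate, cl):.1f} ns")
```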
Gay Pioneers is the story of the first organized annual “homosexual” civil rights demonstrations held in Philadelphia, New York and Washington, DC from 1965-69. When few would publicly identify themselves as gay, these brave pioneers challenged pervasive homophobia.

In the early hours of June 28, 1969, a police raid of the Stonewall Inn exploded into a riot when patrons of the LGBT bar resisted arrest and clashed with police. The Stonewall Riots are widely considered to be the start of the LGBT rights movement in the United States. In this lesson, students analyze four documents to answer the question: What caused the Stonewall Riots?

This lesson plan explores the history of LGBTQ Liberation from 1959 - 1979, and is a companion to the exhibit "Stonewall 50: The Spark That Lit the Flame" from the Center on Colfax's Colorado LGBTQ History Project. It includes primary sources and panels from the exhibit designed to weave together, in cooperative small-group learning, the narrative of Stonewall with the LGBTQ history of Denver. Students will use primary sources not widely available, and will understand the context leading up to Stonewall and the changes which occurred thereafter. The material ranges from the Mattachine Society, the Black Cat Tavern and Compton's Cafeteria Riot to the Denver Gay Revolt and Harvey Milk, and includes a detailed timeline of the riots and the diverse voices therein. Your students will be among the first generation of Americans to know and tell these stories. Their words will shape the future and change the world. (Includes: Bibliography, Teacher Resources, Understanding By Design, Colorado Content Standards Aligned, Grades 8-12.)

During this lesson students will answer a question open to historical debate: "Why were the Stonewall riots the moment that sparked the LGBTQ Liberation Movement in American History?" Students will then be given panels from the Stonewall 50 history exhibit covering the history of Stonewall: the events leading up to Stonewall, the events of the riots themselves, and the events and organizations that developed after the riots, such as the Gay Activists Alliance (GAA) and Gay Liberation Front (GLF), as well as the first Denver LGBTQ pride event and the National March on Washington for Gay & Lesbian Rights in 1979. Students will be given 15 minutes to read panels from the exhibit, underlining the important names, dates and events. Students will then share what they learned. Students will then create their own posters outlining the events of the riots as a formative assessment.

This activity is designed as a fun and interactive way to raise students’ awareness of LGBT people and the contributions they made in the history of the United States. Students will learn about key events in the LGBT civil rights movement. Students will have an opportunity to create signs regarding these events to spread awareness throughout the school.

Students will learn about the history of Pride in the U.S. and Brenda Howard, an American bisexual rights activist who originated the idea for a week-long series of events around Pride Day that are now held around the world every June.

This lesson plan covers queer film representation from the 1920s to the 1970s, specifically focusing on the impact of the Motion Picture Production Code, otherwise known as the Hays Code. The goal of this lesson is to explain the historical context behind LGBTQ+ stereotypes that still persist today in Western media.
In this lesson, students will learn about transgender and LGBTQ history, the key role of transgender and gender non-conforming women of color in the modern LGBTQ movement, and the Stonewall Inn Riots in June 1969. They will accomplish this by watching and discussing a video about transgender rights and LGBTQ history and learning about the activists Marsha P. Johnson, Sylvia Rivera, Miss Major, and Stormé DeLarverie. In this lesson, students will listen to or read non-fiction texts for understanding, design a poster with key information on a prominent LGBTQ person or historical event, make a short presentation for the class based on their research, and write a short essay on a key moment in LGBTQ history or about a famous LGBTQ person. By doing this, students will learn about events in American history that are often omitted from textbooks and prominent LGBTQ people and about historical events that were part of the LGBTQ civil rights movement. This is Part 1 of the 2-part Pride Parade for LGBTQ+ Families lesson plan. It can be used as a standalone. Students will examine the relationship of modern-day families to the history of their community through exploring the importance of Pride for LGBTQ+ families. This is a two-part lesson in which students will engage with the storybook ‘This Day in June’, which welcomes readers to experience a Pride celebration, and therefore (1) examine the origins of Pride- the Stonewall Riots, and (2) discuss the struggle for Marriage Equality in the United States.
The Tatacoa Desert, also known as the Valley of Sorrows, spans an area of 330 square kilometers in Colombia, being the second largest arid expanse in the country after the Guajira Peninsula. The region is located 38 kilometers from the city of Neiva, the capital of the Department of Huila. The Tatacoa Desert does not fit into the conventional definition of deserts, as it lacks any form of sand deposits or sand dunes; it is instead a heavily eroded rocky terrain scarred by dry canyons which hosted lush, green tropical forests during the Tertiary Period. Two distinct regions occur within the Tatacoa Desert: the ocher-colored Cuzco landscape and the grey-colored Los Hoyos landscape.

4. Historical Role

The Tatacoa Desert served as the home of thousands of species of plants and animals in prehistoric times, and this fact is evident from the discovery of a large number of prehistoric fossils at this site. The desert is believed to host the most diverse paleontological records of the continent, dating back to the Miocene and Pleistocene periods. Paleontologists from various institutes of Colombia, the U.S.A., and Japan have thus been attracted to this site to study the fossilized remains and evolutionary history of the lost species. One of the most important fossil specimens discovered here, in the La Tatacoa area, belonged to the early primates of the world, providing scientists further insight into the mechanisms of the evolutionary process. Besides plant and animal life, the Tatacoa Desert also presents evidence of the cultural evolution of humans. Relics from prehistoric anthropological sites dating back to the Pleistocene and early Holocene have been discovered here. This has helped anthropologists study the development of Indian and other indigenous cultures in Colombia.

3. Modern Significance

Besides paleontologists, archaeologists and anthropologists, the Tatacoa Desert also attracts a large number of tourists who visit the site to explore its unique terrain and its historical, geological and paleontological wonders. An astronomical observatory at the pollution-free location allows detailed observations of astronomical objects through telescopic eyes. Many tourists camp at the desert or hike along its terrain to marvel at its geological wonders. An artificial swimming pool created in the desert landscape is also a major tourist attraction in the area.

2. Habitat and Biodiversity

The Tatacoa Desert region is subject to high temperatures and low humidity. The plants and animals inhabiting this region are thus well adapted to survive in the extreme conditions of the desert. The plants growing here have an extensive system of roots that spread over long distances both horizontally and vertically. Animal life here includes such reptiles as turtles, snakes, alligators, and lizards, several species of invertebrates like spiders and scorpions, such mammals as rodents and wildcats, and birds of prey like eagles.

1. Environmental Threats and Territorial Disputes

Since the Tatacoa Desert is uninhabitable and effectively not arable, the desert habitat is spared from human interventions. Thus, adverse effects of anthropogenic activities like high levels of air pollution, the decimation of wild species for human needs, and damage to the archaeological and paleontological treasures from encroaching human settlements do not exist in this region. Future potential threats from a growing tourist burden cannot, however, be ruled out.
Emissions from tourist vehicles could impact the air quality of the region, and waste generated by tourists might mar the pristine nature of the Tatacoa Desert.
The first part of this lab requires that we measure the resistance through a piece of conducting paper. There are a couple of ways we can measure a resistance. The first that comes to mind is to use Ohm's law: Measure the voltage across the "resistor" with a voltmeter, and the current that passes through it with an ammeter, and the ratio of these gives the resistance. But our multimeter does both of these jobs at once – just turn the dial into the ohmmeter region (labeled with an "\(\Omega\)"), and connect the leads (one in the black COM port and the other in the red V\(\Omega\)Hz port) to the opposite ends of the resistor, and the display reads the resistance. You will likely have to adjust the dial to find the right range of values – you can select maximum values from 200\(\Omega\) to 200M\(\Omega\) ("M" stands for "mega" = "million").

While resistance is a property of a specific object, resistivity is a property of a material that can have any shape or size. The shape and size combined with the resistivity is what determines an object's resistance. The simplest model to discuss the relationship is a rectangular prism:

Figure 4.1.1 – Resistor Model

The ohmmeter in this figure is measuring the resistance along the direction joining the two leads – down the length \(L\) of the prism. If current were to flow along this direction, it would pass through a cross-sectional area equal to the product of the width and thickness: \(A=w\tau\). The resistance of this object is related to the resistivity \(\rho\) of the material from which it is constructed according to:

\[R = \dfrac{\rho L}{A}\]

In our lab, we will measure the dimensions of such an object, and use an ohmmeter to measure its resistance, thereby allowing us to compute the resistivity of the material.

Confirmation of Kirchhoff's Rules

The second part of this lab consists of connecting a small network of batteries and resistors in order to confirm Kirchhoff's rules. In order to make this confirmation, one needs to measure both voltage drops with a voltmeter and currents with an ammeter. Both these meters can be selected in the multimeter. The voltmeter you already know from a previous lab. The ammeter section of the dial is labeled with an "A" with dots and dashes after it (not the wavy line). Just as in the case of the voltmeter from the previous lab, this meter is for direct current circuits like the ones we are working with here.

There is one critical piece of information about connecting these two types of meters to measure quantities in a circuit. Not following this can damage (or at least temporarily disable) the ammeter. While voltmeters can be used in a "probing" manner – connecting each lead to opposite ends of a resistor to measure the voltage drop – ammeters must be connected into the circuit. That is, if you want to measure current with an ammeter, you must disconnect your circuit and connect the ammeter into the branch through which you wish to measure the current. If you are not taking your circuit apart to put your ammeter into it, then you are using the ammeter incorrectly, and are likely to disable your multimeter and aggravate your TA. If you have any questions about your connection, ask your TA for confirmation that your setup is okay before powering it up.

In order to create this network, you will need two batteries and several resistors. The batteries are the plug-in DC power supplies like you used in the previous lab. The resistors you will find on one side of the component board – they are the objects with two black jacks:

Figure 4.1.2 – Component Board
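As a quick sanity check on the resistivity calculation described above, a short Python sketch is given below; the measurement values are placeholders, not data from this lab.

```python
# Resistivity from a measured resistance and the sample's dimensions.
# All numbers below are placeholder values, not actual lab measurements.

R_measured = 1.5e3        # ohms, read from the ohmmeter
L = 0.200                 # m, length between the ohmmeter leads
w = 0.025                 # m, width of the conducting strip
t = 1.0e-4                # m, thickness (tau in the figure)

A = w * t                 # cross-sectional area the current passes through
rho = R_measured * A / L  # rearranged from R = rho * L / A

print(f"cross-section A = {A:.2e} m^2")
print(f"resistivity rho = {rho:.3e} ohm·m")
```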
The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. A standard reference used for comparisons is the 35 mm format, which is a sensor of size 36×24 mm. A standard wide-angle lens would have a focal length of around 28 to 35 millimeters based on the 35 mm format. The smaller the number, the wider the lens is.

The focal length is a measure of how a lens converges light. It can be used to determine the magnification factor of the lens and, given the size of the sensor, to calculate the angle of view. The native focal length of the sensor cannot be used for comparisons between different cameras unless they have the same size. Therefore, the focal length in 35 mm terms is a better reference. For the same sensor, the smaller the number, the wider the lens is.

Indicates the type of image stabilization this lens has:

The horizontal field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping).

The vertical field of view in degrees this lens is able to capture, when using the maximum resolution of the sensor (that is, matching the sensor aspect ratio, and not using sensor cropping).

Shows the magnification factor of this lens compared to the primary lens of the device (calculated by dividing the focal length of the current lens by the focal length of the primary lens). A magnification factor of 1 is shown for the primary camera, ultra-wide cameras have magnification factors less than 1, and telephoto cameras have magnification factors greater than 1.

Physical size of the sensor behind the lens in millimeters. All other factors being equal (especially resolution), the larger the sensor, the more light it can capture, as each physical pixel is bigger.

The size (side) of an individual physical pixel of the sensor in micrometers. All other factors being equal, the larger the pixel size, the better the image quality is. In this case, each photoreceptor can capture more light and can potentially better differentiate the signal from the noise, yielding better image quality, especially in low light.

The maximum picture resolution at which this sensor outputs images in JPEG format. Sometimes, if the sensor can also provide images in RAW (DNG) format, they can be slightly larger because of an additional area used for calibration purposes (among others). Unfortunately, firmware restrictions for third-party apps also mean that the maximum picture resolution exposed to third-party apps might be considerably lower than the actual resolution of the sensor, therefore the resolution shown here is the maximum resolution third-party apps can access from this sensor.

The available output picture formats this camera is able to deliver:

The focusing capabilities of this camera:

It displays whether this lens can be set to focus at infinity or not. Even if the camera supports autofocus and manual focus, it might happen that the focus range the lens is able to adjust to does not include the infinity position. This property is important for astrophotography, as in such low-light scenarios the automatic focus does not work reliably.

The distance from which objects that are further away from the camera always appear in focus.
Therefore, if the camera is set to focus at infinity, any object further away from this distance will appear in focus.

The range of supported manual exposure in seconds (minimum or shortest to maximum or longest). This camera might support exposures outside this range, but only in automatic mode and not in manual exposure mode. Also, note that this range is the one third-party apps have access to, as often the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer longer or shorter exposure times.

The range of supported manual sensitivity (ISO). This camera might support ISO sensitivities outside this range in automatic mode. Also, note that this range is the one third-party apps have access to, as often the first-party app preinstalled on the phone by the manufacturer might have privileged access to the hardware and offer an extended manual sensitivity range.

The maximum ISO sensitivity possible in manual mode is usually reached by using digital amplification of the signal from the maximum supported analog sensitivity. This information, if available, will let you know what the maximum analog sensitivity of the sensor is.

The data in this database is provided "as is", and FGAE assumes no responsibility for errors or omissions. The User assumes the entire risk associated with its use of these data. FGAE shall not be held liable for any use or misuse of the data described and/or contained herein. The User bears all responsibility in determining whether these data are fit for the User's intended use.
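To illustrate how the 35 mm-equivalent focal length and the fields of view described above follow from the raw sensor numbers, here is a small Python sketch; the sensor and lens values are hypothetical examples, not entries from this database.

```python
import math

def equivalent_focal_length(focal_mm: float, sensor_w: float, sensor_h: float) -> float:
    """35 mm-equivalent focal length using the diagonal crop factor."""
    full_frame_diag = math.hypot(36.0, 24.0)           # 35 mm format: 36 x 24 mm
    crop_factor = full_frame_diag / math.hypot(sensor_w, sensor_h)
    return focal_mm * crop_factor

def field_of_view_deg(focal_mm: float, dimension_mm: float) -> float:
    """Angle of view along one sensor dimension for an ideal rectilinear lens."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * focal_mm)))

# Hypothetical main camera: roughly 1/1.7"-class sensor (7.6 x 5.7 mm), 5.6 mm lens
sensor_w, sensor_h, focal = 7.6, 5.7, 5.6
print(f"equivalent focal length ≈ {equivalent_focal_length(focal, sensor_w, sensor_h):.0f} mm")
print(f"horizontal FoV ≈ {field_of_view_deg(focal, sensor_w):.1f} degrees")
print(f"vertical FoV   ≈ {field_of_view_deg(focal, sensor_h):.1f} degrees")
```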
Get here the summary, questions, answers, textbook solutions, extras, and pdf of the poem “The Daffodils” by William Wordsworth of the Assam Board (AHSEC / SEBA) Class 11 (first year) Alternative English (Chinar) textbook. However, the given notes/solutions should only be used for reference and should be modified/changed according to needs.

Summary: The Daffodils, written by William Wordsworth, is a classic example of Romantic Era poetry since it has many of the defining characteristics of the genre. In the first lines of the poem, the poet describes his aimless wandering in a state of disengagement from the outside world until he comes across a swarm of yellow daffodils blooming on the bay’s outskirts. There were so many flowers that they formed what looked like an endless row. Wordsworth likens the dazzling Milky Way galaxy to a vast field of daffodils. The swaying movement of the daffodils’ heads adds to their allure and captivates him. The poet is profoundly moved by the sight of hundreds of daffodils blowing in the wind against the backdrop of waves lapping at the shore. The daffodils, as they sway in the breeze, look like a lively dance troupe. He is so captivated by the view that the golden swaying of the flowers seems to him to surpass the beauty of the waves on the lake. The poet is moved by the beauty and serenity of the scene, specifically by the daffodils swaying joyfully in the wind next to the lapping waves of the lake. He is completely captivated by it. The poet closes by recognising the enduring effect that nature’s beauty has had on him. While Wordsworth spends a long time admiring a field of daffodils waving in the breeze, it isn’t until much later that he appreciates the wealth that the field of flowers has brought him. The scene had such a powerful impact on him that he can’t help but remember it, and he often finds comfort in thinking about it during his quiet, reflective times. The delight of first seeing the daffodils persists long after the event itself has passed. As a result, the poet William Wordsworth found lasting happiness in a beautiful scene he discovered by chance.

1. After reading the poem, can you guess what a daffodil is?
Answer: Daffodils are, in fact, a plant with bright yellow flowers.

2. The poet says that he ‘wandered lonely as a cloud’, which means he was
(a) going from place to place without a special reason or purpose
(b) moving about in a crowd.
(c) the cloud was all alone in the sky.
Answer: (a) going from place to place without a special reason or purpose

3. The poet says that he saw “ten thousand.” This means
(a) there were ten thousand of them
(b) there were many of them
(c) they were countless
Answer: (c) they were countless

4. The poet says that a “poet could not be but gay/ In such a jocund company!” What do you think “jocund company” means? Is he happy or unhappy in such a company?
Answer: Company that is jocund is one that is full of joy and positivity. The poet felt uplifted by the scene of twirling daffodils and dancing waves. The poet finds joy in such companions.

5. What does the poet mean when he says that he ‘gazed and gazed – but little thought/ what wealth the show to me had brought’? How does the scene benefit him, either materially or emotionally? Give reasons for your answer.
Answer: Here, the poet is expressing his delight at the sight of the golden daffodils, to the point where he forgets to consider the material benefits that the sight provides.
He had no idea what a treasure trove of joy he was carrying around inside of him. He was so overjoyed by the scene that he stared at them, baffled and motionless, for a long period of time. This scene flashed in the poet’s thoughts whenever he was feeling down or deep in meditation, and it always lifted his spirits. Therefore, he had amassed an unending supply of joy from the sight of the daffodils. As a result, he has profited both materially and emotionally from his view of the happy daffodils, because it provides him with a companion who makes him smile even when he is alone.

6. What does “vacant and pensive mood” mean? Select the correct option.
(a) a thoughtful and contemplative mood
(b) a sad mood
(c) a thoughtless state of mind
Answer: (b) a sad mood.

7. What does the “inward eye” mean? What is it that flashes before the eye? Do you think the poet is affected by it in any way? Give reasons for your answer.
Answer: The ‘inward eye’ refers to the mind’s eye or the intuitive senses. The poet says that a bright scene of golden daffodils flashes before his mind’s eye. Every time the sight of the daffodils enters his head, the poet is filled with joy and good spirits, and he dances with the golden daffodils. His mind is stimulated by the upbeat scene before him. The daffodils help him in this way. In times of isolation and low spirits, the poet finds comfort in remembering the sight of the golden daffodils. Seeing it in his mind fills his head with happiness, and the golden daffodils begin to dance. In this way, he is able to forget his problems and enjoy life.

8. Why has the poet described solitude as being blissful?
Answer: The poet describes solitude as a state of happiness because, whenever he is feeling down, a happy image of the daffodils pops into his head and instantly lifts his spirits. Ordinarily, solitude is hard to enjoy: being alone is dull, and nobody understands how you feel but you. The poet, however, is shown to be ecstatic and upbeat even when alone, so long as the thought of the daffodils is present in his head. In other words, it makes him happy. As the daffodils float by in his memory, he finds solace in his solitude, because being alone does not strip him bare but rather reminds him of the bright vista. This is why the poet considers the sight of the daffodils the bliss of solitude.

1. What is the rhyme scheme of the poem?
Answer: Daffodils follows the ABAB rhyme scheme, with a rhyming couplet (CC) at the end of each stanza.

2. Look at these two lines:
Beside the lake, beneath the trees,
Fluttering and dancing in the breeze.
The last words of these two lines rhyme with each other. Such rhyming lines are an example of a rhyming couplet. Find other examples of such couplets in the poem.
Answer: Here are some rhyming couplets from the poem “Daffodils”:
(a) “Ten thousand saw I at a glance / Tossing their heads in sprightly dance.”
(b) “I gazed - and gazed - but little thought / What wealth the show to me had brought.”
(c) “And then my heart with pleasure fills / And dances with the daffodils.”

3. Poets sometimes describe non-living objects as human beings. For example, in the line ‘And the storm blast came and he was tyrannous and strong,’ the poet describes the storm as a strong and powerful tyrant in order to bring out the magnitude of the stormy weather. Such a description of an inanimate object as a human being is called personification. Find three examples of personification in the poem. What is the effect created by the use of personification?
Answer: Giving lifeless things human characteristics is called personification. In “The Daffodils,” the poet envisions himself as a cloud in the sky. To appreciate the daffodils, the poet imagines himself floating above the landscape like a cloud and looking down at the valleys and mountains below. Here, the poet gives the cloud the ability to perceive the daffodils, giving the cloud a sense of agency over something it normally could not have. The daffodils and the waves, in turn, express themselves through dancing, just as humans do. Despite the fact that flowers and waves are physically incapable of dancing, Wordsworth personified them by giving them the ability to do so. Wordsworth imagined a throng of daffodils to be a group of people. They are dancing with their heads tossed back and forth, just like people do when they are happy. Wordsworth uses personification to demonstrate the inseparability of man from the natural world. The daffodils have come to life, connecting humans to the natural world.

4. Pick out words in the poem which mean “being companionless.” Do you think the poet is happy to find himself in this state? Give reasons for your answer.
Answer: Words like “wandered lonely,” “vacant,” “pensive,” and “solitude” all refer to being alone in “The Daffodils.” For the first time, the poet felt content in his solitude. Through his isolation, he learned the need to establish a rapport with the natural world in order to gain perspective on one’s place in the cosmos. He is enjoying the “bliss of solitude,” and his heart is full of joy.

5. Poets sometimes compare two dissimilar objects or things to make their descriptions more vivid to the reader. These comparisons are, at times, direct. Direct comparison or simile uses words such as like or as to compare two things. In the poem “Rime of the Ancient Mariner”, Coleridge highlights the plight of the stationary ship in the following way:
We stuck, nor breath nor motion;
As idle as a painted ship
Upon a painted ocean.
A metaphor is an expression that describes a person or object in a literary way by referring to something that is considered to possess similar characteristics to the person or object you are describing. In the ‘Seven Ages of Man’, Shakespeare compares reputation to an ephemeral bubble:
Jealous in honour; sudden and quick in quarrel,
Seeking the bubble reputation
Even in the cannon’s mouth.
Pick out examples of similes and metaphors from the poem ‘The Daffodils’.
Answer: It is clear that the author of “The Daffodils” intended the use of similes and metaphors to heighten the poem’s artistic impact. Similes and metaphors used in the poem include:
a. “I wandered lonely as a cloud”
This statement contains a simile, since it likens the narrator’s solitude to that of a cloud floating in the sky, uniting him with the natural world.
b. “Continuous as the stars that shine / And twinkle on the milky way”
Using a simile, the poem connects the natural world to the cosmos by drawing parallels between the endless march of daffodils and the stars in the Milky Way.
c. “…..I saw a crowd, / A host, of golden daffodils”
Wordsworth uses metaphor here, comparing the daffodils to a gathering of people, a crowd and a host.
d. “Tossing their heads in sprightly dance.”
It is a metaphor (more precisely, personification), because it makes the daffodils sound as though they are dancing, their heads swaying and twirling.
e. “What wealth the show to me had brought”
The poet has used a metaphor involving wealth here.
In this context, “wealth” does not refer to monetary value; it refers to the store of joy that the sight of the daffodils has given the poet.

6. List out the adjectives describing the waves. Why do you think the poet has described them in such a manner?
Answer: The waves in “The Daffodils” are described as “sparkling”, along with a variety of other similar adjectives. It is a visual description of waves that sparkle and flit in the sunshine. The poet has painted a picture of the water that makes the daffodils sound even brighter and sunnier than they really are. Since everything appears to be dazzling, twinkling, shining, and sparkling, the waves’ dazzle also develops their relationship with the stars. Nature’s jubilant side is brought to light.

Reference to the context

1. Continuous as the stars that shine
And twinkle on the milky way,
They stretched in never-ending line
Along the margin of a bay:
Ten thousand saw I at a glance
Tossing their heads in sprightly dance.

(a) What does ‘they’ refer to?
Answer: ‘They’ refers to the golden daffodils.
(b) Why have they been compared to the Milky Way?
Answer: They were like the Milky Way in that they were countless, stretched in a continuous line, and danced joyfully like the stars.
(c) Pick out an example of personification from these lines. What is the picture created by this description?
Answer: Personification can be seen in the phrase “Tossing their heads in sprightly dance.” Like human beings, the daffodils express their elation and excitement by dancing and tossing their heads. Despite the fact that daffodils are actually unable to dance, the poet has assigned to them this human activity.
(d) Find an example of a rhyming couplet from these lines.
Answer: The rhyming couplet is:
Ten thousand saw I at a glance
Tossing their heads in sprightly dance.

2. Ten thousand saw I at a glance
Tossing their heads in sprightly dance.
The waves beside them danced, but they
Out-did the sparkling waves in glee:
A Poet could not be but gay
In such a jocund company!

(a) What did the poet see at a glance? Were they really ten thousand in number?
Answer: The poet saw a glimpse of ten thousand daffodils at once. They were not literally ten thousand in number; they were countless.
(b) How did ‘they’ outdo the waves?
Answer: The waves could not match the enthusiasm and joy of the golden daffodils, so the flowers “out-did” them.
(c) What do the waves refer to?
Answer: The waves refer to the waves of the lake water.
(d) How did the scene affect the poet?
Answer: The poet felt a surge of happiness at the sight.
(e) Pick out three words that mean ‘being happy.’
Answer: Glee, gay, jocund.
(f) Find two examples of personification from these lines.
Answer: Two examples of personification from these lines are:
“Tossing their heads in sprightly dance” - here Wordsworth gives the daffodils human characteristics; they have been endowed with the ability to dance, much like humans.
“The waves beside them danced” - along with the daffodils, the waves are endowed with the ability to dance. The waves and the daffodils are thus humanised.

3. I gazed – and gazed – but little thought
What wealth the show to me had brought:
For oft, when on my couch I lie
In vacant or in pensive mood,
They flash upon that inward eye
Which is the bliss of solitude;

(a) What is the ‘wealth’ that the poet is referring to in these lines? What kind of poetic device is this?
Answer: The “wealth” in this scene is the view of the golden daffodils. The poetic device used here is metaphor.
(b) Why does the poet refer to it as ‘wealth’?
Answer: The poet calls it “wealth” because, like wealth, it gave him joy and happiness when he was alone. It supported him during his difficult moments.
(c) When does the poet feel blissful?
Answer: The poet feels blissful when, lying on his couch in a vacant or pensive mood, the memory of the daffodils flashes upon his inward eye.
(d) Had the poet realised the importance of the scene when he had first seen it? Give reasons for your answer.
Answer: The poet had not realised the significance of the daffodil scene when he first saw it. He simply looked intently for a long time, becoming cheerful as he took in the beauty of the daffodils, and paid no heed to the effect the view had on his mind. It was only later, when he was alone and in a low mood, that he realised how important it was: when the scene passed through his thoughts, it made him joyful, and he forgot his material surroundings and felt himself among the cheerful daffodils. Only then did he realise that the vista had left him with a treasure trove of joy and brightness.

Additional/extra questions and answers/solutions

1. How does the poet connect the daffodils to the stars?
Answer: The poet compares the swarm of golden daffodils to the stars. The daffodils, like stars in the Milky Way, are not only numerous, but they also dance in a never-ending line with full vitality and excitement.

4. When did the poet discover the ‘wealth the show had brought’?
Answer: Only after he has returned home does the poet comprehend the significance of the event that unfolded before his eyes. During his darkest, most contemplative times, the remembrance of the daffodils brought him serenity and solace.
In Panama’s lowland tropical forest, tree species growing on low phosphorus soils grew faster, on average, than species growing on high phosphorus soils. Photo courtesy Smithsonian Tropical Research Institute Archives

Accepted ecological theory says that poor soils limit the productivity of tropical forests, but adding nutrients as fertilizer rarely increases tree growth, suggesting that productivity is not limited by nutrients after all. Researchers at the Smithsonian Tropical Research Institute (STRI) resolved this apparent contradiction, showing that phosphorus limits the growth of individual tree species but not entire forest communities. Their results, published online in Nature, March 8, have sweeping implications for understanding forest growth and change. Vast areas of the tropics occur on old landscapes where rock-derived nutrients have been leached away by years of heavy rainfall. Phosphorus is particularly scarce, because the iron oxides that give tropical soils their characteristic red color bind to the phosphorus, making it unavailable to plants. However, the addition of fertilizer to diverse forests in Africa, Southeast Asia and the Americas has not increased tree growth. The only place where fertilization resulted in increased tree growth was in Hawaii, where the forest is dominated by a single tree species. An alternative way to study nutrient limitation is by comparing the growth rates of trees in forests that naturally differ in soil nutrient availability: the tiny but highly biodiverse tropical country of Panama provides a perfect setting for this. The complex geology of central Panama means that natural levels of plant-available phosphorus in the soil vary more than 300-fold—similar to the range of phosphate availability in tropical soils around the world. And because the soils in Panama also vary in moisture and other nutrients such as nitrogen, calcium and potassium, researchers can study the effects of these variables on growth at the same time. To examine the effect of phosphorus on tree growth, researchers measured 19,000 individual trees in 541 different tree species in a series of long-term forest monitoring plots that are part of the Forest Global Earth Observatory (Smithsonian ForestGEO) network managed by the Center for Tropical Forest Science at STRI. On average, growth rates of individual tree species increased in soils with higher levels of plant-available phosphorus, consistent with ecological theory. Surprisingly, however, tree species that occurred on low phosphorus soils grew faster, on average, than species growing on high phosphorus soils. And in a final twist, variation in the tree species present across plots meant that community-wide growth rates did not change according to the level of soil phosphorus. “Finding that species adapted to low phosphorus soils are growing so fast was a real surprise,” said Ben Turner, STRI staff scientist, who led the study. “We still don’t understand why this occurs, nor why high phosphorus species are not growing faster than they are. Perhaps trees invest extra phosphorus in reproduction rather than growth, for example, because seeds, fruits and pollen are rich in phosphorus.
For now, these results help us to understand how soil fertility influences tree growth in tropical forests, and demonstrate once again the power of tropical diversity to surprise us.” “This study highlights our limited understanding of how plants cope with phosphorus-poor soils, a significant challenge to farmers through much of the tropics,” said Jim Dalling, STRI research associate and professor and head of the Department of Plant Biology at the University of Illinois Urbana-Champaign. “Comparing how plants adapted to high versus low phosphorus availability acquire and use this critical nutrient could suggest new approaches for increasing food production without relying on costly fertilizers.” The Smithsonian Tropical Research Institute, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. The Institute furthers the understanding of tropical biodiversity and its importance to human welfare, trains students to conduct research in the tropics and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems.

Turner, B.L., Brenes-Arguedas, T., and Condit, R. 2018. Pervasive phosphorus limitation of tree species but not communities in tropical forests. Nature. doi:10.1038/nature25789
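The species-versus-community contrast reported above is easier to see with a toy calculation: if the inherently fast-growing species happen to occupy the low-phosphorus plots, every species can show a positive growth response to phosphorus while the community-wide trend stays flat or even reverses. The sketch below is purely illustrative and uses made-up numbers, not the study's data.

```python
# Toy illustration (made-up numbers, not the study's data): each species grows
# faster on higher-phosphorus plots, yet the pooled community-wide trend is
# flat or negative because the fast-growing species occupies the poor soils.

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# (log soil phosphorus, growth rate) observations for two hypothetical species
species_a = [(0.5, 3.0), (1.0, 3.4), (1.5, 3.8)]   # fast grower, found on low-P plots
species_b = [(2.5, 1.0), (3.0, 1.4), (3.5, 1.8)]   # slow grower, found on high-P plots

for name, obs in [("species A", species_a), ("species B", species_b)]:
    print(name, "slope:", round(slope(*zip(*obs)), 2))        # both positive

all_obs = species_a + species_b
print("community slope:", round(slope(*zip(*all_obs)), 2))    # near zero or negative
```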
Lake Water Quality Monitoring water quality in lakes and reservoirs is key in maintaining safe water for drinking, bathing, fishing and agriculture and aquaculture activities. Long-term trends and short-term changes are indicators of environmental health and changes in the water catchment area. Directives such as the EU's Water Framework Directive or the US EPA Clean Water Act request information about the ecological status of all lakes larger than 50 ha. Satellite monitoring helps to systematically cover a large number of lakes and reservoirs, reducing needs for monitoring infrastructure (e.g. vessels) and efforts. The Lake Water Products (lake water quality, lake surface water temperature) provide a semi-continuous observation record for a large number (nominally 4,200) of medium and large-sized lakes, according to the Global Lakes and Wetlands Database (GLWD) or otherwise of specific environmental monitoring interest. Next to the lake surface water temperature that is provided separately, this record consists of three water quality parameters: - The turbidity of a lake describes water clarity, or whether sunlight can penetrate deeper parts of the lake. Turbidity often varies seasonally, both with the discharge of rivers and growth of phytoplankton (algae and cyanobacteria). - The trophic state index is an indicator of the productivity of a lake in terms of phytoplankton, and indirectly (over longer time scales) reflects the eutrophication status of a water body. - Finally, the lake surface reflectances describe the apparent colour of the water body, intended for scientific users interested in further development of algorithms. The reflectance bands can also be used to produce true-colour images by combining the visual wavebands.
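As an illustration of how a chlorophyll-based trophic state index can be computed, here is a small sketch using Carlson's (1977) formulation; the operational Lake Water Quality products may use a different algorithm, so this is indicative only, and the chlorophyll-a values are hypothetical.

```python
import math

def carlson_tsi_chlorophyll(chl_ug_per_l: float) -> float:
    """Carlson (1977) trophic state index from chlorophyll-a in micrograms per litre."""
    return 9.81 * math.log(chl_ug_per_l) + 30.6

def trophic_class(tsi: float) -> str:
    """Coarse interpretation of the index (boundaries are approximate)."""
    if tsi < 40:
        return "oligotrophic (low productivity)"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

# Hypothetical chlorophyll-a retrievals for three lakes, in micrograms per litre
for chl in (1.5, 8.0, 45.0):
    tsi = carlson_tsi_chlorophyll(chl)
    print(f"chl-a = {chl:5.1f} ug/L -> TSI ≈ {tsi:4.1f} ({trophic_class(tsi)})")
```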
The relationship between language and culture is complex, yet each is a part of the other: you learn a culture as you start learning its language. This article emphasizes the fundamental role of language in culture and vice versa.

Philology is the study of language in oral and written historical sources; it is a combination of literary criticism, history, and linguistics. Philology is more commonly defined as the study of literary texts as well as oral and written records, the establishment of their authenticity and their original form, and the determination of their meaning. A person who pursues this kind of study is known as a philologist. In older usage, especially British, philology is more general, covering comparative and historical linguistics.

Phonological awareness refers to an individual's awareness of the phonological structure, or sound structure, of words. Phonological awareness is an important and reliable predictor of later reading ability and has, therefore, been the focus of much research.

Tone in language means that different tones will change the meaning of words, even if the pronunciation of the word is otherwise the same. A word's meaning can also differ depending on which syllable is stressed. Tone in linguistics is the use of pitch in language to distinguish lexical or grammatical meaning. All verbal languages use pitch to express emotional and other paralinguistic information and to convey emphasis, contrast, and other such features in what is called intonation, but not all languages use tones to distinguish words or their inflections, analogously to consonants and vowels.

Stress in linguistics, or accent, is the relative emphasis or prominence given to a certain syllable in a word, or to a certain word in a phrase or sentence. This emphasis is typically caused by such properties as increased loudness and vowel length, full articulation of the vowel, and changes in pitch.

High rising terminal is a feature of some variants of English in which declarative sentence clauses end with a rising-pitch intonation, until the end of the sentence where a falling pitch is applied. New research suggests that the actual rise can occur one or more syllables after the last accented syllable of the phrase, and that its range is much more variable than previously thought.

Intonation in linguistics is variation of spoken pitch that is not used to distinguish words; instead it is used for a range of functions such as indicating the attitudes and emotions of the speaker, signaling the difference between statements and questions and between different types of questions, focusing attention on important elements of the spoken message, and helping to regulate conversational interaction. Intonation contrasts with tone, in which pitch variation does distinguish words. So when your voice rises at the end of a question, that is technically called intonation.

Inflection has several meanings: (1) a change in the form of a word, usually by adding a suffix, to indicate a change in its grammatical function; (2) the patterns of stress and intonation in a language;
It can also describe a manner of speaking in which the loudness, pitch, or tone of the voice is modified, or a deviation from a straight or normal course. In grammar, inflection is the modification of a word to express different grammatical categories such as tense, case, voice, aspect, person, number, gender, and mood. The inflection of verbs is also called conjugation, and one can refer to the inflection of nouns, adjectives, adverbs, pronouns, determiners, participles, prepositions, postpositions, numerals, articles, etc. An inflection expresses one or more grammatical categories with a prefix, suffix or infix, or another internal modification such as a vowel change. For example, the Latin verb ducam, meaning "I will lead", includes the suffix -am, expressing person (first), number (singular), and tense-mood (future indicative or present subjunctive). The use of this suffix is an inflection. In contrast, in the English clause "I will lead", the word lead is not inflected for any of person, number, or tense; it is simply the bare form of a verb. The inflected form of a word often contains both one or more free morphemes (a unit of meaning which can stand by itself as a word) and one or more bound morphemes (a unit of meaning which cannot stand alone as a word). For example, the English word cars is a noun that is inflected for number, specifically to express the plural; the content morpheme car is unbound because it could stand alone as a word, while the suffix -s is bound because it cannot stand alone as a word. These two morphemes together form the inflected word cars. Words that are never subject to inflection are said to be invariant; for example, the English verb must is an invariant item: its categories can be determined only from its context. Requiring the forms or inflections of more than one word in a sentence to be compatible with each other according to the rules of the language is known as concord or agreement. For example, in "the choir sings", "choir" is a singular noun, so "sing" is constrained in the present tense to use the third person singular suffix "s". Languages that have some degree of inflection are synthetic languages. Languages that are so inflected that a sentence can consist of a single highly inflected word (as in many American Indian languages) are called polysynthetic languages. Languages in which each inflection conveys only a single grammatical category, such as Finnish, are known as agglutinative languages, while languages in which a single inflection can convey multiple grammatical roles (such as both nominative case and plural, as in Latin and German) are called fusional. Languages such as Mandarin Chinese that never use inflections are called analytic or isolating. To conjugate is to give the different forms of a verb in an inflected language as they vary according to voice, mood, tense, number, and person. Grammatical conjugation is the creation of derived forms of a verb from its principal parts by inflection (alteration of form according to rules of grammar). Conjugation may be affected by person, number, gender, tense, aspect, mood, voice, case, and other grammatical categories such as possession, definiteness, politeness, causativity, clusivity, interrogativity, transitivity, valency, polarity, telicity, volition, mirativity, evidentiality, animacy, associativity, pluractionality, reciprocity, agreement, polypersonal agreement, incorporation, noun class, noun classifiers, and verb classifiers in some languages.
Agglutinative and polysynthetic languages tend to have the most complex conjugations albeit some fusional languages such as Archi can also have extremely complex conjugation. All the different forms of the same verb constitute a lexeme, and the canonical form of the verb that is conventionally used to represent that lexeme as seen in dictionary entries is called a lemma. The term conjugation is applied only to the inflection of verbs, and not of other parts of speech inflection of nouns and adjectives is known as declension. Also it is often restricted to denoting the formation of finite forms of a verb — these may be referred to as conjugated forms, as opposed to non-finite forms, such as the infinitive or gerund, which tend not to be marked for most of the grammatical categories. Correlative Conjunction come in pairs some are: She'd rather play the drums than sing. Conjugation is also the traditional name for a group of verbs that share a similar conjugation pattern in a particular language a verb class. For example, Latin is said to have four conjugations of verbs. This means that any regular Latin verb can be conjugated in any person, number, tense, mood, and voice by knowing which of the four conjugation groups it belongs to, and its principal parts. A verb that does not follow all of the standard conjugation patterns of the language is said to be an irregular verb. The system of all conjugated variants of a particular verb or class of verbs is called a verb paradigm; this may be presented in the form of a conjugation table. Deflexion is a linguistic process related to inflectional languages. All members of the Indo-European language family belong to this kind of language and are subject to some degree of deflexional change. The process is typified by the degeneration of the inflectional structure of a language. This phenomenon has been especially strong in Western European languages, such as English, French, and others. Variation in linguistics is a characteristic of language: Speakers may vary pronunciation accentword choice lexiconor morphology and syntax sometimes called "grammar". But while the diversity of variation is great, there seem to be boundaries on variation — speakers do not generally make drastic alterations in sentence word order or use novel sounds that are completely foreign to the language being spoken. Language variation does not equate with language ungrammaticality, but speakers are still often unconsciously sensitive to what is and is not possible in their native tongue. Multilingual states can exist and prosper; Switzerland is a good example. But linguistic rivalry and strife can be disruptive. Language riots have occurred in Belgium between French and Flemish speakers and in parts of India between rival vernacular communities. A language can become or be made a focus of loyalty for a minority community that thinks itself suppressed, persecuted, or subjected to discrimination. The French language in Canada in the midth century is an example. Language is a systematic means of communicating by the use of sounds or conventional symbols A language may be a target for attack or suppression if the authorities associate it with what they consider a disaffected or rebellious group or a culturally inferior one. There have been periods when American Indian children were forbidden to speak a language other than English at school and when pupils were not allowed to speak Welsh in British state schools in Wales. Both these prohibitions have been abandoned. 
After the Spanish Civil War of the s, Basque speakers were discouraged from using their language in public as a consequence of the strong support given by the Basques to the republican forces. Interestingly, on the other side of the Franco-Spanish frontier, French Basques were positively encouraged to keep their language in use, if only as an object of touristic interest and consequent economic benefit to the area. Translation So far, some of the relatively large-scale effects of culture contacts on languages and on dialects within languages have been surveyed. A continuous concomitant of contact between two mutually incomprehensible languages and one that does not lead either to suppression or extension of either is translation. As soon as two users of different languages need to converse, translation is necessary, either through a third party or directly. Before the invention and diffusion of writing, translation was instantaneous and oral; persons professionally specializing in such work were called interpreters. In predominantly or wholly literate communities, translation is thought of as the conversion of a written text in one language into a written text in another, though the modern emergence of the simultaneous translator or professional interpreter at international conferences keeps the oral side of translation very much alive. The main problems have been recognized since antiquity and were expressed by St. Semantically, these problems relate to the adjustment of the literal and the literary and to the conflicts that so often occur between an exact translation of each word, as far as this is possible, and the production of a whole sentence or even a whole text that conveys as much of the meaning of the original as can be managed. These problems and conflicts arise because of factors already noticed in the use and functioning of language: Even between the languages of communities whose cultures are fairly closely allied, there is by no means a one-to-one relation of exact lexical equivalence between the items of their vocabularies. In their lexical meanings, words acquire various overtones and associations that are not shared by the nearest corresponding words in other languages; this may vitiate a literal translation. In modern times translators of the Bible into the languages of peoples culturally remote from Europe are well aware of the difficulties of finding a lexical equivalent for lamb when the intended readers, even if they have seen sheep and lambs, have no tradition of blood sacrifice for expiation or long-hallowed associations of lambs with lovableness, innocence, and apparent helplessness. The English word uncle has, for various reasons, a cozy and slightly comic set of associations. This is because poetry is, in the first instance, carefully contrived to express exactly what the poet wants to say. Second, to achieve this end, poets call forth all the resources of the language in which they are composing, matching the choice of words, the order of words, and grammatical constructions, as well as phonological features peculiar to the language in metreperhaps supplemented by rhymeassonanceand alliteration. The available resources differ from language to language; English and German rely on stress-marked metres, but Latin and Greek used quantitative metres, contrasting long and short syllables, while French places approximately equal stress and length on each syllable. 
Translators must try to match the stylistic exploitation of the particular resources in the original language with comparable resources from their own. Because lexical, grammatical, and metrical considerations are all interrelated and interwoven in poetry, a satisfactory literary translation is usually very far from a literal word-for-word rendering. The more poets rely on language form, the more embedded their verses are in that particular language and the harder the texts are to translate adequately. This is especially true with lyrical poetry in several languages, with its wordplay, complex rhymes, and frequent assonances. Remarkable advances in automatic computer translation were made during the closing decades of the 20th century—the result of progress in computational techniques and a fresh burst of research energy focused on the problem—while the spread of the Internet in subsequent decades transformed approaches to, and the ease of, all forms of translation. Translation on the whole is, arguably, more art than science; the Italian epigram traduttore traditore ("the translator is a traitor") remains justified. Sometimes, however, people want to restrict communication. Confidential messages require for their efficacy that they be known to and understood by only the single person or the few persons to whom they are addressed. Such are diplomatic exchanges, operational messages in wartime, and some transmissions of commercial information. Protection of written messages from interception has been practiced for many centuries. Twentieth-century developments in telegraphy and telephony, and the emergence and growth of the Internet, made protection against unauthorized reception more urgent, whether of texts transmitted as speech or those sent as series of letters of the alphabet. Codes and ciphers (cryptography) are of much longer standing in the concealment of written messages, though their techniques are being constantly developed. Such gains are, of course, countered by developments in the techniques of decipherment and decoding (as distinct from getting hold of the key to the system in use). An important by-product of such techniques has been the reading and interpretation of inscriptions written in otherwise unknown languages or unknown writing systems for which no translation exists. It has been pointed out above that the process of first-language acquisition as a medium of communication is largely achieved from random exposure. There is legitimate controversy, however, over the nature and extent of the positive contribution that the human brain brings, both cognitively and linguistically, to the activity of grammar construction—the activity by which children develop an indefinitely creative competence from the finite data that make up their actual experience of the language. The importance of social interaction between children and their interlocutors is another significant factor. Creativity is what must be stressed as the product of first-language acquisition. By far the greater number of all the sentences people create during their lifetime are new; that is, they have not occurred before in their personal experience. But individuals find no difficulty at all in understanding at once almost everything they hear or otherwise receive or for the most part in producing sentences to suit the requirements of every situation. This very ease of creativity in human linguistic competence makes it hard to realize its extent. It is simply part of what is expected in growing up.
Different people may be singled out for praise in certain uses of their language, as good public speakers, authors, poets, tellers of tales, and solvers of puzzles, but not just as communicators. Bilingualism The learning of a second and of any subsequently acquired language is quite a separate matter. Of course, many people never do master significantly more than their own first language. It is only in encountering a second language that one realizes how complex language is and how much effort must be devoted to subsequent acquisition. It has been said that the principal obstacle to learning a language is knowing one already, and common experience suggests that the faculty of grammar construction exhibited in childhood is one that is gradually lost as childhood recedes. AdstockRF Whereas most people master their native language with unconscious ease, individuals vary in their ability to learn additional languages, just as they vary in other intellectual activities. Situational motivationhowever, appears to be by far the strongest influence on the speed and apparent ease of this learning. The greatest difficulty is experienced by those who learn because they are told to or are expected to, without supporting reasons that they can justify. Given a motive other than external compulsion or expectation, the task is achieved much more easily this, of course, is an observation in no way confined to language learning. In Welsh schools, for instance, it has been found that English children make slower progress in Welsh when their only apparent reason for learning Welsh is that there are Welsh classes. Welsh children, on the other hand, make rapid progress in English, the language of most further education, the newspapers, most television and radio, most of the better-paid jobs, and any job outside Welsh-speaking areas. Similar differences in motivation have accounted for the excellent standard of English, French, and German acquired by educated persons in the Scandinavian countries and in the Netherlands, small countries whose languages, being spoken by relatively few foreigners, are of little use in international communication. This attainment may be compared with the much poorer showing in second-language acquisition among comparably educated persons in England and the United States, who have for long been able to rely on foreigners accommodating to their ignorance by speaking and understanding English. It is sometimes held that children brought up bilingually in places in which two languages are regularly in use are slower in schoolwork than comparable monolingual children, as a greater amount of mental effort has to be expended in the mastery of two languages. This has by no means been proved, and indeed there is evidence to the contrary. The question of speed of general learning by bilinguals and monolinguals must be left open. It is quite a separate matter from the job of learning, by teaching at home or in school, to read and write in two languages; this undoubtedly is more of a labour than the acquisition of monolingual literacy. 
Two types of bilingualism have been distinguished, according to whether the two languages were acquired from the simultaneous experience of the use of both in the same circumstances and settings or from exposure to each language used in different settings an example of the latter is the experience of English children living in India during the period of British ascendancy there, learning English from their parents and an Indian language from their nurses and family servants. However acquired, bilingualism leads to mutual interference between the two languages; extensive bilingualism within a community is sometimes held partly responsible for linguistic change. Interference may take place in pronunciation, in grammar, and in the meanings of words. Speaking, signing, and writing are learned skills, but there the resemblance ends. Children learn their first language at the start involuntarily and mostly unconsciously from random exposure, even if no attempts at teaching are made. Literacy is deliberately taught and consciously and deliberately learned. There is ongoing debate on the best methods and techniques for teaching literacy in various social and linguistic settings. Literacy is learned by a person already possessed of the basic structure and vocabulary of his language. Such facts should be obvious, but the now-accepted standard of near-universal literacy in technologically advanced countries, along with the fact that in second-language learning one usually acquires speech and writing skills at the same time, tends to bring these parts of language learning under one head. Literacy is manifestly a desirable attainment for all communities, though not necessarily in all languages. It must be borne in mind that there are many distinct languages spoken in the world today by fewer than 1, or or even 50 persons. The capital investment in literacy, including teaching resources, teacher time and training, printing, publications, and so forth, is vast, and it can be economically and socially justified only when applied to languages used and likely to continue to be used by substantial numbers over a wide area. Literacy is in no way necessary for the maintenance of linguistic structure or vocabulary, though it does enable people to add words from the common written stock in dictionaries to their personal vocabulary very easily. It is worth emphasizing that until relatively recently in human history all languages were spoken or signed by illiterate speakers and that there is no essential difference as regards pronunciation, structure, and complexity of vocabulary between spoken or signed languages that have writing systems used by all or nearly all their speakers and the languages of illiterate communities. Literacy has many effects on the uses to which language may be put; storage, retrieval, and dissemination of information are greatly facilitatedand some uses of language, such as philosophical system building and the keeping of detailed historical records, would scarcely be possible in a community wholly without writing. In these respects the lexical content of a language is affected, for example, by the creation of sets of technical terms for philosophical writing and debate. Because the permanence of writing overcomes the limitations of memory span imposed on speech or signing, sentences of greater length can easily occur in writing, especially in types of written language that are not normally read aloud and that do not directly represent what would be spoken. 
An examination of some kinds of oral literaturehowever, reveals the ability of the human brain to receive and interpret spoken sentences of considerable grammatical complexity. In relation to pronunciationwriting does not prevent the historical changes that occur in all languages. Part of the apparent irrationality of English spellingsuch as is found also in some other orthographies, lies just in the fact that letter sequences have remained constant while the sounds represented by them have changed. For example, the gh of light once stood for a consonant sound, as it still does in the word as pronounced in some Scots dialects, and the k of knave and knight likewise stood for an initial k sound compare the related German words Knabe and Knecht. A few relatively uncommon words, including some proper names, are reformed phonetically, specifically to bring their pronunciation more in line with their spelling. Spelling pronunciations, as these are called, are a product of general literacy. In London the pronunciation of St. Aristotle expressed the relation thus: But it is not as simple as this would suggest. Alphabetic writing, in which, broadly, consonant and vowel sounds are indicated by letters in sequence, is the most widespread system in use today, and it is the means by which literacy will be disseminatedbut it is not the only system, nor is it the earliest. Evolution of writing systems Writing appears to have been evolved from an extension of picture signs: Other words or word elements not readily represented pictorially could be assigned picture signs already standing for a word of the same or nearly the same pronunciation, perhaps with some additional mark to keep the two signs apart. This opens the way for what is called a character script, such as that of Chinesein which each word is graphically represented by a separate individual symbol or character or by a sequence of two or more such characters. Writing systems of this sort have appeared independently in different parts of the world. Chinese character writing has for many centuries been stylized, but it still bears marks of the pictorial origin of some characters. Chinese characters and the characters of similar writing systems are sometimes called ideograms, as if they directly represented thoughts or ideas. This is not so. Chinese characters stand for Chinese words or, particularly as in modern Chinese, bits of words logograms ; they are the symbolization of a particular language, not a potentially universal representation of thought. Character writing is laborious to learn and imposes a burden on the memory. Alternatives to it, in addition to alphabetic writing, include scripts that employ separate symbols for the syllable sequences of consonants and vowels in a language, with graphic devices to indicate consonants not followed by a vowel. The Devanagari script, in which classical Sanskrit and modern Hindi are written, is of this type, and the Mycenaean writing system, a form of Greek writing in use in the 2nd millennium bce and quite independent of the later Greek alphabet, was syllabic in structure. Japanese employs a mixed system, broadly representing the roots of words by Chinese characters kanji and the inflectional endings by syllable signs kana. These syllable signs are an illustration of the way in which a syllabic script can develop from a character script: The Greek alphabet came from the Phoenician scripta syllabic-type writing system that indicated the consonant sounds. 
By a stroke of genius, a Greek community decided to employ certain consonantal signs to which no consonant sound corresponded in Greek as independent vowel signs, thus producing an alphabet, a set of letters standing for consonants and vowels. The Greek alphabet spread over the ancient Greek world, undergoing minor changes. From a Western version sprang the Latin (Roman) alphabet. Also derived from the Greek alphabet, the Cyrillic alphabet was devised in the 9th century ce by a Greek missionary, St. Cyril.
Vanguard School Junior High Curriculum Summaries Critical Thinking Strand Integrated into our junior high curricula is an emphasis on critical thinking. Students in 7th and 8th-grade build upon the facts and skills they have mastered in elementary school and further develop their analytical thinking skills across the curriculum by direct instruction of critical thinking concepts and by applying these concepts as they solve problems and engage in primary source readings. Direct Instruction Methodology One of the underlying assumptions of Direct Instruction is that all children can be taught, regardless of past history. By design, Direct Instruction applies purposeful instructional planning to give students extensive support as they practice and apply newly learned concepts and skills. Presentation has a lot to do with how effectively students learn. Teachers use quick pacing and group responses to keep all students engaged and implement planned correction procedures to prevent errors from becoming learned habits.
Our everyday is full of gendered language, so we need to know how to use the proper pronouns when referring to anyone. Pronouns are one of the most common ways to address someone’s gender identity. So, how come it is still so hard for people to understand which pronouns to use? Take this as a crash course on the basics of pronouns and how to be a better ally to trans, gender non-conforming and non-binary folks. Language is evolving As we learn more and more about gender we become more aware that gender is fluid. So, our language should adapt with the identity of those using it. We might have grown up in an environment where only “she” and “he” were taught as singular pronouns. But now, they/them are acceptable singular pronouns. Since english lacks a gender neutral singular pronoun a change needed to be made, and dictionaries, academics and sociologists agreed that they/them would be an acceptable alternative. This means that if someone’s pronouns are they/them it is grammatically correct! For example: instead of saying “he is going home,” let’s try “they’re going home.” Pronouns are a personal decision Since gender is fluid someone’s identity is incredibly personal. So, only they can tell you how they identify. I feel like this should go without saying. Thinking that someone presents more feminine or masculine does not mean that they identify as woman or man. We can’t decide someone’s identity for them, so we shouldn’t try to impose their pronouns either. Pronouns are not preferred, they are mandatory When someone tells you their pronouns it is not a suggestion. Use those pronouns. I’ve heard the term “preferred pronouns” a lot, and that implies that other pronouns are acceptable, but we have a favorite one. The pronouns we identify with are just that, our pronouns. There is no other option. So, if someone shares their pronouns with you, respect them. Normalize the conversation There are a lot of people who might not be comfortable sharing their pronouns the first time they meet someone. A good way to make sure that everyone feels comfortable can be introducing yourself and adding your pronouns. E.g. “My name is Maria. My pronouns are she/her/hers.” That will open up the conversation and others will feel more comfortable introducing their pronouns. We use pronouns on a daily basis so normalizing that introduction is a really easy way to open up our space and making sure that people are comfortable. Also, it is super easy. It is okay to make mistakes Being comfortable with pronouns is a learning process. It is human to make mistakes. However, we should try to correct our mistakes, apologize, and make an effort to not misgender people. One of the easiest way to avoid misgendering someone is by using they/them pronouns until we know which pronouns are the right ones to use. It might be hard to adapt to a shift in language, but we should respect people’s gender and identity. At the end of the day it is easier with practice and it will go a long way. Go the extra mile Gendered language is all around us. An example is walking into a classroom and assuming that everyone there identifies as female so we say “hey ladies.” Instead we can adopt more gender neutral language such as “hey y’all” or “hi everyone”. Gender inclusive language will make everyone feel more comfortable and accepted in a space, and it is really common. We just need to be aware of the words we use on a day to day, and how they impact those around us.1 Also published on Medium.
Tendrils of dark matter channelled gas deep into the hearts of some of the universe’s earliest galaxies, a new computer simulation suggests. The result could explain how some massive galaxies created vast numbers of stars without gobbling up their neighbours. Dramatic bursts of star formation are thought to occur when galaxies merge and their gas collides and heats up. Evidence of these smash-ups is fairly easy to spot, since they leave behind mangled pairs of galaxies that eventually merge, their gas settling into a bright, compact centre. But several years ago, astronomers began finding disc-like galaxies with crowded stellar nurseries that seemed to bear no hallmarks of a past collision. These galaxies, which thrived when the universe was just 3 billion years old, were at least as massive as the Milky Way, but created stars at some 50 times our galaxy’s rate. It was not clear how these galaxies could harbour such intense bursts of star formation without collisions. Smaller galaxies are thought to form when gas falls in from all directions. But this process would not work with larger galaxies – those about the Milky Way’s size or heavier. These galaxies grow so hot and dense they create a shock-wave-like barrier that heats incoming gas and prevents it from falling in. But Avishai Dekel of Hebrew University in Jerusalem thinks an influx of gas could be responsible for the star formation after all. This gas could flow along filaments of dark matter that make up a cosmic web still seen today in the distribution of galaxies across the sky. Dekel and colleagues used fluid dynamics simulations to model the cosmic web of gas and dark matter at a time when the universe was some 3 billion years old. They tracked how gas accumulated in galaxies lying at the nodes of the web, where dark matter filaments intersect. The team found that gas in the tendrils was so dense that collisions between particles would dissipate energy quickly, making it less susceptible to shocks in the surrounding, hotter gas. The cool gas could then fall into the galaxy’s disc fast enough to fuel dramatic starbursts. “We found the gas can penetrate all the way through the hot material,” Dekel told New Scientist. “This is solving the riddle of where this star formation is coming from.” Reinhard Genzel of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, says the explanation could work, but adds that the simulation cannot estimate how rapidly the gas can be converted to stars, which would be a crucial test. More detailed simulations and studies of the galaxies themselves could confirm the model. “We need more time to test it out, but it smells like the right answer,” Genzel told New Scientist. See also: Galaxies give birth to stars on cosmic highways Journal reference: Nature (vol 457, p 451) More on these topics:
Optoreflector sensors contain a matched infrared transmitter (LED) and infrared receiver (usually a phototransistor) pair. These devices work by measuring the amount of light that is reflected into the receiver. Because the receiver also responds to ambient light, the device works best when well shielded from ambient light, and when the distance between the sensor and the reflective surface is small (the graph below shows how distance affects the output value). IR reflectance sensors are often used to detect white and black surfaces. White surfaces generally reflect well, while black surfaces reflect poorly. Example Digital Circuit The circuit below can be used to detect objects directly in front of the QRB1114. The potentiometer must be used to adjust for ambient lighting conditions. The output is an analog signal like the one at the top of the page - dependent on distance. The sensor works best when it is at least partially shielded from ambient light. There are two ways to process the output: 1) Build extra circuitry using a Schmitt trigger or 555 timer to turn the output into a digital signal. 2) Use an analog input and apply a software threshold. The second way is easier and works well. Start with the program below to adjust your voltage thresholds (using the relay block) and light sensitivity (by adjusting the potentiometer). Example Digital Circuit 2 The circuit below uses hardware to process the signal and uses the PC/104 stack's digital input. Increasing the value of the resistor R1 will increase the sensitivity, and vice versa. Beware that if you make the sensor too sensitive, ambient light will be able to trip it. The following XPC Target program will count the number of times the sensor is tripped and print the count to the Seetron BPI-216 LCD. To use it, plug the output of the circuit above into channel 2 of the digital input on the break-out board. Pressing Bit 1 on the break-out board will reset the count. The Simulink model can be downloaded here: optoreflector_ex2_XPC_program.zip Example Analog Circuit To make an analog circuit, just take out the Schmitt trigger from Example Digital Circuit 2 and connect the input directly to the analog input on the break-out board. The ADC will output the closest integer voltage of the input between -10V and 10V. For this circuit, the voltage will fall between 0V and 5V. The Simulink model for the example above can be downloaded here: optoreflector_analog_XPC_program.zip
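As a rough sketch of the "analog input plus software threshold" approach described above (option 2), the following Python loop counts sensor trips using a simple hysteresis band and a reset input. It is not the XPC Target/Simulink model referred to in the text: the read_adc and read_reset_button callables, the 0–5 V range, and the threshold values are assumptions standing in for whatever ADC driver and reset button your hardware actually provides.

```python
import time

HIGH_THRESH = 3.0   # volts: readings above this count as "object detected" (tune for your ambient light)
LOW_THRESH = 2.0    # volts: readings below this mean "object gone"; the gap gives simple hysteresis

def count_trips(read_adc, read_reset_button, poll_interval_s=0.01):
    """Count rising crossings of the detection threshold; reset when the button reads True."""
    count = 0
    tripped = False
    while True:
        volts = read_adc()                      # assumed to return the 0-5 V sensor output
        if not tripped and volts > HIGH_THRESH:
            tripped = True
            count += 1
            print("trips:", count)
        elif tripped and volts < LOW_THRESH:
            tripped = False
        if read_reset_button():                 # mimics the Bit 1 reset in the original example
            count = 0
        time.sleep(poll_interval_s)
```

The gap between the two thresholds plays the same role as the Schmitt trigger in the hardware version: it stops a noisy reading hovering near a single threshold from being counted as many separate trips.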
How Did Life Get Started? The origin of life remains the deepest of enigmas: How did this supremely complex phenomenon get started? The explanation historically has revolved around DNA, the genetic molecule that serves as a pattern for building proteins. Proteins, in turn, form enzymes, which catalyze, or facilitate, biochemical reactions, including the construction of DNA. And thus the paradox: Genes require enzymes, but enzymes require genes. Which came first? After a long focus on DNA, many life scientists are coalescing around a concept called the RNA World, which postulates that life began with RNA, which, like DNA, is built of chains of molecules called nucleotides. Our understanding of RNA has come a long way since the 1960s, when the “central dogma” of molecular biology held that RNA was a simple messenger-boy that carried DNA’s information to ribosomes, the cellular factories where proteins get built. Around 1980, biologists realized that not only could RNA transfer information, but, like proteins, it could also process chemicals – it could catalyze reactions. That ability to do both jobs suggested that RNA, not DNA, could be the primary molecule in life. DNA stores information “like a computer hard drive,” says Niles Lehman, professor of chemistry at Portland State University, “but beyond that, DNA doesn’t do anything. RNA, on the other hand, can fold into a 3-D structure, that also allows it to catalyze a chemical reaction.” (As Lehman indicates, to perform its catalytic function, an enzyme requires a specific three-dimensional shape.) Still, even if RNA can catalyzes reactions, in modern cells it gets its information from DNA. So how could RNA have been assembled in a epoch before DNA existed? In a series of recent experiments, Lehman may have found an answer: Individual units, or “nucleotides,” of the RNA chain can “self-assemble” spontaneously. Lehman and colleagues started their experiments by removing from a bacterium an RNA molecule that works as a self-replicating enzyme, cut it into four chunks, each about 50 nucleotides long, and then watched the chunks reassemble themselves into a working enzyme. “We mix the fragments together in salt water at 48 degrees, have lunch, and come back, and we have self-replicating RNAs in the test tube,” Lehman says. Obviously, reassembling an enzyme you have stolen from a bacterium and then diced into pieces does not prove that a working enzyme could have formed in the prebiotic world, but there was a method to Lehman’s madness. Fifty bases is something of a “magic number,” says Lehman, noting that chemist James Ferris of Renssalaer Polytechnic Institute has been able to string together 40 to 50 individual RNA nucleotides using clay as the catalyst. It’s conceivable that this could have happened in the prebiotic world as well. Ferris said that Lehman’s self-assembly experiment answered a big unknown remaining from his study, which produced strings of RNA that were still too short to function as a catalyst. “One of the big questions is how we would get these longer RNAs that will be needed to catalyze reactions, and this sounds like an interesting possibility.” If, as these experiments suggest, the RNA world begins with three steps (prebiotic synthesis of the individual RNA nucleotides, assembly of the intermediate chains, and then final assembly into longer chains) Ferris and Lehman have demonstrated steps two and three.. 
However Ferris notes that nobody has yet demonstrated a prebiotic synthesis for the individual nucleotide bases from which he constructed the RNA strands. Still, Lehman says the new results suggest that RNA can achieve enough complexity to transition into the biological realm, especially since the RNA begins to replicate itself. At first, the RNA fragments join end to end, but the completed strands then begin to catalyze further assembly of RNA. This “autocatalysis” accelerates the reaction, but even more important, Lehman notes, “Forming more of itself is a critical essence of life.” William Scott, an associate professor of chemistry and biochemistry who works on RNA at the University of California at Santa Cruz, commented that the self-assembly of fragments brings the RNA World one step closer to acceptance. “I think the idea that complex molecules can be assembled from RNA fragments instead of just RNA nucleotides is a very reasonable one.” As the RNA World hypothesis becomes more plausible, RNA is gaining more respect. For one thing, it’s known to be ubiquitous, both as a temporary storehouse for information, and since 1980, as a catalyst. “The core of the ribosome, which makes proteins, is catalytic RNA,” says Lehman, “and all cells have ribosomes, so it’s absolutely fair to say that catalytic RNA is manifest in every single cell that we know.” Lehman’s work was funded by a grant from NASA’s Exobiology and Evolutionary Biology program.
Cancer is one of the most dreaded diseases in the developed world. Its forms are many and its symptoms are diverse, but all variants cause pain and suffering. Finding a cure for the disease is perhaps the most enduring medical dream of all, and increasingly the hope of medical researchers lies with fast-developing computing technology. Can computers cure cancer? We don't know the answer to that question yet. But what we do know is that no other field shows us more vividly what computers can do. At Ohio State University Medical Center – home of one of the fastest supercomputers in the world – scientists are weighing proteins in order to find and measure the microscopic differences between healthy and abnormal cells. At the Swedish Medical Center in Seattle, gene-sequencing techniques are providing rich information about brain cancer – considered the most challenging disease to research – and its potential treatment. And at the School of Informatics at Indiana University, researchers have used colossal computers to create a huge database of cell structures, hoping to understand exactly how they work and – most importantly – how they interact with each other. Though the study of cancer also shows what computers can't do, it's by focusing on and resolving these problems that a cure may be discovered and the power of the technology advanced further. Finding the magic bullet One of the main goals in cancer research is to find a 'magic bullet' that can enter the human body, find mutated cells, target specific proteins in order to switch off the cancer's self-replications and destroy the mutated cells. Part of the obstacle to achieving this cure is knowing enough about the cancer cells and molecules. Jake Chen, an assistant professor at Indiana University, says that this process – called 'finding drug targets' – requires a massive database of biomedical information. His team has developed one of the largest Oracle-powered relational databases, holding about one half of a terabyte. Chen and his team – who have focused their efforts on breast cancer research, one of the most common forms of cancer – are currently analysing tens of GBs of raw mass spectrometry data from 80 blood samples, with more coming soon. These samples should help to further research into our understanding of the relationships between cancer and normal cells down at the molecular level, which is a particular difficulty at the moment. To help with this, Chen's team created complex algorithms not widely used in the biomedical field. The algorithms analyse not just the characteristics of individual molecules but also how each one affects others. This is what makes cancer research so complex – the interrelationships that exist and the data analysis required. Chen says that the closest analogy to this relational study is the Internet itself. For example, the servers for AOL are widely known on the web, and it's easy to see the links between one AOL server and another. Yet there are many servers on the outer edges of the web which only link to a few others. These are the 'molecules' that are harder to understand. When one of them crashes, it can effect that part of the Internet in adverse ways – causing server outages, for example. Data visualisation software can help researchers understand these 'fringe' areas of systems biology. Correlating the data requires complex algorithms which are still evolving. 
It might mean culling data from 100 other researchers around the world who have all found a likely protein target, analysing 25,000 genes and a few hundred thousand protein fragments, archiving the data for later retrieval and finally processing the algorithms using the Big Red cluster at Indiana University. It's a highly collaborative effort. "The answer is in the data set, but the computer is not intelligent enough yet," says Chen. "We need to make the computer smarter. Today's computers are used primarily for managing information; we need to make them smart about interpreting the data." Chen had an interesting analogy for how this works. When you look at a painting of a face, you can see what it is immediately. A computer can analyse the colours and chemicals of the painting, but it's not clever enough to see the face. Similarly, Chen is trying to produce an algorithm that can see through the noise of intricate molecular interaction networks in cancer and find the critical proteins where drug interventions may occur. Part of the computational challenge is transferring what we already know about curing cancer in mice to humans. The drugs used to cure cancer in mice could be used for humans, but they might provoke a different set of side effects. The informatics question is how to find a cure that works on 100 per cent of cancer patients. "This exciting conquest will likely go on in the next one to two decades, and will rely on systems biology informatics techniques," says Chen.
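To make the "hubs and fringe nodes" analogy concrete, here is a toy sketch that scores proteins in a small interaction network by betweenness centrality, a standard graph measure of how often a node sits on paths between other nodes. This illustrates the general idea only: the gene names and edges are arbitrary examples, and the metric is not the actual algorithm developed by Chen's team.

```python
import networkx as nx

# Toy protein-protein interaction network (edges are illustrative, not curated data)
edges = [
    ("BRCA1", "TP53"), ("BRCA1", "RAD51"), ("TP53", "MDM2"),
    ("TP53", "ATM"), ("RAD51", "ATM"), ("MDM2", "AKT1"),
]
network = nx.Graph(edges)

# Proteins that many shortest paths run through are candidate "hubs"
scores = nx.betweenness_centrality(network)
for protein, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(f"{protein:6s} {score:.3f}")
```

A real analysis would load hundreds of thousands of interactions from the kind of relational database described above and combine several such measures, but the computational shape of the problem - ranking nodes by their position in a very large graph - is the same.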
Alzheimer's disease is a progressive disorder that causes brain cells to degenerate and die, gradually destroying memory and other functions of the brain. It most commonly causes dementia, which gives rise to a continuous decline in thinking, behavioral and social skills that impairs the ability to function and think independently. As the symptoms get worse, it becomes difficult for people to remember recent events and to recognize people they know. Alzheimer's disease has no cure at present, but treatments that reduce the symptoms are available. Although the available Alzheimer's treatments cannot stop the disease from progressing, they can temporarily slow the worsening of dementia symptoms and improve quality of life. CAUSES – - The exact cause is still unclear, but the disease is often attributed to genetic mutation, a permanent change in one or more specific genes; such changes can be inherited from a parent and affect the next generation. - It is a neurodegenerative disorder in which continuous brain cell death over a period of time leads to Alzheimer's disease. SIGNS AND SYMPTOMS – - Memory loss that disrupts daily life – affected people tend to forget recent communications, important dates or events, ask for the same information repeatedly, and rely upon memory aids such as reminder notes or electronic devices. - Challenges in thinking and problem solving – difficulty developing and following a plan, working with numbers, keeping track of monthly bills, concentrating and managing time. - Problems completing familiar tasks at home, at work or at leisure – difficulty with routine tasks, trouble driving to a familiar location, managing a budget at work, or forgetting basic tasks such as dressing and bathing. - Confusion with time or place – losing track of dates, seasons and the passage of time, and difficulty recognizing places. - Difficulty understanding visual images and spatial relationships – trouble reading and determining color. - Difficulty with words in speaking or writing. - Misplacing things and losing the ability to retrace steps – putting things in unusual places, accusing others of stealing, getting lost in familiar places. - Decreased or poor judgment – changes in decision making, less attention to grooming or keeping oneself clean. - Changes in mood and personality – becoming confused, suspicious, depressed, fearful or anxious. DIAGNOSIS – - Mental status testing - Neuropsychological tests – help to determine the presence and severity of dementia. - Conversations with friends and family - Brain-imaging tests - Magnetic Resonance Imaging (MRI) - Computerized Tomography (CT) - Positron Emission Tomography (PET) – uses a radioactive substance known as a tracer to detect substances in the body. The most commonly used PET scan is fluorodeoxyglucose (FDG) PET, which identifies brain regions with decreased glucose metabolism. PREVENTION – - Regular exercise and quality sleep - Healthy diet – minimal use of sugar and refined carbohydrates. - Avoid trans fats – these fats can cause inflammation and produce free radicals, which are hard on the brain. - Eat fruits and vegetables in generous quantities. - Ensure fresh, wholesome meals are consumed that are high in brain-healthy nutrients and low in sugar, salt, unhealthy fat and additives. - Mental stimulation – learn something new, practice memorization, and play strategy games, puzzles and riddles. - Practice the 5 Ws – keep a “Who, What, Where, When, and Why” list of your daily experiences; capturing visual details keeps neurons firing.
- Stress management - Breathe – deep, abdominal, restorative breathing is powerful. - Schedule daily relaxation activities – make relaxation a priority, whether yoga or a soothing bath. - Nourish inner peace – regular meditation, prayer, reflection and religious practice may help protect you against the damaging effects of stress. - Make fun a priority. - Stop smoking and control blood pressure and cholesterol levels. - Watch your weight and drink in moderation. - The global prevalence of dementia has been estimated to be as high as 24 million and is predicted to double every 20 years until at least 2040. - The number of older people (aged 65+ years) in the world was estimated at 420 million, with the proportion of older people increasing from 7% to 12%. - The global prevalence of dementia among people aged 60+ years was estimated at 3.9%, with regional prevalence of 1.6% in Africa, 4.0% in China and the Western Pacific regions, 4.6% in Latin America, 5.4% in Western Europe, and 6.4% in North America.
Main Focal Species American Crows are familiar over much of the continent: large, intelligent, all-black birds with hoarse, cawing voices. They are common sights in treetops, fields, and roadsides, and in habitats ranging from open woods and empty beaches to town centers. They usually feed on the ground and eat almost anything – typically earthworms, insects and other small animals, seeds, and fruit but also garbage, carrion, and chicks they rob from nests. Their flight style is unique, a patient, methodical flapping that is rarely broken up with glides. American Crows are highly adaptable and will live in any open place that offers a few trees to perch in and a reliable source of food. Regularly uses both natural and human created habitats, including farmland, pasture, landfills, city parks, golf courses, cemeteries, yards, vacant lots, highway turnarounds, feedlots, and the shores of rivers, streams, and marshes. Crows tend to avoid unbroken expanses of forest, but do show up at forest campgrounds and travel into forests along roads and rivers. Avoids deserts. American Crows eat a vast array of foods, including grains, seeds, nuts, fruits, berries, and many kinds of small animals such as earthworms and mice. They eat many insects, including some crop pests, and also eat aquatic animals such as fish, young turtles, crayfish, mussels, and clams. A frequent nest predator, the American Crow eats the eggs and nestlings of many species including sparrows, robins, jays, terns, loons, and eiders. Also eats carrion and garbage. American Crows are highly social birds, more often seen in groups than alone. In addition to roosting and foraging in numbers, crows often stay together in year-round family groups that consist of the breeding pair and offspring from the past two years. The whole family cooperates to raise young. Winter roosts of American Crows sometimes number in the hundreds of thousands. Often admired for their intelligence, American Crows can work together, devise solutions to problems, and recognize unusual sources of food. Some people regard this resourcefulness and sociality as an annoyance when it leads to large flocks around dumpsters, landfills, and roosting sites; others are fascinated by it. American Crows work together to harass or drive off predators, a behavior known as mobbing. Both members of a breeding pair help build the nest. Young birds from the previous year sometimes help as well. The nest is made largely of medium-sized twigs with an inner cup lined with pine needles, weeds, soft bark, or animal hair. Nest size is quite variable, typically 6-19 inches across, with an inner cup about 6-14 inches across and 4-15 inches deep. © Michael Andersen | Macaulay Library Size & Shape A large, long-legged, thick-necked bird with a heavy, straight bill. In flight, the wings are fairly broad and rounded with the wingtip feathers spread like fingers. The short tail is rounded or squared off at the end. American Crows are all black, even the legs and bill. When crows molt, the old feathers can appear brownish or scaly compared to the glossy new feathers. The American Crow is nearly identical to both Northwestern and Fish crows. To distinguish Fish Crows, check range maps and listen for the Fish Crow's more nasal calls. Northwestern Crows occur only along the Pacific Northwest coast; they are slightly smaller and best separated by habitat. Common Ravens are larger, longer winged, and heavier beaked than crows. 
Ravens' tails are tapered at the end, giving them a diamond or wedge shape compared to a crow's shorter, squarer tail. Did you know?! - To eat road kill, crows have to wait for something else to tear open the body or for the body to decompose and soften, since a crow’s beak isn’t usually strong enough to tear open the dead animal’s skin. - Young crows may stay with their parents for years until they can find a home of their own. The young crows help their parents guard their territories and raise new young.
An engine without fuel that can carry astronauts to Mars in only a few weeks would revolutionize the field of space exploration and usher in a new era in the history of humanity’s extraterrestrial travels. The authors of a recent study have claimed to achieve this lofty goal, but debate has surrounded their findings. The Cannae engine, which was developed by Guido Fetta, has been touted as the embodiment of this groundbreaking achievement. During tests of the new technology, NASA scientists created a small amount of thrust using two Cannae engines, with one of them rigged to fail. The fact that it succeeded even though it was designated for failure is seen as a red flag by critics. Another problem revolves around the origin of the study, which was produced by a research team within NASA rather than the space agency itself. The so-called “impossible” engine produces electromagnetic waves that create a difference in radiation pressure, which leads to thrust. The Cannae engine is similar to the EmDrive, which uses microwaves for its propulsion system. Roger J. Shawyer, who develops prototypes of the device at his UK-based company Satellite Propulsion Research Ltd., explained that the engine utilizes patented microwave technology in order to convert electricity into thrust and does not use any propellant in the conversion process. According to Shawyer, Chinese researchers have replicated the EmDrive thruster experiments. Skeptics of the EmDrive have pointed out that it seems to violate Newton’s law of conservation of momentum, which states that momentum can neither be created nor destroyed, and that it must be conserved during an interaction between objects. This means that the amount of momentum must be unchanged by the interaction. In spite of this apparent violation of Newtonian physics, the EmDrive does appear to work thus far, and therefore prompted questions about how it has managed this extraordinary feat.
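For readers who want the skeptics' objection in symbols, the conservation law at issue can be stated in a couple of lines. This is just the textbook statement of momentum conservation, not an analysis of the EmDrive itself.

```latex
% Total momentum of an isolated system (no external force) is constant:
\vec{p}_{\mathrm{total}} \;=\; \sum_i m_i \vec{v}_i \;=\; \text{constant},
\qquad
\frac{d\vec{p}_{\mathrm{total}}}{dt} \;=\; \vec{F}_{\mathrm{ext}} \;=\; 0 .
```

A conventional rocket satisfies this by throwing propellant backwards: the momentum the craft gains is balanced by the momentum carried away by the exhaust. A sealed cavity that produced net thrust while expelling nothing would change the total momentum of an isolated system, which is why the claimed measurements attract such close scrutiny.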
Almost all multicellular organisms need a circulatory system to transport oxygen and nutrients through the body. Evolution has led to the existence of two types of circulatory systems, namely:
- Open circulatory system: primarily found in invertebrates. Here, the blood flows freely through cavities and there are no vessels to conduct the blood.
- Closed circulatory system: found in vertebrates and a few invertebrates like earthworms. This system has vessels that conduct blood throughout the body.

The main difference between the open and closed circulatory system is the way blood flows in an organism. Blood can flow through vessels inside the body, such as arteries and veins; this type of circulation is called closed circulation. Open circulation happens when there are no vessels to contain the blood and it flows freely through the cavities of the body.

Difference Between Open and Closed Circulatory System

Vertebrates and a few invertebrates have a closed circulatory system. The open circulatory system, on the other hand, is most commonly seen in invertebrates such as cockroaches and crabs. The other major differences between the open and closed circulatory systems are summarized below:

| Open Circulatory System | Closed Circulatory System |
| --- | --- |
| The hemolymph directly bathes the organs and tissues. | The blood circulates within closed vessels. |
| The blood and interstitial fluid cannot be distinguished. | Blood and interstitial fluid are distinct. |
| Present in molluscs and arthropods. | Present in annelids and vertebrates. |
| Blood is pumped into the body cavity. | Blood is pumped through the vessels by the heart. |
| Dorsal blood vessel present. | Dorsal and ventral blood vessels present. |
| Capillary system is absent. | Capillary system is present. |
| Blood is in direct contact with the tissues. | Blood is not in direct contact with the tissues. |
| Nutrients are exchanged directly between blood and tissues. | Nutrients are exchanged via tissue fluid. |
| No transport of gases. | Gases are transported. |
| The fluid flowing in this system is called hemolymph. | The fluid flowing in this system is called blood. |
| No respiratory pigments are present. | Respiratory pigments are present. |
| The volume of blood cannot be controlled. | The volume of blood can be controlled by the contraction and relaxation of blood vessels. |
| Blood flow is slow. | Blood flow is rapid. |
| The open spaces are called sinuses and lacunae. | The closed spaces are arteries and veins. |
| Organisms with an open circulatory system: snails, clams, cockroaches and spiders. | Organisms with a closed circulatory system: humans, squids, cats, earthworms. |

To learn more about the difference between closed and open circulatory systems, keep visiting BYJU’S website or download BYJU’S app for further reference.

Frequently Asked Questions

What is open and closed circulation?
In open circulation, the blood is not enclosed in blood vessels and is pumped into a cavity called the hemocoel. In closed circulation, by contrast, the blood is pumped through vessels that are separate from the interstitial fluid of the body.

What type of circulatory system do humans have?
Humans have a closed circulatory system. The blood is enclosed in the vessels and the heart while circulating. It travels through arteries and veins and carries important molecules throughout the body.

What is the advantage of a closed circulatory system over an open system?
In a closed circulatory system, blood is transported faster than in an open circulatory system. That is why the closed circulatory system is considered more advantageous than the open system.

What are the main components of a circulatory system?
The main components of a circulatory system include:
- The heart
- Blood
- Blood vessels

Which organisms have an open circulatory system?
Insects have an open circulatory system. Unlike in humans, the blood in insects flows freely throughout the body.
A cosmochemist warned that human extinction is almost guaranteed if Earth gets hit by a planet-killer asteroid. The scientist also noted that despite NASA’s plans to protect Earth from asteroids, the chances of a major impact happening are still pretty big. In her latest book “Catching Stardust,” cosmochemist Dr. Natalie Starkey briefly explained the true meaning of International Asteroid Day. This is an annual event that commemorates the large explosion caused by a meteor that detonated over a region in Siberia on June 30, 1908. For Starkey, International Asteroid Day should remain as a constant reminder of what space rocks can do to the planet and its inhabitants. She also noted that the annual event serves as a warning about how a major asteroid impact can easily wipe out humans on the planet. “The United Nations designated June 30 as International Asteroid Day, which to many people may seem like a strange thing to do,” she said according to Express. “It certainly isn’t because asteroids are about to become extinct, like some endangered wildlife.” “Instead, it’s because there’s a threat that we, as humans, could become extinct if an asteroid was to collide with Earth,” she added. Similar to what happened to the dinosaurs 66 million years ago, humans are in danger of getting wiped out if an asteroid several miles long hits Earth. Aside from the magnitude of the initial explosion, the wide-scale extinction will also be caused by extreme environmental events that will be triggered by a major asteroid impact. Of course, NASA and other space agencies around the globe are doing their best to prevent another extinction-level event from happening. Through satellites and other sophisticated monitoring systems, these agencies are keeping track of asteroids that might collide with Earth in the future. So far, the agencies noted that they haven’t detected a major asteroid that has a 100% chance of hitting Earth within the next century. Despite the agencies’ assurance, Starkey believes that it is still highly possible for an asteroid to remain undetected in the vastness of space. For the cosmochemist, this kind of asteroid is what the people of Earth should be worried about. “There is always the possibility a random object that scientists can’t yet see is lurking out there in the outer Solar System, in an orbit that intersects that of Earth within the next few decades,” she said. “The problem is that it is currently impossible for astronomers to track every object in the Solar System, particularly the small and fast-moving ones that are on random orbits,” Starkey added.
At a conference on innovative teaching and learning, I attended a memorable panel conversation about the skills that students should develop by the time they start college or enter a career. The panel was made up of men and women who headed large and small businesses, and the skills they wanted incoming employees to have were: - Communication for internal and external clients - Problem solving - Strong work ethic A 2015 survey by the National Association of Colleges and Employers confirms the importance of these skills. Yet it seems that young people are often not masters of these skills when they graduate high school. How can we ensure that high school graduates are highly proficient in these key career skills? Collaboration, communication, and critical thinking, sometimes called the three Cs, can be fostered in K–12 environments: Students already work in teams, give presentations, write papers, and solve complex challenges. Innovations in teaching and learning—including STEM/STEAM, project-based learning, inquiry-based learning, and design thinking—also enhance these skills. And traditional education includes opportunities for Socratic seminars, labs, literary analysis, and studying complex mathematical formulas and scientific hypotheses. Teachers provide these experiences in school, but colleges and business people continue to say that young people lack a grounding in the three Cs. The reason for this puzzling disconnect is best expressed by students. I’ve interviewed many students across the U.S. in all kinds of schools, asking them to define and describe communication, collaboration, and problem solving. In classrooms where good collaboration took place, students would say that people were positive and did their work. But they could not describe the behaviors they demonstrated that could be replicated each time they worked in teams. The same vague responses were shared regarding good communication and problem solving experiences. There were some students who could give clear definitions and provide concrete examples of their practice. Their classrooms and schools shared common practices, described below, so that students understood the three Cs as well as they did the rest of the curriculum. Students and educators need to share a common language that describes the three Cs in concrete behavior. These descriptors become the guide by which students monitor their actions and those of peers and the adults—everyone is held to these behaviors. For example, communication can be described concretely as: - Listens to others, fully present to others’ meaning. - Seeks to understand before being understood. - Encourages through verbal and nonverbal cues. - Expresses ideas and questions in clear and concise language. - Uses pitch and tone to express thoughts in an appropriate manner. - Is mindful of communication skills when having difficult conversations. Sometimes teachers or staff develop the first draft of the behaviors. Then students propose revisions in language that makes sense to them. The resulting description charts are best posted on all walls so that they can be seen and used by everyone. Introduce one chart per skill, and gradually add the other Cs during the year. Being intentional is key to learner growth (see my new book, So All Can Learn: A Practical Guide to Differentiation). Use your charts to dialog with students about their work. For example, chemistry teacher George Hutcheson would move between student teams to monitor their progress on a project-based learning experience. 
He redirected students toward the assignment when needed by having them reflect on their collaboration. He also gathered project leaders from each team to discuss their responsibility to keep everyone on task and contributing—both of which are collaboration skills. Students need opportunities to reflect on their use of the three Cs. For example, Jennifer Dyer, a French teacher, would start a lesson by having students work with partners to reflect on and discuss the behaviors from their chart that they felt were important to the work. At the end of a lesson, the students evaluated their success with the skills in completing the work. Use reflection before and after activities that require students to practice the three Cs, such as after protocols. Five minutes in total for reflection is time well spent. The Fourth C: Citizenship Leadership comprises the three Cs and is exhibited through being a participating citizen. Becoming an active citizen requires thoughtful practice of the Cs in connection with involvement in communities both local and farther afield. A great example of teachers and students growing a culture of deeper learning through 21st-century skills is Isle of Wight County Schools in Virginia. Check their twitter hashtag, #IC5Cs, for a wealth of shared practices and experiences. Being intentional with developing 21st-century skills is the only way that students consciously grow the skills. Imagine a class of students who develop a deep understanding of these four Cs from kindergarten to third grade. Now picture them as high school seniors after more years of practice. These amazing citizens could transform expectations in college and the workplace.
Anther: Upper portion of the stamen that produces the pollen.
Awn: A long, bristle-like appendage as on the floret of a grass.
Calyx: Outermost ring of flower parts, composed of sepals. Usually green. Encloses the developing bud and usually persists below the open flower.
Composite flower head: The specialized flower of the Sunflower family (Asteraceae). Tiny flowers, crowded together on a common base (the receptacle), resemble a single bloom. Flowers are of one or two types: symmetrical disk flowers and strap-shaped ray (or ligulate) flowers.
Compound leaf: A leaf composed of distinct, leaf-like leaflets arranged along the leaf stem. Arrangement either bilaterally symmetrical (pinnate) or radially symmetrical (palmate). A leaflet may be distinguished from a true leaf because a leaf has a small bud where the stem joins the main plant; a leaflet does not.
Corolla: Collective term for the petals on a flower.
Cotyledon: “Seed leaf”; leaf produced in the embryo and usually the first to emerge on a young plant. Traditionally, flowering plants were divided into Monocotyledons (monocots) and Dicotyledons (dicots) depending on whether the embryo had one or two cotyledons. Although this system of classification has been modified, the distinction is still useful.
Dicot: Or Dicotyledon; in a previous taxonomic system this was one of two major divisions of flowering plants, characterized in part by having two seed leaves (cotyledons) and flower parts in multiples of 4’s or 5’s. The dicots were recently replaced by Eudicots and Basal Dicots; the former corresponds closely with the original Dicot division.
Disk floret (Disk flower): A tiny, symmetrical flower of a composite flower head (sunflower family, Asteraceae). In a daisy, disk florets form the inner eye.
Filament: Thread-like part of the male reproductive structure that supports the anther.
Flower head: In the sunflower family, one group of flowers clustered on a common base; often assumed to be one blossom.
Glume: One of two scale-like bracts at the base of a grass spikelet; unlike most bracts, a glume does not subtend a flower.
Involucre: A group of bracts (phyllaries) that form a unit below a flower, flower cluster, flower head (composite flower) or fruit.
Lemma: The lower of two bracts at the base of a grass floret; the upper is the palea.
Nectary: Nectar-producing glands, often in the base of a flower.
Palea: The upper of two bracts at the base of a grass floret; the lower is the lemma.
Pappus: A hairy or bristly modified calyx on the seeds of some composite flowers; often aids in seed dispersal.
Petal: One of several modified leaves that surround the reproductive structures of a flower; often brightly colored.
Phyllary: One of several bracts around the base of a composite flower head.
Pistil: The female reproductive structure of a plant, usually consisting of ovary, style and stigma.
Ray floret (Ligulate floret): A tiny, strap-shaped flower of a composite flower head (sunflower family, Asteraceae). In a daisy, ray florets form the outer halo. Some botanists recognize two types of strap-shaped florets: a ray floret has three petals fused into the strap and two rudimentary petals, is found only on the periphery of the flower head and usually lacks stamens; other strap-shaped florets are called ligulate florets.
Receptacle: The part of the plant to which a flower is attached.
Sepal: Individual element of the calyx, which encloses the flower bud; usually leaf-like.
Stamen: The male reproductive structure of a flowering plant. Consists of a pollen-producing anther supported by the thread-like filament.
Stigma: Top portion of the pistil which captures the pollen.
Style: Narrow portion of the pistil that connects the stigma and the ovary.
Umbel: A flower arrangement in which the pedicels of the flowers originate from a single point, much like the ribs of an umbrella.
Key Components of the QSI Educational Model – DO, KNOW, BELIEVE
Outcomes – There are four hierarchical levels of Outcomes: Exit Outcomes, Program Outcomes, Course Outcomes, and Unit Outcomes. (These are a bit like babushka dolls, all nested within each other.)
I. Exit Outcomes - The starting point is to imagine our definition of a model graduate. What would this person need to be able to do, know, and be like as a person? This leads to dividing the Exit Outcomes into three parts: 1) Competencies (Do), 2) Knowledge (Know), and 3) Success Orientations (Be or Believe).
a. Competencies – Verbal & Written Communication Skills; Numeracy & Mathematical Skills; Psychomotor Skills; Commercial Skills; Fine Arts Skills; Thinking & Problem Solving Skills; Decision Making & Judgement Skills.
b. Knowledge Categories – English/Literature, Mathematics, Cultural Studies, Science, Languages other than English, Creative & Applied Arts, and Personal Health & World Environmental Issues.
c. Success Orientations – Trustworthiness, Responsibility, Concern for Others, Kindness/Politeness, Group Interaction, Aesthetic Appreciation, and Independent Endeavor. QSI particularly stresses the ‘Success Orientations’. SOs are an integral part of every aspect of the school and are inherent in the ‘Program Outcomes’.
II. Program Outcomes - These are derived from the Exit Outcomes. They outline the school's curriculum in each of the seven ‘Competencies’ and ‘Knowledge’ categories. Each course, such as Algebra, British Literature, or 5 Year Old Music, is identified in one of the seven ‘Program Outcomes’.
III. Course Outcomes - These are derived from the ‘Program Outcomes’. They give a more detailed description of each course and include information on learning objectives, materials, and resources available for the course. There are essential units, which must be taught and assessed, as well as selective units from which the teacher and students may choose. The average course is designed to lead to the mastery of 10 units.
IV. Unit Outcomes - A unit consists of a general statement and a number of ‘Unit Outcomes’, or TSWs (which stands for The Student Will…), which are clearly defined and measurable learning objectives. The number of ‘Unit Outcomes’ (TSWs) may vary. The average unit requires 12 to 18 class periods to attain mastery in ALL ‘Unit Outcomes’ (TSWs). Teachers and students use rubrics to identify what knowledge and skills must be demonstrated in order to receive an A or a B for each ‘Unit Outcome’ (TSW). Mastery may be determined using formative and/or summative assessments such as oral evaluations, paper/pencil tests, assignments, projects, performances, or other appropriate means of determining student success.
The following tenets are crucial to the QSI Educational Model:
Alignment - The teacher teaches; the materials support; and assessments reflect the objectives of the ‘Unit Outcomes’ (TSWs). In other words, teachers teach what they test, and test what they teach. To do otherwise is unethical. We want Mastery Learning, not Mystery Learning.
Expanded Opportunities - Students differ in the time needed to attain mastery on a ‘Unit Outcome’ (TSW). A variety of ways are employed to allow each student the appropriate learning time. Those who need less time to demonstrate mastery engage in selective outcomes and may receive additional credit.
Credentialing – Our reporting system aligns with the philosophy of mastery learning and reflects the overall structure of the four outcome levels. Mastery of each unit is evaluated at the time of completion with an 'A', 'B' (mastery grades), or ‘P’, which stands for ‘in progress’ and means that the student has not yet demonstrated mastery. Mediocre or poor work is not accepted. If a student has mastered a unit with a 'B', s/he is given the opportunity to earn an ‘A’ through work that demonstrates higher order thinking skills. This can happen immediately or later in the school year. This approach encourages continued learning. Data is gathered and reported on a regular basis, allowing 'Status Reports' to be produced at any time. A time period (quarter, term, semester) is not evaluated.
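As an illustration only (QSI does not publish an algorithmic form of its model), the credentialing rules described above can be sketched in a few lines of code. The names UnitRecord, record_mastery, upgrade_to_a and status_report are invented for this sketch and are not part of any QSI system.

```python
from dataclasses import dataclass

# Hypothetical sketch of the A/B/P credentialing rules described above.
# Class and function names are invented for illustration; they are not QSI's.

MASTERY_GRADES = {"A", "B"}

@dataclass
class UnitRecord:
    unit_name: str
    grade: str = "P"  # 'P' = in progress: mastery not yet demonstrated

    def record_mastery(self, grade: str) -> None:
        """Record an 'A' or 'B' once mastery is demonstrated; anything less stays 'P'."""
        if grade in MASTERY_GRADES:
            self.grade = grade
        # Mediocre or poor work is not accepted, so no 'C', 'D' or 'F' is ever stored.

    def upgrade_to_a(self, shows_higher_order_thinking: bool) -> None:
        """A unit mastered with a 'B' may later be raised to an 'A'."""
        if self.grade == "B" and shows_higher_order_thinking:
            self.grade = "A"

def status_report(units: list[UnitRecord]) -> dict[str, str]:
    """A 'Status Report' can be produced at any time from the current unit grades."""
    return {u.unit_name: u.grade for u in units}
```

The point of the sketch is simply that a unit's grade can only move from 'P' to 'B' or 'A', never downward and never by the calendar, mirroring the mastery-learning idea that time periods are not what is evaluated.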
Interactive Java Tutorials Polarization of Light (3-D Version) Sunlight and almost every other form of natural and artificial illumination produces light waves whose electric field vectors vibrate in all planes that are perpendicular with respect to the direction of propagation. If the electric field vectors are restricted to a single plane by filtration of the beam with specialized materials, then the light is referred to as plane or linearly polarized with respect to the direction of propagation, and all waves vibrating in a single plane are termed plane parallel or plane-polarized. This tutorial explores the effects of two polarizers having adjustable transmission axes on an incident beam of white light. To rotate the tutorial, click and drag anywhere within the applet window. The tutorial initializes with a simulated beam of "white" light, traveling from left to right in the window, incident on two linear polarizers, each of which have their transmission azimuths oriented vertically (represented by Venetian-blind type slits). In order to operate the tutorial, use the Polarizer Angle sliders to adjust the angle of the polarizers with respect to the incident white illumination. The red, green, and blue waves propagating from the left are intended to simulate the light vibrating in all planes perpendicular to the direction of propagation. Polarizer 1 allows only light waves to pass that are vibrating parallel to the polarization direction (the red color is for ease of illustration only and has nothing to do with the wavelength distribution). Polarizer 2 is initially positioned parallel to polarizer 1, and also passes light passed by the first polarizer. When the slider bars are translated, the polarizers are rotated, affecting the passage of light through the virtual polarizing system. The human eye lacks the ability to distinguish between randomly oriented and polarized light, and plane-polarized light can only be detected through an intensity or color effect, for example, by reduced glare when wearing polarized sun glasses. In effect, humans cannot differentiate between the high contrast real images observed in a polarized light microscope and identical images of the same specimens captured digitally (or on film), and then projected onto a screen with light that is not polarized. The basic concept of polarized light is illustrated in Figure 1 for a non-polarized beam of light incident on two linear polarizers. Electric field vectors are depicted in the incident light beam as sinusoidal waves vibrating in all directions (360 degrees; although only six waves, spaced at 60-degree intervals, are included in the figure). In reality, the incident light electric field vectors are vibrating perpendicular to the direction of propagation with an equal distribution in all planes before encountering the first polarizer. The polarizers illustrated in Figure 1 are actually filters containing long-chain polymer molecules that are oriented in a single direction. Only the incident light that is vibrating in the same plane as the oriented polymer molecules is absorbed, while light vibrating at right angles to the polymer plane is passed through the first polarizing filter. The polarizing direction of the first polarizer is oriented vertically to the incident beam so it will pass only the waves having vertical electric field vectors. 
The wave passing through the first polarizer is subsequently blocked by the second polarizer, because this polarizer is oriented horizontally with respect to the electric field vector in the light wave. The concept of using two polarizers oriented at right angles with respect to each other is commonly termed crossed polarization and is fundamental to the concept of polarized light microscopy. The polarized light microscope is designed to observe and photograph specimens that are visible primarily due to their optically anisotropic character. In order to accomplish this task, the microscope must be equipped with both a polarizer, positioned in the light path somewhere before the specimen, and an analyzer (a second polarizer), placed in the optical pathway between the objective rear aperture and the observation tubes or camera port. On most microscopes, the polarizer is located either on the light port or in a filter holder directly beneath the condenser. The polarizer can be rotated through a 360-degree angle and locked into a single position by means of a small knurled locking screw, but is generally oriented in an East-West direction by convention. Other microscopes typically have the polarizer attached to the substage condenser assembly housing through a mount that may or may not allow rotation of the polarizer. Some polarizers are held into place with a detent that allows rotation in fixed increments of 45 degrees. Polarizers should be removable from the light path, with a pivot or similar device, to allow maximum brightfield intensity when the microscope is used in this mode. Light diffracted, refracted, and transmitted by the specimen converges at the back focal plane of the objective and is then directed through an intermediate tube, which houses another polarizer, often termed the "analyzer". The analyzer is another HN-type neutral linear Polaroid polarizing filter positioned with the direction of light vibration oriented at a 90-degree angle with respect to the polarizer beneath the condenser. By convention, the vibration direction of the polarizer is set to the East-West (abbreviated E-W) position. The same convention dictates that the analyzer is oriented with the vibration direction in the North-South (abbreviated N-S) orientation, at a 90-degree angle to the vibration direction of the polarizer. Mortimer Abramowitz - Olympus America, Inc., Two Corporate Center Drive., Melville, New York, 11747. Matthew J. Parry-Hill and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
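The tutorial text above does not give the quantitative relationship, but the standard result for two ideal linear polarizers (Malus's law) makes the behaviour of the angle sliders easy to predict:

$$I = I_0 \cos^2\theta$$

where $I_0$ is the intensity leaving the first polarizer, $\theta$ is the angle between the two transmission azimuths, and $I$ is the intensity passed by the analyzer. With the axes parallel ($\theta = 0^\circ$) the analyzer passes essentially all of the polarized light; with the axes crossed ($\theta = 90^\circ$), $\cos^2\theta = 0$ and, for ideal filters, no light emerges, which is exactly the crossed-polarization condition exploited in polarized light microscopy.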
Chimpanzees may reinforce social bonds by involuntarily mimicking a fellow chimp’s pupil size. August 25, 2014. Like humans, chimpanzees possess a capacity to unconsciously dilate their pupils to match those of a conspecific, according to research published last week (August 20) in PLOS ONE. The results suggest this involuntary action likely evolved to help humans and chimps communicate sympathy and strengthen social bonds within groups. In face-to-face interactions, people often involuntarily imitate each other’s facial expressions, eye blinks, or pupil size to convey empathy. These physical cues help communicate emotions to both individuals in the interaction, facilitating trust and cooperation within groups. But precisely when and how these signals evolved isn’t clear. So Mariska Kret of the University of Amsterdam in the Netherlands and her colleagues searched for signs of pupil mimicry in chimps. Eight chimp and eighteen human participants were shown images of both human and chimp eyes with either dilated or constricted pupils. The researchers found that both humans and chimps mimicked pupil dilation more strongly in response to their own species, although the reaction was slightly stronger in humans. The effect was also the strongest in mothers. Previous work has suggested that these eye signals between individuals evolved with the appearance of the whites of eyes, which offer more contrast and make such cues easier to spot. “Traditionally, it's been thought that the evolution of white sclera was driven by its enhanced ability to indicate gaze direction, and hence share attention,” Neil Harrison of the University of Sussex in the UK told New Scientist. Harrison said the discovery of pupil mimicry in chimps is significant. “The current findings show that pupil mimicry is not uniquely human but is possible in a species that has no clear visible eye-white and has less communicative eyes than humans,” the authors wrote in their paper.
Conservation and Biodiversity What does information about threats to biodiversity need between countries? what do they need to decide? name two examples of successful international cooperation? what does Rio aim to develop? and? what did they make part of the law? what does it provide guidance to? what does CITES stand for? what is it? whats it designed to increase? what did member coutnries all agrre to make illegal? what does the agrrement help to conserve? how? and making what illegal? such as? whats it also designed to raise? of what? through what? why is interantional cooperation really important? What does EIA stand for? what is it? such as? what does it estimate? and evaluating? whats identifyed? what is also identified? whats decided? what are they? for example? what are local authroties often under? from who? what do they argue? how do they feel? what do enviromental impact assessments ensure? who are they used by? to decide what? shared, conservation methods and implement them together, Rio convention on Biodiversity and CITES Agreement, to develop intrnational startegies ton the conservation of biodiversity and how to use animal and plant resources in a sustainable way, conserving biodiversity is everyones respinsibility, guidance to goverments on how to conserve biodiversity, convention on international Trade un Endangered Species, agrrement designed to increase inernational cooepration in regualting trade in wild animald and placen specimens, illegal to kill endangered species, conserve species by limting trade through licensing by making it illegal to trade product froms endangered animals like rhino ivory, raise awareness of threats to biodiversity through education, its pointless if its just one country Enviromental impact assesssment, an assessment of the impact a developing project have on the eveiroment, buildings or shopping centre, bidoiversity on the project site, how the the development might affect biodiversity, ways the biodiversity could be conserved, threatened or endangered species on teh project site and the laws relating to their conservation, planning stipulations, measures that will have to be impleme ted if the project proceeds, relocatingor protectecing endangered species, under presure from conservationists, developments damage the enviroment and dsiturb wildlife . habitats should be left alone, decision makers consdier thee enviromental impact of development projects to decide if and how projects proceed What is classificaiton the act of? based on what? who does it make it easier for? to do what? What is taxonomy? how many classificaiton systems are in use? what do they all invovle? in a what? How many levels of groups used in classificaiton? what are the groups called? what are similiar organisms are first sorted into? called what? give example? similiar organism are then sorted into what? called what? give examples? similiar organisms from that kingdom are then grouped into what? similiar organisms from each phylum are then grouped into what? name all eight and order them? as you move down the heircarchy what are ther more of? at what? but fewrer what? in what? what does the hierarchy end with? these groups only contain what? give examples? How many kingdoms do you need to know? what about them? name the five kingdoms? give example of prokaryotae? state features? examples of protoctista? features? exampls of fungi and featurs? examples of plantae and features? examples of animalia and features? 
arranging organisms into groups based on similarites and differences, scientists, identify and study them, the study of classification, few, into groups in a taxonomic heirarchy, eight levels of groups, taxonomic groups, sorted into three very large groups called domains, animals plants and fungi are in the eukarya domain, sorted into slightly smaller groups called kingdoms, all animals are in the animal kingdom,phylum, class, domain kingdom phylum class order family genus species, more groups at each lvel but fewer organisms in each group, species, groups that only contain one type of organims, human dog ecoli five and general charcateristcs or organism in them, prokaryotae (monera) protocista fungi plantae animalia, bacteria-ptrokaryotic unicelluar(single-celled) no nucleus less then 5 um, algae prtozoa- eukaryotic cells usually live in water single clled or simple mutlcielluar organsms, moulds yeats mushrooms- eukaryotic chitin cell wall saproptophic (absorb substances from dead or decaying organisms), mossess ferns flowering plants- eukaryotic multicelluar cell walls made of cellulose can photosynthesise contain chloryphyll authrophoc (produce their own food), nematodes(roundworms) molluscs insects fish reptiles birds mammals- eukaryotic multicelluar no cell walls heterotrophic (consume plants and animals) Whats the nomenclature? whats the one used for classificaiton called? what are all organisms given? in what? that has what? whats the first part of the name? what does it have? whats the second part of the name? what does it begin usuing the binomial system what are humans? what are names always written in? or if handwirtten? what does the binomial system help avoid? give examples? what is phylogeny the study of? of what? what have all organis dne? from what? meaning? Example- whos in the hominade family? what they have evolved from? what diverged firsT? meaning? what did next? then what? followed by? what does phylogeny tell us?and what?is is it recently that closely related species have diverged? what tree can show the hominade ancestor tree? why are humans closely related to each other? how can you this on the tree diargam? what are more distantly related then? why? what are there branches like? what does classifciaton systems now take into account? when? naming systembionomial system, inernationally accepted scientific name, lating, two parts, is the genus name has a capital letter, is the species name and begins with a lower case letter, Homo sapiens, italics, underlined, confusion of common names, 100 different plant species are caled raspberries and one species of buttercup has over 90 common names, is the study of the evolutionary history of groups of organisms, evolved from shared common ancesrtoes, relatives, great apes and humans, evolved froma common ancestor, orangutans diverged, evolved to become a differnet species from this common ancestor, gorillas followed by humans by bonobos and chimpanzees, tells us whos related to who and how cloesly related, most recently, phylogenetic tree, closely related as they diverged very recently, branches are close, distantly related as they divereged longer ago so their branches are furher apart, takes into account phylogeny Evolution of Classificaiton Systems What did early classification systems only used what to place ogranisms into groups? which are? gice example? does this method have a problem? what cant scientists always agree on? of what? and what may groups based solely on physcial features may not show? give example? 
what are classification systems now based on? along with? what dose the more similiar an organism mean? what do we now use to see how similiar(realated) organism are? name the four? whats molecular evidence? what will more closely related organisms have? what can you compare? give example? what is embryological evidence?whats the anatomical evidence? whats the behavioural evidence? Give example of new technologies? what can they result in? where can scientists share discoveris? what is continually revised? to take what into account? Example- what were skuns classified? until? so where were they classified? Only used observational features, things you can see, wether they lay eggs can fly etc, has aproblem, the rleative importance of differnt fetures, may not show how related organisms are, sharks and whales look quiet similiar and they both live in the sea but theyre not closely related, oberable features along with other evidence, The more related they are, a wide range of evidence to se ehow related organisms are, molecular evidence embryological evidence anatomical evidence and behavioral evidence, similiarties of proteins and DNA, more ismiliar moelcules, how DNA is stored the dequence of DNA bases sequence of Amino Acids in proteins, base sequence for human and chimpanze DNA is 94% the same, similiarties and differences in the early stages of an organism devlopment, the similiarites in structure and dunction of differnet body parts, similairites in behaviour and social organisation of organisms, DNA techniques better microscopes, can result in new discovers being made, meetings or scieiftic journals, how organisms are classififed, take into account of any new findings that scientists discover, family mustelidae, moelcular evidence revealed their DNA sequence was significantly differnet of hother members in the family, relcassified into the family mephitidae Evolution of Classificaiton Systems are the three domain classificiation systsme mentioned on the other card new or old? why was it suggested? in the older system what were the largest group? where all organisms palced? what was propeosed in 1990? what does this sytem have? what are they? what are they above? in what? in the three domain system where are ognaims with nucelus placed? what does this include? what do organisms in the prokaryotae kingdom have? what are they seperated into? what happens to the lower hierarchy? name them? why was the three domain system proposde? mainly what evidence? give example? why? what two pieces of evidence? what did the molecular evidence show? what is RNA polymerae needed for? what has similiar histones to what? what are histones? what doesnt? what was the cell membrane evidence?the devleopment and composition of what was also different? what do most scientists now agree? and what are more clsoely realted to what? rather than? what does the three domain system reflect? What do dichotomous keys provide? based on? give examples? what do they consist of? eahc with what? what does eaach answer lead to? until what? what could you be asked to do in the exam? 
relativly new, new evidence, the five kingdoms, placed into one of these groups, the three domain system was proposed, three domains, large superkingdoms, kingdoms in the taxhamonic hierarchy, domain Ekarya, four of the five, contains unicellaur organisms without a ncuelus, two domians, archaea and bacteria, stays the same, kingdom phylum class order family genus species new evidience, mainly molecular, prokaryotae were reclassified into two domains because new evidecne showed large differences between the archae and bacteria,molecular evidence and cell membrane evidnece, enzyme RNA polymerase is different in Bacteria and Archaea, needed to make RNA, Archawa have similiar histones to Eukarya, proteins that bind to DNA, but not bacteria, the bonds of the lipids in the cell membranes of Bacteria and Archaea are differnet, flagellaea, Archaea and Bacteria evolved seperatly and that Archaea are more clsoely related to Eukarya than Bacteria, how differne the archaea and Bacteria are Provide a way to identify organisms, observable features, colour types of leaves, series of questions, two possible answers, either the name or another question, the organism is identified, to use a dichtomous key to identify some organisms What is variation? what is every individual organism? what are clones? what do they even show? where can it occur? what is variation within a species called? give example? what can they varry in? whats the variation between bewteen species? what is the lightest species of bird? what does it weigh? what is the heaviest bird? what can it weigh? What is continous variation? what are no distinct? give example? not just ___ or ___? examples in animals? examples in plamts? examples in micrpoorganisms? What is discontinous variation? what does each individual fall into? what are there none of? examples in animals? exam,ples in plants? examples in microorganisms? The differences that exist between indivudals? unique? identical twins, show some variation, within species and between species, intraspecific variation, individual european robins weigh between 16g to 22 g also variation in length wingspan colour and beak size, interspecfic variation, bee hummingbird, 1.6g and the heaviest is the ostrich, 160 kg, Is when indivuals in a population vary withing a range, there are no distinct cateogries, humans can be any jeight withing a rang of 139-185, tall or short, milk yield- cows can produce any volume of milk within a rang and Mass- humans can be any mass within a range, Number of leaves- trees can have any number of leaves withing a range and Mass- the mass of seeds froma flower head can be within a range, width- the width of e coli varies and length- of flaggelum can vary Is when there are two or more distinct categories, falls into any of these categories, no intermdeiates, sex- humans are male or female and blood group- humans can be A B AB or O, Colour- courgettes are yellow dark green or light green and Seed shape- some pea plants have smooth seeds and some wrinkled,Antibitoic resistant- they are resistant or not and pigm ent production- some types of bacteria can produce a coloured pigment some cant What two things can variation be cayused by? can it be caused by both? what do different species have? what do individuals of the same spcies have? but differnent what? called what? what do the genes and alleles an organism has make up? what does the difference in genotype result in? in what? meaning? give an example of variation only caused by genetic factors? and what? 
what do you inherit genes? what does this mean? What can variation also be caused by? give examples? what can characterists controlled by eneviromental factors change over? give example of variation cuased only by enviromental facotrs? What do genetic factors determine? what can enviromentals factors influence? give two examples? what do genes determine height wise? give example? however what enviromental facotors affect how tall it actually grows? what determines if a microorganism can grow a falggelum? why will only some start to grow? give example? Genetic factors or enviromental facotrs or both, different genes, same genes but differnt versions, alleles, genotype, variation in phenotype, the characteritics dispalyed by an organism, blood group in humans and antibitoic resitance in bacteria, from your parents, is inherited, differneces in the enviroment, climate food lifestyle, can change over an organisms life, accnets or pierced ears born with, can influence how some characteristics develop, genes determine how tall an organism can grow, tall parents have tall children, diet or nutreitn avaibility affect how tall an organism acutally gorws, genes some will only start to grow them in certain envriometns, metal ions are present, What three things does being adapted to an enviroment mean? what are these features called? what three things can they be? what do adaptations develop because of? by what? what individuals in each generation are more likely to survive and reproduce? what do they pass to their offspring? what are individuals less well adapted more likely to do? What are behavioural adaptations? give two examples? why do possums play dead? to escape? what does this increase? what does this make sure they attarct? increasin what? What are physiological adaptations? what does it increase? give two examples? how do they hibernate? what does this mean? over what season?what does this conserve? so what dont they need to do? whys that good? what does it increase? what do these kill? where? whats there less? increasing? What are anatmoical (structural) adaptations? what does it increase? give two examples? what does its streamlined shape make easier? so whats easier? increasin what? what does there blubber keep them? what does this increase? where? an irganism has features that increse chances of survival reporudction and chance of offspring reproducing successfully, adaptations, behavioural physiological and anatomical, evololution by natural selection, best adapted individualas, offspring, die before reproducing, ways an organisms acts, chacne of survival, possums sometimes play dead and scorpions dnace before mating, if theyre being threatened by a predator, escape attack, chance of survival, this makes sure they attract a mate of the same species, liklihood of successful mating processes inside an organisms body, survival, brown bears hibernate and some bacteria produce antibiotics, lower metabolism, all chmeical reactions taking place in their body, winter, conserves enegry, food when its scarce in winter, chance of survival, these kill other species of bacteria in the area, less compeition, likely to survive, Structural features of an orgabnism body, chance of survival, otters have a streamlined shape and whales have a thick layer of blubber (fat), glide through water, catch prey and escape predators, chance of survival, keeps them warm, chacne of survival, where food is found in the cold sea The Theory of Evolution What do scientists use theories to attempt? who did this? 
How many key observations did darwin make? of what? What observation did he make of organisms offspring? about variation? charactericts and inheritance? indiduvlas that are best adapted? is natural selection the only process by which evoloution occurs? What was darwins theory on? what did it exaplin? What do individuals within a population show? in their? which is? what three things create a struggle for survival? what are better adapations? give example? what do they give organisms a better chacne of? over time what increases? whatd do they have? over generations what does this lead to? why? Why was there opposition to darwins theory? over time whats happened? why? what hasnt been found? what does eveidence increase? the more eveidence the more what? to attempt to exaplin their observations, charles darwin, four, about the world around him, organisms produce more organisms than survive, theres variation in the charcteristics of members of the same species, some of these charcteristics cna be assed on from one genrtation to the next and indicuals best adapted to their enviroment are more likely to survive, is one process, of evoloution by natural selection to explain his observations Show variation in their phenotypes, their charcateristics, predation disease and compeition, characteristics that give a sleective advantage, being able to run away from predators faster, are mor elikely to survive reproduce and pass one their advantegous adaptions to their offsrping, the number of indivudalas with the advantageous adapations increase, evoloution, as the favourable adaptiosn become more common in the population Conflcited with religous beliefs, increasingly acceptd, more evdience has been found to support it, and none to discredit it, increases scientists confidence ina theory, the more chance of soemthing becoming an accepted scientific explanation The Theory of Evolution What is specification? what is a species defined as? that can do what? what can species exist as? for example? when does specification happen?whats the example of how evolouton can lead to speicifciaton? What did Darwin observe? where? what are the galapogos islands? what was each species of finch unique to? were they similiar? what differed?what were they adapted to? Darwin Theorised- what did all the spcies of finch have? what did different populations become? where? what did each population evolve? to what? what did the populations evolve to become? what couldnt they do? what had they evolved into? The formation of a new species,as a group of similiar ogrnaisms that can reproduce to produce fertile offspring, as one or more populations, populations of the american black bear in parts of USA and Canada, when populations of the same speices evolve to become so differnt that they cant breed with one another to produce fertule offspring, 14 species of finch on the galapgos islands, a grorup of islands in the pacific ocean, each specie of finch unique to a singel islands, they were similair, size shape and beak dffiered, to the food sources found on their island, had a common ancestor, isolated on differnt islands, evoved adaptions to their enviroment, to become so differnt that they could no longer breed to produce fertile offspring, they had evolved into seperate species The Theory of Evolution What are the three pieces of evidence that support evolution? what are fossils? preserved where? what are fossils arranged in? what can be observed that provide of evolution? what does the fossil record of the horse show? 
What does the theory of evolution suggest? what do closely realted species do? meaning? did it happen long ago or recently? what is evolution caused by? where? what shouild organims that have diverged away from each other more recently have? why? who have found this? Example- name the three species that evolved from a common ancestor? which diverged longest ago? which diverged recently? which have similiar DNA base sequence percentage wise? give percentages? Other than DNA where can similiarites be? what can scientists comapre? where? and compare whaT? what do organisms that diverged away from each other more recently have? why? fossil record evidence DNa evidence and molecular evidence, are remains of organisms preserved in rocks, chronological order, gradual changes, shows a gradyak change in its characteristicsincluding incresiing size and hoof development that all organisms have evolved from shared common ancestors, diverged, evolved to become different species, more recently, gradual changes in the base sequence of organisms DNA, should have more similiar DNA, as less time has passed for changes in the DNA sequence to occur, scientists, humans chimps mice, humans and mice, humans and chimps, humans and chimps 94%, human and mice DNA is 85% the same Other molecules, the sequence of amino acids in proteins, anitbodies, have ore similiar molecules, as less time has passed for changes in proteins and other molecules to occur The Theory of Evolution what are antibiotics? what do they kill? or inhibit? what have scientists observed? in what species? give example? whta is it a strain of? whats it resistant to?called? what can the evolution of antibiotic resistance be exmpalined by? what is their in a population of bacteria? what makes some bacteria naturally resistrant to an antibitoic? if its exposede to an antibitoitc which ones will survive? what will be passed on to the next generation? what has happened? when are infections harder to treat? such as? what are some speices of bacteria resistant to? what do docots have to figure out? what could of happened in that time? when is there a real problem? whats developed to prevent this? what two things does this require? will one antibiotic gurantee all bacteria is killed? Are drugs, kill or inhibt the growth of bacteria, the evolution of anitibotic resistance in many species of bacteria, MRSA (methicilliin-resistant staphylococcus aureus), a strain(type) of bacteria, thats resistant to the antibitoic methicillin, natural selection, variation, genetic mutatuions, only the indiduvlas with resistance will survive to reproduce, the alleles, and so the population will evolve to become resistant to the drug caused by anti-bitotic resistant bacteria, MRSA, a lot of differnet anitbitoitcs, it takes doctors a while to figure out which anitibitoics iwll get rid of the infection, pateitn could become ill or die,the point where bacterium have developed resistance to all known antibitoics, new anitbiotics, time and money, wont always kill all of them The Theory of Evolution what are pesticides? what do they do? like? what have scientists observed? in what? give example?what have they evolved? what pesticide? what do pollen beetles do? what do they resist to? what can the evolution of pesticide resistance be expalined by? what is thre in populations of insects? what makes some insects naturall resistant to a pesticide? if the population of insects is ecposed to that pesticide what will survive? whats passed on to the next generation? what will have happend? 
what are the implications of pesticde similiar to? what sort of crop infesetation is harder to control?who does it take a while to figure out? what do they have to figure out? what could have happend? what do they have to do if the insects are resistant to specific pesticides?what do they do? what could also be killed?name a disease carryig insect? if they become pesticide-resistant what could increase? what could a population of insects evolve to resist? how do we prevent this? what does this take? are chemicals, that kill petss, insetcs that damage crops, evolution of pesticide resistance in many species of insect, some opulations of mosquitoe have evolved resistance to the pesiticide DDT, whcih damage the crop oilsee ****, are resistant to yrethroid pesticides, natural selection, variation, gentic mutations, only the indivudals with resistance will survive to reproduce, allelles which cause the pesticde resistance, so the population will evolve to become more resistant to the chemical, antibittic resistance, harder to control, to lost of differnret pesticdes, farmers, to figure out which pesticide will kill the insect , all the crop could be destroyed, might have to use a broader pesticide, those that kill a range of isnects, which could kill beneficial insects, mosquito, the spread of disease, could evolve resitacne to all pesticides in use, new pesticides, time and money What percent does water make up a cells contents? what does it have? what two places? what is water? in what? give two examples? what is water also? what dose this mean? where do most biological reactions take place? making water what? what does water transport? what makes it easy? to transport all sorts of materials? give examles of substances? around what? what does water hel with? what does it carry away? when? from where? what does this do? lowering? What is a molecule of water made up of? what are they joined by? what charge are the hydrogen electrons? what are they pulled towards? whats charge is the other side of the hydrogen atom? what charge are the unshared electrons on the oxygena tom? what charge do they give the oxygen atom? what do both these charges make water molecules? what do the negatively charged oxygena toms attract? whats the attraction called?what does it give water? What does the structure of water molecules give it? what do they explain? what is specific heat capacity? how much energy can hydrogen bonds between water moelvules absorb? what does water have then? meaning? whys is this useful for living organisms? allowing? 
80%, important functions, inside and outside cells, reactant in loads of important chemical reactions, photosynthesis and hydrolysis reactions, solvent, some substances dissolve in it, take place in a solution, water's essential, substances, liquid and a solvent, all sorts of materials, glucose and oxygen, around plants and animals, temperature control, heat energy when it evaporates from a surface, cools the surface and helps to lower the temperature, One atom of oxygen joined to two hydrogen atoms, shared electrons, negative, pulled towards the oxygen atom, slight positive charge, negative, slight negative charge, polar, attract the positively charged hydrogen atoms of other water molecules, hydrogen bonding, useful properties Properties, functions, is the energy needed to raise the temp of 1 gram of a substance by 1 degree, absorb a lot of energy, high specific heat capacity, it takes a lot of energy to heat it up, it stops rapid temp changes, allowing their temp to stay fairly stable How much energy is required to break the hydrogen bonds between water molecules? when we say energy what do we mean? what does water have a high value of then? meaning? why is this useful for living organisms? What is cohesion? between what? what are water molecules very? meaning? why? what does this help water do? making it great for what? What are a lot of important substances in biological reactions? give example? what does this mean? give example? seeing as water is polar what does the positive end of a water molecule attract? and the what to what? what does the ion become? in other words? so water's what makes it a useful what for what? A lot, heat, high latent heat of evaporation, a lot of energy is used up when water evaporates, because it means water's great for cooling things, is the attraction between molecules, two water molecules, are very cohesive, they tend to stick together, they're polar, flow, making it great for transporting substances, ionic, salt, one positively charged atom or molecule and one negatively charged atom or molecule, salt is made from a positive sodium ion and a negative chloride ion, the positive end of a water molecule will be attracted to the negative ion, the negative end of a water molecule will be attracted to the positive ion, totally surrounded by water molecules, they'll dissolve, polarity makes it a useful solvent for other polar molecules
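The revision notes above define specific heat capacity but do not show a calculation, so here is a short worked example using the familiar textbook value for water, roughly 4.18 J per gram per degree Celsius, which is an added figure rather than one taken from the notes:

$$q = mc\Delta T = 100\,\text{g} \times 4.18\,\text{J g}^{-1}\,^{\circ}\text{C}^{-1} \times 10\,^{\circ}\text{C} \approx 4200\,\text{J}$$

Warming just 100 g of water by 10 °C therefore takes about 4.2 kJ, which is why large volumes of water, including the water inside cells and organisms, change temperature slowly and help keep conditions stable.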
Seppuku (stomach-cutting) is a form of Japanese ritual suicide by disembowelment. Seppuku was originally reserved only for Samurai. Part of the samurai honor code, seppuku was used voluntarily by samurai to die with honor rather than fall into the hands of their enemies, as a form of capital punishment for samurai who have committed serious offenses, and for reasons that shamed them. Seppuku is performed by plunging a sword into the abdomen and moving the sword left to right in a slicing motion. Seppuku was a key part of bushido, the code of the samurai warriors; it was used by warriors to avoid falling into enemy hands, and to attenuate shame and avoid possible torture. Samurai could also be ordered by their daimyo (feudal lords) to commit seppuku. Later, disgraced warriors were sometimes allowed to commit seppuku rather than be executed in the normal manner. The most common form of seppuku for men was composed of the cutting of the abdomen, and when the samurai was finished, he stretched out his neck for an assistant to decapitate him. Since the main point of the act was to restore or protect one's honor as a warrior, those who did not belong to the samurai caste were never ordered or expected to commit seppuku. Samurai generally could commit the act only with permission. Sometimes a daimyo was called upon to perform seppuku as the basis of a peace agreement. This would weaken the defeated clan so that resistance would effectively cease. Toyotomi Hideyoshi used an enemy's suicide in this way on several occasions, the most dramatic of which effectively ended a dynasty of daimyo forever, when the Hōjō were defeated at Odawara in 1590. Hideyoshi insisted on the suicide of the retired daimyo Hōjō Ujimasa, and the exile of his son Ujinao. With this act of suicide, the most powerful daimyo family in eastern Japan was put to an end. In time, committing seppuku came to involve a detailed ritual. This was usually performed in front of spectators if it was a planned seppuku, not one performed on a battlefield. A samurai was bathed, dressed in white robes, and fed his favorite meal. When he was finished, his instrument was placed on his plate. Dressed ceremonially, with his sword placed in front of him and sometimes seated on special cloths, the warrior would prepare for death by writing a death poem. With his selected attendant (kaishakunin, his second) standing by, he would open his kimono (robe), take up his tanto (knife) or wakizashi (short sword)—which the samurai held by the blade with a portion of cloth wrapped around so that it would not cut his hand and cause him to lose his grip—and plunge it into his abdomen, making a left-to-right cut. The kaishaku would then perform dakikubi, a cut in which the warrior was all but decapitated. The maneuver is done such that a slight band of flesh is left attaching the head to the body. Because of the precision necessary for such a maneuver, the second was a skilled swordsman. The principal agreed in advance when the kaishakunin was to make his cut. Usually dakikubi would occur as soon as the dagger was plunged into the abdomen. The process became so highly ritualised that as soon as the samurai reached for his blade the kaishakunin would strike. Eventually even the blade became unnecessary and the samurai could reach for something symbolic like a fan and this would trigger the killing stroke from his second. 
The fan was likely used when the samurai was too old to use the blade, or in situations where it was too dangerous to give him a weapon. This elaborate ritual evolved after seppuku had ceased being mainly a battlefield or wartime practice and become a para-judicial institution. The second was usually, but not always, a friend. If a defeated warrior had fought honorably and well, an opponent who wanted to salute his bravery would volunteer to act as his second. In the Hagakure, Yamamoto Tsunetomo wrote: "From ages past it has been considered an ill-omen by samurai to be requested as kaishaku. The reason for this is that one gains no fame even if the job is well done. Further, if one should blunder, it becomes a lifetime disgrace. In the practice of past times, there were instances when the head flew off. It was said that it was best to cut leaving a little skin remaining so that it did not fly off in the direction of the verifying officials." A specialized form of seppuku in feudal times was known as kanshi (death of understanding), in which a retainer would commit suicide in protest of a lord's decision. The retainer would make one deep, horizontal cut into his stomach, then quickly bandage the wound. After this, the person would appear before his lord, give a speech in which he announced the protest of the lord's action, then reveal his mortal wound. This is not to be confused with funshi (indignation death), which is any suicide made to state dissatisfaction or protest. A fictional variation of kanshi was the act of kagebara (陰腹, "shadow stomach") in Japanese theater, in which the protagonist, at the end of the play, would announce to the audience that he had committed an act similar to kanshi, a predetermined slash to the stomach followed by a tight field dressing, and then perish, bringing about a dramatic end. Some samurai chose to perform a considerably more taxing form of seppuku known as jūmonji giri (cross-shaped cut), in which there is no kaishakunin to put a quick end to the samurai's suffering. It involved a second and more painful vertical cut on the belly. A samurai performing jūmonji giri was expected to bear his suffering quietly until perishing from loss of blood, passing away with his hands over his face.
The Mayflower sails from Plymouth, England, bound for the New World with 102 passengers. The ship was headed for Virginia, where the colonists–half religious dissenters and half entrepreneurs–had been authorized to settle by the British crown. However, stormy weather and navigational errors forced the Mayflower off course, and on November 21 the “Pilgrims” reached Massachusetts, where they founded the first permanent European settlement in New England in late December. Thirty-five of the Pilgrims were members of the radical English Separatist Church, who traveled to America to escape the jurisdiction of the Church of England, which they found corrupt. Ten years earlier, English persecution had led a group of Separatists to flee to Holland in search of religious freedom. However, many were dissatisfied with economic opportunities in the Netherlands, and under the direction of William Bradford they decided to immigrate to Virginia, where an English colony had been founded at Jamestown in 1607. The Separatists won financial backing from a group of investors called the London Adventurers, who were promised a sizable share of the colony’s profits. Three dozen church members made their way back to England, where they were joined by about 70 entrepreneurs–enlisted by the London stock company to ensure the success of the enterprise. In August 1620, the Mayflower left Southampton with a smaller vessel–the Speedwell–but the latter proved unseaworthy and twice was forced to return to port. On September 16, the Mayflower left for America alone from Plymouth. In a difficult Atlantic crossing, the 90-foot Mayflower encountered rough seas and storms and was blown more than 500 miles off course. Along the way, the settlers formulated and signed the Mayflower Compact, an agreement that bound the signatories into a “civil body politic.” Because it established constitutional law and the rule of the majority, the compact is regarded as an important precursor to American democracy. After a 66-day voyage, the ship landed on November 21 on the tip of Cape Cod at what is now Provincetown, Massachusetts. After coming to anchor in Provincetown harbor, a party of armed men under the command of Captain Myles Standish was sent out to explore the area and find a location suitable for settlement. While they were gone, Susanna White gave birth to a son, Peregrine, aboard the Mayflower. He was the first English child born in New England. In mid-December, the explorers went ashore at a location across Cape Cod Bay where they found cleared fields and plentiful running water and named the site Plymouth. The expedition returned to Provincetown, and on December 21 the Mayflower came to anchor in Plymouth harbor. Just after Christmas, the pilgrims began work on dwellings that would shelter them through their difficult first winter in America. In the first year of settlement, half the colonists died of disease. In 1621, the health and economic condition of the colonists improved, and that autumn Governor William Bradford invited neighboring Indians to Plymouth to celebrate the bounty of that year’s harvest season. Plymouth soon secured treaties with most local Indian tribes, and the economy steadily grew, and more colonists were attracted to the settlement. By the mid 1640s, Plymouth’s population numbered 3,000 people, but by then the settlement had been overshadowed by the larger Massachusetts Bay Colony to the north, settled by Puritans in 1629. 
The term “Pilgrim” was not used to describe the Plymouth colonists until the early 19th century and was derived from a manuscript in which Governor Bradford spoke of the “saints” who left Holland as “pilgrimes.” The orator Daniel Webster spoke of “Pilgrim Fathers” at a bicentennial celebration of Plymouth’s founding in 1820, and thereafter the term entered common usage.
According to the study submitted to the Monthly Notices of the Royal Astronomical Society and conducted by researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of Stanford University and the SLAC National Accelerator Laboratory, there may be 100,000 times more "nomad planets" wandering homeless in our galaxy than stars. Scientists had thought that some of the first apparently nomadic planets detected might be orbiting stars far away, but they later confirmed that the majority have no parent star. Though scientists had long predicted the existence of nomad planets, the discovery that they actually outnumber "normal" planets and even outnumber stars was surprising and led to defining a whole new class of astronomical bodies, raising fundamental questions that may change existing theories of planet formation. Puzzles remain, however: the characteristics of this class of planets are unknown, and for all astronomers can tell they could be icy, rocky or even gas giants like the massive planets in our solar system. According to Stanford University News, the existence of nomad planets also raises new questions in the ongoing search for extraterrestrial life. Lead author Louis Strigari speculates that while nomadic planets do not have a parent star as a source of heat, they could generate heat from tectonic activity or from internal radioactive decay. Strigari says: "If any of these nomad planets are big enough to have a thick atmosphere, they could have trapped enough heat for bacterial life to exist." Astronomers use a technique called gravitational microlensing to detect nomad planets. Physorg.com reports that last year, researchers using the gravitational microlensing technique detected about a dozen nomad planets. With this method, observers on Earth watch the effect generated when a massive body passes in front of a star. Such a body, as predicted by General Relativity, bends and magnifies the light from the star much like a lens does and makes the starlight wax and wane in time. This creates a "light curve" whose characteristics indicate to scientists certain properties of the body passing in front of the star, such as its mass (a simple model of such a light curve is sketched below). Conservative estimates from results obtained so far suggest there are approximately two nomad planets for every average star in our galaxy, and what is even more astounding and significant, in the light of the recent Planet Nibiru apocalyptic hysteria, is that nomad planets may be as much as 50,000 times more common than those initial estimates suggest. Alan Boss of the Carnegie Institution of Science in Washington, D.C., describes these staggering conclusions with flair: "To paraphrase Dorothy from 'The Wizard of Oz,' if correct, this extrapolation implies that we are not in Kansas anymore, and in fact we never were in Kansas. The universe is riddled with unseen planetary-mass objects that we are just now able to detect." According to Stanford News, the researchers made estimates of the "unseen planetary-mass" objects by calculating the known gravitational pull of the Milky Way, the amount of matter in the galaxy available for forming nomad planets, and the distribution of the matter required to make formation of such planetary bodies possible. But because astronomers are uncertain exactly where these bodies come from, they are unable to come to definitive conclusions.
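To make the microlensing description above a little more concrete, the sketch below shows the standard point-source, point-lens magnification curve that is often used as a first model for such events. This is only an illustration of the general technique, not the survey teams' actual analysis code, and the event parameters (u0, t0, tE) are invented for the example.

```python
# Minimal sketch of a point-source, point-lens microlensing light curve.
# Magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), where u(t) is the
# lens-source separation in units of the Einstein radius.
# The event parameters below (u0, t0, tE) are invented for illustration.
import math

def magnification(u):
    """Standard point-lens magnification for a separation u (in Einstein radii)."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def light_curve(times, t0, u0, tE):
    """Magnification at each time, for closest approach u0 at time t0 and Einstein crossing time tE."""
    return [magnification(math.hypot(u0, (t - t0) / tE)) for t in times]

if __name__ == "__main__":
    days = range(-20, 21)  # days relative to the peak of the event
    for day, amp in zip(days, light_curve(days, t0=0.0, u0=0.1, tE=5.0)):
        print(f"day {day:+3d}: magnification {amp:5.2f}")
```

Very roughly, the shorter the event timescale tE, the lower the mass of the lensing body, which is how planetary-mass candidates are separated from ordinary stellar lenses in the surveys described above.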
It is believed that some of the rogue worlds might have been ejected from star systems, but according to Strigari, not all of the bodies could have been formed this way. The researchers hope, however, that with the introduction of a new generation of telescopes, they will be able to study the nomad planets in greater detail and acquire more specific information about them. Stanford News reports that the space-based Wide-Field Infrared Survey Telescope and the Large Synoptic Survey Telescope are scheduled to begin operation in the 2020s. Scientists are particularly excited about their estimates of the number of nomad planets in the Milky Way galaxy because of the significant implications for the question of the incidence of life in our galaxy. Nomad planets wandering through space may collide with other bodies, breaking off chunks of planetary material and dispersing them in space. If these chunks carry microbial life, nomad planets may become a means of disseminating life in our galaxy. Strigari said: "If any of these nomad planets are big enough to have a thick atmosphere, they could have trapped enough heat for bacterial life to exist." Co-author Roger Blandford, director of KIPAC, said: "Few areas of science have excited as much popular and professional interest in recent times as the prevalence of life in the universe. What is wonderful is that we can now start to address this question quantitatively by seeking more of these erstwhile planets and asteroids wandering through interstellar space, and then speculate about hitchhiking bugs."
Video + storytelling = Videotelling. In this technique, the teacher is the deliverer of the material. The delivery process is a whole class communicative event. The secret to videotelling is to decide what you are going to say and how you are going to say it. This will force you to consider ways to encourage learners to interact (e.g. what questions are you going to ask?)
- Language level: Pre-intermediate; Intermediate (A2; B1)
- Learner type: Young learners; Teens; Adults
- Time: 25 minutes
- Activity: Videotelling
- Topic: Weddings
- Language: Present simple; Wedding vocabulary
- Materials: Video
Video + storytelling = videotelling
In this activity, the teacher is the deliverer of the material and the delivery process is a whole class communicative event (see video above). The format that is used in this lesson plan can be used for virtually any short clip with a strong visual narrative. The secret to preparation is:
- Get to know the clip as well as you can and look for ways to exploit the visual narrative. Look for ‘hidden’ language possibilities and issues to discuss.
- Decide what you are going to say and how you are going to say it. In other words, create a script.
- Consider ways in which you can encourage learners to interact. Write out a list of predictive questions.
- Look for opportunities to relate content and questions to your learners’ own lives.
- Identify learning potential (e.g. lexical sets or grammatical structures for drilling or dictation, etc.)
- Practise the script before class.
Lesson plan outline
- Tell your students that you recently saw a YouTube video that made you laugh. Tell them that you are going to describe the clip and they have to consider whether or not they have seen it before.
- Talk students through the narrative of the video. Put questions back to students and encourage them to interact whenever possible. Familiarise yourself with the script below for this purpose.
- Ask individual students what they think happens next. For each guess, tell students if they are hot or cold (hot = close to the answer; cold = not close). You can also use terms such as boiling, warm, freezing, getting warmer, etc.
- Once students have explored all possible outcomes, dictate the following:
- Write the following on the board: __________ best man __________ our wedding
- Direct students to the video.
I want to tell you about this funny video I saw on YouTube the other day. Perhaps you’ve seen it. I’ll describe it to you and you can tell me if you’ve seen it before. It starts with a small group of people – three men and two women. They are standing beside a swimming pool. It is a sunny day.
Q: What do you think they are wearing? [Students will probably guess bikini, swimsuit, swimming trunks, sunglasses, etc.]
Well you would expect that but they’re not. Let me tell you what they are wearing: two of the men are wearing suits. One of the women is wearing a long yellow dress and she is holding a bunch of flowers. The other woman is wearing a long white dress.
Q: Can you guess what is happening? A: It’s a wedding.
Q: Can you identify and name the five people present? A: The bride and groom, the best man, the bridesmaid and the priest.
Now this is the point in the ceremony when they take their vows.
Q: What do you say when you take your vows? A: I do or I will.
So in the video, the bride (Chloe) and the groom (Keith) say “I do”.
Q: Now, what do we need at this point? A: The rings.
Q: Who carries the rings and who wears the rings?
A: The best man carries the rings and the bride and groom wear the rings.
OK, so at this point we need the best man. He steps forward and then disaster strikes. Who thinks they have seen this video? Put up your hand but don’t say anything yet.
Q: So what happens next?
Note that this is an example of the language that could come from the teacher during this activity. Of course, the actual communicative event may involve repetition, questions from students, digression, clarification, etc. If any students in the class have seen the clip before, you could make use of them during this part of the activity. Ask them to give the hot/cold answers to others’ suggestions. Note, however, that even if a student has already seen the clip, this does not mean that he/she will remember the exact details of the outcome.
The best man steps forward with the rings. He trips up and pushes the bride and priest into the pool.
Tell students that this is the title of the clip on YouTube. Ask them to guess the two missing words. (Answer = clumsy best man ruins our wedding)
Note that this type of activity lends itself well to out-of-class viewing.
- Invite students with smartphones to find the clip for themselves and show it to others.
- Ask students to watch the clip after class. (They will be able to find it on YouTube now that they have the title.)
- If you have a class blog or wiki: tell students that you will link to the video from the blog or wiki later that day.
- After class, email students a link to the video.
During the videotelling activity, ask students personalised questions about the topic whenever possible (see examples below). This may cause digression but that might be a good thing.
- What is the difference between a wedding and a marriage?
- Who is married?
- Who in the class has been married the longest?
- Who is the most recently married person in the class?
- Who has been to a wedding recently? Whose wedding was it?
- Have you ever been a bridesmaid or best man? Give information.
- What do you like/dislike about weddings?
- What are the differences between Spanish weddings and British weddings?* (*Adapt as necessary.)
Follow up 1
Ask students to decide whether the clip is (a) real or (b) set up. Put students into groups so that those who believe (a) are grouped with those who believe (b). Give the following task:
- Say why you believe that the clip is real or set up.
- Listen carefully to all of the ideas from everyone in your group.
- Come to a consensus group decision about the clip: Is it real or was it set up?
Follow up 2
Dictate or draw students’ attention to the language of the wedding vows. Ask them to compare what the clergyman in the video clip says with the language of vows from their own culture(s).
“Chloe, will you take Keith to be your wedded husband, to live together in the covenants of marriage, to love him, comfort him, honour him and keep him in sickness and in health, and forsaking all others, be faithful to him as long as you both shall live?” Chloe answers, “I will.”
This can also be a good opportunity to introduce students to the idea of archaic language. Since the language of ceremonial passages like this is generally preserved over time, some of the words will go out of mainstream use. For example:
- Covenants of marriage
FROM a geologist's point of view the Grand Canyon is of recent origin; that is, the Colorado River may have begun its work of digging the canyon less than a million years ago. But the rocks exposed in it and in other parts of the Plateau country represent many an ancient landscape. Some of these date back so far that it is useless to speculate as to their age. Their antiquity may be stated, not in years, but in millenniums of centuries. The crystalline rocks of the inner gorge were formed amid scenes wholly different from any that are familiar to us. So far as we know now, there were then no plants or animals. But storms raged and streams flowed then as now, and the lifeless landscape shaped by them is represented by an uneven plain, which separates the crystalline rocks from the younger sedimentary rocks that rest upon them. This plain is marked in the walls of the canyon by what is known to geologists as an unconformity. The plain which shows as a line at the junction of the ancient crystalline and the overlying sedimentary rocks was at one time the surface. On it gathered the material of the younger rocks. The old crystalline rocks, called Archean because of their great age, have lain in their present buried position through the uncounted ages that have elapsed since they were thus entombed. During this time mountains have been thrown up and slowly eroded away, seas have swept over the scene and vanished, and whole groups of living beings have developed, run their course, and disappeared. A Lifeless Landscape If this book had been written a few years ago, before the nebular hypothesis had been seriously questioned, the statement would probably have been made that the crystalline rocks at the bottom of the canyon are parts of the earth's crust formed when it originally cooled from a molten state. But, as the planetesimal hypothesis, referred to at some length in a later chapter, has proved acceptable to some who believe that there never has been any original crust covering a general liquid interior, we may look for a possible alternative explanation of the crystalline character of the Archean rocks. Two explanations seem possible. First, rocks similar to the ancient crystallines of Grand Canyon have been traced laterally into rocks that are clearly of sedimentary origin. In such places the crystalline character is due to changes in the sediments after they were laid down. Second, in many places rocks are found which represent matter forced in a molten condition into previously solidified rocks. It is not always possible to determine whether a given mass of crystalline rock originated as sediments or as intrusive rock. However they may have originated, the old Archean rocks of the canyon were in place long ages ago, before life as we know it began on the earth. The geologist is asked frequently when and how life began. There have been many speculations, some of them centuries old. But, after summing up all the wisdom of philosophers from the days of Moses to the present time it may be frankly admitted that we are not far beyond the statement that "in the beginning God created." But although the Archean landscape was not adorned by any plant or animated by any form of life such as we are now familiar with, the processes of nature were probably in operation in much the same way that they are now, for during the time that intervened between the period represented by these ancient rocks and those of the succeeding Algonkian time, mountains were raised and these in turn were cut down and swept into the sea.
The destruction of mountains and the formation of plains such as that which we can trace between the old crystallines and the younger rocks were accomplished then, as they are now, by the action of rain, stream, wind, and wave. Less is known of the Archean than of any other geologic period. In this respect it is comparable to the prehistoric stage in human history, concerning which there are legends and myths, inferences and guesses, but no written records. The inferences under the nebular hypothesis as to the early history of the earth are well known, as they have been portrayed by many a fantastic word-picture. But the planetesimal hypothesis is new, hence the possible course of events under it which culminated in the formation of such rocks as the ancient crystallines of Grand Canyon is not so well known. The story of the development of the earth under this hypothesis is quite different from that of the supposed development under the older hypothesis. The chief events may be enumerated as follows (fuller descriptions may be found in the writings of Prof. T. C. Chamberlin): According to the planetesimal hypothesis, there was a time when the young earth, only a small fraction of its present size, was without form or at least had a very indefinite form and was growing rapidly by the fall of particles of matter or planetesimals attracted to it from surrounding space. This young earth had no water and no atmosphere, because the attractive force of the small mass was not strong enough to hold the gases of air and water. In time, when the nucleus had grown by accretion or the addition of solid particles to something like one-tenth of the present mass of the earth, it began to gather an atmosphere, because then its attractive force was able to hold the gases. The moon (1/81 of the earth's mass) has no atmosphere, but Mars (about 1/9 of the earth's mass) has a thin atmosphere. When the growing earth had become large enough to hold by its attractive force the swiftly moving particles of oxygen, nitrogen, and water vapor, and prevent them from escaping into space, these gases began to accumulate and form an atmosphere. The heavy gases of slow movement, such as carbonic acid gas, may have been captured first; the lighter and swifter ones later, when the earth was larger and had greater attractive force. But the earth is not large enough yet to capture and hold hydrogen and helium, although it might hold these gases in its atmosphere should it grow sufficiently large, for these gases exist in the atmosphere of the sun. The rocks, cold at first, may have been heated by impact in the fall of the planetesimals, by compression, by internal friction of the compressed masses under gravitative force, or by other processes, less easily understood. Probably it cannot be proved that no large part of the earth was ever molten at one time, but the fact that it is now as solid as steel harmonizes with the belief that the earth has never been a molten globe and that it has never possessed the high temperatures at the surface which the nebular hypothesis demands. When the atmosphere had gathered enough water vapor to become saturated, precipitation began. Previous to this time there were no streams, lakes, or seas. The rain water falling on the surface naturally dissolved the more soluble material and carried it in solution into the depressions of the earth's surface, just as it does today. Also, just as today, the water evaporated, leaving the soluble salts in the basins. 
Thus the hollows at first held fresh-water lakes, which gradually increased in volume and salinity through uncounted ages until they became great briny oceans. If this process began when the earth was only as large as Mars, its outer shell, about 1,900 miles thick, grew up in the presence of water, and to the salinity of the ocean nine tenths of the earth's mass has contributed part of its soluble matter. In this connection also it may be noted that in place of a universal ocean, which, according to the older belief, covered the face of the earth, the newer notion postulates universal land, from the higher parts of which streams washed rock débris into basins that finally became filled with lakes and seas. Also under the newer hypothesis there is no reason for believing that the climate of these very early stages in the development of the earth was greatly different from that of later time. The succession of events in the growth of the earth, as stated, gives no grounds for supposing that living beings could not exist very early in its history. Some of the oldest forms of life known (the crustaceans whose fossil remains are obtained from rocks of Algonkian age) stand relatively high in the scale of animal life. Living beings must have existed for long ages in order to be developed into animals of the high order of these crustaceans. The observed facts demand conditions favorable to life for a long time prior to the Algonkian period, and the succession of events just outlined indicates that such conditions may have prevailed during the period represented by the Archean rocks at the bottom of Grand Canyon. The plainly defined line of unconformity between the rocks of Archean age and those next younger in the walls of Grand Canyon, separating the Unkar from the underlying granite and gneiss, is shown in Figure 1. This unconformity represents a long period of erosion, during which the region was reduced to a nearly level plain. The Algonkian Period This Archean plain was submerged and covered by sediments, deposited chiefly in water. The epoch of deposition continued until the sediments had in some places gathered to a thickness of about 12,000 feet. In other places, however, the surface was above water and was being worn down to supply these sediments. Some of the material is coarse and may have been deposited along streams; some of it is fine and was deposited as mud in shallow water; and some is limestone, which probably accumulated when the site of Grand Canyon was beneath the sea. These rocks differ in character and appearance from the younger rocks which overlie them. They are changed or metamorphosed. When they were deposited as sediments they did not differ from the sand, mud, and limy ooze which are being deposited at the present time. But through the long ages that have elapsed they have been subjected to pressure and heat. They have been fractured, and molten rock has been forced into them. They have begun the change by which they may at some time in the future attain the crystalline condition of the Archean rocks beneath them. Little is known of the climate of Algonkian time, for the climate of a geologic period must be judged chiefly by its fossils. The living beings of Algonkian time consisted mainly of low types of marine animals and plants, such as seaweeds and worms, and relatively little is known of them. But there is some convincing evidence concerning the Algonkian climate.
Certain deposits indicate the presence of glaciers in Algonkian time, perhaps during more than one epoch, for the period was a long and varied one. Evidences of Algonkian glaciation have been found in Canada, Australia, India, China, South Africa, Europe, and elsewhere. Hence there is good reason for believing that the climate during this period was not greatly different from that of later time, and that instead of sweltering in a murky atmosphere of steam and heated gas, as some have supposed, the earth seems to have been colder on the average than now. Even in some equatorial regions it was clothed in ice during parts of this very early period. After the deposition of the Grand Canyon series of strata there was extensive uplift and disturbance of the rocks, during which the material originally laid down in horizontal sheets was warped and tilted and broken into blocks. These blocks were faulted or moved out of place and the whole disturbed mass was eroded until at some places the Algonkian rocks were entirely swept away. At other places certain parts of them were preserved by being depressed below the level where erosion is possible, and some low hills were left where unusually hard rock cropped out. This series of events is known collectively as the Grand Canyon revolution. If the process of erosion by which the Grand Canyon was formed should continue until the river had swept away the whole plateau and formed a plain near sea level where the mile-high mesas now stand, the work accomplished would be less than that performed during the period of erosion which followed the Algonkian, for a thickness of more than 12,000 feet of rock was removed at that time. Length of Algonkian Time This great length of time, which has been regarded by some geologists as equal in duration to that of the ten great geologic periods ranging from Cambrian to the present day, is represented in some places in the canyon walls by a single line. Where the older rocks were tilted before they were eroded and covered with younger sediments the line represents what is called an angular unconformity, because the bedding planes of the older rocks meet those of the younger at an angle, as shown in the sections in Figure 1, such as that between the Unkar and the overlying shale. Where no angularity is apparent the line represents what is called an unconformity by erosion. In either case the line marks a break in deposition covering a period of time not represented by sedimentary rocks. It represents a period when records were being destroyed by the wasting away of the rock. Records of some of the periods unrecorded here are found in other parts of the world, for the rocks torn from the highlands in one place must find rest in some other place. But during the post-Algonkian period of erosion the continental areas of the whole world seem to have been high, so that the unconformity appears to be world-wide. This hiatus or break in recorded time is the greatest known, unless it is exceeded by that represented by the erosional unconformity between the Archean and Algonkian. Thus the two greatest unconformities in the world are represented in Grand Canyon, and both merge into a single unconformity at the base of the Tonto group, where the rocks of this group rest on the granite. The Cambrian Period The Cambrian is one of the long periods in the life of the earth and may be compared to a year in the life of man. The name was given by geologists to rocks that were first studied in Wales, a country known to the Romans as Cambria. 
Later, when it was learned that rocks were formed in other countries at the same time, the name Cambrian was applied to them also. The Archean has been likened to the prehistoric age of human history, comparable in a sense to the old stone age, and the Algonkian to the legendary age, comparable to the period of Greek mythology. With the Cambrian period, represented by the Tonto group of the Grand Canyon, begins the stage of well-recorded geologic history. To carry the simile a step farther, this new era, like many in human history, began with a revolution called the Grand Canyon revolution. It is a modern era in earth history, for the Cambrian is said to date back only about a hundred million years, whereas the preceding periods date back infinitely farther. There may have been some land plants of low order at this time, but the known plants of the Cambrian period are algae or seaweeds. Although little is known of Cambrian plant life it must have been abundant, for animals depend on plants for food, and the animals of Cambrian time were numerous. They were low in the scale of life and were sea dwellers. No remains of land animals have been found and no vertebrates, even of the lowest type. Thus at the time the Tonto beds of Grand Canyon were forming there were sea worms, mollusks, sponges, and several low forms of life that were wholly unlike any now living. Some of these were preserved as fossils. Fossils and History Fossils are the symbols in which the history of the world's life is written, and a knowledge of the symbols is necessary before the story can be read. Animals and plants that lived long ago were buried in mud just as those of today are being so buried in some places. The mud hardened to rock and was covered in turn by other layers of mud. Some of these layers were lifted and eroded when the mountains were pushed up, exposing again the remains of the buried animals and plants. These remains are the characters in which some of the story of the earth is written. But the language must be known before the story can be translated. If you look carefully at the shell of a living oyster you will observe certain definite features. You could not mistake it for anything else. Many things have been learned about the habits and life of this oyster. In some places it thrives; in others it cannot live. It is at home in a sheltered bay where the great waves of the ocean cannot disturb it, but is not happy in deep water far from shore. It cannot live long in fresh water and is happiest where the water is not so salty as in the open ocean. If now we compare this shell with a fossil oyster shell we see that the two are similar. The one was found in Chesapeake Bay, the other in solid rock perhaps a mile above sea level and a thousand miles from any place where an oyster could now live, yet there is no doubt that the fossil is the shell of an oyster, and there should be no question that oysters thrive only in bodies of water connected with the sea. Records in Stone The fossil oyster shell tells us just as plainly as if the story were written in words that the place where it was found was once a mile lower than it is now and that the sea once occupied the region where it was found. It is evident also that in that olden time the streams were carrying sand and mud into the sea just as they are carrying them into the sea in our day. The mud hardened into rock, and this rock, once at the bottom of the sea, was in time pushed up with others to form mountains. 
Nature not only wrote her story in plain language, but she arranged it in regular order, so that it can be divided into what we may call chapters. These are the geologic ages, such as the Cambrian. But although a complete story was written, not all parts of it can be found in any one place. Some can be found only in Europe, others only in America; some can be read only in the frozen lands of the Eskimo, others only in the Tropics. Not all of its parts have been found. Some may yet be recovered, but others may not, for the agents of destruction have been as busy in the past as they are in the present, and, they have destroyed many records. The rocks seen in the canyon walls above the Tonto group are of Devonian and Carboniferous age. They are younger than those of the Tonto formation by many millions of years. This long stretch of time, here unrecorded by deposition, is marked in the canyon walls by an unconformity. This unconformity differs from that found below the rocks of Cambrian age in two significant ways. The Tonto beds were not planed off, nor were they entirely removed at any place that is now exposed. The canyon region may have been above sea level during the time represented by the unconformity and may have received no sediments; or sedimentary rocks may have been formed here and later eroded away. However this may be, it is evident that the Cambrian rocks in this region were not crumpled up into mountains and that they were not raised far above sea level, otherwise they would have been deeply eroded or entirely removed. Evidently the region lay low throughout all the time between the end of the Cambrian period and the beginning of the Carboniferous; in other words, the Ordovician and Silurian periods are not represented, and the Devonian is represented only in part, by thin beds, which are not readily distinguished from the overlying rocks. The Carboniferous Period Next above the Devonian and resting on it, or on the Cambrian rocks where the Devonian is absent, is a great layer of limestone, which is about 600 feet thick in the southern part of the canyon region and much thicker farther north. This limestone is harder than the rocks above and below it and has resisted erosion more strongly than they. It crops out midway in the canyon walls, where it forms bold, red promontories. Because of its color and the wall-like face of the cliffs it is called the Redwall limestone. It forms prominent cliffs in Cheops Pyramid and in several of the so-called temples, many of which are familiar to visitors in Grand Canyon National Park. The Sea and Grand Canyon Although the wall formed by this limestone is red, the rock itself is light blue or gray. Its color is due chiefly to wash from the red shale and sandstone above. This limestone contains fossil shells of sea animals and these fossils tell us that the Grand Canyon region, which had been above sea level during much of the time since the Cambrian period, subsided during the Carboniferous and was covered with sea water. During the early part of the period represented by the Redwall limestone, sea water covered broad areas of the interior part of North America. Later in the period, although there was an open sea in the Grand Canyon region, broad swamps formed at many places in the interior of North America, and in these swamps accumulated the vegetable matter which formed the Carboniferous coal. Rocks of this age, both in America and in other continents, contain quantities of coal so vast that they are often called the Coal Measures. 
The name Carboniferous was suggested by the carbon of the coal. The rocks above the Redwall limestone (the Supai red beds, 1,400 feet thick; the Coconino sandstone, 300 feet thick, which outcrops high in the canyon walls and which usually forms a nearly vertical cliff; and the Kaibab limestone, 600 feet thick, which forms the rim of the canyon in the National Park) were formed late in Carboniferous time. Through geologic work in the canyon country we are continually learning more about the rocks in which the canyon is carved. It is now known that the beds formerly called Supai consist of two parts separated by an unconformity and the name Supai is today restricted to the lower part. The upper part is called the Hermit shale and is classed with the Coconino sandstone and the Kaibab limestone as Permian. But as this subdivision is likely to interest few besides professional geologists the older usage is here allowed to stand. This series of rocks is clearly and beautifully exposed in the southern part of the canyon region, but it is not so clearly exposed in some other parts. North of Marble Canyon it is difficult to identify some of these formations. Parties sent out in recent years by the United States Geological Survey have obtained valuable additional information, which, however, has not yet been published, hence the old nomenclature is presented here. PRINCIPAL DIVISIONS OF GEOLOGIC TIME. [Table not reproduced here. Note a: Many of the time divisions shown in the table are separated by unconformities; that is, the dividing lines in the table represent local or widespread uplifts or depressions of the earth's surface. Note b: Epoch names omitted; in less common use than those given.] The first animals lived in the sea. But in Carboniferous time the higher forms of life became air breathers and lived on land. It has aptly been said that during this period life rose from the sea and took possession of the land. Little is known of the land forms of earlier ages, but the broad lowlands and swamps of Carboniferous time were favorable to the growth of land plants and to the development of animals that subsist on such plants. The trees were altogether different from those of the present day (Plate XXI). There were among them some cycads (a type that culminated in a later age) and some conifers. There were also giant reeds called Calamites, and the ferns of that day grew to be large trees. But the dominant forest trees of Carboniferous time have no living representatives, hence no common name. There was Lepidodendron, a great tree that began life in an earlier geologic age and died out in the Permian epoch, at the end of the Carboniferous period. Some of these trees grew to be 100 feet in height and 3 or 4 feet in diameter. Their great pithy trunks of porous material (it can not properly be called wood), forking into a few prongs, were clothed with small evergreen leaves. Another common tree of this time is Sigillaria, distinguishable from Lepidodendron by parallel rows of leaf scars that ran straight up the fluted trunk, for those of Lepidodendron ran in spirals around the trunk. With the lower orders of plants came the lower orders of insects, such as spiders, dragon-flies, and beetles, but no bees or ants, for there were no flowering plants, on which the higher insects live. About a thousand species of Carboniferous insects have been found, half of which are cockroaches.
They may have been as harmless as their repulsive descendants, but we may give a sigh of relief and a smile of complacent satisfaction with present conditions when we reflect that some of the Carboniferous cockroaches were 4 inches long. We may also feel the same sense of relief in respect to the other insects, some of which, such as the dragon-fly, had a spread of wing of more than 2 feet. Rise of Animals from Water to Land The amphibians, which are sometimes defined as those vertebrates which made the transition from aquatic to terrestrial life, began at an early stage in the history of life and developed into true air-breathers. Their evolution during long ages is reproduced in miniature in the individual development of modern amphibians, such as toads and frogs, which begin life as purely aquatic gill-breathing creatures and later become air-breathers. The amphibians and the reptiles, which began their existence in Coal Measures time, were the most highly developed beings of that age. They were very different from the living forms, and some of them were as large as crocodiles. The development of these air-breathing vertebrates is regarded as the most important event in the whole progress of evolution, for it represents the rise of animal life out of the water. From the Carboniferous period to the present time the air-breathers have been the dominant forms of living creatures. The climate of the Coal Measures period seems to have been warm and moist, and the growing season seems to have lasted the year round. Coal beds on the Antarctic continent and also far to the north, as well as coral reefs in Spitzbergen, indicate that a warm climate then extended nearly to both poles. A picture of the monotonous Carboniferous landscape is drawn by Charles Schuchert as follows: In these forests of Pennsylvanian time might have been seen flying about the largest insects that have ever lived, great dragonflies reaching a wing spread of over 2 feet. Huge cockroaches abounded everywhere in great variety, giants of four inches in length not being rare. As a rule these insects were carnivorous and did not transfer the pollen from one flower to another, as is so commonly done by living insects among present-day plants. The smaller forms were preyed upon by scorpions or spiders, the latter not making webs but living on the ground or in rotten logs along with many myriapods or thousand-legs. No insect of this kind, so far as known, produced chirping or other sounds and the soughing of the wind among the trees was possibly rarely interrupted, save by the croak of an amphibian in the marsh. Reptiles and amphibians were common in the swamps, and it is probable that many small reptiles were running over the ground and about the trees. No large land animals such as we know and no birds were to be seen. Most of the rocks of Carboniferous age in the Grand Canyon were formed in water, but in many parts of the world rocks of this age contain beds of coal, which renders appropriate the name Carboniferous. While the limy ooze was gathering in the sea in northern Arizona, broad swamps were forming at many places in central and eastern North America and in many other parts of the world. In these swamps accumulated the vegetable matter which formed the coal of this age.
The Formation of Coal Most of the coal now used in America is mined in the eastern and central fields, or between Pennsylvania and Kansas, where, in an area of 214,686 square miles, the coal stored in beds thick enough for mining and near enough to the surface to be reached in mines has been estimated at 1,142,340,000,000 tons. If the coal of all the geologic ages should be included in the estimate the United States has 496,776 square miles of coal land and a supply of 3,157,243,000,000 tons. Nearly all coal beds represent ancient swamps in which vegetable matter accumulated, as it does now, in standing water, where it partly decomposes to form muck and peat. The peat undergoes certain changes, is packed or consolidated by the weight of the sediments that gather on it, and becomes lignite, a soft near-coal whose woody character may easily be distinguished. The process by which peat is changed through lignite and bituminous coal to anthracite and finally to graphite is somewhat similar to the production of coke from coal, by which some constituents are converted into gas and driven off, leaving much of the carbon in the form called coke. This change of peat to anthracite may be caused by natural heat, but in many places it seems to have been caused rather by pressure and warping of the rocks, together, perhaps, with such heat as was caused by this warping. A Measure of Geologic Time The rate of accumulation of carboniferous material has been made the basis for computing the age of the rocks which contain the coal. It has been estimated that under the best conditions enough vegetable matter may be produced on a tract of land in about 300 years to make a coal bed one foot thick covering the same tract, provided none of it is lost. On the assumption that coal was produced in Carboniferous time at the same rate, the accumulation of the maximum thickness of 300 feet of coal in the Appalachian basin required 90,000 years. Naturally the rate of plant growth during the Carboniferous period is not known, nor is it known what proportion of the plants decayed at the surface or for any other reason was not included in the mass that was buried and turned to coal. But the figures seem to indicate that probably much more than 90,000 years was consumed in forming the beds of coal. This takes no account of the time required for the accumulation of the thousands of feet of sedimentary rock in which the coal was embedded. Another way of arriving at the approximate length of a geologic period is by the rate of accumulation of rock material. Some geologists have estimated that in regions where coral reefs abound limy mud (calcium carbonate) may accumulate at such a rate that a foot of limestone would be formed in 200 years. In regions where calcium carbonate is precipitated chemically, with little aid from organisms, its accumulation may be very much slower, possibly less than a foot in 1,000 years. This rate may be used in illustrating the method of reaching an approximation of geologic time but should not be used as if based on a secure foundation. If the Redwall limestone was formed at this rate, the thickness of 600 feet represents 600,000 years. Its massive nature and the scarcity of fossils in it suggest slow accumulation and correspondingly long time. Several methods have been used to reach a measure of geologic time. The most recent one is based on radio-activity, and this method indicates surprising lengths of time.
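As a modern back-of-the-envelope illustration of the radioactivity method mentioned above and quoted at length just below (this sketch is not part of the original text, and it uses the present-day uranium-238 half-life rather than the figures available to Rutherford and Strutt), the age of a uranium-bearing mineral follows from the exponential decay law once the ratio of accumulated lead to remaining uranium is measured.

```python
# Back-of-envelope uranium-lead age. With N(t) = N0 * exp(-lambda * t) and all
# daughter atoms retained, the age is t = ln(1 + Pb/U) / lambda.
# Uses the modern U-238 half-life (~4.47 billion years); the ratios are illustrative.
import math

U238_HALF_LIFE_YEARS = 4.468e9
DECAY_CONSTANT = math.log(2) / U238_HALF_LIFE_YEARS  # lambda, per year

def u_pb_age_years(pb_to_u_ratio):
    """Age implied by a measured daughter/parent (Pb-206 to U-238) atom ratio."""
    return math.log(1.0 + pb_to_u_ratio) / DECAY_CONSTANT

if __name__ == "__main__":
    for ratio in (0.05, 0.2, 0.5):
        print(f"Pb/U = {ratio:.2f} -> age ~ {u_pb_age_years(ratio) / 1e6:,.0f} million years")
```

Even modest lead-to-uranium ratios translate into hundreds of millions of years, which is why the method "indicates surprising lengths of time."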
On this subject Dr. W. H. McNairn says: "Among those elements which are known to undergo the mysterious change due to disintegration of the atom is uranium. By giving off particles of helium at a constant and definite rate, uranium is believed to pass over into radium and lead. If in any given uranium-bearing mineral we can determine the relative proportions of uranium, radium, and helium, and lead if it is present, knowing the rate at which these changes take place, we should be able to determine the age of the mineral itself. This method, first suggested by Sir Ernest Rutherford in 1906, was subsequently made good by the Honorable R. J. Strutt. His results were somewhat startling in the unexpectedly great periods of time which they indicated. For instance, he allotted the very respectable antiquity of 141,000,000 years to some rocks which were found about half-way down to the earliest fossiliferous deposits. However, these first figures were not uniform. Of recent years these have been tabulated and indicate a certain amount of consistency, particularly in their unanimity in extending the reach of geological time to an extent undreamed of by the geologists. Who, for example, would have dared to suggest, from geological evidence alone, that we have to do with periods of from 800 to 1,600 million years?" The Permian Epoch The Permian epoch takes its name from the province of Perm, in the Ural Mountain region of eastern Russia, where the rocks of the youngest division of the Carboniferous system were first studied by Murchison, an English geologist, and named in honor of that province. Late in Carboniferous time, probably at the beginning of the Permian epoch, some condition arose that caused world-wide changes. In some parts of the world mountains were formed, but in the Canyon country the land was raised but little out of the sea and was not greatly disturbed. Thick beds of sand accumulated on the low-lying area. In some places these accumulations attained a thickness of more than a thousand feet. Then the sea returned and covered the sand with limy ooze. In time these deposits hardened into the rock formations known to geologists as the Coconino sandstone and the Kaibab limestone. These are the rocks of the precipitous cliffs at the rim of the canyon. In other parts of the plateau country the rocks of this age have a different aspect. Probably the most notable scenic objects formed by the sandstones of Permian age are the natural bridges in eastern Utah. As these are among the most remarkable of known bridges, let us take a side trip and see them before we consider the landscapes of Permian time. The natural bridges in White Canyon, Utah, first made known to the general public in 1904, were proclaimed a national monument in 1908. In this monument there are three bridges of great size and beauty. The largest is called Augusta by some and Sipapu (portal of life) by others; the smallest is the Edwin or Owanchomo (rock mound) (Plate XVII); and the third is called Caroline or Kachina (guardian spirit). The Indian name was suggested by a symbol carved on the bridge and recognized as that of the Kachina, the sacred dancers of the Hopi Indians. These great bridges consist of white sandstone of Carboniferous age and were formed by erosion when White Canyon and its tributary, Armstrong Canyon, were cut. The smallest, and probably the oldest of the group, was formed in Armstrong Canyon about 3 miles above its junction with White Canyon. Here a narrow ridge was left between Armstrong Creek and an unnamed tributary.
The two streams undercut this dividing ridge from opposite sides until they met under the upper part of the ridge, which was left as a bridge. The larger stream captured the smaller, leading the captive underneath the newly formed arch. Since the time of this stream piracy each channel has been cut far below the level of the ancient stream bed. The beam of rock which now bridges the captured stream is slowly weathering away but is still 10 feet thick and 35 feet wide in its narrowest part. It spans the unnamed valley 108 feet above the stream bed, the slender, graceful strip of rock being supported by abutments 194 feet apart. The dimensions here used for these bridges are those furnished by the National Park Service. The Caroline bridge (Plate XVIII) was formed in a somewhat different manner. Long ago the stream that cut White Canyon at its junction with Armstrong Canyon flowed in a sinuous or meandering course, which it maintained as it cut downward into the white sandstone and thus formed what is known technically as an intrenched meander. Here it made a horseshoe-shaped gorge around a peninsula or mass of rock that is connected with the main wall by a narrow neck. As erosion proceeded the stream in White Canyon impinged against the rocks of this neck and undercut them. From the opposite side the same stream, aided by Armstrong Creek, which joins White Creek at this point, undercut the neck until a hole was formed through which White Creek took a short cut, abandoning the longer course around the peninsula. The upper part of the neck now forms Caroline bridge. Caroline bridge is the youngest and most massive of the White Canyon group of bridges. It is rough-hewn, having the appearance of vigorous youth and giving the impression of great strength and endurance. Huge blocks of rock fallen from the walls suggest that the master workman was interrupted before his task was finished. Great masses of rocks, weighing tons, still hang to the wall from which nature chiseled them. According to the National Park Service, Caroline bridge stands 205 feet above the bed of the stream it spans, and it springs from abutments 186 feet apart. The arch is reported to be 49 feet wide and to have a minimum thickness of 107 feet. Augusta bridge (Plate XIX) stands in White Canyon 2-1/2 miles above its junction with Armstrong Canyon. It is an enormous arch of white sandstone, whose abutments stand 261 feet apart. The great stone beam, 65 feet thick and 128 feet wide in its smallest part, spans the stream at a height of 222 feet above the water. This wonderful piece of nature's handiwork, so perfectly carved and so symmetrically proportioned that it is difficult to realize its size, is set in the midst of a group of impressive rock forms. The canyon walls, which are formed of barren white sandstone, are high, steep, and rugged, and rise sheer from the narrow bed of the creek. In almost inaccessible nooks high in these walls may be seen cliff dwellings. Some of these dwellings, reminders of the ancient race of human beings which once flourished here but finally vanished, have never since been entered. The charm of mystery still lingers about them. The modern Indians, who are supposed to be the descendants of the cliff dwellers, believe that they come into the world from a lower region through an opening which the Hopis call Sipapu, the door of life. After death they return through the same hole to the nether region, there to dwell for a time before mounting to the sky to become "rain-gods."
These bridges are far from ordinary routes of travel, and few people have seen them. One result of this isolation is their unmolested, clean appearance. The walls have not been marred by initials carved in the stone, nor have messages of the zealot or the advertiser been painted or smoked on the smooth faces of the rock. Some who have visited places of interest that were popular long ago and have seen the thousands of names and dates that disfigure the walls experience a thrill of delight as they read the notice posted at the camping place under Caroline bridge, which states in no uncertain words that name scratching is positively forbidden. These natural bridges are the result of normal stream erosion in an elevated region. There is no mystery about them. Doubtless thousands of similar bridges have been formed and destroyed in ages past and other thousands will be made and later destroyed in ages to come. The great sandstone of which they are made was formed long ago near sea level and was later covered by other beds of sediment many thousands of feet thick. Some of these rocks were formed by deposition in the sea after the white sandstone had been depressed far below sea level. After being buried for uncounted eons, this sandstone, in common with other rocks of the plateau region, was raised and its elevated surface was exposed to erosion. Then, as now, the rain formed rills and the water of the rills gathered into rivulets and finally into rivers. These sought the lowest places on the surface in their way to the sea, just as they do at the present time. Probably the elevation was slow and the rate of rise of the surface differed from place to place. Where the rise was slow enough broad, shallow valleys were eroded. Where the rise was relatively rapid deep, narrow canyons were cut. Also where the streams were flowing over soft rocks they tended to form broad valleys, and where they were flowing over hard rocks they tended to form narrow valleys. Furthermore, the rate of elevation of the rocks varied from time to time. During a time of slow upward movement, or even of cessation of such movement, the streams tended to broaden their valleys and to meander widely over the evenly graded bottom lands. These principles may be applied in picturing the formation of White Canyon. As the surface rose higher and higher the waters of the higher lands formed streams. These streams cut their channels deeper and eroded away the rocks at their sides, just as the streams do now. During the long ages in which this elevation and erosion were going on a thickness of thousands of feet of rock was removed from the plateau region. In the course of its down-cutting the little stream which carved White Canyon meandered widely, carrying away the soft material of the red rocks that once covered the white sandstone. But when in its downward course it reached this hard sandstone it found erosion more difficult. But its meandering course was established and it cut its trench into the sandstone along this course. It continued its lateral cutting but made little headway toward broadening its canyon in the hard rock. Thus were formed the intrenched meanders, such as those at Caroline and Augusta bridges. At Augusta bridge and at Caroline bridge the stream in its meandering course formed a loop resembling an ox-bow, flowing about a peninsula of rock that had a narrow neck. This neck was at the point where the stream was obliged to turn sharply in order to flow around the end of the peninsula.
Also, on its return to the other side of the neck, it made a sharp turn in the opposite direction. It is the law of streams that they cut their banks on the outer side of curves. Thus the neck of the peninsula was undermined by the floods that surged against it from both sides. In time they broke through the neck and took the short cut through the hole thus formed. The end of the peninsula was left as an island, and the upper part of the neck remains as a bridge binding this island to the mainland. Returning now from our side trip, we may bring to an end our consideration of the Carboniferous period by mentioning the outstanding characteristic of this closing epoch of Carboniferous time. In order to picture the Permian landscape we must go far afield, for the rocks of Permian age are known more intimately in other parts of the world than in America. The equable climate of Coal Measures time, which was so mild the world over that the epoch may be called one of universal summer, was changed in a most remarkable manner. The Permian epoch, which followed, seems to have been one of almost universal winter, for glaciers formed in some places practically at the equator. The material left by glacial ice in Permian time is found in India, Australia, Africa, Brazil, Canada, and other countries. The change from low, swampy lands having a uniform, warm climate to a mountainous country having a cold, variable climate caused corresponding changes in the plants and animals. The changes, however, were not sudden. They extended over a period of time measured probably in hundreds of thousands or perhaps millions of years. Two conspicuous results were the development of cold-climate plants and of highly organized reptiles. The Permian reptiles in all parts of the world seem to have had unusually strange characteristics. One of the lizard-like creatures from Brazil had a tail as long as its body proper, with a notable enlargement in the middle of it. The Permian reptiles whose remains are found in South Africa were large, clumsy creatures with bones so massive and so curiously shaped that they have been called "reptilian bone-piles." But probably the most peculiar creature of all is the finback lizard, shown in Plate XXI, B, whose remains are found in Texas. The so-called fin was produced by elongated spines half as long as the body. Many varieties of these strange creatures have been found, ranging in length from 3 to 10 feet.
Freedom of Information Freedom of Information (FOI) refers to the public right to access information held by government. This cornerstone of our democracy is an integral element of open and accountable government, and supports the principles that: - People have a general right to know what information government holds about them, - Well-informed people are more likely to become involved in both policy making and government, and - A government open to public scrutiny is more accountable The Victorian Freedom of Information Act 1982 (the Act) commenced on 5 July 1983 and became fully operational on 5 July 1984. All agency employees are expected to comply with the Act’s objectives and obligations. The Victorian Government has instructed departments and agencies to interpret the Act in a manner that reflects a willingness to disclose information.
EPWS 310 EXAM 2, Fall 2014 1. (2 pts each) Define the following terms: 2. (9 pts) Which classes of fungi that we discussed in class produce zoospores? What environmental conditions favor diseases caused by these fungi? What methods can be used to control all these fungi? 3. (4 pts) What are two types of diseases induced by Olpidium, a Chytridiomycete? 4. (12 pts) Compare the primary inoculum, dissemination mechanism, secondary inoculum, the fungal component that overseasons and the location where overseasoning occurs, and control of club root of crucifers and powdery mildew of roses. 5. (6 pts) List and explain 4 ways that Oomycetes differ from Glomeromycetes (Zygomycetes). 6. (6 pts) Both Rhizopus soft rot and damping off are problems primarily in man-made environments. What are the environmental conditions that promote each disease, what spore types are most responsible for spread, and how can each be controlled? 7. (12 pts) Compare the dissemination methods, primary inoculum, secondary inoculum, overseasoning methods and forms, and control of late blight of potato and downy mildew of grape. (Extra credit 2 pts each, give the genus of the causal agent). 8. (4 pts) Phytophthora ramorum, which causes sudden oak death, is of major concern in the US and Europe. Describe control of the disease in trees and shrubs in homeowner lawns and public parks. 9. You have been called to a local field to look at wilted dying chile plants. You suspect root rot. a) (8 pts) How can you microscopically determine if you are looking at root rot caused by Phytophthora capsici or root rot caused by Pythium? Which fungal components would look similar and which would differ? Which of these two diseases would you more expect to find on young chile in April? On mature chile in August? b) (18 pts) You decide that the wilt and root rot is caused by Phytophthora capsici. You need to explain the disease to the grower. Draw the disease cycle, indicating what serves as primary inoculum, secondary inoculum, how it is disseminated, and how it overwinters. Include the sexual gametes and show where plasmogamy, karyogamy, and meiosis occur. c) (3 pts) The grower wants to know if changing their irrigation practices will help control the disease. Do you answer yes or no, and what control measures will you advise? (8 pts extra credit) Outline the disease/life cycle of peach leaf curl. What is the causal agent (genus and species)? How can the disease be controlled? (4 pts extra credit) On Monday, a woman from a local day care brought in an orange-colored growth from the bark chips on their playground. She wants to know if it will hurt the children, if she should be worried about it and how she can get rid of it. The bag is in the front of the room. Please come look at it when making your decision. What answers should I provide the woman when I email her?
Introduction and materials This 'Living' unit introduces the people who live and work in Antarctica, and explores the personal and professional qualities they need to have, the effects of isolation and the constraints of their temporary home. It also shows how people chronicle their individual experiences and provides a springboard for discussing career options. Students will discover what it is like to live and work in Antarctica. The activities explore the nature of community, and the qualities people need to live in harmony. Because of the wide range of jobs and diversity of people in an Antarctic community, it can be used as a microcosm of society. It provides a way of exploring human dynamics and gender equity issues, and learning conflict resolution skills. Students can research the following questions using the materials listed below. - What jobs do people do in Antarctica? - What skills do they need for these jobs? - What special personal qualities do people need in order to live and work harmoniously with others in an isolated community? - What sorts of difficulties are experienced by people living in Antarctica? - Do women or children live in Antarctica? - What do expeditioners eat in Antarctica? - What is involved in planning an expedition? - Expeditioner profiles - Vocational guidance test part A [PDF] - Vocational guidance test part B [PDF] - Scoring the vocational guidance test [PDF] - Mini job chart [PDF] - List of expeditioner positions and career types [PDF] - Conflict resolution checklist [PDF] - An 11-year-old's observations of Antarctica [PDF] - Food list per person per year [PDF] - Field ration pack list [PDF] Useful information - books, videos, websites and places to visit - is listed in the references and resources section.
Quantum entanglement has been called "spooky action at a distance" by Einstein and has often been called spooky or weird since then. Recently two diamonds, big enough to see with your eye, were observed to have entangled quantum mechanical states. This is the first time solid objects at room temperature have been measured to be in an entangled state! [1-4] It's a big deal! Read on to learn about quantum weirdness, entanglement, and this experiment. Figure 1. (a) Experimental equipment used to measure entangled diamonds. (b) Diamond used in experiment. On the atomic scale physicists use quantum mechanics to describe the physical properties of objects and predict outcomes of measurements. Quantum mechanics was created because the classical ways of describing things, which include Newton's laws, were failing when applied to very small things like atoms. Quantum mechanics is excellent at predicting and describing so many observable phenomena on the atomic scale. It hasn't failed yet! Unfortunately, it has some strange things built into it that make even experts in the field ponder its weirdness. Many of the founders of quantum mechanics were unhappy with it. You may have heard about some of the weirdness that results from a quantum mechanical approach. The not-so-weird but important Heisenberg Uncertainty Principle: You can never know exactly where something is and how fast it is going at any instant because the act of measuring causes the object's speed or location to be altered. This holds true for all systems of variables related to each other in a special way. Frequency and time are examples of related variables, and the energy of a state in quantum mechanics is directly related to the frequency, so energy and time are related too. Superposition of States and Schrödinger's Cat: Place a cat in a closed box with a lethal substance in a very breakable container. Describe the state of the cat without taking any measurement. Quantum mechanics describes the cat as dead and alive simultaneously! The cat's state before it is measured is said to be in a superposition of states. It is the act of measurement that forces the cat to be in a single state, hopefully alive! You may want to look at this short video of the effect created by Kwiat's research group at the University of Illinois Urbana-Champaign. Einstein, Podolsky, Rosen (EPR) Paradox and Entanglement: This originated as a thought experiment designed to point out some of the concerns of quantum mechanics. Think about one system with its very own "wave function" that describes its physical state. Then consider transforming this single system to two systems located apart from each other. These two systems don't have two identical wave functions. Instead the two systems share the original wave function, and are connected, or entangled, even over very, very long distances. An example of this might be one particle decaying into two new particles, or an atom with extra internal energy emitting that energy in the form of a packet of light energy (photon). The two new objects can travel in any old direction as long as they travel in such a way that the sum of their momenta (mass times velocity) is the same as the initial object's momentum, and the sum of their energies (plus any that may have gone into the surroundings) is the same as the energy of the initial object. If one of the objects interacts with something else, then things change.
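In symbols (a generic textbook statement of the conservation laws just described, not notation taken from the diamond experiment), if a parent object with momentum $p_0$ and energy $E_0$ splits into two pieces, then

\[ p_1 + p_2 = p_0, \qquad E_1 + E_2 + E_{\text{surroundings}} = E_0 . \]

Because the two pieces are tied together by these conservation laws, measuring the momentum of one piece immediately tells you the momentum of the other, $p_2 = p_0 - p_1$, without touching it.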
For the entanglement to last they can't interact with anything else as they move very, very far away from each other. Measure the velocity of one object, and the other object's velocity is known with certainty before even measuring it. This entanglement between objects allows you to sometimes make better measurements than if the objects were not entangled. If you measure the velocity of the first particle and the location of the second particle, you can know the velocity and location of each particle. Entanglement can be thought of as not being able to determine the exact physical origin of the outcome of a measurement, so you always have to describe the state before the measurement as a superposition of possible states, like Schrödinger's Cat. For example, if you have a single packet of light energy (photon) go through a 50-50 beam splitter (partially reflective glass), the photon has a 50% chance of reflecting off the beam splitter and a 50% chance of going through the beam splitter. The photon after the beam splitter is in an entangled state, described by a superposition of the photon going through and reflecting off of the beam splitter. If a detector is set up on each pathway, then only one of the detectors will register a photon, and it is impossible to predict which one. The measurement "collapses" the photon wave function into one of the two possibilities at random on each trial of the experiment. You cannot learn from this measurement whether the photon is in an entangled state or not, just as you cannot know whether the cat was in a superposition state before you learned whether it was alive or dead. If, however, a second beam splitter is set up to recombine the two paths, and a detector measures the photon after the two paths have been combined, then the photon will always end up going the same way out of the second beam splitter. This is a signature that the two paths are entangled, and provides proof of entanglement. This is just what has been done with these two macroscopic diamonds! The big deal about entangling two diamonds In quantum mechanics this connection between entangled states can easily be disrupted by the many other interactions from the many other objects around. Designing an experiment to measure entanglement isn't the easiest, but certainly is possible and has been done a number of times. The big deal is that it's never before been done with solid objects at room temperature. The Ultrafast Quantum Optics Group at the University of Oxford set up two diamonds near each other (15 cm apart) on a lab table (Figure 1a), and did not change any other environmental condition (like temperature or pressure). Diamonds are made of carbon atoms arranged in a special way. The research group selected a strong laser pulse with a frequency just right for being absorbed by a carbon atom in a diamond; through another process (Raman scattering), this absorption results in a slightly lower frequency photon (packet of light energy) being emitted and an increase in one of the diamond's vibrational energy states. The slightly lower frequency light emitted is said to be red-shifted. The extra internal vibrational energy is quickly distributed over the carbon atoms in the diamond because of the way the carbon atoms are placed (lattice structure) to make a diamond. The laser pulse was sent through a 50-50 beam splitter toward the two diamonds so both diamonds were simultaneously pumped with light (50% going toward the first diamond, 50% going toward the second diamond).
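As a brief aside, the path superposition described above can be written schematically (generic quantum-optics notation, not taken from the paper): a single photon that meets a 50-50 beam splitter ends up in the state

\[ |\psi\rangle = \frac{1}{\sqrt{2}}\big(|\text{path 1}\rangle + |\text{path 2}\rangle\big), \]

up to a relative phase set by the optics. A detector on either path fires at random with probability 1/2, while recombining the paths at a second beam splitter lets the two terms interfere, which is why the photon then always exits the same port. The split pump pulse just described plays the same role here: after the beam splitter it addresses both diamonds at once.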
If energy is absorbed from the laser pulse, the diamond's vibrational state would increase, and red-shifted (lower frequency) light energy would be emitted. This red-shifted light provides evidence that a vibrational state exists in the diamond. They combined the red-shifted light from both possible paths and measured it with a detector (Fig 3a). By measuring exactly one red-shifted photon (packet of light energy) they were able to infer that the two diamonds should be entangled: they know that one of the diamonds has extra vibrational energy (and emitted the photon), but they can't tell which one. Figure 3. (a) Generating an entangled state: A single laser pulse pump beam goes through a 50-50 beam splitter and simultaneously pumps the top diamond and the bottom diamond. Energy is absorbed from the pump pulse by the diamond(s), causing an increase in internal vibrational energy and the emission of red-shifted light (lower frequency than the pump beam). The red-shifted light from the two paths is combined and measured. (b) Verifying entanglement: A probe laser pulse (different from the pump pulse) goes through the 50-50 beam splitter and simultaneously probes the top diamond and the bottom diamond. When it reaches a diamond it causes the diamond to give up its extra vibrational energy and emit light with a higher frequency (blue-shifted) than the probe field. The blue-shifted light from the two paths is combined and measured. Image credits: Ian Walmsley's Ultrafast quantum optics and optical metrology lab at the University of Oxford. To verify the entangled state the researchers decided to probe the increased vibrational energy state of the diamonds. To do this they would have to send a laser probe beam to the diamonds before the diamonds transferred their vibrational energy to the environment, that is, before the diamonds had a chance to decohere. The increased vibrational energy state of a diamond is short lived. A diamond crystal will give up its energy to its surroundings in an average time of 7 picoseconds or 7 trillionths of a second. Scientists call this the average decay time or lifetime of the increased energy state. Seven trillionths of a second was not a problem for this ultrafast optical lab! They were able to probe this vibrational state by sending a second, ultrafast, laser probe pulse within about 0.35 picoseconds (0.35 trillionths of a second) of the initial laser pulse. When the probe laser pulse reaches a diamond in an increased vibrational state, it causes the diamond to emit its extra energy resulting in a decreased vibrational energy state of the diamond, and the emission of a photon with a higher frequency than the probe. The higher frequency photon is said to be blue-shifted. Both possible paths of the blue-shifted light are combined and measured (Fig 3b). There is no way to know which diamond the red and blue shifted light came from, which means the diamonds must be described as sharing a vibrational state. The information from the measurements of the red-shifted and blue-shifted light provides a lower limit on the entanglement of the vibrational states of the two diamonds. If the red-shifted and blue-shifted photons are entangled, then the vibrational states of the diamonds are entangled by at least the same amount. The coincidence counts of the two types of photons were measured and found to be highly correlated, indicating highly correlated vibrational energy states of the two diamonds.
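Schematically (standard notation for this kind of heralded entanglement, not a formula quoted from the paper), detecting a single red-shifted photon that could have come from either crystal leaves the pair of diamonds in a state of the form

\[ |\psi\rangle = \frac{1}{\sqrt{2}}\big( |1\rangle_{L}|0\rangle_{R} + e^{i\phi}\,|0\rangle_{L}|1\rangle_{R} \big), \]

where $|1\rangle$ means one quantum of vibrational energy (a phonon) in the left (L) or right (R) diamond and $\phi$ is a phase set by the optical path lengths. The blue-shifted probe measurement then checks whether the two terms really do interfere, which is what the correlation analysis described next quantifies.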
By analyzing their results to see how related these two measurements were, they found entanglement between the diamonds to be 0.85, where perfect entanglement is equal to 1.00! [1] This means that the two diamonds must be described as sharing a single quantum of vibrational energy. Future research and applications Quantum entanglement is used in quantum communication, quantum teleportation, quantum computing, and quantum cryptography. The results of this experiment demonstrating entanglement may be used for further tests on the fundamentals of quantum mechanics, and will have implications for the applications listed above that utilize entanglement. References, resources, and links 1. Lee, K.C. et al. Entangling Macroscopic Diamonds at Room Temperature, Science 334, 1253 (2011). 2. Duan, L-M. Quantum Correlation Between Distant Diamonds, Science 334, 1213 (2011). 3. Matson, J. Quantum Entanglement Links 2 Diamonds, Scientific American, Dec 2011. 4. Walmsley, I. and Nunn, J., Entang-bling, 2Physics, 25 December 2011 5. Ian Walmsley, Clarendon Laboratory, Dept. of Physics, University of Oxford, UK 6. Betz, Eric. Quantum Migration
By GreatSchools Staff Technology in the third grade classroom can provide a rich, entertaining range of learning opportunities that engage young minds and get them excited about all aspects of the curriculum. Your child will use technological tools to enhance her understanding of core subjects, including language arts, science, and math. According to the Common Core Standards Initiative that the majority of states adopted in 2010-2011, third graders should master certain basic technology skills that can be used in core subjects like reading, writing, science, and math. (Many states also follow the National Educational Technology Standards for Students.) In third grade, your child will build on essential reading and writing skills, memorize math facts, and, through the lens of science, learn about the world around them. While using technology is no substitute for reading a book, mastering the multiplication tables, or conducting research for a science project, it's an important tool to supplement classroom instruction. Even more important, technological literacy is essential for your child's future. Language arts — once exclusively the realm of paper and ink — get an enormous boost from technology. Third grade students learn basic essay writing skills and begin to write short opinion essays and informational reports, and they're likely to do some of their research on the Internet. (See the Common Core Standards.) Audio books and audio-enhanced text books allow third graders to immerse themselves in a culture of storytelling, fit more books into their busy lives, allow books to compete with other media for entertainment value, and get hooked on reading as a lifelong pleasure. Using a tablet or a computer, students can learn to look up unfamiliar words to master new vocabulary and practice pronunciation. And digital book creation, video editing, and animation tools enable students to become authors of their own stories. A word processor — with grammar correction — can improve students' grammar and spelling as they write, by noting mistakes as they happen and offering corrections. Technology helps kids master math concepts with games and apps that illustrate more complex multiplication and division, as well as fractions and geometric concepts. A host of educational apps for tablets ask children to touch and manipulate math concepts on the screen. Math-based computer games transform rote drills into games that take advantage of gaming fever to drill facts into memory. Online animations and multimedia lessons can turn a math lesson into entertainment that teaches as it enthralls; they also allow students to review a lesson whenever they wish. And the Internet brings concepts and teachers — outstanding teachers like Salman Khan of Khan Academy (which offers hundreds of video classes on math, science, and other subjects) — into the classroom to inspire young minds. To track and chart scientific data, your third grader may use spreadsheet programs like Excel. Your child may also be introduced to creating and using database software such as FileMaker Pro or Microsoft Access to classify information. Kids may work from templates in which a spreadsheet or database has already been created and they need to enter the information. Your third grader may contribute to a spreadsheet of the class's favorite foods or a database classifying their library of books. In an Internet-connected classroom, science is as close as the whiteboard, monitor, tablet, or computer screen.
At this level, children can watch close-up footage or animation of the human body, dinosaurs, space, or cells. They can play with animated versions of the elements in the periodic table or simulations of tornados or the night sky. Websites like Khan Academy, Brainpop.com, Discovery Education, and The Jason Project allow kids to access multimedia lessons and animations that transform science instruction into adventure. And to help young students imagine themselves as scientists, the teacher can invite working scientists — virtually — into the classroom and let students ask questions of the researchers themselves.
Exchanging People for Trade Goods When Europeans landed on the coasts of Africa they found societies engaged in a network of trade routes that carried a variety of goods back and forth across sub-Saharan Africa. Some of those goods included kola nuts, shea butter, salt, indigenous textiles, copper, iron and iron tools, and people for sale as slaves within West Africa. Gold, pepper, a little ivory, dried meat and hides were also exported in the Trans-Saharan trade routes along with a few “slaves” to the Middle East and beyond. Philip Curtin estimates this trade to have been no more than 500–4000 “slaves” a year (1990:40–41). From this trade and early West African slave trade by the Portuguese, a sizeable number of Africans ended up in Portugal and Spain. By the middle of the 16th century, 10,000 black people made up 10 percent of the population of Lisbon. Some had been manumitted. Some had purchased their freedom. Some were the offspring of African and Portuguese marriages and liaisons. Seville had an African population of 6000. These were some of the people accompanying Spanish explorers on the North American mainland. More importantly, this was the nascent beginning of the Transatlantic Slave Trade (Curtin 1990:40–41). All of the Sub-Saharan African societies discussed above participated in the slave trade as the enslaved or as slavers and brokers. While Europeans created the demand side for slaves, most historians would agree with John Thornton that African political and economic elite and leaders, although capable of defending their countries from seaborne European marauders, did the primary work of enslaving, transporting and selling Africans to slave traders on the African coast (Thornton 2002:36). Why Africans participated in the slave trade, given its drain on the most productive adults from Africa’s populations, is one of the enigmas of history. The seeds of rebellion, violence and war sown by the slave trade were perhaps even more disruptive to African societies. One answer might be that the institution of slavery already existed in African societies. However, slavery in Africa was different from the kind of slavery that evolved in the New World, particularly the English colonies, a topic discussed in the section below on Laws. The kind of slavery that became dominant on the American plantation was “special,” in Curtin’s words, “different from slavery in most of the Muslim world and West Africa (1990:40–41).” Most legal systems in Africa recognized slavery as a social condition according to Thornton. He comments that slaves constituted a class of people, captives or their descendants, over whom private citizens exercised the rights of the state to make laws, punish, and control. Although these rights could be sold, in practice people of the slave class who had been settled in one location for a sufficient time came to possess a number of rights, including immunity from resale or arbitrary transfer from one owner or location to another (Thornton 2002:43). Birmingham says there was no such thing as a class of slaves in Kongo, but that many people belonged to a transitory group of servile subjects. “These were people of foreign origin, people who had been outlawed for criminal acts, people who had lost the protection of their kinfolk, or become irredeemably indebted to others.
They differed from those enslaved by Europeans in that under normal conditions they were likely to be reabsorbed into society (Birmingham 1981:32).” Many of those enslaved and brought to the New World were people who had participated in local and long-distance trade. Depending upon their resources, they were skilled agriculturists; artisans of textiles, bronze, gold, ivory sculpture, jewelry and sacred objects; craftsmen of wooden tools, furniture, and architectural elements; as well as potters and blacksmiths. Others were skilled linguists in more than one African language and often one or more European languages as well. In some cases, they had developed trade languages that facilitated between-group communication even among African people whose language they did not know. Even though those who were enslaved became part of one of the most heinous of historical tragedies, Africans enslaved in North America and the few Africans who voluntarily migrated to the New World also became part of one of the greatest triumphs of human history. African people and their descendants helped to open the Western World, develop it, and create a new nation. Both stories are explored here in terms of their meaning in the African American perspective on heritage preservation and what constitutes African American ethnographic cultural resources. Although the cultural achievements, social history, and contributions to the opening of the New World by African peoples were known to Europeans, this information somehow became lost in the myth and mendacity of a developing European racialized worldview that persisted well into the 20th century. Why was African Heritage Lost? Over time, a number of factors combined to obscure knowledge of Africa as well as the African American presence and contributions to exploration, settlement and the founding of America. The most important of these factors was the development of the concept of race differences that occurred in conjunction with the opening of the New World. Renaissance thinkers used color as one of several criteria for the classification of people. The term “black” was used as an adjective to describe variations in skin color. In the 15th century when sub-Saharan Africans were first brought to Europe, people had little difficulty in seeing them as humans. By the end of the 15th century, when American Indians were brought to Europe, a shift occurred in European thinking. Europeans seeking explanations for why American Indians and Africans did not look alike and reasons why both were different from themselves began to gradually lump Indians and Africans together as examples of sub-human beings. They were viewed as the lowest human forms in the “Great Chain of Being” model of all living forms that had come down from the ancient Greek writers. Africans sometimes came to be called the “Missing Link,” suggesting they were less than human but more than an animal. Over the next 200 years as the economic importance of slavery grew, belief in the existence of different races of men became firmly established. It seems strange now that the Enlightenment movement that was based on notions of progress through reason and rationality could give rise to both the birth of a new nation based on the rights of man and an ideology that justified the enslavement of Africans. Elements of the 18th Century European Worldview - Human differences in appearance and behavior are the basis for classification.
- Ranking humans from high to low, based on the Great Chain of Being, is a vital aspect of systematic classification of human differences. - Outer physical characteristics of humans such as skin color, type of hair, body size or shape, are surface manifestations of inner realities such as intelligence and tendencies to different social behaviors. - Assignment of the highest rank to people with European physical attributes equated with superior intellect and “appropriate” social behavior, the lowest rank to people with African physical attributes equated with inferior intellect and “sub-human” behavioral tendencies. - Belief that physical attributes, behavior, inner tendencies and social rank are inheritable. - Beliefs that human “race-based” differences were created by nature or God so are fixed, unalterable and could never be bridged. (Pandian 1985; Smedley 1993) The 18th century scientist Linnaeus, influenced by Enlightenment positivist doctrine to seek scientific explanation for natural facts, developed a systematic classification of human beings in the first half of the 18th century. Count de Buffon, also influenced by positivism, introduced the term “race” in 1749 to distinguish six (6) varieties of humans based on color, shape of body, and disposition. Building on these fundamentals, by the end of the 18th century, all the elements of a folk concept and world view of race were formulated and accepted as a hierarchy of human inequality based on people’s race. Over time, the European race-based worldview was modified and extended by associated negative attitudes, beliefs, myths and assumptions about the world’s non-European people. Thus came into being myths about white superiority and black inferiority, about American Indians as “noble savages,” about Chinese as “inscrutable Orientals,” and other race-based stereotypes (Pandian 1985:70–95; Smedley 1993:25–28). This kind of worldview did not permit acknowledgement of African people’s social history and cultural achievements. The European race-based worldview was used as a rationalization for conquest of American Indians, enslaving Africans, and colonialism. The need to reinforce notions of white supremacy, African inferiority and African enslavement resulted in a legacy of historical omissions, suppression and misinterpretation of African social history and cultural heritage. Ideas of white supremacy included notions of cultural supremacy as well. On balance, it is important to note that other factors also contributed to the lost knowledge of African social history and culture. Most African civilizations passed on historical knowledge through oral traditions. Documentation of African life and culture before the Transatlantic Slave Trade is mostly descriptions in Greek, Arabic, and Portuguese written by travelers, merchants, and monks. Ideas of European cultural supremacy continued into the twentieth century and acted to suppress or misinterpret the African cultural heritage in African American culture. Even some African American social scientists, for example E. Franklin Frazier, misinterpreting the cultural patterns of their own people, viewed African American social and cultural patterns as pathological if they differed from Euro-American standard behaviors. E.
Franklin Frazier was one of Herskovits’ most vehement critics, stating in reference to Herskovits’ book The Myth of the Negro Past: “Nevertheless, the reviewer…[Frazier]…cannot agree with the author that to establish the fact that the Negro had a ‘cultural past’ and that the Negro’s ‘cultural past’ still influences his behavior will not alter his status in America (Frazier as cited in Long 1975:565).” In the early twentieth century, black and white sociologists projected a pathological view of “Negroes” ascribing deviations from European cultural behaviors to the slavery experience in the New World (Long 1975:564). Attempting to uncover lost knowledge and refute myth, mid-twentieth century anthropologists, archeologists, and historians, many of African descent, began to reexamine and reassess available data and to extend the scope of their investigations to formerly untapped data sources. Even so, the notion of African American culture as developing during and after enslavement continued to be advanced by leading social scientists and continues in contemporary publications (Mintz and Price 1976, 1992). More recent scholarly works revisiting the Transatlantic Slave Trade, African history, the history and archeology of colonial African Americans from the 16th century through the American Revolution refute earlier myths concerning African American culture. These scholarly works suggest the need for revisionist approaches to interpreting African American life and culture during the colonial period (Holloway 1990; Midlo Hall 1992; Eltis 2001; Walsh 2001). Who were the First Africans in America? Portuguese exploration of the African coastline first brought West Africans in contact with Europeans. As Africans participated in trade with the Europeans they developed linguistic skills and came to understand European commercial practices, cultural conventions, and diplomatic etiquette. By 1491, Kongo royalty had converted to Catholicism and the King of the Kongo sent his sons to be educated in the royal court of Portugal and the Vatican in Rome. Other West African ethnic groups sent their sons to be educated in Portugal. Portuguese and West Africans, particularly people from West Central Africa, formed families in Africa and in Portugal, and Luzo-Africans, a new class of people, emerged from these families. As the 15th century ended, Africans and Luzo-Africans lived in Portugal and Spain. Some were slaves and some free. At least two generations of Luzo-Africans had grown to adulthood. These are the kinds of people Berlin refers to as Atlantic Creoles. From their ranks, Africans, mostly but not exclusively men, sailed with the Portuguese and Spaniards for the New World (Thornton 1983; Berlin 1998:73; Restall 2000:171–205). From the very start, lack of white labor hampered Spanish exploration and settlement of the circum-Caribbean and West Indies Islands. King Ferdinand initiated the African slave trade on September 3, 1501 in a letter to the Governor of Hispaniola in which he said: “In view of our earnest desire for the conversion of the Indians to our Holy Catholic Faith, and seeing that, if persons suspect in Faith went there, such conversion might be impeded, we cannot consent to the immigration of Moors, heretics, Jews, re-converts or persons newly converted to our Holy Faith, unless they are Negro or other slaves who have been born in the power of Christians who are our subjects and nationals and carry our express permission” (Williams 1971:41–42).
In 1505, seventeen Africans were sent to work in copper mines in Hispaniola. Five years later, fifty more were sent, and so it began. The Portuguese controlled the slave trade leaving African ports. A royal asiento or contract with the Portuguese was thus required to send Africans as slaves to Spanish America. The Portuguese slave trade monopoly and the Spanish government’s concern with heresy led the Spaniards to turn first to the large population of Africans living in Spain for servants, soldiers, and other labor. Most of the Africans in Spain, or Ladinos as they were called, originated in the Kongo, arriving on the Iberian peninsula by way of the Portuguese. From the Spanish point of view, Ladinos had the advantage of being Catholic converts, having knowledge of Spanish customs and language. Some of these Ladinos were manumitted, born free or had purchased their freedom, and it was to them that the Spaniards first looked to supply labor for New Spain. By 1511, Spanish settlements existed on all of the islands in the Greater Antilles and white immigration had become insufficient to solve the island labor problems. The cost of conquering the mainland Indians decimated the ranks of the Spanish Conquistadors. More significantly, Indian depopulation was the inevitable outcome of their slaughter during battles of conquest, succumbing to European diseases against which they had no immunity, and being exploited as slave laborers. Another reservoir of labor was required to explore, fight, and develop a subsistence economy and an export economy. Spain looked to Africa or at least to African people for a greater supply of labor (Williams 1971:41–42). In 1517, an asiento was arranged between the Spanish Crown and private enslavers for the importation of four thousand Africans into Spanish America over the next 8 years. By 1540, thirty thousand had been imported into Hispaniola and more than one hundred thousand into all the Spanish dominions (Williams 1971:41–42). These Africans helped explore and settle Puerto Rico, New Spain, and Hispaniola, as well as Florida and New Mexico, within the borders of the contemporary continental United States. The Spanish monarch, Carlos V, began issuing more and more asientos in the 1590s in order to expedite the importation of slaves. The Africanization of Spanish American colonies would have long-ranging effects on the course of African American heritage. The Spanish church, Spanish law, Spanish organization of slave labor and the encroachment of the other European powers on colonial Spanish holdings in Florida, the Mississippi delta, and the Southwest all combined to create a sizeable but scattered population of free African Americans and at least one free African American community in Spanish America. These long-range effects, including the development of free African American people and communities in Spanish America, are explored further in Part II, African American Heritage, of this unit. The positive outcome of Spanish colonialism in terms of the development of a free African American population, from the perspective of African American heritage, was overshadowed in significance by the Spanish transatlantic slave trade, a harbinger of things to come.
“Some say the world will end in fire, some say in ice,” the poet Robert Frost mused in 1920. Frost famously held “with those who favor fire,” and that poetic view surprisingly coincides with mainstream scientific consensus about the end of the world, which states the sun will in some seven billion to eight billion years evolve into a red giant star that will scorch and perhaps even engulf Earth. Yet when that happens, Earth will already have been dead for billions of years, and will more resemble present-day Venus. As the sun slowly brightens over time on its path to becoming a red giant, it will eventually cross a critical threshold in which its luminosity surpasses our planet’s ability to dissipate absorbed radiation out into space. At that point, somewhere between one billion and three billion years from now, Earth’s surface temperature will steadily rise until the boiling oceans throw a thick blanket of steamy water vapor around the planet. All that water vapor, itself a potent greenhouse gas, will raise temperatures higher still to cook another greenhouse gas, carbon dioxide, out of Earth’s rocks. The end result will be a “runaway greenhouse” in which the planet loses its water to space and bakes beneath a crushing atmosphere of almost pure carbon dioxide. Earlier this year, for the first time in human history, atmospheric carbon dioxide reached 400 parts per million (ppm), surpassing a preindustrial average of about 280 ppm that has prevailed with slight variations for the past several million years. Pessimistic projections from the United Nations Intergovernmental Panel on Climate Change forecast atmospheric carbon dioxide levels soaring beyond 1,000 ppm later this century. As the world warms not from a brightening sun but from fossil fuel–burning humans, some scientists have wondered just how close our planet might be to tumbling into a runaway state. Studies in the 1980s and ‘90s suggested the present-day Earth was safe against a runaway, but a paper published this week in Nature Geoscience argues that “the runaway greenhouse may be much easier to initiate than previously thought.” Indeed, the study suggests that without the cooling effects of certain types of clouds, modern Earth would already be well on its way to broiling like Venus. (Scientific American is part of Nature Publishing Group.) According to the study’s lead author, Colin Goldblatt of the University of Victoria in British Columbia, the disturbing result hinges less on carbon dioxide and more on humble water vapor, which recent investigations have shown absorbs solar radiation more efficiently than previously believed. “The old answer was that a runaway on Earth right now was theoretically impossible,” Goldblatt says. “Even if you evaporated a big chunk of ocean it would just rain back out, because the water vapor would radiate away more thermal energy than it absorbed through sunlight. Our new calculations show that a water vapor–rich atmosphere absorbs more sunlight and lets out less heat than previously thought, enough to put the Earth into a runaway from which there would be no return.” The upside of the new study is that even though a climate runaway may be possible in theory, it remains very difficult to cause in practice through human greenhouse gas emissions. “We’ve estimated how much carbon dioxide would be required to get this steamy atmosphere, and the answer is about 30,000 ppm of atmospheric carbon dioxide, which is actually good news in terms of anthropogenic climate change,” Goldblatt says. 
Thirty thousand ppm is about 10 times more carbon dioxide than most experts estimate could be released from burning all available fossil fuels, he notes, although such high values could in theory be reached by releasing large amounts of carbon dioxide from the Earth’s vast deposits of limestone and other carbonate rocks. A cloudy outlook Not everyone is convinced Goldblatt’s result is valid, however. James Kasting, a geoscientist at The Pennsylvania State University, suspects that even in theory an anthropogenic runaway remains out of reach of humanity. Kasting performed many of the earlier seminal studies that seemed to rule out a present-day runaway, and with his student Ramses Ramirez is currently polishing a new study that reinforces those conclusions. No matter how much carbon dioxide is pumped into the present-day Earth’s atmosphere in Kasting’s models, the resulting heating is insufficient to cause the planet to rapidly boil off its oceans. “The bottom line,” Kasting says, “is that we do not get a runaway.” Like Goldblatt’s team, Kasting’s group studies Earth’s climate using a one-dimensional model that simulates the absorption, transmission and reflection of sunlight by a single surface-to-space strip of atmosphere. These models’ sophisticated treatment of light’s interactions with air closely reproduces the observed warming effects of carbon dioxide, water vapor and other greenhouse gases, yet they contain only the crudest approximations of Earth’s changing weather and surface. Such models are particularly poor at accounting for the complex effects of clouds, which, depending on where and how they form, can either cool or heat the planet: Thick, low-lying clouds tend to reduce temperatures by reflecting greater amounts of sunlight back to space, whereas high, thin clouds will warm the planet by letting light pass through and then trapping more of the absorbed heat. The differences between Kasting’s and Goldblatt’s conclusions largely boil down to Kasting’s 1-D approximations of clouds providing slightly more cooling whereas Goldblatt’s provide slightly less. Three-dimensional modeling is the only way around this impasse, yet current 3-D climate models aren’t up to the task of simulating how Earth’s clouds and weather will change within a very steamy or CO2-rich atmosphere. “Using today’s best models to address these extremes is like trying to drive up a mountain in a Honda Civic,” Goldblatt says. “A Civic can take you coast to coast on paved roads, but take it off-road and you run into problems. Today’s models are like that right now—they aren’t designed for extreme atmospheres. If you want to model the runaway greenhouse, you need the equivalent of a Humvee for your climate model that will take you to these wild places.” Kasting’s group recently received funding from NASA to work with other teams to develop better 3-D models, and a handful of other research groups in Europe are also pursuing similar goals. Out of the fire, into the frying pan Outside of better models, other useful constraints on the runaway greenhouse scenario come from the Earth’s long history. Measurements of 56-million-year-old sedimentary rocks have revealed an event early in the Cenozoic era called the Paleocene–Eocene Thermal Maximum (PETM) in which a millennia-scale pulse of greenhouse gases warmed the globe.
The PETM pulse seems to have been roughly equivalent to what humans could release through burning all recoverable fossil fuels, and may have warmed the planet in excess of 10 degrees Celsius, but clearly no catastrophic runaway occurred, for otherwise we would not now be here. If it didn’t happen then, many researchers suggest, it won’t happen now from a similar, anthropogenic spike of greenhouse gas. “All these geological records tell us that even with very high levels of atmospheric carbon dioxide in the past, Earth avoided runaway,” Goldblatt acknowledges. “But that doesn’t tell us how much margin of error we have today or how close things came in the past. It’s a bit like walking around on top of a foggy cliff and not knowing whether you’re a meter or a kilometer from the edge. Even simple modeling can let you work out some hard limits to help guide behavior.” Already a wealth of modeling suggests that easily achievable amounts of global warming would fall far outside safety margins long before Earth began any runaway transition to Venus. In 2010 a study from Steven Sherwood at the University of New South Wales in Sydney and Matthew Huber at Purdue University calculated that warming slightly in excess of 10 degrees C—like that of the PETM and of pessimistic scenarios for future fossil-fuel burning—could render large portions of the planet uninhabitable for many creatures. Unprotected humans and other warm-blooded mammals can overheat and die in humid conditions hotter than about 35 degrees C, because their metabolisms produce more heat than can be easily dissipated into the surrounding air. The latest results from Kasting’s group, which are still under review, suggest that such conditions could prevail across much of the planet if human civilization burns enough fossil fuel to quadruple atmospheric levels of carbon dioxide. Reaching such dangerous levels “is certainly doable,” Huber says. “It’s our decision whether or not to dedicate the next century to burning these reserves.... There used to be subtropical forests near the poles 50 million years ago, and that doesn’t sound so bad. But the fossil record closer to the equator is really poor, and that may be an indication that life was extremely stressed during these warm periods. If over half the surface area of the planet becomes inhospitable, it will not render Earth uninhabitable, but it will be unrecognizable and existentially challenging for the majority of the people, species and communities on Earth.” As nightmarish as a runaway greenhouse seems, whether or not modern Earth is susceptible to it should perhaps be seen as essentially an academic point. Microbes could endure and even flourish on a planet at the brink of runaway, but people would still be steam-cooked whether or not such a hothouse world tipped over into a more Venusian climate. Leaving aside other effects of global warming like rising seas, stronger storms, longer droughts and plummeting biodiversity, Kasting says, “the problem of heat stress alone could become lethal to humans well before any runaway happens, and that danger may be much closer than previously realized. This is serious enough to warrant our full attention.”
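Stepping back from the modeling debate, the "threshold" language used throughout this piece boils down to a simple schematic energy balance (a generic textbook sketch, not any one group's model). A planet absorbs roughly

\[ F_{\text{abs}} = \frac{(1-\alpha)\,S}{4} \]

watts per square meter of surface, where $S$ is the solar constant and $\alpha$ the planetary albedo. It cools by radiating heat to space, but a very moist atmosphere cannot emit more than a certain maximum outgoing thermal flux (on the order of 300 watts per square meter in published estimates), no matter how hot the surface becomes. If the absorbed flux exceeds that ceiling, the surface keeps warming and the oceans boil away; that tipping point is the runaway greenhouse.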
Islands and coastal regions are threatened the most Durham/Vienna/Leipzig. The distribution of established alien species in different regions of the world varies significantly. Until now, scientists were uncertain about where the global hotspots for established alien species are located. An international research team that includes Marten Winter and Carsten Meyer from the German Centre for Integrative Biodiversity Research (iDiv) and Leipzig University is the first to provide an analysis of these hotspots: According to their study, most alien species can be found on islands and coastal regions. The study was published in the renowned journal Nature Ecology and Evolution on 12 June 2017. Humans are responsible for the movement of an increasing number of species into new territories which they previously never inhabited. The number of established alien species varies according to world region. What was previously unclear is where the largest numbers of established alien species could be found and which factors characterise their distribution. An international team consisting of 25 researchers under the leadership of Dr Wayne Dawson from the University of Durham (United Kingdom), who began his research on this topic at the University of Konstanz, created a database for eight animal and plant groups (mammals, birds, amphibians, reptiles, fish, spiders, ants and vascular plants) that were found to occur in regions outside of their original habitat. The study of the distribution of these species led the research team to identify 186 islands and 423 mainland regions in total. This project allowed the researchers to illustrate the global distribution of established alien species within a large number of organism groups for the first time. Most important result: The highest number of alien species can be found on islands and in the coastal regions of continents. The island of Hawaii was found to have the most alien species, followed by the North Island of New Zealand and the Lesser Sunda Islands of Indonesia. The researchers also examined the factors responsible for the number of alien species in any one region. Lead author Dr Wayne Dawson, from Durham University’s Department of Biosciences, said: “Our research shows that islands and mainland coastal regions contain higher numbers of established alien plants and animals, and this may be because these areas have major points of entry like ports. In general, regions that are wealthier, and where human populations are denser also have more alien species, but these effects are stronger for islands.” These factors increase the likelihood of humans introducing many new species to an area. This results in the destruction of natural habitats, which in turn allows non-indigenous species to spread. Islands and coastal regions seem to be particularly vulnerable because they occupy leading roles in global overseas trade. “Hawaii and New Zealand lead the field for all examined groups”, explains participating ecologist Dr Franz Essl from the University of Vienna: “Both regions are remote islands that used to be very isolated, lacking some groups of organisms altogether - such as mammals, for instance. Today, both regions are economically highly developed countries that maintain intense trade relationships. These have a huge impact on the introduction and naturalisation of non-indigenous species”.
The presence of large numbers of alien species across many regions of the earth comes with serious consequences, especially in cases where indigenous species are driven out and natural habitats are changed. This is very problematic with regard to islands since many indigenous species tend to exist only on the island itself and are therefore particularly vulnerable to the threat of alien invaders. "Islands, such as New Zealand and Hawaii, are often geographically isolated and have, in evolutionary terms, unique floras and faunas, which were not at all prepared for the many different types of introduced species," explains the ecologist Marten Winter of iDiv. Various laws and treaties designed to reduce the spread of alien species have been passed around the globe. "That is precisely the reason cross-border cooperation in Europe, which has now begun with the EU regulation on invasive species, is important and necessary," says Winter. New Zealand has already passed comprehensive legislation designed to prevent the introduction of further alien species over the past few years. And on some smaller islands, alien predators such as rats or mice have been successfully eliminated. These examples show that it is possible to take successful action. W. Dawson, D. Moser, M. van Kleunen, H. Kreft, J. Pergl, P. Pyšek, M. Winter, B. Lenzner, T. Blackburn, E. Dyer, P. Cassey, S. Scrivens, E. Economo, B. Guénard, C. Capinha, H. Seebens, P. Garcia-Diaz, W. Nentwig, E. Garcia-Berthou, C. Casal, N. Mandrák, P. Fuller, C. Meyer, and F. Essl (2017): Global hotspots and correlates of alien species richness across taxonomic groups. Nature Ecology and Evolution. DOI: 10.1038/s41559-017-0186 The study was supported by the European Commission (COST Action TD1209), the German Science Foundation (DFG), the Volkswagen Foundation through a Freigeist Fellowship and others. Press release of Durham University: https://www.dur.ac.uk/news/newsitem/?itemno=31655 Global Naturalized Alien Flora (GloNAF) https://glonaf.org/
Anaximenes was another resident of Miletus, the last of the Milesian philosophers. He was the student of Anaximander, though he is generally seen as taking a step backward from his great mentor. His one significant accomplishment was that he was the first person to propose a mechanism by which the physis (in his case, a misty air) transforms into the plurality of objects we see around us in the observable world. Like Anaximander's Unbounded, Anaximenes' aer is unlimited and inexhaustible. Aer, however, is definite. It is something like mist, a breathy thing. Anaximenes arrives at his physis by observing living creatures. What makes a creature alive, he observes, is that it breathes. A breathy thing, which he calls soul, both holds together and guides the living creature. There must be some similar element, he reasons, that performs that same function for the whole cosmos. An argument of this form, which reasons from the human being to the whole cosmos, is often called a microcosm/macrocosm argument. It was used frequently in ancient Greek medicine, but this is its first appearance in natural philosophy. Most commentators view Anaximenes' choice of aer for physis as a big step backward from Anaximander's Unbounded. After all, the Unbounded had the advantage of being dissociated from the changing elements that it was supposed to explain. But it is not that hard to see why Anaximenes might have believed that his physis was superior to the Unbounded. First of all, aer is not just a theoretical entity; we have a reason to believe it exists, and we can even observe it. In addition, it is not so nebulous and vague a substance, and so we can better understand its connection to the objects around us; we can conceive of how it gave rise to the opposites, whereas with the Unbounded it is difficult to understand how something with no qualities can act as the source of all the qualities in the world. Anaximenes is able to give us an account of how his physis gives rise to the plurality, something that Anaximander, presumably, would have been hard-pressed to do. Anaximenes is the first to explicitly include the processes by which his physis is transformed into the plurality of observable objects. Like most other processes the Milesians proposed, this one involves the eternal motion of the physis. As aer moves it can either become rarefied or condensed. When rarefied, aer becomes fire. When it condenses just a little it becomes wind. Condense it more and it becomes clouds, then water, then earth, and finally, in its most condensed form, stone. In this way, Anaximenes is able to derive all the qualities in the world out of quantity. (By laying out all of these familiar substances in series, Anaximenes makes an important advance: he shows that the elements of the world are not separated by qualitative gaps, but that they instead form a continuity.) It is tempting to view the process of rarefaction and condensation in the mechanistic terms through which we understand these processes. It is unlikely, though, that Anaximenes believed his process to involve particles moving further apart and closer together. It is not impossible, though, and if this is the case then he can be viewed as a proto-atomist. Like a good Milesian, Anaximenes provides us with evidence for the claim that rarefaction and condensation of air can give rise to qualitative changes. In particular, he provides us with evidence that condensation gives rise to coldness, and rarefaction to heat.
His first piece of evidence comes from human breath. If we hold our lips far apart and breathe out, the resulting breath is hot. If, on the other hand, we purse our lips, forcing the air into a smaller space, the resulting breath is cool. As another confirming instance, Anaximenes points to water, snow, and ice. Water, the least condensed of the three, is warmest, ice is coldest, and snow falls somewhere in between. As with his identification of aer as the physis, Anaximenes takes a step backward when it comes to the question of earth's support. Earth, Anaximenes claims, rests on a cushion of air. Because earth is flat, it covers this air like a lid and cannot be budged by wind.
5th - 6th grade
$5 per student
Difficulty of Project: Easy; materials can be easily obtained
Approximate Time Required to Complete the Project (Including analysis and write-up)
What is the project about? The static electricity project will allow students to test the concepts of positive and negative charges by using various objects. Students will see first-hand the power of static electricity.
What are the goals? The goal of the static electricity project is for students to put their knowledge of protons, neutrons and electrons into use by observing various electrical charges. Students should be able to identify whether an object is an insulator or a conductor.
What materials are required?
- Balloon (2 per student)
- String (2 feet per student)
- Hair (hair does not need to be cut - any student with hair is fine)
- Aluminium can (1 per student)
- Wool fabric (small piece per student)
Where can the materials be found?
- What type of charge does a proton have?
- What type of charge does an electron have?
- What type of charge does a neutron have?
- What is static electricity?
For the parent/student, what terms and concepts are required to better understand the project? The concept of positive and negative charges is essential for this experiment. Also, the concepts of attraction and repulsion are very important.
This experiment has 3 parts. Each part should be conducted separately, then all results should be compared for a final analysis.
Part 1 - Wool Fabric
- Begin by blowing up the balloons. Tie the ends of the balloons into knots, and then tie one piece of string onto the end of each balloon (about 6 inches of string per balloon is enough).
- Rub the first balloon against the wool fabric, then rub the second balloon against the same wool fabric. 4 or 5 strokes against the wool fabric will suffice. Handle the balloons via the attached string - do not touch the surface of the balloon itself.
- Hold the balloons close together and observe/record their behavior. (Do the balloons cling together? Are they attracted to each other?)
Part 2 - Hair
- With the same inflated balloons used in Part 1, begin by rubbing one of the balloons back and forth on your hair (or someone's hair).
- Slowly pull the balloon away and observe/record the behavior of the hair. (Does your hair react in any way?)
Part 3 - Aluminium Can
- Begin by placing the aluminium can on its side on a flat surface (table, floor, etc.).
- Using the same inflated balloons used in Parts 1 and 2, rub one of the balloons back and forth on your hair (or someone's hair).
- After rubbing the balloon on hair, hold the balloon approximately 1 inch away from the aluminium can and move the balloon slowly away from the can.
- Observe/record the can's behavior.
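For teachers who want a simple numerical companion to the experiment, the sketch below uses Coulomb's law to show why the balloons in Part 1 push apart and why the can in Part 3 rolls toward the balloon. The charge values, the induced charge on the can and the separation distances are assumptions chosen only for illustration; nothing is actually measured in this project.

```python
# A rough numerical companion to the balloon experiment. All charge values and
# distances below are assumed, illustrative numbers, not measurements.

K = 8.99e9  # Coulomb's constant, N*m^2/C^2


def coulomb_force(q1, q2, distance):
    """Signed Coulomb force between two point charges, in newtons.

    Charges are in coulombs and distance in metres. A positive result means
    the charges repel; a negative result means they attract.
    """
    return K * q1 * q2 / distance ** 2


# Part 1: both balloons rubbed on the same wool pick up like (negative) charge,
# so the computed force is positive, i.e. repulsive.
balloon_charge = -50e-9  # assumed charge of -50 nC on each balloon
print(coulomb_force(balloon_charge, balloon_charge, 0.05))

# Part 3: the charged balloon induces an opposite charge on the near side of
# the neutral can, so the computed force is negative, i.e. attractive.
induced_charge = 10e-9  # assumed induced charge on the can's near side
print(coulomb_force(balloon_charge, induced_charge, 0.03))
```

Running the sketch gives a positive force for the two like-charged balloons (repulsion) and a negative force for the balloon and can (attraction), which is the same pattern students should record in their observations.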
According to www.climate.nasa.gov, most climate scientists agree that the main cause of global warming is human expansion of the "greenhouse effect". The greenhouse effect is the warming that happens when the atmosphere traps heat radiating from Earth toward space. Some of the most abundant greenhouse gases include water vapor, carbon dioxide, methane, nitrous oxide, ozone, chlorofluorocarbons, and hydrofluorocarbons. Carbon dioxide is an especially important greenhouse gas; it is released through human activities such as burning fossil fuels and through natural processes such as volcanic eruptions. Carbon dioxide levels in the air are at their highest in 650,000 years, measured at 407.06 parts per million on October 17, 2017. Temperatures are very likely to continue to increase: the projection for the next century is a rise of 2.5 to 10 degrees Fahrenheit. Several government programs and agencies have contributed to this research, including the Climate Data Initiative, the U.S. Climate Resilience Toolkit, the National Oceanic and Atmospheric Administration, and the National Climate Assessment 2014. There are multiple pieces of evidence that lead scientists to accept the reality of global warming. One is that sixteen of the seventeen warmest years in a 136-year span have occurred since 2001, the lone exception being 1998. Another is that Arctic sea ice is now declining at a rate of 13.2 percent per decade; the 2012 sea ice extent was the lowest in the satellite record. Even with the recent snow that Prince George has been witnessing, satellite readings have revealed that the amount of spring snow cover in the Northern Hemisphere has decreased and that the snow has been melting earlier. As the Earth's temperature increases, the ocean is absorbing most of the added heat, with the top 2,300 feet of ocean warming by 0.302 degrees Fahrenheit since 1969. Climate change is a growing problem that needs to be discussed more often and with more concern. As citizens, we need to help reduce the flow of heat-trapping greenhouse gases into the atmosphere and adapt to life in a changing climate.
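To make two of the quoted rates concrete, here is a small calculation sketch. The 13.2 percent-per-decade sea ice decline and the 2.5 to 10 degree Fahrenheit projection come from the figures above; the time horizons and the choice to apply the decline as a steady compound rate are simplifying assumptions for illustration only, not a climate model.

```python
# Illustrative arithmetic only: compound-rate extrapolation of the quoted
# 13.2%-per-decade Arctic sea ice decline, and the midpoint of the quoted
# 2.5-10 degrees Fahrenheit warming projection converted to Celsius.

def remaining_fraction(decline_per_decade, decades):
    """Fraction of sea ice extent left after a steady percentage decline."""
    return (1 - decline_per_decade) ** decades

for decades in (1, 3, 5):
    frac = remaining_fraction(0.132, decades)
    print(f"After {decades} decade(s): {frac:.1%} of today's extent remains")

# Midpoint of the projected warming range (a temperature *difference*, so the
# Fahrenheit-to-Celsius conversion is a simple factor of 5/9).
low_f, high_f = 2.5, 10.0
mid_c = ((low_f + high_f) / 2) * 5 / 9
print(f"Projected warming midpoint: {mid_c:.1f} degrees Celsius")
```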
- The expression translates to 7 + −4.
Reminder: The AND keyword translates to mean "plus" because the leading keyword is SUM OF. With other leading keywords (discussed in the following sections), AND can mean other things. Also notice that you do not simplify the expression and get "3" for the answer, because you are just translating words into symbols, not performing the math.
Two other keywords on the addition keyword list, PLUS and INCREASED BY, can be correctly translated by the direct translation strategy. In the direct translation strategy, you translate each word into its corresponding algebraic symbol, one at a time, in the same order as written, as shown in Example 4.
Example 4: Translate the following: a number increased by twenty-four
- The expression translates to x + 24.
Some additional keywords, such as GAIN, MORE, INCREASE OF, and RAISE, are commonly found in story problems, as in Example 5.
Example 5: Translate the following story problem into a mathematical expression about the weight of the linebacker: The defensive linebacker weighed two hundred twenty-two pounds at the beginning of spring training. He had a gain of seventeen pounds after working out with the team for four weeks.
- The expression translates to 222 + 17.
Note: Not all numbers mentioned in a word problem should be included in the mathematical expression. The number "four" is just an interesting fact; it is not information you need in order to write an expression about the linebacker's weight. You may also be wondering why the answer isn't 239 pounds. That's because the question asks you to translate the story problem into a mathematical expression, not to evaluate the expression.
Example 6: Translate the following word problem into a mathematical expression about the cashier's current hourly wage: A cashier at the corner grocery was earning $6.25 an hour. He received a raise of 25 cents an hour.
- The expression translates to 6.25 + 0.25.
Note: The hourly wage is stated in dollars, and the raise is stated in cents. Any time you are adding two numbers that have units, make sure both numbers are measured with the same units; if they aren't, convert one of the numbers to the same units as the other. Numbers measured with the same units are called homogeneous units. In this example, you convert his raise, the 25 cents, to $0.25 because his hourly wage is measured in dollars, not cents, so the raise must also be in dollars.
Subtraction keywords also include leading keywords, keywords that can be translated one word at a time, and keywords that are found in story problems. Look at the following list of subtraction keywords:
- DIFFERENCE BETWEEN _____ AND _____
One subtraction keyword (DIFFERENCE BETWEEN) is a two-part expression that begins with a leading keyword that defines the corresponding AND. You can use the same methods of underlining and circling the keywords shown in the preceding section to translate these expressions.
Example 7: Translate the following: the difference between four and six
Here is how you translate Example 7:
- The expression translates to 4 − 6.
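The direct translation strategy described above is mechanical enough to sketch in code. The snippet below is a toy illustration, not part of the lesson: it recognizes only the three keyword patterns discussed so far (SUM OF ... AND ..., ... INCREASED BY ..., and DIFFERENCE BETWEEN ... AND ...), uses a small hand-made word-to-symbol table, and deliberately does not evaluate the result, mirroring the point that translating is not the same as doing the math.

```python
# A toy keyword translator for the three patterns discussed above.
import re

PATTERNS = [
    (re.compile(r"(?:the )?sum of (.+) and (.+)", re.I), "{0} + {1}"),
    (re.compile(r"(.+) increased by (.+)", re.I), "{0} + {1}"),
    (re.compile(r"(?:the )?difference between (.+) and (.+)", re.I), "{0} - {1}"),
]

# A tiny word-to-symbol table; a real lesson tool would need a fuller one.
WORDS_TO_SYMBOLS = {"a number": "x", "four": "4", "six": "6",
                    "seven": "7", "negative four": "-4", "twenty-four": "24"}


def translate(phrase):
    """Turn an English phrase into an algebraic expression (not evaluated)."""
    for pattern, template in PATTERNS:
        match = pattern.fullmatch(phrase.strip())
        if match:
            parts = [WORDS_TO_SYMBOLS.get(p.strip().lower(), p.strip())
                     for p in match.groups()]
            return template.format(*parts)
    return phrase  # no keyword recognized; leave the phrase untranslated


print(translate("the sum of seven and negative four"))   # 7 + -4
print(translate("a number increased by twenty-four"))    # x + 24
print(translate("the difference between four and six"))  # 4 - 6
```

Notice that the translator never calls anything like eval on the result, just as the examples stop at 7 + −4 rather than simplifying to 3.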
27 Nov. 2012 Unit of Study Modern Poetry: Understanding and Writing Many students automatically label poetry with a bad name. Students often do not even try to grasp poetry; instead they wait for the teacher or another classmate to give them the answer. Instead of shoving poetry on the backburner I believe students need a whole new perspective and a brand new introduction to poetry. Poetry can be found in many forms; however, most teachers only teach the basic generalized form. Students need to be able to relate the original form of poetry, stanzas and written poetry, with modern day poetry, such as: song lyrics, spoken words, vignettes, and more. Introducing different forms of poetry will help students notice that poetry is found in many other art forms, and can also be fun and inspiring to read and write. Poetry is an incredible art form that allows individuals who write poetry to express themselves in ways they would not be able to express themselves verbally. Poetry is more than just “skin deep,” it gives those reading a deeper sense of what the writer, or speaker, was going through. Students should also know that poetry has no right or wrong answer. Poetry is unique in that it leaves room for creative interpretation. The reader is given a list of words and left to interpret the meaning entirely in their own way. Discussing different meanings of a poem will help students use their imagination and have fun with understanding poetry. This unit is purposely created to help students better comprehend poetry. They will use a poetry analysis worksheet to guide them in the steps to better understand the modern poetry work they are looking at. Students will also be able to use this worksheet in any other English class where they are studying poetry. Class discussions and writing exercises play a key role in understanding how poetry works. Students will be asked to write a number of poems, songs, and or vignettes in order to practice the art of poetry. Launching the Unit: To start off, I would expose the students to different types of poetry, as well as different rhythmic patterns and ways of writing poetry. 1. I would start with showing either a video clip or an audio clip of the song “Destroy” by Worth Dying For. The song is a Christian based song; however, it does not mention the name “Jesus.” The song is about rising up and turning life around. After hearing the song students would discuss their thoughts and opinions of the song. 2. I would show a video clip of a poet named Shihan performing one of his poems at a Def Poetry Jam (the poem is not named). This poem would be discussed as well. 3. I would read the students Love That Dog by Sharon Creech to introduce the vignette form of poetry. After all three forms of poetry are read and understood, I would have the students discuss the similarities and differences between these three forms of poetry and what they have previously learned or know about poetry. Concluding this activity I would introduce and explain the “Poetry Analysis” Worksheet (which is stapled to the back). The worksheet is used to help students dissect poems easier and get a better grasp at understanding the poem that is being read. Items on this worksheet include: identifying the speaker, identifying tone, and subject of the poem, figures of speech, rhythmic patterns, irony, images, and symbols. I would have the students read “Phenomenal Woman” by Maya Angelou and use the worksheet to understand the poem and its themes. 
As an assignment I would have the students create a poem using three to five of the items on the worksheet. I would either have the students read for homework or do readers theatre of Witness by Karen Hesse and read “Fork” by Charles Simic. As an assignment I would have the students combine both these ideas and create a vignette using an object in which I assign them, for example: a flashlight, a plate, a leaf, etc. Center Piece: “Still I Rise” by Maya Angelou Still I Rise may write me down in history With your bitter, twisted lies, You may trod me in the very dirt But still, like dust, I'll rise. Does my sassiness upset you? Why are you beset with gloom? 'Cause I walk like I've got oil wells Pumping in my living room. Just like moons and like suns, With the certainty of tides, Just like hopes springing high, Still I'll rise. Did you want to see me broken? Bowed head and lowered eyes? Shoulders falling down like teardrops. Weakened by my soulful cries. Does my haughtiness offend you? Don't you take it awful hard 'Cause I laugh like I've got gold mines Diggin' in my own back yard. You may shoot me with your words, You may cut me with your eyes, You may kill me with your hatefulness, But still, like air, I'll rise. Does my sexiness upset you? Does it come as a surprise That I dance like I've got diamonds At the meeting of my thighs? Out of the huts of history's shame Up from a past that's rooted in pain I'm a black ocean, leaping and wide, Welling and swelling I bear in the tide. Leaving behind nights of terror and fear Into a daybreak that's wondrously clear Bringing the gifts that my ancestors gave, I am the dream and the hope of the slave. I chose “Still I Rise” by Maya Angelou as my centerpiece work because I believe it deals with a significant issue high school student’s deal with most, and that is the idea of being helpless at one point in time, or being bullied. This poem deals with the end result of a once painful time. The poem, without bluntly stating it, shows the outcome of someone who has been verbally abused, due to weight and other accusations. I will teach this poem alongside the Poetry Analysis worksheet, having the students tear apart the poem and also discuss the themes they believe this poem is talking about. I would ask the students: What do they see in their daily life that may be harmful to others, how do they see others badly treated? And also have them list how individuals being attacked can build up their self–esteem and how witness’s of these actions can help build up the confidence of those being attacked as well. After reading, discussing, and analyzing the poem I would play “Stronger” by Kelly Clarkson and have the students compare and contrast the song to the poem. I chose this song by Kelly Clarkson because it talks about overcoming something that at one point defined who the speaker of the song was. It’s about overcoming an obstacle that once seemed too big to handle. Lastly I would pass out the “I Am” poem template and read “The Delight Song of Tsoai-Talee” by N. Scott Momaday and have the students write their own “I Am” poem. Extending the Unit: I would have students find a different poem, vignette, or song that has the same idea as “Still I Rise” by Maya Angelou. Students will present their findings to the class and after each student has presented their finding, the class as a group will take a piece from the presentation (this will be done after each student presents). 
These fragments will be used to create a whole new poem making sure they use different figures of speech and rhythmic patterns they have been shown in class. I would also have students go through magazines and find words, or create their own words, in order to create a short poem or vignette. The only guidelines of this project would be to be creative. The following are taken from Goodreads.com. Full citations are located on the works cited page. 1. Things I Have to Tell You: Poems and Writings by Teenage Girls by Betsy Franco a. A collection of poems written by young women across the world showing struggles and issues they personally deal with. 2. How to (Un)cage a Girl by Francesca Lia Block a. A collection of poems geared towards girls and deals with subjects on boys, self-image, fashion, popularity, etc. 3. You Remind Me Of You: A Poetry Memoir by Eireann Corrigan a. An autobiographical account of a young girl dealing with an eating disorder and her recovery. 4. Tell The World by WritersCorps a. A collection of poems written by teens on their hopes, dreams, and thoughts. Concluding the unit I would have students construct a final poem or series of vignettes. They would be required to use the Poetry Analysis worksheet as a guide and create a poem or series of vignettes about an issue in their lives (school, personal, political, etc) they have witnessed or have struggled with. Angelou, Maya. “Still I Rise.” Poem Hunter. Poem Hunter, 2012. Web. 6 Nov. 2012. Block, Francesca Lia. How to (Un)cage a Girl. New York: HarperTeen, 2008. Print. Creech, Sharon. Love That Dog. New York: HarperCollins, 2001. Print. Franco, Betsy. Things I Have to Tell You: Poems and Writings by Teenage Girls. Candlewick. 2001. Print. Hesse, Karen. Witness. New York: Scholastic Inc, 2001. Print. Momaday, N. Scott. “The Delight Song of Tsoai-Talee.” Class handout. Print. Shihan. “Shihan on Def Poetry Jam.” YouTube, 30 Apr. 2007. Web. 6 Nov. 2012. Simic, Charles. “Fork.” Poetry Foundation. Poetry Foundation, 2012. Web. 6 Nov. 2012. Worth Dying For. “Destroy.” Love Riot. 2011. CD. Writerscorp. Tell The World. New York: HarperTeen. 2008. Print.
- 1 Definitions - 2 Regions with significant multiracial populations - 2.1 North America - 2.2 Latin America and the Caribbean - 2.3 United Kingdom - 2.4 North Africa and Middle East - 2.5 Madagascar - 2.6 South Africa - 2.7 Central Asia - 2.8 South Asia - 2.9 Southeast Asia - 2.10 New Zealand - 2.11 Fiji - 3 Ethnic groups - 4 See also - 5 Notes - 6 References - 7 External links While defining race is controversial, race remains a commonly used term for categorization. Insofar as race is defined differently in different cultures, perceptions of multiraciality will naturally be subjective. According to U.S. sociologist Troy Duster and ethicist Pilar Ossorio: Some percentage of people who look white will possess genetic markers indicating that a significant majority of their recent ancestors were African. Some percentage of people who look black will possess genetic markers indicating the majority of their recent ancestors were European. In the United States: Many state and local agencies comply with the U.S. Office of Management and Budget (OMB) 1997 revised standards for the collection, tabulation, and presentation of federal data on race and ethnicity. The revised OMB standards identify a minimum of five racial categories: White; Black or African American; American Indian and Alaska Native; Asian; and Native Hawaiian and Other Pacific Islander. Perhaps, the most significant change for Census 2000 was that respondents were given the option to mark one or more races on the questionnaire to indicate their racial identity. Census 2000 race data are shown for people who reported a race either alone or in combination with one or more other races. In the English-speaking world, many terms for people of various multiracial backgrounds exist, some of which are pejorative or are no longer used. Mulato, zambo and mestizo are used in Spanish, mulato, caboclo, cafuzo, ainoko (from Japanese) and mestiço in Portuguese and mulâtre and métis in French for people of multiracial descent. These terms are also in certain contexts used in the English-speaking world. In Canada, the Métis are a recognized ethnic group of mixed European and First Nation descent, who have status in the law similar to that of First Nations. Terms such as mulatto for people of partly African descent and mestizo for people of partly Native American descent are still used by English-speaking people of the western hemisphere, but mostly when referring to the past or to the demography of Latin America and its diasporic population. Half-breed is a historic term that referred to people of partial Native American ancestry; it is now considered pejorative and discouraged from use. Mestee, once widely used, is now used mostly for members of historically mixed-race groups, such as Louisiana Creoles, Melungeons, Redbones, Brass Ankles and Mayles. In South Africa, and much of English-speaking southern Africa, the term Coloured was used to describe a mixed-race person and also Asians not of African descent. While the term is socially accepted, it is becoming an outdated due to its association with the apartheid era. In Latin America, where mixtures became tri-racial after the introduction of African slavery, a panoply of terms developed during the colonial period, including terms such as zambo for persons of Amerindian and African descent. Charts and diagrams intended to explain the classifications were common. The well-known Casta paintings in Mexico and, to some extent, Peru, were illustrations of the different classifications. 
At one time, Latin American census categories have used such classifications but, in Brazilian censuses since the Imperial times, for example, most persons of multiracial heritage, except the Asian Brazilians of some European descent (or any other to the extent it is not clearly perceptible) and vice versa, tend to be thrown into the single category of "pardo", although race lines in Brazil do not denote ancestry but phenotype, and as such a westernized Amerindian of copper-colored skin is also a "pardo", a caboclo in this case, despite being not multiracial, but a European-looking person with one or more African and/or Indigenous American ancestor is not a "pardo" but a "branco", or a White Brazilian, the same applies to "negros" or Afro-Brazilians and European and/or Amerindian ancestors. Most Brazilians of all racial groups (except Asian Brazilians and Natives) are to some extent mixed-race according to genetic research. In English, the terms miscegenation and amalgamation were used for unions between the races. These terms are now often considered offensive and are becoming obsolete. The terms mixed-race, biracial or multiracial are becoming generally accepted. In other languages, translations of miscegenation did not become politically incorrect. Regions with significant multiracial populations In the United States, the 2000 census was the first in the history of the country to offer respondents the option of identifying themselves as belonging to more than one race. This multiracial option was considered a necessary adaptation to the demographic and cultural changes that the United States has been experiencing. Multiracial Americans officially numbered 6.1 million in 2006, or 2.0% of the population. There is considerable evidence that an accurate number would be much higher. Prior to the mid-20th century, many people hid their multiracial heritage. The development of binary thinking about race meant that African Americans, a high proportion of whom have also had European ancestry, were classified as black. Some are now reclaiming additional ancestries. Many Americans today are multi-racial without knowing it. According to the Census Bureau, as of 2002, over 75% of all African Americans had multiracial ancestries. In 2010, the number of Americans who checked both "black" and "white" on their census forms was 134 percent higher than it had been a decade earlier. - white/Native American and Alaskan Native: 7,015,017, - white/black: 737,492, - white/Asian: 727,197, and - white/Native Hawaiian and Other Pacific Islander: 125,628. The stigma of a mixed-race heritage, associated with racial discrimination among numerous racial groups, has decreased significantly in the United States. The election of President Barack Obama, who had a European-American mother and an African father, was taken by many as a sign of progress. People of mixed-race heritage can identify themselves now in the U.S Census by any combination of races, whereas before Americans were required to select from only one category. For example they may choose more than one race from the following list: "White" (or "Caucasian"), "Black" (or African American), "Asian", "Native American" or "Alaska Native", "Native Hawaiian", other "Pacific Islander" or "Some other race". Many mixed-raced Americans use the term biracial. The U.S. has a growing multiracial identity movement, reflective of a desire by people to claim their full identities. 
Interracial marriage, most notably between whites and blacks, was historically deemed immoral and illegal in most states in the 18th, 19th and first half of the 20th century, due to its long association of blacks with the slave caste. California and the western US had similar laws to prohibit European-Asian marriages, which was associated with discrimination against Chinese and Japanese on the West Coast. Many states eventually repealed such laws, and a 1967 decision by the US Supreme Court (Loving v. Virginia) overturned all remaining anti-miscegenation laws in the US. The United States is one of the most racially diverse countries in the world. The American people are mostly multi-ethnic descendants of various immigrant nationalities culturally distinct in their former countries. Assimilation and integration took place, unevenly at different periods of history, depending on the American region. The "Americanization" of foreign ethnic groups and the inter-racial diversity of millions of Americans has been a fundamental part of its history, especially on frontiers where different groups of people came together. The current[update] President of the United States, Barack Obama, is a multiracial American, as he is the son of a Luo father from Kenya and a European American mother. He acknowledges both parents. His official White House biography describes him as African-American. In Hawai'i, the US state in which he was born, he would be called "hapa", which is the Hawaiian word for "mixed ethnic heritage". Multiracial Canadians in 2006 officially totaled 1.5% of the population, up from 1.2% in 2001, although, this number may actually be far higher. The official mixed-race population grew by 25% since the previous census. Of these, the most frequent combinations were multiple visible minorities (for example, people of mixed black and south Asian heritage form the majority, specifically in Toronto), followed closely by white-black, white-Chinese, white-Arab, and many other smaller mixes. During the time of slavery in the United States, a very large but unknown number of African American slaves escaped to Canada, where slavery was made illegal in 1834, via the Underground Railroad. Many of these people married in with European-Canadian and Native-Canadian populations, although their precise numbers, and the numbers of their descendants, are not known. Another 1.2% of Canadians officially are Métis (descendants of a historical population who were partially Aboriginal—also called "Indian" or "Native"—and European, particularly French, English, Scottish, and Irish ethnic groups). Although listed as a single "race" in Canada, the Métis are therefore multi-racial. In particular the Métis population may be far higher than the official numbers state, due to earlier racism causing people to historically hide their mixed heritage. This however is changing, although many Canadians may now be unaware of their mixed-race heritage, especially those of Métis descent. This brings Canada to a total "recognized" mixed population of 2.7%, greater by percentage than that of the United Kingdom and the United States. Latin America and the Caribbean Mestizo is the common word used to describe multiracial people in Latin America, especially people with Native American and Spanish or other European ancestry. Mestizos make up a large portion of Latin Americans comprising a majority in many countries. In Latin America, racial mixture was officially acknowledged from colonial times. 
There was official nomenclature for every conceivable mixture present in the various countries. Initially, this classification was used as a type of caste system, where rights and privileges were accorded depending on one's official racial classification. Official caste distinctions were abolished in many countries of the Spanish-speaking Americas as they became independent of Spain. Several terms have remained in common usage. Race and racial mixture have played a significant role in the politics of many Latin American countries. In several countries, for example Mexico, the Dominican Republic, El Salvador, and Honduras, a majority of the population can be described as biracial or multiracial (depending on the country). In Mexico, over 80% of the population is mestizo to some degree or another. The Mexican philosopher and educator José Vasconcelos authored an essay on the subject, La Raza Cósmica, celebrating racial mixture. Venezuelan ex-president Hugo Chávez, himself of Spanish, indigenous and African ancestry, made positive references from time to time to the mixed-race ancestry of most Latin Americans. Colonialism throughout the West Indies has created diverse populations on many islands, including people of multiracial identities. Of note is the mixture of West African communities, most brought to the region as slaves, and East Indian settlers, most of whom came as indentured labor after the abolition of slavery. Trinidad and Tobago, Guyana and Suriname claim the highest populations of such mixtures, known locally as douglas. In addition to mixed West African and East Indian heritage, inhabitants of Trinidad and Tobago can also have any combination of Chinese, Arab, Venezuelan, Indigenous and European heritage. According to the 2000 official census, 38.5% of Brazilians identified their skin color as pardo. That option is normally marked by people who consider themselves multiracial (mestiço). The term pardo is formally used in the official census but is not used by the population. In Brazilian society, most people who are multiracial call themselves moreno: light-moreno or dark-moreno. These terms are not considered offensive and focus more on skin color than on ethnicity (it is considered more like other human characteristics such as being short or tall). The most common multiracial groups are between African and European (mulato), and Amerindian and European (caboclo or mameluco). But there are also African and Amerindian (cafuzo), and East-Asian (mostly Japanese) and European/other (ainoko, or more recently, hafu). All groups are more or less found throughout the whole country. Brazilian multiracials with the following three origins, Amerindian, European and African, make up the majority. It is said today that 89% or even more of the "pardo" population in Brazil has at least one Amerindian ancestor (most of the branco, or White Brazilian, population have some Amerindian and/or African ancestry too, despite nearly half of the country's population self-labeling as "Caucasian" in the censuses). In Brazil, it is very common for mulattoes to claim that they don't have any Amerindian ancestry, though studies have found that if a Brazilian multiracial can trace their ancestry back eight to nine generations, they will have at least one Amerindian ancestor on the maternal side of the family, which explains many of their physical features and characteristics. 
Since multiracial relations in Brazilian society have occurred for many generations, some people find it difficult to trace their own ethnic ancestry. Today a majority of mixed-race Brazilians do not really know their ethnic ancestry. Because of the distinctive features that mark them as Brazilian, such as skin color, lip and nose shape, or hair texture, they are aware only that their ancestors were probably Portuguese, African and/or Amerindian. There was also a very large number of other immigrants (counted in the millions) who contributed to the Brazilian racial make-up, including Italians (today, the city of São Paulo has the largest population of Italian descendants apart from Rome), Japanese (the largest Japanese population outside Japan), Lebanese (the largest population of Lebanese outside Lebanon), Germans, Poles and Russians. There is also a high percentage of Brazilians of Jewish descent, perhaps hundreds of thousands, mostly found in the northeast of the country, who cannot be sure of their ancestry as they descend from the so-called "Crypto-Jews" (Jews who practiced Judaism in secret while outwardly pretending to be Catholics, also called Marranos or New Christians, often considered Portuguese); according to some sources, one out of every three families to arrive there from Portugal during the colonization was of Jewish origin. There is a high level of integration between all groups. However, there exists a great social and economic difference between European descendants (found more among the upper and middle classes) and African, Amerindian and multiracial descendants (found more among the lower classes), a divide sometimes called Brazilian apartheid.
United Kingdom
In 1991 an analysis of the census showed that 50% of black Caribbean men born in the UK had white partners, and the 2011 BBC documentary Mixed Britannia noted that 1 in 10 British children are growing up in interracial households. In 2000, The Sunday Times reported that "Britain has the highest rate of interracial relationships in the world", and certainly the UK has the highest rate in the European Union. The 2001 census showed the population of England to be 1.4% mixed-race, compared with 2.7% in Canada and U.S. estimates of 1.4% in 2002, although the U.S. figure did not include mixed-race people who had a black parent. Both the US and UK therefore have fewer people identifying as mixed race than Canada. By 2020 the mixed-race population is expected to become Britain's largest ethnic minority group, with the highest growth rate. In Britain, many multiracial people have Caribbean, African or Asian heritage; one example is supermodel Naomi Campbell, who has African, Jamaican, and Asian roots. Some, like 2008 Formula One World Champion Lewis Hamilton, are referred to or describe themselves as "mixed". The 2001 UK Census included a section entitled "Mixed", to which 1.4% (1.6% by 2005 estimates) of people responded; it was split further into White and Black Caribbean, White and Asian, White and Black African and Other Mixed. Despite this, 2005 birth records for the country record at least 3.5% of newborn babies as mixed race. Additionally, multiraciality in the UK has been the subject of much recent fiction, poetry, academic study, and television programming, examining the ways in which interracial relationships function and how multiracial individuals construct British identities. 
The BBC's recent production Mixed Britannia documents the history of mixed race people in England beginning with the early 20th century and continuing through to 2011 when it was produced. While much of the documentary comments on narratives of "progress," it is interesting to note the histories of black and mixed people in the United Kingdom that have been historically silenced and repressed, as well as information about the government's regulation of black British bodies and the white Britons who engaged in relationships with non-white individuals. The programme deals with issues of citizenship, nationality, state-mandated violence, medical studies, transracial adoption, xenophobia, and racism in its journey through 20th century Britain and beyond, in order to examine the intricate histories of power and race in the United Kingdom. While the documentary ultimately celebrates the progress of Britain as a nation, focussing on the power of an ever-browning British population, its exposition of violence against non-white peoples is a reminder of the ways nation states control their populations and wield political power. North Africa and Middle East In North Africa, some multiracial communities can also be found. Among these are the Haratin oasis-dwellers of Saharan southern Morocco and Mauritania. They are believed to be a mixture of Black Africans and Berbers, and constitute a socially and ethnically distinct group. Also the case of Tunisia in which you can find many mixed races from the Mediterranean Sea. Countries like Iran, Iraq, Yemen and Saudi Arabia have Black African communities as a result of the slave trade, and as such, large populations in these countries are of mixed race origin. Eastern and northern Iran also possess people of some European and Mongolian descent. Almost the entire population of Madagascar is an about equal admixture of South East Asian (Indonesian) and Bantu-speaking settlers primarily from Borneo and Mozambique, respectively. Years of intermarriages created the Malagasy people, who primarily speak Malagasy, an Austronesian language with Bantu influences. In South Africa, the Prohibition of Mixed Marriages Act prohibited marriage between whites and non-whites (which were classified as Black, Asian and Coloured). Multiracial South Africans are commonly referred to as coloureds. According to the 2001 South African Census, they are the second largest minority (8.9%) after white South Africans (9.2%). Today Central Asians are a mixed race of various peoples such as Mongols, Turkics, and Iranians. The Mongol invasion of Central Asia in 13th century resulted in the massacre of the population of Iranians and other Indo-European peoples as well as a large degree of intermarriage and assimilation. Genetic studies shows that Central Asian Turkic people and Hazara are a mixture of Northeast Asians and Indo-European people. Caucasian ancestry is prevalent in almost all central Asian Turkic people. Kazakhs, Hazara, Karakalpaks, Crimean Tatars have more European mtdna than European y-dna. Kyrgyz have mostly European y-dna with substantial European mtdna. Other Turkic people like Uyghurs and Uzbeks have mostly European y-dna but also a significantly higher percentage of European mtdna. Turkmen have predominately European y-dna and mtdna. Anglo-Indians are a mixed which originated in India during the British Raj, or the Colonial period in India. The estimated population of Anglo-Indians is 600,000 worldwide with the majority living in India and the UK . 
As with India, Burma was ruled by the British, from 1826 until 1948. Many European groups vied for control of the country prior to the arrival of the British. Intermarriage and mixed-relationships between these settlers and merchants with the local Burmese population, and subsequently between British colonists and the Burmese created a local Eurasian population, known as the Anglo-Burmese. This group dominated colonial society and through the early years of independence. Most Anglo-Burmese now reside primarily in Australia, New Zealand and the UK since Burma received her independence in 1948 with an estimated 52,000 left behind in Burma. Due to its strategic location in the Indian Ocean, the island of Sri Lanka has been a confluence for settlers from various parts of the world, which has resulted in the formation of several mixed-race ethnicities in the Island. The most notable mixed-race group are the Sri Lankan Moors, who trace their ancestry from Arab traders who settled on the island and intermarried with local women. Today, The Sri Lankan Moors live primarily in urban communities, preserving their Arab-Islamic cultural heritage while adopting many Southern Asian customs. The Burghers are a Eurasian ethnic group, consisting for the most part of male-line descendants of European colonists from the 16th to 20th centuries (mostly Portuguese, Dutch, German and British) and local women, with some minorities of Swedish, Norwegian, French and Irish. The Kaffirs are an ethnic group who are partially descended from 16th-century Portuguese traders and the African slaves who were brought by them.The Kaffirs spoke a distinctive creole based on Portuguese, the Sri Lanka Kaffir language, now extinct. Their cultural heritage includes the dance styles Kaffringna and Manja, as well as the Portuguese Sinhalese, Creole, Afro-Sinhalese varieties. Singapore and Malaysia In Singapore and Malaysia, the majority of inter-ethnic marriages are between Chinese and Indians. The offspring of such marriages are informally known as "Chindian", though the Malaysian government only classifies them by their father's ethnicity. As the majority of these intermarriages usually involve an Indian groom and Chinese bride, the majority of Chindians in Malaysia are usually classified as "Indian" by the Malaysian government. As for the Malays, who are predominantly Muslim, legal restrictions in Malaysia make it uncommon for them to intermarry with either the Indians, who are predominantly Hindu, or the Chinese, who are predominantly Buddhist and Taoist. It is, however, common for Muslims and Arabs in Singapore and Malaysia to take local Malay wives, due to a common Islamic faith. The Chitty people, in Singapore and the Malacca state of Malaysia, are a Tamil people with considerable Malay descent. This was due to the first Tamil settlers taking local wives, since they did not bring along any of their own women with them. In the East Malaysian states of Sabah and Sarawak, there have been many incidents of intermarriage between Chinese and native tribes such as the Murut and Dusun in Sabah, and the Iban and Bisaya in Sarawak. This phenomenon has resulted in a potpourri of cultures in both states where many people claiming to be of native descent have some Chinese blood in them, and many Chinese have native blood in them. The offspring of these mixed marriages are called "Sino-(name of tribe)", e.g. Sino-Dusun. 
Normally, if the father is Chinese, the offspring will adopt Chinese culture, and if the father is native then native culture will be adopted, but this is not always the case. These Sino-natives are usually fluent in Malay and English. A smaller number are able to speak Chinese dialects and Mandarin, especially those who have received education in vernacular Chinese schools.
Philippines
The Philippines was a Spanish colony for about 300 years and was then ruled by the Americans after Spain was defeated. This is why there are many mixed-race Filipinos of Filipino-Spanish and Filipino-American descent. After the defeat of Spain during the Spanish–American War in 1898, the Philippines and other remaining Spanish colonies were ceded to the United States in the Treaty of Paris. The Philippines was under U.S. sovereignty until 1946, though occupied by Japan during World War II. In 1946, in the Treaty of Manila, the U.S. recognized the Republic of the Philippines as an independent nation. Even after 1946, the U.S. maintained a heavy military presence in the Philippines, with as many as 21 U.S. military bases and 100,000 U.S. military personnel stationed there. The bases closed in 1992, leaving behind thousands of Amerasian children. The Pearl S. Buck International foundation estimates there are 52,000 Amerasians scattered throughout the Philippines, with 5,000 in the Clark area of Angeles. In the United States, intermarriage among Filipinos with other races is common; they have the largest number of interracial marriages among Asian immigrant groups, as documented in California. It is also noted that 21.8% of Filipino Americans are of mixed lineage, second among Asian Americans after the Japanese, and this group is the fastest growing.
Vietnam
Under terms of the Geneva Accords of 1954, departing French troops took thousands of Vietnamese wives and children with them after the First Indochina War. Some 100,000 Eurasians remained in Vietnam after independence from French rule.
New Zealand
The local Māori were joined from the 1840s onward by large numbers of European colonists, and successive waves of other immigrants. Racial mixing is common, including with later Pacific and Asian immigrants, so that the vast majority of New Zealand's half million Māori now also have some other ancestry, and many who identify as Pakeha may also have Māori forebears. In the 2006 census many respondents identified with multiple ethnicities, while 11% chose simply to identify as "New Zealander". Examples of mixed-race New Zealanders include opera singer Kiri Te Kanawa, actress Rena Owen, sportsman Kees Meeuws and former Governor-General Paul Reeves.
Fiji
Fiji has long been a multi-ethnic country, with a vast majority of people having multiracial heritages even if they do not self-identify in that manner. The indigenous Fijians are of mixed Melanesian and Polynesian ancestry, resulting from years of migration of islanders from various places mixing with each other. Fiji Islanders from the Lau group have intermarried with Tongans and other Polynesians over the years. The overwhelming majority of the rest of the indigenous Fijians, though, can be genetically traced to mixed Polynesian/Melanesian ancestry. The Indo-Fijian population is also a hodge-podge of South Asian immigrants (called Girmits in Fiji), who came as indentured labourers beginning in 1879. While a few of these labourers managed to bring wives, many of them either took or were given wives once they arrived in Fiji. 
The Girmits, who are classified as simply "Indians" to this day, came from many parts of the Indian subcontinent of present day India, Pakistan, and to a lesser degree Bangladesh and Myanmar. It is easy to recognize the Indian mixtures present in Fiji and see obvious traces of Southern and Northern Indians and other groups who have been categorised together. To some degree, even more of this phenomenon would have likely happened if the religious groups represented (primarily Hindu, Muslim and Sikh) had not resisted to some degree marriage between religious groups, which tended to be from more similar parts of the Indian subcontinent. Over the years, particularly in the sugar cane-growing regions of Western Viti Levu and parts of Vanua Levu, Indo-Fijians and Indigenous Fijians have mixed. Others have Chinese/Fijian ancestry, Indo-Fijian/Samoan or Rotuman ancestry, and European/Fijian ancestry (often called "part-Fijians"). The latter are often descendents of shipwrecked sailors and settlers who came during the colonial period. Migration from a dozen or more different Pacific countries (Tuvalu, Solomon Islands, Vanautu, Samoa, and Wallis and Futuna being the most prevalent) have added to the various ethnicities and intermarriages. The following is a list of ethnic divisions that are a mixture of two or more racial groups. - African Americans - Black Indians in the United States - Black Seminoles - Choctaw freedmen (see also: Cherokee Freedmen Controversy) - Garifuna people - Maroon (people) - Miskito Sambu - Afro-Asian mostly native to the Americas and Africa - Malagasy people - Multiracial American - Seychellois Creole people - Mauritian Creole people - Malagasy people - Mixed (United Kingdom ethnicity category) - African Americans - Atlantic Creole - Creoles of color - Dominican people - Griqua people - Mixed (United Kingdom ethnicity category) - Multiracial American - Rhineland Bastard - Mestizo (Hondurans and Mexican people) - Métis people (Canada) - Métis people (United States) - Multiracial American Asian-European or Eurasian origin - Anglo-Burmese people - Burgher people - Filipino mestizo - Finno-Ugric peoples - Hui people - Indo people - Kristang people - Macanese people - Māori people - Mestiços (Sri Lanka) - Mixed (United Kingdom ethnicity category) - Multiracial American - Romani people - Tajiks of Xinjiang - Turkmen people - Uyghur people - Atlantic Creole - Brass Ankles - Brazilian people - Chestnut Ridge people - Demographics of Trinidad and Tobago - Dominican people (Dominican Republic) - Louisiana Creole people - Multiracial American - Puerto Rican people - Redbone (ethnicity) - Amalgamation (history) - Ethnic group - Interracial marriage - List of people of African American and Native American admixture - Melting pot - Mixed Race Day - Race (human classification) - Race and genetics - Multiethnic society - One-drop rule - Origins of Tutsi and Hutu - Passing (racial identity) - Pre-Columbian trans-oceanic contact hypotheses - Race and society - Race traitor - William Loren Katz - "Definition of multiracial in English". Oxford Dictionaries. Oxford University Press. 2013. Retrieved 2 December 2013. - "Not surprisingly, biomedical scientists are divided in their opinions about race. Some characterize it as 'biologically meaningless' or 'not based on scientific evidence', whereas others advocate the use of race in making decisions about medical treatment or the design of research studies." Lynn B. Jorde; Stephen P. Wooding (2004). "Genetic variation, classification and 'race'". 
Nature Genetics 36 (11 Suppl): S28–S33. doi:10.1038/ng1435. PMID 15508000. citing Guido Barbujani; Arianna Magagni; Eric Minch; L. Luca Cavalli=Sforza (April 1997). An apportionment of human DNA diversity (PDF). Proceedings of the National Academy of Sciences USA 94. pp. 4516–4519.. - Carolyn Abraham, "Molecular Eyewitness: DNA Gets a Human Face" (quoted from Globe and Mail, June 25, 2005), RaceSci. - "Modified Race Data Summary File". 2000 Census of Population and Housing. U.S. Census Bureau. Retrieved 2009-10-30. - Denis MacShane; Martin Plaut; David Ward (1984). Power!: Black Workers, Their Unions and the Struggle for Freedom in South Africa. South End Press. p. 7. ISBN 978-0-89608-244-1. - The New Race Question: How The Census Counts Multiracial Individuals. Russell Sage Foundation. 2005. ISBN 0871546582. - "B02001. RACE – Universe: TOTAL POPULATION". 2006 American Community Survey. United States Census Bureau. Retrieved 2008-01-30. - Jones, Nicholas A.; Amy Symens Smith. "The Two or More Races Population: 2000. Census 2000 Brief" (PDF). United States Census Bureau. Retrieved 2008-05-08. - Stephen M. Quintana, Clark McKown (ed.) (2008). Handbook of Race, Racism, and the Developing Child. John Wiley & Sons. p. 211. ISBN 0470189800. Retrieved 1 January 2015. - Cohn, D'Vera. "Multi-Race and the 2010 Census". Retrieved 2011-04-26. - "Multiracial Dimensions in the United States and Around the World". diversityspectrum.com. - "President Barack Obama". whitehouse.gov. - "The Hapa Project: How multiracial identity crosses oceans". UH Today. Spring 2007. - "Keanu Reeves Film Reference biography". Film Reference. Retrieved May 10, 2008. - Hoover, Will (August 18, 2002). "Rooted in Kuli'ou'ou Valley". Honolulu Advertiser. Retrieved December 8, 2010. - "NEHGS – Articles". Newenglandancestors.org. Retrieved May 5, 2010. - "Population Groups (28) and Sex (3) for the Population of Canada, Provinces, Territories, Census Metropolitan Areas and Census Agglomerations, 2006 Census – 20% Sample Data". 2006 Census: Data Products. Statistics Canada. 2008-06-12. Retrieved 2008-07-14.[dead link] - Westbrook, Caroline (2004-02-13). "Sean Paul". Something Jewish. Retrieved 2008-07-14. - [Silva-Zolezzi I., Hidalgo-Miranda A., Estrada-Gil J., Fernandez-Lopez J.C., Uribe-Figueroa L., Contreras A., Balam-Ortiz E., del Bosque-Plata L., Velazquez Fernandez D., Lara C., Goya R., Hernandez-Lemus E., Davila C., Barrientos E., March S., Jimenez-Sanchez G. Analysis of genomic diversity in Mexican Mestizo populations to develop genomic medicine in Mexico. Proc Natl Acad Sci U S A. 2009 May 26;106(21):8611-6.] - "Censo Demográfico 2000" (PDF) (in Portuguese). Instituto Brasileiro de Geografia e Estatística. Retrieved 2008-07-14. - France Winddance Twine. A White Side of Black Britain. Durham, Duke UP, 2010. - John Harlow, The Sunday Times (London), 9 April 2000, quoting Professor Richard Berthoud of the Institute for Social and Economic Research - Changing Face of Britain, BBC, 2002. - 3.5% of newborns in the UK are mixed race - Bridget Anderson, World Directory of Minorities (Minority Rights Group International: 1997), p. 435. 
- On the Origins and Admixture of Malagasy: New Evidence from High-Resolution Analyses of Paternal and Maternal Lineages: "The present population, known by the general term “Malagasy,” is considered an admixed population as it shows a combination of morphological and cultural traits typical of Bantu and Austronesian speakers...[O]ur results confirmed that admixture in Malagasy was due to the encounter of people surfing the extreme edges of two of the broadest historical waves of language expansion: the Austronesian and Bantu expansions. In fact, all Madagascan living groups show a mixture of uniparental lineages typical of present African and South East Asian populations with only a minor contribution of Y lineages with different origins." - Tatjana Zerjal, R. Spencer Wells, Nadira Yuldasheva, Ruslan Ruzibakiev, Chris Tyler-Smith (2002), "A Genetic Landscape Reshaped by Recent Events: Y-Chromosomal Insights into Central Asia", The American Journal of Human Genetics 71 (3): 466–482, doi:10.1086/342096, PMC 419996, PMID 12145751 - Daniels, Timothy P. (2005). Building Cultural Nationalism in Malaysia. Routledge. p. 189. ISBN 0-415-94971-8. - Arab and native intermarriage in Austronesian Asia. ColorQ World. Retrieved 2008-12-24. - "Women and children, militarism, and human rights: International Women's Working Conference – Off Our Backs".[dead link] - Tuesday, June 19, 2001. - Stanford Publications - "Interracial Dating & Marriage". asian-nation.org. Retrieved 2007-08-30. - "Multiracial / Hapa Asian Americans". asian-nation.org. Retrieved 2007-08-30. - SOUTH VIET NAM: The Girls Left Behind. Time. September 10, 1956. - "Ethnic groups", NZ STatistics - "Multiracial Children". American Academy of Child and Adolescent Psychiatry. October 1999. Retrieved 2008-07-14. - Freyre, Gilberto; Putnam, Samuel (1946). The Masters and the Slaves: A Study in the Development of Brazilian Civilization. New York: Alfred A. Knopf. ISBN 0-520-05665-5. OCLC 7001196. - Joyner, Kara; Kao, Grace (August 2005). "Interracial Relationships and the Transition to Adulthood". American Sociological Review (American Sociological Association) 70 (4): 563–81. doi:10.1177/000312240507000402. Retrieved 2008-07-14. - The Multiracial Activist, an online activist publication registered with the Library of Congress, focused on multiracial individuals and interracial families since 1997 - ProjectRACE, an organization leading the movement for a multiracial classification - Advocacy groups - Association of MultiEthnic Americans, Inc., US - Blended People of America, US-based nonprofit organization representing the interests of the mixed-race community - Brazilian Multiracial Movement, Brazilian mixed-race organization - The Hafu Project, a study of half-Japanese people, London-, Munich-, Tokyo-based nonprofit organisation - MAVIN Foundation, an organization advocating for mixed heritage people and families - Mixed Race UK, UK-based nonprofit organization representing the interests of the mixed-race community - Mosiac UK, a UK-based organisation for mixed race families - People in Harmony UK - Swirl, US-based mixed community
The Wilderness Act of 1964 established a National Wilderness Preservation System (NWPS) to secure for the American people of present and future generations the benefits of an enduring resource of wilderness. The Act states that wilderness areas shall be administered for the use and enjoyment of the American people in such manner as will leave them unimpaired for future use and enjoyment as wilderness. Moreover, it is the responsibility of each agency that administers wilderness to preserve each area's wilderness character. Since 1964, more than 100 pieces of legislation have created an NWPS of over 100 million acres, in well over 600 individual wildernesses, administered by the U.S. Department of the Interior's Bureau of Land Management (BLM), Fish and Wildlife Service (FWS), and National Park Service (NPS); and the U.S. Department of Agriculture's Forest Service (FS). To provide for the use and enjoyment of these areas, while preserving their wilderness character, it is important for management agencies to monitor wilderness recreation visitors and the impacts they cause. Some people state that the Wilderness Act mandates that recreation impacts not be allowed to increase following wilderness designation (Worf 2001). Ideally, baseline conditions should be inventoried at the time each area is designated as wilderness and added to the NWPS, and then periodically monitored in the future to assess trends in conditions and the efficacy of existing recreation management programs. Such data will become increasingly valuable to future attempts to evaluate trends in the wilderness character of each area in the NWPS. Although baseline recreation conditions have been inventoried in many wildernesses, such data are lacking in many others. Moreover, the distribution of wildernesses with baseline recreation data is not equitable across the nation or the four agencies that manage wilderness. This report, Wilderness Visitors and Recreation Impacts: Baseline Data Available for Twentieth Century Conditions, by David N. Cole and Vita Wright, is an assessment of the status of baseline recreation monitoring data for all wildernesses in the NWPS at the end of the twentieth century. It documents the proportion of the NWPS that has baseline data on recreation visitors and impacts, which wildernesses have this data, and where they are located. It identifies the types of data that have been collected, the types of sampling designs that have been employed, and how and where data have been stored. This compilation should help researchers identify wildernesses where trends can be assessed and help wilderness managers identify other managers who might be contacted about how to initiate and implement new studies. The data listed in this report are all we will ever have to gain perspective on the condition of designated wilderness in the twentieth century regarding recreation visitors and impacts. Because managers and the interested public, in future decades and centuries, will want to know what these places were like, these data will become increasingly valuable. Although some of the data are published in reports or have been carefully archived, most are stored in paper files in ranger offices, where they are vulnerable to loss. We strongly encourage agency personnel to recognize the future value of these data and invest in archiving them in such a manner that their perpetuation is ensured. These data could be the basis for valuable assessments of recreation and impact trends across the NWPS. 
This report begins with an overview of the status of recreation-related monitoring across the NWPS. Three types of studies are surveyed: those that provide (1) campsite impact data, (2) trail impact data, and (3) information about visitor characteristics.
Leanne McColl and I hosted an after school professional learning session to share a resource to support grade 5 teachers and teacher-librarians in the teaching and learning about Indian Residential Schools. For the past few years, alongside efforts of the Truth and Reconciliation Commission, Canadians have been learning about the travesty in Canadian history stemming from the Indian Act, particularly enacted through the mandated attendance of children at Indian Residential Schools. The First Nations Education Steering Committee (FNESC) has developed educational resources to support teachers in the teaching of this shared history, which is now included in the elementary and secondary Social Studies curriculum. Although available for purchase through the FNESC website (fnesc.ca), these teaching resources are also available to download for free, as pdf documents. The grade 5 resource is available here: We looked through the components of the resource – enduring understandings, essential questions, literature connections, experiential learning, using primary documents, etc. We shared the literature that is referenced in the resource and teachers also prepared their own “memory bag” to correspond with the lessons in the resource. Teachers at the session had many questions which we discussed as a group and shared our ideas. Other resources were provided to the teachers such as the Project of Heart document and Aboriginal Worldviews document – links to these can be found here: Our session concluded with sharing FNESC’s Starleigh Grass’ Ted talk about reconciliation. The video can be viewed here: Due to the popularity and interest in this session, we hope to offer another one in the spring!
A preschool teacher can be the apple of a young child's eye, providing guidance, encouragement and positive feedback. The field can be rewarding, particularly with the gratitude and smiles that small children give. Preschool teachers can learn about the instructional techniques and strategies to use by enrolling in preschool teacher training or early elementary education degree programs. A Day in the Life of a Preschool Teacher The day of a preschool teacher begins early, often at 8 a.m., but sometimes even earlier if a preschool teacher works at a daycare center. Early in the morning, preschool teachers help young children, often ages 3-5, in putting their coats and jackets away and getting ready for the day. During the preschool day, preschool teachers may: - Help children learn numbers, shapes and colors - Teach letters of the alphabet - Lead students in hands-on explorations and art projects that develop language, motor and social skills - Guide students through a busy schedule that can include projects, activities, music time, snacks, lunch and rest - Set times to do calendar, weather and poem activities - Encourage children to learn - Communicate progress and challenges to parents and caregivers When children head home for the day, there is still plenty left to do. Preschool teachers clean up and put things away, plan activities for the next day and ensure that needed materials are on hand. Finally, they may need to update any student records or send out emails and make phone calls about upcoming activities, programs and planning. In a Head Start program, preschool planning may be more rigorous, following guidelines set by laws. Teachers in public school settings may need to work with other professionals to help children who have challenges. The job of a preschool teacher can sometimes be challenging, yet their patience and encouragement can go a long way. Important Characteristics for Preschool Teachers An individual who loves young children may be well-set for a career as a preschool teacher. Preschool teachers also should love teaching anything from colors to counting. Patience may be a needed virtue as well as an ability to help children make connections to the world around them. Preschool teachers should strive to be consistent, understanding and insightful. Typical Steps for Becoming a Preschool Teacher The steps for becoming a preschool teacher may vary, depending on the age group of the children and the place of business — for example, a private school versus a public setting. The following step-by-step plan provides an overview of preschool teacher education requirements and other necessary steps toward achieving a preschool teaching career: 1) Complete an educational program. A high school diploma and early education certification are usually required to work in preschool centers. However, those who want to work in a Head Start program, which is federally funded, typically need to have an associate degree. Furthermore, some Head Start programs require preschool teachers to have a bachelor's degree. Preschool teaching programs could be found under the following names: - Certificate of Achievement in Preschool Teacher - Preschool Teacher-Certificate - Preschool Early Childhood Teacher Certificate - Early Childhood Education Associate Degree - Associate of Arts in Early Childhood Education - Bachelor's in Early Childhood Education Degree - Bachelor of Science in Early Childhood Education 2) Work toward the Child Development Associate (CDA) credential. 
This credential is required by some, but not all, states. The credential is administered through the Council for Professional Recognition and requires a written exam, experience in the field and an observation of the candidate working with children. Other states may require candidates to have the Certified Childcare Professional (CCP) designation, which is available through the National Early Childhood Program Accreditation. This credential similarly requires passing an exam, completing coursework and having experience in the field. 3) Achieve state licensure. Licensure is typically required for public schools, particularly when teachers plan to instruct in preschool through third grade. Requirements for licensure vary state to state but usually include a bachelor's degree and continuing education coursework. 4) Find employment. Most preschool teachers work in childcare services, but many others work for preschools at the local or state level. Preschool teachers may also be employed by private organizations, such as churches. Candidates who have a bachelor's degree may find the best opportunities for employment. 5) Maintain credentials. Once employed, preschool teachers need to keep their credentials up to date through continuing education coursework. The CDA credential needs to be renewed after three years and the CCP credential needs to be renewed after two years. - Associate of Arts in Early Childhood Education, Liberty University, http://www.liberty.edu/online/associate/early-childhood-education/, accessed October 2017 - Early Childhood Education Associate Degree, Penn Foster, https://www.pennfoster.edu/programs-and-degrees/education-and-child-care/early-childhood-education-associate-degree, accessed October 2017 - Earn Your Early Childhood Education Degree Online, Ashworth College, https://www.ashworthcollege.edu/bachelors-degrees/early-childhood-education-degree-online/, accessed October 2017 - Preschool Teacher, Los Angeles Trade Tech, http://college.lattc.edu/catalog/programs/preschool-teacher/, accessed October 2017 - Preschool Teachers, U.S. Bureau of Labor Statistics, 2016-17 Occupational Outlook Handbook, https://www.bls.gov/ooh/Education-Training-and-Library/Preschool-teachers.htm#tab-2
What’s important to you? It’s a simple question – but one with profound consequences for how you live your life. It’s a question that gets to the heart of your values: things that motivate you and guide your decisions. But what are values and why are they important? 21st October is World Values Day – an annual campaign to increase the awareness and practice of values around the world. So now is a great time to pause and think about these questions. They’re also questions that are central to Acceptance and Commitment Therapy (ACT) – a therapeutic approach that focuses on clarifying and acting on your values to improve your mental health. What are values? Values are often taken to mean moral ideas, attitudes to the world, or norms and behaviours that are considered ‘good’ in a particular group, community or organisation. They’re usually abstract nouns, like ‘authenticity’ or ‘respect’. They may also simply be valued interests, activities, preferences and dispositions. It’s helpful to think of values as the things that are most important to you. They’re the things that motivate us and guide our decisions. We may have many values, and different ones in different areas of our lives – for example as individuals and members of families, groups and communities. These may also overlap – and they may change over time. A few examples are: - Personal. Individual values may include empathy, honesty, kindness or generosity. - Relationships. Interpersonal values may include trust, friendship, loyalty or intimacy. - Work. Values in your working life may include professionalism, leadership or teamwork. - Society. Values related to wider society may include environmentalism, social justice or charity. Why is it important to have values? “If you don’t have a dream, how you gonna make a dream come true?” – as the old song goes. Values help us create the future we want – because knowing what you want out of life is the first step to getting it. But values do so much more. They help grow and develop as people. They motivate us, give us a reason to get up in the morning, and give our lives meaning. Values help us live with direction and purpose – like a guiding compass. Whatever is going on in our lives, our values can show us a path forward, and help us make better choices. Values are also intimately linked to our sense of self, and they’re essential for our mental health. They create feelings of happiness, satisfaction and fulfilment, and help us develop healthy patterns of behaviour. They also connect us to other people – whether individuals, groups or communities – and help us develop meaningful relationships with them. Living in line with our values has a direct impact on how we feel about ourselves. When we’re aligned with our values, we tend to be happier, more confident and more fulfilled. Research shows that just thinking about our values keeps our stress levels low, and helps us feel more content. But when there’s a mismatch, we tend to be less happy and more stressed. For example, have you ever been in a situation where someone said or did something that you strongly disagreed with, but you didn’t speak out – and then you felt bad afterwards? When your behaviour doesn’t match your values, you may experience a drop in self-esteem, difficulty making decisions, anxiety, stress or depression. Values definition and examples – how to clarify your priorities Values are incredibly powerful. So if you don’t yet have clarity on what your values are, now is a good time to think about them. 
The first and most important step is to define your values. Defining your values is an important first step in ACT – and in moving towards living your best life. So how do you go about doing this? Start off by reflecting on what’s really important to you, in different areas of your life – such as relationships, career and leisure. You can also find guides and lists of values to inspire you and choose from online. These include guides on the World Values Day website, for example. Lists of values can be quite long – but are a useful starting point to generate ideas and see which resonate with you. Or just come up with your own list, based on what’s important to you in life, your goals, or what you enjoy doing. Reflect on which are most important to you, and pick your top five. Then define what each value means to you in a sentence or two. Here are just a few examples: - Honesty. I believe in being honest, truthful and sincere wherever possible, and I think it’s important to say what I really think. - Kindness. It’s important to me to be kind, compassionate and considerate. I’m generous with my time and resources to friends, family and charities, and I love helping other people. - Assertiveness. I respectfully stand up for my rights and communicate my needs. - Friendliness. I value being a good friend and time spent with companions. - Respect. It’s important to me to be respectful towards myself and others, and to be polite and considerate. - Self-development. I like to keep learning, developing, growing and improving in my knowledge, skills or life experience. The next step is to take ‘committed action’ based on your values – even in the face of obstacles. This will help you develop patterns of behaviour that will get you closer to where you want to be in life. And you’ll also experience the mental health benefits of living your values. Acceptance and Commitment Therapy – a values-based approach Acceptance and Commitment Therapy (ACT) is a therapeutic approach with values at its heart. It helps you put all of this into action, so you can live your best life – while accepting the pain that inevitably goes with it. ACT has been show to be effective for a range of difficulties, including anxiety, depression and even chronic pain. It helps you learn to accept what’s outside of your control, and commit to action that improves and enriches your life, in line with your values. The three principles of ACT are: - Accept what’s beyond your personal control and live in the present moment. - Choose valued behaviours mindfully, rather than allowing automatic responses. - Take action, rather than become stuck in painful experiences. In ACT, a therapist works with you to clarify and define your values. You and your therapist will explore the main details of your life, and help you to clarify what is truly important and meaningful to you. They then use that knowledge to guide, inspire and motivate you to change your life for the better. Your therapist will encourage you to take action, based on your values, to create a rich and meaningful life. They’ll also help you stay focused on developing resilience so you can live the life that you want, rather than be constrained by negative thoughts and feelings. Find out how you can get started with ACT with My Online Therapy, or learn more about this therapeutic approach by listening to the ACT modules in our Self-care courses. ACT helps you focus on developing a life worth living – and clarifying your values is the starting point for that. 
So take some time to reflect on your values today, and what truly matters to you in life. Then go out and act on them.
The Shapes in Circles puzzle is a great way to introduce shapes to kids, while developing logic and analytical thinking. The game consists of a sturdy puzzle board with 9 different shapes and 9 circular cutouts with corresponding shapes. Children learn to match each circular piece with the correct shape for a perfect fit. Handling the puzzle pieces develops fine motor skills and hand-eye coordination. Identifying matching shapes encourages logical thinking and problem solving skills. It also boosts vocabulary and concentration. What your child learns - Hand eye coordination and dexterity - Fine motor skills - Colour and shape recognition - Boosts vocabulary - Problem solving and analytical ability - Concentration and focus
The Missouri Sharecropper Protest of 1939, also known as the Missouri Sharecropper Strike, was a labor movement led by sharecroppers, tenant farmers, and agricultural workers in Missouri, United States. The protest was sparked by the poor working conditions and low wages faced by sharecroppers, who were primarily African American and lived in poverty. The protest began in the summer of 1939 when sharecroppers and tenant farmers from across the state began to organize and demand better wages and working conditions from the large landowners they worked for. The demonstrators also demanded the right to unionize and have a voice in the conditions under which they worked. The protest was met with fierce resistance from the landowners, who refused to negotiate with the sharecroppers and used intimidation and violence to break up the demonstrations. Despite this, the sharecroppers persisted and continued to organize, leading to strikes and protests throughout the state. One of the most notable moments of the protest was the “March of the Sharecroppers,” which took place in August 1939, when thousands of sharecroppers and tenant farmers marched to the state capital of Jefferson City to demand their rights. The march was met with a strong police presence, and many demonstrators were arrested or beaten. The protest ultimately failed to achieve its goals, as the sharecroppers could not secure better wages or working conditions. However, it did bring national attention to the plight of sharecroppers and tenant farmers and helped to pave the way for future labor movements in the rural South. The Missouri Sharecropper Protest of 1939 was a significant event in the history of labor and civil rights in the United States, highlighting the struggles and challenges faced by sharecroppers, tenant farmers, and agricultural workers, particularly African Americans, and their fight for fair treatment and rights, which was met with resistance and violence.
The teachers will show a PowerPoint presentation to review social studies standards, the Antebellum time period, social classes, and photos of people from the time period. Students will re-create the photos as tableaus and will write original dialogue from the perspective of the subjects of the photos. How can tableaus be used as an acting tool for various characters/people? What was life like in South Carolina during the antebellum period? What were the different classes of people in South Carolina and what were their lives like during the antebellum period? What impact did the cotton gin have on slavery in the South? What role did slavery play in South Carolina's secession from the Union? Why were many South Carolinians fearful of abolitionists? Why did South Carolinians have such a strong belief in states' rights? Other Instructional Materials or Notes: - PowerPoint Presentation - Various photos of the Antebellum period showing multiple classes of people. Sticky notes, pencils - 3-4 The student will demonstrate an understanding of life in the antebellum period, the causes and effects of the Civil War, and the impact of Reconstruction in South Carolina. - South Carolina played a key role in events that occurred before, during, and after the Civil War; and those events, in turn, greatly affected the state. To understand South Carolina's experiences during this tumultuous time, the student will uti... - Grade 1: Write left to right leaving space between words. Lesson Created By: ShomoneikBrown Lesson Partners: ABC (Arts in Basic Curriculum)
How Asphalt Can Be Used for Solar Power
Research has shown that asphalt can be used to harness solar energy. In this scheme, the asphalt works as a solar panel by absorbing heat from the sun. The many miles of asphalt roads that we have can be used as solar panels. To harness electricity from the asphalt, metal pipes that carry water are run under the asphalt. The water absorbs the heat from the asphalt and is used in one of two ways. The heated water can be carried to buildings to supply hot water. The heated water can also produce electricity by being passed through a thermoelectric generator. Asphalt solar energy is in the early stages of development. Two small asphalt solar energy systems have been successfully used in the Netherlands to generate energy.
Harnessing Asphalt Energy
The Worcester Polytechnic Institute is experimenting with new ways to improve upon using asphalt to harness solar energy. Combining additives with the asphalt to increase the amount of heat absorbed by the asphalt could increase solar energy production. Finding optimum materials for the metal piping that transfers the water could result in the solar energy being used more efficiently.
Cost of Asphalt Solar Energy
The cost of building the asphalt solar energy system would be minimized by building the system in stages. As roads went under construction for normal repairs, as they usually do approximately every ten years, they would be converted to the asphalt solar energy system. Since all of the roads would not be done at the same time, the expense would be spread over a period of time. Spreading the costs over a period of time would make the system more financially feasible to construct.
Advantages and Disadvantages of Asphalt Solar Power
Asphalt energy is simply solar energy. Solar energy is a clean and renewable source of energy that does not emit pollution. The benefit of using asphalt over solar panels is that the asphalt is already in place and land does not have to be devoted to solar panel placement. A potential drawback is the use of water in the system. Water is a valuable resource, and if the system were to use fresh water, it could place a strain on the water supply. Potable water would not have to be used to pass the water through a thermoelectric generator. The use of nonpotable water would not cause a strain on the water supply. Potable water would have to be used with the system if it were to be used to create hot water for homes and businesses. Would the system use two different sources of water? Potable water to create hot water and nonpotable water to create electricity? Solutions to water usage need to be considered and explored.
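To get a feel for the numbers involved, here is a rough back-of-envelope sketch in Python. Every figure in it (collector area, irradiance, sun hours, efficiencies, temperature rise) is an assumption chosen purely for illustration, not a measured value from the Dutch systems or the Worcester Polytechnic Institute research.

```python
# Rough, illustrative estimate of what a stretch of "asphalt solar collector"
# might deliver. All numbers are assumptions chosen for illustration.

ROAD_AREA_M2 = 100.0        # assumed collector area (about 25 m of a 4 m-wide lane)
IRRADIANCE_W_M2 = 600.0     # assumed average solar irradiance while the sun is up
SUN_HOURS = 6.0             # assumed hours of useful sun per day
THERMAL_EFF = 0.20          # assumed fraction of solar heat transferred to the water
TEG_EFF = 0.04              # assumed thermoelectric-generator conversion efficiency

WATER_HEAT_CAPACITY = 4186.0  # joules per kg per degree C
DELTA_T = 40.0                # assumed temperature rise of the water, degrees C

# Thermal energy captured per day (joules, then kilowatt-hours).
thermal_j = ROAD_AREA_M2 * IRRADIANCE_W_M2 * THERMAL_EFF * SUN_HOURS * 3600.0
thermal_kwh = thermal_j / 3.6e6

# Option 1: deliver the heat as hot water for buildings.
hot_water_kg = thermal_j / (WATER_HEAT_CAPACITY * DELTA_T)

# Option 2: run the heated water through a thermoelectric generator.
electric_kwh = thermal_kwh * TEG_EFF

print(f"Thermal energy captured: {thermal_kwh:.1f} kWh per day")
print(f"Hot water heated by {DELTA_T:.0f} C: {hot_water_kg:.0f} kg per day")
print(f"Electricity via thermoelectric generator: {electric_kwh:.1f} kWh per day")
```

Under these illustrative assumptions, the thermoelectric route yields only a few kilowatt-hours per day because of the low conversion efficiency, which suggests why delivering the heat directly as hot water is often the more attractive use.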
Water security refers to the sustainable use of water and the safeguarding of water systems in a country. It also means ensuring the sustainable development of water resources and access to water for all individuals as a basic human right. The water crisis is a serious issue, yet surprisingly little attention is being paid to the problem. Pakistan, once a water-abundant country, is now water-stressed and heading towards a major crisis; its water profile is far from satisfactory and amounts to an emergency in terms of water resources. According to a report of the Pakistan Council of Water Research (PCWR), Pakistan is at risk of an absolute water crisis by 2025. This will also affect the agriculture sector's food production, the energy sector's electricity generation, and the healthy environment that is necessary for socio-economic growth and sustainable development. The demand for water is rising in the country because of factors such as urbanization and rapid population growth. This has completely changed water consumption patterns, but the available supply of water is very limited and is at risk because of climate change. Pakistan and India share rivers on the basis of the Indus Water Treaty (IWT), which traces back to 1960, when both countries signed the treaty under the supervision of the World Bank. The treaty defined the principles of water sharing. Since then, water demand has increased in both countries, leading to periodic disagreements over water-sharing issues. Water nationalism has risen in recent years as the supply-demand mismatch for water has widened in India and Pakistan, and it has been exacerbated by rising tensions between the two nations. Somehow, though, the IWT has always managed to avert serious water conflicts despite the political tensions between Islamabad and New Delhi. Experts forecast that Pakistan will be on the verge of a water crisis by 2025. Water security is an emerging paradigm which requires recognition at all levels. South Asia faces immediate and long-term difficulties from climate change, including glacier melt, sea-level rise, groundwater depletion, extreme weather events and an increase in the frequency of natural disasters, all of which are expected to worsen in the coming decades. Analysts caution that pre-existing vulnerabilities (high poverty, poor governance, and restricted access to basic services and resources) amplify the region's climate risks, with potentially catastrophic consequences if warming continues at the same rate. Crop yields have declined and production losses have occurred in the region as a result of extreme climate events. For the past few months, warning bells have been ringing about the water shortage in Pakistan, and it is widely said that Pakistan's water crisis will deepen with climate change. One of the direct results of climate change is rising temperatures, and the country has recently experienced exceptionally high temperatures in March and April. Nor is this only the case in Pakistan; a severe heat wave is also sweeping through India. Moreover, forecasts suggest further temperature increases in the coming weeks. This will put additional pressure on the already strained food and water security of Pakistan and India. Both countries already rank among the ten countries most vulnerable to climate change, so at this time the revival of the suspended Indus water talks is a good step.
On 30th May 2022, a five-member delegation from Pakistan went to New Delhi for the 118th bilateral meeting on the IWT. The two-day talks ended on a positive note, but the present crisis should be treated as a wake-up call for the authorities in both countries. It is important that India and Pakistan make efforts to develop and promote cooperation between the two sides for the smooth functioning of water systems. They must also reiterate their commitment to the treaty and implement it to the letter, for the benefit of both countries and to avoid serious consequences arising from their own negligence in the future. India has threatened Pakistan with the diversion of rivers flowing into Pakistan; this attitude is condemnable and needs to stop. On the other hand, it is necessary for Pakistan to adopt a holistic approach and an efficient governance system for the water sector. This can be done by improving technology, using innovative strategies, implementing the SDGs, and upgrading existing and developing new infrastructure, which will help to mitigate the issue. If this catastrophe is not addressed quickly, the country may suffer complete water scarcity before even realizing the seriousness of the ongoing crisis. Research Associate, Pakistan House
As I observe my piano students of all ages learning to play the piano, I've noticed that it's very common to have trouble remembering the names of the white piano keys. Both children and adults can have difficulty remembering which piano key is which. It's not that it's hard to name the white piano keys. Even the youngest students can keep track of A-B-C-D-E-F-G. But rather, when you're immersed in learning something at the piano, all of the keys start to look the same and it's easy to mix them up. (Note: if you're new to the piano and not sure of how the piano keys are set up, they are labeled with letter names A-G. This sequence repeats over and over from left to right, all the way up the piano.) The most common problems I run across are:
-Mixing up C and F. These 2 keys are the same shape because they are both located to the left of a set of black keys. It's very common for students to place their hand on an F when they're really aiming for a C. Since most beginning piano music is anchored around C, it's easy to visually confuse C with F.
-Mixing up G and A. This happens almost daily in my piano studio and it makes sense. These are the 2 keys found in the middle of the group of 3 black keys. It's easy to flip-flop them. They are also the same shape, so once again, they do look really similar.
-Mixing up G and F. This is slightly less common but it's still something that I notice frequently. These 2 keys aren't the same shape, so they don't necessarily look similar, but maybe as the last 2 letters of the music alphabet, it's harder to keep track of which goes first. Or, if a student is in tune with the symmetry on the grand staff or on the keys from middle C, it's obvious that treble F and G mirror bass G and F. I can see why it's easy to mix these keys up.
Here are my top 5 tricks for helping piano students keep the piano keys straight:
Drill Letter Names
One easy solution is to simply drill the order of the keys forwards and backwards, over and over. A B C D E F G A B C… G F E D C B A G F E… There are a few ways to do this:
-Have the student say the letters out loud, especially as they are learning music where notes fall in consecutive order, either forwards or backwards. Scales and scale-like passages within songs are a good way to practice this. Many beginning piano songs are a series of step-wise notes, so this is a good time to drill letter names.
-Write each letter of the music alphabet on a sheet of paper. Place each paper on the floor in consecutive order and have the student walk from letter to letter saying the names as they go. If you have space, it's even better to make 2 octaves of the music alphabet so that students can have more experience transitioning from G to A. Any time you can find a way to incorporate a larger body movement like walking, it will help to reinforce the finer details students are learning at the piano.
-Practice naming the keys all the way up and down the piano. Say the letters with the student. It's OK if you need to write the sequence down to give a visual cue of the order of the notes, especially when you're descending.
Learn 5 Finger Scales In Every Key
I make a point to teach my students to play 5-finger scales in every key as soon as possible. As we move from key to key, we always name the notes in both directions while playing the scale. Students become very proficient at finding the correct starting key, then also naming the other keys within the scale.
When students are practicing this over and over, the key names will naturally click and identifying them will become second nature.
Piano Practice Pads
Piano Practice Pads are a really handy tool for studying and learning about the piano away from the piano. Sitting in front of 88 keys can get overwhelming, so sometimes it's helpful to move away from the piano and give students more space while trying to soak up a concept. Piano Practice Pads are small plastic 2-octave keyboards. The keys don't actually move and they don't make any sound, so this makes them perfect tools for learning. When students can turn off the sound and not worry about playing the correct notes or what the music sounds like, they have the freedom to focus on what they are trying to learn. Students can hold a piano practice pad on their lap or sit with one on the floor. It's an easy way to practice identifying keys and to keep track of new concepts. Then, when they return to the piano, things are already making much more sense. Piano Practice Pads are available in the Pianissimo Store. Related: Learn about 15 ways to use Piano Practice Pads in piano lessons. Iwako Erasers are small eraser puzzles that fit perfectly on piano keys. These are an excellent teaching aid to keep at a piano because there are endless ways to incorporate them into piano lessons. Here are ways that students can use Iwako Erasers to learn piano keys:
-Practice finding all of the like keys, such as all of the C's, all of the G's, etc.
-Place the erasers on each note of a scale and name the notes of that scale.
-Move the eraser by steps or skips between keys, naming the key that it moves from and the key it moves to.
Amazon is my favorite place to get Iwako Erasers. I keep a big collection in my studio. My students are always asking to take one home, so I always include an Iwako Eraser as a part of their Christmas gift. An excellent book to reinforce piano key names is Handy Houses. Handy Houses is a short story that uses images to relate to each key on the piano. It only takes about 5 minutes to read, and once a student has read it, they can easily remember the order of keys on the piano. I've never had a student who couldn't remember piano key names after reading Handy Houses. And, many of my students who seemed to have trouble keeping track of the piano keys have immediately improved after reading this story. It is so memorable that students can always refer back to the story when they get mixed up about a note. Once a student has read Handy Houses, we often use a paper keyboard to illustrate the Handy Houses on the keys. Then, students easily transition to playing games using Piano Practice Pads and Iwako Erasers. At this point, they love having a chance to find each letter using the Handy Houses tricks they have just learned. Teachers, what other tricks do you use to help students keep the piano keys straight? Parents, have you noticed your child getting confused by the layout of the piano keys? Try some of these tricks at home. James Harding says Thank you for this! Some great ideas! Do you recommend any music apps that can help kids quiz through these things? I have some teachers at my store looking for suggestions. Anything you can suggest would be great! You know, I haven't used many apps for this particular concept, but I'm sure they exist. Alex Nguyen says I like the idea of the practice pad. I use it myself for my students. They can practice the notes away from their actual piano. It's convenient.
On the way home from the lesson, they can just practice while their parent is driving.
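For readers who think in code, here is one way to see why C and F (and likewise D, G and A) look alike. This is a small Python sketch written purely for illustration, not something from any piano method: it models one octave and groups the white keys by whether a black key sits immediately to their left and to their right.

```python
# Model one octave of the keyboard and group white keys by their black-key neighbours.

CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def neighbours(white_key):
    """Return (black key to the left, black key to the right) for a white key."""
    i = CHROMATIC.index(white_key)
    left = "#" in CHROMATIC[(i - 1) % 12]
    right = "#" in CHROMATIC[(i + 1) % 12]
    return left, right

shapes = {}
for key in ["C", "D", "E", "F", "G", "A", "B"]:
    shapes.setdefault(neighbours(key), []).append(key)

for (left, right), keys in shapes.items():
    print(f"black on left={left}, black on right={right}: {', '.join(keys)}")

# Typical output:
#   black on left=False, black on right=True: C, F     <- the two commonly confused keys
#   black on left=True, black on right=True: D, G, A   <- G and A sit inside the 3-black group
#   black on left=True, black on right=False: E, B
```

The grouping mirrors the confusions described in the post: C and F share the "no black key on the left" shape, while G and A (and D) sit between two black keys.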
Fire safety is a major concern in any situation, and when it comes to environments with electrical equipment, the risk is even higher. Electrical fires can cause extensive damage, destruction, and even life-threatening situations. Compliance with fire codes and standards is essential to ensuring effective fire protection. In this blog, we will explore the importance of following electrical fire safety regulations and the role of the fire extinguisher in meeting these requirements.
Fire extinguishers and the regulatory landscape:
Fire safety regulations and compliance standards have been established to protect life, property, and critical infrastructure. These codes were developed by government agencies and international organizations to establish standardized fire prevention, protection, and response programs.
- Fire extinguishers and electrical fire hazards: Electrical fires present unique challenges due to the presence of live electrical equipment. Regulatory agencies recognize this and provide guidelines on the appropriate fire extinguishers for different types of fires.
- Codes and standards: In the United States, the National Fire Protection Association (NFPA) establishes standards and regulations for fire safety, including the use of fire extinguishers. Titled the "Standard for Portable Fire Extinguishers," NFPA 10 outlines requirements for the selection, installation, inspection, and maintenance of fire extinguishers.
- Selection of fire extinguishers: Fire extinguishers are classified on the basis of their effectiveness in fighting specific types of fires. CO2 (carbon dioxide) fire extinguishers are generally recommended for electrical fires. These extinguishers displace oxygen and smother flames while leaving no harmful residue that could damage delicate equipment.
- Installation and accessibility: The code specifies the appropriate location and accessibility of fire extinguishers. They should be located near fire-risk areas and escape routes and be easily accessible. Fire extinguishers must be readily accessible in areas where electrical appliances are located, without posing a danger to those attempting to use them.
- Inspection and maintenance: Regulations mandate regular inspections and maintenance of fire extinguishers to ensure their continued operation. Inspection procedures, test procedures, and maintenance procedures are detailed to ensure firefighters are prepared to respond effectively to emergency situations.
- Training and awareness for employees: Compliance includes not only equipment but also employee training. Fire safety training ensures that individuals are aware of fire hazards, understand how to safely operate fire extinguishers, and are adequately prepared to respond in an emergency.
- Liability and legal responsibility: Complying with fire safety codes isn't just good practice; it's a legal obligation. Failure to comply may result in fines, legal penalties, and potential injury to individuals and property.
- Evolving standards: Fire safety regulations are dynamic and can be updated as new technologies and practices emerge. Staying aware of these changes keeps you in compliance and provides effective fire protection.
Key takeaway points
- Adhering to fire safety regulations and compliance standards is crucial for safeguarding lives, property, and critical infrastructure.
- Proper selection of fire extinguishers is essential, and guidelines are provided to ensure the right type is chosen for different fire scenarios, with CO2 extinguishers often recommended for electrical fires.
- Proper installation and accessibility of fire extinguishers are specified by codes. Regular inspection and maintenance are mandated to ensure their ongoing effectiveness.
- Staying informed about evolving standards is crucial for maintaining compliance and effective fire protection measures.
Electrical fire safety isn't just a matter of common sense; it's a systematic discipline based on adherence to codes and standards. Fire codes provide a clear strategy for preventing and reducing the risk of electrical fires. By selecting, installing, and maintaining fire extinguishers in accordance with these codes, individuals and organizations can ensure the safety of their premises, protect valuable assets, and safeguard everyone who works or spends time there.
Key Question: Why do Geographers use Maps, and What do Maps Tell Us?
Map Appendix A Notes
• Maps and their functions
• Map Scale
• Map Projections
• The Grid System
• Symbols on Maps
What are Maps and what are their functions?
• What can maps be used for?
• In what ways do maps distort? Why?
• How do maps show bias?
Reference Maps
• Show locations of places and geographic features
• Absolute locations
• What are reference maps used for?
Thematic Maps
• Tell a story about the degree of an attribute, the pattern of its distribution, or its movement.
• Relative locations
• What are thematic maps used for?
Two Types of Maps: Thematic Map
• What story about the population of Korea in 1973 is this map telling?
Maps and their functions
• Cartography = the art of map making
• Reference Map = one used for navigating, e.g. a road map
• Thematic Map = one used to illustrate a particular theme
• Mental Maps = those that exist in one's mind (cognitive maps)
• Topographic Maps use lines to show contour.
Map Scale
• The ratio between actual distance on the ground and the length given on the map
• Which scale would show a smaller portion of the earth, 1/10,000 or 1/1,000,000?
• Larger scale = more zoomed in
• Why are different scales needed in mapping the world?
The Grid System
• What function does the Grid System serve?
• What are the key aspects of the Grid System?
Grid stuff to know…
• Parallel
• Latitude
• Equator (0º N. or S.)
• Tropic of Cancer (N)
• Tropic of Capricorn (S)… 23.5º
• Arctic Circle
• 0º - 90º N or S
• Meridian
• Longitude
• Prime Meridian 0º E or W
• International Date Line 180º E or W
Map Projections
• Why are there different map projections?
• Which ones do I need to know?
• Azimuthal
• Peters
• Fuller / Dymaxion
• Robinson
• Mercator
• What are the strengths and weaknesses?
• For what are they most commonly used?
Map Symbols
• Dots
• Tones/shades/colors
• Isolines
• Symbols
Mental Maps: maps we carry in our minds of places we have been and places we have heard of. Can show: terra incognita, landmarks, paths, and accessibility.
Activity Spaces: the places we travel to routinely in our rounds of daily activity. How are activity spaces and mental maps related?
Aspects of a Mental Map
• Nodes
• Edges
• Paths
• Districts
• Landmarks
Discussion Questions
• List as many types of maps and purposes for maps as you can.
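The scale idea above lends itself to a quick worked example. The short Python sketch below is purely illustrative (the 1/10,000 and 1/1,000,000 denominators are example values, not figures from the original notes); it converts a distance measured on a map into ground distance and shows why the larger scale covers the smaller area in more detail.

```python
# A scale of 1/50,000 means 1 unit on the map = 50,000 of the same units on the ground.

def ground_distance_km(map_cm, scale_denominator):
    """Convert a distance measured on the map (in cm) to kilometres on the ground."""
    ground_cm = map_cm * scale_denominator
    return ground_cm / 100_000.0  # 100,000 cm per km

for denom in (10_000, 1_000_000):
    km = ground_distance_km(4.0, denom)  # a 4 cm line on the map
    print(f"scale 1/{denom:,}: 4 cm on the map = {km:g} km on the ground")

# scale 1/10,000:    4 cm = 0.4 km  -> larger scale, smaller area shown, more detail
# scale 1/1,000,000: 4 cm = 40 km   -> smaller scale, larger area shown, less detail
```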
Names are an important and intrinsic part of a person’s identity. Surnames, in particular, build on your family’s history, but if you’re changing your name, they can set you on a path of your own. Unless you want to be known as “Hey you” all your life, names also have meanings that can reflect the professions of your ancestors, where they came from, and provide a glimpse into their lives. In Western culture, last names — also known as family names or surnames — weren’t always a thing. Instead, people’s names often included a reference to where they were from or an epithet, like Alexander the Great or Suleiman the Magnificent. But these days, Dave of Burbank just doesn’t really have the same ring to it. In smaller civilizations, last names were not necessary because it was unusual to have a large number of people bearing the same given name (first name). However, as time went on, those first names became more common, and the need for last names became apparent. In practical terms, if there were five Johns in one town, people needed to describe which John from which family they were talking about. Beyond that, surnames were a means of relating to specific clans or tribes. Some symbolized the bonds between family; others denoted social class. Surname, which refers to an official title or name added to a person’s first name, comes from the Latin combination of sur-, meaning “over or above” and -name. Understanding last name etymology involves tracing the meaning behind the names commonly used by different cultures and nationalities. Most are based on specific occupations, personality characteristics, and other descriptive traits. Some surnames come directly from the occupation of the first person who had the name. You can easily imagine how residents in a small town would need to differentiate between George the Baker and George the Butcher. Adding the occupation makes it clear. Here's George Baker, and there's George Butcher. Other surnames with an occupational origin include: - Taylor - a tailor or someone who makes clothing (from Old French tailleur) - Brewer - someone who brews beer (from Middle Low German brauer) - Mason - a stoneworker or person who lays bricks (from Old French masson) - Carpenter - a person who builds with wood (from Middle English carpentier) - Fletcher - a person who makes feathered arrows (from Old French fleche) - Smith - a blacksmith or metalworker (from Old English smitan) - Miller - someone who grinds grain (from Middle English mille) - Tanner - a person who tans, or preserves, animal hides (from Old English tannian) - Draper - someone who makes or sells cloth (from Old French drapier) - Fisher - a person who fishes (cognate of German Fischer) Other surnames are related to specific places. Also called toponymic surnames, these names made it clear exactly who you were talking about by describing where that person lived or where they came from. 
- Dale - someone who lived in a wide valley (from Old English dæl) - Forrest - a person who lived in the forest - Milford - a person who lived near a mill on a ford - Bell - someone who lived close to the town's bell - Brook - someone who lived near a running stream (from Old English broc) - Underhill - a person who lived under or at the base of a hill (from Old English under and hyll) - Atwood - a person who lived in the woods (Middle English) - Banks - someone who lived near a bank of land - Abbey - someone who lived near an abbey (from Middle English abbeye) - Moore - a person who lived on a moor or open marsh land (from Middle English mor) - Moorhead - a person who lived at the head of a moor Some last names started as just adjectives that described someone's personality or physical appearance. In this way, John with the red hair could be differentiated from John with the black hair. Many names offer a clue about the personal characteristics of your ancestors, like these examples: - Stout - someone who has a sturdy build - Strong - a person of great physical strength - Young - someone who is not yet old, possibility used to differentiate between generations - Short - a person whose height is less than average - Long - a tall person - Black - someone with black hair - Brown - someone with brown hair - Stern - a person who is serious - Swift - a fast person Last names can also communicate family connections. Often, this would be used for the second or subsequent generation, referring to a father's first name or occupation. These types of last names are also called patronymic surnames. - Johnson - son of John - Thompson - son of Thomas - Jackson - son of Jack - Smithson - son of the smith - Larson - son of Lars - Nelson - son of Nels - Stevenson - son of Steven - Hansen - son of Hans - Oleson - son of Ole - Richardson - son of Richard - O'Sullivan - son of Sullivan - O'Reilly - son of Reilly - McArthur - son of Arthur Surnames could also indicate that someone came from royalty or had the blessing of royalty. Alternatively, it could be an indication that the person had a connection to other important people in the community, such as religious leaders. - Prince - someone associated with the prince - Abbott - a person associated with an abbott - Steward - someone appointed by royalty to act on the royal's behalf - King - a person associated with the king - Fitzroy - someone who was an illegitimate son of a king - Lord - a person associated with a lord - Rector - an administrative leader in the church - Dean - a cleric with a position of authority in the church - Viceroy - similar to Steward, someone acting on the behalf of royalty
Corn has been present in the human diet for 7000 years. It comes to us from the Aztecs, who inhabited the territory of today's Central Mexico and treated it in a special way in order to make tortillas from its flour. As Indian tribes migrated, corn spread throughout the American continent, and colonists brought it to Europe in the fifteenth century. Nowadays, 60% of the corn yield is used for feeding livestock, while about 20% is used for human consumption. It is present on the table every day, starting with corn flour, through boiled corn, and up to the world-famous popcorn. Corn is an annual plant that thrives best in slightly acidic, fertile soil. The length of the vegetation period depends on the cultivation method and the variety, i.e., the type of hybrid. Hybrids can be early, medium or late, with a growing season ranging from 90 to 145 days. In addition to the quality of the soil, good weather conditions are necessary for growth. If the outside temperature is below 10 degrees Celsius, germination and growth slow down, while very high temperatures without sufficient irrigation also slow growth and reduce the quality of the fruit. Six subspecies of corn are known: dent, flint, flour, sweet, pop, and waxy. Sweet corn is by far the most widely used in human nutrition. Corn kernels are rich in fiber, vitamins, carotenoids, and many other ingredients beneficial for our health. They positively affect vision, lower cholesterol levels, and reduce the risk of cataracts, diabetes, cancer, and osteoporosis.
A Guide for Researchers is intended for the following audiences and learners: - Indigenous researchers who work within post-secondary and Indigenous communities - Non-Indigenous researchers who work with Indigenous communities and participants - Post-secondary research departments, teaching faculty, and those who work in community partnership roles that involve the development and sharing of knowledge and research. Each researcher approaches research with a different way of knowing, being, and doing. There is no one way of conducting Indigenized research, because it is based on place, relationships, and shared values. To create a trusting relationship with Indigenous communities, researchers need to acknowledge the importance of using Indigenous language to build shared meaning. Gregory Cajete (2000), a Tewa scholar, stresses the importance of language to Indigenous Peoples: "Indigenous people are people of place, and the nature of place is embedded in their language" (p. 74). In British Columbia, there are 34 languages spoken across the province, as well as Michif and Chinook Jargon. This number does not include the many dialects of each language and language family, which are increasing as First Nations communities are reclaiming and revitalizing their language. Almost all Indigenous languages have similar words or phrases that are used for proper introductions. Depending on the situation and context, introductions can either follow a formal protocol or be an everyday or common greeting. As we go through this resource together, let us begin by setting a context and building a dialogue. Gilakas'la is a Kwaḱwala phrase that translates in several ways depending on the context in which it is spoken. It can be a welcome or greeting, a form of engagement, or a way to give thanks, because it translates to "Come, breathe of the same air." Inherent in understanding the meaning of Gilakas'la is knowing that we are coming together for some purpose. We tell each other our names, where we are from, and our family and ancestral lineage. After we announce our intentions and where we come from, we can then enter into a discussion to clarify the purpose we wish to embark upon together. In this same way, we see this guide as a form of introducing ourselves. Gilakas'la.
Michif: a language spoken by Métis people, mixing words from French, Cree, and Dene.
Chinook Jargon: a trade language spoken across the Pacific Northwest that mixes Chinookan, Nuu-chah-nulth, English, French, and other European languages. It's also known as Chinuk Wawa.
Periodontal (gum) disease is an infection caused by bacterial plaque, a thin, sticky layer of microorganisms (called a biofilm) that collects at the gum line in the absence of effective daily oral hygiene. Left for long periods of time, plaque will cause inflammation that can gradually separate the gums from the teeth — forming little spaces that are referred to as “periodontal pockets.” The pockets offer a sheltered environment for the disease-causing (pathogenic) bacteria to reproduce. If the infection remains untreated, it can spread from the gum tissues into the bone that supports the teeth. Should this happen, your teeth may loosen and eventually be lost. When treating gum disease, it is often best to begin with a non-surgical approach consisting of one or more of the following: - Scaling and Root Planing. An important goal in the treatment of gum disease is to rid the teeth and gums of pathogenic bacteria and the toxins they produce, which may become incorporated into the root surface of the teeth. This is done with a deep-cleaning procedure called scaling and root planing (or root debridement). Scaling involves removing plaque and hard deposits (calculus or tartar) from the surface of the teeth, both above and below the gum line. Root planing is the smoothing of the tooth-root surfaces, making them more difficult for bacteria to adhere to. - Antibiotics/Antimicrobials. As gum disease progresses, periodontal pockets and bone loss can result in the formation of tiny, hard to reach areas that are difficult to clean with handheld instruments. Sometimes it's best to try to disinfect these relatively inaccessible places with a prescription antimicrobial rinse (usually containing chlorhexidine), or even a topical antibiotic (such as tetracycline or doxycyline) applied directly to the affected areas. These are used only on a short-term basis, because it isn't desirable to suppress beneficial types of oral bacteria. - Bite Adjustment. If some of your teeth are loose, they may need to be protected from the stresses of biting and chewing — particularly if you have teeth-grinding or clenching habits. For example, it is possible to carefully reshape minute amounts of tooth surface enamel to change the way upper and lower teeth contact each other, thus lessening the force and reducing their mobility. It's also possible to join your teeth together with a small metal or plastic brace so that they can support each other, and/or to provide you with a bite guard to wear when you are most likely to grind or clench you teeth. - Oral Hygiene. Since dental plaque is the main cause of periodontal disease, it's essential to remove it on a daily basis. That means you will play a large role in keeping your mouth disease-free. You will be instructed in the most effective brushing and flossing techniques, and given recommendations for products that you should use at home. Then you'll be encouraged to keep up the routine daily. Becoming an active participant in your own care is the best way to ensure your periodontal treatment succeeds. And while you're focusing on your oral health, remember that giving up smoking helps not just your mouth, but your whole body. Often, nonsurgical treatment is enough to control a periodontal infection, restore oral tissues to good health, and tighten loose teeth. At that point, keeping up your oral hygiene routine at home and having regular checkups and cleanings at the dental office will give you the best chance to remain disease-free. 
The Forgotten Century by Don Robertson
The birth of today's classical music took place in Italy. The instruments of the violin family, the brass, the beginnings of what became Western tonal harmony, the terms (concerto, symphony, adagio, piano, forte, allegro, and so on): all this came from Italy around the beginning of the 17th century. The 1600s were ushered in by a group of highly educated noblemen who lived in Florence, Italy, and called themselves the Florentine Camerata. In their regularly held meetings, they had been discussing ways that they could employ to revive the ancient art of Greek tragedy. They came up with a new style of music that was based on the extensive research into ancient Greek dramatic music conducted by Girolamo Mei, an erudite Florentine scholar who lived and worked in Rome. Based on the ideas of the Camerata, Emilio De'Cavalieri wrote the first important dramatic and liturgical works in the new style, and he and Jacopo Peri wrote the first operas, which were performed in 1600: the first year of the new century. From these humble beginnings, a new style of music was born, a style that moved away from the dominant polyphonic choral singing of the previous century to instrumental music, solo singing, and a mixture of all three. This was also the beginning of a new cycle of music that was based on musical instruments, replacing the previous 700-year cycle of polyphonic music that had been based on voices. The Italian composer Claudio Monteverdi was the first renowned composer of the new era, and the first great opera was his beautiful Orfeo, composed in 1607. By mid-century, the Italian town of Bologna had become a tremendous center of music. There, the full flowering of the 17th century took place, not only in sacred choral music, but in purely instrumental music as well. For many years, the classical music that was composed during the 17th century was described, along with music from the first half of the 18th century, as "baroque music", although the music that was composed in the style of the 17th century is recognizably different from that of the first half of the 18th century, which naturally (as in the case of all centuries) was derived from it. Composers of the 17th century were absorbed into the general category of "baroque", and 18th-century composers such as George Frideric Handel, Antonio Vivaldi and Johann Sebastian Bach became the figureheads of this genre in music textbooks, while the bulk of the music from the 17th century had become forgotten. Even Monteverdi's great operas were unknown until a shortened version of Orfeo was performed in Berlin in 1881, and in 1905 the French composer and educator Vincent d'Indy directed for the first time a concert performance of Monteverdi's opera L'incoronazione di Poppea. The fact that Bologna had been a great center of music during the 17th century was finally brought to attention by the musicologist Arnold Schering during the 1920s.
A Taste of the 17th Century
Early 17th Century Secular Music
Tall Tales Reading Comprehension | Pecos Bill passages and questions

American folktales are a fun component of the Common Core State Standards. Incorporate these wacky, student-friendly tidbits about Pecos Bill into your tall tale reading, writing and language lessons (or speech/language therapy sessions!). These passages are just the right size to keep readers of all skill levels engaged and to keep struggling readers from becoming frustrated. For nonreaders, this can be used as a great listening comprehension activity!

* Includes 20 short passages discussing all kinds of fun facts and background about the Pecos Bill tall tale, with 2-4 corresponding critical thinking questions per passage.
- Students can read passages independently (silently or aloud), or passages can be read aloud by the teacher or therapist. For nonreaders, an adult should read the cards and the students can respond.
- Questions require students to use skills ranging from simple recall and making personal connections to deciphering vocabulary in context, drawing conclusions, making inferences, giving opinions, identifying figurative language and justifying responses. A whole range of critical thinking skills can be applied! Although the passages can be used in any order, they are numbered in the recommended sequence.
- Literature circle forms for the “jobs” of “connector, summarizer, and predictor.”
- Character trait writing activity
- Teaching tool/reference tool that explains how tall tales came about as well as characteristics that tall tales have in common

Just print, cut and use! No wasted paper!

Thank you for considering my products!
On this page we’re going to look at the different types of dinosaurs. Dinosaurs came in many different shapes and sizes. To make it easier to understand how dinosaurs lived and evolved, scientists place dinosaurs into various groups – just as they do with all other types of animal. Rather than looking at individual species or genera, such as ‘Tyrannosaurus Rex’ or ‘Iguanodon’, we’re going to be looking at these larger groups of dinosaurs. For each group, you’ll find a list of example dinosaurs. We’ve included some of the main ‘large’ groups of dinosaur, such as Theropods and Ornithischians, and we’ve also listed some famous dinosaur families, such as the fearsome Tyrannosauridae and the herbivorous Hadrosauridae.

- Want to see a list of dinosaurs rather than learning about types of dinosaur? Take a look at this page: List of Dinosaurs with Pictures and Information.
- You can see dinosaurs from specific periods of the Mesozoic Era on the following pages: Triassic Dinosaurs, Jurassic Dinosaurs, Cretaceous Dinosaurs.
- Need a quick reminder about the Triassic, Jurassic and Cretaceous periods? Click here: Dinosaur Periods.
- You’ll find in-depth information on each of the three periods of the Mesozoic Era on the following pages: The Triassic Period, The Jurassic Period, The Cretaceous Period.

Types of Dinosaurs: Introduction

(This section is a brief introduction to how dinosaurs are grouped. Scroll down if you just want to see a list of the different types of dinosaurs!)

Dinosaurs, like all living things, can be grouped together depending on their physical characteristics and how closely they’re related to each other. Biologists call this ‘classification’. Animal groups start off large – for example, ‘reptiles’ – then get smaller and smaller until you are left with ‘genera’, which are groups of very closely related species, and finally the individual species themselves. (Genera is the plural of ‘genus’, a Latin word which means ‘origin’ or ‘type’.)

All living things are organized into groups. One of the biggest types of group is a ‘kingdom’. You may have heard of the ‘animal kingdom’, the group that contains all animals. Dinosaurs, mammals, amphibians and humans are all in the animal kingdom.

- You can learn more about the different groups of living things on this page: Animal Classification.

Most dinosaurs are best known by their genus, rather than by their species name. For example, most people would talk about Iguanodon, which is actually a genus, rather than an individual species of Iguanodon. It’s only paleontologists (and other clever people) who would talk about Iguanodon bernissartensis, which is an individual species of Iguanodon. In fact, most people have only heard of one species of dinosaur – Tyrannosaurus Rex, which is a species of Tyrannosaurus. All of the other famous dinosaur names, such as Spinosaurus, Allosaurus and Velociraptor, are genera.

Science doesn’t stand still: as we discover more about dinosaurs, the way in which different types of dinosaurs are classified is likely to change. On this page we’ll learn about the different types of dinosaurs, and how they are currently grouped.

The Two Main Types Of Dinosaur

There are two main types of dinosaur: Saurischia and Ornithischia. It was the English paleontologist Harry Seeley who first noticed that there were two main types of dinosaur: those whose hips were lizard-like in structure, and those whose hips were bird-like in structure.
He called the lizard-hipped dinosaurs ‘saurischians’, which comes from the Greek for ‘lizard hip joint’. Those with bird-like hips he named ‘ornithischians’, which comes from the Greek for ‘bird hip joint’.

Saurischians Vs Ornithischians

All of the carnivorous (meat-eating) dinosaurs were saurischians, as were many herbivorous (plant-eating) dinosaurs. In general, all ornithischians were herbivores. However, there may have been a few ornithischians that were omnivorous or even carnivorous. Fossil evidence suggests that ornithischians lived in herds.

- Saurischians = either carnivores or herbivores
- Ornithischians = nearly all herbivores (possibly some omnivores / partial carnivores). Probably lived in herds.

Confusingly, the ornithischian, or bird-hipped, dinosaurs were NOT the ancestors of birds. Birds evolved from a group of saurischian (lizard-hipped) dinosaurs called theropods.

Types Of Dinosaurs: Saurischians

In this section, we’re going to look at some well-known types of saurischian dinosaurs. The two main types of saurischian dinosaurs are theropods and sauropods. Theropod dinosaurs were bipedal meat eaters. (‘Bipedal’ means that they walked on two legs.)

Theropods Become Birds

The Cretaceous–Paleogene extinction event wasn’t actually the end for the theropods. One group had already branched off and evolved into birds. You only have to look out of your window to see that birds – all of which are descended from theropod dinosaurs – are very much alive today!

Types of Theropod Dinosaurs

Coelurosaur means ‘hollow tailed lizard’. Coelurosauria is a large group of dinosaurs that contains theropods that were more like birds than the Carnosaurians (a group of dinosaurs which we’ll meet further down the page). The Tyrannosaurids, including T. Rex, were coelurosaurs.

Maniraptora is a branch of bird-like dinosaurs. They first appeared in the Jurassic Period, and are the ancestors of modern-day birds. This group of dinosaurs includes the Dromaeosauridae (raptor) family.

Dromaeosauridae are sometimes known as ‘raptors’. They were small to medium-sized feathered dinosaurs that appeared in the mid-Jurassic period.

Abelisauridae is a family of theropod dinosaurs that lived in Africa, South America and Asia during the Cretaceous period.

Named after Tyrannosaurus (which means ‘tyrant lizard’), Tyrannosauridae is a family of bipedal meat-eaters. Tyrannosaurids are known for their huge skulls, powerful jaws, and short arms. Perhaps the most famous dinosaur of all, Tyrannosaurus Rex, was a member of the Tyrannosauridae family.

The spinosaurids were another family of large, bipedal, meat-eating dinosaurs. Spinosaurids had long, thin, crocodile-like skulls. Some members of this family, such as Spinosaurus and Baryonyx, were specialized fish-eaters. Spinosaurus itself had a large sail on its back. This was held up by spine-like bones, from which it got its name (Spinosaurus means ‘spine lizard’). The Spinosauridae family was named after Spinosaurus, but not all members of the family had a similar sail.

Carnosauria is a group of theropods that includes the families Allosauridae and Carcharodontosauridae. Allosauridae is a family of predatory dinosaurs that lived in the late Jurassic and early Cretaceous periods. Its best-known member is Allosaurus, a top-of-the-food-chain predator of the late Jurassic period. Carcharodontosauridae is a family of dinosaurs that includes some of the largest land carnivores that ever lived. The name comes from the Greek for ‘shark-toothed lizards’.
The sauropods were a group of saurischian dinosaurs. Many sauropods grew to incredible sizes, and the group contains the biggest land animals ever to walk the earth. A typical sauropod had a large barrel-shaped body, a long neck, a small head, and a long, powerful tail. It stood on four tree-trunk-like legs.

Diplodocidae is a family of sauropod dinosaurs. Members of this family typically had very long bodies, but were not as tall as other sauropods.

Titanosaurs were a group of sauropod dinosaurs that appeared in the early Cretaceous and lived right up to the end of the Mesozoic era. The group includes Argentinosaurus, a genus which, although only known from a small selection of bones, was possibly the largest ever land animal.

Types Of Dinosaurs: Ornithischians

As we’ve seen, the Ornithischian (bird-hipped) dinosaurs were one of the two main types of dinosaurs (the other being the lizard-hipped Saurischians). Many Ornithischians had beaks and jaws that were adapted for cutting and chewing plants. Some of the best-known types of Ornithischian dinosaurs are listed below.

Thyreophora (Armored Dinosaurs)

Thyreophora was a branch of ornithischian dinosaurs. Thyreophora means ‘shield bearers’. Another name for this group is ‘armored dinosaurs’. Thyreophorans were heavily armored with thick skin and rows of plates running along their bodies. Many were further protected by spikes and tail clubs. Two well-known groups of Thyreophorans were Stegosauria and Ankylosauria.

Stegosauria was a group of armored dinosaurs that had rows of bony plates running along their backs. Stegosaurus is the best-known Stegosaurian, but several other genera have been discovered.

Ankylosauria was a group of armored dinosaurs that lived throughout the Mesozoic era. They were large, powerful, four-legged animals. All were heavily armored, and some developed tail clubs which may have been used as defensive weapons against predators. Ankylosaurus is the best-known dinosaur in this group.

Ornithopods were a branch of Ornithischian dinosaurs that appeared in the mid-Jurassic Period and lived to the end of the Cretaceous Period. They had three-toed feet, beaks, and an advanced (for a dinosaur) chewing ability.

Hadrosauridae (Duck-Billed Dinosaurs)

Hadrosaurids are also known as ‘duck-billed’ dinosaurs due to their wide mouth parts. Hadrosauridae is a family of ornithopods descended from the Iguanodontians. Many Hadrosaurids were able to walk on both two and four feet; they would use all four feet when grazing, but would run using just their hind legs.

Pachycephalosauria was a type of dinosaur that lived in the Late Cretaceous period. Pachycephalosaurians were bipedal and had thick skulls (the name Pachycephalosauria means ‘thick headed lizards’). Many members of this group had domed skulls, and these often had spikes. Pachycephalosaurians may have fought each other by ramming their heads together – just as deer stags fight to establish dominance today. An example pachycephalosaurian is Stegoceras, a small bipedal dinosaur with a domed head.

Ceratopsia means ‘horned faces’. This group of dinosaurs became common in the Cretaceous period and was found in North America, Europe and Asia. Like all Ornithischians, Ceratopsians were herbivorous. They had beak-like mouth parts. They ranged in size from animals just 1 meter (3.3 ft.) long to 9-tonne giants measuring 9 meters (30 ft.). The best-known Ceratopsian was Triceratops, a large, powerful quadruped that had three distinctive spikes on its face and a bony frill at the back of its head.
Types Of Dinosaurs: Conclusion

When learning about dinosaurs, it’s important to remember that they were on Earth for tens of millions of years longer than humans have been. This gave the dinosaurs plenty of time to branch off and evolve into different types of dinosaur. Dinosaur species had been appearing – and becoming extinct – for millions of years before the mass extinction event at the end of the Cretaceous period that killed off the last non-avian dinosaurs.

After watching films such as Jurassic Park it’s easy to forget that not all types of dinosaur lived together. Even dinosaurs that lived in the same period may have been separated by millions of years. For example, 57 million (or more) years passed between the disappearance of Baryonyx walkeri and the appearance of Tyrannosaurus rex, yet both species lived during the Cretaceous Period.

Like all living animals, dinosaurs can be classified into different groups. However, our entire knowledge of dinosaurs stems from fossilized remains that have been in the ground for an almost unimaginable length of time. This is why, as we unearth more and more fossils, our understanding of dinosaurs – and the relationship between different types of dinosaurs – is continuously changing.

Now: Become A Dinosaur Expert!

We hope that you have enjoyed learning all about the different types of dinosaur. You can find plenty more dinosaur information on the following pages:
Inquiry and the International Baccalaureate Primary Years Programme

Inquiry is the primary teaching methodology of the IB Primary Years Programme. An inquiry-based approach enables learners to "draw forth" and to become inquirers and lifelong learners. Questions are at the heart of the inquiry process. Inquiry comes from exploring and being interested in the world.

In an inquiry classroom, the curriculum is integrated and children are encouraged and given opportunities to question, explore, practice, manipulate, respond, and be engaged in learning. Inquiry classrooms are often lively and loud. Students are engaged in conversations, research, and projects. They are often collaborating to produce an end product that shows their understanding.

An important element of learning is connecting to and building from one’s life experiences. This connection is essential to learning. Allowing students to explore, make their own connections, and giving them time to share those connections and hear each other’s voices is fundamental.

The main goals, in any classroom, are to help students learn and to meet the needs of each student. Use of the inquiry process and inquiry teaching philosophy enables the student and the teacher to explore, develop meaning, and become active constructors of their own knowledge (i.e., their own schemas) through experiences that encourage assimilation and accommodation.

At King-Murphy, teachers are using Thinking Routines from Project Zero at Harvard University to engage students in digging deeper into their thinking, questioning, and learning, allowing inquiry to become front and center in the classroom. Teachers are also implementing forms of Genius Hour to allow students independent inquiry into a passion of their choice. The use of Kath Murdoch’s inquiry cycle has given teachers a framework to guide students through the inquiry process.

What does inquiry look like?
- Exploring, wondering and questioning
- Experimenting and playing with possibilities
- Making connections between previous learning and current learning
- Making predictions and acting purposefully to see what happens
- Collecting data and reporting findings
- Clarifying existing ideas and reappraising perceptions of events
- Deepening understanding through the application of a concept
- Making and testing theories
- Researching and seeking information
- Taking and defending a position
- Solving problems in a variety of ways

Click here to learn more about Visible Thinking from Project Zero.
Click here to learn more about inquiry from John Burrell: http://www.morecuriousminds.com/inq_strat.htm
Click here to learn more about inquiry from Kath Murdoch.
Researchers at Swansea University have pioneered a technique to produce a molecule used to make plastics from converted carbon dioxide (CO2). The discovery by a team from the Energy Safety Research Institute (ESRI) at the university is a major breakthrough, which could contribute to ‘offsetting global carbon emissions’.

“Carbon dioxide is responsible for much of the damage caused to our environment. Considerable research focuses on capturing and storing harmful carbon dioxide emissions. But an alternative to expensive long-term storage is to use the captured CO2 as a resource to make useful materials,” said Dr Enrico Andreoli, who heads the CO2 utilisation group at ESRI.

“That’s why at Swansea we have converted waste carbon dioxide into a molecule called ethylene. Ethylene is one of the most widely used molecules in the chemical industry and is the starting material in the manufacture of detergents, synthetic lubricants, and the vast majority of plastics like polyethylene, polystyrene, and polyvinyl chloride essential to modern society.”

The team is now looking for industrial partners to commercialise the innovation.
Flag of Canada

The establishment of the Canadian federation in 1867 was not accompanied by the creation of a special flag for the country. The imperial Union Jack and other British flags were considered sufficient, although a coat of arms (in the form of a heraldic shield) was granted by Queen Victoria in 1868. The Canadian shield was composed of the arms of the four original provinces—Ontario, Quebec, New Brunswick, and Nova Scotia. In 1892 this shield became a badge on the British Red Ensign, which served as a special civil ensign (later called the Canadian Red Ensign) for Canadian vessels. On land, that defaced ensign was used, without authorization, as an unofficial national flag combining Canadian patriotism and loyalty to Britain. Perhaps in imitation of the stars added to the United States flag whenever a new state joined the Union, Canadians routinely added official provincial shields to the arms of Canada. Flags with those shields were often decorated as well with the imperial crown, a wreath of maple leaves, and/or a beaver. The Union Jack continued to fly on land.

A major change in symbols took place in 1921, when Canada was granted a distinctive new coat of arms; it quartered the symbols of England, Scotland, Ireland, and France with three green maple leaves on a silver background. That shield replaced the 1868 original in Canadian ensigns three years later. In 1957 a revised artistic version incorporated red maple leaves instead of green “to show the maturity of the country.”

Agitation for a distinctive Canadian flag increased following World War II. While the Canadian Red Ensign was recognized for use on government buildings and as a national flag abroad, many felt that it did not properly identify the distinctive local culture and traditions. Heated debate took place in 1964 following the promise of Prime Minister Lester B. Pearson that Canada would acquire its own national flag prior to the centennial of confederation in 1967. Months of public and parliamentary debate resulted in approval (December 1964) of the new Maple Leaf Flag, which became official by royal proclamation on February 15, 1965, and is now broadly supported by the Canadian population.

The maple leaf had been a national symbol since at least 1868, and its red colour has been described as a symbol of Canadian sacrifice during World War I. Pearson’s original flag proposal showed three red maple leaves on a white field with narrow blue vertical stripes at either end. Several individuals have been credited with suggestions that resulted in the final design, which broadened the stripes and changed them to red to emphasize the national colours (red and white). A single maple leaf gave a distinctive and easily recognizable central symbol.