This article focuses on the morphological dimension of urban design, that is, the layout and configuration of urban form and space, with reference to traditional and modernist systems of urban space. Morphology helps urban designers understand local patterns of development and processes of change. Several key elements describe settlements: Conzen considered four of them, namely land uses, building structures, plot pattern and street pattern.
In contrast to the other key elements, land uses are relatively temporary. Changing uses lead to redevelopment, the creation of new buildings and changes in the street pattern.
Building structures are, together with land uses, the least resilient elements. Plots often show a recognisable progression or cycle of building development.
Figure-ground diagrams show the difference between traditional and modernist urban space. The important point is that in the traditional pattern buildings define urban spaces, with religious or major public buildings carrying particular civic significance, whereas in the modernist pattern buildings stand as separate, freestanding pavilions within a more generalised kind of space and a coarsely meshed road grid.
Symbolic and financial buildings tend to last longer than others for a variety of reasons; other buildings survive only if they are able to adapt to new or changing uses.
The buildings on plots change more rapidly than plot patterns. Cadastral units are typically subdivided or platted into plots or lots. Plots are often amalgamated, but more rarely subdivided.
The cadastral pattern is the layout of urban blocks and, between them, the public space/movement channels or public network. Here the term ‘palimpsest’ is an important metaphor for processes of change, where current uses overwrite, but do not completely erase, the marks of prior use. Also important for urban design quality is permeability, which is established by the cadastral pattern; it can be used as a measure of the opportunity for movement and of the related accessibility. Permeability has two measurable components, visual and physical: visual permeability refers to the ability to see routes through an environment, while physical permeability refers to the ability to move through it. The size of the blocks fronting the street determines the degree of permeability; smaller blocks, for example, increase visual permeability. Another point to consider is that urban grids become deformed over the course of time. The urban grid’s structure is the most powerful single determinant of urban movement.
The cadastral pattern establishes an urban area’s public network and is a key element in the broader concept of the capital web. The capital web comprises both public and private property and accommodates the overlapping realms of movement space and social space.
The traditional understanding relates social life to the physical environment: in the traditional pattern of urban space, building structures compose well-defined streets and squares. In the modernist pattern of urban space, by contrast, the building structure becomes an object standing in undefined space.
The word “science” scares some people away. The idea of doing your own experiments may sound complicated, dangerous, or intimidating. But exposing your kids to some basic scientific principles through simple experiments can help them discover an exciting new world, as well as help develop important critical-thinking skills.
As a parent of two girls, I am also aware of the large gap between boys’ and girls’ interest in the sciences. Although I am not trying to actively push or force them in any one direction, I believe that at the very least, exposure to different subjects is beneficial. Introducing some basic ideas about physics and chemistry to your kids at a relatively young age can help these concepts seem less intimidating later.
You can probably do most of these projects with materials you already have lying around your home, or at worst, you can pick up the things you need for a few dollars at the supermarket or hardware store.
Baking Soda and Vinegar
This is a simple way to give your kids some basic exposure to chemistry with ingredients found in almost every home. The chemical reaction between baking soda and vinegar produces an impressive display of foamy, carbon dioxide bubbles. Add a drop of food color to your vinegar for added effect!
You don’t have to bore your kids with details, but the point here is that sometimes when you put A and B together, they can change form into C (and sometimes D!). What happens when you mix baking soda and water? How about baking soda and lemon juice?
All you need for this simple project is a battery, a nail and some wire. Coil the wire along the entire length of the nail and attach the wire ends to the battery. You’ll be able to pick up paper clips and other small objects with the nail.
Pick something up with the magnet, then show your child that when you remove the wire from the battery, the object drops from the nail. Doing this exercise gives children a basic idea of how batteries work and how current flows through a circuit.
Pour a glass of your child’s favorite drink. Place two straws in the glass and have your child take a sip through both straws. Now remove one of the straw ends so it is outside the glass. Try to take a sip. What happens? Why?
Now have them place their finger over the second straw and try again. They will be able to drink through the straw. Why? This easy experiment introduces the idea of air pressure and vacuums to young minds.
Put a Cork in It
Get a tall glass and fill it halfway with water. Float an old wine cork or rubber stopper in the glass. The cork will always float to a side and appear to stick to the glass. Now remove the cork and fill up the glass so it is completely full. Float the cork again, and you will notice it now floats in the middle.
This is due to the property of water surface tension. It is why water skipping bugs can “walk” on water, as well as why you can overfill a glass this way without the water spilling over the side. The cork seeks out the highest point, which is the middle of the water “bubble”.
Kids and sugar are a match made in heaven, so an activity featuring both can’t miss. For this simple recipe, you want to dissolve between 2 and 3 parts sugar into 1 part water in a pan over a stovetop. Stir the sugar into the water until it dissolves completely. At this stage, you may choose to add some food color as well. Explain to your child that you have created a super-saturated mixture of sugar and water.
Let the mixture cool, then pour into a glass, shallow pan or other suitable container. Place clean lengths of string or wooden bamboo skewers into the solution and secure them so they do not fall entirely into the solution. After a few days, you will start to see sugar crystals forming on your sticks or strings. About a week later, your rock candy should be ready to eat!
Of course, there are a million other science experiment possibilities and sometimes the best approach to a “teaching moment” is to keep things casual. Some kids stop paying attention when they think you’re launching into full-on teacher mode. For example, rub a balloon on your child’s hair and stick it to the wall; it’s so surprising your child won’t realize they’re learning about static electricity until it’s too late.
Or if you can’t seem to get your child’s attention, walk around on the carpet scuffing your shoes and then give them a shock!
Helping kids to understand the hows and whys of simple actions and reactions around them can be a gateway to further exploration and conversation.
You’ve likely heard that scientists at the Large Hadron Collider (LHC) have found the Higgs Boson, but you may be wondering what exactly it is. The simplest way of putting it is that it gives stuff mass. But how? That’s what I’m going to attempt to explain here. (And no, it won’t be in Comic Sans)
To understand how the Higgs boson gives particles mass, you must first understand what mass is. A particle without mass (such as the photon; the light particle) travels at the speed of light, 186,282.397 miles per second (299,792,458 meters per second), constantly. It is physically impossible for a massless particle to travel at a different speed. Particles that have mass have the ability to change speed or direction, but can’t go at or above that speed. The more mass a particle has, the tougher it is for the particle to change speed and direction.
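For readers who want the relation behind these statements, the standard energy-momentum formula of special relativity makes the point compactly (this is general physics background, not something specific to the Higgs discussion): E^2 = (pc)^2 + (mc^2)^2, and the particle's speed is v = pc^2/E. For a massless particle E = pc, so v is always exactly c; for a massive particle E is greater than pc, so v is always below c, and the larger the mass, the more energy a given change of speed requires.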
The way particles gain mass is by how they travel through the Higgs field (notably different from the Higgs boson). The Higgs field is everywhere. Spacetime (what used to be referred to as ‘ether’), the ‘fabric of the universe’, has the Higgs field in it, everywhere. When a particle travels through this Higgs field, it reacts with the field and effectively bounces off of it. While, in between bounces, the particle is traveling at the speed of light, it appears to move much slower, or even not at all. Some particles (such as the top quark, the elementary particle with the most mass of all) react with it a lot, but others (such as the photon, a massless elementary particle) don’t react with it at all.
That’s what the Higgs field is, but what’s the Higgs boson? The Higgs boson is merely an excitation of the Higgs field; the detectable part of it. In fact, the Higgs boson reacts with the Higgs field itself, allowing it to have mass. Indeed, the Higgs boson reacts with the Higgs field much more strongly than most other particles do, so the Higgs boson has a very large mass when compared to other particles.
That’s what the Higgs boson is, but how do you find a new particle? The Higgs boson, when created by protons smashing into each other, is very short lived. It instantly decays into other particles. There are a lot of other particles that it can decay into, depending on the circumstances. Some are more likely than others, but they are all possible nonetheless. Scientists at the LHC didn’t actually see the Higgs boson, but rather an elevated number of the particles it can decay into, pointing to the possibility that the Higgs boson exists. While millions of collisions had to occur to reasonably deduce that there is a Higgs boson, scientists believe that it’s there. From the data that they have received, scientists can infer that there is an unidentified boson with a mass of 125 billion electron volts (the mass of 133 protons). This matches the predicted mass of the Higgs boson. So while scientists aren’t positive that the Higgs boson exists, they’re certain beyond reasonable doubt.
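As a quick back-of-the-envelope check of the “133 protons” figure quoted above (the proton rest mass of roughly 938 million electron volts is a standard value, not a number taken from the LHC result itself):

```python
higgs_mass_gev = 125.0        # reported boson mass, in GeV (billions of electron volts)
proton_mass_gev = 0.938272    # proton rest mass, in GeV
print(higgs_mass_gev / proton_mass_gev)  # ~133.2, i.e. about 133 proton masses
```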
Scientists will postpone the previously scheduled renovations to the Large Hadron Collider to further investigate the Higgs boson, and after that look into dark energy (the proposed cause of the universe expanding at an accelerating rate, believed to make up 74% of the universe) and dark matter (believed to be matter that does not interact with the electromagnetic force (light) and theoretically composing 80% of the 26% of the universe that is matter).
“The Trial of Adolf Eichmann” Classroom Activities were designed for students from the 7th grade to college level by Gary Grobman, who also wrote a Pennsylvania Teacher’s Guide to the Holocaust. The first responsibility of the teacher is to teach the event. The actual history must be the foundation for any critical understanding and pursuit of the issues related to the Holocaust. Understanding how these issues still apply today helps students remember that the Holocaust is not an isolated event. (An Eichmann Timeline is also available in the “In His Own Words” section of this site.)
Background & History; WWII, the Holocaust, and Eichmann; From Capture to Trial
Students will learn about–
Adolf Eichmann was the principal logistical officer of the Nazis’ mass murder of 6,000,000 Jews during World War II. After the war, he escaped a prisoner of war camp in Germany, and eventually made his way to Argentina. In 1960, agents of the Israeli government captured him and transported him to Israel, where he was put on trial for his Nazi war crimes. This trial, the first ever televised, was for many people their first education about the Holocaust. Eichmann freely admitted to most of the accusations concerning his participation in a coordinated conspiracy which sent millions of Jews to their deaths, but claimed that he was powerless to resist orders from his superiors. The 16-week trial featured the testimony of scores of survivors whose lives were shattered. Eichmann was found guilty on all 15 counts of the criminal indictment against him. He was hanged, his body was cremated, and his ashes were scattered in the Mediterranean Sea.
by Gary Grobman
copyright © 1997 Gary M. Grobman
Read and write a 2-4 page interpretative essay on the historical significance of the Dominguez Escalante Expedition and the journal's impact on Utah's History.
- Use specific page number references and brief quotes from the journal to substantiate your points. The Dominguez Escalante Journal is a primary document that gives us a 1776 window to view Utah through.
- What do we learn? How was the expedition significant then? How did it impact Utah's history later? Why is it valuable presently?
- Use the above questions to assist you in defining how to think and write about this topic.
Points : 50 possible
In lieu of exams, below are 10 essay questions that you need to prepare good, insightful answers for. You may use any of the reading information from your assignments or any other source, but most of the information will come from Utah: The Right Place.
In writing an essay answer, the recommended method to ensure the most points is to
- turn the question into the topic heading/thesis of your answer. For example, if the question read: "What caused the Civil War?" your answer should start: "The Civil War was caused by ..."
- then go to work on answering the question. After years of reading students' essay answers, I find that many times they do not answer the question that was asked, or their answer is so general and/or vague that they do not earn as good a score as they might have. Work specific details and examples from history into your essay answers, and add interpretations and conclusions.
Remember that history is not merely remembering the past but interpreting the past. To get full points you must interpret and draw conclusions in addition to showing understanding of the material.
Points : Each essay is worth 25 points for a total of 250 points. Each answer should be 1-2 typed pages in length. The essays should be turned in as one assignment, NOT one at a time.
Essay Questions
- How has/does the geography of the state impacted its history and economy?
- Who were the early (prehistoric) peoples of Utah and what were their lives like? What changes did the acquisition of the horse bring for the Ute People?
- Outline who the significant mountain men in Utah's History were and what impact mountain men had upon the region.
- Why is an understanding of Mormon History and some of their doctrinal beliefs important and justified for Utah History? Outline the role that polygamy had in Utah, include the various anti-polygamy bills and their effect on the territory.
- Why was the army sent to Utah in 1857? Outline the events between Buchanan's sending the army and the establishment of Camp Floyd. What impact did the coming of the Army have on Utah? How and why did the Mountain Meadows Massacre occur?
- What did the Mormons hope for with the State of Deseret, both in political freedom and in boundaries? Even though Utah in its territorial era had a large Mormon majority, what non-Mormon interests kept it from Statehood for thirty years? There were at least three distinctive Mormon practices that Washington politicians felt must become 'Americanized' before Utah was ready to be made a state. What were they, and how were these concerns addressed?
- Compare and contrast the settlement of Utah by the Mormons with the settlement of other Western states. With the coming of the railroad to Utah in 1869, Brigham Young did his best to change the economic picture of the territory. What steps did he take, and what was his rationale in doing so?
- One biographer of Brigham Young describes him as an "American Moses," the Saints called him "Brother Brigham," upon hearing of his many wives great numbers of people in the U.S. thought of him as a harem master, and the federal government thought of him as a "pain-in-the-#@%." How should history view Brigham Young? Defend your answer.
- What changes occurred in Utah as a result of the Depression, and how did World War II impact Utah?
- What are at least three present issues in Utah that have historical roots, and why should the politicians, society, state and federal agencies, etc. think about historical factors in reaching their decisions? (Use specific examples in your answer.) How does the Federal Government's ownership of 70% of the land in Utah impact the state today? Outline the historic uses in the urban versus rural arguments that are rocking the state concerning public land use.
The film Utah: The Struggle for Statehood, parts 1 - 4, should be watched. An overview of each of the two sections of the film, with a critique of 1 - 2 pages, should be completed. Treat the film as a document and discuss purpose, intent, bias, audience, etc. Do not merely retell the film.
This film assignment is a floating assignment. The suggested due date for this assignment is at the end of week 5, however, you may choose to complete the assignment anytime before the end of week 15.
Utah: The Struggle for Statehood, video, 1995, KUED. It is available at most libraries in Utah and available for sale at University of Utah Press, 101 USB, Salt Lake City, UT 84112, (800) 773-6672.
- Cost is $34.95 plus $4.95 Shipping and handling. Return time is approximately 1 - 2 weeks.
- They will accept Visa and MasterCard.
- You may also rent a copy of the videos from the Independent Study office for $25 and $10 will be returned to you upon return of the video.
Points : 50 possible
Mapping and Geography Assignment
Trails mapping, historical sites map(s), physiography map(s). Prepare maps using any medium, e.g., hand-drawn, copied and highlighted, downloaded from the Internet, etc. Make as many maps as you need.
Points : 100 possible for all maps; 33 points each.
Trails and Boundaries of Deseret Mapping
- The Mormon Trail from Independence to The Great Salt Lake
- The Old Spanish Trail
- Pony Express Trail
- The State of Deseret (proposed state, not what became Utah)
Historic Locations Map(s) (make as many maps as you need)
- Fort Bridger
- Fort Uintah
- Fort Douglas
- Cove Fort
- Camp Floyd
- Mountain Meadows
- Hole in the Wall
Physiography Map(s) (make as many maps as you need)
- The Great Basin/Colorado Plateau Province Hinge Line
- The Uinta Mountains
- Wasatch Mountains
- Henry Mountains
- Oquirrh Mountains
- San Juan
- The Great Salt Lake
- Lake Bonneville
- Flaming Gorge
- Salt Lake
- San Pete
- National Parks
- Bryce Canyon
- Capitol Reef
Reading Selections Assignment
In addition to your texts there are four reading selections written by the instructor included in this syllabus: Read and write a 1-2 page interpretative commentary on each of the readings. How does this information fit with your previously held ideas and opinions on this topic? Why is it significant to Utah's History? What did you learn?
Points : 100 possible, 25 each.
- Antoine Robidoux: Buckskin Entrepreneur (PDF, DOC)
- The Common Touch (PDF, DOC)
- Ute Lands and People (PDF, DOC)
- Whose Land (PDF, DOC). In the section 'Whose Land,' please understand that you may not have known anything about the specifics of jurisdiction or some of the other issues; however, the concepts of land use, urban versus rural thinking, environmentalism, etc. are of particular significance to all Western states and regions.
Every student will prepare a research paper on a topic of their choosing, as long as it covers some aspect of Utah's History. These papers should be typed, double spaced with regular margins, and 10 - 15 pages in text length. Include a typed title page and bibliography of sources. Please, no slick cover folders; the instructor greatly prefers a typed title page and the paper stapled together.
Primary history documents are sources from a first-person perspective and include letters, diaries, journals, surveys, statistics, government surveys, etc. Period newspapers (newspapers from the time of the event) are quasi-primary documents. Secondary documents include studies of historic topics in books and journals written by historians. Your research paper should contain a mixture of primary and secondary sources; if your topic is appropriate, oral interviews are considered primary sources and are acceptable.
A minimum of 10 different sources, with a minimum of two primary documents, is required. Include a bibliography or works cited page.
Good writing is expected for all the assignments, and the format for history should follow Kate L. Turabian's book, A Manual for Writers of Term Papers, Theses, and Dissertations (University of Chicago Press, paperback 6th edition). For annotation of sources, follow the Turabian style manual, 6th edition. For examples of annotation, see Antoine Robidoux: Buckskin Entrepreneur (PDF).
Include an introduction with a clearly stated thesis. The body of your paper comes next and should include the narrative of events and your evidence and interpretations of arguments. Your arguments should be based on evidence, not merely your opinion. One of the main points of college writing is forming informed opinions based on researched evidence and then analyzing that evidence. The final part of your paper is the conclusion. This is not the place to introduce new evidence or arguments but to sum up those already outlined in the body of your paper. Keep in mind this is formal writing. Avoid contractions, first and second person pronouns, colloquial expressions and slang, etc.
Points : 100 possible
A percussion instrument can be any object which produces a sound by being struck, shaken, rubbed, or scraped with an implement, or by any other action which sets the object into vibration. The term usually applies to an object used in a rhythmic context with musical intent.
The word, "percussion," has evolved from the Latin terms: "Percussio" (which translates as "to beat, strike" in the musical sense, rather than the violent action), and "percussus" (which is a noun meaning "a beating"). As a noun in contemporary English, it is described as "the collision of two bodies to produce a sound." The usage of the term is not unique to music but has application in medicine and weaponry, as in "percussion cap," but all known and common uses of the word, "percussion," appear to share a similar lineage beginning with the original Latin: "Percussus." In a musical context, the term "percussion instruments" may have been coined originally to describe a family of instruments including drums, rattles, metal plates, or wooden blocks which musicians would beat or strike (as in a collision) to produce sound. Percussion imitates the repetition of the human heartbeat. It is the most primal of all forms of expression. From aboriginal times, every civilization has used the drum to communicate.
Anthropologists and historians often explain that percussion instruments were the first musical devices ever created. The first musical instrument used by humans was the voice, but percussion instruments such as hands and feet, then sticks, rocks, and logs were the next steps in the evolution of music.
Percussion instruments can be, and indeed are, classified by various criteria depending on their construction, ethnic origin, function within musical theory and orchestration, or their relative prevalence in common knowledge. It is not sufficient to describe percussion instruments as being either "pitched" or "unpitched," which is often a tendency. It may be more informative to describe percussion instruments in regards to one or more of the following four paradigms:
Many texts, including Teaching Percussion by Gary Cook of the University of Arizona, begin by studying the physical characteristics of instruments and the methods by which they produce sound. This is perhaps the most scientifically pleasing assignment of nomenclature, whereas the other paradigms are more dependent on historical or social circumstances. Based on observation and experiment, one can determine exactly how an instrument produces sound and then assign the instrument to one of the following five categories:
"Idiophones produce sound when their bodes are caused to vibrate."
Examples of idiophones:
Most objects commonly known as "drums" are membranophones. "Membranophones produce sound when the membrane or head is put into motion."
Examples of membranophones:
Most instruments known as "chordophones" are defined as string instruments, but such examples are also, arguably, percussion instruments.
Most instruments known as "aerophones" are defined as wind instruments, such as a saxophone, whereby sound is produced by a person or thing blowing air through the object. Yet, the following instruments, if played at all in a musical context, are performed by percussionists in an ensemble. Examples of aerophones:
Electrophones are also percussion instruments. In the strictest sense, all electrophones require a loudspeaker (an idiophone or some other means to push air and create sound waves). This, if for no other argument, is sufficient to assign electrophones to the percussion family. Moreover, many composers have used the following instruments which are most often performed by percussionists in an ensemble: Examples of electrophones:
It is in this paradigm that it is useful to define percussion instruments as either having definite pitch or indefinite pitch. For example, some instruments such as the marimba and timpani produce an obvious fundamental pitch and can therefore play a melody and serve harmonic functions in music while other instruments such as crash cymbals and snare drums produce sounds with such complex overtones and a wide range of prominent frequencies that no pitch is discernible.
Instruments in this group are sometimes referred to as "pitched" or "tuned percussion."
Examples of percussion instruments with definite pitch:
Instruments in this group are sometimes referred to as "non-pitched," "unpitched," or "untuned." This phenomenon occurs when the resultant sound of the instrument contains complex frequencies through which no discernible pitch can be heard.
Examples of percussion instruments with indefinite pitch:
Although it is difficult to define what is "common knowledge," there are instruments in use by percussionists and composers in contemporary music which are certainly not considered by most to be musical instruments of any kind. Therefore, it is worthwhile to make a distinction between instruments based on their acceptance or consideration by a general audience. For example, most people would not consider an anvil, a brake drum (the circular hub on modern vehicles which houses the brakes), or a steel pan made from a fifty-five gallon oil barrel to be a musical instrument, yet these objects are used regularly by composers and percussionists of modern music.
One might assign various percussion instruments to one of the following categories:
(Sometimes referred to as "found" instruments)
John Cage, Harry Partch, Edgard Varèse, all of whom are notable composers, have created pieces of music using unconventional instruments. Beginning in the early 20th century, perhaps with Ionisation by Edgard Varèse which used air-raid sirens (among other things), composers began to require percussionists to invent or "find" objects to produce the desired sounds and textures. By the late twentieth century, such instruments had become common in modern percussion ensemble music and popular productions such as the off-Broadway show, Stomp.
It is not uncommon to discuss percussion instruments in relation to their cultural origin. This has led to a dualism between instruments which are considered "common" or "modern" and those which have a significant history and/or significant purpose within a geographic region or among a specific demographic of the world's population.
This category may contain instruments which could have a special significance among a specific ethnic group or geographic region. Such examples are the following:
This category may contain instruments which are widely available throughout the world and have experienced popularization among a variety of world populations. Such examples are the following:
Percussion instrumentation is commonly referred to as "the backbone" or "the heartbeat" of a musical ensemble, often working in close collaboration with bass instruments, when present. In jazz and other popular music ensembles, the bassist and the drummer are often referred to as the "rhythm section." Most classical pieces written for full orchestra since the time of Haydn and Mozart are orchestrated to place emphasis on the strings, woodwinds, and brass instruments. Often, at least one pair of timpani is included, though they rarely play continuously but serve to provide additional accents when needed. In the eighteenth and nineteenth centuries, other percussion instruments (like the triangle or cymbals) were used, again relatively sparingly in general. The use of percussion instruments became more frequent in twentieth century classical music.
In almost every style of music, percussion instruments play a pivotal role. In military marching bands and pipes and drums, it is the beat of the bass drum that keeps the soldiers in step and at a regular speed, and it is the snare drum that provides that crisp, decisive air to the tune of a regiment. In classic jazz, one almost immediately thinks of the distinctive rhythm of the "hi-hats" or the ride cymbal when the word "swing" is spoken. In more recent popular music culture, it is almost impossible to name three or four rock, hip-hop, rap, funk, or even soul charts or songs that do not have some type of percussive beat keeping the tune in time.
Because of the diversity of percussive instruments, it is not uncommon to find large musical ensembles composed entirely of percussion. Rhythm, melody and harmony are usually present in these musical groups, and they are quite a sight to see in a live performance.
Music for pitched percussion instruments can be notated on a musical staff with the same treble and bass clefs used by many non-percussive instruments. Music for percussive instruments without a definite pitch can be notated with a specialist rhythm or percussion clef. More often a treble clef (or sometimes a bass clef) is substituted for a rhythm clef.
The general term for a musician who performs on percussion instruments is a "percussionist" but the terms listed below are often used to describe a person's specialties:
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which may reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.
Mumps: An acute (sudden, short-lived) viral illness that usually presents with inflammation of the salivary glands, particularly the parotid glands. A child with mumps often looks like a chipmunk with a full mouth due to the swelling of the parotids (the salivary glands near the ears).
Mumps can also cause inflammation of other tissues, most frequently the covering and substance of the central nervous system (meningoencephalitis), the pancreas (pancreatitis) and, after adolescence, the ovary (oophoritis) and the testis (orchitis). The testis is particularly susceptible to damage from mumps; the damage can lead to infertility.
Together with the likes of measles and chickenpox, mumps was once considered one of the inevitable infectious diseases of childhood. Since a mumps vaccine became available in 1967, the incidence of mumps has declined in the U.S., but there are still many underimmunized populations (for example, more blacks than whites have not yet been immunized).
Treatment is with rest and non-aspirin pain relievers to ease pain in swollen areas. Rarely, mumps can cause a form of meningitis, in which case hospitalization may be needed. Prevention is by immunization with the vaccine.
The origin of the word mumps is not clear. It may have to do with the English usage, now obsolete, of "mump" to mean a grimace. More probably, mumps comes from a colder climate, Iceland, where mumpa meant to fill the mouth too full.
December 10th, 2018
The collapse of marine species during the Great Dying, the largest extinction in the Earth’s history, is likely to have been caused by global warming.
Up to 96 per cent of all marine species and 70 per cent of terrestrial species were wiped out during the Permian extinction over 250 million years ago.
A new study from the University of Washington and Stanford University indicates that the mass extinction of marine animals was caused by increased marine temperatures and reduced oxygen availability.
As temperatures rose and the metabolism of marine animals sped up, the warmer waters could not hold enough oxygen for them to survive, the report found.
As oceans warm, marine animals’ metabolism speeds up, meaning they require more oxygen, while warmer water holds less oxygen.
Worryingly, the scientists behind the study have warned that “we would be wise to take note” as similar environmental changes are currently underway as a result of climate change.
“These results highlight the future extinction risk arising from a depletion of the ocean’s aerobic capacity that is already underway,” the study concludes.
The tolerance of modern animals to high temperature and low oxygen is believed to be similar to Permian animals as they have evolved under similar environmental conditions, the study found.
Ocean temperatures have continued to rise over the past decade, with 2017 listed as the hottest year in recorded history in a recent study from the Institute of Atmospheric Physics of the Chinese Academy of Sciences.
The past five years have been the warmest for our oceans, the study states, with the rise in temperature largely a result of greenhouse gas emissions from human activities.
According to the University of Washington’s Justin Penn, under a business-as-usual emissions model, warming in the upper ocean by 2100 will be close to 20 per cent of warming levels seen in the late Permian period.
“By the year 2300, it will reach between 35 and 50 per cent [under a business-as-usual model],” Penn warned, thus highlighting the “potential for a mass extinction arising from a similar mechanism under anthropogenic climate change”.
The end of the Permian period for terrestrial plants and animals was brought about as a result of a series of massive volcanic eruptions.
It has remained hotly debated, however, as to how the thriving and diverse marine ecosystem collapsed during this period when the land masses were combined into the Pangaea supercontinent.
The researchers said that other changes, such as acidification or shifts in the productivity of photosynthetic organisms, likely acted as additional causes.
When you first start using the ukulele, it can be very overwhelming to look at all the different chords and their shapes. Chord diagrams are actually quite intuitive and easy to understand, but some symbols may require some explanation. Once you know how the chord diagrams “work,” you will be happy to see them on the songs page, as they will really help you speed up learning a song.
The UkuTabs chord diagram is easy to understand; I have kept it as simple as possible. Imagine you are looking at the fretboard of a ukulele held upright in front of you: each string is represented as a vertical line (the G, C, E and A strings from left to right) and each fret as a horizontal line.
All the different chord shapes are represented by dots. Each dot represents the position of a finger. In the example on the left, you can see the G chord. So where do you put your fingers? You play the C string at the second fret, the E string at the third fret, and the A string at the second fret. The small circle (o) at the top of the G string means that you play that string open (i.e., don’t put your finger on it).
Sometimes you will see numbers on both sides of the chart. These are fret numbers, used when a chord is played higher up the fretboard (if no numbers are displayed, frets 1 to 4 are implied). In the example on the left, the A string is played at the sixth fret, the C string at the fifth fret, and the G string at the third fret.
When you see a small “x” at the top of a string, it means you should not play that string at all. In other words, you have to mute it. You can do this by lightly resting one or more fingers on the string without pressing it down. An example is shown on the left (the same example as in “High Up The Fretboard”).
Sometimes in the notes of a song, or when you talk to people about chords, you won’t see chord diagrams, only four simple numbers. For example, when you see “0232”, it represents a G chord. The four numbers refer to the four strings of the ukulele (in the order G C E A). This is useful for quickly telling someone how to play a chord. The second example in this guide would be written 35x6.
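As a small illustration of this numeric shorthand, here is a sketch (a hypothetical helper, not part of UkuTabs) that maps each character of a chord string to its ukulele string:

```python
def parse_chord(shorthand):
    """Map a GCEA chord shorthand such as '0232' or '35x6' to per-string frets."""
    strings = ["G", "C", "E", "A"]
    result = {}
    for string, ch in zip(strings, shorthand):
        if ch.lower() == "x":
            result[string] = None      # muted string: do not play it
        else:
            result[string] = int(ch)   # 0 means open string, otherwise the fret to press
    return result

print(parse_chord("0232"))   # G major: {'G': 0, 'C': 2, 'E': 3, 'A': 2}
print(parse_chord("35x6"))   # the second example above, with the E string muted
```

Note that this single-character form only covers frets 0 to 9; positions higher up the fretboard are better described with the fret numbers shown on the side of a diagram.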
Ukulele Chord Charts
Here you can find the UkuChords chord charts for soprano, concert and tenor ukuleles. They include all the major chord diagrams, and you can download print-friendly PDF or “poster” charts.
Edited by Bonnie
Height Musical Instrument Co., Ltd.
A flourishing microbial ecosystem has been found in an ice-covered Antarctic lake by Brent Christner, John Priscu, and their colleagues, raising the chances of finding life on icy moons such as Europa and Enceladus.
The team used hot water to drill a 60-centimeter-diameter hole through the overlying ice of Lake Whillans, a 60-square-kilometer subglacial lake in western Antarctica. The main problem with searching for life in ice-covered lakes is to avoid contamination from the surface during drilling, and the team took extraordinary measures to ensure sterilization by using filters, heating, ultraviolet light and hydrogen peroxide.
They found an aquatic microbial ecosystem that was surprisingly diverse, including nitrogen bacteria and microorganisms that metabolize methane. Lake Whillans lies beneath half a mile (about 800 m) of ice on the lower portion of the Whillans Ice Stream in West Antarctica, and is part of an extensive river network underneath the ice.
The new finding shows that a rich and diverse ecosystem of bacteria and single-celled organisms is possible in a pitch-dark environment without any apparent contribution from photosynthesis.
We can envision a similar situation on Jupiter’s moon Europa. In most places, the icy crust there is many miles deep, but in some areas such as the so-called chaos terrains, the ice may be only as thick as that overlying Lake Whillans.
In Europa’s chaos terrains (see the simulated flyover of one of these regions, called Conamara, above), nutrients may be brought up close to the surface by subsurface currents powered by geothermal activity and tidal forces. These nutrients might originate in Europa’s deep ocean, which by some accounts is about 100 kilometers deep. Various modeling studies have shown that Europa may have enough nutrients to support a substantial ecosystem, perhaps even simple multicellular life such as brine shrimps.
Here’s a short video from the journal Nature (which published the results) giving a behind-the-scenes look at how the team does its work:
Here’s how we’ll approach reading comprehension questions:
- Read the passage
- Read all the questions
- Reread the passage, keeping the questions in your head.
- Do question one, looking back at the passage to be sure it’s the right answer.
- Underline the answer in the passage.
- Write the paragraph number where you found the answer next to the question.
- Repeat steps 4-6 for each question.
This process will ensure that students closely examine the text and use text evidence to answer questions about it. If you have any questions, please be sure to drop me a note.
Written By Jesus de La Fuente / CEO Graphenea / [email protected]
Around the world, research institutions are trying to develop ways to revolutionise the production of graphene sheets of the highest quality. One of the most cost effective ways this is possible is by the reduction of graphene oxide into rGO (reduced graphene oxide). The problem with this technique is the quality of graphene sheets produced, which (with certain methods) displays properties currently below the theoretical potential of pristine graphene compared to other methods such as mechanical exfoliation. However, this doesn't mean that improvements can’t be made, or that this reduced graphene oxide is effectively unusable; far from it, in fact.
Graphite oxide is a compound made up of carbon, hydrogen and oxygen atoms. It is artificially created by treating graphite with strong oxidisers such as sulphuric acid. These oxidisers work by reacting with the graphite and removing electrons in the chemical reaction. This reaction is known as a redox (a portmanteau of reduction and oxidisation) reaction, as the oxidising agent is reduced and the reactant is oxidised.
The most common method for creating graphite oxide in the past has been the Hummers and Offeman method, in which graphite is treated with a mixture of sulphuric acid, sodium nitrate and potassium permanganate (a very strong oxidiser). However, other methods have been developed recently that are reported to be more efficient, reaching levels of 70% oxidisation, by using increased quantities of potassium permanganate, and adding phosphoric acid combined with the sulphuric acid, instead of adding sodium nitrate.
Graphene oxide is effectively a by-product of this oxidisation as when the oxidising agents react with graphite, the interplanar spacing between the layers of graphite is increased. The completely oxidised compound can then be dispersed in a base solution such as water, and graphene oxide is then produced.
Graphite oxide and graphene oxide are very similar, chemically, but structurally, they are very different. The main difference between graphite oxide and graphene oxide is the interplanar spacing between the individual atomic layers of the compounds, caused by water intercalation. This increased spacing, caused by the oxidisation process, also disrupts the sp2 bonding network, meaning that both graphite oxide and graphene oxide are often described as electrical insulators.
Graphite Oxide to Graphene Oxide
The process of turning graphite oxide into graphene oxide can ultimately be very damaging to the individual graphene layers, which has further consequences when reducing the compound further (explanation to follow). The oxidisation process from graphite to graphite oxide already damages individual graphene platelets, reducing their mean size, so further damage is undesirable. Graphene oxide contains flakes of monolayer and few layer graphene, interspersed with water (depending on the base media, the platelet to platelet interactions can be weakened by surface functionality, leading to improved hydrophilicity).
In order to turn graphite oxide into graphene oxide, a few methods are possible. The most common techniques use sonication, stirring, or a combination of the two. Sonication can be a very time-efficient way of exfoliating graphite oxide, and it is extremely successful at exfoliating graphene (almost to levels of full exfoliation), but it can also heavily damage the graphene flakes, reducing them in surface size from microns to nanometres, and it also produces a wide variety of graphene platelet sizes. Mechanical stirring is a much less heavy-handed approach, but can take much longer to accomplish.
Graphene Oxide to Reduced Graphene Oxide
Reducing graphene oxide to produce reduced graphene oxide (hereafter referred to as rGO) is an extremely important process, as it has a large impact on the quality of the rGO produced and therefore determines how close rGO will come, in terms of structure, to pristine graphene. In large scale operations where engineers need to utilize large quantities of graphene for industrial applications such as energy storage, rGO is the most obvious solution, due to the relative ease of creating sufficient quantities of graphene to the desired quality levels.
As you would expect, there are a number of ways reduction can be achieved, though they are all methods based on chemical, thermal or electrochemical means. Some of these techniques are able to produce very high quality rGO, similar to pristine graphene, but can be complex or time consuming to carry out.
In the past, scientists have created rGO from GO by:
- Treating GO with hydrazine hydrate and maintaining the solution at 100 °C for 24 hours
- Exposing GO to hydrogen plasma for a few seconds
- Exposing GO to another form of strong pulsed light, such as that produced by xenon flashtubes
- Heating GO in distilled water at varying degrees for different lengths of time
- Combining GO with an expansion-reduction agent such as urea and then heating the solution to cause the urea to release reducing gases, followed by cooling
- Directly heating GO to very high levels in a furnace
- Linear sweep voltammetry
Note: These are just a sample of the numerous methods that have been attempted so far.
Reducing GO by using chemical reduction is a very scalable method, but unfortunately the rGO produced has often been of relatively poor quality in terms of surface area and electrical conductivity. Thermally reducing GO at temperatures of 1000℃ or more creates rGO that has been shown to have a very high surface area, close even to that of pristine graphene.
Unfortunately, the heating process damages the structure of the graphene platelets as pressure builds up and carbon dioxide is released. This also causes a substantial reduction in the mass of the GO (figures around 30% have been mentioned), creating imperfections and vacancies, and potentially also affecting the mechanical strength of the rGO produced.
The final example given above could eventually be the future of large scale production of rGO. Electrochemical reduction of graphene oxide is a method that has been shown to produce very high quality reduced graphene oxide, almost identical in terms of structure to pristine graphene, in fact.
This process involves coating various substrates such as Indium Tin Oxide or glass with a very thin layer of graphene oxide. Then, electrodes are placed at each end of the substrate, creating a circuit through the GO. Finally, linear sweep voltammetry is carried out on the GO in a sodium phosphate buffer at various voltages; at 0.6 volts reduction began, and maximum reduction was observed at 0.87 volts.
In recent experiments the resulting electrochemically reduced graphene oxide showed a very high carbon to oxygen ratio and also electronic conductivity readings higher than that of silver (8500 S/m, compared to roughly 6300 S/m for silver). Other primary benefits of this technique are that no hazardous chemicals are used, meaning there is no toxic waste to dispose of. Unfortunately, the scalability of the technique has come into question due to the difficulty of depositing graphene oxide onto the electrodes in bulk form.
Ultimately, once reduced graphene oxide has been produced, there are ways that we can functionalise rGO for use in different applications. By treating rGO with other chemicals or by creating new compounds by combining rGO with other two dimensional materials, we can enhance the properties of the compound to suit commercial applications. The list is almost endless as to what we can achieve with graphene in any of its guises.
Length—Evolution from Measurement Standard to a Fundamental Constant
The meter had its origin in August of 1793 when the Republican Government of France decreed the unit of length to be 10^-7 of the earth's quadrant passing through Paris and that the unit be called the meter. Five years later, the survey of the arc was completed and three platinum standards and several iron copies of the meter were made. Subsequent examination showed the length of the earth's quadrant had been wrongly surveyed, but instead of altering the length of the meter to maintain the 10^-7 ratio, the meter was redefined as the distance between the two marks on a bar.
In 1875, the Treaty of the Meter established the General Conference on Weights and Measures (Conférence Générale des Poids et Mesures, CGPM) as a formal diplomatic organization responsible for the maintenance of an international system of units in harmony with the advances in science and industry. This organization uses the latest technical developments to improve the standards system through the choice of the definition, the method to experimentally realize the definition, and the means to transfer the standard to practical measurements. The international system of units (Système International d'Unités, SI) is a modern metric system constructed using seven base units for independent quantities and two supplementary units for angles. Within the United States, the National Institute of Standards and Technology (NIST) has the responsibility for realizing the values of the SI units and disseminating them by means of calibrations to domestic users, as well as engaging in international research with other national laboratories and with the International Bureau of Weights and Measures (Bureau International des Poids et Mesures, BIPM).
The meter (m) is the SI unit of length and is defined as the length of the path traveled by light in vacuum during the time interval of 1/299 792 458 of a second. This replaces the two previous definitions of the meter: the original adopted by the CGPM in 1889 based on a platinum-iridium prototype bar, and a definition adopted in 1960 based on a krypton-86 radiation from an electrical discharge lamp. In each case, the change in definition achieved not only an increase in accuracy, but also progress toward the goal of using fundamental physical quantities as standards, in particular, the quantum mechanical characteristics of atomic systems.
In 1889, there was one prototype meter, a bar made of a platinum iridium alloy with lines inscribed at each end; the distance between them defined the meter (see Figure 1). The length standard was disseminated to the national laboratories through the use of artifact meters, which were accurate (but not identical) replicas of the prototype meter. Each artifact meter was calibrated against the prototype for use as a national standard. A serious problem with a prototype standard results from the fact that there is no method to detect a change in its value due to aging or misuse. As a consequence, it is not possible to state the accuracy or stability of the prototype meter, although calibration uncertainties of the artifact meters can be assigned.
The development of the Michelson interferometer, which measures physical displacement in terms of optical wavelengths, and the realization that certain atoms and molecules have precisely defined and reproducible emission frequencies (and, thus, wavelengths) brought about the transition from a mechanical to an optical length standard. The krypton-86 electrical discharge lamp was designed to produce the Doppler-broadened wavelength of the 2p10-5d5 transition of the unperturbed atom. The two dominant wavelength shifts, one caused by the DC Stark effect and the other by the gas pressure in the discharge lamp, were opposite in sign and could be made equal in magnitude by the proper choice of operating conditions. Different krypton-86 lamps reproduced the same wavelength to about 4 parts in 10^9, but had the disadvantage that the coherence length of their radiation was shorter than the meter, complicating the changeover from the older standard.
The ability to measure atomic wavelengths with higher accuracy and reproducibility has been further enhanced by the invention of the laser, along with techniques that permit the direct observation of the natural linewidth of atomic and molecular transitions without Doppler broadening. By using saturated absorption spectroscopy, which employs high-intensity counter-propagating laser beams, previously unresolved hyperfine transitions are measured to high accuracy.
Faced with the possibility that further advances in laser spectroscopy would lead to proposals for new length standards based on more precise atoms or molecules, a new concept for the length standard definition was developed. The second (which is equivalent to 9,192,631,770 oscillations of the 133Cs atom) and the meter are independent base units. Traditionally, the speed of light was measured in terms of their ratio. By contrast, the present standard defines the meter in terms of the SI second and a defined (i.e., conventional) value for the speed of light in vacuum, which fixes it to 299 792 458 m/s exactly, and the meter is determined experimentally. Since it is not based on a particular radiation, this definition opens the way to major improvements in the precision with which the meter can be realized using laser techniques without redefining the length standard.
The BIPM stipulates that the meter can be realized by the following three methods. In these descriptions, c is the speed of light. The meter can be realized
By a direct measurement of the distance L that light travels in vacuum in the time interval t, using the relation L = (c)(t);
By a direct measurement of the frequency f of radiation and calculating the vacuum wavelength λ using the relation λ = c/f;
By means of one of the radiations from a list provided by the BIPM whose frequency and vacuum wavelength can be used with a stated uncertainty.
Method 1 follows directly from the definition, but cannot achieve the accuracy possible with the other two, and so is not used for practical purposes.
Method 2 measures laser frequencies in terms of the cesium clock. A complicated series of measurements is required because of the large difference between the microwave clock (9 GHz) and visible frequencies (500 THz), and because different regions of the electromagnetic spectrum require different measurement technologies. The general technique is to detect the beat frequency generated by focusing two or more laser beams, for example, the harmonic of one oscillator or laser and the fundamental of another, on nonlinear detector diodes, and adding or subtracting microwaves, to reduce the frequency of the beat signal so that it is within range of the counter. In the microwave region, commercial diodes are used. In the infrared region, specially constructed metal-insulator-metal diodes are used because of their ability to rectify signals at optical frequencies. In the visible region, parametric up-conversion is used to convert infrared radiation into visible light that is compared to a visible stabilized laser using a photodiode. The accuracy of the chain of frequency measurements from the cesium clock to the red helium-neon line (633 nm) is about 7.2 parts in 10^12 and has an advantage over interferometry because corrections do not have to be made for diffraction effects, reflective phase shifts, or the index of refraction.
Method 3 establishes practical length standards by using the frequencies of certain stabilized lasers whose performance has been carefully measured using Method 2 and calculating the wavelengths. In this way, a laboratory standard of known frequency can be constructed using the specifications and operating conditions provided by the BIPM. These descriptions also indicate the error associated with this method of realization. For length metrology, the iodine-stabilized HeNe laser operating at 633 nm is the most common because it is convenient to operate, accurate to 2.5 parts in 10^11, and is used to calibrate commercial displacement measuring interferometer systems (see Figure 2).
The ability to transfer the length standard to practical measurements is influenced by the index of refraction (n) of the beam path of the measuring instrument, since the wavelength of the HeNe laser (633 nm) in air is smaller than its vacuum wavelength by 2.7 parts in 10⁴ for standard atmospheric conditions. If the system is in a vacuum, the frequency of the length standard stipulated by the BIPM is used to calibrate the working laser and no further adjustments need to be made. If the system is not in a vacuum, an additional wavelength adjustment must be made for the index of refraction of the measurement environment. This can be done by a direct measurement of n, or it can be calculated using an empirically derived formula whose input variables are temperature, barometric pressure, relative humidity, and CO₂ content. The accuracy of the calculation using the empirical formula is about 1 part in 10⁷ when state-of-the-art technology is used for measuring the four input variables, so the index of refraction adjustment results in a reduction in accuracy by a factor of 500 compared to the accuracy in vacuum.
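The adjustment itself is a division of the vacuum wavelength by the refractive index of the measurement path. The sketch below uses the roughly 2.7 parts in 10⁴ figure quoted above as the whole correction; a real system would evaluate n from the measured temperature, pressure, humidity, and CO₂ content using the empirical formula.

```python
wavelength_vacuum = 632.99e-9  # m, from the previous sketch
n_air = 1.00027                # refractive index of air at standard conditions
                               # (about 2.7 parts in 10^4 above 1; illustrative)

wavelength_air = wavelength_vacuum / n_air
shift_ppm = (wavelength_vacuum - wavelength_air) / wavelength_vacuum * 1e6
print(f"air wavelength ≈ {wavelength_air * 1e9:.2f} nm "
      f"({shift_ppm:.0f} ppm shorter than in vacuum)")  # ≈ 632.82 nm, ~270 ppm
```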
The modern length standard has evolved over a period of 200 years which has brought it to a point where it can be continually improved without the necessity of changing its definition. We may suspect that the developers of the first length standard were as unprepared to predict present day developments as we are to predict the advances that will be made in the next century. Such developments will, no doubt, cause great excitement for those who make precision measurements.
H.G. Jarrard, D.B. McNeill, A Dictionary of Scientific Units, Chapman and Hall, 1964, p. 85.
Arthur O. McCoubrey, NIST Special Publication 811 - Guide for the Use of the International System of Units, National Institute of Standards and Technology, Gaithersburg, MD, 1991.
Barry N. Taylor, NIST Special Publication 330 - The International System of Units (SI), National Institute of Standards and Technology, Gaithersburg, MD, 1991.
D.A. Jennings, C.R. Pollock, F.R. Peterson, R.E. Drullinger, K.M. Evenson, J.S. Wells, J.L. Hall, and H.P. Layer, Optics Letters 8(3); 1983.
T.J. Quinn, Mise en Pratique, Metrologia 30(5); 1994.
H.P. Layer, IEEE Trans. On Instrumentation and Measurements IM29(4); 1980.
F.E. Jones, J. Res. Nat. Bur. Stand. (U.S.) 86: 27-32; 1981.
A contribution by the Precision Engineering Division of the Manufacturing Engineering Lab at the National Institute of Standards and Technology.
When you work out, a few things probably run through your mind: changing your body composition, burning calories, building more muscle and strengthening your cardiovascular system. But did you know that there is one body part that most of us neglect to think about in terms of exercise? This specific organ helps you do, well, everything! What is it?|
Researchers have much more to learn about the brain, but over the last decade scientists have learned quite a bit about the effects of exercise on the brain—both physical and intellectual. It turns out that by exercising regularly and "training your brain," you can boost your brain power just like physical activity can strengthen your muscles.
The Link Between Working Out and Brain Power
One study published in Proceedings of the National Academy of Sciences found that regular sweat sessions can increase the size of a region of the brain called the hippocampus—a part of the brain that begins to decline around age 30 in most adults. The hippocampus is tucked deep in the brain and plays an important role in learning and memory. According to researchers, a larger hippocampus is associated with better performance on spatial reasoning and other cognitive tasks.
Another study in Neurology showed that exercise may help slow brain shrinkage in people with early Alzheimer’s disease. In the study, adults diagnosed with early Alzheimer’s who were less physically fit had four times more brain shrinkage than normal older adults. A 2010 study in the journal Brain Research found an association between physical fitness and children's brain power, too. In the study, researchers found that, on average, fit 9- and 10-year-old children had larger hippocampi and performed better on memory tests than their more sedentary peers.
How Exercise Helps the Brain
Here are a few more ways that exercise boosts brain power, according to AARP. |
What began more than 50 years ago as a way to improve fishing bait in California has led a University of Tennessee researcher to a significant finding about how animal species interact, one that raises important questions about conservation.
In the middle of the 20th century, local fishermen who relied on baby salamanders as bait introduced a new species of salamander to California water bodies. These Barred Tiger salamanders came into contact with the native California Tiger salamanders, and over time the two species began to mate.
"To give you a sense of the difference between these two species, they are about as closely related as humans and chimpanzees," said UT assistant professor Ben Fitzpatrick, a faculty member in the Department of Ecology and Evolutionary Biology. This image shows California Tiger and Barred Tiger Salamanders in a California pond. New research shows that the two species have interbred to create a hybrids that have shown remarkable vigor.
Credit: Bruce Delgado/Bureau of Land Management
Mating between two different species creates a hybrid offspring. According to Fitzpatrick, while such hybrids have been found to be successful in plant species, research has generally shown that animal hybrids are not able to sustain themselves -- in scientific terms, they lack "fitness."
This understanding made Fitzpatrick's findings especially surprising when he looked at the offspring of the two salamander species in California. He and colleague Bradley Shaffer of the University of California, Davis, found that the new hybrid salamanders were not only surviving, but in some cases, thriving.
"I thought I was studying hybrid dysfunction going into this study -- looking at how hybrids go wrong," said Fitzpatrick. "The level of vigor in these hybrids was completely unexpected."
Their research, funded by the National Science Foundation and the Environmental Protection Agency, will appear in the upcoming issue of the Proceedings of the National Academy of Sciences. It is among the first to show hybrid vigor among animal species, and Fitzpatrick noted that the work raises a number of questions for conservationists.
The California Tiger salamander, which is native to the area of the study, is listed as threatened under the U.S. Endangered Species Act, while the Barred Tiger salamander is not. The data in the article lead the researchers to predict that eventually all California Tiger salamanders will have some of the non-native genes. In a sense, the entire species would then have hybrid ancestry.
According to Fitzpatrick, the finding raises questions about whether this would be considered beneficial to the native species or not -- it depends on how conservationists choose to define the new hybrid.
"If they consider it an acceptable modification of the original species, then this could enhance the chances for survival of the California Tiger salamander," he said, "but others may consider the hybrids to be genetically impure and see hybridization as accelerating extinction."
It is not yet clear from the research what is causing the hybrids to thrive.
"Our prediction is that, because of their advantages, the hybrids will remain part of the gene pool," said Fitzpatrick. "What we don't know is if those advantages come from the synergistic interaction of certain genes -- that they are greater than the sum of their parts -- or if they simply get the 'best of both worlds' by a selection of useful individual traits from each species."
Because the research is in such early stages, Fitzpatrick and colleagues plan to broaden their study of these salamanders, and also explore the implications of these vigorous hybrids for other animals in their ecosystem.
They have expanded the number of genetic markers that they are analyzing in the hybrids to determine the extent of their genetic mixing. Given that the new hybrids are finding more success in their environment, the researchers also plan to study whether their success is reducing the food supply or other resources available to native species in the area.
Fitzpatrick notes that their discoveries place the work on the leading edge of hybridization studies.
"We're right at the front in thinking that these ideas may be much more generally applicable," he said. He pointed to two other studies in recent months that have explored the issue of hybridization in butterflies. |
(Via Resilience Science, Garry Peterson, 1/28/2007):
The World Resources Institute’s EarthTrends weblog points to some data on trends in natural and manmade disasters.
Although natural and manmade disasters occur in all countries regardless of income or size, not all governments have the resources necessary for prevention and emergency response. For those regions already battling widespread poverty, disease, and malnutrition, disasters are a significant constraint on social and economic development. Understanding the trends that describe disasters through time and space is very important, particularly in light of climate change, which threatens to alter both the distribution and severity of disasters worldwide.
With growing population and infrastructures the world’s exposure to natural hazards is inevitably increasing. This is particularly true as the strongest population growth is located in coastal areas (with greater exposure to floods, cyclones and tidal waves). To make matters worse any land remaining available for urban growth is generally risk-prone, for instance flood plains or steep slopes subject to landslides. The statistics in the graph opposite reveal an exponential increase in disasters. This raises several questions. Is the increase due to a significant improvement in access to information? What part does population growth and infrastructure development play? Finally, is climate change behind the increasing frequency of natural hazards? |
Today's doodle on Google's homepage celebrates the birth anniversary of a Polish astronomer, Nicolaus Copernicus. The work and findings of Copernicus transformed human understanding of the solar system. He was the first person to propound that the sun and not the Earth is at the centre of the universe. His astronomical theory is called the heliocentric model of the universe.
He was born on 19 February 1473 in the city of Torun, Poland. He had his initial higher education at Jagiellonian University and then went to Italy to study law at the University of Bologna. It was in Italy that his passion for astronomy grew under the influence of his mathematics professor, Domenico Maria de Novara.
In 1530, he wrote a research study titled 'De Revolutionibus Orbium Coelestium', meaning 'On the Revolutions of the Celestial Spheres', which placed the sun, rather than the Earth, at the centre of the solar system, contrary to what was then believed. Though his theory was accepted by many scientists, it was opposed by religious leaders.
He died in May 1543 in Frombork. He completed a landmark book on astronomy just before his death. Copernicus' pupil, Rheticus also wrote a book, Narratio Prima (First Account), outlining the essence of Copernicus' theory. |
Ancient exoplanet found in sun’s neighborhood, could sustain water
The ancient star, just 13 light years from Earth, was initially discovered at the end of the 19th century by the Dutch astronomer Jacobus Kapteyn. The red dwarf is the second fastest moving star in the sky and belongs to the galactic halo, an extended cloud of stars orbiting our galaxy. It can actually be seen with an amateur telescope in the southern constellation of Pictor.
An international team of researchers coming from London’s Queen Mary University and the University of California, Santa Cruz (UCSC) have been collecting data on Kapteyn’s star for over a decade at the Keck Observatory.
In the most recent study, they measured tiny periodic changes in the motion of Kapteyn’s star, which are caused by the gravitational tug of the orbiting planets. Scientists can then work out some of the properties of these planets, such as their mass and orbital periods.
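As a rough illustration of how an orbital period translates into a distance from the star, Kepler's third law gives the semi-major axis once the stellar mass is known. The sketch below assumes a mass of about 0.28 solar masses for Kapteyn's star, a figure adopted here purely for illustration, together with the 48-day period reported for Kapteyn b later in this article.

```python
import math

G     = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30   # kg
AU    = 1.496e11   # m

m_star = 0.28 * M_SUN  # assumed mass of Kapteyn's star (illustrative value)
period = 48 * 86400.0  # s, ~48-day orbital period of Kapteyn b

# Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)
a = (G * m_star * period**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
print(f"semi-major axis ≈ {a / AU:.2f} AU")  # ≈ 0.17 AU, well inside Mercury's orbit
```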
“We were surprised to find planets orbiting Kapteyn’s star. Previous data showed some moderate excess of variability, so we were looking for very short period planets when the new signals showed up loud and clear,” says lead author Dr Guillem Anglada-Escude of QMUL’s School of Physics and Astronomy.
“Finding a stable planetary system with a potentially habitable planet orbiting one of the very nearest stars in the sky is mind blowing,” says US co-author Dr Pamela Arriagada, from the Carnegie Institution.
“This is one more piece of evidence that nearly all stars have planets, and that potentially habitable planets in our galaxy are as common as grains of sand on a beach.”
The planets were given the traditional names of Kapteyn b and Kapteyn c, after their parent star.
For the time being only a few properties about them are known, such as their approximate masses, distances to the star and orbital periods.
But using new instruments currently under construction, scientists will be able to measure the planet’s atmospheres and may be able to detect the presence of water.
The two planets are estimated to be 11.5 billion years old, roughly 2.5 times the age of the Earth and only about two billion years younger than the universe itself.
Kapteyn c is far bigger than Earth and its year lasts 121 days. However, astronomers think it’s too cold to support liquid water.
Kapteyn b is the most likely to support water; its mass is five times bigger than Earth’s and it takes 48 days to orbit its star.
As scientists believe the star itself was born in a dwarf galaxy that was eventually absorbed and destroyed by our own Milky Way galaxy, any life it could host might have come from very deep space.
“It does make you wonder what kind of life could have evolved on those planets over such a long time,” Anglada-Escude said. |
There is no clear picture of the original use of Stonehenge. It certainly wasn't built by the Druids, as it pre-dated them. Constructed in phases spanning more than a millennium between 3000 and 1600 BC in the plains of England by people who left no written records, there is a great deal of debate about why they built it. There are a large number of burial sites in the region, and one theory holds that this was a place for religious rites concerning the dead, possibly a form of ancestor worship. The stones themselves line up with the sun and stars on particular days such as the solstices, and some claim this was more of an early astronomical observatory. Often these ancient cultures placed great significance on these days and/or worshiped the sun, moon and stars as deities, so this alignment may have been used to mark religious festivals. Tobias Reichling made a microscale version of the famous stone circle.
This microscale creation was part of a huge collaborative display by European LEGO builders. Conceived by Tobias Reichling and Bruno Kurth, five builders constructed a huge (fifteen square meters) LEGO map of Europe and twenty different people contributed forty-four microscale versions of famous European landmarks. |
The only sculptor of the fifth century who is at once known to us from literary tradition and represented by an authenticated and original work is Paeonius of Mende in Thrace. He was an artist of secondary rank, if we may judge from the fact that his name occurs only in Pausanias; but in the brilliant period of Greek history even secondary artists were capable of work which less fortunate ages could not rival. Pausanias mentions a Victory by Paeonius at Olympia, a votive offering of the Messenians for successes gained in war. Portions of the pedestal of this statue with the dedicatory inscription and the artist’s signature were found on December 20, 1875, at the beginning of the German excavations, and the mutilated statue itself on the following day (Fig. 143). A restoration of the figure by a German sculptor (Fig. 144) may be trusted for nearly everything but the face. The goddess is represented in descending flight. Poised upon a triangular pedestal about thirty feet high, she seems all but independent of support. Her draperies, blown by the wind, form a background for her figure. An eagle at her feet suggests the element through which she moves. Never was a more audacious design executed in marble. Yet it does not impress us chiefly as a tour de force. The beholder forgets the triumph over material difficulties in the sense of buoyancy, speed, and grace which the figure inspires. Pausanias records that the Messenians of his day believed the statue to commemorate an event which happened in 425, while he himself preferred to connect it with an event of 453. The inscription on the pedestal is indecisive on this point. It runs in these terms: “The Messenians and Naupactians dedicated [this statue] to the Olympian Zeus, as a tithe [of the spoils] from their enemies. Paeonius of Mende made it; and he was victorious [over his competitors] in making the acroteria for the temple.” The later of the two dates mentioned by Pausanias has been generally accepted, though not without recent protest. This would give about the year 423 for the completion and erection of this statue.
The great age of Greek sculpture. Second period: 400-323 B. C.
In the fourth century art became even more cosmopolitan than before. The distinctions between local schools were nearly effaced, and the question of an artist’s birthplace or residence ceases to have much importance. Athens, however, maintained her artistic preeminence through the first half or more of the century. Several of the most eminent sculptors of the period were certainly or probably Athenians, and others appear to have made Athens their home for a longer or shorter time. It is therefore common to speak of a “younger Attic school,” whose members would include most of the notable sculptors of this period. What the tendencies of the times were will best be seen by studying the most eminent representatives of this group or school.
Misconceptions about Immunization
Immunizations should be part of routine health care obtained through one's personal physician (or in some instances, through one's local health department). Long-lasting protection is available against measles, mumps, German measles (rubella), poliomyelitis, tetanus (lockjaw), whooping cough (pertussis), diphtheria, chickenpox (varicella), Hemophilus influenzae b (Hib), and hepatitis B. Immunization against all of these is recommended for children by the American Academy of Pediatrics, the American Academy of Family Practice, and the Advisory Committee on Immunization Practices of the U.S. Centers for Disease Control and Prevention (CDC).
All states now require proof of immunization or other evidence of immunity against some of these diseases for admission to school. However, the requirements vary from state to state, and exemptions may be granted for medical, moral, or religious reasons.
Immunization is also important for adults. Those unprotected against any of the above diseases (except whooping cough) should consult their physicians. Tetanus boosters should be administered every ten years. Flu shots (which give only seasonal protection) and immunization against pneumococcal pneumonia are recommended for high-risk patients, elderly individuals, and certain institutional populations.
The success of vaccination programs in the United States and Europe inspired the 20th-century concept of "disease eradication"—the idea that a selected disease can be eradicated from all human populations through global cooperation. In 1977, after a decade-long campaign involving 33 countries, smallpox was eradicated worldwide. Polio caused by wild virus has been eradicated from the Western Hemisphere; childhood vaccination levels in the United States are at an all-time high; and disease and death from diphtheria, pertussis, tetanus, measles, mumps, rubella and Haemophilus influenzae type b (Hib) are at or near record lows. The CDC's Parent's Guide to Childhood Immunizations includes some interesting statistics about the impact of vaccination on childhood diseases.
| Disease | Cases per year before vaccines | Cases in 2007 | Percent decline |
| --- | --- | --- | --- |
| Congenital rubella syndrome | 823 | 0 | 100% |
At least a dozen misconceptions can lead parents to question the wisdom of immunizing their children. If you encounter others you would like Quackwatch to address, please contact us.
- Misconception #1: Because of better hygiene and sanitation, diseases had already begun to disappear before vaccines were introduced.
- Misconception #2: The majority of people who get the disease have been immunized.
- Misconception #3: There are hot lots of vaccine that have been associated with more adverse events and deaths than others. Parents should find the numbers of these lots and not allow their children to receive vaccines from them.
- Misconception #4: Vaccines cause many harmful side effects, and even death—and may cause long-term effects we don't even know about.
- Misconception #5: DTP vaccine causes sudden infant death syndrome (SIDS).
- Misconception #6: Vaccine-preventable diseases have been virtually eliminated from the United States, so there is no need for my child to be vaccinated.
- Misconception #7: Giving a child more than one vaccine at a time increases the risk of harmful side effects and can overload the immune system.
- Misconception #8: There is no good reason to immunize against chickenpox (varicella) because it is a harmless disease.
- Misconception #9: Vaccines cause autism.
- Misconception #10: Hepatitis B vaccine causes chronic health problems, including multiple sclerosis.
- Misconception #11: Thimerosal causes autism: Chelation therapy can cure it.
- Misconception #12: Children get too many immunizations.
The Vaccine Education Center at Children's Hospital of Philadelphia has produced a very powerful set of videos to help parents understand why vaccines are important. This one tells the story of a parent who nearly lost a child because she believed misinformation on the Internet. To see the other videos, click here.
Opposition by Offbeat Professionals
Large percentages of offbeat practitioners advise parents not to immunize their children. Some are rabid on the subject. Others pretend to provide a "balanced" view but greatly exaggerate what they consider negative reasons. These actions are irresponsible and can cause serious harm both to patients and to our society as a whole. For further information see:
News and Commentary
- An Open Letter to the U.S. Congress about Immunization (2008)
- British Courts Side with Vaccination in Parental Dispute
- Immunization: The Inconvenient Facts: A science-based response to Viera Scheibner.
- The Promise of Vaccines: The Science and the Controversy: American Council on Science and Health booklet
- Quicksilver Salesmen: Highlights the intellectual dishonesty and paranoia of antivaccination leaders
- Vaccination Undermined: Three factors discussed.
In 1802, British satirist James Gillray caricatured a scene at the Smallpox and Inoculation Hospital at St. Pancras, showing Edward Jenner administering cowpox vaccine to frightened young women, and cows emerging from different parts of people's bodies. The cartoon was inspired by the controversy over inoculating against the dreaded disease, smallpox. Cowpox vaccine was rumored to have the ability to cause people to sprout cow-like appendages. Jenner stands calmly amid the crowd. A boy next to him holds a container labeled "VACCINE POCK hot from ye COW"; papers in the boy's pocket are labeled "Benefits of the Vaccine." The tub on the desk next to Jenner is labeled "OPENING MIXTURE." A bottle next to the tub is labeled "VOMIT." The painting on the wall depicts worshipers of the Golden Calf. (Source: Wikipedia)
Reliable Information Sources
- U.S. Centers for Disease Control
- National Immunization Program offers answers to common questions.
- The "Pink Book" Epidemiology & Prevention of Vaccine-Preventable Diseases
- CDC Information Hotline: (800) 232-2522.
- American Academy of Pediatrics
- Public Health Agency of Canada
- The Immunization Action Coalition, whose mission is to increase immunization rates, offers childhood and adult immunization information and answers questions by email.
- The Immunization Gateway: Links to many other authoritative sites.
- Immunization Newsbriefs: Online and e-mail newsletter from the National Network for Immunization Information
- The Vaccine Page: Vaccine news and a database
- Healthy People 2010: Surgeon General's goals for immunization
- Sabin Vaccine Institute: Vaccine news
- National Institute of Allergy and Infectious Diseases: Jordan Report 2000: Accelerated Development of Vaccines
- National Foundation for Infectious Diseases
- National Network for Immunization Information
- First Candle/SIDS Alliance: position paper on immunization and sudden infant death syndrome
- Vaccine Education Center (Children's Hospital of Philadelphia)
- Vaccine Support Message Board
- VaccinePlace.com: Comprehensive information on vaccine history, safety, and recommended use
This page was revised on April 20, 2013. |
CRISPR-induced Mutations – What do they Mean for Food Safety?
A new study published in Nature Methods has found that the genome editing technology CRISPR introduced hundreds of unintended mutations into the genome of mice. In the study, the researchers sequenced the entire genome of mice that had undergone CRISPR gene editing to correct a genetic defect. They looked for all mutations, including those that only altered a single nucleotide (DNA base unit). They found that the genomes of two independent gene therapy recipients had sustained more than 1,500 single-nucleotide mutations and more than 100 larger deletions and insertions. None of these DNA mutations were predicted by the computer algorithms (software packages) that are widely used by researchers to screen the genome (the total DNA base unit sequence) of an organism to look for potential off-target effects. While this study was conducted in the arena of gene therapy, it has clear implications for the regulation of food plants and animals derived from CRISPR and other genome editing techniques.
Some college students may have difficulty concentrating and maintaining focus, especially during lectures and while reading. The average attention span is between 20 and 90 minutes, varying greatly depending on the person's interest and the complexity of the material. There are, however, techniques to improve focus and concentration.
The following strategies help students maintain focus:
- Spend forty-five minutes to an hour studying one subject, and then take a short break. Getting a drink of water, and walking around a little will stimulate better blood circulation to your brain.
- Set goals to give yourself something to work for. Reward yourself with a study break or a special treat when you reach your goal.
- Start a reading assignment by previewing the chapter. Skim paragraph headings, pictures, charts, and graphs, as well as the introduction and summary.
- To avoid "eye glide" (looking at the page without taking in the contents) close your eyes or the book at the end of each section and mentally recap what you just learned. If you aren't able to do so, go back and re-read.
- Adjust your reading speed or read out loud.
- Take deep breaths, using your diaphragm instead of your chest. This allows more oxygen to get to your brain, increasing concentration and learning. (This also works during class.)
4th grade writing - 2017-2018 by an expository essay must be centered around an the more students see and hear expository mentor text will enable them to. Teaching expository essays to second graders chase young expository text: how to write an expository essay - duration:. Stageoflifecom features a collection of its best student and teen essays you can use as mentor texts in the classroom to accommodate our mentor text essay:. Mentor essay toulmin a literate life - mentor texts - julie ballew teacher note: students who have not successfully formulated an opinion and/ or offered text- based.
This lesson will be taught because in the intermediate grades, expository essay writing is ongoing in the future, students will have to know how to write an. Help students with descriptive writing skills by teaching these five writing activities that will allow them to practice showing not telling. Writing expository essays in an expository essay, a type of informational text, that you can use to model and analyze other mentor expository texts related. 4th grade staar teach expository writing with mentor texts you will sort student papers according to tea's grading category to identify biggest area of need, this is.
Text features mentor text graphic organisers for structuring expository texts linking words for expository essays for essay editing app java can an. 1 6th grade exemplar essay: expository essay my childhood game engaging like “cops and robbers,” “secret agent” is a creative and imaginative game. Explain yourself: an expository writing unit for high school adele barnett trinity university • create a “skeleton” essay from a mentor text—a devolved essay. Write an essay stating your position on whether you believe that we live in find a quote in the mentor text that would serve as your staar writing expository.Using mentor texts to motivate and support student writers by to write a convincing essay or draft a descriptive talk about the mentor text,. Students explore the nature and structure of expository texts that focus on cause and effect and apply what they learned using graphic organizers and writing. Examples of expository writing essays irving 10/10/2016 7:32:06 body part 1 student is an expository essay at school examples or a very, in third person. Five expository text structures and their associated signal words pattern description cue words (signal words) graphic organizer description the author. How to write an expository essay: outline, format, structure, topics, examples of an expository essay. Modeling an expository text structure strategy in think alouds christine j gordon abstract several current notions used in combination can contribute to better. Crafting a thesis for an expository essay with a great thesis in place, writing your essay will be a snap lc shows you how to write this all-important sentence. A 11 teaching ells to deconstruct writing select a mentor text class who are expected to write an expository essay that both describes what a hero is and. Mentor texts or anchor texts are any text that can be used as an example of good writing for writers writers use a mentor text to inform their own writing.
What is expository text an expository essay does exactly what the name implies: expository essays: types, characteristics & examples related study materials. Example of expository essay mentor text use that for a look at expository essay how that requires the history of expository essay. What is expository writing - definition & examples when writing an expository essay, what is expository writing - definition & examples related study.
Model expository/informative expositoryhow so students create an essay if you have a favorite mentor text you use to inspire expository writing. Expository essay researchdocx a new genre and the strategy of using a mentor text to discover the to read over another essay expository,. Expository writing teacher resources is the mentor text used in an expository writing unit that teaches paragraph expository essay format with a.Download
The world just got one step closer to renewable, cheap, and efficient large-scale energy production as researchers at Stanford University lead by engineer Yi Cui developed a grid-scale battery whose electrodes can last for up to a thousand charge cycles without degrading. The new battery is heralded as a game-changer for fluctuating renewable energy sources such as solar and wind.
The key to the battery's design lies in the structure of its electrodes. In regular batteries, charged particles move towards the positive electrode during charging. During discharge, the particles flow back towards the negative electrode, creating an electric current. As this process is repeated, the electrodes tend to degrade as the ions move back and forth. In Yi Cui's battery, the cathode is coated in hexacyanoferrate, and the anode is made of activated carbon and an electrically conductive polymer. The electrodes are set in a liquid solution of positively charged potassium ions, which are able to flow between the anode and cathode without damaging them.
The materials used in this battery are commercially available, which means the technology can be applied on a large scale. Most current means of storage are too expensive to be practical on grid levels, but Stanford’s technology may prove to be a game-changer. “Virtually all of the energy-storage capacity currently on the grid is provided by pumped hydroelectric power, which requires an immense capital investment, is location-dependent and suffers from low energy efficiency,” the team remarks. Now, with this example of a fast and long-lived storage technology, the battery could provide the extra boost of power renewable energy needs to become successful, reliable, and cheap. |
The philodendron plant is actually a common name for a large genus – roughly 900 in number – which is a member of the araceae family and aroideae subfamily. These plants have a wide distribution, ranging from tropical regions of the Americas all the way into Asia. Of the many varieties available, you may find a philodendron that trails, climbs or develops as a vine. Its foliage is large and alternate; pinnate, lobed or cut; and either heart, oval or pear-shaped. When mature, the philodendron plant develops an inflorescence that is made up of a waxy, bi-colored spathe that surrounds a spadix. The inflorescence may be cream, bright white or red, and may occur singly or as a large group.
The philodendron plant is absolutely one of the most popular houseplants today, but the history of its collection can be dated as far back as 1644, when the German naturalist Georg Marcgrave began acquiring them from the wild. Many other explorers sought to find out more about this extensive genus; the first such exploration was done by Charles Plumier, who managed to gather and classify at least six new species. As time went by, the philodendron began to increase in popularity, and by 1793 the species philodendron oxycardium was introduced to the English Botanic Gardens, and became a must have plant for any proper parlor. In the United States, the philodendron did not really take off until the mid-1930s when a nurseryman by the name of John Masek noticed the potential of this plant. Considering that they were easy to grow, not to mention low maintenance, he began propagating and selling them to florist shops. In addition to being a popular houseplant, philodendrons have also become a staple of artistic inspiration. Pablo Picasso, for instance, frequently used these plants to shape unusual scenes – such as his 1929 work, “Woman in the Garden,” where the nymph Daphne was transformed into a large brush of vines. More modern artists replicate this plant in vivid, often abstract shades, such as Mimi Little’s “Philodendron,” and Peggy Eyth’s “Tree With Split Leaf Philodendron.”
To pagans, the philodendron plant has long been considered a symbol of health, to others, it is thought to be an emblem for abundance and wealth. As a gift, these plants are frequently given in pots or hanging baskets to welcome neighborhood newcomers; to those who have just purchased their first home; or to wish the recipient well as they move on to a new path.
The Mini Synthetic-Aperture Radar (SAR) is a lightweight radar imaging instrument flying currently on the Indian Space Research Organization's Chandrayaan-1 mission. A modified version of this instrument will fly on NASA's Lunar Reconnaissance Orbiter mission in 2009.
Mini-SAR uses a different analytical approach to look for ice. Traditionally, the key parameter used to determine if ice is present is the circular polarization ratio (CPR). This quantity is equal to the magnitude of the same-sense received signal (i.e., the same left or right sense as the transmitted circular polarization) divided by the magnitude of the opposite-sense signal. Mini-SAR uses a hybrid dual polarization technique, transmitting a circularly polarized signal (either Right or Left Circular Polarization) and then coherently receiving the linear Horizontal and Vertical polarization signals. This hybrid architecture preserves all of the information conveyed by the reflected signals and ultimately serves to determine if the returned signal is caused by an ice-regolith mixture, or simply dry rocks on the lunar surface.
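The ratio itself is just a comparison of received echo powers. Here is a minimal sketch of the traditional CPR calculation described above, with invented numbers; it is not the Mini-SAR processing chain, which reconstructs the equivalent quantity from its hybrid-polarized measurements.

```python
# Circular polarization ratio (CPR): same-sense received power divided by
# opposite-sense received power. Echo powers below are invented for illustration.

p_same_sense     = 0.9   # power received in the transmitted circular sense
p_opposite_sense = 0.8   # power received in the opposite circular sense

cpr = p_same_sense / p_opposite_sense
print(f"CPR = {cpr:.2f}")  # 1.12 in this made-up example

# Dry, smooth lunar surfaces typically return CPR well below 1; values near or
# above 1 can point to either ice-bearing regolith or very rough, blocky terrain,
# which is why the extra polarization information matters.
```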
The principal goal of Mini-SAR on Chandrayaan-1 is to conduct systematic mapping polewards of 80° latitude for both poles. Mini-SAR uses S-band (2380 MHz), has an illumination incidence angle of 35°, and image strips have spatial resolution of 75 meters per pixel. During the observation opportunities given to the instrument, it will image in SAR mode both poles every 2-hour orbit, covering both polar regions in a single 28-day mapping window.
These regions close to both poles contain some of the most promising sites for potential water deposits. By operating Mini-SAR during orbits of maximum inclination, the scientists will be able to obtain SAR strips of permanently shadowed regions within 2° latitude of both poles.
The Mini-SAR instrument was activated on November 17, 2008 and acquired SAR images of both poles during a commissioning test (Fig. 1).
P. D. Spudis (1), D.B.J. Bussey (2), B. Butler (3), L. Carter (4), J. Gillis-Davis (5), J. Goswami (6), E. Heggy (7), R. Kirk (9), T. Misra (6), S. Nozette (1), M. Robinson (8), R. K. Raney (2), T. Thompson (7), B. Thomson (2), E. Ustinov (7)
1. Lunar and Planetary Institute, Houston TX 77058 ([email protected]) 2. Applied Physics Laboratory, Laurel MD 20723 3. NRAO, Socorro NM 4. NASM, Washington DC 5. Univ. Hawaii, Honolulu HI 96822 6. ISRO, Bangalore, India 7. JPL, Pasadena CA 8. ASU, Tempe AZ 9. USGS, Flagstaff AZ
While no remote measurement can definitively answer the question of whether ice exists at the lunar poles, an orbiting SAR provides the most robust method of obtaining a positive indication of ice deposits. With an orbital SAR, ALL areas on the Moon can be seen. All permanently shadowed regions will be imaged multiple times by an orbiting radar with incidence angles favorable for determining their scattering properties.
Significance to Solar System Exploration
The presence of water ice on the Moon has the potential to completely change the space flight paradigm. Currently, space probes must be supplied and equipped on Earth and launched complete; this limits the amount of material, and thus capability, of future space probes. In contrast, if the Moon's resources can be used, specifically the water ice at the poles to make rocket propellant, the rules of space exploration will be forever changed. Use of lunar generated propellant will create an Earth-Moon transportation infrastructure, with which we can not only access any point in space, but also voyage to the planets beyond.
Counting to Ten
And a one and a two and a three. That's how it all starts. After that start, the sky is the limit for you. In the last section, you should have learned what numbers look like, now it's time to associate values with those numbers. As you move through this section, you will discover definite patterns in the way counting works. For most of the western world, if you know ten characters (0,1,2,3,4,5,6,7,8,9), you can do just about anything in the world of math. That's power!
Starting Small
You may have read books where they talk about billions of stars. You may have listened to the news where they talk about thousands and millions of people. That's all very well and good, but before you can move into big numbers you've got to start small. Remember that computers, no matter how fancy and amazing they are, are still built on a platform of zeroes (0) and ones (1). That's where we'll start...Very small.
Other Values
As you read and learn more, you will be introduced to other values like decimals and fractions that we cover here. Those topics use numbers and counting, just in a different way. Counting can also get super complex and mathematicians turn to computers for help. You will eventually understand how those numbers and values work. Let's start with counting to 10.
0 - 1 - 2 - 3 - 4 - 5 - 6 - 7 - 8 - 9 - 10
And with words.
One - Two - Three - Four - Five - Six - Seven - Eight - Nine - Ten
That's it. Now repeat that a hundred or so times before you move to the next page.
“History is the struggle and record of humans in the process of humanizing the world, i.e., shaping it in their image and interests.” (Black history is therefore the collective record of people of African descent in Africanizing the world around them).
Maulana Karenga, the creator of Kwanzaa was born Ronald McKinley Everett on July 14th, 1941 in Parsonsburg, Maryland. Karenga moved to Los Angeles, California at age 18, where he attended Los Angeles Community College and became active in the Civil Rights Movement.
After meeting Malcolm X as a college student in the 1960s, Karenga became politicized and helped found the US organization, which among other things promoted a cultural revolution for African Americans. In 1966, Karenga created Kwanzaa, a holiday designed to celebrate and honor the values of ancient African cultures and inspire African Americans to greater pride in their heritage.
Kwanzaa is based on the year-end harvest festivals that have taken place throughout Africa for thousands of years. The name comes from the Swahili phrase “matunda ya kwanza,” which means “first fruits of the harvest.” Karenga chose a phrase from Swahili because the language is used by various peoples throughout Africa.
In the late 1960s, US and Karenga were investigated by the FBI’s COINTELPRO operation—established to counteract the influence of subversive groups—and placed on a watch list of dangerous, revolutionary organizations. At this time, US was engaged in a violent conflict with the Black Panther Party for supremacy in the African-American community. This led to a 1969 shootout at UCLA, in which two Panthers were killed. In 1971, he was arrested and convicted of assaulting a female US member and was sent to prison. Soon after, the US organization fell into disarray and disbanded in 1974. After his release from prison, Karenga admitted that US had made mistakes, which weakened the movement and compromised its ability to change with the times.
Currently, Karenga is a professor in the Department of Black Studies at California State University, Long Beach and is the director of the Kawaida Institute for Pan African Studies in Los Angeles.
Karenga has authored such books as Introduction to Black Studies, the most widely used introductory text in Black Studies, and Kawaida: A Communitarian African Philosophy. He has received numerous awards, including the National Leadership Award for Outstanding Scholarly Achievements in Black Studies from the National Council for Black Studies, and the Pioneer Award from the Rainbow/PUSH Coalition and Citizenship Education Fund. |
The computer I use for this class uses Windows 7 as the operating system. Regarding technology application, Roblyer and Doering (2012) stated, “the general goal is always the same: to harness the potential of technology in ways that offer an individual with a disability increased opportunities for learning, productivity, and independence – opportunities that otherwise would not be available” (p. 400). After exploring the different accessibility features available through the Ease of Access Center in Windows 7, I realized that Microsoft helps to support this goal.
I grouped the features based on the type of disability they can help to accommodate. I have focused on cognitive, physical, and sensory disabilities, but the features could also be applied to help at-risk and gifted/talented students, depending on their situation. Please keep in mind that many of the accessibility features can accommodate multiple disabilities, but I have listed each only once.
Students with Cognitive Disabilities
According to Roblyer and Doering (2012), the issue for students with mild cognitive disabilities “is not physical access to technology, but reading, writing, memory, and retention of information” (p. 406).
- Narrator
This feature can be set up to help someone who has difficulty reading the on-screen text due to a cognitive disability, or someone with a visual impairment. The Narrator also voices each key typed on the keyboard. That would be helpful for someone who has difficulty with keyboard typing, although they would need to type slowly in order to allow time for the software to work appropriately.
- Blinking Cursor Thickness
This setting allows the user to increase the thickness of the blinking cursor used when typing in programs that are on the computer, such as Microsoft Word. I did not notice it being applied online when composing an email or chatting. This feature could be helpful for someone with a visual impairment or for a student that has difficulty focusing or tracking while typing or reading.
- Animations and Background Images
The Ease of Access Center allows for the user to choose to turn off animations and remove background images when available. This could help those with focusing problems to minimize distractions as much as possible.
Students with Physical Disabilities
According to Roblyer and Doering (2012), “Physical disabilities typically affect a person’s mobility and agility” (p. 409).
- On-Screen Keyboard
This feature would be ideal for someone with the inability to type with two hands, since they are able to use a mouse to make their letter selections on a keyboard that appears on the screen. Unlike having to type an entire word with possibly one hand or one finger, the on-screen keyboard uses word prediction, which can help someone “type” faster with the use of a mouse.
- Speech/Voice Recognition
After setting up the software, users are able to control their computer with their voice. The settings under Speech or Voice Recognition can help users with both visual and physical impairments that keep them from being able to see what is on the screen or from being able to use a keyboard and/or mouse effectively.
- Mouse Keys
Allows for the keyboard number pad to be used to move the mouse pointer around the screen rather than the mouse. Settings can be changed to adjust the speed of the movement using the number pad. This can be helpful for those with a physical impairment that keeps them from using the mouse effectively.
- Sticky Keys
Allows for keys to be pressed consecutively, rather than simultaneously, when using keyboard shortcuts (such as Ctrl + x for cut). This can help someone with a physical impairment that is unable to press multiple keys at once. It is also helpful for new keyboard/typing students to learn shortcuts.
- Filter Keys
Ignores or slows down brief or repeated keystrokes and adjusts keyboard repeat rates. These settings are helpful for someone with a physical impairment that may have difficulty controlling their movements which could cause inaccuracy and frustration while typing.
Students with Sensory Disabilities
According to Roblyer and Doering (2012), “Sensory disabilities involve impairments associated with the loss of hearing or vision” (p. 409).
- Magnifier
The Magnifier zooms in on the screen and temporarily makes everything in that area larger. In this section of the Ease of Access Center, the size of text and icons can also be adjusted to 125% and 150%. The Magnifier feature would be a great accommodation for someone with vision impairments who just needs a little added assistance in viewing what is on the screen, especially website text that is often very small.
- High Contrast
The High Contrast feature would also be helpful for someone with visual impairments. It allows for settings to be modified making the computer easier to see by changing most backgrounds to black with white text. I can see how this would also be helpful for any user in a dark environment, making reading and typing easier on the eyes.
- Audio Description
This is another feature that would be helpful for someone with a visual impairment while watching a video. When available, it provides descriptions of what is happening in videos. I am not aware of how often descriptions for videos are available, but this would be a great feature to consider using if a student is unable to watch videos.
- Toggle Keys
Provides an audio cue that caps lock, number lock, or scroll lock have been activated or deactivated using the keyboard keys. This can be helpful for anyone, especially someone with a visual impairment, to notify them that a keyboard setting has been changed.
- Mouse Pointers
Users can change the color and size of the on-screen mouse pointer. This can be helpful for those with visual impairments to make keeping track of the mouse pointer easier.
- Visual Cues
Enable users to get notifications in writing (pop-up windows) rather than sounds. These settings would be helpful for someone with a hearing impairment.
Microsoft Accessibility in Windows 7 overview webpage:
A Guide for Educators – Valdosta State University:
YouTube video on how to find the Ease of Access Center in Windows 7:
Roblyer, M. D, & Doering, A. H. (2012). Integrating educational technology into teaching (6th Edition). Upper Saddle River, NJ: Pearson Education, Inc. |
A practical reference for university and senior secondary school students. Theories are explained in straightforward language, including factors that affect the learning of languages, such as motivation, memory and a range of strategies initiated by students themselves. Examples are taken from the beginners to advanced levels, including print and other media, individual and class study. Students report their use of computers and how they have approached the learning of culture. A final chapter has advice on taking examinations.
Coming from the Japanese for 'harbour wave', tsunami are a series of waves, with wavelengths of up to hundreds of kilometres between crests, caused by undersea seismic disturbances.
Causes of tsunami
Ground displacement (movement) due to undersea earthquakes is the most common cause of tsunami. However, they may also be caused by submarine landslides, volcanic eruptions and caldera collapses, and even large meteorite impacts.
Tsunami may travel across entire ocean basins. For example, an earthquake in South America could trigger a tsunami that could severely impact New Zealand.
In the open ocean, a tsunami wave may only be a matter of centimetres high. Its height increases as it approaches shore and slows down, due to the water depth becoming shallower. Tsunami waves can be several to tens of metres high when they hit the shore.
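Two textbook relations capture this behaviour: in shallow water a tsunami travels at roughly the square root of (gravity × depth), and Green's law says its amplitude grows as the depth shrinks, roughly in proportion to depth^(-1/4). The sketch below is a simplified illustration with made-up values; it is not one of the NIWA models described later, and real inundation heights are usually larger still because these linear relations break down close to shore.

```python
import math

g = 9.81  # m/s^2, gravitational acceleration

def wave_speed(depth_m):
    """Shallow-water tsunami speed, c = sqrt(g * d)."""
    return math.sqrt(g * depth_m)

def greens_law_amplitude(a_deep, d_deep, d_shallow):
    """Green's law: amplitude scales with water depth to the -1/4 power."""
    return a_deep * (d_deep / d_shallow) ** 0.25

# Made-up example: a 0.3 m wave over 4000 m of ocean approaching 10 m of water.
print(f"open-ocean speed ≈ {wave_speed(4000) * 3.6:.0f} km/h")            # ≈ 713 km/h
print(f"nearshore speed  ≈ {wave_speed(10) * 3.6:.0f} km/h")              # ≈ 36 km/h
print(f"nearshore height ≈ {greens_law_amplitude(0.3, 4000, 10):.1f} m")  # ≈ 1.3 m
```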
Potential consequences of tsunami
Tsunami can cause serious flooding of coastal areas. Large tsunami may inundate many kilometres inland.
The waves which initially hit the coast, followed by flooding spreading inland, can cause serious damage to infrastructure (including buildings and roads), land, crops and livestock. They can also result in injury or loss of human life. Strong currents may also cause damage to ships and coastal infrastructure.
Incidences of tsunami in recent years
- 2011 Tohoku Tsunami - The Tohoku Tsunami took place on March 11, 2011, and was caused by a magnitude 9 megathrust earthquake approximately 32 km deep and 72 km off the east coast of the Oshika Peninsula of Tohoku, Japan (the fourth biggest earthquake since records began). The tsunami waves generated reached 10 m in height, and in some cases washed up to 10 km inland. This tsunami had an impact over the whole Pacific, with parts of the west coast of North and South America being hit by waves up to 3 m high.
- 2010 Chilean Tsunami - The Chilean Tsunami was caused by a magnitude 8.8 earthquake off the coast of the Maule/Biobío Region in February 2010 (the sixth biggest earthquake since records began). The tsunami caused deaths in Chile and a tsunami watch across the entire Pacific Ocean.
- 2009 Samoa Tsunami - The Samoan Tsunami was triggered by a magnitude 8.1 outer rise earthquake on the Tonga-Kermadec Subduction zone, in September 2009. Tsunami waves inundated parts of Samoa, American Samoa and Tonga causing the loss of 189 lives in these three countries.
- 2004 Sumatra/Boxing Day Tsunami - The Indian Ocean (Boxing Day) Tsunami was triggered by a massive (magnitude 9.1) megathrust earthquake off the coast of Sumatra, Indonesia (the third biggest earthquake since records began). It affected not only Indonesia but also Thailand, Sri Lanka, India and the Maldives, and caused over 230,000 deaths.
Tsunami research at NIWA
New Zealand is at risk from tsunami due to its long coastline and its position on the Pacific Rim of Fire. Steep undersea canyons such as the Kaikoura Canyon and the Cook Strait Canyon are also areas of potential submarine landslides which can cause dramatic local tsunami.
NIWA's research into this natural hazard covers underwater earthquake faults and landslides, tsunami propagation and inundation modelling, post-disaster surveys and risk/loss assessment.
Current NIWA research projects
- Underwater earthquakes/landslides - NIWA scientists are engaged in better understanding the risk that undersea earthquakes and landslides could pose to New Zealand's coastal communities, by mapping the offshore faults and possible landslide sites.
- Modelling tsunami propagation and inundation using the Gerris and RiCOM models:
- Gerris - Gerris is based on a square grid that may be refined by subdividing any given square into 4 subsquares.
- RiCoM - RiCOM is based on a triangular mesh, by varying the triangles size slightly with each one it can grade smoothly between different sizes.
- RiskScape (risk and loss assessment) - Data collected from post-disaster surveys is fed into risk and loss assessment models: the more accurately costs can be calculated, the better the decisions around risk-reduction initiatives can be. Such information can also be used by areas at risk to help them better prepare for tsunami.
NIWA and GNS Science have developed RiskScape, a software tool which enables users to analyse the risks and impacts of a range of hazards, including tsunami.
In addition, recorded impacts can be used to help refine computer models of how tsunami waves propagate and interact with coastal areas.
- Post-disaster surveys - Teams of scientists, including NIWA researchers, undertake reconnaissance missions to areas affected by tsunami (and other hazards, e.g. tropical cyclones) to assess their impacts. These surveys are used to collect the data which inform risk and loss assessment models, and refine models of how tsunami waves behave.
Dr Emily Lane
Tel 03 343 7856 |
Classroom Habits of Successful Learners
Michiko is an outstanding student. To begin with, she never misses a class unless she’s really sick, she always gets to class on time, and she always sits in front.
Michiko could answer that question. In her first classes she took a seat near the back of the class. She is shy and thought she’d feel like people were looking at her if she sat in front.
She soon regretted her decision. The students who sat in the back of the room didn’t seem interested in learning. Michiko noticed some students emailing or texting their friends. The boy next to her was surfing the web and kept wanting to show her the funny pictures he found. Other students were talking, chewing gum, and eating snacks. Several students got up and left before class was over. It was hard to hear what the professor was saying and Michiko had trouble concentrating with all the distractions.
The next day, Michiko walked to the front of the class and took one of the empty seats there.
Students who sit in front hear more and see more. They can read what's on the blackboard or screen. They can see the professor's expressions.
Another reason to sit in front is that the professor can see you. They will recognize you when they see you around campus. They are likely to think of you as a student who is interested in learning.
In discussion classes, it is usually the students who sit near the front who participate most actively in the discussion. They are likely to ask and answer the most questions. Sometimes class participation counts toward your grades. You also learn more when you are actively involved in the discussion.
Having good posture improves learning
Students who sit up straight or lean slightly forward usually learn more and make better grades than students who lean back in their seats. Why should your posture make a difference?
Do an experiment. Listen to a lecture or watch the news on TV while leaning back in your chair, and again while sitting up or leaning slightly forward. You will notice a big difference. Leaning back is what you do when you are tired or bored. It is actually harder to pay attention. You might begin to yawn and feel tired.
When you sit up straight, you actually feel more alert. You have more energy. You look and feel ready to learn. The professor might notice your posture and assume that students with better posture are more interested in learning.
Why should you read the assignment before the lecture?
Students who prepare for a lecture usually learn more and make better grades. What difference does it make if you do the reading assignment before or after the lecture? There are three reasons to read the assignment first.
1. Students who read the chapter in the book first have learned a good deal about the topic. They may have a list of questions and hope to find answers in the lecture.
Students who have not read the chapter might find the lecture confusing. The professor might be using terms or discussing concepts they don’t understand – but would have understood if they had read the chapter.
2. Students who have not read the chapter feel a need to write down everything the professor says because it is all new to them. It is hard to really pay attention and understand the lecture if you are working so hard to write every word.
Students who have already read the chapter will recognize much of the material and concentrate on taking notes on material that is new or that they didn’t understand when they read it. They enjoy the lecture more because the material is now familiar.
3. Students who have read the book have a great advantage. They will be aware of what material in the lecture was NOT in the book. This additional material is something the professor considers important. Questions on this material are likely to be on the tests.
Review lecture notes from the previous class
In some classes this might not be especially helpful, but in many classes, professors will begin where the last lecture ended. They will mention material briefly from the earlier lecture and expect students to remember and follow their thinking.
Try reviewing notes before class several times and continue doing this when you find it helpful. Some students continue because that few minutes before the lecture is the ONLY time they look back at the previous lecture notes. Certainly, if you don’t go back over your notes at this point, you need to do it at some other time.
Students who never review lecture notes tend to forget 90-95% of the material in less than 24 hours. Why take all those notes if you aren’t going to read them?
Get to know your professors
Students who take time to talk to their professors after class or in their offices usually learn more and make better grades. No, it isn’t because the teacher likes them better. Students who know the teacher better often understand what the teacher wants students to learn.
When professors get to know some of their students better, they are often more willing to spend time helping them when the students have questions.
It is especially important to know the professors in your major subject. When you need letters of recommendation for scholarships, for internships or summer jobs, or for graduate school, professors who know you personally are more likely to write that letter. If you let them know you are applying for a scholarship, internship or job, they might tell you about other opportunities you would be interested in.
And why wouldn’t you want to take time to discuss your questions and ideas about the subject with someone who has spent so many years learning in this area. You might learn more in conversations with your professors than you do in class. |
...and created the greatest middle-class in world history.
John Maynard Keynes began his theoretical work (Treatise on Money, published in 1930) to examine the relationship between unemployment, money and prices back in the 1920s. A central idea of his work was that if the amount of money being saved exceeds the amount being invested — which can happen if interest rates are too high — then unemployment will rise.
He argued that aggregate demand determined the overall level of economic activity, and that inadequate aggregate demand could lead to prolonged periods of high unemployment. According to Keynesian economics, government intervention was necessary to moderate "boom and bust" cycles of economic activity. He advocated the use of fiscal and monetary measures to mitigate the adverse effects of economic recessions and depressions.
Keynes was deeply critical of the British government's austerity measures during the Great Depression. He believed that budget deficits were a natural product of recessions and, at such times, a good thing.
At the height of the Great Depression in 1933 Keynes published The Means to Prosperity, which contained specific policy recommendations for tackling unemployment in a global recession, chiefly "counter-cyclical" government spending*.
* Keynesian economics advocates the use of automatic and discretionary counter-cyclical policies to lessen the impact of the business cycle. One example of an automatically counter-cyclical fiscal policy is progressive taxation. By taxing a larger proportion of income when the economy expands, a progressive tax tends to decrease demand when the economy is booming, thus reining in the boom. When the government adopts a discretionary counter-cyclical fiscal policy in response to the threat of a recession, it might, for example, increase infrastructure spending. (As an aside: Keynes also makes one of the first mentions of the "multiplier effect".)
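To make the automatic-stabiliser idea above concrete, here is a minimal sketch showing how a progressive schedule takes a growing share of income as incomes rise; the bracket thresholds and rates are invented for illustration and do not correspond to any real tax code.

```python
# Toy progressive tax schedule (invented brackets and rates, for illustration only).
def progressive_tax(income, brackets=((0, 0.10), (50_000, 0.25), (150_000, 0.40))):
    """Tax owed under a simple marginal-rate schedule of (lower threshold, rate) pairs."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

for income in (40_000, 80_000, 200_000):
    tax = progressive_tax(income)
    print(f"income {income:>7,}: tax {tax:>9,.0f}  effective rate {tax / income:5.1%}")

# As incomes rise during a boom, the effective (average) rate rises, so a growing
# share of each extra dollar is taxed away, automatically damping demand; in a
# downturn the same schedule takes a smaller share, cushioning the fall.
```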
Classical economists at the time believed that "supply creates its own demand", and that in a "free market" workers would always be willing to lower their wages to a level where employers could profitably offer them jobs (when in those days, what was good for the boss, was also good for the worker).
An innovation from Keynes was the concept of "price stickiness" — the recognition that, in reality, workers often refuse to lower their wage demands, even in cases where a classical economist might argue it is rational for them to do so. Due in part to price stickiness, it was established that the interaction of "aggregate demand" and "aggregate supply" may lead to stable unemployment equilibria — and in those cases, it is the government (also acting as a referee), and not the free market, that economies must depend on for their salvation.
In Keynes's greatest work (The General Theory of Employment, Interest and Money) he argues that demand, not supply, is the key variable governing the overall level of economic activity. Aggregate demand, which equals total un-hoarded income in a society, is defined by the sum of consumption and investment. In a state of unemployment and unused production capacity, one can only enhance employment and total income by first increasing expenditures for either consumption or investment. Without government intervention to increase expenditure, an economy can remain trapped in a low employment equilibrium.
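A small worked example may help fix the idea; the consumption function and all numbers below are invented for illustration, not figures from The General Theory.

```python
# Toy Keynesian-cross example (invented numbers): consumption C = a + c*Y and
# exogenous investment I give an equilibrium income Y = (a + I) / (1 - c), so
# extra expenditure raises income by a multiple 1 / (1 - c) -- the multiplier.

a, c = 50.0, 0.8   # autonomous consumption and marginal propensity to consume (assumed)
I = 100.0          # exogenous investment

Y = (a + I) / (1 - c)   # income at which output equals aggregate demand
print(f"Equilibrium income:        {Y:.0f}")        # 750

# An extra 20 of expenditure (public works, say) raises income by 20 / (1 - c) = 100.
Y_boost = (a + I + 20) / (1 - c)
print(f"After +20 of expenditure:  {Y_boost:.0f}")  # 850
print(f"Implied multiplier:        {1 / (1 - c):.1f}")  # 5.0
```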
Keynes advocated "activist" economic policy (intervention) by government to stimulate demand in times of high unemployment — for example, by spending on public works — or preparing for war. "Let us be up and doing, using our idle resources to increase our wealth," he wrote. Keynes' General Theory work is often viewed as the foundation of modern macroeconomics.
Following the outbreak of World War II, Keynes's ideas concerning economic policy were adopted by leading Western economies. Keynes argued in his work How to Pay for the War (published in 1940) that the war effort should be largely financed by higher taxation* — and especially by compulsory saving (essentially workers lending money to the government), rather than deficit spending, in order to avoid inflation. He argued that compulsory saving would act to dampen domestic demand, assist in channeling additional output towards the war efforts, and would be fairer than punitive taxation; and would also have the advantage of helping to avoid a post-war slump by boosting demand once workers were allowed to withdraw their savings (and later events proved him to be 99.99% correct).
* The Iraq and Afghanistan wars were the first time in U.S. history that taxes weren't raised to pay for American wars. George W. Bush did THE EXACT OPPOSITE, and lowered taxes — contributing to current U.S. government debt. The crisis of World War II led Congress to pass four excess profits tax statutes between 1940 and 1943. After the war, in 1945, Congress repealed the tax, effective January 1, 1946. The graph below shows the debt incurred during World War II. (There was not enough government spending to stimulate the economy after the 2007-09 recession as multinationals were investing and hiring more overseas.)
By the first years of American involvement in World War II, wartime manufacturing facilities had been established throughout the nation, creating a tremendous demand for labor. Within months of the U.S. declaration of war, the national unemployment rate plummeted an astounding 10% from its 1940 level. War mobilization—that is, the rapid production of military equipment, vehicles, weapons, and ammunition, along with the fortification of American borders and military bases abroad — coupled with the military draft, created a vast labor shortage. Employers were desperate to fill positions as quickly as possible to meet production demands and needed to hire workers en masse. Positions had to be opened then, not simply to the traditional labor force, but also to women and non-whites (those who had long been excluded from many skilled and high-paying industries).
Wartime mobilization contributed not merely to a temporary respite from the Great Depression, but planted the seeds for tremendous post-war economic growth. In order to maintain a military large enough and strong enough to fight on two major war fronts, the federal government required most manufacturers to halt production of consumer items. Car manufacturers, for instance, were ordered to cease normal operations and, instead, to assemble armored vehicles to be used on the battlefield.
The demand for labor was so great all across the nation that proprietors had to offer high wages and other fringe benefits to lure potential laborers — young, old, married, unmarried, white, black, immigrant, and women — away from competitors. Businesses practically begged for workers, offering extraordinary incentives such as medical care, exemption from the military draft, daycare facilities, and even paid maternity leave, a perk previously unimagined. To be sure, these were surreal shifts for so many Americans who were affected by the Great Depression and were intimately familiar with scarcity and hopelessness.
While Americans had fewer products to buy during the war, they were earning much more than ever before. As a result, families were compelled to save money throughout the war years. Once the war ended and manufacturers discontinued production for war mobilization, consumer products once again filled store shelves. A population buoyed by full employment, rising wages, growing prosperity, and renewed national confidence began to spend—and to spend enthusiastically.
New expectations, new wages, and new options that were created by the World War II home front mobilization had sparked a post-war economic boom — and the most prosperous period in the nation's history.
Economic aid flowed to war-ravaged European countries under the Marshall Plan, which also helped maintain markets for numerous U.S. goods. And the U.S. government itself recognized its central role in economic affairs. The Employment Act of 1946 stated as government policy "to promote maximum employment, production, and purchasing power."
During the 1950s, the number of workers providing services grew until it equaled and then surpassed the number who produced goods. And by 1956, a majority of U.S. workers held white-collar rather than blue-collar jobs. At the same time, labor unions won long-term employment contracts and other benefits for their members.
Growing demand for single-family homes and the widespread ownership of cars led many Americans to migrate from central cities to suburbs. Coupled with technological innovations such as the invention of air conditioning, the migration spurred the development of "Sun Belt" cities such as Houston, Atlanta, Miami, and Phoenix in the southern and southwestern states. As new, federally sponsored highways under Dwight D. Eisenhower created better access to the suburbs, business patterns began to change as well. Shopping centers multiplied, and many industries soon followed.
During the 1950s, Keynesian policies were adopted by almost the entire developed world and similar measures for a mixed economy were used by many developing nations as well. By then, Keynes's views on the economy had become mainstream in the world's universities. Throughout the 1950s and 1960s, the developed and emerging free capitalist economies enjoyed exceptionally high growth and low unemployment. Professor Gordon Fletcher has written that the 1950s and 1960s, when Keynes's influence was at its peak, appear in retrospect as a Golden Age of Capitalism.
And then came Ronald Reagan and his admiration for Milton Friedman and "the invisible hand" of Adam Smith, whose principles are often distorted today in debates about "free markets": the idea of a self-regulating market in which individuals can make and maximize profit without any need for government intervention, benevolent or not.
After the Powell Memo, the middle-class peaked by 1979 — then came Reaganomics and the Republican strategy of Starve the Beast (starving the government of revenue and spending) and the long decline in both manufacturing and the middle-class, as "trickle up economics" created an income gap between rich and poor not seen since before the Great Depression.
The total number of manufacturing jobs in the U.S. had hit its historical high peak in 1979 (when union membership had also peaked), but these numbers have since been in a continuous decline. The last time the U.S. had this few manufacturing jobs was just after WWII, when manufacturing was retooling from military production to consumer production. This one graph below from the St. Louis Fed tells the whole story since 1979.
This post is continued here, and focuses more on the period between 2000 and 2014, because the U.S. employment-population ratio and the labor force participation rate both peaked at their all-time highs in 2000 and have been in steady decline ever since, because of the escalation of offshoring of jobs.
Natural History of the Kangaroo
(Popular Science Monthly, Volume 8, February 1876)
THE kangaroos have now become familiar objects to all who visit our Zoölogical Gardens, or who are familiar with any considerable zoölogical museum.
Their general external form, when seen in the attitude they habitually assume when grazing (with their front limbs touching the ground), may have recalled to mind, more or less, the appearance presented by-some hornless deer. Their chief mode of locomotion (that jumping action necessitated by the great length of the hind-limbs) must be familiar to all who have observed them living, and also, very probably, the singular mode in which the young are carried in a pouch of skin in the front of the belly of the mother.
But "What is a kangaroo?" The question will raise in the minds of those who are not naturalists the image of some familiar circumstances
like those just referred to. But such image will afford no real answer to the question. To arrive at such an answer it is necessary to estimate correctly in what relation the kangaroo stands to other animals—its place in the scale of animated beings—as also its relations to space and time; that is, its distribution over the earth's surface to-day, in connection with that of other animals more or less like it, and its relation to the past life of this planet, in connection with similar relations of animals also more or less like it. In other words, to understand what a kangaroo is, we must understand its zoölogical, geographical, and geological conditions. And my task in this paper is to make these conditions as clear as I can, and so to enable the reader to really answer the question, "What is a kangaroo?"
But before proceeding to these matters, let us look at our kangaroo a little closer, and learn something of its structure, habits, and history, so as to have some clear conceptions of the kangaroo considered by itself, before considering its relations with the universe (animate and inanimate) about it.
The kangaroo (Fig. 1) is a quadruped, with very long hind-limbs and a long and rather thick tail. Its head possesses rather a long muzzle, somewhat like that of a deer, with a pair of rather long ears. Each fore-paw has five toes, furnished with claws. Each hind-limb has but two large and conspicuous toes, the inner one of which is much the larger, and bears a very long and strong claw (Fig. 2). On the inner
side of this is what appears to be a very minute toe, furnished with two small claws. An examination of the bones of the foot shows us, however, that it really consists of two very slender toes united together in a common fold of skin. These toes answer to the second and third toes of our own foot, and there is no representative of our great-toe—not even that part of it which is inclosed in the substance of our foot, called the inner metatarsal bone. Two other points are specially noteworthy in the skeleton. The first of these is that the pelvis (or bony girdle to which the hind-limbs are articulated, and by which they are connected with the back-bone) has two elongated bones extending upward from its superior margin in front (Fig. 4, a). These are called marsupial bones, and lie within the flesh of the front of the animal's belly. The other point is that the lower, hinder portion of each side of the lower jaw (which portion is technically called the "angle") is bent inward, or "inflected," and not continued directly backward in the same plane as the rest of the lower jaw.
A certain muscle, called the cremaster muscle, is attached to each marsupial bone, and thence stretches itself over the inner or deep surface of the adjacent mammary gland or "breast," which is situated low down, and not in the breast at all.
The kangaroo's teeth consist of three on each side above in the front of the mouth, and one on each side below. These eight teeth are what are called incisors. At the back of the mouth there are five grinding-teeth on each side above and five below, and between the upper grinders and incisors another pointed tooth, called a canine, may or may not be interposed. Such a set of teeth is indicated by the following formula, where I stands for incisors, C for canines, and M for grinding-teeth or "molars." The number above each line indicates the teeth of each denomination which exist on one side of the upper jaw, and the lower number those of the lower jaw: I 3⁄1, C 1 or 0⁄0, M 5⁄5.
The total number of incisor teeth of both sides of each jaw may therefore be expressed thus: I 6⁄2.
Such is the general structure of an adult kangaroo. At birth it is strangely different from what it ultimately becomes.
It is customary to speak of the human infant as exceptionally helpless at birth and after it, but it is at once capable of vigorous sucking, and very early learns to seek the nipple. The great kangaroo, standing some six feet high, is at birth scarcely more than an inch long, with delicate naked skin, and looking like part of an earthworm. But, in such feeble and imperfectly developed condition, the young-kangaroo cannot actively suck. The mother therefore places it upon one of her long and slender nipples (the end of which is somewhat swollen), this nipple entering its mouth, and the little creature remaining attached to it. The mother then, by means of the cremaster muscle (before spoken of), squeezes her own milk gland, and so injects milk into the young, which would thus be infallibly choked but for a noticeable peculiarity of its structure, admirably adapted to the circumstances of the case.
In almost all beasts, and in man also, the air-passage or windpipe (which admits air to and from the lungs) opens into the floor of the mouth, behind the tongue and in front of the opening of the gullet. Each particle of food, then, as it passes to the gullet, passes over the entrance to the windpipe, but is prevented from falling into it (and so causing death by choking) by the action of a small cartilaginous shield (the epiglottis). This shield, which ordinarily stands up in front of the opening into the windpipe, bends back and comes over that opening just when food is passing, and so, at the right moment, almost always prevents food from "going the wrong way." But, in the young kangaroo, the milk being introduced, not by any voluntary act of the young kangaroo itself, but by the injecting action of its mother, it is evident that, did such a state of things obtain in it as has been just described, the result would be speedily fatal. Did no special provision exist, the young one must infallibly be choked by the intrusion of milk into the windpipe. But there is a special provision for the young kangaroo; the upper part of the windpipe (or larynx), instead of lying as in us, and as in most beasts, widely separated from the hinder opening of the nostrils, is much raised (Fig. 3, a). It is in fact so elongated in the young kangaroo that it rises right up into the hinder end of the nasal passage, which embraces it. In this way there is free entrance for air from the nostrils into the windpipe by a
passage shut off from the cavity of the mouth. All the time the milk can freely pass to the back of the mouth and gullet along each side of this elongated larynx, and thus breathing and milk-injection can go on simultaneously, without risk or inconvenience.
The kangaroo browses on the herbage and bushes of more or less open country, and, when feeding, commonly applies its front-limbs to the ground. It readily, however, raises itself on its hind-limbs and strong tail (as on a tripod) when any sound, sight, or smell, alarms its natural timidity (Fig. 1).
Mr. Gould tells us that the natives (where it is found) sometimes hunt these animals by forming a great circle around them, gradually converging upon them, and so frightening them by yells that they become an easy prey to their clubs.
As to its civilized hunters, the same author tells us that kangaroos are hunted by dogs which run entirely by sight, and partake of the nature of the greyhound and deerhound, and, from their great strength and fleetness, are so well adapted for the duties to which they are trained, that the escape of the kangaroo, when it occurs, is owing to peculiar and favorable circumstances; as, for example, the oppressive heat of the day, or the nature of the ground; the former incapacitating the dogs for a severe chase, and the hard ridges, which the kangaroo invariably endeavors to gain, giving him great advantage over his pursuers. On such ground the females in particular will frequently outstrip the fleetest greyhound; while, on the contrary, heavy old males, on soft ground, are easily taken. Many of these fine kangaroo-dogs are kept at the stock-stations of the interior, for the sole purpose of running the kangaroo and the emu, the latter being killed solely for the supply of oil which it yields, and the former for mere sport or for food for the dogs. Although I have killed the largest males with a single dog, it is not generally advisable to attempt this, as they possess great power, and frequently rip up the dogs, and sometimes even cut
them to the heart with a single stroke of the hind-leg. Three or four dogs are more generally laid on; one of superior fleetness to "pull" the kangaroo, while the others rush in upon it and kill it. It sometimes adopts a singular mode of defending itself, by clasping its short, powerful fore-limbs around its antagonist, then hopping away with it to the nearest water-hole, and there keeping it beneath the water until drowned.
The kangaroo is said to be able to clear even more than fifteen feet at one bound.
Rapidity of locomotion is especially necessary for a large animal inhabiting a country subject to such severe and widely-extending droughts as in Australia. The herbivorous animals which people the plains of Southern Africa—the antelopes—are also capable of very rapid locomotion. In the antelopes, however, as in all hoofed beasts, all the four limbs (front as well as hind) are exclusively used for locomotion. But in kangaroos we have animals requiring to use their front pair of limbs for the purposes of more or less delicate manipulation with respect to the economy of the "pouch." Accordingly, for such creatures to be able to inhabit such a country, the hind pair of limbs must by themselves be fitted alone to answer the purpose of both the front and hind limbs of deer and antelopes. It would seem, then, that the peculiar structure of the kangaroo's limbs is of the greatest utility to it; the front pair serving as prehensile manipulating organs, while the hind pair are, by themselves alone, able to carry the animal great distances with rapidity, and so to traverse wide arid plains in pursuit of rare and distant water. The harmony between structure, habit, and climate, was long ago pointed out by Prof. Owen.
The kangaroo breeds freely in this country, producing one at a birth. We have young ones every year in our Zoölogical Gardens. A large number of them are reared to maturity, and altogether our kangaroos thrive and do well. One born in our gardens was lately in the habit of still entering the pouch of its mother, although itself bearing a very young one within its own pouch. These animals have been already more or less acclimatized in England. I have myself seen them in grounds at Glastonbury Abbey. Some were so kept in the open by Lord Hill, and some by the Duke of Marlborough. A very fine herd is now at liberty in a park near Tours, in France.
It is a little more than one hundred and five years since the kangaroo was first distinctly seen by English observers. At the recommendation and request of the Royal Society, Captain (then Lieutenant) Cook set sail in May, 1768, in the ship Endeavor, on a voyage of exploration, and for the observation of the transit of Venus of the year 1769, which transit the travelers observed, from the Society Islands, on June 3d of that year. In the spring of the following year the ship started from New Zealand to the eastern coast of New Holland, visiting, among other places, a spot which, on account of the number of plants found there by Mr. (afterward Sir Joseph) Banks, received the name of Botany Bay. Afterward, when detained in Endeavor River (about 15° south latitude) by the need of repairing a hole made in the vessel by a rock (part of which, fortunately, itself stuck in the hole it made). Captain Cook tells us that on Friday, June 22, 1770, "some of the people were sent on the other side of the water, to shoot pigeons for the sick, who at their return reported that they had seen an animal, as large as a greyhound, of a slender make, a mouse-color, and extremely swift." On the next day, he tells us: "This day almost everybody had seen the animal which the pigeon-shooters had brought an account of the day before; and one of the seamen, who had been rambling in the woods, told us on his return that he verily believed he had seen the devil. We naturally inquired in what form he had appeared, and his answer was, says John, 'As large as a one-gallon keg, and very like it; he had horns and wings, yet he crept so slowly through the grass that, if I had not been afeared, I might have touched him.' This formidable apparition we afterward, however, discovered to have been a bat (a Flying Fox).... Early the next day," Captain Cook continues, "as I was walking in the morning, at a little distance from the ship, I saw myself one of the animals which had been described; it was of a light mouse-color, and in size and shape very much resembling a greyhound; it had a long tail also, which it carried like a greyhound; and I should have taken it for a wild-dog if, instead of running, it had not leaped like a hare or deer." Mr. Banks also had an imperfect view of this animal, and was of opinion that its species was hitherto unknown. The work exhibits an excellent figure of the animal. Again, on Sunday, July 8th, being still in Endeavor River, Captain Cook tells us that some of the crew "set out, with the first dawn, in search of game, and in a walk of many miles they saw four animals of the same kind, two of which Mr. Banks's greyhound fairly chased; but they threw him out at a great distance, by leaping over the long, thick grass, which prevented his running. This animal was observed not to run upon four legs, but to bound or leap forward upon two, like the jerboa." Finally, on Saturday, July 14th, "Mr. Gore, who went out with his gun, had the good fortune to kill one of these animals which had been so much the subject of our speculation;" adding, "This animal is called by the natives kanguroo. The next day (Sunday, July 15th) our kanguroo was dressed for dinner, and proved most excellent meat."
Such is the earliest notice of this creature's observation by Englishmen; but Cornelius de Bruins, a Dutch traveler, saw, as early as 1711, specimens of a species (now named after him, Macropus Brunii), which he called Filander, and which were kept in captivity in a garden at Batavia. A very fair representation of the animal is given—one showing the aperture of the pouch. This species was, moreover, described both by Pallas and by Schreber.
It is not improbable, however, that kangaroos were seen by the earlier explorers of the western coast of Australia; and it may be that it is one of these animals which was referred to by Dampier, when he tells us that on August 12, 1699, "two or three of my seamen saw creatures not unlike wolves, but so lean that they looked like mere skeletons."
Having now learned something of the structure, habits, and history of the kangaroo, we may proceed to consider its zoölogical, geographical, and geological relations, in order to arrive at the best answer we may to our initial question, "What is a kangaroo?"
First, as to its zoölogical relations: and here it is necessary to recall to mind certain leading facts of zoölogical classification, in order that we may be better able to see with what creatures the kangaroo is, in various degrees, allied.
The whole animal population of the globe is spoken of under the fanciful term, the "animal kingdom," in contrast with the world of plants, or "vegetable kingdom."
The animal kingdom is divided into certain great groups, each of which is called a sub-kingdom; and one, the highest of these sub-kingdoms (that to which we ourselves belong), bears the name vertebrata, and it includes all beasts, birds, reptiles, and fishes; and the name refers to the series of bones called vertebræ, of which the backbone or spinal column (and all vertebrata have a spinal column) is generally made up.
Each sub-kingdom is made up of subordinate groups, termed classes; and thus the vertebrate sub-kingdom is made up of the class of beasts or Mammalia (so called because they suckle their young), the class of birds, and other classes.
Each class is made up of subordinate groups, termed orders; each order is further subdivided into families; each family is made up of genera; while every genus comprises one, few, or many species.
In considering the zoölogical relations of the kangaroo, we have then to consider the relations borne by its genera to the other genera of its family, the relations borne by its family to the other families of its order, and finally the relations borne by its order to the other orders of its class (the Mammalia)—that class which includes within it all other beasts whatever, and also man.
In the first place, it may be observed, there are many species of kangaroos, arranged in some four genera:
1. The true kangaroos, forming the genus Macropus, which is very nearly allied to the three other genera.
2. Dorcopsis, with a very large first back tooth.
3. The tree kangaroos (Dendrolagus), which frequent the more horizontal branches of trees, have the fore-limbs but little shorter than the hind-limbs, and inhabit New Guinea.
4. The rat-kangaroos (Hypsiprymnus), which have the first upper grinding-tooth large, compressed, and with vertical grooves.
These four genera together constitute the kangaroo's family, the Macropodidæ, the species of which all inhabit Australia and the islands adjacent, but are found nowhere else in the world.
The species agree in having—
1. The second and third toes slender and united in a common fold of skin.
2. The hind-limbs longer than the fore-limbs.
3. No inner metatarsal bone.
4. All the toes of each fore-foot provided with claws.
5. Total number of incisors only 6⁄2 (six above, two below).
These five characters are common to the group, and do not co-exist in any other animals. They form, therefore, the distinguishing characters of the kangaroo's family. This family, the Macropodidæ, is one of the six families which together make up that much larger group, the kangaroo's order. As was just said, to understand what a kangaroo is, we must know "what are the relations borne by its family to the other families of its order;" and accordingly it is needful for our purpose to take at least a cursory view of those other families.
There is a small animal, called a bandicoot (Fig. 7), which, in external appearance, differs very plainly from the kangaroo, but resembles it in having the hind-limbs longer than the fore-limbs, and also in the form of its hind-feet, which present a kangaroo structure, but not carried out to such an extreme degree as in the kangaroo, and therefore approximating more to the normal type of foot, there being a rudimentary inner toe and a less preponderant fourth toe; the second and third toes, however, are still very small, and bound together by skin down to the nails. In the fore-foot, on the contrary, there is a deficiency, the outer toes being nailless or wanting. The cutting-teeth are more numerous, these being I 10
This little creature is an example of others, forming the family Peramelidæ—a family made up of creatures none of which much exceed the hare in size, and which, instead of feeding on vegetable substances (as do the kangaroos), eat insects, for which food they are well adapted by the sharp points and ridges which may be seen on their back teeth.
One member of this family, Chæropus (Fig. 8), is very exceptional in the structure of its hind-feet, which out-kangaroo the kangaroo in the
minuteness of all the toes but the fourth, upon which alone the creature walks, while its front-feet are each reduced to two functional digits.
No other known beast besides walks upon a single toe in each hind-foot, save the horse family (horses, asses, and zebras), and they walk upon a different one, namely, that which answers to our middle-toe, while Chæropus walks on the next outer one or fourth. No known beast besides Chæropus walks upon two toes in each foot, save hoofed creatures, such as the ruminants and their allies; but in them it is the third and fourth toes that are used, while in Chæropus it is the second and third toes.
Another animal, called a phalanger (of the genus Phalangista), is a type of a third family of the kangaroo's order, the Phalangistidæ, a family made up of creatures which live in trees and are nocturnal in their habits, feeding upon fruits and leaves. Here we find the limbs of nearly equal length. Once more we have I 6⁄2, and we still have the second and third toes united in a common fold of skin; but the innermost toe (that answering to our great-toe) is not only largely developed, but is, like that of the apes, directed outward, and capable of being opposed to the other toes, as our thumb can be opposed to our fingers.
Some of these creatures have prehensile tails. Others have the skin of the flanks enlarged so as to serve them as a parachute in their leaps, whence they are called "flying opossums," just as squirrels, similarly provided, are called "flying" squirrels.
There are two very aberrant members of this family. One, the koala (Phascolarctus, Fig. 9), called the native bear or native sloth, is devoid of any tail.
The other, Tarsipes, but little bigger than a mouse, has a long and pointed muzzle, and its teeth are reduced to minute pointed processes, few in number, 6—6/5—5, situated far apart in each jaw.
The genus Cuscus, closely allied to Phalangista, is found in New Guinea and the adjacent islands to Timor (Fig. 10).
Another animal, the wombat, Fig. 11 (Phascolomys), forms by itself a distinct family, Phascolomyidæ. It is a burrowing nocturnal animal, about the size of a badger, with rudimentary tail and peculiar feet and teeth.
We still find the second and third toes bound together, limbs of equal length, and all the five toes of the fore-foot with claws (as in the last family), but the great-toe is represented by a small tubercle, while the cutting teeth are 2/2, growing from persistent pulp through life, as in rats, squirrels, and Guinea-pigs (Fig. 12).
We may now pass to a very different family of animals belonging to the kangaroo's order. We pass, namely, to the Dasyuridæ, or family of the native cat, wolf, and devil, so named from their predatory or fierce nature. They have well-developed eye-teeth (or canines), and back teeth with sharp cutting blades, or bristling with prickly points. The second and third toes are no longer bound together; and while there are five toes with claws to each fore-foot, the great-toe is either absent altogether or small. The cutting teeth (Fig. 13) are 8/6
and the tail is long and clothed with hair throughout. Some of these animals are elegantly colored and marked, and all live on animal food. This form (belonging to the typical genus Dasyurus, which gives its name to the family) may be taken as a type; but two others merit notice.
The first of these is Myrmecobius (Fig. 14), from Western Australia, remarkable for its number of back teeth, 8—8/9—9, and for certain geographical and zoölogical relations, to be shortly referred to. With respect to this creature, Mr. Gilbert has told us: "I have seen a good deal of this beautiful little animal. It appears very much like a squirrel when running on the ground, which it does in successive leaps, with its tail a little elevated, every now and then raising its body, and resting on its hind-feet. When alarmed, it generally takes to a dead tree lying on the ground, and before entering
the hollow invariably raises itself on its hind-feet, to ascertain the reality of approaching danger. In this kind of retreat it is easily captured, and when caught is so harmless and tame as scarcely to make any resistance, and never attempts to bite. When it has no chance of escaping from its place of refuge, it utters a sort of half-smothered grunt, apparently produced by a succession of hard breathings."
The other member of the family Dasyuridæ, to which I call the reader's attention, is a very different animal from the Myrmecobius. I refer to the largest of the predatory members of the kangaroo's order; namely, to the Tasmanian wolf. It is about the size of the animal after which it is named, and it is marked across the loins with tiger-like, black bands (Fig. 16). It is only found in the island of Tasmania, and will probably very soon become altogether extinct, on account of its destructiveness to the sheep of the colonists. Its teeth have considerable resemblance to those of the dog, and it differs from all other members of the kangaroo's order, in that mere cartilages represent those marsupial bones which every other member of the order unquestionably possesses.
The last family of the kangaroo's order consists of the true opossum, which (unlike all the animals we have as yet passed in review) inhabits not the Australian region, but America only.
These creatures vary in size from that of the cat to that of the rat.
They are called Didelphidæ, and agree with the Dasyuridæ in having well-developed canine teeth and cutting back teeth (Fig. 17); in
having the second and third toes free, and five toes to the fore-foot. But they differ in that they have:
Cutting-teeth 10/8 (more than in any other animal).
A large opposable great-toe.
A tail, naked (like that of the rat) and prehensile.
One of them is aquatic in its habits and web-footed. Such are the very varied forms which compose the six families which together make up the kangaroo's order, and such are the relations borne by the kangaroo's family to the other families of the kangaroo's order.
But, to obtain a clear conception of the kangaroo, we must not rest content with a knowledge of its order considered by itself; we must endeavor to learn the relation of its order to the other orders of that highest class of animals to which the kangaroo and we ourselves both belong, namely, the class Mammalia, which, with the other classes (birds, reptiles, and fishes), makes up the back-boned or vertebrate primary division of the whole animal kingdom.
What, then, is the relation of the kangaroo's order—the Marsupialia—to the other orders of the class Mammalia?
Now, these orders are:
1. The order which contains man and apes.
2. That of the bats.
3. That of the mole, shrew, hedgehog, and their allies—all insectivorous.
4. That of the dog, cat, weasel, and bear—all carnivorous.
5. That of the gnawing animals, such as the rat, squirrel, jerboa, and guinea-pig—all with cutting-teeth 2/2, with permanent pulps. They are called Rodents.
6. The order containing the sloths.
7. That of the grazing, hoofed quadrupeds—deer, antelopes, and their allies.
Besides three orders of aquatic beasts (seals, whales, and the manatee order), with which we need not be now further concerned.
Now, in the first place, very noticeable is the much greater diversity of structure found in the kangaroo's order than in any other order of mammals. While each of the latter is of one predominant type of structure and habit, we have found in the marsupials the greatest diversity in both.
Some marsupials are, we have seen, arboreal, some are burrowing, some flit through the air, while others range over and graze upon grassy plains. Some feed on vegetable food only, others are as exclusively insectivorous or carnivorous, and their teeth vary much in number and structure. Certain of my readers may wonder that such diverse forms should be thus grouped together, apart from the other mammals. At first sight it might seem more natural to place together flying opossums with flying squirrels; the native sloth with the true sloth; the dog and cat-like opossums with the true dogs and cats; and, lastly, the insectivorous marsupials with the other insectivora.
As to the kangaroos themselves, they might be considered as approximating in one respect to the Ruminants, in another to the Rodents.
We have seen that even in Captain Cook's time its resemblance to the jerboa forced itself into notice. And, indeed, in this jerboa (and its first cousin, the alactaga) we have the same or even a relatively greater length of hind-limb and tail, and we have the same jumping mode of progression.
Again, in the little jumping insectivorous mammal, the shrew (Macroscelides), we meet with excessively long hind-limbs and a jumping habit. More than this: if we examine its teeth, we find both in the upper cutting teeth and in the back teeth great resemblance to those of the kangaroo. And yet there is no real affinity between the kangaroo and such creatures, any more than there is between a non-marsupial truly carnivorous beast and a marsupial carnivore. Indeed, both myself and my readers are far more like the jerboa or weasel than either of the latter is like to any marsupial animal.
The fact is, that all these so varied marsupial forms of life possess in common certain highly-important characters, by which they differ from all other mammals. These characters, however, mainly relate to the structure of their reproductive organs, and could not be here detailed without a long preliminary anatomical explanation; but, as to the great importance of these characters, naturalists are agreed.
Among the characters which serve to distinguish the marsupials, there are two to which I have already called attention in describing the kangaroo; namely, the marsupial bones and the inflected angle of the lower jaw.
Every mammal which has marsupial bones has the angle of its jaw inflected, or else has no angle to its jaw at all; while every animal which has both marsupial bones and an inflected jaw-angle possesses also those special characters of the reproductive system which distinguish the marsupials from all other mammals.
Thus it is clear we have at least two great groups of mammals. One of them—the non-marsupials—contains man; the apes; bats; hedgehog-like beasts (shrews, moles, etc.); cats, dogs, bears, etc.; hoofed beasts; edentates; rodents, and also the aquatic mammals. And this great group, containing so many orders, is named Monodelphia. The other great group consists of all the marsupials, and no others. It consists, therefore, of the single order, Marsupialia, and is called Didelphia.
Another group of mammals is made up of two genera only—the duck-billed platypus, or Ornithorhynchus, and the Echidna, two most interesting forms, but which cannot be further noticed here. They form, by themselves, a theme amply sufficient for an article, or even half a dozen articles.
As to its zoölogical relations, then, we may say that the kangaroo is a peculiarly modified form of a most varied order of mammals (the Marsupials), which differ from all ordinary beasts (and at the same time differ from man) by very important anatomical and physiological characters, the sign of the presence of which is the coexistence of marsupial bones with an inflected angle of the lower jaw.
We may now proceed to the next subject of inquiry, and consider the space relations (that is, the geographical distribution) of the kangaroo, its family, and order. I have already incidentally mentioned some countries where marsupials are found, but all of those were more or less remote. To find living, in a state of nature, any member of the kangaroo's order, we must at least cross the Atlantic.
When America was discovered by the Spaniards, among the animals found there, and afterward brought over to Europe, were opossums, properly so called—marsupials, of the family Didelphidæ, which extend over the American Continent, from the United States to the far South. These creatures were the first to make known to Europeans that habit of sheltering the young in a pouch which exists in the kangaroo, and which habit has given the name Marsupialia to the whole order. But, though this habit was duly noted, it is not strange that (being the only pouched forms then known) the value of the peculiarity should have been under-estimated. It is not strange that they should have been regarded as merely a new kind of ordinary flesh-eating beasts, since in the more obvious characters of teeth and general form they largely resembled such beasts. Accordingly even the great Cuvier, in the first edition of his "Règne Animal," made them a mere subdivision of his great order of flesh-eating mammals.
But, to find any other member of the kangaroo's order (besides the Didelphidæ), in a state of nature, we must go much farther than merely across the Atlantic; namely, to Australia or the islands adjacent to it, including that enormous and unexplored island, New Guinea, which has recently attracted public attention through the published travels of a modern Baron Munchausen.
To return, however, to our subject. To find marsupials at all, we have, as we have seen, to go to the New World. To find nearer allies of the kangaroo, we must go to the newest world, Australia; newest because, if America merited the title of new from its new natural productions as well as its new discovery, Australia may well claim the superlative epithet on both accounts. We have found an indication, in the name Botany Bay, of the interest excited in the mind of Sir Joseph Banks by the new plants as well as by the new animals of Australia. And, indeed, its plants and animals do differ far more from those of the New World (America) than do those of America from those of the Old World.
Marsupials, in fact, are separated off from the rest of their class—from the great bulk of mammals—the Monodelphia—no less by their geographical limits than by their peculiarities of anatomical structure.
And these geographical limits are at the same time the limits of many groups of animals and plants, so that we have an animal population (or fauna) and a vegetable population (or flora) which are characteristic of what is called the Australian region—the Australian region, because the Australian forms of life are spread not only over Australia and Tasmania, but over New Guinea and the Moluccas, extending as far northwest as the island of Lombok, while marsupials themselves extend to Timor.
In India, the Malay Peninsula, and the great islands of the Indian Archipelago, we have another and a very different fauna and flora—those, namely, of the Indian region, and Indian forms of life extend downward southeast as far as the island of Bali. Now, Bali is separated from Lombok by a strait of but fifteen miles in width. But that little channel is the boundary-line between these two great regions—the Australian and the Indian. The great Indian fauna advances to its western margin, while the Australian fauna stops short at its eastern margin.
The zoölogical line which passes through these straits is called "Wallace's line," because its discovery is due to the labors of that illustrious naturalist, that courageous, persevering explorer, and most trustworthy observer, Alfred Wallace, a perusal of whose works I cordially recommend to my readers, since the charm of their style is as remarkable as is the sterling value of their contents. Mr. Wallace pointed out that not only as regards beasts (with which we are concerned to-day), but also as regards birds, these regions are sharply limited. "Australia has," he says, "no woodpeckers, no pheasants—families which exist in every other part of the world; but instead of them it has the mound-making brush-turkeys, the honey-suckers, the cockatoos, and the brush-tongued lories, which are found nowhere else upon the globe."
All these striking peculiarities are found also in those islands which form the Australian division of the archipelago, while in those islands which belong to its Indian division these Australian birds have no place.
On passing from the island of Bali to that of Lombok, we cross the division between the two. "In Bali," he tells us, "we have barbets, fruit-thrushes, and woodpeckers, while in Lombok these are seen no more; but we have abundance of cockatoos, honey-suckers, and brush-turkeys, which are equally unknown in Bali, or any island farther west."
As to our second point, then—the geographical relations of the kangaroo—we may say that the kangaroo is one of an order of animals confined to the Australian region and America, the great bulk of which order, including the kangaroo's own family, Macropodidæ, is strictly confined to the Australian region. We may further add that in the Australian region ordinary beasts (Monodelphia) are entirely absent, save some bats and a rat or two, and the wild-dog or dingo, which was probably introduced there by man himself.
There only remains, then, for us to inquire, lastly, what relations with past time may be found to exist on the part of the kangaroo's order or of the kangaroo itself. Now, in fact, these relations are of considerable interest. I have spoken of Australia as, what in one sense it certainly is, the newest world, and yet the oldest world would, in truth, be an apter title for the Australian region.
In these days we hear much of "survivals," as the two buttons behind our frock-coats are "survivals" of the extinct sword-belt they once supported, and the "Oh, yes! oh, yes! oh, yes!" of the town-crier is a "survival" of the former legal and courtly predominance of the French language among us. Well, in Australia we have to-day a magnificent case of zoölogical survival on the largest scale. There, as has already been said, we find living the little Myrmecobius, which shows us in the flesh to-day a creature like others which once lived here in England, and which have left their relics in the Stonesfield oolite, the deposition of which is separated from our own age by an abyss of past time not to be expressed by thousands of years, but only to be indicated in geological language as the Mesozoic period—the middle of the secondary rocks.
But Australia presents us with a yet more interesting case of "survival." Certain fish-teeth had from time to time been found in deposits of oolitic and triassic date, and the unknown creature to which they once belonged had received the name of Ceratodus. Only five years ago this animal, supposed to have been extinct for untold ages, was found still living in Queensland, where it goes by the name of "flat-head." It is a fish of somewhat amphibious habits, as at night it leaves the brackish streams it inhabits, and wanders among the reeds and rushes of the adjacent flats. The anatomy of this animal has been carefully described for us by Dr. Günther.
We have, then, in Australia what may be termed a triassic land, still showing us in life to-day the more or less modified representations of forms which elsewhere have long since passed away from among us, leaving but rare and scattered fragments—relics "sealed within the iron hills."
No member of the Australian families of the kangaroo's order has left its relics in European strata more recent than the secondary rocks. But the American family, Didelphidæ, is represented in the earliest Tertiary period by the remains of an American form (a true opossum) having been found by Cuvier in the quarries of Montmartre. He first discovered a lower jaw, and, from its inflected angle, concluded that it belonged to a marsupial animal, and that therefore marsupial bones were hidden in the matrix. Accordingly he predicted that such bones would be found; and, proceeding to remove the enveloping deposit with the greatest care, he laid bare before the admiring eyes of the bystanders the proof of the correctness of his prediction. It is noteworthy, however, that, had this fossil been that of an animal like the Tasmanian wolf, he would have been disappointed, as, though marsupial, it has, as has been already said, not marsupial bones, but cartilages.
But relics of creatures more closely allied to the kangaroo existed in times ancient historically, though, geologically speaking, very recent. Just as in the recent deposits of South America we find the bones of huge beasts, first cousins to the sloths which live there now, so in Australia there lived beasts having the more essential structural characters of the kangaroo, yet of the bulk of the rhinoceros. Their bones and teeth have been found in the tertiary deposits of Australia. They have been described by Prof. Owen, and are now to be seen preserved in the British Museum and that of the Royal College of Surgeons. It may be that other fossil forms of the middle mesozoic or even of triassic times may, so some believe, have belonged to creatures of the kangaroo's family; but at least there is no doubt that such existed in times of post-tertiary date.
As to our third point—the geological relations of the kangaroo—we may say, then, that "the kangaroo is one of an order of animals which ranged over the Northern Hemisphere in triassic and oolitic times, one exceptional family lingering in Europe to the Eocene period, and in America to the present day. That the kangaroo itself is a form certainly become fossil in its own region, where, in times geologically recent, creatures allied to it, but of vastly greater bulk, frequented the Australian plains."
We may now, then, proceed to answer finally the question, "What is a kangaroo?" We may do so because the meaning of the technical terms in which the answer must necessarily be expressed (if not of undue length) has been now explained, as far as space has allowed.
We may say, then, that "the kangaroo is a didelphous (or marsupial) mammal, of the family Macropodidæ; an inhabitant of the Australian region and connected as respects its order with triassic times, and possibly even as regards its family also, though certainly (as regards the latter) with the time of the post-tertiary geological deposits."
We have seen what are didelphous and what are monadelphous mammals; what are the respective values of the terms "order," "family," and "genus," and also in what respect the kangaroo differs from the other families of the marsupial order. We have also become acquainted with the distribution of organic life now and with the interrelations of different geological strata, as far as those phenomena of space and of time concern our immediate subject.
By becoming acquainted with these matters, and by no other way, is it possible to give an intelligent answer to the question, "What is a kangaroo?"—Popular Science Review.
- See Cornelis de Bruins, "Reizen over Moskovie, door Persie en Indie." Amsterdam, 1714, p. 374, Fig. 213.
- Pallas, "Act. Acad. Sc. Petrop.," 1777, part ii., p. 299, tab. 4, Figs. 4 and 5.
- Schreber, "Säugth.," iii., p. 551, pl. 153, 1778.
- The following are some among the earlier notices of these animals: "Histoire d'un Voyage fait en la Terre du Brésil," par Jean de Léry, Paris, 1578, p. 156. Hernandez's "Hist. Mex.," p. 330, 1626. "Histoire Naturelle des Antilles," Rotterdam, 1658. "Anatomy of an Opossum," Tyson, Phil. Trans., 1698. |
Much has been written about how big retailers use RFID tags to keep track of product inventory, but now an agricultural research lab in Australia is using the wireless sensors to keep track of experimental plants. The goal: to quickly and efficiently develop new varieties of food crops, such as wheat and barley, that can withstand disease and drought, and thrive even in poor soil conditions.
Using IBM computers designed to conserve energy, the University of Adelaide, in Australia, has just completed construction of The Plant Accelerator, the largest test facility of its kind in the world. Using an elaborate system of conveyor belts, digital imaging gear, and robotic equipment, technicians can continuously monitor the vitality of up to 2,400 radio-tagged plants, each in its own pot. By linking 3-D images and data to records of each plant’s genetic makeup, researchers can accelerate the process of designing hardier plants, cutting the time it takes to develop a new variety by perhaps 70%.
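To make that tag-to-record linking concrete, here is a minimal sketch in Python of how an imaging pass for one RFID-tagged pot might be joined to that plant's genetic line. This is purely illustrative: the class names, fields, and tag format are assumptions made for the example, not the facility's actual LemnaTec or IBM software.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PlantRecord:
    # One potted, RFID-tagged plant and its accumulated phenotype snapshots.
    tag_id: str
    genotype: str
    measurements: List[dict] = field(default_factory=list)

class PlantDatabase:
    # Toy in-memory store keyed by the ID read from each pot's RFID tag.
    def __init__(self) -> None:
        self.plants: Dict[str, PlantRecord] = {}

    def register(self, tag_id: str, genotype: str) -> None:
        self.plants[tag_id] = PlantRecord(tag_id, genotype)

    def log_imaging_pass(self, tag_id: str, leaf_area_cm2: float, height_cm: float) -> None:
        # Each trip through the imaging station appends a snapshot of the plant's traits.
        self.plants[tag_id].measurements.append(
            {"leaf_area_cm2": leaf_area_cm2, "height_cm": height_cm}
        )

    def growth_history(self, tag_id: str) -> List[float]:
        # Leaf-area trend for one plant, e.g. to compare lines under drought stress.
        return [m["leaf_area_cm2"] for m in self.plants[tag_id].measurements]

# Example: register a hypothetical barley plant and log two imaging passes.
db = PlantDatabase()
db.register("TAG-0001", "barley_line_A")
db.log_imaging_pass("TAG-0001", leaf_area_cm2=12.4, height_cm=8.0)
db.log_imaging_pass("TAG-0001", leaf_area_cm2=15.1, height_cm=9.5)
print(db.growth_history("TAG-0001"))  # [12.4, 15.1]

Because every measurement is keyed by the tag ID, the growth history of any plant, and by extension any genetic line, can be pulled automatically, which is what makes high-throughput comparison of thousands of plants possible.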
The new system will provide critical insights for breeding the kinds of crops that could help overcome food shortages in the face of global warming. The new plant varieties could be particularly useful for developing nations in Africa and Asia where over-planting and other poor farming techniques have depleted the soil. They could allow farmers there to increase yields, so these countries could be better able to feed their people.
The facility’s tech staff designed the system with the help of Datacom Systems, an IBM partner based in New Zealand, and it uses a software package from LemnaTec, a German company, to control the imaging and analysis process.
Eventually, Accelerator technicians will be equipped with handheld PDAs connected wirelessly to the computer database, allowing them to fetch vital information from the facility’s IBM blade servers, which use a fraction of the power of more conventional computers, while they’re examining individual plants. They also plan to add CAT scanners so they can get 3-D images of the plants’ roots in addition to stems and leaves. |
A strong bond or attachment between baby and parent is important for the child to grow up happy; it leads to less emotional conflict, more empathy, and better grades. How parents deal with their child's emotional life has the greatest effect on their future happiness.
6 Parental Behaviors for Dealing with Emotions
1. A demanding but warm parenting style (authoritative) that involves good communication with your children
a. Responsive: give kids support, warmth, and acceptance; communicate affection (rather than rejection)
b. Demanding: control behavior by making and enforcing rules consistently; clearly explain rules and encourage independence while still expecting kids to comply with family values
2. Comfort with your own emotions: setting an example for kids so they can learn to be comfortable with theirs
3. Tracking your child's emotions (watch, listen and respond) without smothering or helicoptering
4. Verbalizing emotions: Be able to label your feelings and teach your child to label theirs - this teaches self-soothing which helps them focus and have successful relationships
a. Surprisingly, studying music for at least 10 years starting before the age of 7 can help children more easily recognize emotional cues
5. Running towards emotions - parents who do this:
a. Don't judge emotions
b. Acknowledge the reflexive nature of emotions (rather than denying/ignoring their existence)
c. Know that behavior is a choice, but an emotion is not - help kids to understand that they have a choice in how they express their emotions
d. See a crisis as a teachable moment
6. Two tons of empathy: verbalize a child's feelings, validate them, and show you understand - this works because empathy calms people down
Next up: a moral baby... |
Is there evidence that the Maya were in Georgia and Florida? If so, why were they there? Were they mining gold and shipping it back to Mexico? Does a gold artifact discovered in a Florida mound in the 1800s offer positive proof of this? Let’s look at the evidence and see what it suggests about the true goings-on in the southeastern U.S. before the arrival of Europeans.
Maya in Florida and Georgia?
A site in Florida called Fort Center near Lake Okeechobee offers the earliest evidence of corn agriculture in the eastern United States. The question naturally arises as to how corn, a Mexican plant, showed up in Florida before it showed up elsewhere in the southeast. If it came by land you would expect to see evidence of its cultivation in Texas, Louisiana, Mississippi and Alabama long before it arrived in south central Florida. The logical conclusion, then, is that it was brought by people who arrived by boat. The archaeologist who excavated the site, William Sears, asserted in his book/archaeological report, Fort Center: An Archaeological Site in the Lake Okeechobee Basin, that this is precisely how corn came to be at this site. But who brought it?
Interestingly, Lake Okeechobee was originally named Lake Mayaimi. It took its name from a tribe of Indians named the Mayaimi who lived around the lake. This is where the city of Miami gets its name. So in the same place where the first evidence of corn agriculture was discovered we find a tribe named Mayaimi. In nearby Cape Canaveral the Spanish recorded that a tribe named the Mayayuaca lived. Another nearby tribe recorded by the Spanish was the Mayaka. When the Spanish first reached the Yucatan in Mexico they encountered a tribe called Maia (Maya) living in a province called Maiam. Could the Maya have been responsible for bringing corn to Florida?
The migration legend of one Native American tribe, the Hitchiti, suggests this is the case. The Hitchiti migration legend as recorded in the book Creation Myths and Legends of the Creek Indians seems to place them in the Lake Okeechobee area after arriving on the coast of Florida:
Their ancestors first appeared in the country by coming out of a canebrake or reed thicket near the sea coast. They sunned and dried their children during four days, then set out, arrived at a lake and stopped there. Some thought it was the sea, but it was a lake; they set out again, traveled up stream and settled there for a permanency
At the time this legend was recorded, the Hitchiti lived in Georgia. Following this legend in reverse, the only place south or “down stream” from Georgia with a lake large enough to be confused with the sea is Lake Okeechobee. The fact they arrived at the sea coast suggests they arrived in Florida by boat.
More importantly, this legend states that the Hitchiti’s ancestors came out of a “reed thicket.” The actual Hitchiti word recorded in the legend is utski, which translates literally as “reeds.” In the Mayan language, “reeds” or “place of reeds” is a metaphor for a large city. For instance, the Maya referred to the great Mesoamerican metropolis of Teotihuacan as Puh, which means “reeds.” The great Toltec capital of Tula was also known as a “place of reeds.” “Place of Reeds” served as a metaphor relating the masses of reeds in a marsh to the masses of people in a metropolis; thus a metropolis became a “place of reeds.”
The Hitchiti migration legend’s reference to their ancestors coming from “reeds” suggests they were Maya who left a major city in Mexico and then arrived on the coast of Florida and temporarily settled near Lake Okeechobee before heading upstream and settling in Georgia “for a permanency.” Interestingly, the Itza Maya referred to their ancestors as Ah Puh which translates as “Reed People.” Could the Hitchiti be descendants of the Itza Maya?
Mayan Words and Glyphs Among the Hitchiti?
If the Hitchiti were, indeed, descendants of the Itza Maya then there should be linguistic similarities between the Hitchiti and Mayan languages. And, in fact, there are. Chichen Itza, the great Mayan city in the Yucatan constructed by the Itza Maya, is translated as “Mouth of the Well of the Itza.” Chichen means “mouth of the well” in Mayan with chi meaning “mouth” and chen meaning “well” as confirmed in an Itza Maya dictionary. According to a Hitchiti-English dictionary, chi also means “mouth” and chahni means “well” thus chichahni means “mouth of the well” in Hitchiti. (For more linguistic connections read: “Mayan Words Among Georgia’s Indians?“)
The Maya also had a writing system, believed to have been passed down from the Olmecs, which used glyphs to convey sounds and sometimes concepts. If the Hitchiti were related to the Itza Maya then you would expect to find evidence of this writing system among this tribe. In fact, there is. A pottery tradition known as Swift Creek pottery existed in the same areas of Georgia where the Hitchiti language is known to have been spoken. Designs on this pottery are similar, and in some cases identical, to Mayan glyphs and symbols in Mexico. More importantly, this pottery tradition begins around the same time that corn first showed up around Lake Okeechobee.
For instance, one of the most important symbols among the Maya was that of Kukulkan, the plumed serpent. According to David Smith in his article “Quetzalcoatl- The Plumed Serpent,” (Quetzalcoatl was the Aztec name for this deity) this symbol also makes an appearance on Swift Creek pottery:
More importantly, Smith argues that the duck bill on this version of Quetzalcoatl represents a wind deity known as Ehecatl-Quetzalcoatl. (A gold duck bill pendant discovered near Lake Okeechobee will be discussed later which further supports a Mayan presence in Florida.)
In his article “Swift Creek Design Investigations” that appeared in the book A World Engraved: Archaeology of the Swift Creek Culture, researcher Frankie Snow notes that another Swift Creek design has an “Olmec look.” This design, which he described simply as an “unidentified creature,” bears a striking resemblance to the Olmec Jaguar glyph:
|Swift Creek design suggestive of the Olmec Jaguar||Olmec Jaguar design|
A quick perusal through the pages of A World Engraved reveals many other such designs. For instance, one design features a cartouche containing two symbols, a diamond and a cross. The fact that the Swift Creek potters decided to place both of these symbols in a cartouche reveals they believed the two symbols were closely associated. Among the Maya, these are both glyphs for the Mayan word Ek which means “star” or “Venus”:
|Swift Creek diamond & cross design||Mayan cross-and-diamond Ek glyph|
Another Swift Creek design appears to represent another version of the Mayan Ek glyph:
|Swift Creek design||Mayan Ek glyph|
And this is just the tip of the iceberg. (Read “Mayan Glyphs on Georgia, Florida Pottery” for a more in-depth discussion.)
So to recap:
- There are Mayan words in the Hitchiti language
- A pottery tradition in the same areas of Georgia where the Hitchiti language was spoken contains designs identical to Mayan glyphs
- The Hitchiti migration legend describes them arriving in Florida from a place of reeds, a known Mayan metaphor for a large city, and living at the very place where corn was first cultivated in the southeast.
- The pottery tradition and arrival of corn occur at the same time in the same areas where the Hitchiti are known to have lived
This is what the FBI would call “evidence.”
Getting here from there- Yucatan to Florida By Boat?
Now that it seems clear there was a Maya presence in Florida and Georgia, the next question to answer is whether the Itza Maya were capable of crossing the Gulf of Mexico and reaching Florida. According to researcher Douglas Peck, the Maya most capable of crossing open ocean were the Chontal Maya. In his paper on their seafaring accomplishments, he noted that they were skilled navigators who controlled all the coastal trade routes from Mexico down to Central America and also made voyages into the Caribbean. Thus the Chontal Maya were the most likely candidates to have traveled to Florida, bringing corn along with them. This raises a further question: who were the Chontal Maya, and what was their relationship with the Itza Maya? (Continues…) |
One of the most important functions of wetlands is to restore and maintain water quality. Wetlands upstream from the bay can filter sediments and remove pollutants from water before it reaches the bay itself. Wetlands are capable of removing up to 90 percent of nitrogen and 80 percent of phosphorus from water, as well as capturing particulate matter suspended in runoff.
The challenge becomes where to work within the 64,000-square-mile watershed to most effectively improve water quality through wetland restoration activity. In order to do our best with limited resources, DU developed the Chesapeake Bay Planning Network, which targets and ranks subwatersheds for restoration (see sidebar). Restoration activities in these subwatersheds will improve the quantity and quality of SAV for redheads and canvasbacks.
Focusing within these priority watersheds, biologists work with private and public landowners to restore and enhance wetlands, plant upland grass buffers, reforest riparian corridors, and improve coastal salt-marsh habitats, all of which improve water quality in the bay. DU restores previously converted wetlands to improve water quality and provide much-needed waterfowl and wildlife habitat. Riparian forests provide stream bank stabilization and act as buffers, filtering excess nutrients from runoff before it enters adjacent streams. Warm-season grass plantings perform a similar function. They filter sediment from surface runoff and uptake excess nutrients from groundwater. Grassland buffers are also effective in preventing topsoil erosion. Additionally, grass plantings provide habitat for waterfowl, upland game birds, and a variety of nongame species. |
Once the immediate shock had passed, the search for explanations began. Why had Napoleon’s heirs fallen so quickly? Many Frenchmen chose to blame insidious Fifth Columnists: Nazi and Communist agents who had supposedly undermined the homeland from within. (Nazi Germany and the Soviet Union were allies at this point.) Visiting Paris in May 1940, Clare Boothe heard constant talk of trahi (betrayed): “At first it was no more than a whisper…. And then the whisper became a great wail that swept through France, a great wail of the damned: ‘Trahi…trahi….’” Although a number of people were lynched as suspected enemy agents, in reality there were few German spies operating behind enemy lines, and their meager efforts can hardly explain the magnitude of the disaster that befell France.
A more plausible version of the “stab in the back” thesis was that France was undone not by active treason but by passive indifference: Following the carnage of the Great War and two decades of political turmoil pitting right against left, the French had simply lost the stomach to fight another costly war. This theory was widely held by those who participated in the actual events, but it has met with skepticism from modern historians who point to contemporary records showing that French morale had made a considerable recovery by the time Germany invaded Poland in September 1939. Hitler’s aggression convinced most French people of the rightness and necessity of war. They were “resigned but resolute,” in the words of the British ambassador. And in the war that followed, while some French units crumbled without a fight, many others fought hard even while suffering crushing casualties. In six weeks of combat, France lost an estimated 124,000 men killed and 200,000 wounded, more than the American casualties in the Korean and Vietnam Wars combined over the course of many years.
Even more damning to any attempt to ascribe France’s defeat to its loss of will is the fact that morale on the other side was by no means as high as a zeppelin. Many Germans were as reluctant to fight as the French, not least among them generals who were so afraid that Hitler was leading them to disaster they had discussed mounting a coup to topple him. Only with the successful conclusion of the campaign against France did real enthusiasm for the war break out in Germany. French spirits would have been equally ebullient if their soldiers had won any victories to boast of. The low state of morale among the French cannot be entirely dismissed as an explanation for their downfall; there is no denying that most Frenchmen did not fight till the bitter end and that few joined the Resistance or the Free French forces. But the dominant view of recent historians is that the French loss of will was more the consequence, rather than the cause, of battles lost.
Why, then, did the Germans win those battles? The prevalent impression of the time—that the Allies were outnumbered—is false. As we have seen, the Allies enjoyed an advantage in the overall number of divisions, tanks, aircraft, and artillery pieces. The one critical area where they were deficient was in the number of bombers and fighters actually deployed on the Western Front. The Germans had 2,779, the Allies 1,448. But this was not due to some inherent deficiency on the Allied side; it was mainly because the British and French did not commit many of their aircraft to the fight. The British understandably chose to keep the bulk of their air force at home for self-defense. Less understandable, indeed inexplicable, was the French decision to keep many of their planes in southern France and North Africa, where they could do no good. The problem, in sum, was not how many aircraft the Allies had but how they were utilized.
Another popular misconception—that the Germans had superior weapons—does not stand up to scrutiny either. The best tanks belonged to the French, not the Germans; the best Allied aircraft were as good as the best German models. The only technical area where the Germans had a major edge was in their widespread use of radios, which gave them operational flexibility and the ability to concentrate their mechanized forces and warplanes at the decisive point of attack. This made up for the fact that the vast bulk of their troops walked to the front. (Out of more than one hundred German divisions mobilized for the campaign in the West, only ten were tank divisions and another ten were motorized.)
If the Germans did not have material superiority, what accounted for their easy victory? Quite simply, their decisive edge in doctrine, training, planning, coordination, and leadership. Writing in 1942, two years before his death, Marc Bloch convincingly argued that “the German triumph was, essentially, a triumph of intellect.” The Germans had adapted their methods of warfare to the Second Industrial Revolution, which had transformed “the whole idea of distance.” The French had not. “The ruling idea of the Germans in the conduct of this war was speed. We, on the other hand, did our thinking in terms of yesterday or the day before. Worse still: faced by the undisputed evidence of Germany’s new tactics, we ignored, or wholly failed to understand, the quickened rhythm of our times. So true is this, that it was as though the two opposed forces belonged, each of them, to an entirely different period of human history.”
Still, there was nothing inevitable about the outcome. “As I looked at the ground we had come over,” Guderian wrote, “the success of our attack struck me as almost a miracle.” It is not hard to fathom why even this most self-confident and swashbuckling of generals would be agog at his own success. It could easily have gone the other way, especially if the Germans had stuck to their original version of Case Yellow or if something had gone wrong during their journey through the Ardennes or across the Meuse. That the Germans prevailed so quickly owes something to luck and even more to their meticulous preparation and inspired execution. While the daring use of panzers got most of the attention, the key breakthrough was due to courageous infantrymen rowing across a river under fire—a maneuver that the Germans had practiced meticulously beforehand on the Moselle River. When the time came for the actual crossing, Guderian was able to issue the same orders used in the exercises with only the dates, times, and locations changed. Thus the final victory was a tribute not to panzers alone but to the skillful employment of the combined-arms concept.
From a historical perspective, the German victories in Poland, Norway, Denmark, Belgium, Holland, and, above all, France helped to reestablish the possibility of the decisive campaign. The ability to achieve clear-cut results on the battlefield had been in decline since the mid-nineteenth century, when the firepower and sheer size of armies had increased beyond the ability of transportation and communications networks to cope. Generals could barely find their foes (recall how blind both Moltke and Benedek were on the eve of the Battle of Königgrätz), much less maneuver effectively to destroy them. Faced with machine guns or even rifles, cavalry could no longer perform its traditional role of hunting down and destroying the tattered remnants of defeated armies. This meant that the losing side on the battlefield could usually make good its escape, as Lee did after Gettysburg, and return to fight another day. The growing indecisiveness of war reached its apotheosis in World War I, where, on the Western Front at least, combat became a senseless struggle for a few yards of ground. Now, with the rise of mechanized forces, the art of maneuver could once again be practiced as skillfully as it had been by Frederick the Great or Napoleon Bonaparte. With their lightning victories, Hitler’s legions had shown that force of arms could win wars, not just battles. Or so it seemed in 1940.
In the warm, heady afterglow of victory, the Germans tended to forget all the doubts that had plagued them before and during the invasion of France. More and more generals joined Hitler in concluding that their war machine was invincible and unstoppable. Not even their failure to knock Britain out of the war could disabuse them of this illusion. Weakened by its losses over France, the Luftwaffe could not establish air superiority over southern England in the summer and fall of 1940, and Hitler had to call off his planned invasion, Operation Sea Lion. Germany returned to the path of conquest in 1941 with the swift occupation of Yugoslavia and Greece. Rommel’s Afrika Korps also enjoyed steady success in North Africa against British forces from the time of its arrival in February 1941 until the battle of El Alamein in October 1942. By then Rommel’s operations had become a mere sideshow to the much larger war being fought in Russia.
Hitler invaded the Soviet Union on June 22, 1941, with 3.2 million soldiers, 3,600 tanks, and 700,000 horses. No matter that he faced an enemy with millions more men, three times more aircraft, and five times more tanks. His armies enjoyed swift and stunning success against the ill-prepared Russian troops deployed on the frontier. The Soviet tank and air forces were almost completely annihilated. The Nazis advanced deep into Russia, arriving on the doorstep of Leningrad and Moscow by the winter of 1941. Then the offensive stalled out, partly as a result of stout Soviet resistance but mainly due to the inherent limitations of the Wehrmacht.
The blitzkrieg had proved a devastating weapon in the relatively confined spaces of western Europe. Its force was considerably dissipated on the nearly endless steppes of Mother Russia. Warfare in the Second Industrial Age required moving not only tons of food and ammunition but also tons of fuel and lubricants to keep tanks and trucks on the go. German logisticians were simply not able to keep their armies supplied more than a few hundred miles beyond the frontier (Sedan to Dunkirk is 170 miles); in Russia, German armies quickly found themselves more than a thousand miles from their bases.
These difficulties were compounded by the onset of the harsh Russian winter, for which the Nazis had not prepared; they had expected the entire campaign to be over in four months. The Germans found themselves trapped deep inside Russia, freezing, hungry, exhausted, running low on fuel and ammunition, and facing an adversary that was growing stronger by the day. The turning point was the Battle of Stalingrad. By the time it was over in January 1943, the Germans had lost 209,000 men killed and 91,000 captured. Hitler tried one last major offensive at Kursk in July 1943. His forces were repulsed in the largest battle of the war, pitting more than two million men and six thousand tanks against each other.
The Soviet victory at Kursk represented a hard-won armored renaissance. The Russians had been among the leaders in developing mechanized forces in the 1920s and early 1930s. Under Marshal Mikhail Tukhachevsky, they had invented the doctrine of “deep battle,” a variant of blitzkrieg, which called for masses of tanks and airplanes to penetrate hundreds of miles behind the enemy’s front lines to isolate and encircle opposing forces. In 1937, Stalin had executed Tukhachevsky in his purge of Red Army officers. The “deep battle” doctrine was discredited along with its founder, only to be revived in 1942–43 after the Soviets had suffered severe setbacks at the hands of Nazi tank forces. On the Eastern Front, then, the Germans eventually faced an adversary that fought much as they did—only with far more men and tanks and airplanes to throw into the fray.
The same thing happened in the West. The fall of France alerted Britain and America to the need to develop better mechanized forces. The U.S. had deployed a Tank Corps in World War I, but it was disbanded in 1920 over the anguished objections of two of its leading officers—Colonel George S. Patton and Major Dwight D. Eisenhower. In the interwar years, the U.S. Army spurned the innovative American tank designer J. Walter Christie, who sold his work to Russia, where it formed the basis of the workhorse T-34 tank. In the 1930s the U.S. Army limited its mechanization to one cavalry brigade. The first armored divisions were not formed until after the fall of France, in July 1940. They were tested in war games in Louisiana and Tennessee in 1941, and dispatched the following year to fight in North Africa. U.S. troops did not perform well at first, but by the time of the Normandy invasion in 1944 the U.S. possessed formidable armored forces grouped into all-arms divisions equipped with the serviceable if not spectacular M4 Sherman medium tank and led by generals like Patton whose abilities rivaled those of Rommel and Guderian. And, unlike the Germans, the Americans managed to motorize most of their army, rather than just its spearhead.
The British, likewise, fielded effective armored forces after the fall of France, starting with the 7th Armored Division, which under the tank pioneer General Percy Hobart thrashed Italian troops in North Africa in 1940–41, and continuing on to the much larger forces under Field Marshal Montgomery’s command during the drive into Germany in 1944–45.
Despite the considerable achievements and painful sacrifices of the Allied armies in their quest for victory, the Germans ultimately did not lose the war because they faced forces superior in the quality of men or materiel. The Panzer V (Panther) and Panzer VI (Tiger), developed in 1942, were probably the best tanks of the war; Sherman tank rounds would simply bounce off their frontal armor, while they could wreck a Sherman with one shot.
The German soldier, too, was in all likelihood the best of the war. A postwar study by Trevor Dupuy, a retired U.S. Army officer, found that, right up until the end, German units had at least a 20 percent “combat effectiveness superiority per man” over Anglo-American forces, meaning that “[o]n the average, a force of 100 Germans was the combat equivalent of 120 American or 120 British troops.” The German advantage over the Russians was even greater. According to Dupuy, one hundred German soldiers were the equivalent of two hundred Russians. While Dupuy’s findings have been questioned, there is little doubt that the Germans were at least as effective as their enemies, if not more so.
But few armies, no matter how effective, can prevail when outnumbered as badly as the Germans were by the later stages of World War II. They faced a crippling deficit not only in manpower but also in materiel. As early as 1942, the United States was outproducing all of her enemies combined—in historian Richard Overy’s summation, “47,000 aircraft to 27,000, 24,000 tanks to 11,000, six times as many heavy guns.” Add in Soviet production, which recovered rapidly after the catastrophes of 1941, and the disparity became almost insuperable.
The Allied weapons may not have been as technologically sophisticated as some German models, but they were cheap, durable, and plentiful. Henry Ford, with his mass production techniques, was more valuable to the Allied cause than any general: “The Ford company alone,” Overy notes, “produced more army equipment during the war than Italy.” The Allies also had access to vast pools of natural resources that the Axis could not match; most critically, they controlled 90 percent of the world’s natural oil production.
If the Allies had fought as incompetently as they had in 1939–41, they might have frittered away these considerable material advantages. Luckily for them, by 1943 their tactical skills had improved enough—if still not perhaps to the German level—to make effective use of the products being churned out by their factories.
This goes to show the limits of a military revolution. If Hitler had possessed the sagacity of a Bismarck and made peace following the victories of 1939–40, as Bismarck made peace following the victories of 1864, 1866, and 1870, he might have consolidated the conquests won by his peerless war machine. By choosing to push the blitzkrieg farther than it could reasonably go—by taking on both the U.S. and USSR—the Führer consigned Germany to a war of attrition that it would have been hard-pressed to win, unless, perhaps, it had developed a true wunderwaffe (wonder weapon) like the atomic bomb. (Hitler’s ersatz wonder weapons, the V-1 cruise missile and V-2 rocket, were not enough.) |
Adult brown bears defend themselves by using their paws, which are equipped with 4-inch-long, razor-sharp claws. Often they engage in threatening postures and vocalizations in an attempt to drive off perceived threats before engaging in physical conflicts. Unable to climb trees like black bears, mature brown bears usually stand their ground if they cannot flee a threat.
Adult brown bears have few natural predators. North American brown bears need only fear humans and larger bears, while those living in Asia must also cope with tigers. In contrast to the adults, young brown bear cubs are vulnerable to a range of predators, including wolves, coyotes and mountain lions, as well as other brown bears. Fortunately, young brown bears can climb trees to avoid danger. Mothers often send their cubs into the trees when they sense danger. When the danger has passed, the mother emits vocalizations that signal the cubs to return to the ground. The cubs stay with their mother for an extended period of time, learning how to hunt, forage for food and avoid danger. In some cases, this learning period may take up to four years to complete.
Brown bears also use their impressive claws to obtain food. Though they look large and clumsy, the claws are quite dexterous, and bears can use them to dig, pry open rotten logs and manipulate small objects. |
Age-related macular degeneration (AMD) is a disease that causes blurring of your central vision. The blurring happens because of damage to the macula, a small area at the back of the eye. The macula helps you see the fine detail in things that your eyes are focusing on.
Macular degeneration makes it harder to do things that require sharp central vision, like reading, driving, and recognizing faces. It does not affect side vision, so it does not lead to complete blindness.
There are two types of macular degeneration—wet and dry. The dry form is by far the most common type. The wet form is much less common, but it happens more quickly and is more severe. |
"Willst du die suchen gehen, Leo?" I called out to the kitchen walls. Two seconds later a disembodied voice from the tablet echoed my question. "I'm getting good at this," I thought with a smug smile.
What I'm now finding is that snatches of text pop randomly into my head long after the cartoon has ended, and I can change the verbs and nouns to create sentences of my own. But am I going to sit down with a notepad and a pencil when my son is asleep and study his cartoons? Probably not. The language isn't challenging and, let's face it, cartoons simply aren't entertaining enough. I'm really not that invested in what Leo will build next. But as long as they are on in the background, cartoons are helping me fine-tune my pronunciation, soak up some fixed expressions and reinforce (and question) my understanding of articles, separable verbs and sentence structure.
6 ways to use cartoons to learn:
1. Choose one aimed at little kids - these tend to have less dialogue and limited language variety. Switch it on and just listen. See if you can understand what's happening with sound only. If not, try watching it, too, next time.
2. Repeat what you hear. Sure, it might be easy to understand, but how is your pronunciation? Does your voice go up and down in the same way as the characters' voices on the screen?
3. If there's a question, try to guess the answer before you hear it.
4. Play the role of a character. While you watch, ask the questions or give the responses.
5. Turn off the sound and try to narrate what you see happening on the screen.
6. Label everything you see in the cartoon. It can be quite a shock to realise how many words there are still left to learn!
Words from the text
Willst du die suchen gehen, Leo? Do you want to go and look for them, Leo?
smug feeling pleased with yourself for something you have done (negative meaning)
suds the foam or bubbles created by the cleaning product when you wash dishes
anticipate expect (here it means 'guess')
justifiable when there's a good reason for something
there's more to sth than meets the eye there is more than you think at first
intonation part of pronunciation - changes you make to sounds (rise/fall) when you talk
rolled into one combined
dollop of a small amount of soft food (like sauce or cream) that you eat with sth else
snatches of short parts of sentences or conversations
pop into my head occur to me / I think of
fine-tune to make small changes to (here it means 'perfect')
soak up absorb
to narrate to tell the story |
CBSE Class 12 Biology syllabus is essential for the board exam. Referring to the syllabus will help you understand the curriculum for the Biology subject easily and keep track of all the topics and chapters to be prepared for the board exams. The syllabus covers the marking scheme for Biology, which can help you decide which topics to cover first and how to proceed with your learning. The syllabus also covers the practical examination, recommended books, and the internal assessment scheme for Class 12 Biology.
Class 12 Biology Syllabus comprises topics like asexual reproduction – binary fission, sporulation, and fragmentation, vegetative propagation, pollination, the structure of the flower, dispersal significance, and fruit formation. You will also study the concepts of inheritance such as codominance, blood groups, and genes. You will learn about DNA, RNA, genetic code, gene expression, and how life originated and evolved. Other topics include how various populations live in particular habitats, and population attributes.
You can download the CBSE Class 12 Biology syllabus from the askIITians website for your reference. Along with the Class 12 syllabus for Biology, we also provide CBSE study material. This includes chapter notes, revision notes, mindmaps, flashcards, mock tests, test series, and study planners. You can enroll in our live classes, where our experts will teach you all the concepts, from basic to complex, in easy-to-understand language. Our study materials are completely based on the latest CBSE syllabus and exam pattern for Class 12 Biology.
Chapter-wise CBSE Class 12 Biology Syllabus
The Class 12 Biology syllabus for 2021-22 comprises 5 units and 16 chapters. These units carry 70 marks in total, while 30 marks are allocated to the practical examination and internal assessment.
CBSE Class 12 Biology Syllabus.
Unit 1: Reproduction
Reproduction, modes of reproduction, sexual and asexual reproduction, pollination, flower structure, vegetative propagation in plants, male and female reproductive systems, reproductive health, birth control
Unit 2: Genetics and Evolution
Principles of inheritance and variation, chromosome theory of inheritance, sex determination, Mendelian inheritance, inheritance of blood groups, the structure of DNA and RNA, DNA fingerprinting, evolution, biological evolution, central dogma, Lamarck’s theory of use and disuse of organs, Darwin’s theory of evolution, Hardy – Weinberg’s principle, human evolution
Unit 3: Biology and Human Welfare
Human health and diseases, the immune system in human beings, vaccination, animal husbandry, parasites causing human diseases, plant breeding, single-cell protein, antibiotics, microbes in food processing, biogas, bio-fertilizers, bio-control agents
Unit 4: Biotechnology and Its Applications
Biotechnology, its principles, and applications, genetic engineering, genetically modified organisms, human insulin, RNA interference, gene therapy, biosafety issues, applications of biotechnology in health and agriculture
Unit 5: Ecology and Environment
Organisms and populations, habitat and niche, population interactions, ecological adaptations, structure and functions of the ecosystem, loss of biodiversity, ecosystem, conservation of biodiversity, national parks, wildlife, biosphere reserves, environmental issues, air pollution, radioactive waste management, solid waste management, ozone layer depletion, and deforestation
CBSE Class 12 Biology Syllabus FAQs
#1 Why is the Class 12 Biology Syllabus important?
Class 12 Biology syllabus is an important document that helps you prepare for the board exam. When you have a list of all the topics to be studied for the exam, you can build a study plan accordingly. The syllabus also helps you keep a record of your learning progress. You can color-code the topics based on their difficulty, or mark which ones you have completed and which are yet to be done. The syllabus also indicates the exam pattern and the marking scheme for Class 12 Biology.
#2 What is the syllabus for CBSE Class 12 Biology practical examination?
The practical examination includes 30 marks in total. Students will be assessed based on one major experiment, one minor experiment, slide preparation, practical record, spotting, viva voce, and project record.
#3 How to prepare for the Class 12 Biology exam?
- Read the CBSE syllabus for Class 12 Biology carefully and note the exam pattern and marking scheme.
- Cover all the basic concepts and do not study any additional topics; prefer reading only the NCERT books.
- Use flashcards to memorize important terms and diagrams, for instance, the structure of a cell.
- Practice previous years' board exam papers and sample papers to understand your areas of weakness and strength in the subject.
- Take online CBSE coaching from expert tutors to understand all the difficult concepts of biology and biotechnology with ease.
#4 How can AskIITians help you to prepare for Class 12 Biology?
Download the complete syllabus for CBSE Class 12 Biology 2021-22 from the askIITians website and begin your preparations with our study resources. We follow the latest CBSE guidelines and exam pattern to create the best study resources for Class 12 Biology:
- Chapter Notes, Revision Notes
- Class 12 Biology NCERT Solutions
- Online lessons, Live Classes, Pre-recorded Lessons
- Previous Year Papers for Class 12 Biology
- Mindmaps, Flashcards, Study Planners
- Chapter tests, Unit tests & more! |
Developing social and emotional skills builds a foundation for everyone’s success. Social and emotional skills include understanding and managing oneself, relating to others, and making responsible choices. Social and emotional skills are associated with improved behavior, lower levels of emotional distress, enhanced wellbeing, improved academic outcomes, and more stable employment. Social and emotional skills can be taught, practiced, and strengthened in everyday interactions in schools, at home, in workplaces, and community organizations. Students, educators, families and community members use social emotional learning (SEL) every day. |
What is a democratic classroom? It’s not a partisan space. It doesn’t focus on political parties and their viewpoints. Instead, a democratic classroom engages students in living democratically by promoting values such as inclusion, voice, representation, and participation.
After prolonged periods of remote learning, our classroom communities have never been more relevant as spaces to nurture student agency, foster social belonging, and prepare our learners as active citizens. We can create this environment by establishing democratic classrooms: safe, inclusive learning environments, where students actively practice democratic values, understand their rights, and take responsibility for their behavior as both individuals and members of a community.
These are characteristics of the democratic classroom:
- High-trust relationships and shared power between teachers and students
- High degree of student voice and agency
- Respect for children’s ideas and contributions
- Intentional sharing of diverse perspectives, including those about challenging issues
- Use of dialogue and group decision-making, often through protocols
- Development of the whole self, including students’ critical consciousness
Practical Ways to Promote Agency and Participation With Students
Classroom setup: Start the school year by co-constructing features of the classroom with students and integrating their ideas into the design of the learning space. With your learners, consider the underlying beliefs and values that the classroom should promote. Using a Y-chart, ask them, “What should our classroom community look like, sound like, and feel like?”
For example, if students want a space that promotes collaboration and social belonging, how might a furniture layout help or hinder this? While rows of desks will limit social interaction, table groups will foster discussion and sharing. Make decisions with students about the placement of furniture and the nature of classroom decorations—for example, signs in students’ home languages that reflect their beliefs and values. Think about what interactions might look like in the classroom space. How might this connect to the purpose of a space, such as promoting reflection? When and how should the teacher and students revisit the classroom design to see how it functions?
Reflect on how the classroom setup might communicate underlying power dynamics. For instance, does the teacher sit on a soft chair while the students are on hard chairs? Are all chairs directed toward the teacher? Does the teacher sit at a higher level than the students—for example, on a chair in a carpeted area, while children sit on the floor? How might such decisions promote or impede a democratic classroom climate?
Instead, we can shift power and share it with our students by decentralizing our classrooms. Using small group seating, not all physically directed toward the same classroom wall, we can promote agency and leverage the power of collaborative, peer-to-peer learning. When stools are placed in a circle for class discussion, students can speak directly to others in the class, instead of requiring constant mediation by the teacher. The traditional student-teacher ping-pong becomes a multiplayer basketball game. This allows us to move away from a teacher-as-authority stance and toward the role of a learning facilitator as students develop communication and social skills.
Co-constructed class charter: Rather than creating classroom rules or expectations, co-construct a classroom charter with students. A charter is different because it focuses on students’ rights as well as their responsibilities. It models how individual and group needs intersect, yet also deviate. As part of the design of a class charter, introduce students to a rights framework, such as the United Nations Convention on the Rights of the Child. Ask students to choose rights they believe need to be in place in order for them to learn. Next, given these rights, what are their responsibilities to ensure that these are upheld in the classroom community?
Importantly, the rights that students choose can’t be taken away as punishment. However, the opportunity to infringe on other people’s rights can be withdrawn. For example, a student can be stopped from exercising their freedom of speech if their words are discriminatory against others in the classroom. UNICEF Canada has more information about class charters.
Peaceful place: Peace begins with each student. If we want peaceful classroom communities, where students learn how to recognize their emotions and navigate conflict in productive ways, we need to teach strategies for being at peace. “A peaceful place” is a physical location in the classroom where students can reflect on their feelings, calm down, or resolve conflict with their peers. It may be a table at the back of the class, a play tent, or even a large cardboard box that can be painted by students.
By inviting students to identify, design, and decorate a peaceful place, we can introduce the notion of peace and set up classroom routines that promote personal and group well-being.
For instance, with elementary students, place paper, markers, and reading books in the space, so that students can use the area to cool off when feeling frustrated. Likewise, arrange a few chairs inside or next to the space to promote conflict resolution strategies such as recognizing emotions, using “I statements,” and active listening. Importantly, a peaceful place should never be used for disciplinary purposes, such as a time out. Peace is a dynamic rather than passive concept, so ongoing practice and reflection on behavior is key. There’s more information about this strategy in a book I wrote with Elizabeth O. Crawford, Worldwise Learning: A Teacher’s Guide to Shaping a Just, Sustainable Future.
Structures for discussion and dialogue: Students benefit from structures and protocols that enable them to feel safe and secure during discussions. This is especially true when speaking about current events that may be contentious or produce strong emotions in our learners. Structures and protocols provide predictability and help to scaffold the development of communication skills. For engaging students in conversations about social justice issues, Learning for Justice’s Let’s Talk! provides a number of concrete tips. For example, when structuring critical conversations, teach students to “Restate, Contemplate, Breathe, Communicate” to manage emotions or use temperature-check strategies such as Fist to Five to understand students’ comfort level.
Likewise, Facing History and Ourselves has a number of strategies that teachers can use before, during, or after discussions, such as “Big Paper” to promote silent conversations or “Cafe Conversations” where students take on the role of an assigned perspective in a small group discussion.
The democratic classroom fosters critical thinking, authentic participation, and social and emotional learning. It’s a humanizing space that empowers our students. As American scholar and activist bell hooks says, this is “education as a practice of freedom.” Part of creating a democratic classroom is being aware of how to set up our classrooms, establish community, and make space for students’ diverse voices, opinions, and perspectives.
This is part of our hidden curriculum. Whether intentional or not, our choices communicate beliefs, values, and expectations to students. If we want our students to become active citizens, we must ensure that our routines, structures, and interactions mirror this aim. |
Yesterday, scientists at the Smithsonian Astrophysical Observatory in Cambridge, Massachusetts made a major announcement. For the first time, after years of searching, Earth-sized planets had been detected outside of our solar system. Among the five planets in the distant Kepler-20 star system are Kepler-20e and Kepler-20f—two rocky orbs with diameters approximately 87 percent and 103 percent that of earth, respectively. The news has the scientific world in a state of excitement over the consequences of the find. We spoke with Smithsonian astrophysicist Francois Fressin, the lead author of the paper, about the discovery.
Researchers have been using the Kepler space telescope since it launched in March of 2009 to search for exoplanets, or planets in other solar systems. “Kepler is staring at 200,000 stars, all located in the same area of the sky, and it just monitors the light it gets from each of the stars, continuously, for years,” says Fressin. “For a fraction of the stars, there’s a periodic dimming with the same duration and same depth of light.” This dimming can be caused by a small opaque body crossing between the star and the telescope—in this case, a pair of planets. The team first detected the telltale dimming more than a year ago, but had to make more calculations with custom-developed software to rule out the possibility that it was caused by other phenomena.
From the degree and frequency of the dimming, the scientists are able to make inferences about the planets. Kepler-20e and Kepler-20f are 6,900 miles and 8,200 miles in diameter, respectively, remarkably close to Earth’s 8,000-mile diameter. Because the two planets are so close to their host star—they orbit at 4.7 million miles and 10.3 million miles, both far closer in than Mercury is to the sun—they are believed to be extremely hot, with average temperatures of 1400 and 800 degrees Fahrenheit, respectively. “We know they’re both pure rock bodies,” Fressin says. “But we don’t have precise mass estimates, so we can’t say if they’re similar in composition to the Earth, or something denser with more iron, like Mercury.”
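A rough check on those figures: 6,900 / 8,000 ≈ 0.86 and 8,200 / 8,000 ≈ 1.03, which is where the earlier "approximately 87 percent and 103 percent of Earth's diameter" comparison comes from. The size estimate itself follows from how much light is blocked: during a transit, the dip in brightness is approximately (R_planet / R_star)², the square of the planet-to-star radius ratio, so a planet half the radius of another blocks only about a quarter as much light. This is why Earth-sized worlds produce such shallow, hard-to-detect dips compared with gas giants.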
What It Means For Astronomy
Exoplanet hunters began uncovering distant gas giants as early as 1992, but smaller, Earth-sized bodies had proved more difficult to detect. “We’ve crossed the threshold: this is the first time that humanity is able to detect an Earth-sized object around another star,” Fressin says. “That’s symbolically and technologically important.”
The discovery represents a historic milestone in astronomy. Now, scientists are convinced that they have the right tools to be able to detect Earth-sized planets that might support life. Researchers will continue using the Kepler space telescope to locate exoplanets in hope of finding such a world.
What It Means For Planetary Science
The discovery also turns upside-down much of what scientists believed about the formation of solar systems. The two Earth-size planets are interspersed with three gas giants, all extremely close to the host star, Kepler-20. “From the star, it goes in the order big, small, big, small, big, which seems completely weird,” says Fressin. “In our solar system, we have these four rocky small bodies, and then, farther away, these four large giant gaseous planets. So how did that happen, that we have all this mixing in the Kepler-20 system?”
Although we don’t currently have definitive answers, scientists suspect that the planets drifted into their current position over time. “They didn’t form at the place they are right now, there was not enough rocky material to build these five planets so close to their host star,” Fressin says. “So one solution would be that they formed farther out, and then migrated in.”
What It Means For Extraterrestrial Life
The most tantalizing possibility of these discoveries is the potential that the exoplanets might harbor life. But both Kepler-20e and Kepler-20f are outside the habitable zone—often called the “Goldilocks” zone—that is neither too close nor too far from the host star, allowing for the evolution of living creatures. ”We don’t know a lot of things about life, but we know that one of the main ingredients of life on Earth is the presence of liquid water,” says Fressin. “Right now, at the temperatures estimated, water can’t be in a liquid state on either planet.”
Still, the hypothesis that the planets may have formed farther away, and then migrated to their current locations close to the star, means that life may have existed long ago. “It seems pretty clear that Kepler-20f once crossed the habitable zone of its host star, after its formation,” Fressin says. “It is the closest object in terms of size to the Earth in the known universe, and this means that it could have been habitable in its past.”
What It Means For Space Exploration
Although Kepler-20 is much too far to attempt as the target of a space probe mission—it’s about 950 light-years from Earth, which would require a journey of 36 million years by the space shuttle—Fressin feels that discoveries like this should stimulate interest in the very real possibility of exploring other, closer, star systems. “It would be challenging, and would require great international collaboration, maybe for one or two generations, but it would be feasible,” he says.
Such a mission would admittedly be very long-term, but the rewards are many. “I think the best location to send a probe would be to the closest sun-like star,” says Fressin. “So then imagine, in two generations, we’d have the probe coming back with pictures—real pictures—of another world.” |
How to Write a Report in the Narrative Form
Narrative writing—writing that tells a story—is well suited to reports that relate events with a beginning, middle and end. Police officers describing an accident, human resource professionals explaining employee misconduct and doctors describing operations frequently write reports in the narrative form because a chronological recounting of events is often the best way for others to understand them. Also adding to the appeal of narrative reports is that the focus remains on people, their motivations, and their actions.
Review the principles of narrative writing. You can find these in many places, one of which is Northern Illinois University’s website. In particular, note this website’s advice that you should add the time element to your stories, provide many details and concentrate on “people, about the decisions they make and the consequences that follow.”
Decide which form of narrative is most appropriate for your purpose. For example, as the university explains, you can narrate without dialogue or you can tell the story by painting a picture, scene by scene, while quoting everyone involved.
Write in the first person if you want people to know you witnessed these events, as the online writing lab at Roane State Community College in Tennessee suggests. Reserve this for reports in which objectivity is not required.
Prepare a rough draft quickly by answering the five W’s: who, what, where, when and why. For example, start out with, “A teenage driver who was talking on his cell phone lost control of his car and veered over the double yellow lines on Highway 101, colliding with several cars traveling in the opposite direction.”
Expand on the rough draft by filling in more details and adding quotes. Describe the teenager by height, weight, and other pertinent physical characteristics. Explain how you know he was talking on his cell phone and how many cars he ended up colliding with.
Add anecdotes to the narrative, if possible. These can help you make a point and add clarity. For example, if you are narrating an argument between employees for a human resources report, explain the scene, time of day and exactly what was said. This creates a mental picture of the events.
Borrow techniques from fiction writers by using plot and characterization, as suggested by The Writing Site. After setting the scene, describe the rising action, climax and resolution. Describe characters by mentioning personal quirks and habits.
Create names for those involved if you are not permitted to use their real names. For confidential reports you can say, “Driver No. 1” or “Stock Room Employees.”
Michele Vrouvas has been writing professionally since 2007. In addition to articles for online publications, she is a litigation paralegal and has been a reporter for several local newspapers. A former teacher, Vrouvas also worked as a professional cook for five years. She holds a Bachelor of Arts in history from Caldwell College. |
The National Institute of Neurological Disorders and Stroke (NINDS) conducts and funds research on the motor neuron disorders. Researchers are testing whether different drugs, agents, or interventions are safe and effective in slowing the progression of motor neuron diseases. The National Institutes of Health (NIH) is conducting clinical trials to study drugs to stimulate muscle growth in Kennedy’s disease and to suppress endogenous retroviruses in individuals with ALS. A large NIH-led collaborative study is investigating the genes and gene activity, proteins, and modifications of adult stem cell models from both healthy people and those with ALS, spinal muscular atrophy, and other neurodegenerative diseases to better understand the function of neurons and other support cells and identify candidate therapeutic compounds.
Information from the National Library of Medicine’s MedlinePlus
The motor neuron diseases (MNDs) are a group of progressive neurological disorders that destroy cells that control essential muscle activity such as speaking, walking, breathing, and swallowing. Normally, messages from nerve cells in the brain (called upper motor neurons) are transmitted to nerve cells in the brain stem and spinal cord (called lower motor neurons) and from them to particular muscles. When there are disruptions in these signals, the result can be gradual muscle weakening, wasting away, and uncontrollable twitching (called fasciculations). Eventually, the ability to control voluntary movement can be lost. MNDs may be inherited or acquired, and they occur in all age groups. MNDs occur more commonly in men than in women, and symptoms may appear after age 40. In children, particularly in inherited or familial forms of the disease, symptoms can be present at birth or appear before the child learns to walk.
The causes of sporadic (noninherited) MNDs are not known, but environmental, toxic, viral, or genetic factors may be implicated. Common MNDs include amyotrophic lateral sclerosis (ALS), progressive bulbar palsy, primary lateral sclerosis, and progressive muscular atrophy. Other MNDs include the many inherited forms of spinal muscular atrophy and post-polio syndrome, a condition that can strike polio survivors decades after their recovery from poliomyelitis.
There is no cure or standard treatment for the MNDs. Symptomatic and supportive treatment can help patients be more comfortable while maintaining their quality of life. The drug riluzole (Rilutek®), which has been approved by the U.S. Food and Drug Administration (FDA) to treat ALS, prolongs life by 2-3 months but does not relieve symptoms. The FDA has also approved the use of edaravone to reduce the clinical decline seen in ALS. Other medicines that may help reduce symptoms include muscle relaxants such as baclofen, tizanidine, and the benzodiazepines for spasticity; glycopyrrolate and atropine to treat excessive saliva; and anticonvulsants and nonsteroidal anti-inflammatory drugs to relieve pain. Panic attacks can be treated with benzodiazepines. Some individuals may require stronger medicines such as morphine to cope with musculoskeletal abnormalities or pain in later stages of the disorders, and opiates are used to provide comfort care in terminal stages of the disease.
The FDA has approved nusinersen (Spinraza™), the first drug approved to treat children and adults with spinal muscular atrophy. The drug is administered by intrathecal injection into the fluid surrounding the spinal cord. It is designed to increase production of the full-length SMN protein, which is critical for the maintenance of motor neurons.
Physical and speech therapy, occupational therapy, and rehabilitation may help to improve posture, prevent joint immobility, slow muscle weakness and atrophy, and cope with swallowing difficulties. Applying heat may relieve muscle pain. Assistive devices such as supports or braces, orthotics, speech synthesizers, and wheelchairs help some patients retain independence. Proper nutrition and a balanced diet are essential to maintaining weight and strength.
Prognosis varies depending on the type of MND and the age of onset. Some MNDs, such as primary lateral sclerosis and Kennedy disease, are not fatal and progress slowly. Patients with spinal muscular atrophy may appear to be stable for long periods, but improvement should not be expected. Some MNDs, such as ALS and some forms of spinal muscular atrophy, are fatal. |
The British East India Company was given permission by a Mughal emperor (Islamic Persian ruler) in 1617 to trade in India. In protecting its trading interests, Britain used more and more military force until it took over large areas of India and its administration, with the cooperation of local rulers. In 1857, after the Indian Rebellion (also called the Sepoy Mutiny or the Revolt of 1857), the British government took over control of the country from the British East India Company, adding India to its empire. The British ruled in India with many trained Indians as part of their administrative staff. The upper classes of India lost their traditional power, and in order to gain advancement in the new system, Indians had to have an English education and training to get positions in the British Raj. Even today, the privileged classes of India are those with an English education.
Beginning in the 1920s, leaders such as Mohandas Gandhi sought to rouse the Indians from their colonial bondage. Gandhi taught the people to boycott English products and to make their own cloth and salt. He used the principle of nonviolence to protest the presence of the British and gained a following of millions. He was miraculously able to unify all the religious factions of India, particularly the Hindus and Muslims, who were rivals. Independence was granted in 1947, with the partition of the country into Pakistan (which later became the Islamic Republic of Pakistan) and India. India today is a secular state, and its citizens belong to many different religions, including Hinduism, Islam, Jainism, Sikhism, and Buddhism.
Desai’s story takes place in the postcolonial nation of contemporary India. ‘‘Postcolonial’’ has a special meaning for the former territories of European nations, for all of the countries in Africa, Asia, or the Americas that were held by European powers were changed forever by the dominant foreign culture. The postcolonial nations often exhibit symptoms of displacement, shock, and schizophrenic values, amounting to a modern identity crisis. Forced to modernize, they cannot go back to the way things were, yet they cannot forget their cultural heritage. The family in ‘‘Games at Twilight’’ is much like a Western nuclear family, and their suburban life represents the new direction India took after independence. People had to move to the cities where there were jobs, and the old extended families and customs began to break down. Western education brought Western desires, consumerism, and secularism. Most of Desai’s fiction takes place in large cities like Bombay or Delhi, which Desai describes as ugly and destructive of life, especially to sensitive souls like Ravi.
Women and Children
All this change gave rise to an educated English-speaking middle class in India that is oriented toward Western lifestyles and values. Desai had a German mother and was given a Western education. She became a voice of the modern middle-class Indian woman in her fiction, showing the inner suffering of stifled, sensitive women fighting against traditional roles in Cry, the Peacock and Fire on the Mountain. Desai champions a freer life for women but shows the cost of postcolonial shock in the families of her characters who confront the fragmentation of city life. The focus in ‘‘Games at Twilight,’’ however, is on the child. Anita Desai has written children’s books, such as Village by the Sea (1982), in which lower-class children have to support their failing family. Several stories in the collection Games at Twilight are written from a child’s point of view. Since her novels concern psychological portraits of adults, their malaise is often traced back to childhood incidents, such as Ravi’s rude awakening. Desai writes of Indian culture from a secular point of view, though she includes India’s rich religious and philosophical background in her work. Equally influenced by Western and postcolonial Indian concerns, she writes of both the breakdown of the old order and the search of her characters for a new life. Like Ravi, her characters strive to become themselves in a new and puzzling world.
Sara Constantakis, Thomas E. Barden – Short Stories for Students – Presenting Analysis, Context & Criticism on Commonly Studied Short Stories, vol. 28 (2010) – Anita Desai – Published by Gale Cengage Learning. |
Convincing people not to chop down trees is hard, especially in poor countries, where the quick fix of clearing land for development and selling timber seems like a winning strategy. When money is on the table, it’s hard for do-gooders warning about threats such as global warming to get much traction with talk of carbon reserves, or locals to win debates with arguments about tradition.
But what if you could prove that the forest itself was a potential cash cow, and that leaving it alone might make more money than cutting it down?
That’s the premise of a new report from the Center for Global Development by the economist Katrina Mullan, who quantifies the value of forest ecosystems in an effort to complicate the traditional binary between conservation and development. Her hope is that more people will see the immediate benefits of conservation.
In China, for example, she found that a reforestation effort in the upper watershed of the Three Gorges Hydroelectric Power Dam helped control erosion, saving $15.1 million in sediment-clearing costs and generating an additional $21.9 million worth of electricity thanks to increased water flow each year.
When mangrove forests on coasts are destroyed, they expose nearby settlements to natural disasters (particularly noticeable during the 2004 Indian Ocean tsunami) and reduce the productivity of fisheries—in Mexico, a 3% reduction of mangrove area in one village reduced revenue from shrimp harvests by $279,000 a year. Forests provide bee populations for pollination of Costa Rican coffee plantations, increasing profits by $62,000 a year.
And when fires are started to clear forests or as a side-effect of logging, the smoke has serious health implications: Forest fires in Indonesia are estimated to have generated $300 million in national health costs from respiratory illness.
Mullan hopes that governments in developing economies can use these ideas to increase the compensation developers must pay residents to offset the economic losses of deforestation, or even to have those companies pay for conservation. In Nepal, for example, a fishery cooperative pays upstream residents to use forestry practices that minimize the erosion that could fill their waters with sediment.
It’s not easy to value forest land—the calculations depend on location, how people use it, and the level of development in the area. But the odds that protecting forests will be more lucrative than harvesting them increase where rural population density is high, incomes are low and natural disasters are prevalent—exactly the circumstances that increase the temptations of deforestation. |
Use this calculator to determine the coordinates of the midpoint (M) of a line segment determined by its two end points (A, B). Find what the midpoint of a segment AB is.
What is a midpoint?
In geometry, a midpoint is the point on a segment of a straight line which splits it in two equal halves which is why it is sometimes referred to as a halfway point. A segment is defined uniquely by two points (say A and B) and has a unique point (say M) which sits in its middle. A more technical way of describing a midpoint M is to say that it bisects the segment AB.
In a two-dimensional Cartesian coordinate plane each point has two coordinates - one on each axis. A simple visualization of the midpoint of a straight line AB and the corresponding coordinates is shown below:
The point M splits the length of AB in two equal parts. Using a midpoint calculator one can find the coordinates of the midpoint by knowing the coordinates of the endpoints. Alternatively, if the coordinates of one endpoint and the midpoint are known, then the coordinates of the other point can be determined as well. See our endpoint calculator.
The equation for finding the coordinates of the midpoint of a straight line AB defined by the points A and B is:
xM = (xA + xB) / 2 and yM = (yA + yB) / 2
where (xA, yA) are the coordinates of point A, (xB, yB) are the coordinates of point B, and (xM, yM) are the coordinates of M - the midpoint of AB as shown in the illustration above.
As we can see from the formula, the x-coordinate of the midpoint M of the line segment AB is the arithmetic mean of the x-coordinates of the two endpoints of the segment. Likewise, the y-coordinate of the midpoint is the mean of the y-coordinates of the endpoints. The formula is easy enough to apply even without the help of a calculator, but using a midpoint formula calculator certainly makes that a breeze.
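For readers who prefer code, here is a minimal sketch of the same calculation in Python. The function name and the use of (x, y) tuples are illustrative choices rather than anything prescribed by the calculator itself.

```python
def midpoint(a, b):
    """Return the midpoint of the segment AB.

    a and b are (x, y) tuples; each midpoint coordinate is simply
    the arithmetic mean of the corresponding endpoint coordinates.
    """
    (xa, ya), (xb, yb) = a, b
    return ((xa + xb) / 2, (ya + yb) / 2)
```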
Midpoints in geometry
An example for using the midpoint equation can most easily be given in geometry. If one is given (or measured with a ruler) the coordinates of the two endpoints, one can determine the middle point. An example task would be:
What is the midpoint of the segment AB, if the coordinates of the first endpoint (A) are (2,6) and of the second (B) are (4, 18)?
To answer what the midpoint of AB is, simply replace the values in the formula to find the coordinates of the midpoint. In this case these are (2 + 4) / 2 = 3 and (6 + 18) / 2 = 12. So (xM, yM) = (3, 12) is the midpoint of the segment defined by A and B.
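Continuing the hypothetical sketch above, the worked example can be checked directly. The inverse problem mentioned earlier, recovering an endpoint from the midpoint and the other endpoint, follows by rearranging the same formula to B = 2M - A:

```python
A, B = (2, 6), (4, 18)
M = midpoint(A, B)
print(M)  # (3.0, 12.0), matching the worked example above

def endpoint(a, m):
    """Recover endpoint B from endpoint A and midpoint M, since B = 2M - A."""
    (xa, ya), (xm, ym) = a, m
    return (2 * xm - xa, 2 * ym - ya)

print(endpoint(A, M))  # (4.0, 18.0), i.e. the original point B
```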
Applications in physics
In physics, midpoint calculations have several prominent applications. For example, the center of mass of a given object is its center of gravity. In order to balance that object, support has to be provided at the midpoint to counteract gravity in such a way that neither end begins to dip. Finding the midpoint therefore has obvious utility.
In tasks related to transportation or movement of objects in a straight line through two-dimensional space, the midpoint formula calculation can be useful in determining at what point or when a vehicle is halfway through to its destination. Obviously, objects rarely have the option of moving in a straight line for long distances, so such applications are mainly for teaching purposes.
Cite this calculator & page
If you'd like to cite this online calculator resource and information as provided on the page, you can use the following citation:
Georgiev G.Z., "Midpoint Calculator", [online] Available at: https://www.gigacalculator.com/calculators/midpoint-calculator.php URL [Accessed Date: 05 Jul, 2022]. |
Smartwatches and other wearable devices may be used to sense illness, dehydration and even changes to the red blood cell count, according to biomedical engineers and genomics researchers at Duke University and the Stanford University School of Medicine.
The researchers say that, with the help of machine learning, wearable device data on heart rate, body temperature and daily activities may be used to predict health measurements that are typically observed during a clinical blood test. The study appears in Nature Medicine on May 24, 2021.
During a doctor’s office visit, a medical worker usually measures a patient’s vital signs, including their height, weight, temperature and blood pressure. Although this information is filed away in a person’s long-term health record, it isn’t usually used to create a diagnosis. Instead, physicians will order a clinical lab, which tests a patient’s urine or blood, to gather specific biological information to help guide health decisions.
These vital measurements and clinical tests can inform a doctor about specific changes to a person’s health, like if a patient has diabetes or has developed pre-diabetes, if they’re getting enough iron or water in their diet, and if their red or white blood cell count is in the normal range.
But these tests are not without their drawbacks. They require an in-person visit, which isn’t always easy for patients to arrange, and procedures like a blood draw can be invasive and uncomfortable. Most notably, these vitals and clinical samples are not usually taken at regular and controlled intervals. They only provide a snapshot of a patient’s health on the day of the doctor’s visit, and the results can be influenced by a host of factors, like when a patient last ate or drank, stress, or recent physical activity.
“There is a circadian (daily) variation in heart rate and in body temperature, but these single measurements in clinics don’t capture that natural variation,” said Duke’s Jessilyn Dunn, a co-lead and co-corresponding author of the study. “But devices like smartwatches or Fitbits have the ability to track these measurements and natural changes over a prolonged period of time and identify when there is variation from that natural baseline.”
To gain a consistent and fuller picture of patients’ health, Dunn, an assistant professor of biomedical engineering at Duke, Michael Snyder, a professor and chair of genetics at Stanford, and their team wanted to explore if long-term data gathered from wearable devices could match changes that were observed during clinical tests and help indicate health abnormalities.
The study, which began in 2015 at Stanford with the Integrative Personal Omics Profiling (iPOP) cohort, included 54 patients. Over three years, the iPOP participants wore an Intel Basis smart watch that measured their heart rate, movement, skin temperature and sweat gland activation. The participants also attended regular clinic visits, where researchers used traditional measurement methods to track things like heart rate, temperature, red and white blood cell count, glucose levels, and iron levels.
The experiment showed that there were multiple connections between the smartwatch data and clinical blood tests. For example, if a participant’s watch indicated they had a lower sweat gland activation, as measured by an electrodermal sensor, that indicated that the patient was consistently dehydrated.
“Machine learning methods applied to this unique combination of clinical and real-world data enabled us to identify previously unknown relations between smartwatch signals and clinical blood tests,” said Łukasz Kidziński, a co-lead author of the study and a researcher at Stanford.
The team also found that measurements that are taken during a complete blood lab, like hematocrit, hemoglobin, and red and white blood cell count, had a close relationship to the wearables data. A higher sustained body temperature coupled with limited movement tended to indicate illness, which matched up with a higher white blood cell count in the clinical test. A record of decreased activity with a higher heart rate could also indicate anemia, which occurs when there isn’t enough iron in a patient’s blood.
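The study's own models and feature set are not reproduced here, but the general approach of regressing a blood-test value on wearable-derived features can be sketched as follows. The synthetic data, the feature list, and the choice of a random-forest regressor are all assumptions made for demonstration; they are not the authors' pipeline, which used the real iPOP cohort data.

```python
# Illustrative sketch only: predict a lab value (e.g., hematocrit)
# from wearable-derived features using a generic regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for wearable features: resting heart rate,
# skin temperature, daily step count, and sweat gland activation.
X = np.column_stack([
    rng.normal(65, 8, n),       # heart rate (bpm)
    rng.normal(33.5, 0.6, n),   # skin temperature (deg C)
    rng.normal(8000, 2500, n),  # steps per day
    rng.normal(2.0, 0.7, n),    # electrodermal activity (a.u.)
])
# Fake target with a weak dependence on heart rate and activity.
y = 45 - 0.15 * (X[:, 0] - 65) - 0.0002 * (X[:, 2] - 8000) + rng.normal(0, 1.0, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```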
Although the wearables data isn’t specific enough to accurately predict the precise number of red or white blood cells, Dunn and the team are highly optimistic that it could be a noninvasive and fast way to indicate when something in a patient’s medical data is abnormal.
“If you think about someone just showing up in an emergency room, it takes time to check them in, to get labs going, and to get results back,” said Dunn. “But if you were to show up in an ER and you’ve got an Apple Watch or a Fitbit, ideally you’d be able to pull the long-term data from that device and use algorithms to say, ‘this may be what’s going on.’
“This experiment was a proof-of-concept, but our hope for the future is that physicians will be able to use wearable data to immediately get valuable information about the overall health of a patient and know how to treat them before the clinical labs are returned,” Dunn said. “There is a potential for life-saving intervention there if we can get people the right care faster.”
Destruction, from "Ozymandias," a poem by Percy Bysshe Shelley
In Shelley’s poem, a traveler describes the colossal Wreck of an imperious ruler’s once-magnificent statue. What was once a grand and imposing monument in honor of a powerful King of Kings now lies scattered in rubble. The poem offers many interpretations, but here we will pay attention to the theme of destruction.
“Ozymandias” (Percy Bysshe Shelley)
I met a traveller from an antique land,
Who said — “Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.”
Nature: We cannot know from the poem what caused
Two vast and trunkless legs of stone
Stand in the desert
with the legs fractured at the knees and the statue collapsed in what is now desert, but it is possible that the insidious effects of wind, rain, heat and temblor doomed the monument. No matter how grand the structures of humans, nothing can stand up to the forces of nature, encroaching dunes, or the other gradual erosions of time. Storm waves reshape beaches; high winds topple trees and decapitate houses; blazing heat melts roads; and hailstorms smash windows. Destruction treads with nature’s march toward reduction of all things.
Humans Wreaking: The propensity of humans to violently destroy others’ lives and property cannot be overstated. A victorious invading army may have razed the palace, my Works, and the surrounding lands ruled by Ozymandias; in so doing they may have triumphantly and symbolically toppled that fallen leader’s bragging statue. Given that the trunk has disappeared, it is more likely that hands of humans rather than the hand of God did the demolition. If an earthquake or a massive fire had attacked the monument, its pieces would have fallen nearby and likely remain visible. If soldiers sacked the palace and this icon, they might have looted the torso and hauled it away. After all, the narcissistic king would have adorned his trophy statue with jewels or precious metals. From this short poem, it seems less likely that attackers pulled down the magnificent monument than that the implacable undermining of nature did the deed, because soldiers or others pillaging the grounds would probably have defaced the proud visage.
Civilization Falls: A metaphorical message about destruction permeates this poem. Certainly the grand carving on a pedestal honored a particular autocratic leader (Ozymandias), but that proud structure also represented a powerful city-state, fiefdom or empire. The destruction reminds us of the salting of Carthage after the Romans conquered it. [Note 1] All is desert in the poem around what was once a vibrant and lush city. All civilizations have prided themselves on their achievements – Look on my Works, ye Mighty, and despair! – but most of them collapsed or were toppled.
Retribution: Although small towns shrivel under the impersonal forces of economics and racial injustice, much destruction is deliberate, as vengeance flows full in our veins. Perhaps an uprising of slaves or political outcasts overthrew the ruler, the cruel reigning sovereign, on his Ides of March. They would have danced around the symbolic shaming that comes from shattering the image of the once powerful. Mussolini on a meat hook, the Tsar’s family before the guns, the homage to Robert E. Lee upended; the Berlin Wall sledgehammered into pieces, the Buddhas of Bamiyan in Afghanistan dynamited by the Taliban – destruction by humans gives vent to their oppressions and ideologies.
Colossal egos often end humbled, destroyed. Stalin was denounced and de-throned, Hitler fired a bullet in a bunker; royalty met the guillotine; Presidents have been impeached; and Enron’s CEO died of a heart attack. The downfalls of once-mighty figures often accompany the destruction of their artifacts, such as monuments, public squares, named bridges and airports, or stupendous edifices.
We invite you to read about other poems discussed on this blog, and their themes. Here they are:
- Alcohol: Mr. Flood’s Party
- Beauty: She Walks in Beauty
- Chance: Hap
- Death: Death, be not proud
- Decisions: The Road Not Taken
- Destruction: Ozymandias
- Silence: For Whom the Bell Tolls
- Time: To His Coy Mistress
- Trains: The Railway Train
- Work: Stopping by Woods on a Snowy Evening
- At least as early as 1863, various texts claimed that the Roman general Scipio Aemilianus sacked Carthage, enslaved its survivors, plowed over the city, and sowed it with salt after defeating it in the Third Punic War (146 BC). |
Working scientists discuss how they use the scientific method in their work in this video from PBS Learning Media. This shows real-life application for students who have been introduced to the basic steps.
Steps of the Scientific Method for grades 5-8 offers in-depth exploration of each step, examples, design help, and educator tools for teaching. A flow chart emphasizes that results and new information may require backing up and repeating some steps.
Even very young children can learn the instinctive steps of the scientific method and begin to "think like a scientist." Early childhood educators will want to take a look at the PBS Ruff Ruffman collection as Ruff models scientific inquiry skills, takes on challenges, and learns the value of failure. Also check out these ideas for using the scientific method with preschool, kindergarten, and first grade students. You may want to use this Scientific Method song from PBS Kids.
Developing the skills to "think like a scientist" is crucial for students preparing for future study and careers. PBS Learning Media has a multi-part program for elementary teachers, based on the You At The Zoo video series that models educators utilizing the scientific method and inquiry in the classroom. Take a look:
Part of the Scientific Method is efficiently recording questions, hypotheses, experiments, results, and conclusions. Check out these useful resources to help teachers incorporate the use of STEM notebooks or lab notebooks in their classrooms.
Scientific Method: Lesson Plans
The Science Spot provides an excellent lesson plan for teaching and using the Scientific Method. The lesson includes useful templates for implementing the method, an experiment for practicing the scientific method, SpongeBob Science worksheets, and extension activities.
In I Am A Scientist, designed for grades 1-2, children learn about the scientific method while practicing the skills of questioning, predicting, experimenting, writing, and sharing. It includes a Scientific Method graphic organizer to be used with five simple experiments.
The Scientific Process, from PBS Learning Media, is a complete lesson plan for grades 5-7 where students make observations, develop a hypothesis, and test their hypothesis to see how well it holds up in light of the evidence they collect.
From Arizona State University, Introducing Students to the Scientific Method includes teaching tips and a game activity where students must connect their problem-solving steps to the scientific method as they figure out a mystery.
The Minnesota Science Teachers Education Project has several lesson plans designed to give students practice with the scientific method. Lessons address process more than content and focus on asking questions for investigations, experimental design, variables, data collection and analysis, and stating conclusions. For K-2 students, the You Are A Scientist lesson gives practice in observing and recording data. 3rd-6th grade students use the Scientific Method in various areas of science, including chemistry, electromagnetism, mechanics, force and motion, and biology. For middle school students, a Consumer Testing lesson gives students experience using the scientific method in practical applications.
This multimedia site for your smartboard could be used to provide graphics as you introduce or review the Scientific Method, or it could be used for individual study.
Inventions and Inventors: Resources for Teachers
You'll find videos about life-changing inventions, stories about famous inventors, and discussions of innovative technology at this history of inventions site.
Keeping kids excited about inventions is a key to continued engagement. Consider showing a brief video clip each day of a fascinating new invention. Can sound be used to put out fires? Can a teddy bear help migrant kids stay safe? Can a reverse vacuum keep a cat from wandering?
PBS's Design Squad has an educators' page with lesson plans, teachers' guides, activities and videos aimed at middle-school students. Of particular interest is Invent It - Build It, a comprehensive teacher's guide with six invention challenges intended to foster creative thinking, problem-solving, and engagement in the design process. You'll love using this excellent resource in your classroom.
The Tech Museum of Innovation has a terrific collection of engaging lesson plans to help you lead your students through design and invention challenges. Lessons are aligned with standards for grades 3-12.
How can failure lead to success? In this lesson plan from National Geographic, students investigate the role of perseverance in the invention process by exploring several items that were invented by accident.
In this interactive activity from Scholastic, students conduct a virtual interview with inventor Ben Franklin and write a news story.
This Scholastic lesson plan on the invention of the bicycle emphasizes that every invention depends on smaller inventions before it. It's a Whatchamacallit challenges students to create an original invention that solves a common problem.
Once students understand the invention process, have your class try some of these design challenges from ZOOM and from NASA. In each case, encourage them to evaluate and improve upon their design as they follow and repeat the steps. |
The French Revolution unleashed the idea of the Rights of Man and Nations, an unstoppable force which led to the 1848 socialist revolutions in Europe. The latter sent radical German revolutionaries, the “Forty-eighters,” who controlled the powerful German-American press which Lincoln did not ignore in 1860. The Federal host invading the American South included divisions of Germans, Irish, the Red Shirts of Garibaldi, and some who had followed the Hungarian revolutionary, Kossuth.
Bernhard Thuersam, www.Circa1865.com
Revolutionary Indemnity Deja Vu
“The French Revolution was different [than previous revolutions] because it brought into the world and Europe in particular, a new idea, the Rights of Man, and with the Rights of Man went the Rights of Nations. Where previously states had been based on dynastic power they were now based on national existence. In the old days, right up to 1789, the state was simply the property of the ruler . . . Then suddenly there appeared the French people who said, “We are France.”
This was a challenge to all the dynasties of Europe and there was a competition of propaganda and of assertion, with, as the [revolution] developed, first the liberal and then the radical, and then the revolutionary leaders staking out more aggressively the claims of the people of France and in time the claims of others. After all, if France had the right to be a nation . . . this applied to others.
One of the factors which produced the revolutionary war was the provocative declaration which the French legislative assembly made on 19 November 1792, promising help and fraternity to every nation seeking to recover its liberty.
The word recover is curious. Most of the nations had never had their liberty, but it was already a myth that there had been a distant time when peoples had all been free and had then been enslaved by their kings.
Something else was curious about it. Although two great forces, the one of monarchy, of tradition, of conservatism, the other of liberalism and nationalism, were moving against each other, neither of them looked at it in practical terms [and action beyond issuing threats].
Strangely enough, though France was the one threatened [by the other monarchies seeking a restoration of Louis XVI], it was the French revolutionary government which finally plunged into war, declared war – threatened Austria in April 1792, and then actually went to war, though unable to do very much.
Why? Because as one of them said: “The time has come to start a new crusade, a crusade for universal liberty.” When the French revolutionary armies encountered the armies of the old [French] regime and were defeated, the cry arose, as it does in a war, of “Treason.” “We are betrayed.” The very same cry that the French raised in 1940 when they were again defeated.
[As the French revolutionary armies] began to achieve victories, [they] certainly brought liberation from the traditional institutions, liberation from the kings and princes, liberation from the Christian religion. At the same time, they brought demands . . .”After all,” the French said, “We have done the fighting, we have liberated you, we have presented you with the Rights of Man, we not only had to pay the money for these armies, we had actually to do the fighting for you as well. Therefore you must pay us.”
Wherever the armies of liberty went in Europe, they imposed indemnities. They collected so much that there was a time when the French revolutionary wars were practically paying for themselves. Moreover, as the armies grew greater and more powerful, the apprehensions of the civilian politicians in Paris grew greater also.
What they wanted was that these revolutionary armies . . . devoted as they were to liberty and equality and fraternity, should not exert power in Paris itself. As one of the revolutionaries said “We must get these scoundrels to march as far away from France as possible.” Revolution had become something for export.”
(How Wars Begin, A.J.P. Taylor, Atheneum Press, 1979, excerpts pp. 20-33) |
Hemoglobinopathy is an inherited condition where the structure of globin chains is abnormal and causes the development of medical symptoms, with a number of genetic diseases known to involve malformed globin chains. Many patients have variants on normal globin chain structure without experiencing symptoms, and their conditions are not considered hemoglobinopathies because they do not experience disease. The severity of a patient's condition can vary, depending on the type of abnormality, and there may be treatment options available, although the underlying issue cannot be cured.
In patients with hemoglobinopathies, the genes inherited to provide instructions on making globin chains are erroneous in nature. Two common examples of hemoglobinopathy are sickle cell anemia and hemoglobin C disease. The patient can experience anemia, bleeding and clotting problems, and other issues. By contrast, thalassemias, another family of inherited blood disorders, involve normal globins produced in decreased numbers, causing the patient to develop anemia.
If a doctor suspects a patient has a hemoglobinopathy on the basis of family history of symptoms, blood can be drawn and evaluated in the lab. Sometimes, clear structural differences can be seen under the microscope and in other cases, the blood may need to be analyzed more intensively. This will be used to provide information about the nature of the disorder and to determine if treatment options are available. While the patient cannot start making the right globin chains, blood products to counter anemia and other issues may be available.
The structure of globin chains is complex and the development of hemoglobinopathy can be the result of any number of errors. Sometimes, spontaneous mutations arise, and in other cases, patients may have a family history of disease. People from particular regions are often at risk for specific hemoglobinopathies, as sometimes they confer a survival advantage. People carrying the sickle cell trait, for example, are more resistant to malaria, leading to increased survival of people with this gene in malaria-prone areas and a corresponding risk of having children with full-blown sickle cell anemia as a result of inheriting the gene from both parents.
The study of hemoglobinopathy takes place in a number of regions around the world as people look into how these conditions express and are inherited. Researchers are also interested in learning more about the racial connections with various diseases, as this can provide information about why they developed in the first place and why they persist despite being clearly deleterious. This can also help people identify patients at risk of particular blood disorders on the basis of racial origin, as well as family history.
Prior to 1919, when the 18th Amendment was ratified, there was no time limit on the ratification process. However, in 1919, Congress instituted a time limit on the passage of a proposed addition to the Constitution. To date, six Amendments have been proposed that have not been ratified. Only two of the proposed Amendments could still be ratified.
Twelve Amendments were proposed in 1789, with articles three through 12 being ratified as the Bill of Rights. Some 203 years later, the second article included with the original 12 was ratified as the 27th Amendment. But the first article proposed was never ratified.
The text of that proposed Amendment dealt with the number of people represented by each member of the House, as well as the number of members of the House. This proposal has become moot since the size of Congress is well over the minimum requirements stated in the Amendment.
After the first enumeration required by the first article of the Constitution, there shall be one Representative for every thirty thousand, until the number shall amount to one hundred, after which the proportion shall be so regulated by Congress, that there shall be not less than one hundred Representatives, nor less than one Representative for every forty thousand persons, until the number of Representatives shall amount to two hundred; after which the proportion shall be so regulated by Congress, that there shall not be less than two hundred Representatives, nor more than one Representative for every fifty thousand persons.
The second proposed Amendment never to be ratified came about in 1810. That Amendment would have required citizens who accepted titles of nobility or other honors from foreign powers to surrender their citizenship. This Amendment could potentially still be ratified. To date, only 12 states have approved it, the last being in 1812.
If any citizen of the United States shall accept, claim, receive or retain any title of nobility or honour, or shall, without the consent of Congress, accept and retain any present, pension, office or emolument of any kind whatever, from any emperor, king, prince or foreign power, such person shall cease to be a citizen of the United States, and shall be incapable of holding any office of trust or profit under them, or either of them.
In 1861, an Amendment was proposed to protect the practice of slavery. This is the only proposed, and not ratified, Amendment to bear the signature of the President. The President's signature is considered unnecessary because of the constitutional provision that on the concurrence of two-thirds of both Houses of Congress the proposal shall be submitted to the States for ratification. Two states approved this proposal. Technically, it could still be ratified, although the 13th Amendment put an end to slavery.
No amendment shall be made to the Constitution which will authorize or give to Congress the power to abolish or interfere, within any State, with the domestic institutions thereof, including that of persons held to labor or service by the laws of said State.
On June 2, 1924, a proposed Amendment would have regulated child labor and allowed federal law to supersede state law. To date, 28 states have ratified this amendment.
The Congress shall have power to limit, regulate, and prohibit the labor of persons under eighteen years of age.
The power of the several States is unimpaired by this article except that the operation of State laws shall be suspended to the extent necessary to give effect to legislation enacted by the Congress.
The Equal Rights Amendment was proposed in March 1972, and later extended beyond the seven year limit to June 1982. However, it never received ratification by the necessary three-fourths of the states.
Equality of rights under the law shall not be denied or abridged by the United States or by any State on account of sex.
The Congress shall have the power to enforce, by appropriate legislation, the provisions of this article.
This amendment shall take effect two years after the date of ratification.
An August 1978 Amendment that would have granted the District of Columbia representation in the Congress reached the seven-year limit before it could be ratified.
For purposes of representation in the Congress, election of the President and Vice President, and article V of this Constitution, the District constituting the seat of government of the United States shall be treated as though it were a State.
The exercise of the rights and powers conferred under this article shall be by the people of the District constituting the seat of government, and as shall be provided by the Congress.
The twenty-third article of amendment to the Constitution of the United States is hereby repealed.
This article shall be inoperative, unless it shall have been ratified as an amendment to the Constitution by the legislatures of three-fourths of the several States within seven years from the date of its submission. |
What is cogeneration?
Combined Heat and Power
Cogeneration (Combined Heat and Power or CHP) is the simultaneous production of electricity and heat, both of which are used. The central and most fundamental principle of cogeneration is that, in order to maximize the many benefits that arise from it, systems should be based on the heat demand of the application. This can be an individual building, an industrial factory or a town/city served by district heat/cooling. Through the utilization of the heat, the efficiency of a cogeneration plant can reach 80% or more.
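As a rough illustration of how that overall efficiency is computed, the sketch below sums the useful outputs and divides by the fuel input. The specific figures (100 MWh of fuel, 38 MWh of electricity, 45 MWh of recovered heat) are hypothetical and chosen only to land in the 80%-plus range mentioned above.

```python
def chp_efficiency(fuel_input_mwh, electricity_mwh, useful_heat_mwh):
    """Overall CHP efficiency = (electricity + useful heat) / fuel input."""
    return (electricity_mwh + useful_heat_mwh) / fuel_input_mwh

# Hypothetical plant: 100 MWh of fuel yields 38 MWh of electricity
# plus 45 MWh of recovered useful heat.
print(chp_efficiency(100, 38, 45))  # 0.83, i.e. 83% overall efficiency
```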
Cogeneration optimizes the energy supply to all types of consumers, with the following benefits for both users and society at large:
- Increased efficiency of energy conversion and use. Cogeneration is the most effective and efficient form of power generation.
- Lower emissions to the environment, in particular of CO2, the main greenhouse gas. Cogeneration is the single biggest solution to the Kyoto targets.
- Large cost savings, providing additional competitiveness for industrial and commercial users, and offering affordable heat for domestic users.
- An opportunity to move towards more decentralized forms of electricity generation, where plants are designed to meet the needs of local consumers, providing high efficiency, avoiding transmission losses and increasing flexibility of system use. This will particularly be the case if natural gas is the energy carrier.
- Improved local and general security of supply – local generation, through cogeneration, can reduce the risk of consumers being left without supplies of electricity and/or heating. In addition, the reduced need for fuel resulting from cogeneration reduces import dependency – helping to tackle a key challenge for Turkey’s energy future.
- An opportunity to increase the diversity of generation plant, and provide competition in generation. Cogeneration provides one of the most important vehicles for promoting energy market liberalization.
- Increased employment – a number of studies have now concluded that the development of CHP systems is a generator of jobs.
a) Relations with the Catholic Church: Even though Mussolini had seemed anti-clerical and had written “God Does Not Exist,” he began forming a good relationship with the Roman Catholic Church because of its huge power and influence. He began building this relationship by getting married in a church in 1926 and having his two children baptized. He also closed down some wine shops and nightclubs. In 1929, after a series of meetings, the Lateran Treaty was signed: it recognized the pope’s sovereign rule over Vatican City, and the church received 750 million lire in cash and 1,000 million lire in government bonds as compensation for the loss of the Papal States in 1860. Catholicism also became the state religion, church marriages became legally recognized, religious education became compulsory in secondary schools, and Catholic Action could continue as long as it stayed out of party politics and remained subordinate to the church’s hierarchy. This treaty brought the church and its faithful followers onto Mussolini’s side.
However, this treaty angered the radical fascists, who were anti-clerical, since the independence of the church meant there could be no fully totalitarian rule. The church had also been opposed to communism and socialism, so when the fascists destroyed the Left this brought Mussolini closer to the church. Mussolini further strengthened the relationship by exempting the clergy from paying taxes in the mid-1920s; in return, Pius XI forced Don Sturzo, a fierce opponent of fascism, to resign.
However, some friction remained between the fascist government and the Catholic Church, as the Catholic youth movements rivaled the fascist youth and student organizations. Moreover, some members of the Catholic student organization became influential and later emerged as significant leaders of the Christian Democratic Party in Italy after 1945, such as Aldo Moro, which created problems for the fascists in the 1930s. The pope also disapproved of the anti-Semitic laws introduced by Mussolini. However, he approved of the invasion of Abyssinia in 1935, since it resembled a crusade, and of Mussolini’s intervention in the Spanish Civil War in 1936 to stop the Left. Mussolini’s relationship with the church remained good, since both sides gained a lot from their treaty. This policy restored much of the church’s power and made Mussolini’s aim of building a new fascist generation impossible.
b) Education and youth movements: Italian fascism, like other fascist movements, wanted to influence the young generation. In 1926 the Opera Nazionale Balilla was established, bringing together fascist youth organizations and giving them government funding. It was placed under the Ministry of Education in 1929, and rival youth organizations began to be closed down, except for the Catholic youth groups. In 1932 Balilla membership became obligatory. In 1937 the ONB merged with the Young Fascists to create a single youth organization, the Gioventù Italiana del Littorio, for 6-21 year olds. The Balilla was political and militarized, but it was also filled with sports and recreational activities, which attracted children; even so, 40% of the relevant population never joined, showing the failure of this policy.
At first Italian schools retained some freedom, but Mussolini appointed the philosopher Giovanni Gentile as his first minister of education. In 1923 Gentile passed an education act which reshaped schooling by promoting grammar schools and encouraging philosophy and classical studies, while giving little emphasis to technical and vocational education. To ensure that schools would not spread anti-fascist ideas, anti-fascist teachers were removed and teachers were forced to take an oath of loyalty. Mussolini only began to control schools tightly in the mid-1930s, when schools were obliged to use fascist textbooks. In 1936 a single compulsory history textbook was introduced, which promoted the parts of Italian history that would create loyalty to Mussolini. Physical education was also important, to produce healthy children who could go to war or be prepared for motherhood. In these ways Mussolini greatly reshaped education in order to create loyalty to himself.
c) The battle for the births: In order to make Italy a great power, Mussolini launched a battle for the births in 1927 so that Italy’s population could grow from 37 to 60 million. The government encouraged marriage by imposing extra taxes on bachelors, awarding prizes to women with the most children, exempting families with ten or more children from taxes, giving loans to newly married couples, and introducing family allowances in 1934; finally, the criminal code of 1932 banned contraception, abortion and sterilization. This policy failed, since the birth rate continued to fall: in 1922 there were 147.5 births for every 1,000 women of childbearing age, while by 1936 the figure had fallen to 102.7. The population only reached 44 million in 1940, and even this was due to the falling death rate and reduced emigration; the government also failed to encourage early marriage, as the average age at which people married rose during the 1930s.
d) The media and the arts: Mussolini wanted to suppress opposition, so he started by censoring newspapers in 1923. The fascist government owned only about 10% of newspapers, which meant it did not take over the press outright but controlled what it wrote, since editors who opposed him would be fined or banned from journalism. At first the fascist government saw radio and film as insignificant, but this changed as government broadcasts increased and radio ownership rose to one million sets. Similarly, in 1924 a government film agency called the Istituto Luce was created to make documentaries, and in 1937 the government founded an Italian film studio called Cinecittà.
However, Mussolini only began using propaganda heavily in the 1930s, in order to form a new type of Italian: heroic and energetic. In 1925 the cult of the Duce was launched, as was a biography of Mussolini called Dux. In this book Mussolini was presented as an athlete, a hard worker and a man who loved the people. There were also many parades and elaborate rituals intended to revive the Roman spirit. The use of propaganda helped Mussolini become more popular in the years 1929-36. However, this popularity began to diminish once Mussolini became more radical, applied the anti-Semitic policy and joined the Second World War.
Mussolini did not get involved in art as much as Hitler had, but there were divisions within the fascists as to which type of art to support. The neo-classicists preferred architecture and art inspired by ancient Rome, while the modernists encouraged experimental art. This led to the formation of two artistic prizes: the Cremona prize for traditional art and propaganda, and the Bergamo prize for experimentation. To spread access to art, the fascist government organized almost 50 art exhibitions a year.
e) Racial policies (anti-Semitic laws): Early on Mussolini had not shown any signs of anti-Semitism, although he had been racist toward Africans in Libya and Abyssinia. One theory is that Mussolini wanted to weaken the Jews because in the 1930s he intended to start a war and was not sure whether they would be loyal. Another theory is that by 1938 Mussolini was getting closer to Germany, and while Hitler never pushed him to adopt anti-Semitism, he may have chosen to adopt it in order to get closer to Hitler. Mussolini started this policy with an article in 1938 calling for the number of Jews to be reduced. He then banned marriage between Jews and non-Jews and forbade Jews from jobs in the civil service, teaching and PNF membership.
Jewish children were also excluded from state schools, and up to 10,000 non-Italian Jews were deported. These laws made Mussolini unpopular even within his own party, to which roughly a third of Italian Jews belonged. The church, which had major influence, also criticized the laws, making Mussolini even more unpopular. By 1941 some 6,000 Italian Jews had left Italy, among them businessmen, professionals and academics, and their departure damaged the economy. Moreover, the laws were not implemented systematically.
f) Other areas / points of your own: Economic policies. Mussolini wanted to make the economy self-sufficient and free of dependence on foreign imports, a policy called autarky. In 1925 he adopted the battle of the grains to improve agriculture and increase grain production in order to demonstrate economic strength and stir nationalism. This policy succeeded, as imports were reduced by 75% between 1925 and 1935, thereby increasing Mussolini’s popularity.
However, to increase wheat production he needed more land to plant, and land better suited to citrus fruit was turned over to grain, causing a decrease in citrus production, while poverty in the south continued. Another policy he adopted was the corporate state, launched in 1926 to manage relations between employers and employees so that they would cooperate and produce more. By 1934, 22 corporations had been set up and did influence the economy. However, the corporations were merely advisory bodies dominated by fascists, so they acted in the fascists’ interests and left the workers’ interests aside.
g) Conclusion: Mussolini adopted many policies that gained him popularity and changed Italian society, such as the reconciliation with the church and his influence over the media and the arts. However, when he became more radical, adopted the anti-Semitic policy and tried to control education, he began losing popularity, which eventually led to his fall.
To what extent was Mussolini influential in international affairs in the 1930s? After the League of Nations was undermined by the Manchuria crisis, and when Hitler began expanding and broke the Treaty of Versailles by announcing his intention to build an army of 550,000 men using conscription, Mussolini decided to sign the Stresa Front in 1935 with France and Britain. This stated that the three countries would take action if Germany broke the Treaty of Versailles further. However, the agreement fell apart when Britain did not consult Italy or France before signing the Anglo-German Naval Agreement in 1935, which allowed Germany to expand its navy beyond what the Treaty of Versailles permitted. Mussolini also invaded Abyssinia, which Britain and France disapproved of. This shows that his diplomacy with these countries had failed.
Moreover, the invasion of Abyssinia in 1935 changed Mussolini’s foreign policy completely: his relationship with Britain and France was destroyed while his relationship with Germany improved. Although Mussolini thought France and Britain would not react to the invasion, the League of Nations imposed economic sanctions because Britain’s position was being undermined. However, Germany continued to trade with Italy and Mussolini ignored the sanctions, which strengthened their relationship and weakened the League.
In addition, Mussolini intervened in the Spanish Civil War, sending 70,000 troops to Spain to support General Franco. Although the intervention brought him little benefit, he sent them in order to weaken France, whose government resembled Spain’s, and to gain a naval base in the Balearic Islands to promote Italian power in the Mediterranean.
Furthermore, Mussolini’s relationship with Germany was strengthened when they signed the Rome-Berlin Axis, and Italy walked out of the League of Nations as Germany had done. It is also said that Mussolini adopted the anti-Semitic policy in order to get closer to Germany and to make Italy more radical. As Mussolini grew closer to Hitler he changed his foreign policy towards Austria, allowing Germany to increase its influence there. In 1938, after the newly appointed chancellor Seyss-Inquart invited Hitler to send troops, Hitler annexed Austria and Mussolini did not object. A further crisis broke out when Hitler wanted to invade Czechoslovakia, demanding that the Czech government allow the German-speaking area of Czechoslovakia to unite with Germany.
It seemed that Britain and France would side with Czechoslovakia, risking war, so Mussolini played the role of peacemaker and set up the Munich Conference in 1938. Britain and France’s appeasement of Hitler also encouraged Mussolini to pursue a more aggressive foreign policy of his own. He was aware of being the weaker partner in the Italian-German relationship, which encouraged him to use force to appear more influential. He started by invading Albania in 1939, then signed the Pact of Steel with Germany in 1939, which committed the two countries to support each other in case of war. This favoured Germany, which was likely to enter a war, while Italy would be helped to expand.
Day Zero and pathways to water security for regional towns
Day Zero approaches
Most of us will have heard of Day Zero by now. Day Zero marks the day when residential taps are turned off – when water is carted to local collection points. It happened in Cape Town, South Africa in 2017; the widespread water shortages caused social tension, and had environmental and economic impacts, with damage to the tourism and agricultural industries.
In Australia, Day Zeros have been reported for several regional and rural country towns. For instance, in Stanthorpe, Queensland, the town officially ran out of water in January 2020 and has had to start carting truckloads of water every day to meet residential demand.
Luckily, much needed recent rainfall in February 2020 has helped many communities replenish their water supplies. However, much like the Millennium Drought, this current drought highlights the need for improved water security. Water security remains inadequate for many rural and regional communities, with multiple towns at risk of reaching, or already at, Day Zero.
While the current drought may have broken in places like Sydney — where water restrictions are being eased — now is actually the ideal time to plan and prepare for the next drought, which we know will come. But how can that be done?
The impact of water insecurity
Terms like “water security” actually suggest the exact opposite: water insecurity. Water insecurity can result from acute impacts (events such as droughts), chronic impacts (such as a drying climate), water quality risks (for example, post-bushfire catchment runoff can make reservoirs and rivers undrinkable), or a combination of these factors.
Water insecurity has many negative impacts on human health, the environment and the economy – particularly in regional areas that rely on agricultural industries. The possibility of reaching Day Zero can never totally be eliminated. However, it can be made tolerable if the threat of water insecurity is more actively managed.
Setting objectives for water security
Instead of seeing Day Zero as a looming threat, decision-makers could approach water security as an objective that they strive to meet; where water insecurity is no longer an issue for the community.
Investment in response to water security risks can be approached from a cost-benefit perspective, or from a tolerable versus acceptable risk perspective. The classical cost-benefit approach compares the costs of a water security intervention (investment in infrastructure, for example) with the marginal economic, social and environmental benefits gained, with benefits also expressed in dollars. A town or city’s ‘residual water security risk’ is the level of risk (of running out of water) that remains, even after implementing water security measures.
However, it can be problematic for decision-makers to adopt either the ‘objective-setting’ or ‘cost-benefit’ approach for a variety of reasons. For instance, within a ‘cost-benefit’ approach it’s generally difficult to put a dollar value on the social or environment benefits. If benefits are intangible, a multi-criteria approach, which does not require every criterion to be expressed in dollar units, can be an option.
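As a concrete illustration of the multi-criteria idea, the sketch below scores some hypothetical water security options against weighted criteria without converting anything to dollars. The option names, criteria, scores and weights are all invented for illustration and are not drawn from any real appraisal.

```python
# A minimal weighted-scoring sketch of a multi-criteria comparison.
# All names, scores (1-5, higher is better) and weights are hypothetical.

options = {
    "new pipeline":        {"cost_effectiveness": 2, "social_benefit": 4, "environmental_benefit": 3},
    "water banking":       {"cost_effectiveness": 4, "social_benefit": 3, "environmental_benefit": 4},
    "potable reuse plant": {"cost_effectiveness": 3, "social_benefit": 4, "environmental_benefit": 4},
}
weights = {"cost_effectiveness": 0.4, "social_benefit": 0.3, "environmental_benefit": 0.3}

def weighted_score(scores: dict) -> float:
    # Sum of criterion score times criterion weight.
    return sum(weights[criterion] * score for criterion, score in scores.items())

# Rank the options from best to worst under these (made-up) weights.
for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point of such a scoring exercise is not the exact numbers, but that it makes the trade-offs between options explicit even when benefits cannot sensibly be priced.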
Alternatively, the ‘acceptable risk’ approach implies that there is some threshold beyond which risk is unacceptable, but this may differ between individuals and society. Nevertheless, the absence of a minimum risk also seems to be unacceptable. Notwithstanding the difficulty of establishing water security risk-based targets, metrics can still be developed.
How much is enough water security?
To enjoy water security, several dimensions need to be managed including the needs of human consumption and sanitation, environmental water needs as well as water needs of local economies. Internationally, water security indicators have therefore concentrated on the volume of water needed.
For example, the World Health Organisation (WHO) has defined water security in terms of access for human health and sanitation; a small classification sketch follows the list below:
- No access: Less than 5 litres, per person, per day. (No water security)
- Basic access: 20 litres, per person, per day. (Basic water security)
- Intermediate access: 50 litres, per person, per day. (Effective water security)
- Optimal access: more than 100 litres, per person, per day. (Optimal water security)
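These tiers amount to a simple threshold classification. The following minimal sketch (purely illustrative, not an official WHO tool) maps litres per person per day onto the tiers listed above; how the gaps between the listed values (for example, between 5 and 20 litres) are handled is an assumption of the sketch.

```python
# Illustrative only: map daily per-person water availability onto the WHO-style
# access tiers listed above. Boundary handling between the listed values is an
# assumption made for this sketch.

def classify_water_access(litres_per_person_per_day: float) -> str:
    lpd = litres_per_person_per_day
    if lpd < 5:
        return "No access (no water security)"
    elif lpd < 20:
        return "Below basic access"
    elif lpd < 50:
        return "Basic access (basic water security)"
    elif lpd < 100:
        return "Intermediate access (effective water security)"
    return "Optimal access (optimal water security)"

if __name__ == "__main__":
    for demand in (3, 20, 60, 150):
        print(f"{demand} L/person/day -> {classify_water_access(demand)}")
```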
Meeting basic water needs for food security and agriculture also poses challenges. In Australia, 50-70% of water use supports irrigation, in addition to the rainfall used for rain-fed agriculture and grazing. These metrics of basic water needs can provide a baseline against which recent trends and future projections can be understood, informing management decisions.
Investment pathways to water security
Investment decisions should be informed by measuring the reduction in water security risk. Currently, investments in water-related infrastructure generally do not indicate how much water security risk has (or could be) decreased.
For example, the recently completed Wentworth to Broken Hill Pipeline is a major piece of public infrastructure. It supplies Broken Hill with up to 37.4 ML of raw water per day, via a 270-kilometre pipeline from the River Murray near Wentworth to Broken Hill. Monitoring the effectiveness of this approximately $500 million investment (compared to other water security investment options, such as water banks or even direct potable reuse) in the context of water security metrics is seldom done explicitly.
There are several options for improved water security around Australia, including:
- Increased water recycling of wastewater
- Use of desalination technology
- Building surface water dams
- Use of aquifers and water banking
- Demand management and water restrictions
- Improved efficiency of systems
‘Pathways to water security’ refers to the sequence of investment decisions taken to reduce water security-related risks. The most effective investment options for any given community depend on both the context and the pathway taken; past investments can open up some alternatives and close off others.
There is also emerging evidence about pathways to water security in terms of institutional reform and infrastructure, as well as the interaction and sequence of these two pathways. For example, the operation of effective water banks via Managed Aquifer Recharge (infrastructure) requires well-defined property rights (institutional reform). These water property rights determine who gets access to water and how decisions are made. Property rights and water sharing across state boundaries establish the benefit and cost sharing needed to spur investments in infrastructure.
Pathways deliberately address questions of sequencing, lock-in effects (the order in which infrastructure is built and the option is foregone, as a result), and cumulative risks (the sum of all risks). So far, methodologies for risk-based appraisal of robust infrastructure investment pathways have been applied in a relatively small number of settings.
Case studies from Australia and abroad
Decision-makers in Australia and abroad are using some of these approaches and alternatives to improve water security. For example:
- Perth, Western Australia: Australia’s first full-scale Groundwater Replenishment Scheme is located in Perth, Western Australia. It started recharging recycled water to Perth’s deep aquifers in 2017.
- Orange, New South Wales: In 2019, a stormwater harvesting scheme and a pipeline provided a third of Orange’s water supply, adding up to more than 790 megalitres of water over six months.
- Beaufort West, South Africa: The community of Beaufort West blends 20 per cent reused water into its water supply from local dams.
- Arizona, USA: The Arizona Water Banking Authority stores water in underground aquifers, to earn long-term storage credits. These credits can be recovered (pumped) during a shortage to provide back-up water supplies (“firming”) for Arizona water users.
- Big Spring, USA: Severe drought in 2014 prompted Big Spring and Wichita Falls to recycle wastewater effluent for drinking water use.
Looking ahead: using ‘pathways to water security’ in Australia
Perhaps decision-makers are fearful of using approaches that seem complex or conceptually challenging. However, the assessment of risks and ‘pathways to water security under uncertainty’ represents only a small change to what is already done when making water security decisions. The existing tools of risk analysis and management are well-established and provide a methodological toolkit for appraising pathways to water security.
With Day Zero again in the public space, it’s arguable that there has never been greater attention on water security in Australia. There will be different water security pathways for different communities, but how those decisions are made requires transparent, robust and defensible decision making. A better understanding of the water security risks is the foundation for good decision making, policy reform and infrastructure investments to increase resilience.
When this current drought breaks, we can’t lose sight of the fact that another drought will inevitably come.
This is the moment we must start preparing for it. |
A Brief (and Basic) Overview of Chromosome 16 Disorders
Every cell in the body should contain 23 pairs of chromosomes, which carry our hereditary material. Therefore, there should be two copies of chromosome 16 in each cell in the body. Sometimes, however, a chromosomal aberration can occur. Disorders associated with chromosome 16 abnormalities include:
A: Numerical Abnormalities
Full Trisomy 16: a chromosomal disorder in which an individual has three copies of chromosome 16 instead of the usual two. Trisomy 16 is not compatible with life and is the most common chromosomal cause of miscarriages (causing over 100,000 miscarriages annually in the U.S. alone).
Mosaic Trisomy 16: an extremely rare chromosomal disorder in which an extra chromosome 16 is present in some, but not all, of the cells of the affected individual's body. The effects of the disorder vary greatly, but some of the more common characteristics include intrauterine growth retardation (IUGR) and congenital heart defects.
Mosaic Trisomy 16 Confined to the Placenta (CPM): a condition in which the chromosome 16 abnormality is believed to be present only in the placental tissues.
Uniparental Disomy of Chromosome 16: a condition in which the chromosomes appear normal but both copies have originated from just one of the two parents (this is most often found in association with mosaic trisomy 16).
B: Structural Abnormalities
Every chromosome has two main parts, the short arm, called "p" for "petite", and the long arm, called "q." Individuals can have deletions or duplications of one of these arms, instead of the whole chromosome.
16p- (sixteen p minus): an extremely rare chromosomal disorder in which some portion in the short (p) arm of chromosome 16 is missing (deleted).
16q- (sixteen q minus): an extremely rare chromosomal disorder in which some portion of the long (q) arm of chromosome 16 is missing (deleted).
16p+ (sixteen p plus): an extremely rare chromosomal disorder in which some portion of the short (p) arm of chromosome 16 is duplicated.
16q+ (sixteen q plus): an extremely rare chromosomal disorder in which some portion of the long (q) arm of chromosome 16 is duplicated.
Unbalanced Translocation: a rare chromosomal disorder in which the deleted portion of a chromosome is attached to another chromosome.
Inversion: a rare chromosomal disorder in which a small portion of the chromosome breaks off and then reinserts itself backwards.
There are many other combinations of deletions and/or duplications. This wide range of variation leads to a wide variety of outcomes, from no obvious problems to severe physical and mental handicaps. The more members we can add to the Foundation, the more information we can gather about common characteristics of chromosome 16 disorders. |
The human genome contains as many as 10 million genetic variants, the majority of which are innocuous, which distinguish us as individuals. But since genetic mutations are often implicated in disease, it’s imperative to identify which genetic variants are potentially harmful.
Researchers at Washington University School of Medicine in St. Louis have developed a rapid, inexpensive technique to create DNA fragments representing all possible variations in a gene. The ability to study each fragment could allow researchers to determine which genetic variations are disease-causing, and which are harmless.
Current methods of synthesizing DNA fragments are extremely costly and take up to a week to generate the product. The research was published in the journal Nature Methods.
“As a pediatric neurologist who does a lot of genetic studies of kids with developmental disabilities, I frequently will scan a patient’s whole genome for genetic variants,” said Dr. Christina Gurnett, the study’s senior author and an associate professor of neurology and of pediatrics. “Sometimes I’ll find a known variant that causes a particular disease, but more often than not I find genetic variants that no one’s ever seen before, and those results are very hard to interpret.”
To test the effects of individual genetic variations, scientists have traditionally replaced bases one-by-one. By translating the DNA into its resulting protein, researchers were able to assess whether the product behaved as it should.
To replace this time-consuming and laborious process, researchers have begun to use a method of creating hundreds of variations on a sequence and then testing all products in the set simultaneously. While more efficient, the high cost of this method has limited its use.
Dr. Gabriel Haller, a postdoctoral researcher working in Gurnett’s lab, found that he could create these sets of DNA sequences using common lab equipment and reagents. By copying a DNA sequence using a nonstandard base known as inosine, Haller was able to create sequences containing the unique base at a random site.
Each inosine was then replaced with one of the standard bases – adenine, thymine, cytosine or guanine – which resulted in a single gene mutation in each copy. As this technique is both fast and inexpensive, whole catalogues of genetic variants for any given gene could be easily generated for research purposes.
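To make the scale of such a catalogue concrete, the sketch below (my own illustration, not the authors' laboratory protocol) simply enumerates every possible single-base substitution in a toy DNA sequence; the actual method achieves an analogous set chemically, via inosine incorporation and replacement.

```python
# Enumerate all single-base substitutions in a DNA sequence.
# Purely illustrative of what a saturation variant catalogue contains.

BASES = "ACGT"

def single_base_variants(seq: str):
    """Yield (position, original_base, new_base, variant_sequence) tuples."""
    seq = seq.upper()
    for i, original in enumerate(seq):
        for new in BASES:
            if new != original:
                yield i, original, new, seq[:i] + new + seq[i + 1:]

if __name__ == "__main__":
    wild_type = "ATGGCA"  # toy sequence; a real gene would be far longer
    variants = list(single_base_variants(wild_type))
    print(f"{len(variants)} single-base variants for a {len(wild_type)}-base sequence")
    for pos, old, new, var in variants[:5]:
        print(f"position {pos}: {old}->{new}  {var}")
```

Even for this six-base toy sequence there are 18 variants; for a full-length gene the catalogue runs into the thousands, which is why an inexpensive way to generate and test them all at once matters.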
“Then, when clinicians find a variant that’s never been seen before in one of these genes associated with aortic aneurysm, they can go through this catalog and say, ‘Yes, this mutation does have a negative effect on that protein, so it’s likely harmful,’” said Gurnett. “It would help them decide what to tell the patient. This would be one piece of the big interpretation puzzle for genetic mutations.” |
Some religious groups oppose the concept of biological evolution, but others accept the idea of “evolutionary creation,” which posits that God created the universe and biological evolution is a natural process within that creation.
Scientists have ample evidence that humans evolved from within the process of general evolution. Nature is capable of constructing itself, including bringing forth human beings. And if so, are there any traces in human creativity that reflect how nature creates?
This is the question to be answered. Musical compositions will serve as an example to illustrate human creativity.
How Nature Constructs
Thanks to the energy released in the original explosion of the big bang, nature is capable of synthesis. Nature is capable of unifying parts into wholes. Surprisingly, wholes have properties that their parts do not have. The ancient Greek philosophers Plato, Aristotle and Plotinus already recognized that wholes are, quantitatively and qualitatively, more than the sum of their parts.
Of course, the whole cannot exist without its parts; but united, the whole has properties that are radically new. From atoms to molecules to life to consciousness, and from there to human self-consciousness, each synthetic event brought forth new wholes, with totally new properties. The results from sequential synthesis illustrate how nature creates.
The Nature of Musical Compositions
Notes are the atoms, the “matter” of music. As such, they are already complex. Their complexity emerges from the oscillations of waves, their amplitudes, frequency, and of course also from the timbre of the instrument on which notes are played. All these various parts are integrated into just one note.
Notes can be integrated into various intervals: seconds, fourths, fifths, and octaves. Each interval has distinct emergent properties.
Intervals sound totally different from their individual notes. Notes and intervals can become integrated into melodies, and from there can be further synthesized into musical phrases, symphonic movements, and so on. The point is that, just as nature generates novelty through synthesis, composers also generate higher-order musical structures through sequential synthesis. As a result, the architecture of the constructs of nature and of musical compositions is the same: both construct novelty by integrating parts that are themselves already hierarchies.
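The nesting described above can be pictured as a data structure in which each level integrates the level below into a new unit. The toy sketch below is my own illustration of that hierarchy (notes into intervals, notes and intervals into phrases, phrases into a piece); the class names and fields are invented for the example.

```python
# A toy model of hierarchical musical structure: wholes built from parts
# that are themselves already structured wholes.

from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Note:
    pitch: str        # e.g. "C4"
    duration: float   # in beats

@dataclass
class Interval:
    lower: Note
    upper: Note

@dataclass
class Phrase:
    elements: List[Union[Note, Interval]] = field(default_factory=list)

@dataclass
class Piece:
    phrases: List[Phrase] = field(default_factory=list)

# Each level integrates the level below into a new unit with its own identity.
melody = Phrase([Note("C4", 1), Note("E4", 1), Interval(Note("G4", 2), Note("B4", 2))])
piece = Piece([melody])
print(len(piece.phrases), "phrase(s);", len(piece.phrases[0].elements), "elements in the first phrase")
```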
Modes of Evolution in Nature and Music
Over time, the complexity of atoms may increase, for example, through synthesis in the nuclear furnaces of the stars. Most important, the resulting structures are always units, “simple” ones. Even as atoms combine to form molecules, they are again units that integrate a diversity of atoms.
Synthetic events combined molecules into the building blocks of life. Once life appeared, the Darwinian mechanism of variation and natural selection kicked in. Today we start understanding the genetic constructs that are at the base of organismic evolution.
The Evolution of Higher-Level Structures in Nature
For organismic evolution to happen, deep changes in organisms' genetic material and its arrangement need to occur. Alterations happen through mutations, duplications of genetic strings, their variation and then re-integration into a working genome. Crises also seem to be necessary, through which entire genomes may disintegrate into genetic parts; nature may take advantage of such liberated genetic elements by recombining old information into totally new genomic constructs. In addition, variation in timing, when a genetic machine is turned on or off, or for how long it will work, provides an additional powerful source of organismic change.
The Evolution of Higher-Level Structures in Music
Historically, the evolution of musical complexity started from the “simple” linear compositions of Gregorian chant. The duplication of such lines led to compositions in which both lines were first sung together, then at intervals, e.g., fifths and octaves. Such doubled constructs became duplicated again, some of them even sung simultaneously in different languages.
Also, instruments were added to play the multiple lines. The discovery that not all musical parts had to be performed simultaneously, but could be organized sequentially, was an important component of the wonderful complexity of emerging Renaissance music. That dissonant vertical musical constructs could be resolved into horizontal harmonious ones was another crucial addition to the dynamic of musical complexity.
In addition, timing and speed of playing a musical composition could help transform one style of music into a new one. My colleague Gerald Gabel had the idea to take a piece of music from the Renaissance composer William Byrd, play it on a harpsichord and double the tempo (a time mutation)! By doing this, Gerald created a missing link between the style of the Renaissance and the Baroque – to speak in evolutionary language.
In conclusion: As illustrated by musical compositions, the architecture that results from human creativity reflects the architecture of the constructs of nature.
- Their architecture is the same, but of course their “material” of construction is different.
- In nature and music, the elements of construction are hierarchies. They are wholes (the tip) that integrate parts that are, however, already hierarchies themselves.
- At any level of either natural or musical constructions, all the elements are “simple” because they are ones, yet their “simplicity” is always complex.
Human creativity reflects the creativity of nature. Why? Because both the structures of nature and musical compositions emerge from the same simplex architecture.
About Rudolf Brun
Rudolf Brun (www.churchandscience.squarespace.com) is the author of Science, Art, and Christianity: Sketching a Theology of Nature for Our Time. He received a Ph. D. from the University of Basel, Switzerland, and has been a professor in biology at the University of Geneva, Indiana State University and Texas Christian University. His interdisciplinary work included designing and co-teaching the course “Religion and Science” and presenting at numerous national and international conventions on “Science and Christianity.” He has been published in interdisciplinary journals, a collection of which is available in Creation and Cosmology: Attempt at Sketching a Modern Christian Theology of Nature.
How To Teach A Great Unit on The Geography of Interconnections
This resource on teaching a unit on Geographies of Interconnections is designed to provide teachers with engaging strategies and resources for meeting the requirements of the Year 9 Australian Geography Curriculum. It covers how to teach perceptions of place and the influences on those perceptions, the significance of digital and physical connections between places, and the impact of production, trade, consumption and travel on connections between regions. The unit outline also looks at how to teach geographic skills such as the collection and analysis of data and the visual and written representation and presentation of information.
UPSC IAS Mains: Urbanization Their Problems and Their Remedies
(GS Paper- 1 Indian Heritage and Culture, History and Geography of the World and Society)
Urbanization is a pervasive and recent phenomenon. In the present global environment, all nations face environmental, social, transportation and economic challenges in their cities. These issues occur most commonly in developing countries because of the gap in development between cities and villages. Most countries focus on the development of cities rather than rural areas; consequently, urban areas are equipped with infrastructure and public facilities and provide employment opportunities that rural areas lack. Inhabitants are therefore drawn to migrate to cities to access better facilities and improve their lifestyles, and these movements in turn raise numerous urbanization issues. Cities play a major role in driving economic growth and prosperity, and the sustainable development of cities depends largely on their physical, social and institutional infrastructure. An urban area is a spatial concentration of people who work in non-agricultural activities; the essential characteristic is that urban means non-agricultural. Urban is also a fairly multifaceted concept: criteria used to define it can include population size, space, density and economic organization, although typically it is defined by some baseline size, such as 20,000 people.
Concept of urbanization: The term urbanization is explained by Nsiah-Gyabaah as the change from a rural to an urban society, involving an increase in the number of people living in urban regions over a particular period. Likewise, Gooden described urbanization as the migration of people in large numbers from rural to urban areas, a process driven by the concentration of resources and facilities in towns and cities. Other theorists, such as Reynolds (1989), characterized urbanization as the growth of population and cities so that a higher proportion of the population lives in urban areas. Urbanization is normally associated with innovation, industrialization and the sociological process of rationalization. The process began during the industrial revolution, when the workforce moved towards manufacturing hubs in cities to take factory jobs as agricultural jobs became less common. Theoretical studies have shown that urbanization results from social, economic and political developments that lead to urban concentration, the expansion of big cities, changes in land use and the transformation from rural to urban patterns of organization and governance. Urbanization is thus a process in which an increasing proportion of society lives in cities and their suburbs; historically it has been strongly linked with industrialization, the process of widely using inanimate sources of energy to improve human productivity.
The global urban population is growing rapidly, from 17% in 1951 to 20% in 2001, and is expected to reach 41% by 2020. Developing countries are observed to urbanize faster than industrialized nations, and consequently face more urbanization issues. Studies document that cities and towns operate as engines of growth, often driving much of a society's cultural, intellectual, educational and technological accomplishment and modernization. However, contemporary low-density approaches to urban development result in greater consumption of energy, resources, transport and land, raising greenhouse gas emissions and air and noise pollution to levels that often exceed legal or recommended limits for human protection. Overall consumption, energy use, water use and waste generation all rise with the growing number of urban households.
Urban environmental management is also a major task of local governments, which play a central role in providing services and, together with civil society, in promoting citizens' health and their right to a hygienic, liveable environment. The private sector can increase the efficiency and effectiveness of service delivery. Cities are now taking on roles that extend far beyond the conventional provision of infrastructure and services, and a conceptual shift can be perceived (European Environment Agency, 1996). The most remarkable immediate change accompanying urbanization is the rapid change in the character of local livelihoods, as agriculture and more traditional local services and small-scale industry give way to modern industry and urban commerce, with the city drawing on the resources of an ever-widening area for its own sustenance and for goods to be traded or processed into manufactures.
When referring to the pre-industrial city, Wheatley (1971) described urbanism as “that particular set of functionally integrated institutions which were first devised some 5,000 years ago to mediate the transformation of relatively egalitarian, ascriptive, kin-structured groups into socially stratified, politically organized, territorially based societies”. The stress on institutional change relates the growth of cities to a major socio-political reorganization of society, which he considers a main constituent in the development of society. Correspondingly, Childe (1951) offers a list of ten characteristics of an urban civilization, which may be separated into five primary characteristics referring to fundamental changes in the organization of society and five secondary features indicative of the presence of the primary factors.
Major causes of urbanization: The main causes of urbanization are as follows:
- Industrial revolution: Industrial employment draws people from rural to urban areas. In urban areas, people work in the modern sector, in occupations that assist national economic development. This reflects the shift from an old agricultural economy to a new non-agricultural economy, a trend that builds a new modern society (Gugler 1997).
- Emergence of large manufacturing centres.
- Job opportunities: There are ample job opportunities in mega cities, so people from villages and smaller towns frequently migrate to these areas.
- Availability of transportation: Because transport is readily available, people prefer to live in big cities.
- Migration: Migration is the main cause of the rapid growth of mega-cities. It has been going on for centuries and is a normal phenomenon. When considering urbanization, rural-urban, urban-rural and rural-rural migrations are all relevant, while urban-urban migration means that people move from one city to another. People may move to the city because they are pushed by poverty in their rural community, or they may be pulled by the attractions of city life; a combination of these push and pull factors drives people to migrate to cities.
- Infrastructure facilities in urban areas: Infrastructure plays a vital role in the process of urbanization in developing countries. As agriculture becomes more productive, cities grow by absorbing the workforce from rural areas. Industry and services expand and generate higher value-added jobs, leading to economic growth. The geographic concentration of productive activities in cities creates agglomeration economies, which further raise productivity and growth, and this in turn raises incomes and the demand for agricultural products in cities.
- Growth of private sector.
Factors leading to urbanization: Several aspects lead to urbanization. According to Gooden, they can be grouped into three categories: economic opportunities, proper infrastructure and utilities, and the availability of public facilities.
Economic opportunities: There is a general perception that the standard of living in urban areas is superior to that in villages. People believe that more job opportunities are offered in the city than in rural areas, and that incomes will also be higher.
Proper infrastructure and utilities: In today’s economy-driven society, the majority of nations focus on developing major cities as centres of government and business. These cities are therefore equipped with better infrastructure and utilities such as roads and transport, water and electricity. Communication and internet coverage are also better in the cities, which is believed to be another pull factor for migration.
Availability of public facilities: Metropolitan cities also offer better public facilities that are not available in rural areas. Since a variety of public facilities such as health care and education are provided in the cities, people have more choice between public and private services. Leisure areas, postal services, police stations and other amenities are also provided to meet the needs of the urban community, and a greater variety of entertainment, such as restaurants, cinemas and theme parks, attracts more people to live in cities.
Global perspective: The progression of urbanization and the nature of its problems are very different in more developed and less developed countries. While in more developed countries urbanization and city growth were necessary conditions for industrialization and modernization, in less developed countries they have become a risk to better living because of the uncontrolled growth of cities, mainly of a few super-cities. The rapid population growth in urban areas, driven by migration from rural areas and from small cities to large ones, is creating problems such as overcrowding, poor housing, crowded transport, lack of basic services, ill health, low educational status and high unemployment. Such problems may intensify in the less developed countries, so studies of urbanization patterns are needed to monitor the process and lessen its unfavourable consequences. India, the second most populous country in the world, has reached a point where its urban problems have become serious.
Urbanization issues and problems: Some scholars believe that the process of urbanization brings numerous benefits for economic growth, the expansion of business activities, social and cultural integration, efficient services and resource utilization. However, urbanization also gives rise to a number of problems. These include:
Rapid rate of urbanization: The fast rate of urbanization, increasing every year, requires the development of new areas for housing, social amenities, commercial and other urban land uses. However, the lack of clear urban limits has led to urban sprawl encroaching on environmentally sensitive areas, major agricultural areas and areas not suitable for development (TCPD, 2006). In addition, the high demand for land in strategic areas has led to land-use conflicts. These situations produce various urbanization issues such as environmental pollution, traffic congestion, depletion of green areas and degradation of the quality of urban living.
Problems due to rapid rate of urbanization
Degradation of environmental quality: Urbanization degrades the environment, especially the quality of water and air and the level of noise. With the influx of more people into cities, there is a great demand for facilities such as housing. In some unlawful factories and houses with poor infrastructure, waste is channelled directly into the nearest river or water source, polluting the water; domestic waste, industrial effluents and other wastes dumped directly into rivers degrade water quality further. Another after-effect of rapid urbanization is air pollution, which has increased due to emissions from motor vehicles, industrial development and the use of fuels that are not environmentally friendly. Noise pollution produced by various human activities also degrades the environment and ultimately affects human health. Population growth has generated a very large quantity of solid waste, and there is pressure to provide waste disposal sites in urban areas.
Inefficient transportation systems: Urbanization has created severe transport problems. As people move into metropolitan cities, the number of vehicles on the road increases every year. Although various types of public transport are provided in the cities, people still prefer to drive private vehicles because public transport is ineffective: the facilities are provided without integrating the different modes of transport, so it is difficult for users to change between them. Since public transport is not reliable, people usually travel in private vehicles, which leads to severe congestion in the cities. When a traffic jam occurs, public transport, especially buses and taxis, is trapped together with private vehicles and cannot move, creating great inconvenience.
Decline in quality of living for urban dwellers: Urbanization is a major concern because it can reduce the quality of living for urban inhabitants. As a metropolis develops, land values increase and housing provision focuses on the needs of high-income groups, so there is a shortage of housing for middle- and low-income people. The supply of housing for the urban poor remains inadequate because the cost of these houses is too high for low- and middle-income groups to afford. The lack of housing for low-income groups has led to the persistence of unlawful squatter settlements in the city. These settlements lack proper infrastructure, which creates many problems for the urban environment and leads to social problems concerning children's education, crime, drugs, delinquency and others. Besides the housing problem for low-income groups, urbanization has increased demand for infrastructure and utilities beyond what the existing facilities can supply. The maintenance of drains and the collection of debris are inefficient, which can cause other serious problems such as flash floods and poor public health; flash floods recur because the drainage system cannot contain the surface-water run-off that has greatly increased with the higher intensity of urban activities.
Unsuccessful urban governance: Urban authorities face multifaceted challenges in managing a city. The fast pace of urbanization is a major challenge that requires every party to be more focused in carrying out its responsibilities in urban development. However, the involvement of several agencies and departments in urban management makes it complicated to coordinate actions, which reduces their effectiveness. The local authority must also reconcile the different goals and interests of community groups and find solutions to various social issues.
Cities occupy about two per cent of the Earth's land surface, yet their inhabitants use over three-quarters of the world's resources and release similar proportions of waste. Urban wastes have local impacts, but they are also issues at the global scale: city populations, as the major users of energy, cause both regional and worldwide pollution, including air pollution. These factors have adverse impacts on people's health, air quality and the biosphere.
Urbanization issues in the Indian context: India is known for its large rural population, with about 73 per cent of its people living in rural villages. The growth of the urban population, and the speed of urbanization, has generally been slow compared with most other Asian countries. When evaluating the urbanization process in the Indian context, the major problems are urban sprawl, overcrowding, housing, unemployment, slums and squatter settlements, transport, water, sewerage, trash disposal, urban crime and urban pollution. While urbanization has been a mechanism of economic, social and political progress, it can pose serious socio-economic problems. The sheer magnitude of the urban population, the random and unplanned growth of urban areas and the lack of infrastructure are major issues in India arising from urbanization. The fast growth of the urban population, both natural and through migration, has put immense pressure on public utilities such as housing, sanitation, transport, water, electricity, health and education.
Poverty, joblessness and under-employment among rural migrants, as well as beggary, theft, dacoity, burglary and other social evils, run rampant, and urban sprawl is encroaching on valuable agricultural land. According to statistical reports, in 2001 the urban population of India was more than 285 million, and it is estimated that by 2030 more than 50 per cent of India's population will live in urban areas. Several of these problems need to be emphasized.
Urban sprawl, the physical expansion of rapidly growing cities in both population and geographical area, is a major cause of urban trouble. In most cities the financial resources are unable to cope with the problems created by their expansion. Huge migration from rural areas and small towns into large cities has occurred almost continuously, increasing the size of the cities. Historical records indicate that the first large flow of migration from rural to urban areas came during the “depression” of the late 1930s, when people moved in search of employment. Later, during the decade 1941-51, another million people migrated to urban areas in response to wartime industrialization and the partition of the country in 1947. During 1991-2001, more than 20 million people migrated to urban areas. Big cities attract the majority of migrants with the promise of employment opportunities and a modern lifestyle. Such hyper-urbanization has produced city sizes that challenge the imagination; Delhi, Mumbai, Kolkata, Chennai and Bangalore are examples of urban sprawl fed by huge migration from nearby places.
Overcrowding is a situation in which a large number of people live in too little space; it is a consistent result of over-population in urban areas. Cities naturally grow in size as people move in en masse from less developed areas, but these people end up squeezed into small spaces, resulting in overcrowding.
Housing: This is another acute problem caused by urbanization in India. Overcrowding leads to a chronic shortage of houses in urban areas. The problem is especially severe in those urban areas with a large influx of jobless or under-employed migrants who cannot find a place to live when they arrive in cities and towns from nearby areas. The major factors behind the housing problem are the lack of building materials and financial resources, the insufficient expansion of public utilities into suburban areas, the poverty and unemployment of urban migrants, strong caste and family ties, and the lack of adequate transport to the suburban areas where most of the available land for new construction is found.
Unemployment: The problem of joblessness is as serious as the problem of housing. Urban unemployment in India is estimated at 15 to 25 per cent of the labour force, and the percentage is even higher among educated people. It is estimated that about half of all educated urban unemployed youth live in the four metropolitan cities of Delhi, Mumbai, Kolkata and Chennai. In addition, although urban incomes are higher than rural incomes, they remain low relative to the high cost of living in urban areas. A major cause of urban unemployment is the huge relocation of people from rural to urban areas.
Slums and squatter settlements: A natural consequence of the unchecked, unplanned and haphazard growth of urban areas is the growth and spread of slums and unlawful squatter settlements, which are a prominent feature of the environmental structure of Indian cities, particularly the larger urban centres. Rapid urbanization combined with industrialization has resulted in the enlargement of slums. The proliferation of slums occurs due to many factors, such as the shortage of developed land for housing, land prices beyond the reach of the urban poor, and the large influx of rural migrants to the cities in search of jobs.
Transport: Urbanization poses a major challenge to the transport system. Almost all cities and towns of India suffer from severe traffic congestion. The transport problem grows and becomes more complex as a town expands, because a larger town performs more varied and complex functions and more people must travel to work or to shop.
Water: Water is one of the most essential elements of nature for sustaining life and has been central to urban civilisation from its beginning. However, the supply of water has fallen short of demand as cities have grown in size and number.
Sewerage problems: Urban centres in India are almost universally beset with inadequate sewage facilities. The resource crisis faced by the municipalities and the unauthorized growth of the cities are the two major causes of this pitiable state of affairs. Most cities do not have proper arrangements for treating sewage, which is drained into a nearby river or into the sea, as in Mumbai, Kolkata and Chennai, polluting the water bodies.
Trash disposal: As Indian cities grow in number and size, the problem of trash disposal has reached alarming proportions. The enormous quantity of garbage produced by Indian cities poses a serious health problem. Most cities do not have proper arrangements for garbage disposal, and the existing landfills are full to the brim. These landfills are breeding grounds for disease, with countless toxins leaking into their surroundings. Waste putrefies in the open, attracting disease-carrying flies and rats, while a filthy, poisonous liquid called leachate leaks out from below and contaminates the groundwater. People who live near decomposing garbage and raw sewage fall victim to diseases such as dysentery, malaria, plague, jaundice, diarrhoea and typhoid.
Health problems due to urbanization: Factors affecting health in slums include economic conditions, social conditions, the living environment, access to and use of public health-care services, hidden or unlisted slums, and rapid mobility. Environmental problems cause many further problems: poor air quality can produce asthma and allergies or contribute to physical inactivity; an impure water supply can spread infectious waterborne and foodborne diseases; climate extremes can cause deaths from severe heat or cold; noise can cause sleep disturbance and hence poor performance at work and school; lead poisoning leads to developmental and behavioural problems; and second-hand smoke and exposure to carcinogens can cause cancer. In general, poor environmental quality contributes to 25-33% of global ill health. Physical, mental and social health are all affected by living conditions, with impacts ranging from lead exposure, noise, asbestos, mould growth and crowding to respiratory disease, the spread of infectious diseases, accidents and mental illness. The health impacts of inadequate housing are an intricate issue involving a variety of exposures (physical, chemical, biological, building and social factors) and various health outcomes such as asthma and allergies, respiratory diseases, cardiovascular effects, injuries, poisoning and mental illness. Overcrowding, lack of resources, poverty, unemployment and the lack of education and social services can also lead to numerous social problems, for example crime, violence, drug use, high school drop-out rates and mental health problems.
Urban crimes: In the developed cities of India, people mix with many different types of individuals with whom they have little in common. The problem of crime increases with the increase in urbanization. The rising trend in urban crime tends to disturb the peace and tranquillity of the cities and makes them insecure to live in, especially for women. The problem of urban crime is becoming more complicated because criminals often receive shelter from politicians, bureaucrats and leaders of urban society. Dutt and Venugopal (1983) stated that violent urban crimes such as rape, murder, kidnapping, dacoity and robbery are more prominent in the north-central parts of the nation, and even economic crimes such as theft, cheating and breach of trust are concentrated in the north-central region. Poverty-related crimes are prevalent in the cities of Patna, Darbhanga, Gaya and Munger, which may be due to the poverty existing in this area.
Problem of urban pollution: Rising urbanization has led to the disproportionate growth of industries and transport systems, which are mainly responsible for contaminating the environment, particularly the urban environment. Urban pollution is essentially the accumulation of pollutants created by cities, and it affects city dwellers directly; it includes air, water and ground pollution, the entire environment. Air pollution has dangerous consequences. Cities are the source of several dangerous gases: vehicles such as passenger cars, lorries and buses generate carbon dioxide (CO2), carbon monoxide (CO), sulphur dioxide (SO2), nitrogen oxides (NOx), benzene and ozone, in addition to the fine particles released by diesel engines, all of which pose a serious threat to human health. Heating installations burning fossil fuels also contaminate the air of urban centres, but in many urban agglomerations the main source of worsening air quality is industrial facilities, which emit veritable poisons into the air that are then inhaled by nearby residents. Water is another source of pollution in urban areas. Since earlier times, cities have attracted millions of rural residents, and each of these individuals needs water to live and to meet other basic needs. Cities under continuous development must increase their water resources and their water-treatment capacities; in many countries this has created nearly insoluble problems, and millions of human beings are not assured daily access to potable water. As for wastewater, the lack of effective collection and treatment facilities means it is often simply dumped back into nature, frequently into the ocean, creating severe and long-lasting pollution problems.
Remedy to fix issues of urbanization in India
India has a rapidly increasing population. According to estimates from McKinsey Global Institute research, India's cities could produce 70 per cent of net new jobs by 2030, generate around 70 per cent of Indian GDP and drive a near fourfold increase in per capita incomes across the country. If India upgrades its urban operating model, it has the capacity to reap a demographic dividend from the increase of around 250 million expected in its working-age population over the next decade.
India's current Prime Minister, Mr. Narendra Modi, has also come forward to resolve the issues related to urbanization. To manage city systems and meet the great demands of inhabitants arising from rapid urbanization, specialists have stated that government must focus on two critical areas: solid waste management and wastewater treatment. The Gujarat government, for its part, took up 50 towns in the state and launched initiatives such as 'Clean City, Green City' in partnership to implement solid waste management and wastewater treatment. In order to decrease discrimination, Mr. Modi stated that there is a need to concentrate on comprehensive growth, to identify the most backward areas in cities and towns and to provide basic amenities there. There is an urgent need to develop social mechanisms that will help reduce inequality and ensure that basics such as health, sanitation and education reach those who have been deprived of them. Mr. Modi has recognized that most urban tasks are technical while the employees who carry them out are often at a clerical level, so there must be a focus on opening universities for urban planning, urban infrastructure and urban development to help young people learn how to meet the demands of urbanization. To lessen urban crime, Mr. Modi stresses that police staff in urban areas need specific training to manage the demands of the law-and-order situation.
Possible remedies for urbanization issues and problems at the global level:
The most effective way to resolve the issues of urbanization is to make the economy of villages and small towns fully viable. These economies can be revitalized if the government undertakes large rural development programmes, and surplus manpower should be absorbed in the villages so that it does not need to migrate to urban areas. Traffic congestion in urban regions needs to be controlled, people must be encouraged to use public transport, and India must improve its traffic-control systems to avoid accidents. Resilient clean-up campaigns should be implemented. The government should make policies to construct low-cost multi-storeyed flats to accommodate slum dwellers, provide funds to encourage entrepreneurship, and also find solutions for pollution in the nation. WHO reports state that the Healthy Cities initiative aims to improve the physical, mental, environmental and social welfare of people who live and work in urban centres. People from different backgrounds, from community members to government representatives, were organized and encouraged to come together and work together to deal with the problems that emerge in urban environments, sharing strategies, success stories and resources to tackle the concerns of the local society. WHO reports indicate that "A healthy city is one that is continually creating and improving the physical and social environments and expanding the community resources that enable people to mutually support each other in performing all the functions of life and in developing to their maximum potential."
To summarize, urbanization is the substantial expansion of urban areas due to rural migration, and it is strongly related to modernization, industrialization, and the sociological process of rationalization. Urbanization commonly occurs in developing countries because governments are keen to achieve developed-city status. As a result, almost every area in the city is developed and, in the worst case, even green areas are turned into industrial or business areas. This illustrates that rapid urbanization has many negative implications, especially for social and environmental conditions. While the process of urbanization occurs at a global scale, it is more visible in developing countries. This growth has led to concerns about the sustainability of these urban centres. Explosive growth in the world population and the migration of people to urban centres are causing major concern about the quality of life in these centres and the life-supporting capacity of the planet, both ecologically and socially.
The government should not be keen to develop a city without considering the impacts on social and environmental conditions. Instead, the government should modify the urban development process in order to achieve a developed city while making efforts to lessen the problems that might arise. In order to overcome urbanization issues and problems, Khosh-Chashm (1995) recommended that society work closely with the authorities to assist in modernizing life in urban areas. The changeover from a rural to an urban economy is very rapid in historical terms for most economic systems. The task of fulfilling all the demands for jobs, shelter, water, roads, transport and other urban infrastructure is overwhelming. Presently, India already has numerous mega cities. Many researchers believe that urbanization is good for the financial growth of a country, but careful planning is required to develop cities and offer basic amenities for healthy living.
May 31, 2013
Water-Rock Reaction Could Provide Energy For Martian Microbes
Brett Smith for redOrbit.com - Your Universe Online
Previous research has shown how rock-water reactions can produce hydrogen when temperatures are far too hot for living things to survive, such as near hydrothermal vent systems on the ocean floor. However, a new study in Nature Geoscience reports that the same hydrogen-producing reaction can also occur at more hospitable temperatures. "Water-rock reactions that produce hydrogen gas are thought to have been one of the earliest sources of energy for life on Earth," said Lisa Mayhew, who worked on the study as a doctoral student at the University of Colorado, Boulder.
"However, we know very little about the possibility that hydrogen will be produced from these reactions when the temperatures are low enough that life can survive,” she added. “If these reactions could make enough hydrogen at these low temperatures, then microorganisms might be able to live in the rocks where this reaction occurs, which could potentially be a huge subsurface microbial habitat for hydrogen-utilizing life."
When water infiltrates iron-rich igneous rocks, a few unstable atoms of iron can be released into the water. These unstable atoms, known as reduced iron atoms, can split water molecules to produce hydrogen gas and new minerals containing a more stable, oxidized form of iron at temperatures above 392 degrees Fahrenheit.
To see if these reactions are possible, the researchers submerged rocks in oxygen-free water at temperatures between 122 and 212 degrees Fahrenheit. They were able to detect some evidence of a hydrogen-creating water-rock reaction — possibly enough hydrogen to support life.
Next, the researchers accelerated electrons in a small storage ring to create "synchrotron radiation" that would allow them to establish the type and location of iron in the rocks on a microscopic scale.
Instead of seeing the reduced iron in the rocks converted to the more stable oxidized state, as it is at higher temperatures, the scientists discovered newly formed oxidized iron on minerals with a cubic structure called 'spinels' that are highly conductive.
The researchers theorized that the conductive spinels were facilitating the swap of electrons between reduced iron and water, a necessity for the iron to divide the water molecules and create hydrogen gas.
"After observing the formation of oxidized iron on spinels, we realized there was a strong correlation between the amount of hydrogen produced and the volume percent of spinel phases in the reaction materials," Mayhew said. "Generally, the more spinels, the more hydrogen."
The researchers said hydrogen gases produced in these rocks would potentially be able to feed microbial life in a large volume of rock on Earth. They added that low temperature reactions could also be occurring in the same types of rocks that also are prevalent on Mars. Because the minerals that form as a result of these reactions have been found on both Earth and Mars, the new study´s findings may have implications for investigating potential Martian microbial habitats.
NASA announced recently that its Curiosity rover spotted preliminary evidence of an ancient streambed on Mars. The study's results suggest that hydrogen-dependent life could have existed where the Martian streambed was in contact with iron-rich igneous rocks.
Ion propulsion thrusters for future Mars and Mercury Missions, Military Satellites, Space planes and ASAT role
An ion thruster is a form of electric propulsion used for spacecraft. It creates thrust by accelerating ions with electricity. As the ionised particles escape from the spacecraft, they generate a force acting in the opposite direction. Power for ion thrusters usually comes from solar panels, but at sufficiently large distances from the sun, nuclear power is used.
Generally, an ion thruster has several advantages over a chemical-powered rocket. An ion thruster can drive a spacecraft to speeds of up to 40 kilometers per second, whereas its chemical counterpart can only manage about 5 kilometers per second. Secondly, an ion thruster is roughly ten times more fuel efficient, which is ideal for space travel. Chemical rockets need to bring their fuel supply for the whole journey, and that load means more mass and additional fuel requirements for take-off.
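To make the fuel-efficiency point concrete, here is a minimal Python sketch of the standard Tsiolkovsky rocket equation. The exhaust velocities and mission delta-v below are assumed, illustrative values, not figures taken from this article.

```python
import math

def propellant_fraction(delta_v_m_s: float, exhaust_velocity_m_s: float) -> float:
    """Fraction of initial mass that must be propellant: 1 - exp(-dv/ve)."""
    return 1.0 - math.exp(-delta_v_m_s / exhaust_velocity_m_s)

delta_v = 5_000.0        # m/s, assumed mission delta-v
chemical_ve = 4_400.0    # m/s, typical bipropellant exhaust velocity (assumption)
ion_ve = 30_000.0        # m/s, mid-range ion thruster exhaust velocity (assumption)

print(f"Chemical: {propellant_fraction(delta_v, chemical_ve):.0%} of launch mass is propellant")
print(f"Ion:      {propellant_fraction(delta_v, ion_ve):.0%} of launch mass is propellant")
# With these assumed values: roughly 68% for the chemical rocket versus about 15% for the ion thruster.
```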
Ion thrusters are being designed for a wide variety of missions—from keeping communications satellites in the proper position (station-keeping) to propelling spacecraft throughout our solar system. “Ion propulsion is even considered to be mission enabling for some cases where sufficient chemical propellant cannot be carried on the spacecraft to accomplish the desired mission,” says NASA. The technology could be used to power a return trip to Mars without refuelling, and use recycled space junk for the fuel. Ion thrusters will be used in the European Space Agency’s (ESA) mission to Mercury. The BepiColombo will launch in 2017, fly by Venus in 2019 and 2020, and be captured by Mercury’s gravity in 2024.
EPS is expected to drive half of all new spacecraft by 2020. For space-dependent sectors across the globe, the economic benefits of EP systems are said to be immense. Government-owned agencies and private space players are currently said to be scrambling to make space missions 30 per cent cheaper than now – by lowering the per-kg cost of lifting payloads to specific distances.
Many countries, led by the US, are developing ion thrusters. University of Michigan researchers have developed an ion thruster that has the potential to power manned missions to Mars. Dubbed the X3, the ion thruster has already surpassed current thrusters in its category in terms of power output, thrust and operating current. China has finished building the world's most powerful ion thruster and will soon use it to improve the mobility and lifespan of its space assets, according to a state media report. India launched the 2,195-kg GSAT-9, or the South Asia Satellite, on May 5 carrying an electric propulsion or EP system, the first on an Indian spacecraft.
Dr Paddy Neumann of Neumann Space and two professors have developed an ion thruster that is heading to the International Space Station (ISS) for a year-long experiment that ultimately could revolutionise space travel. University of Sydney doctoral candidate in Physics, Paddy Neumann, has developed a “new kind of ion space drive” that outperforms NASA’s in fuel efficiency and that can use a variety of metals, even those found in space junk, according to student newspaper Honi Soit.
Ion Propulsion vs Chemical Propulsion
An ion thruster typically has an input power need of 1–7 kW, exhaust velocity of 20–50 km/s, thrust of 25–250 millinewtons and efficiency of 65–80% (a fuller explanation of the working principle is given in the 'How Does an Ion Thruster Work?' section below).
These thrusters have high specific impulses – the ratio of thrust to the rate of propellant consumption – so they require significantly less propellant for a given mission than would be needed with chemical propulsion, says NASA. They can be more than 10 times as fuel efficient as other rocket engines. Another attraction of this kind of thruster is that it does not need the high temperatures required by forms of chemical propulsion.
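The quoted performance ranges can be tied together with a rough power-to-thrust relation. The sketch below picks mid-range values from the figures above and uses the textbook approximation that jet power is half of thrust times exhaust velocity; it is illustrative, not a specification.

```python
G0 = 9.81  # m/s^2, standard gravity, converts exhaust velocity to specific impulse

def thrust_newtons(input_power_w: float, efficiency: float, exhaust_velocity_m_s: float) -> float:
    """Thrust ~= 2 * efficiency * power / exhaust velocity for an electrostatic thruster."""
    return 2.0 * efficiency * input_power_w / exhaust_velocity_m_s

power = 4_500.0   # W, mid-range of the 1-7 kW figure above
eta = 0.70        # mid-range of the 65-80% efficiency figure above
v_e = 30_000.0    # m/s, mid-range of the 20-50 km/s figure above

t = thrust_newtons(power, eta, v_e)
print(f"Thrust = {t * 1000:.0f} mN, specific impulse = {v_e / G0:.0f} s")
# About 210 mN and 3,060 s, consistent with the 25-250 mN range quoted above.
```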
This kind of electric propulsion system is also lighter in weight, meaning that future space trips could be more feasible. A xenon-based EPS can be five to six times more efficient than chemical-based propulsion on spacecraft and has many uses, according to Dr Annadurai, whose centre assembles all Indian spacecraft. A 3,500-kg EPS-based satellite, for example, can do the work of a conventional spacecraft weighing 5,000 kg, but costs far less.
The advantages include:
- Highest specific impulse, offering substantial mass savings (>3,000 s)
- High performance at low complexity
- Reduced power processing unit mass
- Narrow beam divergence
- Robust design concept with a large domain of operational stability
- Large throttle range, adaptable to available electric power
- Excellent thrust stability and fast thrust response
- Highest growth potential with increasing electric power in the near and medium-term future
However, ion thrust engines create small thrust levels (the thrust of Deep Space 1 was approximately equal to the weight of one sheet of paper) compared to conventional chemical rockets. They are practical only in the vacuum of space and cannot take vehicles up through the atmosphere, because ion engines do not work in the presence of ions outside the engine. Besides, the engine's minuscule thrust would be overwhelmed once air resistance comes into play.
Michael Patterson, senior technologist for NASA’s In-Space Propulsion Technologies Program compared ion and chemical propulsion with “Tortoise and the Hare”. “The hare is a chemical propulsion system and a mission where you might fire the main engine for 30 minutes or an hour and then for most of the mission you coast.” “With electric propulsion, it’s like the tortoise, in that you go very slow in the initial spacecraft velocity but you continuously thrust over a very long duration — many thousands of hours — and then the spacecraft ends up picking up a very large delta to velocity.”
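A short sketch of the 'tortoise' arithmetic, using assumed, hypothetical thrust, mass and burn time rather than any real mission's numbers:

```python
# Constant-mass approximation; in reality the spacecraft gets lighter as propellant is used.
thrust_n = 0.1              # N (about 100 mN), assumed
spacecraft_mass_kg = 1000   # kg, assumed
thrust_hours = 10_000       # hours of continuous thrusting, assumed

acceleration = thrust_n / spacecraft_mass_kg             # m/s^2, about 1e-4 m/s^2
delta_v = acceleration * thrust_hours * 3600             # m/s
print(f"Accumulated delta-v = {delta_v / 1000:.1f} km/s")  # about 3.6 km/s
```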
Military applications of Ion Thrusters
Ion propulsion can also be used in commercial as well as military satellites. It can provide a much more cost-effective way of maneuvering satellites for orbit-keeping, military surveillance and assisting anti-satellite operations. "A more efficient on-orbit thruster capability is huge. Less fuel burn lowers the cost to get up there, plus it enhances spacecraft operational flexibility, survivability and longevity," says Major General Tom Masiello, AFRL commander.
Small microsatellites using ion thrusters could maneuver close to adversary satellites and perform anti-satellite missions.
The US Air Force's most public secret, the X-37B unmanned spaceplane, in its latest mission has carried a Hall thruster as part of an experiment to improve the design for use on Advanced Extremely High Frequency (AEHF) military communications spacecraft. Once in operation, the experiment will use telemetry to record thruster performance and the thrust it puts on the spacecraft. The Air Force says that the results will be used to improve thruster and environmental models, and to better extrapolate ground test results to actual on-orbit performance. The Hall thruster experiment is a partnership between the Air Force Research Laboratory (AFRL), Space and Missile Systems Center (SMC), and Rapid Capabilities Office (RCO), and is based on the thrusters used on the first three AEHF satellites.
NASA to fly ion thruster on Mars orbiter
NASA engineers want to add ion engines to the orbiter and fly the efficient electrically-powered thruster system to Mars for the first time. The Mars mission model based on the asteroid retrieval mission would have enough power from its ion engines to launch to the red planet and return to Earth, and still fit in the envelope of a Falcon 9 or low-end Atlas 5 rocket, according to a NASA official.
A Mars orbiter launching in 2022 is a prime candidate to test out new technologies — like ion drive engines, better solar arrays, and lightning-fast broadband communications between Earth and Mars — to help scientists return samples from the Martian surface, and eventually send humans there, according to Charles Whetsel, who oversees formulation of future Mars missions at NASA’s Jet Propulsion Laboratory in Pasadena, California.
US Glenn Research Center leader in ion propulsion
At a recent demonstration at NASA's Glenn Research Center in Ohio, the X3 broke several records set by earlier Hall thrusters. "We have shown that X3 can operate at over 100 kW of power," said Alec Gallimore, lead researcher and U-M's dean of engineering, in an interview with Space.com. "It operated at a huge range of power from 5 kW to 102 kW, with electrical current of up to 260 amperes. It generated 5.4 Newtons of thrust, which is the highest level of thrust achieved by any plasma thruster to date."
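As a rough cross-check on the reported X3 figures, the sketch below computes thrust-to-power from the quoted 102 kW and 5.4 N, and estimates the implied exhaust velocity under an assumed (not reported) efficiency.

```python
power_w = 102_000.0        # W, quoted above
thrust_n = 5.4             # N, quoted above
assumed_efficiency = 0.6   # assumption, typical for large Hall thrusters

thrust_to_power_mn_per_kw = thrust_n / (power_w / 1000.0) * 1000.0
effective_ve = 2.0 * assumed_efficiency * power_w / thrust_n   # m/s

print(f"Thrust-to-power = {thrust_to_power_mn_per_kw:.0f} mN/kW")
print(f"Implied exhaust velocity = {effective_ve / 1000:.1f} km/s "
      f"(specific impulse = {effective_ve / 9.81:.0f} s)")
# About 53 mN/kW and 22.7 km/s (roughly 2,300 s) under the assumed efficiency.
```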
There are several types of ion thrusters and X3 is classified as a Hall thruster. A Hall thruster (also referred to as Hall-effect thruster, after discoverer Edwin Hall) uses an electric field to accelerate the propellant material. The process starts when electrons run through a circular channel and collide with atoms of a propellant (xenon is commonly used). The collisions knock electrons off and turn atoms into positively-charged ions. The process also creates a powerful electric field that pulls the plasma out of an exhaust, which in turn, generates the thrust.
Gallimore's team, though, was able to address the low-thrust limitation with the X3: "We figured out that instead of having one channel of plasma, where the plasma generated is exhausted from the thruster and produces thrust, we would have multiple channels in the same thruster…We call it a nested channel," Gallimore said.
NASA granted $6.5 million over three years to California-based rocket manufacturer Aerojet Rocketdyne to fabricate two NEXT flight systems known as XR-100 (thrusters and power processors) for use on a future NASA science mission. The X3 thruster is a key component of XR-100 and U-M researchers got $1 million from that grant for their work.
NASA is involved in work on two different ion thrusters: the NASA Evolutionary Xenon Thruster (NEXT) and the Annular Engine. NEXT, a high-power ion propulsion system designed to reduce mission cost and trip time, operates at 3 times the power level of NSTAR and was tested continuously for 51,000 hours (equivalent to almost 6 years of operation) in ground tests without failure, to demonstrate that the thruster could operate for the required duration of a range of missions.
In addition to flying the NEXT system on NASA science missions, NASA plans to take the NEXT technology to higher power and thrust-to-power so that it can be used for a broad range of commercial, NASA, and defense applications.
When NASA announced the Next Space Technologies for Exploration Partnerships (NextSTEP) in 2016, thrusters were one of the projects of the program. NASA Glenn’s patented Annular Engine has the potential to exceed the performance capabilities of the NEXT ion propulsion system and other electric propulsion thruster designs. It uses a new thruster design that yields a total (annular) beam area that is 2 times greater than that of NEXT. Thrusters based on the Annular Engine could achieve very high power and thrust levels, allowing ion thrusters to be used in ways that they have never been used before. The objectives are to reduce system cost, reduce system complexity, and enhance performance (higher thrust-to-power capability).
The NASA Glenn Research Center has been a leader in ion propulsion technology development since the late 1950s. The NASA Solar Technology Application Readiness (NSTAR) ion propulsion system enabled the Deep Space 1 mission, the first spacecraft propelled primarily by ion propulsion, to travel over 163 million miles and make flybys of the asteroid Braille and the comet Borrelly.
NexGen Ion Propulsion System in the Works by ArianeGroup and Boeing
Boeing has signed an agreement with the Orbital Propulsion unit of Ariane Group (based in Lampoldshausen, Germany) regarding joint development of a new generation of ion propulsion systems for satellites. The system will be based on Ariane Group’s dual mode Radio frequency Ion Thruster (RIT) technology, which offers a high-thrust mode for orbital transfer manœuvres.
Thanks to its high-thrust mode for orbit-raising operations, the RIT thruster system will enable Boeing to increase payload mass while reducing time-to-orbit on its satellites. Boeing is using its experience in on-orbit electric propulsion operations to update its satellite architectures for integration of the advanced RIT propulsion system.
The RIT 2X subsystem comprises the thruster itself, a high-power processing unit and a radio frequency generator. The subsystem successfully passed its preliminary design review milestone in mid-2016 and is moving towards a critical design review.
ISRO's first Electric Propulsion Satellite
India launched the 2,195-kg GSAT-9, or the South Asia Satellite, on May 5 carrying an electric propulsion or EP system, the first on an Indian spacecraft. Dr. Annadurai told The Hindu that GSAT-9's EPS would be used to keep its functions going when it reaches its final slot – roughly two weeks after launch – and throughout its lifetime. This new feature will eventually make advanced Indian spacecraft far lighter. It will even lower the cost of launches tangibly in the near future.
M. Annadurai, Director of the ISRO Satellite Centre, Bengaluru, explained its immediate and potential benefits: the satellite will be flying with around 80 kg of chemical fuel – or just about 25% of what it would have otherwise carried. Managing it for more than a decade in orbit will become cost efficient.
Dr. Annadurai said, “In this mission, we are trying EPS in a small way as a technology demonstrator. Now we have put a xenon-based EP primarily for in-orbit functions of the spacecraft. In the long run, it will be very efficient in correcting the [initial] transfer orbit after launch.”
“Using electric propulsion, we can send a four-tonne satellite, which is equivalent to a six-tonne satellite. Instead of chemical fuel, we save on weight and pack it with more transponders,” said A S Kiran Kumar, chairman of ISRO. “With electric propulsion, we can add more transponders into space on our own.”
International Space Station to trial Aussie-designed thrusters that could power journey to Mars
Dr Paddy Neumann of Neumann Space and two professors have developed an ion thruster that is heading to the International Space Station (ISS) for a year-long experiment that ultimately could revolutionise space travel.
Professor Marcela Bilek, one of the co-inventors, said they built a system in the early 2000s that was a “cathodic arc pulsed with a centre trigger and high ionisation flux”. Professor Bilek explained a cathodic arc was a system that used solid fuels — metals — and worked similar to a welding arc. “Where you’re ablating the material from the solid and turning it into what’s called a plasma — the sort of stuff you see in the sun,” she said.
Professor Bilek said magnesium came out on top in their tests as the fuel with the highest specific impulse, and so the most fuel efficient. “Magnesium happens to be a light metal, which is very abundant in aerospace materials,” she said.
Australian student smashes NASA’s fuel efficiency record
University of Sydney doctoral candidate in Physics, Paddy Neumann, has developed a “new kind of ion space drive” that outperforms NASA’s in fuel efficiency and that can use a variety of metals, even those found in space junk, according to student newspaper Honi Soit.
NASA's current record holder for fuel efficiency is its High Power Electric Propulsion, or HiPEP, system, which allows 9,600 (+/- 200) seconds of specific impulse. However, the new drive developed by Paddy Neumann has achieved up to 14,690 (+/- 2,000) seconds, according to student newspaper Honi Soit.
“NASA’s HiPEP runs on Xenon gas, while the Neumann Drive can be powered on a number of different metals, the most efficient tested so far being magnesium,” the paper explains. “As it runs on metals commonly found in space junk, it could potentially be fuelled by recycling exhausted satellites, repurposing them into fresh fuel.”
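A small sketch comparing the two specific-impulse figures quoted above. The conversion to exhaust velocity uses the standard Isp times g0 relation, and the mission delta-v is an assumed value chosen only for illustration.

```python
import math

G0 = 9.81  # m/s^2

def exhaust_velocity(isp_s: float) -> float:
    return isp_s * G0

def propellant_fraction(delta_v: float, isp_s: float) -> float:
    return 1.0 - math.exp(-delta_v / exhaust_velocity(isp_s))

hipep_isp = 9_600.0      # s, figure quoted above for NASA's HiPEP
neumann_isp = 14_690.0   # s, figure quoted above for the Neumann Drive
delta_v = 10_000.0       # m/s, assumed mission delta-v

for name, isp in [("HiPEP", hipep_isp), ("Neumann Drive", neumann_isp)]:
    print(f"{name}: v_e = {exhaust_velocity(isp) / 1000:.0f} km/s, "
          f"propellant = {propellant_fraction(delta_v, isp):.1%} of initial mass")
# HiPEP: about 94 km/s and 10.1%; Neumann Drive: about 144 km/s and 6.7%.
```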
China creates New records in Power and efficiency of Ion thrusters
China has finished building the world’s most powerful ion thruster and will soon use it to improve the mobility and lifespan of its space assets, according to a state media report. Researchers at the 502 research institute, which operates under the China Aerospace Science and Technology Corp. in Beijing, have delivered a new-generation Hall-effect thruster unit to Chinese customers in the space industry, the report by the Science and Technology Daily stated.
The machine will outperform all of the ion thrusters used on satellites or spacecraft that are currently in use, it added. The daily is run by the Ministry of Science and Technology.
The most powerful ones in operation today can accelerate their propellant to 30 kilometres per second at maximum thrust. But Mao Wei, chief designer of China’s Hall thruster, told the daily that the latest version will beat the current performance record of this kind of thruster by as much as 30 per cent. Gao Jun, another researcher involved in the project, said other countries were busy developing similar ion thrusters but that none had completed ground testing yet. As such, China should become “the first [country] to test the new technology on a high-altitude satellite,” he was quoted as saying by the newspaper.
Russia plan to use a nuclear reactor to power an electric ion propulsion system
Hall thrusters were developed by the Soviets in the 1950’s and first deployed in 1971 on a Russian weather satellite. Over 240 have flawlessly flown since, often to boost satellites into orbit and keep them there.
The Russian government began the nuclear energy propulsion project back in 2010, providing over $17 million as an initial investment. Anatoli Perminov, the former head of Russian space agency Roscosmos, told Interfax that “while the engine is expected to be fully assembled by 2017 the accompanying craft will not be ready before 2025.”
Nuclear energy can be used in two ways to power propulsion systems: its heat can be used directly to produce thrust, or it can be converted into electricity that powers the thrusters. Russia is targeting the latter technology for development. They plan to use a nuclear reactor to power an electric ion propulsion system.
If Russia is able to harness nuclear energy to power long-duration space missions by 2025, it would give them a significant lead in the modern space race. “Nuclear energy has significant advantages for deep space missions, in which the ability to carry fuel is a limiting factor in determining a mission’s duration. Solar power can be used for extended missions within the inner Solar System, but outer system missions are too far from the Sun to make this a practical energy source,” writes Ines Hernandez.
European research into radio-frequency ion propulsion was initially conducted in the 1960s by the University of Giessen, Germany. Since 1970, the Lampoldshausen team has continued with the research, development and refinement of Radio-Frequency Ion Thruster technologies, associated propulsion systems, analytical tools and techniques, processes and materials technologies.
Lampoldshausen’s first Radio-frequency Ion Thruster Assembly (RITA) was successfully demonstrated in space aboard ESA’s European Retrievable Carrier EURECA, launched by the Space Shuttle Atlantis in 1992. At that time, the RIT-10 system aboard EURECA provided a nominal specific impulse of 3,058 seconds.
QinetiQ delivers Ion Thrusters to European Space Agency
QinetiQ has delivered four electric propulsion thrusters to the European Space Agency (ESA), to be used on the BepiColombo mission to Mercury. Due for launch in 2017, the mission will be a European ‘first’, using a multiple ion engine propulsion module for interplanetary transfer.
To reach Mercury requires an extremely high velocity change, which can be achieved by ion thrusters with modest propellant quantities, compared to traditional chemical thrusters. The engines are based on the T6 ion thruster model, a development from the smaller T5 used by ESA on the successful GOCE mission. These thrusters are more effective for the BepiColombo mission than the alternative Hall and chemical technologies.
Field-emission electric propulsion (FEEP)
Field-emission electric propulsion (FEEP) is an advanced electrostatic space propulsion concept, a form of ion thruster, that uses liquid metal (usually either caesium, indium or mercury) as a propellant. A FEEP device consists of an emitter and an accelerator electrode. A potential difference of the order of 10 kV is applied between the two, which generates a strong electric field at the tip of the metal surface.
The interplay of electric force and surface tension generates surface instabilities which give rise to Taylor cones on the liquid surface. At sufficiently high values of the applied field, ions are extracted from the cone tip by field evaporation or similar mechanisms, which then are accelerated to high velocities (typically 100 km/s or more).
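A back-of-the-envelope sketch, assuming a singly charged indium ion falling through the roughly 10 kV potential mentioned above, shows how FEEP reaches the quoted exhaust velocities.

```python
import math

ELEMENTARY_CHARGE = 1.602e-19   # C
ATOMIC_MASS_UNIT = 1.661e-27    # kg
indium_mass = 114.8 * ATOMIC_MASS_UNIT  # kg, average indium atomic mass; singly charged ion assumed

voltage = 10_000.0  # V, order of magnitude given above
v = math.sqrt(2 * ELEMENTARY_CHARGE * voltage / indium_mass)
print(f"Exhaust velocity = {v / 1000:.0f} km/s")  # about 130 km/s, consistent with the '100 km/s or more' above
```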
A separate electron source is required to keep the spacecraft electrically neutral. Due to its very low thrust (in the micronewton to millinewton range), FEEP thrusters are primarily used for microradian, micronewton attitude control on spacecraft, such as in the ESA/NASA LISA Pathfinder scientific spacecraft.
Austrian startup ramping to mass produce tricky electric propulsion thrusters
Enpulsion is commercializing a Field Emission Electric Propulsion, or FEEP, thruster starting with small satellites ranging from 3 to 100 kilograms, Sypniewski said. ESA and industry have studied FEEP systems for well over a decade, but with limited success getting the technology beyond the laboratory.
The lure of FEEP thrusters is their ability to enable extremely precise movements or station-keeping while in space. ESA intended to use FEEP thrusters from Austria’s Fotec for the Lisa Pathfinder science mission, but production complications contributed meaningfully to the mission’s delays and ESA ultimately replaced the thrusters with more mature cold gas thrusters.
Enpulsion spun out of Fotec, a research division of the University of Applied Sciences Wiener Neustadt in Austria, to commercialize a breakthrough involving the use of a “porous tungsten crown emitter,” which Sypniewski said “provides a stable and repeatable technology that can be produced on a mass-production scale.”
“We have an enormous interest from worldwide small satellite manufacturers in our product,” Alexander Reissner, Enpulsion’s founder and CEO, said in a statement. “The key to this success is the concept of clustering pre-qualified building blocks, which is made possible by our proprietary Indium-FEEP technology. It seems that our offer of providing a custom propulsion solution at a catalog price and with less than two months lead time is really hitting a nerve of the industry.”
Sypniewski said the company plans to produce 100 to 200 thrusters per year, and has 150 pre-orders from customers in Europe and the United States. Among those customers is Iceye, a Finnish synthetic aperture radar startup that is flying a cluster of Enpulsion FEEP thrusters next year.
How Does an Ion Thruster Work?
As NASA explain: “An ion thruster ionizes propellant by adding or removing electrons to produce ions. Most thrusters ionize propellant by electron bombardment: a high-energy electron (negative charge) collides with a propellant atom (neutral charge), releasing electrons from the propellant atom and resulting in a positively charged ion. ” The gas produced consists of positive ions and negative electrons in proportions that result in no over-all electric charge. This is called a plasma. Plasma has some of the properties of a gas, but it is affected by electric and magnetic fields. Common examples are lightning and the substance inside fluorescent light bulbs.
Ion thrusters are categorized by how they accelerate the ions, using either electrostatic or electromagnetic force. Electrostatic thrusters use the Coulomb force and accelerate the ions in the direction of the electric field. Electromagnetic thrusters use the Lorentz force.
The most common propellant used in ion propulsion is xenon, which is easily ionized and has a high atomic mass, thus generating a desirable level of thrust when ions are accelerated. It also is inert and has a high storage density; therefore, it is well suited for storing on spacecraft. In most ion thrusters, electrons are generated with the discharge hollow cathode by a process called thermionic emission.
Electrons produced by the discharge cathode are attracted to the discharge chamber walls, which are charged to a high positive potential by the voltage applied by the thruster’s discharge power supply. Neutral propellant is injected into the discharge chamber, where the electrons bombard the propellant to produce positively charged ions and release more electrons. High-strength magnets prevent electrons from freely reaching the discharge channel walls. This lengthens the time that electrons reside in the discharge chamber and increases the probability of an ionizing event.
The positively charged ions migrate toward grids that contain thousands of very precisely aligned holes (apertures) at the aft end of the ion thruster. The first grid is the positively charged electrode (screen grid). A very high positive voltage is applied to the screen grid, but it is configured to force the discharge plasma to reside at a high voltage. As ions pass between the grids, they are accelerated toward a negatively charged electrode (the accelerator grid) to very high speeds (up to 90,000 mph).
“The positively charged ions are accelerated out of the thruster as an ion beam, which produces thrust. The neutralizer, another hollow cathode, expels an equal amount of electrons to make the total charge of the exhaust beam neutral. Without a neutralizer, the spacecraft would build up a negative charge and eventually ions would be drawn back to the spacecraft, reducing thrust and causing spacecraft erosion.”
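As a rough check on the '90,000 mph' figure above, the sketch below estimates the net accelerating voltage a singly charged xenon ion would need to reach that speed; the result of roughly a kilovolt is an illustration, not a quoted specification.

```python
ELEMENTARY_CHARGE = 1.602e-19    # C
XENON_MASS = 131.3 * 1.661e-27   # kg, singly charged xenon ion assumed

target_speed = 40_000.0  # m/s, roughly 90,000 mph
required_voltage = XENON_MASS * target_speed**2 / (2 * ELEMENTARY_CHARGE)
print(f"Required net accelerating voltage = {required_voltage:.0f} V")  # about 1,100 V
```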
The primary parts of an ion propulsion system are the ion thruster, power processing unit (PPU), propellant management system (PMS), and digital control and interface unit (DCIU). The PPU converts the electrical power from a power source—usually solar cells or a nuclear heat source—into the voltages needed for the hollow cathodes to operate, to bias the grids, and to provide the currents needed to produce the ion beam. The PMS may be divided into a high-pressure assembly (HPA) that reduces the xenon pressure from the higher storage pressures in the tank to a level that is then metered with accuracy for the ion thruster components by a low-pressure assembly (LPA). The DCIU controls and monitors system performance, and performs communication functions with the spacecraft computer.
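To see how the PPU's beam voltage and beam current translate into thrust, here is a minimal sketch for an idealized gridded xenon thruster. The 1.76 A and 1,100 V inputs are assumed, illustrative values, not specifications from the article.

```python
import math

ELEMENTARY_CHARGE = 1.602e-19    # C
XENON_MASS = 131.3 * 1.661e-27   # kg

def ideal_thrust(beam_current_a: float, beam_voltage_v: float) -> float:
    """Thrust = mass flow * exhaust velocity, assuming every ion is singly charged."""
    mass_flow = beam_current_a * XENON_MASS / ELEMENTARY_CHARGE                        # kg/s
    exhaust_velocity = math.sqrt(2 * ELEMENTARY_CHARGE * beam_voltage_v / XENON_MASS)  # m/s
    return mass_flow * exhaust_velocity

print(f"Thrust = {ideal_thrust(1.76, 1100.0) * 1000:.0f} mN")  # about 96 mN for these assumed values
```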
They make many people cringe and little boys love to taunt little girls with them, but earthworms are endlessly useful. Without them, our soil would not be as amenable to growing crops as it is. Depending on where you live, you probably have earthworms in your soil already, but you can further improve its quality by growing and making use of earthworms. They are a natural way to increase your garden’s yield, and you can grow them yourself!
Earthworms are members of the class of animals called Oligochaeta. Common names include earthworm, rainworm, night crawler, and angleworm. Earthworms are very simple animals with segmented bodies and uncomplicated circulatory and digestive systems. They can regenerate parts of themselves. Earthworms eat organic matter in the soil as they move through it. They can live in leaf litter, compost, topsoil, or in deep burrows below the topsoil.
Grow Your Garden With Earthworms
Your soil is enriched by the presence of earthworms. With more earthworms, it can become an even better environment in which to grow vegetables. Earthworms play several roles in the formation of good soil: biological, chemical, and physical.
- Biological. Because they consume organic matter, earthworms help compost to create rich humus. They eat larger pieces of organic matter, such as leaf litter. This results in the breaking down of large pieces of matter into smaller pieces. They can take compostable materials and turn them into wonderful soil.
- Chemical. Earthworm casts are a natural way to fertilize plants and gardens. When earthworms ingest small particles of soil, they are digested into a paste and excreted. The excreted matter is called casts and provides plants with nutrients in an accessible form. In other words, worm poop feeds the plants in your garden.
- Physical. Because earthworms burrow through the soil in your garden, they act as a natural aerator. The channels and holes that they produce by their movements keep the soil open and loose, which allows air flow and good drainage.
Soil without earthworms simply cannot support plants and vegetables to the same extent that soil with these creatures can. For the best possible garden, encouraging and growing earthworms is crucial.
At a minimum, you should create an environment in your garden that encourages earthworms to move in and take up residence. Earthworms need organic matter, moist and loose soil, temperatures between fifty and seventy degrees Fahrenheit, and a pH between six and seven. Composting your garden is the best way to reach these conditions. When your garden is rich with compost, you provide the worms with food. The compost also helps to keep moisture in and keeps temperatures at a consistent level. You can purchase a pH testing kit to find out what your soil’s pH level is and to find out how to alter it. Once you attract earthworms to your garden, they will be there for years to come and will keep the soil conditions right for themselves and for growing your vegetables.
Growing Your Own Earthworms
Encouraging earthworms to move into your garden can take time and effort. You need to create the compost and turn it into the soil before they will come around. To speed up the process, you can raise your own earthworms and transplant them directly into your garden. Growing earthworms is a very doable project. You will need to create a home for them, select worms to start your farm, and then harvest them for your garden.
Creating an Earthworm Farm
- Find them a home. A traditional way to create a home for worms is to bury a refrigerator. This keeps them contained, but be sure that it does not have Freon, which is a toxic substance, before you bury one. Also, remove the door so no one gets trapped in it. You can also use a bucket for a small-scale worm farm, a wooden box, or a plastic kiddie swimming pool.
- Consider drainage. If you plan to keep the farm out in the elements, make sure it has drainage holes so you don’t flood and drown your worms. Keep the worms from escaping by making the holes with nails and keeping the nails in place. The water will get out, but the worms will not.
- Fill the container. Use at least medium-quality soil. It does not have to be top of the line, but you don’t want soil that has too much sand or clay either. A loose, medium soil with some sand and clay will work.
- Add food. The worms will need something to eat. Mix organic materials into the soil for the worms’ meals. This could be leaf litter, grass clippings, or kitchen scraps. Avoid potato peels, as they will grow. Also avoid eggshells, manure, and meats. These can raise the temperature of the soil too high.
- Keep it mixed. Stir your earthworm soil and food mixture well so that the worms can eat anywhere in their home. This will keep them from overcrowding in one area.
- Create a top. You need to place something over the top of your farm to keep in moisture and keep out the light. A layer of leaves, grass clippings, or a piece of cardboard will work well.
Finding Worms to Stock the Farm
You could dig up some worms to place in your new farm and hope they grow and reproduce, but a better way to get your farm started is to purchase stock. The initial investment in starter worms is usually pretty low. You should be able to find a worm farm in your local area. Most commonly used worms include red wigglers and European nightcrawlers. When you find a worm farm from which to purchase your supply, talk with the owner about what kind you should buy and how many you will need for the size of worm home you have created.
Growing Your Worms
Once you have your worms started in their new home, they don’t require a whole lot of maintenance. Keep their soil moist without overwatering it, and keep them well fed with yard waste and kitchen scraps. To help them chomp through it more quickly, chop up food waste into little pieces, or even consider pulsing it in the blender for a minute. It is important not to overfeed them. Start with a small amount. If it is gone 24 hours later, try adding a larger amount. If some remains after 24 hours, it was too much. Left to their own devices, your worms will multiply and make more worms. You need not be careful about selecting male or female worms for ensuring reproduction as worms are hermaphroditic.
If you plan to keep producing worms for a long period of time, you should harvest them about once a month. This keeps the population at a reasonable level so that overcrowding does not occur. Harvesting is simple. Spread some of the earthworm dirt mixture on a table, board, or other flat surface and hand pick the worms. If the worms are intended for your garden, simply transfer them there.
Other Uses for Your Worms
When you have enough earthworms for your garden, you can use the rest for several different purposes. You can use them to create more compost for the garden. You can use them for fishing. You can even sell them to others. Advertise that you have nightcrawlers for sale, and your fishing neighbors will come with money in hand.
Growing earthworms is a great way to create an optimal vegetable garden. Worms are your friends when it comes to maintaining rich, organic soil that is packed with nutrients. Keeping a worm farm is not only good for your garden, it can even be extra income. When you get the kids involved, it also becomes a fun homeschooling project. |
Grade Levels: 5/6, 7/8, 9/10
Subject Area: Social Studies, History, ELA, Civics
This lesson plan is inspired by the article “Confederation Diary,” in the Summer 2017 issue of Kayak: Canada’s History Magazine for Kids.
This lesson examines the various viewpoints and perspectives of the participants in the Confederation debates, including the voices of women, Indigenous peoples and other groups left out of the process. Using primary source material, students will be asked to interpret various perspectives on Confederation as well as to imagine how discussions could look today, set within contemporary values.
45 minutes x 5
Historical Thinking Concept(s)
This lesson plan uses all six historical thinking concepts: establishing historical significance, using primary source evidence, identifying continuity and change, analyzing cause and consequence, taking historical perspectives, and understanding the ethical dimension of historical interpretations.
- Outline the major participants in the Confederation debates of the 1860s, as well as identify the principal groups who were excluded.
- Summarize the point of view of one of the provinces or parties involved in the debates, including pros and cons of joining the union.
- Describe some of the major ideas informing Confederation debates.
- Explain the concerns of groups not included in the debates, including linguistic minority groups, women, and First Nations, Inuit, and Métis people.
- Apply their new understanding of the Confederation debates to create their own political cartoon, illustrating a distinct point of view on the union.
- Analyze how the debates over nationhood, inclusion and identity have evolved by identifying those who might be included in a discussion of Confederation if held today.
- Recommend a new model for contemporary Confederation that is more inclusive and reflective of Canada today.
At the end of the summer of 1864, delegates from Nova Scotia, New Brunswick and Prince Edward Island gathered in Charlottetown to discuss the possible union of the Maritime colonies. The Canadian delegation — led by John A. Macdonald, George Étienne Cartier and George Brown — invited itself to their meeting. The Canadian delegates did their best to persuade their counterparts of the benefits of an extended union and debated the main points of their initiative with the Maritime delegates.
Having come to an agreement on the principle of a colonial union, the delegates decided to pursue the discussions in Quebec City. In October 1864, they met in the library of the Parliament of the Province of Canada to debate two major visions: one favouring a strong central government and the other upholding the autonomy of the provinces.
At the second conference, 72 resolutions were adopted. They were essentially a draft constitution. At the end of the Quebec Conference, the delegates were asked to have the resolutions ratified by the legislative assemblies of their respective colonies. The process of ratification was not without friction. As these publications suggest, Confederation was intensely debated in the assemblies and newspapers. In Nova Scotia, Joseph Howe opposed the union initiative in a series of open letters entitled “The Botheration Scheme.” Other politicians and journalists published vehement pamphlets. However, Thomas D’Arcy McGee, in Montreal, and Edward Whelan, in Charlottetown, eloquently defended the Confederation initiative.
A third conference began in London, England, on December 4, 1866. Ratified by the legislative assemblies of the Province of Canada, New Brunswick and Nova Scotia, the 72 resolutions of the Quebec Conference now had to be revised and validated by the imperial government. A bill entitled the British North America Act, essentially a written constitution, was introduced in both houses of Britain’s Parliament and approved. Queen Victoria then received the delegates at court and, on March 29, 1867, gave royal assent to Confederation.
Confederation was a union of colonies who considered themselves to be as different as countries vast distances apart. Yet, not all nations in the land that was to be called Canada were included. First Nations had long signed treaties with the government to preserve friendship and goodwill. For First Nations people, the agreements they signed were contracts that should last forever, or as long as the sun shines and the rivers flow. First Nations, along with their lands, were mentioned in the 72 resolutions as subject to the jurisdiction of the central government. The Resolutions, as well as the British North America Act which created Confederation, did not name First Nations as nations unto themselves. Confederation was signed without their understanding or consent, leading the way to future abuses of rights.
In addition, Confederation was not a union of diverse people. At the time of Confederation in 1867 women were not allowed to be politicians. They were not even allowed to vote in federal elections. It was not until 1918 that women could vote in federal elections, and not until 1919 that women gained the right to be elected to the House of Commons. Most men believed women were incapable of participating in democratic affairs. At the Charlottetown, Quebec and London conferences, however, women accompanied their husbands and fathers. The women played a major role in the social aspects of the conferences. Mercy Ann Coles, who accompanied her father from Charlottetown to Quebec City, carefully recorded her impressions of the events in her diary. Sarah Caroline Steeves, daughter of a delegate from Nova Scotia, is said to have made a cushion with silk from a gown she wore to the conference balls in 1864. Still, and despite these voices, it is clear that the perspectives of women did not feature prominently in the shape the country was to take.
Confederation was instituted as a union of the provinces with the aim to promote economic opportunity and security and well-being for the provinces, as well as to ensure the development of a national railway system. It was not a compact for or of the people, but an agreement that the provinces arrived at through negotiation and compromise, by considering what was best for them.
The Lesson Activity
- Working in pairs or in small groups (3-4), students will share any information they know about how Canada was founded. If no one in the group has prior information, students will create a bullet-point chart showing what they think happened. Groups should then share their version of events with the rest of the class.
- Students will watch the Heritage Minute. Breaking into pairs or groups, they will be asked to revise the information or the stories they created according to the information provided in the Heritage Minute.
(The cartoon, published in Quebec in 1864, paints Confederation as the harbinger of destruction. The sacrificial lamb in the image represents Québec as a sacrifice to the hydra monster of Confederation, which is ridden by George Brown, one of the Fathers of Confederation. The cartoonist predicts the demise of French Canadian culture, and shows Québec politicians blessing the monster instead of defending the lamb. The accompanying text for this cartoon argues that Québec will be destroyed by joining Confederation. It represents the total destruction of Québec’s cultural, religious, and linguistic heritage.)
- Students read “Confederation Diary” considering the perspectives of women left out of the Confederation debates.
- Teacher presents treaty medal (Annex 2). The class can predict what the imagery seems to represent and what it might mean, before it is explained by the teacher.
(Between 1871 and 1921, the Crown concluded 11 numbered treaties with groups located in the territorial boundaries set by Canada. First Nations leaders believed that the Treaty they were signing was establishing a nation-to-nation relationship, negotiated as equals, that would ensure the group’s survival and success into the future.)
- As a small group or as a class, students brainstorm what other groups might have been left out of the Confederation debates.
- Using the summary perspectives attached, students will produce their own political cartoon outlining their province or territory’s objection to, or agreement with, Confederation (according to the perspective based on the handouts).
- Students will watch the original Heritage Minute again, noting what they might change to better reflect the diversity of Confederation debates and the difficulties of achieving consensus.
- Students will engage in a small or large-group discussion about which groups might be invited today to discuss Confederation, if the debate was to take place in contemporary times.
“Confederation Diary,” in the Summer 2017 issue of Kayak: Canada’s History Magazine for Kids.
Heritage Minute: Sir John A. Macdonald
Student Handout 1
Annex 1: Confederation Cartoon
Annex 2: Treaty Medal
Confederation for Kids |
An abjad is an alphabet in which all the letters are consonants. Even though vowels can be added in some abjads, they are not needed to write a word correctly. Well-known examples of abjads are the Arabic alphabet and the Hebrew alphabet.
Abjads are the first writing systems that were made only to show a word's pronunciation, instead of its meaning, unlike ideographs or ideograms, and they were created before full alphabets, like the Greek alphabet, which have letters for both consonants and vowels.
The earliest known abjad in the world is the Phoenician alphabet. Since in Afro-Asiatic languages, the root meaning of a word is found in its consonants, abjads are widely used in those languages. There are also languages without consonant roots that use abjads, such as Persian and Urdu, which both use the Arabic alphabet. |
The Mississippian Period was much like the earlier part of the Paleozoic Era in North America – much of the continent was covered by warm shallow seas in which abundant life contributed to the thick sheets of limestone that were laid down. Eastern North America was still a highland, the result of the ongoing collisions with Europe and the microcontinents that were rifting away from Gondwana, and the ocean between North America and Gondwana was getting narrower and narrower.
In western North America, the Antler uplift that we discussed at the end of May was still there, and it was shedding sediment into the western seaway. The combined weight of those sediments and the piles of thrusted rocks pushed over the western edge of North America by the Antler Orogeny pushed the earth’s crust downward, creating what’s called a foreland basin. The Antler foreland basin extended through what is now eastern Nevada, western Utah, and into parts of Idaho and California. It was a relatively deep trough, and the mud that found its way into it contained a lot of organic material, washing in from both the west and the east. The resulting rock is called the Chainman Shale – a rock we mentioned in the Devonian, in May, as the source rock for the oil fields of Nevada. The Chainman is an excellent source rock, as much as 8% total organic carbon in some places.
There was some tectonic activity in what is now Alaska’s North Slope and across the Arctic islands of northern Canada – it’s called the Innuitian or Ellesmerian Orogeny, and it was mostly taking place toward the end of the Devonian and into the Mississippian Period.
In Europe, Gondwana was pretty much encroaching on the southern margin of Baltica, but it was a complex interaction with lots of smaller blocks colliding. The seaway between Gondwana and Europe, called the Tethys Sea, is actually still there. We call it the Mediterranean Sea today.
—Richard I. Gibson
Links to Paleogeographic maps:
Western North America |
Guiding principles of assessment: honest, insightful, and productive assessment thrives only in a culture of trust. The effective measurement of learning outcomes encourages students, faculty, staff, and administration to examine and collaborate on ways to improve teaching and services to students. Indiana University’s principles of assessment hold, among other things, that the assessment of student learning is based on goals set by faculty and students in mutual activity, and that it is both a formative and a continuous process. Pennsylvania’s Office of Child Development has published guiding principles on early childhood assessment for educators and professionals working with children from birth to age 8 (grade 3). Above all, assessment for learning must be underpinned by the confidence that every student can improve.
How we go about assessing someone’s performance is called the assessment strategy, and an institution-wide assessment culture shifts through dialogue: “the way that culture changes is through conversations at every level” (Dr. Leslie Reid). In vocational settings, the technical principles of assessment are rules each RTO must follow when planning, conducting, and reviewing candidate assessment; the first of the four principles is validity, meaning a valid assessment assesses what it claims to assess. Assessment for learning is best described as a process by which assessment information is used by teachers to adjust their teaching strategies, and by students to adjust their learning strategies. Reliability is another core principle: if an assessment were totally reliable, assessors acting independently, using the same criteria and mark scheme, would reach the same judgements. Appropriateness means the method of assessment is suited to the performance being assessed, and fairness means the method does not disadvantage particular candidates. The key concept of assessment is that it enables the assessor to confirm whether the learner has gained the required understanding, skills, and knowledge as part of their programme. Similar principles underpin the nursing assessment process: what it is, why it is done, and what factors influence it.
Several published frameworks set out such principles. In 1992, the American Association for Higher Education’s Assessment Forum released its “Principles of Good Practice for Assessing Student Learning,” a document developed by twelve prominent scholar-practitioners of the movement; institutions such as Mohawk Valley Community College have adopted its nine principles. The Assessment Reform Group’s “Assessment for Learning: Research-Based Principles to Guide Classroom Practice” (2002) sets out ten research-based principles, and related work on good assessment practice in higher education stresses theory-practice links, self-regulation, engagement, empowerment, and academic and social learning communities. A consensus statement on principles for effective assessment of student achievement has been endorsed by higher education associations including AACC, AASCU, and ACE.
The principles of assessment apply to all forms of assessment. Summative assessment, for example, involves procedures that aim to determine students’ learning at a particular time, such as when reporting against achievement standards, after completion of a unit of work, or at the end of a term or semester. Assessment methods should be valid, authentic, consistent, sufficient, reliable, and inclusive, and should adhere to relevant policies and procedures such as data protection, equality of opportunity, and health and safety. The primary purpose of assessment and evaluation is to improve student learning.
A white dwarf 3,260 light-years from Earth - mere walking distance in cosmic terms - looks like it could go supernova. And that stellar explosion would have dire consequences for our planet, not to mention our possible descendants.
Located in the binary system T Pyxidis, the white dwarf in question was originally thought to be far more distant from our solar system. Although three thousand light-years might sound like a fairly safe distance away from a potential supernova, it really is quite close by astronomical standards. To put it in perspective, the Milky Way is roughly 100,000 light-years across, only about thirty times the distance separating us from T Pyxidis.
The huge white dwarf in the T Pyxidis system is known as a recurrent nova because it undergoes relatively minor eruptions at regular intervals. Small nova explosions have been observed every twenty years for over a century, although the last recorded nova burst was in 1967. Astronomers are unsure why the star is overdue.
These explosions occur because the white dwarf attracts stray hydrogen gas from its partner star. Once the gas has sufficiently built up, the eruption occurs. The concern for astronomers is whether the amount of hydrogen expelled by the star in these novas is more or less than that originally siphoned off. If more mass is taken in than is ejected, that means the star is slowly increasing in mass and may at some point reach the so-called Chandrasekhar Limit. It is at this point that the white dwarf would collapse in on itself due to its own overwhelming gravitational stress, leading to a massive, Type 1A supernova.
Astronomers have generally said any supernova within a hundred light-years would be cataclysmic for Earth, but T Pyxidis could be dangerous from even thousands of light-years away. The gamma rays released by a Type 1A supernova at that distance would hit Earth with the force of a thousand solar flares. Most destructively, the rays would create huge amounts of nitrous oxide in the Earth's atmosphere, which would in turn eradicate the ozone layer.
Admittedly, on the list of threats to our planet, this one should remain fairly low. The current consensus is that T Pyxidis, if it goes supernova at all, won't explode for some ten million years. By cosmic (and geological) standards, ten million years is practically tomorrow, but it's hard to feel too worried; even when T Pyxidis does go supernova, it would take another three millennia for the radiation to reach Earth. So our descendants 10,000,000 years from now can rest easy too. Although I pity those poor fools 10,003,000 years in the future. Truly, they were the unluckiest of all.
International Bog Day 2011
To celebrate the 20th International Bog Day on Sunday 31st of July, our Affiliate Arkive is highlighting some of the amazing species that call the beautiful bog their home, and why they are such important habitats to preserve. They may have a bit of a bad reputation, but bogs are important ecological sites sustaining a unique array of species.
Bogs are often low oxygen, high carbon dioxide environments leading to acidic conditions. High acidity prevents nutrients being available to plants in a useful form and this has led to plants turning to more grisly methods to get the nutrients they need. These plants are able to break down and absorb nitrogen and other nutrients from animals, usually invertebrates such as insects.
Also found in boggy areas, the fanged pitcher plant (Nepenthes bicalcarata) produces nectar which attracts invertebrates to the brim of its pitcher. When stepping on the slippery, waxy surface the invertebrates will often fall into the depths of the pitcher. Unable to escape, they drown in the pitcher fluid and their bodies are broken down by digestive enzymes.
Contrary to popular belief, bogs aren’t dull and dreary. Prevalent on peat bogs, sphagnum mosses provide a carpet of colour. For example, the Baltic bog moss (Sphagnum balticum) can form large floating mats of green and orange. The bog asphodel (Narthecium ossifragum) is another vibrant bog-dwelling plant. It has bright yellow star-like flowers which were once used as a hair dye. The bogbean (Menyanthes trifoliata) can be found throughout most of Europe, with its delicate white flower brightening boggy areas.
So why is there a need for an International Bog Day? Boggy habitats are becoming rarer and rarer as they are increasingly being drained, dredged, filled or flooded, for urban development, agriculture, and pond and reservoir construction. Bogs are an important habitat for many specialized species, and they certainly deserve protecting and a day of recognition.
Photo shows common blue damselfly caught by sundew carnivorous plant.
Credit: Emmanuel Boitier, Biosphoto, courtesy Arkive
To see more incredible bog images, link to Arkive. |
RF Receivers Information
RF receivers are electronic devices that separate radio signals from one another and convert specific signals into audio, video, or data formats. RF receivers use an antenna to receive transmitted radio signals and a tuner to separate a specific signal from all of the other signals that the antenna receives. Detectors or demodulators then extract information that was encoded before transmission. There are several ways to decode or modulate this information, including amplitude modulation (AM) and frequency modulation (FM). Radio techniques limit localized interference and noise. With direct sequence spread spectrum, signals are spread over a large band by multiplexing the signal with a code or signature that modulates each bit. With frequency hopping spread spectrum, signals move through a narrow set of channels in a sequential, cyclical, and predetermined pattern.
Selecting RF Receivers
Selecting RF receivers requires an understanding of modulation methods such as AM and FM. On-off key (OOK), the simplest form of modulation, consists of turning the signal on or off. Amplitude modulation (AM) causes the baseband signal to vary the amplitude or height of the carrier wave to create the desired information content. Frequency modulation (FM) causes the instantaneous frequency of a sine wave carrier to depart from the center frequency by an amount proportional to the instantaneous value of the modulating signal. Amplitude shift key (ASK) transmits data by varying the amplitude of the transmitted signal. Frequency shift key (FSK) is a digital modulation scheme using two or more output frequencies. Phase shift key (PSK) is a digital modulation scheme in which the phase of the transmitted signal is varied in accordance with the baseband data signal.
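To make the contrast between these schemes concrete, the short sketch below generates example OOK/ASK, FSK, and PSK waveforms for a small bit pattern. It is only an illustrative baseband simulation, not drawn from any particular receiver's documentation; the sample rate, bit rate, and carrier frequencies are arbitrary values chosen for readability, and a practical FSK modulator would normally keep the phase continuous across bit boundaries.

    import numpy as np

    fs = 10_000                     # sample rate, Hz (illustrative value)
    bit_rate = 100                  # bits per second (illustrative value)
    fc = 1_000                      # carrier frequency, Hz (illustrative value)
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

    samples_per_bit = fs // bit_rate
    symbols = np.repeat(bits, samples_per_bit)   # rectangular baseband signal
    t = np.arange(symbols.size) / fs
    carrier = np.sin(2 * np.pi * fc * t)

    # On-off keying / amplitude shift keying: the bits scale the carrier amplitude.
    ook = symbols * carrier

    # Frequency shift keying: each bit selects one of two output frequencies.
    f0, f1 = 800, 1_200
    fsk = np.sin(2 * np.pi * np.where(symbols == 1, f1, f0) * t)

    # Binary phase shift keying: each bit selects the carrier phase (0 or pi).
    psk = np.sin(2 * np.pi * fc * t + np.pi * (1 - symbols))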
RF receivers vary in terms of performance specifications such as sensitivity, digital sampling rate, measurement resolution, operating frequency, and communication interface. Sensitivity is the minimum input signal required to produce a specified output signal having a specified signal-to-noise (S/N) ratio. Digital sampling rate is the rate at which samples can be drawn from a digital signal in kilo samples per second. Measurement resolution is the minimum digital resolution, while operating frequency is the range of received signals. Communication interface is the method used to output data to computers. Parallel interfaces include general-purpose interface bus (GPIB), which is also known as IEEE 488 and HPIB Protocol. Serial interfaces include universal serial bus (USB), RS232, and RS485.
Additional considerations when selecting RF receivers include supply voltage, supply current, receiver inputs, RF connectors, special features, and packaging. Some RF receivers include visual or audible alarms or LED indicators that signal operating modes such as power on or reception. Other devices attach to coaxial cables or include a connector or port to which an antenna can be attached. Typically, RF receivers that are rated for outdoor use feature a heavy-duty waterproof design. Devices with internal calibration and a frequency range switch are also available.
Applications and Industries
RF receivers are used in a variety of applications and industries. Often, devices that are used with integrated circuits (ICs) incorporate surface mount technology (SMT), through hole technology (THT), and flat pack. In the telecommunications industry, RF receivers are designed to fit in a metal rack that can be installed in a cabinet. RF receivers are also used in radios and in electronic article surveillance systems (EAS) found in retail stores. Inventory management systems use RF receivers as an alternative to barcodes.
Related Products & Services
RF modules are partially finished circuits that can be incorporated into larger designs.
RF transceivers are electronic devices that receive and demodulate an RF signal, then modulate and transmit a new signal.
RF transmitters are electronic devices consisting of an oscillator, modulator, and other circuits that produce an RF signal.
Telemetry Receivers and Telemetry Transmitters
Telemetry receivers and telemetry transmitters are data acquisition components used to gather information from remote locations via wireless communication. |
Infrared Telescopes Spy Small, Dark Asteroids
This chart illustrates why infrared-sensing telescopes are more suited to finding small, dark asteroids than telescopes that detect visible light. The top of the chart shows how three asteroids of the same size but differing compositions would appear in visible light. An asteroid that has a shinier surface, or higher albedo, will appear brighter than a dark asteroid, even though they are the same size. This is because a shinier asteroid will reflect more visible light from the sun.
The bottom of the chart shows the same three asteroids when viewed in infrared light. They appear to be the same brightness, regardless of their albedo. Objects of the same size will radiate about the same amount of infrared light, as a result of being heated by the sun. It's easier for an infrared telescope to see a small, dark asteroid because it senses the object's heat signature rather than the small amount of reflected sunlight.
Image credit: NASA/JPL-Caltech |
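The reasoning behind the chart can be reduced to a few lines of arithmetic. The sketch below is a deliberate simplification (spherical asteroid, Bond albedo taken equal to the visible albedo, emissivity and phase effects ignored): the sunlight an asteroid reflects scales with its albedo, while the heat it re-radiates scales with one minus the albedo, so a fivefold difference in shininess barely changes the infrared output.

    SOLAR_FLUX = 1361.0   # W/m^2 at 1 AU, used purely for illustration
    PI = 3.141592653589793

    def visible_and_infrared(albedo, radius_m, solar_flux=SOLAR_FLUX):
        """Sunlight reflected (seen by a visible-light telescope) versus sunlight
        absorbed and re-radiated as heat (seen by an infrared telescope)."""
        cross_section = PI * radius_m ** 2
        reflected = albedo * solar_flux * cross_section         # grows with albedo
        emitted = (1.0 - albedo) * solar_flux * cross_section   # nearly flat in albedo
        return reflected, emitted

    dark = visible_and_infrared(albedo=0.05, radius_m=100.0)
    shiny = visible_and_infrared(albedo=0.25, radius_m=100.0)
    # Reflected light differs by a factor of 5; thermal emission by only about 1.3.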
Our exploration of space has led to insights about our planet and universe, scientific experimentation and discovery, and a satellite communications system that interconnects our global community. But sometimes, what goes up doesn’t come down.
This image illustrates items in Earth’s orbit that are currently being tracked, about 95% of which are orbital debris & not functional satellites. Photo courtesy of NASA.
As it turns out, the area of space known as “low Earth orbit” is congested with debris, much of it from explosions and collisions, some intentionally released during launch and mission operations, and millions of tiny objects, such as paint flecks, that result from heat stress on spacecraft. NASA tracks this debris, which includes more than 21,000 pieces larger than about 4" in diameter and millions of smaller pieces.
This part was removed from the Hubble Space Telescope during in-space repairs. The yellow arrows show the damage from many orbital debris impacts. Photo courtesy of NASA.
Even seemingly small debris can cause significant damage to spacecraft and satellites because all collisions in space are high-speed. By studying damaged parts, NASA’s Orbital Debris Program is able to help design systems to protect new spacecraft and satellites from debris impact. The program also works to minimize the amount of future debris through improved design and materials. To learn more about space debris, visit http://orbitaldebris.jsc.nasa.gov.
Space Junk 3D, a short documentary, tells the story of the ring of debris orbiting Earth – and explains how that debris could affect future space exploration.
The film, which continues to play at museums, planetariums, and theaters around the country, will be available on DVD in September. To learn more about this film, visit www.spacejunk3d.com. |
Canine influenza (CI, or dog flu) in the U.S. is caused by the canine influenza virus (CIV), an influenza A virus. It is highly contagious and easily spread from infected dogs to other dogs through direct contact, nasal secretions (through coughing and sneezing), contaminated objects (kennel surfaces, food and water bowls, collars and leashes), and by people moving between infected and uninfected dogs. Dogs of any breed, age, sex or health status are at risk of infection when exposed to the virus.
Unlike seasonal flu in people, canine influenza can occur year round. So far, there is no evidence that canine influenza infects people. However, it does appear that at least some strains of the disease can infect cats.
Canine influenza symptoms and diagnosis
CIV infection resembles canine infectious tracheobronchitis (“kennel cough”). The illness may be mild or severe, and infected dogs develop a persistent cough and may develop a thick nasal discharge and fever. Other signs can include lethargy, eye discharge, reduced appetite, and low-grade fever. Most dogs recover within 2-3 weeks. However, secondary bacterial infections can develop, and may cause more severe illness and pneumonia. Anyone with concerns about their pet’s health, or whose pet is showing signs of canine influenza, should contact their veterinarian.
CIV can be diagnosed early in the illness (less than 4 days) by testing a nasal or throat swab. The most accurate test for CIV infection is a blood test that requires a sample taken during the first week of illness, followed by a second sample 10-14 days later.
Transmission and prevention of canine influenza
Dogs are most contagious during the two- to four-day incubation period for the virus, when they are infected and shedding the virus in their nasal secretions but are not showing signs of illness. Almost all dogs exposed to CIV will become infected, and the majority (80%) of infected dogs develop flu-like illness. The mortality (death) rate is low (less than 10%).
The spread of CIV can be reduced by isolating ill dogs as well as those who are known to have been exposed to an infected dog and those showing signs of respiratory illness. Good hygiene and sanitation, including hand washing and thorough cleaning of shared items and kennels, also reduce the spread of CIV. Influenza viruses do not usually survive in the environment beyond 48 hours and are inactivated or killed by commonly used disinfectants.
There are vaccines against the H3N8 strain of canine influenza, which was first discovered in 2004 and until 2015 was the only strain of canine influenza found in the United States. However, a 2015 outbreak of canine influenza in Chicago was traced to the H3N2 strain – the first reporting of this strain outside of Asia – and it is not known whether the H3N8 vaccine provides any protection against this strain. Used against H3N8, the vaccines may not completely prevent infection, but appear to reduce the severity and duration of the illness, as well as the length of time when an infected dog may shed the virus in its respiratory secretions and the amount of virus shed – making them less contagious to other dogs.
The CIV vaccination is a “lifestyle” vaccination, recommended for dogs at risk of exposure due to their increased exposure to other dogs – such as boarding, attending social events with dogs present, and visiting dog parks.
Canine Influenza reference page (for veterinarians)
H3N2 Frequently Asked Questions (Cornell University College of Veterinary Medicine) |
In the field of railroad locomotives, how the wheels of the locomotive are arranged by type, position and connections. There are several notations used to describe wheel arrangements. In the United States and Great Britain, the Whyte Notation is generally used for steam locomotives. In Europe, the UIC Classification scheme is generally used, especially for diesel and electric locomotives. British practice uses a slightly simplified form of that notation for diesels and electrics; the US uses an even more simplified version for such purposes.
Especially in steam days, wheel arrangement was an important attribute of a locomotive, because there were many different ones, each optimised for a different use. Modern diesel locomotives are much more uniform. |
1. MARKUP PRICING
Markup pricing is very common among manufacturers, wholesalers, retailers, and service providers, largely due to its simplicity. The markup pricing method calculates all the costs of purchasing or producing the product and then adds a desired markup to it.
To illustrate the markup pricing method, suppose you open a retail computer store and sell one brand of Pentium computer. Each computer costs you $1,150 plus $50 shipping & handling. Therefore, each computer costs $1,200 to buy and have shipped to your place of business. Using the markup pricing method, you would simply decide on a desired gross margin and markup the price accordingly. If you markup the price by 50% of cost, then you'll make $600 from the sale of each computer. That is: 50% x $1,200 = $600. Therefore, your selling price on each computer would be $1,800 ($1,200 cost + $600 markup = $1,800).
The above example deals with a markup based on cost; however, many businesses like to translate markups on cost into what is known as a markup based on selling price. To calculate a markup based on selling price, you simply divide the product's planned dollar markup by the product's planned selling price. For instance, the markup on selling price, using the above example, would be calculated like this:
Markup % on selling price = $600 ÷ $1,800 = 33.3%
Therefore, the markup on cost is 50% and the markup on selling price is 33.3%. Although the percentages are different, the profit made on each computer sold remains at $600.
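As a quick restatement of the arithmetic above, here is a small, hypothetical Python helper; the function names are my own, and the example figures simply repeat the computer-store numbers ($1,200 cost, 50% markup on cost).

    def markup_on_cost(cost, markup_pct_of_cost):
        """Return (selling price, dollar markup) for a markup stated as a % of cost."""
        markup = cost * markup_pct_of_cost / 100.0
        return cost + markup, markup

    def markup_pct_on_selling_price(cost, selling_price):
        """Return the markup expressed as a percentage of the selling price."""
        return (selling_price - cost) / selling_price * 100.0

    price, profit = markup_on_cost(1200, 50)         # price = 1800.0, profit = 600.0
    pct = markup_pct_on_selling_price(1200, price)   # about 33.3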
The markup pricing method used to determine your prices is simple to calculate; however, it ignores two important variables - your competitors' prices and consumer demand for the product. For these reasons, many critics feel the markup pricing method is not the "best" pricing approach. Yet many manufacturers, wholesalers, retailers, and service providers use this pricing method for a number of reasons.
SUMMARY OF MARKUP PRICING:
Markup pricing uses product costs and percentage markups to calculate the selling price of a product; and it ignores operating expenses (marketing & administration expenses). |
- from Greek, by way of Latin, philosophia, “love of wisdom” – love is an unconditioned feeling: it has no aim outside itself, it is very strong, and it cannot be ignored; philosophers are not the wise themselves, but lovers and seekers of wisdom
- the critical examination of the grounds for fundamental beliefs
- analysis of the basic concepts employed in the expression of such beliefs.
- philosophical inquiry is a central element in the intellectual history of many historical civilizations.
It would be difficult if not impossible to find two philosophers who would define philosophy in exactly the same way. Throughout its long and varied history in the West, philosophy has meant many different things. Some of these have been:
- a search for wisdom;
- an attempt to understand the universe as a whole;
- an examination of humankind’s moral responsibilities and social obligations;
- an effort to fathom the divine intentions and the place of human beings with reference to them;
- an effort to ground the enterprise of natural science;
- a rigorous examination of the origin, extent, and validity of human ideas;
- an exploration of the place of will or consciousness in the universe;
- an examination of the values of truth, goodness, and beauty;
- and an effort to codify the rules of human thought in order to promote rationality and the extension of clear thinking.
Even these do not exhaust the meanings that have been attached to the philosophical enterprise, but they give some idea of its extreme complexity and many-sidedness.
The same truth is expressed in different ways (science, religion, myth, art and philosophy).
Philosophy originates in doubt and in the sense of wonder.
Philosophical problems and disciplines
Philosophical problems have evolved over the centuries: from ancient Greek questions about the origin and nature of the cosmos, the validity of sensory impressions, and the possibility of obtaining certain knowledge; through perennial questions about beauty, art, science, politics, and values; to contemporary issues such as finding a new basis for common values and for social identification, the mind-body problem, freedom of the will in the era of highly developed science, distinguishing good from bad information, intellectual property, collective decision-making and collective rationality, what exactly a human person is when its every aspect can be manipulated at will, humans and the environment, and global justice.
Main philosophical disciplines are: ethics, logic, aesthetics (is beauty objective or subjective), philosophy of science, political philosophy, metaphysics, epistemology (nature and grounds of knowledge and its limits and validity) and the history of philosophy.
There are three main eras of Philosophy:
Ancient, Medieval and Modern
Ancient Greek and Roman philosophy can be divided into:
- The pre-Socratic period
- The seminal thinkers (influential, formative, ground-breaking, pioneering, original, innovative; major, important)
- Hellenistic and Roman philosophy
The pre-Socratic period can be divided into:
- Cosmology and the metaphysics of matter
- Epistemology of appearance
- Metaphysics of number
- Anthropology and relativism
Cosmology and the metaphysics of matter can be divided into:
- Monistic cosmologies (Thales, Anaximander, Anaximenes, Xenophanes, Parmenides and Heracleitus)
- Pluralistic cosmologies (Empedocles, Anaxagoras, Leucippus and Democritus) |
There are few words more powerful in the English language than “what if”. BioLite, a New York City company started by innovators Jonathan Cedar and Alec Drummond, is a powerful example of “what if” in action.
The two inventors met in New York City and found they share a love for sustainable design. Turning their attention to the limitations of camping stoves on the market, they thought, “What if we could design a camping stove that was not dependent on petroleum fuels or batteries to work? What if we could make a wood-burning stove that could utilize its own thermal energy efficiently?”
They thought it, and then they did it. On their own time after work and on weekends, the duo designed a stove that contains its own thermoelectric generator, which converts waste heat from the fire into usable electricity. The electrical energy then powers an internal battery and a fan. All these components together increase the stove's combustion and create a cleaner, almost smokeless fire.
Does it work? The BioLite Camp Stove can boil a liter of water in four and a half minutes and generates two watts of electricity to power lights and smart phones through a USB port. People in over 70 countries are using the stove with renewable biomass fuels like wood and brush to replace harder-to-acquire and harder-on-the-environment fossil fuels.
Originally, the concept behind the camp stove was a simple one. Cedar and Drummond were focused on making the life of the recreational camper easier and more environmentally friendly. Happy with their initial results, they took their prototype to the ETHOS combustion conference in Seattle. There, two things happened which would change the focus of their company forever.
First, their prototype took the top prize as the cleanest stove of the year, which afforded them some seed money for future endeavors.
More importantly, however, they discovered something that showed them a more worthy goal. Noting that most of the other entries at the conference were focused on developing clean burning stoves for the developing world, Cedar and Drummond began to realize the true potential of their invention.
According to the World Health Organization (WHO), over 3 billion people worldwide cook and heat their homes using either open flames or stoves that burn wood, coal, or animal and crop waste. Why is this a problem?
WHO estimates 4 million premature deaths occur each year due to exposure to smoky household emissions in developing lands. Releasing nearly 1 billion metric tons of carbon dioxide into the atmosphere each year, this method of heating and cooking exposes millions of people to diseases such as pneumonia, stroke, coronary artery disease, chronic obstructive pulmonary disease and lung cancer.
Once the inventors discovered the true magnitude of the problem, they found that they could not sit idly by and do nothing to address the issue. So, they quit their previous jobs and devoted themselves full-time to developing the BioLite home stove, a device mainly designed for use in developing countries.
The problem with manufacturing a product that would be targeted to developing lands is there is no guarantee of monetary profit in it. Providing the BioLite stove to economically disadvantaged populations would, of necessity, be a prohibitively costly venture.
So, once again using their innovative approach to problem-solving, BioLite’s management team decided to move in a non-traditional direction. They chose to partially re-focus on the recreational camper market to gain the confidence and support of investors.
Then, using the profit from their recreational camping products, they could fund the development of products for emerging markets. Once this fell into place, the BioLite team turned its attention to promoting the use of their products in developing lands.
Understanding the reluctance of people to accept foreign ideas without question, the BioLite team worked to build partnerships with local energy companies in emerging markets to help them educate the population and distribute their products where they are most needed.
Using local sales people, BioLite shows citizens of developing countries how the BioLite Home Stove will impact their lives for the better. BioLite estimates that using their product can save the average household in developing lands from $8 to $10 per week. This is a huge sum to economically strapped people, and represents a considerable monetary advantage. Add to that the health benefits of a clean burning stove, and it is easy to see that this is a perfect mating of ingenuity and practicality.
BioLite’s story well illustrates that there is much power in the imagination and ingenuity of the human spirit. Harnessing the potential of this power can have major impact on our world and the way we experience it. |
Barnacle Geese (Branta leucopsis), Margaretenkoog, Denmark © Peter Prokosch, Grid Arendal
Bonn, 12 October 2017 - Migratory species rely on a network of interlinked habitats throughout their journeys, including for feeding, resting and breeding. But their dependence on multiple sites makes them particularly vulnerable: When one or more of these habitats is fragmented by a road or dam, for example, or destroyed by human activity, such as agriculture or mining, it can impact on the species’ long- term survival.
The challenge for conservationists is to know where to intervene and what to prioritize in these complex networks, which often reach across multiple national boundaries and cover vast expanses of ocean, sky or land.
A resolution will be presented at CMS COP12, which aims to draw attention to the connectivity-related aspects of conservation strategies and the importance of cooperation and shared efforts across countries and continents to protect migratory animals.
The Convention on Migratory Species (CMS) is uniquely placed to foster multinational agreements, for example, to protect corridors along migratory routes linking key sites. Conservation interventions should consider the requirements of the animals concerned throughout their entire range and lifecycle, the proposal suggests.
Habitat destruction and fragmentation are primary threats to migratory species. The identification and conservation of habitats of appropriate quality, extent, distribution and connectivity are of paramount importance in both the terrestrial and marine environments, the draft resolution says.
The proposal, which consolidates past resolutions, stresses the need for habitat protection and international cooperation as well as for active local community support.
View here the full text of the proposal.
For interviews or to speak to an expert, please contact:
Florian Keil, Coordinator of the Joint Communications Team at the UNEP/CMS and UNEP/AEWA Secretariats
Tel: +49 (0) 228 8152451
Veronika Lenarz, Public Information, UNEP/CMS Secretariat
Tel: +49 (0) 228 8152409
Last updated on 12 October 2017 |
Read Lines 1024-end.
Common Core Objectives
Determine two or more themes or central ideas of a text and analyze their development over the course of the text, including how they interact and build on one another to produce a complex account; provide an objective summary of the text.
Analyze multiple interpretations of a story, drama, or poem (e.g., recorded or live production of a play or recorded novel or poetry), evaluating how each version interprets the source text. (Include at least one play by Shakespeare and one play by an American dramatist.)
Note that it is perfectly fine to expand any day’s work into two... |
Valley Fever: Timely Diagnosis, Early Assessment, and Proper Management
How to Recognize and Prevent Valley Fever (Coccidioidomycosis)
Valley fever is a non-communicable fungal disease caused by the Coccidioides species. The organisms live in the soil of semi-arid areas, such as the southwestern United States, regions of Mexico, and South America. When its spores are released into the air, they can cause lung infections, from slight to severe. If you've been to, or are going to, an area affected by Valley Fever, be sure you understand the prevention measures you can take to avoid contracting the disease and the symptoms that can help you diagnose an infection.
Recognizing the Symptoms
Watch out for flu-like symptoms. Mild infections of Valley Fever often go unrecognized because they manifest themselves much like other common and seasonal illnesses. However, if you have been in an endemic area, you should pay attention to any early symptoms in order to avoid contracting a more serious form of the disease.
- The earliest symptoms of Valley Fever include fever, headaches, a persistent cough, chest pain and shortness of breath, chills, night sweats, fatigue, muscle and joint aches, and red bumpy rashes, especially on the upper body or legs.
Be on the lookout for more severe infections. If your Valley Fever goes untreated, the symptoms can become more severe, and the infection can cause chronic pneumonia. If you have been experiencing a constant fever, persistent chest pains and coughing, and weight loss, you should go to the doctor immediately.
- Another telling symptom of a developing infection is coughing up mucus tinged with blood, which may indicate that you have nodules in your lungs.
Be wary of lung infections. In its most dangerous and advanced stages, Valley Fever can spread from the lungs to other parts of the body, including the skin, bones, liver, brain, heart, and nervous system. At this point, you should already be in contact with your doctor, who can help you navigate these more severe symptoms.
- In its most serious “disseminated” form, Valley Fever will lead to skin sores, lesions in the skull and spine, bone and joint infections, and meningitis--an infection that affects the fluid and membranes that protect the brain and spinal cord.
Assessing Your Risk
Find out if you've been in an endemic area. The fungus that causes Valley Fever can be found in the soils of the southwestern United States. It's also present in some regions of Mexico, Central America, and South America.
- In the U.S., affected states include Arizona, southern California, southern Nevada, New Mexico, western Texas, southwestern Utah, and south-central Washington. Most of the 10,000 annual cases are diagnosed in Arizona and California.
Assess your exposure to infected soils. You contract Valley Fever by inhaling microscopic fungal spores that are released into the air when soil is disturbed. If you are in an endemic area and have been exposed to dusty conditions caused by heat mixed with wind and/or manmade disturbances to the soil, you are at a greater risk of being infected.
- Construction work, agricultural labor, military field training, and archaeological exploration are examples of activities that can put you at risk of contracting Valley Fever.
Check if you're part of a high-risk group. Not everyone who is exposed to the Coccidioides fungus will contract Valley Fever. The fungal spores can cause infections in people of any age or race, but there are certain groups of people who are more prone to infections.
- Most cases of Valley Fever occur in adults who are over 60. So, elderly people are at greater risk of infection.
- Anyone who has a weakened immune system is at greater risk of contracting the disease and developing more severe forms of it. These people include those who have HIV/AIDS, diabetes, and other chronic illnesses; expectant mothers, especially in their third trimester; and people who have had an organ transplant.
- People of African and/or Filipino descent are more susceptible to Valley Fever.
Find out if you've had previous exposure to Valley Fever. The symptoms are often subtle or flu-like, which means that many people never even realize they've had it. However, if you have already had it, you will be immune to the disease for life.
- If you have been tested for Valley Fever previously, it will show up on your medical record. If you have not been tested, you can ask your doctor for a skin test to see if you test positive for Coccidioides. If you do but have never had Valley Fever, it is likely that you are immune to it. Keep in mind that 30-60% of people living in affected areas will test positive for Coccidioides, but only about 40% of the infected population will ever present symptoms.
Check for common diseases or outbreaks. If you are planning to travel, it might be a good idea to check up on the common diseases and outbreaks in the region you are visiting. Visit the CDC's website to determine if Valley Fever is something to worry about while you are traveling.
Avoid dusty areas in regions where the infection is indigenous. These include areas in the affected states which receive very little rainfall, particularly Arizona and California.
Avoid work and work areas where the soil is disturbed. Infections occur when people inhale spores that become airborne after disturbing contaminated soil. If you're in a high-risk area, stay away from work zones that involve construction, excavation, and agriculture.
- This also includes domestic labor. If you’re living in an endemic region, you should consult a doctor before doing significant yardwork, gardening, construction projects, or other sorts of digging in your yard or on your property.
- If you cannot avoid working in contaminated soil, go to the doctor immediately to get their prevention recommendations. It’s likely that they’ll encourage you to wear a special mask and/or take a preventative antifungal medication to reduce your risk of infection.
Implement an air filtration system. If you live in an affected area, consider keeping your windows closed and using an air filter to ensure that the dust and dirt outside your door doesn't invade your living space.
Stay inside during storms. Winds will kick up dust that contains the pesky fungal spores, so be sure that you find shelter that has closed windows.
Use an N95 respirator. Wear this or a miner's mask in areas that have recently suffered a natural disaster. Natural disasters, such as earthquakes and dust storms, can also disturb contaminated soil. This can cause the spores to become airborne. Use a respirator to avoid breathing in these spores.
- Normal paper masks or bandanas will not offer protection against Coccidioides since the spores are microscopic. In order to be effective, you need a respirator that will completely seal around your face and prevent particles 2-4 micrometers in size from passing through.
Clean any injuries thoroughly. Use soap and water to clean any wounds that may have been exposed to dirt or dust. This can help stop an infection from developing.
Treating the Disease
Take a sick day. For most Valley Fever infections, getting plenty of rest and drinking plenty of fluids will restore you to health. If you only have mild flu-like symptoms, a simple, at-home cure will usually suffice.
Go to the doctor. If you're worried that you may have Valley Fever, it's a good idea to make an appointment with a medical professional. They will be able to monitor the disease and ensure that your case doesn't worsen or advance into a disseminated form. Make sure to provide a thorough history of your travels and activities so your doctor can include a comprehensive list of possible infections and appropriate treatment/monitoring.
- Seeing the doctor will be beneficial to public health, helping researchers to track the scope and severity of the disease. It will also inform you as to whether or not you have Valley Fever and can expect to be immune to it in the future.
Get a prescription for antifungal medication. If your symptoms worsen or don't improve after a few days' bed rest, go to the doctor immediately. They can help address the infection by giving you a prescription for antifungal drugs that can attack the root of the disease.
- Because these drugs have unpleasant side effects like nausea, vomiting, and diarrhea, doctors will only generally prescribe them for serious or chronic cases.
Question: How long does Valley Fever last if untreated?
Expert answer (family medicine physician): Depending on overall health, it can last weeks to months as a mild cold, or develop into complications as mentioned above.
A brief, informative video concerning Valley Fever.
- Valley Fever is not contagious. It cannot be transmitted person-to-person or animal-to-human. It is safe to be around people who are infected with the disease. Contact with those who have it does not in any way increase your likelihood of contracting it.
- Animals, particularly dogs, are also susceptible to Valley Fever. If you have pets or livestock, take the same precautions with them as you would for yourself in order to prevent them from being infected. Speak with your veterinarian if you suspect your pet may have Valley Fever.
- Valley Fever is widespread in Arizona and the San Joaquin Valley area of California.
- Coccidioides grown in a laboratory culture can also cause infection if the culture is not properly handled.
Video: Arizona doctor sets up new guidelines to detect Valley Fever infection early | Cronkite News
Stonehenge is one of the most recognized formations of megalithic architecture and has a long history of speculation as to its builders, age, function, and changes over time. Located on the Salisbury Plain in Wiltshire, England, it is one of over nine hundred such circles of standing stones in Great Britain alone. Thousands more are evident in France (especially along the coast of Brittany), Spain, Denmark, Italy, and Malta.
Megalithic (mega = large, lithos = stone) architecture is generally characterized by three components: a tumulus; a large artificial mound or communal burial site; and a collection of large stones. It is one of the major characteristics of the western European Neolithic age and represents one of the earliest known stone architectures in the world. This architecture appeared from the fifth millennium B.C.E. as the agrarian way of life compelled a more sedentary existence. It can be termed simple when it involves a single standing stone, a menhir, placed in stark contrast with the surroundings; open but more complex if many standing stones are arranged in a circle, semi-circle, or aligned in parallel rows; and closed when the stones are used to construct a burial chamber and covered with a tumulus.
A henge is an English term denoting a usually circular, prehistoric structure and earthworks, which can consist of a ditch, mound, wood, and a series of standing stones, of which Stonehenge provides one of the most striking examples.
The construction of Stonehenge has been divided into roughly three periods. The first division, using radiocarbon dating methods, is calculated at ca. 3000 BC and was composed of a circular enclosure surrounded by two banks of earthworks, with the outer bank 380 feet in diameter, 8 feet wide, and about 3 feet high, and with a ditch between. The ditch varied in width between 10 to 20 feet and 5 to 7 feet in depth. Artifacts taken from the ditch confirm the date of construction. The inner bank was composed of a local chalk work about 6 feet high and 20 feet wide, with a diameter of 320 feet upon which were positioned a pair of portal stones. To the northeast was a 35-foot entrance or avenue with a standing stone set back a short distance. This naturally shaped standing or heel stone, weighing about 30 tons, is about 20 feet long by 8 feet wide by 7 feet thick, with the lower 4 feet buried in the ground. It is inclined inward toward the circle at an angle of 30º from the perpendicular but is thought to have been erect originally. The stone is composed of sarsen, a very hard sandstone with a source at Marlborough Downs some 20 miles to the north. Stonehenge is composed of a variety of local materials and denotes a familiarity with the geological composition, texture, durability, and color.
Toward the end of this period, 56 Aubrey or X holes were dug inside the circle and varied from 2.5 to 6 feet in width and 2 to 4 feet in depth, spaced rather evenly in a 288-foot diameter circle. These holes were named after their 17th-century discoverer, John Aubrey, who described the filled-in circles that bordered the embankment. He speculated that Stonehenge served as a temple and that the avenue was oriented toward the point on the horizon where the sun would rise on the summer solstice. The heel stone added further weight to the directional hypothesis as it is placed in a direct line of sight for an observer standing exactly in the middle of the circle.
Four points within the circle denoting four post holes form a remarkably accurate rectangle, which may have helped in setting out the site or, according to American astrophysicist Gerald Hawkins, may have established a position for observation of the sun and moon. But one of the certain functions of the first Stonehenge was an interment site for at least 55 cremation burials found at various locations, some dated soon after the original site was completed.
Stonehenge may have retained this shape and its structure for centuries, but eventually the site was altered. During a second phase, the main new work done was a planned but partially completed double circle of blocks of bluestone. For the most part, they are made of dolerite, blue sandstone, and limestone. The chief deposit for the dolerite is the Prescelly Mountains in Wales, some 150 miles away, although bluestone blocks were present in a neighboring area centuries before Stonehenge was built. The avenue was reconstructed and followed a winding route from a repositioned entrance down to the Avon river some two miles away.
Before the circles were finished, major social and political upheavals took place and the stones were dismantled and the double ring holes filled in. Alterations were begun during this third period of reconstruction. More than eighty massive sarsen blocks, some up to 50 tons, were brought in from Marlborough Downs near Avebury, 25 miles away, and Stonehenge began to take its present form.
How these stones were conveyed, the amount of human energy expended, the time span, and the engineering expertise required are all daunting achievements. On a circle with a diameter of 100 feet, thirty upright sarsen stones were positioned and capped with a continuous ring of lintels in the center of the site. Within this circle, five even more enormous trilithons, or formations, consisting of two uprights supporting a lintel, were positioned in a horseshoe formation. These are the stones that dominate the landscape today and are the prominent feature at Stonehenge.
When these stones were in position, the discarded bluestones from the second period were dressed and erected in an oval setting inside the sarsen horseshoe. Its precise form is uncertain as they were dismantled once again and rearranged in their present form in a circle between the sarsen circle and the horseshoe.
A final discovery in 1923, Y and Z filled-in holes, formed two concentric rings of pits dug about 20 feet apart around the outside of the circle of stones. These may have been intended for a new placement or replacement of stones that was never carried out.
The major impact of Stonehenge is two-fold. As an engineering feat, it speaks of a society possessing the manpower, social organization, and engineering skills necessary to complete such a task. As to its astronomical significance, there are some striking circumstances of location and structures. The particular latitude of Stonehenge allows for the direction of sunrise at the summer solstice and midsummer rising full moon at its extreme southerly standstill position to be at right angles through use of the rectangle formed by the four stations. Had its position been a few miles north or south, there would have been misalignment. The position also gives a clear view of the horizon for 360º for celestial sightings. The different heights of certain stones, such as the Heel Stone and Station Stone, enabled an observer at the center to use the tops of these stones to line up with the distant horizon, thereby facilitating observations and alignments.
Why do particles adhere to surfaces?
Particles are kept in place by adhesive forces that may be 10⁷ times stronger than gravity. Apart from electrostatic forces and their own weight, it is mainly the van der Waals forces that cause the particles to stay where they are. These forces arise from an irregular distribution of electrical charges within a molecule. As there is no excess or lack of charge, ionisation of the surrounding air cannot eliminate these forces.
The Ingromat® system makes use of capillary adhesive forces that arise between the micro-moistened filament and the particle. In the diagram, the capillary force and the total of the adhesive forces (van der Waals forces, electrostatic force, weight) are shown as a function of the particle diameter. Up to a particle diameter of approximately 2000 μm, the capillary force is larger than the total of the adhesive forces. It can therefore bind the particles to the filament and remove them effectively from the surface.
Diagram: Adhesive forces in relation to the particle diameter (calculated for plastic particles on metal surfaces) |
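The comparison in the diagram can be approximated with standard textbook formulas. The sketch below is only a back-of-the-envelope estimate under assumed values (a Hamaker constant of 1e-19 J, a contact separation of 0.4 nm, the surface tension of water, and a particle density of 1000 kg/m³, none of which are taken from the text above), so the exact crossover diameter it yields will differ from the figure in the Ingromat® diagram.

    import math

    def adhesion_forces(diameter_um,
                        hamaker=1e-19,     # Hamaker constant, J (assumed)
                        separation=4e-10,  # particle-surface separation, m (assumed)
                        gamma=0.072,       # surface tension of water, N/m
                        density=1000.0):   # particle density, kg/m^3 (assumed)
        """Rough estimates for a spherical particle resting on a flat surface."""
        r = diameter_um * 1e-6 / 2.0
        f_vdw = hamaker * r / (6.0 * separation ** 2)   # van der Waals, sphere-plane
        f_cap = 4.0 * math.pi * gamma * r               # capillary bridge, full wetting
        f_weight = (4.0 / 3.0) * math.pi * r ** 3 * density * 9.81
        return f_vdw, f_cap, f_weight

    for d in (1, 10, 100, 1000):
        vdw, cap, weight = adhesion_forces(d)
        print(f"{d:>5} um: vdW {vdw:.1e} N, capillary {cap:.1e} N, weight {weight:.1e} N")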
By STANISLAUS MEUNIER.
THE planet Mars has for a long time signalized itself to observers by the remarkable traits of its constitution. In consequence of its relative nearness, the telescope has been able to furnish us with a number of data respecting its physical geography and its meteorology; and it has been a very rich source of results concerning the philosophy of the solar system and the physical universe in general.
It is well known that Mars displays some bright spots, and others dark, of which we have every reason to consider the former to be continents, the latter seas. Toward the poles appear large white zones, varying in size at different times, which are caps of ice, susceptible of occasional breakings-up like our icebergs. In the thin and transparent atmosphere we can distinguish clouds, currents, and sometimes whirlwinds quite like the cyclones that rage among us.
Besides these intimate analogies with the earth, the study of Mars reveals especial features, some of which are most satisfactorily explained by considerations of comparative geology. With the tenuity of the atmosphere is associated a much smaller extension of the seas, and the relative repartition of land and water is very different from what prevails on the earth. Astronomers observe, as one of the most remarkable peculiarities of the surface of this planetary neighbor of ours, a large number of long and narrow passages and seas like bottle-necks. In our globe the oceans are of three times the surface of the continents; and Europe, Asia, and Africa form together a single island, while another island is formed by the union of the two Americas. But, on Mars, an almost complete equality exists between the surfaces occupied by the continents and by the seas. Further, they are mingled with one another in such a complicated manner that a traveler might visit nearly all the quarters of the planet, either by land or by boat, without having to leave the element on which he began his journey.
This much assumed, it should be recollected that Mars is older in the planetary series than the earth; that is, having been individualized at a more ancient period, and having a smaller volume, it has reached a more advanced stage in the sidereal evolution. Hence the planet represents now, in its great lines and independent of its individual characteristics, a condition which the earth will ultimately attain. One of the effects of the secular cooling of the earth is to determine the progressive absorption of the waters of the ocean by the successively consolidated rocky masses. Hence a striking comparison might be made between the present Martial seas and the terrestrial oceans after we shall have supposed they have been in a more or less great part absorbed. The results of innumerable soundings have permitted the |
Trees are natural structures for representing certain kinds of hierarchical data. A (rooted) tree consists of a set of nodes (or vertices) and a set of arcs (or edges). Each arc links a parent node to one of the parent's children. A special root node has no parent; every other node has exactly one parent. It is possible to reach any node by following a unique path of arcs from the root. If arcs are considered bidirectional, there is a unique path between any two nodes.
The simplest kind of tree is a binary tree, where each parent has at most two children.
A way to think of a binary tree is that it is either empty (emptyTree) or it is a fork which contains an element and two subtrees which are themselves binary trees.
(It is also common to use curried versions of constructors and functions, such as fork, especially in functional languages.)
Many operations can be defined on trees. For example:
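For instance, size counts the elements in a tree and height measures the longest root-to-leaf path. A minimal Python sketch of the tree type and of these two operations, following the emptyTree/fork description above (the class and function names are my own, not the original listing), might look like this:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Fork:
        element: object
        left: "Optional[Fork]" = None    # None plays the role of emptyTree
        right: "Optional[Fork]" = None

    Tree = Optional[Fork]

    def size(t: Tree) -> int:
        """Number of elements stored in the tree."""
        return 0 if t is None else 1 + size(t.left) + size(t.right)

    def height(t: Tree) -> int:
        """Longest root-to-leaf path; a single leaf has height 1."""
        return 0 if t is None else 1 + max(height(t.left), height(t.right))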
Note that these functions are binary-recursive as is the definition of the tree type. (Also note that a tree consisting of a single leaf is sometimes defined to be of height zero, rather than 1, as here.)
Trees have many uses in computing. For example a parse-tree can represent the structure of an expression:
Multiplication has a higher priority than addition and binds more tightly. This tree shows that a+b*c is interpreted as a+(b*c) rather than as (a+b)*c. Such trees are used in compilers and other programs.
Implementation of Binary Trees by Pointers and Records
A tree data type can be implemented as a collection of records and pointers.
The basic operations can create new records and manipulate pointers.
Not surprisingly this tree implementation is similar to the implementation of hierarchical lists.
An output routine for trees is a virtual necessity. A tree can be printed in a style like that used for lists but a graphical two-dimensional layout is more informative. It is difficult to print trees down the page, because they quickly grow too wide, but it is relatively easy to print them across the page so that the root is at the left and the tree grows to the right.
For an example of tree output, see later sections.
Example: Parse Trees
Parsing an input language according to a grammar is frequently necessary in computing. Here we consider a language of simple arithmetic expressions. A grammar written in Backus Naur form (BNF) is given below.
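The grammar itself is not reproduced here. Reconstructed from the description in the next paragraph, it is approximately as follows; the exact layout and the rule for <identifier> are reconstructions, not the original wording:

    <exp>        ::= <exp> + <term> | <term>
    <term>       ::= <term> * <operand> | <operand>
    <operand>    ::= <identifier> | ( <exp> )
    <identifier> ::= <letter> | <identifier> <letter>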
The grammar can be read as a definition of the syntactic structure of expressions, <exp>. The symbol `::=' can be read as `is' or `can be replaced by'. The symbol `|' can be read as `or'. The names in angle brackets, such as '<exp>', are variables for parts of the language. The string <exp>+1 is not a legal expression but it stands for many legal expressions such as x+1, y+1 and (x+y*z)+1. The first line of the grammar can be read as: an <exp> is an <exp> plus <term> or a <term>. An expression is either an expression plus a term or just a term. Note that the syntax rules are recursive. A little thought shows that an expression is a sequence of terms separated by plus signs. Similarly, a term is a sequence of operands separated by multiplication signs. An operand is either an identifier or a bracketed subexpression. An identifier is a sequence of letters.
A parser can be written using the recursive-descent technique. A procedure is written to recognise each class of object in the grammar - exp, term, operand. These routines are recursive, as is the grammar, and call each other to implement the grammar rules. For example, the routine for expression calls the routine for term. The repetitive nature of expressions and terms is coded by the use of loops. A bracketed subexpression is inherently recursive and so is the parser at that point. The complete parser is given below. It is moderately long but not complex, especially if read with the grammar in mind. A lexical routine, insymbol, skips white space and packages input letters into identifiers and other kinds of symbol. It is followed by the parser proper consisting of the routines Operand, Term and Exp.
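The complete parser is also not reproduced here. The following sketch in Python keeps the names insymbol, Operand, Term and Exp from the text (as methods insymbol, operand, term and exp); the tokenising details and error handling are assumptions made for the sketch, not the original code:

    class Node:
        def __init__(self, item, left=None, right=None):
            self.item, self.left, self.right = item, left, right

    class Parser:
        def __init__(self, text):
            self.text, self.pos, self.sym = text, 0, None
            self.insymbol()                       # read the first symbol

        def insymbol(self):
            """Lexical routine: skip white space and package letters into identifiers."""
            while self.pos < len(self.text) and self.text[self.pos].isspace():
                self.pos += 1
            if self.pos >= len(self.text):
                self.sym = None                   # end of input
            elif self.text[self.pos].isalpha():
                start = self.pos
                while self.pos < len(self.text) and self.text[self.pos].isalpha():
                    self.pos += 1
                self.sym = self.text[start:self.pos]
            else:
                self.sym = self.text[self.pos]    # a single-character symbol such as +, * or (
                self.pos += 1

        def operand(self):
            """<operand> ::= <identifier> | ( <exp> )"""
            if self.sym == '(':
                self.insymbol()
                tree = self.exp()                 # a bracketed subexpression is inherently recursive
                if self.sym != ')':
                    raise SyntaxError("')' expected")
                self.insymbol()
                return tree
            elif self.sym is not None and self.sym.isalpha():
                tree = Node(self.sym)
                self.insymbol()
                return tree
            else:
                raise SyntaxError("operand expected")

        def term(self):
            """<term> ::= <term> * <operand> | <operand>  (coded with a loop)"""
            tree = self.operand()
            while self.sym == '*':
                self.insymbol()
                tree = Node('*', tree, self.operand())
            return tree

        def exp(self):
            """<exp> ::= <exp> + <term> | <term>  (coded with a loop; note the mutual recursion)"""
            tree = self.term()
            while self.sym == '+':
                self.insymbol()
                tree = Node('+', tree, self.term())
            return tree

    # e.g. Parser('a+b*c').exp() builds the tree for a+(b*c)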
Note the use of mutual recursion. Various errors may be detected during parsing. If the expression is syntactically correct, the tree representing its structure is finally printed.
Recursive Tree Traversal
There are three classic ways of recursively traversing a tree or of visiting every one of its nodes once. In each of these, the left and right subtrees are visited recursively and the distinguishing feature is when the element in the root is visited or processed.
In a preorder or prefix traversal the root is visited first (pre) and then the left and right subtrees are traversed.
In an infix traversal, the left subtree is traversed and then the root is visited and finally the right subtree is traversed.
In a postorder or postfix traversal the left and right subtrees are traversed and then the root is visited afterwards (post).
This method can be used to generate postfix or reverse Polish code for a stack machine.
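Sketches of the three traversals, assuming the Node/None representation used earlier and some function visit applied to each element:

    def preorder(t, visit):
        if t is not None:
            visit(t.item)                 # root first (pre)
            preorder(t.left, visit)
            preorder(t.right, visit)

    def infix(t, visit):
        if t is not None:
            infix(t.left, visit)
            visit(t.item)                 # root in the middle
            infix(t.right, visit)

    def postorder(t, visit):
        if t is not None:
            postorder(t.left, visit)
            postorder(t.right, visit)
            visit(t.item)                 # root last (post)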
Applied to an example tree, the three traversals produce three different orderings of its elements. The results of yet another method, breadth-first traversal, can be compared with these; see the next section.
Note that the method given for printing a tree is a reversed infix traversal.
A breadth-first traversal of a tree starts at the root of the tree. It next visits the children, then the grand-children and so on.
The numbers indicate the order in which the nodes are visited, not the contents of the nodes. Because children are only accessible from a parent, they must be stored while the parent's siblings and cousins are visited. A queue is used to do this.
Note that the queue is a queue of pointers to nodes, or a queue of the tree type. Initially the queue contains just the root of the tree. At each iteration of the algorithm, the first element is removed from the queue. Its children, if any, are pushed onto the end of the queue and the element is processed. The algorithm terminates when the queue is empty. See also the chapter on queues.
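A sketch of breadth-first traversal in Python, using collections.deque as the queue and the Node representation used earlier:

    from collections import deque

    def breadth_first(t, visit):
        """Visit the root, then its children, then its grand-children, and so on."""
        q = deque()
        if t is not None:
            q.append(t)                   # initially the queue contains just the root
        while q:                          # terminates when the queue is empty
            node = q.popleft()            # remove the first element
            if node.left is not None:
                q.append(node.left)       # push the children, if any, onto the end
            if node.right is not None:
                q.append(node.right)
            visit(node.item)              # process the element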
Most routines on trees are recursive. This is natural because the tree is a recursive data type. It is possible to write iterative versions of these operations but it is harder to do so than is the case for flat lists because the tree type is binary recursive. The flat list and hence most of its operations are linear recursive and a linear recursive routine usually has a simple iterative version. It is often necessary to introduce an explicit stack into a program when writing a non-recursive tree routine. This is often not worth the effort as the language implementors can usually do a better job with the system stack.
The object-oriented programming language
has the notion of an
The tree operations described so far have no side-effects except for input, output and manipulation of dynamic storage; they are pure tree operations. As is the case with lists, it is often necessary or desirable to use operations having side-effects on efficiency grounds. This is particularly natural if a program uses a single tree as a dictionary or database structure. As before, should multiple trees share components, changing one tree may change another and if this is not anticipated it will cause program bugs.
A binary search tree can be created so that the elements in it satisfy an ordering property. This allows elements to be searched for quickly. All of the elements in the left subtree are less than the element at the root which is less than all of the elements in the right subtree and this property applies recursively to all the subtrees. The great advantage of this is that when searching for an element, a comparison with the root will either find the element or indicate which one subtree to search. The ordering is an invariant property of the search tree. All routines that operate on the tree can make use of it provided that they also keep it holding true.
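A sketch of the search routine, assuming keys can be compared with < and the Node representation used earlier:

    def search(t, key):
        """Return the node containing key, or None if key is not in the tree."""
        if t is None:
            return None
        if key < t.item:
            return search(t.left, key)    # everything smaller is in the left subtree
        elif key > t.item:
            return search(t.right, key)   # everything larger is in the right subtree
        else:
            return t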
It takes O(h) time to search a search tree of height h. Since a tree of height `h' can hold n = 2^h - 1 elements, the search takes O(log(n)) time under favourable circumstances. The search-tree and its search routine should be compared with the use of the binary search algorithm on sorted arrays in the chapter on tables.
The use of the tree speeds up the insertion and deletion operations at the price of the space needed to hold the pointers. The tree has the speed advantage when the data in the structure changes rapidly.
The routine given here to insert an element does so as a side-effect by changing the tree.
If elements are inserted in the order d, b, a, c, e, f, g the following tree is created:
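Neither the routine nor the diagram is reproduced here. A sketch of a simple (non-balancing) insertion and the shape it produces for this input; the recursive style and the Node class are choices made for the sketch:

    class Node:
        def __init__(self, item, left=None, right=None):
            self.item, self.left, self.right = item, left, right

    def insert(t, key):
        """Insert key into search tree t, returning the (possibly new) root."""
        if t is None:
            return Node(key)              # the new element becomes a peripheral, leaf node
        if key < t.item:
            t.left = insert(t.left, key)  # the existing tree is changed as a side-effect
        else:
            t.right = insert(t.right, key)
        return t

    t = None
    for key in ['d', 'b', 'a', 'c', 'e', 'f', 'g']:
        t = insert(t, key)

    # The resulting tree, d at the root:
    #        d
    #      /   \
    #     b     e
    #    / \     \
    #   a   c     f
    #              \
    #               g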
Note that an insertion takes O(h) time. Under favourable circumstances, a balanced tree is created, as for b, a, c, giving O(log(n)) search time. If the input is sorted however an unbalanced tree approximating a list is created, as for e, f, g, and the search degenerates to an O(n) linear search. This problem is addressed later.
A new element is added to the tree as a new peripheral, leaf node. However if an element can also be deleted it is possible for it to be internal. This makes deletion rather more difficult than insertion.
A leaf element is easily deleted by setting the pointer to it to emptyTree.
The node becomes garbage and can be freed.
An element with one child can be deleted by by-passing it.
An internal element x with two children cannot easily be bypassed without losing one of its subtrees. The solution is to overwrite x with some other element y of the tree and then to delete the original copy of y. There are two obvious elements to choose from - either the largest element `A' less than x or the smallest element `B' greater than x. Each of these has at most one child! The sortedness of the tree is maintained if x is overwritten with either of these.
Both of A and B fall into one of the cases previously dealt with. A can be found from x by taking one step left and then as many steps right as possible. B can be found by taking one step right and then as many steps left as possible.
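A sketch of deletion along these lines, assuming the Node representation used earlier and distinct keys:

    def delete(t, key):
        """Delete key from search tree t, returning the new root of the subtree."""
        if t is None:
            return None                       # key was not present
        if key < t.item:
            t.left = delete(t.left, key)
        elif key > t.item:
            t.right = delete(t.right, key)
        else:
            if t.left is None:                # a leaf or a node with one child: by-pass it
                return t.right
            if t.right is None:
                return t.left
            # Two children: find A, the largest element less than t.item
            # (one step left, then as many steps right as possible),
            # overwrite t with A, then delete the original copy of A.
            a = t.left
            while a.right is not None:
                a = a.right
            t.item = a.item
            t.left = delete(t.left, a.item)
        return t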
Deletion takes O(h) time.
As mentioned above, if elements are inserted into a search tree in sorted order, a tree is created that is equivalent to a list. This will lead to inefficient searches. A height-balanced tree or an AVL-tree (after G. M. Adel'son-Velskii and E. M. Landis) is a search tree in which the height of the right subtree minus the height of the left subtree equals 1, 0, or -1. This property also applies recursively to all subtrees. It can be shown that a height-balanced tree of `n' elements has height O(log(n)) and this guarantees efficient search. Fortunately fast O(log(n)) insertion and deletion is still possible.
A flag indicating the balance of each subtree is added to the node record.
There are four crucial cases during insertion. In the first case, the left subtree L grows too tall on its left:
By rotating about L and T the height balance is restored and the ordering in the tree is maintained. In the second case, L grows too tall on its right:
The rotation restores the balance while maintaining the tree ordering. In the above example left(LR) grew; an alternative is that right(LR) grew but the same operations still restore the balance. There are mirror-image right-2 and right-3 rotations.
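The diagrams for these cases are not reproduced here. As a sketch, the two rotations for a left subtree that has grown too tall, written for the Node representation used earlier (t is the subtree root T, t.left is L; the balance flags and the surrounding insertion routine are omitted):

    def single_left_rotation(t):
        """First case: L grew too tall on its left. Rotate about L and T; L becomes the new root."""
        l = t.left
        t.left = l.right
        l.right = t
        return l

    def double_left_rotation(t):
        """Second case: L grew too tall on its right (subtree LR). LR becomes the new root."""
        l = t.left
        lr = l.right
        l.right = lr.left
        t.left = lr.right
        lr.left = l
        lr.right = t
        return lr

    # There are mirror-image rotations for a right subtree that has grown too tall.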
Maintaining the balance significantly complicates the tree insertion routine. However a fixed amount of work is done at each node that is visited and so it takes O(h) time.
Note that if a tree has just grown in height, it can only be perfectly balanced if it is a single (new) leaf. Otherwise one subtree must be higher than the other.
Compare this with the tree given earlier and created by the non-balancing insert.
Implementation of Binary Trees by Arrays
A binary tree can be implemented as an array of records.
The empty tree is represented by zero. Left and right are indexes to left and right subtrees.
This implementation has no advantage in a language supporting dynamic storage unless random access to nodes is also needed. It is useful in a language without dynamic storage.
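A sketch of the array-of-records implementation in Python, with index 0 standing for the empty tree; the use of a list of dictionaries and the helper names are choices made for the sketch:

    EMPTY = 0
    nodes = [None]                            # slot 0 is unused so that 0 can mean "empty tree"

    def new_node(item, left=EMPTY, right=EMPTY):
        """Create a new record and return its index."""
        nodes.append({'item': item, 'left': left, 'right': right})
        return len(nodes) - 1

    def count(i):
        """Number of elements in the tree rooted at index i."""
        if i == EMPTY:
            return 0
        return 1 + count(nodes[i]['left']) + count(nodes[i]['right'])

    # e.g. a small tree: '+' at the root with leaves 'a' and 'b'
    root = new_node('+', new_node('a'), new_node('b'))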
Full Trees by Arrays
It is possible to define a full or weight balanced tree in contrast to a height balanced tree. The empty tree is a full tree of zero levels. If T and T' are full binary trees of n levels then fork(e,T,T') is a full binary tree of n+1 levels.
The numbering of the nodes corresponds to a breadth-first traversal. This suggests that such a tree can be stored in an array:
Such an implementation is very efficient indeed, if the tree is full, because no space at all is used for pointers.
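The index arithmetic for a full tree stored in an array, numbering the root 1 as in a breadth-first traversal (an equally common convention numbers from 0; this sketch uses 1):

    # A full binary tree of three levels:
    #        d
    #      /   \
    #     b     f
    #    / \   / \
    #   a   c e   g
    a = [None, 'd', 'b', 'f', 'a', 'c', 'e', 'g']   # slot 0 unused; the root is a[1]

    def left(i):   return 2 * i                     # index of the left child
    def right(i):  return 2 * i + 1                 # index of the right child
    def parent(i): return i // 2                    # index of the parent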
See the chapter on sorting for an application of this technique in heapsort.
Testing and Debugging
Programming techniques for trees share a great deal in common with those for lists. Pre and post conditions and assertions should be included where possible. Minimise the use of side-effects in general and the manipulation of global variables in particular.
Note that there are really just two kinds of tree - emptyTree and non-emptyTree. Most operations on a tree follow this pattern:
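As a concrete instance of the pattern (assuming the Node representation used earlier), a routine that builds the mirror image of a tree:

    def mirror(t):
        if t is None:                         # the empty case: very simple, here a constant
            return None
        else:                                 # the main case: operate on both subtrees recursively
            return Node(t.item, mirror(t.right), mirror(t.left))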
If a case is missing it probably indicates an error. The empty case is usually very simple, often returning a constant. The main case often operates on one or both subtrees recursively.
When testing or debugging a tree routine, test cases should cover the above options. For non-emptyTree trees, it may be appropriate to try a tree of a "few" nodes and a tree of a single node. As usual it is invaluable to write the output routine first. The most common problems are:
- unassigned pointers, particularly those that should be emptyTree,
- lost pointer values through a wrong ordering of operations,
- knotted pointers maybe introducing a cycle,
- side-effects through shared pointers, intentional or otherwise.
The coding of reasonably complex routines such as those on AVL trees is prone to small typographical errors. Nothing can beat careful proof reading but one good testing trick is to use a set of ascending data and then its reversal in which case the mirror image tree should be produced. This test is not sufficient on its own! It is important to exercise all the paths through a complex routine and this requires a great deal of thought. |
The Moon Children - Teachers’ Guide
The Moon Children
A Novel Study prepared by the author:
Saskatoon-based educator Beverley A. Brenna
This novel study is geared for grades six/seven students but could be adapted for other ages. The intent of the book, in addition to telling Billy and Natasha’s stories, is to increase awareness about Fetal Alcohol Spectrum Disorder in our communities, assisting with supports for people affected with FASD as well as prevention of a syndrome which implies permanent brain damage and is potentially very challenging for those affected.
The activities in this study are designed on Charlotte Huck’s categories of webbing. They could be provided to students on a contractual basis, and relate to an independent reading study, or they could be directed activities as a class or group of students progress together through the novel. Websites listed should be checked by the teacher prior to student use.
- Try out the classroom yo-yos provided for student use. See what tricks you can learn. (This activity could work as a learning centre; extension ideas: research the history of yo-yo development using internet websites.)
- Have you ever wanted to do something that you found difficult? What was your response—to keep trying? To give up? Explore this topic as your first journal entry, and continue the journal as a personal response notebook as you read the novel.
- Brainstorm a list of natural disasters. Time yourself for one minute, and compare your ideas with a friend. Who had the greatest number of ideas (fluency)? Who had ideas the other didn’t have (originality)? Who had the greatest number of categories of ideas (flexibility)? Compose a companion list of words associated with these disasters (i.e. Volcanoes: hot, steaming, bursting; Earthquakes: rumbling, sudden, dangerous). This is one activity the author did to prepare herself for describing Billy’s emotional state, as he often used the idea that a natural disaster was bubbling inside him, ready to erupt or overflow.
- Develop a classroom list of summer fun activities. Number the ones you’d rate as your favorite three. This activity could lend itself to math (bar graphs) if appropriate.
- Find Romania on the world map. Discuss the fall of the communist dictatorship in 1989 and the orphanages which were overcrowded because of the previous government ruling that mothers had to have five children before they were age 45 (leaving many families without the money to raise their kids). At that time, many children were adopted to North America.
- An extension activity is the moon journal assignment, modeling Natasha’s records of the phases of the moon and her observations of life on her street during the time she’s watching the moon.
- What difficulties would you have if you were not able to read? How could people in a community help someone who couldn’t read?
- What are some words to describe Billy’s feelings about entering the talent contest? Have you ever felt this way about something? Discuss with a partner.
- Think about friends you have encouraged the way Billy and Natasha encourage each other. How has someone encouraged you?
- Is there a character in the novel who reminds you of yourself or someone you know from real life? In what ways?
- Does this book remind you of any other books or poems you have read? Which ones?
- Does the setting of the book sound familiar? Have you ever been to North Battleford? In what ways does it sound similar/different to your community?
Observation and Understanding
- Describe Billy’s response to taking the money from his mother’s purse. Did he know taking the money was wrong, or was he afraid of the possibility of his father being sent away because his mom thought his dad took the money? Sometimes people with brain damage have a hard time understanding certain concepts like “stealing” which is a very abstract idea. For example, when would it be okay to take money from someone’s purse? (i.e. If the purse belongs to you; if someone told you to take the money; if you had to borrow it in an emergency and would tell the person). So you see, a hard-and-fast rule doesn’t apply here, such as “Never take money from a purse.” Because there are exceptions to the “rule”, it makes this concept harder to understand.
- Often we learn appropriate behavior because of “cause and effect” sequences of events. For example, if we take something that doesn’t belong to us, someone will tell us not to do that, or give us a consequence, such as a time-out or a warning, and we will learn from this consequence not to do it again. Sometimes people with brain damage have difficulty remembering or predicting “cause and effect” sequences.
- For example, someone with brain damage might take a pot off a burner on the stove, and then burn his or her hand on the burner because it’s still hot. The next time the person takes a pot off the stove, the person might not remember about the possibility of the burner being hot and so isn’t able to predict that it might be hot again.
- Some people with extensive brain damage need constant reminders to prevent unsafe or inappropriate behavior. For example, they might need a sign on the stove telling them not to touch the burners because they might be hot.
- Sometimes people with brain damage need the environment to be structured to prevent the possibility of trouble, such as limited or supervised access to a stove, in the example above.
- In Billy’s case, what could his mother do to remind him not to take money from her purse, or to prevent him from doing this in the future?
- How did Billy’s challenges occur? Try the experiment below, along with a class discussion:
Break a raw egg (without breaking the yolk) into a bowl.
Add a 1 ounce shot glass of alcohol.
With a swizzle stick, gently stir some of the alcohol into the egg white.
Watch the effects on the egg white.
White streaks will form in the clear portion. Alcohol literally cooks the cells.
Biological specimens are preserved in alcohol. Why? The alcohol kills anything it contacts, so it prevents rotting (bacterial and enzyme degradation) of the specimen.
- There are studies which show how exercise improves memory. Billy seems more able to remember things when they have a physical component, such as the yo-yo tricks. In addition, he is able to remember the words to songs while he is doing his yo-yo routine. Read the following newspaper article and discuss what it means.
Getting Physical Aids the Memory
If you want to improve your memory and ability to learn, get off the couch and get going. Brian Christie, an assistant professor of psychology at the University of British Columbia, says exercise can promote the generation of new neurons in the adult brain and lengthen the dendrites that aid communication between neurons. Both those things are good for memory, he says. Following up on earlier research that found a connection between voluntary exercise and brain enhancement, Christie did studies in which rats were allowed daily access to an exercise wheel. He found that the brains of animals that exercised showed substantially improved neurogenesis and synaptic plasticity.
In a second set of experiments, the researchers looked at animals that had been prenatally exposed to alcohol, generating a condition similar to fetal alcohol spectrum disorder. As adults, these animals showed impairment in learning and memory. When these animals were provided with exercise wheels to see what, if any, effect exercise would have, the results were striking. The rats exposed to alcohol showed clear improvements in both spatial memory and learning ability, so much so that their brains looked virtually identical to those of rats that had never been exposed to alcohol.
Adapted from a newspaper article
- Character Study
- Brainstorm a list of qualities you notice about Billy. Brainstorm a list of qualities you notice about Natasha. Why do you think they are friends? Do friends have to have the same characteristics? How can people with different characteristics learn to get along with each other?
- What is an unusual thing about Mrs. Schmidt? Do you know any other characters in books who are based on a fascination with one particular interest? Why do you think an author might create a character this way?
- Describe Billy’s mother. For any value judgments, list the evidence from the book which proves what you say.
- What do you know about Billy’s father? Make a list of information from the book. What do you think might happen with this character if the book had a sequel?
- Language Study
- Make a list of the unique sayings from the book (for example: Hope springs eternal). Choose one and work it into a poem, so you show the meaning of the saying by your use of examples.
- Billy’s mom uses the word “gonna” instead of “going to.” Provide a definition of “slang” and indicate if what Billy’s mom is using here is slang. Why might an author choose to have a character use slang rather than “dictionary words”?
- Moon Motif
- A motif is a recurring single element in a work of art. List the ways the moon appears in the story as a motif.
- In the beginning of the story, the moon draws Billy and Natasha together. What other things do they have in common?
- Billy retells the story of How Raven Freed the Moon which he heard from his grade five teacher. Why do you think Billy remembers some things, but not others? In the myth, the idea is that the moon became too heavy for Raven to carry. What secrets in The Moon Children become too heavy for their bearers to carry? What advice does Billy give to Natasha about heavy secrets?
- What meaning does the moon have for Natasha? Why does seeing a full moon make her remember? Why do you think she isn’t able to talk about this with her adoptive parents?
- What might have happened if the community playground group had been aware of Billy’s challenges?
- What might have happened if the business community in town had previous experience with Billy through a school-and-community work project?
- How might things have been different if Natasha’s mother or father had tried to get to know Billy right from the start?
- Choose one of the scenarios above and rewrite one of the scenes in the book to fit this new possibility.
- Do you think Billy’s mother is trying to be a good mother? Explain.
- What kind of help might Billy’s father need in order to come back into the family? Discuss as a class.
- Why do you think Eddie makes trouble for Billy? Do you think Eddie is a good or bad character? What might his reasons be for acting this way?
- Did Billy mean to hurt Eddie? If he did, was he right in doing so?
- Why didn’t Billy mistrust Eddie right from the beginning? Does this mean Billy is stupid? Is anyone really stupid?
- Do you think the ending of the novel is good? Why? Why not? Why do you think the author chose not to have Billy win the contest after all?
- Make a model of the interior of Billy’s apartment. You may use a diorama or a drawing to present information from the book.
- Create the poster that might have been used to advertise the community talent contest.
- Choose a scene from the book that you would like to illustrate. Think carefully about what materials you would like to use, and discuss your choices with the teacher before you begin.
- Billy is very careful with the moths on his window ledge. Make a moth (or collection of moths) out of tissue paper or some other medium, that you think Billy would appreciate. Why do you think Billy took such good care of the moths? What does this tell you about Billy himself?
- Use collage or another technique to depict the show-room of the car dealership managed by Mr. Arnold.
- Write new journal entries for Natasha for the week following her release from the hospital, or for any other seven-day period in her life.
- Billy has had some embarrassing experiences that someday he might find funny. Write about an embarrassing experience you have had. Was it funny to you at the time? Is it funny to you, now?
- Describe how Mr. Schmidt might have gotten his nickname Pork Chop. Be as creative as you can. You may use the voice of a storyteller (One day Mr. Schmidt was ordering meat for the family’s freezer. Instead of ordering six pork chops, he accidentally…) or you may tell the story through dialogue (having two characters talking to each other:
“What do you have on special that’s good tonight?” Mr. Schmidt asked the waiter…).
- Write a letter to the author telling what you thought of the novel and why.
- Show the dialogue between Natasha and Billy as they walk to the car dealership.
- Present a soliloquy (a character sharing his thoughts aloud) from Billy’s perspective as he enters the hospital to visit Natasha.
- Develop the conversation Mr. and Mrs. Schmidt might have had the night after Billy’s mom and dad have the fight where Zak pushes the chair over.
- Show what Eddie’s teacher might have said if she caught him passing a nasty note about Billy.
- Show two scenes by Billy’s classmates: Scene A, where they talk about him behind his back, making fun of his disabilities; Scene B, where they talk about him supportively, wondering how they can help with his challenges.
- Make Dutch Pancakes
- Make Dutch Potato Pancakes:
Pare and dice enough white potatoes to make 2 cups.
In electric blender place 2 eggs, 1/2 teaspoon salt and 1 cup potatoes.
Cover and blend at high speed 5 seconds.
Add 1/4 cup unsifted regular flour and remaining potatoes.
Cover and blend until potatoes are just grated (3 seconds).
Fry in butter, as you would any kind of pancakes.
Makes 8 pancakes.
- Listen to the Elvis rendition of “Blue Suede Shoes” which becomes Billy’s theme song in the book; what songs might represent other characters in The Moon Children?
- Why might the author have chosen “Blue Suede Shoes” to include in the story?
- Research Carl Perkins, the author of the song “Blue Suede Shoes” which Elvis recorded.
- Carl Perkins, from Jackson, Tennessee, was the founder of the Exchange Club: The Carl Perkins Center for the Prevention of Child Abuse
- Research Elvis, the recording artist who brought “Blue Suede Shoes” to fame; why might the author have chosen his work to spotlight in this novel?
But Michael Makes Me Laugh by Lori Stetina
- A picture book about a five-year-old with FAS
Joey Pigza Swallowed the Key by Jack Gantos
- A junior novel about a boy who has ADHD issues along with other challenges; Joey is not diagnosed as having FASD but the issue of his mother’s current drinking, and possibly during pregnancy, is addressed.
How Raven Freed the Moon by Ann Cameron
- A retelling of the north coast explanatory myth about how the moon got into the sky (**this is a story which Billy tells in The Moon Children, illustrating how some secrets are too heavy to keep).
The Pinballs by Betsy Byars
- This novel would make a good comparison study as it also has characters who feel powerless in the situations in which they find themselves.
Rules by Cynthia Lord
- This novel deals with respect for people with special needs. |
Bullying - When Special Needs Students Are the Victims or Instigators
By Arizona Education Attorneys
What is Bullying? Bullying is unwanted, aggressive behavior among school-aged children that involves a real or perceived power imbalance. The behavior is repeated, or has the potential to be repeated, over time. Both students who are bullied and students who bully others may have serious, lasting problems. In order to be considered bullying, the behavior must be aggressive and include both an imbalance of power and repetition. Students who bully use their power—such as physical strength, access to embarrassing information, or popularity—to control or harm others. Power imbalances can change over time and in different situations, even if they involve the same people. Bullying behaviors happen more than once or have the potential to happen more than once. Bullying includes actions such as making threats, spreading rumors, attacking someone physically or verbally, and excluding someone from a group on purpose.
Types of Bullying: There are three types of bullying. One type is verbal bullying. Verbal bullying is saying or writing mean things. Verbal bullying includes teasing, name-calling, inappropriate sexual comments, taunting, and threatening to cause harm. A second type of bullying is social bullying. Social bullying is sometimes referred to as relational bullying. It involves hurting someone’s reputation or relationships. Social bullying includes intentionally excluding someone, telling other children not to be friends with someone, spreading rumors about someone and embarrassing someone in public. The third type of bullying is physical bullying. Physical bullying involves hurting a person’s body or possessions. Physical bullying includes hitting, kicking, pinching, spitting, tripping, pushing, taking or breaking someone’s things, and making mean or rude hand gestures.
Where and when does bullying take place? Bullying can occur during or after school hours. While most reported bullying happens in the school building, a significant percentage also happens in places like the playground or the school bus. It can also happen travelling to or from school, in the youth’s neighborhood, or on the internet (cyberbullying).
What is Cyberbullying? Cyberbullying is bullying that takes place using electronic technology. Electronic technology includes devices and equipment such as cell phones, computers, and tablets as well as communication tools including social media sites, text messages, chat, and websites. Examples of cyberbullying include mean text messages or emails, rumors sent by email or posted on social networking sites, and embarrassing pictures, videos, websites, or fake profiles. Students who are being cyberbullied are often bullied in person as well. Additionally, students who are cyberbullied have a harder time getting away from the behavior. Cyberbullying can happen 24 hours a day, 7 days a week, and reach a student even when he or she is alone. It can happen any time of the day or night. Cyberbullying messages and images can be posted anonymously and distributed quickly to a very wide audience. It can be difficult and sometimes impossible to trace the source. Deleting inappropriate or harassing messages, texts, and pictures is extremely difficult after they have been posted or sent. Whether done in person or through technology, the effects of bullying are similar.
Bullying of students with disabilities can amount to a denial of a FAPE (free appropriate public education), as it creates a hostile learning environment that may interfere with the student’s ability to access the curriculum. For instance, they may not want to go to school, or they may be distracted by thoughts of the bully.
The U.S. Department of Education policy guidance states that “disability harassment that adversely affects an elementary or secondary student's education may also amount to a denial of FAPE under the IDEA. Harassment of a student based on disability may decrease the student's ability to benefit from his or her education and amount to a denial of FAPE.” Dear Colleague Letter regarding Disability Harassment, 7/25/2000.
There are a growing number of cases and court decisions concerning bullying and finding that peer-on-peer bullying – and bullying by the student’s teacher – can result in a denial of FAPE in violation of the IDEA and/or 504:
· Bullying that is severe enough to alter the condition of student's education and create an abusive educational environment, coupled with the knowledge and deliberate indifference by school officials, is one way a student may establish a violation of the Rehabilitation Act. D.A. v. Meridian Joint School Dist. No. 2, --- F.R.D. ----, 2013 WL 588761 (D.Idaho, 2013).
· A teacher’s deliberate indifference to the abuse and teasing of a student with a disability could result in the denial of a FAPE under the IDEA. M.L. v. Fed. Way Sch. Dist., 394 F.3d 634, 650 (9th Cir.2005).
· A student with emotional disabilities was denied FAPE based on the likelihood that a proposed placement would subject the student to continued bullying because of his perceived effeminacy. Shore Regional High School Board of Education v. P.S., 381 F.3d 194 (3d Cir., 2004). The placement of the student at the local high school was inappropriate because the school would not be able to prevent or stop the continued bullying. The student had previously been subjected to relentless physical and verbal harassment as well as social isolation because he was "girlish." Because the placement would expose him to further bullying and harassment, the placement would in effect deny the student FAPE.
When the special needs student is the perpetrator, or bully, the student should be referred for a Functional Behavior Assessment (“FBA”) from which a Behavior Intervention Plan (“BIP”) can be developed and implemented.
Best practices for schools:
· Develop and publicize comprehensive policies regarding bullying, harassing, and hazing;
· Explain to students exactly what they should do if they are bullied or witness bullying;
· Have a formal reporting procedure in place;
· Inform students that they will not be punished for reporting bullying in good faith.
· Consider having an online system where students can report bullying. Many students become nervous, scared, or shy, and having a more informal system for students to report bullying may encourage them to do so;
· Respond promptly to all incidents and reports;
· Consider the totality of circumstances presented when determining whether the conduct objectively constitutes harassment or bullying;
· Enforce the policy consistently;
· Be sure that all students receive and review a copy of the student handbook containing a copy of the anti-bullying/harassment/hazing policies;
· Make it clear, particularly to athletes and upperclassmen, that hazing will not be tolerated and that students will face serious discipline, including criminal charges, for any violations;
· Be aware of the potential infringement of students’ First Amendment rights.
· Even when disciplinary action is not allowed, take steps to inform and engage parents in the anti-bullying efforts. |
Several Principles Of Ultrasonic Welding
1. Principle of ultrasonic welding
Ultrasonic welding transmits high-frequency vibration waves to the surfaces of two objects to be welded, so that the surfaces rub against each other and fuse at the molecular level.
The main components of the ultrasonic welding system include: ultrasonic generator / transducer / horn / welding head triplet / mold and frame.
Ultrasonic welding converts 50/60 Hz mains current into 15, 20, 30 or 40 kHz electrical energy through an ultrasonic generator. The transducer converts this high-frequency electrical energy into mechanical motion of the same frequency, which is passed through an amplitude-adjusting horn assembly to the welding head. The welding head transmits the received vibration energy to the joint of the workpieces to be welded; in this area, the vibration energy is converted into heat by friction and melts the area of the part to be welded.
Ultrasonics can be used not only to weld metals and hard thermoplastics, but also to process fabrics and films.
2. Principle of ultrasonic plastic welding
When ultrasonic waves act on the contact surface of a thermoplastic, tens of thousands of high-frequency vibrations are generated per second. This high-frequency vibration, once it reaches a certain amplitude, transmits ultrasonic energy to the welding area through the upper weldment. Because the weld area is the interface between the two weldments, localized high temperatures are generated there. Plastic conducts heat poorly, so the heat cannot be dissipated in time; it accumulates in the welding area and causes the contact surfaces of the two plastics to melt rapidly. Under a certain pressure, the two parts fuse into one. When the ultrasonic vibration stops, the pressure is maintained for a few seconds so that the joint solidifies and forms strong molecular chains, achieving the purpose of welding; the welding strength can approach that of the raw material.
Poetry is often thought of as a medium to express emotions such as love, happiness, and sadness. However, anger is another powerful emotion that can be expressed through poetry. Ghusa poetry is a form of poetry that explores the power of anger and uses it to create thought-provoking and impactful pieces of literature.
What is Ghusa Poetry?
Ghusa poetry, also known as anger poetry, is a genre of poetry that expresses anger and frustration through words. It is a form of poetry that is raw, unfiltered, and unapologetic. The term “ghusa” is an Urdu word that means anger or rage.
Ghusa poetry is often used as a tool to criticize social, political, and economic issues. It is a way for poets to express their frustrations with the world and to call for change. Ghusa poetry is also used to express personal emotions and experiences that are rooted in anger.
The Origins of Ghusa Poetry
Ghusa poetry has its roots in the ancient art of poetry in Arabic and Persian literature. The poets of the time used poetry as a means to express their emotions, and anger was one of the emotions that they explored. However, it was during the 19th and 20th centuries that ghusa poetry emerged as a distinct genre in Urdu literature.
During this time, poets such as Mirza Ghalib and Allama Iqbal used anger as a means to challenge the status quo and to call for social and political change. Their poetry was bold and unapologetic, and it paved the way for a new generation of poets who used anger as a tool to express their frustrations with the world.
Ghazal poetry is a form of classical Persian poetry that originated in the Arabic-speaking world. The word “ghazal” comes from the Arabic word “ghazala,” which means to converse with women. The ghazal is a poetic form that consists of rhyming couplets and a refrain, with each line having the same meter.
The ghazal typically expresses love, longing, or separation, and often includes themes of spirituality and mysticism. The poet uses the form to convey emotions and ideas, often through the use of metaphors and symbolism.
Ghazal poetry has been widely adopted across many cultures, including in South Asia, where it has become an important form of Urdu poetry. In fact, the term “ghazal” is often used to refer specifically to Urdu ghazals.
In Urdu, ghazal poetry is a popular form of expression and has been used by poets like Mirza Ghalib, Faiz Ahmed Faiz, and Allama Iqbal. Ghazal poetry has also been used in Bollywood films, where it is often set to music and sung by popular singers.
Overall, ghazal poetry is a rich and complex form of expression that has been embraced by poets and audiences around the world for centuries.
For more on similar topics, you can read this article: 2 line Urdu poetry romantic SMS
The Characteristics of Ghusa Poetry
Ghusa poetry is characterized by its rawness and its unfiltered expression of anger. It is often marked by a sense of urgency and a call to action. Ghusa poetry is also known for its use of metaphors and symbolism to convey its message.
In addition, ghusa poetry often employs a rhythm and flow that is different from traditional poetry. It is often more rhythmic and fast-paced, with an emphasis on the sound of the words as well as their meaning.
Examples of Ghusa Poetry
Here are some examples of ghusa poetry:
- “Hum dekhenge” by Faiz Ahmed Faiz is a poem that expresses anger at the injustice and oppression faced by the people. The poem calls for a revolution and a change in the status quo.
- “Aurat” by Fahmida Riaz is a poem that expresses anger at the treatment of women in society. The poem challenges the patriarchal norms that restrict women’s freedom and agency.
- “Tum bilkul hum jaise nikle” by Gulzar is a poem that expresses anger at the hypocrisy of the powerful. The poem exposes the double standards and corruption of those in power.
The Impact of Ghusa Poetry
Ghusa poetry has had a significant impact on Urdu literature and has played a role in shaping the social and political discourse of the time. It has been used as a tool to challenge the status quo and to call for change.
In addition, ghusa poetry has inspired a new generation of poets who use anger as a means to express their frustrations with the world. It has opened up new avenues of expression and has given voice to those who have been marginalized and oppressed.
Ghusa poetry is a powerful form of poetry that explores the emotion of anger. It is a raw and unfiltered expression of frustration and a call to action. Ghusa poetry has had a significant impact on Urdu literature and has inspired a new generation of poets to use their words to call for change.
Voice is how we naturally communicate and can often be more appropriate than SMS text, email, chat or other forms of written or visual communication.
People talk to, and listen to, other people. It's how we naturally communicate with each other. With text-to-speech you can synthesise human speech and make interaction with an automated system more natural. Bringing more natural interactions to scalable and cost-effective automated systems delivers positive customer experiences and drives adoption.
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.
Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.
Text-to-Speech (TTS) refers to the ability of computers to read text aloud. A TTS Engine converts written text to a phonemic representation, then converts the phonemic representation to waveforms that can be output as sound. TTS engines with different languages, dialects and specialized vocabularies are available through third-party publishers.
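As a minimal illustration only (not tied to any vendor or service mentioned here), the open-source pyttsx3 library drives a locally installed TTS engine from Python; which engine and voices are available depends on the platform:

    import pyttsx3                            # assumes: pip install pyttsx3

    engine = pyttsx3.init()                   # selects a platform TTS engine
    engine.setProperty('rate', 150)           # speaking rate in words per minute
    engine.say("Text to speech makes automated systems sound more natural.")
    engine.runAndWait()                       # block until the utterance has been spoken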
Convert text to lifelike speech. Text-to-speech via web console, email, REST API or Zapier.
Voice conferencing, hosted PBX, text-to-speech (TTS), speech-to-text (STT), numbering and voice SIP trunks. |
The most likely location for a cavity to develop in your child's mouth is on the chewing surfaces of the back teeth. Run your tongue over this area in your mouth, and you will feel the reason why: These surfaces are not smooth, as other areas of your teeth are. Instead, they are filled with tiny grooves referred to as “pits and fissures,” which trap bacteria and food particles. The bristles on a toothbrush can't always reach all the way into these dark, moist little crevices. This creates the perfect conditions for tooth decay.
What's more, a child's newly erupted permanent teeth are not as resistant to decay as adult teeth are. The hard enamel coating that protects the teeth changes as it ages to become stronger. Fluoride, which is found in toothpaste and some drinking water — and in treatments provided at the dental office — can strengthen enamel, but, again, it's hard to get fluoride into those pits and fissures on a regular basis. Fortunately, there is a good solution to this problem: dental sealants.
Dental sealants are invisible plastic resin coatings that smooth out the chewing surfaces of the back teeth, making them resistant to decay. A sealed tooth is far less likely to develop a cavity, require more expensive dental treatment later on, or, most importantly, cause your child pain.
How Sealants Are Placed
You can think of a sealant as a mini plastic filling, though please reassure your child that it doesn't “count” as having a cavity filled. Because tooth enamel does not contain any nerves, placing a sealant is painless and does not routinely require numbing shots. First, the tooth or teeth to be sealed are examined, and if any minimal decay is found, it will be gently removed. The tooth will then be cleaned and dried. Then a solution that will slightly roughen or “etch” the surface is applied, to make the sealing material adhere better. The tooth is then rinsed and dried again. The sealant is then painted on the tooth in liquid form and hardens in about a minute, sometimes with the help of a special curing light. That's all there is to it!
A note about BPA: A 2012 study that received wide press coverage raised concerns that trace amounts of the chemical bisphenol-A (BPA) found in some (but not all) dental resins might contribute to behavioral problems in children. The study authors noted that while they had found an association, they had not actually proven that BPA in dental sealants causes these problems. In fact, BPA is far more prevalent in food and beverage packaging than in dental restorative materials. The American Academy of Pediatric Dentistry and the American Dental Association have since reaffirmed their support for the use of sealants.
Taking Care of Sealants
Sealed teeth require the same conscientious dental hygiene as unsealed teeth. Your child should continue to brush and floss his or her teeth daily and have regular professional cleanings. Checking for wear and tear on the sealants is important, though they should last for up to 10 years. During this time, your child will benefit from a preventive treatment proven to reduce decay by more than 70 percent.
Features, Importance and Limitations of Planning
Planning is required by all individuals, businesses, and non-business organizations. It is practised by all types of organizations: small, medium and large. The primary function of management is to get tasks done by others, and every manager has to plan for their enterprise. This plan may be viewed as a map; by following it, it is possible to check the extent of progress towards the projected goal. At the same time, planning is a basic requirement and an important input of the management process. Thus all organizational activities begin with planning so that objectives are achieved effectively. Today, planning is considered a strategic area of management, especially in the context of a rapidly changing environment and the globalization of business operations. It is a prerequisite to effective management.
Meaning and definition of Planning
Planning is a blueprint of the course of action to be followed in the future. It is also a mental exercise that requires imagination, foresight and sound judgment; it is thinking before doing. It is a preparatory step, and it refers to detailed programs regarding the future course of action. In fact, it is the basic management function: it involves forecasting, laying down objectives, analyzing different courses of action, and deciding on the best alternative among them, so that the various managerial functions can be performed to achieve pre-determined goals. Thus, it is a continuous process that involves decision-making, i.e., deciding the course of action for framing and achieving objectives.
Planning is deciding in advance what to do, how to do it, when to do it, and who is to do it. Planning bridges the gap from where we are to where we want to go. It makes it possible for things to occur which would not otherwise happen.
-Koontz and O’Donnell
Features of Planning
By analyzing the above meaning and definition, we can reveal the following features of planning:
- Planning focuses on achieving objectives: Planning is goal-oriented work because its purpose is to achieve organizational objectives quickly and economically. These objectives are purposeful, as they provide basic guidelines for planning activities by identifying the actions which lead to desired results.
- Planning is a primary function of management: Planning is the primary function of management as it serves as a base for all other management functions because it provides the basic framework within which all other management functions are performed. We consider it to be a blueprint, as it provides the foundation for managerial actions.
- Planning is pervasive: It is pervasive as it is required at all levels of management and in all types of organizations. However, the scope of planning varies from one level to another: supervisors at the lowest level formulate day-to-day operational programs, middle-level managers prepare departmental plans, and top management plans for the organization as a whole.
- Planning is a continuous process: Planning is an ongoing process. Plans are prepared for a specific period, and at the end of that period a new plan is needed based on the new situation. Since the future is uncertain, the various assumptions about it may change. Therefore, the original plan may have to be revised in light of changing conditions.
- Planning is futuristic: Planning involves looking into the future and anticipating it to the best advantage of the organization. Managers plan to manage future events to the best of their capacity. Planning also involves thinking about the future in order to act in the present. It essentially involves scientific anticipation of future events, i.e. forecasting.
- Planning involves decision making: Planning is the process of making choices from various alternatives to achieve the specified objectives. The need for planning arises only when alternatives are available, and in actual practice, planning presupposes the existence of alternatives. Thus, decision-making is an integral part of planning, as it involves a choice from various alternative courses of action. But, if there is only one alternative, then there is no need for planning.
- Planning is a mental exercise: Planning is an intellectual process that is related to thinking before doing involving imagination and creativity. It is an activity of thinking based on logical reasoning rather than guessing and doing work. The success of planning depends on the performance of a planner. So, a planner must have intelligent imagination and sound judgment capacity.
Importance of Planning
Following are the various importance of a sound planning:
- Planning provides direction: Planning is involved in deciding the future course of action. Fixing goals and objectives is the priority of any organization. By stating the objective in advance, planning provides unity of direction. Proper planning makes goals clear and specific. It helps the manager to focus on the purpose for which various activities are to be undertaken. It means planning reduces aimless activity and makes actions more meaningful.
- Planning reduces the risk of uncertainty: Every business enterprise has to operate in an uncertain environment. Planning helps a firm to survive in this uncertain environment by eliminating unnecessary action. It also helps to anticipate the future, and prepare for the risk by making necessary provisions.
- Planning reduces overlapping and wasteful activity: Plans are formulated after keeping in mind the objective of the organization. An effective plan integrates the activity of all the departments. In this way, planning reduces overlapping and wasteful activities.
- Planning promotes creativity and innovative ideas: Planning encourages creativity, and helps the organization in various ways. Managers develop new ideas and apply the same to create new products and services leading to overall growth and expansion of the business. Therefore, it is rightly said that a good planning process will promote more individual participation by throwing up various new ideas and encouraging managers to think differently.
- Planning facilitates decision-making: Decision-making means searching for various alternatives and selecting the best one. Planning helps the manager to look into the future, and choose among various alternative forces of action. Planning provides guidelines for sound and effective decision-making.
- Planning establishes a standard for controlling: Planning lays down the standards against which actual performance can be evaluated and measured. Comparison between actual performance and the predetermined standards helps to point out deviations and to take corrective actions to ensure that events conform to plans. In case of any deviation, the management can take remedial measures to improve the results.
Limitations of Planning
Following are the limitations of planning:
- Rigidity: Planning brings rigidity to work as employees are required to strictly follow pre-determined policies. There is a tendency that by strictly following these predetermined policies, people become more concerned about complying with these plans rather than achieving the goals. Sometimes planning discourages individual initiative and creativity. It restricts their freedom and new opportunities are ignored.
- Planning may not work in a dynamic environment: Planning has to operate in an external environment, such as government policies, technology, etc., which is beyond the control of the organization. In any situation, changes in the environment make the plan inoperative and ineffective. So planning does not provide a positive result when such changes are not accurately forecasted.
- Planning reduces creativity: Planning involves the determination of policies and procedures in advance. Employees are required to strictly follow them, and deviations are considered to be highly undesirable. As a result, employees do not show their skills, and it reduces their initiative and creativity.
- Planning involves huge costs: Planning is an expensive process because a lot of money is spent on gathering and analyzing information. It also involves the cost of experts, as experts are paid for planning. Efforts should be made to ensure that the benefits derived from planning are greater than its cost. If the cost of planning is not justified by the benefits, then planning should be avoided.
- Planning is time-consuming: It takes a lot of time to collect, analyze, and interpret the information relevant to planning. This causes a delay in decision-making. Therefore during crises and emergencies, which call for an immediate decision, planning does not work. Sometimes, advance planning may lead to a delay in taking action, which may result in the loss of profitable opportunities.
- Planning does not guarantee success: Planning may create a false sense of security in the organization. Managers tend to adopt previously tested plans, but it is not necessary that a plan which has worked before will work again in this competitive environment. So, we cannot say that planning guarantees success.
- Resistance to change: Employees become familiar with existing methods of doing work, so they resist change and do not want to adopt new methods. Such unwillingness may lead to the failure of the plan.
One of the reasons why we need planning is, again, the 80/20 rule: it is well established that for unplanned activity, 80% of the effort yields less than 20% of the outcomes. Time should therefore be spent deciding what to do, when to do it, and how to do it; otherwise, many unnecessary, unfocused, and inefficient steps are taken. The limitations of planning point out problems in the planning process, but with a few careful steps these limitations can be overcome. It is much easier to adjust a plan to avoid a coming crisis than to deal with the crisis when it arrives.
Scientists with the University of Chicago have demonstrated a way to create infrared light using colloidal quantum dots. The researchers said the method demonstrates great promise; the dots are already as efficient as existing conventional methods, even though the experiments are still in early stages.
UChicago researcher Xingyu Shen holds a device that uses quantum dots to produce infrared light—a scientific advance that could lead to new lasers or sensors. Credit: Jean Lachat
The dots could someday form the basis of infrared lasers as well as small and cost-effective sensors, such as those used in exhaust emissions tests or breathalyzers.
"Right now the performance for these dots is close to existing commercial infrared light sources, and we have reason to believe we could significantly improve that," said Philippe Guyot-Sionnest, a professor of physics and chemistry at the University of Chicago, member of the James Frank Institute, and one of three authors on the paper published in Nature Photonics. "We're very excited for the possibilities."
The Right Wavelength
Colloidal quantum dots are tiny crystals—you could fit a billion into the period at the end of this sentence—that emit different colors of light depending on how big you make them. They're very efficient and easy to make and are already being used in commercial technology; you might already have bought a quantum-dot TV without knowing it.
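The size–color relationship comes from quantum confinement: shrinking a nanocrystal pushes its energy levels apart, so smaller dots emit bluer light and larger dots emit redder light. The sketch below is only an illustration of that trend, not the authors' model—it uses the textbook Brus effective-mass approximation with approximate literature values for CdSe, and the approximation is known to overestimate confinement for very small dots.

```python
import math

# Physical constants (SI units)
H = 6.626e-34          # Planck constant, J*s
M0 = 9.109e-31         # electron rest mass, kg
E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

# Illustrative CdSe parameters (approximate literature values, assumed here)
E_BULK_EV = 1.74       # bulk band gap, eV
M_E = 0.13 * M0        # effective electron mass
M_H = 0.45 * M0        # effective hole mass
EPS_R = 10.6           # relative dielectric constant

def brus_emission_nm(radius_nm: float) -> float:
    """Rough estimate of the emission wavelength (nm) of a CdSe dot of the
    given radius, using the Brus approximation: bulk gap + confinement term
    minus an electron-hole Coulomb correction."""
    r = radius_nm * 1e-9
    confinement_j = (H**2 / (8 * r**2)) * (1 / M_E + 1 / M_H)
    coulomb_j = 1.8 * E_CHARGE**2 / (4 * math.pi * EPS0 * EPS_R * r)
    gap_ev = E_BULK_EV + confinement_j / E_CHARGE - coulomb_j / E_CHARGE
    return 1239.84 / gap_ev   # hc expressed in eV*nm

for radius in (1.5, 2.0, 3.0, 4.0):
    print(f"radius {radius:.1f} nm -> ~{brus_emission_nm(radius):.0f} nm emission")
```

Running it shows the wavelength climbing steadily with dot radius—the same size-tuning that lets manufacturers dial in different display colors from one material.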
However, those quantum dots are used to make light at visible wavelengths—the part of the spectrum humans can see. If you want quantum-dot light in the infrared, you have mostly been out of luck.
But infrared light has a lot of uses. In particular, it is very useful for making sensors. If you want to know whether harmful gases are coming out of your car exhaust, test whether your breath is above the legal alcohol limit, or make sure methane gas isn't leaking from your drilling site, for example, you use infrared light. That's because each type of molecule absorbs infrared light at a very specific wavelength, so different gases are easy to tell apart. (A small worked example of those wavelengths follows below.)
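To make the "very specific wavelength" point concrete: infrared gas sensors target well-known absorption bands, which chemists usually quote in wavenumbers (cm⁻¹). The minimal sketch below converts a few standard textbook band positions to wavelengths; these are generic reference values, not data from this study.

```python
# Characteristic mid-infrared absorption bands (approximate textbook values, cm^-1)
ABSORPTION_BANDS_CM1 = {
    "CO2 (asymmetric stretch)": 2349,
    "CO  (fundamental)":        2143,
    "CH4 (C-H stretch)":        3019,
}

def wavenumber_to_um(wavenumber_cm1: float) -> float:
    """Convert a wavenumber in cm^-1 to a wavelength in micrometres."""
    return 1e4 / wavenumber_cm1

for gas, band in ABSORPTION_BANDS_CM1.items():
    print(f"{gas}: {band} cm^-1 -> {wavenumber_to_um(band):.2f} um")
```

The carbon dioxide band near 4.26 µm and the methane band near 3.3 µm are exactly the kind of mid-infrared targets a cheap quantum-dot light source could address.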
"So a cost-effective and easy-to-use method to make infrared light with quantum dots could be very useful," explained Xingyu Shen, a graduate student and first author on the new study.
Infrared lasers are currently made through a method called molecular beam epitaxy, which works well but is labor- and cost-intensive. The scientists thought there might be another way.
Guyot-Sionnest and his team have been experimenting with quantum dots and infrared technology for years. Building on their previous inventions, they set out to try to recreate a "cascade" technique that is widely used to make lasers, but had never been achieved with colloidal quantum dots.
In this "cascade" technique, researchers run an electrical current across a device, which sends millions of electrons traveling across it. If the architecture of the device is just right, the electrons will travel through a series of distinct energy levels, like falling down a series of waterfalls. Each time the electron falls down an energy level, it has the chance to emit some of that energy as light.
The researchers wondered if they could create the same effect using quantum dots. They created a black "ink" of trillions of tiny nanocrystals, spread it onto a surface and sent an electrical current through.
"We thought it would be likely to work, but we were really surprised by how well it worked," said Guyot-Sionnest. "Right away, from the first time we tried it, we saw light."
In fact, they found that the method was already as efficient as other, conventional ways to produce infrared light, even in exploratory experiments. With further tinkering, the scientists said, the method could easily surpass existing methods.
They hope the discovery could lead to significantly cheaper infrared lights and lasers, which could open up new applications.
"I think it's one of the best examples of a potential application for quantum dots," said Guyot-Sionnest. "Many other applications could be achieved with other materials, but this architecture really only works because of the quantum mechanics. I think it's pushing the field forward in a really interesting way."
Xingyu Shen et al, Mid-infrared cascade intraband electroluminescence with HgSe–CdSe core–shell colloidal quantum dots, Nature Photonics (2023). DOI: 10.1038/s41566-023-01270-5
Source: University of Chicago |
In the context of mitigating the impending global warming due to anthropogenic emissions of carbon dioxide, which of the following can be the potential sites for carbon sequestration?
1. Abandoned and uneconomic coal seams
2. Depleted oil and gas reservoirs
3. Subterranean deep saline formations
Select the correct answer using the code given below:
[UPSC Civil Services Exam – 2017 Prelims]
(a) 1 and 2 only
(b) 3 only
(c) 1 and 3 only
(d) 1, 2 and 3
- Carbon sequestration is the process of capturing and storing atmospheric carbon dioxide, aimed at reducing the amount of this gas in the atmosphere.
- Various methods are employed for this purpose.
- For instance, abandoned coal seams can be used to trap CO2, and CO2 can be injected into oil fields to enhance oil recovery.
- Additionally, forests and wetlands serve as natural carbon sinks, while subterranean deep saline formations can also be used for carbon storage.
- Therefore, statements 1, 2, and 3 are all correct, and option (d) is the answer.