- The definition of gut is courage or strength.
An example of gut is an abused woman fighting back against her abuser; the woman has guts.
- Gut is defined as the intestines or belly, or the cord made from animal intestines.
- An example of gut is when someone gets punched in the stomach; they are punched in the gut.
- An example of gut is the material from which violin and cello strings are made.
- Gut means to remove the internal parts of someone or something.
- An example of gut is slicing a fish's belly open and removing the intestines.
- An example of gut is to destroy the interior of a building.
A man measures his gut.
- now often regarded as an indelicate usage
- the bowels; entrails
- the stomach or belly
- all or part of the alimentary canal, esp. the intestine
- tough cord made from animal intestines, used for violin strings, surgical sutures, etc.; catgut
- the little bag of silk removed from a silkworm before it has spun its cocoon: made into strong cord for use in fishing tackle
- a narrow passage or gully, as of a stream or path
- Informal the basic, inner or deeper parts
- daring, courage, perseverance, vigor, etc.
- impudence; effrontery
- power or force
Origin of gut: Middle English; from Old English guttas, plural; from base of geotan, to pour: for Indo-European base see gust
transitive verb: gutted, gutting
- to remove the intestines from; eviscerate
- to destroy the interior of, as by fire
- urgent and basic or fundamental: the gut issues of a campaign
- easy; simple: a gut course in college
hate someone's guts Slang
- a. The digestive tract or a portion thereof, especially the intestine or stomach. b. The embryonic digestive tube, consisting of the foregut, the midgut, and the hindgut. c. guts The bowels or entrails; viscera.
- Slang a. Innermost emotional or visceral response: She felt in her gut that he was guilty. b. guts The inner or essential parts: “The best part of a good car … is its guts” (Leigh Allison Wilson).
- guts Slang Courage; fortitude: It takes guts to be a rock climber.
- Slang A gut course.
- a. Thin, tough cord made from the intestines of animals, usually sheep, used as strings for musical instruments or as surgical sutures. b. Fibrous material taken from the silk gland of a silkworm before it spins a cocoon, used for fishing tackle.
- A narrow passage or channel.
- Sports a. The central, lengthwise portion of a playing area. b. The players occupying this space: The fullback ran up the gut of the defense.
transitive verb: gut·ted, gut·ting, guts
- To remove the intestines or entrails of; eviscerate.
- To extract essential or major parts of: gut a manuscript.
- To destroy the interior of: Fire gutted the house.
- To reduce or destroy the effectiveness of: A stipulation added at the last minute gutted the ordinance.
Origin of gut: From Middle English guttes, entrails, from Old English guttas; see gheu- in Indo-European roots.
- The alimentary canal, especially the intestine.
- (informal) The abdomen of a person, especially one that is enlarged
- beer gut
- (uncountable) The intestines of an animal used to make strings of a tennis racket or violin, etc.
- A person's emotional, visceral self.
- I have a funny feeling in my gut.
- (in the plural) The essential, core parts.
- He knew all about the guts of the business, how things actually get done.
- (in the plural) Ability and will to face up to adversity or unpleasantness.
- It took a lot of guts to admit to using banned substances on television.
- (informal) A gut course
- You should take Intro Astronomy: it's a gut.
- A narrow passage of water.
- the Gut of Canso
- The sac of silk taken from a silkworm when ready to spin its cocoon, for the purpose of drawing it out into a thread. When dry, it is exceedingly strong, and is used as the snood of a fishing line.
(third-person singular simple present guts, present participle gutting, simple past and past participle gutted)
(comparative more gut, superlative most gut)
Human parainfluenza viruses (HPIVs) usually spread from an infected person to others through—
- the air by coughing and sneezing,
- close personal contact, such as touching or shaking hands, and
- touching objects or surfaces that have HPIVs on them then touching your mouth, nose, or eyes.
HPIVs can stay in the air for over an hour and on surfaces for a few hours and still infect people depending on the environmental conditions.
People usually get HPIV infections in the spring, summer, and fall. For more information, see HPIV Seasons.
- Page last reviewed: August 18, 2015
- Page last updated: August 18, 2015
Africa, home to 350 million people belonging to some 3000 tribes and speaking some 800 to 1000 distinct languages, is one of the most musically diversified regions of the world. The geographical variety of the continent - from the mountains and the vast desert of the north to the wide Savannah belt, the central rain forests and the fertile southern coast - is reflected in a multiplicity of musical styles.
In spite of this diversity, unifying features may be identified. African music is primarily percussive. Drums, rattles, bells and gongs predominate, and even important melodic instruments such as xylophones and plucked strings are played with percussive techniques. African melodies are based on short units, on which performers improvise. Though melodies are often simple, rhythms are complex by European standards, with much syncopation (accents on beats other than the main one), hemiola (juxtaposition of twos and threes) and polyrhythm (simultaneous performance of several rhythms). While Western rhythms are classified as 'divisive' (time span divided into equal sections, e.g. 12 beats divided 4 + 4 + 4), African rhythms are usually 'additive' (unequal sections, e.g. 12 beats divided 5 + 7 or 3 + 4 + 5). An unusual aspect of African rhythm is what has been called the 'metronome sense', the ability of many musicians to perform for long periods without deviating from the exact tempo. Group performances are most typical, and the 'call-and-response' style with a solo leader and responsorial group is used throughout the continent. Most African music is based on forms of diatonic scales, closely related to European scales; so the Western listener may find it more familiar, more accessible than the music of Asia.
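The contrast between equal and unequal groupings of a 12-beat cycle described above can be made concrete. Below is a minimal illustrative sketch (Python; the helper name is my own) that renders a cycle as text, with an accent at the start of each group:

```python
def accent_pattern(groups, pulses=12):
    """Render a rhythmic cycle as text: 'X' marks the accented first
    pulse of each group, '.' marks the unaccented pulses that follow."""
    assert sum(groups) == pulses, "groups must exactly fill the cycle"
    return "".join("X" + "." * (g - 1) for g in groups)

# Equal grouping of 12 pulses (4 + 4 + 4):
print(accent_pattern([4, 4, 4]))   # X...X...X...

# Unequal groupings of the same 12 pulses (3 + 4 + 5 and 5 + 7):
print(accent_pattern([3, 4, 5]))   # X..X...X....
print(accent_pattern([5, 7]))      # X....X......
```

All three patterns contain exactly 12 pulses; only the placement of the accents differs, which is the distinction the paragraph draws.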
These characteristics apply to the cultures south of the Sahara Desert, often referred to as 'Black Africa'. North African music is more closely allied to the music of other Arab countries of West Asia and is characterized by solo performance, monophonic rather than polyphonic forms, the predominance of melody over rhythm, a tense and nasal vocal style and non-percussive instruments including bowed rather than plucked strings. While the North as well as portions of West Africa and the east coast have been influenced by Islam, a distinctive sub-region is formed by Ethiopia, whose music has been influenced for centuries by Coptic Christianity, reflected in the ritual melodies, modes and liturgical chant (which is notated). Ethiopian instruments include the small krar lyre and the large, ten-string beganna lyre, claimed to be a descendant of David's harp.
In sub-Saharan Africa, music is an integral part of daily life. Songs accompany the rites of passage, work and entertainment. They were also important in the life of the traditional African courts, and are still used for political comment, especially in West Africa. Although the claims that all members of African communities participate in musical activities are now discredited, studies have shown that communal music-making is more common than in the West. And although musicians are generally accorded low social status, skilled professional musicians (called griots in some regions), employed by rich patrons, are common in many African societies. Musical notation is rare in Africa; skills and knowledge are passed from master to pupil in oral tradition.
The most celebrated African instruments are membrane drums. The famous 'talking drums' of West Africa, such as the atumpan of Ghana, can imitate speech tones and are sometimes used to signal messages. Speech is also imitated by bells, gongs and wind instruments of the horn, trumpet and flute types. Harps are played mainly north of the Equator, in a broad band extending from Uganda to the western Savannah. Harp-lutes, such as the Gambian kora, are popular in West Africa. Other string instruments include fiddles in East Africa and the musical bow, fashioned like a hunting bow and played, with varying techniques and great sophistication, throughout the continent. Wind instruments of the trumpet and horn types are played in orchestras, in hocket fashion, with each instrument supplying its one note to the melodic whole. The algaita, an oboe-type instrument of West Africa, is probably of Islamic influence. Xylophones are common, particularly in the East where the Chopi xylophone orchestras of Mozambique perform polyphonic dance suites of uncommon beauty. An instrument unique to African and African-American music is the mbira or sanza (called thumb piano in earlier writings); it consists of a set of thumb-plucked metal tongues mounted on a board, often with a gourd resonator.
In recent decades, traditional African music has tended to be overshadowed by new hybrid urban forms such as highlife (Ghana), juju (Nigeria), Congolese (Zaire) and kwela (southern Africa) which blend elements from Western pop and disco idioms with local features.
In this drawing lesson we’ll show you how to draw a Jellyfish in 6 easy steps. This Free step by step lesson progressively builds upon each previous step until you get to the final rendering of the JellyFish.
This is a simple lesson designed for beginners and kids with real easy to follow steps. Feel free to print this page and use as a drawing tutorial.
Here are some fun facts about the JellyFish you might find interesting.
- The sting of a Jellyfish can be very deadly.
- Jellyfish lifespans typically range from a few hours to several months.
- Jellyfish have existed on the face of this planet for over 650 million years.
- Some of them can swim quite powerfully and actively chase their prey.
- There is no real set size for the different species of Jellyfish in the world.
Step 1: Draw the main body of the Jellyfish
Step 2: Draw the first two tentacles.
Step 3: Draw a third tentacle.
Step 4: Add some detail to the head.
Step 5: Add more tentacles.
Step 6: Complete the Jellyfish by finishing the last tentacle and you’re done!
Here’s a quick 1 minute video on how to draw a simple Cartoon Jellyfish
Interior and Exterior Angles
In this angles worksheet, 10th graders solve and complete 55 various types of problems. First, they find the sum of the measures of the interior angles of the convex polygon. Then, students find the measure of a central angle of a regular polygon with the given number of sides. They also find the perimeter and area of the regular polygons shown.
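The quantities the worksheet asks for all follow from standard formulas: the interior angles of a convex n-gon sum to (n − 2) · 180°, the central angle of a regular n-gon is 360°/n, and the perimeter and area follow from the side length via the apothem. A short sketch (Python; the function names are my own):

```python
import math

def interior_angle_sum(n):
    """Sum of the interior angles of a convex n-gon, in degrees."""
    return (n - 2) * 180

def central_angle(n):
    """Central angle of a regular n-gon, in degrees."""
    return 360 / n

def regular_polygon_perimeter_area(n, side):
    """Perimeter and area of a regular n-gon with the given side length.
    Area = (1/2) * perimeter * apothem."""
    perimeter = n * side
    apothem = side / (2 * math.tan(math.pi / n))
    return perimeter, perimeter * apothem / 2

print(interior_angle_sum(6))                  # 720 (hexagon)
print(central_angle(5))                       # 72.0 (pentagon)
print(regular_polygon_perimeter_area(4, 3))   # square: perimeter 12, area 9
```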
Featured Artifact – Gas Rations
On December 1, 1942, mandatory gas rationing went into effect.
America’s military needed millions of tires for jeeps, trucks and other vehicles. Tires required rubber. Rubber was also used to produce tanks and planes. But when Japan invaded Southeast Asia, the United States was cut off from one of its chief sources of this critical raw product.
America overcame the rubber shortage in several ways. Speed limits and gas rationing forced people to limit their driving. This reduced wear on tires. A synthetic rubber industry was created. The public also carpooled and contributed rubber scrap for recycling.
All automobiles received a gas ration “grade.” “A” meant nonessential. “B” indicated work use (for instance, a car used by a traveling salesman). “C” stood for essential use (for example, doctors, clergy and civil defense workers). “T” was for long-distance trucks. Most cars were graded “A,” which meant the owner received stamps for three gallons of gas per week.
Question of the Week
Work and Travel on the Rails
In this unit, students will examine the important and changing role of the railroad in the 19th and early 20th centuries. They will learn about the railroad from two perspectives: those who worked on the railroad and those who rode the railroad. Students will come to understand the change in immigrant labor on the rails over time, gain knowledge of the working conditions and life experiences of immigrant railroad workers, and examine how railroads shaped larger social standards for behavior and comfort.
- Textual evidence, material artifacts, the built environment, and historic sites are central to understanding United States history.
- Learning about the past and its different contexts shaped by social, cultural, and political influences prepares one for participation as active, critical citizens in a democratic society.
- Long-term continuities and discontinuities in the structures of United States society provide vital contributions to contemporary issues. Belief systems and religion, commerce and industry, innovations, settlement patterns, social organization, transportation and trade, and equality are examples of continuity and change.
• Analyze a primary source for accuracy and bias and connect it to a time and place in United States history.
• Analyze the interaction of cultural, economic, geographic, political, and social relations for a specific time and place.
• Apply the theme of continuity and change in United States history and relate the benefits and drawbacks of your example.
Background Material for Teacher
Working on the Rails: Irish and Italian Laborers on Pennsylvania's Railroads
19th-Century Life on the Rails: A Microcosm of American Society
R. David McCall, "Everything in its Place": Gender and Space on America's Railroads, 1830-1899 (Master's thesis, Virginia Polytechnic Institute and State University, 1999).
End of Unit Assessment
Students can create three diary entries of an immigrant working or riding on the rails. If musically inclined, they could write and perform a ballad which could have been sung by Irish workers on the rail.
PA Core Standards
CC.8.5.11-12.F CC.8.5.11-12.G. CC.8.6.11-12.B
This lesson was migrated from the old HSP website. It was not created in the format that we presently use. Please excuse discrepancies in formatting and lack of fully digitized sources.
About the Author
This lesson was originally on the old HSP website. It was updated by Amy Seeberger and Eden Heller, Education Interns, Historical Society of Pennsylvania.
In this image from BYU's x-ray diffraction facility, x-rays arriving from the left scatter in all directions from a tiny crystal at the center, and are then imaged by a 16-megapixel x-ray camera. The often beautiful scattering patterns that result contain a wealth of information about the atomic structure of the sample. The speed and sensitivity of state-of-the-art instruments like this have revolutionized the study of crystalline materials. The stainless-steel cylinder with lots of knobs is a double focusing Kirkpatrick-Baez mirror, which increases the intensity of the x-ray beam by a factor of 8. The beam collimator (left), the goniometer head (back), the microscope (45°), the low-temperature gas nozzle (vertical), and a beam stop (right) all point towards the location of the sample.
The spatial distribution of scattered x-rays is closely related to the Fourier transform of the crystal's electron density. In principle, an inverse Fourier transform can be used to directly convert experimental scattering data into a picture of the electron density (i.e. the atomic crystal structure). Of course, the Fourier Transform of the electron density is a complex-valued function, only the magnitude of which is actually measured. Hence the "phase problem" of crystallography.
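The phase problem described above can be demonstrated numerically. The sketch below (pure Python with a toy 1-D "electron density"; all names are illustrative) shows that inverting the full complex transform recovers the density, while inverting the measured magnitudes alone does not:

```python
import cmath

def dft(x):
    """Discrete Fourier transform of a real or complex sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse discrete Fourier transform."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# Toy 1-D "electron density": two atoms in an 8-site unit cell.
density = [0.0] * 8
density[1], density[4] = 1.0, 0.7

F = dft(density)                   # complex structure factors
magnitudes = [abs(f) for f in F]   # all a detector can actually measure

# With the full complex F (magnitude AND phase), inversion recovers the density:
recovered = [z.real for z in idft(F)]
print(all(abs(a - b) < 1e-9 for a, b in zip(recovered, density)))   # True

# With magnitudes alone (phases discarded), it does not:
wrong = [z.real for z in idft(magnitudes)]
print(all(abs(a - b) < 1e-9 for a, b in zip(wrong, density)))       # False
```

The second inversion fails because the magnitude spectrum has lost the phase information that encodes where the "atoms" sit in the cell, which is exactly the crystallographer's phase problem.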
In the figure below, the lattice of grey dots represents the Fourier transform of a simple cubic crystal with one atom per repeating unit cell. Because the Fourier transform of any periodic function (crystals are periodic by definition) is a discrete lattice of uniformly spaced peaks, we call this the reciprocal lattice. The individual peaks are called Bragg peaks. In the figure, the reciprocal lattice and the real-space laboratory configuration have been superimposed (don't try this at home!) to illustrate the geometry of a diffraction experiment. Note that when the crystal rotates on the diffractometer, its reciprocal lattice rotates with it, but around a different origin. The well-known Bragg diffraction equation, 2d sin(θ) = λ, is actually the equation for a sphere of radius 1/λ with its edge just touching the origin of reciprocal space. We have included this imaginary "Ewald sphere" in the figure. As the crystal rotates on the goniometer, any Bragg peak that comes into contact with the surface of the Ewald sphere satisfies the diffraction condition and results in a diffracted x-ray beam that leaves the crystal in the direction of the peak. When diffracted beams intersect the x-ray detector surface, their locations and intensities are recorded. Click on the figure to see an interactive flash animation of a single-crystal diffraction experiment. Choose a crystal type and orientation to start the animation.
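As a worked example of the diffraction condition: solving 2d sin(θ) = λ for θ shows that a reflection is geometrically possible only when λ ≤ 2d. The numbers below (Mo Kα radiation at roughly 0.7107 Å and a d-spacing of 2.0 Å) are illustrative assumptions, not values taken from the text:

```python
import math

def bragg_angle(d, wavelength, n=1):
    """Bragg angle theta in degrees for plane spacing d and the given
    wavelength (same units), or None when n*lambda > 2d (no reflection)."""
    s = n * wavelength / (2 * d)
    if s > 1:
        return None                # sin(theta) would exceed 1
    return math.degrees(math.asin(s))

theta = bragg_angle(2.0, 0.7107)   # Mo K-alpha on d = 2.0 angstrom planes
print(round(theta, 2))             # about 10.23 degrees

print(bragg_angle(0.3, 0.7107))    # None: wavelength exceeds 2d
```

The `None` branch is the algebraic form of the Ewald-sphere picture: a Bragg peak lying farther than 2/λ from the origin of reciprocal space can never touch the sphere, no matter how the crystal is rotated.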
The animation makes the Ewald sphere look something like a disco ball. So now we have Disco Diffraction!
Hinduism is practiced by about eighty percent of India's population, and by about 30 million people outside India. But how is Hinduism defined, and what basis does the religion have? In this 'Very Short Introduction', Kim Knott provides insight into the beliefs and authority of Hindus and Hinduism, and considers the ways in which it has been affected by colonialism and modernity. Knott offers succinct explanations of Hinduism's central preoccupations, including the role of contemporary gurus and teachers in the quest for spiritual fulfillment; and the function of regular performances of the Mahabharata and Ramayana - scriptures which present the divine in personal form (avatara) and provide models of behavior for everyone, from kings and warriors to servants and children, and which focus on the dharma, the appropriate duties and moral responsibilities of the different varna or classes. The author also considers the challenges posed to Hinduism at the end of the twentieth century as it spreads far beyond India, and as concerns are raised about issues such as dowry, death, caste prejudice, and the place of women in Hindu society.
Storm cellars are underground structures that are either located below buildings, or are built underground near houses or other such buildings. They are reinforced structures into which residents can go for protection from a strong wind storm. They are common in areas that often have tornadoes and hurricanes.
A typical storm cellar for a single family would be built near the home. It might have a floor area of eight by twelve feet (2.5 x 3.5 m) and an arched roof like that of a Quonset hut—but it would be entirely underground. In most cases the entire structure would be built of blocks faced with cement, with rebar run through the blocks for protection from the storm; this makes it almost impossible for the blocks to fall apart. Newer ones are sometimes made of septic tanks that have been modified with a steel door and vents. Most storm cellars would be reached by a covered stairwell, and at the opposite end of the structure there would be conduits for air that would reach the surface, and perhaps a small window to serve as an emergency exit and also to provide some light.
A storm cellar may also be used to store canned goods for emergencies or for a long period of time.
Beyond the Gene
“There are no depths. Appearance is the summary of phenomena.”
— Joseph Brodsky
Life on Earth is a narrative written in deoxyribonucleic acid (DNA). The chemical design of DNA is uniform among every form of life, but its sequence differs between species and individuals. DNA sequences are composed of millions of differentially combined chemical letters (A, T, C & G) and yield most of the current diversity of species, as well as offering an endless blueprint for the future design of life forms. Once established, life forms tend to stay within the borders of their species, one generation after the other. However, no two individual organisms, even twins, manage to follow precisely the same script. The reason for this is the so-called “epigenetic” impact that expands the boundaries of the DNA language and generates another level of diversity by incorporating life experience in its different forms.
What is epigenetics? In the mid-1960s, a founder of epigenetics Conrad Waddington wrote: “Some years ago [in 1947] I introduced the word ‘epigenetics’, derived from the Aristotelian word ‘epigenesis,’ which had more or less passed into disuse, as a suitable name for the branch of biology which studies the causal interactions between genes and their products which bring the phenotype into being.” “Epi” means “upon” or “over,” and “genetics” implies that genes are involved, so the term reflected the need to study events “over” or beyond the gene.
What is beyond, or over the gene? The DNA of a single human cell is 1.2 meters long and is tightly packed within the micron-size cell nucleus. The packed ball of DNA string has a levitating 3D structure where the different strings overlap and loop around each other like folded fishing line. Each loop contains genes that encode proteins as well as numerous regulatory elements that define the gene activity. Epigenetics addresses the mechanisms that define gene activity without interfering with the gene structure. Thus epigenetics is more a censorship of the DNA “reading” than an editing of the text.
DNA “reading” involves thousands of different proteins that move with extreme precision and sensitivity over the DNA structure, like a blind reader’s fingers over Braille. DNA reading molecules are sensitive to environmental impacts, and the longer and stronger these are, the more likely it is that cells will tune the pattern of DNA reading to environmental pressure. The meaning of the environmental impact differs between distinct cells and organisms. For example, in brain neurons, persistent fear of aggression affects the activity of hundreds of genes responsible for the immediate reaction to aggression, as well as a long-lasting adaptation to life in fear, meaning that many of the genes that operate in the brain of “free” individuals are either silent or abnormally active in the brain of those who live under conditions of oppression or fear. The scope of gene activity and patterns of genes within each given cell generate an individual cell dialect. As cells operate together they form a library of dialects that will eventually drive the emergence of novel and stable features that could not have been predicted by the DNA sequence alone.
The emergence of a common cell language is similar to the emergence of linguistic dialects that liberate individuality from the rigid constraints of scholastic grammar. As genes interact with each other and with the “readers,” life within the cell nucleus appears strikingly similar to life in the ancient Greek seaport of Piraeus, inhabited by Greeks looping into Athens from all around the Mediterranean. The mixture of dialects and the contact between multiple varieties of the same language brought about a new language, “koiné.” Koiné languages do not change any existing dialect, but instead emerge as a spoken dialect in addition to the original ones. As with koiné, our cells do not attempt to change the original DNA language, yet they improvise their own koiné ad libitum in response to environmental influences.
The supposed virginity of our genome becomes challenged as soon as we tap into the “dirt” of the world. Bacteria and viruses invade our bodies already in the womb, and even more after we are born. The total mass of bacteria living in a single human can be measured in kilograms. Bacteria have no preconceived knowledge of the future host, and likewise, we are not aware of the invaders. To reach a symbiotic relation, both our body and the collective body of invaders need to adapt to each other to avoid excessive stress of futile intolerance. In essence, the birth of a human is not merely the origin of a new individual, but the onset of a new community. Exposed, the cells of our body tune their genetic language to help achieve the life-long symbiosis between us and “them.” Since partners in symbiotic relationships frequently belong to different kingdoms, yet may be so intimate with each other that one may reside in the other’s tissues and/or cells, their shared “language” tends to be a basic—and ancient—form of communication. Such communication blurs the boundaries between different living entities, giving rise to a single biomolecular network: a “holobiont” with a “hologenome.” At the bottom of all of this are our genes, carrying the burden of the environment throughout our life.
Similar to bacterial invasions that challenge our immunity, the brain of a newborn becomes exposed to the limitless number of new signals that will continue to arrive for as long as we live. Similar to how genes in the gut and skin adapt to bacterial presence, the genes in our brain cells become part of a virtual “hologenome,” where not bacteria or viruses but information imprints its presence on the genes and alters their function. A baby screaming in a cradle in the feverish embrace of the world reflects a struggle where our genetic script, which really has not changed that much over thousands of years, experiences the burden of novelty in a cruel and unanticipated form.
Can the burden of novelty be reduced by passing a memory of ancestral experience to the next generation? Can information about ancestral environments be transmitted to offspring via sperm cells? In the late nineteenth century, Dr. August Weismann removed the tails of sixty-eight white mice repeatedly over five generations, and found that while “901 young were produced by five generations of artificially mutilated parents … there was not a single example of a rudimentary tail or of any other abnormality in this organ.” Intuitively, Weismann chose mutilation as the most likely experience that could lead to heritable changes. Though naïvely, Weismann mistook a sign of mutilation for its consequences. While tail shortening is obviously not heritable, the trauma of mutilation left a lasting hereditary impact. It turned out that fear can be passed to new generations. Mice were found to pass on the memory of a smell if it was coupled to a strongly painful experience. In the experiment, a particular odorant was paired with foot shock. Offspring of the males treated in this way showed increased sensitivity to that specific odorant, while no change in sensitivity was registered with another. This study suggests the somewhat shocking possibility that parents can inform offspring in an extraordinarily complex chemical milieu via sperm.
While these experiments were done in mice, there is no reason to think that humans are different. Instead of being individually discrete, our flesh and mind may be hostage to our ancestral experiences. While the ancestral influence may have a positive impact, it may also narrate defeat and leave us inept in the face of danger. Maybe revolution and its ensuing calamity are merely an evolutionary tool to erase the memory of the past, to start from a clean slate.
All About Blocks
Have you ever wondered why blocks are such an important part of any good early childhood classroom? Or why your child seems to like playing with them so much? Perhaps it is the simplicity of their design and the multiplicity of their uses that make blocks a perennial favorite with preschoolers. Even more important, blocks represent a microcosm of life: your child can use them to construct his own understanding about how things work, and even how life works.
The Science of Blocks
When your child plays with blocks, building replicas of the world around her, she is like a little scientist, experimenting with balance, structure, space, and even gravity! Have you ever watched your child attempt to build a simple tower, only to have it fall down at a particular height? Perhaps you have noticed that she tried different ways of placing the blocks until finally she created a tower that stayed up! Amazingly, what she is doing is using the scientific method of experimentation, observation, and cause-and-effect to solve the problem of the tumbling tower.
The Math of Blocks
Given the many shapes that blocks come in, they are the perfect tool for hands-on learning about basic math concepts: shape, size, area, geometry, measurement, and equivalencies. While playing with blocks, your child may naturally begin to sort them by a particular attribute, such as shape or size. He may notice that long rectangle blocks make much better bases than the triangular ones, or that curved blocks need to lie flat on the floor. This exploration into the nature of shapes prepares your child for later geometric understanding. You may also notice that your child enjoys making long lines of blocks. This is an important first step in grasping the concept of measurement. Children often delight when they notice that things are the same length. For example, "Look, my blocks are as long as the couch!" This would be the perfect time to ask your child, "Do you think you are the same size as your line of blocks? Can you lay next to it to see?" By asking a "next step" question, you extend the learning by asking your child to apply what he has learned from the first measurement of the couch to a new object — himself!
The Language of Blocks
Block play is an effortless way to get children to practice language skills simply because there is so much to talk about! Many children like to describe what they're building, or they narrate as they go along. Some young builders talk to themselves as they try new things. This makes the block area a prime place for your child to experiment with open-ended questions such as "What might happen if . . ." and "How many ways can you . . ." Just by presenting a question, idea, or new prop, you can inspire your child for hours of constructive play.
The Social World of Blocks
Of course, the "pretend play" aspect of block-building also supports the development of emotional and social skills. In an early childhood classroom, the block area is an active social center that encourages children to share, take turns, listen, and communicate. While blocks can be a solitary activity, in most classrooms they are the place where children congregate. Even in your own home you may notice that when you bring out the blocks, everyone wants to join in the fun! Perhaps it is the open-ended nature of blocks that makes them so good for practicing a variety of social skills. There is no one "right" way to build with them, thus requiring children to work creatively together to decide how to use them.
May 24, 2013
News & Features
AFTER a decade of fierce debate and much research, the once heretical view that stomach ulcers are an infection caused by a bacterium, Helicobacter pylori, and are curable with antimicrobial drugs, has prevailed. And now leading researchers are turning to the public health implications of H. pylori, including a link to stomach cancer. Until this view of the cause of ulcers was endorsed this month by an independent panel of medical experts convened by the National Institutes of Health, a Federal agency in Bethesda, Md., the theory and benefits of antimicrobial therapy were still considered unproved and radical. The panel not only endorsed the theory but also strongly urged a drastic change in standard ulcer therapy: the addition of combinations of antimicrobial drugs to the usual ulcer regimen.
Setting a new standard of care for millions of people with stomach ulcers, a panel of medical experts said today that antimicrobial agents, including antibiotics, should be added to the conventional treatments for the common ailment. The recommendation reflects evidence from studies in the last few years that ulcers are caused by infection with a bacterium, Helicobacter pylori. The aim of antimicrobial therapy is to knock out the bacteria permanently and prevent recurrences.
Scientists here say they have found widespread degradation of the health of thousands of people living in areas contaminated by fallout from the fire five years ago at the Chernobyl nuclear power plant. "The medical and biological consequences of Chernobyl appear much more serious and diverse than had been expected during the first years," said Dr. Tamara V. Belookaya, head of diagnostics at the Institute of Radiation Medicine in this town in the Minsk metropolitan area.
Persistent infection with a common spiral-shaped bacterium greatly increases the risk of stomach cancer, two new studies have concluded. Stomach cancer is the second most common cancer in the world, after lung cancer.
NEW research has produced strong evidence that an S-shaped bacterium causes the common stomach disorder gastritis. And scientists say there is increasing though inconclusive evidence that the microbe is linked to many stomach and duodenal ulcers.
TWO Australian researchers have discovered what seems to be a new spiral-shaped bacterium living in the human stomach. The finding of one more microorganism among the thousands known might have been no more than a curiosity if the Australian bacterium were not now being tentatively linked to some of the most painful ailments known: gastritis, peptic ulcers in the stomach and duodenum, and perhaps other problems as well. Several million Americans have these ailments whose origins are often unknown. The story of how the finding was made and how the research is being conducted on several continents has much to say about how science really works, not so much as a matter of breakthroughs but rather in fits and starts, with optimism, pessimism and rivalries that sometimes impede potentially important advances. Thus, while there is a great deal of skepticism about the importance of this finding, there is a great deal of excitement, too, as the potential implications begin to emerge. It is possible, for instance, that if bacteria contribute to or lie at the root of stomach and intestinal pains, these now intractable problems may be helped, even cured, by antibiotics. Such drugs might be used in place of or in addition to the medications now prescribed.
Gravimeters are instruments that measure the Earth's gravity (g), which is a function of the geographic position of the observation site and of time.
A distinction is drawn between absolute gravimeters, which measure the local, instantaneous value of gravity, and relative gravimeters, which measure local variations of gravity in time or gravity differences between observation sites.
The major time-dependent gravity effects at an observation site are the periodic Earth tides, caused by the gravitational attraction of the Sun and Moon, which change with the Earth's rotation and the orbital motion of the Sun and Moon.
The present state of the art allows a signal resolution of up to 0.00001 ppm (parts per million). For illustration, this corresponds to 0.1 mm in relation to the 10,000 km distance from the equator to the pole.
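As a quick sanity check on that illustration, the arithmetic can be written out directly (Python used here purely for demonstration):

```python
# Sketch: verify the "0.1 mm over 10,000 km" illustration of gravimeter resolution.
# A resolution of 0.00001 ppm is a relative resolution of 1e-5 * 1e-6 = 1e-11.

relative_resolution = 0.00001 * 1e-6   # 0.00001 ppm as a dimensionless fraction
equator_to_pole_m = 10_000_000.0       # 10,000 km in metres

equivalent_length_m = relative_resolution * equator_to_pole_m
print(round(equivalent_length_m * 1000.0, 6))   # in millimetres -> 0.1
```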
Why determine gravity g ?
In Geophysics we determine gravity g for:
- Determination of the geoid (the surface representing the mean sea level of the oceans and its extrapolation under the continents)
- Tectonics & Post-glacial rebound
- Influence of atmosphere, cryosphere & hydrosphere on the Earth
- Structure of the Earth’s interior
- Geology & mineral resource exploration
In Metrology we determine g for
- Calibration of relative gravimeters
- Force standards: e.g. pressure transducers
- Watt balance: to relate the kilogram to a natural constant
The absolute gravimeter FG5 is installed in the Walferdange Underground Laboratory.
The FG5 operates by the free-fall method. An object is dropped inside a vacuum chamber (the dropping chamber), and the descent of the freely falling object is monitored very accurately using a laser interferometer. The free-fall trajectory of the dropped object is referenced to a very stable active-spring system called a Superspring, which provides seismic isolation for the reference optic to improve the noise performance of the FG5.
The optical fringes generated in the interferometer provide a very accurate distance measurement system that can be traced to absolute wavelength standards. Very accurate and precise timing of the occurrence of these optical fringes is done using an atomic rubidium clock that is also referenced to absolute standards.
The fact that the measurement is directly tied to absolute standards kept at all of the national and international laboratories around the world is the reason why the FG5 is an ABSOLUTE GRAVIMETER. Absolute standards of length and time provide the means to achieve a calibrated gravity value that does not drift over time.
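The free-fall principle can be sketched in a few lines: simulate an ideal drop and recover g by fitting the quadratic trajectory, which is essentially what the instrument's processing does with its fringe-timing samples. The numbers below are invented for illustration and are not FG5 specifications.

```python
# Sketch (hypothetical numbers): recover g from free-fall trajectory data
# by least-squares fitting z(t) = z0 + v0*t + 0.5*g*t^2.
import numpy as np

g_true = 9.81234                 # assumed local gravity for the simulation (m/s^2)
t = np.linspace(0.0, 0.2, 700)   # the drop lasts a fraction of a second
z = 0.5 * g_true * t**2          # ideal fall distance, zero initial velocity

# Fit a quadratic; the coefficient of t^2 is g/2.
coeffs = np.polyfit(t, z, 2)
g_fit = 2.0 * coeffs[0]
print(round(g_fit, 5))           # recovers the assumed g
```

A real instrument fits the same model to thousands of interferometer fringe times per drop, with corrections for effects such as the vertical gravity gradient.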
To get more information please visit Micro-g Solutions Inc.
The Superconducting Gravimeter (SG) is installed in the Walferdange Underground Laboratory to measure gravity g at very high resolution. The SG uses persistent supercurrents, trapped in superconducting magnets, to produce an ultra-stable magnetic field that levitates a superconducting test mass (a sphere). The magnetic field is generated by persistent currents in two niobium coils that are superconducting below a temperature of 9.3 K (-263.85 °C).
The GWR Superconducting Gravimeter records gravity changes due to the deformation of the Earth from tidal forcing, tidal loading, atmospheric pressure loading, and seasonal changes in water storage.
Changes in gravity result from changes in the Earth's mass distribution and/or vertical deformations of the Earth's crust.
What do we learn about the Earth from these observations? The amplitude of the tides tells us what the Earth is made of and how the ocean tides circulate. We also learn how continental water moves around the planet, and about climate change (how ice mass varies with time).
Data are acquired by data-acquisition systems developed by ECGS and transferred remotely to our office.
For more information on principles and products, please visit the GWR Instruments Inc. (USA) webpage.
Superconducting Gravimeter’s Gravity Sensor Unit (GSU)
Principle of operation
A magnetic field produced by two superconducting coils levitates a 2.54 cm diameter spherical proof mass weighing 4-8 grams. The sphere tries to move up or down in response to changes in gravity, and a feedback voltage is applied to keep the sphere at its equilibrium position. This voltage is proportional to changes in gravity.
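Because the feedback voltage is proportional to the gravity change, converting an SG voltage record to gravity is a simple linear calibration. The scale factor below is a made-up illustrative value; a real SG is calibrated against an absolute gravimeter such as the FG5.

```python
# Sketch: converting SG feedback voltage to a gravity change.
# The calibration factor is purely illustrative, not an instrument value.
CAL_FACTOR_UGAL_PER_V = -78.0   # assumed calibration factor (microGal per volt)

def delta_g_ugal(feedback_volts: float) -> float:
    """Gravity change (microGal) implied by a feedback voltage reading."""
    return CAL_FACTOR_UGAL_PER_V * feedback_volts

print(delta_g_ugal(0.5))   # -> -39.0
```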
Metals are present everywhere around us and are one of the major materials upon which our economies are built. Economic development is deeply coupled with the use of metals. During the 20th century, the variety of metal applications in society grew rapidly. In addition to mass applications such as steel in buildings and aluminium in planes, more and more different metals are in use for innovative technologies, such as the speciality metal indium in LCD screens. Many metals will be needed in the future, and it will not be easy to provide them: in emerging economies especially, but also in industrialised countries, the demand for metals is increasing rapidly. Mining and production activities expand, and with them the environmental consequences of metal production. In this course, we will explore those consequences, and we will also explore options to move towards a more sustainable system of metals production and use. We will focus especially on the options for reaching a circular economy for metals: keeping metals in use for a very long time, to avoid having to mine new ones. This course is based on the reports of the Global Metals Flows Group of the International Resource Panel, part of UN Environment. An important theme that will come back each week is the UN Sustainable Development Goals, the SDGs: ambitious goals for measuring our progress towards a more sustainable world. We will use the SDGs as a touchstone for assessing the metals challenge, as well as the solutions we present in this course to solve that challenge.
3-1. WHAT IS A PULSE?
A pulse is produced when the left ventricle of the heart contracts. When this happens, blood is suddenly pushed from the ventricle into the main artery (the aorta). This sudden forcing of blood from the heart into the arteries causes two things to happen.
a. Artery Expansion. The sudden rush of blood increases the volume of blood in the arteries. In order to accept this increased volume, the arteries expand (stretch). As the arteries quickly contract (go back to normal size), blood is forced from the arteries, through the capillaries, and into the veins.
b. Pulse. In addition to the expansion of the arteries, a "wave" travels through the arteries. This wave is the pulse. All arteries have a pulse, but the pulse is easier to feel (palpate) when the artery is near the surface of the body.
3-2. WHAT IS PULSE RATE?
The pulse rate is the number of times that you can feel a pulse wave passing a point in one minute. Since a pulse wave occurs whenever the heart beats, the pulse rate equals the heartbeat rate. However, "taking a patient's pulse" means more than just determining his pulse rate. It also includes noting certain other factors about the pulse.
3-3. WHAT FACTORS ARE NOTED WHEN TAKING A PATIENT'S PULSE?
When taking a patient's pulse, you should note the patient's pulse rate, the strength of the pulse, and the regularity of the pulse. Most of the pulse characteristics discussed in this paragraph are illustrated in figure 3-1.
a. Pulse Rate.
(1) The normal adult has a pulse rate of about 72 beats each minute. Infants have higher average pulse rates. The normal pulse rate ranges based upon age are given below.
- Adults: 60 to 100 beats per minute.
- Children: 70 to 120 beats per minute.
- Toddlers: 90 to 150 beats per minute.
- Newborns: 120 to 160 beats per minute.
(2) Pulse rates that are outside the normal range are classified as tachycardia or bradycardia.
(a) Tachycardia. If the patient's pulse rate is over 100 beats per minute, the patient is said to have tachycardia. Tachycardia means "swift heart." Constant tachycardia could be a sign of certain diseases and heart problems. Often, however, tachycardia is only temporary. Temporary tachycardia can be caused by exercise, pain, strong emotion, excessive heat, fever, bleeding, or shock.
(b) Bradycardia. If the patient's pulse rate is below 60 beats per minute, the patient is said to have bradycardia. Bradycardia means "slow heart." Bradycardia can be a sign of certain diseases and heart problems. Certain medicines, such as digitalis, can result in bradycardia.
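The classifications above can be summarized in a small sketch, taking tachycardia as a rate above 100 beats per minute and bradycardia as a rate below the normal adult minimum of 60:

```python
# Sketch: classify an adult pulse rate using the thresholds in the text
# (normal adult range 60-100 beats per minute).
def classify_adult_pulse(rate_bpm: int) -> str:
    if rate_bpm > 100:
        return "tachycardia"   # "swift heart"
    if rate_bpm < 60:
        return "bradycardia"   # "slow heart"
    return "normal"

print(classify_adult_pulse(72))   # -> normal
print(classify_adult_pulse(110))  # -> tachycardia
print(classify_adult_pulse(45))   # -> bradycardia
```

Remember that a single out-of-range reading may be temporary (exercise, pain, emotion); classification alone does not replace clinical judgment.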
b. Strength. The strength (force) of the pulse is determined by the amount of blood forced into the artery by the heartbeat. A normal pulse has a normal strength. You will be able to identify a normal strength pulse with practice.
(1) Bounding. If the heart is pumping a large amount of blood with each heartbeat, the pulse will feel very strong. This strong pulse is called "bounding" pulse (as in "by leaps and bounds"). A bounding pulse can be caused by exercise, anxiety, or alcohol consumption.
(2) Weak. If the heart is pumping only a small amount of blood with each heartbeat, the pulse will be harder to detect. This type of pulse is called weak, feeble, or thready. If the pulse is weak, you may have trouble finding (palpating) the pulse at first.
(3) Strong. A strong pulse is stronger than a normal pulse but less forceful than a bounding pulse. Fever, anxiety, and exertion can cause a strong pulse; shock and hemorrhage (serious bleeding), in contrast, typically produce a weak, rapid pulse.
c. Rhythm. Rhythm refers to the evenness of the beats. In a regular pulse, the time between beats is the same (constant) and the beats are of the same strength.
(1) Irregular. A pulse is irregular when the rhythm does not have an even pattern. The time between beats may change, or the strength of the beats may change or the pulse may vary in both time between beats and strength.
(2) Intermittent. An intermittent pulse is a special type of irregular pulse. A pulse is intermittent when the strength does not vary greatly, but a beat is skipped (missed) either at regular or irregular intervals. If the missing beats in an intermittent pulse were present, then the pulse rhythm would be normal.
NOTE: Examples of some pulse patterns are illustrated in figure 3-1.
Figure 3-1. Pulse patterns.
3-4. WHICH ARTERY IS PALPATED WHEN A PULSE IS TAKEN?
There are several sites on the body where a pulse is normally taken. All arteries have a pulse, but it is easier to palpate (feel) the pulse at certain locations. It is easier to feel the pulse when the artery is near the surface of the skin and when there is firm tissue (such as a bone) beneath the artery. The three most common sites are the radial (wrist), carotid (throat), and brachial (inside of elbow). These and other sites are discussed below and illustrated in figure 3-2. The site or sites that you choose to use may vary depending upon the condition of the patient. For example, suppose that you are assisting someone who is bleeding severely from a wound in his thigh. After giving the person first aid to stop the bleeding, you will check the person's pulse at a point below the injury to make sure that your bandage has not cut off the blood circulation to the lower leg. You may take the pulse at the popliteal (behind the knee) site, the dorsalis pedis (top of the foot) site, and/or the posterior tibial (back of the ankle) site.
a. Radial. The radial pulse (the pulse taken using the radial artery) is taken at a point where the radial artery crosses the bones of the wrist. If the patient's hand is turned so that the palm is up, the radial pulse is taken on the thumb side of the wrist.
b. Carotid. The carotid pulse is taken on either side of the trachea (windpipe). The best location is the grooves located to the right and to the left of the larynx (Adam's apple).
c. Brachial. The brachial pulse is taken in the depression located about one-half inch above the crease on the inside (not the bony side) of the elbow. This site is used when taking the patient's blood pressure.
Figure 3-2. Sites for taking a pulse.
NOTE: All pulse sites except apical exist on both sides of the body. For example, one radial site exists on the right wrist and one exists on the left wrist.
d. Temporal. The temporal pulse is taken in the temple area on either side of the head. The temple area is located in front of the upper part of the ear. The pulse is felt just above a large, raised bony area called the zygomatic arch.
e. Ulnar. Like the radial pulse, the ulnar pulse is taken at the wrist. The radial pulse is taken over the artery on the thumb side of the wrist while the ulnar pulse is taken on the other side of the wrist. Both pulses are taken on the palm side of the wrist. The radial artery is normally preferred over the ulnar artery for taking the pulse because the radial artery is somewhat larger.
f. Femoral. The femoral pulse is taken in the groin area by pressing the right or left femoral artery against the ischium (the lower part of the pelvic bones located in the front part of the body).
g. Popliteal. The popliteal pulse is taken in the middle of the area located on the inside of the knee (the area opposite the kneecap).
h. Posterior Tibial. The posterior tibial pulse is taken at the top of the ankle or just above the ankle on the back, inside part of the ankle.
i. Dorsalis Pedis. The dorsalis pedis pulse is taken on the top portion of the foot just below the ankle. The pulse is taken in the middle of this area (not to the inside or outside).
j. Apical. Unlike the other sites, the apical pulse is not taken over an artery. Instead, it is taken over the heart itself. The apical pulse (actually, the heartbeat) can be felt over the apex of the heart (the pointed lower end of the heart). This site is located to the (patient's) left of the breastbone and two to three inches above the bottom of the breastbone. The apical pulse is easily heard when a stethoscope is used.
The human brain is a highly complex network of specialised cells that interact with each other in a coordinated manner to give us the tools we need not only to survive, but to live a happy and fulfilled life. Information is passed from one cell to the next by electrical and chemical signalling: chemical messengers (called neurotransmitters) bridge the gap between nerve cells (called the synaptic cleft), while within the cell itself information is transmitted as an electrical current along the cell's outer membrane.
How drugs affect ‘normal’ brain function
Taking drugs interferes with these processes. In particular, the so-called reward circuit of the brain — which is responsible for emotion, motivation and the continuation of actions that induce feelings of pleasure — is vulnerable to change. A normally functioning reward circuit reinforces life-sustaining activities such as eating and socialising. However, drugs can manipulate this pathway to encourage further drug use which, in turn, can become addictive behaviour. There are at least two ways that drug use disrupts communication between nerve cells:
Direct pathways — neurotransmitters (mainly dopamine) are constantly being released at the synaptic cleft, so that nerves are not able to rest between messages. This is the case when, for example, substances such as cocaine or methamphetamine are used.
Indirect pathways — neurotransmitters other than dopamine are blocked from being sent across synaptic clefts, due to the way the drugs mimic and, therefore, compete with other necessary neurotransmitters. Thus, with drugs such as marijuana or heroin, there is an increased amount of dopamine available for sending messages through the body.
Either scenario results in overstimulation of the brain’s reward circuit with dopamine. So instead of a normal response to natural behaviours, the nervous system is ‘flooded’ with the dopamine, producing euphoric effects. This process sets a reinforcing pattern in motion, teaching the brain and body to repeat the rewarding behaviour of abusing drugs.
Article: Medicalwriters.com GmbH
The center of the galaxy is not only a crowded place, it's blasted by shock waves, bathed in radiation and warped by powerful gravitational forces from the supermassive black hole at the region's heart.
Nevertheless, new research by astronomers at the Harvard-Smithsonian Center for Astrophysics shows that planets can still form here. They've discovered a cloud of hydrogen and helium plunging toward the galactic center and representing, they say, the shredded remains of a planet-forming disk orbiting an unseen star.
"This unfortunate star got tossed toward the central black hole. Now it's on the ride of its life, and while it will survive the encounter, its protoplanetary disk won't be so lucky," says lead author Ruth Murray-Clay.
The cloud was discovered last year by a team of astronomers using the Very Large Telescope in Chile, who suggest that it formed when gas streaming from two nearby stars collided.
But Murray-Clay and her colleague Avi Loeb have a different take on the discovery, pointing out that newborn stars retain a surrounding disk of gas and dust for millions of years. If one dived toward our galaxy's central black hole, radiation and gravitational tides would rip apart its disk in a matter of years.
They also identify a possible source of the stray star - a ring of stars known to orbit the galactic center at a distance of about one-tenth of a light-year. Astronomers have detected dozens of young, bright O-type stars in this ring, suggesting that hundreds of fainter sun-like stars are also present. Interactions between the stars could fling one inward, along with its accompanying disk.
Although this protoplanetary disk is being destroyed, the stars that remain in the ring can hold onto their disks - meaning they could still form planets despite their hostile surroundings.
As the star continues its plunge over the next year, more and more of the disk's outer material will be torn away, leaving only a dense core. The stripped gas will swirl down into the black hole, with friction heating it so much that it will glow in X-rays.
"It's fascinating to think about planets forming so close to a black hole," says Loeb. "If our civilization inhabited such a planet, we could have tested Einstein's theory of gravity much better, and we could have harvested clean energy from throwing our waste into the black hole." |
The complete system of naming chemical compounds is very complex and well beyond the scope of this course. However, we can look at some simple parts of the nomenclature to get an idea of how it works. In fact, the naming scheme used by chemists tells us about chemistry, because the nature of the material named is used in the choice of its name.
When elements from the far left of the periodic table (metals) react, they tend to lose electrons to form positive ions, or cations. When elements from the right react with metals, they gobble up these electrons to form negative ions, or anions. Compounds formed between metals and nonmetals where this takes place are called ionic compounds, and they are named from the ions that compose them.
Cations formed from metals have the same name as the metal. If the metal can form more than one positive ion, the charge state of the metal is put in Roman numerals within parentheses following the metal name. Sometimes an older naming scheme is used: when only two ions can be formed from a given metal, an -ous ending is given to the Latin name of the metal for the lower charged (less positive) ion and an -ic ending is given for the higher charged ion. Cations formed from non-metals are given an -ium ending. Here are some examples: Na+ is sodium ion, Fe2+ is iron(II) or ferrous ion, Fe3+ is iron(III) or ferric ion, and NH4+ is ammonium ion.
Monoatomic anions are named by shortening the name of the element and adding an -ide ending (chloride, oxide). Some simple polyatomic anions are also given the -ide ending (like hydroxide). Many polyatomic anions contain oxygen; these are called oxyanions. Usually, these anions come in groups that have the same charge but different numbers of oxygen atoms. The names of these species have a root derived from the non-oxygen atom, and the most common ion is given the -ate ending. Ions with more or fewer oxygen atoms are given different suffixes and sometimes prefixes: per-...-ate for one more oxygen than the -ate ion, -ite for one fewer, and hypo-...-ite for two fewer (for example, perchlorate ClO4-, chlorate ClO3-, chlorite ClO2-, hypochlorite ClO-).
Oxyanions that can combine with hydrogen ions (H+) are given the prefix hydrogen or dihydrogen, or, in the older literature, the prefix bi- (as in bicarbonate for hydrogen carbonate).
Ionic compounds are named by simply putting the cation name first and the anion name second (leaving out the word 'ion', since the compound is electrically neutral). For example, NaCl is sodium chloride, FeCl2 is iron(II) chloride (ferrous chloride), and K2SO4 is potassium sulfate.
Acids are named from the anions they are formed from, since we know the cation they contain is hydrogen ion. We just change the suffix of the anion and add the word 'acid': -ide anions give hydro-...-ic acids, -ate anions give -ic acids, and -ite anions give -ous acids.
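As a rough sketch, the anion-to-acid suffix rules can be expressed as a string transformation. Note that real stems are sometimes irregular (sulfate gives sulfuric acid, not "sulfic"), so this is a mnemonic, not a complete rule:

```python
# Sketch of the suffix rules for naming acids from their anions:
# -ide -> hydro...ic acid, -ate -> -ic acid, -ite -> -ous acid.
def acid_name(anion: str) -> str:
    if anion.endswith("ide"):
        return "hydro" + anion[:-3] + "ic acid"   # chloride -> hydrochloric acid
    if anion.endswith("ate"):
        return anion[:-3] + "ic acid"             # nitrate -> nitric acid
    if anion.endswith("ite"):
        return anion[:-3] + "ous acid"            # nitrite -> nitrous acid
    raise ValueError("unrecognized anion suffix")

print(acid_name("chloride"))  # -> hydrochloric acid
print(acid_name("nitrite"))   # -> nitrous acid
```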
Binary molecular compounds formed between non-metals do not usually form ions first (they are 'covalent'), so we simply use a prefix to state how many atoms of each non-metal are in the molecule. The element that is furthest left or down in the periodic table (most metal-like) is named first, and the second element is given an -ide ending. You don't use mono- on the first element, and you drop a vowel when the prefix and element name get strung together:
Examples of this type of naming are carbon dioxide (CO2), carbon monoxide (CO), and dinitrogen tetroxide (N2O4).
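The prefix rules can likewise be sketched as a small naming helper; the function name and argument layout here are invented for illustration:

```python
# Sketch: prefix-based naming of binary molecular compounds, following the
# rules above (no mono- on the first element; drop a doubled vowel, as in
# "monoxide" rather than "monooxide").
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}

def binary_name(el1: str, n1: int, anion_root: str, n2: int) -> str:
    first = el1 if n1 == 1 else PREFIXES[n1] + el1   # no mono- on first element
    prefix = PREFIXES[n2]
    if prefix.endswith(("a", "o")) and anion_root[0] == "o":
        prefix = prefix[:-1]                         # mono + oxide -> monoxide
    return f"{first} {prefix}{anion_root}ide"

print(binary_name("carbon", 1, "ox", 2))      # -> carbon dioxide
print(binary_name("carbon", 1, "ox", 1))      # -> carbon monoxide
print(binary_name("nitrogen", 2, "ox", 4))    # -> dinitrogen tetroxide
```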
Do you have to memorize these naming schemes? Yes. And these are just the beginning --- we haven't even begun to discuss organic nomenclature :)
The only alternative is to memorize each chemical's name uniquely (>10 million names) and learn new ones each day...
Materials That Reflect No Light
Solar cells, camera lenses, and LEDs could benefit from new antireflection coatings.
Unwanted reflections limit the performance of light-based technologies, such as solar cells, camera lenses, and light-emitting diodes (LEDs). In solar cells, for example, reflections mean less light can be converted into electricity. Now researchers at Rensselaer Polytechnic Institute (RPI), in Troy, NY, and semiconductor maker Crystal IS, in Green Island, NY, have developed a new type of nanostructured coating that can virtually eliminate reflections, potentially leading to dramatic improvements in optical devices. The work is published in the current issue of Nature Photonics.
The researchers showed that they can prevent almost all reflection of a wide range of wavelengths of light by “growing” nanoscale rods projected at specific angles from a surface. In contrast, conventional antireflective coatings work best only for specific colors, which is why, for example, eyeglasses with such coatings still show faint red or green reflections. Fred Schubert, professor of physics and electrical, computer, and systems engineering at RPI and one of the authors of the study, says that the material stops reflections from nearly all the colors of the visible spectrum, as well as some infrared light, and it also reduces reflections from light coming from more directions than conventional coatings do. As a result, he says, the total reflection is 10 times less than it is with current coatings.
Applied to a solar cell, the new coating would increase the amount of light absorbed and converted into electricity by a few percentage points, Schubert says. A more remarkable 40 percent improvement could be seen in LEDs, he says, in which a large amount of the light generated by the semiconductor is typically trapped inside the device by reflections. The work is part of a growing effort among researchers to alter the properties of materials, such as their optical properties, by controlling nanoscale structures.
To make less-reflective surfaces, the RPI engineers created a multilayered, porous coating that eases the transition as light moves from air into a solid material or as light is emitted from a semiconductor in an LED. Reflectivity is related to the difference between the amount that two substances, such as air and glass, refract or bend light. Reducing the difference reduces reflection where two materials meet. In the new coating, each successive layer bends light more as light moves from air into a substrate. Likewise, as in the example of an LED, light emerging from a semiconductor is bent less in each successive layer until it reaches the air.
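The benefit of a graded transition can be illustrated with the normal-incidence Fresnel formula, R = ((n1 - n2) / (n1 + n2))^2. The sketch below compares an abrupt air/aluminum-nitride step with a stack of intermediate indices; the intermediate values are chosen for illustration, and the simple sum ignores interference effects, so it understates what the real coating achieves:

```python
# Sketch: normal-incidence Fresnel reflectance at a single interface.
def reflectance(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

# Abrupt step: air (n = 1.0) directly onto aluminum nitride (n ~ 2.05).
direct = reflectance(1.0, 2.05)

# Graded transition through intermediate layers, including the article's
# n = 1.05 top layer; other values are illustrative.
indices = [1.0, 1.05, 1.3, 1.6, 1.9, 2.05]
graded = sum(reflectance(a, b) for a, b in zip(indices, indices[1:]))

print(f"{direct:.3f} vs {graded:.3f}")   # the graded stack reflects far less
```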
The theory behind this has been known for decades, says Steven Johnson, a professor of applied mathematics at MIT, but the challenge has been fabricating a structure that is both porous enough and small enough to work with the short wavelengths of visible light.
The RPI researchers made such a porous structure by depositing materials on a surface to create nanoscale rods. Tilting the surface makes it possible to grow the nanorods at an angle. The researchers found that by changing the angle of the nanorods, they can control the way the nanorods bend light–the index of refraction. Air has an index of refraction of very nearly one. The researchers were able to make a top layer of nanorods with what Schubert says is an unprecedented index of 1.05. (For comparison, glass has an index of refraction of 1.45, and a light-emitting semiconductor, aluminum nitride, has an index of about 2.05.) Each successive layer has a higher index of refraction until the last layer nearly matches the substrate. The top two layers incorporate glass nanorods. The bottom three are made of titania. The researchers tested the coating on aluminum nitride, but it should work on a variety of substrates, Schubert says.
“We have developed a new class of materials that has a refractive index that is lower than anything else–any other viable optical thin-film material that has been available in the past,” Schubert says. Since “everything in optics depends upon the refractive index,” he says it could have applications other than antireflective coatings. Indeed, the nanorods could be used to do the opposite, creating very highly reflective mirrors by pairing layers of nanorods that bend light very differently, rather than by creating a gradual transition.
Schubert is working with a spinoff company to commercialize the technology, and he anticipates that products could be available in three to five years. The technology will face competition with inexpensive conventional coatings as well as with other new nanostructured materials. “This is very elegant, beautiful work,” says Michael Rubner, a professor of materials science and engineering at MIT. “They’ve been able to get some exceptionally low refractive-index values for a coating. The key question is always going to be cost versus performance.” |
The strong interaction of the silver nanoparticles with light occurs because the conduction electrons on the metal surface undergo a collective oscillation when excited by light at specific wavelengths (Figure 2, left).
Known as a surface plasmon resonance (SPR), this oscillation results in unusually strong scattering and absorption properties.
In fact, silver nanoparticles can have effective extinction (scattering + absorption) cross sections up to ten times larger than their physical cross section.
The strong scattering cross section allows sub-100 nm nanoparticles to be easily visualized with a conventional microscope. When 60 nm silver nanoparticles are illuminated with white light, they appear as bright blue point-source scatterers under a dark-field microscope (Figure 2, right).
The bright blue color is due to an SPR that is peaked at a 450 nm wavelength.
A unique property of spherical silver nanoparticles is that this SPR peak wavelength can be tuned from 400 nm (violet light) to 530 nm (green light) by changing the particle size and the local refractive index near the particle surface.
Even larger shifts of the SPR peak wavelength, out into the infrared region of the electromagnetic spectrum, can be achieved by producing silver nanoparticles with rod or plate shapes.
The size and shape of metal nanoparticles are typically measured by analytical techniques such as TEM, scanning electron microscopy (SEM) or atomic force microscopy (AFM). Measuring the aggregation state of the particles requires a technique to measure the effective size of the particles in solution such as dynamic light scattering (DLS) or analytical disc centrifugation. However, due to the unique optical properties of silver nanoparticles, a great deal of information about the physical state of the nanoparticles can be obtained by analyzing the spectral properties of silver nanoparticles in solution. The spectral response of silver nanoparticles as a function of diameter is shown in Figure 3, left. As the diameter increases, the peak plasmon resonance shifts to longer wavelengths and broadens. At diameters greater than 80 nm, a second peak becomes visible at a shorter wavelength than the primary peak. This secondary peak is due to a quadrupole resonance that has a different electron oscillation pattern than the primary dipole resonance. The peak wavelength, the peak width, and the effect of secondary resonances yield a unique spectral fingerprint for a plasmonic nanoparticle with a specific size and shape. Additionally, UV-Visible spectroscopy provides a mechanism to monitor how the nanoparticles change over time. When silver nanoparticles aggregate, the metal particles become electronically coupled and this coupled system has a different SPR than the individual particles. For the case of a multi-nanoparticle aggregate, the plasmon resonance will be red-shifted to a longer wavelength than the resonance of an individual nanoparticle, and aggregation is observable as an intensity increase in the red/infrared region of the spectrum. This effect can be observed in Figure 3, right, which displays the optical response of a silver nanoparticle solution destabilized by the addition of saline. 
Carefully monitoring the UV-Visible spectrum of the silver nanoparticles over time is therefore a sensitive way to determine whether any nanoparticle aggregation has occurred.
Figure 3. (Left) Extinction (scattering + absorption) spectra of silver nanoparticles with diameters ranging from 10-100 nm at mass concentrations of 0.02 mg/mL. (Right) Extinction spectra of silver nanoparticles after the addition of a destabilizing salt solution.
For silver nanoparticle solutions that have not agglomerated and whose spectral shape is identical to that of the as-received suspension, the UV/Visible extinction spectra can be used to quantify the nanoparticle concentration. The concentration of silver nanoparticle solutions is calculated using the Beer-Lambert law, which relates the optical density (OD, a measure of the amount of light transmitted through a solution) linearly to concentration. Because of this linear relationship, a measured OD can be converted directly into a nanoparticle concentration.
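As a sketch of that calculation: at a fixed path length, OD is proportional to concentration, so an unknown sample can be read off against a reference suspension of known concentration measured in the same cuvette. The reference OD value below is purely illustrative, not a measured property of any particular product.

```python
def concentration_from_od(od_sample, od_reference, conc_reference_mg_ml):
    # Beer-Lambert: optical density is proportional to concentration at a
    # fixed path length, so concentration scales linearly with measured OD.
    return conc_reference_mg_ml * od_sample / od_reference

# e.g. if a 0.02 mg/mL stock reads OD 1.0, a sample reading OD 0.5
# contains 0.01 mg/mL
print(concentration_from_od(0.5, od_reference=1.0, conc_reference_mg_ml=0.02))
```

Note that this only holds while the spectral shape is unchanged; aggregation distorts the spectrum and breaks the linear OD-to-concentration relationship.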
When nanoparticles are in solution, molecules associate with the nanoparticle surface to establish a double layer of charge that stabilizes the particles and prevents aggregation. Aldrich Materials Science offers several silver nanoparticles suspended in a dilute aqueous citrate buffer, which weakly associates with the nanoparticle surface. This citrate-based agent was selected because the weakly bound capping agent provides long term stability and is readily displaced by various other molecules including thiols, amines, polymers, antibodies, and proteins.
Silver nanoparticles are being used in numerous technologies and incorporated into a wide array of consumer products that take advantage of their desirable optical, conductive, and antibacterial properties.
- Diagnostic Applications: Silver nanoparticles are used in biosensors and numerous assays where the silver nanoparticle materials can be used as biological tags for quantitative detection.
- Antibacterial Applications: Silver nanoparticles are incorporated in apparel, footwear, paints, wound dressings, appliances, cosmetics, and plastics for their antibacterial properties.
- Conductive Applications: Silver nanoparticles are used in conductive inks and integrated into composites to enhance thermal and electrical conductivity.
- Optical Applications: Silver nanoparticles are used to efficiently harvest light and for enhanced optical spectroscopies including metal-enhanced fluorescence (MEF) and surface-enhanced Raman scattering (SERS).
There is growing interest in understanding the relationship between the physical and chemical properties of nanomaterials and their potential risk to the environment and human health.
The availability of panels of nanoparticles where the size, shape, and surface of the nanoparticles are precisely controlled allows for the better correlation of nanoparticle properties to their toxicological effects.
Sets of monodisperse, unaggregated, nanoparticles with precisely defined physical and chemical characteristics provide researchers with materials that can be used to understand how nanoparticles interact with biological systems and the environment.
Due to the increasing prevalence of silver nanoparticles in consumer products, there is a large international effort underway to verify silver nanoparticle safety and to understand the mechanism of action for antimicrobial effects.
Colloidal silver has been consumed for decades for its perceived health benefits1 but detailed studies on its effect on the environment have just begun. Initial studies have demonstrated that effects on cells and microbes are primarily due to a low level of silver ion release from the nanoparticle surface.2 The ion release rate is a function of the nanoparticle size (smaller particles have a faster release rate), the temperature (higher temperatures accelerate dissolution), and exposure to oxygen, sulfur, and light.
In all studies to date, silver nanoparticle toxicity has been much less than that of an equivalent mass loading of silver salts.
Silver nanowires are an exciting class of silver nanoparticles that have been studied as possible components in many advanced technology applications.
- Conductive Coatings: Silver nanowires can be used to provide conductive coatings for transparent conductors and flexible electronics.
- Plasmonic Antennas: Metallic nanoparticles attached to silver nanowires function as antennas, enhancing plasmonic activity for sensing and imaging applications.
- Molecular Sensing: Single layers of silver nanowires have been used to construct arrays for molecule specific sensing in conjunction with Raman Spectroscopy.
- Nanocomposites: Silver nanowires have been studied as components of nanocomposites and can show high dielectric constants in such systems.
The unique properties of silver nanoparticles make them ideal for numerous technologies, including biomedical, materials, optical, and antimicrobial applications, as well as for use in nanotoxicology studies.
Aldrich Materials Science offers a variety of silver nanoparticles including: 10 nm, 20 nm, 40 nm, 60 nm, and 100 nm in diameter with a citrate-stabilized surface at concentrations of 0.02 mg/mL.
This latest study, published in the July 1 issue of Nature, describes the use of X-ray lasers to control the behavior of electrons. It could help future researchers achieve atomic-scale images of biological molecules and lead to improved chemical processes.
The team’s report describes a procedure in which they tuned Linac Coherent Light Source (LCLS) pulses to strip selected electrons, one by one, from atoms of neon gas. By varying the photon energies of the pulses, they could do it from the outside towards inside. Moreover, they even managed to perform a much more difficult task: Peeling it from the inside out, creating so-called “hollow atoms”. "Until very recently, few believed that a free-electron X-ray laser was even possible in principle, let alone capable of being used with this precision," said William Brinkman, director of the Department of Energy’s Office of Science. "That’s what makes these results so exciting."
Researchers from five different institutions compose the team, led by Argonne National Laboratory physicist Linda Young. "No one has ever had access to X-rays of this intensity,” she said. “The way in which ultra-intense X-rays interact with matter was completely unknown. It was important to establish these basic interaction mechanisms." SLAC‘s Joachim Stöhr, director of the LCLS, adds: "When we thought of the first experiments with LCLS ten years ago, we envisioned that the LCLS beam may actually be powerful enough to create hollow atoms, but at that time it was only a dream. The dream has now become reality."
This study continues a previous report, published by physicist Nora Berrah of Western Michigan University in Physical Review Letters. Her work describes the first experiments on molecules; she and her group also created hollow atoms, in this case within molecules of nitrogen gas. They found surprising differences in the way short and long laser pulses of exactly the same energies stripped and damaged the nitrogen molecules. "We just introduced molecules into the chamber and looked at what was coming out there, and we found surprising new science," said Matthias Hoener, a postdoctoral researcher in Berrah’s group. "Now we know that by reducing the pulse length, the interaction with the molecule becomes less violent."
Both teams used LCLS. It forms images by scattering X-ray light off an atom, molecule, or larger sample of material; when mirrors tightly focus the LCLS X-rays, each powerful laser pulse destroys any sample it hits. The key to this innovative use of the X-ray laser is minimizing damage, since certain types of damage are not instantaneous and only develop with time. Not surprisingly, the two teams found that the shorter the laser pulse, the fewer electrons are stripped away from the atom or molecule, meaning less damage occurs.
Young’s experiments deal with removing electrons from atoms using lasers. To understand her latest work, one should remember that atoms and molecules are held together by bound electrons, which organize themselves in layer-like orbits; energy increases from the innermost layers towards the outside.
Young’s work is not the first to experiment with intense optical lasers to strip neon atoms; however, it is the first time that researchers understand how ultra-intense X-ray lasers do this. According to the study, at low photon energies the lasers remove the outer electrons, leaving the inner electrons untouched. However, at higher photon energies, the elimination of inner electrons occurs first; then the outer electrons cascade into the empty inner core, only for later parts of the same X-ray pulse to eradicate them as well. Even within the span of a single pulse there may be times when both inner and outer electrons are missing, creating a hollow atom that is transparent to X-rays.
This new way to explore atomic structure and dynamics offers improvement to current research methods. "This transparency associated with hollow atoms could be a useful property for future imaging experiments, because it decreases the fraction of photons doing damage and allows a higher percentage of photons to scatter off the atom and create the image," Young said. Application of this phenomenon would also allow researchers to control how deeply an intense X-ray pulse penetrates into a sample, thus making research on nanoclusters of atoms, protein nanocrystals, and even individual viruses much easier.
TFOT also covered the discovery of a single-atom transistor, which is crucial for the development of future compact computers, and a peek at the world’s thinnest material, made at Northwestern University. Another related TFOT story is the creation of molecular gear at the nanoscale level, made by scientists from A*STAR.
For more information about the process of unpeeling atoms and molecules, see Stanford University’s press release.
By Anupum Pant
B.F. Skinner was an American psychologist, a behaviorist, and a social philosopher. He was also the inventor of the operant conditioning chamber – a.k.a. the Skinner box – a box used to study animal behaviour. For example, you can use it to train an animal to perform certain actions in response to some input, like light or sound.
Using one of his favourite animals, he designed an experiment in which he trained pigeons to examine the formation of superstitious beliefs in animals. Here’s what he did.
He placed a couple of pigeons in his setup, which was designed to deliver food to them at certain intervals. One thing was certain: the timing of the food delivery by this apparatus was completely unrelated to the behaviour or actions of the birds.
And yet, after some time in this automated setup, the pigeons developed certain associations which made them believe that the food came when they did something. They had developed superstitious beliefs.
One bird was conditioned to turn counter-clockwise about the cage, making two or three turns between reinforcements. Another repeatedly thrust its head into one of the upper corners of the cage. A third developed a ‘tossing’ response, as if placing its head beneath an invisible bar and lifting it repeatedly. Two birds developed a pendulum motion of the head and body, in which the head was extended forward and swung from right to left with a sharp movement followed by a somewhat slower return. – Wikipedia
The bird behaviour isn’t much different from what humans do…
The pigeons started believing in a causal relationship between their behaviour and the delivery of food, even though no such relationship existed.
It’s almost like humans blowing on dice, or throwing them harder, to make a favourable number appear – even though neither blowing on dice nor throwing them harder has any causal relationship with a good number turning up.
Similarly, when people bowl a bowling ball and twist their bodies to the right to make the ball go right, they have unknowingly developed a superstitious belief – just like the pigeons – that there’s a causal relationship between turning their bodies and the curving of the ball. In reality, there’s no such link. The ball goes where it has to, irrespective of how they turn their bodies.
Just like in the superstitious pigeons’ case, the food would have appeared anyway. The pigeons didn’t have to do anything to get it.
At the centre of every atom is a nucleus containing protons and neutrons. Together, protons and neutrons are known as nucleons. Around this core of the atom, a certain number of electrons orbit in shells.
Protons, neutrons and electrons are referred to as subatomic particles. The electrons orbit around the centre of the atom due to the charges present: protons have a positive charge, neutrons are neutral and electrons have a negative charge. It is the electromagnetic force – one of the four fundamental forces of nature – that keeps the electrons in orbit due to these charges. It acts between charged objects – such as inside a battery – by the interaction of photons, which are the basic units of light.
An atom is about one tenth of a nanometre in diameter: 43 million iron atoms lined up side by side would produce a line only one centimetre in length. However, most of an atom is empty space. The nucleus of the atom accounts for only a 10,000th of the overall size of the atom, despite containing almost all of the atom’s mass. Protons and neutrons have about 2,000 times more mass than an electron, and the electrons orbit the nucleus at a comparatively vast distance.
An atom represents the smallest part of an element that can exist by itself. Each element’s atoms have a different structure. The number of protons inside a specific element is unique. For example, carbon has six protons whereas gold has 79. However, some elements have more than one form. The other forms – known as isotopes – of an atom will have the same number of protons but a different number of neutrons. For example, hydrogen has three forms which all have one proton; tritium has two neutrons, deuterium has one neutron and hydrogen itself has none.
As different atoms have different numbers of protons and neutrons, they also have different masses, which help determine the properties of an element. Greater mass does not mean a bigger atom: an atom with more protons can be smaller, as its electrons orbit more closely to the nucleus due to a stronger electromagnetic force. For example an atom of sulphur, which has 16 protons and 16 neutrons, has the same mass as 32 hydrogen atoms, which each have one proton and no neutrons.
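The mass comparison can be checked with simple bookkeeping in atomic mass units, treating protons and neutrons as roughly 1 u each and neglecting the electrons:

```python
def approx_mass_u(protons, neutrons):
    # Approximate atomic mass in atomic mass units (u): protons and
    # neutrons each contribute ~1 u; electron mass is negligible here.
    return protons + neutrons

sulphur = approx_mass_u(16, 16)   # 32 u
hydrogen = approx_mass_u(1, 0)    # 1 u
print(sulphur // hydrogen)        # 32 hydrogen atoms per sulphur atom
```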
What makes up an atom?
Shell – Each shell can hold a different number of electrons. The first can hold 2, then 8, 18, 32 and so on.
Protons – A stable elementary particle with a positive charge equal to the negative charge of an electron. A proton can exist without a neutron, but not vice versa.
Electron – An elementary particle (one of the basic particles of matter), an electron has almost no mass and a negative charge.
Neutrons – An elementary particle with a neutral charge and the same mass as a proton. The number of neutrons defines the other forms of an element.
Nucleus – Held together by the strong nuclear force, the strongest force in nature, the nucleus tightly binds the protons and neutrons.
Quantum jump – An electron releases or absorbs a certain amount of energy when it jumps from one shell to another, known as a quantum leap.
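The shell capacities listed above follow a simple pattern – the n-th shell holds up to 2n² electrons – which can be generated directly:

```python
def shell_capacity(n):
    # Maximum electron capacity of the n-th shell follows 2 * n^2,
    # giving the sequence 2, 8, 18, 32, ...
    return 2 * n * n

print([shell_capacity(n) for n in range(1, 5)])  # [2, 8, 18, 32]
```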
Fact about the atomic bomb
Atomic bombs are notorious for their devastating power. By harnessing the energy in the nucleus of an atom, atomic bombs are one of the most powerful man-made weapons. In 1939, Albert Einstein and several other scientists told the USA of a process of purifying uranium, which could create a giant explosion known as an atomic bomb. This used a method known as nuclear fission to ‘split’ atoms and release a huge amount of energy.
The only two bombs ever used in warfare were a uranium bomb dropped on Hiroshima and a plutonium bomb dropped on Nagasaki in 1945, at the end of World War II. The effects were frighteningly powerful, and since then no atomic bomb has been used as a weapon.
9th grade math books. Are you looking for books for ninth grade math students to better understand 9th grade math topics?
Your child may need some help with his or her ninth grade math class. Or you may be looking for 9th grade math resources for your students. TuLyn is the right place. Learning ninth grade math is now easier.
We have hundreds of books for 9th grade math students to practice. This page lists books on ninth grade math. You can navigate through these pages to locate our 9th grade math books.
Math Games: 180 Reproducible Activities to Motivate, Excite, and Challenge Students, Grades 6-12
: 9780787970819 : Judith A. Muschla, Gary Robert Muschla : Paperback
This ninth grade math book "Math Games: 180 Reproducible Activities to Motivate, Excite, and Challenge Students, Grades 6-12" is a great Ninth Grade math resource.
Mathematics for Elementary Teachers: A Contemporary Approach
: 9780470105832 : Gary L. Musser, Blake E. Peterson, William F. Burger : Hardcover : 8
Click to expand
Click to hide
Now in its eighth edition, this text masterfully integrates skills, concepts, and activities to motivate learning. It emphasises the relevance of mathematics to help students learn the importance of the information being covered. This approach ensures that they develop a solid mathematics foundation and discover how to apply the content in the real world.
This ninth grade math book "Mathematics for Elementary Teachers: A Contemporary Approach" is a great Ninth Grade math resource.
The Math Teacher's Book Of Lists: Grades 5-12, 2nd Edition
: 9780787973988 : Judith A. Muschla, Gary Robert Muschla : Paperback : 2
Click to expand
Click to hide
This is the second edition of the bestselling resource for mathematics teachers. This time-saving reference provides over 300 useful lists for developing instructional materials and planning lessons for middle school and secondary students. Some of the lists supply teacher background; others are to copy for student use, and many offer new twists to traditional classroom topics. For quick access and easy use, the lists are numbered consecutively, organized into sections focusing on the different areas of math, and printed in a large 8-1/2" x 11" lay-flat format for easy photocopying. Here's an overview of the ready-to-use lists you'll find in each section:
I. NUMBERS: THEORY AND OPERATIONS presents 40 lists including classification of real numbers, types of fractions, types of decimals, rules for various operations, big numbers, and mathematical signs and symbols.
II. MEASUREMENT contains over 30 lists including things that measure, measurement abbreviations, the English and Metric Systems, and U.S. money – coins and bills.
III. GEOMETRY offers more than 50 lists covering topics such as lines and planes, types of polygons, types of quadrilaterals, circles, Pythagorean triples, and formulas for finding area and volume.
IV. ALGEBRA gives you over 40 lists including how to express operations algebraically, powers and roots, common factoring formulas, quadratic functions, and types of matrices.
V.TRIGONOMETRY AND CALCULUS provides more than 30 lists including the quadrant signs of the functions, reduction formulas, integration rules, and natural logarithmic functions.
VI. MATH IN OTHER AREAS offers more than 30 lists that tie math to other content areas, such as descriptive statistics, probability and odds, numbers in popular sports, and some mathematical facts about space.
VII. POTPOURRI features 16 lists that explore the various aspects of math including, famous mathematicians through history, world firsts, math and superstition, and the Greek alphabet.
VIII. SPECIAL REFERENCE LISTS FOR STUDENTS provides 10 lists of interest to students such as overcoming math anxiety, steps for solving word problems, and math web sites for students.
IX. LISTS FOR TEACHERS' REFERENCE contains 25 lists such as how to manage a cooperative math class, sources of problems-of-the-day, how to have a parents' math night, and math web sites for teachers.
X. REPRODUCIBLE TEACHING AIDS contains an assortment of helpful reproducibles including number lines, fraction strips, algebra tiles, and various nets for making 3-D geometric shapes.
This ninth grade math book "The Math Teacher`s Book Of Lists: Grades 5-12, 2nd Edition" is a great Ninth Grade math resource.
How Others Use Our Site
I teach math to 9th through 12th grade at a small city school in North Carolina. Knowing that each student in NC has to score level 3 or 4 on the EOC Algebra 1 test to graduate, any real-life application type problems would be beneficial to the success of my students. Knowing also that matrices are heavily tested on the NC EOC test, this website appears that it would be helpful. Our textbooks do not offer many examples of matrices. Thank you.
Janus and Tethys demonstrate the main difference between small moons and large ones. It's all about the moon's shape.
Moons like Tethys (660 miles or 1,062 kilometers across) are large enough that their own gravity is sufficient to overcome the material strength of the substances they are made of (mostly ice in the case of Tethys) and mold them into spherical shapes. But small moons like Janus (111 miles or 179 kilometers across) are not massive enough for their gravity to form them into a sphere. Janus and its like are left as irregularly shaped bodies.
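A rough way to see where the dividing line falls: a body relaxes into a sphere once the pressure generated by its own gravity exceeds the strength of the material it is made of. The sketch below uses illustrative assumptions (solid water ice, a material strength of ~1 MPa) and a uniform-sphere pressure formula, so it is only an order-of-magnitude estimate, not a precise threshold:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
DENSITY_ICE = 930.0  # kg/m^3, assumed density of solid water ice
STRENGTH = 1e6       # Pa, assumed material strength of cold ice

# Central pressure of a uniform sphere: P = (2/3) * pi * G * rho^2 * R^2.
# Setting P equal to the material strength and solving for R gives a rough
# radius above which gravity wins and the body rounds itself out.
radius_m = math.sqrt(3 * STRENGTH / (2 * math.pi * G * DENSITY_ICE**2))
print(f"threshold radius ~ {radius_m / 1000:.0f} km")  # on the order of 100 km
```

On these assumptions the threshold diameter comes out at a couple of hundred kilometres, consistent with Janus (179 km) staying irregular while Tethys (1,062 km) is round.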
Saturn's narrow F ring and the outer edge of its A ring slice across the scene.
This view looks toward the unilluminated side of the rings from about 0.23 degrees below the ring plane. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera on Oct. 27, 2015.
The view was obtained at a distance of approximately 593,000 miles (955,000 kilometers) from Janus. Image scale at Janus is 3.7 miles (6 kilometers) per pixel. Tethys was at a distance of 810,000 miles (1.3 million kilometers) for an image scale of 5 miles (8 kilometers) per pixel.
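The quoted image scales follow from the camera's angular resolution via the small-angle approximation. The ~6 microradian-per-pixel figure below is an assumed round number for the narrow-angle camera's resolution, used only to show that the arithmetic is consistent:

```python
IFOV_RAD_PER_PX = 6e-6  # assumed narrow-angle camera resolution, rad/pixel

def image_scale_km_per_px(distance_km):
    # Small-angle approximation: ground scale = distance * angular pixel size
    return distance_km * IFOV_RAD_PER_PX

print(image_scale_km_per_px(955_000))    # ~5.7 km/pixel at Janus's distance
print(image_scale_km_per_px(1_300_000))  # ~7.8 km/pixel at Tethys's distance
```

Both values round to the scales quoted in the caption above.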
The Cassini Solstice Mission is a joint United States and European endeavor. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter was designed, developed and assembled at JPL. The imaging team consists of scientists from the US, England, France, and Germany. The imaging operations center and team lead (Dr. C. Porco) are based at the Space Science Institute in Boulder, Colo.
Researchers are still trying to understand what causes CFS so
they can find a cure.
“We think that it’s caused by an infection that results in
long-term abnormalities in the body’s nervous system and immune
response, but this is all theoretical,” says Rowe. Although
researchers can’t pin down the viral or bacterial infection,
there is evidence that CFS is linked to nervous system and immune system problems.
“Some studies have found elevated levels of inflammatory
cytokines in the patients’ cerebral spinal fluid, which usually
show up when a flu or cancer happens,” Fleming says. This
suggests that CFS might cause an immune
activation in the central nervous system, or autoimmunity.
“We also think there’s an issue with the autonomic nervous
system, which controls involuntary body functions, because many
patients have a hard time getting blood flow to the brain when
upright and develop a rapid heart rate,” Rowe says.
There’s another theory that CFS is caused by central
sensitization, Fleming says, which basically means someone
is hypersensitive to stimuli like noise, light, heat, cold, or
electricity. So their brain sends abnormal pain signals to the
central nervous system from normal stimuli that don’t bother
other people. It could explain why CFS patients feel so much
pain, get frequent headaches, or feel so exhausted from
exercise, Fleming says.
Tom Baumiller’s research bridges the gap between ecological and evolutionary time scales: he studies how organisms interact with their physical and biotic environment, and explores the ecological and evolutionary consequences of these interactions. Crinoids are favorite subjects, in part because modern representatives can be studied in situ and in the lab, and the group has a rich and long fossil record.
Documenting biotic interactions has been a major challenge. This is particularly daunting when dealing with fossils, but even interactions in the Recent are often difficult to document. For example, it was only recently that we found today’s sea urchins biting pieces off stalks of live sea lilies. Since sea lilies can shed their stalks and crawl, Baumiller and his colleagues postulated that stalk shedding and crawling evolved as an escape from slow-moving benthic predators.
Sea urchin bite marks are just one sign of predation: drill holes are more common and have been more widely studied. Drill holes in crinoids and blastoids indicate that platyceratids (a group of Paleozoic gastropods) possessed the ability to drill, and while they may also have drilled brachiopods and other invertebrates, drilling remained a minor strategy in the Paleozoic compared with its subsequent ubiquity. Whether this was due to intrinsic constraints or to differences in the types of available prey remains unanswered.
Injuries to crinoid arms are also an indicator of biotic interactions, but since arms regenerate, such injuries are ephemeral. Theoretical treatment indicates that the frequency of such injuries records predation pressure, an important parameter for studying the ecological and evolutionary impact of predation. A project in Honduras aims to assess crinoid injuries as a function of depth (450–2,000 ft) to test the hypothesis that fish predation drove stalked crinoids from the shallower habitats where they thrived in the geologic past.
3 memory techniques for permanently memorising hundreds of pieces of information in little time.
A good memory can make the difference in many contexts; at school, university or work. Whether we want it or not, knowing how to memorise often keeps us out of trouble. How would you feel if you could prepare for an exam in just a few days? Memorise a work report in a handful of minutes? Remember the names of everyone you meet? (and you know how much this counts for interpersonal relationships).
How to memorise?
If you have learned how to speed read, now it is time to improve your memory. Here are 3 memory techniques for remembering more information in less time, and for good.
Our brain is programmed to memorise millions of pieces of information every day, without us even realising it. This memorisation is immediate (around a millisecond) and permanent. But why can’t we remember small pieces of information?
Improving your own memory means being able to better access memories you already have.
Think of web 2.0: what was one of the most efficient techniques for cataloguing and organising millions of files uploaded by web users? Tags. Small keywords to associate to an image, an article or a video to identify them in the internet ocean.
There is a memorisation technique that uses so-called “mental tags”. This is how it works.
Every time you have to remember names, dates, numbers, facts or concepts, follow this strategy:
- Create a tag in your mind to identify the origin of the information you have to remember. For example: “Prof. Smith’s lesson”.
- Create a tag to identify the object of the information you have to remember. If it’s an economics lesson on the causes of a recession, use the tag “recession”
- Create a funny image to associate the origin tag to the object tag. In our example, you could think of Prof. Smith who is beginning to sing a “memorable” song on stage about stock exchange indexes and recession. The funny image will stimulate the so-called emotional memory, particularly useful for quickly remembering information.
- Finally, create an image for identifying key concepts to remember. In our little story, you could use the members of a band, associating each of them with a keyword: the guitarist John Inflation, the drummer Frank Liquidity, etc. I think you get the point.
And now try to forget Prof. Smith’s lesson on recessions, if you can!
Sometimes you do not have enough time available to study or memorise. Maybe you have to prepare for an exam in a few days, or you have to remember important information without being able to write it down or… you simply want to remember the number of a girl you met at the club and your phone is dead!!!
In these cases you can use a technique called “immediate memory”. Here are the 5 stages to apply it:
- Believe. Convince yourself that you will remember the material (it’s a bit like pressing the red REC button).
- Want. Desire to remember the information (this will reinforce your mnemonic abilities).
- Visualise. Mentally look at or repeat the information once, in a clear way.
- Command. Order your brain to remember the material (feel like an idiot doing this? Great: the stronger the emotion you associate with the memory, the easier it is to remember).
- Review. Look at the material one final time.
Your grandfather still remembers a poem learned in primary school, and yesterday on Facebook you recognised the name of a secondary school classmate? Remembering information for years is not so difficult; you just have to apply a simple technique of mental revision. It’s great if you want to learn how to study for A-level exams, university exams, or any other test you will face in life:
- Every 20 minutes studying, make a list of points that you want to remember and go over them for 5 minutes.
- At the end of the day (before going to sleep), revise the list of key points for 5 minutes.
- After 3 days, revise the list of key points for the last time for 5 minutes.
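The three review points above can be sketched as a tiny schedule generator. The 12-hour "end of day" offset is an assumption for illustration; the text only says "before going to sleep":

```python
from datetime import datetime, timedelta

# Review points from the list above, as offsets from the start of a
# 20-minute study block.
REVIEW_OFFSETS = [
    ("review key points (5 min)", timedelta(minutes=20)),
    ("end-of-day review (5 min)", timedelta(hours=12)),  # assumed bedtime gap
    ("final review (5 min)", timedelta(days=3)),
]

def revision_times(start):
    # Return (label, datetime) pairs for each planned review.
    return [(label, start + offset) for label, offset in REVIEW_OFFSETS]

for label, when in revision_times(datetime(2024, 1, 1, 9, 0)):
    print(when, "-", label)
```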
These techniques were part of my university studying method; I’m certain that, if you know how to apply them with dedication, you will also attain surprising results. |
The Chinese stripe-necked turtle is an opportunistic omnivore, which feeds on a variety of plant and animal matter. However, the composition of the diet alters between sexes and age groups, with mature females being more herbivorous, and males and young females being mainly carnivorous. Large females feed chiefly on a large plant, Murdannia keisak, growing on the banks of rivers, while males feed primarily on aquatic snails (Physa acuta), and the larvae and pupae of flies (mainly blackflies). Other food items eaten by males and females include the seeds of knotweed (Polygonum species), plant shoots and roots, and terrestrial insects (4) (6).
The Chinese stripe-necked turtle is believed to nest from late March to early June, when it lays a clutch of 7 to 17 eggs. The nests in which the eggs are laid are often visited by predators; spiders (Lycosidae species) have been seen feeding on the eggs, and dogs are also thought to be a potential predator of the eggs and hatchlings. Turtle hatchlings have first been seen in early August (4).
Leishmaniasis is a common parasitic disease that is endemic to most tropical areas, particularly in the Old World. The parasite is rarely transmitted from person to person and usually requires that a person be bitten by an infected sandfly, which is the most common vector of the disease. Recent fossil evidence has shown that the parasite that causes the disease has existed at least since the time of the dinosaurs as it was found in a sandfly trapped in amber.
The disease has two common forms:
- Cutaneous leishmaniasis, which is characterized by one or more large sores on the skin. It results when the parasite is generally limited to the skin's surface.
- Visceral leishmaniasis, which is characterized by multiple organ failure. It results when the parasite enters the bloodstream.
Among parasitic diseases, only malaria kills more people each year than leishmaniasis. About 500,000 people a year develop the disease, and 60,000 die from it.
Leishmaniasis is difficult and dangerous to treat, but treatment is usually successful. Antimony, though otherwise a toxin to humans, is known to be effective against the parasite, but the reason for this is unknown. Some cases resolve spontaneously without treatment. |
Syncope is a transient, self-limited loss of consciousness due to acute global impairment of cerebral blood flow. The onset is rapid, duration brief, and recovery spontaneous and complete. Other causes of transient loss of consciousness need to be distinguished from syncope; these include seizures, vertebrobasilar ischemia, hypoxemia, and hypoglycemia. A syncopal prodrome (presyncope) is common, although loss of consciousness may occur without any warning symptoms. Typical presyncopal symptoms include dizziness, lightheadedness or faintness, weakness, fatigue, and visual and auditory disturbances. The causes of syncope can be divided into three general categories: (1) neurally mediated syncope (also called reflex syncope), (2) orthostatic hypotension, and (3) cardiac syncope.
Neurally mediated syncope comprises a heterogeneous group of functional disorders that are characterized by a transient change in the reflexes responsible for maintaining cardiovascular homeostasis. Episodic vasodilation and bradycardia occur in varying combinations, resulting in temporary failure of blood pressure control. In contrast, in patients with orthostatic hypotension due to autonomic failure, these cardiovascular homeostatic reflexes are chronically impaired. Cardiac syncope may be due to arrhythmias or structural cardiac diseases that cause a decrease in cardiac output. The clinical features, underlying pathophysiologic mechanisms, therapeutic interventions, and prognoses differ markedly among these three causes.
Syncope is a common presenting problem, accounting for approximately 3% of all emergency room visits and 1% of all hospital admissions. The annual cost for syncope-related hospitalization in the United States is ~ $2 billion. Syncope has a lifetime cumulative incidence of up to 35% in the general population. The peak incidence in the young occurs between ages 10 and 30 years, with a median peak around 15 years. Neurally mediated syncope is the etiology in the vast majority of these cases. In elderly adults, there is a sharp rise in the incidence of syncope after 70 years.
In population-based studies, neurally mediated syncope is the most common cause of syncope. The incidence is slightly higher in females than males. In young subjects there is often a family history in first-degree relatives. Cardiovascular disease due to structural disease or arrhythmias is the next most common cause in most series, particularly in emergency room settings and in older patients. Orthostatic hypotension also increases in prevalence with age because of the reduced baroreflex responsiveness, decreased cardiac compliance, and attenuation of the vestibulosympathetic reflex associated with aging. In the elderly, orthostatic hypotension is substantially more common in institutionalized (54–68%) than community dwelling (6%) individuals, an observation most likely explained by the greater prevalence of predisposing neurologic disorders, physiologic impairment, and vasoactive medication use among institutionalized patients.
The prognosis after a single syncopal event for all age groups is generally benign. In particular, syncope of noncardiac and unexplained origin in younger individuals has an excellent prognosis; life expectancy is unaffected. By contrast, syncope due to a cardiac cause, either structural heart disease or primary arrhythmic disease, is associated with an increased risk of sudden cardiac death and ... |
Swimmer's itch is a skin rash your child may get after swimming in some freshwater lakes and ponds, and sometimes in salt water. A swimming pool is usually safe as long as it is clean and has chlorine. It is also called cercarial dermatitis.
It is caused by a parasite carried by snails, ducks, geese, and other animals living near the water. When your child swims in the water, the parasite can get into your child’s skin. The parasite can’t be passed from person to person, and the parasites soon die while still in your child’s skin.
The first symptom is itching that starts 1 to 2 hours after your child gets out of the water. The itching is usually mild at first. The itching may go away, then return after several hours. The itching is usually more intense when it comes back.
A pinpoint red rash may develop, but your child can have itching without a rash.
Your healthcare provider will ask about your child's symptoms and medical history and examine your child. Tell the provider where your child has been swimming or wading.
To help relieve the itching:
Check with local officials to find out if the parasite is a problem in the area where you want to swim. Rinse exposed skin with fresh water immediately after leaving the water. Make sure that your child dries off well using a towel with a rubbing motion as soon as he gets out of the water. This may help prevent the parasite from getting into your child’s skin. Wash your child’s swimsuits often. |
With reference to Emily Bronte, explain how poetry becomes the means whereby she is able to embody a subtle, but important experience that continues to speak powerfully to us today. Consider using the poem “Stars” to illustrate this idea.
Bronte’s use of a simple experience like stargazing in her poem “Stars” is the first step she takes to connect her audience strongly to her poetry. With the audience connected to the simple activity at the basis of her poem, Bronte begins to expand, building toward her subtle love of the night and its darkness, as opposed to the bright light of daytime. Through her poetic language techniques she expresses this love of the night by juxtaposing it with the day and all the negative connotations she specifically attaches to it. In the lines “And scorch with fire the tranquil cheek/Where your radiance fell?” she contrasts the discomfort she feels during the day with the tranquility she feels at night. The emphasis she places on this difference brings a subtle experience into focus as a powerful one that we as an audience strongly relate to. |
Fossil fuels, including coal, petroleum, and natural gas, contain stored energy from organic compounds that originated millions of years ago in living plants and animals. To access this energy, we burn these fuels, and as we do, air pollutants and certain harmful gases are released into the atmosphere. Among these emissions is carbon dioxide (CO2). Although CO2 is a naturally occurring and nontoxic gas, human activities have increased its concentration in the atmosphere well beyond natural levels. Most climatologists link heightened levels of atmospheric CO2 to accelerated global warming. This is because CO2 (along with methane, nitrous oxide, and water vapor) is a known greenhouse gas, and greenhouse gases act much like the glass of a greenhouse: allowing sunlight in but preventing heat from escaping.
Another possible source of energy for cars -- one that gives off non-toxic by-products -- is hydrogen. Because its by-products don't harm the environment, the hydrogen fuel cell, which produces electricity capable of powering cars and other vehicles, has been touted by many as a promising replacement for the internal combustion engine.
Fuel cell technology is proven but nevertheless problematic. Like batteries, fuel cells turn chemical energy into electricity. A fuel cell, using a platinum catalyst, combines hydrogen and oxygen into water in a way that produces an electric potential, like that of a battery. In vehicles, the electric current is routed to small motors in the wheels, and the by-products -- heat and water -- are released into the air through a tailpipe.
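For reference, the chemistry at work here is the standard hydrogen-oxygen reaction. In a proton-exchange-membrane cell, hydrogen is oxidized at the anode and oxygen is reduced at the cathode:

```latex
\begin{align*}
\text{Anode:}   \quad & 2\,\mathrm{H_2} \rightarrow 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:} \quad & \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O} \\
\text{Overall:} \quad & 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} \qquad (E^\circ \approx 1.23\ \mathrm{V})
\end{align*}
```

The theoretical potential of a single cell is only about 1.23 volts, which is why practical vehicles stack many cells in series.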
Unlike fossil fuel resources, which are extracted from the ground, hydrogen must be made. Hydrogen can be removed from water using another fuel, such as natural gas or coal, to power the extraction process, but this creates CO2. Thus, the challenge is to find a way to extract hydrogen using a carbon-free energy source. Non-polluting, renewable sources of energy, like solar cells, wind turbines, or hydroelectric dams, might one day fuel the extraction process, but at present such a solution would not be efficient. The other problem with hydrogen as a fuel is that a distribution and refueling infrastructure to serve drivers of hydrogen-powered cars would be extremely expensive. |
The first known use of the noun ‘plague’ occurred in the 14th century, according to Merriam-Webster. The word has Latin, Old High German and Greek roots, all of which point to tragedy, lamentation, and curse. In modern English, the severity of the word has become somewhat diluted; it is now also used to describe unwelcome nuisance events or mild disturbances. But it is important not to forget the disaster, the destruction, the calamity – the untold misery and suffering of countless people – that this word came to represent during certain periods of human history.
It is, therefore, curious, to a chronicler of medical history, that the liability associated with the word ‘plague’ should come down to one rod-shaped (“bacillus“) bacterium, Yersinia pestis – a microbe stained pinkish red by the Gram’s Stain procedure (“Gram negative“), capable of growing in presence or absence of oxygen (“facultative anaerobe“), and responsible for the deaths of both humans and other animals.
Yersinia pestis is primarily a rodent pathogen; in nature, its life cycle runs between wild rodents (including squirrels, chipmunks, prairie dogs, voles, rabbits, wood rats and mice) and the fleas that infest them and drink their blood. This relationship, called the enzootic cycle, establishes a long-term reservoir of the bacterium. Transmission to humans and other animals (such as wild carnivores, domestic pets, and urban rat populations) primarily occurs through the bite of infected fleas. This phase of the life of Yersinia pestis is referred to as the epizootic cycle. There is some evidence that between epizootic cycles, the bacterium can persist in the soil under natural conditions for up to 24 days; the mechanism is not yet known.
Humans are considered most at risk during an epizootic cycle. When the infected rodents start to die, the parasitic fleas look for new hosts. A direct flea-bite, or exposure to blood and bodily fluids of a dead, infected animal can transfer the bacterium to humans; dogs may carry the fleas on their skin, and cats get infected by eating an infected rodent. In addition, coughed-out droplets from an infected human patient or cats (which are, unfortunately, rather susceptible to plague) may be accidentally inhaled by other humans and animals.
Once in the body, Yersinia pestis is generally rapidly destroyed and eaten by cellular defenders of the immune system. However, a few may survive by hiding inside one of those immune cells, called a macrophage. The macrophages ingest the bacteria, but may not be able to kill them; in that protected intra-cellular environment, Yersinia pestis grows in number and collects a protein coat. Later, when they burst out of the macrophage, the coat protects them from attacks by other immune cells, and they quickly spread to the lymph nodes in the neck, the armpit and the groin. As a result, the lymph nodes swell, become painful and hot to touch, and may bleed internally, giving a black, bruised appearance (known as ‘buboes’); the condition is called Bubonic plague.
As a result of untreated Bubonic plague, or via direct entry through the skin (say, a wound), the bacterium spills into the bloodstream in a matter of days, spreading to the skin, liver, and spleen. This is known as Septicemic plague. One major complication of septicemic plague is DIC, or disseminated intra-vascular coagulation, a pathological condition in which blood clots randomly inside blood vessels in the body, depleting clotting factors (which subsequently causes abnormal internal bleeding) and preventing normal blood flow to organs (which may lead to organ failure). As a consequence of untreated septicemia, or via direct inhalation of infectious droplets, the bacterium infects the lungs, causing severe respiratory distress, lung failure and shock. This form, called Pneumonic plague, is the most difficult to treat and is uniformly fatal if untreated.
As one can imagine, plague is a serious disease, and without prompt identification and institution of correct treatment, can lead to a quick and painful death. However, the plague bacterium is still susceptible to commonly available antibiotics active against Gram-negative bacteria (such as polyketide “-cycline”; aminoglycoside “-mycin”; chloramphenicol; fluoroquinolones “-floxacin”; sulfonamides “sulfa drug”; and so forth).
Since administration of antibiotics is always laced with the worry about development of resistance (occurrence of naturally-resistant strains is not unknown), alternative therapeutic (‘treatment’) and prophylactic (‘preventive’) approaches are being studied; these include:
- Vaccination, especially for people who reside in zones endemic for plague;
- Passive immunotherapy, in which anti-toxin and anti-microbial antibodies are administered;
- Antimicrobial peptides produced in other bacteria (‘bacteriocins’);
- Substances (‘immune response modifiers’) which can neutralize the effect of the plague bacterium on the immune system; and
- Substances that can interfere with the bacterium’s ability to cause disease.
Now, having learnt about Yersinia pestis and its role in one of the deadliest diseases known to mankind, throw your mind back to the times when such medical interventions were not available. Those times were not pretty. However, the scourge of plague – by virtue of its destructive nature – has occasionally turned the tides of history, significantly impacting the development of human civilization. We shall talk more about that in the next segment. Stay tuned! |
The Basic Concept
Curriculum design, at its essence, is social engineering. The concept of curriculum is used in a variety of different ways, but is generally understood to refer to the scope of what is taught and learned in an educational setting. Because this meaning is so broad, distinctions have been drawn, for instance, between the taught curriculum and the learned curriculum, or between the formal curriculum and the informal curriculum. Formal curricula provide the authorized encoding of what the educational institution deems to be important.
In most institutionalized educational settings, the curriculum is defined operationally by some kind of formal body of material: a course syllabus, a set of required readings, a document outlining standards and outcomes of the learning experience, etc. Whatever the form, this document outlines the prescribed course of learning for those involved--usually with a heavy emphasis on knowledge and skills.
The structure and content of a curriculum reflect the institutionalized priorities of the educational institution. It provides a map for students' learning and teachers' teaching.
A well-developed curriculum provides important scaffolding to help make teaching and learning more systematic. It also helps to bring some consistency to educational activities from one classroom to the next. While the trend today is towards more individualized learning, curricula help to provide a general direction to guide teachers and learners towards the learning deemed important by educational leaders.
In order to be systematic in its educational processes, any education system needs to have a clear idea of the learning it hopes to achieve. But the process of defining that body of learning is necessarily restrictive. Defining a curriculum has three consequences that impact student learning and development.
First, a curriculum, by its very nature prioritizes and privileges a certain body of learning while marginalizing or excluding others. In so doing, any curriculum will implicitly attach a higher value and status to certain sanctioned knowledge and skills. This creates a split between official or high status knowledge and skills, and those which are not. This implicitly advantages those who come from socio-cultural groups where these knowledge and skills are more commonplace, and pressures those from other groups to either conform and embrace these intellectual goods in order to keep up, or to accept the fate of remaining in the realm of low status.
Second, from a structural standpoint, curricula tend to tell us not only what is important, but also how we should organize our thinking about those important things. There is nothing immutable, for example, about the typical subject divisions in our schools. Nor is there any convincing research that learning one subject in isolation from others results in better learning. In fact, quite to the contrary, recent work on integrated approaches and project-based learning is much more convincing.
Third, and most fundamentally, curricula provide stakeholders with an operational definition of what we can consider "real" learning. In the encoding of what is important, this tends to be dominated by knowledge, and to a lesser extent, skills. When values or attitudes are included explicitly in curricula, they tend to be referenced as underpinnings or cross-cutting themes, which serves to marginalize them because of very limited understandings of how such things can actually be taught. This positions education as a relatively narrow, functionalist training.
The collective impact of these three issues is to set a narrow and superficial educational agenda--the very one reflected in our current educational practice. It is education for its own sake. This education is self-perpetuating, serving to reinforce our existing ways of being and thinking, rather than guiding our societies in a more constructive process of reform.
To serve a more deeply educative purpose and help to orient education towards social transformation, curriculum development needs to go back to first principles. The emperor has no clothes, friends. So, rather than building on the established structures and formats, rather than just updating existing contents, rather than simply integrating technology in hopes of arriving at a 21st century model, curriculum needs to be rethought. This rethinking should be based on a more grounded view of the purposes of schooling, and a more comprehensive understanding of what learning and development takes place at school.
This is not a plea to re-divide learning into new subject divisions or to introduce new subject areas, though both of these might be constructive approaches. Nor is it a plea to reinvigorate old subject areas with new ideas, reframing language arts as critical literacy, or math as data literacy. It is not even a plea to weave a web that integrates all subject areas across the curriculum. Certainly we should question the existing subject areas and why we have these and not others, but the point of this argument is not that. The existing subject divisions are unlikely to go away any time soon, and how content learning gets sliced up is a bit of a red herring.
The formal curriculum is the place for us to define what is really important for students to learn. How those things are presented for learning is actually more important than what those things are. No matter what the content, if it is introduced uncritically, if it is not oriented towards transformation, it ends up reinforcing the status quo. The myth that education can or should be neutral has made educators and education systems both complacent and complicit in issues of serious moral concern--poverty, war, environmental collapse, oppression, marginalization, social apathy, and so on.
The issue with curriculum, however, runs deeper than what the content is and how it is presented. The discussion of curriculum needs to stop splashing around in the shallow end, and acknowledge that school is socialization. We need to go beyond knowledge and skills, to have meaningful discussion about the dispositions which we are cultivating at school. A redefinition of curriculum which includes full attention to dispositions requires a commensurate change in the way we view teaching and learning. If we restrict our concept of teaching to teacher talk, then of course, our concept of learning is limited to content knowledge. But if we open up our concept of teaching to include tacit teaching--role modeling, conditioning, internalization of routines and procedures, environmental design, learning by doing, and so on--then we enrich our concept of learning along with it. This view of teaching and learning serves to demystify the process of socialization, and allows us to consider bringing more intentionality to that process. Like it or not, our students are being socially engineered by our education systems. An enhanced concept of curriculum allows for us as concerned educators to become more actively involved in determining the result of that process, rather than just reinforcing the status quo. |
Introduced in Java 1.4, assert is a keyword that takes a boolean expression and enables you to test your assumptions about your program. If the expression evaluates to true, execution continues normally; if it evaluates to false, the system throws an AssertionError.
The assertion statement has two forms. The first, simpler form is:
assert Expression1 ;
where Expression1 is a boolean expression. When the system runs the assertion, it evaluates Expression1 and, if it is false, throws an AssertionError with no detail message.
The second form of the assertion statement is:
assert Expression1 : Expression2 ;

where Expression1 is a boolean expression and Expression2 is an expression that has a value. In this second form, Expression2 provides a means of passing a detail message (typically a String) to the assertion facility.
If the condition evaluates to false and assertions are enabled, AssertionError will be thrown at runtime.
Some examples that use the simple assertion form are as follows:

assert value > 5;
assert accountBalance > 0;
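A minimal runnable sketch combining both forms is shown below; the class, method, and variable names are illustrative, not part of the Java specification. Run it with `java -ea AssertDemo` so assertions are enabled:

```java
public class AssertDemo {

    static double applyDiscount(double price, double rate) {
        // Second form: the expression after ':' becomes the detail message
        // of the AssertionError if the check fails.
        assert rate >= 0 && rate <= 1 : "rate out of range: " + rate;
        double discounted = price * (1 - rate);
        // First (simple) form: no detail message on failure.
        assert discounted <= price;
        return discounted;
    }

    public static void main(String[] args) {
        System.out.println(applyDiscount(100.0, 0.25)); // prints 75.0
    }
}
```

With assertions disabled (the default), the assert statements are skipped entirely, which is one reason assertions should not be used for validating arguments of public methods.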
|
Toolbox – Whole School Approach
To create a culture of democracy and intercultural dialogue, the role of schools is essential in making intercultural education a priority. This section aims to provide project ideas that can help students, teachers and the whole school community develop their intercultural competences. While working together on a common project, students and educators broaden their horizons and open up to the surrounding world. These projects often include collaboration with local associations and non-formal education providers. |
The Stephens-Murphy-Townsend Party of 1844 was the first American emigrant company to bring wagons across the Sierra Nevada into California, although they did so in stages. In the process they opened an important trail along the Truckee River, and became the first non-Indians to view Lake Tahoe. The party of about eleven wagons and 53 people arrived at Fort Hall, Idaho, without undue hardship, but when they reached the Humboldt Sink in modern Nevada, they had no idea which way to go. Fortunately they met an Indian whose name they misheard as “Truckee,” who led them across a 40-mile desert to a fine stream of water they named the Truckee River. More challenges lay ahead. Deciding to leave half of the wagons at a small alpine lake, they took the rest over the pass—only to have their way blocked by snow along the Yuba River. Most of the women and children, and two men, remained at the Yuba River camp while the others rode ahead to Sutter’s Fort to seek help. By March 1, 1845, all had arrived safely in the Sacramento Valley. The wagons they left at the lake—now Donner Lake—were brought in later that year. |
Political Activism through Music: Civil War Era
Overview: Students view an image of song lyrics from a chosen Civil War era song in order to understand the impact the political climate of the times had on the music of the same period. Students will analyze one song as a large group and one as a small group. Extension would include student research and comparison.

Objective: After completing the activity, students will be able to:

Time Required: One class period of 50 minutes.

Topic/Subject: Literature, Performing Arts, Music

Era: Civil War and Reconstruction, 1861-1877

Illinois Learning Standards: Fine Arts:
25.B - Understand the similarities, distinctions and connections in and among the arts.
27 - Understand the role of the arts in civilizations, past and present.

Handouts: Thinking about Songs as Historical Artifacts worksheet; folders containing one primary source from LOC.

Analysis Tool: Music Sheet Analysis

PowerPoint: Available on PDF.

Library of Congress Items:
- John Brown's Entrance into Hell
- The Black Regiment
- John Brown
- Lines on the proclamation issued by the tyrant Lincoln, April first, 1863. By a Rebel.
- Young Eph's Lament
Procedure:
1. Take a student poll: "Is music affected by culture and/or politics?" Discuss the results.
2. Briefly discuss the political climate during the Civil War and what issues could have affected the music of the period, e.g. slavery, abolition, North vs. South.
3. Show the PowerPoint of "John Brown's Entrance into Hell." (Available on PDF)
4. Discuss the song stanza by stanza, looking for political issues.
5. Have students look up names and places they do not know.
6. Discuss the song as a whole, focusing on what we know about the political climate of the time.
7. Have students infer why this song was sung and who would have sung it.
8. Using what the class has concluded, model completion of the Historical Artifacts worksheet with student input.
9. Give students folders containing one of four songs and the Historical Artifacts worksheet.
10. In groups (based on which song they have), students analyze the song and record their conclusions on the Music Sheet Analysis worksheet.
11. Have each group present their song and conclusions to the class. Allow the class to provide additional input as needed.
Participation in discussion sessions. Participation in group sessions. Completion of both Historical Artifacts worksheet and Music Sheet Analysis. For extension, completion of worksheets and assessment of essay.
Students will individually choose a song from the America Singing: Nineteenth Century Song Sheets Collection as well as contemporary songs they believe to have political significance. The student will analyze both songs using the Historical Artifact Worksheet. Using that information the student will compose a comparative essay discussing the similarities and differences between the songs and how the songs illustrate the political climate during which they were written.
Flora High School |
In the presence of angular acceleration, there can exist a fictitious force known as the Euler force. This force is observed from a non-inertial reference frame.
To illustrate this concept, consider a particle P that is firmly sitting on a turntable, which is rotating about point O with a (counterclockwise) angular acceleration α. The radius measured from point O to the location of the particle is R.

The tangential (circumferential) acceleration of the particle is a_c = αR. The tangential force F_c between the particle and turntable prevents the particle from sliding in the circumferential direction. This force is determined by applying Newton’s Second Law in the direction of a_c: F_c = m a_c = m αR, where m is the mass of the particle.

Now, let’s say the force F_c = 0. In this situation the particle would slide in the direction opposite a_c, shown in the figure above. This can be deduced from Newton’s Second Law: setting F_c = 0, it follows that a_c = 0. Since Newton’s Second Law applies to an inertial reference frame, this means that for F_c = 0 the particle must have zero tangential acceleration relative to ground (which serves as the inertial reference frame). Thus, relative to a (non-inertial) reference frame attached to the turntable (and moving with it), the particle must move in the circumferential (clockwise) direction opposite the direction of a_c, shown in the figure above.

From the point of view of an observer sitting on the turntable (and moving with it), the Euler force is the apparent force that appears to be acting on the particle, pushing it in the circumferential (clockwise) direction. The line of action of this apparent force is therefore in the circumferential (tangential) direction and (as a result) is perpendicular to the line passing through point O and point P. But in reality, there is no actual force acting on the particle. What the observer sees is just a direct result of Newton’s Second Law, which applies to an inertial reference frame (ground). The figure below illustrates this.

The particle will also move in the circumferential (clockwise) direction relative to the turntable if the force F_c is insufficient to hold the particle in place. In other words, if F_c < m αR.
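These tangential relations (a_c = αR and, by Newton's Second Law, F_c = m αR) can be checked with a few lines of code. The mass, angular acceleration, and radius below are illustrative numbers, not values from the text:

```java
public class EulerForce {

    // Tangential acceleration of a particle at radius r on a turntable
    // with angular acceleration alpha: a_c = alpha * r.
    static double tangentialAcceleration(double alpha, double r) {
        return alpha * r;
    }

    // Force needed to keep the particle from sliding circumferentially:
    // F_c = m * a_c = m * alpha * r (Newton's Second Law, tangential direction).
    static double requiredTangentialForce(double m, double alpha, double r) {
        return m * tangentialAcceleration(alpha, r);
    }

    public static void main(String[] args) {
        double m = 0.5;      // kg (illustrative)
        double alpha = 2.0;  // rad/s^2 (illustrative)
        double r = 0.3;      // m (illustrative)
        System.out.println(requiredTangentialForce(m, alpha, r)); // ≈ 0.3 N
    }
}
```

If the available friction force falls below this value, the particle slides backward relative to the turntable, which is exactly the apparent "push" attributed to the Euler force.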
It is worth noting that, if the turntable has angular velocity, two more fictitious forces can arise: the Coriolis force and the centrifugal force.
|
NASA's Voyager 1 probe set out to learn more about Jupiter, Saturn, and Titan, among other things, but it kept going even after its primary mission was complete. Today, the Voyager 1 spacecraft resides in interstellar space, just outside of the solar system. And it will keep moving onward, even after its onboard electronics and fuel source eventually fizzle out.
Voyager 1 didn't get where it is today by casually floating around. Instead, it's whipping through space at break-neck speeds exceeding 38,000 miles per hour thanks to a gravitational assist maneuver that was made possible by Jupiter's gravitational influence. Even at these astonishing speeds, it took Voyager 1 more than 40 years to get to its current location in outer space - more than 13 billion miles from home.
13 billion miles is a long distance to travel, and with that in mind, have you ever stopped to think how long it might take other human-made vehicles to travel the same distance? If you have, you're not alone.
Assuming cars could 'drive' the same distance that Voyager 1 flew, it would take some of the world's top production cars more than 5,500 years to cover the same distance that Voyager 1 completed in just 40 years. Comparatively, a Boeing 747 airliner would take more than 2,100 years, an F-15 jet airplane would take more than 330 years, and the space shuttle would take around 80 years.
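The comparison can be reproduced with back-of-the-envelope arithmetic. In this sketch the distance is the ~13 billion miles quoted above, while the vehicle speeds are rough assumptions rather than figures from the article:

```java
public class TravelTime {

    // Approximate distance Voyager 1 has covered, in miles.
    static final double DISTANCE_MILES = 13e9;

    // Years needed to cover the distance at a constant speed in mph.
    static double yearsAt(double mph) {
        return DISTANCE_MILES / mph / (24 * 365);
    }

    public static void main(String[] args) {
        System.out.printf("Voyager 1 at 38,000 mph: %.0f years%n", yearsAt(38_000));      // ~39
        System.out.printf("Fast production car at 270 mph: %.0f years%n", yearsAt(270));  // ~5,500
        System.out.printf("Boeing 747 at cruise speed: %.0f years%n", yearsAt(570));
    }
}
```

At 38,000 mph the trip works out to roughly 39 years, consistent with Voyager 1's four-decade journey.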
Putting Voyager 1's speed in this context highlights not only how quickly the spacecraft is moving, but also the vastness of the universe around us. |
How much light does a supernova shed on the history of universe?
New research by cosmologists at the University of Chicago and Wayne State University confirms the accuracy of Type Ia supernovae in measuring the pace at which the universe expands. The findings support a widely held theory that the expansion of the universe is accelerating and that such acceleration is attributable to a mysterious force known as dark energy. The findings counter recent headlines claiming that Type Ia supernovae cannot be relied upon to measure the expansion of the universe.
Using light from an exploding star as bright as an entire galaxy to determine cosmic distances led to the 2011 Nobel Prize in physics. The method relies on the assumption that, like lightbulbs of a known wattage, all Type Ia supernovae have nearly the same maximum brightness when they explode. Such consistency allows them to be used as beacons to measure the heavens. The weaker the light, the farther away the star. But the method has been challenged in recent years because of findings that the light given off by Type Ia supernovae appears more inconsistent than expected.
“The data that we examined are indeed holding up against these claims of the demise of Type Ia supernovae as a tool for measuring the universe,” said Daniel Scolnic, a postdoctoral scholar at UChicago’s Kavli Institute for Cosmological Physics and co-author of the new research published in Monthly Notices of the Royal Astronomical Society. “We should not be persuaded by these other claims just because they got a lot of attention, though it is important to continue to question and strengthen our fundamental assumptions.”
One of the latest criticisms of Type Ia supernovae as a measurement tool concluded that the brightness of these supernovae seems to fall into two different subclasses, which could lead to problems when trying to measure distances. The new research, led by David Cinabro, a professor at Wayne State, together with Scolnic, Rick Kessler, a senior researcher at the Kavli Institute, and others, found no evidence of two subclasses of Type Ia supernovae in data examined from the Sloan Digital Sky Survey Supernovae Search and the Supernova Legacy Survey. The recent papers challenging the effectiveness of Type Ia supernovae for measurement used different data sets.
A secondary criticism has focused on the way Type Ia supernovae are analyzed. When scientists found that distant Type Ia supernovae were fainter than expected, they concluded the universe is expanding at an accelerating rate. That acceleration is explained through dark energy, which scientists estimate makes up 70 percent of the universe. The enigmatic force pulls matter apart, keeping gravity from slowing down the expansion of the universe.
Yet a substance that makes up 70 percent of the universe but remains unknown is frustrating to a number of cosmologists. The result was a reevaluation of the mathematical tools used to analyze supernovae that gained attention in 2015 by arguing that Type Ia supernovae don’t even show dark energy exists in the first place.
Scolnic and colleague Adam Riess, who won the 2011 Nobel Prize for the discovery of the accelerating universe, wrote an article for Scientific American on Oct. 26, 2016, refuting the claims. They showed that even if the mathematical tools to analyze Type Ia supernovae are used “incorrectly,” there is still a 99.7 percent chance the universe is accelerating.
The new findings are reassuring for researchers who use Type Ia supernovae to gain an increasingly precise understanding of dark energy, said Joshua A. Frieman, senior staff member at the Fermi National Accelerator Laboratory who was not involved in the research.
“The impact of this work will be to strengthen our confidence in using Type Ia supernovae as cosmological probes,” he said.
Homework in Class 1
Reading and Comprehension
We read daily in class and have the same expectations for home. Each week, children are provided with a new reading book and a reading record for parents/carers to comment on their child's reading. It is recommended that children read for 10-15 minutes daily, whether from a book provided by school or from other books/comics they may have at home.
Each Wednesday, children will be given a set of words to practise spelling and using in sentences, in preparation for a test the following Monday. Children's spellings are based on the phonics sounds they have learnt so far in Read Write Inc and on words expected for their age from the National Curriculum.
Children are provided with weekly homework based on the current Maths learning that week. Homework will be given on a Wednesday in the children's homework book and should be handed in the following Monday to be marked. If your child has difficulty completing their homework, then this can be discussed on the Monday so it can be handed in the following day.
Occasionally further homework projects may be set based on our current topic e.g. research using books and the internet, but suitable time will always be given to complete this.
Various fiber crops, cereals, sugar, legumes, fruits and vegetables are grown in Egypt. Egypt's agricultural area is confined to regions near the Nile and its delta. Aside from the limited available land, the biggest challenge to growing crops in Egypt is disease-carrying pests.
Egyptian crop production increased by 20 percent in the decade between 2004 and 2014. Cotton is the biggest fiber crop and the leading export. Wheat and rice, also popular exports, are successful cereal crops grown there. Sugar cane, sugar beets, a variety of beans, clover, oranges, grapes, stone fruit, pome fruits, tomatoes and potatoes are also grown. The Egyptian climate is sunny and therefore conducive to growing these crops, the Nile River is an exceptional source of water, and the soil near the Nile is generally of excellent quality.
The biggest obstacle to growing crops in Egypt is pests; disease-causing microorganisms are a well-known hazard in agriculture there. Nematodes, or roundworms, are also a big problem, so nematicides are imported regularly to try to improve crop yields. Root-knot is the major problem in greenhouse cultures and commercial nurseries, where high-value crops are grown in relatively small areas. To increase the amount of land that can be farmed, Egypt has had to build irrigation systems that extend out from the Nile. Only 3 percent of Egyptian land can currently support agriculture.
Io is the innermost and the second smallest of the four Galilean moons. It was discovered, along with Europa, Ganymede and Callisto by Galileo Galilei in 1610.
Io Moon Profile
- Mass: 8.93 x 10^22 kg (1.2 Moons)
- Orbit Distance: 421,800 km
- Orbit Period: 1.77 days
- Surface Temperature: -163 °C
- Discovery Date: January 8, 1610
- Discovered By: Galileo Galilei
Facts about Io
- Io has more than 400 active volcanoes on its surface. They make this little moon the most actively volcanic world in the solar system.
- The volcanism on Io is due to tidal heating, as the moon is stretched by Jupiter’s strong gravitational pull and by the lesser gravitational effects of the other satellites.
- The volcanoes of Io are constantly erupting, creating plumes that rise above the surface and lakes that cover vast areas of the landscape.
- Io has a very thin atmosphere that contains mostly sulfur dioxide (emitted from its volcanoes). Gases from the atmosphere escape to space at the rate of about a ton per second. Some of the material becomes part of a ring of charged particles around Jupiter called the Io plasma torus.
- The volcanic plumes of Io rise up as high as 200 km, showering the terrain with sulfur, sulfur dioxide particles, and rocky ash.
- Io has a number of mountains, some of which rise as high as Mount Everest on Earth. The average height of Io's peaks is around 6 km.
- Io is made mostly of silicate rocks, and its surface is painted with sulfur particles from the volcanoes and frosts that are created as the atmospheric gases freeze out and fall to the ground.
- Robotic missions to Io could study its volcanism in closer detail. No human missions are planned as yet, due to the extreme radiation environment and highly toxic atmosphere and surface.
Image – http://photojournal.jpl.nasa.gov/catalog/PIA02308
Profile – http://solarsystem.nasa.gov/planets/io, http://nssdc.gsfc.nasa.gov/planetary/factsheet/joviansatfact.html
Definition of Group A
: any of various strains of a streptococcus (Streptococcus pyogenes) that include the causative agents of pharyngitis, scarlet fever, septicemia, some skin infections, rheumatic fever, and glomerulonephritis —usually used attributively <Group A strep throat>
Medical Definition of Group A
: the Lancefield group of beta-hemolytic streptococci that comprises all strains of a species of the genus Streptococcus (S. pyogenes) and that includes the causative agents of pharyngitis, scarlet fever, septicemia, some skin infections (as pyoderma and erysipelas), rheumatic fever, and glomerulonephritis—usually used attributively <Group A streptococcal infections>; compare group b
The newly defined literacies include the following categories:
1. Information literacy
2. Computer literacy
3. Media literacy
4. Television literacy
5. Visual literacy
Three examples of technology being used to meet the needs of a diverse population:
These are examples only, to give you a guideline; you will need to elaborate.
Using CD ...
Newly defined literacies addressed and technology implementation in the classroom.
This section of the website presents information on the Individuals with Disabilities Education Act (IDEA). It is organized by the order of subjects as covered by the statute. An attempt has been made to link the article to the relevant regulations, and to the Special Education Subject as presented in ESE-Subjects.
IDEA - Individuals with Disabilities Education Act – 20 U.S.C. § 1400 et seq.; 34 C.F.R. § 300.
The IDEA is the principal law relied upon by advocates to obtain appropriate education for children with disabilities. In simple terms, IDEA requires public school districts to seek out and identify children with disabilities and provide them with an appropriate education in an educational environment that is as close to the regular education environment as possible. This law requires that schools develop an annual educational plan (IEP) through a multidisciplinary team, which includes the parents. Educational advocates function primarily in the complex processes involved in the development and implementation of these educational plans.
A. History – Until 1975, when the Education for All Handicapped Children Act (EHCA), or Public Law 94-142, was passed, children with disabilities had no federally protected right to an appropriate public school education. Up to that time, many severely disabled children were excluded from public school or were provided only minimal education in segregated, often dismal programs.
The original EHCA was followed by successive amendments in 1983 and 1986. Then in 1990 the law was revamped and renamed the Individuals with Disabilities Education Act (IDEA). The IDEA has been reauthorized and amended in 1992, 1997 and most recently in November 2004.
Because the IDEA and other related educational laws are continually changing and evolving, it is very important that advocates make every effort to update their knowledge of the statutes and their interpretation by the courts.
B. Legal Concepts – IDEA: It is often easiest for advocates to develop their understanding of special education law through a study of the basic legal principles, which have evolved from the statutes and the case law. Once the advocate has a grasp of these principles, it becomes easier to apply the law to the complexities of the educational process. Some of the basic principles are briefly discussed below.
1. Special Education: While most people understand that a child with an IEP (Individual Education Plan) is receiving “special education,” there is a lot of confusion about what we mean by this term. Essentially, special education is considered “specially designed instruction,” which is provided to a student with disabilities in order to assist the student successfully access education. 20 U.S.C. §1401 – Definitions (25). 34 C.F.R. § 300.26 (3)
Special education usually involves the adaptation or modification of content and/or variation in the rate or method of delivery of instruction. Special education can also involve the teaching of skills designed to assist the student in compensating for or overcoming the effects of the student's particular disability.
2. IDEA Eligibility: Before a student can receive special education services, it is necessary that the student be found legally eligible for such services. In a global sense, IDEA eligibility requires that the student be a “child with a disability” and by reason thereof “needs” special education and related services. 20 U.S.C. § 1401 (3)
Each category of disability has its own legal criteria for determining the disability and eligibility. The simple existence of a disability will not in and of itself make the child eligible for IDEA services. The law also requires that the established disability have an adverse effect on the child's educational performance. (See Practice Note)
Disability categories: The statutes and rules related to each disability category under IDEA are discussed under the heading for that disability (See the tab "Disabilities" in the left-hand column). It is important that the advocate have a good understanding of the various legally recognized disability categories. These are identified in C.F.R. 300.7 (See list below). The specific eligibility criteria of each category can be somewhat complex and are usually grounded in psychological, communicational, or physical assessments. The advocate should at least be able to locate the precise legal criteria for each disability category and should be capable of understanding the assessment procedures used to determine eligibility. A few of these primary categories are:
3. Free and Appropriate Public Education. 20 U.S.C. § 1401 (8);
The obligation of school districts to educate children with disabilities is summarized in the requirement for schools to provide a “free and appropriate public education.” This is a power packed phrase and is the core principle governing every issue related to the education of those with disabilities.
The requirement of a “free” education means that the school may not require any payment for the provision of education. If assistive technology, special books, transportation, etc. are required in order for the child to receive an appropriate education, then they need to be provided without charge.
The definition of the word “appropriate” as it related to the education of those with disabilities could fill volumes. Almost every due process administrative hearing or court case will turn around whether or not the school district has offered an “appropriate” education. The IEP is the vehicle or plan for delivering this “appropriate” education and appropriateness will be decided through the subjective finding as to whether the IEP is “reasonably calculated to confer educational benefit.”
4. Least Restrictive Environment: The principle of education in the least restrictive environment harkens back to the origins of special education law. Prior to 1975 many children with disabilities were not offered any public education. Those who were provided some education were generally segregated into separate institutions, schools or classes. In this sense the § 504 and IDEA are truly anti-discrimination laws.
In simple terms these laws require that children be educated in an environment as close to the regular education environment as possible. These laws establish a rebuttable presumption that disabled children should be educated with their non-disabled peers. Only when all reasonable efforts, including accommodations, modifications, and supports, have been unsuccessful in providing an appropriate education in the regular placement, may the school begin to restrict the student's educational environment.
In determining whether a student is capable of receiving an appropriate education in a regular education or other inclusion class it is not necessary that the student be successful in the same way as the typically developing peers. It is only necessary that the student be able to make educational progress according to the student’s own abilities and nature. It has been held that social benefit may be sufficient grounds for approving mainstream education for a child with a disability (See also IEP-Placement)
“To the maximum extent appropriate, children with disabilities, including children in public or private institutions or other care facilities, are educated with children who are not disabled, and special classes, separate schooling, or other removal of children with disabilities from the regular educational environment occurs only when the nature or severity of the disability of a child is such that education in regular classes with the use of supplementary aids and services cannot be achieved satisfactorily.” 20 U.S.C. § 1412 A (5); 34 C.F.R. § 300.130; 34 C.F.R. 300.550 through 556.
5. Stay Put: Congress recognized that at some point in a child’s education a dispute might arise between the student’s parents and the school district as to the appropriate educational plan for the child. In an effort to maintain a certain equilibrium between the parties during the resolution of the dispute, the law has established the principle of “stay put.” This means in simple terms that the student shall remain in the current educational placement “during the pendency” of due process administrative hearing or Court trial. The child can be moved to another placement during the pendency of the resolution only by mutual agreement of the parties.
“… during the pendency of any proceedings conducted pursuant to this section, unless the State or local educational agency and the parents otherwise agree, the child shall remain in the then-current educational placement of such child, …” 20 U.S.C. § 1415 (j)
In my child assessments, I check children’s communication styles. I have been doing this for so long that my family members can sometimes identify the kids with the auditory communication style right away, because they talk. A lot!
I usually pay attention to the way they use verbal stimulation to memorize things, if they whisper as they work and if they can repeat numbers and sounds. I also check the way they respond to verbal encouragement. Generally, they do much better when they can control their auditory space than when they are restricted.
Auditory kids are very influenced by the sounds around them and are unable to block them. They are very sensitive to arguments, shouting, yelling, crying, whining and scolding. Some of them say they feel pain when their teacher or parent shouts. Communicating with them in a loud voice may cause them to shut down completely. On the other hand, speaking to them in a soft, calm voice supports their learning greatly.
Children with the auditory communication style can learn anything, as long as it is associated with sound effects, a funny voice, an accent or even a lisp.
When talking to an auditory child (or adult), use auditory words from this list:
- Talk / speak
- Listen / hear
- Scream / shout
- Quiet / silence
Teachers should use these children’s strong auditory abilities to facilitate their learning. This communication style may be a challenge at school, especially in high school, because of insufficient awareness and acceptance of communication styles in the school system and because only 20% of children are auditory, which makes them a minority.
One of the difficulties auditory children face is that people around them do not understand their behavior. After people treat them as if something is wrong with them for a while, they start thinking and behaving as if something might actually be wrong with them, and this makes things harder to change.
Self-talk is a typical behavior of auditory kids (and adults) and it is very important that people who work with them do not treat this behavior as abnormal or distracting. In fact, self-talk is essential for auditory people’s thinking and development.
When auditory children’s self-talk is restricted, their stress level increases, they become confused and frustrated, and their self-confidence and self-image are threatened, which further increases the need for self-talk. In fact, when given an auditory outlet, their need for self-talk decreases.
Unfortunately, many auditory children are misdiagnosed as having Attention Deficit Disorder and trapped in a negative self-fulfilling prophecy of abnormal behavior. In my experience, after assessing hundreds of children over the years, most auditory kids do not have organic ADD, ADHD or ODD, even when some of the symptoms seem similar.
How to Help Auditory Kids Learn
Here are several ways to help kids be “in the auditory zone”, learn better, listen more, be happier and love to learn:
- When talking to them, make sure your voice is soft. No matter what happens, do not raise your voice or speak in a sharp tone.
- Allow them to whisper and talk to themselves. Allow them to talk through what they are doing. Talk to them while they work and ask them open questions about what they are doing to help them verbalize their thoughts (avoid court-style questions to “lead the witness”).
- Use music and sounds when you teach. This excellent option stimulates and motivates kids with the auditory communication style. Find out what kind of music they like and use it to set their mood before, during (softly, in the background) and after learning (as a reward). If the music contains words relating to the topic of learning, auditory kids learn the topic more quickly.
- Encourage them to take part in rhythm and music activities like dancing, singing and playing a musical instrument.
- Having background music they like may contribute to storing information better, because the information is linked to the sounds.
- Encourage auditory children to play a musical instrument, not for the sake of performance, but to stimulate them through the auditory channel and to help them control their auditory space in a productive and pleasant way. Playing an instrument will make them proud of themselves and better academically, without having to study more. This effect increases over time, so it is best to start early.
- Teach them to whistle as a good auditory stimulation that requires listening and producing sound.
- Play musical trivia. Sing parts of songs and ask them to identify the song or continue to sing it.
- Play humming trivia. Hum a song and ask them to recognize which song or piece it is. Swap roles and ask them to hum a song for you.
- Show them how to use a digital sound recorder (or smartphone, iPod or computer) to record their own voice, then play it back and listen. Encourage them to do small singing or voice-acting projects and then proudly play them for you.
- Use rhythm. Adding a beat to anything and speaking with a beat makes any content come to life for kids with the auditory communication style. Encourage them to tap along, sing along or hum along, if they want to do that.
- Choose stories with more dialogue than visual descriptions and act out the different characters in the story using different voices and sound effects. Ask the kids to read the stories and act them out for you.
- Find comic books to teach anything you want. Comics include many dialogues and auditory people, young and old, love them. You can teach them science (Magic School Bus) or history (Tintin, Asterix and Obelix, Horrible Histories) and they will be able to memorize everything in detail.
- Teach through singing! When teaching an auditory child something new, find a song for it. Auditory kids remember what they hear. The ability to learn through songs is great for teaching them, because you can always use a familiar tune as an anchor for new learning.
- Puppet shows, storytelling and role-plays are great activities for kids with the auditory communication style. The use of conversation, sounds and sound effects is very stimulating. Encourage them to make puppets and play with them, and be their welcoming and supporting fan.
- Give them verbal affirmations. Auditory kids prefer verbal communication and their self-talk tends to spiral down. To lift their spirits, say something encouraging. In my assessments, I can see an increase in performance when I use verbal encouragement. Auditory kids need verbal encouragement more often than other kids do.
- Auditory children often experience negative self-talk. Fortunately, negative self-talk requires them to look down (this is how the brain is wired). To stop it, hold their chin gently and get them to look up. Doing this from time to time will break their negative train of thought and over time, they will learn to do it by themselves.
Learn more about the auditory communication style
To read more about kids with the auditory communication style and how to help them learn, read these posts:
- MacGyver Pro: A Super Auditory Kid
- Auditory Musicians
- How to Stimulate Auditory Kids
If you have any kids with the auditory communication style, learn to support them and allow them to process information their own way. Do not be tempted to label them as being difficult or having problems, because being auditory is a gift, not a curse. If you teach your kids to see it as a blessing, they will blossom.
About Breast Cancer
Based on the National Cancer Registry Programme (ICMR) report for 2001-03, breast cancer accounts for about 25% of all cancer cases among Indian women. Breast cancer usually starts off in the inner lining of the milk ducts or the lobules that supply them with milk. A malignant tumor can spread to other parts of the body. A breast cancer that started off in the lobules is known as lobular carcinoma, while one that developed from the ducts is called ductal carcinoma.
Breast cancer starts in the cells of the breast. A cancerous (malignant) tumour is a group of cancer cells that can grow into and destroy nearby tissue. It can also spread (metastasize) to other parts of the body. Cells in the breast sometimes change and no longer grow or behave normally. These changes may lead to non-cancerous (benign) breast conditions such as atypical hyperplasia and cysts. They can also lead to non-cancerous tumours such as intra-ductal papillomas. But in some cases, changes to breast cells can cause breast cancer. Most often, breast cancer starts in cells that line the ducts, which are the tubes that carry milk from the glands to the nipple. This type of breast cancer is called ductal carcinoma. Cancer can also start in the cells of the lobules, which are the groups of glands that make milk. This type of cancer is called lobular carcinoma. Both ductal carcinoma and lobular carcinoma can be in situ, which means that the cancer is still where it started and has not grown into surrounding tissues. They can also be invasive, which means they have grown into surrounding tissues.
Rapid changes in the Earth’s magnetic field between 1225 and 1550 AD are evident in samples from the burnt floors of Iron Age structures, reports a paper published in Nature Communications this week. Burning by Iron Age people caused the rock in the floors to become magnetically orientated, producing a record of the Earth’s magnetic field at the time. The authors suggest this has implications for the interpretation of recent changes in the intensity of the Earth’s magnetic field.
The rapid decay of the Earth’s dipole magnetic field has recently captured the public imagination; a drop in intensity in the Southern Hemisphere over the past 160 years has motivated some speculation regarding a potential magnetic reversal, in which magnetic north and south switch places. However, our understanding of these changes has been limited by a lack of longer-term observations.
John Tarduno and colleagues investigate Iron Age sites from southern Africa and use magnetically oriented samples from the preserved burnt floors of huts, grain bins and livestock enclosures to produce an almost 600-year-long record. The samples show a time of rapid changes and a sharp drop in intensity of the magnetic field, suggesting that the changes in its behaviour may not just be a recent feature. The researchers attribute the changes to the unusual structure and composition of the core-mantle boundary deep beneath southern Africa. They suggest that fluid vortices trigger the expulsion of magnetic flux, resulting in rapidly changing magnetic field directions and intensities.
Scratch is a basic coding language that uses a 'building block' style of coding to create animated stories, interactive games, simulations, and beautiful artwork. In using Scratch, learners will be introduced to basic coding concepts and develop their computational thinking skills while bringing their own ideas to life. In this series, the basics of Scratch will be introduced to provide learners with the foundational skills required to begin creating in Scratch.
Scratch makes it easy for learners who are just starting out by organizing the types of code you can use into categories. The code blocks are grouped by the following categories: Motion, Looks, Sound, Events, Control, Sensing, Operators, Variables and My Blocks. These code blocks can be pieced together in the Code Area like jigsaw puzzle pieces. Both the Code Area and the Stage are visible at the same time, which allows learners to run code, test, debug and view their creations.
According to the Scratch Wiki, Scratch’s coordinate system uses 2 coordinates, x-coordinate and y-coordinate, to determine the location of a sprite on the stage. The x-coordinate value determines the horizontal location of the sprite and the y-coordinate value determines the vertical location or height. Every Scratch project whether it be a game, story or animation will always have the x,y grid behind the stage to determine where sprites are located.
In this episode, learners will explore:
- Moving their sprite left and right using the arrow keys
The following vocabulary definitions are from the Scratch Wiki.
- Block Palette (Scratch)
- The Block Palette is the area on the left of the screen when the Code button is opened. On the left, there is an area that contains the nine sections of blocks in Scratch. To the right of that, there is an area that contains blocks that can be dragged into the Code Area to make code.
- Code Area (Scratch)
- The Code Area is the large empty space to the right of the Block Palette. It is an area for storing blocks that run the project. Blocks can be dragged from the Block Palette into the Code Area and arranged to form scripts.
- Sprite (Scratch)
- Either user-created, uploaded, or found in the sprites library, are the objects that perform actions in a project.
- Sprite Pane (Scratch)
- It is a white area located beneath the Stage where all sprites present in a project can be easily accessed to modify or inspect.
- Stage (Scratch)
- The stage is the area where the sprites are and perform their actions. It is located in the top of the area to the right of the Code Area.
- A sprite’s x-coordinate is its location on the x (horizontal) axis of the stage. The value increases or decreases depending on how far right or left (respectively) on the Stage the sprite is, where the lateral center is 0.
- A sprite’s y-coordinate is its location on the y (vertical) axis of the stage. The value gets higher or lower depending on how far up or down on the Stage the sprite is, from where the vertical center is 0.
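The x/y system described in these definitions can be modeled with a short sketch. This is illustrative Python, not Scratch itself; the `Sprite` class name and the clamping behavior are assumptions (the standard Scratch stage runs from x = -240 to 240 and y = -180 to 180, though Scratch handles off-stage positions a little differently than a hard clamp):

```python
# Illustrative model of Scratch's stage coordinate system (not real Scratch code).
# Standard stage: x in [-240, 240], y in [-180, 180], with (0, 0) at the center.
X_MIN, X_MAX = -240, 240
Y_MIN, Y_MAX = -180, 180

class Sprite:
    def __init__(self, x=0, y=0):
        self.x, self.y = x, y  # (0, 0) is the center of the Stage

    def go_to(self, x, y):
        """Like the 'go to x: y:' Motion block, clamped to the stage (a simplification)."""
        self.x = max(X_MIN, min(X_MAX, x))
        self.y = max(Y_MIN, min(Y_MAX, y))

cat = Sprite()
cat.go_to(135, -77)   # moving right increases x; moving down decreases y
print(cat.x, cat.y)   # 135 -77
```

Moving right makes `x` larger, moving left makes it smaller, and likewise up/down for `y`, exactly as the values in the Sprites Pane change when you drag the sprite around.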
This episode will take you through the steps of making your sprite move right and left on the Stage.
Add Your Sprite and Background
- Choose a Backdrop for your project and a Sprite that you want to code to move left and right. If you need help with how to choose a Sprite, see Scratch Basics Episode 2 for more information.
Explore the Location of Your Sprite on the Stage With X and Y Coordinates
- Before we code anything, click on your sprite on the stage and move it to the right. Notice how the value of the x-coordinate in the Sprites Pane gets bigger! See how the value of the x-coordinate changes from -105 to 135.
- Now move your sprite to the left and notice how the value of the x-coordinate in the Sprites Pane gets smaller. Notice how the value of x changes from 135 to -35.
- If you move the sprite upwards on the stage, the value of the y-coordinate in the Sprites Pane will get bigger. Notice how the value of the y-coordinate changes from -77 to 123.
- If you move the sprite downwards the value of the y-coordinate in the Sprites Pane will get smaller. Notice how the value of y changes from 123 to -43.
Set the Start Position of Your Sprite
- Drag a “when green flag clicked” block from the Events category in the Block Palette into the Code Area.
- Click on your sprite in the Stage area and drag it to the location you want your sprite to start at every time you run your code. This will change the x-coordinate and y-coordinate of your sprite in the Sprites Pane.
- Click and drag a “go to x: y” coordinates block from the Motion category in the Blocks Palette on to the Code Area. The default x-coordinate and y-coordinate input bubbles on this Motion block will be set to the exact location that your sprite is on the Stage (which means that it will have the same coordinates that are set in the Sprites Pane). This block will tell your sprite to go to that location every time you press the Green Flag.
Moving Left and Right
- We are going to begin a new script to make our sprite move left and right. Click and drag a “when space key pressed” block from the Events category in the Block Palette into the Code Area.
This means that whatever code we add underneath this new Events block will run when you press the space key and it will not run when the Green Flag is pressed like the other script.
- Click on the drop-down menu in the “when space key pressed” block and select the “right arrow”. Now the code you add underneath this keyboard input Events block will run only when you press the right arrow key on your keyboard.
- Go to the dark blue Motion category in the Blocks Palette and select the “change x by” block. Drag this block onto the Code Area and snap it underneath the “when right arrow pressed” Events block. The “change x by 10” block tells your sprite to go right along the x-axis because its x-coordinate increases by 10, moving your sprite to a larger x-coordinate. Since we’ve attached this Motion block to the right arrow keyboard input, every time you press the right arrow key your sprite will move 10 steps to the right.
- Now we will follow the same steps to code the sprite to move to the left. Click and drag another “when space key pressed” block from the Events category in the Block Palette into the Code Area.
- Click on the drop-down menu in the “when space key pressed” block and select the “left arrow”. Now the code you add underneath this Events block will run only when you press the left arrow key on your keyboard.
- Go to the dark blue Motion category in the Blocks Palette and select another “change x by” block. Drag this block onto the Code Area and snap it underneath the “when left arrow pressed” Events block.
- Now we need to change the “change x by 10” Motion block to move the sprite to the left. Click on the white input bubble on the duplicated “change x by 10” block and change this number to -10. This will tell your sprite to move to the left along the x-axis because the number is getting smaller. Now when you press the left arrow on your keyboard, your sprite will move 10 steps to the left along the x-axis to a new, smaller x-coordinate.
- Your Code Area should now have three Event blocks with code under each block like the image below:
- Test your code! Press the Green Flag and then use the left and right arrows on your keyboard to move the sprite left and right!
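For readers curious how the same logic looks in ordinary text-based code, the start-position script and the two keyboard scripts can be sketched in Python. This is only a stand-in (Scratch projects are built from blocks, not typed code), and the start position used here is an assumption:

```python
# A text-code sketch of the Scratch scripts built in this episode.
# START_X / START_Y play the role of the "go to x: y" block, and
# handle_key plays the role of the two "when ... key pressed" scripts.

START_X, START_Y = 0, 0   # wherever you dragged your sprite to start

sprite = {"x": START_X, "y": START_Y}

def green_flag_clicked():
    # "when green flag clicked" -> "go to x: START_X y: START_Y"
    sprite["x"], sprite["y"] = START_X, START_Y

def handle_key(key):
    if key == "right arrow":   # "when right arrow key pressed"
        sprite["x"] += 10      # "change x by 10"
    elif key == "left arrow":  # "when left arrow key pressed"
        sprite["x"] += -10     # "change x by -10"

green_flag_clicked()
for key in ["right arrow", "right arrow", "left arrow"]:
    handle_key(key)
print(sprite["x"])  # two steps right and one step left of the start: 10
```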
In this episode you learned basic Scratch concepts such as how to code a sprite to move left and right on the Stage. What will you create now that you have learned a few Scratch Basics?
We want to see the awesome things you’re creating! Take a photo or video and share your work with us by emailing [email protected] or tagging @pinnguaq on Facebook, Twitter, or Instagram. Don’t forget to include the hashtag #LearnWithPinnguaq! You can also upload your project to the Pinnguaq Studio. |
The development of architectural design elements pushes the construction industry to change and develop. We have prepared a study on prestressed concrete, a type of concrete that enables wide spans and special design work, following on from the architectural concrete topic in our previous series.
Definition of Prestressed Concrete
The definition of prestressed concrete according to several regulations that apply in the world of construction is as follows:
According to the ACI (American Concrete Institute), prestressed concrete is concrete in which internal stresses of such magnitude and distribution have been introduced that they can compensate, to a certain extent, for the stresses caused by external loads.
In another definition, according to the Consensus Concrete Guidelines draft of 1998, prestressed concrete is reinforced concrete that has been given an internal compressive stress to reduce the potential internal tensile stress due to working loads.
Prestressed concrete can also be defined as reinforced concrete in which the tensile stress under certain loading conditions is kept, in both value and distribution, within a safe limit by applying a permanent compressive force. The prestressing steel used for this purpose is tensioned either before the concrete has hardened (pre-tensioning) or after the concrete has hardened (post-tensioning).
The main difference between reinforced concrete and prestressed concrete is that reinforced concrete combines concrete and steel reinforcement passively, by joining them and allowing them to work together as desired. Prestressed concrete, by contrast, combines high-strength concrete and high-strength steel in an “active” way.
This active combination is achieved by tensioning the steel and anchoring it against the concrete, thus putting the concrete into a compressed state, and it results in better behavior from both materials.
Steel is a material known for its strength and toughness, which allows it to be made to work at high tensile stress through prestressing. Concrete, meanwhile, is a brittle material: its ability to resist tension is improved by applying compression, while its ability to resist compression is not reduced. Prestressed concrete is thus an ideal combination of two modern high-strength construction materials.
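The compensation idea in these definitions can be made concrete with the classical flexure formulas: the prestressing force adds compression at the bottom fibre of a beam, which offsets the tension produced there by the external bending moment. The sketch below is illustrative only; the section size, prestressing force, eccentricity, and service moment are made-up numbers, not values from this article.

```python
# How prestressing "compensates" for tensile stress (classical flexure formulas).
# Bottom-fibre stress of a simply supported rectangular beam:
#   sigma_bottom = M*c/I - P/A - P*e*c/I   (tension positive)
# where P is the prestressing force, e its eccentricity below the centroid,
# and M the sagging moment from external loads.

def bottom_fibre_stress(b, h, P, e, M):
    """Return bottom-fibre stress in Pa (tension positive) for a b x h section."""
    A = b * h              # cross-sectional area, m^2
    I = b * h**3 / 12      # second moment of area, m^4
    c = h / 2              # distance from centroid to extreme fibre, m
    return M * c / I - P / A - P * e * c / I

# Illustrative (made-up) numbers: 0.3 m x 0.6 m beam, 1000 kN prestress at
# 0.15 m eccentricity, 200 kN*m service moment.
sigma = bottom_fibre_stress(b=0.3, h=0.6, P=1.0e6, e=0.15, M=2.0e5)
print(f"{sigma / 1e6:.2f} MPa")  # negative => bottom fibre still in compression
```

With the prestress removed (P = 0), the same moment would put the bottom fibre into roughly 11 MPa of tension, far beyond what plain concrete can resist; the prestress is what keeps the section uncracked.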
Principles and Methods of Prestressed Concrete Work
There are two methods of applying the prestressing force to concrete:
Pre-tensioned Prestressed Concrete
In this method, the tendons are tensioned against auxiliary anchorages before the concrete is cast. The tensioning force is maintained until the concrete is hard enough; the tendons are then cut, and the prestressing force is transferred to the concrete through bond. This method is very well suited to mass production.
The prestressing steel is tensioned against independent anchorages before the surrounding concrete is cast. The term pre-tensioning refers to tensioning the steel, not the beam. Pre-tensioning is usually carried out at a precast concrete manufacturing plant.
Post-tensioned Prestressed Concrete
In this method, the tendons are tensioned after the concrete is cast. Before casting, sheaths are placed along the planned tendon profile. After the concrete has hardened, the tendons are threaded into the concrete through these sheaths, which were installed during casting. Tensioning is carried out once the concrete has reached the strength required by the design calculations. After tensioning, the sheath is filled with grouting material.
Briefly, the principles of post-tensioning are as follows:
Stage 1: Prepare a formwork complete with holes for the tendon ducts, which are curved according to the moment plane of the beam, after which the concrete is cast.
Stage 2: After the concrete has been cast and can bear its own weight, the prestressing tendon or cable is threaded into the tendon duct and then tensioned to produce the prestressing force. Tensioning is done by anchoring one end of the tendon and pulling the other end (pulling from one side); in some cases the tendon is pulled from both ends simultaneously and then anchored. After anchoring, the tendon duct is grouted.
Stage 3: Once the tendon is anchored, the concrete beam is placed in compression, so the prestressing force has been transferred to the concrete. Because the tendon is curved, the prestressing force applies a distributed upward load to the beam, and as a result the beam cambers upwards.
To simplify transportation from the factory to the site, post-tensioned prestressed concrete is usually made segmentally (the beam is divided into several parts, for example sections 1 to 3 m long).
Unlike conventional concrete, prestressed concrete undergoes several stages of loading. At each loading stage, checks must be made on the condition of the compressed and tensile fibers from each section. At this stage, different allowable stresses apply according to the conditions of the concrete and tendons. There are two stages of loading on prestressed concrete, namely transfer, and service.
The transfer stage is the stage when the concrete has just hardened and the prestressing cables are tensioned. At this point, usually only the dead load of the structure acts, namely the structure’s own weight plus the load of workers and equipment. The live load is not yet acting, so the working moment is at a minimum, while the prestressing force is at a maximum because no prestress losses have yet occurred.
The service condition is the condition in which the prestressed concrete is used as a structural component. It is reached after all prestress losses have been accounted for. At this point the external load is at its maximum, while the prestressing force is close to its minimum value.
Prestressed Concrete Material
Concrete is the result of mixing cement, water, and aggregate, with a mixture ratio by weight of 44% coarse aggregate, 31% fine aggregate, 18% cement, and 7% water. After 28 days, the concrete reaches its design strength, called the characteristic compressive strength. The characteristic compressive strength is the stress exceeded by 95% of uniaxial compressive strength measurements taken from standard tests, using either a 15×15×15 cm cube or a cylinder 15 cm in diameter and 30 cm high. The concrete used in prestressed concrete is high-compressive-strength concrete with an f’c value of at least 30 MPa.
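The mix ratio quoted above lends itself to a quick batch calculation. This is an illustrative Python sketch using the percentages from the text; the 2400 kg total is an assumed batch weight (roughly one cubic metre of normal-weight concrete), not a figure from the text, and real mix designs must follow the applicable code.

```python
# Split a target batch weight into components using the mix ratio
# quoted in the text (percentages by weight).
MIX_RATIO = {
    "coarse aggregate": 0.44,
    "fine aggregate": 0.31,
    "cement": 0.18,
    "water": 0.07,
}

def batch_weights(total_kg):
    """Return the weight in kg of each component for a batch of total_kg."""
    assert abs(sum(MIX_RATIO.values()) - 1.0) < 1e-9  # ratios must sum to 100%
    return {name: round(total_kg * frac, 1) for name, frac in MIX_RATIO.items()}

print(batch_weights(2400))
# {'coarse aggregate': 1056.0, 'fine aggregate': 744.0, 'cement': 432.0, 'water': 168.0}
```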
Steel: steel materials commonly used in manufacturing practices are as follows.
- PC wire, usually used as prestressing steel in pre-tensioned prestressed concrete.
- PC strand wire, usually used as prestressing steel in post-tensioned prestressed concrete.
- PC bar, usually used as prestressing steel in pre-tensioned prestressed concrete.
- Ordinary reinforcement, namely the reinforcement used in conventional concrete, such as plain bars and deformed bars.
Prestressed Concrete Technology
Prestressed concrete works by applying to the structural elements, after engineering calculations, the stresses that the element will encounter under load. From a technical point of view, it combines the high compressive strength of concrete with the high tensile strength of steel.
In the working logic of classical reinforced concrete structures, most tensile stresses are carried by the steel reinforcement. In post-tensioned concrete systems, concrete and steel carry the stresses together. The best examples of this can be seen in building elements such as post-tensioned beams and prestressed columns. It makes the structure more resistant to unexpected shocks and loads.
Advantages of Prestressed Concrete
- Concrete compressive strength is used efficiently
- Steel building material is used efficiently
- Reduces the risk of corrosion by reducing tensile cracking
- Resistant to shear stress
- The same performance can be obtained with small sections compared to reinforced concrete
- Structure weight is reduced
- Can be composited
Disadvantages of Prestressed Concrete
- It is difficult to work with
- It must be manufactured under careful quality control
- Special alloy prestressed steels are expensive
- Its assembly requires attention and different construction equipment
- Construction cost is relatively expensive
Thanks to its properties, prestressed concrete technology has several advantages over reinforced concrete structures. It allows the concrete and steel building materials to work together, post-tensioned structural elements resist the applied loads more effectively, and long members with small cross-sectional areas make it possible to span wide openings.
Properties of Prestressed Concrete
- It has high strength
- Tensile stress is low
- Its durability is quite high
- It is economical compared to conventional reinforced concrete.
- It is light compared to reinforced concrete
- Resistant to shear forces
Due to these advantages, its use is increasing in today’s construction industry. Post-tensioned concrete systems are used in most recently built industrial buildings, commercial buildings, and factories.
Where to use?
Post-tensioned (“ardgermeli”) concrete is, in essence, a “prestressed” form of construction: the applied forces are calculated in advance so that the compressive stress introduced into the concrete balances the tensile stresses created by external loads.
In reinforced concrete elements, this prestress is generally applied through the steel reinforcement. When we examine the usage areas of prestressed concrete, the following types of structures come to the fore:
- Foundation beams
- Railway sleepers
- Bridge structures
- Water tanks
- Runway structures (airports, etc.)
- Tall buildings
- Factory buildings
Prestressed concrete is not preferred for columns and walls due to its material properties. The structures in which it is used are typically those subject to very high bending stresses. It is also preferred as a retaining wall in some buildings. The main reasons it is preferred in such structures are its ease of manufacture and its economy.
According to the 2020 Emissions Gap Report from the United Nations Environment Programme, the global temperature will rise 3.2 degrees by 2100. In North America and the Middle East, forest fires and drought have continued due to heat waves, while Western Europe and central China are suffering their highest levels of rainfall since records began. In Brazil, a cold wave brought the lowest temperature down to 17.96°F (-7.8°C).
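The Fahrenheit figure quoted above can be checked with the standard conversion formula, C = (F - 32) × 5/9:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

print(round(f_to_c(17.96), 1))  # -7.8, matching the figure in the text
```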
Experts predict that the world temperature will rise two degrees Celsius by 2050. In that situation, about 420 million people will undergo extreme heat waves, and about 410 million people in cities will run short of water. The melting of glaciers in Greenland and western Antarctica will raise sea levels by about 13 meters. The most worrying scenario is that the permafrost in Siberia melts and leaks billions of tons of methane, which would speed up the temperature increase. Mark Lynas predicted that a rise of six degrees Celsius would result in the release of methane hydrate, which is predicted to drive all living things toward mass extinction.
The need to make efforts to prevent this temperature rise is being underlined. Many countries, including Korea, are trying to reduce carbon dioxide emissions to zero; however, a 4 to 5 degree Celsius rise is already inevitable.
Science Week in 3rd + 4th Class
The children in 3rd and 4th class were very busy during Science Week (November 8th - 15th). They carried out a variety of experiments and had great fun predicting what the results of the experiments would be.
Colour Change Walking Water Experiment
The children used cups, tissue paper, food colouring and water to complete this experiment. They laid out the cups as in the picture, put water and food colouring in every other cup, and placed a folded piece of tissue in each cup with its other end reaching into the next cup. The result of the experiment wasn't clear straight away, but after leaving the cups overnight the children were quite surprised in the morning when they saw how things had changed.
The water moved from the original cups to the empty cups through a process called capillary action: water is able to move against the force of gravity because water molecules stick to each other and to the fibres of the tissue paper. As water molecules are attracted to the fibres of the paper, they pull other water molecules with them. The adhesive forces between the water and the paper fibres are stronger than the cohesive forces between the water molecules, which allows the water to travel from one cup to another.
Water, Oil and Food Colouring Experiment
The children used two cups, water in one and oil in the other and added a few drops of food colouring to the oil. They mixed the oil together with the food colouring and then poured it into the cup with the water. This is what happened:
This experiment is all about density. Density is a measure of how much something weighs (its mass) relative to how much space it takes up (its volume). Oil and water don’t mix because the water molecules are more attracted to each other than to the oil. Oil is also less dense than water, which causes the oil to float on top of the water, creating two distinct layers. Liquid food colouring is water-based, which is why it doesn’t mix with the oil even when you stir it. Instead the food colouring breaks up into small droplets which become temporarily suspended (floating) within the oil. These droplets are denser than the oil, so they gradually fall through the oil and enter the water. (https://gosciencekids.com/fireworks-science-kids-oil-water-density/)
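The density idea behind the experiment can also be written as a short calculation. This Python sketch uses typical textbook densities (water about 1.00 g/cm³, vegetable oil about 0.92 g/cm³), which are assumptions, not measurements from the class experiment:

```python
# Density = mass / volume. A liquid with lower density floats on one
# with higher density, which is why the oil layer sits on top of the water.

def density(mass_g, volume_cm3):
    """Return density in g/cm^3 from a mass in grams and a volume in cm^3."""
    return mass_g / volume_cm3

water = density(100.0, 100.0)  # ~1.00 g/cm^3
oil = density(92.0, 100.0)     # ~0.92 g/cm^3 (typical vegetable oil)

top = "oil" if oil < water else "water"
print(f"{top} floats on top")  # oil floats on top
```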
By Fredrica Syren:
Until now, research on the impact of plastic on our health has focused solely on specific parts of the plastic lifecycle, often on single products, processes, or exposure pathways. Now a new research report by the Center for International Environmental Law (CIEL), Earthworks, the Global Alliance for Incinerator Alternatives (GAIA), Healthy Babies Bright Futures (HBBF), IPEN, Texas Environmental Justice Advocacy Services (t.e.j.a.s.), the University of Exeter, and UPSTREAM shows that the impact of plastic exposure on the human body is greater than previously understood, and that people worldwide are exposed at multiple stages of plastic’s lifecycle.
Every stage of the plastic lifecycle has an impact on human health: from the wellhead to production, from store shelves to human bodies, and on through waste management. The effects continue with the impacts of microplastics in the air, water and soil.
According to the report, 99% of plastic comes from fossil fuels. The extraction of oil and gas, particularly hydraulic fracturing for natural gas, releases an array of toxic substances into the air and water, often in significant volumes. Over 170 fracking chemicals used to produce the main feedstocks for plastic have known human health impacts, including cancer; neurological, reproductive, and developmental toxicity; impairment of the immune system; and more.
We’re exposed to plastic many ways:
- Extraction and transportation of fossil feedstocks for plastic release highly toxic substances into the air and water. They are known to cause cancer, neurotoxicity, reproductive and developmental toxicity, and impairment of the immune system;
- Refining and production of plastic resins and additives release carcinogenic and other highly toxic substances into the air. The effects include impairment of the nervous system, reproductive and developmental problems, cancer, leukemia and genetic impacts like low birth weight;
- Consumer products and packaging can lead to ingestion and/or inhalation of microplastic particles and hundreds of toxic substances;
- Plastic waste management, especially “waste-to-energy” and other forms of incineration, releases toxic substances including heavy metals such as lead and mercury; acid gases and particulate matter, which can enter air, water and soil, thus causing both direct and indirect health risks for workers and nearby communities;
- Fragmenting and microplastics enter the human body directly and lead to an array of health impacts (including inflammation, genotoxicity, oxidative stress, apoptosis and necrosis), which are linked to negative health outcomes ranging from cardiovascular disease to cancer and autoimmune conditions;
- Cascading exposure as plastic degrades further leaches toxic chemicals concentrated in plastic into the environment and human bodies; and
- Ongoing environmental exposures as plastic contaminates and accumulates in food chains through agricultural soils, terrestrial and aquatic food chains, and the water supply, thereby creating new opportunities for human exposure.
One of the obstacles to finding a solution that will reduce exposure to plastic is the lack of transparency regarding the chemicals in plastic and its production processes. This prevents a full assessment of its impacts, reduces the ability of regulators to develop adequate safeguards, prevents consumers from making informed choices, and prevents fenceline communities from limiting their exposure.
According to a CIEL report called Fueling Plastics, plastic producers are well aware of the damage plastic is causing to our oceans … and they don’t care. As a matter of fact, they have known about the harm disposable plastic does to humans and the ocean since the 1970s. Instead of taking responsibility and working on a solution, for decades they have continued to deny any responsibility and have spent their time opposing sustainable solutions and fighting local regulations on disposable plastic products. Even as the crisis mounted, they still did nothing. To put it bluntly, co-author of the CIEL report Steven Feit says the plastics producers took a page from Big Oil’s playbook on climate change: deny, confuse, and fight regulation and effective solutions.
We must remember that, after all, the plastics industry is a for-profit business. It is the United States’ third-largest industry, responsible for 400 billion dollars in shipments. The industry produces this volume of material because of consumer demand, so less demand will mean less plastic.
Unfortunately, a change in consumers’ habits is not enough to solve the plastic crisis: “As hundreds of billions of dollars from the petrochemical industry are being poured into new plastic production, we need a global, binding treaty that regulates plastic pollution throughout its lifecycle, from wellhead production to ocean waste.”
It’s a fact that many actions and solutions are needed to confront this threat to human life and human rights. The only way to be effective is for manufacturers to reduce production, use, and disposal of plastic and associated toxic chemicals. |
A History of Civil Liberties During Wartime
History shows that curtailment of civil liberties—including the right to free speech, the right to a fair trial, and the right to equal protection under the law—has often followed national crises, particularly the outbreak of war. "Since the nation's founding, Americans have relied on basic legal protections spelled out by the Bill of Rights," writes reporter Angie Cannon, "but during past wartimes, civil liberties have been curbed dramatically."16 From the Sedition Act of 1798—which made it a crime to criticize the government—to the internment of Japanese Americans during World War II, during times of crisis the United States has often curtailed civil liberties in ways that Americans later regretted. In waging the war on terrorism, one of the many challenges facing the United States is to avoid the civil liberties mistakes of the past.
The nation's founders, well aware of the tension between security and freedom, were concerned that Americans would be tempted to curtail civil liberties in times of war. In 1787, during the debates over the framing of the Constitution, Alexander Hamilton predicted that when faced with war or other threats, America would "resort for repose and security to institutions which have a tendency to destroy their civil and political rights. To be more safe, they, at length, become willing to run the risk of being less free."17 In 1798, little more than a decade after the framing of the Constitution and only seven years after the Bill of Rights was ratified, Hamilton's fears were proved correct.
Free Speech and Sedition
In 1798 the French Revolution was still raging, and hostile French diplomatic actions had many Americans convinced that war with France was imminent. In addition, many French refugees had immigrated to the United States, and most of them supported the Republican Party, led by Thomas Jefferson. John Adams, a Federalist, was president at the time. The Federalists also controlled Congress and—partly because of hysteria over possible war with France, and partly to secure Federalist power against the Republicans—the legislature passed four laws. Known as the Alien and Sedition Acts, they targeted immigrants and made it a crime to criticize the government.
One of the four laws was the Naturalization Act, which postponed citizenship and voting privileges for immigrants from five years to fourteen years. Because many immigrants tended to vote Republican, this law greatly undermined the Republican Party. The Alien Act gave the president the power to deport aliens thought to present a danger to the government, and the Alien Enemies Act authorized the imprisonment and deportation of immigrants during wartime. Together, the alien acts foreshadowed the government's tendency to enact anti-immigrant measures in times of war.
The most controversial of the four laws, however, was the Sedition Act. The term sedition refers to the incitement of rebellion against a government, but the 1798 Sedition Act broadly prohibited all criticism of the government, outlawing "any false, scandalous and malicious writing or writings against the government of the United States, or either house of the Congress of the United States, or the President of the United States." The broad terms of the law basically nullified the First Amendment's protection of free speech and freedom of the press. Prominent supporters of the Republican Party, many of them journalists, were arrested for criticizing the Federalists in power.
The Alien and Sedition Acts identified no traitors and made no Americans safer. To the contrary, American citizens and their rights were the only casualties. The war with France never came, and the fear of the French subsided. No alien was ever deported or incarcerated, and a few years later the … residence requirement for citizenship reverted to five years. But the Sedition Act was widely enforced against American citizens, all of them … political opponents of President Adams and his administration's policies.18
All four of the laws were repealed or allowed to expire by 1802, and President Thomas Jefferson, elected in 1800, pardoned those imprisoned under the Sedition Act.
The Sedition Act of World War I
Passed on May 16, 1918, the Sedition Act made it a crime to criticize the government or the war effort. An excerpt follows.
"Whoever, when the United States is at war, shall willfully make or convey false reports or false statements with intent to interfere with the operation or success of the military or naval forces of the United States, or to promote the success of its enemies,… and whoever, when the United States is at war, shall willfully cause or attempt to cause, or incite or attempt to incite, insubordination, disloyalty, mutiny, or refusal of duty, in the military or naval forces of the United States, or shall willfully obstruct or attempt to obstruct the recruiting or enlistment service of the United States, and whoever, when the United States is at war, shall willfully utter, print, write, or publish any disloyal, profane, scurrilous, or abusive language about the form of government of the United States, or the Constitution of the United States, or the military or naval forces of the United States, or the flag of the United States, or the uniform of the Army or Navy of the United States, or any language intended to bring the form of government of the United States, or the Constitution of the United States, or the military or naval forces of the United States, or the flag of the United States, or the uniform of the Army or Navy of the United States into contempt, scorn, contumely, or disrepute, or shall willfully utter, print, write, or publish any language intended to incite, provoke, or encourage resistance to the United States, or to promote the cause of its enemies,… and whoever shall willfully advocate, teach, defend, or suggest the doing of any of the acts or things in this section enumerated, and whoever shall by word or act support or favor the cause of any country with which the United States is at war or by word or act oppose the cause of the United States therein, shall be punished by a fine of not more than $10,000 or imprisonment for not more than twenty years, or both."
Yet more than a century later, the United States passed another Sedition Act, this time to suppress dissent against America's involvement in World War I. The Sedition Act of 1918 made it a crime to "utter, print, write, or publish any disloyal, profane, scurrilous, or abusive language about the form of government of the United States" or to "encourage resistance to the United States." The act also authorized the postmaster general to seize mail that contained criticism of the government. U.S. leaders, concerned about the 1917 Bolshevik Revolution in Russia, a revolt of the working class, became particularly suspicious of Socialists and other left-wing groups. Eugene V. Debs, a labor leader, anti-war activist, and former Socialist Party candidate for president, was among the more than one thousand individuals imprisoned for violating the Sedition Act.
The Wisdom of Hindsight
In the second half of the twentieth century, the Supreme Court, over the course of several rulings, made it clear that both of the Sedition Acts had been unconstitutional. Today, the idea that Congress would pass a law abridging freedom of speech or freedom of the press is unthinkable to most Americans. In fact, defenders of the USA PATRIOT Act point out that although it expands the power of the FBI to search an individual's belongings and financial records, it only allows such searches if they are "not conducted solely upon the basis of activities protected by the first amendment to the Constitution."
However, the First Amendment only achieved its vaunted status after being so severely curtailed at two different points in U.S. history. As wrong as they may seem to modern Americans, the two sedition acts seemed justified to the leaders who enacted them and to many of the Americans who lived under them. As journalist Geoffrey R. Stone explains, "It is much easier to look back on past crises and find our predecessors wanting than it is to make wise judgments when we ourselves are in the eye of the storm. But that challenge now falls to us."19
Having been curtailed twice before, the right to criticize the government is now one of Americans' most cherished liberties. However, civil libertarians caution, the curtailment of other liberties that Americans take for granted—such as freedom from unwarranted search and seizure—may seem justified "in the eye of the storm," just as the Sedition Acts seemed justified to past generations. In addition, the war on terrorism is unlike conventional wars of the past: The security measures to combat terrorism are different than the measures used to fight traditional wars, and therefore the threats to civil liberties may be different as well.
Terrorist Bombings and the Palmer Raids
Compared to other countries, the United States does not have a great deal of experience in dealing with terrorist attacks on U.S. soil, but one incident stands out: the Palmer Raids of 1919 to 1921. In June 1919, the fighting in World War I had recently ended, but the United States was in political and economic turmoil. Workers had led the 1917 Bolshevik Revolution in Russia, and many Americans were caught up in the "Red Scare," worrying that Russia's communism and political instability would take hold in the United States.
Amid this climate of social conflict, writes historian Bruce Watson, "the threat of terrorism sent Americans into a frenzy of fear."20 A series of bombings and attempted bombings had begun on April 28, when a package of sulfuric acid and dynamite was mailed to Seattle's mayor. Then on the evening of June 2, bombs exploded in eight cities—in Washington, D.C., an Italian anarchist blew himself up outside the home of A. Mitchell Palmer, President Woodrow Wilson's attorney general. Palmer reacted to the bombings by ordering mass arrests of suspected Communists, Socialists, and anarchists. From June 1919 to early 1921, between six thousand and ten thousand people were arrested—five thousand of them in sweeping raids of thirty-three cities on January 2, 1920. Most of those arrested were noncitizen immigrants. Many were arrested solely because of their affiliation with a radical group, without warrants or evidence. More than two hundred immigrants were deported.
"Not for at least half a century," writes historian William Leuchtenberg, "had there been such a wholesale violation of civil liberties."21 Palmer justified the raids by claiming that there was a terrorist conspiracy afoot. He predicted that there would be mass bombings on May 1, 1920. When none occurred, the Red Scare began to die down and a public backlash against the Palmer Raids began. The American Civil Liberties Union was founded in 1920 largely as a result of the civil liberties violations that occurred during the Palmer Raids. "The Palmer Raids trampled the Bill of Rights," states an ACLU pamphlet, "making arrests without warrants, conducting unreasonable searches and seizures, wantonly destroying property, using physical brutality against suspects, and detaining suspects without charges for prolonged periods. Palmer's men also invoked the wartime Espionage and Sedition Acts of 1917 and 1918 to deport noncitizens without trials."22
For civil libertarians, Attorney General John Ashcroft's ordering of mass arrests of immigrants after September 11 seemed like a frightening echo of the Palmer Raids. In the first few days after September 11, more than seven hundred foreigners were arrested and detained on immigration charges. Most were later released, but about one hundred were still being detained more than a year later. Civil libertarians argue that, as in the Palmer Raids, immigrants have become the primary targets of investigation, only now being Arab or Muslim has replaced being a Socialist or a Communist. The detainment of immigrants after September 11 also recalls the internment of Japanese Americans during World War II.
Following the Japanese surprise attack on Pearl Harbor, Hawaii, on December 7, 1941, millions of Americans feared that a Japanese invasion of the West Coast was imminent, and some members of the public and the military feared that the large numbers of Japanese immigrants and U.S.-born Japanese Americans living in California, Oregon, and Washington might aid such an invasion. President Franklin D. Roosevelt—who a few years earlier had declared, "If the fires of freedom and civil liberties burn low in other lands they must be made brighter in our own"23—gave in to the pressure to relocate Japanese Americans and immigrants away from the West Coast. In February 1942 Roosevelt signed an executive order authorizing the military to round up people of Japanese ancestry and forcibly relocate them to internment camps in eastern California and the Southwest. These people were forced to sell their homes, businesses, and farms at great losses and become prisoners for the duration of the war.
More than two-thirds of those relocated were American citizens, who were denied their constitutional rights to due process under the law and to equal treatment under the law. Yet, as Glasser of the ACLU notes, "Nearly all Americans accepted this, looked the other way, and felt safer because they were afraid."24 Of course, the Japanese invasion of the West Coast never happened, and decades later, in 1988, Congress authorized monetary compensation for those who had been relocated and President Ronald Reagan issued a formal apology for the government's actions. During the actual crisis, though, the United States chose to try to increase safety at the expense of civil liberties.
Treatment of Persons Suspected of Criminal Activity
The sedition acts, the Palmer Raids, and the internment of Japanese Americans are just three of the most infamous violations of civil liberties during times of crises, in which the First Amendment and the rights of immigrants and minorities were clearly violated. At other points in history, the government has infringed on the right of individuals to be free from unwarranted searches and the right of those suspected of a crime to defend themselves in a court of law.
During the Civil War, President Abraham Lincoln suspended the writ of habeas corpus, a constitutional provision that prohibits the government from detaining citizens without charge. The U.S. government's detention of thousands of terrorism suspects since September 11 has again raised questions about the suspension of the writ, and the Sixth Amendment's guarantee of a speedy and public trial, in times of crisis.
At the beginning of America's Cold War with the Soviet Union in the 1950s, Senator Joseph McCarthy and the House Un-American Activities Committee (HUAC) launched a hunt for Communist sympathizers and subversives. McCarthy's Senate committee and HUAC made sweeping accusations against individuals and organizations, often only on the evidence of unidentified informants. In show business, academia, and labor unions, many people's careers were ruined once they were labeled as Communist sympathizers by McCarthy or HUAC. Often people were identified as such based only on their ties to a left-wing organization. Some civil libertarians worry that the USA PATRIOT Act's harsh penalties for those with ties to terrorist organizations, and the way the USA PATRIOT Act defines the term "terrorist organization," could lead to charges of guilt by association as in the McCarthy era.
Finally, beginning in 1971, journalists and congressional investigations revealed that the U.S. government, through the Federal Bureau of Investigation (FBI), the Central Intelligence Agency (CIA), and other agencies, had been spying on, and, in some cases, sabotaging the efforts of, civil rights and antiwar groups since the 1950s. The FBI and other law enforcement organizations used illegal wiretaps and other measures to monitor civil rights leaders such as Martin Luther King Jr., black-power groups such as the Black Panther Party, and a variety of organizations, including several independent newspapers, that opposed the war in Vietnam. In the 1970s several steps were taken to limit intelligence organizations' authority to spy on U.S. citizens. As journalist Stuart Taylor Jr. explains,
The following excerpts are from a speech that Secretary of Transportation Norman Y. Mineta gave at the University of Rochester in 2001, in which he discussed his experience as a Japanese American during World War II and his hope that Americans will not react to the September 11 attacks with fear of and discrimination against minorities.
"As you know, more than a few journalists and historians have taken to describing September 11th as the new Pearl Harbor. The analogy is a good one—once again, the United States has been attacked without warning and without mercy. The attack has awakened us to a danger our Nation sometimes felt we would not have to face. And it has strengthened our resolve to face that danger—and remove it.
I think that all of you will understand that, as an American of Japanese ancestry, I find the analogy of Pearl Harbor to be particularly important. It highlights one of the greatest dangers that we will face as a country during this crisis—and that is, the danger that in looking for the enemy we may strike out against our own friends and neighbors.…
The internment of Japanese Americans during the Second World War has rightly been called the greatest mass abrogation of civil liberties in our Nation's history—and I believe that it stands as a warning to all of us of how dangerous misguided fear can be.…
We can resolve today, as Americans, that the tremendous progress we have made toward our goal of equal justice and equal opportunity for all Americans will not be sacrificed to fear.
The terrorists who committed the atrocities of September 11th … believe they can use the forces of terror and fear to make us fail our most basic principles, and to break our most sacred promises to each other. It is my greatest hope that, in the months and years ahead, all of us will join together as Americans to make sure they do not succeed."
The Supreme Court, Congress, and the Ford and Carter administrations placed tight limits on law enforcement and intelligence agencies. The Court [restricted] government powers to search, seize, wiretap, interrogate, and detain suspected criminals (and terrorists). It also barred warrantless wiretaps and searches of domestic radicals. Congress barred warrantless wiretaps and searches of suspected foreign spies and terrorists—a previously untrammeled presidential power—in the 1978 Foreign Intelligence Surveillance Act.25
The USA PATRIOT Act, which removes some of these restrictions, has revived the debate about domestic intelligence programs and their potential for abuse.
Avoiding the Mistakes of the Past
Some proponents of the war on terrorism have downplayed America's history of curtailing civil liberties during wartime. Historian Jay Winik, for example, points out that although the United States has repeatedly curtailed civil liberties in times of crisis, it has also restored them once the crisis was past:
Faced with the choice between security and civil liberties in times of crisis, previous presidents—John Adams, Abraham Lincoln, Woodrow Wilson and Franklin Roosevelt—to a man (and with little hesitation) chose to drastically curtail civil liberties. It is also worth noting that despite these previous and numerous extreme measures, there was little long-term or corrosive effect on society after the security threat subsided. When the crisis ended, normalcy returned, and so too did civil liberties, invariably stronger than before.26
In addition, defenders of the USA PATRIOT Act and other measures in the war on terrorism argue that they are not nearly as drastic as civil liberties curtailments of the past.
Moreover, they argue that Americans will not repeat past mistakes. For example, Harvard law professor Laurence H. Tribe has applauded President Bush and other leaders for condemning discrimination against Muslims and Arab Americans. Discussing the anti-Japanese hysteria that followed the attack on Pearl Harbor, Tribe remarked, "How different was the sight of New York's Mayor Rudolph Giuliani, soon followed by President Bush, appealing eloquently to Americans not to seek revenge on their fellow citizens who happened to be Muslims."27
Civil libertarians insist, however, that upholding civil liberties in times of crisis requires constant vigilance. Again and again, they point out, when faced with a choice between security and civil liberties, Americans have chosen security. Although the nation as a whole has survived curtailments of civil liberties, countless individuals had their constitutional rights violated by repressive laws and security measures. And often civil liberties have been curtailed unnecessarily, in the face of exaggerated threats. As journalist Daniel Hellinger writes, "Rather than defend the 'homeland' from terrorist attacks, abridgement of civil liberties has more often been aimed at suppressing dissent, advancing some other agenda, or boosting the careers of unscrupulous politicians."28
Lincoln's Suspension of the Writ of Habeas Corpus
One of the most well-known examples of the restriction of civil liberties during wartime is President Abraham Lincoln's suspension of the writ of habeas corpus during the Civil War. Habeas corpus is a Latin term meaning "you should have the body," capturing the idea that the government should not detain people against their will without just cause. The Constitution guarantees that any person being held by the government has the right to request a writ of habeas corpus from the courts. If a court agrees to the request, then the government must demonstrate that it is holding the person in question for a compelling reason or else set the person free. The right to a writ of habeas corpus—essentially a guarantee against indefinite detention—is the only individual right included in the text of the original Constitution.
The Constitution states in Article 1, Section 9 that the writ may be suspended during national emergencies: "The privilege of the Writ of Habeas Corpus shall not be suspended, unless when in Cases of Rebellion or Invasion the public Safety may require it." Lincoln assumed this power to authorize the arrests and indefinite detention of thousands of Southern sympathizers. For generations, Civil War historians have argued over whether Lincoln's actions were justified and whether they were constitutional.
In general, Americans are more willing to curtail civil liberties during times of crisis than during times of peace; civil libertarians argue that, given the wartime abuses of the past, it is during wartime that Americans should be most cautious about sacrificing their freedoms. Both sides, however, have plenty of hope that in the war on terrorism, the United States will learn from, rather than repeat, its past mistakes. As journalist Geoffrey Stone observes, "To strike the right balance in our time, our nation needs citizens who have the wisdom to know excess when they see it and the courage to stand for liberty when it is imperiled."29 |
Earth’s increasingly precarious state has long worried scientists. Ever since space exploration began, colonizing other bodies in the solar system has been the underlying vision for establishing permanent human settlements in space. The science community has long fixated on building habitats on other planets of our solar system and on our natural satellite, the Moon. While the Moon, among other space bodies, may not be an ideal place for a permanent residence, it could serve as a storage unit for our invaluable resources.
According to a New York Post report, scientists have proposed to establish a lunar gene bank that could house a repository of reproductive cells, sperm and egg samples from 6.7 million of Earth’s species, including humans. The proposed bank or ‘ark’ to be built on the moon is seen as a ‘modern global insurance policy.’
At a recent aerospace conference, mechanical and aerospace engineer Jekan Thanga, whose team at the University of Arizona submitted the report, proposed setting up a lunar gene bank by shipping millions of sperm and egg samples to the moon for safekeeping. Speaking at the annual Institute of Electrical and Electronics Engineers (IEEE) Aerospace Conference on Saturday, Thanga said that given the planet’s growing instability, an ‘Earth-based repository’ would leave the collected specimens vulnerable. He wants to jump-start a cross-planetary backup of sorts by starting a human seed vault on the moon as soon as possible.
According to his presentation, the so-called ‘ark’ would cryogenically preserve various species in the event of a global disaster. “We can still save them until the tech advances to then reintroduce these species — in other words, save them for another day,” he said.
The study he co-authored with five other scientists proposes storing the reproductive cells in recently discovered lunar ‘pits’ from which scientists believe lava once flowed billions of years ago. They think these pits are also the perfect size for cell storage: they go down 80 to 100 meters underground and ‘provide readymade shelter from the surface of the moon,’ which endures ‘major temperature swings’ as well as threats from meteorites and space radiation.
In his presentation, he also said that many plants and animals were ‘seriously endangered’ and cited the eruption of Indonesia’s Mount Toba 75,000 years ago, which caused a 1,000-year cooling period. He drew a parallel between that event and present-day ‘human activity and other factors that we fully don’t understand.’
However, Thanga’s concept of creating gene banks is not new; it is already employed at the Svalbard Global Seed Vault on the island of Spitsbergen in the Arctic Sea, which houses plant seeds and other specimens. The unique seed vault currently holds close to 992,000 samples, each containing an average of 500 seeds.
What is EMF and how is it produced?
Electric and magnetic fields (EMFs)—also known as “radiation”—are invisible energy fields produced by electricity. Power lines, cell phones, and microwaves are all common sources. There was some concern in the 1990s about a possible link between EMFs and childhood cancers, but there hasn’t been enough research to back it up. EMFs are produced by many of today’s most prevalent electrical devices, which means we’re constantly exposed to this form of radiation. While there has been substantial research into the possible hazards of EMFs, no definitive linkages have been found thus far. That does not, however, imply that scientists are persuaded they are fully safe.
There is currently no consensus on whether EMFs should be considered a possible harm to human health. EMFs are “probably carcinogenic to humans,” according to the World Health Organization’s International Agency for Research on Cancer (IARC), but there has been no equivalent at the federal level in the United States.
What are the symptoms of EMF?
Lack of Quality Sleep
Heart Palpitations and Chest Pain
What Are the Health Risks of EMF?
When people talk about the possible health dangers of EMFs, they’re usually talking about non-ionizing man-made EMFs, like those emitted by electronic equipment such as computers, phones, and televisions, rather than natural radiation like the ultraviolet (UV) light emitted by the sun. The science behind how UV radiation harms human health is well understood at this point. This includes understanding that UV radiation can cause sunburns, skin cancer, premature aging, and snow blindness (a sunburn to your cornea that causes a temporary loss of vision), as well as lowering your body’s ability to fight illness.
The question of whether EMFs from power lines can cause cancer has been studied since the 1970s. Early studies suggested a link between living near power lines and childhood leukemia. However, more recent research, which includes studies from the 1990s to the 2010s, has produced mixed results. The majority of studies found no link between power lines and childhood leukemia, and those that did found one only in children who lived in homes with extremely strong magnetic fields, which are uncommon in households.
How can you protect yourself from EM radiation?
Despite the fact that there is no scientific consensus on the health effects of anthropogenic EMFs, some people may prefer to avoid electronic device radiation as much as possible out of an abundance of caution. Here are some examples of how to go about doing that:
1. Ensure that your cell phone reception is as good as it can be. Some phones will amplify their signal to try to build a better connection if they have poor reception, which increases EMF exposure.
2. When making phone calls, use a headset or speakerphone. The goal is to keep your phone away from your body as much as possible.
3. Limit the time you spend on your phone and other electronic gadgets. This means using them less frequently and for shorter periods of time.
4. Instead of calling, send a text. Because texting uses a much smaller signal than a voice call, it exposes you to fewer EMFs.
EMF Pendant Necklaces are technologically advanced necklaces that vibrate at a frequency that is in line with your nervous system and energy fields. As a result, anytime you are exposed to radiation from electronic gadgets, these smart meters create healthy negative ions that offset the harmful positive ions emitted by the radiation, thus neutralizing the effects of the radiation.
As previously stated, the cancellation of those ions causes a sense of peace and relaxation. It relieves worry, stress, and any other negative consequences caused by non-ionizing radiation in your body. This is why EMF Pendant Necklaces are referred to as “harmonizers,” “neutralizers,” or “defenders.”
My Final Take on This:
The investigation into the health effects of EMFs is still ongoing. For the time being, the best we can do is work with the data we have, which shows that non-ionizing EMFs do not cause cancer in children or adults.
And if you feel more in control of your health by taking extra care with gadgets that emit EMFs, tactics like minimizing cell phone usage or getting an EMF reading taken in your vicinity will not harm you.
ATOMS AND MOLECULES
Matter is made up of discrete particles. The main ones are atoms, molecules, and ions. An atom is the smallest part of an element which can take part in a chemical reaction. A molecule is the smallest particle of a substance that can exist alone and still retains the chemical properties of that substance. Molecules are made up of atoms.
Atomicity of an element is the number of atoms in one molecule of the element. We have monatomic, diatomic and triatomic for those elements that contain one atom, two atoms and three atoms respectively in their molecules.
- Define an atom.
- Give two examples of diatomic molecules.
An ion is an atom or group of atoms which carries an electric charge. Such groups of atoms that carry either a positive or negative charge are called RADICALS.
An acid radical is thus a small group of atoms carrying a negative charge that keeps its identity. Examples include SO42-, NO3-, etc.
Generally ions are grouped as cations and anions. Cations are positively charged ions, e.g. Ca2+, Na+, NH4+, etc.
Anions are negatively charged ions, e.g. CO32-, SO42-, Cl-, OH-, etc.
- What are ions?
- State the cation and anion present in (i) H2SO4 (ii) NaCl (iii) FeSO4
DALTON’S ATOMIC THEORY
John Dalton, British Physicist and Chemist (1808) proposed the atomic theory thus:
- All elements are made up of small indivisible particles called atoms.
- Atoms can neither be created nor destroyed in any chemical reaction.
- Atoms of the same elements are exactly alike in aspect and are different from atoms of all other elements.
- Atoms of different elements can combine in simple whole number ratios to form compounds.
- All chemical changes result from the combination or separation of atoms
MODIFICATIONS OF DALTON’S ATOMIC THEORY
Due to new discoveries in the twentieth century, Dalton’s atomic theory cannot hold in its entirety. There is need for its modification.
- The first statement has been proved wrong by Rutherford’s discovery of protons, electrons and neutrons as constituents of the atom. An atom is not an indivisible solid piece.
- The second statement still holds good for ordinary chemical reactions. During nuclear reactions, however, the nucleus can be broken into simpler atoms, giving out large amounts of heat (nuclear fission). This destroys the atoms involved.
- The discovery of isotopes makes the third statement unacceptable. Chlorine, for example, has two kinds of atoms with different nuclear contents and hence different relative atomic masses, although they have the same proton number.
- The fourth statement is true only for inorganic compounds, which contain a few atoms per molecule. Carbon forms very large organic molecules such as proteins, starch and fats which contain thousands of atoms.
- State the modifications of Dalton’s atomic theory.
- A mixture contains propanone, ethanol and water with boiling points of 56°C, 78°C and 100°C respectively.
- What method will be used to separate the liquids
- Name the first liquid that will distil over. Explain your answer
- Name an industrial process that uses fractional distillation
- Which of the following is not a constituent of the atom (a) proton (b) electron (c) neutron (d) isotope
- Which of the following statement about an atom is not correct? (a) it is indivisible (b) it is destructible in some cases (c) it is the smallest part of a substance that takes part in a reaction (d) it is made up of protons, neutrons and electrons
- Which of the following is a liquid at room temperature? (a) copper (b) gold (c) mercury (d) silver
- How can you separate a mixture of iron filings and sulphur powder? (a) distillation (b) chromatography (c) magnetization (d) evaporation
- What is the atomicity of neon? (a) monoatomic (b) diatomic (c) triatomic (d) polyatomic
- Give any two postulates of the Dalton’s atomic theory.
- (a) Differentiate between an atom and a molecule. (b) How does an atom become an ion?
Who Eratosthenes Was
Eratosthenes was a talented mathematician and geographer as well as an astronomer. He made several other important contributions to science. Eratosthenes devised a system of latitude and longitude, and a calendar that included leap years. He invented the armillary sphere, a mechanical device used by early astronomers to demonstrate and predict the apparent motions of the stars in the sky. He also compiled a star catalog that included 675 stars. His measurement of the circumference of Earth was highly respected in his day, and set the standard for many years thereafter. He may have also measured the distances from Earth to both the Moon and to the Sun, but the historical accounts of both deeds are, unfortunately, rather cryptic. A crater on Earth's Moon is named after Eratosthenes.
In 240 B.C., the Greek astronomer Eratosthenes made the first good measurement of the size of Earth. By noting the angles of shadows in two cities on the Summer Solstice, and by performing the right calculations using his knowledge of geometry and the distance between the cities, Eratosthenes was able to make a remarkably accurate calculation of the circumference of Earth.

Eratosthenes lived in the city of Alexandria, near the mouth of the Nile River by the Mediterranean coast, in northern Egypt. He knew that on a certain day each year, the Summer Solstice, in the town of Syene in southern Egypt, there was no shadow at the bottom of a well. He realized that this meant the Sun was directly overhead in Syene at noon on that day each year. Eratosthenes knew that the Sun was never directly overhead, even on the Summer Solstice, in his home city of Alexandria, which is further north than Syene. He realized that he could determine how far away from directly overhead the Sun was in Alexandria by measuring the angle formed by a shadow from a vertical object. He measured the length of the shadow of a tall tower in Alexandria, and used simple geometry to calculate the angle between the shadow and the vertical tower. This angle turned out to be about 7.2 degrees.

Next, Eratosthenes used a bit more geometry to reason that the shadow's angle would be the same as the angle between Alexandria and Syene as measured from the Earth's center. Conveniently, 7.2 degrees is 1/50th of a full circle (50 x 7.2° = 360°). Eratosthenes understood that if he could determine the distance between Alexandria and Syene, he would merely have to multiply that distance by 50 to find the circumference of Earth! Eratosthenes had the distance between the two cities measured. His records show that the distance was found to be 5,000 stadia. The stadion or stade (plural = stadia) was a common distance unit of the time.
Unfortunately, there was no universal, standard length for the stadion, so we don't know exactly which version Eratosthenes used, and therefore we are not exactly sure how accurate his solution was. If he used the Greek stadion of approximately 155 meters, he may have been correct to within less than 1%, a remarkable accomplishment! Or, if he actually used the Attic stadion of approximately 185 meters, he may have been off by about 16%. The actual polar circumference of Earth is just a bit over 40 thousand km (about 24,860 miles).
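The arithmetic is simple enough to check directly. Below is a minimal Python sketch of the calculation; the 5,000-stadia distance, the 7.2-degree angle, and the two candidate stadion lengths come from the text above, while the 40,008 km modern polar circumference is an assumed reference value and the function and variable names are our own:

```python
def circumference_in_stadia(shadow_angle_deg: float, distance_stadia: float) -> float:
    """The shadow angle is the fraction of a full circle subtended by the
    Alexandria-Syene arc, so scale the measured distance accordingly."""
    fraction_of_circle = shadow_angle_deg / 360.0  # 7.2 degrees -> 1/50
    return distance_stadia / fraction_of_circle

stadia = circumference_in_stadia(7.2, 5000)  # 250,000 stadia

# Convert using two candidate stadion lengths (meters per stadion).
for name, meters in [("Greek stadion (~155 m)", 155), ("Attic stadion (~185 m)", 185)]:
    km = stadia * meters / 1000
    error_pct = abs(km - 40_008) / 40_008 * 100  # vs. the modern polar value
    print(f"{name}: {km:,.0f} km ({error_pct:.1f}% off)")
```

Because nobody knows which stadion Eratosthenes used, the percent error you compute depends entirely on the conversion factor you assume, which is why different histories quote different accuracies for his result.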
Eratosthenes's only tools were sticks, eyes, feet and brains; plus a zest for experiment. With those tools he correctly deduced the circumference of the Earth, to high precision, with an error of only a few percent. That's pretty good figuring for 2200 years ago.
- Carl Sagan - |
To provide for and promote the protection and conservation of West Virginia's soil, land, water and related resources for the health, safety and general welfare of the state's citizens.
In the early 1930’s, the nation was experiencing an unparalleled ecological disaster known as the Dust Bowl. Following a severe and sustained drought in the Great Plains, the region’s soil began to erode and blow away. This created enormous black dust storms that blotted out the sun and swallowed the countryside. Thousands known as "dust refugees" fled the area to seek better lives.
On Capitol Hill, while testifying about the erosion problem, soil scientist Hugh Hammond Bennett threw back the curtains to reveal a sky blackened by dust. Bennett’s testimony moved Congress to unanimously pass legislation declaring soil and water conservation a national policy and priority.
Bennett would found and head the Soil Conservation Service, now known as the Natural Resources Conservation Service. Since about three-fourths of the continental United States is privately owned, Congress realized that only active, voluntary support from landowners would guarantee the success of conservation work on private land.
In 1937, President Franklin D. Roosevelt wrote the governors of all the states recommending legislation that would allow local landowners to form soil conservation districts. West Virginia’s Soil Conservation Committee was created in 1939. Its functions and programs were to conserve soil and retard erosion.
By referendum, the first conservation district organized in West Virginia was the West Fork Conservation District on February 2, 1940. The Eastern Panhandle and Greenbrier Valley Conservation Districts followed on February 3, 1940. Today, West Virginia has 14 Conservation Districts, each consisting of one to six counties.
In 2002, the state Legislature changed the name of the "Soil Conservation Committee" to "State Conservation Committee" to show that the committee’s responsibilities went beyond soil to all natural resources such as air and water. The State Conservation Committee serves as the governing body of the WVCA. |
Calculating Displacement in a Physics Problem
In physics, you find displacement by calculating the distance between an object’s initial position and its final position. Say, for example, that you have a fine new golf ball that’s prone to rolling around. This particular golf ball likes to roll around on top of a large measuring stick. You place the golf ball at the 0 position on the measuring stick, as shown in the figure below, diagram A.
The golf ball rolls over to a new point, 3 meters to the right, as you see in the figure, diagram B. The golf ball has moved, so displacement has taken place. In this case, the displacement is just 3 meters to the right. Its initial position was 0 meters, and its final position is at +3 meters. The displacement is 3 meters.
In physics terms, you often see displacement referred to as the variable s.
Scientists, being who they are, like to go into even more detail. You often see the term si, which describes initial position (the i stands for initial). And you may see the term sf used to describe final position.
In these terms, moving from diagram A to diagram B in the figure, si is at the 0-meter mark and sf is at +3 meters. The displacement, s, equals the final position minus the initial position:

s = sf - si = +3 meters - 0 meters = +3 meters
Displacements don’t have to be positive; they can be zero or negative as well. If the positive direction is to the right, then a negative displacement means that the object has moved to the left.
In diagram C, the restless golf ball has moved to a new location, which is measured as –4 meters on the measuring stick. The displacement is given by the difference between the final and initial positions. If you want to know the displacement of the ball from its position in diagram B, take the initial position of the ball to be si = 3 meters; then the displacement is given by

s = sf - si = -4 meters - 3 meters = -7 meters
When working on physics problems, you can choose to place the origin of your position-measuring system wherever is convenient. The measurement of the position of an object depends on where you choose to place your origin; however, displacement from an initial position si to a final position sf does not depend on the position of the origin because the displacement depends only on the difference between the positions, not the positions themselves.
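The sign convention above is easy to check with a few lines of code. This is an illustrative sketch, not part of the original article; the function name is made up for the example:

```python
def displacement(s_initial, s_final):
    """Displacement s = sf - si: final position minus initial position (meters)."""
    return s_final - s_initial

# Diagram A to B: the ball rolls from 0 m to +3 m
print(displacement(0, 3))   # 3

# Diagram B to C: the ball rolls from +3 m to -4 m
print(displacement(3, -4))  # -7 (negative, so the ball moved to the left)
```

Notice that shifting the origin changes both si and sf by the same amount, so the displacement itself is unchanged.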
Nuclear Fusion: The Hope for Our Energy Future
The fusion process is the reaction that powers the sun. On the sun, in a series of nuclear reactions, four hydrogen-1 nuclei are fused into a helium-4 nucleus with the release of a tremendous amount of energy.
Here on earth, two other isotopes of hydrogen are used: H-2, called deuterium, and H-3, called tritium. Deuterium is a minor isotope of hydrogen, but it’s still relatively abundant. Tritium doesn’t occur naturally, but it can easily be produced by bombarding deuterium with a neutron.
The fusion reaction of deuterium and tritium is shown in the following equation:

H-2 + H-3 → He-4 + neutron + energy
The first demonstration of nuclear fusion — the hydrogen bomb — was conducted by the military. A hydrogen bomb is approximately 1,000 times as powerful as an ordinary atomic bomb.
The isotopes of hydrogen needed for the hydrogen bomb fusion reaction were placed around an ordinary fission bomb. The explosion of the fission bomb released the energy needed to provide the activation energy (the energy necessary to initiate, or start, the reaction) for the fusion process.
Control issues with nuclear fusion
The goal of scientists for the last 50 years has been the controlled release of energy from a fusion reaction. If the energy from a fusion reaction can be released slowly, it can be used to produce electricity. It will provide an unlimited supply of energy that has no wastes to deal with or contaminants to harm the atmosphere — simply non-polluting helium.
But achieving this goal requires overcoming three problems:
The fusion process requires an extremely high activation energy. Heat is used to provide the energy, but it takes a lot of heat to start the reaction. Scientists estimate that the sample of hydrogen isotopes must be heated to approximately 40,000,000 K.
K represents the Kelvin temperature scale. To get the Kelvin temperature, you add 273 to the Celsius temperature.
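The conversion in the note above can be sketched in a couple of lines of code (an illustration only, not from the original text; the function name is invented for the example):

```python
def celsius_to_kelvin(celsius):
    """Approximate conversion: add 273 to the Celsius temperature (273.15 to be exact)."""
    return celsius + 273

print(celsius_to_kelvin(27))        # 300
print(celsius_to_kelvin(39999727))  # 40000000 -- roughly the temperature fusion requires
```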
Now 40,000,000 K is hotter than the sun! At this temperature, the electrons have long since left the building; all that’s left is a positively charged plasma, bare nuclei heated to a tremendously high temperature. Presently, scientists are trying to heat samples to this temperature in two ways: with magnetic fields and with lasers. Neither has yet achieved the necessary temperature.
Time is the second problem scientists must overcome to achieve the controlled release of energy from fusion reactions. The charged nuclei must be held together close enough and long enough for the fusion reaction to start. Scientists estimate that the plasma needs to be held together at 40,000,000 K for about one second.
Containment is the major problem facing fusion research. At 40,000,000 K, everything is a gas. The best ceramics developed for the space program would vaporize when exposed to this temperature.
Because the plasma has a charge, magnetic fields can be used to contain it — like a magnetic bottle. But if the bottle leaks, the reaction won’t take place. And scientists have yet to create a magnetic field that won’t allow the plasma to leak.
Using lasers to zap the hydrogen isotope mixture and provide the necessary energy bypasses the containment problem. But scientists have not figured out how to protect the lasers themselves from the fusion reaction.
What the future holds for nuclear fusion
Science may be only a few years away from showing that fusion can work by reaching the break-even point, where we get out more energy than we put in. It will then be a number of years before a functioning fusion reactor is developed. But scientists are optimistic that controlled fusion power will be achieved. The rewards are great: an unlimited source of nonpolluting energy.
An interesting by-product of fusion research is the fusion torch concept. With this idea, the fusion plasma, which must be cooled in order to produce steam, is used to incinerate garbage and solid wastes. Then the individual atoms and small molecules that are produced are collected and used as raw materials for industry. It seems like an ideal way to close the loop between waste and raw materials. Time will tell if this concept will eventually make it into practice.
1. What was the date of publication of Utopia?
2. What explorations had created a new world picture in the quarter of a century prior to the composition of Utopia? How did those explorations affect the book?
3. Who was Erasmus and what was his connection with More?
4. Who was Peter Giles and what was his role in Utopia?
5. Who was Raphael Hythloday and what was his role in Utopia?
6. Who was Cardinal Morton and how did he figure in Utopia?
7. Cite several conditions, laws, and customs in England that were criticized by Hythloday.
8. Do you believe that More was in full agreement with those criticisms?
9. How did Hythloday represent the methods and attitudes of the inner circle of government officials?
10. What is your understanding of the geographical location of Utopia?
11. Point out several features or details that contribute to the effect of verisimilitude in the account of the island of Utopia. Cite specific figures and other details.
12. What are some features of the location and layout of the cities that exhibit careful, logical planning?
13. Explain the various communal aspects of the Utopian plan of society.
14. How did the Utopians use gold and silver? What were their reasons for that practice?
15. How do the Utopians treat fools (fools in the sense of Shakespeare's fools or clowns)?
16. Does the manner of dress among the Utopians appeal to you? What do you think caused More to propose such measures? Do you recall any precedents for that manner of dress?
17. What is the attitude of the Utopians toward the use of cosmetics?
18. What do you learn of family life in Utopia from a thorough examination of the work — of marriage, divorce, child-rearing, housing, meals, of authority and discipline, and of confession of sins?
19. There are certain works of utopian literature that dispense with marriage and family life. Name some and explain their practices.
20. Explain the principal features of the Utopian legal system. Do you think that More, a lawyer himself, meant to recommend the Utopian legal system in preference to the system then current in England?
21. How did the religious doctrines and practices of the Utopians differ from those of Roman Catholicism?
22. What was More's relation to the Protestant Reformation?
23. What was the date of Luther's Ninety-five Theses, which marked the outbreak of the Reformation?
24. Give an account in some detail of the Utopians' method of conducting warfare.
25. What features of Utopia reveal the spirit and substance of Renaissance humanism?
26. In what order were the different sections of the book written? Explain.
27. What events in More's career throw light on his book and influence its interpretation?
28. To what extent was More in agreement with the system of the Utopians as recommended by Hythloday? Break down your answer in order to deal with various topics separately.
29. Discuss what seem to you the pros and cons of life in the Utopian commonwealth.
30. Give a brief history of the utopian tradition in literature, naming authors and titles, indicating approximate dates, and noting distinguishing features of some of the more important documents.
31. Give an account of some of the living communities that were established on utopian principles — naming, locating, and describing a number of the prominent ones.
Basketball Athlete Nutrition
Proper athletic nutrition helps prevent injury and supplies vital energy for training and competition.
Proper athlete nutrition is often overlooked in amateur sports. The philosophy of WBA is to educate players and their families about the importance of proper athlete nutrition to support a growing child who is physically active. As an athlete matures, the physical and mental demands increase. It is important to match these new demands with a proper nutritional profile for optimal athletic and mental function and performance.
Athlete nutrition is a little different from nutrition for less active individuals. Basketball players and other athletes require more calories and carbohydrates than “average” individuals. This is because more energy is burned and more cell recovery is needed for a more physically active individual. Basketball players need to maintain their mental acuity to remember plays and “read” the game at all times. For this reason, a focus on brain foods, especially the omega 3 fatty acid DHA, needs to be a daily part of their dietary intake.
The game of basketball requires the combination of endurance, speed, power, agility, specific basketball skills and mental acuity. By incorporating proper performance nutrition, players can maximize their training and competitive abilities.
Fatigue, both physical and mental, is one of the most crucial opponents to the basketball player. Therefore, one of the goals of basketball performance nutrition is to reduce the onset of fatigue in practice and game situations. Delaying fatigue not only gives players an advantage over their competitors but it also helps to prevent injury. Many injuries occur in the last few minutes of games or practices, when players are physically drained and mentally tired. So, maintaining high energy levels throughout games and practices gives a player a distinct competitive edge.
5 Tips for Basketball Nutrition
- 1. Carbohydrates
Carbohydrates supply the essential energy required during physical and mental activities. The two main forms of carbohydrates are (a) sugars, such as fructose, glucose, and lactose, and (b) starches, which are found in foods such as starchy vegetables, grains, rice, breads, and cereals. The body breaks down most carbohydrates into the sugar glucose, which is absorbed into the bloodstream. As our glucose levels rise in the body, the pancreas releases a hormone called insulin. Insulin is needed to move sugar from the blood into the cells, where it can be used as a source of energy.
Carbohydrates are stored as glycogen in the muscles. The muscles need to have sufficient glycogen during physical exercise as this is the fuel that is burned. It is very important that basketball players eat sufficient, good quality carbohydrates before training and games to ensure that the energy is there when needed.
A good rule of thumb is to have your carb intake be about 60% of your diet. Carbohydrates are the first source of energy used for physical activities such as basketball. Carb deficiencies can lead to fatigue and poor performance in games or training.
Carbohydrates should be the largest source of calories and included in every meal. Carbs are converted to glycogen, and then back to glucose to fuel your body and give you energy. It is the first and foremost source for any athlete’s energy.
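The 60% rule of thumb can be turned into grams with a short calculation. This sketch is illustrative, not from the article; it assumes the standard value of 4 kcal per gram of carbohydrate, which the article does not state:

```python
def carb_grams_per_day(daily_calories, carb_fraction=0.6, kcal_per_gram=4):
    """Grams of carbohydrate needed to supply ~60% of daily calories.

    Assumes the standard 4 kcal per gram of carbohydrate
    (an outside value, not stated in the article itself).
    """
    return daily_calories * carb_fraction / kcal_per_gram

# A player eating 2,500 kcal/day would aim for roughly:
print(carb_grams_per_day(2500))  # 375.0 grams of carbs
```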
Carbs are especially important shortly before and after physical activity. Your larger meal 3-4 hours before an activity should include “starchy” carbohydrates such as breads and pastas and be between 500-1,000 calories. Meals with higher fat percentages should be avoided at this time.
You should also take in some carbs 30-60 minutes before an activity in the form of a smaller meal or snack. This snack should be primarily carbohydrates, but not include large amounts of gaseous foods like raw fruits and vegetables. Total carbs should be between 25 grams and 100 grams, depending on your body makeup and the duration of the exercise.
It is very important to replenish your carbohydrates within 15-30 minutes after your activity. This depends on your body make-up, how many carbs you took in during your game/training, and the length of the event.
If an athlete’s carbohydrate supplies become depleted, he or she experiences fatigue, which means decreased speed, quickness, reaction time, and endurance, as well as declining decision-making abilities and mental focus. Therefore, athletes need to come to games and practices fully fueled with adequate carbohydrate. Carbohydrates can come from many different foods, but quality sources mainly come from the fruit, vegetable, and grain food groups. Some examples of good carbohydrate foods are bananas, oranges, berries, dried fruits, carrots, peas, pastas, baked potatoes, sweet potatoes, whole grain breads, granola bars, and oatmeal.
- 2. Protein
Proteins are often called the building blocks of the body. Protein is made of structures called amino acids that combine in various ways to make muscles, bone, tendons, skin, hair, and other tissues.
It is important to eat protein regularly because it isn’t easily stored by the body. Various foods supply protein in varying amounts, with complete proteins (those containing all the essential amino acids) coming mostly from animal products such as meat, fish, and eggs, and incomplete proteins (lacking one or more essential amino acids) coming from sources like vegetables, fruit, and nuts. Vegetarian athletes may have trouble getting adequate protein if they aren’t aware of how to combine foods. They need to combine their vegetarian sources of amino acids in the right way to ensure they are getting complete protein intake.
Protein Needs for Athletes
Athletes need protein primarily to repair and rebuild muscle that is broken down during exercise and to help optimize their carbohydrate storage in the form of glycogen. Protein isn’t an ideal source of fuel for exercise; carbohydrates are. Proteins can be used for fuel as a backup when the diet lacks adequate carbohydrate. This is detrimental for the athlete, though, because if protein is used for fuel, there isn’t enough available to repair and rebuild body tissues, including muscle.
Basketball players need 0.6 to 0.8 grams of protein per pound of body weight per day (1.4 to 1.7 g/kg/day). For instance, a 100 lb (45 kg) boy would need to eat 0.6 x 100 lbs = 60 g of protein, or 45 kg x 1.4 = 63 grams of protein per day. Good sources of protein are chicken, turkey, beef, cheese, yogurt, eggs, nuts, and soy.
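The protein guideline above is just a multiplication, which the following sketch illustrates (the function name is invented for the example; the 0.6-0.8 g/lb range is the article's):

```python
def protein_grams_per_day(weight_lb, g_per_lb=0.6):
    """Daily protein target using the article's 0.6-0.8 g per pound guideline.

    Rounded to 0.1 g to keep the floating-point output tidy.
    """
    return round(weight_lb * g_per_lb, 1)

print(protein_grams_per_day(100))       # 60.0 (low end of the range)
print(protein_grams_per_day(100, 0.8))  # 80.0 (high end of the range)
```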
- 3. Fats
Basketball players need to ensure that they consume enough healthy fats. Fats are important as they make up every cell in the body. Fats are required for brain function and mental acuity, both of which are vitally important for the basketball player. Essential fatty acids are necessary for cellular function and for the brain’s thinking capabilities. The omega 3 fatty acid called DHA, which stands for docosahexaenoic acid, is the most important omega 3 fatty acid for the brain and memory. This omega 3 is not made by the body and needs to be supplied by the diet. Unfortunately, the best source of this omega 3 fatty acid is fish, and fish have become so toxic that they are no longer the best way to supplement this fatty acid. DHA can be obtained from Omega 3 eggs and from high quality fish oil supplements that have concentrated DHA, like VitaFish Oil from VitaTree Nutritionals. (www.vitatree.com)
Fats are actually used before protein as an energy source while engaged in physical activity. Many athletes are not getting enough fat from their diets.
Too much fat is not good, especially before a game or training because it can cause sluggishness and muscle cramps. Try limiting high fat foods before and during training.
Too little fat in your overall diet is detrimental as well. Since it is your body’s second source of fuel behind carbohydrates, a deficiency in fat can cause you to fatigue more quickly. After “prolonged” outputs of energy, fat (the “good” kind/unsaturated) is the primary source of fuel for your body. Your diet should consist of about 20% fat. As with protein, it should be included in all meals, but not overdone.
Basketball players need at least 0.45 grams of fat per pound of body weight per day (1 g/kg/day). For example, a 100 pound player would need 0.45 x 100 lbs = 45 g of fat per day, or 45 kg x 1 g = 45 g of fat per day. Choose heart-healthy fats like coconut oil, olive oil, and nuts.
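The fat minimum works the same way as the protein target: multiply body weight by the per-pound figure. An illustrative sketch (function name invented; the 0.45 g/lb figure is the article's):

```python
def fat_grams_per_day(weight_lb, g_per_lb=0.45):
    """Minimum daily fat: at least 0.45 g per pound of body weight.

    Rounded to 0.1 g to keep the floating-point output tidy.
    """
    return round(weight_lb * g_per_lb, 1)

print(fat_grams_per_day(100))  # 45.0
print(fat_grams_per_day(150))  # 67.5
```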
- 4. Hydration
Maintaining adequate hydration before, during, and after practices and games is very important for the basketball athlete. Dehydration can happen to a player before he/she realizes the effects. Symptoms such as thirst, fatigue, headaches, and muscle cramps are often felt after it’s too late.
A great way to monitor your hydration levels and prevent dehydration is to check the color of your urine. Light-colored, clear, odorless urine throughout the day means an athlete is probably well hydrated. Strong, dark yellow urine suggests dehydration and indicates that athletes should begin drinking water until well hydrated. Fluid losses of only 1-2% of body weight can negatively affect performance and cause dehydration. Many athletes can easily lose this much fluid in just an hour of exercise.
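The 1-2% threshold mentioned above translates into surprisingly small amounts of fluid, as this sketch shows (illustrative only; the function name is invented for the example):

```python
def performance_loss_threshold_lb(weight_lb, fraction=0.02):
    """Fluid loss (lb of body weight) at which performance can suffer: 1-2%."""
    return weight_lb * fraction

# A 150 lb player may be impaired after losing only 1.5-3 lb of fluid:
print(performance_loss_threshold_lb(150, 0.01))  # 1.5
print(performance_loss_threshold_lb(150, 0.02))  # 3.0
```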
- 5. Weight gain
Basketball players are normally tall, slim individuals and may be looking for ways to increase muscle mass as they reach the teen years. There are certainly safer ways to “bulk up” and to build lean muscle mass. One of the keys is adequate protein and calories. Athletes usually expend more energy than less active individuals and need to consume a higher number of calories in a day.
Weight Gain the Healthy Way
Most basketball players are tall and slender, and are looking to add muscular bodyweight. In order to gain weight, you must consume more calories than you expend on a daily basis. This means if you are looking to put on weight, you must eat, eat, and eat! Now for the select few looking to lose weight (i.e. reduce body fat), they must do the opposite — consume fewer calories than they expend. This is done by controlling their portion sizes.
A great way to supplement extra calories and protein is to drink protein shakes during the day. The best time is after a workout or practice, to refuel the muscles. We like to recommend vegetarian sources of protein powders or whey proteins; however, be sure you do not have a dairy sensitivity if you choose whey (made from cow’s milk). Be sure to check the labels on protein powders carefully to confirm that they do not contain any artificial sweeteners, colours, or synthetic vitamins and minerals.
Fruity Weight Gainer Shake
- · 1 cup of frozen strawberries
- · 1 cup of milk (soy or almond milk is preferred)
- · 1 large banana
- · 1 cup of yogurt
- · 2-3 scoops of protein powder
- · Ice
Blend in a blender until smooth.
Chocolate Peanut Butter Weight Gainer Shake
- · 1 large banana
- · 1 1/2 cups of milk (soy or almond milk is preferred)
- · 1 tbsp chocolate milk powder
- · 2 tbsp peanut butter (smooth or crunchy)
- · 2-3 scoops protein powder
- · Ice
Blend in a blender until smooth.
Top 10 Foods for Basketballers!
- 1. Eggs- Provide complete protein and a great source of energy. It is a myth that eggs will raise your cholesterol levels. Eggs should have a dark yellow yolk for the most nutrient density. Omega 3 eggs are healthy for the DHA component, fuel for the brain and spinal cord; just be sure the integrity of the egg’s nutrition is not compromised by the added DHA, meaning the yolk should still be a dark yellow.
- 2. Oatmeal- Oatmeal is a great carbohydrate and fiber source for basketballers. It also contains a small amount of essential fatty acids. Oatmeal should be unsweetened and the kind you cook on the stove, not the instant kind that you add hot water to. The instant type of oatmeal contains a lot of sugar and food additives that are not considered healthy.
- 3. Bananas- Bananas supply instant energy and fiber and are a great source of potassium for refuelling tired muscles and to help prevent cramping. Potassium is also important for cardiovascular health. Bananas can be eaten on their own or blended into a protein shake.
- 4. Sweet potatoes- Contain healthy carbohydrates and vitamin A. Sweet potatoes are a good source of fiber and carbohydrates that do not spike insulin levels abruptly. They also contain important anti-oxidants to combat free radical damage brought on by exercise.
- 5. Soy/almond milk- These are great sources of protein, fat, and carbohydrate and can be used in protein shakes.
- 6. Broccoli- Contains important anti-oxidants that help counteract the free radicals that can damage cells, brought on by exercise. Broccoli also contains vitamins, minerals and fiber, important for proper digestion and elimination.
- 7. Blueberries/blackberries- Contain anti-oxidants due to their highly dark pigmented skins. These berries are very nutritionally dense, meaning that the nutrients they hold are very high compared to other fruits. Berries in general are high on the ORAC scale, meaning their ability to quench free radicals is very good.
- 8. Chicken- Chicken is a good source of lean protein for basketball players. Protein is needed to build muscle mass and provide longer term energy.
- 9. Whole grain pasta/breads- Whole-wheat pasta is best for athletes when they are loading on carbs one or two days before their event. Whole grain pastas and breads will provide good carbs, fiber, and a little protein (but you should add a protein to your pasta with something like chicken).
10. Avocado- Avocados contain healthy fats and Vitamin E. Vitamin E is an important anti-oxidant, quenching free radicals and protecting the heart and cardiovascular system.
It is one thing to read about the Common Core, but it can be a challenge to figure out where to begin.
Organizing for the Common Core Teacher Toolkits are designed to provide educators with the structure and tools to facilitate implementing Common Core Standards (CCS) in their classroom.
* * I encourage you to DOWNLOAD the FREE PREVIEW to see what is included * *
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
WANT TO KNOW MORE?
There are 5 Essential Components to the Toolkit:
LABELS:
There are labels you can print on Avery 5160 sheets to organize your files. One type lists the main topic, such as “Communication Skills" or "Questioning". The matching label has the standard listed and student expectations.
TRACKING FORMS:
Each CCS has been unpacked into teachable objectives. The Tracking Forms are useful for keeping track of which materials or resources you have used to address each standard. They are also helpful for curriculum mapping or identifying gaps in your current program.
PROGRESS MONITORING FORMS:
It is important to be able to monitor how your students are doing with the objectives. Simply print a copy for each of your students and use your school’s evaluation codes to keep track of their mastery level.
I CAN CARDS:
Each standard has been written in kid-friendly language to explain the goal of the lesson.
CCS LISTS:
The natural question on every teacher’s mind is, “What am I supposed to teach?” Each CCS list was designed to be a handy resource to answer that question. Print and save the lists with your lesson planning materials for quick reference.
Also available as a BUNDLED TOOLKIT for 2nd grade Math, Reading, Language, Writing, Speaking/Listening and SAVE 28%
A future warming of the Southern Ocean caused by rising greenhouse gas concentrations in the atmosphere may severely disrupt the stability of the West Antarctic Ice Sheet. The result would be a rise in the global sea level by several metres. A collapse of the West Antarctic Ice Sheet may have occurred during the last interglacial period 125,000 years ago, a period when the polar surface temperature was around two degrees Celsius higher than today. This is the result of a series of model simulations which the researchers of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) have published online in the journal Geophysical Research Letters.
The Antarctic and Greenland are covered by ice sheets, which together store more than two thirds of the world's freshwater. As temperatures rise, ice masses melt; in consequence, the global sea level rises and threatens coastal regions. According to scientific findings, the Antarctic already contributes 0.4 millimetres to the annual sea level rise. However, the most recent world climate assessment report (IPCC 2013) pointed out that the development of the ice masses in the Antarctic is not yet sufficiently understood. Climate modellers of the Alfred Wegener Institute have therefore analysed the changes to the Antarctic Ice Sheet in the last interglacial period and applied their findings to future projections.
"Both for the last interglacial period, around 125,000 years ago, and for the future, our study identifies critical temperature limits in the Southern Ocean: if the ocean temperature rises by more than two degrees Celsius compared with today, the marine-based West Antarctic Ice Sheet will be irreversibly lost. This will then lead to a significant Antarctic contribution to the sea level rise of some three to five metres", explains AWI climate scientist Johannes Sutter. This rise, however, will only occur if climate change continues as it has up to now. The researchers make these assessments based on model simulations.
"Given a business-as-usual scenario of global warming, the collapse of the West Antarctic could proceed very rapidly and the West Antarctic ice masses could completely disappear within the next 1,000 years", says Johannes Sutter, the study's main author, who has just completed his doctoral thesis on this topic. "The core objective of the study is to understand the dynamics of the West Antarctic during the last interglacial period and the associated rise in sea level. It has been a mystery until now how the estimated sea level rise of a total of about seven metres came about during the last interglacial period. Because other studies indicate that Greenland alone could not have done it", Prof Gerrit Lohmann, the head of the research project, adds.
The new findings on the dynamics of the ice sheet allow conclusions to be drawn about how the ice sheet might behave in the wake of global warming. According to model calculations, the ice masses shrink in two phases. The first phase leads to a retreat of the ice shelves, ice masses that float on the ocean in the coastal area of the Antarctic, stabilising the major glacier systems of the West Antarctic. If the ice shelves are lost, the ice masses and glaciers of the hinterland accelerate and the ice flow into the ocean increases. As a result, the sea level rises and the grounding line retreats, causing further flotation of the grounded ice masses and a progressive acceleration and retreat of the glaciers. The glaciers achieve a stable intermediate state only once, put simply, a mountain ridge under the ice temporarily slows the retreat of the ice masses.
If the ocean temperature continues to rise or if the grounding line of the inland ice reaches a steeply ascending subsurface, then the glaciers will continue to retreat even if the initial stable intermediate state has been reached. Ultimately, this leads to a complete collapse of the West Antarctic Ice Sheet. "Two maxima are also apparent in the reconstructions of the sea level rise in the last interglacial period. The behaviour of the West Antarctic in our newly developed model could be the mechanistic explanation for this", says a delighted Johannes Sutter.
The climate scientists used two models in their study. A climate model that includes various Earth system components such as atmosphere, oceans and vegetation, and a dynamic ice sheet model that includes all basic components of an ice sheet (floating ice shelves, grounded inland ice on the subsurface, the movement of the grounding line). Two different simulations were used with the climate model for the last interglacial period to feed the ice sheet model with all the necessary climate information.
"One reason for the considerable uncertainties when it comes to projecting the development of the sea level is that the ice sheet does not simply rest on the continent in steady state, but rather can be subject to dramatic changes", according to the AWI climate scientists, emphasising the challenges involved in making good estimates. "Some feedback processes, such as between the ice shelf areas and the ocean underneath, have not yet been incorporated into the climate models. We at the AWI as well as other international groups are working on this full steam." Improving our understanding of the systematic interaction between climate and ice sheets is crucial in order to answer one of the central questions of current climate research and for future generations: How steeply and, above all, how quickly can the sea level rise in the future?
Johannes Sutter, Paul Gierz, Klaus Grosfeld, Malte Thoma, Gerrit Lohmann: Ocean temperature thresholds for Last Interglacial West Antarctic Ice Sheet collapse. Geophysical Research Letters 2016. DOI: 10.1002/2016GL067818
The Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) conducts research in the Arctic, Antarctic and oceans of the high and mid-latitudes. It coordinates polar research in Germany and provides major infrastructure to the international scientific community, such as the research icebreaker Polarstern and stations in the Arctic and Antarctica. The Alfred Wegener Institute is one of the 18 research centres of the Helmholtz Association, the largest scientific organisation in Germany.
Why Play = Learning
Kathy Hirsh-Pasek, PhD, Roberta Michnick Golinkoff, PhD
Temple University, USA, University of Delaware, USA
Our children from their earliest years must take part in all the more lawful forms of play, for if they are not surrounded with such an atmosphere they can never grow up to be well conducted and virtuous citizens.
--Plato, The Republic 1
The study of play has a long history. From Plato to Kant, from Froebel to Piaget, philosophers, historians, biologists, psychologists, and educators have studied this ubiquitous behavior to understand how and why we play. Even animals play. This fact alone leads researchers like Robert Fagan,2 a leader in the study of animal play, to speculate that play must have some adaptive value given the sheer perilousness and energy cost to growing individuals. Researchers suggest that play is a central ingredient in learning, allowing children to imitate adult behaviors, practice motor skills, process emotional events, and learn much about their world. One thing play is not, is frivolous. Recent research confirms what Piaget3 always knew, that “play is the work of childhood.” Both free play and guided play are essential for the development of academic skills.4,5
Despite the many treatises on play, scholars still find the term elusive. Like Wittgenstein’s definition of game, the word play conjures up multiple definitions. Researchers generally discuss four types of play although in practice these often merge: (a) Object play, the ways in which children explore objects, learn about their properties, and morph them to new functions; (b) pretend play (either alone or with others), variously referred to as make-believe, fantasy, symbolic play, socio-dramatic play, or dramatic play, where children experiment with different social roles; (c) physical or rough-and-tumble play, which includes everything from a 6-month-old’s game of peek-a-boo to free play during recess;6 and (d) guided play7 where children actively engage in pleasurable and seemingly spontaneous activities under the subtle direction of adults.
Whether play is with objects, involves fantasy and make believe, or centers on physical activity, researchers generally agree that from the child’s point of view, eight features characterize ordinary play. Play is (a) pleasurable and enjoyable, (b) has no extrinsic goals, (c) is spontaneous, (d) involves active engagement, (e) is generally engrossing, (f) often has a private reality, (g) is nonliteral, and (h) can contain a certain element of make-believe.8,5,9 Even these criteria for judging play have some fuzzy boundaries.
Key Research Questions
A looming question is whether free play and guided play promote learning or whether they are simply a matter of releasing pent-up energy for young children. And, if play is related to learning, is one form of play more advantageous than another? These issues have dominated the research landscape in the past decade.
The findings suggest that both free play and guided play are indeed linked to social and academic development. For example, Pellegrini10 finds that elementary-aged children who enjoy free play during recess return to the classroom more attentive to their work. These children, especially boys, do better in reading and mathematics than children who do not have recess. Physical play has also been associated with areas of brain development (the frontal lobes) that are responsible for behavioral and cognitive control.1 Indeed, a recent study used guided play throughout a school day to help preschoolers learn how to hold back impulsive behaviors and responses. The so-called executive function skills (attention, problem solving, and inhibition) nurtured in the guided play conditions were related to improvements in mathematics and reading.11
Recent Research on Academic Enhancement Through Play
Academically, then, play is related to reading and math as well as to the important learning processes that feed these competencies. More specifically, there are direct studies connecting play to literacy and language, and to mathematics. By way of example, 4-year-olds’ play—in the form of rhyming games, making shopping lists, and “reading” story books to stuffed animals—predicts both language and reading readiness.12 Research suggests that children demonstrate their most advanced language skills during play, and that these language skills are strongly related to emergent literacy.13,14 Finally, a review of 12 studies on literacy and play allowed Roskos and Christie15 to conclude that “play provides settings that promote literacy activity, skills, and strategies . . . and can provide opportunities to teach and learn literacy.”
Play and playful learning also supports the burgeoning mathematician. A naturalistic experiment by Seo and Ginsburg16 found that 4- and 5-year-old children build foundational mathematical concepts during free play. Regardless of children’s social class, three categories of mathematical activity were widely prevalent: pattern and shape play (exploration of patterns and spatial forms), magnitude play (statement of magnitude or comparison of two or more items to evaluate relative magnitude) and enumeration play (numerical judgment or quantification). Children’s free play contains the roots of mathematical learning 46% of the time. A recent study by Ramani and Siegler17 demonstrated that guided play in the form of playing a board game like Chutes and Ladders also fostered diverse mathematical tasks among lower income preschoolers. Preschoolers who played the game four times for 15- to 20-minute sessions within a 2-week period were better at numerical magnitude (which is bigger), number line estimation, counting, and numeral identification. Finally, Gelman18 found that even children as young as 2.5 and 3 years of age can demonstrate an understanding of the cardinal counting principle--that the last number counted in a set is the amount the set contains. But this skill is only manifest when children are engaged in a playful task.
Recent Research on Social Enhancement Through Play
Free play and guided play are also important for fostering social competence and confidence as well as for self-regulation, or children’s ability to manage their own behavior and emotions. In free play children learn how to negotiate with others, to take turns, and to manage themselves and others.19, 20, 21, 22, 23, 24, 25, 26, 27 Play is essential for learning how to make friends and how to get along.
Barnett and Storm28 also find that play serves as a means for coping with distress. Indeed, Haight, Black, Jacobsen, and Sheridan29 demonstrated that children who have been traumatized can use pretend play with their mothers to work through their problems. Taken together, social competencies such as friendship and coping serve as building blocks for school readiness and academic learning. Raver23 concluded that “from the last two decades of research, it is unequivocally clear that children’s emotional and behavioral adjustment is important for their chances of early school success.” It is through play that children learn to subordinate desires to social rules, cooperate with others willingly, and engage in socially appropriate behavior—behaviors vital to adjusting well to the demands of school.
The data are clear. Play and guided play offer strong support for academic and social learning. In fact, comparisons of preschools that use playful, child-centered approaches versus less playful, more teacher-directed approaches reveal that children in the child-centered approaches do better in tests of reading, language, writing, and mathematics.30 More engaging and interesting environments for children foster better learning well into elementary school.31,30
Given the findings linking play and learning, it is perhaps shocking that play has been devalued in our culture. Play has become a 4-letter word that often represents the opposite of productive work. A recent report from Elkind32 suggests that in the last few years, 30,000 schools have dropped recess to make more room for academic learning. From 1997 to 2003, children’s time spent in outdoor play fell 50%. In the last 20 years, children have lost over 8 hours of discretionary playtime per week. Why? Because many do not realize that play and learning are inextricably intertwined. When children play they are learning. Children who engage in play and playful learning do better in academic subjects than do their peers who play less. The work cementing this relationship, however, is just beginning to emerge and, at this point, relationships between play and learning are largely based on correlational evidence. In the next decade, we must do more to compare the relationship of play to the learning of academic and social outcomes in controlled and empirical ways.
Play is, thus, central for school readiness and school performance. It might also play an important role in preparing children for the global world beyond the classroom. Business leaders suggest that in the knowledge age, success will depend on children having a toolkit of skills that include collaboration (teamwork, social competence), content (e.g., reading, math, science, history), communication (oral and written), creative innovation, and confidence (taking risks and learning from failure). Each of these “Five Cs” is nurtured in playful learning.
In sum: Play = Learning. As children move from the sandbox to the boardroom, play should be the cornerstone of their education. The research is clear: Playful pedagogy supports social-emotional and academic strengths while instilling a love of learning.
- Panksepp J, Burgdorf J, Turner C, Gordon N. Modeling ADHD-type arousal with unilateral frontal cortex damage in rats and beneficial effects of play therapy. Brain and Cognition 2003;52(1):97-105.
- Angier N. The purpose of playful frolics: Training for adulthood. New York Times October 20, 1992.
- Piaget J. Play, Dreams, and Imitation in Childhood. Gattegno C, Hodgson FN, trans. New York, NY: W. W. Norton & Company; 1962.
- Singer DG, Golinkoff RM, Hirsh-Pasek K, eds. Play = Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. New York, NY: Oxford University Press; 2006.
- Hirsh-Pasek K, Golinkoff RM, Ever DE. Einstein never used flashcards: How our children really learn and why they need to play more and memorize less. Emmaus, PA: Rodale Press; 2003.
- Pellegrini AD, Holmes RM. The role of recess in primary school. In: Singer DG, Golinkoff RM, Hirsh-Pasek K, eds. Play = Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. New York, NY: Oxford University Press; 2006:36-53.
- Hirsh-Pasek K, Golinkoff RM, Berk LE, Singer DG. A Mandate for Playful Learning in Preschool: Presenting the Evidence. New York, NY: Oxford University Press; 2008.
- Garvey C. Play. Cambridge, MA: Harvard University Press; 1977.
- Christie J, Johnsen E. The role of play in social-intellectual development. Review of Educational Research 1983;53(1):93-115.
- Pellegrini AD. Recess: Its Role in Development in Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.
- Diamond A, Barnett WS, Thomas J, Munro S. Preschool program improves cognitive control. Science 2007;318(5855):1387-1388.
- Bergen D, Mauer D. Symbolic play, phonological awareness, and literacy skills at three age levels. In: Roskos KA, Christie JF, eds. Play and Literacy in Early Childhood: Research from Multiple Perspectives. New York, NY: L. Erlbaum; 2000: 45-62.
- Christie JF, Enz B. The effects of literacy play interventions on preschoolers' play patterns and literacy development. Early Education and Development 1992;3(3): 205-220.
- Christie J, Roskos K. Standards, science and the role of play in early literacy education. In: Singer DG, Golinkoff RM, Hirsh-Pasek K, eds. Play=Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. New York, NY: Oxford University Press; 2006:chap 4.
- Roskos K, Christie J. Examining the play-literacy interface: A critical review and future directions. In: Zigler EF, Singer DG, Bishop-Josef SJ, eds. Children's Play: Roots of Reading. 1st ed. Washington, DC: Zero to Three Press; 2004:116.
- Seo KH, Ginsburg HP. What is developmentally appropriate in early childhood mathematics education? Lessons from new research. In: Clements DH, Sarama J, DiBiase AM, eds. Engaging Young Children in Mathematics: Standards for Early Childhood Mathematics Education. Mahwah, NJ: Lawrence Erlbaum Associates; 2003:91–104.
- Ramani GB, Siegler RS. Promoting broad and stable improvements in low-income children’s numerical knowledge through playing number board games. Child Development 2008;79(2):375-394.
- Gelman R. Young natural–number arithmeticians. Current Directions in Psychological Science 2006;15(4):193-197.
- Connolly JA, Doyle AB. Relations of social fantasy play to social competence in preschoolers. Developmental Psychology 1984;20(5):797-806.
- Howes C, Matheson CC. Sequences in the development of competent play with peers: Social and social pretend play. Developmental Psychology 1992;28(5): 961-974.
- Howes C. The Earliest Friendships. In: Bukowski WM, Newcomb AF, Hartup WW, eds. The Company They Keep: Friendships in Childhood and Adolescence. Cambridge, England: Cambridge University Press; 1998:66-86.
- Hughes C, Dunn J. Understanding mind and emotion: Longitudinal associations with mental-state talk between young friends. Developmental Psychology 1998; 34(5):1026-1037.
- Raver CC. Emotions matter: Making the case for the role of young children’s emotional development for early school readiness. SRCD Social Policy Report 2002; XVI(3):3-18.
- Singer DG, Singer JL. Imagination and Play in the Electronic Age. Cambridge, MA: Harvard University Press; 2005.
- Smith PK. Play and peer relations. In: Slater A, Bremner G, eds. An Introduction to Developmental Psychology. Malden, MA: Blackwell Publishing; 2003:311–333.
- Bodrova E, Leong DJ. Tools of the Mind: The Vygotskian Approach to Early Childhood Education. Englewood Cliffs, NJ: Merrill; 1996.
- Krafft KC, Berk LE. Private speech in two preschools: Significance of open-ended activities and make-believe play for verbal self-regulation. Early Childhood Research Quarterly 1998;13(4):637-658.
- Barnett LA, Storm B. Play, pleasure, and pain: The reduction of anxiety through play. Leisure Sciences 1981;4(2):161-175.
- Haight W, Black J, Jacobsen T, Sheridan K. Pretend play and emotion learning in traumatized mothers and children. In: Singer D, Golinkoff RM, Hirsh-Pasek K, eds. Play=Learning: How Play Motivates and Enhances Children’s Cognitive and Social-Emotional Growth. New York, NY: Oxford University Press; 2006:chap.11.
- Lillard A, Else-Quest N. Evaluating Montessori education. Science 2006;313(5795):1893–1894.
- Sternberg RJ, Grigorenko EL. Teaching for Successful Intelligence: to Increase Student Learning and Achievement. 2nd ed. Thousand Oaks, CA: Corwin Press; 2007.
- Elkind D. Can we play? Greater Good Magazine 2008;IV(2):14-17.
How to cite this article:
Hirsh-Pasek K, Golinkoff RM. Why Play = Learning. In: Tremblay RE, Boivin M, Peters RDeV, eds. Smith PK, topic ed. Encyclopedia on Early Childhood Development [online]. https://www.child-encyclopedia.com/play/according-experts/why-play-learning. Published October 2008. Accessed October 22, 2021.
Plants have a variety of botanical names. Many plants were named by plant explorers who travelled the world in times past looking for unusual specimens. Most of the explorers were from England and funded by horticultural entities such as Kew Botanical Gardens.
Some of the explorers named plants that they discovered after themselves or people that they knew. They also named some plants after the regions or countries where they were native. For example, plants from Japan often have japonica in their names.
Some names suggest the characteristics of a plant. For example, quinquefolia is derived from the Latin “quinque” meaning “five,” and “folia” meaning “leaf.”
If the word “medicus” is included in a name, it suggests a belief that the plant has medicinal qualities.
The term “noctiflorus” suggests a night-blooming plant and “robustus” indicates that a plant is robust. At times plant names may change, because either they have been found to be incorrect, or because the same name has been used to refer to more than one type of plant, or new information has affected the classification.
Just like the names of people, plant names are quite diverse.
Saturn's largest satellite (of 60 orbiting Saturn) is the mysterious world called Titan. It has slightly greater diameter (5150 km), density, and mass than Callisto. At 1.22 million kilometers from Saturn, it takes 15.9 days to orbit Saturn. With a density of 1.881X water, Titan is probably half rock, half ice. Careful observations of how the Cassini spacecraft moves in Titan's gravity field have shown that Titan's interior is only partially differentiated (like Callisto). Below the frozen surface may be an internal ocean of liquid water (or water-ammonia mixture) sandwiched between two thick ice layers surrounding a rock-ice mixture core.
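As an aside, the orbital figures quoted above (a distance of 1.22 million kilometers and a period of 15.9 days) are enough to recover Saturn's mass from Newton's form of Kepler's third law, M = 4π²a³/(GP²). The sketch below simply plugs in the numbers from the text; it is an illustration of the method, not part of the original article.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
a = 1.22e9          # Titan's orbital distance from the text, in meters
P = 15.9 * 86400.0  # Titan's orbital period from the text, in seconds

# Newton's form of Kepler's third law: M = 4*pi^2 * a^3 / (G * P^2).
# Titan's own mass is negligible next to Saturn's, so the result is
# effectively the mass of Saturn itself.
M_saturn = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"{M_saturn:.2e} kg")
```

The result comes out close to 5.7 × 10^26 kg, in line with Saturn's accepted mass, which is a nice check that the quoted orbital figures are self-consistent.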
What is special about Titan is that it has a thick atmosphere, with a surface air pressure about 1.5 times the Earth's. Even though Titan's mass is even smaller than Mars', it is so cold (just 95 Kelvin) that it has been able to hold on to its primordial atmosphere. The atmosphere is made of cold molecular nitrogen (95%) and methane (about 5%). Other organic molecules have been detected in its atmosphere. They are formed from solar ultraviolet light and high energy particles accelerated by Saturn's magnetic field interacting with the atmospheric nitrogen and methane. The molecules of nitrogen and methane are split apart (photodissociation) and the atoms recombine to make a thick haze layer of mostly ethane that blocks our view of the surface in visible light. When the droplets of the organic molecules get large enough, they rain down to the surface as very dark deposits of liquid methane and ethane. Methane bubbling up from below the surface is thought to replenish the methane lost in its atmosphere from photodissociation. The picture of Titan, Triton and the Moon at the end of this sub-section shows hazy Titan as viewed in visible wavelengths from the Voyager spacecraft. Unfortunately, Voyager's cameras were precisely tuned to the wrong wavelengths so it could not peer through the haze layer. Therefore, all it saw was an orange fuzz ball.
Titan's brew of organic compounds is probably like the early Earth's chemistry. Its very cold temperatures may then have preserved a record of what the early Earth was like before life formed. This possibility and the possibility of lakes or oceans of methane and ethane hidden under a haze of organic compounds made Titan the special subject of a Saturn orbiter mission to follow-up the Voyager fly-by mission. The Cassini spacecraft is now orbiting Saturn and flying by its numerous moons as part of its mission with special attention focussed on Titan. Using infrared wavelengths and radar, Cassini has been able to peer through the hazy atmosphere. The picture below is a mosaic of 16 images taken at infrared wavelengths coming from the surface and that pass through the atmosphere easily to Cassini's camera.
Cassini managed to sample particles from the uppermost levels of Titan's atmosphere (many hundreds of kilometers above the surface) and found that there were traces of oxygen in Titan's upper atmosphere, probably from the photodissociation of water escaping Enceladus (see below) into hydrogen and oxygen. The presence of trace amounts of oxygen enables a greater variety of chemical compounds to be made with the energy of sunlight than just nitrogen and methane alone. Scientists simulating the conditions of Titan's upper atmosphere by mixing simple compounds together under very low densities and bathing them with various wavelength bands of light have been able to create complex organic compounds. An experiment reported in October 2010 (see also the LPL Spotlight article or the UA news release) produced all five of the nucleotide bases of life (adenine, cytosine, uracil, thymine, and guanine) and two amino acids (glycine and alanine) when a mixture of molecular nitrogen, methane, and carbon monoxide was subjected to microwaves. Early Earth with only trace amounts of oxygen in its atmosphere might have produced the first nucleotides and amino acids in the same way.
Another probe called Huygens, built by the European Space Agency, hitched a ride on Cassini and parachuted down to Titan's surface in January 2005. The color picture below (left) is Huygens's view from the surface of Titan. The probe settled 10 to 15 centimeters into the surface. The various landscapes of Titan look surprisingly familiar---like landscapes here on Earth. The mechanics of Titan's hydrogeological cycle is similar to the Earth's but the chemistry is different: instead of liquid water, Titan has liquid methane and instead of silicate rocks, Titan's rocks are dirty water ice. Liquid methane below the surface is released to the atmosphere to replenish that lost to the formation of the photochemical smog that eventually gets deposited in the soil. Methane rain washes the higher elevations of the dark material and it gets concentrated down in valleys to highlight the river drainage channels (see picture below right). Later images from Cassini have revealed huge methane and ethane lakes that change shape, presumably from rainfall of liquid hydrocarbons. More recent images show methane rain falling in the equatorial regions of Titan as spring unfolded in the Saturn system in late 2010. Besides erosion, Titan may have signs of tectonic activity and volcanism. There are, of course, some impact craters but fewer than 100 have been seen---small bodies burn up in Titan's atmosphere and erosion, tectonics, and volcanism erase others. Its icy composition and its eccentric orbit might mean that tidal heating added to radioactive decay are enough to provide the internal heat (recall that water ice melts and flexes at lower temperatures than the rocks of the inner planets).
The montage below includes a radar map of the lakes near the north pole of Titan. They are filled with liquid ethane and methane and are fed from sub-surface seepage and rainfall. Looking a lot like lakes on the Earth, you can see bays, islands, and tributary networks. The large lake at the top of the radar map is larger than Lake Superior on Earth. Kraken Mare, of which a small portion of is visible in the lower left part of the map, is as big as the Caspian Sea on the Earth. There are also lakes near the south pole of Titan. [Data used to create the montage: 939 nm image, 5 micron glint image, and radar image.]
Titan, Triton, and our Moon to the same scale.
Enceladus is the fourth largest moon of Saturn at 504 km in diameter. It is shown in front of the much larger Titan in the image at left from Cassini. Enceladus orbits 238,000 kilometers from Saturn in 1.37 days. Despite its small size, Enceladus is a moon of large interest because it has the highest albedo of any major moon (1.0) and it is geologically active. Tidal heating supplies only a small amount (about 1/5th) of the internal heat for this moon. Recent simulations show that if Enceladus has a slight wobble in its rotation of between 0.75 and 2 degrees, the wobbling could generate about five times more heat than tidal heating as well as produce it at the observed locations of greatest heat in the fissures in its southern hemisphere. Geological activity is helped by Enceladus being mostly ice---its density is 1.61X water. Recall that ices can deform and melt at lower temperatures than silicate and metal rocks.
Enceladus has geysers spurting water (vapor and ice) from its south pole that point to a large ocean of liquid water below its icy, mirror-like surface. The geysers can be seen when one is on the other side of Enceladus looking back toward the Sun. The small particles scatter the sunlight forward toward the viewer. Geyser material is able to escape Enceladus and become part of the E-ring of Saturn. Enceladus' activity appears to be localized to the southern hemisphere. Its northern hemisphere has many more craters. Recent sampling of the geyser material has found salts (sodium chloride and potassium chloride) and carbonates mixed in with the water. That means the liquid water layer is in contact with the rocky core instead of being sandwiched between ice layers. If there is an ocean below the icy surface, should Enceladus be another place to look for life besides Europa?
In this image above taken in November 2009, more than 30 individual jets shoot water vapor and ice up hundreds of kilometers from the south pole region.
The south pole region of Enceladus is a stark contrast from regions further north in the image on the right. In this enhanced color view, the blue "tiger stripes" stand out. The "tiger stripes" are fissures that spray icy particles, water vapor and organic compounds.
Triton has many black streaks on its surface that may be from volcanic venting of nitrogen heated to a gaseous state, despite the very low temperatures, by high internal pressures. The nitrogen fountains are about 8 kilometers high and then move off parallel to the surface by winds in the upper part of its thin atmosphere. Another unusual thing about Triton is its highly inclined orbit (with respect to Neptune's equator). Its circular orbit is retrograde (backward), which means the orbit is decaying---Triton is spiralling into Neptune. Triton's strange orbit and the very elliptical orbit of Neptune's other major moon, Nereid, leads to the proposal that Triton was captured by Neptune when Triton passed too close to it. If it was not captured, Triton was certainly affected by something passing close to the Neptune system.
last updated: January 12, 2013
Carbon's mysterious magnetism. An X-ray experiment has produced the most conclusive evidence yet that carbon can be made into a permanent magnet.
Only a few elements are magnetic at room temperature. They are metals whose atoms have a magnetic moment arising from the spin of an unpaired electron. Pairs of electrons with opposite spin produce no net magnetic moment. Since carbon tends to form covalent bonds, which contain paired electrons, it seems an unlikely candidate to be magnetized.
But several experiments have suggested that under certain conditions, forms of bulk carbon such as graphite can acquire a feeble permanent magnetization. Most physicists regard these results with skepticism, however, since even trace contamination by a metal such as iron could make a sample slightly magnetic.
Now, a team led by Hendrik Ohldag of the Stanford (Calif.) Synchrotron Radiation Laboratory has found magnetism in a sample of graphite that had been irradiated with protons. The researchers detected a magnetic moment through the effect it had on the absorption of polarized X rays. To rule out contamination, they tuned the energy of the X rays so that they interacted with carbon atoms but not with iron atoms. The results appear in the May 4 Physical Review Letters.

Ohldag says that the proton bombardment could have permanently deformed the hexagonal lattice of carbon atoms in graphite, creating some non-covalent bonds between atoms.
Stratigraphy is a term used by archaeologists, geologists, and the like to refer to the layers of the earth that have built up over time. Stratification is defined by the depositing of strata or layers, one on top of the other, creating the ground we walk on today. Stratigraphy is a relative dating system, as there are no exact dates to be located within the ground, and areas can build up at different rates depending on climate, habitation, and weather. This is why context and association are so important when excavating. If multiple objects are found in association with each other, it is a good indication that they were buried at the same time. If coins are found within strata, or pieces of organic material that can be radiocarbon dated, then more exact dates can be attributed. Once a collection is formed over various layers in the earth, we are then able to create a proper timeline. Analysis of stratigraphy is then used to create a matrix, sorting out the layers to create a visual timeline.
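The matrix mentioned above (often called a Harris matrix) is, at its core, a topological ordering of observed "lies beneath" relationships between layers. A minimal sketch of that idea, using hypothetical stratum names, might look like this in Python:

```python
from graphlib import TopologicalSorter

# Hypothetical field observations: each stratum maps to the strata that
# lie directly beneath it, i.e. that must have been deposited earlier.
below = {
    "topsoil": {"pit_fill"},
    "pit_fill": {"floor"},
    "floor": {"foundation_trench"},
    "foundation_trench": {"natural_subsoil"},
}

# TopologicalSorter emits predecessors first, so static_order() runs
# from the oldest deposit to the youngest: a simple relative timeline.
timeline = list(TopologicalSorter(below).static_order())
print(timeline)  # oldest stratum first, topsoil last
```

Recording a new relationship just adds an edge, and the sorter rebuilds the relative timeline, which is one way to see why context and association matter so much during excavation.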
Radiocarbon Dating and Archaeology
Stratigraphy, scientific discipline concerned with the description of rock successions and their interpretation in terms of a general time scale. It provides a basis for historical geology, and its principles and methods have found application in such fields as petroleum geology and archaeology. Stratigraphic studies deal primarily with sedimentary rocks but may also encompass layered igneous rocks. A common goal of stratigraphic studies is the subdivision of a sequence of rock strata into mappable units, determining the time relationships that are involved, and correlating units of the sequence—or the entire sequence—with rock strata elsewhere.
Following the failed attempts during the last half of the 19th century of the International Geological Congress (IGC) to standardize a stratigraphic scale, the International Union of Geological Sciences (IUGS) established a Commission on Stratigraphy to work toward that end. Traditional stratigraphic schemes rely on two scales: (1) a time scale using eons, eras, periods, epochs, ages, and chrons, for which each unit is defined by its beginning and ending points, and (2) a correlated scale of rock sequences using systems, series, stages, and chronozones.
Determining the ages of fossils is an important step in mapping out how life evolved across geologic time. The study of stratigraphy enables scientists to establish the relative ages of the rock layers in which fossils are found.
Stratigraphy is the study of layered materials strata that were deposited over time. The basic law of stratigraphy, the law of superposition, states that lower layers are older than upper layers, unless the sequence has been overturned. Stratified deposits may include soils, sediments, and rocks, as well as man-made features such as pits and postholes. The adoption of stratigraphic principles by archaeologists greatly improved excavation and archaeological dating methods.
By digging from the top downward, the archaeologist can trace the buildings and objects on a site back through time using techniques of typology.
For those researchers working in the field of human history, establishing the chronology of events is essential. Relative approaches such as stratigraphy help to order events chronologically, but they do not provide calendar dates. Absolute dating methods mainly include radiocarbon dating.
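As a rough illustration of how radiocarbon dating turns a measurement into an age: carbon-14 decays with a half-life of about 5,730 years, so the fraction of it remaining in a sample fixes the elapsed time via t = (t½ / ln 2) · ln(N₀/N). The sketch below deliberately ignores calibration curves and measurement error, which real laboratories must account for.

```python
import math

HALF_LIFE_C14 = 5730.0  # years (Cambridge half-life of carbon-14)

def radiocarbon_age(fraction_remaining: float) -> float:
    """Uncalibrated age in years from the fraction of original C-14 left."""
    return HALF_LIFE_C14 / math.log(2) * math.log(1.0 / fraction_remaining)

print(round(radiocarbon_age(0.5)))   # one half-life -> 5730
print(round(radiocarbon_age(0.25)))  # two half-lives -> 11460
```

A sample retaining half its carbon-14 is one half-life old; a quarter remaining means two half-lives, and so on, which is why the method fades out beyond roughly ten half-lives, when too little C-14 survives to measure.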
Archaeologists use many different techniques to determine the age of a particular artifact, site, or part of a site. Two broad categories of dating or chronometric techniques that archaeologists use are called relative and absolute dating. Stratigraphy is the oldest of the relative dating methods that archaeologists use to date things.
Stratigraphy is based on the law of superposition: like a layer cake, the lowest layers must have been formed first. In other words, artifacts found in the upper layers of a site will have been deposited more recently than those found in the lower layers. Cross-dating of sites, comparing geologic strata at one site with another location and extrapolating the relative ages in that manner, is still an important dating strategy used today, primarily when sites are far too old for absolute dates to have much meaning.
The scholar most associated with the rules of stratigraphy or law of superposition is probably the geologist Charles Lyell. The basis for stratigraphy seems quite intuitive today, but its applications were no less than earth-shattering to archaeological theory.
Stratigraphy and the Laws of Superposition
The law of superposition is that the youngest rock is always on top and the oldest rock is always on the bottom. It is based on the common-sense argument that the bottom layer had to be laid down first; because it logically had to be laid down first, the bottom layer must be older. The layers on top could only be laid down on top of the bottom layer, so they must be younger.
However, the relative ages of rocks are more commonly determined by the presumed ages of the fossils found in the sedimentary layers. The sedimentary layers with the simplest fossils are assumed to be older, even if such a layer is found on top of a layer whose fossils are more complex and therefore assumed to be younger.
Stratigraphic Superposition. (Figure: in places where layers of rocks are contorted, the relative ages of the layers may be difficult to determine. View near Copiapo, Chile.) At the close of the 18th century, careful studies by scientists showed that rocks had diverse origins. Some rock layers, containing clearly identifiable fossil remains of fish and other forms of aquatic animal and plant life, originally formed in the ocean.
Other layers, consisting of sand grains winnowed clean by the pounding surf, obviously formed as beach deposits that marked the shorelines of ancient seas. Certain layers are in the form of sand bars and gravel banks — rock debris spread over the land by streams. Some rocks were once lava flows or beds of cinders and ash thrown out of ancient volcanoes; others are portions of large masses of once molten rock that cooled very slowly far beneath the Earth’s surface.
Other rocks were so transformed by heat and pressure during the heaving and buckling of the Earth’s crust in periods of mountain building that their original features were obliterated.
Dating Rocks and Fossils Using Geologic Methods
The age of fossils can be determined using stratigraphy, biostratigraphy, and radiocarbon dating. Paleontology seeks to map out how life evolved across geologic time. A substantial hurdle is the difficulty of working out fossil ages. There are several different methods for estimating the ages of fossils, including:
Radiocarbon Dating. Sometimes called carbon dating, this method works on organic material: both plants and animals exchange carbon with their surroundings while they are alive, and that exchange stops at death.
Stratigraphy refers to layers of sediment, debris, rock, and other materials that form or accumulate as the result of natural processes, human activity, or both. An individual layer is called a stratum; multiple layers are called strata. At an archaeological site, strata exposed during excavation can be used to relatively date sequences of events.
At the heart of this dating technique is the simple principle of superposition: Upper strata were formed or deposited later than lower strata. Without additional information, however, we cannot assign specific dates or date ranges to the different episodes of deposition. In this example, archaeologists might radiocarbon date the basket fragment or bone awl in Stratum E, and they could use artifact seriation to obtain fairly precise date ranges for Strata A, B, C, and E.
If the date on the car license plate is preserved, they can say with certainty that Stratum A was deposited in that year or later.
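The principle of superposition lends itself to a tiny sketch in code. The strata labels follow the example above (Stratum A on top, Stratum E at the bottom); the function name is ours, not from any archaeological toolkit:

```python
# Strata as excavated, listed from the top (shallowest) down.
# By superposition, deeper strata were deposited earlier.
strata_top_down = ["A", "B", "C", "D", "E"]

def oldest_to_youngest(strata):
    """Return strata in depositional order: lowest (oldest) first."""
    return list(reversed(strata))

order = oldest_to_youngest(strata_top_down)
print(order)  # ['E', 'D', 'C', 'B', 'A'] -- Stratum E was deposited first
```

This gives only a relative sequence; assigning calendar dates still requires an absolute method such as radiocarbon dating.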
Relative Age-dating — Discovery of Important Stratigraphic Principles
Nicolaus Steno introduced the basic principles of stratigraphy, the study of layered rocks. William Smith, working with the strata of English coal mines, later showed that individual layers could be recognized by the fossils they contain. (Coal itself is former swamp-derived plant material that is part of the rock record.) The figure of this geologic time scale shows the names of the units and subunits.
How do we know this, and how do we know the ages of other events in the past? In a classroom example, a plate stamped with its date of manufacture shows that the "Tin Cans" layer is about 67 years old. It was the principles of stratigraphy that allowed geologists, and now us, to work out the relative ages of layers.
These present many characteristics that are used for comparing them, such as morphology and raw materials in the case of stone tools, and decorative techniques and motifs in the case of ceramics. Radiocarbon Dating. Radiocarbon dating is the most widely used dating technique in archaeology. It relies on a natural phenomenon that is the foundation of life on earth. Indeed, carbon 14 (14C) is formed by cosmic rays that convert nitrogen into carbon 14; as carbon dioxide it then mixes with carbon 12 (12C) and carbon 13 (13C), which are stable carbon isotopes.
Following the death of an organism, any exchange ceases and the carbon 14, which is radioactive and therefore unstable, slowly begins to disintegrate at a known rate (half-life of 5,730 years; that is, after this period only half of the total carbon 14 present at the time of death remains). A sample requires 10 to 20 grams of matter and usually consists of charred organic material, mainly charcoal, but bones (see zooarchaeology) and shells can also be dated using this technique. An initial reading dates the specimen, which is then calibrated against the measurable level of carbon 14 stored over time in the growth rings of certain tree species, including redwood and bristlecone pine.
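The decay arithmetic behind an uncalibrated radiocarbon age can be sketched in a few lines. This is a simplified model that ignores calibration entirely; 5,730 years is the standard half-life figure for carbon 14:

```python
import math

HALF_LIFE_C14 = 5730.0  # years

def fraction_remaining(age_years):
    """Fraction of the original carbon 14 still present after age_years."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

def age_from_fraction(fraction):
    """Uncalibrated age estimate from the measured carbon-14 fraction."""
    return HALF_LIFE_C14 * math.log(1.0 / fraction, 2)

# After one half-life, half the carbon 14 remains; after two, a quarter.
print(round(age_from_fraction(0.5)))        # 5730
print(round(fraction_remaining(11460), 2))  # 0.25
```

A real laboratory result would then be calibrated against tree-ring curves, as the text describes, to convert this raw age into a calendar date range.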
Radiocarbon dating is a key tool archaeologists use to determine the age of plants and objects made with organic material. But new research shows that commonly accepted radiocarbon dating standards can miss the mark — calling into question historical timelines. Archaeologist Sturt Manning and colleagues have revealed variations in the radiocarbon cycle at certain periods of time, affecting frequently cited standards used in archaeological and historical research relevant to the southern Levant region, which includes Israel, southern Jordan and Egypt.
These variations, or offsets, of up to 20 years in the calibration of precise radiocarbon dating could be related to climatic conditions. Pre-modern radiocarbon chronologies rely on standardized Northern and Southern Hemisphere calibration curves to obtain calendar dates from organic material.
Stratigraphy is the scientific discipline concerned with the description of rock successions. Absolute dating methods, such as radiometric dating (the measurement of radioactive decay), complement the relative ordering that stratigraphy provides.
In groups, students will use soil "keys" to match a known date and soil context to soils on the poster. The keys provide a date to apply to different features on the poster. Students will take this information and concepts learned from the discussion to complete the worksheet. Copies of the soil levels poster are needed for each group.
Poster may be printed out at any size. Legal or 11x17 is best for visibility and for sharing. If you increase the poster size, remember to increase the sheet of keys the same amount to allow for best matching. There are 2 sets of keys and 2 teacher keys on each sheet.
Although Uncle Sam (initials U.S.) is the most popular personification of the United States, many Americans have little or no concept of his origins. If pressed, the average American might point to the early 20th century and Sam’s frequent appearance on army recruitment posters. In reality, however, the figure of Uncle Sam dates back much further. Portraying the tradition of representative male icons in America, which can be traced well back into colonial times, the actual figure of Uncle Sam, dates from the War of 1812.
At that point, most American icons had been geographically specific, centering most often on the New England area. However, the War of 1812 sparked a renewed interest in national identity which had faded since the American Revolution.
The term Uncle Sam is said to have been derived from a man named Samuel Wilson, a meat packer from Troy, New York, who supplied rations for the soldiers during the War of 1812. Samuel Wilson, who served in the American Revolution at the age of 15, was born in Massachusetts. After the war, he settled in the town of Troy, New York, where he and his brother, Ebenezer, began the firm of E. & S. Wilson, a meat packing facility. Samuel was a man of great fairness, reliability, and honesty, who was devoted to his country. Well liked, he came to be referred to by local residents as "Uncle Sam."
During the War of 1812, meat for the troops was badly needed. Secretary of War William Eustis made a contract with Elbert Anderson, Jr. of New York City to supply and issue all rations necessary for the United States forces in New York and New Jersey for one year. Anderson ran an advertisement on October 6, 1813, looking to fill the contract. The Wilson brothers bid for the contract and won. The contract was to fill 2,000 barrels of pork and 3,000 barrels of beef for one year. Their location on the Hudson River made it ideal to receive the animals and to ship the product.
At the time, contractors were required to stamp their name and where the rations came from onto the food they were sending. Wilson's packages were labeled "E.A.–U.S.," which stood for Elbert Anderson, the contractor, and the United States. When an individual in the meat packing facility asked what it stood for, a coworker joked that it referred to Sam Wilson: "Uncle Sam."
A number of soldiers originally from Troy also saw the designation on the barrels. Being acquainted with Sam Wilson and his nickname "Uncle Sam," and knowing that Wilson was feeding the army, they came to the same conclusion. The local newspaper soon picked up on the story, and Uncle Sam eventually gained widespread acceptance as the nickname for the U.S. federal government.
Though this is an endearing local story, there is doubt as to whether it is the actual source of the term. Uncle Sam is mentioned previous to the War of 1812 in the popular song “Yankee Doodle”, which appeared in 1775. However, it is not clear whether this reference is to Uncle Sam as a metaphor for the United States, or to an actual person named Sam. Another early reference to the term appeared in 1819, predating Wilson’s contract with the government.
The connection between this local saying and the national legend is not easily traced. As early as 1830, there were inquiries into the origin of the term “Uncle Sam”. The connection between the popular cartoon figure and Samuel Wilson was reported in the New York Gazette on May 12, 1830.
Regardless of the actual source, Uncle Sam immediately became popular as a symbol of an ever-changing nation. His "likeness" appeared in drawings in various forms, including resemblances to Brother Jonathan (a national personification and emblem of New England), Abraham Lincoln, and others. In the late 1860s and 1870s, political cartoonist Thomas Nast began popularizing the image of Uncle Sam. Nast continued to evolve the image, eventually giving Sam the white beard and stars-and-stripes suit that are associated with the character today. He is also credited with creating the modern image of Santa Claus, as well as coming up with the donkey as a symbol for the Democratic Party and the elephant as a symbol for the Republicans.
However, when a military recruiting poster was created in about 1917, the image of Uncle Sam was firmly set into American consciousness. The famous "I Want You" recruiting poster was created by James Montgomery Flagg, and four million copies were printed between 1917 and 1918. Indeed, the image was a powerful one: Uncle Sam's striking features, expressive eyebrows, pointed finger, and direct address to the viewer made this drawing into an American icon.
Throughout the years, Uncle Sam has appeared in advertising and on products ranging from cereal to coffee to car insurance. His likeness also continued to appear on military recruiting posters and in numerous political cartoons in newspapers.
In September 1961, the U.S. Congress recognized Samuel Wilson as “the progenitor of America’s national symbol of Uncle Sam.” Wilson died at age 88 in 1854, and was buried next to his wife Betsey Mann in the Oakwood Cemetery in Troy, New York, the town that calls itself “The Home of Uncle Sam.”
Uncle Sam represents a manifestation of patriotic emotion.
The star's life sequence, ending with the formation of a black hole. Image credit: Nicolle Rager Fuller/NSF
Just a few hundred million years after the Big Bang, a massive star exhausted its fuel, collapsed into a black hole, and exploded as a gamma ray burst. The radiation from this catastrophic event has only now reached Earth, and astronomers are using it to peer back to the earliest moments of the Universe. The burst, named GRB 050904, was observed by NASA's Swift satellite on September 4, 2005. One unusual thing about this burst is that it lasted for 500 seconds; most are over in a fraction of that time.
It came from the edge of the visible universe, the most distant explosion ever detected.
In this week’s issue of Nature, scientists at Penn State University and their U.S. and European colleagues discuss how this explosion, detected on 4 September 2005, was the result of a massive star collapsing into a black hole.
The explosion, called a gamma-ray burst, comes from an era soon after stars and galaxies first formed, about 500 million to 1 billion years after the Big Bang. The universe is now 13.7 billion years old, so the September burst serves as a probe to study the conditions of the early universe.
“This was a massive star that lived fast and died young,” said David Burrows, senior scientist and professor of astronomy and astrophysics at Penn State, a co-author on one of the three reports about this explosion published this week in Nature. “This star was probably quite different from the kind we see today, the type that only could have existed in the early universe.”
The burst, named GRB 050904 after the date it was spotted, was detected by NASA’s Swift satellite, which is operated by Penn State. Swift provided the burst coordinates so that other satellites and ground-based telescopes could observe the burst. Bursts typically last only 10 seconds, but the afterglow will linger for a few days.
GRB 050904 originated 13 billion light years from Earth, which means it occurred 13 billion years ago, for it took that long for the light to reach us. Scientists have detected only a few objects more than 12 billion light years away, so the burst is extremely important in understanding the universe beyond the reach of the largest telescopes.
“Because the burst was brighter than a billion suns, many telescopes could study it even from such a huge distance,” said Burrows, whose analysis focuses mainly on Swift data from its three telescopes, covering a range of gamma-rays, X-rays, and ultraviolet/optical wavelengths, respectively. Burrows is the lead scientist for Swift’s X-ray telescope.
The Swift team found several unique features in GRB 050904. The burst was long, lasting about 500 seconds, and the tail end of the burst exhibited multiple flares. These characteristics imply that the newly created black hole didn't form instantly, as some scientists have thought, but rather that it was a longer, chaotic event.
Closer gamma-ray bursts do not have as much flaring, implying that the earliest black holes may have formed differently from ones in the modern era, Burrows said. The difference could be because the first stars were more massive than modern stars. Or, it could be the result of the environment of the early universe when the first stars began to convert hydrogen and helium (created in the Big Bang) into heavier elements.
GRB 050904, in fact, shows hints of newly minted heavier elements, according to data from ground-based telescopes. This discovery is the subject of a second Nature article by a Japanese group led by Nobuyuki Kawai at the Tokyo Institute of Technology.
GRB 050904 also exhibited time dilation, a result of the vast expansion of the universe during the 13 billion years that it took the light to reach us on Earth. This dilation results in the light appearing much redder than when it was emitted in the burst, and it also alters our perception of time as compared to the burst’s internal clock.
These factors worked in the scientists' favor. The Penn State team turned Swift's instruments onto the burst about 2 minutes after the event began. The burst, however, was evolving as if in slow motion and was only about 23 seconds into its course. So scientists could see the burst at a very early stage.
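The figures in this passage (about 120 seconds of observation corresponding to about 23 seconds on the burst's own clock) imply a particular stretching factor, which can be checked in a couple of lines. Note that this factor is derived purely from the article's numbers, not from an independently measured redshift:

```python
observed_seconds = 120.0   # ~2 minutes after the event began, as seen from Earth
intrinsic_seconds = 23.0   # elapsed time on the burst's internal clock

# Cosmological time dilation: each second of the burst's own time
# appears stretched by this factor to observers on Earth.
dilation_factor = observed_seconds / intrinsic_seconds
print(round(dilation_factor, 1))  # ~5.2

def observed_duration(intrinsic, factor=dilation_factor):
    """Duration we would measure for an interval of the burst's own time."""
    return intrinsic * factor
```

So a feature lasting 10 seconds at the burst would unfold over roughly 50 seconds in Swift's data, which is why the instruments could catch such an early stage of the event.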
Only one other object, a quasar, has been discovered at a greater distance. Yet, whereas quasars are supermassive black holes containing the mass of billions of stars, this burst comes from a single star. The detection of GRB 050904 confirms that massive stars mingled with the oldest quasars. It also confirms that even more explosions of distant stars, perhaps from the first stars, theorists say, can be studied through a combination of observations with Swift and other world-class telescopes.
"We designed Swift to look for faint bursts coming from the edge of the universe," said Neil Gehrels of NASA Goddard Space Flight Center in Greenbelt, Maryland, Swift's principal investigator. "Now we've got one and it's fascinating. For the first time, we can learn about individual stars from near the beginning of time. There are surely many more out there."
Swift was launched in November 2004 and was fully operational by January 2005. Swift carries three main instruments: the Burst Alert Telescope, the X-ray Telescope, and the Ultraviolet/Optical Telescope. The Burst Alert Telescope, Swift's gamma-ray detector, provides the rapid initial location and was built primarily by NASA Goddard Space Flight Center in Greenbelt and Los Alamos National Laboratory. Swift's X-ray Telescope and Ultraviolet/Optical Telescope were developed and built by international teams led by Penn State and drew heavily on each institution's experience with previous space missions. The X-ray Telescope resulted from Penn State's collaboration with the University of Leicester in England and the Brera Astronomical Observatory in Italy. The Ultraviolet/Optical Telescope resulted from Penn State's collaboration with the Mullard Space Science Laboratory of University College London. These three telescopes give Swift the ability to do almost immediate follow-up observations of most gamma-ray bursts because Swift can rotate so quickly to point toward the source of the gamma-ray signal.
Original Source: PSU News Release
This informative book teaches early readers about the important impact the government has on our lives. Readers will learn about taxes, the three branches of government, voting, and more through bright images and supportive text. A table of contents, glossary, and index are included to aid in helping readers better understand the content.
People count on good citizens to make America a great country. In this inspiring nonfiction book, readers will learn what they can do to be good citizens, what leaders do to help citizens, and what rules good citizens follow. The appealing images, supportive text, and helpful table of contents, glossary, and index work together to keep readers informed and engaged from cover to cover.
Give early readers a look into who determines the rules for various places--from the classroom to the entire country! With vivid images in conjunction with easy-to-read text, readers are encouraged to recognize and follow rules that impact their lives.
This fact-filled nonfiction book will teach readers about various types of laws--from city laws to national ones--and what impact they have on us. The detailed images, fascinating facts, and supportive text work together to inspire readers to learn all they can about laws. A glossary, table of contents, and index are provided to build readers' comprehension.
Today, Americans embrace one another's differences. But it was not always this way. In the past, people had to struggle against slavery and unfair leaders. Americans believe in equality and responsibility. These are our civic values. It is important that we uphold these beliefs. Colorful images, supporting text, a glossary, table of contents, and index all work together to help readers better understand the content and be fully engaged from cover to cover.
Thurgood Marshall was an incredible man. He believed that "separate but equal" was not fair. He fought for people and their civil rights. He became a justice for the Supreme Court. Here he helped change unfair laws for African Americans. He is known as "Mr. Civil Rights". Colorful images, supporting text, a glossary, table of contents, and index all work together to help readers better understand the content and be fully engaged from cover to cover.
What does it take to be a good citizen? Early readers will find out in this nonfiction title that features lively images, simple text, and an accompanying glossary. Children are encouraged to practice being a good citizen on their own time and to share their good deeds with others.
“Epiphyte” is another word for “air plant.” They are a class of plants that grow up in trees, often with little to no contact to the soil below the tree. They are also not parasitic to the tree they live on, getting their nutrients from the air itself or what little else they can find on the branch, such as bird droppings. Epiphytes are common to tropical environments and somewhat less so in subtropical environments. While some kinds of epiphytes, such as tillandsias (which are a subgroup of bromeliads), are capable of surviving with literally no soil, most require a little. In the wild, plants firmly anchored to tree limbs attract debris and droppings, which break down into a sort of soil that collects on tree limbs. In fact, some of this soil has been found to be fertile enough that the trees grow roots out of their limbs to take advantage of the nutrients. However, the “soil” is always thin and light and drains well.
In cultivation, most epiphytes are grown wired or glued to a log or piece of bark with a little sphagnum moss packed around their roots. Some are grown in pots, but the soil is often loose and well-draining. Orchid bark is a good example of soil that is specifically designed for epiphytes, as most orchids are epiphytes. As for watering, a light mist two to four times a week will do for most species and nearly all prefer to dry out between waterings. Despite growing in wet climates, their soil is so thin that they are more adapted to dry conditions than many of their neighbors. Also many do not require any fertilizer at all. Many also prefer dappled light since they tend to have a canopy over their heads in the wild. All of these things make many epiphytes great houseplants and there are many varieties available. Below are some of the different types of epiphytes out there.
Most orchids (with the notable exception of Paphiopedilums) are epiphytes. I had a Phalaenopsis and an Encyclia that I grew epiphytically on a log in a dry climate for over a year. While both survived, neither flowered, probably due to the fact that I rarely fertilized them and watered them all too infrequently. Some orchids do better than others in that sort of situation, but most will die if planted in soil. Plus, the amazing blooms of these plants make them worth a try.
Bromeliads are probably the most famous of the air plants. While there are terrestrial bromeliads (pineapple is one), many are very epiphytic. I have three different Tillandsias that have been living without so much as a little sphagnum moss around their roots for over 4 years now. I suspect that if I was in a humid climate, I probably wouldn’t even have to spray them. If you look a little deeper than your local grocery store, you can find some bromeliads with really fascinating foliage. With proper care, they flower every 2 years or so, producing blooms that rival orchids. As a bonus, when the blooms fade, the parent plant produces pups, which are miniature plants. When those get to about half the size of the parent plant, they can be removed and planted elsewhere.
There are actually several varieties of vining, epiphytic cacti. Their adaptation to dry climates must have made this a natural move. I can only imagine that a dry-adapted plant moving into a moist environment would cause them to evolve to take advantage of the driest microclimate available. I am just starting to grow my first epiphytic cactus, but they are still just seedlings. I’ll talk more about that in my next post.
Nepenthes are commonly called "tropical pitcher plants." They are a family of plants that start on the forest floor and then climb up the nearest tree. As with many tropical plants, they have a drip tip on the end of their leaves. Only in Nepenthes, the drip tip has become highly evolved. As the plant gets bigger and feels the lack of nutrients in its chosen home, the drip tip enlarges into a little pitcher that is used to capture and digest prey, thereby giving the plant the nutrients that growing on a tree lacks. The pitchers of Nepenthes tend to be more elaborate than terrestrial pitcher plants and quite beautiful. In fact, some of the largest pitchers in the world belong to Nepenthes. Nepenthes rajah is reported to have pitchers big enough to capture a rat.
There are a couple of kinds of epiphytic ferns. The white rabbit foot fern has white, fuzzy aerial roots that sort of look like rabbits’ feet. However, by far the most fascinating of the epiphytic ferns are the staghorn ferns. A staghorn fern starts on the side of a tree and grows two kinds of fronds. The basal fronds grow short and round and cover the root ball, protecting and enlarging it over time. The fertile fronds are what the plant is named for. They grow long and wide and sort of resemble the antlers of a moose. Mature staghorn ferns can be several feet across and are absolutely majestic plants. Thus far my attempts to grow any epiphytic ferns have been unsuccessful as they don’t tolerate the lack of humidity in my climate.
While I have never grown an ant plant, and probably never will, I still find them fascinating. In addition to the usual complement of leaves, ant plants grow an enlarged, bulbous base that is riddled with tunnels. In the wild, ants move in and inhabit the tunnels, allowing the plant to take advantage of their waste.
If you are interested in possibly growing any of these epiphytes, I strongly recommend giving Black Jungle Terrarium Supply a visit. They specialize in supplies for making terrariums for poison dart frogs and have an amazing variety of epiphytic plants. |
A drug may be classified by the chemical type of the active ingredient or by the way it is used to treat a particular condition. Each drug can be classified into one or more drug classes.
Catecholamines include adrenaline, noradrenaline and dopamine. They are physiologically important neurotransmitters, as part of the sympathetic and central nervous systems. Catecholamines act on both the alpha and beta adrenergic receptors. Catecholamines are released in times of stress. They make your heart beat faster with greater force and narrow the blood vessels, causing a rise in blood pressure.
The beta1 effects of catecholamine on the heart are due to an increase in intracellular concentration of cyclic-AMP. Cyclic-AMP activates protein kinase A, which phosphorylates sites on calcium channels, including alpha1-subunits.
This increases the probability that the channels will open, increasing the inward calcium ion current and therefore the force of cardiac contraction. It also increases calcium ion uptake by the sarcoplasmic reticulum, increasing the amount of intracellular calcium available for release by an action potential. So the net result of catecholamine action is to elevate and steepen the ventricular function curve. The increase in heart rate results from an increased slope of the pacemaker potential, owing to a shift in the voltage-dependence of the pacemaker conductance.
SLOPE AND MORE
Answers, Explanations, and Elaborations
The slides of "Slope and More" take you from the basic concept of a slope to the application of that concept (and of a few others) to the macroeconomy. The overall goal of the exercise is to encourage the student to think of "math" and "the real world" at the same time. Offered below are answers, explanations, and elaborations.
1. The slope is defined as the RISE over the RUN.
That's right; it is. And you should observe that the RISE and the RUN can be expressed in physical dimensions (6 inches and 10 inches) or in any other dimensions appropriate to the application, such as an extra $6 of consumption spending brought about by an extra $10 of income. The slope itself is a "pure" number: 6 inches divided by 10 inches is simply 0.6. Graphically, a slope is the ratio of the two legs (vertical-to-horizontal) of a right triangle. That is, the slope of the hypotenuse is the ratio of the vertical leg to the horizontal leg.
2. What is the slope of the roof of this garage?
Here you need only visualize the relevant right triangle. Its horizontal leg extends from the eave of the garage to the midpoint of the garage and therefore measures 10 feet. The slope of the roof, then, is 6 feet/10 feet, or 0.6.
3. How tall is this evergreen tree?
If you know the RISE and the RUN, you can find the SLOPE. Or, if you know the RUN and the SLOPE, you can find the RISE. Either way, the defining relationship is: SLOPE = RISE/RUN. The RUN, in the form of the tree's shadow, is given as 25 feet. The SLOPE, indicated by the relative lengths of the legs of the right triangle, is 0.6. We can write, then, that 0.6 = RISE/25. Multiplying both sides by 25, we see that RISE = 15. The evergreen tree is 15 feet tall.
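Problems 2 and 3 are the same relationship run in different directions, which a short sketch makes explicit (the function names are ours, chosen to match the text's vocabulary):

```python
def slope(rise_value, run_value):
    """SLOPE = RISE / RUN."""
    return rise_value / run_value

def rise(slope_value, run_value):
    """RISE = SLOPE * RUN (the defining relationship rearranged)."""
    return slope_value * run_value

# Problem 2: garage roof with a 6-foot rise over a 10-foot run.
print(slope(6, 10))   # 0.6

# Problem 3: same slope, 25-foot shadow as the run; solve for the rise.
print(rise(0.6, 25))  # 15.0 -- the tree is 15 feet tall
```

Either function recovers the third quantity whenever the other two are known, which is the whole point of the defining relationship SLOPE = RISE/RUN.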
4. What is the slope of this line?
Now we've gone from garages and trees to conventional graphics and symbology. Your high-school math teachers and college math professors label the vertical axis with a lower-case "y" and the horizontal axis with a lower-case "x"; they use an "m" for the slope of a line and a "b" for its vertical intercept: y = mx + b. We are given the coordinates of two points on this line, and so we can determine both the RISE and the RUN. The RISE is 26-14, or 12; the RUN is 40-20, or 20. The SLOPE of the line, then, is RISE/RUN, which is 12/20, or 0.6.
To find the vertical intercept, we simply write the equation of the line, substituting the coordinates of either point for the x and y and substituting 0.6 for the slope, m.
y = mx + b
14 = 0.6(20) + b
14 = 12 + b
b = 14 - 12
b = 2
Equivalently: y = mx + b
26 = 0.6(40) + b
26 = 24 + b
b = 26 - 24
b = 2
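The arithmetic of Problem 4 generalizes to any two points; a small helper (ours, for illustration) computes both the slope and the intercept at once:

```python
def line_through(p1, p2):
    """Return (slope, intercept) of the line y = m*x + b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # RISE over RUN
    b = y1 - m * x1            # solve y1 = m*x1 + b for b
    return m, b

# Problem 4: the line through (20, 14) and (40, 26).
m, b = line_through((20, 14), (40, 26))
print(m, b)  # 0.6 2.0
```

Substituting the other point, (40, 26), would give the same intercept, just as the two derivations above agree.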
5. How far does this luggage fall?
The hope here is that you will look at problem 5 and "see" problem 4. Now we've gone from conventional graphics and symbology to a conveyor belt and luggage. But the analytics are the same. Treat the vertical distance that represents "the distance that the luggage will fall" as the vertical intercept of the straight line traced out by the conveyor belt.
We are given both the vertical and the horizontal separation between the two pieces of luggage on this conveyor belt, and so we can determine both the RISE and the RUN. The RISE is 12; the RUN is 20. The SLOPE of the conveyor belt, then, is RISE/RUN, which is 12/20, or 0.6.
To find the vertical intercept, we simply write the equation of the conveyor belt, substituting the coordinates for either piece of luggage and substituting 0.6 for the slope, m.
y = mx + b
21 = 0.6(20) + b
21 = 12 + b
b = 21 - 12
b = 9
Equivalently: y = mx + b
33 = 0.6(40) + b
33 = 24 + b
b = 33 - 24
b = 9
Is there an easier way to get the answer? Sure. Notice that both RUNS in the picture are given as 20 feet. One of the corresponding RISES is given as 12 feet. The other RISE has to be 12 feet as well. Just subtract the second 12-foot RISE from the given distance of 21 feet to get 9 feet, which is the distance that the luggage will fall.
6A. What is the Marginal Propensity to Consume?
This problem uses a straight line (which looks a lot like that conveyor belt) to describe consumer behavior as it is affected by changes in income. C (consumption spending) is a linear function of Y (income). Note the specific form of the function (C = a + bY) with Y being measured along the horizontal axis and C being measured along the vertical axis. The other symbols take their meaning from the role they play in the equation. The stand-alone term "a" is the vertical intercept; the coefficient "b" is the slope.
We are given the coordinates of two points on this consumption equation, and so we can determine both the RISE and the RUN. The RISE is 33-21, or 12; the RUN is 40-20, or 20. The SLOPE of the line, then, is RISE/RUN, which is 12/20, or 0.6.
6B. How much would people spend on consumption goods (C) even if their incomes (Y) fell temporarily to zero?
To find the vertical intercept, we simply write the equation, substituting the coordinates of either point for Y and C and substituting 0.6 for the slope, b.
C = a + bY
21 = a + 0.6(20)
21 = a + 12
a = 21 - 12
a = 9
Equivalently: C = a + bY
33 = a + 0.6(40)
33 = a + 24
a = 33 - 24
a = 9
This is the level of consumer spending we would expect to see even if income has (temporarily) fallen to zero.
7A. Find the MPC for this economy and write the equation that describes consumption behavior.
The MPC is the Marginal Propensity to Consume, which is simply the slope of the consumption equation. To get the slope, we need to find two points on the equation whose coordinates are known. The intercept is one such point: Y = 0; C = 6. The other point is the intersection of the consumption line with the 45-degree line: Y = 15; C = 15. (Remember that the 45-degree line has a slope of 1.0, which means that, starting from the origin, the Y-distance and the C-distance to any point on that line are the same.) So, now we visualize a right triangle with one acute vertex touching the C-intercept and the other touching the intersection with the 45-degree line. The vertical leg of the triangle, the RISE, is 15-6, or 9; the horizontal leg of the triangle, the RUN, is 15. The SLOPE is the RISE over the RUN: 9/15 = 0.6.
Now we know both the vertical intercept (a = 6) and the slope (b = 0.6) and can write the equation describing consumption behavior:
C = a + bY
C = 6 + 0.6Y
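The 45-degree-line reasoning in 7A can be checked the same way. A minimal sketch (the coordinates come from the problem; the variable names are ours):

```python
# Two known points on the consumption line:
c_intercept = (0, 6)   # the C-intercept: Y = 0, C = 6
cross_45 = (15, 15)    # intersection with the 45-degree line, where C = Y

rise = cross_45[1] - c_intercept[1]  # 15 - 6 = 9
run = cross_45[0] - c_intercept[0]   # 15 - 0 = 15
mpc = rise / run                     # the slope b, i.e. the MPC

print(mpc)  # 0.6
```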
7B. How much do people spend on consumption goods (C) when total income (Y) is 35?
This question calls your bluff: Show me you can actually make use of this equation relating C to Y. You can simply substitute 35 for Y and solve for C:
C = 6 + 0.6Y
C = 6 + 0.6(35)
C = 6 + 21
C = 27
7C. How much do they save?
There are two ways to calculate the level of saving. The easiest is simply to recognize that saving is what's left of your income after you're through spending. That is, S = Y - C; S = 35 - 27 = 8
Alternatively, you could write the saving equation by observing the general form: S = -a + (1 - b)Y, and then evaluating for an income of Y = 35:
S = -a + (1 - b)Y
S = -6 + (1 - 0.6)(35)
S = -6 + 14
S = 8
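Questions 7B and 7C reduce to evaluating the two equations at Y = 35. A quick numeric check (our own sketch, using a = 6 and b = 0.6 from the problem):

```python
a, b = 6, 0.6  # vertical intercept and MPC from problem 7A
Y = 35

C = a + b * Y             # consumption: 6 + 0.6(35) = 27
S = Y - C                 # saving is income left after spending: 8
S_alt = -a + (1 - b) * Y  # the saving equation gives the same answer

print(C, S, S_alt)
```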
7D. How much would the investment community have to spend for the economy to be in equilibrium with Y = 35?
An income-expenditure equilibrium requires that income be equal to expenditures. For this wholly private economy, we can write the equilibrium condition as Y = C + I. We know that for an income of 35, consumption spending is 27. So we can write: 35 = 27 + I. Therefore I = 35 - 27 = 8.
Alternatively, we can recognize that for a wholly private economy, it is always true that Y = C + S. That is, your income has to be equal to the part of it that you spend plus the part of it that you don't spend. We can now write the equilibrium condition as C + S = C + I. Subtracting consumption spending from both sides gives us the alternative equilibrium condition for a wholly private economy: S = I. This means that if S = 8, then that amount needs to be borrowed and spent by the investment community for the economy to be in macroeconomic equilibrium.
(By the way, if we were dealing with a mixed economy, we would write as our accounting identity: Y = C + S + T. That is, your income has to be equal to the part of it that you spend plus the part of it that you don't spend plus the part that you don't even see--because the government took it as taxes before you were given your (after-tax) income. In this case, the alternative equilibrium would be S + T = I + G.)
8A. What is the equilibrium level of income?
We're shown a wholly private economy (no government spending; no taxes) in which consumption spending is given by the equation C = 30 + 0.6Y and investment spending is 34. The equilibrium level of income is the income that satisfies the equilibrium condition Y = C + I. So, we simply make use of all we know about C and about I and solve for Y:
Y = C + I
Y = 30 + 0.6Y + 34
0.4Y = 64
Y = 160
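The algebra behind 8A (substitute for C and I, collect the Y terms, divide by 1 - b) can be sketched in Python; the closed form Y = (a + I)/(1 - b) is just that algebra done once. The variable names are ours:

```python
a, b = 30, 0.6  # consumption function C = 30 + 0.6Y
I = 34          # investment spending

# Y = a + bY + I  =>  (1 - b)Y = a + I  =>  Y = (a + I) / (1 - b)
Y = (a + I) / (1 - b)
C = a + b * Y   # consumption at equilibrium (question 8C)
S = Y - C       # saving at equilibrium, which equals I (question 8B)

print(Y, C, S)
```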
8B. How much are people saving?
There's an easy way and a hard way to get this answer--and even the hard way is pretty easy. First, if we know that the economy is in equilibrium and we know that investment is 34, then saving must be 34, too. So, S = 34.
Alternatively, we can write the saving equation and evaluate it for an income of Y = 160:
S = -a + (1 - b)Y
S = -30 + (1 - 0.6)(160)
S = -30 + 64
S = 34
8C. How much are they spending on consumption goods?
Again, there's an easy way and an almost-as-easy way. First, if people are earning Y = 160 and saving S = 34, then they must be spending the difference: C = 126. That is, we simply make use of the identity: Y = C + S.
Alternatively, we can write the consumption equation and evaluate it for an income of Y = 160:
C = 30 + 0.6Y
C = 30 + 0.6(160)
C = 30 + 96
C = 126
8D. Can you say whether or not the labor force is fully employed?
We can't say. We take the so-called "going wage" as the wage rate that cleared the labor market during some earlier period when the economy was in good macroeconomic health. That going wage is still going--even if the demand for labor has fallen (and the economy is in recession). In recessionary conditions, the income of Y = 160 is equal to the going wage rate times the quantity of worker-hours currently demanded at that wage rate.
Only if the current demand for labor happens to intersect the given supply of labor at the going wage (a possible but not very likely circumstance, according to Maynard Keynes) would the corresponding income (Y = WN) be full-employment income.
One of the greatest musical geniuses the world has ever seen might have been struck down at the height of his powers by a bacterial infection that school nurses yawn at. A new analysis suggests that Wolfgang Amadeus Mozart may have died of complications relating to strep throat.
Mozart died on December 5, 1791 in Vienna after abruptly taking ill about two weeks before. The cause of death for the 35-year-old man was recorded as “fever and rash,” which even in the 18th century were considered symptoms, not a disease. Many causes have been suggested over the centuries: syphilis, the effects of treatment with salts of mercury, rheumatic fever, vasculitis leading to renal failure, infection from a bloodletting procedure, trichinosis from eating undercooked pork chops [The New York Times]. As no autopsy was conducted at the time of death and the common grave that held Mozart’s remains was later dug up to make room for new graves, modern medical sleuths have little direct evidence to go on.
For the new study, published in The Annals of Internal Medicine, researchers compared historical accounts of the maestro's illness – fever, rash, limb pain and swelling – with illnesses prevalent at the time of his death. They analysed more than 5,000 cases between 1791 and 1793 and found oedema (a swelling caused by the build-up of fluid beneath the skin) to be the third most common cause of death after tuberculosis and malnutrition. Mozart's body was said to be so swollen in his dying days that he could not even turn over in bed [BBC News]. What's more, the researchers found that in the winter of 1791 an unusually high number of young men died of the illness known alternatively as edema or dropsy.
An infection with a Streptococcus bacteria could have felled Mozart because strep can inflame the delicate blood vessels in the kidney called glomeruli that filter wastes from the blood. Within 10 days or so of a strep throat or a strep skin infection, the kidneys can fail from this damage. Intense swelling, from buildup of fluid, is often the result [NPR News]. Whether that truly was how Mozart met his match remains a mystery, but the theory may have music lovers wishing for a time machine in which to send back a packet of antibiotics.
80beats: Scientist Wants to Test Abraham Lincoln’s Bloodstained Pillow for Cancer
80beats: DNA Evidence Proves that Romanov Prince and Princess Rest in Peace
Image: Wikimedia Commons |
What are the difference between DDL, DML and DCL commands?
DDL – Data Definition Language: statements used to define the database structure or schema. Some examples:
- CREATE – to create objects in the database
- ALTER – alters the structure of the database
- DROP – delete objects from the database
- TRUNCATE – remove all records from a table, including all space allocated for the records
- COMMENT – add comments to the data dictionary
- RENAME – rename an object
They are called Data Definition commands since they are used for defining the data; that is, the structure of the data is established through these DDL commands.
DML – Data Manipulation Language: statements used for managing data within schema objects. Some examples:
- SELECT – retrieve data from a database
- INSERT – insert data into a table
- UPDATE – updates existing data within a table
- DELETE – deletes records from a table; the space for the records remains
- MERGE – UPSERT operation (insert or update)
- CALL – call a PL/SQL or Java subprogram
- EXPLAIN PLAN – explain access path to data
- LOCK TABLE – control concurrency
DML commands are used for data manipulation; some examples are INSERT, SELECT, UPDATE, and DELETE. Even though SELECT does not actually modify data, Oracle still recommends that you consider SELECT a DML command.
DML statements are not auto-committed, meaning you can roll back the operations; DDL statements are auto-committed.
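That distinction can be demonstrated in miniature with Python's built-in sqlite3 module. (This is SQLite rather than Oracle, so the transactional details differ; SQLite can even roll back DDL. But the rollback of uncommitted DML is easy to see.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure, then commit it.
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()

# DML: manipulate data inside that structure...
cur.execute("INSERT INTO employees (name) VALUES ('Alice')")
# ...and roll it back before it is ever committed.
conn.rollback()

cur.execute("SELECT COUNT(*) FROM employees")
print(cur.fetchone()[0])  # 0: the uncommitted INSERT was undone
```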
DCL – Data Control Language. Some examples:
- GRANT – gives users access privileges to the database
- REVOKE – withdraw access privileges given with the GRANT command
Data Control Language is used for the control of data: a user can access only the data permitted by the privileges granted to him or her. This is done through DCL commands.
DCL is used to create roles and permissions and to enforce referential integrity, as well as to control access to the database by securing it.
TCL – Transaction Control: statements used to manage the changes made by DML statements. It allows statements to be grouped together into logical transactions.
- COMMIT – save work done
- SAVEPOINT – identify a point in a transaction to which you can later roll back
- ROLLBACK – restore the database to its state as of the last COMMIT
- SET TRANSACTION – Change transaction options like isolation level and what rollback segment to use
We use TCL either to commit data to the database or to revoke (roll back) uncommitted transactions.
Once we commit, we cannot roll back; once we roll back, we cannot commit.
Commit and Rollback are generally used to commit or revoke transactions involving DML commands.
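The TCL statements can be exercised the same way through sqlite3 (again SQLite rather than Oracle: COMMIT, ROLLBACK, and SAVEPOINT are supported, though Oracle's SET TRANSACTION options are not):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('checking', 100)")
conn.commit()                           # COMMIT: save work done

conn.execute("UPDATE accounts SET balance = balance - 30")
conn.execute("SAVEPOINT before_fee")    # SAVEPOINT: a point we can roll back to
conn.execute("UPDATE accounts SET balance = balance - 10")
conn.execute("ROLLBACK TO before_fee")  # undo only the work after the savepoint
conn.commit()                           # the -30 update survives; the -10 does not

balance = conn.execute("SELECT balance FROM accounts").fetchone()[0]
print(balance)  # 70
```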
Cultural Displacement in Black Elk Speaks
Black Elk Speaks depicts the tragedy of a culture that can no longer support its traditional ideals. In their own terms, the Sioux have lost the sacred hoop of their nation. But they did not lose it through a lack of faith or other internal weakness; they lost it, almost inevitably, to the forces of economic greed when white Americans expanded westward in search of more land and more goods. Their culture is lost through the loss of the traditions and lifestyle that Black Elk commemorates.
The end of the traditional Sioux hunting practices is a striking example of the loss of culture. The bison, an abundant source of food that was a daily reminder of the providence of the Great Spirit, were considered sacred. The bison roamed the prairie in what seemed to be a never-ending supply. Even the Transcontinental Railroad's separation of the herd into two halves, when Black Elk was still a child, did not seem especially threatening; as he says, half of the herd was still more than they could use. A complex cultural event, the great bison hunt, occurring just after his vision (see Chapter 4), is an arena for the hunters on horseback to display their courage and bravery (Standing Bear, killing his first adult buffalo, shows his manhood). Butchering, food preparation, and the hide-and-bone-processing practices that followed the hunt allowed for the tribe's sustenance. Finally, the community celebrated with dancing, singing, and thanksgiving rituals — a joyous feast. The priority of railroad and settlement expansion and the carelessness with which whites hunted the bison for sport ("They just killed and killed because they liked to do that," Black Elk says) meant that the herd decreased drastically in size. After January 1876, when Indians were ordered onto reservations, the food supply became a way to control defiant Indian behavior. With the bison herd much diminished and the confiscation of Indian horses and guns, the Indians had no way to supply their own food and were forced to rely on government rations. When the Indians seemed hostile, as when Sitting Bull refused to come out of Canada and live on a reservation, rations were decreased. The Indians, starved and sickened, were coerced into submission. When the bison herd was lost, so was contact with the sacred along with a sense of Sioux identity and independence.
Loss of their nomadic way of life was another incident in the cultural displacement of the Sioux. When the Plains Indians were herded onto agency-governed reservations, they lost their interdependence with nature. No longer could they move voluntarily to pursue the bison herd, harvest plants and rootcrops, or fish. The traditional encamped way of Sioux life, with its close sense of community and its clear social structure, was replaced with the foreign immobility of reservation life, further undermining the Sioux sense of identity.
In connection with the loss of traditional practices, Black Elk calls attention to the loss of cultural symbols, most importantly the circle, which is central to Sioux belief because "the Power of the World always works in circles": The world is round, the moon is round, and the seasons return to repeat themselves cyclically. In reflection of this, tepees were built around circular frames, and the structure of the community was understood as a circular image, the sacred hoop. "Our tepees were round like the nests of birds," Black Elk says, "and these were always set in a circle, the nation's hoop, a nest of many nests, where the Great Spirit meant for us to hatch our children." He recalls cutting poles for tepees as a child as emblematic of an older, happier time. When the Indians had to abandon their traditional tepees for the square wooden houses of the reservation, he says, they lost their power: "When we were living in the power of the circle in the way we should, boys were men at twelve or thirteen years of age. But now it takes them very much longer to mature." Black Elk calls the houses "square boxes" and characterizes the Indians as "prisoners of war."
The Indians retain some important practices amidst this cultural displacement. Black Elk retains his sacred pipe, and even when he speaks to Neihardt, Black Elk uses the ritual of pipe smoking as a way to affirm their relationship. (Elsewhere, Neihardt mentions that he himself shared the cigarettes he brought with him at his first meeting with Black Elk; one imagines that the significance of this gesture was not lost on Black Elk.) Some Indian scholars maintain that Sioux culture was never lost, that it only went underground or transformed itself under new appearances. Photographs, for example, of Black Elk, in his later years, show him addressing the Great Spirit while wearing long red underwear instead of the red paint that he wore as a young man. Similar photographs show Indians handling ritual objects, such as small drums made of evaporated milk cans instead of wood and buffalo hide. These can be seen as a triumphant sign of the survival of a culture, but Black Elk's tone in the narrative is one of lament for a culture lost. |
A heart murmur is a sound made by turbulent blood flow within the heart. Your doctor hears this sound with a stethoscope. A murmur can occur in a normal heart. Or it may indicate some problem within the heart.
Most often, the turbulence is normal. And the sound is called a benign flow murmur. It happens when blood flows faster through the heart, for example in a person who is anxious, has just finished exercising, has a high fever or has severe anemia. About 10% of adults and 30% of children (most between the ages of 3 and 7) have a harmless murmur produced by a normal heart. This type of murmur is also called an innocent murmur.
A heart murmur may indicate a structural abnormality of a heart valve or heart chamber, or it may be due to an abnormal connection between two parts of the heart. Some abnormalities of the heart that create heart murmurs include:
A tight or leaky heart valve – The heart has four valves: the aortic, mitral, tricuspid and pulmonary valves. A heart murmur can be heard if any one of these valves has a narrowing of the valve opening (stenosis) that interferes with the outflow of blood or a valve leak (regurgitation or insufficiency) that causes a backflow of blood.
Mitral valve prolapse – In this condition, the leaflets of the mitral valve fail to close properly, allowing blood to leak back from the heart's lower left chamber (the left ventricle) to the upper left chamber (the left atrium).
Congenital heart problems – Congenital means the disorder was present at birth. Congenital heart problems include:
Septal defects – These are also known as holes in the heart. They are abnormal openings in the heart's septum (the wall between the heart's left and right sides).
Patent ductus arteriosus – Before birth, the channel between the pulmonary artery and the aorta (called the ductus arteriosus) allows blood to bypass the lungs because the fetus is not breathing. Once a child is born and his or her lungs are functioning, the ductus arteriosus normally closes. Patent ductus arteriosus occurs when blood flow through the ductus arteriosus continues after birth.
Endocarditis – Endocarditis is an inflammation and infection of the heart valves and endocardium, the inner lining of the heart chambers. A heart valve infection can cause a heart murmur by causing blood to leak backwards, or the infected valve can partially obstruct blood flow.
Cardiac myxoma – A cardiac myxoma is a rare, benign (noncancerous) tumor that can grow inside the heart and partially obstruct blood flow.
Asymmetric septal hypertrophy – Asymmetric septal hypertrophy is an abnormal thickening of the heart muscle inside the lower left chamber (left ventricle) of the heart. The thickened muscle makes the outflow passage narrow just below the aortic valve. This condition, also called idiopathic hypertrophic subaortic stenosis, is seen in people with hypertrophic cardiomyopathy. |
How the Heart Works
The heart is an amazing organ. It continuously pumps oxygen and nutrient-rich blood throughout the body to sustain life. This fist-sized powerhouse beats (expands and contracts) 100,000 times per day, pumping five or six quarts of blood each minute, or about 2,000 gallons per day.
How Does Blood Travel Through the Heart?
As the heart beats, it pumps blood through a system of blood vessels, called the circulatory system. The vessels are elastic, muscular tubes that carry blood to every part of the body.
Blood is essential. In addition to carrying fresh oxygen from the lungs and nutrients to the body's tissues, it also takes the body's waste products, including carbon dioxide, away from the tissues. This is necessary to sustain life and promote the health of all parts of the body.
There are three main types of blood vessels:
- Arteries. They begin with the aorta, the large artery leaving the heart. Arteries carry oxygen-rich blood away from the heart to all of the body's tissues. They branch several times, becoming smaller and smaller as they carry blood further from the heart and into organs.
- Capillaries. These are small, thin blood vessels that connect the arteries and the veins. Their thin walls allow oxygen, nutrients, carbon dioxide, and other waste products to pass to and from our organ's cells.
- Veins. These are blood vessels that take blood back to the heart; this blood has lower oxygen content and is rich in waste products that are to be excreted or removed from the body. Veins become larger and larger as they get closer to the heart. The superior vena cava is the large vein that brings blood from the head and arms to the heart, and the inferior vena cava brings blood from the abdomen and legs into the heart.
This vast system of blood vessels -- arteries, veins, and capillaries -- is over 60,000 miles long. That's long enough to go around the world more than twice!
Blood flows continuously through your body's blood vessels. Your heart is the pump that makes it all possible.
Where Is Your Heart and What Does It Look Like?
The heart is located under the rib cage, slightly to the left of your breastbone (sternum) and between your lungs.
Looking at the outside of the heart, you can see that the heart is made of muscle. The strong muscular walls contract (squeeze), pumping blood to the rest of the body. On the surface of the heart, there are coronary arteries, which supply oxygen-rich blood to the heart muscle itself. The major blood vessels that enter the heart are the superior vena cava, the inferior vena cava, and the pulmonary veins. The pulmonary artery and the aorta exit the heart and carry oxygen-rich blood to the rest of the body.
Pre-AP: Effective Thinking Strategies for All Students
Learn to use research-based thinking strategies for classroom practice
This one-day workshop provides middle and high school teachers with research-based strategies to help students activate their background knowledge, deepen comprehension and develop academic vocabulary. This workshop explores the learning benefits of thinking strategies for classroom practice.
- Participants will learn cross-curricular strategies that will help them meet the needs of a diverse population of students.
After attending this workshop, participants will be able to:
- Understand how thinking strategies affect learning.
- Build students' reading comprehension through specific research-based strategies.
- Apply comprehension strategies in the classroom.
- Welcome and Introductions
- Laying the Foundation (Workshop Guiding Question, Building Equity and Access, Thinking Skills—The Key to Comprehension)
- Building Background Knowledge
- Building Reading Comprehension
- Building Academic Vocabulary
- Workshop Summary and Closing |
It’s back-to-school time! Classroom organization is ready and routines are established. Now it’s time to start assessments, and that is not always an easy undertaking. First, assessment takes significant prep work: you have to print materials, note students’ prior level, set each student up in your data management system, and then assess every student in a reasonable amount of time while staying organized and keeping track of all materials and results! It’s a lot of work. It can be easy to get caught up in the logistics and lose sight of the main reason why we assess our students. So let’s take a step back and kick off the new school year thinking about the power and purpose of assessment.
“Without a system of gaining information about each reader, you will be teaching without the children.” ~Irene C. Fountas and Gay Su Pinnell
Some teachers might view assessment as “separate” from instruction, which can make it a dreaded task or an annoying interruption. But if your assessments are effective, instruction can be much more effective—even powerful. Using assessment to inform your teaching has highly positive outcomes. You’re able to:
- Understand the strengths and needs of individual students
- Determine your next teaching moves and make smart instructional decisions
- Monitor students’ growing control of literacy behaviors and understandings
- Accurately report students’ progress to parents and administrators.
“Assessment allows you to see the results of your teaching and make valid judgments about what students have learned to do as readers and writers, what they need to learn to do next, and what teaching moves will support them. In short, assessment makes evidence-based, student centered, responsive teaching possible,” (Fountas and Pinnell 2018).
Your observations of students’ literacy behavior are essential to the teaching process. You collect data for many purposes, but the most important reason is to inform teaching in a way that will improve students’ abilities. Fountas and Pinnell recommend two types of standardized assessment that are essential for effective teaching:
- Interval Assessment: assess to inform the direction of instruction (as you would when using the Benchmark Assessment System)
- Continuous Assessment: monitor student progress and analyze the effects of teaching (as you would by taking reading records during guided reading using the Fountas & Pinnell Classroom™ Guided Reading Collection)
It is critically important that these two types of assessment are used in conjunction with one another. Teachers cannot teach effectively without gathering information about each learner. Teaching without continuous and interval assessment is akin to "teaching without the children."
“Through systematic observations and accurate record-keeping, you will have a continuous flow of reliable information about students’ progress as literacy learners. The decisions you make based on the data will be the heartbeat of your responsive teaching,” (Fountas and Pinnell, 2018).
~The Fountas & Pinnell Literacy™ Team
Irene C. Fountas and Gay Su Pinnell. 2018. Fountas & Pinnell Classroom™ System Guide. Portsmouth, NH: Heinemann.
Join the fastest growing community in the field of literacy education. Get your free membership and stay up to date on the latest news and resources from Fountas and Pinnell at www.fountasandpinnell.com
For a well-organized, searchable archive of FAQs and discussions that are monitored by Fountas and Pinnell-trained consultants, go to our Discussion Board at www.fountasandpinnell.com/forum
For more collaborative conversation, join the Fountas & Pinnell Literacy™ Facebook Learning Group at https://www.facebook.com/groups/FountasPinnell/ |
Researchers at the University of Missouri have unveiled tiny nuclear batteries that produce power from the decay of radioisotopes. Although such batteries are currently used in devices such as pacemakers and satellites, they are costly, large and heavy - something which these new penny-sized batteries are not.
Developed by a research team led by Dr Jae Wan Kwon, assistant professor of electrical and computer engineering at Missouri University, the team's innovation in creating the battery is not only its size, but its semi-conductor, which is liquid instead of solid.
"The critical part of using a radioactive battery is that when you harvest the energy, part of the radiation energy can damage the lattice structure of the solid semiconductor," Dr Jae explained. "By using a liquid semiconductor, we believe we can minimize that problem."
So what about the risk of nuclear meltdown? There is none. Although nuclear batteries generate electricity from atomic energy, there is no chain reaction involved, instead using the emissions from a radioactive isotope.
"People hear the word 'nuclear' and think of something very dangerous," said Dr Jae. "However, nuclear power sources have already been safely powering a variety of devices, such as pacemakers, space satellites and underwater systems."
The team now hopes to increase the power of the battery, as well as decrease the size of it further - it could be made thinner than a human hair.
Image source: Gizmag.com |
Hypothyroidism in infants and children, while relatively rare -- approximately 1 in 4,000 infants has the condition -- poses a serious threat to a child's physical and mental development if left untreated. Hypothyroidism can affect both newborns and older children, and occurs for a variety of reasons, but the treatment is always the same.
This article will explore the different types of hypothyroidism, methods for detection and treatment, and why it is crucial that your newborn be tested within the first few days of birth.
Types of Hypothyroidism
Congenital Hypothyroidism (CH)
Congenital Hypothyroidism, or CH, refers to hypothyroidism which is present from birth. It is by far the most common form of hypothyroidism in infants, accounting for approximately 90 percent of hypothyroidism cases in infants. CH occurs when any part of the fetus's thyroid system fails to develop correctly. Sometimes the gland does not descend fully into the proper place. In other instances the gland is in the proper location, but is underdeveloped. In rare cases, the thyroid may fail to produce or release the thyroid hormone properly. In these cases, the newborn has something called Thyroid Dyshormonogenesis.
The genes for this disorder are inherited from both the mother and the father. The chance for the parents of a CH child to have another child with CH is 1 in 4 for every child born to them.
CH, Transient Form
In about 10 percent of infants with CH, the hypothyroidism is transient, or temporary. This is usually because the mother has been treated for Graves' Disease/hyperthyroidism during pregnancy, has a history of thyroid disease, or has been exposed to iodine-containing substances. Transient CH can last anywhere from several days to several months, but eventually will subside, and no further treatment will be necessary. If your child's thyroid gland is properly located and not in any way malformed, she may very well have transient hypothyroidism.
Acquired Hypothyroidism
Acquired hypothyroidism, which affects older children and adolescents, typically develops due to autoimmune thyroid disease such as Hashimoto's disease. It's more common as children reach puberty or teenage years, but can still appear in young children. It is also more common in girls than in boys.
How Hypothyroidism is Diagnosed
In most industrialized countries, newborn infants are given a heel stick test (blood is drawn from the heel of the foot) within a few hours of birth. This test is often referred to as the PKU test, because it also covers phenylketonuria (PKU) as well as galactosemia, along with the screening for congenital hypothyroidism. It is worthwhile to double check with the hospital staff to make sure that the test covers CH along with PKU. This early blood test is by far the most reliable way to diagnose CH in infants.
Most newborns with CH show no clear symptoms of the disorder, and could potentially go for months with the disease undetected. It is essential that you have your newborn tested for CH within 3 days of birth. If the initial screening shows a low level of T4 hormone and/or the TSH hormone is elevated, further tests will likely be ordered to confirm a diagnosis of hypothyroidism. Usually, the blood test will be repeated to make sure that the initial test was accurate. Additionally, an x-ray of the baby's legs will be taken to examine the ends of the bones in the knee area. In infants with hypothyroidism, the bones will appear underdeveloped. Often a scan is taken of the thyroid gland to determine if the gland is malformed or improperly located or even absent. These tests can all be done while the infant is still in the hospital.
In addition, you should be aware of the symptoms of CH. In extremely rare instances, a hospital test could fail to detect CH, a test result could be overlooked, or the test forgotten altogether. Your knowledge of the symptoms of hypothyroidism could be of great benefit to your baby's health should this be the case. The following list of symptoms is excerpted from the chapter on children and infants with hypothyroidism featured in my book, Living Well With Hypothyroidism.
- Puffy face, swollen tongue
- Hoarse cry
- Cold extremities, mottled skin
- Low muscle tone (floppy, no strength)
- Poor feeding
- Thick coarse hair that goes low on the forehead
- Large fontanel (soft spot)
- Prolonged jaundice
- Herniated bellybutton
- Lethargic (lack of energy, sleeps most of the time, appears tired even when awake)
- Persistent constipation, bloated or full to the touch
- Little to no growth
Older children who develop acquired hypothyroidism can be trickier to diagnose. Hypothyroidism can go unnoticed, and symptoms such as tiredness, mood swings, weight gain and other health problems are often attributed to a number of other causes. Children are frequently labelled as having Attention Deficit Hyperactivity Disorder (ADHD). The key symptom, however, seems to be slow or absent growth. Children can also have many of the other symptoms common in adults, as summarized in the Hypothyroidism Symptoms Checklist.
DOES YOUR CHILD HAVE ADD/ADHD?
Do you suspect that your child may have Attention Deficit Hyperactivity Disorder (ADHD), sometimes called ADD (attention deficit disorder)? If so, I'd recommend reading an excellent web resource from Everyday Health on ADD/ADHD Basics.
How Hypothyroidism is Treated
No matter what form of hypothyroidism a child develops, the treatment is always the same. It involves prescription thyroid replacement hormone treatment, in pill form, taken daily. The average starting dose for an infant is between 25 and 50 mcg per day. The goal is for the newborn to have normal hormone levels within the first 4 weeks of life. This will require frequent, usually weekly, visits to the treating physician for follow-up blood tests and adjustment of the dosage. Once the baby's hormone levels are properly adjusted, she or he will probably be seen every 2 to 3 months for the first 3 years of life.
Consequences of Untreated Hypothyroidism in Children
Congenital hypothyroidism was once a major cause of mental retardation in children. Insufficient thyroid hormone prevents the brain from developing properly and can stunt a child's growth. Before the heel stick test became routine, the roughly 1 in 4,000 infants born with CH were condemned to suffer the consequences. Thankfully, cases of infant hypothyroidism going untreated in the developed world are now extremely rare. Be sure to check with your hospital staff that your baby has been properly tested. Furthermore, especially if you have a history of thyroid disorders in your family, I encourage you to become familiar with the signs and symptoms of hypothyroidism in older children, and to consider having your own children tested at regular intervals for this disease.
Writer/researcher Sarah Vela contributed to this article.
Join Scott Simpson for an in-depth discussion in this video Working with while and until loops, part of Learning Bash Scripting.
We don't always want to have a loop work on a specific range of values. We might want a loop to continue while some condition is true or false, or until some condition occurs. For that, there are the while and until loops. First, let's look at the while loop with a simple example that counts up to 10. Here, I'm using an integer comparison to loop while i is less than or equal to 10, and doing a little bit of arithmetic to increment the value of i by one. You'll notice that the syntax here is basically the same as the if statement, which of course makes sense because we're asking for an evaluation, that is, whether i is less than or equal to ten, and if it evaluates true, then we do everything within the loop.
And then at the end of the loop is the word done. I'll save it and run, and I see the numbers 0 through 10. The until loop is the counterpart to the while loop; here I'm echoing the value of j until j is greater than or equal to 10. And when I run it, I see that the loop ran until j was equal to 10. Keep in mind, both of these can cause infinite loops, so be careful to check your logic.
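The transcript doesn't reproduce the scripts shown on screen, but the two loops it describes can be sketched roughly like this (the variable names `i` and `j` follow the narration; the exact on-screen code may differ):

```shell
#!/bin/bash

# while loop: keeps running as long as the condition evaluates true
i=0
while [ "$i" -le 10 ]; do
    echo "$i"       # prints 0 through 10
    i=$((i + 1))    # a little arithmetic to increment i by one
done

# until loop: keeps running until the condition evaluates true
j=0
until [ "$j" -ge 10 ]; do
    echo "$j"       # prints 0 through 9; the loop stops once j reaches 10
    j=$((j + 1))
done
```

As the narration warns, a condition that never changes (for example, forgetting the increment line) makes either loop run forever.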
- What is Bash?
- Managing output with grep, awk, and cut
- Understanding Bash script syntax
- Creating a basic Bash script
- Displaying text with "echo"
- Working with numbers, strings, and arrays
- Reading and writing text files
- Working with loops
- Using functions
- Getting user input during execution
- Ensuring a response
Skill Level Beginner
1. Working with the Command Line
2. Building Bash Scripts
3. Control Structures
4. Interacting with the User
Radiocarbon, or carbon-14, dating is a technique used by scientists to date once-living materials such as bone, wood, paper and cloth, and it is a useful example of the concept of half-life in practice. Carbon-14 is a radioactive isotope of carbon (it has two extra neutrons in its nucleus, making it unstable). It is produced in the Earth's upper atmosphere when cosmic rays act on nitrogen-14, and it decays with a half-life of about 5,730 years.
The proportion of carbon-14 in the atmosphere has stayed roughly constant for thousands of years. All living matter takes in carbon-14 during its lifetime: plants absorb it through photosynthesis, and it passes up the food chain, so living organisms remain in equilibrium with the atmosphere. When an organism dies it stops taking carbon in, and the carbon-14 it contains decays at a known rate. By measuring the amount of carbon-14 left in a sample, and comparing it with the amount of carbon-12 and with an internationally agreed reference standard, scientists can estimate the age of materials such as bone, cloth, wood and plant fibres.
The method has limits. Because it dates the point at which a once-living object died, it cannot be used to date rocks or the age of the Earth. Beyond roughly 50,000 to 60,000 years, the amount of the original carbon-14 remaining is so small that it cannot be reliably distinguished from carbon-14 formed when nitrogen in the sample is irradiated by neutrons from the spontaneous fission of uranium. For longer timescales, other radiometric methods are used instead, such as uranium–thorium dating, a relatively short-range technique based on the decay of uranium into thorium, a substance with a half-life of about 80,000 years.
A famous application came in 1988, when carbon dating experts at universities in Oxford, Zurich and Arizona tested the Shroud of Turin and concluded that the cloth originated in the fourteenth century, and thus could not be an imprint of Jesus.
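The half-life figure turns directly into an age estimate. As a worked example (the numbers are chosen purely for illustration): if a bone sample contains one quarter of the carbon-14 found in living bone, two half-lives have passed since the animal died:

```latex
N = N_0 \left(\tfrac{1}{2}\right)^{t / T_{1/2}},
\qquad
\frac{N}{N_0} = \frac{1}{4} = \left(\tfrac{1}{2}\right)^{2}
\;\Rightarrow\;
t = 2 \times 5730 \approx 11{,}460 \text{ years}
```

The same relation, with the measured fraction in place of 1/4, underlies every radiocarbon age estimate.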
Penguins in the northern hemisphere
Despite the fact that nearly all penguin species are native to Earth's southern hemisphere, there are, remarkably, penguins in the northern hemisphere as well. These flightless birds are at large in northern waters not only because there are penguin colonies on islands crossing the equator, but also because of two attempts to introduce penguins to the Arctic Circle.
Although most penguins' natural habitat lies well within the southern hemisphere, many do live further north. The Galápagos penguin is the sole equatorial species, living mainly in colonies of several hundred birds on two of the Galápagos Islands west of Ecuador in South America. Because the northernmost point of Fernandina Island lies just inside the northern hemisphere, it is possible for these penguins to make regular incursions into the north.
Penguins in the Arctic Circle
The possibility that penguins have lived in the Arctic Circle for some time cannot be conclusively ruled out, as there have been at least two attempts to introduce them to the northern hemisphere.
The Arctic: a habitat unfit for penguins
Penguins have certainly not themselves made the Arctic their home; the Earth's poles are probably too far apart for these animals to make the journey. Penguin experts would not recommend introducing a penguin colony towards the North Pole as they would count as an invasive species, possibly disrupting the ecosystem and disturbing the food chain; this could threaten other wildlife. Alternatively, penguins would be unable to compete successfully with local species, meaning they would succumb to starvation, disease or predation.
Penguins artificially introduced to the Arctic
Despite fears that penguins would not survive in Arctic conditions, groups of these animals have been introduced to the north. In the nineteenth century, the great auk was hunted to extinction by whalers; in the 1930s, scientists speculated that penguins could fill this ecological niche, providing a source of meat and eggs. In 1936, a team from the Norwegian Nature Protection Society led by Carl Schoyen released nine king penguins in northern Norway. In 1938, a separate group released several macaroni and jackass penguins into the same region. For most individual penguins, the trail then went cold; one was certainly killed by a local huntswoman who apparently mistook it for a demon, and another was caught on a fishing line as late as 1944, apparently in good health. No others were conclusively sighted.
Penguins of the southern hemisphere found further north than usual
Occasionally, other species of penguin are spotted much further north than their natural grounds, though still inside the southern hemisphere. As these animals are strong swimmers, it is possible for them to journey considerable distances, as one Magellanic penguin from southern Chile proved by travelling 3000 miles (5000 kilometres) to end up in Peru. Larger groups of Antarctic species also regularly wash up on the coastline of Brazil every year, having floated thousands of miles on melting ice floes or been carried off by ocean currents, perhaps while learning to swim. While this phenomenon has kept Brazilian zoos well supplied with penguins, and given locals the opportunity to take these survivors of shark-infested waters as pets, more recently the Brazilian government has arranged for hundreds of the birds to be repatriated to Antarctica with the help of the air force and navy.
The 'Arctic penguin' and the word 'penguin'
The myth of the 'Arctic penguin' persists on the internet, possibly encouraged by various factors, from casual birdwatchers mistaking auks for penguins, to Christmas cards featuring penguins alongside northern polar bears. Indeed, one story on the origin of the word penguin claims that Welsh-speaking sailors named auks pen gwyn ('white head'), which were then mistaken for penguins in the northern hemisphere. The Oxford English Dictionary disputes this story, noting that the etymology of the word is obscure.
- Modern science has also noted the similarities between the unrelated species of penguins and auks as an example of convergent evolution; see Van Tuinen et al. (2001: 1349-1350).
Van Tuinen M, Butvill DB, Kirsch JAW & Hedges SB (2001) 'Convergence and divergence in the evolution of aquatic birds'. Proceedings of the Royal Society B 268: 1345-1350.
- New Scientist: 'Job swap.' 20th August 2005.
- For example, a king penguin from Edinburgh Zoo has been made an honorary regimental sergeant major in the Norwegian Army. See BBC News: 'Penguin picks up military honour'.
- Linus Torvalds, the creator of the Linux operating system, is from the northern hemisphere nation of Finland, and is also fond of penguins, to the extent that a cartoon version has become the official Linux mascot. Perhaps this tenuous association between penguins and Finnish people has also encouraged the myth of the 'Arctic penguin'.
- BBC News: 'Confused penguin strays 5,000km.' 11th May 2007.
- Guardian: 'Victims of global warming?' 18th January 2001. The extent to which global warming is responsible for this movement remains controversial.
- BBC News: 'Lost penguins get Brazil air lift'. 4th October 2008.
- BBC News: 'Brazil to take penguins back home.' 31st July 2006.
- Superspoof.com: 'Save the Arctic Penguin Campaign'.
- There is also a Scottish cruise ship, the Arctic Penguin; how much this name has misled tourists is unknown.
- Askoxford.com - ask the experts: 'What is the origin of the word 'penguin'?'. See Where does the word 'penguin' originate? in the Penguin article for more linguistic confusion.
Unit 9 Quiz#1
Part 1: Multiple Choice
You may write on this quiz. However, place the answers to the following questions on the ScanTron sheet.
1. Electrons are able to move in an electrical circuit because.
2. The role of the power supply in an electric circuit is to
5. The power delivered to the light bulb is.
6. The amount of work done by the battery to move 3.0 Coulombs of charge from terminal B to terminal A is
7. A 60 Watt light bulb is connected to a 120 volt plug. What is the current in the light bulb?
8. The current flow in a conducting wire will _________ as the length of that wire is increased.
9. Which one of the following does not represent units of current?
10. A kilowatt x hour is a unit of
11. A device used to measure current is known as the ___________.
12. Which one of the following symbols represents the schematic circuit symbol of a resistor?
13. A circuit consists of a 6-Volt battery and a lamp bulb. The battery does 12 J of work in order to move 2 C of charge through the internal circuit. As the 2 C charge flows through the external circuit, it does _______ of work upon the light bulb.
14. Charge encounters resistance as it flows through circuits because of
A power supply, light bulb, and two ammeters are connected in a circuit as shown below. Use the information in the diagram to answer the following questions.
15. The ammeter at the "bottom" of the diagram will read
16. The voltage of the power supply is
17. Conventional current will flow through the external circuit from
18. The electric potential at A is ____________ the electrical potential at C.
19. The electric potential at C is ____________ the electrical potential at D.
20. The amount of energy lost by 1 Coulomb of charge as it flows through the external circuit is
Anna Litical performed an experiment to measure the resistance of three different electrical devices (A, B, and C). Anna set up a circuit and collected voltage-current data for each of the three devices. Anna then constructed graphs of Voltage vs. Current for her data. Her data sets are shown below. Use the data in the tables to answer the next several questions.
21. An analysis of Anna's data would indicate that the device with the greatest resistance is
22. The device which fails to obey Ohm's law seems to be
23. An analysis of the data set for Device C show that one of the pairs of measured data (0.25 V, 0.79 amps) seems to be inconsistent with the other measurements. The best way for Anna to treat this particular data pair is
24. An analysis of the data set for Device A shows that the ratio of V/I for each data pair is not exactly constant. The appropriate response to this fact would be to
Part 2: Problem-Solving
For the following problems, show all your work (with units) to receive full or partial credit. The solution to each problem will demand a careful treatment of units, thoughtful reasoning, and the appropriate usage of more than one equation.
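As an illustration of the kind of multi-equation reasoning expected here (the numbers below are invented for the example and are not taken from the quiz questions), consider a hypothetical 1500 W heater connected to a 120 V supply:

```latex
P = IV \;\Rightarrow\; I = \frac{P}{V} = \frac{1500\ \text{W}}{120\ \text{V}} = 12.5\ \text{A},
\qquad
R = \frac{V}{I} = \frac{120\ \text{V}}{12.5\ \text{A}} = 9.6\ \Omega
```

Carrying the units through each step, the energy used in 2.0 hours would then be E = Pt = 1.5 kW x 2.0 h = 3.0 kWh.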
26. An electric heater provides 2.00 kW of power when connected to a 120-V potential difference. How much energy is used if the heater is on for five hours?
27. Eighty percent of the electrical energy used by a sun lamp is converted into thermal energy. When the sun lamp is plugged into a 120-V outlet, it draws 2.00 A. How much thermal energy (heat) is released by this lamp in 30.0 minutes?
© Tom Henderson, 1996-1998
Friday, 1 January 2010
So, the earlier students start to learn these skills the better, and the best way of learning how to do something is to do it yourself. The following cartoons have all been created by students to illustrate a particular point about their area of study.
Students used some or all of the following techniques in the creation of their cartoons:
There is a powerpoint available to help teach these techniques at:
Creating cartoons using these techniques, with the aim of making a particular point, helps students to be able to recognise the technique and the message in professional cartoons.
Year 8 students were also asked to create a commentary on their cartoon of Cromwell. This commentary had to explain the meaning of the different features of their cartoon. The best commentaries also supported their explanation of the cartoon's meaning with historical detail. This pattern of thinking fits perfectly into the GCSE criteria for cartoon interpretation.
Describes surface features of the cartoon
Explains what the cartoon means without reference to the source details
Explains what the cartoon means with reference to either the details of the source OR contextual knowledge
Explains what the cartoon means with reference to BOTH the details of the source AND contextual knowledge
See if you can level the Year 8 Cromwell cartoon commentaries!
Even if you're not a GCSE student, we hope you enjoy these images!
The following article appeared, in edited format, in TES on 25 January 2008:
Political Cartoons in History
As all History teachers know, students who study History at GCSE will have to demonstrate their ability to interpret a political cartoon. This is a staple of all examination boards and the marks available don’t really do justice to the complexity of the skills required. In order to access the higher levels, students have to be able to deconstruct a cartoon into component parts, rather than commenting summarily on the meaning. They have to explain the meaning of these component parts and substantiate their explanation with reference to the specific historical context. No mean feat!
So, after struggling to drily teach technique to a non-enthusiastic GCSE class, I started to wonder if becoming the cartoonist might be more inspiring!
So, students were given a crash course in political cartooning, learning the basic techniques of context, caption, caricature, personification, placement, symbolism and sizing. This can be done at any appropriately controversial moment of study, not just at GCSE. For example, Year 8 students can use political cartoons to illustrate the ‘real’ reason why Henry broke with Rome or the nature of Cromwell’s rule; Year 9 students can use cartoons to illuminate Haig’s role in the Battle of the Somme or the impact of industrialization.
The advantages of such an activity extend far beyond the generation of techniques that will be needed at GCSE. Firstly, and most importantly, it's fun, not least for you, the teacher! Additionally, it's motivational for those students who struggle to articulate their ideas in writing but who comprehend the historical context perfectly well. It provides meaningful differentiation for students who are, in Gardner's terms, 'picture smart'.
Cartooning can also be used to help students to understand the nature of interpretation. When studying, for example, Cromwellian rule, author identities can be given to students, for example, MP/Lord or, more subtly, northerner/southerner, merchant/peasant. Cartoons have to be drawn as that individual would have seen events/individuals. If author identities are kept secret, the finished cartoons can be used as exercises in usefulness and reliability very successfully. Ownership of the product helps students understand the core issues of sources as evidence, rather than just information. It also helps students to explain their ideas effectively and peer teaching takes off.
Once learned, the technique is also a valuable revision aid. Arguably in response to the constant barrage of visual stimulation in the modern age, many more students are demonstrating a preference for visual learning. Creating cartoons is a valid and valuable mnemonic aid for such students.
The final advantage of cartooning, of course, is that complex historical periods can be summarized, without losing the subtleties, as can happen when reducing topics to key words and key cards.
Wednesday, 28 November 2007
They investigated the way Cromwell ruled the country, including his massacre of the Irish rebels at Drogheda, his treatment of the Levellers and his outwardly puritanical attitude towards fun and festivities. Students made a judgment about the quality of Cromwell's leadership of England, and created a cartoon to reflect their key idea.
The picture is trying to show that the rules Cromwell put across, he himself did not take into account. So he lied to his own people; who looked up to and trusted in him and his rules. On one side of Cromwell he is wearing a crown and holding signs reading ‘No Smoking’ and ‘No Drinking.’ The people are behind the signs. He is wearing a crown to say he does these actions in the role of King. The people are behind the signs to show that the people followed his rules. On the other side he is smoking and drinking which shows he is not following his own rules and is lying to his Country. These are not elements a King should have. He also has a large head to show he is big- headed and selfish. Katie
My cartoon shows a crowd of people listening to a puritan (you can tell by his hat) ringing a bell (to get attention) and holding a list of paper which is so long it goes all the way down to the floor and rolls around. On the paper it says “don’t…” because it is a list of things Oliver Cromwell has forbidden, like singing and dancing. The people look shocked and horrified because they are upset at all the things Cromwell has said they can’t do. But, in the foreground and behind them, you can see Oliver Cromwell with a can of shaving foam, a pot of slime and a whoopee cushion. He has a finger to his lips to tell the reader not to tell the people he is there, because he is about to play practical jokes on them. What my cartoon is saying is that Cromwell banned people from doing just about anything that is fun and people like to do, even Christmas, while going and doing it all himself behind their backs!!! He was trying to do the right thing by banning things but what he was doing was very unfair. You should practice what you preach. I have done Cromwell in caricature by giving him lots of warts and a big nose so you can easily tell who he is. I have also done him in more detail than anyone else. I have also exaggerated the black Quaker style hat and outfit, so you know he is someone who is on Cromwell’s side. I’ve placed Cromwell more or less in the middle, and a lot bigger than everyone else, so you’re eyes are probably drawn to him first. The people in the crowd are quite small and in much less detail than Oliver Cromwell and the Puritan. Madeleine
My cartoon is about Cromwell. It suggests that he was an evil person andwas wrong to kill Charles. I used size and made Cromwell big because he is the most important thing in the cartoon. I have drawn a crown on his head with question marks on to show when he couldn't decide to accept the crown or not. I have drawn him on a cricket pitch with a cricket bat. He is hitting balls away from him. This is to show that he is knocking out things that he hates. I have used placemen to put the closest and most important thing that Cromwell is getting rid of in the biggest ball. In the first ball is Cromwell with his crown falling off his head and splitting in half. Then there are presents showing when he tried to ban Christmas. After that there are balls showing when Cromwell tried to ban smoking and drinking and finally in the the last ball trying to ban Easter. Divya
My cartoon is saying that Cromwell is a devil because of the horns, tail and fangs. I think Cromwell is a devil because he kills quite a lot of people. For example a party called the Levellers (who wanted to give poor men the chance to vote) were locked up in the
My Cartoon is saying that Cromwell was a villain how forbade beer and establishments where you can have fun. He was a killjoy. The Puritans supported him. He killed Charles and cushioned the poor people like the soldiers. I have shown Cromwell much bigger than the others to represent thatCromwell cushioned the others and that he had the power to do that. He is holding a gun and shooting at a cup of beer, a soldier, a theatre and Charles, because he forbade beer and closed theatres. He cushioned the soldiers and executed Charles. The Puritans are standing behind him to show that they supported him. They have a table with lots of guns and are holding a signpost which is saying ‘Go Cromwell’. Valerie
Tuesday, 30 October 2007
In 1919 the Treaty of Versailles was signed. This treaty was written mainly by the BIG THREE: Woodrow Wilson (USA), Georges Clemenceau (France) and David Lloyd George (UK). Germany was forced to agree to the terms of the Treaty, which were exceedingly harsh.
Clemenceau wanted to punish Germany harshly, because he believed Germany responsible for the outbreak of war. In addition, much of the war took place on French soil, resulting in huge damage to the infrastructure of France. Equally important to Clemenceau was historic French hatred of Germany, stemming from the loss of Alsace-Lorraine as part of the harsh settlement imposed by Germany after the French defeat of 1871.
Woodrow Wilson was an idealist. He wanted a fair treaty that would enable Germany to recover as a healthy and prosperous European nation. Wilson could afford to be idealistic as America was distant from European affairs and had entered the war late, resulting in few casualties. The American public wanted Wilson to concentrate on domestic concerns such as the American labour crisis and housing shortage, rather than tying America up in European affairs.
David Lloyd George was a realist. He knew that Germany had to be punished for the war, but he recognised that unfairness would result in future conflict as Germany would seek to overturn a harsh settlement. However, David Lloyd George had promised the British public that he would 'squeeze the German lemon till the pips squeak'. He promised this because the British public were psychologically scarred by the war and desired revenge. Lloyd George wanted to win the forthcoming election!
The following cartoons represent the attitudes of, and/or the pressures on, one or more of the BIG THREE at the Versailles negotiations.
1. Clemenceau's desire to crush Germany:
3. Clemenceau's desire to destroy Germany in order to extract reparations:
4. Wilson's desire to protect helpless Germany from Clemenceau's desire to destroy her. David Lloyd George is being pushed towards the Treaty by the clamour of the British public for retribution, even though he, personally, doesn't share their view.
5. The Tug-of-War between France and Wilson, with Britain trying to act as the half-way point.
6. The three judges at Versailles, with their alternative sentences:
7. Clemenceau destroying Germany, Wilson trying, hopelessly, to catch the fragments, ending with David Lloyd George's confused attempt to manage revenge and idealism.
Henry was married to Catherine of Aragon but, after years of marriage, had only one daughter, Mary. He was desperate for a son, because he wanted to secure his dynasty and girls could not, at that time, inherit the throne in their own right. Henry then met Anne Boleyn and fell in love with her. He wanted to divorce Catherine and marry Anne, but the Pope wouldn't let him.
The Pope's refusal was not only based on the Catholic belief that marriage is for life, but also influenced by the fact that the Holy Roman Emperor, Charles V, was holding him captive. Charles V was Catherine of Aragon's nephew!
Henry kept asking the Pope for a divorce for six years. Then Anne became pregnant. The situation was now desperate. If the Pope wouldn't back down, Henry would have to find another way to get a divorce.
Henry knew that a German monk called Martin Luther had been complaining about the Catholic Church. He said that the Catholic Church was corrupt and was more concerned with making money, through high taxes, than with helping people get to heaven. The German princes, who didn't like paying taxes, supported Luther and had broken away from the Catholic Church. They no longer had to do as the Pope said. Henry had criticised Luther publicly, and had even written a book defending the Catholic Church in 1521. However, by 1532 he was starting to see the advantages of breaking away from the authority of the Pope, even if he still shared his religious views. In 1533 Henry VIII broke with Rome by marrying Anne Boleyn and declaring his marriage to Catherine void (invalid).
Have a look at these cartoons. Some cartoons show that Henry wanted to divorce Catherine and marry Anne; others show why he wanted to do this. The best cartoons also show that he used the religious changes in Germany to his own advantage in order to divorce Catherine.
The students had the following background knowledge:
Caesar was one of two Generals that led the Roman army. For many years the soldiers of Rome were more loyal to their own General than they were to the government (Senate). Caesar took control of the whole Roman army by killing Pompey, the other powerful General.
This gave Caesar the power to destroy the Senate, but he still needed a reason.
The poor of Rome hated the Senate because the Senators were very rich and refused to help the poor. Rich senators had even murdered the Gracchus brothers, who were the only people in Rome to try and help the poor. The poor were desperate and Caesar took advantage of this by promising them land if they would support him against the Senate.
Have a look at these cartoons and see if you can work out what message they are trying to convey.
Some cartoons make a simple, single point. The best cartoons try to express all of the above ideas.
Britain experienced almost four hundred years under the control of the Roman Empire. It grew from a Celtic nation full of tribes who were often at war with each other to a (mostly) peaceful Roman province, populated by Romans, Britons, and foreigners. At its height, the Roman Empire encompassed hundreds of thousands of square miles and millions of people. Britain was only a small part of this, but the Roman period brought many changes and was one of the most influential in the history of Britain.
The Expeditions of Caesar
The First Expedition
Julius Caesar was the first Roman to come in force to Britain. In the late summer of 55 BC, while he was Governor of Gaul (modern-day France), he led an exploratory expedition with two legions and an unspecified number of auxiliary troops. The reason he gave for the expedition was to stop the Britons sending military help to the Gauls. The Britons had assembled to meet them and the two sides joined battle on the beach. The Romans eventually overcame a determined British resistance and the Britons asked for peace.
However, many of the Roman ships were wrecked in a storm and the Britons took advantage of this to return to open hostilities, attacking a legion collecting corn and even the Roman camp. They were defeated and again asked for peace. They accepted Caesar’s terms and he left again after only a few weeks in Britain.
The Second Expedition
Caesar returned in 54 BC, possibly because the British had to a large extent violated the terms of their agreement with him. Many of the tribes had not sent the hostages he demanded and had stopped paying tribute. This time he brought a considerable force of five legions and two thousand cavalry. This second expedition was much the same. Caesar defeated several tribes but again left after a short time.
Caesar’s successors as rulers of Rome (the Emperors) generally left Britain alone until the time of Emperor Claudius. His hold on the loyalty of the army was not strong and he may have felt that to conquer somewhere was the best way to win their fealty (and glory for himself), and settled on Britain as an easy place to achieve this.
The Romans invaded in 43 AD under the command of General Aulus Plautius, an experienced general and politician. He came with four legions, the second, ninth, fourteenth, and twentieth. Many of the Celtic tribes surrendered and made peace with the Romans. This was a great help to them, as it meant that they didn't have to fight all of the tribes. Others fought, however, such as the Catuvellauni, who were defeated in battle at the River Medway. Their leader, Caratacus, survived and led revolts against the Romans for many years but was eventually defeated in Wales.
Once most of southeast England was under control, Claudius himself arrived, bringing reinforcements. He supposedly led the Romans to victory against the Catuvellauni at Colchester before returning to Rome. The four legions split up under their own commanders, to conquer different parts of the country. For instance, the future Emperor Vespasian (founder of the Flavian dynasty) led the second legion along the south coast, capturing hill forts such as Maiden Castle as well as the Isle of Wight. Eventually, the border of Roman territory ran from the mouth of the River Severn to the mouth of the River Humber.
Consolidation of the Conquest
Under later governors advances were made into Wales (though it was not fully conquered until the later first century) and northern England. Britain also began to develop as a recognisably Roman province, with towns, roads, army bases and other features of Roman control. By this time the south of the country was pacified.
Boudicca was the Queen of the Iceni tribe in East Anglia. After the death of her husband Prasutagus, the Roman procurator seized his property, which should have gone to Boudicca and her daughters, and that of the Iceni nobility. When Boudicca protested, she was flogged and her daughters were raped.
Unsurprisingly, this did not go down well with Boudicca or the Iceni. In 60 AD, she led a rebellion of the Iceni and the neighbouring Trinovantes against the Romans. They were very successful at first, sacking and razing London, Colchester and St Albans, all major Roman towns by this point, and killing thousands of Romans and Romanised Britons while the Governor, Suetonius Paulinus, was away attacking Anglesey.
Paulinus, receiving news of the revolt, hurried back and marshalled his forces somewhere in the Midlands (possibly near Mancetter). He had about 10,000 men, while the Britons reportedly had 100,000. You might think that this was certain to be an overwhelming victory for the Britons. You'd be wrong. They were defeated by the disciplined Roman army and Boudicca died.
The Conquest of Scotland
By 79 AD, most, if not all, of northern England was under Roman control, and the new governor, Gnaeus Julius Agricola, was able to turn his attention to Scotland. Over a period of five years (79 AD to 84 AD) he occupied southern Scotland and pushed further north, defeating the locals and building forts as he went. The most conclusive Roman victory was a major battle at a place called Mons Graupius. He won the battle and began to advance even further, but he was recalled to Rome the same year. The lack of troops meant that the Romans could not continue to hold Scotland permanently.
Hadrian’s Wall and the Antonine Wall
When the Emperor Hadrian visited the province of Britain in 122 AD, he ordered the building of a wall right across the border between Roman England and 'barbarian' Scotland. This wall was built by the legionaries but manned by the auxiliaries. It was a large stone wall which dominated the landscape, with a whitewashed front, gates (for trade and collection of taxes) and forts dotted along its length, with much smaller forts every mile. For many years it provided an effective border.
However, in 139 AD, the Romans reoccupied southern Scotland and began the building of the Antonine Wall (named after the then Emperor Antoninus Pius). This stretched across a narrow part of Scotland and was built of turf. It was occupied and abandoned several times over a number of years but eventually the frontier was re-established at Hadrian's Wall.
The End of Roman Britain
After this, we have little information about events in Britain. By 401 AD, troops were withdrawn from Britain to deal with growing invasions of the rest of the Empire by the likes of the Visigoths. Britain herself was under attack from the Saxons, but when the Britons appealed to Rome for military aid in 410 AD, the emperor told them to arrange their own defence. This was the end of the Roman period in Britain, and the Western Empire itself fell a few decades later.
Let's set the record straight: King Richard III may not have been a hunchback, as commonly portrayed by Shakespeare. However, scientists believe that he did suffer from scoliosis, a spine-curving condition that can cause complications standing or sitting for long periods of time. Scientists now think that he may have undergone painful medical treatments to straighten out this health problem.
In February, archaeologists confirmed that bones excavated from underneath a parking lot in Leicester, England, belonged to the medieval king. Since the confirmation, examiners have continued to study the bones and historical records.
Previous work showed King Richard III likely developed severe scoliosis, a painful condition, in his teen years.
NOVA scienceNOW: Profile: Erich Jarvis
Accurate language and terminology are critical to understanding. In the
program, Erich Jarvis discusses the importance of using correct terminology to
name the different parts of birds' brains. According to Jarvis, the old
nomenclature caused people to misjudge the intelligence of birds. Have students
name other areas of science in which specific, correct language matters a great
deal, such as conducting surgery, diagnosing diseases, naming chemical
elements, and classifying organisms. Then have students participate in the
following challenge that demonstrates the importance of using precise terminology.
On index cards (one per card), write a phrase with the name of a specific
familiar place, such as: the food court in the mall, a specific section of a
local book store, a specific area of your school's campus, the counter in an
ice-cream shop in town, or the fire station or police station in town. Divide
the class into teams and have each one pick a card. Ask teams to generate a
list of five words—a mix of general and specific terms—associated
with the location on the card. However, they are not allowed to use a word
already in the phrase. Have them rank the words, as best they can, from the
most general to the most specific. Then, have a team read the most general word
to the class. The class's challenge is to identify the phrase written on the
card. A team reads its list, using increasingly specific terminology until the
class identifies the phrase. Discuss the effect of general versus specific
terminology as it conveys meaning.
The newly adopted system of bird brain nomenclature changes 95 percent of
the terms previously used to describe the structure of birds' brains. These
changes altered the field of biology in several ways. Birds were considered to
have only instinctive behaviors. However, research showed that they demonstrate
some complex behaviors similar to those in mammals. Erich Jarvis understood
that changing terminology would be important to biology, but that his work
would be met with resistance. On the board, write the following terms from the
old bird brain nomenclature: the Archicortex and the Palaeostriatum primitivum.
Ask if students can identify how these names reflect a view that the bird brain
is a simple, primitive structure. (Palaeo means "oldest," archi means
archaic, and primitivum means primitive.)
View a bird brain model on NOVA
scienceNOW and get an inside look at just what makes up a bird brain.
(QuickTime plug-in required for rotating brain image.)
In the program, an experiment demonstrates a crow's intelligence and
reasoning abilities. The crow must figure out how to lift a small bucket from a
cylinder. After unsuccessfully using a straight wire, the crow bends the wire
so it can hook around the handle of the bucket, enabling it to lift the bucket
out of the container and get the food. Demonstrate what researchers mean when
they refer to intelligence and complex behavior in birds— have students
experience their own reasoning, problem-solving, and tool-making abilities.
Some suggested activities include:
Repeat the test given to the crow by putting a small bucket in a tube
and providing the students with a straight piece of wire.
Place a ping-pong ball in a 500 ml graduated cylinder. Have students
figure out how to get the ball out without touching the cylinder. (Add water
until the ball floats to the top.)
Place a small ball of clay under a set of pick-up sticks. Have students
remove the ball without touching the sticks with their fingers. (Use a
pencil or straw.)
Have students release their fingers from a set of finger
tubes, the straw cylinders that tighten around one's fingers when attempts
are made to pull the fingers free. (Press the ends of the finger cuff
together and gently remove the finger.)
Have students retrieve a button frozen in an ice cube. (Melt the ice in warm water.)
In the segment, Erich Jarvis discusses how working through a dance routine
is similar to working through scientific research questions. Encourage students
to contrast scientific thinking and creativity with artistic thinking and
creativity. Have half the class interview amateur or professional artists
(e.g., painters, musicians, dancers, writers, actors) and half interview
scientists, engineers, doctors, or mathematicians. You might have some students
work in teams. Before beginning, develop an interview questionnaire, such as:
Would you describe yourself as an artist, a scientist, or both?
What is your specific field?
Describe how you generate ideas for your work.
Describe how you perform your work.
How does your daily work provide you ways of being creative?
What is your biggest challenge in your work?
What is your biggest reward in your work?
What personal characteristics are most important in your work?
Have students share their findings. How are "doing science" and "doing art"
alike and different?
In the program, Jarvis's mentor, Rivka Rudner, discusses Jarvis's creative
approach and dedication to his work. She provided him with much support and
encouragement. Ask students if they have had mentors, and have them share how
the mentors influenced their lives. Then have students ask their teachers,
parents, or family friends about important mentors in their lives. As a class,
generate a list of questions students might ask, such as:
Did you have someone who provided support and encouragement for you in your work?
How did you meet this person?
What did this person do that made you feel encouraged?
Does the support you received from your mentor still affect your life? How?
Hold a class discussion and ask students to share their findings.
To give students an idea of how to understand animal intelligence, have
teams design an experiment in which they test a question related to animal
behavior. To begin, choose one of the following questions as a sample
experiment to work through with the class.
- Do goldfish respond to music?
- Do goldfish respond to changes in light?
- Can dogs discriminate between paintings of living and nonliving things?
- Do dogs remember what happened a day earlier?
- Do dogs have a sense of humor?
- Can cats plan for a future event?
- Can cats differentiate amongst different kinds of music?
- Do turtles respond to heat? Cold?
- Can frogs return to the same hiding places?
- Which water temperature is most comfortable for frogs?
Work through a sample experiment with the class. Have the class identify a
question to test. Write it on the board. Next, have students define the
hypothesis, a materials list, and a procedure. In addition, consider any
concerns or problems related to the experimental design and ideas for future
experiments. After completing the sample, divide the class into teams. Each
team should choose a question, brainstorm experiment ideas, and write a
procedure for their experiment similar to the one modeled in class. Ask teams
to share their experiment ideas. Below is a sample of what a team's final
write-up might look like.
Question: Do goldfish respond to music?
Hypothesis: Goldfish will move away from music.
Materials:
- 2 CD players
- 5-gallon tank holding 2-4 goldfish
Procedure:
1. Set the goldfish tank in a quiet area.
2. Set up the CD players so that one is on the left side and one is on the right side of the tank. Wait 5-10 minutes.
3. Make sure there are no additional external stimuli, such as lighting changes in the area of the tank or movement around the tank.
4. Note where the fish are in the tank.
5. Play music in one CD player for two minutes. Note the responses of the fish.
6. Turn off the CD player. Note the response of the fish over the next two minutes.
7. Repeat step 5, playing the same song in the other CD player.
8. Record observations and draw conclusions.
Possible follow-up experiments: Test different volume levels or types of music.
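For classes with computer access, student observations from an experiment like this could also be tallied with a short script. The sketch below is purely illustrative: the phase labels and the observation records are hypothetical examples, not data from an actual experiment.

```python
from collections import Counter

# Hypothetical observation log: for each two-minute phase of a trial,
# record which side of the tank the fish spent most of their time on.
observations = [
    ("baseline", "left"),
    ("music_left", "right"),
    ("quiet", "left"),
    ("music_right", "left"),
]

# Tally where the fish were during music phases and quiet phases.
music_phases = Counter(side for phase, side in observations if phase.startswith("music"))
quiet_phases = Counter(side for phase, side in observations if not phase.startswith("music"))

print("During music:", dict(music_phases))
print("During quiet:", dict(quiet_phases))

# Count trials consistent with the hypothesis "goldfish move away from music":
# during 'music_left' the fish should be on the right, and vice versa.
moved_away = sum(
    1 for phase, side in observations
    if (phase == "music_left" and side == "right")
    or (phase == "music_right" and side == "left")
)
print("Trials where the fish moved away from the music:", moved_away)
```

Students could extend the list with many more trials and compare the music and quiet tallies to decide whether the hypothesis is supported.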
"Birdbrain" No Longer Means "Stupid," Asserts Scientific Consortium
Discusses why Jarvis wanted to change the old terminology related to bird brains.
Nervous System: Brain and Senses
Contrasts the classic and modern views of bird brains and presents a new
understanding of vertebrate brain evolution and intelligence in birds.
Singing in the Brain—Bird Studies
Explains Jarvis' laboratory work and field research.
The Life of Birds: Bird Brains
Describes behaviors in birds that demonstrate their intelligence.
Mind of the Raven by Bernd Heinrich. HarperCollins, 2000.
Describes Heinrich's observations of ravens and discusses the birds' intelligence.
The Sibley Guide to Bird Life and Behavior by David
Allen Sibley. Knopf Publishers, 2001.
Incorporates the work of several bird experts and includes information about
bird brains and bird intelligence.
Social security is "any government system that provides monetary assistance to people with an inadequate or no income." In the United States, such a system is usually called welfare, or a social safety net when referring to the systems of Canada and European countries.
Social security is asserted in Article 22 of the Universal Declaration of Human Rights, which states:
Everyone, as a member of society, has the right to social security and is entitled to realization, through national effort and international co-operation and in accordance with the organization and resources of each State, of the economic, social and cultural rights indispensable for his dignity and the free development of his personality.
In simple terms, the signatories agree that the society in which a person lives should help them to develop and to make the most of all the advantages (culture, work, social welfare) which are offered to them in the country.
Social security may also refer to the action programs of an organization intended to promote the welfare of the population through assistance measures guaranteeing access to sufficient resources for food and shelter and to promote health and well-being for the population at large and potentially vulnerable segments such as children, the elderly, the sick and the unemployed. Services providing social security are often called social services.
Terminology in this area is somewhat different in the United States from that in the rest of the English-speaking world. The general term for an action program in support of the well-being of poor people in the United States is welfare program, and the general term for all such programs is simply welfare. In American society, the term welfare arguably has negative connotations. In the United States, the term Social Security refers to the US social insurance program for all retired and disabled people. Elsewhere the term is used in a much broader sense, referring to the economic security society offers when people are faced with certain risks. In its 1952 Social Security (Minimum Standards) Convention (No. 102), the International Labour Organization (ILO) defined the traditional contingencies covered by social security.
Modern authors often consider the ILO approach too narrow. In their view, social security is not limited to the provision of cash transfers, but also aims at security of work, health, and social participation; and new social risks (single parenthood, the reconciliation of work and family life) should be included in the list as well.
A report published by the ILO in 2014 estimated that only 27% of the world's population has access to comprehensive social security.
While several of the provisions to which the concept refers have a long history (especially in poor relief), the notion of "social security" itself is a fairly recent one. The earliest examples of use date from the 19th century. In a speech to mark the independence of Venezuela, Simón Bolívar (1819) pronounced: "El sistema de gobierno más perfecto es aquel que produce mayor suma de felicidad posible, mayor suma de seguridad social y mayor suma de estabilidad política" (which translates to "The most perfect system of government is that which produces the greatest amount of happiness, the greatest amount of social security and the greatest amount of political stability").
In the Roman Empire, the Emperor Trajan (reigned A.D. 98-117) distributed gifts of money and free grain to the poor in the city of Rome, and returned the gifts of gold sent to him upon his accession by cities in Italy and the provinces of the Empire. Trajan's program brought acclaim from many, including Pliny the Younger.
In Jewish tradition, charity (represented by tzedakah) is a matter of religious obligation rather than benevolence. Contemporary charity is regarded as a continuation of the Biblical Maaser Ani, or poor-tithe, as well as Biblical practices, such as permitting the poor to glean the corners of a field and harvest during the Shmita (Sabbatical year). Voluntary charity, along with prayer and repentance, is believed to ameliorate the consequences of bad acts.
The Song dynasty (c.1000 AD) government supported multiple forms of social assistance programs, including the establishment of retirement homes, public clinics, and pauper's graveyards.
The concepts of welfare and pension were put into practice in the early Islamic law of the Caliphate as forms of Zakat (charity), one of the Five Pillars of Islam, since the time of the Rashidun caliph Umar in the 7th century. The taxes (including Zakat and Jizya) collected in the treasury of an Islamic government were used to provide income for the needy, including the poor, elderly, orphans, widowed persons, and the disabled. According to the Islamic jurist Al-Ghazali (Algazel, 1058-1111), the government was also expected to store up food supplies in every region in case a disaster or famine occurred. (See Bayt al-mal for further information.)
There is relatively little statistical data on transfer payments before the High Middle Ages. In the medieval period and until the Industrial Revolution, the function of welfare payments in Europe was principally achieved through private giving or charity. In those early times, there was a much broader group considered to be in poverty as compared to the 21st century.
Early welfare programs in Europe included the English Poor Law of 1601, which gave parishes the responsibility for providing poverty relief assistance to the poor. This system was substantially modified by the 19th-century Poor Law Amendment Act, which introduced the system of workhouses.
It was predominantly in the late 19th and early 20th centuries that an organized system of state welfare provision was introduced in many countries. Otto von Bismarck, Chancellor of Germany, introduced one of the first welfare systems for the working classes in 1883. In Great Britain the Liberal government of Henry Campbell-Bannerman and David Lloyd George introduced the National Insurance system in 1911, a system later expanded by Clement Attlee. The United States did not have an organized welfare system until the Great Depression, when emergency relief measures were introduced under President Franklin D. Roosevelt. Even then, Roosevelt's New Deal focused predominantly on a program of providing work and stimulating the economy through public spending on projects, rather than on cash payment.
This policy is usually applied through various programs designed to provide a population with income at times when they are unable to care for themselves. Income maintenance is based on a combination of five main types of program.
Social protection refers to a set of benefits available (or not available) from the state, market, civil society and households, or through a combination of these agencies, to the individual/households to reduce multi-dimensional deprivation. This multi-dimensional deprivation could be affecting less active poor persons (such as the elderly or the disabled) and active poor persons (such as the unemployed).
This broad framework makes this concept more acceptable in developing countries than the concept of social security. Social security is more applicable in conditions where large numbers of citizens depend on the formal economy for their livelihood; such social security may be managed through defined contributions.
But in the context of a widespread informal economy, formal social security arrangements are almost absent for the vast majority of the working population. Besides, in developing countries, the state's capacity to reach the vast majority of the poor may be limited by its limited infrastructure and resources. In such a context, having multiple agencies that can provide for social protection, including health care, is critical for policy consideration. The framework of social protection thus holds the state responsible for providing for the poorest populations by regulating non-state agencies.
Collaborative research from the Institute of Development Studies debating Social Protection from a global perspective suggests that advocates for social protection fall into two broad categories: "instrumentalists" and "activists". Instrumentalists argue that extreme poverty, inequality, and vulnerability are dysfunctional in the achievement of development targets (such as the MDGs). In this view, social protection is about putting in place risk management mechanisms that will compensate for incomplete or missing insurance (and other) markets, until a time that private insurance can play a more prominent role in that society. Activist arguments view the persistence of extreme poverty, inequality, and vulnerability as symptoms of social injustice and structural inequality and see social protection as a right of citizenship. Targeted welfare is a necessary step between humanitarianism and the ideal of a "guaranteed minimum income" where entitlement extends beyond cash or food transfers and is based on citizenship, not philanthropy.
Art. 22: "The society in which you live should help you to develop and to make the most of all the advantages (culture, work, social welfare) which are offered to you and to all the men and women in your country."
Today's classroom mostly involves following along, retaining, and reiterating what the teacher is trying to convey. This can be an effective process for some students, depending on the material being taught, such as the dates of an event or the presidents of the United States in chronological order, but are children really understanding or simply memorizing data? There are many other constructive and efficient approaches used by educators, but one tried and tested concept is PBL, or project-based learning.
Project-based learning is the concept that children are given a driving question, with learning goals in sight, that may involve real-world roles, problems to be solved, a debate, or just a thought-provoking subject. The idea is to give the students a base from which to begin; they are then free to direct the project in any way. This really opens up their abilities and enables them to use their own curiosity and research. Project-based learning allows students to explore, figure things out, and learn along the way from their findings.
As a student, I have encountered both forms of teaching and project based learning was by far more intriguing, making me want to continue my self-directed studying. When I was given the freedom to conceptualize what I wanted my project to become, it made me want to excel and really grasp what I was gaining intellectually. These are the lessons I still remember fondly and after many years could retell what I learned.
As an educator, there is nothing I would like more than for my students to be excited about learning, researching, and demonstrating what they have found. Project-based learning is an incredible way to engage students and develop their self-efficacy. PBL will strengthen a child's critical thinking, collaboration skills, problem-solving skills, and creativity. This alone should be reason enough to incorporate it into any classroom.
Here is a great video on project based learning by Edutopia:
Here is an article explaining how project-based learning is different from projects: