According to a study published Wednesday in the journal PLOS ONE, dogs can detect odors in human breath and sweat and tell whether a person is stressed or calm. The impressive skills and mental-health benefits of dogs have now been backed by scientific evidence, as these furry friends have the ability to smell when you're stressed. To conduct the study, researchers collected breath and sweat samples from the participants to use as a baseline. The participants then completed a mental arithmetic task: counting backward from 9,000 in units of 17 in front of two researchers for up to three minutes.
Lead author of the study, Clara Wilson, a doctoral candidate at Queen's University Belfast in Northern Ireland, said, "If the participant gave a correct answer, they were given no feedback and were expected to continue, and if they gave an incorrect answer the researcher would interrupt with 'no' and tell them their last correct answer."
Also read: Why Is Petting A Dog Good For Health?
After the task, the researchers collected another round of breath and sweat samples. They also recorded participants' self-reported stress levels, heart rate and blood pressure before and after task completion. Thirty-six people reported feeling stressed and showed an increase in heart rate and blood pressure, so their samples were shown to the dogs.
In the study's findings, the dogs chose the correct sample in 93.8% of the trials involving stressed participants, indicating that the stress odors were distinctly different from the baseline samples, Wilson noted. She said, "It was fascinating to see how able the dogs were at discriminating between these odors when the only difference was that a psychological stress response had occurred."
GCSE Population Glossary
Age-Sex Pyramid (Population Pyramid): a series of horizontal bars that illustrate the structure of a population. The horizontal bars represent different age categories, which are placed on either side of a central vertical axis. Males are to the left of the axis, females to the right.
Ageing Population: In the population structure of many MEDCs there is often a high proportion of elderly people who have survived due to advances in nutrition and medical care. This creates problems since these people do not work and have to be provided with pensions, medical care, social support, sheltered housing etc. from the taxes paid by a proportionally smaller number of workers. In addition, an increasing number of young people are employed as care workers for the elderly. This removes them from more productive jobs within the economy and harms a country's competitiveness.
Ageing Population Structure: a population pyramid with a narrow base and relatively broad top, found in MEDCs. This reflects their low birth rates and the greater proportion of elderly people.
Birth Rate: The number of live births per 1000 people per year.
Bulge of Young Male Migrants: a feature seen on a population pyramid where young males have moved to urban areas due to push-pull factors.
Census: an official count of the population taken by the government, usually every ten years, to gather data for the planning of schools, hospitals, etc. Census data can be unreliable for a number of reasons.
Child Dependency Ratio: the number of children in relation to the size of the working (economically active) population, usually expressed as a ratio.
Concentrated Population Distribution: where people are grouped densely in an urbanised area (see Port, Bridging-Point, Route Centre, Wet Point Site, Market Town, Mining Town, Resort).
Contraception: using birth control to stop pregnancy.
Counter-urbanisation: the movement of people in MEDCs away from urban areas to live in smaller towns and villages (see de-urbanisation and urban-rural shift).
Death rate: the number of deaths per 1000 people per year.
Demographic transition: the change from high birth rates and death rates to low birth rates and death rates.
Demographic Transition Model: diagram which shows the relationship between birth and death rates and how changes in these affect the total population.
Dependency Ratio: the ratio of those of non-working age (dependents) to those of working age. This is calculated as:

Dependency ratio = (% of population aged 0-14 + % of population aged 65+) ÷ (% of population aged 15-64) × 100
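As a quick worked example of the dependency ratio formula (the percentages below are invented purely for illustration), the calculation can be sketched in a few lines of Python:

```python
# Hypothetical age-group percentages (not real data).
pct_aged_0_14 = 20.0     # % of population aged 0-14
pct_aged_65_plus = 15.0  # % of population aged 65+
pct_working_age = 65.0   # % of population aged 15-64

# Dependency ratio: dependents per 100 people of working age.
dependency_ratio = (pct_aged_0_14 + pct_aged_65_plus) / pct_working_age * 100
print(round(dependency_ratio, 1))  # 53.8
```

A value above 100 would mean dependents outnumber the working population.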
Dependent Population: those who rely on the working population for support e.g. the young and elderly.
Depopulation: the decline or reduction of population in an area.
De-urbanisation: the process in MEDCs by which an increasingly smaller percentage of a country’s population lives in towns and cities, brought about by urban-rural migration. (See Counter-Urbanisation and Urban-Rural Shift).
Dispersed Population Distribution: the opposite of a concentrated distribution; the population may be spread evenly over a fertile farming area, rather than concentrated in an urban centre. Dispersed population distributions tend to be of low density.
Distribution (of a population): where people are found and where they are not found.
Economic Migrant: person leaving her/his native country to seek better economic opportunities (jobs) and so settle temporarily in another country.
Emigrant: someone who leaves an area to live elsewhere.
Ethnic Group: the group of people a person belongs to categorised by race, nationality, language, religion or culture.
Family Planning: using contraception to control the size of your family.
Family Ties: the lack of family ties (no wife or children) encourages young males to migrate from LEDCs to MEDCs or from rural to urban areas to seek a better life. The young (20-35) are also best-suited physically to heavy unskilled/semi-skilled work. See Guest-Worker.
Fertile Age Group: the child-bearing years of women, normally 18-45 years of age.
Ghetto: an urban district containing a high proportion of one particular ethnic group. The term ghetto comes from the district of Geto in medieval Venice which was reserved for Jews.
Gross National Product (GNP) per capita: the total value of goods produced and services provided by a country in a year, divided by the total number of people living in that country.
Guest-Worker Migration: people leaving their country to work in another land but not to settle: the term is associated with unskilled/semi-skilled labour.
Human Development Index: a social welfare index, adopted by the United Nations as a measure of development, based upon life expectancy (health), adult literacy (education), and real GNP per capita (economic).
Immigrant: someone who moves into an area from elsewhere.
Infant Mortality: the number of babies dying before their first birthday per 1000 live births.
Life Expectancy: the average number of years a person born in a particular country might be expected to live.
Literacy Rate: the proportion of the total population able to read and write.
Malnutrition: ill-health caused by a diet deficiency, either in amount (quantity) or balance (quality).
Migrant: someone who moves from one place to another to live.
Migration: movement of people.
Model: a theoretical representation of the real world in which detail and scale are simplified in order to help explain reality.
Natural Increase or Decrease: the difference between the birth rate and the death rate. Additional effects of migration are not included.
Natural Population Change: the difference in number between those who are born and those who die in a year. Additional effects of migration are not included.
Net Migration: the difference between the number of emigrants and the number of immigrants.
New Commonwealth: the more recent members of Britain’s Commonwealth (ex-colonies, now independent), including countries such as India and Pakistan and the West Indian islands.
Overpopulation: where there are too many people and not enough resources to support a satisfactory quality of life.
Population Change: Births - Deaths + In-Migration - Out-Migration = Population Change.
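The population change formula above can be checked with a quick sketch (all figures are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical annual figures for a region (invented for illustration).
births = 12_000
deaths = 9_000
in_migration = 3_500
out_migration = 2_000

# Births - Deaths gives natural change; adding net migration
# gives the overall population change.
population_change = births - deaths + in_migration - out_migration
print(population_change)  # 4500
```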
Population Density: number of people per square kilometre.
Population Pyramid: a graph which shows the age and sex structure of a place.
Push-Pull Factors: push factors encourage or force people to leave a particular place; pull factors are the economic and social attractions (real and imagined) offered by the location to which people move (i.e. the things which attract someone to migrate to a place).
Quality of Life: things (e.g. housing) that affect your standard of living.
Quality of Life Index: a single number or score used to place different countries in rank order based on their quality of life. Various indicators are included, e.g. GNP per person, calorie intake, life expectancy, access to health care, number of doctors per 100,000 etc.
Racial Prejudice: thinking unpleasant things about people because of the colour of their skin and/or their ethnic group without knowing them.
Racism: unfair, ridiculing or threatening behaviour towards someone because of their particular racial group.
Refugees: people forced to move from where they live to another area.
Repatriation: a government policy of returning immigrants to their country of origin.
Rural Depopulation: people leaving the countryside usually to live in towns (ie. rural-urban migration).
Rural Population Structure: young males move to urban areas due to push-pull factors. This creates a characteristic indentation in the 20-35 age group population structure.
Segregation: where immigrant groups, such as Turks in Germany, become increasingly isolated in inner-city areas of poor housing (see Ghetto).
Sparsely Populated: an area that has few people living in it.
Sterilisation: a method of contraception; in men an operation prevents sperm from being released, and in women an operation prevents eggs from being released or fertilised.
Structure (of a population): the relative percentages of people of different age groups, usually shown on a population pyramid.
Urban-Rural Shift: the movement of people out of towns in MEDCs to seek a better quality of life living in the countryside. Some work from home using telecommunications technology; most travel into the city each day as commuters, contributing to the rush hour.
Urbanisation: the growth of towns and cities leading to an increasing proportion of a country's population living there. It is a gradual process, common in LEDCs, where 1 million people move from the countryside to the cities every three days.
Urban Population Structure: young males move to urban areas due to push-pull factors. This creates a characteristic population pyramid bulge in the 20-35 age range.
Voluntary Migration: where people move to another area through choice.
Working Population: people in employment who have to support the dependent population.
Youthful Population: in the population structure of LEDCs there is often a higher proportion of young people, due to high birth rates and a reduction in infant mortality brought about by better nutrition, education and medical care. This may create problems since the children need feeding, housing, education and eventually a job. Medical care and education have to be paid for by taxing a proportionally small number of workers.
Youthful Population Structure: seen as a wide base on population pyramids, reflecting the high birth rates of LEDCs.
Competence and performance have become core topics in linguistic study, and you must understand them. As a linguistics student, the distinction will help you understand language and guide how you observe it linguistically.
Competence is a term used in linguistic theory, especially in generative grammar, to refer to a person's knowledge of his or her language: the system of rules which a language user has mastered, making it possible for that user to produce and understand an indefinite number of sentences and to recognize grammatical mistakes and ambiguities.
In linguistics, the term “performance” has two senses: (1) a technique used in phonetics whereby aspiring practitioners of the subject are trained to control the use of their vocal organs; and (2) a term used in the linguistic theory of transformational generative grammar, to refer to language seen as a set of specific utterances produced by native speakers, as encountered in a corpus.
The distinction between performance and competence in transformational generative grammar, however, has been severely criticized as not being clear-cut: there are often problems in deciding whether a particular speech feature, such as intonation or discourse, is a matter of competence or performance.
You must know the difference between competence and performance in order to understand how each behaves.
Competence and performance are the terms which Noam Chomsky uses to distinguish two types of linguistic ability. As I have said, performance is concerned with the mechanical skills involved in the production and reception of language, that is, with language as substance. So, for example, the ability to form letter shapes correctly when writing, or to make the right movements with our speech organs when speaking, are aspects of performance. And some kinds of reading difficulty – notably the problem of distinguishing between letter shapes, commonly called dyslexia – are performance related.
Grammatical competence, on the other hand, covers a range of abilities which are broadly structural. It entails two kinds of cognitive skill: firstly, the ability to assign sounds and letters to word shapes distinguished from each other by meaning – we can call this lexical knowledge; and secondly, the ability to recognize the larger structures, such as phrases and clauses, to which individual words belong – we can call this syntactic knowledge. And as we have seen from looking at the poem by Eugene Field, they are both necessary elements in the determination of meaning.
The distinction between competence and performance, however, is not unproblematic since performance can itself be represented as a kind of competence, and indeed, deciding whether a particular language difficulty is a matter of performance or competence is not always easy. But what Chomsky wants to emphasize by this distinction is that the mechanical skills of utterance or writing only have any value linguistically if they are a representation of grammatical competence.
It would be perfectly possible for someone to be trained to write or speak a passage in a foreign language without having any idea of the words they were producing, let alone their meaning. Performance does not necessarily imply competence, but without competence, performance is linguistically uninteresting. But what of the other difficulties I confessed to earlier – giving street directions and writing poetry? The first is something which many people find problematic.
The study of competence as the linguistic knowledge of the native speaker, and of performance as the actual production or utterance of that speaker, is not an easy task – not only because the former is abstract while the latter is concrete, but also because there is no direct way to access a speaker's linguistic competence.
The informant is no longer the native speaker alone, nor the linguist alone; the psycholinguist is involved as well. The linguist tries to infer the components of competence by studying the observable outcome, i.e. performance, and by making use of his/her own linguistic intuition. Thus a grammar seems difficult to construct within the framework of a theory of competence alone.
Furthermore, a grammar that linguists try to construct within a theory of linguistic performance characterizes only one part of the speaker's knowledge. It describes the psychological processes involved in using linguistic competence in all the ways that the speaker can actually use it. These psychological processes include producing and understanding utterances, making judgments about them, acquiring the abilities to do such things, and so on.
As a model of those linguistic abilities that enable the native speaker of a language to understand it and speak it fluently, a grammar seems difficult to state explicitly. Competence is not always reflected by performance in a perfect way: a speaker's performance can be affected by non-linguistic factors such as boredom, tiredness, drunkenness, or even chewing gum.
Did you know that well-managed natural forests help provide cleaner drinking water to urban communities? A report by the USDA Forest Service states that nearly 80 percent of the nation's freshwater originates from forestland. That crisp taste of fresh water is made possible by healthy forests, and when forests are neglected or destroyed, the quality of our water supply suffers.
Because forests account for such a healthy portion of our drinking water, it's important to understand the science of how water is collected and dispensed. Forests absorb rainfall and use that water to refill underground aquifers, cleansing and cooling the water along the way. Certain tree species even break down pollutants commonly found in urban soils, groundwater, and runoff, such as metals, pesticides and solvents (Watershed Forestry Resource Guide). This natural recycling of rainwater not only produces higher-quality water; the services it provides are valued at $3.7 billion per year.
Watersheds are a function of topography and carry water runoff downhill from land into a body of water, whether a lake, river, or stream. Freshwater springs in forests are an example of forestland watersheds. In urban settings watersheds serve as a key source of drinking water and can cut costs for water treatment systems. In addition, trees can retain stormwater runoff, absorbing through their leaves and roots excess water that would otherwise surge through gutters and pipes. According to the US Environmental Protection Agency, there are more than 2,110 watersheds in the continental United States.
Maintaining our nation’s forests is critical both ecologically for natural wildlife and habitat and economically in saving money for cities and residents. When forests are destroyed as a result of natural disasters such as wildfire, it has a profound impact on cities. In 2002 Pike National Forest experienced the largest wildfire in Colorado history, burning approximately 137,000 acres including the upper South Platte watershed—the primary source of water for the City of Denver and its residents.
The Arbor Day Foundation is replanting in Pike National Forest in an effort to restore it to its natural state. Other forests linked to important watersheds where the Arbor Day Foundation is replanting include Payette National Forest in Idaho, Manchester State Forest in South Carolina, and the North Carolina Sandhills. You can help replant our critical forests, and help to keep our water clean, by making a donation today.
Electromagnetism is one of the four fundamental forces of nature, and is vastly stronger than gravity. The force is, essentially, responsible for holding everything together. For decades, scientists have attempted to harness the force as a way to power machines and unlock the secrets of the universe. These efforts have been met with significant challenges, however, as electromagnetism has proven to be a very dangerous force. Scientists from Duke University and Boston College may have found a way to capture the force without harming living organisms.
The researchers are using new composite materials often referred to as metamaterials. These man-made substances are constructed in such a way that they produce effects not commonly found in nature. As such, they can be designed to change the fundamental behaviour of electromagnetism and other such forces. Researchers believe that by decreasing the electric component of the force and increasing its magnetic aspect, they can significantly reduce the risk the force presents to humans. This would make the force usable as a form of energy that could power various types of transportation and machinery.
Electromagnetism can be used as a form of alternative energy. It is already used in the magnetically propelled trains found in Japan and China. It is also a key component of quantum levitation, a phenomenon that one day could produce flying cars that require no fuel at all. Where hydrogen is often described as a distant-future solution to an immediate problem, researchers from Duke and Boston College say that electromagnetism is the truer holder of that title. It may well be the fuel that powers humanity in the future, if scientists can figure out how to make it work for them.
Food insecurity is the experience of not having access to sufficient quantity or quality of food needed to stay healthy. The degree of severity of food insecurity can vary. Mild food insecurity involves worrying about being able to afford enough food, while at the most acute end of the spectrum severe food insecurity means experiencing hunger. We know that people facing food insecurity adopt a number of coping strategies including, for the acutely food insecure, turning to emergency food aid.
There is currently no systematic measure of household food insecurity in Scotland and therefore the true scale of the problem is not known. Data from food banks shows a context of rising need. Yet we know many people don’t access emergency food at all. The most recent official surveys in Scotland show that nearly one in ten adults were worried they would run out of food due to a lack of money or resources, with this rising to nearly one in six adults in the most deprived areas.
Food insecurity in Scotland is caused by too much poverty, not too little food. As A Menu for Change has explored, the evidence suggests the key drivers of acute food insecurity are income crises caused by the operation and adequacy of the benefits system, low income, insecure work and the rising cost of living. Until we evolve our approach to do more to prevent people reaching the point of hunger by tackling the underlying causes of income crises, the need for emergency food aid in Scotland is unlikely to end.
Picture in your mind the delta of a river – the way the main channel splits into smaller rivulets and tributaries. Something similar occurs in waves as they propagate through a certain kind of medium: the path of the wave splits, breaking up into smaller channels like the branches of a tree.
This is called a branching flow, and it’s been observed in such phenomena as the flow of electrons (electric current), ocean waves, and sound waves. Now, for the first time, physicists have observed it in visible light – and all it took was a laser and a soap bubble.
Depending on the structure of the medium, different things can happen to waves travelling through; they can attenuate, disperse, bend, spread, or continue flowing.
For branching flow, a few properties are required. The structure of the medium has to be random, and the spatial variations in the structure need to be larger than the wavelength of the flow. It also has to vary smoothly.
If all these conditions are met, small perturbations and fluctuations in the structure can scatter the flow, causing it to split.
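For intuition, the "smooth random medium" these conditions describe can be generated numerically. The sketch below is a toy illustration only (not the method used in the paper; all parameters are arbitrary): it low-pass filters white noise so that its variations are smooth and larger than the wavelength, then estimates the resulting correlation length to confirm the condition holds.

```python
import numpy as np

# Toy illustration: build a smooth random medium by Gaussian low-pass
# filtering white noise, then check its correlation length exceeds
# the wavelength. All parameters are arbitrary.
rng = np.random.default_rng(0)
n, dx = 256, 0.1          # grid points and spacing (arbitrary units)
wavelength = 0.5          # wavelength of the propagating wave
smooth_len = 2.0          # smoothing scale of the medium

noise = rng.standard_normal((n, n))
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
kernel = np.exp(-(kx**2 + ky**2) * smooth_len**2 / 2)  # Gaussian low-pass
medium = np.fft.ifft2(np.fft.fft2(noise) * kernel).real
medium /= medium.std()    # normalise fluctuations to unit variance

# Estimate the correlation length: the distance at which the (circular)
# autocorrelation first falls below 1/e.
acf = np.fft.ifft2(np.abs(np.fft.fft2(medium)) ** 2).real
acf /= acf[0, 0]
row = acf[0, : n // 2]
est_corr = np.argmax(row < np.exp(-1)) * dx
print(est_corr > wavelength)  # True: variations are larger than the wavelength
```

A medium like this, fed into a wave solver, is the kind of structure in which branching flow emerges.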
Although this behaviour is ubiquitous to waves, observing it in light has proven challenging. That is, until a team of physicists from the Technion-Israel Institute of Technology in Israel and the University of Central Florida thought of using a soap bubble as the medium.
A soap membrane consists of a very thin film of liquid sandwiched between two layers of surfactant molecules. The thickness of this film varies quite substantially, from around five nanometres to a few micrometres. These thickness variations are what produces the colourful patterns in soap bubbles… but they also, as it turns out, can act as the perturbations that deflect light in a flow, causing that flow to split and branch.
It’s not easy to do, though. The laser light needs to shine between the two surfactant layers; this was achieved by inserting a fibre into the membrane for a curved film, or by coupling a broad elliptical beam into a flat film.
By shining a laser beam into a soap bubble, the researchers were able to observe the way the beam splits across the surface of the membrane. And when they lit up the membrane with a weak white light, they could see the thickness variations – visible as colour variations – that were splitting the beam.
Usually, air flow around a soap membrane causes the pattern to move around, but if the membrane can be isolated from air flow, the pattern can remain stable for several minutes. So the team also tested their laser on both stable and moving soap membranes.
The possibilities for this research are pretty huge. Using the bubble as the medium has implications for optofluidics, which is the science of light interacting with liquids. The experimental setup could be used, for example, to investigate how optical forces affect branched flow.
And thickening the film, the researchers note, could allow for branched flow in three dimensions – a phenomenon that has been hypothesised, but has never been observed in any context.
It could be used to explore other physical phenomena, too, including some aspects of general relativity.
“The thin soap films could be shaped into a variety of curved surfaces to study the branched flow in curved space,” the researchers wrote in their paper.
“Such curved space experiments are intimately related to general relativity.”
Plus, if the video can be looped, it’ll make for an absolutely baller screensaver.
The research has been published in Nature.
What is an Otolaryngologist (ENT doctor)?
Otolaryngologists are doctors trained in the medical and surgical diagnosis and treatment of patients with diseases and disorders of the ear, nose and throat, and related conditions of the head and neck, in both adults and children. Otolaryngologists are commonly referred to as ENT surgeons.
The Common Cold
- More than 200 different viruses can cause the common cold.
- Viruses do not respond to antibiotic treatment.
- Symptoms typically last 2 – 14 days, but some symptoms can last for several weeks.
- Productive cough or discolored nasal discharge does not necessarily require antibiotic therapy.
- Influenza (flu) is a viral infection caused by the influenza virus.
Seek medical advice or treatment if symptoms are unchanged or getting worse after 10 days, if you experience shortness of breath or any respiratory difficulty, or if you have a high fever (>102 °F).
Viral infections can be associated with bacterial overgrowth and occasionally lead to a bacterial infection (acute bacterial rhinosinusitis), which typically requires antibiotic therapy. Common cold also may worsen asthma symptoms (wheezing) in patients with asthma.
Cold and flu viruses are spread by touching infected persons or objects that have come in contact with the virus and then touching one’s nose or mouth. Frequent handwashing is important to prevent this process. Inhalation of infected particles in the air also can spread colds/respiratory viral infections. So be sure to cover your mouth while sneezing.
Are You Removing Ear Wax?
Cerumen, commonly known as ear wax, is made up of dead skin cells, secretions from glands inside the ear canal, fatty acids, and antibacterial enzymes and substances. Some people have dry, brittle wax, whereas others have wet, greasy wax.
The ear produces wax constantly and maintains the right amount on its own. In fact, having too little wax may lead to dry ears and itching. Wax is there for a reason: it lubricates and protects the ear canal, prevents dust and foreign bodies from entering the ear, and shields the ear drum from trauma.
- Discuss the features of daytime and night time while out and about (think: the stars and moon at night vs sun in daytime)
- Use household chores, such as cooking, to find fun ways to learn and discover the senses
- When out and about, point out different animals and discuss how they are alike
Unit Focus Standards
Students know objects have observable properties. They will classify/sort objects by properties such as weight, size, and shape; know objects have observable properties with recognizable shapes; describe/sort a variety of materials by properties/commonalities; know materials can be changed; explain/demonstrate that objects undergoing physical changes remain the same objects; and classify objects through observation and discussion.
Students explain that objects produce sound when vibrating and create pictures of objects that make sounds. They know objects move in different directions by push/pull, investigate different ways objects move fast or slow, and collaborate to explain how things move, keeping records.
Students work in groups, use senses for observations, conduct investigations, and keep pictorial/written records. They describe discoveries, ask questions, and draw conclusions based on investigations. They also ask questions/design inquiries to answer questions in appropriate situations.
Students explore the law of gravity, recognize repeating patterns of day and night, and know the Sun can be seen during the day while the Moon may be seen by day or night. They know objects can be far away or near and compare/contrast objects' sizes. Things appear big or small as seen from Earth: the Sun is much larger than Earth but appears small because of its distance, and the Moon is larger than it appears in the sky because of its distance.
Students observe living things and their environments. They use the five senses as observation tools and identify the related body parts, describe how plants and animals are alike and different, and compare observations. They know characteristics of real plants and animals as opposed to those portrayed in books and media.
Sarah Payton, Cassie Hague
01 December 2010
This resource is designed to support primary and secondary teachers to integrate the development of students’ digital literacy into everyday learning.
The activities cover the following areas:
- Developing practitioners’ understanding of digital literacy and its relevance to their own contexts
- Planning activities that can be integrated into everyday teaching to support students to develop both subject knowledge and digital literacy
- Practical ideas for the classroom, including explorations of free web-based tools and activities.
To make a statement of the problem in investigatory projects, come up with a detailed paragraph outlining the essential issue that you will be investigating. If your teacher wants a hypothesis, or educated guess, consider the problem, outline it in a paragraph, and then add your educated guess about what will happen in your investigation. The best hypothesis will be the result of research and logic. Once you’ve decided how to describe a problem in four sentences or so (use more paragraphs if needed) and create an educated guess, you will be ready to create an investigatory project you can be truly proud of.
* Science project tips
The best science projects take time to put together; they never seem rushed or sloppy. For example, if you're making diagrams or models to go along with your statement of the problem in investigatory projects, give yourself enough time to add sufficient details, colouring, and embellishments. Don't wait until the night before – you'll be surprised to find out just how long it takes to draw or create a model.
Ask your teacher for an estimate of how many hours of work he or she expects on a project, and try to stick to these guidelines while you work.
* The art of science
Great scientists combine art and vision when they work on science research and projects. Mimic the actions of the greatest scientists by thinking big and finding a way to make your project stand out. From working models, to video, to audio, to schematics…use your imagination, and create a project that is both artful and scientific.
It’s always possible to create a great project when you learn how to write the perfect opening statement and then follow up on that with excellent research, and a perfectly formatted and structured report that is impressive to your teacher.
Science fair projects are common tasks assigned to school-age children throughout the world. What science fair projects have in common across education systems is that they all introduce children to the scientific method through developing a research question or problem statement that is answered by a controlled experiment. If you or your child is assigned a science project, the most important part is writing a problem statement that identifies an independent variable to be changed or manipulated and the dependent variables to be measured during the experiment.
The study aspired, primarily, to search for alternative ways of utilizing the common backyard plants Malunggay and Spinach so that they will not go to waste; secondarily, to make a different and simple preparation right in one’s home, aside from the usual capsule and tablet forms that currently dominate the market; and lastly, to disseminate information on how to meet a nutritional need with something within reach and friendly to the pocket. The study addressed the following specific questions:
1.) What are the procedures in preparing the Malunggay and Spinach powder?
2.) Is there a specific expiration for this product?
3.) Are there different processes for drying each kind of leaves?
4.) How long can the end product be stored?
5.) Is it advisable to dry the Malunggay and Spinach leaves under direct sunlight?
6.) What are the uses of the Malunggay and Spinach powder as an additive?
7.) What specific amounts of Malunggay and Spinach should be prepared to meet our nutritional needs?
8.) What are the nutritional values of the combination of Malunggay and Spinach?
9.) Can this research work possibly be pursued in the future?

Is there a significant difference between CALAMANSI and Zonrox/Clorox as a stain remover?
Our study aspired, primarily, to search for alternative ways to remove stains by utilizing common citrus fruits such as Calamansi and Kamias. The purpose of our research is also to determine whether these fruits can really remove stains. Our investigatory project addressed the following specific questions:
1.) What are the procedures by which Calamansi and Kamias extracts can serve as stain removers?
2.) Are there any disadvantages to using this kind of strategy?
3.) Are there any different processes for using it?
4.) For how long can the process be done?
5.) Is it advisable to use Calamansi and Kamias as stain removers?
6.) Can this research work possibly be pursued in the future?
7.) What if the experiment goes wrong?
8.) What substances in the Calamansi and Kamias make the stains come off?
9.) How effective are the Calamansi and Kamias as stain removers?
10.) Is it very useful?
You may have noticed that pushing a stalled bus can give it a sudden start: the push changes the bus’s velocity, and that change is acceleration. Acceleration is defined as the rate of change of the velocity of an object. A body’s acceleration is the net result of all the forces acting on the body, as described by Newton’s Second Law. Acceleration is a vector quantity, defined as the rate at which a body’s velocity changes.
Acceleration is the ratio of the change in velocity to the change in time. It is denoted by the symbol a and is expressed as a = Δv / Δt.
The SI unit for acceleration is the meter per second squared (m/s²).
If t (time taken), v (final velocity) and u (initial velocity) are provided, then the acceleration is given by the formula a = (v − u) / t, where:
Final Velocity = v
Initial velocity = u
acceleration = a
time taken = t
distance traveled = s
Acceleration Solved Examples
Below are some sample numerical problems on acceleration that show how the formula is used:
Problem 1: A toy car accelerates from 3m/s to 5m/s in 5 s. What is its acceleration?
Given: Initial Velocity u = 3m/s,
Final Velocity v = 5m/s,
Time taken t = 5 s. Then a = (v − u) / t = (5 − 3) / 5 = 0.4 m/s².
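The arithmetic of Problem 1 can be checked with a short script (a minimal sketch; the variable names are just illustrative):

```python
# Problem 1: acceleration from initial velocity, final velocity and time taken.
u = 3.0  # initial velocity, m/s
v = 5.0  # final velocity, m/s
t = 5.0  # time taken, s

a = (v - u) / t  # a = (v - u) / t
print(a)  # 0.4 m/s^2
```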
Problem 2: A stone is released into the river from a bridge. It takes 4s for the stone to touch the river’s water surface. Compute the height of the bridge from the water level.
(Initial Velocity) u = 0 (because the stone was at rest),
t = 4 s (time taken, as given in the problem statement)
a = g = 9.8 m/s² (a is acceleration due to gravity)
distance traveled by stone = Height of bridge = s
The distance covered is given by s = ut + ½at² = 0 + ½ × 9.8 × 4² = 78.4 m, so the bridge is about 78.4 m above the water.
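The same check works for Problem 2 using the equation of motion s = ut + ½gt² (a sketch with the problem’s given values):

```python
# Problem 2: distance fallen from rest under gravity in time t.
u = 0.0   # initial velocity, m/s (the stone starts at rest)
t = 4.0   # time to reach the water, s
g = 9.8   # acceleration due to gravity, m/s^2

s = u * t + 0.5 * g * t**2  # s = ut + (1/2)gt^2
print(s)  # 78.4 m: the height of the bridge above the water
```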
The Anderson shelter is an air-raid shelter (bomb shelter) that was used extensively throughout Europe (Britain in particular) during World War II. It was designed for 6 people and was made out of fourteen panels of corrugated steel. Six curved panels were used to form the top, three straight panels were used for each of the sides, and two more straight panels (one with a door) were used for the front and rear of the shelter. These shelters were approximately 1.8 m (6 ft) high, 2 m (6 ft 6 in) long and 1.4 m (4 ft 6 in) wide. They were buried under about half a meter (19 in) of soil to protect the occupants. These shelters were given to all people under a certain income, and more than 3.6 million of them were installed before and during the war. This type of shelter was effective against ground shocks and direct blasts because it could absorb a lot of energy. However, the shelters lost popularity during all-night alerts because they were cold and often flooded in wet weather. Let’s find out who designed and invented this air-raid shelter.
Who designed the Anderson shelter?
In 1938 the Home Office (a department of the UK Government) requested a new design for an air raid shelter. The shelter was designed by William Paterson and Oscar Carl Kerrison and named after Sir John Anderson, who was responsible for air-raid preparations prior to the outbreak of the war. The shelter was evaluated by Civil Engineers, Dr. David Anderson, Bertram Lawrence Hurst and Sir Henry Jupp and was approved for production.
Did you know?
At the end of the war many Anderson shelters were removed by the authorities to recycle the corrugated iron. Anyone who wanted to keep their shelter had to pay a fee! Some Anderson shelters are still in use today: after the war many were dug up and converted into garden storage sheds.
Lobes of the human brain (the occipital lobe is shown in red)
Artery: posterior cerebral artery
NeuroLex ID: Occipital lobe
The occipital lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The occipital lobe is the visual processing center of the mammalian brain containing most of the anatomical region of the visual cortex. The primary visual cortex is Brodmann area 17, commonly called V1 (visual one). Human V1 is located on the medial side of the occipital lobe within the calcarine sulcus; the full extent of V1 often continues onto the posterior pole of the occipital lobe. V1 is often also called striate cortex because it can be identified by a large stripe of myelin, the Stria of Gennari. Visually driven regions outside V1 are called extrastriate cortex. There are many extrastriate regions, and these are specialized for different visual tasks, such as visuospatial processing, color differentiation, and motion perception. The name derives from the overlying occipital bone, which is named from the Latin ob, behind, and caput, the head. Bilateral lesions of the occipital lobe can lead to cortical blindness (See Anton's syndrome).
The two occipital lobes are the smallest of the four paired lobes in the human cerebral cortex. Located in the rearmost portion of the skull, the occipital lobes are part of the forebrain. None of the cortical lobes are defined by any internal structural features, but rather by the bones of the skull that overlie them. Thus, the occipital lobe is defined as the part of the cerebral cortex that lies underneath the occipital bone. (See the human brain article for more information.)
The lobes rest on the tentorium cerebelli, a process of dura mater that separates the cerebrum from the cerebellum. They are structurally isolated in their respective cerebral hemispheres by the separation of the cerebral fissure. At the front edge of the occipital lobe are several lateral occipital gyri, which are separated by the lateral occipital sulcus.
The occipital aspects along the inside face of each hemisphere are divided by the calcarine sulcus. Above the medial, Y-shaped sulcus lies the cuneus, and the area below the sulcus is the lingual gyrus.
The occipital lobe is divided into several functional visual areas. Each visual area contains a full map of the visual world. Although there are no anatomical markers distinguishing these areas (except for the prominent striations in the striate cortex), physiologists have used electrode recordings to divide the cortex into different functional regions.
The first functional area is the primary visual cortex. It contains a low-level description of the local orientation, spatial-frequency and color properties within small receptive fields. Primary visual cortex projects to the occipital areas of the ventral stream (visual area V2 and visual area V4), and the occipital areas of the dorsal stream—visual area V3, visual area MT (V5), and the dorsomedial area (DM).
The ventral stream is known for processing the "what" in vision, while the dorsal stream handles the "where/how." This is because the ventral stream provides important information for the identification of stimuli that are stored in memory. With this information in memory, the dorsal stream is able to focus on motor actions in response to the outside stimuli.
Although numerous studies have shown that the two systems are independent and structured separately from one another, there is also evidence that both are essential for successful perception, especially as the stimuli take on more complex forms. For example, a case study using fMRI examined shape and location. The first procedure consisted of location tasks. The second procedure took place in a lit room where participants were shown stimuli on a screen for 600 ms. The researchers found that both pathways play a role in shape perception, even though location processing continues to lie within the dorsal stream.
The dorsomedial area (DM) is not as thoroughly studied. However, there is some evidence suggesting that this stream interacts with other visual areas. A case study on monkeys revealed that information from the V1 and V2 areas makes up half of the inputs to the DM. The remaining inputs come from multiple sources involved in visual processing.
A significant functional aspect of the occipital lobe is that it contains the primary visual cortex.
Retinal sensors convey stimuli through the optic tracts to the lateral geniculate bodies, where optic radiations continue to the visual cortex. Each visual cortex receives raw sensory information from the outside half of the retina on the same side of the head and from the inside half of the retina on the other side of the head. The cuneus (Brodmann's area 17) receives visual information from the contralateral superior retina representing the inferior visual field. The lingula receives information from the contralateral inferior retina representing the superior visual field. The retinal inputs pass through a "way station" in the lateral geniculate nucleus of the thalamus before projecting to the cortex. Cells on the posterior aspect of the occipital lobes' gray matter are arranged as a spatial map of the retinal field. Functional neuroimaging reveals similar patterns of response in cortical tissue of the lobes when the retinal fields are exposed to a strong pattern.
Stochastic Resonance in the Occipital Cortex
Stochastic resonance is characterized by adding noise to a stimulus that cannot normally be detected, which then enhances that signal. As noise is added to a visual stimulus, the individual is able to detect the signal to a greater degree. The same pattern is seen if noise is added directly to the occipital cortex. However, if noise is added to the frontal cortex, the visual signal decreases. Administering noise centrally to the occipital cortex can thus improve human performance by increasing stimulus detectability. More specifically, it influences behaviors such as decision-making, task switching, and goal-oriented responses that involve visual processing. Additionally, transcranial random noise stimulation (tRNS) is a useful tool for modulating human behavior and neural processing circuits in the visual cortex and other cortical areas as well.
If one occipital lobe is damaged, the result can be homonymous hemianopsia vision loss from similarly positioned "field cuts" in each eye. Occipital lesions can cause visual hallucinations. Lesions in the parietal-temporal-occipital association area are associated with color agnosia, movement agnosia, and agraphia. Damage to the primary visual cortex, which is located on the surface of the posterior occipital lobe, can cause blindness due to the holes in the visual map on the surface of the visual cortex that resulted from the lesions.
Recent studies have shown that specific neurological findings are associated with idiopathic occipital lobe epilepsies. Occipital lobe seizures can be triggered by a flash, or by a visual image that contains multiple colors. This is called flicker stimulation (usually through TV), and such seizures are referred to as photosensitivity seizures. Patients who have experienced occipital seizures described them as featuring bright colors and severely blurred vision (vomiting was also apparent in some patients). Occipital seizures are triggered mainly during the day, through television, video games or any flicker stimulatory system. Occipital seizures originate from an epileptic focus confined within the occipital lobes. They may be spontaneous or triggered by external visual stimuli. Occipital lobe epilepsies are etiologically idiopathic, symptomatic, or cryptogenic. Symptomatic occipital seizures can start at any age, as well as at any stage after or during the course of the underlying causative disorder. Idiopathic occipital epilepsy usually starts in childhood. Occipital epilepsies account for approximately 5% to 10% of all epilepsies.
- Base of brain.
- Drawing to illustrate the relations of the brain to the skull.
- Occipital lobe in blue
- Occipital lobe
- Ventricles of brain and basal ganglia. Superior view. Horizontal section. Deep dissection
- Lobes of the brain
- Regions of the human brain
- Sulcus Lunatus
- Visual evoked potential
- Vertical occipital fasciculus
- "SparkNotes: Brain Anatomy: Parietal and Occipital Lobes". Archived from the original on 2007-12-31. Retrieved 2008-02-27.
- Schacter, D. L., Gilbert, D. L. & Wegner, D. M. (2009). Psychology. (2nd ed.). New Work (NY): Worth Publishers.
- (Valyear, Culham, Sharif, Westwood, & Goodale, 2006).
- (Valyear et al., 2006).
- van der Groen, O; Wenderoth, N (11 May 2016). "Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception". J Neurosci. 36 (19): 5289–5298. PMID 27170126.
- Carlson, Neil R. (2007). Psychology : the science of behaviour. New Jersey, USA: Pearson Education. p. 115. ISBN 978-0-205-64524-4.
- Chilosi, Anna Maria; Brovedani (November 2006). "Neuropsychological Findings in Idiopathic Occipital Lobe Epilepsies". Epilepsia. 47 (s2): 76–78. doi:10.1111/j.1528-1167.2006.00696.x. PMID 17105468.
- Destina Yalçin, A., Kaymaz, A., & Forta, H. (2000). Reflex occipital lobe epilepsy. Seizure, 9(6), 436-441.
- Adcock, Jane E; Panayiotopoulos, Chrysostomos P (31 October 2012). "Journal of Clinical Neurophysiology". Occipital Lobe Seizures and Epilepsies. 29 (5): 397–407. doi:10.1097/wnp.0b013e31826c98fe. PMID 23027097. |
The issue: Since penicillin was introduced in the 1940s, antibiotics have saved countless lives. But we have used these drugs so much for so long that the bacteria they once killed have adapted and developed resistance. According to the Centers for Disease Control and Prevention (CDC), 23,000 Americans now die each year from infections by bacteria that are impervious to antibiotics.
The misuse and excessive use of antibiotics to treat benign and common illnesses and to boost productivity in farming has contributed to their waning potency. The CDC estimates antibiotics are improperly prescribed 50 percent of the time. In some countries you do not even need a prescription, but can simply purchase them over the counter. Overuse has allowed bacteria, viruses, parasites, and fungi to mutate and build immunity. (Antibiotics are a type of antimicrobial, which is, broadly speaking, a drug designed to kill or treat infections.) “Antimicrobial resistance” (AMR) is a man-made problem.
Researchers fear that these newly immune pathogens — sometimes called “superbugs” — are as dangerous as any pandemic because, even though they are familiar, there is no cure. And because untreatable bacteria lurk in hospitals, they make routine procedures like dental surgery far more dangerous. The 20th century’s gains in longevity are at risk.
Drug-resistant diseases do not respect borders, making this a global problem requiring global cooperation and standards on the use of antibiotics, a new World Bank paper says.
Study summary: To make the case that all countries must help stop AMR, the World Bank calculates how much AMR could cost the global economy. The Bank simulates the growth of AMR to the year 2050 in two scenarios: “low AMR impacts” and “high AMR impacts.”
The Bank’s models are built around the economic shocks of a decreased labor supply and lower livestock productivity. All economic sectors would be affected in all countries, but the poorest countries are often hardest hit, as they can lack even rudimentary medical facilities. Indeed, the growth of AMR would increase global inequality and push millions into poverty.
Beyond the economic impact, the paper considers hard-to-quantify quality of life issues. For example, gonorrhea has become harder to treat because of AMR. Alternative treatments, including injecting iodine through urethral or vaginal catheters, are more painful.
In the optimistic scenario, with a “low AMR impact,” global GDP loses 1.1 percent of growth a year by 2050, with an annual output shortfall over $1 trillion after 2030.
In the worse scenario, with a “high AMR impact,” global GDP growth loses 3.8 percent a year; the annual shortfall reaches $3.4 trillion after 2030.
Between 8 million and 24 million people would enter poverty by 2050.
Total global exports would fall between 1.1 percent and 3.8 percent.
By 2050, annual health care costs would rise 25 percent in low-income countries, 15 percent in middle-income countries and 6 percent in high-income countries. That could cost over $1 trillion per year.
Global livestock production would decline between 2.6 percent and 7.5 percent, hurt by disease as well as new trade restrictions. In the poorest countries, livestock production could decline by 11 percent.
“The lack of veterinary capacity in many low-income countries presents a substantial (and rising) risk to global economic and health security and causes a large ongoing human health burden in those countries.”
The cost of containment measures in 139 low- and middle-income countries is $9 billion a year.
Even half that investment would create between $10 trillion and $27 trillion in cumulative global benefits. (The Bank estimates this represents a 58 percent annual return. “AMR containment is a hard-to-resist investment opportunity.”)
Tests of 1,606 bacteria samples in Ghana show antibiotics’ reduced effectiveness: 80 percent of pathogens were resistant to older antibiotics such as ampicillin and tetracycline and 50 percent to newer antibiotics like cephalosporin and quinolone. “Most” were immune to multiple drugs.
“That pathogens will continue to evolve also means that any new antimicrobial ‘miracle cures’ that are developed will not last.”
Moreover, access to new drugs is a problem: Because the first-generation of antibiotics is no longer effective in many cases, one million children die each year due to treatable diseases like pneumonia and sepsis. Newer drugs are expensive and unavailable to the world’s poorest.
Substandard and counterfeit drugs exacerbate AMR, allowing bacteria to build immunity while not curing the patient. Up to 60 percent of antimicrobial drugs sold in Africa and Asia may be low quality, “often having none, or too little, of the active ingredient.”
This 2015 PBS Frontline documentary discusses how the widespread use of antibiotics in livestock has contributed to the rise of AMR “superbugs.” Antibiotics are not only used to target disease and as a prophylactic, but also to promote animal growth.
RAND Corporation, a nonpartisan think tank, estimates that the status quo would result in between 11 and 14 million fewer working-age adults by 2050. The worst-case scenario, RAND says, would mean 444 million dead by 2050.
A review group reporting to the prime minister of the United Kingdom has published a number of papers on microbial resistance.
This 2013 paper in JAMA Internal Medicine estimates the number of drug-resistant staph infections (methicillin-resistant Staphylococcus aureus, or MRSA) in the United States.
This 2013 study in PLoS One demonstrates how humans are contracting drug-resistant staph infections (MRSA) from livestock.
The Lancet has reported on the rise of drug resistance and The BMJ has discussed the economic cost of AMRs.
Keywords: drug resistance, antibiotic resistance, staph infections, global public health, disease and treatment, epidemics, pandemics, penicillin |
Energy takes many forms, including light, sound and heat. Different colors of light are carried by photons of different wavelengths. Energy and wavelength are inversely proportional: as the wavelength increases, the associated energy decreases. Calculating energy from wavelength involves the speed of light and Planck’s constant. The speed of light is 2.99x10^8 meters per second and Planck’s constant is 6.626x10^-34 joule seconds. The calculated energy will be in joules. Units should match before performing the calculation to ensure an accurate result.
Energy in Joules
Identify the wavelength of light you are calculating. This term is measured in meters; visible light is usually on the scale of nanometers (nm).
Convert the wavelength to meters. Multiply the wavelength by 10^-9 for nanometers or 10^-6 for micrometers.
Multiply Planck’s constant (6.626x10^-34) and the speed of light (2.99x10^8).
Divide the product of Planck’s constant and speed of light by the wavelength (in meters). The result is energy in joules.
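The steps above can be sketched in Python (a minimal sketch; the function name and the 500 nm example value are just illustrative, using the constants quoted in this article):

```python
# Photon energy in joules: E = h * c / wavelength (SI units throughout).
H = 6.626e-34  # Planck's constant, joule seconds
C = 2.99e8     # speed of light, meters per second

def photon_energy_joules(wavelength_nm):
    """Return the photon energy in joules for a wavelength given in nanometers."""
    wavelength_m = wavelength_nm * 1e-9  # step 2: convert nanometers to meters
    return H * C / wavelength_m          # steps 3-4: multiply h by c, divide by wavelength

print(photon_energy_joules(500))  # green light: about 3.96e-19 J
```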
Energy in Electronvolts
Identify the wavelength of light you are calculating and convert to micrometers using conventional metric conversion rules.
Divide 1.24 by the wavelength in micrometers to calculate energy.
The result of this calculation will be given in electronvolts (eV). One electronvolt is the energy an electron gains when it moves through a potential difference of one volt.
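The 1.24 shortcut can be cross-checked against the joules formula, since 1 eV ≈ 1.602x10^-19 J (the 500 nm example value is just illustrative):

```python
# Photon energy in electronvolts via the 1.24 shortcut,
# checked against the full formula E = h*c / (wavelength * e).
H = 6.626e-34   # Planck's constant, joule seconds
C = 2.99e8      # speed of light, meters per second
EV = 1.602e-19  # joules per electronvolt

wavelength_um = 0.5                           # 500 nm expressed in micrometers
shortcut = 1.24 / wavelength_um               # the rule of thumb from the text
exact = H * C / (wavelength_um * 1e-6) / EV   # full formula, converted to eV

print(shortcut, exact)  # both come out near 2.5 eV
```

The two values agree to within a few parts in a thousand, which is why the 1.24 shortcut is good enough for quick estimates.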
Convert the wavelength to the proper units depending on the equation used before starting the calculation. |
Retorts have long been standard pieces of laboratory equipment. They are used during chemical experiments and large-scale processes such as distillation. Normally made of glass, this example is made from earthenware and is a typical shape, with a long stem and a large bowl. Reagents are placed in the bowl and heated from underneath. It is shown with a similar example (A634003).
A vessel in which chemical substances are heated for distillation or decomposition. Usually made from glass.
Pottery made of clay which is fired at a relatively low temperature. Earthenware is often semi-porous, meaning some liquid or air can pass through it. This can be altered by treating the pottery with a glaze. |
Think about the kids you knew in elementary school who had a lisp or a stutter. Imagine how difficult life would be if you couldn’t accurately and effectively communicate due to a speaking disorder. Would you feel comfortable or capable dealing with those problems alone?
Stuttering and lisping are just two examples of common speech disorders. Speech disorders come from a person’s inability to produce sounds correctly or with fluency. And the problems can be even more complicated than that. A person can also suffer from communication disorders: receptive and expressive. A receptive communication disorder means the person is unable to accurately understand others, while an expressive communication disorder means they are unable to accurately express themselves. It must be a troublesome existence to be stuck in your own thoughts and ideas with no way to explain yourself. Fortunately, these difficulties can be remedied with the help of a speech-language pathologist.
A speech-language pathologist (SLP) is a specialist who works with those who struggle with speech and articulation, communication disorders, and fluency. Sometimes known as a speech therapist, an SLP’s work also allows them to assist those who have difficulty feeding or swallowing. SLPs are educated with a master’s degree and licensed or certified to diagnose and treat a number of speech communication and swallowing disorders. They work with clients of any age who may be suffering for any number of reasons, including cerebral palsy, cleft palate, traumatic injury, and mental illness.
Salary and Career Opportunities
Because the money is generally good and the field is growing, speech therapy can be an attractive career choice. A certified speech-language pathologist can make upwards of $70,000 a year. One can choose a concentration under the umbrella of the field, narrowing the work down to specific types of disorders and complications. Speech therapists typically work in research, educational, or healthcare environments. Because a majority of clients tend to be in their youth, SLPs often work in school settings, but research shows that speech difficulties can increase with age, creating a bigger need for SLPs among the elderly as well. This means that the job prospects arguably look bright for future SLPs as the field grows with each passing year.
What’s more attractive than the blossoming career opportunities, however, is the fact that SLPs can have a significantly positive impact on society. The work they do fosters the notion that everyone should be able to communicate clearly, accurately, and effectively. This is central to creating a social environment where one person’s voice is equally as important as anyone else’s, regardless of gender, age, culture, race, sexuality, or disability. For young patients, learning how to use their voice can help them mature. Speech therapy teaches them to develop their opinions and share them with less fear and anxiety of how they will be received. This can foster a culture where difficult and opinionated conversations are not so uneasily had and are welcomed as a source of positive change and growth. For the elderly, it can mean making their experienced voices as sharp and resonant in the cultural climate as any young person’s. Instead of being overlooked for suffering a traumatic injury or invasive surgery, one can recover to communicate the way they once could, and perhaps even better than before. In these ways, it is possible for speech-language pathology to change the world, one voice at a time. The world could use prospective SLPs, who are interested in putting a life of service and speaking before anything else. |
Explain the meaning, need, importance and methods of training. Discuss the existing training system of your organisation or an organisation you are familiar with. Describe how an organisation identifies the needs of training and comes up with the appropriate training programmes. Describe the organisation you are referring to.
In simple terms, training refers to the imparting of specific skills, abilities and knowledge to an employee. Training intends to develop specific and useful knowledge, skills and techniques. It is basically a task-oriented activity which prepares people to carry out predetermined tasks. A formal definition of training and development is:
…it is any attempt to improve current or future employee performance by increasing an employee's ability to perform through learning, usually by changing the employee's attitude or increasing his or her skills and knowledge. The need for training is determined by the employee's performance deficiency, computed as follows:
Training and development need = Standard performance – Actual performance
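The formula can be illustrated with made-up scores (purely hypothetical numbers on a 0–100 appraisal scale):

```python
# Training need as the gap between standard and actual performance.
standard_performance = 90  # expected score on a 0-100 appraisal scale (hypothetical)
actual_performance = 72    # employee's measured score (hypothetical)

training_need = standard_performance - actual_performance
print(training_need)  # 18 -> a positive gap signals a training need
```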
Need of Training
· Increased productivity: Training programs help in increasing skill, aptitude and abilities of workers. It results in increased productivity.
· Reduced Supervision: Trained employees require less supervision and can supervise themselves.
· Reduced dissatisfaction: Well-trained employees usually experience satisfaction associated with a sense of achievement and knowledge. Consequently they develop their inherent capabilities at work.
· Minimum wastage and accidents: An important advantage of training is that accidents, spoiled work and damage to machinery and equipment can be kept to a minimum.
· Organizational flexibility and stability: Trained and motivated personnel are assets for an organization; they can adjust to short-run variations in the volume of work and cover for the loss of key personnel. Personnel with multiple skills are in a position to transfer to jobs where demand is highest.
Importance of Training and development
Training and development programmes help remove performance deficiencies in employees. This is particularly true when (i) the deficiency is caused by a lack of ability rather than a lack of motivation to perform, (ii) the individuals involved have the aptitude and motivation needed to learn to do the job better, and (iii) supervisors and peers are supportive of the desired behaviours.
There is greater stability, flexibility, and capacity for growth in an organization. Training contributes to employee stability in at least two ways. Employees become efficient after undergoing training and efficient employees contribute to the growth of the organization. Future needs of employees will be met through training and development programmes. Organizations take fresh diploma holders or graduates as apprentices or management trainees. They are absorbed after course completion. Training serves as an effective source of recruitment. Training is an investment in HR with a promise of better returns in future.
Most training takes place on the job. This can be attributed to the simplicity of such methods and their usually lower cost. However, on the job training can disrupt the workplace and result in an increase in errors as learning proceeds. In such cases off the job training methods are used.
On the Job training: Popular training methods include job rotation and understudy assignments. Job rotation involves lateral transfers that enable employees to work at different jobs. Employees get to learn a wide variety of jobs and gain increased insight into the interdependency between jobs and a wider perspective on organizational activities. New employees frequently learn their jobs by understudying a seasoned veteran. On the job methods includes:
· Orientation training
· Job-instruction training
· Apprentice training
· Internships and assistantships
· Job rotation
Off the job training: There are a number of off the job training methods that managers may want to make available to employees.
· Special study
· Conference and discussion
· Case study
· Role playing
· Programmed instruction
· Laboratory training
I am familiar with Reliance Infocomm Ltd. Reliance Infocomm is the outcome of the late visionary Dhirubhai Ambani's dream to herald a digital revolution in India by bringing affordable means of information and communication to the doorsteps of India's vast population.
Working at breakneck speed from late 1999 to 2002, Reliance Infocomm built the backbone for a digital revolution.
Existing training system of our organization
In today's electronic world, the World Wide Web is all-pervasive. The internet and intranet are changing the face of training and learning. Using a PC, a modem and a web browser, it has become possible to learn online. In our company, we are reaping the benefits of virtual learning. Employees have access to self-paced computer-based training (CBT) material through the firm's intranet. Our company offers nearly 100 courses online, mostly in information technology. Learning through the web can be very convenient for employees: there are no fixed schedules or limitations of time, and one can attend a course at home, in the evening, or while travelling. It is not just technical programs: soft skills can also be learnt electronically. We use a CD-ROM based manual to impart soft skills like performance management, coaching, and interviewing. The CD-ROM based training is supplemented with shared learning via teleconferencing, where managers discuss key learnings and ask for clarifications. Face-to-face role-playing exercises are added for the human touch.
Training need identification
Needs assessment diagnoses present problems and future challenges to be met through training and development. There are several methods for the purpose:
In our organization, we developed a 3-tier need survey to identify training needs of employees at different levels.
The 3 tiers of the proposed survey can be described as follows:
Tier 1: Identifying problems of various departments and the training needs for the various categories of employees by line managers through questionnaire.
Tier 2: Prioritizing the various training areas for different categories of employees by the Senior Managers and Head of Departments.
Tier 3: Based on the findings from the above two tiers, conducting interviews with the Deputy General Managers and General Managers to get their feedback.
The survey was taken up with the following four basic objectives:
· To identify problems of line managers which may be overcome with the help of training.
· To enhance the effectiveness of present training programmes vis-a-vis identified problems.
· To identify training needs of the employees in the division.
· To involve line managers by initiating the process of participative management in the training function.
Tier 1: A questionnaire consisting of six questions was designed and circulated to a random sample of 50 line managers from various departments in order to find answers to the following questions:
· What are the burning problems in the department?
· What are the training needs of different categories of employees in the department?
· What are the perceptions of line managers about the effectiveness of programmes presently organized by the Training Development Department?
· What are the expectations of line managers from the Training and Development Department?
Tier 2: A list of various training areas/programmes was compiled and circulated to 66 Senior Managers/Head of Departments. The programmes were classified under the following heads.
· General Management Programmes
· Behavioral Science Oriented Programmes
· Technique Oriented Programmes
· Functional Management Programmes
· Skill Development Programmes
The respondents were asked to indicate the priorities 1 to 5 under each subject for each category of employee.
Tier 3: Detailed personal discussions were held individually with the seven Deputy General Managers by the Training and Development manager and his deputies so as to:
· Obtain their view/perception/suggestions about the training needs
· To get validation of the data/views expressed in the earlier two tiers
· To initiate the process of involvement of the line as well as top management
The response to the questionnaires was not very encouraging. This is a normal phenomenon. Based on the responses, line managers identified some common problems for the different categories of employees.
Based on these problems, line managers identified the training needs of their employees. The category-wise priorities of the training needs were also determined.
Tier 2: Here the response to the questionnaire was much better. The information received was compiled to find out priorities for different programmes.
Tier 3: Personal discussions were held with Deputy General Managers, Senior Managers of Production, Finance and Marketing departments. Findings from the earlier two tiers were also presented to them. Their views were mostly in line with views of Senior Managers in Tier 2. They frankly expressed their views and priorities. Their emphasis was on methodology so that such forums could be used to generate solutions to problems within the organization.
The summary findings were discussed in detail in the Personnel Department. Some programmes were designed around real organizational problems. In some other programmes, the application of concepts from innovative managers was obtained by designating some of them as Programme Directors and by involving them in identifying participants for different programmes. This method increased the credibility, acceptance and involvement of line managers in the company's training programmes.
Civil procedure is the body of law that sets out the rules and standards that courts follow when adjudicating civil lawsuits (as opposed to procedures in criminal law matters). These rules govern how a lawsuit or case may be commenced, what kind of service of process (if any) is required, the types of pleadings or statements of case, motions or applications, and orders allowed in civil cases, the timing and manner of depositions and discovery or disclosure, the conduct of trials, the process for judgment, various available remedies, and how the courts and clerks must function.
Differences between civil and criminal procedure
Some systems, including the English and French, allow governmental persons to bring a criminal prosecution against another person, but prosecutions are nearly always started by the state, in order to punish the accused. Civil actions, on the other hand, are started by private individuals, companies or organizations, for their own benefit. In addition, governments (or their subdivisions or agencies) may also be parties to civil actions. The cases are usually heard in different courts, and juries are not so often used in civil cases. However, this is distinguished from civil penal actions.
In jurisdictions based on English common-law systems, the party bringing a criminal charge (in most cases, the state) is called the "prosecution", but the party bringing most forms of civil action is the "plaintiff" or "claimant". In both kinds of action the other party is known as the "defendant". A criminal case against a person called Ms. Sanchez would be described as “The People v. (="versus", "against" or "and") Sanchez,” "The State (or Commonwealth) v. Sanchez" or "[The name of the State] v. Sanchez" in the United States and “R. (Regina, that is, the Queen) v. Sanchez” in England and Wales. But a civil action between Ms. Sanchez and a Mr. Smith would be “Sanchez v. Smith” if it was started by Sanchez, and “Smith v. Sanchez” if it was started by Mr. Smith (though the order of parties' names can change if the case is appealed).
Most countries make a clear distinction between civil and criminal procedure. For example, a criminal court may force a convicted defendant to pay a fine as punishment for his crime, and the legal costs of both the prosecution and defence. But the victim of the crime generally pursues his claim for compensation in a civil, not a criminal, action. In France and England, however, a victim of a crime may incidentally be awarded compensation by a criminal court judge.
Evidence from a criminal trial is generally admissible as evidence in a civil action about the same matter. For example, the victim of a road accident does not directly benefit if the driver who injured him is found guilty of the crime of careless driving. He still has to prove his case in a civil action, unless the doctrine of collateral estoppel applies, as it does in most American jurisdictions. In fact he may be able to prove his civil case even when the driver is found not guilty in the criminal trial, because the standard to determine guilt is higher than the standard to determine fault. However, if a driver is found by a civil jury not to have been negligent, a prosecutor may be estopped from charging him criminally.
If the plaintiff has shown that the defendant is liable, the main remedy in a civil court is the amount of money, or "damages", which the defendant should pay to the plaintiff. Alternative civil remedies include restitution or transfer of property, or an injunction to restrain or order certain actions.
The standards of proof are higher in a criminal case than in a civil one, since the state does not wish to risk punishing an innocent person. In English law the prosecution must prove the guilt of a criminal “beyond reasonable doubt”; but the plaintiff in a civil action is required to prove his case “on the balance of probabilities”. Thus, in a criminal case a crime cannot be proven if the person or persons judging it doubt the guilt of the suspect and have a reason (not just a feeling or intuition) for this doubt. But in a civil case, the court will weigh all the evidence and decide what is most probable.
Civil procedure by country
- Affirmative defense
- Civil Justice Fairness Act
- Criminal procedure
- Prejudice (law)
- Statute of limitations
- Summary judgment
- Time constraints
- Trial de novo
- Civil Procedure Rules applying to England and Wales
- Complete text of Federal Rules of Civil Procedure (Cornell Univ.)
- Rhode Island Civil Court Rules of Procedure - Optimized by a Constable from the law library at the 6th District Court of Rhode Island |
Bastet, the ancient ruins of Egypt, were erected by the ancient Egyptians as a symbol of their affection for the cat goddess known as Bast or Bastet, who was also represented as a lion-headed woman. She was considered a protective goddess and the goddess of love and fertility by the ancient Egyptians. In ancient Egyptian belief, Bastet was known as one of the daughters of Ra, the god of the Sun and the most important god in ancient Egypt. The name Bastet means the warmth of the sun. The Greeks also linked Bastet with their goddess Artemis.
Bastet, the ancient ruins of Egypt, comprise the Bastet Temple, located in the eastern Delta city known as Per-Bast or Per-Bastet, which means the House of Bastet. Later the Greeks called it Bubastis. Nowadays it is an archaeological tourism site called Tell-Basta on the south-eastern edge of the modern city of Zagazig, about 80 km north-east of Cairo. Construction work on the Bastet temple began in the reign of the 4th dynasty, specifically in the reign of Khufu, who built it to honor Bastet. Afterwards many pharaohs, such as Khafre, Ramses II, Pepi I, Osorkon, and Nkinbu, amended it by adding their own structures and contributed to the expansion of the temple over 1,700 years. This amazing temple was constructed with red granite brought specially from Aswan along the Nile.
In the 5th century BC, the Greek historian Herodotus visited Bubastis. He described the town as having a beautiful temple on low ground in the center of the city, surrounded by tree-lined canals that gave it the appearance of being on an island. A stone-paved road led from a Temple of Hermes to a huge carved gateway which dominated the entrance to the Temple of Bastet, and inside was a shrine containing a statue of the goddess. Herodotus gave a vivid account of the annual festival of the goddess Bastet, when an estimated 700,000 Egyptian pilgrims would visit the site. Many details of Herodotus's description were confirmed by Edouard Naville's investigation of the Temple of Bastet for the Egypt Exploration Fund during 1887-1889. During the reign of the 22nd dynasty (945-715 BC), the capital of Egypt moved from Tanis to Bubastis, and the city of the cats became the capital of Egypt for 230 years.
Bubastis was a very important city and the capital of the nome "Am-Khent". This nome (a subnational administrative division of ancient Egypt, which can be understood as a district) comprised the areas of Lower Egypt along the eastern Delta of the river Nile. Bubastis was the eastern gateway of Egypt, linking Asia and Memphis, and it occupied a very strategic geographic position on the Sesostris canal, or the Canal of the Pharaohs, the ancient Suez Canal, which served as a juncture between the Nile River and the Red Sea.
Bubastis is also identified with the name Phibeseth in the Christian Bible, book of Ezekiel 30:17. In 525 BC, the city and the temple were destroyed after the Persian conquest by Cambyses II, which heralded the end of the Saite 26th dynasty and the start of the Achaemenid dynasty. In 1906, a hoard of gold and silver vessels and jewellery was discovered by local workmen near the temple site, the earliest pieces dating to the Ramesside period. Some of these treasures were taken illicitly out of Egypt and were subsequently acquired by the Berlin and New York Metropolitan museums.
A second similar hoard was found later in the same year just a few metres away from the site of the first discovery and is now preserved in the Cairo Museum. There are many other sites worth seeing in Bubastis, or Bastet, the ancient ruins of Egypt, such as the Cat Necropolis, where many bronze statues of cats were found in a series of underground rooms during its exploration. The site also contains a ruined palace dating back to the Middle Kingdom, the New Kingdom burials, the ruins of the Bastet temple and many other magnificent monuments.
* This article was written based on research by Mohammed Mamdouh from Egypt and is dedicated to this website.
I enjoy getting to know my students. But I also feel pressure to make sure all activities teach something. My time is limited, and in the alternative classroom there was little or no homework: assigning it was pointless because it never came back. My students had other priorities than school. Unfortunate, sure, but I have to work with what I have, and with what time I have.
So, I found ways to make sure each activity had some value, such as reviewing or reinforcing concepts, including those quintessential get-to-know-you activities on the first day of class. I snuck descriptive writing, symbolism, even theme and genre into different get-to-know-you activities.
Here are my five favorite get-to-know-you activities that build in ELA concepts. Each is also available in my TPT store as a pre-made activity, ready to print and use.
- Animal Mash-Up: Students use animal traits to share who they are. Then they design an animal mash-up with parts of those animals. ELA tie in: analogies, comparisons. Fast as a cheetah (and might draw cheetah legs or spots on their mash-up.)
- Movie of My Life: Students imagine a movie version of their life (or a part of their life). They pick actors/actresses to play the roles, summarize the main events in the movie, and identify the theme and genre of their movie. ELA tie in: Theme and Genre. Also summary.
- Wanted Poster: Students introduce themselves with a drawing of themselves and what they are wanted for. Can be wanted for Good or Bad reasons. Also includes bonus printable for use with any person or character from any text. ELA tie in: Wanted poster for a character or person in a text and description of a person.
- Welcome to My Island: Students design their own private island and tell what’s on it using descriptions and location words. ELA tie in: descriptive writing and location words (around, next, nearby etc.)
- Coat of Arms Project: students create a coat of arms with symbols and colors that identify them and their values. Also ties in with “The Cask of Amontillado” by Edgar Allan Poe– which is an engaging short text that makes a good, flexible start to the year. ELA tie in: Symbolism.
How I use these:
While these can be put on the board, I like to have students take a printable since it means they can work on it immediately upon arriving while I greet others and get the class rolling that chaotic first day. Then they can share with the class. Some classes I let students choose from two or more of the activities, while other classes get assigned.
Given that adult coloring is a thing, these activities also reinforce what I saw with my students: how much they enjoy drawing and coloring. It's a good relaxing task for students on the first day, different from the common get-to-know-you activities, and it can avoid some of the issues with icebreakers.
Once we have gotten to know each other, it's time to figure out how to deal with my roster still being in flux the first week (or weeks)! But at least the first day is generally a solid start with these activities.
According to the United Nations Commission on Sustainable Development, sustainability is simply defined as “meeting the needs of the present without compromising the ability of future generations to meet their own needs.” Sustainability is normally separated into three main dimensions: social, economic, and environmental. Sustainable development in these three areas is necessary to ensure that human societies will be able to endure, thrive, and last without destroying their environments and depleting their natural resources.
Our vision at CSF is that all African American communities in the future will be safe, healthy, socially and economically equitable and harmonious with the environment. In order to make this vision a reality we have identified 8 practices that will increase the ability of African Americans to pursue sustainable development and environmental justice.
The Sustainable Living Practices
- Growing Your Own Food – CSF will promote urban farming that allows community members to supply themselves with healthy produce. Growing your own food will help to address problems caused by food deserts and food insecurity in our community.
- Healthy Eating – CSF will advocate for community members lowering their consumption of salt, sugar and fats. Furthermore, we will advise lowering consumption of meats, GMOs, and processed food. Members of our community should strive to improve our eating habits in order to have healthier, more productive lives as well as lower health care costs.
- Water Filtration and Conservation – CSF will promote the use of water filters to avoid consumption of potentially hazardous chemicals found in tap water. Furthermore, we will promote personal responsibility with general water usage. The impending water crisis worldwide threatens the future of many life forms on the planet. Additionally, pollution in water further compounds this problem. It is very important for members of our community to be aware of their water consumption in relation to their own health as well as the environment.
- Energy Efficiency – CSF will promote responsible energy consumption, as well as raise awareness of alternative sources of energy. With African Americans spending 25-30 percent more on energy costs, there must be an effort to increase awareness of energy sources and personal efficiency measures.
- Recycling – CSF will promote limiting community output of waste by finding ways to re-use items. Finding ways to reuse trash and other waste products will be imperative for our community.
- Collective Civic Engagement – CSF will promote awareness of community needs and facilitate the establishment of community voting blocs and community voting agendas. It is important that African Americans begin to approach politics and public policy more cohesively. Furthermore there is a need for increased accountability for elected officials.
- Cooperative Economics – CSF will promote community support for locally owned businesses and entrepreneurs, as well as facilitate the development of “cooperatives” throughout our community. We advocate that African Americans spend at least 25-40 percent of their income with other Black businesses. In order to increase employment rates and economic well-being, it will be important for African Americans to pursue cohesive economic strategies. Especially important will be the development of cooperatives, where people come together as partners to form businesses.
- Breastfeeding/Breastfeeding Support – CSF will promote breastfeeding for newborn children and facilitate breastfeeding support networks in our community. African American mothers have dismal breastfeeding rates. This leads to reliance on formula feeding which is not the healthiest option nor is it sustainable. Lack of breastfeeding in African American communities has been tied to limited support networks.
Environmental Injustice exists when people face more exposure to environmental hazards due to their race, color, national origin, or income. Much of the research in this field has determined that race is a key factor in determining who is protected from environmental hazards.
Research Studies have shown that:
- Although African-Americans contribute 20 percent less than white households to the causes of climate change, research suggests they are more vulnerable to consequences of this activity.
- In all forty-four of the major metropolitan areas of the United States, African Americans face greater exposure to higher concentrations of air-borne toxins.
- Three out of every five African Americans and Latinos live in areas near toxic waste sites, as well as in areas where levels of poverty are well above the national average.
- Children of color who live in poor areas are more likely to attend schools filled with asbestos, live in homes with peeling lead paint, and play in parks that are contaminated.
- As of 2002, more than 70% of African Americans lived in counties that are in violation of federal clean air laws and standards (these are called “non-attainment areas”) compared to the 58 percent of whites living in such communities.
- Black children are two times as likely to be hospitalized for asthma and are four times as likely to die from asthma as White children.
- 96% of African American children who live in inner cities have unsafe amounts of lead in their blood. African Americans are more likely than others to live in neighborhoods where nutritious, fresh, healthy, and affordable food is largely unavailable which has been connected to childhood obesity along with other health issues.
Our vision is to create a national network of sustainable African American communities where the residents and families are able to thrive. These communities will feature clean air and water. The neighborhoods will be seen as safe, well-kept and energy efficient. Most importantly these communities will be affordable. We also want to see decision making in our communities include full participation and collective cooperation. The police forces should be locally controlled and responsive to the concerns of community members. Furthermore, the communities should have the ability to produce their own experts and specialists to deal with their most significant problems. Thus there should be a strong emphasis on education and training for all residents. The general concept is that all activity in the community will promote enhanced quality of life in the present without jeopardizing the opportunities for improvements of future generations. |
(Medical Xpress)—A team of researchers in France has found evidence that suggests that human hand-to-mouth actions are hard-wired into the brain. In their paper published in Proceedings of the National Academy of Sciences, the researchers describe an experiment they conducted on adults undergoing brain surgery and why what they found could have profound implications on human brain development theories.
Because human babies are born with so few abilities, scientists and others have come to believe that virtually everything they do has to be learned; humans really don't have any genetically hard-wired behaviors, unlike baby kangaroos that can find their way into their mother's pouch or newborn wildebeests that instinctively run when a lion comes near. But as it turns out, conventional thinking may not be right. In this new effort, the researchers have found evidence of what might be an instinctive behavior in humans: the raising of the hand to the mouth in conjunction with the mouth opening to receive it.
The research by the team in France involved stimulating a part of the human brain that has been found to be involved in automatic hand-to-mouth gestures in other primates. In this case, the human brains belonged to adult patients undergoing brain surgery; they were unconscious, yet when a certain brain region was stimulated, 9 of 26 patients (who'd given permission to be tested) raised their arms to lift their hands to their mouths, and their mouths opened. This suggests, the researchers conclude, that an involuntary instinctual activity is taking place; if it were learned, they point out, more than one area of the brain would be involved.
The research backs up claims made earlier by some researchers and mothers alike: babies don't have to be taught to grab things and lift them to their mouths; they do so as an automatic response to things they discover in their immediate surroundings. That generally includes fingers and thumbs as well; some babies have been seen sucking on them while still in utero.
The researchers suggest that their findings have implications for our understanding of human brain development and how motor functions originate in primates, including humans, and that further research needs to be conducted to see if other instinctive types of behaviors can be found.
More information: Neural representations of ethologically relevant hand/mouth synergies in the human precentral gyrus, PNAS, Michel Desmurget, DOI: 10.1073/pnas.1321909111
Complex motor responses are often thought to result from the combination of elemental movements represented at different neural sites. However, in monkeys, evidence indicates that some behaviors with critical ethological value, such as self-feeding, are represented as motor primitives in the precentral gyrus (PrG). In humans, such primitives have not yet been described. This could reflect well-known interspecies differences in the organization of sensorimotor regions (including PrG) or the difficulty of identifying complex neural representations in peroperative settings. To settle this alternative, we focused on the neural bases of hand/mouth synergies, a prominent example of human behavior with high ethological value. By recording motor- and somatosensory-evoked potentials in the PrG of patients undergoing brain surgery (2–60 y), we show that two complex nested neural representations can mediate hand/mouth actions within this structure: (i) a motor representation, resembling self-feeding, where electrical stimulation causes the closing hand to approach the opening mouth, and (ii) a motor–sensory representation, likely associated with perioral exploration, where cross-signal integration is accomplished at a cortical site that generates hand/arm actions while receiving mouth sensory inputs. The first finding extends to humans' previous observations in monkeys. The second provides evidence that complex neural representations also exist for perioral exploration, a finely tuned skill requiring the combination of motor and sensory signals within a common control loop. These representations likely underlie the ability of human children and newborns to accurately produce coordinated hand/mouth movements, in an otherwise general context of motor immaturity. |
Some shine like disco balls and others are barely visible. But the Milky Way's galactic companions all seem to weigh the same at their core, according to a new study.
Since these 'dwarf galaxies' are the smallest known, the find suggests there is a minimum amount of matter needed to form a galaxy.
A motley collection of at least 22 dwarf galaxies orbits the Milky Way. The most dazzling are 10,000 times brighter than their dimmer brethren.
But despite appearances, all of the galaxies weigh the same at their cores, likely due to a higher concentration of dark matter inside the dimmer galaxies.
Louis Strigari of the University of California, Irvine, and colleagues analysed the motion of stars in 18 of these dwarf galaxies using images taken at the Keck Observatory in Hawaii and the Magellan telescope in Chile.
Because the width of the galaxies varies widely, the team focused on the inner 1000 light years of each galaxy, where they might expect common properties to emerge if any existed.
They measured the velocities of hundreds of stars in orbit around the galaxies' centres, which allowed them to calculate the mass of the galaxies' cores.
Surprisingly, the team found that the galaxies all weigh the same - roughly 10 million times the mass of the Sun. Most of this mass seems to be dark matter, and the dimmest galaxies appear to contain 10,000 times more dark matter than visible matter.
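The kind of dynamical mass estimate described above can be sketched with a simple virial-style scaling, M ≈ σ²R/G, where σ is the stellar velocity dispersion and R the radius probed. The sketch below is an illustrative back-of-the-envelope calculation, not the team's actual analysis; the ~10 km/s dispersion assumed is typical of dwarf spheroidal galaxies but is not taken from the study:

```python
# Order-of-magnitude dynamical mass of a dwarf galaxy core: M ~ sigma^2 * R / G.
# Constants in SI units; the dispersion value below is an illustrative assumption.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres per light year

def dynamical_mass(sigma_km_s: float, radius_ly: float) -> float:
    """Mass enclosed within radius_ly (in solar masses), from the
    stellar velocity dispersion sigma (in km/s)."""
    sigma = sigma_km_s * 1e3          # km/s -> m/s
    radius = radius_ly * LIGHT_YEAR   # light years -> metres
    return sigma**2 * radius / G / M_SUN

# A ~10 km/s dispersion within the inner 1000 light years gives
# a core mass of order 10 million solar masses.
mass = dynamical_mass(10.0, 1000.0)
print(f"core mass ~ {mass:.1e} solar masses")  # → core mass ~ 7.1e+06 solar masses
```

With these inputs the scaling lands at roughly 7 million solar masses, the same order of magnitude as the ~10 million figure quoted above; the exact prefactor in a real analysis depends on the assumed density profile and orbit distribution of the stars.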
Rich in dark matter
This is an unusual ratio. The Milky Way, for example, is thought to contain only roughly 10 times as much dark matter as ordinary matter. "As far as we know, these are the most dark matter-dominated galaxies in the universe," says team member Manoj Kaplinghat, also of Irvine.
The find hints that these tiny galaxies must be at least this massive in order to form.
The result could have implications for the study of conditions in the early universe. Galaxies are thought to form from the inside out, beginning as knots of dark matter that over time gravitationally draw in surrounding matter.
Because the dwarf galaxies are so dense, the team estimates they must have gotten their start at a time when the universe itself was comparably dense, only a few hundred million years after the big bang.
The team's results agree with earlier work that shows these dwarf galaxies are dark matter-dominated, says Gerry Gilmore of Cambridge University in the UK.
If galaxies have a minimum mass, he says their study could shed light on the nature of dark matter. That's because the minimum size at which a clump of dark matter could form depends on the properties of the dark matter candidate used.
"This implies that we are approaching the scale at which the properties of the elementary [particles] which presumably make up the dark matter are producing observable astrophysical effects," Gilmore told New Scientist.
Because the dwarf galaxies are nearby and not obscured by gas and dust, they may also be ideal places to look for evidence of dark matter particles, Kaplinghat says.
Some dark matter particle candidates can annihilate and release gamma rays that might be seen by telescopes such as the newly-launched Fermi Gamma-ray Space Telescope.
Journal reference: Nature (vol 454, p 1096)
Liberalism is a political philosophy or worldview founded on ideas of liberty and equality. Liberals espouse a wide array of views depending on their understanding of these principles, but generally they support ideas and programmes such as freedom of speech, freedom of the press, freedom of religion, free markets, civil rights, democratic societies, secular governments, gender equality, and international cooperation.
Liberalism first became a distinct political movement during the Age of Enlightenment, when it became popular among philosophers and economists in the Western world. Liberalism rejected the prevailing social and political norms of hereditary privilege, state religion, absolute monarchy, and the Divine Right of Kings. The 17th-century philosopher John Locke is often credited with founding liberalism as a distinct philosophical tradition. Locke argued that each man has a natural right to life, liberty and property, while adding that governments must not violate these rights based on the social contract. Liberals opposed traditional conservatism and sought to replace absolutism in government with representative democracy and the rule of law.
Prominent revolutionaries in the Glorious Revolution, the American Revolution, and the French Revolution used liberal philosophy to justify the armed overthrow of what they saw as tyrannical rule. Liberalism started to spread rapidly especially after the French Revolution. The 19th century saw liberal governments established in nations across Europe, South America, and North America. In this period, the dominant ideological opponent of classical liberalism was conservatism, but liberalism later survived major ideological challenges from new opponents, such as fascism and communism. During the 20th century, liberal ideas spread even further as liberal democracies found themselves on the winning side in both world wars. In Europe and North America, the establishment of social liberalism became a key component in the expansion of the welfare state. Today, liberal parties continue to wield power and influence throughout the world.
- 1 Etymology and definition
- 2 History
- 3 Philosophy
- 4 Worldwide
- 5 Impact and influence
- 6 See also
- 7 Notes
- 8 References and further reading
- 9 External links
Etymology and definition
Words such as liberal, liberty, libertarian, and libertine all trace their history to the Latin liber, which means "free". One of the first recorded instances of the word liberal occurs in 1375, when it was used to describe the liberal arts in the context of an education desirable for a free-born man. The word's early connection with the classical education of a medieval university soon gave way to a proliferation of different denotations and connotations. Liberal could refer to "free in bestowing" as early as 1387, "made without stint" in 1433, "freely permitted" in 1530, and "free from restraint" – often as a pejorative remark – in the 16th and the 17th centuries. In 16th-century England, liberal could have positive or negative attributes in referring to someone's generosity or indiscretion. In Much Ado About Nothing, Shakespeare wrote of "a liberal villaine" who "hath ... confest his vile encounters". With the rise of the Enlightenment, the word acquired decisively more positive undertones, being defined as "free from narrow prejudice" in 1781 and "free from bigotry" in 1823. In 1815, the first use of the word liberalism appeared in English. In Spain, the Liberales, the first group to use the liberal label in a political context, fought for the implementation of the 1812 Constitution for decades. From 1820 to 1823, during the Trienio Liberal, King Ferdinand VII was compelled by the liberales to swear to uphold the Constitution. By the middle of the 19th century, liberal was used as a politicised term for parties and movements worldwide.
Over time, the meaning of the word "liberalism" began to diverge in different parts of the world. According to the Encyclopædia Britannica, "In the United States, liberalism is associated with the welfare-state policies of the New Deal programme of the Democratic administration of Pres. Franklin D. Roosevelt, whereas in Europe it is more commonly associated with a commitment to limited government and laissez-faire economic policies." Consequently, in the U.S., the ideas of individualism and laissez-faire economics previously associated with classical liberalism became the basis for the emerging school of libertarian thought, and are key components of American conservatism.
History
Isolated strands of liberal thought had existed in Western philosophy since the Ancient Greeks, but the first major signs of liberal politics emerged in modern times. In the 17th century, political and financial disputes between the English Parliament and King Charles I sparked a civil war in the 1640s, culminating in the execution of Charles and the establishment of the Commonwealth of England, and producing a significant amount of political and philosophical commentary. In particular, the Levellers, a radical political movement, published their manifesto Agreement of the People, in which they advocated popular sovereignty, an extended voting franchise, religious tolerance, and equality before the law. Many of the liberal concepts of Locke were foreshadowed in the radical ideas that were freely aired at the time. Algernon Sidney was second only to John Locke in his influence on liberal political thought in eighteenth-century Britain. He believed that absolute monarchy was a great political evil, and his major work, Discourses Concerning Government, argued that the subjects of the monarch were entitled by right to share in the government through advice and counsel.
These ideas were first unified as a distinct ideology by the English philosopher John Locke, generally regarded as the father of modern liberalism. Locke developed the radical notion that government acquires consent from the governed, which has to be constantly present for a government to remain legitimate. His influential Two Treatises (1690), the foundational text of liberal ideology, outlined his major ideas. His insistence that lawful government did not have a supernatural basis was a sharp break from previous theories of governance. Locke also defined the concept of the separation of church and state. Based on the social contract principle, Locke argued that there was a natural right to the liberty of conscience, which he argued must therefore remain protected from any government authority. He also formulated a general defence for religious toleration in his Letters Concerning Toleration. Locke was influenced by the liberal ideas of John Milton, who was a staunch advocate of freedom in all its forms. Milton argued for disestablishment as the only effective way of achieving broad toleration. In his Areopagitica, Milton provided one of the first arguments for the importance of freedom of speech – "the liberty to know, to utter, and to argue freely according to conscience, above all liberties".
The impact of these ideas steadily increased during the 17th century in England, culminating in the Glorious Revolution of 1688, which enshrined parliamentary sovereignty and the right of revolution, and led to the establishment of what many consider the first modern, liberal state. Significant legislative milestones in this period included the Habeas Corpus Act of 1679, which strengthened the convention that forbade detention lacking sufficient cause or evidence. The Bill of Rights formally established the supremacy of the law and of parliament over the monarch and laid down basic rights for all Englishmen. The Bill made royal interference with the law and with elections to parliament illegal, made the agreement of parliament necessary for the implementation of any new taxes and outlawed the maintenance of a standing army during peacetime without parliament's consent. The right to petition the monarch was granted to everyone and "cruel and unusual punishments" were made illegal under all circumstances. This was followed a year later with the Act of Toleration, which drew its ideological content from John Locke's four letters advocating religious toleration. The Act allowed freedom of worship to Nonconformists who pledged oaths of Allegiance and Supremacy to the Anglican Church. In 1695, the Commons refused to renew the Licensing of the Press Act 1662, leading to a continuous period of unprecedented freedom of the press.
Age of Enlightenment
The development of liberalism continued throughout the 18th century with the burgeoning Enlightenment ideals of the era. This was a period of profound intellectual vitality that questioned old traditions and influenced several European monarchies throughout the 18th century. In contrast to England, the French experience in the 18th century was characterised by the perpetuation of feudal payments and rights and absolutism. Ideas that challenged the status quo were often harshly repressed. Most of the philosophes of the French Enlightenment were progressive in the liberal sense and advocated the reform of the French system of government along more constitutional and liberal lines. The American Enlightenment was a period of intellectual ferment in the thirteen American colonies during 1714–1818, which led to the American Revolution and the creation of the American Republic. Influenced by the 18th-century European Enlightenment and its own native American philosophy, the American Enlightenment applied scientific reasoning to politics, science, and religion, promoted religious tolerance, and restored literature, the arts, and music as important disciplines and professions worthy of study in colleges.
Baron de Montesquieu wrote a series of highly influential works in the early 18th century, including Persian Letters (1721) and The Spirit of the Laws (1748). The latter exerted tremendous influence, both inside and outside France. Montesquieu pleaded in favour of a constitutional system of government, the preservation of civil liberties and the law, and the idea that political institutions ought to reflect the social and geographical aspects of each community. In particular, he argued that political liberty required the separation of the powers of government. Building on John Locke's Second Treatise of Government, he advocated that the executive, legislative, and judicial functions of government should be assigned to different bodies. He also emphasised the importance of a robust due process in law, including the right to a fair trial, the presumption of innocence and proportionality in the severity of punishment. Another important figure of the French Enlightenment was Voltaire. Initially believing in the constructive role an enlightened monarch could play in improving the welfare of the people, he eventually came to a new conclusion: "It is up to us to cultivate our garden." His most polemical and ferocious attacks on intolerance and religious persecutions began to appear a few years later. Despite much persecution, Voltaire remained a courageous polemicist who indefatigably fought for civil rights – the right to a fair trial and freedom of religion – and who denounced the hypocrisies and injustices of the Ancien Régime.
Political tension between England and its American colonies grew after 1765 over the issue of taxation without representation, culminating in the Declaration of Independence of a new republic, and the resulting American Revolutionary War to defend it.
The Declaration of Independence, written in committee largely by Thomas Jefferson, echoed Locke: "We hold these truths to be self-evident, that all men are created equal, and are endowed by their creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness." After the war, the leaders debated about how to move forward. The Articles of Confederation, written in 1776, now appeared inadequate to provide security, or even a functional government. The Confederation Congress called a Constitutional Convention in 1787, which resulted in the writing of a new Constitution of the United States establishing a federal government.
In the context of the times, the Constitution was a republican and liberal document. It established a strong national government and provided a clear separation of powers between the branches of government (executive, legislative, and judicial) to limit any one branch from exercising the core functions of another. Additionally, the first ten amendments to the Constitution, known collectively as the Bill of Rights, placed restrictions on the powers of government and offered specific protections of several of the natural rights liberal thinkers used to justify the Revolution. It remains the oldest liberal governing document in effect worldwide.
Historians widely regard the French Revolution as one of the most important events in history. The Revolution is often seen as marking the "dawn of the modern era", and its convulsions are widely associated with "the triumph of liberalism".
The French Revolution began in 1789 with the convocation of the Estates-General in May. The first year of the Revolution witnessed members of the Third Estate proclaiming the Tennis Court Oath in June and the Storming of the Bastille in July. The two key events that marked the triumph of liberalism were the abolition of feudalism in France on the night of 4 August 1789, which marked the collapse of feudal and old traditional rights, privileges and restrictions, and the passage of the Declaration of the Rights of Man and of the Citizen in August. The rise of Napoleon as dictator in 1799 heralded a reversal of many of the republican and democratic gains. However, Napoleon did not restore the ancien régime. He kept much of the liberalism and imposed a liberal code of law, the Code Napoléon.
Outside France the Revolution had a major impact and its ideas became widespread. Furthermore, the French armies in the 1790s and 1800s directly overthrew feudal remains in much of western Europe. They liberalised property laws, ended seigneurial dues, abolished the guilds of merchants and craftsmen to facilitate entrepreneurship, legalised divorce, and closed the Jewish ghettos. The Inquisition ended, as did the Holy Roman Empire. The power of church courts and religious authority was sharply reduced, and equality under the law was proclaimed for all men.
Artz emphasises the benefits the Italians gained from the French Revolution:
- For nearly two decades the Italians had the excellent codes of law, a fair system of taxation, a better economic situation, and more religious and intellectual toleration than they had known for centuries ... Everywhere old physical, economic, and intellectual barriers had been thrown down and the Italians had begun to be aware of a common nationality.
Likewise in Switzerland the long-term impact of the French Revolution has been assessed by Martin:
- It proclaimed the equality of citizens before the law, equality of languages, freedom of thought and faith; it created a Swiss citizenship, basis of our modern nationality, and the separation of powers, of which the old regime had no conception; it suppressed internal tariffs and other economic restraints; it unified weights and measures, reformed civil and penal law, authorised mixed marriages (between Catholics and Protestants), suppressed torture and improved justice; it developed education and public works.
The radical liberal movement began in the 1790s in England and concentrated on parliamentary and electoral reform, emphasising natural rights and popular sovereignty. Thomas Paine's The Rights of Man (1791) was a response to Edmund Burke's conservative essay Reflections on the Revolution in France. The ensuing Revolution Controversy featured, among others, Mary Wollstonecraft, who followed with an early feminist tract, A Vindication of the Rights of Woman. Radicals encouraged mass support for democratic reform along with rejection of the monarchy, aristocracy, and all forms of privilege. The Reform Act 1832 was put through with the support of public outcry, mass meetings of "political unions" and riots in some cities. The Act enfranchised the middle classes but failed to meet radical demands. Following the Reform Act, the mainly aristocratic Whigs in the House of Commons were joined by a small number of parliamentary Radicals, as well as an increased number of middle-class Whigs. By 1839 they were informally being called "the Liberal Party". The Liberals produced one of the greatest British prime ministers – William Gladstone, who was also known as the Grand Old Man and was the towering political figure of liberalism in the 19th century. Under Gladstone, the Liberals reformed education, disestablished the Church of Ireland, and introduced the secret ballot for local and parliamentary elections.
Liberal economic theory
The development into maturity of classical liberalism took place before and after the French Revolution in Britain, and was based on the following core concepts: classical economics, free trade, laissez-faire government with minimal intervention and low taxation, and a balanced budget. Classical liberals were committed to individualism, liberty and equal rights. The primary intellectual influences on 19th-century liberal trends were those of Adam Smith and the classical economists, and Jeremy Bentham and John Stuart Mill.
Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of J. S. Mill's Principles in 1848. Smith addressed the motivation for economic activity, the causes of prices and the distribution of wealth, and the policies the state should follow in order to maximise wealth.
Smith wrote that as long as supply, demand, prices, and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, would maximise the wealth of a society through profit-driven production of goods and services. An "invisible hand" directed individuals and firms to work toward the nation's good as an unintended consequence of efforts to maximise their own gain. This provided a moral justification for the accumulation of wealth, which had previously been viewed by some as sinful.
His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies, and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income. Smith was one of the progenitors of the idea, which was long central to classical liberalism and has resurfaced in the globalisation literature of the later 20th and early 21st centuries, that free trade promotes peace.
Utilitarianism provided the political justification for the implementation of economic liberalism by British governments, which was to dominate economic policy from the 1830s. Although utilitarianism prompted legislative and administrative reform and John Stuart Mill's later writings on the subject foreshadowed the welfare state, it was mainly used as a justification for laissez-faire. The central concept of utilitarianism, which was developed by Jeremy Bentham, was that public policy should seek to provide "the greatest happiness of the greatest number". While this could be interpreted as a justification for state action to reduce poverty, it was used by classical liberals to justify inaction with the argument that the net benefit to all individuals would be higher. His philosophy proved to be extremely influential on government policy and led to increased Benthamite attempts at government social control, including Robert Peel's Metropolitan Police, prison reforms, the workhouses and asylums for the mentally ill.
The repeal of the Corn Laws in 1846 was a watershed moment and encapsulated the triumph of free trade and liberal economics. The Anti-Corn Law League brought together a coalition of liberal and radical groups in support of free trade under the leadership of Richard Cobden and John Bright, who opposed militarism and public expenditure. Their policies of low public expenditure and low taxation were later adopted by the liberal chancellor of the exchequer and later prime minister, William Ewart Gladstone. Although British classical liberals aspired to a minimum of state activity, the passage of the Factory Acts in the early 19th century, which involved government interference in the economy, met with their approval.
Spread of liberalism
Abolitionist and suffrage movements spread, along with representative and democratic ideals. France established an enduring republic in the 1870s, and a vicious war in the United States ensured the integrity of the nation and the abolition of slavery in the south. Meanwhile, a mixture of liberal and nationalist sentiment in Italy and Germany brought about the unification of the two countries in the late 19th century. Liberal agitation in Latin America led to independence from the imperial power of Spain and Portugal.
In France, the July Revolution of 1830, orchestrated by liberal politicians and journalists, removed the Bourbon monarchy and inspired similar uprisings elsewhere in Europe. Frustration with the pace of political progress in the early 19th century sparked the even larger revolutions of 1848. Revolutions spread throughout the Austrian Empire, the German states, and the Italian states. Governments fell rapidly. Liberal nationalists demanded written constitutions, representative assemblies, greater suffrage rights, and freedom of the press. A second republic was proclaimed in France. Serfdom was abolished in Prussia, Galicia, Bohemia, and Hungary. The indomitable Metternich, the Austrian builder of the reigning conservative order, shocked Europe when he resigned and fled to Britain in panic and disguise.
Eventually, however, the success of the revolutionaries petered out. Without French help, the Italians were easily defeated by the Austrians. With some luck and skill, Austria also managed to contain the bubbling nationalist sentiments in Germany and Hungary, helped along by the failure of the Frankfurt Assembly to unify the German states into a single nation. Two decades later, however, the Italians and the Germans realised their dreams for unification and independence. The Sardinian Prime Minister, Camillo di Cavour, was a shrewd liberal who understood that the only effective way for the Italians to gain independence was if the French were on their side. Napoleon III agreed to Cavour's request for assistance and France defeated Austria in the Franco-Austrian War of 1859, setting the stage for Italian independence. German unification transpired under the leadership of Otto von Bismarck, who decimated the enemies of Prussia in war after war, finally triumphing against France in 1871 and proclaiming the German Empire in the Hall of Mirrors at Versailles, ending another saga in the drive for national unification. The French proclaimed a third republic after their loss in the war.
By the end of the nineteenth century, the principles of classical liberalism were being increasingly challenged by downturns in economic growth, a growing perception of the evils of poverty, unemployment and relative deprivation present within modern industrial cities, and the agitation of organised labour. The ideal of the self-made individual, who through hard work and talent could make his or her place in the world, seemed increasingly implausible. A major political reaction against the changes introduced by industrialisation and laissez-faire capitalism came from conservatives concerned about social balance, although socialism later became a more important force for change and reform. Some Victorian writers – including Charles Dickens, Thomas Carlyle, and Matthew Arnold – became early influential critics of social injustice.
John Stuart Mill contributed enormously to liberal thought by combining elements of classical liberalism with what eventually became known as the new liberalism. Mill's 1859 On Liberty addressed the nature and limits of the power that can be legitimately exercised by society over the individual. He gave an impassioned defence of free speech, arguing that free discourse is a necessary condition for intellectual and social progress. Mill defined "social liberty" as protection from "the tyranny of political rulers". He identified the different forms that tyranny can take, referring to them as social tyranny and the tyranny of the majority. Social liberty meant limits on the ruler's power through obtaining recognition of political liberties or rights and by the establishment of a system of "constitutional checks".
However, although Mill's initial economic philosophy supported free markets and argued that progressive taxation penalised those who worked harder, he later altered his views toward a more socialist bent, adding chapters to his Principles of Political Economy in defence of a socialist outlook, and defending some socialist causes, including the radical proposal that the whole wage system be abolished in favour of a co-operative wage system.
Another early liberal convert to greater government intervention was Thomas Hill Green. Seeing the effects of alcohol, he believed that the state should foster and protect the social, political and economic environments in which individuals will have the best chance of acting according to their consciences. The state should intervene only where there is a clear, proven and strong tendency of a liberty to enslave the individual. Green regarded the national state as legitimate only to the extent that it upholds a system of rights and obligations that is most likely to foster individual self-realisation.
This strand began to coalesce into the social liberalism movement at the turn of the twentieth century in Britain. The New Liberals, who included intellectuals such as L. T. Hobhouse and John A. Hobson, saw individual liberty as something achievable only under favourable social and economic circumstances. In their view, the poverty, squalor, and ignorance in which many people lived made it impossible for freedom and individuality to flourish. New Liberals believed that these conditions could be ameliorated only through collective action coordinated by a strong, welfare-oriented, and interventionist state. The People's Budget of 1909, championed by David Lloyd George and fellow liberal Winston Churchill, introduced unprecedented taxes on the wealthy in Britain and radical social welfare programmes to the country's policies. It was the first budget with the expressed intent of redistributing wealth among the public.
Liberalism gained momentum in the beginning of the 20th century. The bastion of autocracy, the Russian Tsar, was overthrown in the first phase of the Russian Revolution. The Allied victory in the First World War and the collapse of four empires seemed to mark the triumph of liberalism across the European continent, not just among the victorious allies, but also in Germany and the newly created states of Eastern Europe. Militarism, as typified by Germany, was defeated and discredited. As Blinkhorn argues, the liberal themes were ascendant in terms of "cultural pluralism, religious and ethnic toleration, national self-determination, free-market economics, representative and responsible government, free trade, unionism, and the peaceful settlement of international disputes through a new body, the League of Nations".
Liberalism was defeated in Russia when the Communists came to power under Vladimir Lenin in October 1917, in Italy when Mussolini set up his dictatorship in 1922, in Poland in 1926 under Józef Piłsudski, and in Spain in 1939 after the Spanish Civil War. Japan, which was generally liberal in the 1920s, saw liberalism wither away in the 1930s under pressure from the military.
The worldwide Great Depression, starting in 1929, hastened the discrediting of liberal economics and strengthened calls for state control over economic affairs. Economic woes prompted widespread unrest in the European political world, leading to the strengthening of fascism and communism. Their rise culminated in the outbreak of the Second World War in 1939. The Allies, which included most of the important liberal nations as well as communist Russia, won World War II, defeating Nazi Germany, Fascist Italy, and militarist Japan. After the war, there was a falling out between Russia and the West, and the Cold War opened in 1947 between the Communist Eastern Bloc and the liberal Western Alliance.
Meanwhile, the definitive liberal response to the Great Depression was given by the English economist John Maynard Keynes (1883–1946). Keynes had been "brought up" as a classical liberal, but especially after World War I became increasingly a welfare or social liberal. A prolific writer, among many other works, he had begun a theoretical work examining the relationship between unemployment, money and prices back in the 1920s. His The General Theory of Employment, Interest and Money was published in 1936, and served as a theoretical justification for the interventionist policies Keynes favoured for tackling a recession. The General Theory challenged the earlier neo-classical economic paradigm, which had held that, provided it was unfettered by government interference, the market would naturally establish full employment equilibrium.
The book advocated activist economic policy by government to stimulate demand in times of high unemployment, for example by spending on public works. "Let us be up and doing, using our idle resources to increase our wealth," he wrote in 1928. "With men and plants unemployed, it is ridiculous to say that we cannot afford these new developments. It is precisely with these plants and these men that we shall afford them." Where the market failed to properly allocate resources, the government was required to stimulate the economy until private funds could start flowing again – a "prime the pump" strategy designed to boost industrial production.
The social liberal programme launched by President Roosevelt in the United States in 1933 reduced the unemployment rate from roughly 25 percent to about 15 percent by 1940. Additional state spending and the very large public works programme sparked by the Second World War eventually pulled the United States out of the Great Depression. From 1940 to 1941, government spending increased by 59 percent, the gross domestic product increased 17 percent, and unemployment fell below 10 percent for the first time since 1929.
The comprehensive welfare state was built in the UK after the Second World War. Although it was largely accomplished by the Labour Party, it was also significantly designed by John Maynard Keynes, who laid the economic foundations, and by William Beveridge, who designed the welfare system. By the early years of the 21st century, most countries in the world had mixed economies, combining capitalism with economic liberalism.
The Cold War featured extensive ideological competition and several proxy wars, but the widely feared Third World War between the Soviet Union and the United States never occurred. While communist states and liberal democracies competed against one another, an economic crisis in the 1970s inspired a move away from Keynesian economics, especially under Margaret Thatcher in the UK and Ronald Reagan in the US.
This classical liberal renewal, called pejoratively "neoliberalism" by its opponents, lasted through the 1980s and the 1990s. Meanwhile, nearing the end of the 20th century, communist states in Eastern Europe collapsed precipitously, leaving liberal democracies as the only major forms of government in the West.
At the beginning of the Second World War, the number of democracies around the world was about the same as it had been forty years before. After 1945, liberal democracies spread very quickly, but then retreated. In The Spirit of Democracy, Larry Diamond argues that by 1974, "dictatorship, not democracy, was the way of the world", and that "Barely a quarter of independent states chose their governments through competitive, free, and fair elections." Diamond goes on to say that democracy bounced back and by 1995 the world was "predominantly democratic". Liberalism still faces challenges, especially with the phenomenal growth of China as a model combination of authoritarian government and economic liberalism. The Great Recession, which began around 2007, prompted a resurgence in Keynesian economic thought.
Philosophy
Liberalism – both as a political current and an intellectual tradition – is mostly a modern phenomenon that started in the 17th century, although some liberal philosophical ideas had precursors in classical antiquity. The Roman Emperor Marcus Aurelius praised, "the idea of a polity administered with regard to equal rights and equal freedom of speech, and the idea of a kingly government which respects most of all the freedom of the governed". Scholars have also recognised a number of principles familiar to contemporary liberals in the works of several Sophists and in the Funeral Oration by Pericles. Liberal philosophy symbolises an extensive intellectual tradition that has examined and popularised some of the most important and controversial principles of the modern world. Its immense scholarly and academic output has been characterised as containing "richness and diversity", but that diversity often has meant that liberalism comes in different formulations and presents a challenge to anyone looking for a clear definition.
Though all liberal doctrines possess a common heritage, scholars frequently assume that those doctrines contain "separate and often contradictory streams of thought". The objectives of liberal theorists and philosophers have differed across various times, cultures, and continents. The diversity of liberalism can be gleaned from the numerous adjectives that liberal thinkers and movements have attached to the very term liberalism, including classical, egalitarian, economic, social, welfare-state, ethical, humanist, deontological, perfectionist, democratic, and institutional, to name a few. Despite these variations, liberal thought does exhibit a few definite and fundamental conceptions. At its very root, liberalism is a philosophy about the meaning of humanity and society.
Political philosopher John Gray identified the common strands in liberal thought as being individualist, egalitarian, meliorist, and universalist. The individualist element avers the ethical primacy of the human being against the pressures of social collectivism, the egalitarian element assigns the same moral worth and status to all individuals, the meliorist element asserts that successive generations can improve their sociopolitical arrangements, and the universalist element affirms the moral unity of the human species and marginalises local cultural differences. The meliorist element has been the subject of much controversy, defended by thinkers such as Immanuel Kant, who believed in human progress, while suffering from attacks by thinkers such as Rousseau, who believed that human attempts to improve themselves through social cooperation would fail. Describing the liberal temperament, Gray claimed that it "has been inspired by scepticism and by a fideistic certainty of divine revelation ... it has exalted the power of reason even as, in other contexts, it has sought to humble reason's claims".
The liberal philosophical tradition has searched for validation and justification through several intellectual projects. The moral and political suppositions of liberalism have been based on traditions such as natural rights and utilitarian theory, although sometimes liberals even sought support from scientific and religious circles. Through all these strands and traditions, scholars have identified the following major common facets of liberal thought: believing in equality and individual liberty, supporting private property and individual rights, supporting the idea of limited constitutional government, and recognising the importance of related values such as pluralism, toleration, autonomy, bodily integrity, and consent.
Classical and modern
Enlightenment philosophers are given credit for shaping liberal ideas. Thomas Hobbes attempted to determine the purpose and the justification of governing authority in a post-civil war England. Employing the idea of a state of nature – a hypothetical war-like scenario prior to the State – he constructed the idea of a social contract that individuals enter into to guarantee their security and in so doing form the State, concluding that only an absolute sovereign would be fully able to sustain such a peace. John Locke, while adopting Hobbes's idea of a state of nature and social contract, nevertheless argued that when the monarch became a tyrant, that constituted a violation of the social contract, which bestows life, liberty, and property as a natural right. He concluded that the people have a right to overthrow a tyrant. By placing life, liberty and property as the supreme value of law and authority, Locke formulated the basis of liberalism based on social contract theory. To these early Enlightenment thinkers, securing the most essential amenities of life – liberty and private property among them – required the formation of a "sovereign" authority with universal jurisdiction. In a natural state of affairs, liberals argued, humans were driven by the instincts of survival and self-preservation, and the only way to escape from such a dangerous existence was to form a common and supreme power capable of arbitrating between competing human desires. This power could be formed in the framework of a civil society that allows individuals to make a voluntary social contract with the sovereign authority, transferring their natural rights to that authority in return for the protection of life, liberty, and property. These early liberals often disagreed about the most appropriate form of government, but they all shared the belief that liberty was natural and that its restriction needed strong justification.
Liberals generally believed in limited government, although several liberal philosophers decried government outright, with Thomas Paine writing, "government even in its best state is but a necessary evil."
As part of the project to limit the powers of government, various liberal theorists such as James Madison and the Baron de Montesquieu conceived the notion of separation of powers, a system designed to equally distribute governmental authority among the executive, legislative, and judicial branches. Governments had to realise, liberals maintained, that poor and improper governance gave the people authority to overthrow the ruling order through any and all possible means, even through outright violence and revolution, if needed. Contemporary liberals, heavily influenced by social liberalism, have continued to support limited constitutional government while also advocating for state services and provisions to ensure equal rights. Modern liberals claim that formal or official guarantees of individual rights are irrelevant when individuals lack the material means to benefit from those rights and call for a greater role for government in the administration of economic affairs.
Early liberals also laid the groundwork for the separation of church and state. As heirs of the Enlightenment, liberals believed that any given social and political order emanated from human interactions, not from divine will. Many liberals were openly hostile to religious belief itself, but most concentrated their opposition to the union of religious and political authority, arguing that faith could prosper on its own, without official sponsorship or administration by the state.
Beyond identifying a clear role for government in modern society, liberals also have obsessed over the meaning and nature of the most important principle in liberal philosophy: liberty. From the 17th century until the 19th century, liberals – from Adam Smith to John Stuart Mill – conceptualised liberty as the absence of interference from government and from other individuals, claiming that all people should have the freedom to develop their own unique abilities and capacities without being sabotaged by others. Mill's On Liberty (1859), one of the classic texts in liberal philosophy, proclaimed, "the only freedom which deserves the name, is that of pursuing our own good in our own way." Support for laissez-faire capitalism is often associated with this principle, with Friedrich Hayek arguing in The Road to Serfdom (1944) that reliance on free markets would preclude totalitarian control by the state.
Beginning in the late 19th century, however, a new conception of liberty entered the liberal intellectual arena. This new kind of liberty became known as positive liberty to distinguish it from the prior negative version, and it was first developed by British philosopher Thomas Hill Green. Green rejected the idea that humans were driven solely by self-interest, emphasising instead the complex circumstances that are involved in the evolution of our moral character. In a very profound step for the future of modern liberalism, he also tasked society and political institutions with the enhancement of individual freedom and identity and the development of moral character, will and reason, and called on the state to create the conditions that allow for these, giving individuals the opportunity for genuine choice. Foreshadowing the new liberty as the freedom to act rather than to avoid suffering from the acts of others, Green wrote the following:
If it were ever reasonable to wish that the usage of words had been other than it has been ... one might be inclined to wish that the term 'freedom' had been confined to the ... power to do what one wills.
Rather than previous liberal conceptions viewing society as populated by selfish individuals, Green viewed society as an organic whole in which all individuals have a duty to promote the common good. His ideas spread rapidly and were developed by other thinkers such as L. T. Hobhouse and John Hobson. In a few years, this New Liberalism had become the essential social and political programme of the Liberal Party in Britain, and it would encircle much of the world in the 20th century. In addition to examining negative and positive liberty, liberals have tried to understand the proper relationship between liberty and democracy. As they struggled to expand suffrage rights, liberals increasingly understood that people left out of the democratic decision-making process were liable to the tyranny of the majority, a concept explained in Mill's On Liberty and in Democracy in America (1835) by Alexis de Tocqueville. As a response, liberals began demanding proper safeguards to thwart majorities in their attempts at suppressing the rights of minorities.
Besides liberty, liberals have developed several other principles important to the construction of their philosophical structure, such as equality, pluralism, and toleration. Highlighting the confusion over the first principle, Voltaire commented, "equality is at once the most natural and at times the most chimerical of things." All forms of liberalism assume, in some basic sense, that individuals are equal. In maintaining that people are naturally equal, liberals assume that they all possess the same right to liberty. In other words, no one is inherently entitled to enjoy the benefits of liberal society more than anyone else, and all people are equal subjects before the law. Beyond this basic conception, liberal theorists diverge on their understanding of equality. American philosopher John Rawls emphasised the need to ensure not only equality under the law, but also the equal distribution of material resources that individuals required to develop their aspirations in life. Libertarian thinker Robert Nozick disagreed with Rawls, championing the former version of Lockean equality instead. To contribute to the development of liberty, liberals also have promoted concepts like pluralism and toleration. By pluralism, liberals refer to the proliferation of opinions and beliefs that characterise a stable social order. Unlike many of their competitors and predecessors, liberals do not seek conformity and homogeneity in the way that people think; in fact, their efforts have been geared towards establishing a governing framework that harmonises and minimises conflicting views, but still allows those views to exist and flourish. For liberal philosophy, pluralism leads easily to toleration. Since individuals will hold diverging viewpoints, liberals argue, they ought to uphold and respect the right of one another to disagree.
From the liberal perspective, toleration was initially connected to religious toleration, with Spinoza condemning "the stupidity of religious persecution and ideological wars". Toleration also played a central role in the ideas of Kant and John Stuart Mill. Both thinkers believed that society will contain different conceptions of a good ethical life and that people should be allowed to make their own choices without interference from the state or other individuals.
Criticism and support
Liberalism has drawn both criticism and support in its history from various ideological groups. Some scholars suggest that liberalism gave rise to feminism, although others maintain that liberal democracy is inadequate for the realisation of feminist objectives. Liberal feminism, the dominant tradition in feminist history, hopes to eradicate all barriers to gender equality – claiming that the continued existence of such barriers eviscerates the individual rights and freedoms ostensibly guaranteed by a liberal social order. British philosopher Mary Wollstonecraft is widely regarded as the pioneer of liberal feminism, with A Vindication of the Rights of Woman (1792) expanding the boundaries of liberalism to include women in the political structure of liberal society. Less friendly to the goals of liberalism has been conservatism. Edmund Burke, considered by some to be the first major proponent of modern conservative thought, offered a blistering critique of the French Revolution by assailing the liberal pretensions to the power of rationality and to the natural equality of all humans.
Some confusion remains about the relationship between social liberalism and socialism, despite the fact that many variants of socialism distinguish themselves markedly from liberalism by opposing capitalism, hierarchy, and private property. Socialism formed as a group of related yet divergent ideologies in the 19th century, such as Christian socialism, communism (with the writings of Karl Marx), and social anarchism (with the writings of Mikhail Bakunin), the latter two influenced by the Paris Commune. These ideologies – as with liberalism and conservatism – fractured into several major and minor movements in the following decades. Marx rejected the foundational aspects of liberal theory, hoping to destroy both the state and the liberal distinction between society and the individual while fusing the two into a collective whole designed to overthrow the developing capitalist order of the 19th century. Today, socialist parties and ideas remain a political force with varying degrees of power and influence on all continents, leading national governments in many countries. Liberal socialism is a socialist political philosophy that includes liberal principles within it. Liberal socialism does not aim to abolish capitalism and replace it with a socialist economy; instead, it supports a mixed economy that includes both public and private property in capital goods. Principles that can be described as "liberal socialist" have been based upon or developed by the following philosophers: John Stuart Mill, Eduard Bernstein, John Dewey, Carlo Rosselli, Norberto Bobbio and Chantal Mouffe. Other important liberal socialist figures include Guido Calogero, Piero Gobetti, Leonard Trelawny Hobhouse, and R. H. Tawney. Liberal socialism has been particularly prominent in British and Italian politics.
One of the most outspoken critics of liberalism was the Roman Catholic Church, which resulted in lengthy power struggles between national governments and the Church. In the same vein, conservatives have also attacked what they perceive to be the reckless liberal pursuit of progress and material gains, arguing that such preoccupations undermine traditional social values rooted in community and continuity. However, a few variations of conservatism, like liberal conservatism, expound some of the same ideas and principles championed by classical liberalism, including "small government and thriving capitalism".
Social democracy, an ideology advocating progressive modification of capitalism, emerged in the 20th century and was influenced by socialism. Yet unlike socialism, it was not collectivist nor anti-capitalist. Broadly defined as a project that aims to correct, through government reformism, what it regards as the intrinsic defects of capitalism by reducing inequalities, social democracy was also not against the state. Several commentators have noted strong similarities between social liberalism and social democracy, with one political scientist even calling American liberalism "bootleg social democracy" due to the absence of a significant social democratic tradition in the United States that liberals have tried to rectify. Another movement associated with modern democracy, Christian democracy, hopes to spread Catholic social ideas and has gained a large following in some European nations. The early roots of Christian democracy developed as a reaction against the industrialisation and urbanisation associated with laissez-faire liberalism in the 19th century. Despite these complex relationships, some scholars have argued that liberalism actually "rejects ideological thinking" altogether, largely because such thinking could lead to unrealistic expectations for human society.
Liberalism is frequently cited as the dominant ideology of modern times. Politically, liberals have organised extensively throughout the world. Liberal parties, think tanks, and other institutions are common in many nations, although they advocate for different causes based on their ideological orientation. Liberal parties can be centre-left, centrist, or centre-right depending on their location.
They can further be divided based on their adherence to social liberalism or classical liberalism, although all liberal parties and individuals share basic similarities, including the support for civil rights and democratic institutions. On a global level, liberals are united in the Liberal International, which contains over 100 influential liberal parties and organisations from across the ideological spectrum.
Some parties in the LI are among the most famous in the world, such as the Liberal Party of Canada, while others are among the smallest, such as the Gibraltar Liberal Party. Regionally, liberals are organised through various institutions depending on the prevailing geopolitical context. The European Liberal Democrat and Reform Party, for example, represents the interests of liberals in Europe while the Alliance of Liberals and Democrats for Europe is the predominant liberal group in the European Parliament.
In Europe, liberalism has a long tradition dating back to the 17th century. Scholars often split those traditions into British and French versions, with the former version of liberalism emphasising the expansion of democratic values and constitutional reform and the latter rejecting authoritarian political and economic structures, as well as being involved with nation-building. The continental French version was deeply divided between moderates and progressives, with the moderates tending to elitism and the progressives supporting the universalisation of fundamental institutions, such as universal suffrage, universal education, and the expansion of property rights. Over time, the moderates displaced the progressives as the main guardians of continental European liberalism. A prominent example of these divisions is the German Free Democratic Party, which was historically divided between national liberal and social liberal factions.
Before World War I, liberal parties dominated the European political scene, but they were gradually displaced by socialists and social democrats in the early 20th century. The fortunes of liberal parties since World War II have been mixed, with some gaining strength while others suffered from continuous declines. The fall of the Soviet Union and the breakup of Yugoslavia at the end of the 20th century, however, allowed the formation of many liberal parties throughout Eastern Europe. These parties developed varying ideological characters. Some, such as the Slovenian Liberal Democrats or the Lithuanian Social Liberals, have been characterised as centre-left. Others, such as the Romanian National Liberal Party, have been classified as centre-right.
In the United Kingdom, the Liberal Party was founded in 1859 and was one of the two major parties in British politics for the remainder of that century, the other being the Conservative Party. William Ewart Gladstone of the Liberals was Prime Minister four times. The party lost its influence in the early 20th century due to the growth of the Labour Party, and in 1988 it joined with the Labour splinter Social Democratic Party to form the Liberal Democrats. Following the general election of 2010, the Liberal Democrats formed a coalition government with the Conservatives, resulting in party leader Nick Clegg becoming the Deputy Prime Minister and many other members becoming ministers. However, the Liberal Democrats lost 49 of their 56 seats in the 2015 general election, with their review of the result concluding that a number of policy reversals were responsible for their poor electoral performance.
Both in Britain and elsewhere in Western Europe, liberal parties have often cooperated with socialist and social democratic parties, as evidenced by the Purple Coalition in the Netherlands during the late 1990s and into the 21st century. The Purple Coalition, one of the most consequential in Dutch history, brought together the progressive left-liberal D66, the economic liberal and centre-right VVD, and the social democratic Labour Party – an unusual combination that ultimately legalised same-sex marriage, euthanasia, and prostitution while also instituting a non-enforcement policy on marijuana.
In North America, unlike Europe and Latin America, the word liberalism almost exclusively refers to social liberalism. The dominant Canadian party is the Liberal Party, and in the United States the Democratic Party is usually considered liberal. In Canada, the long-dominant Liberal Party, colloquially known as the Grits, ruled the country for nearly 70 years during the 20th century. The party produced some of the most influential prime ministers in Canadian history, including Pierre Trudeau, Lester B. Pearson and Jean Chrétien, and has been primarily responsible for the development of the Canadian welfare state. The enormous success of the Liberals – virtually unmatched in any other liberal democracy – has prompted many political commentators over time to identify them as the nation's natural governing party.
In the United States, modern liberalism traces its history to the popular presidency of Franklin Delano Roosevelt, who initiated the New Deal in response to the Great Depression and won an unprecedented four elections. The New Deal coalition established by Franklin Roosevelt left a decisive legacy and influenced many future American presidents, including John F. Kennedy, a self-described liberal who defined a liberal as "someone who looks ahead and not behind, someone who welcomes new ideas without rigid reactions ... someone who cares about the welfare of the people".
In the late 20th century, a conservative backlash against the kind of liberalism championed by Roosevelt and Kennedy developed in the Republican Party. This brand of conservatism primarily reacted against the cultural and political upheavals of the 1960s. It helped launch into power such presidents as Ronald Reagan, George H. W. Bush, George W. Bush, and Donald Trump. Economic woes in the early 21st century led to a resurgence of social liberalism with the election of Barack Obama in the 2008 presidential election, along with countervailing and partly reactive conservative populism and nativism embodied in the Tea Party movement and the election of Donald Trump.
In Latin America, liberal unrest dates back to the 19th century, when liberal groups frequently fought against and violently overthrew conservative regimes in several countries across the region. Liberal revolutions in countries such as Mexico and Ecuador ushered in the modern world for much of Latin America. Latin American liberals generally emphasised free trade, private property, and anti-clericalism. Today, market liberals in Latin America are organised in the Red Liberal de América Latina (RELIAL), a centre-right network that brings together dozens of liberal parties and organisations.
RELIAL features parties as geographically diverse as the Mexican Nueva Alianza and the Cuban Liberal Union, which aims to secure power in Cuba. Some major liberal parties in the region continue, however, to align themselves with social liberal ideas and policies – a notable case being the Colombian Liberal Party, which is a member of the Socialist International. Another famous example is the Paraguayan Authentic Radical Liberal Party, one of the most powerful parties in the country, which has also been classified as centre-left.
In Asia, liberalism is a much younger political current than in Europe or the Americas. Continentally, liberals are organised through the Council of Asian Liberals and Democrats, which includes powerful parties such as the Liberal Party in the Philippines, the Democratic Progressive Party in Taiwan, and the Democrat Party in Thailand. Two notable examples of liberal influence can be found in India and Australia, although several Asian nations have rejected important liberal principles.
In Australia, liberalism is primarily championed by the centre-right Liberal Party. The Liberals are a fusion of classical liberal and conservative forces and are affiliated with the centre-right International Democrat Union. In India, the most populous democracy in the world, the Indian National Congress has long dominated political affairs. The INC was founded in the late 19th century by liberal nationalists demanding the creation of a more liberal and autonomous India. Liberalism continued to be the main ideological current of the group through the early years of the 20th century, but socialism gradually overshadowed the thinking of the party in the next few decades.
A famous struggle led by the INC eventually earned India's independence from Britain. In recent times, the party has adopted more of a liberal streak, championing open markets while simultaneously seeking social justice. In its 2009 manifesto, the INC praised a "secular and liberal" Indian nationalism against the nativist, communal, and conservative ideological tendencies it claims are espoused by the right. In general, the major theme of Asian liberalism in the past few decades has been the rise of democratisation as a method to facilitate the rapid economic modernisation of the continent. In nations such as Myanmar, however, liberal democracy has been replaced by military dictatorship.
In Africa, liberalism is comparatively weak. The Wafd Party ("Delegation Party") was a nationalist liberal political party in Egypt. It was said to be Egypt's most popular and influential political party for a period in the 1920s and 30s. Recently, however, liberal parties and institutions have made a major push for political power. On a continental level, liberals are organised in the Africa Liberal Network, which contains influential parties such as the Popular Movement in Morocco, the Democratic Party in Senegal, and the Rally of the Republicans in Côte d'Ivoire.
Among African nations, South Africa stands out for having a notable liberal tradition that other countries on the continent lack. In the middle of the 20th century, the Liberal Party and the Progressive Party were formed to oppose the apartheid policies of the government. The Liberals formed a multiracial party that originally drew considerable support from urban Blacks and college-educated Whites. It also gained supporters from the "westernised sectors of the peasantry", and its public meetings were heavily attended by Blacks. The party had 7,000 members at its height, although its appeal to the White population as a whole was too small to make any meaningful political changes. The Liberals were disbanded in 1968 after the government passed a law that prohibited parties from having multiracial membership. Today, liberalism in South Africa is represented by the Democratic Alliance, the official opposition party to the ruling African National Congress. The Democratic Alliance is the second largest party in the National Assembly and currently leads the provincial government of Western Cape.
Impact and influence
The fundamental elements of contemporary society have liberal roots. The early waves of liberalism popularised economic individualism while expanding constitutional government and parliamentary authority. One of the greatest liberal triumphs involved replacing the capricious nature of royalist and absolutist rule with a decision-making process encoded in written law. Liberals sought and established a constitutional order that prized important individual freedoms, such as the freedom of speech and of association, an independent judiciary and public trial by jury, and the abolition of aristocratic privileges.
These sweeping changes in political authority marked the modern transition from absolutism to constitutional rule. The expansion and promotion of free markets was another major liberal achievement. Before they could establish markets, however, liberals had to destroy the old economic structures of the world. In that vein, liberals ended mercantilist policies, royal monopolies, and various other restraints on economic activities. They also sought to abolish internal barriers to trade – eliminating guilds, local tariffs, the Commons and prohibitions on the sale of land along the way.
Later waves of modern liberal thought and struggle were strongly influenced by the need to expand civil rights. In the 1960s and 1970s, the cause of Second Wave feminism in the United States was advanced in large part by liberal feminist organisations such as the National Organization for Women. In addition to supporting gender equality, liberals also have advocated for racial equality in their drive to promote civil rights, and a global civil rights movement in the 20th century achieved several objectives towards both goals. Among the various regional and national movements, the civil rights movement in the United States during the 1960s strongly highlighted the liberal efforts for equal rights. Describing the political efforts of the period, some historians have asserted, "the voting rights campaign marked ... the convergence of two political forces at their zenith: the black campaign for equality and the movement for liberal reform," further remarking about how "the struggle to assure blacks the ballot coincided with the liberal call for expanded federal action to protect the rights of all citizens". The Great Society project launched by President Lyndon B. Johnson oversaw the creation of Medicare and Medicaid, the establishment of Head Start and the Job Corps as part of the War on Poverty, and the passage of the landmark Civil Rights Act of 1964 – an altogether rapid series of events that some historians have dubbed the Liberal Hour.
Another major liberal accomplishment includes the rise of liberal internationalism, which has been credited with the establishment of global organisations such as the League of Nations and, after World War II, the United Nations. The idea of exporting liberalism worldwide and constructing a harmonious and liberal internationalist order has dominated the thinking of liberals since the 18th century. "Wherever liberalism has flourished domestically, it has been accompanied by visions of liberal internationalism," one historian wrote. But resistance to liberal internationalism was deep and bitter, with critics arguing that growing global interdependency would result in the loss of national sovereignty and that democracies represented a corrupt order incapable of either domestic or global governance.
Other scholars have praised the influence of liberal internationalism, claiming that the rise of globalisation "constitutes a triumph of the liberal vision that first appeared in the eighteenth century" while also writing that liberalism is "the only comprehensive and hopeful vision of world affairs". The gains of liberalism have been significant. In 1975, roughly 40 countries around the world were characterised as liberal democracies, but that number had increased to more than 80 as of 2008. Most of the world's richest and most powerful nations are liberal democracies with extensive social welfare programmes.
- Constitutional liberalism
- Friedrich Naumann Foundation is a global advocacy organisation that supports liberal ideas and policies.
- Liberalism by country
- Muscular liberalism
- Rule according to higher law
- The American Prospect, an American political magazine that backs social liberal policies
- The Liberal, a former British magazine dedicated to coverage of liberal politics and liberal culture
- "liberalism In general, the belief that it is the aim of politics to preserve individual rights and to maximize freedom of choice." Concise Oxford Dictionary of Politics, Iain McLean and Alistair McMillan, Third edition 2009, ISBN 978-0-19-920516-5
- "political rationalism, hostility to autocracy, cultural distaste for conservatism and for tradition in general, tolerance, and ... individualism." John Dunn, Western Political Theory in the Face of the Future, Cambridge University Press, (1993), ISBN 978-0-521-43755-4
- With a nod to Robert Trivers' definition of altruistic behavior (Trivers 1971, p. 35), Satoshi Kanazawa defines liberalism (as opposed to conservatism) as "the genuine concern for the welfare of genetically unrelated others and the willingness to contribute larger proportions of private resources for the welfare of such others" (Kanazawa 2010, p. 38).
- "The Liberal Agenda for the 21st Century". Archived from the original on 7 February 2011. Retrieved 20 March 2015.
- Nader Hashemi (11 March 2009). Islam, Secularism, and Liberal Democracy: Toward a Democratic Theory for Muslim Societies. Oxford University Press. ISBN 978-0-19-971751-4.
Liberal democracy requires a form of secularism to sustain itself
- Kathleen G. Donohue (19 December 2003). Freedom from Want: American Liberalism and the Idea of the Consumer (New Studies in American Intellectual and Cultural History). Johns Hopkins University Press. ISBN 978-0-8018-7426-0. Retrieved 31 December 2007.
Three of them – freedom from fear, freedom of speech, and freedom of religion – have long been fundamental to liberalism.
- "The Economist, Volume 341, Issues 7995–7997". The Economist. 1996. Retrieved 31 December 2007.
For all three share a belief in the liberal society as defined above: a society that provides constitutional government (rule by law, not by men) and freedom of religion, thought, expression and economic interaction; a society in which ...
- Sehldon S. Wolin (2004). Politics and Vision: Continuity and Innovation in Western Political Thought. Princeton University Press. ISBN 978-0-691-11977-9. Retrieved 31 December 2007.
While liberalism practically disappeared as a publicly professed ideology, it retained a virtual monopoly in the ... The most frequently cited rights included freedom of speech, press, assembly, religion, property, and procedural rights
- Edwin Brown Firmage; Bernard G. Weiss; John Woodland Welch (1990). Religion and Law: Biblical-Judaic and Islamic Perspectives. Eisenbrauns. ISBN 978-0-931464-39-3. Retrieved 31 December 2007.
There is no need to expound the foundations and principles of modern liberalism, which emphasises the values of freedom of conscience and freedom of religion
- John Joseph Lalor (1883). Cyclopædia of Political Science, Political Economy, and of the Political History of the United States. Nabu Press. Retrieved 31 December 2007.
Democracy attaches itself to a form of government: liberalism, to liberty and guarantees of liberty. The two may agree; they are not contradictory, but they are neither identical, nor necessarily connected. In the moral order, liberalism is the liberty to think, recognised and practiced. This is primordial liberalism, as the liberty to think is itself the first and noblest of liberties. Man would not be free in any degree or in any sphere of action, if he were not a thinking being endowed with consciousness. The freedom of worship, the freedom of education, and the freedom of the press are derived the most directly from the freedom to think.
- "All mankind ... being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions", John Locke, Second Treatise of Government
- Kalkman, Matthew. New Liberalism. ISBN 1-926991-04-4.
- Often referred to simply as "liberalism" in the United States.
- "Liberalism in America: A Note for Europeans" by Arthur Schlesinger, Jr. (1956) from: The Politics of Hope (Boston: Riverside Press, 1962). "Liberalism in the U.S. usage has little in common with the word as used in the politics of any other country, save possibly Britain."
- Gross, p. 5.
- Kirchner, pp. 2–3.
- Colton and Palmer, p. 479.
- Emil J. Kirchner, Liberal Parties in Western Europe, "Liberal parties were among the first political parties to form, and their long-serving and influential records, as participants in parliaments and governments, raise important questions ...", Cambridge University Press, 1988, ISBN 978-0-521-32394-9
- "Liberalism", Encyclopædia Britannica
- Rothbard, The Libertarian Heritage: The American Revolution and Classical Liberalism.
- "The Rise, Decline, and Reemergence of Classical Liberalism". Retrieved 17 December 2012.
- West 1996, p. xv.
- Delaney, p. 18.
- Godwin et al., p. 12.
- Copleston, pp. 39–41.
- Locke, p. 170.
- Forster, p. 219.
- Zvesper, p. 93.
- Feldman, Noah (2005). Divided by God. Farrar, Straus and Giroux, p. 29 ("It took John Locke to translate the demand for liberty of conscience into a systematic argument for distinguishing the realm of government from the realm of religion.")
- Feldman, Noah (2005). Divided by God. Farrar, Straus and Giroux, pg. 29
- McGrath, Alister. 1998. Historical Theology, An Introduction to the History of Christian Thought. Oxford: Blackwell Publishers. pp. 214–15.
- Bornkamm, Heinrich (1962). "Die Religion in Geschichte und Gegenwart - Chapter: Toleranz. In der Geschichte des Christentums" (in German), 3. Auflage, Band VI, col. 942
- Hunter, William Bridges. A Milton Encyclopedia, Volume 8 (East Brunswick, N.J.: Associated University Presses, 1980). pp. 71, 72. ISBN 0-8387-1841-8.
- Steven Pincus (2009). 1688: The First Modern Revolution. Yale University Press. ISBN 0-300-15605-7. Retrieved 7 February 2013.
- "England's revolution". The Economist. 17 October 2009. Retrieved 17 December 2012.
- Windeyer, W. J. Victor (1938). "Essays". In Windeyer, William John Victor. Lectures on Legal History. Law Book Co. of Australasia.
- John J. Patrick; Gerald P. Long (1999). Constitutional Debates on Freedom of Religion: A Documentary History. Westport, CT: Greenwood Press.
- Professor Lyman Ray Patterson, "Copyright and 'The Exclusive Right' Of Authors" Journal of Intellectual Property, Vol. 1, No. 1 Fall 1993.
- "Letter on the subject of Candide, to the Journal encyclopédique July 15, 1759". University of Chicago. Archived from the original on 13 October 2006. Retrieved 7 January 2008.
- Bernstein, p. 48.
- Roberts, p. 701.
- Milan Zafirovski (2007). Liberal Modernity and Its Adversaries: Freedom, Liberalism and Anti-Liberalism in the 21st Century. BRILL. pp. 237–38. ISBN 90-04-16052-3.
- Frey, Foreword.
- Frey, Preface.
- Ros, p. 11.
- Palmer and Colton, (1995) pp. 428–29.
- Frederick B. Artz, Reaction and Revolution: 1814–1832 (1934) pp. 142–43
- William Martin, Histoire de la Suisse (Paris, 1926), pp. 187–88, quoted in Crane Brinson, A Decade of Revolution: 1789–1799 (1934) p. 235
- Turner, p. 86
- Cook, p. 31.
- Mills, pp. 63, 68
- Mills, p. 64
- The Wealth of Nations, Strahan and Cadell, 1778
- Mills, p. 66
- Mills, p. 67
- Mills, p. 68
- See, e.g., Donald Markwell, John Maynard Keynes and International Relations: Economic Paths to War and Peace, Oxford University Press, 2006, chapter 1.
- Richardson, p. 32
- Mills, p. 76
- Gray, pp. 26–27
- Palmer and Colton, p. 510.
- Palmer and Colton, p. 509.
- Palmer and Colton, pp. 546–47.
- Richardson, pp. 36–37
- Mill, John Stuart On Liberty Penguin Classics, 2006 ISBN 978-0-14-144147-4 pp. 90–91
- Mill, John Stuart On Liberty Penguin Classics, 2006 ISBN 978-0-14-144147-4 pp. 10–11
- IREF | Pour la liberte economique et la concurrence fiscale (PDF), March 2009. Archived March 27, 2009 at the Wayback Machine: https://web.archive.org/web/20090327011315/http://www.irefeurope.org/col_docs/doc_51_fr.pdf
- Mill, John Stuart; Bentham, Jeremy (2004). Ryan, Alan, ed. Utilitarianism and other essays. London: Penguin Books. p. 11. ISBN 0-14-043272-8.
- Nicholson, P. P., "T. H. Green and State Action: Liquor Legislation", History of Political Thought, 6 (1985), 517–50. Reprinted in A. Vincent, ed., The Philosophy of T. H. Green (Aldershot: Gower, 1986), pp. 76–103
- Adams, Ian (2001). Political Ideology Today (Politics Today). Manchester: Manchester University Press. ISBN 0-7190-6020-6.
- The Routledge encyclopaedia of philosophy, p. 599
- Geoffrey Lee – "The People's Budget: An Edwardian Tragedy"
- Whigs, Radicals, and Liberals, 1815–1914, by Duncan Watts
- Gilbert, "David Lloyd George: Land, The Budget, and Social Reform", The American Historical Review Vol. 81, No. 5 (Dec. 1976), pp. 1058–66
- See studies of Keynes by, e.g., Roy Harrod, Robert Skidelsky, Donald Moggridge, and Donald Markwell.
- Pressman, Steven (1999). Fifty Great Economists. London: Routledge. pp. 96–100. ISBN 0-415-13481-1.
- Keith Tribe, Economic careers: economics and economists in Britain, 1930–1970 (1997), p. 61
- Cassidy, John (10 October 2011). "The Demand Doctor". The New Yorker.
- Palmer and Colton, p. 808.
- Auerbach and Kotlikoff, p. 299.
- Knoop, p. 151.
- Colomer, p. 62.
- Larry Diamond (2008). The Spirit of Democracy: The Struggle to Build Free Societies Throughout the World. Henry Holt. p. 7. ISBN 978-0-8050-7869-5.
- "Freedom in the World 2016". Freedom House.
- Peerenboom, pp. 7–8.
- Antoninus, p. 3.
- Young 2002, pp. 25–26.
- Young 2002, p. 24.
- Young 2002, p. 25.
- Gray, p. xii.
- Wolfe, pp. 33–36.
- Young 2002, p. 45.
- Young 2002, pp. 30–31.
- Young 2002, p. 30.
- Young 2002, p. 31.
- Young 2002, p. 32.
- Young 2002, pp. 32–33.
- Gould, p. 4.
- Young 2002, p. 33.
- Wolfe, p. 74.
- Adams, pp. 54–55.
- Wempe, p. 123.
- Adams, p. 55.
- Adams, p. 58.
- Young 2002, p. 36.
- Wolfe, p. 63.
- Young 2002, p. 39.
- Young 2002, pp. 39–40.
- Young 2002, p. 40.
- Young 2002, pp. 42–43.
- Young 2002, p. 43.
- Young 2002, p. 44.
- Jensen, p. 1.
- Jensen, p. 2.
- Falco, pp. 47–48.
- Grigsby, p. 108.
- Grigsby, pp. 119–22.
- Koerner, pp. 9–12.
- Gerald F. Gaus, Chandran Kukathas. Handbook of political theory. London, England, UK; Thousand Oaks, California, USA; New Delhi, India: SAGE Publications, 2004. p. 420.
- Ian Adams (1998). Ideology and Politics in Britain Today. Manchester University Press. pp. 127–. ISBN 978-0-7190-5056-5. Retrieved 1 August 2013.
- Stanislao G. Pugliese. Carlo Rosselli: socialist heretic and antifascist exile. Harvard University Press, 1999. p. 99.
- Noel W. Thompson. Political economy and the Labour Party: the economics of democratic socialism, 1884–2005. 2nd edition. Oxon, England, UK; New York, New York, USA: Routledge, 2006. pp. 60–1.
- Nadia Urbinati. J.S. Mill's political thought: a bicentennial reassessment. Cambridge, England, UK: Cambridge University Press, 2007 p. 101.
- Steve Bastow, James Martin. Third way discourse: European ideologies in the twentieth century. Edinburgh, Scotland, UK: Edinburgh University Press, Ltd, 2003. p. 72.
- Grew, Raymond (1997). "Liberty and the Catholic Church in 19th century Europe". In Richard Helmstadter. Freedom and Religion in the 19th Century. Stanford University Press. p. 201. ISBN 978-0-8047-3087-7.
- Koerner, p. 14.
- Lightfoot, p. 17.
- Susser, p. 110.
- Riff, pp. 34–36.
- Riff, p. 34.
- Wolfe, p. 116.
- "The International"
- Wolfe, p. 23.
- Adams, p. 11.
- German songs like "Die Gedanken sind frei" (thoughts are free) can be dated even centuries before that.
- Kirchner, p. 3.
- Kirchner, p. 4.
- Kirchner, p. 10.
- Karatnycky et al., p. 247.
- Hafner and Ramet, p. 104.
- Various authors, p. 1615.
- Campaigns & Communications Committee. "2015 Election Review" (PDF). Liberal Democrats. Retrieved May 11, 2016.
- Schie and Voermann, p. 121.
- Gallagher et al., p. 226.
- Puddington, p. 142. "After a dozen years of centre-left Liberal Party rule, the Conservative Party emerged from the 2006 parliamentary elections with a plurality and established a fragile minority government."
- Grigsby, pp. 106–07. [Talking about the Democratic Party] "Its liberalism is for the most part the later version of liberalism – modern liberalism."
- Arnold, p. 3. "Modern liberalism occupies the left-of-center in the traditional political spectrum and is represented by the Democratic Party in the United States."
- Chodos et al., p. 9.
- Alterman, p. 32.
- Flamm and Steigerwald, pp. 156–58.
- Patrick Allitt, The Conservatives, p. 253, Yale University Press, 2009, ISBN 978-0-300-16418-3
- Wolfe, p. xiv.
- Dore and Molyneux, p. 9.
- Ameringer, p. 489.
- Monsma and Soper, p. 95.
- "International Democrat Union » Member Parties". International Democrat Union.
- "A new battleline for Liberal ideas". The Australian. 26 October 2009.
- "Vote 1 Baillieu to save small-l liberalism". The Age. Melbourne.
- Karatnycky, p. 59.
- Hodge, p. 346.
- "Manifesto" (2009). Indian National Congress. Retrieved 21 February 2010.
- Routledge et al., p. 111.
- Steinberg, pp. 1–2.
- Van den Berghe, p. 56.
- Van den Berghe, p. 57.
- Gould, p. 3.
- Worell, p. 470.
- Mackenzie and Weisbrot, p. 178.
- Mackenzie and Weisbrot, p. 5.
- Sinclair, p. 145.
- Schell, p. 266.
- Schell, pp. 273–80.
- Venturelli, p. 247.
- Farr, p. 81.
- Pierson, p. 110.
References and further reading
- Adams, Ian. Ideology and politics in Britain today. Manchester: Manchester University Press, 1998. ISBN 0-7190-5056-1
- Alterman, Eric. Why We're Liberals. New York: Viking Adult, 2008. ISBN 0-670-01860-0
- Ameringer, Charles. Political parties of the Americas, 1980s to 1990s. Westport: Greenwood Publishing Group, 1992. ISBN 0-313-27418-5
- Amin, Samir. The Liberal Virus: Permanent War and the Americanization of the World. New York: Monthly Review Press, 2004.
- Antoninus, Marcus Aurelius. The Meditations of Marcus Aurelius Antoninus. New York: Oxford University Press, 2008. ISBN 0-19-954059-4
- Arnold, N. Scott. Imposing values: an essay on liberalism and regulation. New York: Oxford University Press, 2009. ISBN 0-495-50112-3
- Auerbach, Alan and Kotlikoff, Laurence. Macroeconomics Cambridge: MIT Press, 1998. ISBN 0-262-01170-0
- Barzilai, Gad. Communities and Law: Politics and Cultures of Legal Identities University of Michigan Press, 2003. ISBN 978-0-472-03079-8
- Bell, Duncan. "What is Liberalism?" Political Theory, 42/6 (2014).
- Brack, Duncan and Randall, Ed (eds.). Dictionary of Liberal Thought. London: Politico's Publishing, 2007. ISBN 978-1-84275-167-1
- George Brandis, Tom Harley & Donald Markwell (editors). Liberals Face the Future: Essays on Australian Liberalism, Melbourne: Oxford University Press, 1984.
- Alan Bullock & Maurice Shock (editors). The Liberal Tradition: From Fox to Keynes, Oxford: Clarendon Press, 1967.
- Chodos, Robert et al. The unmaking of Canada: the hidden theme in Canadian history since 1945. Halifax: James Lorimer & Company, 1991. ISBN 1-55028-337-5
- Coker, Christopher. Twilight of the West. Boulder: Westview Press, 1998. ISBN 0-8133-3368-7
- Colomer, Josep Maria. Great Empires, Small Nations. New York: Routledge, 2007. ISBN 0-415-43775-X
- Cook, Richard. The Grand Old Man. Whitefish: Kessinger Publishing, 2004. ISBN 1-4191-6449-X
- Delaney, Tim. The march of unreason: science, democracy, and the new fundamentalism. New York: Oxford University Press, 2005. ISBN 0-19-280485-5
- Diamond, Larry. The Spirit of Democracy. New York: Macmillan, 2008. ISBN 0-8050-7869-X
- Dobson, John. Bulls, Bears, Boom, and Bust. Santa Barbara: ABC-CLIO, 2006. ISBN 1-85109-553-5
- Dorrien, Gary. The making of American liberal theology. Louisville: Westminster John Knox Press, 2001. ISBN 0-664-22354-0
- Farr, Thomas. World of Faith and Freedom. New York: Oxford University Press US, 2008. ISBN 0-19-517995-1
- Falco, Maria. Feminist interpretations of Mary Wollstonecraft. State College: Penn State Press, 1996. ISBN 0-271-01493-8
- Fawcett, Edmund. Liberalism: The Life of an Idea. Princeton: Princeton University Press, 2014. ISBN 978-0-691-15689-7
- Flamm, Michael and Steigerwald, David. Debating the 1960s: liberal, conservative, and radical perspectives. Lanham: Rowman & Littlefield, 2008. ISBN 0-7425-2212-1
- Frey, Linda and Frey, Marsha. The French Revolution. Westport: Greenwood Press, 2004. ISBN 0-313-32193-0
- Gallagher, Michael et al. Representative government in modern Europe. New York: McGraw Hill, 2001. ISBN 0-07-232267-5
- Gifford, Rob. China Road: A Journey into the Future of a Rising Power. Random House, 2008. ISBN 0-8129-7524-3
- Godwin, Kenneth et al. School choice tradeoffs: liberty, equity, and diversity. Austin: University of Texas Press, 2002. ISBN 0-292-72842-5
- Gould, Andrew. Origins of liberal dominance. Ann Arbor: University of Michigan Press, 1999. ISBN 0-472-11015-2
- Gray, John. Liberalism. Minneapolis: University of Minnesota Press, 1995. ISBN 0-8166-2801-7
- Grigsby, Ellen. Analyzing Politics: An Introduction to Political Science. Florence: Cengage Learning, 2008. ISBN 0-495-50112-3
- Gross, Jonathan. Byron: the erotic liberal. Lanham: Rowman & Littlefield Publishers, Inc., 2001. ISBN 0-7425-1162-6
- Hafner, Danica and Ramet, Sabrina. Democratic transition in Slovenia: value transformation, education, and media. College Station: Texas A&M University Press, 2006. ISBN 1-58544-525-8
- Handelsman, Michael. Culture and Customs of Ecuador. Westport: Greenwood Press, 2000. ISBN 0-313-30244-8
- Hartz, Louis. The liberal tradition in America. New York: Houghton Mifflin Harcourt, 1955. ISBN 0-15-651269-6
- Heywood, Andrew (2003). Political Ideologies: An Introduction. New York: Palgrave Macmillan. ISBN 0-333-96177-3.
- Hodge, Carl. Encyclopedia of the Age of Imperialism, 1800–1944. Westport: Greenwood Publishing Group, 2008. ISBN 0-313-33406-4
- Jensen, Pamela Grande. Finding a new feminism: rethinking the woman question for liberal democracy. Lanham: Rowman & Littlefield, 1996. ISBN 0-8476-8189-0
- Johnson, Paul. The Renaissance: A Short History. New York: Modern Library, 2002. ISBN 0-8129-6619-8
- Karatnycky, Adrian. Freedom in the World. Piscataway: Transaction Publishers, 2000. ISBN 0-7658-0760-2
- Karatnycky, Adrian et al. Nations in transit, 2001. Piscataway: Transaction Publishers, 2001. ISBN 0-7658-0897-8
- Kelly, Paul. Liberalism. Cambridge: Polity Press, 2005. ISBN 0-7456-3291-2
- Kirchner, Emil. Liberal parties in Western Europe. Cambridge: Cambridge University Press, 1988. ISBN 0-521-32394-0
- Knoop, Todd. Recessions and Depressions Westport: Greenwood Press, 2004. ISBN 0-313-38163-1
- Koerner, Kirk. Liberalism and its critics. Oxford: Taylor & Francis, 1985. ISBN 0-7099-1551-9
- Leroux, Robert, Political Economy and Liberalism in France: The Contributions of Frédéric Bastiat, London and New York, 2011.
- Leroux, Robert, David M. Hart (eds), French Liberalism in the 19th Century, London and New York: Routledge, 2012.
- Lightfoot, Simon. Europeanizing social democracy?: The rise of the Party of European Socialists. New York: Routledge, 2005. ISBN 0-415-34803-X
- Losurdo, Domenico. Liberalism: a counter-history. London: Verso, 2011
- Lyons, Martyn. Napoleon Bonaparte and the Legacy of the French Revolution. New York: St. Martin's Press, Inc., 1994. ISBN 0-312-12123-7
- Mackenzie, G. Calvin and Weisbrot, Robert. The liberal hour: Washington and the politics of change in the 1960s. New York: Penguin Group, 2008. ISBN 1-59420-170-6
- Manent, Pierre and Seigel, Jerrold. An Intellectual History of Liberalism. Princeton: Princeton University Press, 1996. ISBN 0-691-02911-3
- Donald Markwell. John Maynard Keynes and International Relations: Economic Paths to War and Peace, Oxford University Press, 2006.
- Mazower, Mark. Dark Continent. New York: Vintage Books, 1998. ISBN 0-679-75704-X
- Monsma, Stephen and Soper, J. Christopher. The Challenge of Pluralism: Church and State in Five Democracies. Lanham: Rowman & Littlefield, 2008. ISBN 0-7425-5417-1
- Palmer, R.R. and Joel Colton. A History of the Modern World. New York: McGraw Hill, Inc., 1995. ISBN 0-07-040826-2
- Perry, Marvin et al. Western Civilization: Ideas, Politics, and Society. Florence, KY: Cengage Learning, 2008. ISBN 0-547-14742-2
- Pierson, Paul. The New Politics of the Welfare State. New York: Oxford University Press, 2001. ISBN 0-19-829756-4
- Puddington, Arch. Freedom in the World: The Annual Survey of Political Rights and Civil Liberties. Lanham: Rowman & Littlefield, 2007. ISBN 0-7425-5897-5
- Riff, Michael. Dictionary of modern political ideologies. Manchester: Manchester University Press, 1990. ISBN 0-7190-3289-X
- Rivlin, Alice. Reviving the American Dream Washington D.C.: Brookings Institution Press, 1992. ISBN 0-8157-7476-1
- Ros, Agustin. Profits for all?: the cost and benefits of employee ownership. New York: Nova Publishers, 2001. ISBN 1-59033-061-7
- Routledge, Paul et al. The geopolitics reader. New York: Routledge, 2006. ISBN 0-415-34148-5
- Russell, Bertrand (2000). History of Western Philosophy. London: Routledge. ISBN 0-415-22854-9.
- Ryan, Alan. The Philosophy of John Stuart Mill. Humanity Books: 1970. ISBN 978-1-57392-404-7.
- Ryan, Alan. The Making of Modern Liberalism (Princeton UP, 2012)
- Ryan, Alan. On Politics: A History of Political Thought: From Herodotus to the Present. Allen Lane, 2012. ISBN 978-0-87140-465-7.
- Schell, Jonathan. The Unconquerable World: Power, Nonviolence, and the Will of the People. New York: Macmillan, 2004. ISBN 0-8050-4457-4
- Shaw, G. K. Keynesian Economics: The Permanent Revolution. Aldershot, England: Edward Elgar Publishing Company, 1988. ISBN 1-85278-099-1
- Sinclair, Timothy. Global governance: critical concepts in political science. Oxford: Taylor & Francis, 2004. ISBN 0-415-27662-4
- Song, Robert. Christianity and Liberal Society. Oxford: Oxford University Press, 2006. ISBN 0-19-826933-1
- Stacy, Lee. Mexico and the United States. New York: Marshall Cavendish Corporation, 2002. ISBN 0-7614-7402-1
- Steinberg, David I. Burma: the State of Myanmar. Georgetown University Press, 2001. ISBN 0-87840-893-2
- Steindl, Frank. Understanding Economic Recovery in the 1930s. Ann Arbor: University of Michigan Press, 2004. ISBN 0-472-11348-8
- Susser, Bernard. Political ideology in the modern world. Upper Saddle River: Allyn and Bacon, 1995. ISBN 0-02-418442-X
- Van den Berghe, Pierre. The Liberal dilemma in South Africa. Oxford: Taylor & Francis, 1979. ISBN 0-7099-0136-4
- Van Schie, P. G. C. and Voermann, Gerrit. The dividing line between success and failure: a comparison of Liberalism in the Netherlands and Germany in the 19th and 20th Centuries. Berlin: LIT Verlag Berlin-Hamburg-Münster, 2006. ISBN 3-8258-7668-3
- Various authors. Countries of the World & Their Leaders Yearbook 08, Volume 2. Detroit: Thomson Gale, 2007. ISBN 0-7876-8108-3
- Venturelli, Shalini. Liberalizing the European media: politics, regulation, and the public sphere. New York: Oxford University Press, 1998. ISBN 0-19-823379-5
- Wallerstein, Immanuel. The Modern World-System IV: Centrist Liberalism Triumphant, 1789–1914. Berkeley and Los Angeles: University of California Press, 2011.
- Wempe, Ben. T. H. Green's theory of positive freedom: from metaphysics to political theory. Exeter: Imprint Academic, 2004. ISBN 0-907845-58-4
- Whitfield, Stephen. Companion to twentieth-century America. Hoboken: Wiley-Blackwell, 2004. ISBN 0-631-21100-4
- Wolfe, Alan. The Future of Liberalism. New York: Random House, Inc., 2009. ISBN 0-307-38625-2
- Worell, Judith. Encyclopedia of women and gender, Volume I. Amsterdam: Elsevier, 2001. ISBN 0-12-227246-3
- Young, Shaun (2002). Beyond Rawls: An Analysis of the Concept of Political Liberalism. Lanham, MD: University Press of America. ISBN 0-7618-2240-2.
- Zvesper, John. Nature and liberty. New York: Routledge, 1993. ISBN 0-415-08923-9 |
From: Katie Johnson ([email protected])
Date: Wed Mar 05 2003 - 17:12:59 EST
In response to sharing how we sequence the year.
The first week of school I give an overview of the year-long course. It
revolves around 3 essential questions:
#1: How do I know WHAT chemical I have?
#2: If I put 2 chemicals together, can I predict how they will change
(what will form?, how much?, how fast?)
#3: Can I control the event? (the what, the how much and the rate)
Unit One: The Basics - Essential Question #1
How do I know WHAT I have?
i.e. physical and chemical properties
How do I know HOW MUCH I have?
i.e. measurement: meniscus, pan balance, significant digits
How do I know WHEN it has changed to something different?
i.e. clues to changes (dissolving does not equal melting, etc)
Unit Two: Basics #1 - more details about identifying WHAT chemical it is.
WHAT kind of substance?
i.e. element, compound, mixture
WHAT kind of particle?
i.e. atom, molecule, sub-atomic particle
WHAT do I call it?
i.e. rules for naming
Unit Three: Basics #2 - more details about HOW MUCH we have
There are 3 ways to measure HOW MUCH:
count particles, measure a volume, measure a mass
They are related by the concept of the MOLE
(here is where we deal with scientific notation)
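The point of Unit Three — that the mole links particle count, gas volume, and mass — can be sketched numerically. (This sketch is not from the original post; the water example and the rounded constants are illustrative.)

```python
# The mole links the three "how much" measures:
# particle count, gas volume (at STP), and mass.

AVOGADRO = 6.022e23       # particles per mole
MOLAR_VOLUME_STP = 22.4   # liters per mole of an ideal gas at STP

def grams_to_moles(mass_g, molar_mass_g_per_mol):
    """Mass -> moles: divide by the molar mass."""
    return mass_g / molar_mass_g_per_mol

def moles_to_particles(moles):
    """Moles -> particle count via Avogadro's number."""
    return moles * AVOGADRO

def moles_to_liters_stp(moles):
    """Moles -> gas volume at STP."""
    return moles * MOLAR_VOLUME_STP

# 36 g of water (molar mass ~ 18 g/mol):
n = grams_to_moles(36.0, 18.0)   # 2.0 mol
print(n, moles_to_particles(n), moles_to_liters_stp(n))
```

The scientific-notation work mentioned above shows up naturally here, since particle counts land in the 10^23–10^24 range.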
Unit Four: Basics #3 - more details about CHANGES in a chemical
Patterns for simple reactions
We introduce REDOX here
Unit Five: Stoichiometry - Essential Question #2
Predicting reaction products including solubility rules
% composition, empirical and molecular formulas
Unit Six: Energy Issues
endo and exothermic events
heat of reaction (and its synonyms: heat of solution, heat of combustion etc)
Heat and CHEMICAL changes
standard heats of formation and Hess's Law
Heat and PHYSICAL change
heating and cooling graphs
States of Matter
Unit Seven: The Gas Laws
(this is usually the beginning of 2nd semester)
Phase Change Diagrams
Gas Law Problems
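Most gas law problems in a unit like this reduce to solving PV = nRT for one unknown; a minimal sketch (not from the original post; numbers are illustrative):

```python
# Ideal gas law PV = nRT, solved for pressure.

R = 0.08206  # gas constant, L·atm / (mol·K)

def ideal_gas_pressure(n_mol, temp_k, volume_l):
    """P = nRT / V, returned in atm."""
    return n_mol * R * temp_k / volume_l

# 1.00 mol at 273 K in 22.4 L should come out close to 1 atm:
print(ideal_gas_pressure(1.0, 273.0, 22.4))
```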
Unit Eight: Periodicity
a closer look at the atom: electron configurations
other periodic trends
Unit Nine: Bonding
Lewis structures, valence electrons
ionic, covalent and polar covalent bonding
simple molecular shapes, VSEPR
Unit Ten: Introduction to Organic Chem
(this unit is optional, I do it if I have time -- about 1 week)
the basics of naming hydrocarbons, functional groups and isomers
Unit Eleven: The chemistry of Mixtures
solutions, colloids, suspensions
strong and weak electrolytes
(Ksp if time)
Unit Twelve: Reaction Mechanics - Essential Question #3
factors affecting reaction rates
driving forces: entropy and enthalpy
Gibbs Free energy
Unit Thirteen: Acids and Bases
strong and weak acids/bases
Ka and Kb
I usually finish everything mentioned, some years we don't do much with
Madison East High School |
A toothbrush is a tool used to clean the teeth and maintain oral hygiene. Toothbrushing controls the plaque level in the mouth mechanically. Plaque can also be controlled chemically, but mechanical cleaning is the most commonly practiced method around the world. It is also the most fundamental way of removing debris from the mouth, which cannot be achieved by chemical cleaning alone.
Every time we brush, the adhesive force between debris and the tooth surface is disrupted. The debris is dislodged from the tooth surface and washed away when we rinse the mouth after brushing. Besides debris, any plaque that has accumulated on the tooth surface is also disrupted and cleared away. By taking good care of our teeth's cleanliness, we also take care of our gums' health, as the two are interrelated. Healthy gums, in turn, prevent bad breath, loose teeth, and bleeding gums.
What are the components of a toothbrush?
A toothbrush can be divided into the handle, the part the user holds, and the head, which carries the bristles. Nowadays, toothbrushes come in many colours, forms, and sizes to suit the needs and fancy of their users. For example, a child will prefer a brightly coloured toothbrush with cartoon figures printed on the handle, while an adult will prefer a more practical design. Some children's toothbrushes play a tune for two minutes as an indicator of how long the child should keep brushing.
The bristles of a toothbrush come as extra soft, soft, medium, or hard. For a typical patient, medium or soft bristles are ideal, as they do not hurt the gums during brushing yet still remove debris. Patients who have just undergone oral surgery, however, need soft or extra-soft bristles.
The head of the toothbrush should ideally be as small as possible so that it can reach the back of the mouth, where space is limited. The teeth at the back of the mouth are more prone to calculus formation because the openings of the salivary glands lie nearby. The contents of saliva promote faster formation of calculus, and if that area is not brushed diligently, the chronic presence of calculus will lead to gum recession and other periodontal problems.
What types of toothbrush are there?
There are two main types of toothbrush available commercially: manual and electric. Electric toothbrushes are indicated when a person has a disability or difficulty controlling the muscles needed to brush effectively. For everyone else, it is still best to use a manual toothbrush, to avoid relying entirely on the electric toothbrush to clean the teeth. To clean the teeth effectively, dentists and oral hygienists often recommend the Modified Bass technique. In this technique, the brush is held at an angle, with the bristles pointing towards the gum line. A short, firm vibratory motion is then applied to the tooth surface by moving the bristles up and down.
A common mistake people make is brushing the teeth horizontally, whether on the front or the back teeth. This actually pushes debris further into the spaces between the teeth, known as the interdental spaces. Even with the correct brushing method, it is sometimes impossible to clear debris from the interdental spaces without the help of floss. An incorrect brushing habit can therefore cause more harm than good.
Another, special type of toothbrush is used by patients who are undergoing orthodontic treatment or who have undergone periodontal surgery. This is the single-tufted toothbrush: as the name suggests, instead of a full head of bristles it carries only a single tuft. This allows the bristles to reach places that the bristles of an ordinary toothbrush cannot.
Even though a toothbrush seems like a simple tool, it has been improved many times and in many ways throughout history. These improvements include the incorporation of a tongue-cleaning surface and the development of moving brush heads. But no matter how many advancements are made to the toothbrush, one must never forget that good oral health comes from a good, consistent brushing technique.
The third largest of the continents, North America extends from Alaska, the Queen Elizabeth Islands, and Greenland to Panama’s eastern border with Colombia in South America. Canada, the United States, Mexico, the Central American republics, the Bahama Islands and the Greater and Lesser Antilles are all parts of North America—more than 9,300,000 square miles (24,100,000 square kilometers), or more than 16 percent of the world’s land area. North America’s population of more than 490 million is increasingly urban.
The topography of North America is oriented largely north and south. The old Appalachian Mountains form a highland area in the eastern part of the continent, while the younger Rocky Mountains form a higher, more rugged frame in the west. Between the two are the lower, flatter physiographic provinces: the Canadian Shield, the Interior Plains, and the Great Plains. East and south of the Appalachians is the Atlantic Coastal Plain (Yucatán to New England), and west of the Rockies are the intermontane basins and plateaus, including the Great Basin and the Colorado Plateau, and the Pacific ranges. The broad Rocky Mountains frame of the west is carried southward in the Sierra Madre Oriental and the Sierra Madre Occidental, which enclose the Mexican Plateau. In the southern Mexican highlands the physiographic orientation changes abruptly from north-south to east-west. South of the Isthmus of Tehuantepec, one mountain branch of the Central American-Antillean system extends from Mexico’s Pacific coast through Chiapas and Oaxaca, the Cayman Islands, and southeastern Cuba to Haiti and the Dominican Republic. A second branch extends from Guatemala and Honduras to Jamaica and then on to Haiti and the Dominican Republic. Both branches merge to form the mountains of Puerto Rico and the Virgin Islands. A string of volcanic mountains—the Lesser Antilles—continues to the south. Another string of volcanic mountains runs through Costa Rica and Panama.
The physiographic frame of the continent has a great effect on water flow. Waters from the Great Lakes pour over Niagara Falls into the St. Lawrence River and to the sea. Rivers in the Canadian Shield flow into Hudson Bay. From the Appalachians, rivers flow eastward to the sea and westward into the Mississippi River drainage system. The Continental Divide along the Rocky Mountains frame separates the waters that flow eastward from those that flow westward. The Mississippi, for example, funnels the waters of the American interior into the Gulf of Mexico; the Colorado River goes to the Gulf of California; the Columbia and Fraser rivers flow to the Pacific Ocean; the Yukon to the Bering Sea. The rivers of Mexico, except for the Rio Grande, are comparatively short (Yaqui, Balsas, Usumacinta), as are those in Central America (Coco, or Segovia, and San Juan) and in the Greater and Lesser Antilles.
Latitude, topography, the position of the air masses, and the relationships between land and water tend to control the climates of the continent. In the Pacific Northwest the westerly winds, coming off the Pacific Ocean and striking the Pacific Ranges at right angles, deposit much rain on the windward side of the mountains. The leeward sides are much drier—strong evidence of topographic control. In the interior of the continent, climate is much affected by the violently colliding air masses that blow southward from the Arctic and northward from the Gulf of Mexico. The Caribbean Sea and the Gulf of Mexico are charged with the warm waters of the North Equatorial Current. Surface temperatures exceed 80° F (27° C) all year. Therefore the islands of the West Indies and the neighboring mainland enjoy a hot, humid climate. In general, North America is much wetter east of the 100th meridian than to the west. Charleston, S.C., receives 40 inches (102 centimeters) of precipitation annually; Cleveland, Ohio, 31 inches (79 centimeters); and Omaha, Neb., 25 inches (64 centimeters). But Denver, Colo., receives only 14 inches (36 centimeters) and Yuma, Ariz., less than 4 inches (10 centimeters). Most of the precipitation at these stations comes in the summer. In California and the Pacific Northwest much of the precipitation is likely to come in the winter. San Francisco receives slightly more than 20 inches (51 centimeters) of rain annually; 15 inches (38 centimeters) of it fall between December and March. The summer months are dry. Vancouver, B.C., receives 57 inches (145 centimeters), more than half of it between October and February. A unique feature of the Caribbean and Gulf of Mexico is the hurricane, whose tracks sweep over the West Indies and often strike the mainland anywhere from Texas and Florida north to North Carolina and New England.
Temperature and rainfall affect the natural vegetation. In the well-watered Eastern United States, for example, a deciduous forest is the natural result. In Ohio, Indiana, and Illinois there is a tall-grass prairie and farther west a short-grass steppe. The Far South is dominated by a subtropical evergreen forest, and the little-watered West by desert shrub, semidesert, and desert. Most of the Canadian Shield is covered with a taiga, or coniferous forest, while the lands along the Arctic shore bear only tundra vegetation. Desert shrub and steppe dominate the Mexican north; semi-deciduous forest and selva, or tropical rain forest, the eastern Mexican coast and portions of Yucatán, Belize, and Honduras. The rest of Central America is mountainous. The Greater Antilles are dominated in the wet areas by rain forest and in the drier areas by scrub woodland. Most of the Lesser Antilles are mountainous; the low-lying islands among them are fringed by coral reefs.
Soils are greatly influenced by the environment. Acid soils are found in wet areas, alkaline soils in dry areas. Under the United States Department of Agriculture’s classification scheme, a wide variety of soils have been identified. In the great center of the continent are the mollisols, the naturally fertile soils that help to produce so much corn and wheat. In eastern North America are the alfisols—soils that are acidic but fertile when lime is added to reduce the acidity. Making up much of the Canadian Shield are the acidic spodosols, which are of little agricultural value, and the tundra soils, which are enveloped in permafrost, or permanently frozen ground. Ultisols, highly acidic soils that are often fertile, have developed in the American Southeast; aridisols, found in the dry Southwest, have little value for agriculture. Desert soils make up Mexico’s north, though chestnut soils are found south of the Rio Grande and along portions of the Pacific coast. Black soils are found in large pockets of northern, central, and southern Mexico. Much of Central America’s soil has been intensively weathered and leached. In the Greater Antilles, Cuba is known for Matanzas clay, a red limestone soil that supports sugarcane. Fine soils also support the economies of the Lesser Antilles.
The Canadian Shield and the Arctic shores and islands are known as the Northlands. Population is sparse. There are scattered Indian communities in the taiga and Inuit, or Eskimo, communities in the tundra. The environment is harsh. Logging, mining, and petroleum production are the leading economic endeavors. The region leads Canada in pulp and paper production. It ranks high as a producer of iron ore and nickel. Increasing amounts of petroleum move by pipeline from Alaska’s North Slope, and there are possibilities for production from the Arctic islands. There are huge hydroelectric facilities at Churchill Falls and at James Bay.
The industrial heartland of Canada is the St. Lawrence Valley of southern Ontario and southern Quebec. Half of Canada’s people live here, most of them in urban areas. Toronto, the largest city, is a diverse manufacturing center and tourist attraction. Oshawa makes automobiles; Hamilton makes steel. Montreal is the center of French culture and a hub of petroleum refining; it produces petrochemicals, clothing, electrical products, transportation equipment, heavy machinery, and foods. Windsor, like its United States neighbor Detroit, makes cars, and Sarnia is Canada’s center for the manufacture of petrochemicals. The St. Lawrence Valley is the home of a large French-speaking population. Since 1969 there have been two official languages in Canada—English and French.
The New England–Maritime region on the sea early attracted manufacturing industries based on waterpower. Boston, Mass., and other New England communities are now high-technology centers. There are also agricultural centers in the region: the Aroostook Valley, famous for its potatoes; the Lake Champlain Lowland, milk; Prince Edward Island, seed potatoes; and Nova Scotia’s Annapolis Valley, apples. Newfoundlanders, as of old, still go to sea. The 200-nautical-mile offshore management zone, proclaimed by the Canadian government, helps to protect Newfoundland’s fishermen. Petroleum and natural gas fields have been discovered offshore. Canada’s largest city in the region is Halifax.
The states of New York, New Jersey, Pennsylvania, Maryland, and Delaware make up the Middle Atlantic region. A diverse area, it has truck and dairy farms; rural, urban, and ethnic populations; great manufacturing centers; and large cities. New York City is the dominant urban area of the region and of the United States. It is the leading financial, tourist, and manufacturing center. New York City has a diverse industrial base, which helps its economy to withstand downturns in certain sectors. It is a leading port and a transportation and trade center. It is a part of the megalopolis that extends from Boston to Washington, D.C.
The premier agricultural area known as the American heartland consists of the Middle West, Great Plains, and Prairie Provinces. A corn belt grew up between central Ohio and central Nebraska. Not until 1979 did soybeans replace corn as the leading crop. On the drier steppe—from Texas north to the Prairie Provinces of Canada—winter and spring wheat predominate. The steppe is underlain by large coal, petroleum, and natural gas resources. The agricultural economy of Saskatchewan, for example, is bolstered by fine deposits of uranium and potash. Alberta is petroleum-rich. The American heartland also supports a large urban population.
One of North America’s best defined areas, the South reaches from Virginia and Kentucky to the Gulf of Mexico and includes much of Arkansas and eastern Texas. Its origins lie in plantation agriculture (tobacco, rice, and indigo) and, after the beginning of the 19th century, in the spread of cotton culture over much of the region. During the 1880s a new South was born. Textile manufacturing moved south from New England, tobacco farmers turned to cigarette manufacturing, and there were the beginnings of an iron and steel industry and the utilization of timber. During World Wars I and II many blacks left the region. The South was also becoming increasingly urban. Manufacturing industries increased dramatically. Agriculture, long dominated by cotton, also changed. Cotton remains a major crop, but soybeans, tobacco, sugarcane, peanuts, and rice are now significant. In Florida citrus fruits and vegetables dominate. The Civil Rights Act of 1964 also brought changes. Blacks, for example, became increasingly part of the American mainstream. The same can be said for the South as a region.
The sunny and dry Southwest extends from the Rio Grande Valley in Texas to southern California, including most of New Mexico and Arizona. Most of the area became part of the United States in 1848 following the Mexican War (Texas had been acquired in 1845). Mexicans had earlier established churches, presidios, and pueblos (Santa Fe, Albuquerque, and Tucson) marked by rectangular plans and plazas. They had introduced the Spanish language and the Roman Catholic religion and had intermarried with the indigenous Indian population. Following the Mexican War the numbers of both Mexicans and Anglos increased in the Southwest. Unemployment and poverty in Mexico continue to send illegal immigrants northward. Millions of illegal immigrants have been arrested at United States border points. The Indian population continues to live on reservations or in urban centers. Growth in the Anglo communities has been phenomenal. Retirees are notable among the new arrivals, but opportunities in manufacturing have also brought migrants.
Extending from the Southwest to Alaska, the Mountainous West includes the Rocky Mountains and the plateau country between the Rockies and the far western mountain ranges (Sierra Nevadas and Cascades). Much of the region is semiarid, government owned, and sparsely populated. Brigham Young led the Mormons here in 1846. They established Salt Lake City and many other prosperous communities in the area. Much of Utah and neighboring Idaho is still home to the Mormons. Copper is the chief mineral, and the Bingham Mine outside Salt Lake City has been a big producer. Lead, zinc, molybdenum, uranium, and oil shale are all extracted from the region. Lumbering and ranching are other economic activities. Tourism is vital for the income it generates.
California is the area between the Southwest and the Pacific Northwest and between the Sierra Nevada Mountains and the Pacific Ocean. As such it includes the Coastal Ranges and valleys, the wetter north, the Great Central Valley, the drier south (an intrusion into the Southwest), and the Sierra Nevadas. Outstanding is the Great Central Valley, famous for its miles of fruit orchards and for its wheat, barley, rice, and cotton fields. Because water is naturally scarce, Californians have built structures to carry water from the Sacramento Valley in the north to the San Joaquin Valley in the south (the Delta-Mendota Canal and the California Aqueduct). Irrigation water is critical for crops in the dry south. In the foggy Coastal Range, vegetable crops thrive—lettuce, brussels sprouts, artichokes, and broccoli. The wetter north is famous for the California redwoods; the Sierras for their recreational opportunities; and the drier south for its citrus fruits and for Los Angeles and the Los Angeles Basin. California is becoming increasingly urban, and the city blocks are eating into California’s farmland.
Stretching from northwestern California to Alaska is the Pacific Northwest. Western Oregon, most of Washington, and the British Columbia, Yukon, and south Alaska coasts are all included. It is for the most part a wet land, though dryness comes quickly in areas east of the Cascades and Coast Mountains. It is a land of trees. Lumbering is consequently a major economic enterprise, as is mining. British Columbia, for example, exports coal and copper to Japan. Salmon fishing, inaugurated by the Indian tribes, is also a staple industry. The finest salmon fisheries have moved from Oregon and Washington to British Columbia and Alaska. Agriculture is also of great significance. The lowlands—the Willamette Valley, Puget Sound, and Fraser River delta—produce dairy products, fruits, and vegetables. Washington’s apples are known widely over the continent. Eastern Washington is well known for the wheat of the Palouse. The region is also famous for harnessing its rivers by erecting large dams and producing electrical energy. From the North Slope of Alaska petroleum is sent to Valdez for shipment south.
Between the United States border (Texas, New Mexico, Arizona, and California) and Central America lies Mexico. The country can be divided into the desert north, the interior plateau and mountains, and the tropical lowlands (Caribbean, Gulf, and Pacific). Altitudinal zones can also be distinguished: tierra caliente from sea level to just above 2,000 feet (600 meters); tierra templada, 2,000 to 6,000 feet (600 to 1,800 meters); and tierra fria, from 6,000 feet (1,800 meters) to 11,000 feet (3,350 meters). About two thirds of Mexico’s people have mixed Indian and Spanish origins. Most people are concentrated in the temperate areas of the central plateau. Corn, beans, squash, and wheat support dense rural populations, but people have flocked in increasing numbers to Mexico City, the old Aztec capital. The city, one of the largest in the world, has many urban problems—substandard housing, pollution, and unemployment among them. Petroleum discoveries have had a major influence on the Mexican economy. Tourism also plays a role. Americans and Europeans visit many parts of the country: Acapulco, Guadalajara, Mexico City, and the Yucatán. But much poverty remains, prompting many Mexicans to move across the border to the United States in pursuit of jobs and a better life.
The region known as Central America is dominated by the high mountain frame that runs east-west across Guatemala, Honduras, and Nicaragua and by the lower mountains that run through Costa Rica and Panama. Most of Central America’s 37 million people live in the highland areas. They are largely of Indian, Spanish, and mixed origins. A large black population lives along the wet, hot Caribbean coast. Guatemala is known for its Indian population and its coffee, sugar, and cotton haciendas; El Salvador for its coffee and cotton; Nicaragua for its coffee, cotton, and bananas; Costa Rica for its coffee and sugarcane; and Panama for the famous canal that joins the Atlantic and Pacific oceans.
The group of islands called the Antilles consists of the Greater Antilles (Cuba, Hispaniola, Puerto Rico, and Jamaica) and the Lesser Antilles (the Leeward and Windward Islands). The islands were originally inhabited by Indian populations, but the Indians succumbed to diseases carried by their European conquerors. Black slaves were brought from Africa to work in the sugarcane fields. Sugarcane remains the major crop, but coffee and tobacco are also grown. Saint Vincent produces arrowroot; Grenada, cocoa and nutmeg; Saint Lucia, bananas. Jamaica is endowed with bauxite, a major mineral resource. Much of the area is quite poor.
Indian populations arrived from Asia via the Bering Land Bridge and populated the continent between 50,000 and 15,000 years ago. They established cultures—gathering, hunting, fishing, and farming—from Alaska to the Lesser Antilles. They may have numbered 20 million at the time of the coming of the first European immigrants. The Norsemen arrived in Newfoundland in AD 1000, but their influence on the region and its inhabitants was slight. In the wake of Christopher Columbus in 1492, however, the Spanish influence was devastating. The Spaniards settled in Hispaniola and the neighboring islands. Not immune to the diseases imported by the Spaniards, the Arawak and Carib populations declined sharply, the Arawak disappearing from the islands. The Aztecs and the Mayas of Mexico and Guatemala were also conquered, and their numbers also declined. On the islands, black Africans were brought to replace Indian workers. In Mexico and Central America the populations were increased by the descendants of intermarriages between Spanish and Indians. Along the St. Lawrence the French also intermarried with the Indian inhabitants. The English, Dutch, and Swedes profoundly reduced the Indian population in the Northeast.
Africans, first brought to the Antilles, were taken to Virginia beginning in 1619. By 1790 they constituted 20 percent of the population of the United States and a far larger share in the South. Meanwhile, the Spanish-Mexican population pushed into the Southwest, and the influx of Europeans—especially from the British Isles and Germany—increased considerably. In the decade before World War I nearly 9 million Europeans came to the United States, and in 1913 alone some 400,000 entered Canada. In the 1920s immigration was restricted, but it increased later. Today more than one quarter of North American immigrants come from Asia, another quarter from Europe, and the remainder from Central and South America. North America thus has a great variety of racial and ethnic types.
The United States and Canada are largely English-speaking countries. But Spanish is widely spoken in the American Southwest, southern Florida, and in many large urban areas. French is dominant in Quebec and common in parts of New England and Louisiana. Moreover, European and Asian immigrants continue to bring different languages to the United States and Canada. In Mexico Spanish is dominant, and in the Antilles Spanish, French, and English are dominant in particular areas.
The Christian religion has the greatest number of adherents. Roman Catholics make up the largest single denomination. Methodist numbers are greatest in the area between New York and Nebraska, Lutherans between Wisconsin and Montana, Baptists in the South, and Mormons in Utah. Catholicism has its greatest number of adherents in New England, the Southwest, and in urban areas. The Jewish population is largely urban. Eastern Orthodox, Old Catholic, Polish National Catholic, Armenian, Buddhist, and Muslim faiths have fewer adherents. In Canada, Roman Catholics are in a majority in Quebec, but Protestants dominate elsewhere. Roman Catholics are in the majority in Mexico, Central America, and the Antilles.
The arts have been well expressed through painting, sculpture, architecture, music, dance, theater, photography, and filmmaking. American Indians are known for their ceramics, work in stone and metal, house types, and painting and music.
The Inuit are celebrated for their work in stone and ivory. African, European, and Asian immigrants have also made significant contributions to North American art. Much of that work can be seen in the museums, theaters, and galleries scattered over the continent. North American photographers and filmmakers have helped to capture the essence of their times. Hollywood is a household word the world over. In English, French, and Spanish, poets, short-story writers, and novelists have made significant contributions.
The standard of living varies significantly in the different parts and among the different people of North America. In Canada and the United States the quality of life is generally high. The literacy rate in those two countries exceeds 95 percent. Fine educational facilities, both public and private, are available from the primary grades through college and graduate school. Hospital facilities are second to none. Yet not all people share equally in the wealth of educational and health-care resources. In the United States, for example, the percentage of college graduates among African Americans and Hispanic Americans lags behind that of whites, largely because of unequal educational opportunities. In addition, life expectancy for white Americans is about 75 years for males and more than 80 years for females, but it is 68 and 75 years, respectively, for African Americans. Such discrepancies are even greater when Canada and the United States are compared to Mexico, Central America, and the Antilles. In these places poverty is more prevalent and health and educational facilities are less available.
North America is blessed with abundant fossil fuels and metallic mineral resources. Petroleum and related products supply about 50 percent of the United States’ energy and 55 percent of Canada’s. The major petroleum reserves occur in a broad arc from northern Alaska to Texas and in smaller arcs from Texas and Louisiana north to Michigan and along the continental shelf from Texas to Nova Scotia. Texas, Louisiana, and California are the major producing states, and Alberta is the leading producer among the Canadian provinces. Mexico is also a major producer. Natural gas supplies more than 30 percent of the United States’ energy requirements and more than 25 percent of Canada’s. Texas and Louisiana have the largest fields, and Alberta is a leader in Canada. The United States has much coal in its Appalachian, Eastern Interior, Western Interior, and Rocky Mountain fields. If petroleum and natural gas reserves are depleted, coal may see a resurgence as an energy source. The continent is endowed with iron ore (Great Lakes and Quebec-Labrador), copper (Canadian Shield and Southwest), lead and zinc (Canadian Shield and British Columbia), and nickel and molybdenum (Canadian Shield, British Columbia, and Colorado). Mexico produces silver. Jamaica has bauxite for the manufacture of aluminum. Canada also produces asbestos, sulfur, and potash.
The waterways of the continent—especially the Great Lakes and Ohio-Mississippi rivers—and the locations of the mineral resources have had a profound effect on the location of manufacturing. Iron ore from Michigan, Wisconsin, and Minnesota (transported on the Great Lakes) and coal from Illinois and Pennsylvania (transported by rail and barge) help to support the steel mills in Chicago, Ill.; Gary, Ind.; Cleveland, Ohio; and Buffalo, N.Y., though the steel industry suffered beginning in the 1970s from the importation of cheaper foreign steel. Pittsburgh, Pa., and Youngstown, Ohio, have been major steel-manufacturing centers. Steel manufacture has also been a factor in the location of the automobile industry. Detroit, Windsor, and Oshawa are all leading automobile manufacturing centers. Overall there are considerable differences in manufacturing on a regional basis. New England, for example, has long been famous for manufacturing machinery and electrical equipment and for fabricating metals; the Midwestern areas for producing automobiles and food; the South for textiles and chemicals; and so on. The fastest growing industries today involve high levels of technology and innovation: information processing, communications (computers), and aerospace. These industries have been released from reliance on the locations of minerals and waterways. They have found homes in the Sunbelt—the South and Southwest.
Only 5 percent of the United States population and 6 percent of the Canadian population are engaged in agriculture, yet both countries rank among the leading exporters of agricultural products in the world. Southern Canada and the northeastern part of the United States are dominated by mixed farming, the South by general farming, with soybeans, tobacco, and cotton as major crops. The great center of the continent is dominated by corn and soybeans; beef cattle, pigs, and dairy cattle; and feed crops. Farther west the great winter and spring wheats become dominant. Irrigated crops and grazing are significant in the West (Great Basin), wheat in the Palouse, and mixed farming in the Pacific Northwest. California is famous for its mixed farming, vineyards, and citrus fruits. In both the United States and Canada, increasing production is largely the result of the application of commercial fertilizers. Since the 1940s the yield per acre of corn and potatoes has tripled; wheat, cotton, and tobacco yields have doubled. The yield of milk from dairy cows has also doubled. Beef animals are now slaughtered after two years of growth rather than three or four, having attained maximum growth at an earlier age. Improvements have been made in hybrid seed corn and in soybean, tobacco, wheat, and alfalfa strains. Many of America’s farmers are not prospering, however. The combination of soaring interest rates on their loans and depressed prices for their farm products caused them serious financial difficulty in the 1980s and 1990s. Net income per farm has dropped considerably. In Mexico, Central America, and the Antilles the percentage of farmers in the population is high. Production is much less per acre, and the prospects are not very different from what they were in the 1960s and 1970s.
Less than 1 percent of the labor force in the United States and Canada is employed in commercial fishing. The great fisheries include the continental shelves, the Grand Banks off Newfoundland, and the Great Lakes. Both the United States and Canada have instituted a 200-nautical-mile limit, thus restricting the approach of foreign fishing vessels—principally Japanese and Russian. The Pacific coast is still famous for its salmon and tuna catches, the Grand Banks for cod, and the Great Lakes for whitefish. In the inshore fishery off Maine and Nova Scotia, lobster is a major catch. Most salmon are caught today off the Alaskan coast and in British Columbia. There are local fisheries in Mexico, Central America, and the Antilles.
Less than 1 percent of the labor force in the United States and Canada is employed in lumbering. The area covered by the lumbering enterprise, however, is large in both countries. The chief forest areas are in the Canadian Shield, the Pacific Northwest coast and California, the Rocky Mountains, the South, New England and the Maritimes, and the Great Lakes. The chief products are lumber, pulp and paper, and veneer. About half of the sawed timber in the United States comes from the West, more than 35 percent from the South, and most of the rest from New England. In Canada most sawed timber is derived from British Columbia, Quebec, and Ontario. Paper and pulp production dominated the forest economy of the Canadian Shield throughout the 20th century. Almost 90 percent of the output is exported to the United States. The South is also a contributor of pulp and paper. Tropical rain forests, seasonal forests, and woodlands cover portions of Mexico, Central America, and the Antilles.
The great spaces of North America are linked by impressive rail, highway, and air networks. Waterways and pipelines have also contributed to the transportation network. In Canada the Canadian Pacific Railway and the Canadian National Railway join the Atlantic and Pacific oceans.
In the United States a dense rail network covers the Middle West, the Middle Atlantic states, and southern New England. A less dense network covers the South and West. Because both countries operate standard-gauge tracks (4 feet 8 1/2 inches), trains can pass from one country to the other without impediment. Since 1950 Mexico’s railroads have also adopted the standard gauge. Mexico is connected with Guatemala by rail.
There is a tendency for the highway network to parallel rail lines. The Trans-Canada Highway links the Atlantic and Pacific coasts. In the United States a vast federal program has developed a superior highway system. The Pan-American Highway connects the United States with Mexico and Central America.
Airlines also join the large urban areas as well as many of the remote places on the continent. Bush pilots even fly to remote Inuit villages.
Extensive use is made of the continent’s inland waterways, especially for shipping. The Great Lakes–St. Lawrence Seaway permits small oceangoing vessels to enter the Great Lakes; some of these ships carry cargoes directly to Europe. Heavy barge traffic is maintained on the Ohio–Mississippi River waterway and along the Intracoastal Canal.
Pipelines are increasingly significant as transportation vehicles. Petroleum and natural gas lines link the producing areas with their principal markets. Crude petroleum is sent by pipeline from Prudhoe Bay to Valdez in Alaska and transferred to shipping tankers.
The countries of North America are all part of the larger world community. Most are members of the United Nations. The United States is a permanent member of the Security Council, helping to qualify it as one of the world’s superpowers. Its influence is worldwide. Canada and the United States are both members of the North Atlantic Treaty Organization (NATO), the major Western military alliance whose prime responsibility is safeguarding Europe and North America from foreign aggression. All North American countries are members of the Organization of American States (OAS), which provides social, economic, political, and technical services for its members. In 1973 the Caribbean Community and Common Market was formed to promote economic union in the Caribbean. Canada and the United States, among the world’s richest nations, are heavily involved in foreign economic assistance.
In 1993 Canada, Mexico, and the United States adopted the North American Free Trade Agreement. Canada and the United States are among the leading trading nations in the world, and they are each the other’s leading trading partner. About three fourths of Canada’s exports, largely raw materials, go to its southern neighbor. In return the United States sends Canada manufactured products. The United States has penetrated deeply into the Canadian economy. Many American multinational corporations have large investments in Canada. Mining and smelting, petroleum and natural gas, and much of the manufacturing enterprise is dominated by American firms. In 1980 the Canadian government announced its National Energy Programme, an attempt to regain Canadian control of the oil and natural gas industry.
The United States is also Mexico’s leading trading partner. Petroleum, metals, machinery, equipment, cotton, and coffee are the leading Mexican exports. Mexico also sells to Japan and the countries of the European Union (EU). Mexico’s trade balance also relies on tourism. Bananas, cotton, coffee, and cacao enter international trade from Central America. The Antilles contribute sugarcane, coffee, and cacao.
North American settlement began with Asian immigrants—first the Native Americans and then the Inuit, or Eskimo. By the time the Norsemen, or Vikings, reached Newfoundland in AD 1000, Native Americans had effectively occupied the entire continent. True European settlement did not begin until the 16th and 17th centuries. The Spaniards occupied the Antilles and the Caribbean rim, the French entered the St. Lawrence Valley, and the English secured a foothold on the Virginia and Massachusetts coasts. By the mid–18th century Spanish plazas and Roman Catholic churches could be found from Panama through Mexico, the Southwest, and the Southeast; French farms could be seen in the St. Lawrence Valley and the Midwest; and English settlement was pushing steadily westward. The struggle for the continent was an epic one. In the 1763 Treaty of Paris that concluded the French and Indian War, the English emerged as the victors. The French were reduced to holding several islands in the West Indies and the islands of St. Pierre and Miquelon; the Spaniards—a less powerful foe—still held dominion in the Antilles, Central America, Mexico, the Southwest, and the Southeast.
English institutions gained a clear ascendancy in the eastern portions of the continent. But repressive English laws—and the creation of an American personality—led ultimately to the American Revolution and political freedom for the colonists. A Constitution for the United States was formulated in 1787, and the Northwest Ordinance the same year established a government for the Old Northwest. The United States was moving westward. American settlers, in the wake of Lewis and Clark, began to move to Oregon in the 1830s and 1840s. The Mexican War from 1846 to 1848 and the Treaty of Guadalupe-Hidalgo led to United States acquisition of the American Southwest. The Gadsden Purchase in 1853 brought final agreement on the United States–Mexico boundary.
In Canada, too, the movement was westward. Settlement moved from Ontario into Manitoba, Saskatchewan, and Alberta, and finally into British Columbia. Confederation was achieved in 1867.
Mexico survived the Mexican War and the Mexican Revolution (1910–15) to become a stable and coherent state. In Central America and the Antilles a canal opened for traffic across the isthmus of Panama in 1914, the English held British Honduras (later Belize), and European countries continued to administer territories in the Antilles.
The United States experienced a disastrous Civil War from 1861 to 1865. At issue were the questions of black slavery and the Southern and Northern economies. The South became increasingly defiant of Northern opinion. South Carolina seceded from the Union, and other states followed suit. Armies were conscripted, and the Northern forces eventually won.
In the years following the Civil War, the United States became the mecca for European immigrants. Europeans rejoiced in American freedom and in turn made great contributions to the labor force in the massive industrialization that took place in the United States in the decades after 1865. Between 1900 and 1914 more than 13 million immigrants entered the United States. In Canada a program brought Slavs, Hungarians, Poles, Germans, and Romanians to the Prairie Provinces. Immigrants also flocked to Ontario and British Columbia. In Mexico, Central America, and the Antilles, population increases were largely from natural causes.
As a result of the Spanish-American War (1898) and World War I (1914–18), the United States emerged as a leading world power. In the 1920s the country grew prosperous. Tariffs were raised, immigration was reduced, and there was a reaction against imperialism and a tendency toward isolation from foreign involvement. The boom years ended with the stock market crash of 1929, signaling the start of the Great Depression. By 1932, about 12 million people were unemployed. Banks closed and the country faced economic paralysis. President Franklin D. Roosevelt created a program called the New Deal, spearheaded by the National Industrial Recovery Act of 1933. Notable New Deal innovations were the Public Works Administration (PWA), Tennessee Valley Authority (TVA), and Social Security Act.
On Dec. 7, 1941, without warning, the Japanese air force and navy attacked Pearl Harbor in Hawaii, making a shambles of the United States Pacific fleet. The United States accordingly declared war on Japan and the other Axis powers—Germany and Italy. Canada did the same. North American forces fought well in all of the war zones. Germany finally surrendered on May 7, 1945. In August the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki. World War II was over.
There were wars in Korea in the 1950s and Vietnam in the 1960s and 1970s. In the 1980s United States forces served in the Middle East in Lebanon and invaded the island of Grenada. They also led attacks on Iraq in 1991 and 2003.
North American society changed considerably after the Vietnam War—particularly in the United States and Canada. Material wealth increased dramatically. Incomes rose. A majority of families owned their own homes, cars, and televisions and worked at good jobs. Work at home and on the farm became highly mechanized. Increasingly, North Americans moved to the suburbs, went to college, and sought more leisure time. The credit card and computer became North American institutions. But poverty, unemployment, prejudice, the denial of civil rights, and other social ills persisted.
In Mexico, Central America, and the Antilles, there was less prosperity. Illegal Mexican immigrants continued to cross the United States border; El Salvador, Nicaragua, and Guatemala struggled with long civil wars; Fidel Castro presided over a dictatorship in Cuba; and Haitians were gripped by poverty. By the late 20th century, however, some progress had been made. A number of countries became more democratic, and some saw the end of long-standing conflicts. (See also Central America, and the various articles on North American countries, Canadian provinces, states of the United States, major cities, and geographic features.)
Earth’s Core is Melting and Freezing
Beneath Earth’s solid crust are the mantle, the outer core, and the inner core. Scientists learn about the inside of Earth by studying how waves from earthquakes travel through the planet. Credit: World Book illustration by Raymond Perlman and Steven Brayfield, Artisan-Chicago
The inner core of the Earth is simultaneously melting and freezing due to circulation of heat in the overlying rocky mantle, according to research from the University of Leeds, UC San Diego and the Indian Institute of Technology. The Earth is the only example we have of a habitable planet, and understanding the properties of the Earth can help us determine how the planet supports a biosphere – and what to look for when searching for habitable worlds around distant stars.
The new findings, published in the journal Nature, could help us understand how the inner core formed and how the outer core acts as a ‘geodynamo’, which generates the planet’s magnetic field.
"The origins of Earth’s magnetic field remain a mystery to scientists," said study co-author Dr Jon Mound from the University of Leeds. "We can’t go and collect samples from the centre of the Earth, so we have to rely on surface measurements and computer models to tell us what’s happening in the core."
"Our new model provides a fairly simple explanation to some of the measurements that have puzzled scientists for years. It suggests that the whole dynamics of the Earth’s core are in some way linked to plate tectonics, which isn’t at all obvious from surface observations.
"If our model is verified it’s a big step towards understanding how the inner core formed, which in turn helps us understand how the core generates the Earth’s magnetic field."
The Earth’s inner core is a ball of solid iron about the size of our moon. This ball is surrounded by a highly dynamic outer core of a liquid iron-nickel alloy (and some other, lighter elements), a highly viscous mantle and a solid crust that forms the surface where we live.
Artist’s concept of the interior of Mars with a hot liquid core. Studying the interior of Earth allows astrobiologists to compare our world to other celestial bodies. This ‘comparative planetology’ can provide clues about the potential habitability of other planets. Image Credit: NASA/JPL
Over billions of years, the Earth has cooled from the inside out causing the molten iron core to partly freeze and solidify. The inner core has subsequently been growing at the rate of around 1 mm a year as iron crystals freeze and form a solid mass.
The heat given off as the core cools flows from the core to the mantle to the Earth’s crust through a process known as convection. Like a pan of water boiling on a stove, convection currents move warm mantle to the surface and send cool mantle back to the core. This escaping heat powers the geodynamo and coupled with the spinning of the Earth generates the magnetic field.
Scientists have recently begun to realise that the inner core may be melting as well as freezing, but there has been much debate about how this is possible when overall the deep Earth is cooling. Now the research team believes they have solved the mystery.
Using a computer model of convection in the outer core, together with seismology data, they show that heat flow at the core-mantle boundary varies depending on the structure of the overlying mantle. In some regions, this variation is large enough to force heat from the mantle back into the core, causing localised melting.
The model shows that beneath the seismically active regions around the Pacific ‘Ring of Fire’, where tectonic plates are undergoing subduction, the cold remnants of oceanic plates at the bottom of the mantle draw a lot of heat from the core. This extra mantle cooling generates down-streams of cold material that cross the outer core and freeze onto the inner core.
Temperature (colour contours) and fluid flow (arrows) on the equatorial section for the statistically locked tomographic model. The lowest temperature is blue and the highest temperature is deep red. Credit: Gubbins, D., Sreenivasan, B., Mound, J., Rost, S. 2011. Melting of the Earth’s Inner Core. Nature, Vol. 473, p. 361-363.
Conversely, in two large regions under Africa and the Pacific where the lowermost mantle is hotter than average, less heat flows out from the core. The outer core below these regions can become warm enough that it will start melting back the solid inner core.
Co-author Dr Binod Sreenivasan from the Indian Institute of Technology said: "If Earth’s inner core is melting in places, it can make the dynamics near the inner core-outer core boundary more complex than previously thought.
"On the one hand, we have blobs of light material being constantly released from the boundary where pure iron crystallizes. On the other hand, melting would produce a layer of dense liquid above the boundary. Therefore, the blobs of light elements will rise through this layer before they stir the overlying outer core.
"Interestingly, not all dynamo models produce heat going into the inner core. So the possibility of inner core melting can also place a powerful constraint on the regime in which the Earth’s dynamo operates."
Co-author Dr Sebastian Rost from the University of Leeds added: "The standard view has been that the inner core is freezing all over and growing out progressively, but it appears that there are regions where the core is actually melting. The net flow of heat from core to mantle ensures that there’s still overall freezing of outer core material and it’s still growing over time, but by no means is this a uniform process.
"Our model allows us to explain some seismic measurements which have shown that there is a dense layer of liquid surrounding the inner core. The localised melting theory could also explain other seismic observations, for example why seismic waves from earthquakes travel faster through some parts of the core than others." |
Accuracy: An instrument is said to be accurate if the measured value of a physical quantity is very close to its true value.
Precision: An instrument is said to have a high degree of precision if the measured value remains unchanged, however many times the measurement is repeated.
The difference between the actual and measured value of a physical quantity is called error. If a_m is the measured value and a_t is the actual (true) value of a physical quantity, then the error is
E = ∆a = |a_m – a_t|
Types of error
Errors may broadly be divided into two types: systematic errors and random errors.
Systematic errors: Errors arising from the system of measurement itself, or from the components involved in it, are called systematic errors.
Instrumental errors: These may occur (a) due to faulty calibration, (b) due to wear and tear (for instance, zero error in a vernier calliper or screw gauge), (c) because of faulty fitting, and so on.
Personal errors: These are errors caused by the observer. Each observer has peculiar habits: some may be quite careless, others get bored when they have to repeat jobs, and still others cannot read correctly due to eyesight or other personal problems. Such errors are reduced if the observations are taken by several different observers.
Environmental errors: Changes in temperature (due to weather conditions), pressure, wind direction, humidity, and so on play a vital role while recording a reading. For example, if we measure the length of a rod in summer and in winter, the results will differ, since the rod and the measuring scale may have different expansion coefficients. These errors can be avoided by artificially maintaining the same environment in the laboratory.
Random errors (or statistical errors): Consider an example. The probability of getting a head on tossing a coin is 1/2, yet if a coin is tossed 1000 times, the chance that we get exactly 500 heads and 500 tails is negligible. Similarly, if a current of 1 mA is passing through a wire, can we be sure that a constant number of electrons, n = 10⁻³ / (1.6 × 10⁻¹⁹) ≈ 6.25 × 10¹⁵, flows through it every second? These examples illustrate how random errors creep in; such errors cannot be removed.
Methods of expressing error
Absolute error: The deviation of a measured value from the true value, or from the mean value of all observations.
Thus, ∆x_i = |x_i – x_m|
is the absolute error, where x_m is the mean value and x_i is the i-th observation.
Relative error: The ratio of the mean absolute error to the true value of the physical quantity is called the relative error. Thus ∆x/x or ∆x/x_m is the relative error.
Percentage error = relative error × 100 = (∆x/x) × 100
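As an illustrative sketch, the definitions above can be applied to a set of repeated readings; the five values below are hypothetical, chosen only for the example, and the mean of the observations stands in for the true value, as in the definition of absolute error:

```python
# Illustrative sketch: absolute, relative, and percentage error for
# repeated measurements. The readings are hypothetical example values.

def error_summary(observations):
    n = len(observations)
    x_mean = sum(observations) / n                        # x_m, taken as the true value
    abs_errors = [abs(x - x_mean) for x in observations]  # ∆x_i = |x_i - x_m|
    dx = sum(abs_errors) / n                              # mean absolute error ∆x
    rel = dx / x_mean                                     # relative error ∆x / x_m
    return x_mean, dx, rel, rel * 100                     # last entry: percentage error

# Five hypothetical readings of a rod's length, in cm:
mean, dx, rel, pct = error_summary([2.63, 2.56, 2.42, 2.71, 2.80])
print(f"mean = {mean:.3f} cm, ∆x = {dx:.4f} cm, error = {pct:.1f}%")
```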
Propagation or combination of errors
Case (i): when x = a + b,
maximum possible % error = (∆x/x) × 100 = [(∆a + ∆b)/(a + b)] × 100

Case (ii): when x = a – b,
maximum possible % error = (∆x/x) × 100 = [(∆a + ∆b)/(a – b)] × 100

Case (iii): when x = a·b,
maximum possible % error = (∆x/x) × 100 = (∆a/a + ∆b/b) × 100

Case (iv): when x = a/b,
maximum possible % error = (∆x/x) × 100 = (∆a/a + ∆b/b) × 100

Case (v): when x = a^l b^m / (y^p z^k),
maximum possible % error = (∆x/x) × 100 = (l∆a/a + m∆b/b + p∆y/y + k∆z/z) × 100
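The general case (v) can be sketched in a few lines of Python. The function and the numbers below are hypothetical, chosen only to show how each fractional error is weighted by the magnitude of its exponent:

```python
# Sketch of case (v): for x = a^l · b^m / (y^p · z^k), the maximum
# possible % error is (l·∆a/a + m·∆b/b + p·∆y/y + k·∆z/z) × 100.
# Denominator quantities are passed with negative exponents and enter
# via their absolute value, since errors always add in the worst case.

def max_percent_error(terms):
    """terms: list of (value, absolute_error, exponent) tuples."""
    return 100 * sum(abs(exp) * (da / a) for a, da, exp in terms)

# Hypothetical example: x = a^2 · b / z with
# a = 10.0 ± 0.1, b = 5.0 ± 0.1, z = 2.0 ± 0.05
err = max_percent_error([(10.0, 0.1, 2), (5.0, 0.1, 1), (2.0, 0.05, -1)])
print(f"max % error = {err:.2f}%")  # 2×1% + 2% + 2.5%
```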
How important is it to understand parts of speech and how to use them correctly? This workbook is filled with a variety of exercises for application and review emphasizing correct use of grammar and mechanics, so you can help your child know the parts of speech. Further your 3rd grader’s knowledge of capitalization and punctuation as he improves his sentence writing. Introduce your child to nouns, verbs, and adjectives in his quest of learning all 8 parts of speech. Forming the plurals of words, using contractions, and correctly using words like sit/set/sat and can/may will help your child speak and write correctly. The new graphic organizers will guide your students from the very beginning as they write compositions. With over 100 creative writing exercises presented in fun themes such as outer space, zoos of the world, nocturnal animals, and many more, your child will learn to be a more effective communicator. This item coordinates with Homeschool Language Arts 3 Curriculum Lesson Plans and the Language 3 Answer Key, which includes the answers.
A computer is a device that can be instructed to carry out arbitrary sequences of arithmetic or logical operations automatically. The ability of computers to follow generalized sets of operations, called programs, enables them to perform an extremely wide range of tasks.
Such computers are used as control systems for a very wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer assisted design, but also in general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects millions of other computers.
Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers have increased continuously and dramatically since then.
Points of View
From Hip-Hop: Beyond Beats and Rhymes collection, lesson plan 1 of 5
(90 minutes + assignments)
Grade Level: 9-12, College
Subject areas: Social Studies, Language Arts, Ethnic Studies, U.S. History, Media Studies, Cultural Studies, Art/Music, Current Events
Purpose of the lesson: Good documentary work presents myriad points of view. Although Byron Hurt takes a strong stance against the violence, misogyny, and homophobia in hip-hop, in his film he presents many voices that speak for themselves on the subject. This lesson examines those voices and allows the students to reach their own conclusions about and evince their own responses to these points of view.
National teaching standards addressed:
National standards from the following organizations were used in developing this lesson plan. See recommended national standards available in the educator guide for full descriptions of standards employed.
- National Council for the Social Studies
- National Council of Teachers of English/International Reading Association
Curricula writer: David Maduli
David Maduli is an independent educational consultant who has contributed many curriculum guides and conducted various workshops for PBS programs. He has a master’s in teaching and curriculum from Harvard Graduate School of Education and continues to work as a veteran Bay Area public school language arts and social studies teacher.
Write this quote:
“Hip-hop is the voice of this generation. ... It has become a powerful force.”
—DJ Kool Herc, one of the “founding fathers” of hip-hop
Have students write a response using the sentence stems:
- I think Kool Herc is saying that...
- I agree/disagree with Kool Herc because...
Have the class show their point of view with a “thumbs up/thumbs down,” then call on a few students from each perspective to read their sentences.
In small groups, have students brainstorm what they know about hip-hop from their own experience and from various forms of media. They can use a chart like this one to organize the information they come up with, which can be in the form of related words, phrases, examples, and so on. They can use any or all of these categories or come up with their own. Refer to Student Handout B: Hip-Hop Matrix as a worksheet for your students.
Have students read “Issue Brief: Hip-Hop” to inform their “History/Origins” square.
Each group picks two or three squares to brainstorm. After the brainstorming session, have each group tell the rest of the class where their information comes from, then have the class as a whole discuss how the groups have similar or different ideas.
Show Film Module 1, “Overview and Images of Masculinity” (7:40). While they are watching, have the students record speakers and quotations for further reference and discussion after viewing the video module. Refer to Student Handout C: Speaker and Quotes as a worksheet for your students.
Refer to Teacher Handout A for a list of recommended speakers and quotations.
- Think – Have each student choose one of the speakers and quotations from his or her chart and journal responses to these questions: What is the speaker’s relationship to or role in hip-hop? What is the speaker’s view on violence and/or misogyny in hip-hop? If you could respond to the speaker’s statement, what would you say to him or her?
- Pair – Divide students into pairs, then have them compare with each other what they wrote about their respective speakers. They can use the following questions to guide their discussion: Would the two speakers agree with each other? What would they say in response to each other’s statements? Which of the speakers most represents your view?
- Share – The pairs share with the class, using these speaking stems:
We agree with ___, who says ... / We disagree with ___, who says ...
Activity: Crossing the Line
In this activity, students will think about their own views on hip-hop and express those views in a nonverbal activity. Make a line on the floor through the middle of the classroom with masking tape. Standing on one side of the line will indicate agreement with the statement the facilitator reads. Standing on the other side will indicate disagreement. One at a time, read each of the following statements aloud to the class and allow the students to go to the side that indicates their view:
- Hip-hop is a creative art form and a form of expression.
- I enjoy listening to rap music.
- When I hear a rap song, I pay more attention to the beat than to the lyrics.
- Rap lyrics contain too many references to violence and gunplay.
- Many rappers are just reinforcing negative stereotypes about urban youth and young people of color.
- Rappers who talk about violence and the streets are just reflecting the violent American culture that we live in.
- Musicians have a responsibility to provide positive messages and images because children are listening.
- Consumers of music don’t want to hear music with conscious, righteous, or positive messages.
- Record labels would rather promote stereotypical “gangster” rap music because it sells more units.
- Hip-hop has become commodified and exploited by corporate America.
- Rap music as a whole is disrespectful toward women.
- Rap music as a whole is hateful toward gays.
- Rap music, like movies, is ultimately entertainment and should not be taken so seriously.
- Hip-hop is a culture that has the power to unify people across linguistic, racial, and geographic lines.
- Hip-hop has the power to be a voice of resistance and social change.
- Hip-hop has become a caricature and a modern-day minstrel show.
Discussion – Reflect on the activity with the following guide questions:
- Which statements were easy/difficult for you?
- Which responses from the class surprised you?
- Which statements did you feel very strongly about?
Assignment: Persuasive Essay
Have each student pick one statement from the Crossing the Line activity about which to write a persuasive essay. In their essay, students should take a clear stance on whether they agree or disagree with the statement and support their claims with evidence from the film. They should use as examples the quotations they selected in Step 3 of this activity.
There is no extension activity with this lesson plan.
Hip-Hop: Overview and Images of Masculinity |
The Gothic art movement in medieval Europe began almost exclusively as a manifestation of religious painting, sculpture and architecture in the 12th century. Growing out of the sturdy, somewhat crude representations of the Romanesque style, the Gothic art movement strove, by its late period in the 14th and 15th centuries, to liberate painted and sculpted images to a more natural and free-flowing depiction. Late Gothic architecture saw the erection of high-ceilinged buildings that soared heavenward.
Painting during the Gothic period was strictly based upon religious themes and subject matter. As this artistic movement deepened into its "late" period, however, the figures that were once cast in a flat, nonrealistic way similar to the painting of Eastern religious icons became much more lifelike as painters began to work with concepts such as perspective. Perspective in painting is the means a painter uses to make things appear to recede into the distance from the central image of the painting. Paintings also began to include much more detail and depicted many figures in motion rather than statically sitting or standing. Both of these innovations gave late Gothic paintings a more "natural" look than earlier ones had.
Early Gothic sculpture resembled the sculpture of the Romanesque period. Most often carved in relief on the side or interior of a church, these images were simplistically rendered and meant to represent an idealized view of man, saints, angels and Jesus. As the Gothic period progressed, sculptors strove to create more natural-looking images, adding detail, movement and very human facial expressions to their sculptures. This movement toward a more natural depiction of the subject matter allowed, for example, viewers to see the Virgin Mary as caring for her son, Jesus, as any human mother would, rather than as a stiff, stone figure. Emotions such as joy and sorrow also became evident within the late Gothic sculptural representations.
Gothic painting and sculpture was religious in nature and most often found on the outside or inside of a Gothic cathedral. Stone cathedrals are considered the crowning artistic achievement of the Gothic period of art. Sometimes requiring 100 years to construct, these churches were meant to glorify God with their soaring heights and breathtaking stained glass windows. By the late Gothic period, the buildings' flying buttresses (the support systems that allowed the soaring heights) enabled the cathedral walls to contain more and more stained glass windows, which became exquisitely detailed images of the life of Christ and other Biblical themes.
The Early Renaissance
Since no "period" of art begins and ends abruptly, the late Gothic period can also be seen as blending into the early Renaissance. The characteristic that most influenced this transition was the late Gothic development of a more natural, realistic approach to figures and images in painting and sculpture. Artists of the early Renaissance still worked frequently with religious subject matter, but they began to also paint and sculpt images from classical Greece and Rome as well as modern royal figures. The trend in all of this art was to capture a more true sense of the "humanness" of the image and a more realistic presentation of natural settings.
Advancing Basic Science for Humanity
Nomads of the Galaxy
Planets simply adrift in space may not only be common in the cosmos; in the Milky Way Galaxy alone, their number may be in the quadrillions. Three experts discussed what this might mean, whether a nomad planet could drift close to our solar system, and how it is possible for a nomad planet to sustain life.
TO THE LAYPERSON, THE HEAVENS FOLLOW FAIRLY PREDICTABLE PATTERNS. Moons orbit planets. Comets, asteroids and planets, such as the Earth, orbit stars. So when the news broke in late February that astronomers have estimated an almost incomprehensible number of planets drifting through interstellar space – unbound to any star – the story was everywhere.
An artistic rendition of a nomad object wandering the interstellar medium. The object is intentionally blurry to represent uncertainty about whether or not it has an atmosphere. A nomadic object may be an icy body akin to an object found in the outer Solar System, a more rocky material akin to asteroid, or even a gas giant similar in composition to the most massive Solar System planets and exoplanets. (Image by Greg Stewart/SLAC)
Previous studies have found evidence for nomad planets. Last year, researchers detected about a dozen of them through gravitational microlensing – a technique that looks for stars whose light is momentarily refocused by the gravity of passing planets. But the new study, aptly titled “Nomads of the Galaxy” and published in the Monthly Notices of the Royal Astronomical Society, proposed an upper limit to the number of nomad planets that might exist in the Milky Way Galaxy: 100,000 for every star. And because the Milky Way is estimated to have 200 to 400 billion stars, that could put the number of nomad planets in the quadrillions. (Note: The researchers calculated their estimate by considering the known gravitational pull of the Milky Way, the amount of matter available to make such objects, and how that matter might be distributed into objects that range from the size of Pluto to larger than Jupiter.)
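The arithmetic behind the “quadrillions” figure is simple to check; this small sketch just multiplies the study’s upper limit of nomads per star by the quoted range of star counts:

```python
# Back-of-the-envelope check of the "quadrillions" estimate:
# up to 100,000 nomad planets per star, and 200-400 billion stars.
nomads_per_star = 1e5
stars_low, stars_high = 2e11, 4e11

low = nomads_per_star * stars_low    # 2e16, i.e. 20 quadrillion
high = nomads_per_star * stars_high  # 4e16, i.e. 40 quadrillion
print(f"{low:.0e} to {high:.0e} nomad planets")
```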
The Kavli Foundation spoke recently with two of the authors of the new nomad planets study, as well as a leading researcher in the search for planets and life beyond Earth. The participants:
- Roger D. Blandford, Director of the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) at Stanford University and the SLAC National Accelerator Laboratory, Professor of Physics at Stanford University and at the SLAC National Accelerator Laboratory, and a co-author of the study.
- Dimitar D. Sasselov, Professor of Astronomy at Harvard University and the Harvard-Smithsonian Center for Astrophysics, and the Director of the Harvard Origins of Life Initiative, which bridges the physical and life sciences to study issues ranging from planet formation to the origin and early evolution of life.
- Louis E. Strigari, Research Associate at KIPAC and the SLAC National Accelerator Laboratory, who led the team that reported the result.
The following roundtable discussion has been edited by the participants.
THE KAVLI FOUNDATION: Let's pretend we’re in a spacecraft looking for unexplored planets in the Milky Way Galaxy. How often are we going to be running into nomad planets as we travel from star to star?
LOUIS STRIGARI: We still don't have a great inventory of how many nomad planets there are in our galaxy, at the very small scales. There could be up to 10^5 nomad planets greater than, say, the mass of Pluto, per star. But I think you have to realize that interstellar space is vast. Much of it is empty, as we perceive on a human scale. Still, it's possible that maybe within a light year of us going out toward the nearest star, there could very easily be a few of these planets floating around for us to find.
TKF: Can we say anything about how large they might be?
ROGER BLANDFORD: I’ll pick up the story there. Of course, there have been some well-publicized discussions of what the definition of a planet is, and in our paper we talk about nomad planets from Jupiter on downward to very small masses. We considered nomad planets down to the size of Pluto because we have a particular technique in mind for trying to detect them. So we asked, “What is the maximum number we can imagine being there?” We then presented this as an observational challenge to either detect them or place better constraints on how many of them are out there.
A Pluto-sized nomad planet is a convenient size for the microlensing technique. But if you have some other technique like your hypothetical interstellar spacecraft whizzing around at warp speed, then you could encounter smaller bodies more frequently than that. At a maximum, the closest one to Earth might be, say, 10 percent of a light year away, with a light year equal to 5.87 trillion miles. Smaller nomad planets the mass of Pluto could be encountered much more frequently than that. The short answer, of course, is that we don't know. But the exciting thing is that the opportunity and the wherewithal are there to start finding out.
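Blandford's two figures can be combined to put that distance in context. This is a rough sketch; the Sun-Pluto distance used for comparison (about 3.7 billion miles) is an approximate figure added here, not one from the interview:

```python
# Figures quoted above: the closest nomad planet might be about 10 percent
# of a light year away, and one light year is about 5.87 trillion miles.
miles_per_light_year = 5.87e12
nearest_nomad_miles = 0.1 * miles_per_light_year  # roughly 5.87e11 miles

# Assumed for comparison only: Pluto orbits roughly 3.7e9 miles from the Sun.
pluto_distance_miles = 3.7e9
ratio = nearest_nomad_miles / pluto_distance_miles

print(f"~{nearest_nomad_miles:.2e} miles, about {ratio:.0f}x Pluto's distance")
```

Even the nearest plausible nomad planet would therefore sit well over a hundred times farther out than the edge of the familiar planetary solar system.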
Roger D. Blandford, Director, Kavli Institute for Particle Astrophysics and Cosmology and a co-author of the study, has made significant theoretical contributions in diverse areas of astrophysics and cosmology. A Professor of Physics at Stanford University and at the SLAC National Accelerator Laboratory, Blandford was the chair of ASTRO 2010, the decadal survey sponsored by the National Research Council of the National Academies of Sciences that identifies the highest priorities in astronomy and astrophysics research for the next decade. (Courtesy: KIPAC)
DIMITAR SASSELOV: Based on what we know about how planetary systems form, we would expect a lot of the smaller nomad planets to be ejected into interstellar space, due to close encounters with planets the size of Jupiter and Saturn and sometimes even binary stars in the original protoplanetary disk. So, it is almost a given that the galaxy should have a lot of free-floating small nomad planets the size of Pluto and smaller.
LOUIS STRIGARI: That's correct. Most stars form in clusters, and around many stars there are protoplanetary disks of gas and dust in which planets form and then potentially get ejected in various ways. Questions we can ask for a computer simulation include, “How often during the formation of these star clusters, and then during their subsequent dispersal, do planets get ejected?” “How often do these planets get exchanged between stellar systems?” If these early-forming solar systems have a large number of planets down to the mass of Pluto, you can imagine that exchanges could be frequent.
TKF: Nomad planets, by definition, are cast completely into interstellar space and therefore aren’t bound to any star. How could this happen?
Dimitar D. Sasselov, Professor of Astronomy at Harvard University and the Harvard-Smithsonian Center for Astrophysics, is the Director of the Harvard Origins Project. The project is an interdisciplinary center that studies planet formation, the detection of exoplanets, the origin and evolution of life, and other subjects. Sasselov, the author of the recently published book, The Life of Super-Earths (Basic Books, 2012), has spoken on the search for Earth-like planets at the renowned TED conferences and at other venues.(Credit: Hubert Burda Media/Flickr)
DIMITAR SASSELOV: You could say this has happened in our own solar system with small objects, typically comets, which pass close to Jupiter. They don’t really hit the planet but instead come very close. They get what’s called a gravitational assist, which our own spacecraft use in order to speed up toward the edge of the solar system and leave it altogether – as is the case with NASA’s Voyager 1 and Voyager 2 spacecraft. This can happen also to a comet. It would have happened much more frequently early on when there were a lot of these small objects roaming around, and Jupiter probably did its fair share of expelling them from the solar system. When we consider larger nomad planets floating around today, it’s important to recognize that the ones that formed in a protoplanetary disk and then were ejected probably account for only part of the total number we expect – maybe one or two per star in the galaxy. That means we have to think of other ways in which they formed.
DIMITAR SASSELOV: Yes, certainly, that would be an extension of the mechanism by which stars form.
LOUIS STRIGARI: Theoretical calculations say that probably the lowest-mass nomad planet that can form by that process is something around the mass of Jupiter. So we don’t expect that planets smaller than that are going to form independent of a developing solar system.
Louis E. Strigari is a research associate at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and the SLAC National Accelerator Laboratory. His research interests include dark matter in astrophysics and particle physics, galactic structure, substructure and dwarf satellites, the search for galactic satellites, direct dark matter detection, neutrino astrophysics, and galactic microlensing. (Courtesy: L. Strigari)
DIMITAR SASSELOV: This is the big mystery that surrounds this new paper. How do these smaller nomad planets form?
DIMITAR SASSELOV: There are smaller bodies in our solar system and one of them that is still difficult to explain – way out beyond the orbit of Pluto – is Sedna.
ROGER BLANDFORD: I think there is evidence that Sedna, which many scientists refer to as a “dwarf planet,” is not indigenous. It's probably pretty implausible that an object that large would have landed in the solar system but it's not out of the question.
LOUIS STRIGARI: Extrapolating that further, you could even ask the question, “What's the probability or likelihood that we've been visited by an interstellar type of comet?” You could start addressing that question by looking at the inventory of the comets that are known to exist. Many of them are on hyperbolic orbits and are very weakly bound, if bound at all, to our solar system. I think it's interesting to note that in the near future we will have a better inventory of these transient-type comets with large-scale surveys. That will allow us to speculate about whether we are being visited by one of these smaller-mass nomad planets.
ROGER BLANDFORD: The odds go up, as you get to smaller and smaller planets, of transferring from one planetary system to another. There are many more of them, of course.
TKF: So, future studies should be able to better characterize the population of nomad planets, their size and so on.
ROGER BLANDFORD: That's definitely what we were pointing out in our paper. We are encouraging the use of existing facilities, or telescopes that are going to be coming online. Perhaps there could also be some inexpensive new technique that would make the estimate for the number of nomad planets more precise. Nomad planets are certainly enticing targets for observational astronomers. Remember, it wasn't so long ago that we had no good evidence for any extrasolar planets. We just had a lot of false alarms. And now, we are talking about up to 2,000 or 3,000 of them. The subject has developed at an extraordinary rate.
In this artist's visualization, the planet-like object dubbed "Sedna" is shown where it resides at the outer edges of the known solar system. The object is so far away that the Sun appears as an extremely bright star instead of the large, warm disc observed from Earth. In the distance is a hypothetical small moon, which scientists believe may be orbiting this distant body. Credit: NASA/JPL-Caltech/R. Hurt (SSC)
The discovery of nomad planets is prompting astronomers to re-think their definition of a planet. The standard description characterizes a planet as a celestial body, either rocky or gaseous, that orbits a star and has enough mass to be rounded but not enough mass to generate thermonuclear fusion. A nomad planet fits this definition with one key exception: it is not orbiting a star and is therefore not bound to any solar system.
There are a few ways, theoretically, that nomad planets could originate. Among them: Young planets in the primordial disk of gas, dust and rock that orbits a young star – called a protoplanetary disk – could be ejected by the gravitational force of a passing object. They also might form independently in the same molecular clouds of gas and dust from which stars are born. Perhaps the most intriguing speculation about nomad planets is that they could theoretically harbor life, even though they are on their own, far from the warmth of any star. To find more nomad planets and learn more about them, one proposal is to use microlensing techniques with current and next-generation survey instruments, including the future space-based Wide-field Infrared Survey Telescope (WFIRST) and the ground-based Large Synoptic Survey Telescope (LSST). Both are scheduled to begin operations in the early 2020s.
TKF: One more question about the exchange of planets between solar systems. Is this something that we need to worry about, here in our own solar system? Could a nomad planet be swept up into our solar system and perturb the orbits of planets here, even collide with Earth?
LOUIS STRIGARI: If we’re talking about an object impacting the Earth and causing a significant amount of damage, we should be more worried about an asteroid-type object that we know about, that already resides in our solar system.
ROGER BLANDFORD: The threat is internal, not external.
TKF: Right now, we can indirectly detect nomad planets with this microlensing technique, but we don’t know where they are other than somewhere between the Earth and the background star whose light they’re changing. Will we ever be able to actually pinpoint the location of these nomad planets? And what kind of technology would be required to do that?
MICROLENSING. Nomad planets have been discovered using a phenomenon known as microlensing. During this effect, a star appears to momentarily brighten as a nomad planet passes between it and Earth. This is because the gravitational distortion of space caused by the planet bends the starlight as it travels past it and toward Earth. Multiple images of the star can be created along a ring that circles the nomad planet, as viewed from Earth, and those images collectively make the star appear momentarily brighter. Albert Einstein predicted this effect would be impossible to observe, but microlensing is now a standard technique used to study low-mass stars and other dark objects in the Milky Way Galaxy. In this artistic rendering, the size of the Earth and the nomad planet are exaggerated to highlight the microlensing effect. (Credit: L. Strigari)
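The strength and duration of such an event depend on the mass of the lens through the angular Einstein radius, which sets the size of the ring described above. A standard expression from gravitational lensing theory, included here for reference only (it does not appear in the original article), is:

```latex
\theta_E = \sqrt{\frac{4 G M}{c^2} \,\frac{d_S - d_L}{d_L \, d_S}}
```

where $M$ is the mass of the lensing object and $d_L$, $d_S$ are the distances from Earth to the lens and to the background star. Because $\theta_E$ scales as $\sqrt{M}$, a Pluto-mass nomad produces a far smaller and briefer brightening than a stellar lens, which is part of why dedicated surveys are needed to catch these events.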
ROGER BLANDFORD: Using this microlensing technique, all we can probably say is there's a probability distribution of where the nomad planet would be found. Another technique – it's a bit of a long shot – is to look for infrared radiation, the heat, from a very nearby nomad planet. Using this data, you could determine its motion and other characteristics, which would give you additional clues to the distance. As I mentioned before, smaller nomad planets are expected to be closer than larger ones, in part because we think there are many more smaller nomad planets than larger ones. So if you are going to find a nomad planet by detecting its infrared radiation, it will likely be small and very close to our solar system. And in that case, I think you’d be able to get a much better estimate of the distance.
TKF: I’d like to switch gears a bit and talk about the speculation that some nomad planets might harbor life – a topic discussed briefly in the paper. It seems easy to imagine that these planets, unattached to a star and drifting aimlessly in deep space, are very cold and lifeless. But under the right conditions, they could be something very different. Dr. Sasselov, you direct a program at Harvard that studies what conditions are essential for life to develop. Why are nomad planets not necessarily cold and lifeless?
DIMITAR SASSELOV: It's good to separate the subject of life into two parts. First, we know that a water environment is essential for life, as we know it, to arise after a transition from chemistry to biochemistry. Secondly, we need to consider what kind of environmental conditions are necessary for that biochemistry, once it emerges, to survive over astronomically significant periods of time.
Planets the size of the Earth or larger provide very good potential habitats, because they have their own internal heat. But we should pause here and recognize that we still don’t know how life emerges. We don’t know, for example, whether ultraviolet radiation or other conditions that a nearby star provides could be critical to the emergence of life.
TKF: Of course, our thinking on this subject is limited because we have only one example of what life is.
DIMITAR SASSELOV: That’s right. Human science does not have a good definition, nor do we understand the true nature of life as a phenomenon. On the other hand, we understand organic chemistry fairly well, and also the essentials of biochemistry. We can therefore compare conditions in the galaxy to what it takes to have the biochemistry we know about. You find these necessary conditions on planets, in particular planets that range in mass from Mars to Uranus and Neptune. This is exactly what we are talking about today when we talk about nomad planets. These kinds of planets have their own internal heat, which enables complex chemistry. And it doesn’t matter whether they are associated with a star or not.
TKF: This internal heat – is it going to be created by some kind of radioactive decay?
DIMITAR SASSELOV: Yes, our own Earth is a good example of this. The majority of the heat that goes through the crust to the surface of the Earth is due to latent heat still preserved from its formation – and in no small part radioactive heat from radioactive elements, including uranium, thorium and potassium.
ROGER BLANDFORD: If one of these nomads was in interstellar space and had some moons associated with it, like Mars has moons associated with it, that could provide tidal heating – the squeezing and stretching of the surface of the planet. That would cause a certain amount of frictional heating and that could be an additional source of heat, which should last quite a while.
TKF: And which could help drive some of the tectonic action at the surface.
ROGER BLANDFORD: That's right. The other key to this – and there've been several rather speculative papers, but very interesting ones – is that small nomad planets could retain very dense, high-pressure “blankets” around them. These could conceivably include molecular hydrogen atmospheres or possibly surface ice that would trap a lot of heat. They might be able to keep water liquid, which would be conducive to creating or sustaining life. We already know that bacteria and other organisms can survive under the most inclement conditions at the bottom of the deep ocean, in vents around volcanoes, and even in the human gut – and that must be as miserable as anywhere and they do just fine. It's at least a reasonable question to ask: “How long could such extremophiles survive in conditions such as those around a suitably-configured wandering nomad planet?”
Artist's conception of a Jupiter-like nomad planet alone in the dark of space, floating freely without a parent star. (Credit: NASA/JPL-Caltech)
DIMITAR SASSELOV: That's a very good point, and I'd like to continue in that direction. If you imagine the Earth as it is today becoming a nomad planet – it gets expelled and then it's flung into interstellar space. Life on Earth is not going to cease. That we know. It's not even speculation at this point. People who study extremophiles, in particular in the deep biosphere in the crust of the Earth, already have identified a large number of microbes and even two types of nematodes that survive entirely on the heat that comes from inside the Earth. And because the internal heat of the Earth is going to continue at this level for at least another 5 billion years, this entire deep biosphere is going to be completely uninhibited by the Earth becoming a nomad planet.
TKF: That could make nomad planets carriers of life throughout the galaxy. We’re veering toward the subject of panspermia, the hypothesis that life exists throughout the universe and is spread by meteoroids, asteroids and other objects, aren’t we?
ROGER BLANDFORD: People have been talking about how life may have been seeded throughout the cosmos since Anaxagoras in the 5th century B.C. (the Greek Philosopher who first wrote about the concept of panspermia). In the 20th century, many eminent scientists have entertained the speculation that life propagated either in a directed, random or malicious way throughout the galaxy. One thing that I think modern astronomy might add to that is clear evidence that many galaxies collide and spray material out into intergalactic space. So life can propagate between galaxies too, in principle. And so it's a very old speculation, but it's a perfectly reasonable idea and one that is becoming more accessible to scientific investigation. We want to write a sort of “Hitchhiker’s Guide to the Galaxy.”
"In the 20th century, many eminent scientists have entertained the speculation that life propagated either in a directed, random or malicious way throughout the galaxy. One thing that I think modern astronomy might add to that is clear evidence that many galaxies collide and spray material out into intergalactic space. So life can propagate between galaxies too, in principle." - Roger Blandford
DIMITAR SASSELOV: Having larger bodies like these nomad planets allows microbes not simply to survive the journey but in fact to prosper and be robust.
TKF: Turning back to more immediate work on nomad planets, what are the next steps for better understanding them? More comprehensive surveys of the sky are needed, but NASA is facing huge budget cuts.
LOUIS STRIGARI: I think we have to use our imagination a little bit in terms of how we can do this. A space-based telescope survey like WFIRST is important, but several other types of space-based observatories that will happen in the next decade or so will have the capability to make these observations – without interfering with their primary missions. Of course, it would be outstanding if larger, more dedicated observatories such as WFIRST would really nail this topic.
ROGER BLANDFORD: Pioneers like Dimitar have been extraordinarily ingenious at using relatively inexpensive observatories to make great discoveries. They've used existing facilities and existing projects in new ways, and I think we’re going to see that this is a subject that attracts lots of young people. They don't know what's impossible, and therefore they make discoveries. I am optimistic on those grounds, but I'm also optimistic about some of the more directed space missions and ground-based facilities coming online over the next decade.
TKF: Most people think the biggest mysteries in astronomy today are dark matter and dark energy, which make up 96 percent of the universe but are a complete mystery. The possibility of a galaxy teeming with nomad planets, planets we never before thought existed, means there’s still a lot to learn about the remaining four percent of the universe - that is, the ordinary stuff that makes up stars, planets, moons and all the other so-called “baryonic” matter in the cosmos. What does each of you think are the biggest unanswered questions related to this four percent?
DIMITAR SASSELOV: The four percent holds the key to another of the big mysteries of human knowledge, which is life. What is the nature of life?
ROGER BLANDFORD: Just looking at the inanimate properties of this four percent, of baryonic matter, there are many great mysteries. For example, we can’t locate some of it but we know it is there. The dwarf galaxies seem to have rather carelessly lost a good fraction of it. This is one of the big observational challenges in modern cosmology.
LOUIS STRIGARI: We still don't know about the mix of dark matter and luminous matter in our galaxy. One of the questions that I tend to get asked about is, “Could these nomad planets make up all the dark matter?” In other words, could they be the source of both the mysterious dark matter and ordinary, baryonic matter that is “dark” simply because we haven’t detected it yet? Nomad planets would not be able to account for much of the unseen baryonic matter. And that’s the problem: we still don't have a very great inventory of ordinary matter – even on the scale of our own sun. That sort of highlights what we don't know.
ROGER BLANDFORD: Maybe I could put Dimitar on the spot. Ten years down the road, what do you think we’ll know about extrasolar planets both attached and detached from stars?
DIMITAR SASSELOV: I hope we can better understand how planets form in protoplanetary disks. That is still a big mystery, although the data is accumulating at a fast pace right now. Also, I think we are going to identify, in the vicinity of our solar system, extrasolar planets that are potentially habitable. We’ll learn as much as we can about their atmospheres, and we’ll search for geochemistry that may lead to a signature of life.
In the meantime we’ll figure out a way, in the infrared, to find nomad planets that are even closer – within a couple of light years. And that will be the next big breakthrough. I hope that this happens in the next ten years. But I'm not sure if we will be able to pull it off.
LOUIS STRIGARI: I'm really curious about the exchange of planets between solar systems, as we’ve talked about. How often does it happen, and how far can a nomad planet travel? How many trips around our galaxy does it make? I think these are brand new, basic questions. And I think that's an exciting place to be.
TKF: And then, as you said earlier Dr. Blandford, nomad planets could also be intergalactic travelers.
ROGER BLANDFORD: Yes, at very high velocities you can escape the galaxy. Just a stellar or black hole encounter within the galaxy can, in principle, give a planet the escape velocity it needs to be ejected from the galaxy. If you look at galaxies at large, collisions between them lead to a lot of material being cast out into intergalactic space.
TKF: Our own Milky Way is headed for a collision with the Andromeda galaxy, which I suspect will lead to a fair number of nomad planets.
ROGER BLANDFORD: Yes. But don't frighten people. It's not for many billions of years.
- May 2012
Living systems at all levels of organization demonstrate the complementary nature of structure and function. Important levels of organization for structure and function include cells, organs, tissues, organ systems, whole organisms, and ecosystems.
All organisms are composed of cells—the fundamental unit of life. Most organisms are single cells; other organisms, including humans, are multicellular.
Cells carry on the many functions needed to sustain life. They grow and divide, thereby producing more cells. This requires that they take in nutrients, which they use to provide energy for the work that cells do and to make the materials that a cell or an organism needs.
Specialized cells perform specialized functions in multicellular organisms. Groups of specialized cells cooperate to form a tissue, such as a muscle. Different tissues are in turn grouped together to form larger functional units, called organs. Each type of cell, tissue, and organ has a distinct structure and set of functions that serve the organism as a whole.
The human organism has systems for digestion, respiration, reproduction, circulation, excretion, movement, control, and coordination, and for protection from disease. These systems interact with one another.
Disease is a breakdown in structures or functions of an organism. Some diseases are the result of intrinsic failures of the system. Others are the result of damage caused by infection with other organisms.
Regulation and Behavior
All organisms must be able to obtain and use resources, grow, reproduce, and maintain stable internal conditions while living in a constantly changing external environment.
Regulation of an organism's internal environment involves sensing the internal environment and changing physiological activities to keep conditions within the range required to survive.
Behavior is one kind of response an organism can make to an internal or environmental stimulus. A behavioral response requires coordination and communication at many levels, including cells, organ systems, and whole organisms. Behavioral response is a set of actions determined in part by heredity and in part by experience.
An organism's behavior evolves through adaptation to its environment. How a species moves, obtains food, reproduces, and responds to danger are based in the species' evolutionary history. |
How does one measure the wind speed inside a tornado? Bernie Vonnegut looked back at an early state-of-the-art method, and wrote a report called “Chicken Plucking as Measure of Tornado Wind Speed” [published in "Weatherwise," October 1975, p. 217]. Vonnegut told the world why that early method, which involved a chicken carcass and a cannon, may have been imperfect. For this achievement, Bernie Vonnegut was posthumously awarded the 1997 Ig Nobel Prize for meteorology.
Vonnegut was a scientist — by all accounts a good one. A tribute written the year Bernie Vonnegut died says: “Bernard Vonnegut is best known, however, for his discovery on November 14, 1946 at the General Electric Research Laboratory of the effectiveness of silver iodide as ice-forming nuclei that has been widely used to seed clouds in efforts to augment rainfall.”
BONUS: Vonnegut also wrote about The Smell of Tornadoes.
Relationship Between Circulatory and Respiratory System
The first step of the respiratory system is breathing in oxygen through your nose. This passes through your nasal cavity, which filters out germs, viruses, and other particles with hair-like structures called cilia.
Here the oxygen and other components rush past the tongue, epiglottis, and pharynx. The pharynx is an important structure that aids in speech. The epiglottis keeps food from going down your trachea.
The larynx is the next place the air rushes past. It is vital for speech and is commonly known as your voice box, since it houses the vocal cords.
This long section is called the trachea. Cilia also line the inside of the trachea and serve the same purpose: to keep germs and viruses from getting through. Goblet cells inside the trachea produce mucus, which also catches germs. The outside of the trachea is ringed with cartilage, which reinforces it.
This separation is one bronchus, or bronchi for plural. It also has rings of cartilage for extra support. The first separations are called primary bronchi, the second are secondary, and the third are tertiary. The air goes through these two bronchi and further down into the bronchioles.
The bronchioles serve a similar function as the bronchi, but these do not have the rings of cartilage. They further open up into the alveoli.
The alveoli are the most important part of the respiratory system. They are the way oxygen gets from the air into your blood. Each alveolus is surrounded by capillaries which allow for diffusion. Oxygen diffuses into the capillaries and gets taken back to the heart. Carbon dioxide diffuses out and gets breathed out.
The whole process then repeats with every breath.
The right atrium receives de-oxygenated blood from the superior and inferior vena cava. It pumps it through the right AV valve to the right ventricle.
The right ventricle receives de-oxygenated blood from the right atrium and pumps it through the pulmonary semi-lunar valve into the left and right pulmonary arteries, which carry it to the lungs.
The left atrium receives freshly oxygenated blood from the lungs, via the left and right pulmonary veins. It then pumps it through the left AV valve and into the left ventricle.
The left ventricle receives oxygenated blood from the left atrium. It pumps it through the aortic semi-lunar valve and the aorta. Since the left ventricle must pump blood all around the body, it is the strongest of the four chambers.
The aortic semi-lunar valve prevents back flow of blood from the aorta into the left ventricle. Without this valve the heart would be much less efficient.
The right AV valve prevents back flow of blood from the right ventricle into the right atrium.
The pulmonary semi-lunar valve prevents back flow of blood from the pulmonary arteries into the right ventricle.
The left AV valve prevents back flow of blood from the left ventricle into the left atrium.
The superior vena cava carries deoxygenated blood from the upper half of the body to the right atrium.
The inferior vena cava brings deoxygenated blood from the lower half of the body to the right atrium.
The left and right pulmonary arteries bring deoxygenated blood from the right ventricle to the lungs.
The pulmonary veins receive oxygenated blood from the lungs and carry it to the left atrium.
The aorta is the largest blood vessel in the human body. It takes oxygenated blood from the left ventricle and directs it around the body.
The SA node (sinoatrial node) is a group of cells that initiate the normal sinus rhythm of the heart.
The AV node (atrioventricular node) conducts electrical signals from the atria to the ventricles, relaying the rhythm set at the top of the heart.
Purkinje fibers allow the heart's conduction system to create synchronized contractions of its ventricles. This makes sure the heart is at a healthy rhythm.
Systole = the phase of the heartbeat in which the heart muscle contracts and blood is pumped from the ventricles into the arteries.
Diastole = the phase of the heartbeat in which the heart muscle relaxes and the chambers fill with blood.
The circulatory and respiratory systems are connected at the lungs. The circulatory system needs oxygen from the lungs to carry around the body, and the respiratory system gets that oxygen from the air into your bloodstream. Without either of these systems, the other would have no purpose. What's the point of oxygen if it can't get to your muscles and tissues? What's the point of blood without oxygen to drop off?
Teacher-author: This lesson was created by Dave Neumann, Long Beach Unified School District.
This lesson deals with the educational opportunities available to females in the colonial period. It includes a background essay which discusses the nature of colonial education, a narrative about Elizabeth Murray's efforts to educate her nieces, and documents that relate to the theme of female education. The only material necessary is the website.
3. Historical background and Bibliography:
In the colonial period, economic opportunity was based heavily on one's education. Education often took the form of practical apprenticeships, but throughout the eighteenth century increasing numbers of individuals began receiving formal schooling, especially in New England. Throughout the colonial period, education was generally divided by gender. By the time of the Revolution, few public schools were available to girls. Middling families could enroll girls in a "dame school," though this might offer little more than babysitting. Beyond that, adventure schools stressed skills that were expected to make young women marriageable, not practical skills that might help them manage a household. The American Revolution began to change the public's ideas about women and education. By the mid-1800s, large numbers of women were being educated and some were even attending college.
Linda Kerber, Women of the Republic: Intellect and Ideology in Revolutionary America (Chapel Hill, 1980), 36-39; Kenneth A. Lockridge, Literacy in Colonial New England: An Enquiry into the Social Context of Literacy in the Early Modern West (New York, 1974); Mary Beth Norton, Liberty's Daughters: The Revolutionary Experience of American Women, 1750-1800 (Ithaca, 1980).
4. Guiding question:
- What role did Elizabeth Murray play in the education of her nieces?
- Why was education so important to Murray?
- What kind of education did she provide for her nieces?
- What resources enabled Murray to provide this education?
- What does this activity reveal about opportunities for female education and the nature of female education in the colonial period?
5. Learning objectives:
- Students will analyze the experiences of everyday people within the context of the era in which they lived.
- Students will analyze primary documents, using them to understand life in the eighteenth century.
Input and Guided Practice
The overview essay is the starting point of this lesson. Teachers may assign students to read the essay before class, in class, or may choose to lecture from it. Please note that the teacher version has additional detail and commentary.
Once students are familiar with the background of female education, they are ready to examine the documents. Several documents provide broad context regarding education, especially the contrasts between male and female education. The remaining documents reveal the particular experiences of Murray's nieces as Murray made special efforts to give them a "practical" education. This activity can be done in groups, pairs, or individually. It might be a good idea, however, to begin by modeling or discussing the first document together. Students can use the suggestions for interpreting documents as a guideline. After examining the documents, students should be able to discuss their responses to the questions regarding the education Murray sought to provide her nieces and the light her efforts shed on the nature of education for females more generally in the colonial period. Students may need some assistance in recognizing that Murray's support for her nieces was relatively unusual and reflects the importance wealth played in the educational options available to different colonists. Students should end this lesson with a clear sense of the possibilities, and limits, of women's education, how those limits were justified, and how the rhetoric of the American Revolution began to shift the terms of discussion and open more opportunities to women.
The lesson could conclude by sharing out verbally, with brief quickwrites, or with longer and more formally structured essays.
During the months of November and December we have been talking about plastics in the Biology and English lessons. These are some of the topics we have talked about:
- How are plastics produced?
- What resources are necessary to produce plastic?
- Different types of plastic.
- How long does it take for plastic to decompose?
- What are the effects of the use of plastic on the environment?
- The Great Garbage Patch.
- The life of a plastic bottle.
- How to reduce the use of plastic. Alternatives.
- Many important Rs: Reject, Reduce, Reuse, Repair, Recycle, Rot, Respect...
- How to use the yellow, blue and green recycling bins.
- What can we all do at school, at home, in shops...?
If you want to know more, visit our Plastic Free A Guarda blog. |
Playing to Learn
Kathryn L. Stout, B.S.Ed., M.Ed.
Published: April 1998
Long before official "school" begins, children's play can provide opportunities for development in motor skills, language skills, and reasoning. Here are just a few ideas that will contribute to a strong foundation for later learning:
Give your children plenty of opportunity to build muscular strength and coordination. Teach them proper form for movements: roll, crawl, walk, run, jump, hop, gallop, and dance. Let them practice throwing small balls and bean bags (one hand) as well as large balls (using both hands) and catching large balls. They can throw balls into clothes baskets, beginning up close and gradually increasing the distance between themselves and the target. You can make a simple obstacle course in the kitchen, having them move around chairs, and under the table. Don’t add the pressure of timing them or turning any of these activities into a competition.
Balance and an awareness of their own bodies can be aided by having them walk on a line (which can be made easily with chalk or tape) and jump over a line. Also have them walk on tiptoe, stand on one foot, and move like animals: an elephant, duck, seal, rabbit, and so on. Play music and teach them to move to the rhythm, imitate a rhythm by clapping or tapping on a drum, and learn simple movements to accompany songs (like the hand movements to "Itsy Bitsy Spider," or body movements to "I'm a Little Teapot"). Participating in a song with cymbals, triangles, wooden blocks, and/or bells is also helpful and fun at this age.
Strengthen smaller muscles and improve coordination. Provide opportunities to paint, draw, and trace using large, easily held brushes, crayons, markers, and pencils. Be sure to show them the proper grip and have them draw from the left side of the paper to the right side, or from the top to the bottom. This will smooth the transition to handwriting later. Other activities to encourage development of fine motor skills include cutting, pasting, placing pegs in pegboards, stringing beads, sewing (large plastic needles with prepunched picture cards are used at this age) and rolling clay to make simple coil bowls. Allow them to help in the kitchen: mixing cookie dough, rolling it out with a rolling pin, or rolling small bits of dough into balls. Provide small pitchers and containers that can be used in the tub or pool to practice pouring. Sandbox toys should include sifters and various-sized scoops and containers for pouring sand.
Practicing identifying sounds builds skill in the auditory discrimination necessary for reading. Have the children close their eyes as you blow a whistle, stamp your feet, or ring a bell, and let them tell you what they hear. Throughout the day or while taking a walk, stop and listen. Let them identify the sounds made by birds, lawnmowers, passing trucks, barking dogs, water boiling, a phone ringing, a door opening or closing, and so on.
Enhance visual memory skills with simple games. Begin with two very different objects. Have the children look for several seconds, then cover the objects and ask them to tell you what they just saw. Gradually increase the number of objects. Gradually use objects that are more similar in appearance. Eventually, pictures of objects can be used. Once they have succeeded in remembering a few items, encourage them to also remember the location. Have them look at the row of objects from left to right. Begin by pointing to the first in the row and saying "First is ____, then there's ______" and later use the words first, second, third, and so on, as you point.
Read or tell a simple story with an obvious sequence of events, such as "The Three Pigs" and have them act it out. Act as a narrator in order to remind them of what to do and when, until they are able to repeat the sequence without help. For variety, let them use costumes or puppets. If they have a favorite story, occasionally pause and ask "What happens next?" Recall is important for later reading comprehension.
Provide opportunities to play with simple puzzles. At ages 2 1/2 and 3 they place one piece into the hole with the same shape. Eventually they progress to pictures made with five or six pieces that fit together. Let them copy patterns (moving left to right in a row) using colored pegs on a pegboard, beads to string, or even simple household objects. A three-year-old can be given three, four, and five objects to match to items you have put in a row. For example, line up a spoon, a can of beans, and a napkin. To one side have a group of five items which include a matching spoon, can, and napkin. The child would select each matching item, setting it in front of its mate in the row that you made.
Suit the length of time involved in the activity to the attention span of the child. It only takes a few seconds to have them tip toe, and a minute or two to match a row of objects. They can identify a few sounds while riding in the car, and hop a few steps on the way to the front door. You don't need formal lesson plans or a separate teaching time, just the habit of encouraging children to do activities that are fun, but beneficial, here and there throughout the day. |
Learning Shapes With Toy Cars. Create your own shaped road tracks using black sandpaper. Teach your toddler or preschooler their shapes with a triangle, square, circle, rectangle, pentagon and octagon road track.
Writing centre. Samples of what students can work on include:
- Letter writing
- List making
- Story writing
- Starting a Topic book (a book all about a topic of the student's choice)
- Comic strips
- Card making
All materials are available for students at one location.
An excerpt from the book in its original language
Talking about feelings is one of the cornerstones of emotional literacy. How do we teach children the meanings of words about feelings? Are we conscious of the need to build their emotional vocabularies as we attempt to develop their ‘emotional intelligence’?
At times we all struggle to find the right words to talk about what we are feeling. Often it is the simplest images, symbols and metaphors that are the most powerful. A major benefit of using imagery to describe feelings is that literacy barriers (either because the children are too young or because they have learning difficulties) don’t have any impact on their ability to communicate how they feel.
The Bears are a wonderful tool for helping children communicate how they feel. This card pack contains 48 high quality cards, each depicting a colourful, loveable cartoon bear which could be happy, sad, confident, afraid, shy, energetic, tired, noisy, caring, grumpy - you decide! There are lots of different uses (many are described in the booklet included in the card pack, and you are also encouraged to be creative and come up with your own).
Humans might be plagued by wisdom teeth problems today because our ancestors shifted from a hunter-gatherer lifestyle to a soft modern diet, new research finds.
Scientists are increasingly analyzing how culture interacts with our biology. One key cultural development in human history was the move away from hunting-gathering toward farming, a dietary change that physical anthropologist Noreen von Cramon-Taubadel at the University of Kent in England reasoned might have influenced the anatomy of our faces and jaws.
To find out more, von Cramon-Taubadel investigated museum specimens of skulls from 11 human populations drawn from across the world. Five of these groups primarily had lifestyles based on hunting, gathering or fishing, such as the San Bushmen of Africa or the Inuit of Alaska and Greenland, while the other six relied on agriculture.
The jawbone differences von Cramon-Taubadel saw between populations depended on diet. Overall, people who lived a hunter-gatherer lifestyle had longer, narrower jawbones. This might be due to how people in agricultural societies more often eat softer foods such as starches and cooked items, while hunter-gatherers on average eat more foods that are raw and unprocessed. The amount of exercise that jaws experience from their lifestyles affects how they grow and develop — longer jaws may do better on diets that contain harder items.
"This research shows the interaction between what is fundamentally a cultural behavior, farming, and its effects on our anatomy," von Cramon-Taubadel told LiveScience.
This change might explain why there are such high levels of teeth crowding and misalignment in many post-industrial human populations. Since the jaws of people in modern societies are now shorter, they "are not big enough to accommodate the size of our teeth," von Cramon-Taubadel said.
The result could be crowded, painful wisdom teeth.
Von Cramon-Taubadel detailed her findings online Nov. 21 in the Proceedings of the National Academy of Sciences.
What is a poacher? Historically, a poacher was from a low socio-economic class in Europe, where wealthy landowners were considered to own the animals that roamed their land. When a hungry peasant would occasionally kill an animal for food, that peasant was called a poacher. Poaching was a serious crime against a landowner.
The word "poach" comes from the Middle English word "pocchen," which literally means to enclose in a pouch, or to "bag" something. The idea is to hide what one has taken.
In this country during the Great Depression when game laws were not yet widely respected, many people poached because they felt the laws were unjust. In those days, a game warden might occasionally look the other way when he knew a family was hungry and had no other alternative.
Many hunters today are sympathetic with that motivation, but with so many government food programs and a variety of agencies that provide food, including venison donation programs, hunger is no excuse for poaching.
But poaching continues because poachers have a variety of other motivations. Some poachers kill for pride. Some poachers kill for certain body parts that have a value on the black market. Some do it because they disagree with hunting regulations. Some make poaching a game of outsmarting game wardens. Some poach purely for the pleasure of killing.
In the late 20th century environmental scientists began applying the word "poach" to the illegal harvest of plant species, so even the innocent picking of wildflowers could in some cases be considered poaching. When the definition is broadened, its application to game animals is weakened.
That may be why few people accept such a broad application of the word, and most still connect poaching primarily with game animals. Some mistakenly equate hunters with poachers, but poachers are not hunters. Here's why:
1. Poachers don't abide by laws that govern hunters. Hunting and conservation laws have a long and strong history. Hunters during the early 20th century created a wildlife conservation system that has no room for the idea of poaching. The system of enforcing game laws is respected by hunters, but not by poachers.
2. Poachers aren't self-limiting as hunters are. Hunters have limits, and they want limits. When a hunter attaches a tag to an animal, he is well aware of the limit and he accepts it. Having made a successful harvest, he recognizes that attempting to use that tag again would be a selfish, illicit act. The poacher doesn't care that he's selfish.
3. The methods of poachers are unacceptable to hunters. Most hunting regulations are created at the state level, so state game agencies stipulate what methods of harvest are legal. Hunters accept those regulations and methods. Poachers do not. Poachers use weapons that are not legal for hunting, think nothing of taking animals outside the legal dates or hours stipulated for harvesting a species, and take animals that are illegal to hunt, even threatened and endangered species for which there is no open season.
4. Poachers steal from hunters and from the population at large. In North America, wildlife is not owned by those who own the land it lives on. Nor is it owned by those licensed to hunt it. Until it is killed, it's owned by the people at large, and to kill an animal illegally is to steal from them. Properly licensed hunters are not stealing when they use the methods and weapons sanctioned for hunting, and hunt within the stipulated seasons and times.
5. As thieves, poachers operate in a covert way. This relates to the origin of the word "poach," to hide in a pouch. The actions of poachers don't bear the scrutiny of public view, so poachers must hide their kills and manipulate the facts and circumstances when they take an animal to a butcher or a taxidermist, or display it on the wall. No hunter needs to hide his kill, and hunters can be honest about the facts of the kill.
Hunters are a healthy and necessary part of wildlife conservation. Poachers are destructive to it. Poaching is not hunting, and poachers are not hunters any more than bank robbers are a bank's customers.
When the "Everyday Hunter" isn't hunting, he's thinking about hunting, talking about hunting, dreaming about hunting, writing about hunting, or wishing he were hunting. If you want to tell him exactly where your favorite hunting spot is, contact him at [email protected]. This column and others can be accessed online at www.EverydayHunter.com. |
Seabirds drink only sea water, but never get sick – this is perhaps one of the most special and most interesting facts related to the life of marine birds and certainly one worth taking a closer look at, along with a number of other phenomena related to the extraordinary survival strategies adopted by seabirds.
Why Seabirds Never Get Sick Because of the Salt in the Water
Marine birds live close to the sea and live on the plants and animals they take from the water, yet they never suffer from any illnesses related to salinity or other features of the water. The reason is that their bodies have adapted wonderfully to salt water environments and they have developed a special organ called the salt gland, which evacuates the salt that they don’t need. The salt gland is usually found at the root of the bird’s beak and it works by concentrating the salt from the bird’s blood into the sinus area, from where the bird eliminates the extra salt by simply sneezing it out.
How Come Seabirds Are Never Cold?
Seabirds don’t get sick because of the cold, but this alone would not mean they are not cold, right? Actually, most probably, they aren’t! Their thermoregulatory system is in fact so good that they are able to use even the weakest warm current to their advantage. Seabirds, especially the ones that spend a long time swimming in water and exposed to extreme cold, lose a lot of body heat, but they are also able to compensate for the loss in areas where the water is warmer. They also employ a mechanism called counter current exchange to maintain their body heat. The cold water prompts the bird’s blood stream to recycle body heat – the warm blood from the heart practically warms up the colder blood coming from the feet, thus reducing heat loss to the minimum. This heat exchange mechanism is often used by birds when they encounter extremely hot conditions, but in those cases the process is reversed to cool them down.
Why Seabirds Never Get Sick because of Cold Water
Seabirds never catch a cold, not even in icy water. Being extremely adaptive species, marine birds have developed physical features and behavior patterns that keep them from freezing. Their feathers provide almost perfect insulation and they also do something called fluffing – they fluff their feathers to capture air between their plumage and their body, thus adding one more layer of natural insulation. Some bird species are also known to huddle together and keep each other warm, gaining protection against extremely low temperatures.
Why Seabirds Never Get Sick because of Getting Wet
The insulation that protects the birds against the cold also protects them from getting wet to the skin. They also have an extra gland called the preening gland, which secretes several different types of fats and oils that spread to the ends of the feathers, waterproofing them. Most seabirds also have special feathers that break down into a keratin powder distributed along the rest of the feathers, making them watertight.
Freedom and Happiness
We talked in this session about morality and what is demanded of us in the way we live our lives. Here is the worksheet that we filled out with the answers included:
1. The conscience is the part of the human heart that hears the voice of God, and which tells us how to live. What two fundamental laws does the conscience direct a person to obey?
2. If a person always practices doing good and avoiding evil, it becomes a habit. When this habit is formed, it is easier to fight off temptations to go against your conscience.
What is this called when it is easy to do good and avoid evil because a person has a habit of always doing the right thing?
3. If a person regularly does evil and fails to do good, it becomes a habit. It gets harder and harder to fight temptation to sin. It even becomes hard to know what is right, because a person is so used to ignoring their conscience.
What is this called when a person gets accustomed to habitually doing what is morally wrong?
4. There are four “Cardinal Virtues” that help a person to obey their conscience no matter what, and they all serve a different purpose in the understanding and practice of living a good life. Name the virtue that helps with the following:
- The ability to tell the right thing to do in a particular circumstance: Prudence
- The habit of giving each person his due (God, others, self): Justice
- The courage and strength to do the right thing even when it is difficult: Fortitude
- The self-control to avoid doing evil even when it is tempting: Temperance
5. List the “Seven Deadly Sins”:
Pride, Envy, Wrath, Sloth, Greed, Gluttony, Lust
- Click here for the homework reading for next time. |
What is the spelling of 20?
The pattern for 20 to 99: 20 = twenty, 30 = thirty, ..., 90 = ninety.
How do you write 2000 in a letter?
How to write out the number 2,000 in words, in (US) American English, using different letter cases:
- lowercase: two thousand
- UPPERCASE: TWO THOUSAND
- Title Case: Two Thousand
- Sentence case: Two thousand
What do 2000 words look like?
2,000 words is 4 pages single spaced, 8 pages double spaced. 2,500 words is 5 pages single spaced, 10 pages double spaced. 3,000 words is 6 pages single spaced, 12 pages double spaced. 4,000 words is 8 pages single spaced, 16 pages double spaced.
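The rule of thumb behind these figures can be sketched in a few lines of Python (the 500-words-per-single-spaced-page ratio is a common convention rather than an exact standard, and the function name here is my own):

```python
# Minimal sketch: estimate page count from a word count, using the common
# rule of thumb of ~500 words per single-spaced page (~250 double-spaced).
def pages(words: int, double_spaced: bool = False) -> float:
    per_page = 250 if double_spaced else 500
    return words / per_page

print(pages(2000))                      # 4.0
print(pages(2000, double_spaced=True))  # 8.0
```

Fonts, margins, and paragraph breaks all shift the real number, so treat the result as an estimate only.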
How do you write 1000000 in words?
One million (1,000,000), or one thousand thousand, is the natural number following 999,999 and preceding 1,000,001. The word is derived from the early Italian millione (milione in modern Italian), from mille, “thousand”, plus the augmentative suffix -one.
How do you write amounts?
You can write the amount in words by writing the number of whole dollars first, followed by the word ‘dollars’. Instead of the decimal point, you will write the word ‘and,’ followed by the number of cents, and the word ‘cents’. If you want, you can write out the numbers using words too.
How do you write 5 cents?
To do this, we need to think about 5 out of 100. We can say that 5 cents is 5 hundredths of a dollar, since there are 100 pennies in one dollar. Let’s write 5 cents as a decimal using place value: $0.05. The five is in the hundredths place because five cents is five one-hundredths of a dollar.
How do you write 2200 in English?
2,200 in English: two thousand two hundred.
How do you write amounts in English?
The Chicago Manual of Style recommends spelling out the numbers zero through one hundred and using figures thereafter—except for whole numbers used in combination with hundred, thousand, hundred thousand, million, billion, and beyond (e.g., two hundred; twenty-eight thousand; three hundred thousand; one million).
How do you write 1000 in English?
1000 or one thousand is the natural number following 999 and preceding 1001. In most English-speaking countries, it is often written with a comma separating the thousands unit: 1,000. |
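As a rough illustration of the spelling rules above, here is a minimal Python sketch that spells out whole numbers up to 999,999 (the function and table names are invented for this example, and it ignores style details such as the optional "and" in British usage):

```python
# Minimal sketch: spell out whole numbers from 0 to 999,999 in English words.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def spell(n: int) -> str:
    if n < 20:                       # 0-19 are irregular, so use a lookup table
        return ONES[n]
    if n < 100:                      # twenty-one ... ninety-nine
        tens, rest = divmod(n, 10)
        return TENS[tens] + ("-" + ONES[rest] if rest else "")
    if n < 1000:                     # e.g. "two hundred", "two hundred five"
        hundreds, rest = divmod(n, 100)
        return ONES[hundreds] + " hundred" + (" " + spell(rest) if rest else "")
    thousands, rest = divmod(n, 1000)
    return spell(thousands) + " thousand" + (" " + spell(rest) if rest else "")

print(spell(2000))   # two thousand
print(spell(2200))   # two thousand two hundred
```

Extending the same pattern with "million" and "billion" groups handles larger numbers in exactly the same way.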
Iron ores such as haematite contain iron(III) oxide, Fe2O3. An ore is a rock containing enough metal to make it economical to extract, and the reactivity of a metal determines which method should be used to extract it: very unreactive metals such as gold are found directly in the ground as elements, while iron sits below carbon in the reactivity series, so carbon can be used to reduce it. For iron, the oxide ores are usually chosen, since they are abundant and do not produce polluting gases such as the SO2 given off when iron pyrites are used. Like most metal ores, iron ores are not pure compounds; they are mixed with sand, rock and silica.

The extraction of iron is carried out in stages, often at different plant sites. First the ore is concentrated: it is crushed in crushers and broken into small pieces, and impurities such as soil are removed. The ore is then calcined, that is, heated in the absence of air. Finally it is reduced in a blast furnace, a gigantic steel stack. The concentrated ore, limestone and coke (used as a source of carbon) are fed in from the top. Iron formed moves down and melts in the fusion zone; the molten iron dissolves some carbon, silicon and phosphorus and forms the lower layer at the base of the furnace, while the waste material, slag, separates out above it.

If all the carbon is removed from the iron to give high-purity iron, the product is known as wrought iron, which is quite soft and easily worked. Alloys are mixtures of different metals; alloys of iron are used to make cars, nails, bridges and much more. In India, major iron ore mining is done in Goa, Madhya Pradesh, Bihar, Karnataka, Orissa and Maharashtra, and for aluminium the chief ore, bauxite, is available in abundance.

The extraction of iron from iron ore was a major technological achievement that allowed the expansion of the iron trade and ultimately helped lead to the industrial revolution. Redox reactions are involved in the extraction of metals from their ores, for example extracting iron by reduction within the blast furnace. Reduction with carbon does not work for all metals, so other extraction methods are used for metals higher in the reactivity series.
Extraction of iron lesson plan template and teaching resources. Corus the chemistry of steelmaking notes for teachers age 1416 using the resource 1 extracting iron from iron ore the extraction of iron from its ore and its conversion into steel can be taught andor consolidated with the help of this electronic resource. The quizworksheet combo aids in checking your knowledge of the extraction of aluminum, copper, zinc, and iron. Pig iron contains about 4% of carbon and many other impurities such. Extraction of metals occurrence ores of some metals are very common iron, aluminium others occur only in limited quantities in selected areas ores need to be purified before being reduced to the metal this adds to the expense high grade ores are therefore cheaper to process. This resource will help teachers to test, and challenge students to improve their understanding of the extraction of iron from the blast furnace and the various reactions taking place in the furnace. Extraction based on revised version by women in mining. It is concentrated with gravity separation process in which it is washed with water to remove clay, sand, etc. Where metals are extracted from and the process of oxidation are among the topics on. Visit a county like devon, with its red soils, and youll see iron ore iron oxide right there. The diagram shows the blast furnace for the extraction of iron from its ore, taken as haematite.
Objectives general principles and gener processes of. Iron iron ore reserves in the country are estimated at 1750 crore tonnes. Transition metals have high melting points and densities, form. In some places, the mineral deposits in rock contain a high concentration of one metal. Certain metals, such as iron, can be only be reduced using carbon if they are heated to very high temperatures. Gcse notes on metal extraction aqa ccea edexcel ocr 21stc ocr gateway wjec email comment. Place the metals lead, calcium, copper, zinc and aluminium into the reactivity series below. Documents demonstrate extraction of iron from iron ore using the blast furnace and the key word equations for the different reactions. But alloys of iron such as steel are highly useful.1552 74 1267 1052 1137 45 751 1152 1382 1631 1645 1482 310 175 1599 1532 942 897 707 1396 108 88 419 135 826 872 673 448 1328 63 1368 118 |
Omega-3 fatty acids
What exactly are omega-3 fatty acids and why are they so healthy?
Omega-3 fatty acids belong to the group of polyunsaturated fatty acids. One of these omega-3 fatty acids, the so-called ALA (alpha-linolenic acid), is an essential fatty acid, which means that we cannot produce it from other nutrients but need to obtain it from our food. The other, better-known omega-3 fatty acids are EPA (eicosapentaenoic acid) and DHA (docosahexaenoic acid), which we can produce ourselves from ALA. However, this conversion is only partially sufficient, and food is the preferred source of EPA and DHA.
Fatty fish are rich sources of omega-3 fatty acids
Omega-3 fatty acids play an important role in many processes of our body such as the development and maintenance of a healthy brain and the development of the eyes of unborn children. In adults, it has become increasingly clear that omega-3 fatty acids also play an important role in maintaining healthy cholesterol levels which is important to keep our blood vessels healthy.
You may also have heard about omega-6 fatty acids, of which linoleic acid is the best known. For our health, it is important that the ratio of omega-6 to omega-3 fatty acids is between 3 to 1 and 5 to 1. However, our modern Western diet contains far too many omega-6 fatty acids and too few omega-3 fatty acids, which results in an unhealthy ratio between 10 to 1 and 30 to 1 and can lead to a variety of health problems such as painful joints, cardiovascular diseases and even cancer. It is therefore wise to regularly consume foods that are rich in omega-3.
You probably know that various types of fish contain relatively large amounts of omega-3 fatty acids. These fish are usually called ‘fatty fish’, and the most well-known species are herring, mackerel, anchovies, salmon and trout. These fish do not produce the fatty acids themselves but obtain them from their feed, especially algae. Farmed fish contain fewer omega-3 fatty acids than wild fish, since their feed contains fewer omega-3 fatty acids and more omega-6 fatty acids.
Omega-3 fatty acids are also present in other types of food such as various types of nuts, meat and dairy from grass-fed cattle and olive oil.
Other sources of omega-3 fatty acids
On the internet, you can discover other sources of omega-3 fatty acids as well. The reality, however, is that most of us do not get sufficient omega-3 fatty acids from our regular diets, especially because few of us eat fatty fish twice a week, as is preferable. It is therefore wise to consider taking supplements that contain a balanced combination of EPA and DHA.
Light travels as waves of energy. Waves of light have different wavelengths (the distance between the top of one wave and the top of the next). Different colors of light have different wavelengths. Purple and blue light waves have short wavelengths. Red light has a longer wavelength. This picture shows the lengths of the waves of different colors of light. The longest red waves are about 700 nanometers long. The shortest purple waves are 400 nanometers long. Light waves with short wavelengths carry more energy than ones with long wavelengths. "Light" waves shorter than 400 nm are called "ultraviolet" or "UV" light. "Light" waves longer than 700 nm are called "infrared" or "IR" light. Some people use a distance unit called an Ångström to measure light waves. There are 10 Ångströms in one nanometer. Green light has a wavelength of 5,500 Ångströms, which is the same as 550 nanometers.
Original Windows to the Universe artwork by Randy Russell. |
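The unit conversion and wavelength bands described above are simple enough to put in a few lines of code. This is a minimal sketch; the 400 nm and 700 nm band edges are the round figures used in the text:

```python
# Classify a wavelength (in nanometers) the way the article does, and
# convert it to Ångströms (10 Å = 1 nm). Below 400 nm is ultraviolet,
# 400-700 nm is visible light, above 700 nm is infrared.

def to_angstroms(nanometers):
    return nanometers * 10

def classify(nanometers):
    if nanometers < 400:
        return "ultraviolet"
    elif nanometers <= 700:
        return "visible"
    return "infrared"

print(classify(550), to_angstroms(550))  # green light: visible, 5500 Å
```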
Designed to accompany the Grammar 4 Student Book, the Grammar 4 Teacher’s Book provides detailed lesson plans for teaching spelling and grammar to children in the 5th year of Jolly Phonics. Children are able to work through the Student Book and complete a variety of activities which develop key spelling and grammar skills. The teacher is able to support and guide the children through the book with the lesson plans provided in the Grammar 4 Teacher’s Book.
Grammar 4 Teacher’s Book follows on from Grammar 3 Teacher’s Book, providing daily structured lesson plans, which correspond to the daily activities in the Student Book. The book contains 36 spelling and 36 grammar lessons covering the following areas: new spelling patterns; syllables; collective nouns; possessive pronouns; contractions; present participle; conjugating verbs; irregular plurals; prefixes and suffixes; nouns acting as adjectives; simple and continuous verb tenses; punctuation; dictionary work; writing in paragraphs and further sentence development. |
To choose the appropriate regulator for specific applications, parameters such as the input voltage, required output voltage, maximum load current, size, efficiency and power rating need to be considered to maximize regulator characteristics.
LDO (low-dropout) regulator designs are simple and cost-effective, and are used to produce a regulated output voltage from a higher input voltage. An LDO has a very low voltage drop across it while regulating, which allows it to keep regulating even when the difference between the input and output voltages is very small. LDOs lower the voltage by turning the surplus power into heat, and provide a steady, low-noise (no switching takes place) DC output voltage.
Since the power not delivered to the load is lost as heat, an LDO is inefficient when Vin is much greater than Vout. An alternative, the switching converter, stores energy in an inductor’s magnetic field and releases it to the load at a different voltage, and hence is highly efficient. A buck, or step-down, converter provides a lower output voltage. Most systems with small power requirements use both buck converters and LDOs to achieve cost and efficiency objectives.
The LDO efficiency is given by

    η = (Vout × Iout) / (Vin × (Iout + Iq)) ≈ Vout / Vin (for Iq ≪ Iout)
where Iq is the quiescent current as indicated in the LDO’s specifications. For Vin = 5 V and Vout = 3 V, the efficiency is less than 60%: for example, to deliver 3 W to the load, the input power must be over 5 W, with roughly 2 W lost as heat. LDOs are less efficient than buck converters, but for small power requirements, or when there is only a small difference between Vin and Vout (some tenths of a volt), they are the better option, as they are cheaper, smaller, and simpler to implement with fewer components. A buck regulator loses some power to its switching components and their impedances, but efficiency is still high: 80% is easily achievable, and it can be as high as 95%. A buck regulator is preferred if a lot of power is needed or if Vin is much larger than Vout (e.g., stepping 10 V down to 1.5 V), since thermal performance is then not an issue in providing the much lower output. A buck converter circuit also tends to be more complex than an LDO.
As for electrical noise, LDOs can reach a lower noise floor than buck converters. This is because buck converters (and other switching power supplies) use inductors that have the tendency to produce significant noise. Other things to consider in choosing a regulator are the size, Vin and Vout range, frequency, and PWM/PFM modes. In any case, the regulator should be capable of providing enough current to the load accurately.
Buck converters and LDOs have their own upsides and downsides. If efficiency is not your priority, heat is not a concern, the current necessary is very small, or Vin is only slightly higher than Vout, an LDO can be used. But, if efficiency and performance are your utmost concern even if it is more complex and likely to be more expensive, then a buck converter is the ideal choice. |
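As a rough illustration of the trade-off, the sketch below plugs the article's example numbers (a 5 V input and a 3 W load) into the standard LDO efficiency expression, stated in the comments, and compares the result against an assumed 90% buck efficiency, a mid-range value from the 80–95% span mentioned above:

```python
# Illustrative comparison of regulator losses (example numbers, not a datasheet).
# LDO: efficiency = Vout*Iout / (Vin*(Iout + Iq)); the rest becomes heat.
# Buck: modeled here with a flat 90% efficiency for simplicity.

def ldo_efficiency(vin, vout, iout, iq=0.0):
    return (vout * iout) / (vin * (iout + iq))

def ldo_heat(vin, vout, iout, iq=0.0):
    return vin * (iout + iq) - vout * iout

vin, vout, iout = 5.0, 3.0, 1.0            # 3 W load from a 5 V rail
eff = ldo_efficiency(vin, vout, iout)      # 0.60 -> 60 %
heat = ldo_heat(vin, vout, iout)           # 2.0 W dissipated in the LDO

buck_eff = 0.90                            # assumed buck efficiency
buck_input_power = vout * iout / buck_eff  # ~3.33 W drawn instead of 5 W
```

For this 5 V → 3 V case the buck draws roughly a third less input power; shrink the Vin–Vout gap and the LDO's simplicity starts to win.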
If the title seems confusing or doesn’t make sense to you, allow me to elaborate.
You might already know that commercial planes need to maintain a minimum distance from other commercial planes during flight. That’s why the ATC (Air Traffic Control) exists. One of the primary and most critical tasks of the ATC workers is to ensure that every plane taking off, landing or even passing through their airport doesn’t fly too close or get in the way of other planes flying in the same airspace.
Because if that happens, it could potentially lead to a catastrophe of epic proportions. That’s why commercial airplanes maintain separation from other planes, i.e., they keep a safe distance away from other planes in their vicinity.
However, military aircraft are nothing like that. They often fly very close to each other in tight formations on purpose. They don’t maintain a mile’s worth of clearance from other military aircraft in the formation. At first glance, it doesn’t seem to make sense.
Why do large, commercial airplanes require vertical and horizontal separation, but military aircraft don’t?
In the field of flying and aeronautics, the term ‘separation’ is used to refer to the concept of keeping a plane at least a minimum distance from another aircraft. The basic reason behind this is to keep planes from running into each other and avoid accidents due to secondary factors (e.g., wake turbulence).
In the US, normal aircraft separation guidelines under IFR are defined in an FAA order (you can read the whole document here).
Two planes need to either maintain at least 1000 feet (305 meters) of vertical separation or 3 miles of lateral separation. These values increase if the planes in question are larger.
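Expressed as a quick sketch, the either/or nature of this rule looks like the following. This is illustrative only; real minima vary with airspace class, radar environment, and aircraft weight class:

```python
# Toy check of the separation rule quoted above: two aircraft are separated
# if they have at least 1,000 ft vertically OR 3 nautical miles laterally.

def separated(alt_diff_ft, lateral_nm,
              min_vertical_ft=1000, min_lateral_nm=3.0):
    return alt_diff_ft >= min_vertical_ft or lateral_nm >= min_lateral_nm

separated(1000, 0.5)  # True: vertical separation alone is enough
separated(300, 5.0)   # True: lateral separation alone is enough
separated(300, 1.0)   # False: neither minimum is met
```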
Interestingly enough, not all aircraft need to be separated by the ATC. Whether airplanes actually need such a separation depends on the class of the airspace in which they’re flying, and the rules under which their pilots are operating their aircraft. These are the rules under which an aircraft can be flown: Visual Flight Rules (VFR), Special Visual Flight Rules (SVFR) and Instrument Flight Rules (IFR). Commercial aircraft are almost always operated under IFR.
Why do commercial planes need separation?
The primary reason is to prevent collisions. Commercial airliners carry hundreds of passengers (as opposed to military jets, which carry only a few) and operate under civil air traffic guidelines, which are followed by every commercial aircraft flying in that airspace. By maintaining an appropriate clearance, they make sure to never get in the paths of other aircraft. This minimizes the risk of collisions occurring between planes.
They also maintain a separation because of something called wake turbulence. When a big plane flies, its wings produce the lift it needs, which in turn produces wake turbulence, i.e., turbulence left in the wake of the plane.
If two commercial planes fly too close to each other, they risk getting mixed up in the disturbance in the wind created by the leading plane, which could have nasty implications for the trailing plane.
Commercial airliners are also less maneuverable than military planes, and their pilots are less likely to be trained in formation flight. Lastly, there is simply not a good enough reason to make planes fly closer than they already are.
Why can military aircraft fly so close to each other?
Of course, military aircraft also need a minimum separation from other aircraft, but they easily manage to fly in close, tight formations composed of quite a few planes. There are a few reasons behind this.
Fighter aircraft are much more maneuverable than commercial planes. They can take sharp turns and dip/climb in a split second at the very last moment. As mentioned earlier, separation between aircraft is for safety during sudden, unexpected maneuvers, not just for avoiding wake turbulence. Since military pilots are talking directly to the other pilots in the formation on a discrete radio frequency at all times, they know well in advance what maneuver their fellow pilots will make and can prep themselves accordingly.
Long before the actual flight, each pilot in the formation is briefed about the general flight plan, how it will be carried out and what formations they will be flying in. Thus, there are minimal chances of a pilot engaging in an unexpected maneuver to begin with. That’s why military pilots manage to pull off such death-defying stunts in the air, because it’s all planned and prepped in advance.
Last but not least, military pilots are trained to fly in tight formations. They spend hours and hours learning about flying in such formations, and then actually practice flying in them. The majority of commercial pilots can be generally assumed to not have that level of training or expertise to fly so close to other commercial planes.
That’s why commercial planes need to maintain a safe distance from each other, whereas military aircraft often fly very close to each other. |
Old Tupi or classical Tupi is an extinct Tupian language which was spoken by the aboriginal Tupi people of Brazil, mostly those who inhabited coastal regions in South and Southeast Brazil. It belongs to the Tupi–Guarani language family, and has a written history spanning the 16th, 17th, and early 18th centuries. In the early colonial period, Tupi was used as a lingua franca throughout Brazil by Europeans and aboriginal Americans, and had literary usage, but it was later suppressed almost to extinction, leaving only one modern descendant with an appreciable number of speakers, Nheengatu.
The names Old Tupi or classical Tupi are used for the language in English and by modern scholars (it is referred to as tupi antigo in Portuguese), but native speakers called it variously ñeengatú "the good language", ñeendyba "common language", abáñeenga "human language", in Old Tupi, or língua geral "general language", língua geral amazônica "Amazonian general language", língua brasílica "Brazilian language", in Portuguese.
Old Tupi was first spoken by the Tupinambá people, who lived under cultural and social conditions very unlike those found in Europe. It is quite different from Indo-European languages in phonology, morphology, and grammar, but it was adopted by many Luso-Brazilians born in Brazil as a lingua franca known as Língua Geral.
It belonged to the Tupi–Guarani language family, which stood out among other South American languages for the vast territory it covered. Until the 16th century, these languages were found throughout nearly the entirety of the Brazilian coast, from Pará to Santa Catarina, and the River Plate basin. Today, Tupi languages are still heard in Brazil (states of Maranhão, Pará, Amapá, Amazonas, Mato Grosso, Mato Grosso do Sul, Goiás, São Paulo, Paraná, Santa Catarina, Rio Grande do Sul, Rio de Janeiro, and Espírito Santo), as well as in French Guiana, Venezuela, Colombia, Peru, Bolivia, Paraguay, and Argentina.
It is a common mistake to speak of the "Tupi–Guarani language": Tupi, Guarani and a number of other minor or major languages all belong to the Tupian language family, in the same sense that English, Romanian, and Sanskrit belong to the Indo-European language family. One of the main differences between the two languages was the replacement of Tupi /s/ by the glottal fricative /h/ in Guarani.
The first accounts of the Old Tupi language date back from the early 16th century, but the first written documents containing actual information about it were produced from 1575 onwards – when Jesuits André Thévet and José de Anchieta began to translate Catholic prayers and biblical stories into the language. Another foreigner, Jean de Lery, wrote the first (and possibly only) Tupi "phrasebook", in which he transcribed entire dialogues. Lery's work is the best available record of how Tupi was actually spoken.
In the first two or three centuries of Brazilian history, nearly all colonists coming to Brazil would learn the tupinambá variant of Tupi, as a means of communication with both the Indians and with other early colonists who had adopted the language.
The Jesuits, however, not only learned to speak tupinambá, but also encouraged the Indians to keep it. As a part of their missionary work, they translated some literature into it and also produced some original work written directly in Tupi. José de Anchieta reportedly wrote more than 4,000 lines of poetry in tupinambá (which he called lingua Brasilica) and the first Tupi grammar. Luís Figueira was another important figure of this time, who wrote the second Tupi grammar, published in 1621. In the second half of the 18th century, the works of Anchieta and Figueira were republished and Father Bettendorf wrote a new and more complete catechism. By that time, the language had made its way into the clergy and was the de facto national language of Brazil – though it was probably seldom written, as the Roman Catholic Church held a near monopoly of literacy.
When the Portuguese Prime Minister Marquis of Pombal expelled the Jesuits from Brazil in 1759, the language started to wane fast, as few Brazilians were literate in it. Besides, a new rush of Portuguese immigration had been taking place since the early 18th century, due to the discovery of gold, diamonds, and gems in the interior of Brazil; these new colonists spoke only their mother tongue. Old Tupi survived as a spoken language (used by Europeans and Indian populations alike) only in isolated inland areas, far from the major urban centres. Its use by a few non-Indian speakers in those isolated areas would last for over a century still.
When the Portuguese first arrived on the shores of modern-day Brazil, most of the tribes they encountered spoke very closely related dialects. The Portuguese (and particularly the Jesuit priests who accompanied them) set out to proselytise the natives. To do so most effectively, doing so in the natives' own languages was convenient, so the first Europeans to study Tupi were those priests.
The priests modeled their analysis of the new language after the one with which they had already experience: Latin, which they had studied in the seminary. In fact, the first grammar of Tupi – written by the Jesuit priest José de Anchieta in 1595 – is structured much like a contemporary Latin grammar. While this structure is not optimal, it certainly served its purpose of allowing its intended readership (Catholic priests familiar with Latin grammars) to get enough of a basic grasp of the language to be able to communicate with and evangelise the natives. Also, the grammar sometimes regularised or glossed over some regional differences in the expectation that the student, once "in the field", would learn these finer points of the particular dialect through use with his flock.
Significant works were a Jesuit catechism of 1618, with a second edition of 1686; another grammar written in 1687 by another Jesuit priest, Luís Figueira; an anonymous dictionary of 1795 (again published by the Jesuits); a dictionary published by Antônio Gonçalves Dias, a well-known 19th century Brazilian poet and scholar, in 1858; and a chrestomathy published by Dr Ernesto Ferreira França in 1859.
Considering the breadth of its use both in time and space, this language is poorly documented in writing, especially the dialect of São Paulo spoken in the south.
The phonology of tupinambá has some interesting and unusual features. For instance, it does not have the lateral approximant /l/ or the multiple vibrant rhotic consonant /r/. It also has a rather small inventory of consonants and a large number of pure vowels (12).
This led to a Portuguese pun about this language, that Brazilians não têm fé, nem lei, nem rei (have neither faith, nor law, nor king) as the words fé (faith), lei (law) and rei (king) could not be pronounced by a native Tupi speaker (they would say pé, re'i and re'i).
|       | Front     | Central   | Back      |
| Close | /i/, /ĩ/  | /ɨ/, /ɨ̃/  | /u/, /ũ/  |
| Mid   | /ɛ/, /ɛ̃/  |           | /ɔ/, /ɔ̃/  |
| Open  |           | /a/, /ã/  |           |
The nasal vowels are fully vocalic, without any trace of a trailing [m] or [n]. They are pronounced with the mouth open and the palate relaxed, not blocking the air from resounding through the nostrils. These approximations, however, must be taken with caution, as no actual recording exists, and Tupi had at least seven known dialects.
|                       | Labial   | Coronal  | Palatal  | Velar    | Glottal |
| Nasal                 | m /m/    | n /n/    | ñ /ɲ/    | ng /ŋ/   |         |
| Plosive, prenasalized | mb /ᵐb/  | nd /ⁿd/  |          | ng /ᵑɡ/  |         |
| Plosive, voiceless    | p /p/    | t /t/    |          | k /k/    | (/ʔ/)^  |
| Fricative             | b /β/    | s /s/†   | x /ʃ/    | g /ɣ/    | h /h/   |
| Semivowel             | û /w/    |          | î /j/    | ŷ /ɰ/‡   |         |
| Flap                  |          | r /ɾ/    |          |          |         |
- ^ The glottal stop is found only between a sequence of two consecutive vowels and at the beginning of vowel-initial words (aba, y, ara, etc.). When it is indicated in writing, it is generally written as an apostrophe.
- † Some authors remark that the actual pronunciation of /s/ was retroflex /ʂ/. Also, most sources describe some dialects having /s/ and /h/ in free variation.
- ‡ The actual pronunciation of ŷ is the corresponding semivowel for /ɨ/. It may not have existed in all dialects.
According to Nataniel Santos Gomes, however, the phonetic inventory of Tupi was simpler:
- p, t, k, ‘ (/ʔ/)
- b (/β/)
- s, x (/ʃ/)
- m, n, ñ (/ɲ/)
- û (/w/), î (/j/)
- r (/ɾ/)
- i, y (/ɨ/), u, ĩ, ỹ, ũ
- e, o, õ, ẽ
- a, ã
This scheme does not regard Ŷ as a separate semivowel, does not consider the existence of G (/ɣ/), and does not differentiate between the two types of NG (/ŋ/ and /ᵑɡ/), probably because it does not regard MB (/ᵐb/), ND (/ⁿd/) and NG (/ᵑɡ/) as independent phonemes, but mere combinations of P, T, and K with nasalization.
Santos Gomes also remarks that the stop consonants shifted easily to nasal stops, which is attested by the fitful spelling of words like umbu (umu, ubu, umbu, upu, umpu) in the works of the early missionaries and by the surviving dialects.
According to most sources, Tupi semivowels were more consonantal than their IPA counterparts. The Î, for instance, was rather fricative, thus resembling a very slight [ʑ], and Û had a distinct similarity with the voiced stop [ɡʷ] (possibly via [ɣʷ], which would likewise be a fricative counterpart of the labiovelar semivowel), thus being sometimes written gu. As a consequence of that character, Tupi loanwords in Brazilian Portuguese often have j for Î and gu for Û.
It would have been almost impossible to reconstruct the phonology of Tupi if it did not have a wide geographic distribution. The surviving Amazonian Nhengatu and the close Guarani correlates (Mbyá, Nhandéva, Kaiowá and Paraguayan Guarani) provide material that linguistic research can still use for an approximate reconstruction of the language.
Scientific reconstruction of Tupi suggests that Anchieta either simplified or overlooked the phonetics of the actual language when he was devising his grammar and his dictionary.
The writing system employed by Anchieta is still the basis for most modern scholars. It is easily typed with regular Portuguese or French typewriters and computer keyboards (but not with character sets such as ISO-8859-1, which cannot produce ẽ, ĩ, ũ, ŷ and ỹ).
Its key features are:
- The tilde indicating nasalisation: a → ã.
- The circumflex accent indicating a semivowel: i → î.
- The acute accent indicating the stressed syllable: abá.
- The use of the letter x for the voiceless palatal fricative /ʃ/, a spelling convention common in the languages of the Iberian Peninsula but unusual elsewhere.
- The use of the digraphs yg (for Ŷ), gu (for /w/), ss (to make intervocalic S unvoiced), and of j to represent the semivowel /j/.
- Hyphens are not used to separate the components of compounds except in the dictionary or for didactical purposes.
Most Tupi words are roots with one or two syllables, usually with double or triple meanings that are explored extensively for metaphorical purposes:
- a = round / head / seed
- kaa = forest / bush / plant
- oby = green / blue; considered a single colour in many languages.
- y = water / liquid / spring / lake, puddle / river, brook
The most common words tend to be monosyllables:
- a = head / round
- ã = shadow / ghost
- po = hand
- sy = mother / source
- u = food
- y = water, river
Disyllabic words belong to two major groups, depending on which syllable is stressed:
- If the stress falls on the penult, the last syllable ends with an unstressed vowel (traditionally written with the letter a). Such words usually drop the last vowel (or sometimes even the entire last syllable) to form compounds or drop the vowel and undergo a consonant mutation (nasalisation): ñeenga (speech) + katú (good) = ñeen-ngatú (the good language).
- If the stress falls on the last syllable, the syllable is unchanged: itá (rock, stone) + úna (black) = itaúna.
Polysyllabic (non-compound) words are less common but are still frequent and follow the same scheme:
- paranã (the sea) + mirĩ (little) = paranãmirĩ (salty lagoon)
- pindóba (palm tree) + ûasú (big) = pindobusú.
Nasal mutation of the initial consonant is always present, regardless of stress. Polysyllabic words are never stressed on the first syllable.
Compound nouns are formed in three ways:
- Simple agglutination:
- arasy = ara + sy (day + mother) = mother of day: the sun
- yîara = y + îara (water + lord/lady) = lady of the lake (a mythological figure).
- Blending with either apocope or aphesis:
- Pindorama = pindoba + rama (palm tree + future aspect) = where there will be palm trees (this was the name by which some of the coast tribes called their homeland).
- Takûarusu = takûara + ûasú (bamboo + big) = big bamboo tree. Portuguese: Taquaruçu (a variant of bamboo).
- Complex blending, with both apocope and aphesis:
- Taubaté = taba + ybaté (village + high) = the name of a Brazilian town, Taubaté, which was originally the name of a village on the top of a mountain.
- Itákûakesétyba = takûara + kesé + tyba (bamboo + knife + collective mark): where knives are made out of bamboo wood (the name of a Brazilian town: Itaquaquecetuba).
Later, after colonisation, the process was used to name things that the Indians originally did not have:
- îande + Îara (our + Lord) = a title held by Christ in Catholic worship.
- Tupã + sy (God + mother) = the mother of God (Mary).
Some writers have even extended it further, creating Tupi neologisms for the modern life, in the same vein as New Latin. Mário de Andrade, for instance, coined sagüim-açu (saûĩ + [g]ûasú) for "elevator", from sagüim, the name of a small tree-climbing monkey.
Tupi was an agglutinative language with a moderate degree of fusional features (nasal mutation of stop consonants in compounding, the use of some prefixes and suffixes), although Tupi is not a polysynthetic language.
Tupi parts of speech did not follow the same conventions of Indo-European languages:
- Verbs are "conjugated" for person (by means of prepositioning subject or object pronouns) but not for tense or mood (the very notion of mood is absent). All verbs are in the present tense.
- Nouns are "declined" for tense by means of suffixing the aspect marker (Nominal TAM) but not for gender or number.
- There is a distinction of nouns in two classes: "higher" (for things related to human beings or spirits) and "lower" (for things related to animals or inanimate beings). The usual manifestation of the distinction was the use of the prefixes t- for high-class nouns and s- for low-class ones, so that tesá meant "human eye", and sesá meant "the eye of an animal". Some authors argue that it is a type of gender inflection.
- Adjectives cannot be used in the place of nouns, neither as the subject nor as the object nucleus (in fact, they cannot be used alone).
Tupi had a split-intransitive grammatical alignment. Verbs were preceded by pronouns, which could be subject or object forms. Subject pronouns like a- "I" expressed the person was in control, while object pronouns like xe- "me" signified the person was not. The two types could be used alone or combined in transitive clauses, and they then functioned like subject and object in English:
- A-bebé = I-fly, "I can fly", "I flew".
- Xe pysyka = me catch, "Someone has caught me" or "I'm caught".
- A-î-pysyk = I-him-catch, "I have caught him".
Although Tupi verbs were not inflected, a number of pronominal variations existed to form a rather complex set of aspects regarding who did what to whom. That, together with the temporal inflection of the noun and the presence of tense markers like koára "today," made up a fully functional verbal system.
Word order played a key role in the formation of meaning:
- taba abá-im (village + man + tiny) = tiny man from the village
- taba-im abá (village + tiny + man) = man from the small village
Tupi had no means to inflect words for gender, so it used adjectives instead. Some of these were:
- apyŷaba = man, male
- kuñã = woman, female
- kunumĩ = boy, young male
- kuñãtãĩ = girl, young female
- mena = male animal
- kuñã = female animal
The notion of gender was expressed, once again, together with the notion of age and that of "humanity" or "animality".
The notion of plural was also expressed by adjectives or numerals:
- abá = man; abá-etá = many men
Unlike Indo-European languages, nouns were not implicitly masculine except for those provided with natural gender: abá "man" and kuñã[tã] "woman/girl"; for instance.
Without proper verbal inflection, all Tupi sentences were in the present or in the past. When needed, tense was indicated by adverbs like ko ara, "this day".
Adjectives and nouns, however, had temporal inflection:
- abáûera "he who was once a man"
- abárama "he who shall be a man someday"
That was often used as a semantic derivation process:
- akanga "head"
- akangûera "skull" (of a skeleton)
- abá "man"
- abárama "teenager"
With respect to syntax, Tupi was mostly SOV, but word order tended to be free, as the presence of pronouns made it easy to tell the subject from the object. Nevertheless, native Tupi sentences tended to be quite short, as the Indians were not used to complex rhetorical or literary uses.
Most of the available data about Old Tupi are based on the tupinambá dialect, spoken in what is now the Brazilian state of São Paulo, but there were other dialects as well.
According to Edward Sapir's categories, Old Tupi could be characterized as follows:
- With respect to the concepts expressed: complex, of pure relation, that is, it expresses material and relational content by means of affixes and word order, respectively.
- With respect to the manner in which such concepts are expressed: a) fusional-agglutinative, b) symbolic or of internal inflection (using reduplication of syllables, functionally differentiated).
- With respect to the degree of cohesion of the semantic elements of the sentence: synthetic.
Sample vocabulary
Colors
- îubá = yellow, golden
- (s)oby = blue, green
- pirang = red
- ting = white
- (s)un = black
Substances
- (t)atá = fire
- itá = rock, stone, metal
- y = water, river
- yby = earth, ground
- ybytu = air, wind
People
- abá = man (as opposed to woman), Indian or Native-American (as opposed to European), human being (as opposed to the animal world)
- aîuba = Frenchman (literally "yellow heads")
- maíra = Frenchman (the name of a mythological figure that the Indians associated with the Frenchmen)
- karaíba = foreigner, white man (literally "spirit of a dead person"); also means "prophet"
- kunhã = woman
- kuñãtã'ĩ = girl
- kuñãmuku = young woman
- kunumĩ = boy
- kunumĩgûasu = young man
- morubixaba = chief
- peró = Portuguese (neologism, from "Pero", old variant of "Pedro" = "Peter", a very common Portuguese name)
- sy = mother
- tapy'yîa = slave (also the term for non-Tupi speaking Indians)
The body
- akanga = head
- îuru = mouth
- îyba = arm
- nambi = ear
- pó = hand
- py = foot
- py'a = heart
- (t)esá = eye
- (t)etimã = leg
- tĩ = nose
- (t)obá = face
Animals
- aîuru = parrot, lory, lorikeet
- arara = macaw, parrot
- îagûara = jaguar
- ka'apiûara = capybara
- mboîa = snake, cobra
- pirá = fish
- so'ó = game (animal)
- tapi'ira = tapir
Plants
- ka'api = grass, ivy (from which the word capybara comes)
- ka'a = plant, wood, forest
- kuri = pine
- (s)oba = leaf
- yba = fruit
- ybá = plant
- ybyrá = tree, (piece of) wood
- ybotyra = flower
Society
- oka = house
- taba = village
Adjectives
- beraba = brilliant, gleaming, shiny
- katu = good
- mirĩ, 'í = little
- panema = barren, contaminated, unhealthy, unlucky
- poranga = beautiful
- pûera, ûera = bad, old, dead
- (s)etá = many, much
- ûasu, usu = big
The Lord's Prayer in Old Tupi:
Oré r-ub, ybak-y-pe t-ekó-ar, I moeté-pyr-amo nde r-era t'o-îkó. T'o-ur nde Reino! Tó-ñe-moñang nde r-emi-motara yby-pe. Ybak-y-pe i ñe-moñanga îabé! Oré r-emi-'u, 'ara-îabi'õ-nduara, e-î-me'eng kori orébe. Nde ñyrõ oré angaîpaba r-esé orébe, oré r-erekó-memûã-sara supé oré ñyrõ îabé. Oré mo'ar-ukar umen îepe tentação pupé, oré pysyrõ-te îepé mba'e-a'iba suí.
Notice that two Portuguese words, Reino (Kingdom) and tentação (temptation) have been borrowed, as such concepts would be rather difficult to express with pure Tupi words.
Presence of Tupi in Brazil
As the basis for the língua geral, spoken throughout the country by white and Indian settlers alike until the early 18th century, and still heard in isolated pockets until the early 20th century, Tupi left a strong mark on the Portuguese language of Brazil, being by far its most distinctive source of modification.
Tupi has given Brazilian Portuguese:
- A few thousand words (some of them hybrids or corrupted) for animals, plants, fruit and cultural entities.
- Multiple names of locations, including states (e.g. Paraná, Pará, Paraíba)
Some municipalities which have Tupi names:
- Iguaçu ('y ûasú): great river
- Ipanema ('y panema): bad, fishless water
- Itanhangá (itá + añãgá): devil's rock
- Itaquaquecetuba (takûakesétyba, from itá + takûara + kesé + tyba): where bamboo knives are made
- Itaúna ("itá + una"): black rock
- Jaguariúna (îagûara + 'í + una): small black jaguar
- Pacaembu (paka + embu): valley of the pacas.
- Paraíba (pará + aíba): bad to navigation or "bad river"
- Paranaíba (paranãíba, from paranã + aíba): dangerous sea
- Paraná-mirim (paranã + mirĩ): salty lagoon (literally: "small sea")
- Pindorama (from pindó, "palm tree", and (r)etama, country): palm country (this is the name the tupiniquins gave to the place where they lived, today known as Brazil).
- Piracaia ("pirá" + "caia"): fried fish
- Piraí (pirá + y): "fish water"
- Umuarama (ũbuarama, from ũbu + arama): where the cacti will grow
Among the many Tupi loanwords in Portuguese, the following are noteworthy for their widespread use:
- abacaxi (pineapple, literally: "fruit with thorns")
- jacaré (caiman)
- mirim (small or juvenile) as in "escoteiro-mirim" ("Boy Scout")
- perereca (a type of small frog, also slang for vulva), literally: "hopper"
- peteca (a type of badminton game played with bare hands) literally: "slap"
- piranha (a carnivorous fish, also slang for immoral women) literally: "toothed fish"
- pipoca (popcorn) literally "explosion of skin"
- piroca (originally meaning "bald", now a slang term for penis)
- pororoca (a tidal phenomenon in the Amazon firth) literally: "confusion"
- siri (crab)
- sucuri (anaconda)
- urubu (the Brazilian vulture)
- urutu (a kind of poisonous snake)
- uruçu (the common name for Melipona scutellaris)
It is interesting, however, that two of the most distinctive Brazilian animals, the jaguar and the tapir, are best known in Brazilian Portuguese by non-Tupi names, onça and anta, despite being named in English with Tupi loanwords.
A significant number of Brazilians have Tupi names as well:
- Araci (female): ara sy, "mother of the day"
- Bartira, Potira (female): Ybotyra, "flower"
- Iara (female): 'y îara, lady of the lake
- Jaci (both): îasy, the moon
- Janaína (female): îandá una, a type of black bird
- Ubirajara (male): ybyrá îara, "lord of the trees/lance"
- Ubiratã (male): ybyrá-atã, "hard wood"
Some names of distinct Indian ancestry have obscure etymology because the tupinambá, like the Europeans, cherished traditional names which sometimes had become archaic. Some of such names are Moacir (reportedly meaning "son of pain") and Moema.
Old Tupi literature was composed mainly of religious and grammatical texts developed by Jesuit missionaries working among the colonial Brazilian people. The greatest poet to write in Tupi, and its first grammarian, was José de Anchieta, who wrote over eighty poems and plays, compiled in his Lírica Portuguesa e Tupi. Later Brazilian authors, writing in Portuguese, employed Tupi in the speech of some of their characters.
Tupi is also remembered as a distinctive trait of nationalism in Brazil. In the 1930s, Brazilian Integralism used it as the source of most of its catchphrases and terminology, such as Anaûé (meaning "you are my brother", the old Tupi salutation, which was adopted as the Brazilian version of the German Sieg Heil or the Roman Ave).
- http://www.fflch.usp.br/dlcv/tupi/posposicao_em_tupi.htm Archived May 25, 2009, at the Wayback Machine
- ALVES Jr., Ozias. Uma breve história da língua tupi, a língua do tempo que o brasil era canibal.
- Ioseph de Anchieta: Arte de grammatica da lingoa mais usada na costa do Brasil. 1595.
- ANCHIETA, José de. Arte da gramática da língua mais usada na costa do Brasil. Rio de Janeiro: Imprensa Nacional, 1933.
- Anchieta, José de (2004). Poemas. ISBN 978-85-336-1956-2.
- DI MAURO, Joubert J. Curso de Tupi Antigo.
- GOMES, Nataniel dos Santos. Síntese da Gramática Tupinambá.
- Perfil da língua tupi
- EDELWEISS, Frederico G. Tupis e Guaranis, Estudos de Etnonímia e Lingüística. Salvador: Museu do Estado da Bahia, 1947. 220 p.
- EDELWEISS, Frederico G. O caráter da segunda conjugação tupi. Bahia: Livraria Progresso Editora, 1958. 157 p.
- EDELWEISS, Frederico G. Estudos tupi e tupi-guaranis: confrontos e revisões. Rio de Janeiro: Livraria Brasiliana, 1969. 304 p.
- GOMES, Nataniel dos Santos. Observações sobre o Tupinambá. Monografia final do Curso de Especialização em Línguas Indígenas Brasileiras. Rio de Janeiro: Museu Nacional / UFRJ, 1999.
- LEMOS BARBOSA, A. Pequeno Vocabulário Tupi–Português. Rio de Janeiro: Livraria São José, 1951.
- LEMOS BARBOSA, A. Juká, o paradigma da conjugação tupí: estudo etimológico-gramatical in Revista Filológica, ano II, n. 12, Rio de Janeiro, 1941.
- LEMOS BARBOSA, A. Nova categoria gramatical tupi: a visibilidade e a invisibilidade nos demonstrativos in Verbum, tomo IV, fasc. 2, Rio de Janeiro, 1947.
- LEMOS BARBOSA, A. Pequeno vocabulário Tupi–Português. Rio de Janeiro: Livraria São José, 1955. (3ª ed.: Livraria São José, Rio de Janeiro, 1967)
- LEMOS BARBOSA, A. Curso de Tupi antigo. Rio de Janeiro: Livraria São José, 1956.
- LEMOS BARBOSA, A. Pequeno vocabulário Português-Tupi. Rio de Janeiro: Livraria São José, 1970.
- MICHAELE, Faris Antônio S. Tupi e Grego: Comparações Morfológicas em Geral. Ponta Grossa: UEPG, 1973. 126 p.
- Eduardo De Almeida Navarro (1998). Método moderno de tupi antigo a língua do Brasil dos primeiros séculos. ISBN 978-85-326-1953-2.
- RODRIGUES, Aryon Dall'Igna. Análise morfológica de um texto tupi. Separata da Revista "Logos", ano VII, N. 5. Curitiba: Tip. João Haupi, 1953.
- RODRIGUES, Aryon Dall'Igna. Morfologia do Verbo Tupi. Separata de "Letras". Curitiba, 1953.
- RODRIGUES, Aryon Dall'Igna. Descripción del tupinambá en el período colonial: el arte de José de Anchieta. Colóquio sobre a descrição das línguas ameríndias no período colonial. Ibero-amerikanisches Institut, Berlim.
- SAMPAIO, Teodoro. O Tupi na Geografia Nacional. São Paulo: Editora Nacional, 1987. 360 p.
- Francisco da Silveira Bueno (1998). Vocabulário tupi-guarani, português. ISBN 978-85-86632-03-7.
- Tibiriçá, Luís Caldas (2001). Dicionário tupi-português com esboço de gramática de Tupi Antigo. ISBN 978-85-7119-025-2.
For a list of words relating to the Tupi language, see the Old Tupi language category of words in Wiktionary, the free dictionary.
- The art of the grammar of the Tupi language, by Father Luis Figueira
- Tupi Swadesh-vocabulary list (from Wiktionary's Swadesh-list appendix)
- "Abá nhe'enga oîebyr – Tradução: a língua dos índios está de volta", by Suzel Tunes essay in Portuguese.
- An elementary course of Old Tupi (in Portuguese)
- Another course of Old Tupi (in Portuguese)
- Ancient Tupi Home Page
- Tupi–Portuguese dictionary (with non-standard Tupi spelling)
- Sources on Tupinambá at the Curt Nimuendaju Digital Library
In linguistics, a language is a whole system of communication used by a group, encompassing its semantics, grammar, and pragmatics. A dialect is a particular form of a language typical of a specific region. A sociolect is like a dialect, except that it is used by a dispersed social group, such as a profession or a class. A register is a form of a language used in a particular situation or context, such as by lawyers in court. And an idiolect is the distinctive way that an individual speaks: that person’s active vocabulary, grammar, accent, and preferred forms and style of speech.
Previous authors have used these distinctions from linguistics to inform models of culture. Language offers a useful model or example for understanding culture because we are familiar with explicit efforts to learn second languages, to list components of a given language in a dictionary, and to invent a writing system to represent a language (Goodenough 1981, p. 3; Goodenough 2003). I am especially interested in the aspects of culture that involve value-judgments: ethics, politics, aesthetics and religion.
I believe the following ten points from linguistics are relevant to thinking about culture (in general) and specifically about values:
- Registers, dialects, sociolects, and languages all encompass variation. Each person talks and understands differently from others and evolves as a speaker over time. Therefore, the same body of communication by real human beings can be carved up in many ways. For instance, we can draw a dialect map of the USA that shows more or fewer regions, and we can declare the same material to be a language or a dialect depending on whether we prefer to accentuate differences or similarities.
- Idiolects do not nest neatly inside dialects and sociolects, which do not nest neatly inside languages. They are more like complex Venn diagrams. An individual may draw on several dialects or languages. In cases of code-switching, the individual sometimes uses one register or sociolect and sometimes switches to a different one. Or people may consistently use a mix of influences from multiple sources. For instance, Spanglish is characterized by Spanish influences on English–and there are several different regional Spanglish dialects within the United States today. On the other hand, imagine an elderly American whose speech reflects some influence of Yiddish from the old country. Her idiolect is then not part of any dialect but is a unique mix of two languages with which she may successfully communicate even though no one else nearby sounds like her.
- Nobody knows all of any given language, dialect, or sociolect. An effort to catalogue an entire language describes a body of material that none of the users know in full. For instance, nobody knows all the meanings of all the words in an impressive dictionary (which is, itself, an incomplete catalogue of the entire language). Instead, people share sufficient, overlapping portions of the whole that they can communicate, to varying degrees.
- Each of us knows our own entire idiolect. (That is what the word means.) However, we could not fully describe it. For instance, I would not be able to sit down and list all the words I know and all the definitions I employ of those words, let alone the grammatical rules I employ. If I were presented with a dictionary and asked which words I already know, I would check too many of them. The dictionary would prompt me to recall words and meanings that are normally buried too deep in my memory for me to use them. That brings up the next point:
- Language is dynamic, constantly changing as a result of interaction. A language constantly borrows from other languages. A person’s idiolect is subject to change depending on whom the person talks to. The best way to characterize an idiolect or a language (or anything in between) is to collect a corpus of material and investigate its vocabulary and grammar. That is a worthwhile empirical exercise, but it requires a caveat. The corpus is a sample taken within a specific timeframe. The idiolect or the language will change.
- It is possible to make a language (or a smaller unit, like a sociolect) relatively consistent among individuals and sharply distinguish it from other languages. For instance, a government can set rules for one national language and encourage or even compel compliance through schooling, media, religious observance, and even criminal penalties. Law schools teach students to talk like lawyers in court. However, there are also language continua, in which local people speak alike, people a little further away speak a little differently, and so on until they become mutually unintelligible. Before the rise of the nation-state and modern mass media, language continua were much more common than sharply distinguishable languages. Linguistic boundaries require exercises of power that are costly and never fully successful. Meanwhile, at the level of an idiolect, individuals may strive to make their own speech consistent and distinctive or else let it change dynamically in relation with others. People vary in this respect.
- Language is–in some way or degree–holistic. That is to say, the components of a language depend on other components. For instance, a definition of a word uses other words that need definitions. I leave aside interesting and complicated debates about holism in the philosophy of language and presume that some degree of holism is inescapable.
- Therefore, it can be helpful to diagram an idiolect, a dialect, or a language as a network of connected components rather than a mere list. I am not saying that language is a network; language is language. However, semantic network diagrams are useful models of idiolects, dialects, and languages because they identify important components (e.g., words) and connections among them. A semantic network diagram for a group of people will capture only a small proportion of each individual’s language but may illuminate what they share, showing how they communicate effectively.
- While words and other components of language are linked in ways that can be modeled as networks, people also belong to social networks, linked together by relationships of influence. A celebrity influences many others because many people receive her communications. The celebrity is the hub of a large social network. At the opposite extreme, a hermit would not influence, or be influenced by, anyone (except perhaps by way of memories).
- Whole populations change their languages surprisingly quickly, and sometimes without mass physical migration. The same population that spoke a Celtic language (Common Brittonic) transitioned partly to Latin, then fully to Germanic Anglo-Saxon, and then to a mixture of Anglo-Saxon and French, without very many people ever moving across the sea to England. Today about 33 million people in South Asia also speak dialects of English. A few people can strongly influence a whole population due to their network position–which, in turn, often reflects power.
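Points 3 and 8 in the list above can be made concrete with a toy model. The following Python sketch (the vocabularies and links are entirely invented, not taken from the essay) represents two idiolects as small semantic networks and extracts the overlapping sub-network that lets their speakers communicate:

```python
# Toy sketch: idiolects as small semantic networks (word -> linked words).
# The overlap of two such networks models the shared portion of a language
# that makes communication possible, per points 3 and 8.

def shared_network(net_a, net_b):
    """Return the sub-network (nodes and edges) common to two idiolects."""
    nodes = net_a.keys() & net_b.keys()
    return {word: net_a[word] & net_b[word] & nodes for word in nodes}

# Two hypothetical idiolects; neither speaker knows the other's full network.
alice = {
    "dog": {"pet", "bark"},
    "pet": {"dog", "cat"},
    "cat": {"pet"},
    "bark": {"dog", "tree"},
    "tree": {"bark"},
}
bob = {
    "dog": {"pet", "hunt"},
    "pet": {"dog"},
    "hunt": {"dog"},
    "cat": {"pet"},
}

common = shared_network(alice, bob)
# Overlap is partial: both link "dog" and "pet", but "bark"/"hunt" differ.
print(common)
```

The point of the model is that neither `alice` nor `bob` contains the whole "language"; communication runs over the shared sub-network, which is smaller than either idiolect.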
The point of this list is to suggest some similarities with other aspects of culture. Like a language, another part of a culture can be modeled as a shared network of components (e.g., beliefs, values, or practices) as they are used by people who are organized in social networks, which reflect power.
Such a model is a radical simplification, because each individual holds a distinctive and evolving set of components and connections (e.g., linked moral values); but simplifications are useful. And we can model culture usefully at multiple levels, from the individual to a vast nation.
There are precedents for this kind of analysis. The anthropologist Ward Hunt Goodenough encountered the concept of idiolects in the 1940s, while studying for a doctorate in anthropology (Goodenough 2003). By 1962, he had postulated the idea of “each individual’s private culture, if we may call it that, [which] includes his conception of several wholly or partially distinct cultures (some well elaborated and others only crudely developed in his mind) which he attributes to others individually and collectively, both within and without his community. A person’s private culture is likely to include knowledge of more than one language, more than one system of etiquette, more than one set of beliefs, more than one hierarchy of choices, and more than one set of principles for getting things done” (Goodenough 1963, p. 261).
Later, Goodenough named a private culture a “propriospect,” from the Latin words for “private” and “view” (Goodenough 1981, p. 98). Harry Wolcott summarizes: “Propriospects … are networks of sense-making connections created and constantly being reformulated by each of us out of direct experience. As we develop and refine our competencies, simultaneously we ‘construct’ (and continually ‘re-model’) our individual propriospects” (Wolcott 1991, p. 267).
Individuals may actually share fundamental characteristics of a group, they may perceive themselves to share those characteristics, and they may be viewed by others as sharing them–but these perceptions do not necessarily align, because every group encompasses diverse propriospects. For instance, you might think that believing in God is essential to a culture; and since you believe in God, you are part of that culture; but someone else may define the culture differently and view you as an outsider.
In a review of Goodenough, Mac Marshall (1982) wrote, “While some may disagree with the finer points of his argument, his position on these matters represents the dominant orientation in American anthropology today.”
A different precedent derives from the Polish tradition of humanistic sociology, founded by Florian Znaniecki in the early 1900s. In the 1980s, a Polish sociologist named Marek Ziolkowski (later a distinguished diplomat) wrote interestingly about “idio-epistemes” (presumably from the Greek words for “private” and “knowledge”) meaning “not only … the cognitive content of an individual’s consciousness at a given moment, but … the whole potential content that can be activated by the individual at any moment, used for definite action, and reproduced introspectively.”
Ziolkowski called on sociologists to “enquire into the social regularities in the formation of idio-epistemes, seek relationships among them, establish basic intragroup similarities and intergroup differences.” He acknowledged that any individual will have a unique mentality, “yet every individual shares an overwhelming majority of opinions and items of information with other definite individuals and/or groups.” Those similarities arise because of shared environments and deliberate mutual influence. Ziolkowski coined the word “socio-episteme” for those aspects of an idio-episteme that are shared with other individuals. As he noted, these distinctions were inspired by concepts from linguistics.
Even earlier, in 1951, the psychologist Saul Rosenzweig had coined the word “idioverse” for “an individual’s universe of events” (Rosenzweig 1951, p. 301).
An analogous move is Wilfred Cantwell Smith’s influential redefinition of “religion” from a bounded system of beliefs (each of which contradicts all other religions in some key respects) to a “cumulative tradition” of thought and behavior that varies internally and overlaps with other traditions (Smith 1962). According to this model, everyone has a unique religion, although shared traditions are important.
These coinages–idioverse, propriospect, idio-episteme, and cumulative tradition–capture ideas that I would endorse, and they reflect research, respectively, in psychology, anthropology, sociology, and religious studies. However, none of the vocabulary has really caught on. There may still be room for a new entrant, and I would like to emphasize the network structure of culture more than the previous words have done. Thus I tentatively suggest idiodictuon, from the classical Greek words for “private” and “net” (as in a fishing net–but modern Greek uses a derivation to mean a network). An individual has an idiodictuon, a group has a phylodictuon, and a whole people shares a demodictuon.
These words are unlikely to stick any better than their predecessors, but if they did, it would reflect their diffusion through a social network plus their utility when added to people’s existing networks of ideas. That is how all ideas propagate, or so I would argue.
A final point: “culture” encompasses an enormous range of components, including words, values, beliefs, habits, desires, and many more. Given this range, it is useful to carve out narrower domains for study. Language is one. I am especially interested in the domain of values, so I would focus on the interconnections among the values that people hold.
Note, however, that there is no neat way to distinguish values from other aspects of culture, such as desires and urges or beliefs about nature. In his influential empirical theory of moral psychology, Jonathan Haidt identifies at least five “Moral Foundations,” one of which is “sanctity/degradation.” I would have treated this category as a powerful human motivation, akin to sexual desire or violence, but not as a moral category, parallel to “care/harm.” Maybe Haidt is right, or maybe I am, but the question isn’t empirical. People actually see all kinds of things as the basis for action and make all kinds of connections among the things they believe. They may link a moral judgment to a metaphysical belief, or a personal aversion, or an aspect of their identity, or a belief about prevailing norms. When we carve out an area for study and call it something like “morality” or “ethics” (or “religion,” or “politics”) we are making our own claims about how that domain should be defined. Such claims require philosophical arguments, not data.
So the steps are: (1) define a domain, a priori, (2) collect a corpus of material relevant to that domain, (3) map it as a network of ideas, and (4) map the human relationships among people who hold different idiodictuons.
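As a toy illustration of these four steps (the domain lexicon, corpus sentences, and speaker relationships below are all hypothetical, invented only to show the shape of the procedure):

```python
# A minimal sketch of the four-step procedure, with invented data.

DOMAIN = {"fair", "harm", "care", "loyal"}      # (1) define a domain a priori

corpus = {                                      # (2) collect a corpus per person
    "ana": "care and fair treatment matter more than being loyal",
    "ben": "loyal friends prevent harm and give care",
}

def idiodictuon(text, domain):
    """(3) map one person's material onto the domain:
    each value found is linked to the other values it co-occurs with."""
    found = [w for w in text.split() if w in domain]
    return {w: set(found) - {w} for w in found}

networks = {who: idiodictuon(text, DOMAIN) for who, text in corpus.items()}

influences = {("ana", "ben")}                   # (4) the social network among holders

# The shared portion of the two idiodictuons (a tiny "phylodictuon"):
shared = networks["ana"].keys() & networks["ben"].keys()
print(sorted(shared))
```

Real corpora would of course be far larger, and step (3) would need genuine semantic analysis rather than word matching; the sketch only shows how the four steps fit together.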
For me, the ultimate point is to try to have better values, and the study of what people actually value is preparatory for that inquiry. Clifford Geertz concludes his famous “Thick Description” essay: “To look at the symbolic dimensions of social action–art, religion, ideology, science, law, morality, common sense–is not to turn away from the existential dilemmas of life for some empyrean realm of de-emotionalized forms; it is to plunge into the midst of them. The essential vocation of interpretive anthropology is not to answer our deepest questions, but to make available to us answers that others, guarding other sheep in other valleys, have given, and thus to include them in the consultable record of what man has said.”
- Geertz, Clifford, “Thick description: Toward an interpretive theory of culture.” In Geertz, The interpretation of cultures. Basic books, 1973. (pp. 41-51).
- Goodenough, Ward, Cooperation in change : an anthropological approach to community development. New York, Russel Sage 1963
- Goodenough, Ward H. In Pursuit of Culture, Annual Review of Anthropology 2003 32:1, 1-12
- Goodenough, Ward H. Culture, Language and Society (Menlo Park: Benjamin Cummings, 1981)
- Marshall, Mac, “Culture, Language, and Society by Ward H. Goodenough” (review), American Anthropologist, 84: 936-937.
- Rosenzweig, Saul. “Idiodynamics in Personality Theory with Special Reference to Projective Methods 1.” Dialectica 5, no. 3-4 (1951): 293-311.
- Smith, Wilfred Cantwell, The Meaning and End of Religion (1962), Fortress Press Edition, Minneapolis, 1991
- Wolcott, Harry F. “Propriospect and the acquisition of culture.” Anthropology & Education Quarterly 22, no. 3 (1991): 251-273.
- Ziolkowski, Marek, “How to Make the Sociology of Knowledge Sociological?.” The Polish Sociological Bulletin 57/60 (1982): 85-105. |
Polio is an infectious disease caused by the polio virus. The polio virus primarily affects the central nervous system (brain and/or spinal cord). Paralysis occurs with 0.1% of all infections.
Polio is transmitted by ingesting contaminated food or water. This is how the virus enters the mouth and pharyngeal cavity. From there it spreads to the intestines, where it multiplies and is finally excreted in the feces.
The incubation period (the time from infection to the onset of illness) is around 6 to 10 days. When the infection stops at this stage, it is referred to as asymptomatic or abortive polio. This is the case in around 4% to 8% of all infected individuals.
Non-specific symptoms, which may also occur with other viral infections, are seen in the early stages of the illness: nausea, headaches, fever, and possibly diarrhea. In about 1% of all polio infections, the virus crosses the barrier of the intestinal tract and penetrates the spinal cord and brain via the bloodstream. This can lead to a non-paralytic form of polio that manifests itself through pain in the head, neck, and back. In about 0.1% of all infections, the nerve cells in the spinal cord and/or brain are attacked by the virus directly, which results in the paralytic form of polio.
The symptoms of polio and long-term consequences of polio (post-polio syndrome) include:
A general lack of strength and endurance
Difficulty breathing and swallowing
Intolerance to cold temperatures
Pain in the muscles and/or joints
Increased muscle weakness/muscle pain
Muscle atrophy (amyotrophia)
Progressively unstable joints/joint deformities
Muscle twitches (fasciculations)
Changes in the gait pattern and/or increased tendency to fall
Since no specific antiviral therapy is available, treatment is limited to symptomatic measures. These include bed rest with careful nursing, correct positioning, and physical therapy. Aftercare includes appropriate physiotherapy and treatment with orthopedic devices such as orthoses. This can improve mobility after the acute stage of the disease.
Ottobock orthoses and supports for polio/post-polio
The products below are designed to help improve mobility for individuals who experience mobility impairments as a result of polio or post-polio syndrome. Whether a product is actually suitable for you and you are capable of using the product to its fullest functionality depends on many different factors. Your physical condition, fitness and a detailed medical examination are key. Your doctor or orthopedic technician will also decide which product is best for you. |
Being able to hear is something most of us cherish. Without hearing, it can be difficult to communicate with the world. Sound plays a fundamental role in our relationship to the things around us.
Hearing loss can happen as we age. Sometimes, medical conditions can cause you to lose some of your hearing. Exposure to constant loud noise is another culprit. And acute infections can trigger sudden hearing loss. Hearing loss is considered the third leading chronic health problem among American adults.
So what happens physiologically when you lose your hearing? It usually means that the passage of sound waves to your brain is impaired.
The ears are pretty complex organs. You might know that you have an external, middle, and inner ear. You also have an eardrum, which is extremely sensitive to damage. But did you know that hair cells in your ear translate sound waves into nerve impulses, which are then transmitted to your brain? These hair cells can also affect your hearing. When hair cells die, they are unable to repair themselves and hearing loss that results from this is permanent.
Now here’s where things get really interesting. Researchers at Purdue University have discovered that the amount of cholesterol in the outer hair cell membranes found in the inner ear can affect how well you hear.
Previous studies have revealed that cholesterol is lower in the outer hair cell membranes than in the other cells of the body. But no one understood the link between cholesterol levels and hearing.
At least until the research team at Purdue decided to adjust cholesterol levels in the outer hair cells of mice and record what happened. When cholesterol levels were reduced, the mice experienced hearing loss. When cholesterol was added, hearing initially improved, but further additions caused hearing loss again.
It seems there are two types of sensory hair cells in the inner ear. These cells are called simply inner and outer hair cells. It is the outer hair cells that are affected by cholesterol levels. Your hearing can be changed by adding or subtracting cholesterol in these cells.
Cholesterol levels normally don’t change much in your outer hair cells. Your body does a good job of fine-tuning levels. However, the researchers believe that when cholesterol in the bloodstream varies depending on diet, levels in the cells of the inner ear may be adversely affected.
The research team hopes that the results of the study will help with understanding how hair cells regulate hearing, providing a new way to assist those with hearing loss.
Clearly diet is one piece of the puzzle when it comes to preventing hearing loss. Keep cholesterol levels in check.
Avoid foods that raise LDL ("bad") cholesterol. There are also some foods that will help keep your ears healthy and free from the damage caused by infection. Pineapples, garlic, kelp, and sea vegetables are all beneficial for the ear.
Also called caissons, drilled shafts, drilled piers, cast-in-drilled-hole piles (CIDH piles) or cast-in-situ piles, these are formed by drilling a borehole into the ground, then placing concrete (and often some sort of reinforcing) into the borehole to form the pile. Rotary boring techniques allow larger-diameter piles than any other piling method and permit pile construction through particularly dense or hard strata. Construction methods depend on the geology of the site; in particular, whether boring is to be undertaken in ‘dry’ ground conditions or through water-saturated strata – i.e. ‘wet boring’. Casing is often used when the sides of the borehole are likely to slough off before concrete is poured.
For end-bearing piles, drilling continues until the borehole has extended a sufficient depth (socketing) into a sufficiently strong layer. Depending on site geology, this can be a rock layer, or hardpan, or other dense, strong layers. Both the diameter of the pile and the depth of the pile are highly specific to the ground conditions, loading conditions, and nature of the project. Pile depths may vary substantially across a project if the bearing layer is not level.
Drilled piles can be tested using a variety of methods to verify the pile integrity during installation. |
Choose one of your favourite animals; it could live in the sky, in the water or on land. Then research the biome (or sub-biome) that your animal lives in. Finally, create the environment and a model of your animal in a box!
Safety advice: Always cut with the scissors pointing away from you.
This is a good activity for kids to complete independently.
Ideal for: Middle Primary Ages 6-9
- be creative
- think and connect
Time required: 40 minutes
Curriculum connections: Science, Geography, Design and Technology, Critical and Creative Thinking
- Coloured pencils
- Device for researching
- Junk material
- Natural materials
- Shoe box or cereal box
[email protected] from Cool Australia
[email protected] resources are designed for parents and teachers to use with children in the home environment. They can be used as stand-alone activities or built into existing curriculum-aligned learning programs. Our [email protected] series includes two types of resources. The first are fun and challenging real-world activities for all ages, the second are self-directed lessons for upper primary and secondary students. These lessons support independent learning in remote or school settings.
Cool Australia’s curriculum team continually reviews and refines our resources to be in line with changes to the Australian Curriculum. |
Copper sulphate, also known as cupric sulphate, is an inorganic compound that combines sulphur with copper. It is the best known and most widely used of the copper salts; world consumption is around 275,000 tonnes per annum, mainly in agriculture, principally as a fungicide. The pentahydrate is the most common form.
Copper Sulphate pentahydrate, cupric sulphate, Blue copper, Blue vitriol, Blue stone, Roman vitriol, Triangle.
Copper sulphate is a blue solid when hydrated. It is whitish when anhydrous. When hydrated, it normally has five water molecules attached to it. It can be dehydrated by heating it. |
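As a quick check on the pentahydrate formula, its molar mass can be computed from standard atomic masses — a small sketch (the atomic masses below are typical rounded tabulated values):

```python
# Molar mass of copper sulphate pentahydrate (CuSO4·5H2O),
# using standard atomic masses in g/mol (rounded tabulated values).
ATOMIC_MASS = {"Cu": 63.55, "S": 32.07, "O": 16.00, "H": 1.008}

def molar_mass(formula):
    """formula: dict mapping element symbol -> atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

anhydrous = molar_mass({"Cu": 1, "S": 1, "O": 4})            # CuSO4
water = molar_mass({"H": 2, "O": 1})                          # H2O
pentahydrate = anhydrous + 5 * water                          # CuSO4·5H2O
water_fraction = 5 * water / pentahydrate

print(f"CuSO4·5H2O: {pentahydrate:.2f} g/mol "
      f"({water_fraction:.1%} water by mass)")
```

This also shows why heating drives off a substantial mass: roughly a third of the pentahydrate's mass is water of crystallization.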
White Oak (Quercus alba)
The white oak is a large, strong, imposing specimen. It has a short stocky trunk with massive horizontal limbs. The wide-spreading branches form an upright, broad-rounded crown. The bark is light ashy gray, scaly or shallowly furrowed, variable in appearance, often broken into small, narrow, rectangular blocks and scales. The leaves are dark green to slightly blue-green in summer, and brown and wine-red to orange-red in the fall. The fall foliage is showy.

Oaks are wind pollinated. Acorns are generally produced when the trees are between 50 and 100 years old, though open-grown trees may produce acorns as early as 20 years. Good acorn crops are irregular and occur only every 4–10 years.

The white oak prefers full sun but has a moderate tolerance to partial shade. It is more shade tolerant in youth and less tolerant as the tree grows larger. It can adapt to a variety of soil textures, but prefers deep, moist, well-drained sites. High-pH soil will cause chlorosis. Older trees are very sensitive to construction disturbances. The deep tap root can make transplanting difficult, so transplant when young. New transplants should receive plenty of water and mulch beneath the canopy to eliminate grass competition. Old oaks on upland sites can be troubled by sudden competition from, and excessive irrigation of, newly planted lawns; their root zones must be respected for them to remain healthy. White oak is less susceptible to oak wilt than the red oak species.
Reinforcement vs. Punishment
Understanding Behavior Modification
All parents want their children to exhibit positive behaviors that will help them transition smoothly through the different stages of their development. While parents have the best intentions, they often don't use the most effective discipline strategies to achieve that goal. Granted, parents do their best to apply the knowledge and experience they have to their parenting. However, becoming more familiar with operant conditioning and behavior-modification strategies such as positive reinforcement can take their parenting to a new level.
Operant conditioning was founded by B.F. Skinner, who believed that the most effective way to understand behavior is to look at the causes of an action and its consequences. His work grew out of Edward Thorndike's "Law of Effect." Through this method of learning, associations are made between behaviors and consequences, which can be positive or negative. As Skinner performed his studies, he found that reinforced or rewarded actions are more likely to be repeated.
In operant conditioning, there are key terms used to describe the types of responses, which are often confused. Positive means adding something, negative means taking something away, reinforcement means increasing a behavior, and punishment means decreasing a behavior.
1) Positive Reinforcement: A behavior is strengthened by the addition of a desirable stimulus. Ex. A parent gives a child more time to play video games if he cleans his room.
2) Negative Reinforcement: A behavior is strengthened by the removal of an undesirable stimulus. Ex. A parent stops nagging once a child cleans her room, making the child more likely to clean it in the future.
3) Positive Punishment: A behavior is weakened by the addition of an undesirable stimulus. Ex. A parent scolds a child when they misbehave.
4) Negative Punishment: A behavior is weakened by removing a desirable stimulus. Ex. A parent takes away a child’s favorite toy when they misbehave.
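The four quadrants above reduce to a two-axis lookup: whether a stimulus is added or removed, and whether it is desirable or undesirable. A small sketch:

```python
# The four operant-conditioning quadrants as a lookup keyed on
# (stimulus added or removed, stimulus desirable or undesirable).
def classify(stimulus_change, stimulus_kind):
    """stimulus_change: 'added' | 'removed';
    stimulus_kind: 'desirable' | 'undesirable'."""
    table = {
        ("added", "desirable"): "positive reinforcement",
        ("removed", "undesirable"): "negative reinforcement",
        ("added", "undesirable"): "positive punishment",
        ("removed", "desirable"): "negative punishment",
    }
    return table[(stimulus_change, stimulus_kind)]

print(classify("added", "desirable"))    # extra video-game time for a clean room
print(classify("removed", "desirable"))  # taking away a favorite toy
```

Framing it this way makes the commonly confused terms mechanical: "positive/negative" names the first axis, "reinforcement/punishment" follows from whether the combination strengthens or weakens the behavior.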
Many parents fall into the discipline strategies involving punishment. And while it can be useful in the moment, more long-term effects such as fear and aggression can be instilled. Through his studies, Skinner found that positive reinforcement was the most effective method of reinforcing desirable behaviors.
When it comes to behavior modification, White Tiger Karate's SKILLZ Child Development Center utilizes positive reinforcement as the go-to approach. The Certified Pediatric Ninja Specialists are trained in science and psychology to impact children's lives positively. In the classroom, students earn stripes for their belts when they work hard to achieve a new skill. When they receive eight stripes, they can earn a higher-level belt. Also, the class structure allows time for a "game" at the end of class as a reward for hard work. These positive reinforcement strategies are also shared with parents in ways that can be implemented at home.
Using positive reinforcement strategies can aid parents in fostering more positive behaviors in their children. Although this takes a little more time and effort to implement, the benefits are long-lasting. Additionally, children will feel more confident and happy, and the parent-child bond will improve. |
A floor plan is a scaled drawing of a building which shows the layout of spaces and how they interact. A floor plan can also be described as a horizontal section of a building: it cuts through the walls and openings within a building. Today we're going to dive back into the Preliminary Design Stage to fully understand how floor plans are made. It isn't just lines and rooms brought together; a lot of thought goes into the design of a floor plan.
Let’s start from Case Study and Literature Review. We have studied the proposed building type, conducted case studies, understood the brief and made decisions on the spaces/functions we want in our building.
We then selected the best possible site using a Site Selection Criteria and studied the surrounding environments using Site Maps.
Next, we did a Site Analysis. We have determined the micro and macro climate of the proposed site, the sun path, major wind directions, and have identified the best possible orientation strategies to be adopted by the building to optimize natural lighting and ventilation.
We then used Zoning to identify which spaces belong in which zone (noisy/quiet zones, private/public zones, accessible zones and secure zones)
Bubble Diagrams helped us zone the spaces/functions of the proposed design and understand how each zone interacts with one another and how they are connected, while Functional Flowcharts helped us connect spaces and functions to each other to bring about functionality and circulation routes within the building.
Finally, we conducted a Space Analysis to understand the space needed to make the proposed spaces/functions functional and comfortable for the user.
Now, how do all these steps play a role in creating a floor plan? The answer is right here in this post.
Generating a Floor Plan from Your Design Preliminaries
Let’s take a look at our functional flowchart. We have the spaces we need for the design, and we know how they connect to one another.
The next step is assigning dimensions to these spaces, using the dimensions from the space analysis we conducted. Since it is a sketch, the shape of the floor plan may not be the final outcome after assigning dimensions. One way to sketch to scale is by using sketchpads that have grids or dots; you can use the grids as a guide to help you sketch floor plans to scale.
There might be a variation of walls used in the design. There might be masonry walls, curtain walls, partition walls, etc. These walls all have various thicknesses and styles which represent them. Add this to your sketch to start understanding the final outcome.
We then add openings to the sketch. We determine the positions of doors and windows. It is necessary to research the type of windows and doors you want to use in your design. Ensure you achieve cross ventilation when positioning windows, and doors are often positioned to rest on walls.
Next, we determine the position of fixtures and furniture. Fixtures are permanent pieces of furniture that cannot be moved, like sinks, toilet seats, cabinets, and stairs. It is important to indicate the position of each fixture using its appropriate symbol.
Finally, we add other details like hidden objects, floor finishing, labels, etc. These give more details to the floor plan and help others understand each aspect of the floor plan.
Floor Plan Graphics
To draw a floor plan, there are certain graphical and design elements you need to consider.
The scale used in drawing a floor plan depends on the size of the floor plan, the available drawing space, and the level of detail to be shown. Generally, we use a scale of 1:100 to draw a floor plan. For large floor plans, the scale moves to 1:200. To show more detail on a floor plan, we use a scale of 1:50 (dimensions are in millimeters).
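The scale arithmetic is simple: a dimension on paper is the real dimension divided by the scale denominator. A small sketch (the 4.5 m room width is an illustrative value):

```python
# Converting real-world dimensions to paper dimensions at common
# floor-plan scales. At 1:100, 1 mm on paper represents 100 mm
# in reality, so paper size = real size / 100.
def to_paper_mm(real_mm, scale_denominator):
    return real_mm / scale_denominator

room_width_mm = 4_500  # an illustrative 4.5 m wide room
for denom in (50, 100, 200):
    print(f"1:{denom}: {to_paper_mm(room_width_mm, denom):.1f} mm on paper")
```

This is also why 1:50 "shows more detail": the same room occupies twice the paper length it would at 1:100, leaving space for fixtures, finishes, and annotations.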
- Line weight
There are a lot of line weights displayed in a floor plan. The most obvious is wall thickness, which is drawn thickest of all. Openings and fixtures are drawn with a medium thickness, while floor finishes and dimensions are drawn with thin lines.
- Line type
Walls, openings, fixtures, furniture are drawn using continuous lines. The section line is a thick broken line, while hidden objects are drawn with short dashes. Grid lines are thin chained lines.
- Signs and symbols
Floor plans have a mixture of many signs and symbols. Below is a diagram of some signs and symbols used in a floor plan:
Here are some textures we adopt in designing floor plans:
- Labelling and Dimensions
Labelling spaces, indicating stair movement, adding grid lines and adding dimensions to the floor plan are basically the final touches. They convey more information that helps you communicate certain details.
Floor plans are usually the starting point for other drawings; therefore, it is important to get it done right. Stay tuned to read about other drawings in future posts. Thank you for reading! |
When scientists mapped the human genome, they found that just 2% of it controlled our own functioning. Fully 8% was the DNA of fossil viruses.
Paleovirologists who study fossil viruses have found that all viruses share a common single-cell ancestor from more than 3 billion years ago.
About 1.5 billion years ago, viruses changed their protein coating, allowing them to penetrate the cells of their hosts.
As multicellular life became increasingly complex, viruses got simpler, eventually giving up independent life. They threw out genes they didn’t use and discarded their means of reproduction until they were totally reliant on cell hosts.
When a virus enters a cell, part of its RNA is converted to DNA within the cell’s gene code.
If that cell is an egg or sperm, that DNA could be passed to the next host individual.
However, that can only happen if the virus does not kill the host.
So, if the virus is beneficial or benign—or if the host’s immune system is able to defeat it and become healthy enough to reproduce—then the viral DNA can be passed down through generations.
This process has occurred enough times over millions of years for viral DNA to make up that 8% of our gene code.
And fossil viral DNA has been very beneficial to us and all other organisms, as we’ll discuss in another EarthDate.
Synopsis: Viruses have been around for billions of years. Scientists have never found fossilized viruses in rocks, but viral fossils are pretty easy to locate—they make up about 8% of the human genome.
- Researchers have found fossilized microbes like bacteria and fungi, but fossils of viruses have never been found in ancient rocks.
- Their fragile structures consist of just a few strands of genetic material and protein coatings that are easily broken down as rocks lithify, so direct fossil evidence isn’t preserved. Viral chemical and biosignatures, however, are still under assessment.
- But we can find viral fossils that preserve genetic code from millions of years ago hiding in our very own genomes and in the genomes of every other living thing—from other mammals to plants to fungi to bacteria and archaea.
- Paleovirology, the study of extinct fossil viruses, helps us to reconstruct the past history of various species and the viruses that shaped them, as well as helps us to shape effective responses to new viruses.
- Where did viruses come from?
- Researchers have used a detailed study of protein folding to show that viruses and bacteria descended from a common ancestor—a fully functioning cell that lived 3.4 billion years ago and was one of Earth’s earliest life forms.
- About 1.5 billion years ago, viruses changed the structure of their protein coat, enabling them to enter host cells.
- Eventually they diverged from cellular life and evolved into simpler symbiotic forms, throwing out genes they didn’t use until they ultimately discarded their own mechanisms for reproduction, becoming dependent on cellular hosts.
- Whereas cellular life evolved to be more complex, viruses evolved to be simpler.
- Viruses are masters of their own rapid evolution.
- Some evolve over decades or centuries, but others—like influenza—evolve so rapidly that different vaccine formulations are required each year to bolster our immune systems.
- Viruses also evolve zoonotically, jumping from one organism to another.
- Recent studies have shown that viruses can also cause their host organisms to evolve, and the evidence is found within the host’s very own genome.
- When researchers mapped the human genome in 2003, they found more than 3 billion haploid base pairs organized into the genes on our 23 chromosomes.
- They were surprised to find that less than 2% of the genome carries the coding for the creation of proteins, the building blocks of life that keep our cells functioning.
- The remaining 98% of our genome consists of a tangle of old genes that don’t function anymore along with strings of repetitive DNA and other elements with no obvious function.
- Within this junkyard of genetic material, researchers found that 8% of our DNA consists of forever-deactivated fossils of defeated viruses that infected our ancestors but lost the battle with our immune system. These are called endogenous retroviruses (ERVs).
- Retroviruses are thought to have originated in marine environments more than 460 million years ago, during the Ordovician Period. They only infect vertebrates.
- When a retrovirus infects a cell, it takes over most cell functions and directs the cell to create an enzyme that converts the virus’s RNA to DNA. Bits of that DNA are inserted into the host cell’s chromosome using another enzyme the retrovirus directs the cell to manufacture.
- If the infected cell is a sperm or egg cell, the retrovirus becomes “endogenous,” meaning its altered DNA can be passed down through generations.
- ERVs are not able to produce new viruses; they are segments of DNA left behind by the genetic engineering of a past virus.
- Like other life forms, humans have been engaged in a continual battle with viruses.
- Viruses invade, so our immune system fights back by changing the shape or composition of its proteins to defeat them. Meanwhile, the viruses struggle to evade our defensive response.
- Pandemics and epidemics punctuate human evolution—populations either adapt or they go extinct. Endogenous retroviruses record the evolution of our proteins, revealing details of these past evolutionary conflicts. |
Why is universal common descent so important, though? What does it actually do? It affirms Charles Darwin, of course, who famously wrote of life being “breathed into few forms or one”, but his theory didn’t actually demand it scientifically. He wrote against a prevailing assumption that natural species – if not artificially selected varieties – were unchanged since creation, but descent with modification needn’t imply a single ancestor, and the fossil record available to him certainly didn’t support that.
In fact, if as he thought evolution were a product of natural law, one would expect multiple origins. Life should be easily repeatable. A single ancestor implies a highly contingent event with a special cause – a contradiction still valid today. But as much research including a recent study confirms, LUCA was already a complex organism which included organelles, suggesting that today’s simpler lifeforms like bacteria have devolved, rather than eukaryotes growing in complexity. The paper does not, of course, suggest that LUCA was created de novo. Theobald’s 2010 study, which showed a single ancestor to be far more probable than multiple origins, did not actually claim that life arose only once:
Theobald’s study does not address how many times life may have arisen on Earth. Life could have originated many times, but the study suggests that only one of those primordial events yielded the array of organisms living today. “It doesn’t tell you where the deep ancestor was,” Penny says. “But what it does say is that there was one common ancestor among all those little beasties.”
In other words, multiple original ancestors could have evolved and exchanged genes over hundreds of millions of years, but only one form avoided eventual extinction to give rise to today’s life. So LUCA is as much a black box to modern life as mitochondrial Eve is to humans: it tells us about historical contingency in the form of a bottleneck, but not about origins.
But now another study adds weight and substance to the hypothesis that viruses devolved from complex cells. It was already known that large amounts of viral genetic material have been introduced into the genomes of complex organisms, one of the most potent and current forms of gene transfer. Whilst that material was restricted to the simple components of RNA viruses and so on, it might be argued whether transposons derived from viruses or vice versa, but at least the results could be distinguished from coding DNA.
But if giant viruses containing the genes of their previous cellular life are, or were, more common than we assume, HGT might be far more extensive in post-LUCA evolution than we know. Maybe it could even account for many of those instances where phylogenetic and taxonomic relationships differ. Or to put it another way, we may be as unable to trace common ancestry with confidence in the lifeforms we know as we are in the precursors of LUCA. Perhaps the most we can really say is that all life is related, on the basic principles of a shared genetic code, body chemistry and so on. That’s a long way from common ancestry, and even further from making common ancestry an incontrovertible biological axiom.
An analogy would be the human MRCA studies I have cited elsewhere to show the plausibility of a historical Adam in historical time. The fact that we can show mathematically that the current human race had a common ancestor within a few thousand years of the present doesn’t actually tell us much about the descent of the human species. It might be relevant to theology, and vaguely interesting to historical anthropology. But in terms of saying where Homo sapiens came from, so what?
If there’s any truth in my suggestion, then the problem is that the neat Darwinian tree of life does not become a bush, or even a forest, but a tangled and impenetrable thicket. How does one make a simple and persuasive pattern out of that? |
A chemistry course covering selected topics from advanced high school chemistry courses, correlating to the standard topics as established by the American Chemical Society.
Prerequisites: Students should have a background in basic chemistry including nomenclature, reactions, stoichiometry, molarity and thermochemistry.
-The study of chemical kinetics is the study of change over time. It answers questions like:
How fast are reactants consumed?
How fast are products formed?
This unit is dedicated to the exploration of how these questions are answered. We will look at the experimental evidence of how concentration affects these rates.
We will also examine what occurs on the molecular level, especially with respect to the motion of molecules, that affects rates of reactions.
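How concentration affects rate can be sketched with a generic rate law, rate = k[A]^n; the rate constant and reaction order below are illustrative values, not data from any specific reaction:

```python
# Sketch: how concentration affects reaction rate for a rate law
# of the form rate = k * [A]**n. The rate constant k and the
# order n are illustrative, not measured values.
def rate(k, conc_A, order):
    return k * conc_A ** order

k = 0.25  # rate constant (units depend on the overall order)
for conc in (0.1, 0.2, 0.4):  # doubling [A] at each step
    print(f"[A] = {conc:>4} M -> rate = {rate(k, conc, order=2):.4f} M/s")
# For a second-order reaction, doubling [A] quadruples the rate;
# for first order it would merely double, and for zero order
# the rate would not change at all.
```

Comparing rates across such doubling experiments is exactly how reaction orders are determined from experimental concentration data.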
-This unit introduces the concept of chemical equilibrium and how it applies to many chemical reactions. The quantitative aspects of equilibrium are explored thoroughly through discussions of the law of mass action as well as the relationship between equilibrium constants with respect to concentrations and pressures of substances. Much of the discussion explores how to solve problems to find either the value of the equilibrium constant or the concentrations of substances at equilibrium. ICE (initial-change-equilibrium) tables are introduced as a problem-solving tool and multiple examples of their use are included.
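The ICE-table method can be sketched for a generic reaction A ⇌ B + C: starting from [A]₀ with no products, the change row is −x, +x, +x, so Kc = x²/([A]₀ − x), a quadratic in x. The Kc and initial concentration below are illustrative:

```python
import math

# ICE-table sketch for A <=> B + C with Kc = [B][C]/[A].
# Initial: [A] = a0, [B] = [C] = 0.  Change: -x, +x, +x.
# Equilibrium: Kc = x**2 / (a0 - x)  =>  x**2 + Kc*x - Kc*a0 = 0.
def equilibrium_x(kc, a0):
    # positive root of the quadratic
    return (-kc + math.sqrt(kc * kc + 4 * kc * a0)) / 2

kc, a0 = 4.0e-2, 1.00  # illustrative values
x = equilibrium_x(kc, a0)
print(f"x = {x:.4f} M -> [A] = {a0 - x:.4f} M, [B] = [C] = {x:.4f} M")
```

Substituting the computed x back into x²/([A]₀ − x) recovers Kc, which is a useful sanity check on any ICE-table calculation.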
From a qualitative standpoint, Le Châtelier’s principle is used to explain how various factors shift the position of equilibrium and thus the concentrations of all species; of these factors, only temperature changes the value of the equilibrium constant itself.
The concept of equilibrium is applied to acid and base solutions. To begin, the idea of weak acids and bases is explored along with the equilibrium constants associated with their ionization in water and how the value of the equilibrium constant is associated with the strength of the acid or base. The autoionization of water is discussed and how temperature affects this process. A variety of problem types are covered including calculations of pH, pOH, [OH-], and [H+] for both strong and weak acids and bases.
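The same ICE approach yields the pH of a weak acid solution, since Ka = x²/(c₀ − x) where x = [H+]. A sketch using acetic acid's commonly tabulated Ka ≈ 1.8 × 10⁻⁵ as an example, solving the quadratic exactly rather than assuming x ≪ c₀:

```python
import math

# pH of a weak monoprotic acid HA from its Ka, via the ICE setup
# Ka = x**2 / (c0 - x) with x = [H+]. Solved exactly with the
# quadratic formula. Ka value is the usual tabulated figure for
# acetic acid, used here only as an example.
def weak_acid_pH(ka, c0):
    x = (-ka + math.sqrt(ka * ka + 4 * ka * c0)) / 2  # [H+] in M
    return -math.log10(x)

print(f"pH of 0.10 M acetic acid ~ {weak_acid_pH(1.8e-5, 0.10):.2f}")
```

The exact answer here differs only slightly from the "small-x" approximation pH ≈ −log₁₀(√(Ka·c₀)), which is why that shortcut is usually acceptable when Ka is small relative to c₀.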
Aqueous salt solutions are classified as acids and bases and the multi-step ionization of polyprotic acids is discussed. Finally, the concept of Lewis acids and bases is discussed and demonstrated through examples.
-This unit continues and expands on the theme of equilibria. You will examine buffers, acid/base titrations and the equilibria of insoluble salts.
-The overarching theme of thermodynamics is the prediction of whether a reaction will occur spontaneously under a certain set of conditions. Entropy and Free Energy are defined and utilized for this purpose.
, Allison Soult and Kim Woodrum |
According to a study completed on July 28, 2020, a specific enzyme could be the cause of body odor (BO). If proven correct, researchers hope specialized treatments will be produced for severe cases. However, this may take years of research to create an effective solution.
Body Odor Causing Enzyme
While conducting a study at the University of York, a team discovered this enzyme. They noted that although people use antiperspirants to eliminate their BO, some still struggle with it. In severe cases, this can lead to self-esteem issues, and sometimes people even opt to have their sweat glands removed.
Before this study was released, not showering, excessive sweating, and genetic conditions were thought to be the leading causes of body odor. However, the BO enzyme has allowed scientists to explore another explanation: they were able to use the enzyme to identify the bacteria that create odor molecules. According to researchers, this is a massive development, as it will aid in understanding human bodies better.
Although this may seem meaningless, this study could be beneficial to many people around the world. In severe cases, as mentioned before, people develop low self-esteem and sometimes even have their glands removed. Considering surgery can be very expensive, this could save someone thousands of dollars. Not to mention, low self-esteem often leads to isolation, so this study could also boost someone’s confidence. Due to this, looking into this enzyme sounds like a good idea.
How Can An Enzyme Lead To Body Odor?
Also, scientists believe the enzyme will allow them to develop a targeted inhibitor. As a result, they would be able to halt BO production without disrupting the armpit microbiome. Though armpits house several kinds of bacteria, researchers found that the BO enzyme existed in one specific type.
The exciting thing about this specific enzyme is that it existed before Homo sapiens evolved. This suggests to scientists that BO may play a more significant role in life than previously thought. According to Dr. Gordon James, “this research was a real eye-opener.” Scientists say that to ward off BO, people should wash their problem areas at least twice a day with soap. Shaving under the armpits, washing clothes regularly, and wearing natural fabrics can also help ward off BO.
Though people already practice these habits, it is always essential to restate the information. Due to the lack of specificity in the study, it is hard to believe that the data is reliable. However, other reports suggest that this information is, in fact, accurate.
Interestingly, the study has not garnered much recognition, considering this could be a scientific breakthrough. Scientific coverage seems to be concentrated on the coronavirus lately, which is understandable. However, there is something to be said about the news focusing all attention on the pandemic. Most people would agree that this seems like a fear-mongering tactic. Undoubtedly, the coronavirus is to be taken seriously. However, the news should showcase both good and bad stories alongside scientific studies.
The Specific Molecules Involved
According to The Irish News, Staphylococcus hominis bacteria are the main microbes behind body odor. After scientists transferred the enzyme to non-odor-producing bacteria, those bacteria began to create a smell. This is the same process that occurs when people start to sweat under their armpits.
It seems that over time as humans began to evolve, so did body odor. Hopefully, once scientists find a way to eliminate this enzyme, body odor will be a thing of the past. Though it is worth noting the study only tells the public how BO is produced, scientists have not yet found a way to remove the enzyme. Typically, experiments take a while before they are successful. The study could take up anywhere from five months to 10 years.
The best thing to do at the moment is to be patient and hope for the best. Although finding time to catch up on this study would not hurt. Especially, considering how chaotic the world is, this study could be a good break from reality. Though as stated before, it should be taken with a grain of salt. Overall, however, the study gives a somewhat detailed description of everything and even manages to tie in a bit of history. Interestingly, one tiny enzyme is the cause for years of some people’s pain.
Opinion by Reginae Echols
Edited by Cathy Milne-Ware
Yahoo! Life: True cause of body odour identified by scientists
Shropshire Star: Scientists identify what causes body odour
The Irish News: Scientists identify what causes body odour
Featured Image Courtesy of sandwich’ Flickr Page- Creative Commons License
Inline Image Courtesy of Clare_and_Ben’s Flickr Page- Creative Commons License |
Earlier, you learned about the functions a DBMS should have. Among these, some closely related functions work together to ensure that the database is reliable and remains in a consistent state.
The names of the functions are:
- Transaction support.
- Concurrency Control.
Although each function can be discussed separately, they are mutually dependent. Many DBMSs allow users to carry out simultaneous operations on the database. If these operations are not controlled, the accesses may interfere with one another, and the database can become inconsistent. To overcome this problem, the DBMS implements a concurrency control technique using a protocol that prevents database accesses from interfering with one another. In this chapter, you will learn about concurrency control and transaction support for a centralized DBMS that consists of a single database.
What is Transaction in DBMS?
A transaction is an action, or sequence of actions, carried out by a single user or application program that reads or updates the contents of the database. It is a logical unit of work on the database: it may be an entire program, a part of a program, or a single command (such as the SQL INSERT or UPDATE command), and it may involve any number of operations on the database. From the database point of view, the execution of an application program can be considered as one or more transactions with non-database processing taking place in between.
Let's take an example of a simple transaction in which a user transfers 620 from A's account to B's account. This transaction may seem small and straightforward, but it includes several low-level tasks.
The first part debits A's account:
Old_Bal = A.bal
New_Bal = Old_Bal - 620
A.bal = New_Bal
The second part credits B's account:
Old_Bal = B.bal
New_Bal = Old_Bal + 620
B.bal = New_Bal
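The two halves above must succeed or fail together: if the debit happens but the credit does not, the database is left inconsistent. A minimal sketch in Python (the Account class and the balances are invented for illustration, not a real DBMS API) shows the transfer as one atomic unit of work that rolls back on failure:

```python
# Sketch of the transfer as a single atomic transaction.
# Account and its balances are illustrative only, not a real DBMS interface.

class Account:
    def __init__(self, bal):
        self.bal = bal

def transfer(a, b, amount):
    """Debit account `a` and credit account `b` as one logical unit of work."""
    old_a, old_b = a.bal, b.bal      # saved so we can undo partial work
    try:
        a.bal = a.bal - amount       # first part: debit A's account
        if a.bal < 0:
            raise ValueError("insufficient funds")
        b.bal = b.bal + amount       # second part: credit B's account
    except Exception:
        a.bal, b.bal = old_a, old_b  # rollback: database returns to its old state
        raise

a, b = Account(1000), Account(200)
transfer(a, b, 620)
print(a.bal, b.bal)  # 380 820
```

If the credit step fails for any reason, the rollback restores both balances, so the database never reflects a half-finished transfer.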
You can also see the DBMS Transactions page to get more information about DBMS transactions.
It is to be noted that the transaction is very closely related to concurrency control.
What is Concurrency Control in DBMS?
Concurrency control is the method of managing simultaneous operations on the database without letting them interfere with one another.
The Need for Concurrency Control
A key purpose in developing a database is to facilitate multiple users to access shared data in parallel (i.e., at the same time). Concurrent accessing of data is comparatively easy when all users are only reading data, as there is no means that they can interfere with one another. However, when multiple users are accessing the database at the same time, and at least one is updating data, there may be the case of interference which can result in data inconsistencies.
Concurrency control technique implements some protocols which can be broadly classified into two categories. These are:
- Lock-based protocol: Database systems built on lock-based protocols employ a mechanism in which a transaction cannot read or write a data item until it acquires a suitable lock on it.
- Timestamp-based protocol: The most frequently used concurrency protocol is the timestamp-based protocol. This protocol uses either the system time or a logical counter as a timestamp.
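As a rough illustration of the lock-based idea (using Python threads rather than a real DBMS), each "transaction" below must acquire a lock before performing its read-modify-write on a shared data item, which prevents the lost-update interference described above:

```python
import threading

# Hypothetical sketch of a lock-based protocol: a transaction may not
# read or write the shared item until it gains the lock on it.

balance = 0
lock = threading.Lock()

def credit_100_times():
    global balance
    for _ in range(100):
        with lock:          # acquire the lock before the read-modify-write
            balance += 1    # critical section: no interleaving is possible here

threads = [threading.Thread(target=credit_100_times) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 1000 -- without the lock, lost updates could make this smaller
```

Removing the `with lock:` line reintroduces exactly the interference problem concurrency control exists to solve: two transactions read the same old balance and one update is silently lost.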
10 Birds That You Really Want To Avoid
Eastern wild turkey
The wild turkey is native to North America and is the heaviest member of the diverse Galliformes. It is the same species as the domestic turkey, which was originally derived from a southern Mexican subspecies of wild turkey. Although native to North America, the turkey probably got its name from the domesticated variety being imported to Britain in ships coming from the Levant via Spain. The British at the time therefore associated the wild turkey with the country Turkey and the name prevails.
The Wild Turkey in the United States in 1957 ranged from Arizona to southeastern Oklahoma and thence through Tennessee, West Virginia, and New York, and south to Florida and Texas. It formerly ranged north to southeastern South Dakota, southern Wisconsin, southern Ontario, and southwestern Maine. The A.O.U. Checklist also described Upper Pliocene fossils in Kansas, and Pleistocene fossils widely from New Mexico to Pennsylvania and Florida.
The Californian turkey, Meleagris californica, is an extinct species of turkey indigenous to the Pleistocene and early Holocene of California. It became extinct about 10,000 years ago. The present wild turkey population derives from wild birds re-introduced from other areas by game officials.
Roseate spoonbill
The roseate spoonbill is a gregarious wading bird of the ibis and spoonbill family, Threskiornithidae. It is a resident breeder in South America mostly east of the Andes, and in coastal regions of the Caribbean, Central America, Mexico, the Gulf Coast of the United States and on central Florida’s Atlantic coast at the Merritt Island National Wildlife Refuge, which adjoins the NASA Kennedy Space Center.
The roseate spoonbill nests in shrubs or trees, often mangroves, laying two to five eggs, which are whitish with brown markings. Immature birds have white, feathered heads, and the pink of the plumage is paler. The bill is yellowish or pinkish. Information about predation on adults is lacking. Nestlings are sometimes killed by turkey vultures, bald eagles, raccoons and fire ants. In 2006, a 16-year-old banded bird was discovered, making it the oldest wild individual.
Vulturine guineafowl
The vulturine guineafowl is the largest extant species of guineafowl. Systematically, it is only distantly related to other guineafowl genera. Its closest living relative, the white-breasted guineafowl, Agelastes meleagrides, inhabits primary forests in Central Africa. It is a member of the bird family Numididae, and is the only member of the genus Acryllium. It is a resident breeder in northeast Africa, from southern Ethiopia through Kenya and just into northern Tanzania.
The vulturine guineafowl is a gregarious species, forming flocks outside the breeding season typically of about 25 birds. This species’ food is seeds and small invertebrates. This guineafowl is terrestrial, and will run rather than fly when alarmed. Despite the open habitat, it tends to keep to cover, and roosts in trees. It makes loud chink-chink-chink-chink-chink calls. It breeds in dry and open habitats with scattered bushes and trees, such as savannah or grassland. It usually lays 4-8 cream-coloured eggs in a well-hidden grass-lined scrape.
California condor
The California condor is a New World vulture, the largest North American land bird. This condor became extinct in the wild in 1987, but the species has been reintroduced to northern Arizona and southern Utah, the coastal mountains of central and southern California, and northern Baja California.
Although other fossil members are known, it is the only surviving member of the genus Gymnogyps. The plumage is black with patches of white on the underside of the wings; the head is largely bald, with skin color ranging from gray on young birds to yellow and bright orange on breeding adults. Its huge 3.0 m wingspan is the widest of any North American bird, and its weight of up to 12 kg nearly equals that of the trumpeter swan, the heaviest among native North American bird species. The condor is a scavenger and eats large amounts of carrion. It is one of the world’s longest-living birds, with a lifespan of up to 60 years.
Marabou stork
The marabou stork is a large wading bird in the stork family Ciconiidae. It breeds in Africa south of the Sahara, in both wet and arid habitats, often near human habitation, especially waste tips. It is sometimes called the “undertaker bird” due to its shape from behind: cloak-like wings and back, skinny white legs, and sometimes a large white mass of “hair”.
Marabou down is frequently used in the trimming of various items of clothing and hats, as well as fishing lures. Turkey down and similar feathers have been used as a substitute for making ‘marabou’ trimming.
Sri Lanka Frogmouth
The Sri Lanka frogmouth, Sri Lankan frogmouth or Ceylon frogmouth is a small frogmouth found in the Western Ghats of south India and Sri Lanka. Related to the nightjars, it is nocturnal and is found in forest habitats. The plumage coloration resembles that of dried leaves and the bird roosts quietly on branches, making it difficult to see. Each has a favourite roost that it uses regularly unless disturbed. It has a distinctive call that is usually heard at dawn and dusk. The sexes differ slightly in plumage.
This species is found in the Western Ghats of southwest India and Sri Lanka. Its habitat is tropical forest, usually with dense undergrowth. It can sometimes be found in more disturbed habitats, including plantations. Its presence may be overlooked due to its nocturnal behaviour and camouflage.
Andean condor
The Andean condor is a South American bird in the New World vulture family Cathartidae and is the only member of the genus Vultur. Found in the Andes mountains and adjacent Pacific coasts of western South America, the Andean condor has a wingspan of up to 3.2 m but is exceeded by the wandering albatross, the southern royal albatross, the Dalmatian and the great white pelicans.
It is a large black vulture with a ruff of white feathers surrounding the base of the neck and, especially in the male, large white patches on the wings. The head and neck are nearly featherless, and are a dull red color, which may flush and therefore change color in response to the bird’s emotional state. In the male, there is a wattle on the neck and a large, dark red comb or caruncle on the crown of the head. Unlike most birds of prey, the male is larger than the female.
King vulture
The king vulture is a large bird found in Central and South America. It is a member of the New World vulture family Cathartidae. This vulture lives predominantly in tropical lowland forests stretching from southern Mexico to northern Argentina. It is the only surviving member of the genus Sarcoramphus, although fossil members are known. Large and predominantly white, the king vulture has gray to black ruff, flight, and tail feathers. The head and neck are bald, with the skin color varying, including yellow, orange, blue, purple, and red.
The king vulture has a very noticeable yellow fleshy caruncle on its beak. This vulture is a scavenger and it often makes the initial cut into a fresh carcass. It also displaces smaller New World vulture species from a carcass. King vultures have been known to live for up to 30 years in captivity. King vultures were popular figures in the Mayan codices as well as in local folklore and medicine. Although currently listed as least concern by the IUCN, they are decreasing in number, due primarily to habitat loss.
Muscovy duck
The Muscovy duck is a large duck native to Mexico, Central, and South America. Small wild and feral breeding populations have established themselves in the United States, particularly in the lower Rio Grande Valley of Texas and Florida as well as in many other parts of North America, including southern Canada. Feral Muscovy ducks are found in New Zealand, Australia, and in parts of Europe.
They are a large duck, with the males about 76 cm long, and weighing up to 7 kg. Females are considerably smaller, and only grow to 3 kg, roughly half the males’ size. The bird is predominantly black and white, with the back feathers being iridescent and glossy in males, while the females are more drab. The amount of white on the neck and head is variable, as well as the bill, which can be yellow, pink, black, or any mixture of these.
They may have white patches or bars on the wings, which become more noticeable during flight. Both sexes have pink or red wattles around the bill, those of the male being larger and more brightly colored. The domestic breed, Cairina moschata forma domestica, is commonly known in Spanish as the pato criollo. They have been bred since pre-Columbian times by Native Americans and are heavier and less able to fly long distances than the wild subspecies. Their plumage color is also more variable. Other names for the domestic breed in Spanish are pato casero and pato mudo.
Northern Bald Ibis
The northern bald ibis, hermit ibis, or waldrapp is a migratory bird found in barren, semi-desert or rocky habitats, often close to running water. This 70–80 cm glossy black ibis, which, unlike other members of the ibis family, is non-wading, has an unfeathered red face and head, and a long, curved red bill.
It breeds colonially on coastal or mountain cliff ledges, where it typically lays two to three eggs in a stick nest, and feeds on lizards, insects, and other small animals. The northern bald ibis was once widespread across the Middle East, northern Africa, southern and central Europe, with a fossil record dating back at least 1.8 million years. It disappeared from Europe over 300 years ago, and is now considered critically endangered.
There are believed to be about 500 wild birds remaining in southern Morocco, and fewer than 10 in Syria, where it was rediscovered in 2002. To combat this ebb in numbers, recent reintroduction programs have been instituted internationally, with a semi-wild breeding colony in Turkey, as well as sites in Austria, Spain, and northern Morocco. The reasons for the species’ long-term decline are unclear, but hunting, loss of foraging habitat, and pesticide poisoning have been implicated in the rapid loss of colonies in recent decades. |
Recent technological achievements in bandpass filter design have led to the relatively inexpensive construction of thin-film interference filters featuring major improvements in wavelength selection and transmission performance. These filters operate by transmitting a selected wavelength region with high efficiency while rejecting, through reflection and destructive interference, all other wavelengths. Explore how interference filters operate by selectively transmitting constructively reinforced wavelengths while simultaneously eliminating unwanted light with this interactive tutorial.
The tutorial initializes with a single ray of white light (simulated by a yellow sine wave) incident on the surface of an interference filter at a 20-degree angle. As the light passes through the filter, red wavelengths (represented by red sine waves having a peak of 713 nanometers) are reinforced by constructive interference and passed through, but other wavelengths are reflected away (symbolized by the blue wavelengths) and diminished in intensity through destructive interference inside the filter. In order to operate the tutorial, use the Incident Angle slider to vary the angle of incidence of white light through a range of zero to 20 degrees. As the slider is translated to the left, the peak wavelength passed through the interference filter decreases from 713 nanometers (at a 20-degree incident angle) to 626 nanometers (incident light normal to the filter surface), and the amount of reflected light is also decreased proportionally.
Modern interference filters are modeled after the Fabry-Perot interferometer, designed in the late 1800s by Charles Fabry and Alfred Perot, and are constructed with several layers of thin films applied to an optically flat transparent glass surface. The original interferometer consisted of a device having two partially transparent mirrors separated by a small air gap whose size could be varied by translating one or both of the mirrors. Today, more sophisticated interferometers utilize a variety of mechanisms to measure the interference between light beams, and are often employed to monitor thin-film thickness during fabrication of interference filters and mirrors.
Interference filters can be produced with very sharp transmission slopes, which result in steep cut-on and cut-off transition boundaries that greatly exceed those exhibited by standard absorption filters. To produce modern interference filters, successive layers of dielectric materials, with thickness values ranging between one-quarter and one-half of the target wavelength, are deposited onto an optically flat glass or polymer surface in a vacuum. Light that is incident on the multi-layer dielectric surface is either transmitted through the filter with constructive reinforcement, or reflected and reduced in magnitude by destructive interference (see Figure 1). The filter bandpass, which is determined by the nature of the layered dielectric surface, determines the wavelengths of light that are allowed to be transmitted and multiply reflected when passing through the filter. Blocked wavelengths that are not reinforced and passed by the filter are reflected away and removed from the optical path.
The dielectric materials utilized to fabricate interference filters are generally nonconductive materials having a specific refractive index. Traditional bandpass interference filters are manufactured using zinc sulfide, zinc selenide, or sodium aluminum fluoride (also termed cryolite), but these coatings are hygroscopic and must be insulated from the environment by a protective coating. In addition, the zinc and cryolite salts suffer from low filter transmission characteristics and temperature instability, which further reduces their performance, even though they are simple and relatively cheap to manufacture. After deposition of the thin film salt layers, a final layer of glass or an abrasion-resistant protective coating of silicon monoxide is added.
Introducing semi-transparent layers of metal oxides (known as hard coatings) into thin-film coating technology has alleviated many of the environmental problems associated with interference filters, and dramatically improved their temperature stability. The thin coatings of metals and salts, each with a unique refractive index, are applied in successive layers having alternating high and low refractive index values. The critical element of this design is the interface between two dielectric materials of differing refractive index (one much higher than the other), which is responsible for partially reflecting incident light forward and backward through the filter and producing the interference effect that results in wavelength selection. Reinforced and transmitted wavelength values are determined by the thickness and refractive index of the interspersed dielectric layers. Even though the thin coatings themselves are transparent, light waves reflected and transmitted by the dielectric materials interfere to produce brilliant colors that appear to be emanating from the filter surface.
Dielectric coatings are often bundled into units termed cavities, which are constructed of three to five alternating salt and metal oxide (and sometimes pure metal) layers separated by a wider layer of magnesium fluoride termed a spacer (see Figure 2). The spacers are produced with a thickness that corresponds to even multiples of a quarter or half wavelength in order to reflect or transmit light in registration with the dielectric layers. Increasing the number of cavities utilized to build an interference filter produces a proportional increase in the slope of the cut-on and cut-off wavelength transmission boundaries. Filters featuring up to 15 stacked cavities can have a total of over 75 individual dielectric layers and display bandwidths only a few nanometers in size.
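The quarter-wave layers described above have a physical thickness of t = λ₀/(4n), where λ₀ is the design wavelength and n the layer's refractive index. The short sketch below illustrates the calculation; the design wavelength and the refractive indices for magnesium fluoride and zinc sulfide are typical textbook values used here only as assumptions:

```python
# Quarter-wave layer thickness: t = lambda_0 / (4 * n).
# Design wavelength and refractive indices are assumed, representative values.

def quarter_wave_thickness(wavelength_nm, n):
    """Physical thickness (in nm) of a quarter-wave layer for a design wavelength."""
    return wavelength_nm / (4 * n)

design_wavelength = 550  # nm, assumed green-light target
for material, n in [("MgF2 (low index)", 1.38), ("ZnS (high index)", 2.35)]:
    t = quarter_wave_thickness(design_wavelength, n)
    print(f"{material}: {t:.1f} nm")
```

Note that the higher-index material needs the thinner layer, which is why alternating high/low index stacks have layers of visibly different thicknesses.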
Virtually any type of filter can be designed and constructed with thin-film interference coating technology, including bandpass, shortpass, longpass, dichroic beamsplitters, neutral density, and a variety of mirrors. As discussed above, the number of layers and cavities is utilized to control, with very high precision, the nominal wavelength, bandwidth, and blocking level of the filter. Filters with multiple transmission bands, such as the complex triple-band filters so popular for fluorescence microscopy, can be fabricated with this technique.
The high degree of blocking obtained with thin-film interference filters only applies to a finite wavelength range, beyond which effective blocking drops off dramatically. The range can be extended by adding auxiliary components, such as wideband blockers, but often at a compromise in peak transmission values. In addition, the coating materials utilized in thin-film production are limited in their range of transparency. Once the range has been exceeded, these coatings can become highly absorbing rather than highly reflecting or transmitting, thus reducing the efficiency of the filter. Coating absorption characteristics can also be wavelength-dependent, so the same coating utilized for longpass filters will usually not perform adequately at lower wavelengths in the violet and ultraviolet region. Finally, interference thin-film coatings are sensitive to the incident angle of illumination. As this angle is increased, the spectral characteristics of the coating tend to shift towards shorter wavelengths (the spectrum is blue-shifted). Another drawback is that interference coatings often produce polarized light at high incident angles, an effect that is not always desirable. Regardless of the deficiencies found in thin-film coatings, this technology is still one of the most suitable for wavelength selection in a wide variety of applications. |
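The blue shift with incident angle mentioned above is often described by the first-order approximation λ(θ) = λ₀·√(1 − (sin θ / n_eff)²), where n_eff is the filter's effective refractive index. The sketch below assumes n_eff = 2.0 purely for illustration; real filters quote their own effective index, and a simulated tutorial need not match this idealized formula exactly:

```python
import math

# First-order approximation for the angular blue shift of an interference filter:
#   lambda(theta) = lambda_0 * sqrt(1 - (sin(theta) / n_eff)**2)
# n_eff = 2.0 below is an assumed, illustrative effective refractive index.

def shifted_wavelength(lambda_0, theta_deg, n_eff=2.0):
    s = math.sin(math.radians(theta_deg)) / n_eff
    return lambda_0 * math.sqrt(1 - s * s)

for theta in (0, 10, 20):
    print(theta, round(shifted_wavelength(713, theta), 1))
```

The computed peak moves to shorter wavelengths as the tilt angle grows, matching the qualitative behavior described in the text.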
You will be able to copy single familiar words correctly. You label items and select appropriate words to complete short phrases or sentences.
You will be able to copy familiar short phrases correctly. You write or
word process items [for example, simple signs and instructions] and set
phrases used regularly in class. When you write familiar words from
memory your spelling may be approximate.
You will be able to write short sentences on familiar topics, using
textbooks, wall charts and your own written work. You express personal
responses such as likes, dislikes and feelings. You write short phrases
from memory and your spelling is readily understandable.
You will be able to show that you understand single words presented in
clear script in a familiar context. You may need visual cues.
You will be able to show that you understand short phrases presented in
a familiar context. You match sound to print by reading aloud single
familiar words and phrases. You use books or glossaries to find out the
meanings of new words.
You will be able to show that you understand short texts and dialogues,
made up of familiar language, printed in books or word processed. You
identify and note main points and personal responses such as likes,
dislikes and feelings. You are beginning to read independently,
selecting simple texts and using a bilingual dictionary or glossary to
look up new words.
You will be able to respond briefly, with single words or short phrases, to what you see and hear. Your pronunciation may be approximate, and you may need considerable support from a spoken model and from visual cues.
You will be able to give short, simple responses to what you see and hear. You name and describe people, places and objects. You use set phrases such as asking for help and permission. Your pronunciation may still be approximate and the delivery hesitant, but your meaning is clear.
You will be able to take part in brief prepared tasks of at least two or three exchanges, using visual or other cues to help you initiate and respond. You use short phrases to express personal responses such as likes, dislikes and feelings. Although you use mainly memorised language, you occasionally substitute items of vocabulary to vary questions or statements.
This graphic organiser, ‘Chalking Up the Experiment’ helps students record an experiment on the three states of matter, showing the beginning, middle and end result.
This chemical science worksheet, ‘Ice Is Cool’ supports students to identify and record different materials. It supports an understanding of changes in materials.
This history poster ‘Robert Boyle’ features important factual background information about this famous Irish scientist, who pioneered scientific experimentation and became known as the “father of chemistry”. This is a great resource for broadening students’ knowledge of significant figures from Irish history and could be used as a prompt for further discussion or study.
Using Writing to Facilitate Discussion
In FYS, writing can easily be integrated to facilitate class discussion and to help students synthesize the ideas discussed in class. Here, we’ve divided our ideas for facilitating informed discussion into three venues where you might encourage student writing: in class, online, or in more formal written responses (on paper).
Providing students time to write in class can often result in more productive class discussions. You might consider:
- Setting aside a few minutes at the start of class to have students free-write about an idea you’ll then engage as a class.
- Starting class by having students review their notes from the previous class and develop a few questions for clarification or expansion.
- Concluding groupwork with collaborative writing: instead of asking students to immediately report their groups’ findings to the class, ask them to work together to draft a paragraph that you can then use to both continue the discussion and model student writing.
There are several forums you can use to facilitate conversations online. Students could try:
- Posting on a collaborative Facebook page for the class, where they share relevant articles, links, and images and comment on one another’s posts.
- Interacting on a blog or forum where a “discussion leader” posts questions raised by class discussion or readings and other students respond.
- Using a blog to post low-stakes writing assignments and respond to one another by asking questions raised by their writing.
- Developing a collaborative course outline, whereby after each class a different student is responsible for posting a short summary of the class discussion and interesting questions online and other students can amend or expand on that summary.
- Isolating a single sentence from the reading and explaining why it’s central to the text at hand.
Typically, these are higher-stakes writing assignments that ask students to synthesize and respond to readings or discussions. Students can engage course discussion in more formal writing assignments by:
- Engaging (and citing, of course) the ideas of their classmates in their final papers.
- Writing weekly reflections that respond to the readings and class discussions.
- Identifying a “central question” from class discussion each week and writing a short response.
For more ideas on using writing to facilitate course discussion, please see our page Teaching Writing in Your Classroom. |
The principle of Boolean logic lets you organize concepts together in sets. When searching computer databases, including keyword searching of the online catalog, these sets are controlled by use of Boolean operators OR, AND, and NOT.
But forget about libraries and computers for a moment and think about ice cream. Imagine all the possibilities a soft ice-cream machine could make if it offered chocolate, strawberry, and vanilla, and could mix together any and all combinations of those flavors. There are seven possible combinations of ice cream flavors available: each flavor by itself, three combinations of two flavors in a swirl, plus all three flavors mixed together.
In Boolean logic terms, a set that included any of these flavor combinations would be expressed:
strawberry OR vanilla OR chocolate. The Venn diagram for this combination would look like this:
In database searching OR expands a search by broadening the set. It is often used to combine synonyms or like concepts. If you were interested in searching an online database for information about teenagers, to be more comprehensive you might use the set:
adolescents OR adolescence OR teens OR teenagers OR young adults
This search statement would retrieve records that mention any of those terms. Think of OR as either or. The OR operator, however, doesn't have to group together synonyms. You could also search a database for "infants OR children OR adolescents."
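The OR-as-union idea can be sketched with Python sets; the tiny "database" of records below is invented purely for illustration:

```python
# Boolean OR behaves like set union: a record matches if it contains ANY term.
# The records and their numbering are invented for this example.

records = {
    1: "college students and alcohol",
    2: "teenagers in high school",
    3: "infant nutrition",
    4: "young adults and motivation",
}

def matching(term):
    """IDs of records whose text contains the search term."""
    return {rid for rid, text in records.items() if term in text}

result = matching("teenagers") | matching("young adults")  # OR = union
print(sorted(result))  # [2, 4]
```

Adding more OR terms can only grow (or leave unchanged) the result set, which is exactly why OR broadens a search.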
If you don't wish to try every possible flavor combination the soft ice cream machine offers all at once, you can narrow your selection. You might want to choose an individual flavor or one combination of flavors. To order a swirl of all three flavors combined, chocolate, vanilla, and strawberry all must be included.
In terms of Boolean logic, a set that includes all of three elements would be expressed as:
strawberry AND vanilla AND chocolate. The Venn diagram for this combination would look like this:
In database searching AND narrows a search. It is often used for linking together different concepts. Searching a database with the search statement college students AND behavior would retrieve records only if both the phrase "college students" and the word "behavior" appear. Think of AND as only if also.
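The "only if also" behavior of AND corresponds to set intersection, sketched here with another invented mini-database:

```python
# Boolean AND behaves like set intersection: a record matches only if BOTH
# terms appear. Records are invented for illustration.

records = {
    1: "college students and alcohol",
    2: "college students and behavior",
    3: "behavior of toddlers",
}

def matching(term):
    return {rid for rid, text in records.items() if term in text}

result = matching("college students") & matching("behavior")  # AND = intersection
print(sorted(result))  # [2] -- only record 2 mentions both concepts
```

Each extra AND term can only shrink (or leave unchanged) the result set, which is why AND narrows a search.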
When you order ice cream, if you do NOT want chocolate, that would leave you with only three possibilities, strawberry by itself, vanilla by itself, or a swirl of strawberry and vanilla. In other words, you're subtracting a concept out of it. The resulting set would be
(strawberry OR vanilla) NOT chocolate. The Venn diagram for this combination would look like this:
In database searching, NOT is used to get rid of an unwanted concept. If you were interested in studying college students but not high school students, you could create a set college students NOT high school.
Keep in mind that NOT should be used sparingly, perhaps not at all, since it often brings about unintended results. If you were interested in research on college students but not on research of high school students, you could add the concept NOT high school to your search statement. That would eliminate many records that are only about high school students, but it would also eliminate records that deal with both college students and high school students, such as this one: "College students are better prepared than students in boarding high schools to deal with the challenges of living away from home."
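NOT corresponds to set difference, and the same sketch also shows the unintended-results problem just described: a record that covers both concepts gets thrown away. The records below are invented, with record 3 modeled on the quoted example sentence:

```python
# Boolean NOT behaves like set difference -- and can over-prune.
# Records are invented; record 3 covers BOTH college and high school students.

records = {
    1: "college students living on campus",
    2: "high school students and homework",
    3: "college students are better prepared than high school students",
}

def matching(term):
    return {rid for rid, text in records.items() if term in text}

result = matching("college students") - matching("high school")  # NOT = difference
print(sorted(result))  # [1] -- record 3 is lost despite covering college students
```

This is precisely why NOT should be used sparingly: the subtraction removes every record mentioning the excluded term, relevant or not.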
You can combine sets in a variety of ways using different combinations of Boolean operators. When writing out the sets, parentheses are important because they keep the logic straight. In the grouping (high school students OR college students) AND (behavior OR motivation) AND (drugs OR alcohol), the parentheses tell the database to create a final set of records that may include either of the phrases high school students OR college students, but only when the records also include either of the words behavior OR motivation, and only when either of the words drugs OR alcohol also appears.
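The grouping rule can be made concrete with sets: each parenthesized OR-group is built first as a union, and the groups are then intersected. The mini-database is again invented for illustration:

```python
# Parentheses keep the logic straight: form each OR-group (union) first,
# then AND the groups together (intersection). Records are invented.

records = {
    1: "college students, motivation, and alcohol",
    2: "high school students and drugs",
    3: "college students and drugs",  # lacks any behavior/motivation term
}

def matching(term):
    return {rid for rid, text in records.items() if term in text}

group1 = matching("high school students") | matching("college students")
group2 = matching("behavior") | matching("motivation")
group3 = matching("drugs") | matching("alcohol")

print(sorted(group1 & group2 & group3))  # [1] -- only record 1 satisfies all three groups
```

Record 3 is excluded even though it mentions both college students and drugs, because it fails the middle (behavior OR motivation) group.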
BEYOND BOOLEAN (TRUNCATION AND WILD CARDS)
In addition to using Boolean operators, for good searches, it may be necessary to use truncation and wildcard characters to expand or control searches.
Truncation is a searchable shortened form of a word. This means you can take short cuts. Instead of writing out adolescents OR adolescence, you can use the truncated term adolescen*. Unfortunately, databases are not consistent with truncation symbols, so in one you might have to use adolescen*, but in another adolescen?, and in yet another, just adolescen (if truncation is automatic). Many databases are smart enough to pick up regular plurals without adding truncation, such as school retrieving both school and schools, but not all do, and they would be even less likely to be designed so that child would retrieve both child and children, without also retrieving childbirth, childhood, and childishness.
Wild card characters are useful because of alternate spellings and other quirks in the English language. Just as British and Canadian ice cream comes in flavours, not flavors, a British or Canadian study of college students may use the term behaviour, instead of behavior. Searching with a wild card can help. With the term behavio?r, both behaviour and behavior may be searched together. The most common use of wild cards is because of women. Wom*n should pick up both women and woman. Once again, because databases are not consistent with the characters they use, as the examples indicate, you will have to use different wildcard characters in different databases. It is best to check the help screens to see the exact symbols and rules. |
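Under the hood, truncation and wild cards behave like simple pattern matching. The sketch below translates them into Python regular expressions (the actual symbols vary by database; here the `*` truncation becomes `.*` and the single-character wild card becomes an optional or single `.`):

```python
import re

# Truncation and wild cards approximated with regular expressions.
# Symbols differ between databases; these translations are illustrative.

words = ["adolescent", "adolescents", "adolescence", "behavior", "behaviour",
         "woman", "women", "childbirth"]

trunc = re.compile(r"^adolescen.*$")  # adolescen*  -> any ending allowed
wild  = re.compile(r"^behavio.?r$")   # behavio?r   -> one optional letter
womn  = re.compile(r"^wom.n$")        # wom*n       -> exactly one wild letter

print([w for w in words if trunc.match(w)])  # the three adolescen- forms
print([w for w in words if wild.match(w)])   # behavior and behaviour
print([w for w in words if womn.match(w)])   # woman and women
```

Note that the anchored patterns avoid the over-matching problem mentioned above: `wom.n` catches woman and women but not, say, womanhood.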
Music Is a Powerful Learning Tool for Hearing-Impaired Children
It may seem counter-intuitive that music can help the hearing-impaired to detect sound, but that is exactly what research is discovering. In fact, scientists are finding that music and singing may help hearing-impaired children to perceive speech. That’s an important finding given the role that communication plays in early childhood learning.
The most recent body of research1 in this area was conducted at the University of Helsinki, Finland, and University College London. Researchers studied children with cochlear implants (electronic devices that help children who are profoundly deaf or hard of hearing to hear sounds) and the impact of music on their hearing skills. Scientists played music for the children and then measured how they perceived sound, speech, and singing. Researchers also measured how the children’s brain responses changed as the music changed.
Research concluded that the more children participated in singing and music, the better their perception of speech was in a noisy environment, such as hearing a teacher in a noisy classroom.
The authors of the study believe that the findings indicate the importance of music-based activities in the classroom for all students, and that parents should adopt singing and musical activities in the home, as well. And the more it can be used in everyday life, the more it can support the development of communication skills in hearing-impaired children.
How Can Hearing-Impaired Children Hear Music?
The Music Therapy Association of British Columbia2 gives a detailed explanation of how music can be heard by hearing-impaired children. The most basic explanation is that the majority of children are not totally deaf; it’s only a small percentage that cannot hear at all. Technically speaking, it is the many different frequencies of music that allow hearing-impaired children to hear it. There are many more tones and pitches in music than there are in human speech and that makes it easier for children with differing levels of hearing impairment to hear it.
The association also provides some insight into the role that singing plays in language development for hearing-impaired children. The act of singing not only encourages the child to listen carefully in order to learn the words, but also provides a vocal activity and helps to improve speech. According to the association, “Learning songs can stimulate practice in auditory discrimination, differentiating and integrating letter sounds, syllabication, and pronunciation.” The act of singing is not only enjoyable but hugely beneficial for children as it helps them to learn sentence structure while participating in a fun activity that makes it easier to learn.
Music therapy is an effective method of helping hearing-impaired children to learn communication skills and participate in the world around them. It can also give them great joy and pride in learning. That’s good for children, their development and their future.
1: Ritva Torppa, Andrew Faulkner, Teija Kujala, Minna Huotilainen, Jari Lipsanen. Developmental Links Between Speech Perception in Noise, Singing, and Cortical Processing of Music in Children with Cochlear Implants. Music Perception: An Interdisciplinary Journal, 2018; 36 (2): 156 DOI: 10.1525/MP.2018.36.2.156 https://www.sciencedaily.com/releases/2018/11/181127111009.htm |
Whether you are riding or living normally, physics concepts are acting on you all the time, every day. In snowboarding, physics concepts play a major part. Gravity, for example, holds you on the earth and keeps you from floating away.
Gravity works the same way in snowboarding as it does when walking on earth. While riding, gravity accelerates your body toward the ground at a constant 9.8 meters per second squared (m/s²); the force it exerts on you, measured in Newtons (N), equals your mass times that acceleration. Not only does gravity pull you down the mountain, but it keeps your board on the trail as well. If you performed an aerial stunt and gravity weren't present, you would never descend.
The next of the major physics concepts in snowboarding is friction. Not only does it allow you to carve, but it also allows you to stop the board at the base of the trail. Friction is a physics concept that causes negative acceleration and the creation of heat over time. Friction acts on everything as long as matter is present, so, on earth, friction is present everywhere. Since this physics concept affects the speed of a run, snowboards are made with special composite materials that reduce friction as much as possible. The basic core materials in snowboards include wood and plastic, but other materials are used to add or reduce weight. Different board styles reduce friction as well. An example of this is a racing snowboard compared to a freestyle board. The racing board, due to its sleek, skinny, light-weight design, rides much faster. Of course, some riders (especially beginners) may want more friction. It is obviously easier to learn with less speed, so training boards are also sold. Tricks are easier to perform with less speed. For this type of riding, freestyle boards are available.
Another physics concept involved in snowboarding is acceleration. Acceleration speeds you up as you ride down a trail. It tells you your change of speed over a given amount of time. The formula used to determine this is final velocity minus initial velocity divided by final time minus initial time. On paper this would look like A = (Vf - Vi) / (Tf - Ti). Acceleration varies with the steepness of a trail. Acceleration also occurs when you are slowing down to a stop. This type of acceleration is called negative acceleration. Although people often say "deceleration," in physics slowing down is simply acceleration with a negative value.
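The acceleration formula above can be checked with a quick calculation; the velocities and times below are made-up numbers chosen for illustration, not measurements from the text:

```python
def acceleration(v_initial, v_final, t_initial, t_final):
    """Average acceleration: change in velocity divided by change in time,
    following A = (Vf - Vi) / (Tf - Ti)."""
    return (v_final - v_initial) / (t_final - t_initial)

# Speeding up from 5 m/s to 15 m/s over 4 seconds:
print(acceleration(5, 15, 0, 4))   # 2.5 m/s^2

# Slowing from 15 m/s to a stop in 5 seconds gives negative acceleration:
print(acceleration(15, 0, 0, 5))   # -3.0 m/s^2
```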
The last major physics concept involved in snowboarding is speed (or velocity). This can be measured by calculating the distance you have traveled divided by the time it takes you to ride the distance (V = D / T). Speed is the key physics concept to movement on the mountain. Without speed, snowboarding would not be possible. It is mainly measured in miles per hour or kilometers per hour, but can also be measured using any distance and time measurements. This physics concept makes the sport extremely difficult and dangerous. Falling at high speeds can easily break bones and sprain muscles.
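The speed formula works the same way; the trail length and time here are hypothetical values chosen for illustration, with standard unit conversions (1 m/s = 3.6 km/h ≈ 2.237 mph):

```python
def speed(distance, time):
    """Average speed: distance traveled divided by elapsed time (V = D / T)."""
    return distance / time

# Hypothetical run: 1200 meters of trail covered in 150 seconds.
v = speed(1200, 150)          # 8.0 m/s
print(round(v * 3.6, 1))      # 28.8 km/h
print(round(v * 2.23694, 1))  # 17.9 mph
```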
In conclusion, the entire sport of snowboarding relies on physics concepts. From the time you step on the lift to the time you hit the base of the mountain, physics concepts affect your every movement and action. For more information on the physics concepts of snowboarding and other snowboarding related websites, click on the links tab. |
Qualified immunity provides protection from civil lawsuits for law enforcement officers and other public officials. It attempts to balance the need to allow public officials to do their jobs with the need to hold bad actors accountable. Proponents of qualified immunity argue that without a liability shield, public officials and law enforcement officers would be constantly sued and second-guessed in courts. Critics say the doctrine has led to law enforcement officers being able to violate the rights of citizens, particularly disenfranchised citizens, without repercussion.
Qualified immunity is not the result of a law passed by Congress, nor is it written in the Constitution. It is instead a legal doctrine refined by the U.S. Supreme Court. First outlined in 1967, it has since been greatly expanded. Because qualified immunity is largely a creation of the courts rather than the Constitution, Congress could pass a law amending, affirming, or revoking it at any time. It has so far declined to do so. However, both lawmakers and current Supreme Court justices have considered amending or revoking qualified immunity as it currently stands.
Understanding the pros and cons of this once obscure legal doctrine requires some knowledge of the surrounding legal issues and why qualified immunity was created in the first place. This article will briefly recount the history of qualified immunity, how it is applied in courts today, and the pros and cons it affords society.
Click on a link below to go to that section:
Section 1983 and Fourth Amendment Claims
In the Enforcement Acts of 1871, also known as the “Ku Klux Klan Acts," Congress specifically held that people, including public officials, could be liable in court for violating the constitutional rights of other Americans. This was an effort to help protect black Americans, who were the frequent targets of horrific violence, including lynching, that public officials in some cases condoned. However, Reconstruction ended shortly after these laws were passed, and the legal system offered little protection for black citizens until the 1960s.
People refer to lawsuits against the police alleging civil rights violations as §1983 claims. This is because the civil rights movement of the 1960s reinvigorated 42 U.S.C. §1983 of the Ku Klux Klan Acts. Section 1983 provides that “every person who . . . subject[s], or causes to be subjected, any citizen of the United States . . . to the deprivation of any rights, privileges, or immunities secured by the Constitution and laws, shall be liable to the party injured in an action at law, suit in equity, or other proper proceeding for redress." Put simply, victims of constitutional rights violations can sue whoever was responsible. It was during the civil rights movement that black Americans first began alleging police use of excessive force in violation of §1983.
The Supreme Court has held that the Fourth Amendment prohibits police from using excessive force when apprehending a suspect or making an arrest. Under §1983, such a violation means that officers who use excessive force are subject to civil liability. This exact reasoning occurred in the 1961 case Monroe v. Pape. There, the U.S. Supreme Court held that a police officer acted “under the color of law" in using unreasonable force, and as such could be liable for violating the suspect's Fourth Amendment rights.
The Supreme Court has also held, in the 1971 case Bivens v. Six Unknown Fed. Narcotics Agents, that a Fourth Amendment violation on its own – regardless of §1983 – can lead to civil liability.
Creating Qualified Immunity
It was in 1967 that the U.S. Supreme Court first gave a police officer qualified immunity. In Pierson v. Ray, the U.S. Supreme Court held that a police officer acting in good faith was not liable for a false arrest. The Warren Court had two reasons for giving qualified immunity in the case. First, it wrote that courts had been granting qualified immunity for many years prior to §1983, and that Congress did not specifically ban qualified immunity in that section. The Warren Court then expanded that qualified immunity to acts undertaken by public officials in “good faith." Legal scholars have since questioned this reading of the law. Second, and perhaps more important to the Warren Court, the Supreme Court feared that police would not seek to arrest suspects or do their jobs as diligently if they feared being held liable. “A policeman's lot is not so unhappy that he must choose between being charged with dereliction of duty if he does not arrest when he has probable cause, and being mulcted in damages if he does," Chief Justice Earl Warren wrote.
Fifteen years later, in Harlow v. Fitzgerald the Supreme Court greatly expanded the doctrine to become closer to what it is today. In that case, an 8-1 decision, the Supreme Court said that public officials have immunity unless the official knew or should have known that their actions violated the plaintiff's constitutional rights. It replaced the previous “good faith" test with something more “objective." This test is now the analysis courts use when determining if qualified immunity protects an officer from a lawsuit.
How Qualified Immunity Works
Qualified immunity is not the same as absolute immunity. In other words, there are circumstances in which a public official can be held accountable for constitutional violations in civil court. However, in the Supreme Court's own words, qualified immunity is an officer-friendly doctrine that protects “all but the plainly incompetent or those who knowingly violate the law."
Courts employ a two-part test to determine whether qualified immunity applies. If the answer to both questions is yes, then the public official does not get immunity.
- Did the officer violate a Constitutional right?
- Did the officer know that their actions violated a “clearly established right"?
The next issue is to determine when a right is “clearly established." Under the current doctrine, a right is clearly established when the Supreme Court or the relevant federal appeals court has already treated the conduct as unconstitutional, or where a public official's conduct is “obviously unlawful".
In 2009, the U.S. Supreme Court told lower courts they could skip the first part of the test at their discretion. Many courts now do so.
Judges now look to past court cases to see whether there is a similar set of facts on record that would put the officer on notice that their actions violated the “clearly established" statutory or constitutional rights of another. As a result, the specific facts of a situation alleging police misconduct are highly relevant to whether qualified immunity applies.
Applying Qualified Immunity
The Supreme Court has told lower courts to deny qualified immunity only in cases that are very similar to existing precedent. It is not enough to show that a previous case denied an officer qualified immunity for broadly similar circumstances or actions. Instead, the facts must be “sufficiently clear" that a reasonable officer would understand that they are violating a constitutional or statutory right.
Picking just one example of thousands, the Eleventh Circuit Court of Appeals has distinguished between an officer firing at a dog surrounded by children, hitting and injuring a child, and an officer firing at a truck, instead hitting a passenger. In both cases the officer fired at a target for questionable reasons, resulting in injury to the accidentally hit victim. However, the Eleventh Circuit said the two were dissimilar enough that the officer who shot the child was given qualified immunity, whereas a previous court found that the officer who fired at the truck did not get qualified immunity.
The Benefits of Qualified Immunity
There are several arguments made to continue the doctrine of qualified immunity as it currently exists, including:
- Officers and public officials need qualified immunity to carry out their jobs. Public officials, and particularly police officers, perform vital tasks that may require split-second decisions in stressful circumstances. Taking away qualified immunity could lead to officers being hesitant to act when it is most needed.
- Removing qualified immunity could open up public officials and police to unwarranted lawsuits, in which judges and juries could second-guess split-second decisions and lead to significant costs for cities, police officers, and other public officials.
- Officers do not have absolute immunity, and they can be held liable when they violate a clearly established constitutional right.
- The narrow interpretation of clearly established precedent is appropriate. Officers should not be forced to apply an abstract right under the Constitution to specific circumstances in split-second decisions. Officers cannot be expected to be legal scholars or think through legal arguments when attempting to make an arrest.
- Officers must have room to make mistakes or have moments of bad judgment without worrying about being sued.
The Arguments Against Qualified Immunity
Several arguments against qualified immunity as it currently stands include:
- Liability is necessary to hold officers accountable for excessive force. As it stands officers are free to maliciously violate the Fourth Amendment and other Constitutional rights of citizens without any cost to themselves, provided some obscure court case hasn't already dealt with almost the exact same situation.
- The fear of rampant lawsuits against police is overblown. Many municipalities indemnify their officers, meaning the city, not the officers themselves, would pay for any settlement.
- The current doctrine as applied today in courts leads to hairsplitting and it is often impossible for plaintiffs to meet the burden.
- The doctrine is applied inconsistently and can greatly depend on the judge or judges involved in the case. For example, one judge has argued that “a court can almost always manufacture a factual distinction" when determining whether a previous precedent precludes an officer from getting qualified immunity.
What Will Happen to Qualified Immunity?
There has long been a discussion of ending or significantly amending the qualified immunity doctrine. Congress has introduced legislation to end qualified immunity, and Supreme Court justices of vastly different judicial philosophies have also endorsed revisiting police officers' liability shield. For now, however, it remains the doctrine of the courts that plaintiffs bringing a §1983 claim must first show that the public official's actions were very similar to a previous case in which qualified immunity was waived. |
If you have even glanced at the Common Core standards for ELA you’re bound to notice something: both reading strands are divided into three distinct categories. First, there is the Key Ideas and Details chunk. This section gives standards that address things like theme, citing evidence and describing story elements. Next comes Craft and Structure. Here the focus is on things like tone, point of view and format. Last comes Integration of Knowledge and Ideas – where text comes to meet other mediums and genres.
What They Are:
RaW Journals are something I thought up to smash together the first two chunks. I realized that Key Ideas and Details really asks you to look at a text as a Reader, while Craft and Structure has you examine the text like a Writer. RaW Journals are entries my students write that look at the text as both a Reader and Writer.
How I Use Them:
I have my students divide their paper in half, like setting up for any double-journal entry. They label one side R and one side W. Before I give the assignment, we brainstorm together different things that students notice while reading. Examples include:
- Setting details
- Interesting or difficult vocabulary
- Long or short sentences
- Amount of dialogue
Next, as a class we decide which of those things have to do with looking at the text as a reader. For example, noticing characterization or making a summary would both be reading details. Students would make these notes under the “R” column. Then we decide which have to do with looking at the text as a writer. Studying the sentence length or looking at the vocabulary would fall under that category, so those notes would go under the “W” category.
I never make my students do really long RaW journals, because I would rather have them write down meaningful bullet points than just list arbitrary things to take up space. A typical entry is no more than half of a page long.
Try a practice one first with a small piece of text. Have students read through it two times – once focusing only on “R” and the second time focusing only on “W”.
When the students finish, these are a great way to spark discussion. See if students notice if certain reading characteristics seem to occur hand-in-hand with writing styles (for example – does a creepy mood come with long sentences?). Or, have your students compare their notes with each other. It’s always interesting to see what “W” notes someone else notices!
Will you be using RaW Journals in your classroom this year? |
The Effects of Exercise on the Body
The effects of exercise on various body parts and systems are often felt both immediately and over time. At the beginning of exercising, one feels the frequent contraction of muscles and the elevated heart rate, breathing rate, and body temperature, as asserted by Sheldon (2011). As the body adapts to regular exercise, one develops long-term adaptations such as a larger heart, the ability to breathe more deeply, and denser bones. Virtually all body organs and processes are affected by exercise, as shown by Hughes (2011). From the skin to the cardiorespiratory, cardiovascular, thermoregulatory, and musculoskeletal systems – all experience the impacts.
Harms (2005, p. 124) explains that exercise affects the muscular and skeletal systems. People tend to lose muscle mass and bone density and strength with age. However, these losses can be blunted by exercising regularly, which leads to a fit body and adaptation of the muscular and skeletal systems. The cardiovascular system is also affected: exercise helps improve one's ability to utilize oxygen and perform work despite having a heart condition. The cardiovascular system often experiences increased stroke volume and heart rate, increased blood temperature, and dilation of blood vessels. The respiratory system is the third system affected by exercise, as explained by Bigelow (2010). The effects include an increased breathing rate and tidal volume.
Gender affects the impact of exercise on the human body as well. Females generally show low respiratory exchange ratio as compared to males during exercise. This, in turn, translates into a low rate of carbohydrate oxidation, but high fat oxidation rate. However, in terms of muscle glycogen content, there is no difference in both females and males as suggested by Heaner (2009). Muscle lipid content, on the other hand, is higher in females than males. Due to the differences in these aspects, males tend to have a shorter recovery rate than females after an exercise. They achieve high heartbeats in the first five minutes and immediately return to resting pulse after five minutes. This is evident in William and Sean’s records. On the other hand, females take longer to return to normal pulse rates, for instance, Anita and Gemma. Karen and Orlagh may not bring out the effects of exercise with regards to gender, because they are also affected by age and fitness status. Females also exhibit physiologic and anatomic characteristics that help in distinguishing their exercise responses from those exhibited by males. The factors contribute to the low maximal aerobic power in females. The differences have effects on the integrated respiratory muscle work, pulmonary gas exchange and ventilatory response during exercise as shown by Hughes (2011). Women, therefore, demonstrate higher expiratory flow limitation, greater hypoxemia and increased breathing as compared to men during exercises.
Age is also a factor to consider when discussing the effects of exercise on the body. Exercising regularly can improve one’s health despite the age. However, as one grows older, changes that affect the activity level of an individual are experienced. The mass of muscles decreases with age, and the level and rate of metabolism also decreases as asserted by Slentz et al (2004, p. 34). The maximum heartbeat rate also decreases incredibly as one grows old. Aging has effects on exercising ability in numerous organs, for example, lungs where there is often an increase in rigidity of rib cage, decrease in lung capacity and elasticity of lung tissue. To the heart and blood vessels, aging causes decrease in cardiac output and maximum heart rate as well as reduced blood vessel elasticity and high blood pressure. Webster (2002, p. 106) argues that the musculoskeletal system is also affected by decreasing its flexibility, mass and strength. This is evident from the data collected where the youngest people, Orlagh who is 23 and Gemma who is 24 achieve the highest heartbeat rate in five minutes. However, the others are much older, and their maximum heartbeat rates range from 138 to 150. The youngest person, Orlagh, being more active than the rest, is able to retain her normal resting heartbeat of 84 beats per minute in just three minutes, whereas the others retain their pulse rates after five minutes or so.
Tarnopolsky (2000, p. 313) suggests that fitness is a characteristic of healthy nutrition and regular exercise. Fit individuals are able to endure many exercises and long periods of serious workouts. Fitness combined with young age gives one a perfect exercise impact. Fit individuals often experience maximum heartbeat and enjoy retention to normalcy in a few minutes as compared to the unfit individuals. During exercise, one can only utilize the available oxygen effectively and efficiently if he or she is fit. For instance, William is a fit individual hence able to achieve a heartbeat rate of 132 from his normal rate of 78 beats per minute. To retain the normal pulse rate, he only needs five minutes, which is very efficient. Again, due to his age, he is capable of utilizing the available oxygen in the period available to return to normalcy without any problems. He is young, and this means that he has a large lung capacity with an elastic lung tissue. His cardiac output and maximum heart rate are also high making him achieve arresting pulse as quickly as possible.
Smoking has an effect of making it hard to get fit no matter how young one may be. It is evident that most teenagers and young people lower their performance in exercises through smoking, as explained by Tarr Kent (2010). Smoking harms the physical endurance and exercise tolerance of an individual. It affects the vascular system by raising the levels of fibrinogen, the blood-clotting factor, through its production of carbon monoxide and nicotine. It also lowers the levels of good cholesterol. High fibrinogen causes clotting of blood in blood vessels, which with time narrows the blood arteries thus reducing blood supplies to body organs as evident in Harms (2005, p. 127). A healthy vascular system is essential during exercises, because during this time the vessels often dilate and blood flow increases to enhance oxygen supply to various muscles. Webster (2002, p. 107) explains that lung capacity is also decreased with smoking since a lung congesting phlegm is produced in smokers. Exercise is better enabled in people with good lung capacity and function. Only bodies able to get enough oxygen into the blood stream and pass it on to the working muscles can perform exercises effectively. Otherwise, a smoker is not able to distribute oxygen effectively to the working muscles because of low lung capacity. The amount of oxygen used during exercises is also less in the body of smokers, mainly because carbon monoxide occupies a huge portion. From the results, William is younger than Karen and Sean, thus it is expected that he has a faster recovery time. However, the three have similar recovery rate of five minutes, because William as a smoker has a low lung capacity just like the older individuals. Anita is a smoker and older than most of the others; that is the reason why she has the longest recovery rate of seven minutes. Her recovery time is even longer than Sean’s, who is older than her but does not smoke.
See also the
Dr. Math FAQ:
Browse High School Higher-Dimensional Geometry
Stars indicate particularly interesting answers or
good places to begin browsing.
Selected answers to common questions:
Do cones or cylinders have edges?
Latitude and longitude.
Maximizing the volume of a cylinder.
- Surface Area of Blocks Glued Together [09/09/2001]
Three cubes whose edges are 2, 6, and 8 centimeters long are glued
together at their faces. Compute the minimum surface area possible for
the resulting figure.
- Surface Area of Cones and Pyramids [09/27/2003]
Can the method for finding the surface area of a pyramid be used as
well to find the surface area of a cone?
- Surface Area of Earth (a Sphere) [8/30/1996]
Could you tell me the formula for determining the surface area of a sphere?
- Surface Area of Solid Figures [9/18/1995]
HELP!! My math teacher was talking today about the surface area of
figures. I know about the area and how to find it, but I am confused about surface area.
- Surface Areas of Soap Bubbles [11/30/1999]
If you build a frame shaped like a tetrahedron and dip it in bubble
solution, why do all of the faces of the bubble collapse to a point in
the middle of the tetrahedron?
- Swimming Pool Volume [07/26/2001]
How many gallons will an above-ground 24-foot-diameter pool 48 inches deep hold?
- Swimming Pool Volume [02/28/2002]
What formula would I use to calculate the volume in gallons of a swimming
pool 135' x 70' with different depths of 3', 5', 8' and 11'?
- Taping a Cylinder [01/29/2001]
If I want to wrap sticky tape around a cylinder to cover it, what is the
relation between the diameter of the cylinder, the thickness of the tape,
and the angle between the diameter of the cylinder and the length of the
- Tesseract [04/25/2001]
Why does a tesseract contain eight cubes?
- Tesseracts and Hypercubes [05/22/1997]
Can you give me any good sources of information that a high school
geometry student would understand?
- Tetrahedron Projected on a Plane [10/29/1996]
How do you project a regular tetrahedron perpendicularly onto a plane to
get the maximum area shadow?
- Three-dimensional Counterparts for Two-dimensional Objects [03/04/1998]
Three-dimensional counterparts for lines, polygons, perpendicular lines,
and collinear lines.
- Three-Dimensional Vectors [01/29/2003]
I am finding it very hard to understand and visualise the notion of a
vector in 3 dimensions.
- Three Holes Puzzle [05/02/2002]
A piece of plywood has three holes in it: a circular hole with a
diameter of 2 cm, a square hole with 2 cm sides, and a triangular hole
with a base and height of 2 cm. What object could completely plug AND
pass completely through each hole?
- A Three-Legged Stool [06/26/2001]
Why is a three-legged stool steady, while a four-legged stool can be wobbly?
- Three Pyramids in a Cube [03/04/2002]
How can three pyramids fit exactly into a rectangular prism?
- Three Spheres in a Dish [08/04/1999]
What is the radius of a hemispherical dish if 3 spheres with radii of 10
cm are inside the dish and the tops of all 3 spheres are exactly even
with the top of the dish?
- Tic-Tac-Toe on a Torus [03/29/2001]
Can you make a tic-tac-toe game that won't end in a tie?
- Tipped and Partially Filled Frustum [12/14/2003]
A vessel in a plant where I work is the frustum of a cone on its side.
A liquid is contained in this section and pours out the end of the
cone section, therefore the liquid only takes up a certain portion of
the cone's volume. How can I compute the volume of the liquid?
- Topology [12/31/1997]
Is there a simple definition for homeomorphism? for topology?
- Topology [05/10/1997]
What is topology? What is knot theory?
- Topology [03/19/2001]
What is topology?
- Transformation between (x,y) and (longitude, latitude) [01/02/2002]
I have two questions on the transformation between (x,y) and (longitude,
- Triangle Centroid in 3-Space [12/30/1996]
Given three points in 3-space that, when connected, form a triangle, what
are the coordinates of the centroid?
- Trigonometry in the Third Dimension [04/30/1998]
How does trigonometry change when we move into the third dimension?
- True and Magnetic North [04/08/2002]
How do you convert from true north to magnetic north?
- Two Interpretations of Dimensionality in Geometric Figures [03/16/2004]
A line is 1 dimensional, a square or rectangle is 2 dimensional, and a
cube is 3 dimensional. My question is what if you throw in parabolas
or circles or the absolute value function, etc.? A circle is kind of
like a parabola, but it is very much like a square, so I am thinking
it is 2-dimensional. My conclusion is that the only 1 dimensional
object is a straight line, and a point is 0 dimensional, but I am not
confident that I am correct. Can you please clear this up for me?
- Two Polygons in 3D Space [01/31/2003]
How do you find the shortest distance between two polygons in 3D space?
- Understanding Fourth Dimension Figures [07/05/1998]
Can you help me figure out the equations for fourth dimension figures
such as the tesseract and the hypertetrahedron?
- Union of Spherical Caps [09/10/2001]
Suppose I have two spheres of radius r1 and r2 respectively, and they
partly overlap. What's the formula for the overlapping volume?
- Units and Cylinder Volume [02/06/2003]
Find the volume and surface area of a cylindical storage tank with a
radius of 15 feet and a height of 30 feet.
- Unit Sphere [01/21/2002]
Is there such thing as a "unit sphere" that has to do with trigonometric
functions and the placement of points on said sphere?
- Using Geometry to Make a Roof [8/30/1995]
I'm not a student but a wood worker with a question on solid geometry. I
want to know how to calculate the angle on the sides of triangles in
order to form a cone shaped (not round but angular) roof.
- Using Vectors in Geometry and Physics [07/10/1998]
How do you use vectors in problems about medians, areas, and acceleration
- Variable Volumes in an Oblate Spheroid [12/21/2002]
We need to know how much water is in the tank at any given time.
- Vertices in a Prism [05/12/1999]
What is the formula for finding the number of vertices in a prism?
- Visible/Hidden Sides on Stacked Cubes [12/12/2001]
Find a formula for the number of sides hidden/visible on cubes when put
in different arrangements: a line; a double line; stacked three-
- Visualizing a Klein Bottle [08/19/1999]
What part of a Klein bottle can't be seen or represented in 3D? Is there
a technique that can help me visualize higher dimensions?
- Visualizing Skew Lines [05/16/2002]
What do you call lines that are not parallel but don't intersect?
- Volume and Pi [11/10/1997]
How do you find the volume of a cylinder that is 7.5mm high and has a
diameter of 4mm? |
Use the Data Literacy Cubes to guide students’ exploration of data to enrich their observations and inferences. This is a flexible resource that may be used with a variety of graphical representations of data. This activity requires a graph for students to evaluate.
For over 20 years, satellite altimeters have measured the sea surface height of our ever-changing oceans. This series of images shows the complicated patterns of rising and falling ocean levels across the globe from 1993 to 2015.
An animation showing “sea level fingerprints,” or patterns of rising and falling sea levels across the globe in response to changes in Earth’s gravitational and rotational fields. Major changes in water mass can cause localized bumps and dips in gravity, sometimes with counterintuitive effects.
Sea surface temperature is a measure of the temperature of the very top layer of water. Scientists are not able to put a thermometer into every part of the ocean on Earth to measure its temperature.
Educators, consider using the My NASA Data Literacy Cubes to guide students’ exploration of graphs, maps, and datasets to enrich their observations and inferences.
Step 1: Download the dataset from the MND Earth System Data Explorer
Students and scientists investigate hydrology through the collection of data using measurement protocols and using instruments which meet certain specifications in order to ensure that data are com
Freshwater is found in lakes, rivers, soil, snow, groundwater and ice, and is one of the most essential of Earth's resources, for drinking water and agriculture. However, the distribution of freshwater around the planet is changing. |
Following the Spanish defeats in Cuba and Puerto Rico, an armistice was arranged on August 12, 1898. Fighting was halted and Spain recognized Cuba`s independence. The U.S. occupation of the Philippines was recognized pending final disposition of the islands.
The final treaty was concluded in Paris on December 10, 1898 and provided for the following:
- Spain agreed to remove all soldiers from Cuba and recognize American occupation of the area; the U.S. had previously pledged not to annex the island in the Teller Amendment.
- Spain ceded Guam and Puerto Rico to the United States.
- The United States compensated Spain for its losses with a payment of $20 million.
Ratification of this treaty was not a foregone conclusion in the United States Senate. A great debate ensued, pitting imperialists against anti-imperialists. The point of friction was the Philippines, which were deemed by many not to be an area of vital interest to the U.S. Proponents of expansion argued that other powers (probably Germany) would move into the Philippines if America did not. Further, the U.S. had a duty to export its superior democratic institutions to this region, a revival of the old manifest destiny argument.
In February 1899, the treaty received the necessary two-thirds ratification approval by a single vote. The United States had emerged as a world power, but its public was divided over the nature of the role to be played. |
How to build a starship — and why we should start thinking about it now
- With nuclear fusion and nanotechnology advancing at the pace they are, it's possible that we might not be that far away from constructing small, interstellar space probes.
- If we ever found evidence suggesting life exists on a planet orbiting a nearby star, we'd need to go there for definitive proof — and this would require more sophisticated space travel.
- The most well thought-out interstellar propulsion concept is the nuclear rocket — but this, along with other concepts, would need to be created in space.
- To do this, we would first need to prioritize colonizing Mars and the moon.
With a growing number of Earth-like exoplanets discovered in recent years, it is becoming increasingly frustrating that we can't visit them. After all, our knowledge of the planets in our own solar system would be pretty limited if it weren't for the space probes we'd sent to explore them.
The problem is that even the nearest stars are a very long way away, and enormous engineering efforts will be required to reach them on timescales that are relevant to us. But with research in areas such as nuclear fusion and nanotechnology advancing rapidly, we may not be as far away from constructing small, fast interstellar space probes as we think.
Scientific and societal case
There's a lot at stake. If we ever found evidence suggesting that life might exist on a planet orbiting a nearby star, we would most likely need to go there to get definitive proof and learn more about its underlying biochemistry and evolutionary history. This would require transporting sophisticated scientific instruments across interstellar space.
But there are other reasons, too, such as the cultural rewards we would get from the unprecedented expansion of human experience. And should it turn out that life is rare in our galaxy, it would offer opportunities for us humans to colonise other worlds. This would allow us to spread and diversify through the cosmos, greatly increasing the long-term survival chances of Homo sapiens and our evolutionary descendants.
Five spacecraft (Pioneers 10 and 11, Voyagers 1 and 2, and New Horizons) are currently leaving the solar system for interstellar space. However, they will cease to function many millennia before they approach another star, should they ever get to one at all.
Clearly, if starships are to ever become a practical reality, they will need to be based on far more energetic propulsion technologies than the chemical rockets and gravitational sling shots past giant planets that we use currently.
To reach a nearby star on a timescale of decades rather than millennia, a spacecraft would have to travel at a significant fraction — ideally about 10% — of the speed of light (the Voyager probes are travelling at about 0.005%). Such speeds are certainly possible in principle — and we wouldn't have to invent new physics such as "warp drives," a hypothetical propulsion technology to travel faster than light, or "wormholes" in space, as portrayed in the movie Interstellar.
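These figures are simple to check: at a constant fraction of the speed of light, a distance in light-years divided by that fraction gives the trip time in years. A minimal sketch (the 4.24 light-year distance to Proxima Centauri is the only added assumption; acceleration and deceleration phases are ignored):

```python
# Estimate one-way travel time to a nearby star at a given fraction of c.
def travel_time_years(distance_ly, fraction_of_c):
    # Light covers one light-year per year, so time = distance / speed.
    return distance_ly / fraction_of_c

proxima_ly = 4.24  # distance to Proxima Centauri in light-years

print(travel_time_years(proxima_ly, 0.10))     # ~42 years at 10% of c
print(travel_time_years(proxima_ly, 0.00005))  # ~85,000 years at a Voyager-like 0.005% of c
```

The contrast between the two results is the whole argument of this section: only speeds near a tenth of light speed bring trip times down to a human scale.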
Top rocket-design contenders
Over the years, scientists have worked out a number of propulsion designs that might be able to accelerate space vehicles to these velocities (I outline several in this journal article). While many of these designs would be difficult to construct today, as nanotechnology progresses and scientific payloads can be made ever smaller and lighter, the energies required to accelerate them to the required velocities will decrease.
The most thoroughly developed interstellar propulsion concept is the nuclear rocket, which would use the energy released by fusing together or splitting apart atomic nuclei for propulsion.
Spacecraft using "light-sails" pushed by lasers based in the solar system are also a possibility. However, for scientifically useful payloads this would probably require lasers concentrating more power than the current electrical generating capacity of the entire world. We would probably need to construct vast solar arrays in space to gather the necessary energy from the sun to power these lasers.
Another proposed design is an antimatter rocket. Every sub-atomic particle has an antimatter companion that is virtually identical to itself, but with the opposite charge. When a particle and its antiparticle meet, they annihilate each other while releasing a huge amount of energy that could be used for propulsion. However, we currently cannot produce and store enough antimatter for this to work.
Interstellar ramjets (fusion rockets that use enormous electromagnetic fields as a ram scoop to collect and compress interstellar hydrogen for a fusion drive) are another possibility, but these would probably be harder still to construct.
The most well developed proposal for rapid interstellar travel is the nuclear-fusion rocket concept described in the Project Daedalus study, conducted by the British Interplanetary Society in the late 1970s. This rocket would be capable of accelerating a 450 tonne payload to about 12% of the speed of light (which would get to the nearest star in about 36 years). The concept is currently being revisited and updated by the ongoing Project Icarus study. Unlike Daedalus, Icarus will be designed to slow down at its destination, permitting scientific instruments to make detailed measurements of the target star and planets.
All current starship concepts are designed to be built in space. They would be too large and potentially dangerous to launch from Earth. What's more, to get enough energy to propel them we would need to learn to collect and manage large amounts of sunlight or mine rare nuclear isotopes for nuclear fusion from other planets. This means that interstellar space travel is only likely to become practical once humanity has become a spacefaring species.
The road to the stars therefore begins here — by gradually building up our capabilities. We need to progressively move on from the International Space Station to building outposts and colonies on the moon and Mars (as already envisaged in the Global Exploration Roadmap). We then need to begin mining asteroids for raw materials. Then, perhaps sometime in the middle of the 22nd century, we may be prepared for the great leap across interstellar space and reap the scientific and cultural rewards that will result. |
What Is Kyphosis?
Kyphosis (pronounced: kye-FOH-sis) is a condition affecting the back. It makes the back rounded so it looks hunched over.
What Are the Signs & Symptoms of Kyphosis?
The main signs of kyphosis are:
- A rounded, hunched back. Sometimes the rounding is hard to see. Other times it's more noticeable. Some teens can't straighten their curve by standing up and some can.
- Back pain. Some teens with kyphosis have back pain.
The signs of kyphosis often become obvious during the growth spurt that happens around puberty.
What Are the Kinds of Kyphosis?
There are three main types of kyphosis:
- Congenital kyphosis: This means someone is born with it. Even though it has been there since birth, sometimes it isn't noticed until a teen has done a lot of growing.
- Postural kyphosis: Teens who slouch over a lot can develop a rounded back. The muscles and bones get used to being hunched over. Teens with this type of kyphosis can straighten their curve by standing up.
- Scheuermann's kyphosis: Viewed from the side, normal vertebrae look like stacked rectangles. In Scheuermann's kyphosis, the vertebrae are triangles, or wedge shaped. This makes the spine hunch forward. Kids with this kind of kyphosis aren't able to straighten their curve by standing up straight.
What Causes Kyphosis?
The causes of kyphosis depend on the type:
- Congenital kyphosis: Doctors don't know exactly why some kids are born with this.
- Postural kyphosis: This happens to many people, especially those who look down a lot of the time, such as at schoolwork or a phone.
- Scheuermann's kyphosis: Doctors don't know the exact cause, but it runs in families.
How Is Kyphosis Diagnosed?
To diagnose kyphosis, a doctor or nurse will:
- examine the spine: while you stand, bend from the waist, and lie down
- get X-rays: to see the curve
How Is Kyphosis Treated?
Someone with kyphosis will see an orthopedist (a specialist who treats conditions involving the bones). The orthopedist will examine the spine, look at the X-rays, and recommend treatment.
Postural kyphosis is treated with physical therapy to improve posture. Exercises can strengthen the back muscles to help them better support the spine.
For congenital and Scheuermann's kyphosis, treatment options include:
Observation. This means routine checkups to make sure the rounding isn't starting to cause problems. Treatment might not be needed. Most cases will stop progressing when teens are done growing.
Back brace. Sometimes specialists recommend a back brace. This brace is like a jacket that can be worn under clothes. It won't straighten the curve, but for some kids and teens it could keep the curve from getting worse. Some wear the brace only at night while others might wear it for 18–20 hours a day. The brace is usually worn until someone stops growing.
Physical therapy. Exercises that strengthen the muscles in the back and abdomen to better support the spine can sometimes help.
Surgery. Surgery isn't usually needed. But doctors might recommend a procedure called a spinal fusion for a severe case that causes pain, or to prevent problems in the future.
If you have back pain or notice a rounded upper back, talk to your parent about seeing your doctor or nurse. |
What is Binary Search and How to implement in Python
In this tutorial, we will learn about the standard Binary search algorithm and will implement it in Python.
Binary Search in Python
This searching technique reduces the number of comparisons and hence saves processing time. It compares the element to be found with the middle element of the array, then eliminates the half of the array in which the element cannot lie, repeating until the desired element ends up at the middle position of the remaining segment.
TIME COMPLEXITY of Binary Search:
Binary search in the worst case scenario would make log(n) comparisons; making the time required to be O(logn), where n is the number of elements in the array.
SPACE COMPLEXITY of Binary Search:
Binary Search takes constant space irrespective of the number of elements in the array, so the space required is O(1).
DISADVANTAGE of Binary Search:
The only downside of this algorithm is that it cannot work with unsorted arrays – the array should be sorted either in ascending or descending order.
Implementing Binary Search in Python
data_list = list(map(int, input().split()))
value = int(input())
low = 0
high = len(data_list) - 1          # search the inclusive range [low, high]
while low <= high:
    mid = (low + high) // 2
    if data_list[mid] == value:
        print("Element found at", mid)
        break
    elif data_list[mid] < value:   # value is in the right half
        low = mid + 1
    else:                          # value is in the left half
        high = mid - 1
else:                              # loop finished without break: not present
    print("Element not found")
Input:
12 25 43 59 61 78 92
25

Output:
Element found at 1
Let us consider that the element value has to be searched for in a list sorted in ascending order. value is first compared with the middle element; if value is greater than the middle element, the second half of the list becomes the new segment to be searched. If value is less than the middle element, the first half of the list is scanned for value.
The process is repeated till either the element is found or we are left with just one element in the list to be checked and the value is still not found.
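Outside of exercises, Python's standard library already implements this halving logic in the bisect module. A minimal sketch using the same sample data as above (the variable names are illustrative):

```python
import bisect

data = [12, 25, 43, 59, 61, 78, 92]
value = 25

# bisect_left returns the leftmost index at which value could be inserted
# while keeping the list sorted; if data[i] == value, the value is present.
i = bisect.bisect_left(data, value)
if i < len(data) and data[i] == value:
    print("Element found at", i)  # Element found at 1
else:
    print("Element not found")
```

Like the hand-written loop, this only works on a sorted list.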
That’s it! Hope you have understood the concept of Binary Search.
Feel free to ask any doubts regarding the algorithm in the comments section below.
Also, have a look at other posts too, |
Planning & Storyboard
(written by Brien Jennings)
# Introduce the importance of planning
# Explain different styles (storyboard, script, shot list)
# Create examples of a filmmaking plan
Terms: Storyboard, Shot-List, Script, Pre-production, Post-production, C.U., M.S., L.S., O.T.S., Interior, Exterior, Establishing Shot
Main Question: How can we make the filmmaking process as efficient as possible?
Introduction: After arriving at a concept, having a clear plan is essential for a filmmaker. There is no better way to ensure that the filmmaking process goes as smoothly as possible. Introduce the 3 most useful methods of planning (script, storyboard, shot list). Be sure to explain that not every filmmaker has the same process. While some may use a combination of all three, others may use only one or two. As this workshop has a strict time component involved, try to keep it to one or two, or a hybrid. Of course, over the course of the day, filmmakers may have to deviate from the original plan, but having it as a guide will remove a lot of stress.
- (10 min) Explain and model script, storyboard, and shot list (the pros and cons of each)
- (5 min) Decide which model will be used and role assignment
- (20 min) Create the planning document |
Wind power can help us reduce our greenhouse gas emissions. Every megawatt-hour of wind energy generated by a wind farm effectively avoids about one tonne of greenhouse gas emissions that would otherwise have come from non-renewable sources.
How wind turbines work
The wind causes the blades to turn, rotating a generator at the top of the turbine.
The generator converts the movement energy from the wind into electricity.
The electricity travels to a substation, where the electricity is transformed to the right voltage for home use.
The electricity is transferred from the substation to the electricity grid via power lines into your home.
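The power a turbine can extract from the wind is commonly estimated with the standard wind power equation P = ½ρAv³Cp. The sketch below is illustrative only: the rotor size, wind speed, and power coefficient are assumptions, not figures from this page.

```python
import math

def turbine_power_watts(rotor_diameter_m, wind_speed_ms,
                        air_density=1.225, power_coefficient=0.4):
    # Swept area of the rotor disc.
    area = math.pi * (rotor_diameter_m / 2) ** 2
    # P = 1/2 * rho * A * v^3 * Cp; Cp cannot exceed ~0.593 (the Betz limit).
    return 0.5 * air_density * area * wind_speed_ms ** 3 * power_coefficient

# A hypothetical 100 m rotor in a steady 10 m/s wind:
print(turbine_power_watts(100, 10) / 1e6, "MW")  # roughly 1.9 MW
```

The cubic dependence on wind speed is why turbine sites are chosen so carefully, and why turbines are shut down in extreme winds.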
Denmark was one of the first countries to use wind power for electricity. Today, it’s one of the largest producers of wind-generated electricity in the world.
Australia has a good wind resource. We can harness the power of the wind by building more wind farms, and thus provide clean electricity to our towns and cities.
Sometimes, wind turbines need to be shut down in storms, because very strong winds can damage the blades. They’re typically switched off in winds higher than 90kph.
Modern wind turbines are designed to make as little noise as possible. You can hold a conversation while standing beneath one, without having to yell.
Wind energy is a reliable, renewable source of energy. This clean energy source will never run out.
Wind turbines convert wind energy into electricity, which is collected and transported into our homes.
Wind farms are an effective source of electricity generation, helping us to meet our energy needs while reducing harmful greenhouse gas emissions.
Australia can harvest much more wind energy and make better use of this clean, green energy source. |
Rattlesnakes and gopher snakes can look eerily similar to the untrained eye. With similar square- or diamond-shaped markings and no-nonsense temperaments, these two species are often mistaken for each other. However, it’s important to recognize the differences between non-venomous gopher snakes (members of the Pituophis genus) and venomous rattlesnakes (members of the Crotalus genus and viper family).
Size and Shape
Gopher snakes are often longer than rattlesnakes. The average adult gopher snake measures between 6 and 9 feet long, while (depending on the species) rattlesnakes come in between 3 and 6 feet long. However, while the gopher snake is longer, its body is slender and whip-like compared to the rattlesnake’s heavy-bodied, broad appearance. Rattlesnakes also have a flat, triangular head in comparison to a gopher snake’s narrow, rounded head.
While rattlesnakes and gopher snakes both have round eyes on both sides of their heads, it’s their pupils that give away their identity. Rattlesnakes have vertical, cat-like pupils, while gopher snakes have rounded pupils.
Members of the pit viper family, rattlesnakes possess unique openings between their nostrils and eyes called heat-sensing pits. These pits look like tiny holes or divots and allow rattlesnakes to detect temperature changes and effectively hunt prey in the dark. Only members of the pit viper family, which the gopher snake is not, have these heat-sensing pits.
A gopher snake will often hiss and vibrate its tail when agitated. This aggressive behavior and tail “rattling” mimics the rattlesnake. Although the buzzing sound of a gopher snake’s tail vibrating against the ground sounds nearly identical to the vibration of a rattlesnake’s actual rattle, gopher snakes lack the rattle found on the end of a rattlesnake’s tail. Additionally, a rattlesnake’s tail is wide and blunt while a gopher snake’s tail is slender and pointed.
One of the most obvious differences between a gopher snake and a rattlesnake is their reproductive process. Rattlesnakes give live birth to young, whereas gopher snakes lay eggs. The eastern diamondback rattlesnake (Crotalus adamanteus) gives birth to a brood of six to 21 live young. Gopher snakes, on average, lay two clutches of two to 24 eggs each year.
- Jupiterimages/Photos.com/Getty Images |
Photons emitted in a coherent beam from a laser
Theorized: Albert Einstein (1905–1917)
In modern physics, the photon is the elementary particle responsible for electromagnetic phenomena. It is the carrier of electromagnetic radiation of all wavelengths, including gamma rays, X-rays, ultraviolet light, visible light, infrared light, microwaves, and radio waves. It can also be considered a mediator for any type of electromagnetic interactions, including magnetic fields and electrostatic repulsion between like charges.
The photon differs from many other elementary particles, such as the electron and the quark, in that it has zero rest mass; therefore, it travels (in vacuum) at the speed of light (c). Photons are vital in our ability to see things around us, without their existence we would not be able to have a visual sense of our surroundings.
The photon concept has led to momentous advances in experimental and theoretical physics, such as lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. According to the Standard Model of particle physics, photons are responsible for producing all electric and magnetic fields, and are themselves the product of requiring that physical laws have a certain symmetry at every point in spacetime. The intrinsic properties of photons—such as charge, mass and spin—are determined by the properties of this gauge symmetry.
The concept of photons is applied to many areas such as photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers and for sophisticated applications in optical communication such as quantum cryptography.
The photon was originally called a “light quantum” (das Lichtquant) by Albert Einstein. The modern name photon derives from the Greek word for light, φῶς, (transliterated phôs), and was coined in 1926 by the physical chemist Gilbert N. Lewis, who published a speculative theory in which photons were “un-creatable and indestructible.” Although Lewis' theory was never accepted—being contradicted by many experiments—his new name, photon, was adopted immediately by most physicists.
In physics, a photon is usually denoted by the symbol γ, the Greek letter gamma. This symbol for the photon probably derives from gamma rays, which were discovered and named in 1900 by Paul Ulrich Villard and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, the energy of a photon, where h is Planck's constant and the Greek letter ν (nu) is the photon's frequency. Much less commonly, the photon can be symbolized by hf, where its frequency is denoted by f.
The photon is considered to have both wave and particle properties. As a wave, a single photon is distributed over space and shows wave-like phenomena, such as refraction by a lens and destructive interference when reflected waves cancel each other out; however, as a particle, it can only interact with matter by transferring the fixed amount (quantum) of energy E, where:
E = hν = hc/λ
where h is Planck's constant, c is the speed of light, and λ is the photon's wavelength. This is different from a classical wave, which may gain or lose arbitrary amounts of energy.
For visible light, the energy carried by a single photon would be very tiny—approximately 4 x 10−19 joules. This energy is just sufficient to excite a single molecule in a photoreceptor cell of an eye, thus contributing to vision.
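That figure can be checked directly from E = hc/λ. A quick sketch (500 nm is assumed here as a representative visible wavelength):

```python
h = 6.626e-34  # Planck's constant, J*s
c = 3.0e8      # speed of light, m/s

def photon_energy_joules(wavelength_m):
    # E = h * f = h * c / wavelength
    return h * c / wavelength_m

print(photon_energy_joules(500e-9))  # ~4.0e-19 J for green light
```

Shorter wavelengths (higher frequencies) give proportionally larger energies, which is why ultraviolet photons can break chemical bonds that visible photons cannot.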
Apart from energy a photon also carries momentum and has a polarization. It follows the laws of quantum mechanics, which means that often these properties do not have a well-defined value for a given photon. Rather, they are defined as a probability to measure a certain polarization, position, or momentum. For example, although a photon can excite a single molecule, it is often impossible to predict beforehand which molecule will be excited.
In most theories up to the eighteenth century, light was pictured as being made up of particles. Since particle models cannot easily account for the refraction and diffraction of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early nineteenth century, Thomas Young and Augustin Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. In 1865, James Clerk Maxwell's prediction that light was an electromagnetic wave—which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves—seemed to be the final blow to particle models of light.
The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.
At the same time, investigations of blackbody radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν. As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation.
Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation.
The modern concept of the photon was developed gradually (1905–1917) by Albert Einstein to explain experimental observations that did not fit the classical wave model of light. In particular, the photon model accounted for the frequency dependence of light's energy, and explained the ability of matter and radiation to be in thermal equilibrium.
Other physicists sought to explain these anomalous observations by semiclassical models, in which light is still described by Maxwell's equations, but the material objects that emit and absorb light are quantized. Although these semiclassical models contributed to the development of quantum mechanics, further experiments proved Einstein's hypothesis that light itself is quantized; the quanta of light are photons.
In 1905, Einstein proposed that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law of black-body radiation is accepted, the energy quanta must also carry momentum p = hν/c, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question was then: how to unify Maxwell's wave theory of light with its experimentally observed particle nature? The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model.
Einstein's 1905 predictions were verified experimentally in several ways within the first two decades of the twentieth century, as recounted in Robert Millikan's Nobel lecture. However, before Compton's experiment showing that photons carried momentum proportional to their frequency (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (This reluctance is evident in the Nobel lectures of Wien, Planck and Millikan.) This reluctance was understandable, given the success and plausibility of Maxwell's electromagnetic wave model of light. Therefore, most physicists assumed rather that energy quantization resulted from some unknown constraint on the matter that absorbs or emits radiation. Niels Bohr, Arnold Sommerfeld and others developed atomic models with discrete energy levels that could account qualitatively for the sharp spectral lines and energy quantization observed in the emission and absorption of light by atoms; their models agreed excellently with the spectrum of hydrogen, but not with those of other atoms. It was only the Compton scattering of a photon by a free electron (which can have no energy levels, since it has no internal structure) that convinced most physicists that light itself was quantized.
Even after Compton's experiment, Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS model. To account for the then-available data, two drastic hypotheses had to be made: that energy and momentum are conserved only on average in interactions between matter and radiation, not in elementary processes; and that causality is abandoned in those elementary processes.
However, refined Compton experiments showed that energy-momentum is conserved extraordinarily well in elementary processes; and also that the jolting of the electron and the generation of a new photon in Compton scattering obey causality to within 10 ps. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible." Nevertheless, the BKS model inspired Werner Heisenberg in his development of quantum mechanics.
A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter obeys the laws of quantum mechanics. Although the evidence for photons from chemical and physical experiments was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive; since it relied on the interaction of light with matter, a sufficiently complicated theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by elegant photon-correlation experiments. Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven.
The photon is massless, has no electric charge and does not decay spontaneously in empty space. A photon has two possible polarization states and is described by exactly three continuous parameters: the components of its wave vector, which determine its wavelength and its direction of propagation. The photon is the gauge boson of the electromagnetic interaction: photons mediate all electromagnetic interactions.
Photons are emitted in many natural processes, e.g., when a charge is accelerated, during a chemical reaction, during an electron transition to a lower energy level, or when a particle and its antiparticle are annihilated. Photons are absorbed in the reverse of these processes: for example, in an electron transition to a higher energy level.
In empty space, the photon moves at $c$ (the speed of light) and its energy and momentum are related by $E = pc$, where $p$ is the magnitude of the momentum. For comparison, the corresponding equation for particles with a mass $m$ would be $E^2 = p^2 c^2 + m^2 c^4$, as shown in special relativity.

The energy and momentum of a photon depend only on its frequency $\nu$ or, equivalently, its wavelength $\lambda$:

$$E = \hbar \omega = h \nu = \frac{hc}{\lambda}$$

and consequently the magnitude of the momentum is

$$p = \hbar k = \frac{h}{\lambda}$$

where $\hbar = h/2\pi$ (known as Dirac's constant or Planck's reduced constant); $\mathbf{k}$ is the wave vector (with the wave number $k = 2\pi/\lambda$ as its magnitude) and $\omega = 2\pi\nu$ is the angular frequency.

The photon also carries spin angular momentum that does not depend on its frequency. The magnitude of its spin is $\sqrt{2}\hbar$, and the component measured along its direction of motion, its helicity, must be $\pm\hbar$.
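As a numerical illustration of the photon energy and momentum formulae, the snippet below computes both quantities from a wavelength and checks that $E = pc$ holds. The 500 nm wavelength (green light) is an arbitrary example value, not one taken from this article.

```python
# Photon energy and momentum from wavelength (SI units).
H = 6.62607015e-34   # Planck's constant, J*s
C = 2.99792458e8     # speed of light in vacuum, m/s

def photon_energy(wavelength_m):
    """E = h*c / lambda, in joules."""
    return H * C / wavelength_m

def photon_momentum(wavelength_m):
    """p = h / lambda, in kg*m/s."""
    return H / wavelength_m

wavelength = 500e-9  # 500 nm, green light (illustrative choice)
E = photon_energy(wavelength)
p = photon_momentum(wavelength)

print(f"E = {E:.3e} J")        # ~3.97e-19 J
print(f"p = {p:.3e} kg m/s")   # ~1.33e-27 kg m/s
```

A quick consistency check is that `E / (p * C)` comes out to 1, since a photon's energy and momentum are related by $E = pc$.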
To illustrate the significance of these formulae, the annihilation of a particle with its antiparticle must result in the creation of at least two photons for the following reason. In the center of mass frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum. Hence, conservation of momentum requires that at least two photons are created, with zero net momentum. The energy of the two photons—or, equivalently, their frequency—may be determined from conservation of energy and momentum.
The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. The idea of the solar sail comes from this concept.
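To make the momentum-transfer argument concrete, here is a minimal sketch of the radiation-pressure calculation behind the solar-sail idea. The solar intensity figure (about 1361 W/m² at Earth's orbit) is a standard approximate value, not a number from this article; a perfectly reflecting surface feels twice the pressure of an absorbing one because reflection reverses each photon's momentum.

```python
C = 2.99792458e8          # speed of light, m/s
solar_intensity = 1361.0  # W/m^2, approximate solar constant at Earth's orbit

# Pressure = momentum delivered per unit time per unit area.
p_absorb = solar_intensity / C        # perfectly absorbing surface: I/c
p_reflect = 2 * solar_intensity / C   # perfectly reflecting sail: 2I/c

print(f"absorbing:  {p_absorb:.2e} Pa")   # ~4.5e-6 Pa
print(f"reflecting: {p_reflect:.2e} Pa")  # ~9.1e-6 Pa
```

The resulting pressures are tiny, which is why solar sails must be very large and very light to achieve useful thrust.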
Photons, like all quantum objects, exhibit both wave-like and particle-like properties. Their dual wave–particle nature can be difficult to visualize. The photon displays clearly wave-like phenomena such as diffraction and interference on the length scale of its wavelength. For example, a single photon passing through a double-slit experiment lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations. However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter. Rather, the photon seems like a point-like particle, since it is absorbed or emitted as a whole by arbitrarily small systems, systems much smaller than its wavelength, such as an atomic nucleus (≈10⁻¹⁵ m across) or even the point-like electron. Nevertheless, the photon is not a point-like particle whose trajectory is shaped probabilistically by the electromagnetic field, as conceived by Einstein and others; that hypothesis was also refuted by the photon-correlation experiments cited above. According to our present understanding, the electromagnetic field itself is produced by photons, which in turn result from a local gauge symmetry and the laws of quantum field theory.
A key element of quantum mechanics is Heisenberg's uncertainty principle, which forbids the simultaneous measurement of the position and momentum of a particle along the same direction. Remarkably, the uncertainty principle for charged, material particles requires the quantization of light into photons, and even the frequency dependence of the photon's energy and momentum. An elegant illustration is Heisenberg's thought experiment for locating an electron with an ideal microscope.
Both photons and material particles such as electrons create analogous interference patterns when passing through a double-slit experiment. For photons, this corresponds to the interference of a Maxwell light wave whereas, for material particles, this corresponds to the interference of the Schrödinger wave equation. Although this similarity might suggest that Maxwell's equations are simply Schrödinger's equation for photons, most physicists do not agree. For one thing, they are mathematically different; most obviously, Schrödinger's one equation solves for a complex field, whereas Maxwell's four equations solve for real fields. More generally, the normal concept of a Schrödinger probability wave function cannot be applied to photons. Being massless, they cannot be localized without being destroyed; technically, photons cannot have a position eigenstate $|\mathbf{r}\rangle$, and, thus, the normal Heisenberg uncertainty principle does not pertain to photons.
The energy of a system that emits a photon is decreased by the energy of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount $E/c^2$. Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount.
Since photons contribute to the stress-energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound-Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.
Light that travels through transparent matter does so at a lower speed than c, the speed of light in a vacuum. For example, photons suffer so many collisions on the way from the core of the sun that radiant energy can take years to reach the surface; however, once in open space, a photon only takes 8.3 minutes to reach Earth. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and the new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter (quasi-particles such as phonons and excitons) to form a polariton; this polariton has a nonzero effective mass, which means that it cannot travel at the speed of light. Light of different frequencies may travel through matter at different speeds; this is called dispersion.
The polariton's propagation speed equals its group velocity, which is the derivative of its energy with respect to momentum:

$$v = \frac{dE}{dp} = \frac{d\omega}{dk}$$

where, as above, $E$ and $p$ are the polariton's energy and momentum magnitude, and $\omega$ and $k$ are its angular frequency and wave number, respectively. In some cases, the dispersion can result in extremely slow speeds of light in matter.
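As a rough numerical sketch of how much light slows in matter, transit time over a distance scales with the refractive index. The Sun-Earth distance and the glass index of about 1.5 used below are standard textbook values, not figures from this article.

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def transit_time(distance_m, refractive_index=1.0):
    """Time (s) for light to cross a distance in a medium of given index."""
    return distance_m * refractive_index / C

# Sun-Earth distance (approximate): light takes ~8.3 minutes in vacuum.
au = 1.496e11
print(f"vacuum transit: {transit_time(au) / 60:.1f} min")  # ~8.3 min

# In glass (n ~ 1.5) light travels at c/n, i.e. about 2e8 m/s.
print(f"speed in glass: {C / 1.5:.3e} m/s")
```

The same function shows why the refractive index is defined as the slowdown factor: doubling the index doubles the transit time over the same path.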
Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. As shown here, the absorption provokes a cis-trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine.
Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, that could operate under a classical theory of light.
A laser is a device that emits light through a specific mechanism. A typical laser emits light in a narrow, low-divergence beam and with a well-defined wavelength (corresponding to a particular color if the laser is operating in the visible spectrum). This is in contrast to a light source such as the incandescent light bulb, which emits into a large solid angle and over a wide spectrum of wavelengths. Lasers have become ubiquitous, finding utility in thousands of highly varied applications in every sector of modern society, including consumer electronics, information technology, science, medicine, industry, law enforcement, entertainment, and the military.
Planck's energy formula is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to predict the frequency of the light emitted for a given energy transition. For example, the emission spectrum of a fluorescent light bulb can be designed using gas molecules with different electronic energy levels and adjusting the typical energy with which an electron hits the gas molecules within the bulb.
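To make this design use of Planck's formula concrete, here is a small sketch converting a transition energy to the emitted wavelength. The 4.9 eV value approximates the mercury transition behind the ~254 nm ultraviolet line that excites a fluorescent tube's phosphor coating; both numbers are textbook approximations, not figures from this article.

```python
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def wavelength_nm(transition_energy_ev):
    """lambda = h*c / E, returned in nanometres."""
    return H * C / (transition_energy_ev * EV) * 1e9

# Mercury's ~4.9 eV transition gives the UV line used in fluorescent lamps.
print(f"{wavelength_nm(4.9):.0f} nm")  # ~253 nm (ultraviolet)
```

Running the conversion in the other direction (energy from wavelength) is how one predicts the photon energy absorbed in a given spectral line.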
Under some conditions, an energy transition can be excited by two photons that individually would be insufficient. This allows for higher resolution microscopy, because the sample absorbs energy only in the region where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam. Moreover, these photons cause less damage to the sample, since they are of lower energy.
In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, which is used to measure molecular distances.
Photons are essential in some aspects of optical communication such as fiber optic cables. Light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.
Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect; a photon landing on a metal plate ejects an electron, initiating an ever-amplifying avalanche of electrons. Charge-coupled device chips use a similar effect in semiconductors; an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules, causing a detectable change in conductivity.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due under the terms of this license to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images, which are separately licensed.
A plethora of corals in the southern Pacific
In the New Zealand region there are abundant and diverse cold-water corals, found on seamounts, slopes, on ridges and in canyons. They have been reasonably well-studied with records going back as far as the 1830s. At present 1,126 cnidarian species (the group including corals and anemones) have been recorded in New Zealand, of which 330 are still unidentified and/or undescribed. There are also 204 fossil species.
New Zealand boasts both warm-water corals with symbiotic algae (zooxanthellae) and cold-water corals growing in the darker depths. For example, in the north of New Zealand’s Exclusive Economic Zone (EEZ) around Raoul Island warm-water corals reach their southern limit while along the adjacent Kermadec Ridge cold-water corals thrive at greater depths.
Further south, in the dark waters of Fiordland on the southwest coast of the South Island, the black coral Antipathella fiordensis grows at depths of only 15 to 50 m. The stylasterid hydrocoral Errina is also found in the sounds of Fiordland. In waters deeper than 200 m scleractinian stony corals, both solitary cup corals and the matrix-forming stony corals, occur. While there are no records of Lophelia in the New Zealand region, another member of the same family (Caryophylliidae), Solenosmilia variabilis, is abundant. Other species include Madrepora oculata, Goniocorella dumosa (the most dominant branching stony coral), Enallopsammia rostrata and E. marenzelleri, and Oculina virgosa, which is found nowhere else (endemic). Apart from O. virgosa, all are found throughout the region, mainly at water depths of approximately 200–1500 m.
Even further south, the New Zealand Ross Sea Protectorate in Antarctic waters supports a diverse coral fauna, with recent research surveys and observer bycatch data revealing the presence of cup corals (but no reef-forming species), Errina, and several gorgonian octocorals including Corallidae (precious corals), Isididae (bamboo corals), Primnoidae (sea fans), and Paragorgiidae (bubblegum corals).
What is the basal lamina?
The basal lamina (from lamina, 'thin layer'), also known as the basement membrane, is a specialised form of extracellular matrix. The basal lamina can be organised in three ways:
1. it can surround cells (for example muscle fibres have a layer of basal lamina around them);
2. it can lie underneath sheets of epithelial cells; and
3. it separates two sheets of cells, such as the endothelial cells of blood vessels and epithelial cells of another tissue. This type of arrangement is found in the kidney glomerulus, where the basal lamina acts as a permeability barrier or sieve.
Functions of the basal lamina.
The exact composition of the basal lamina varies between different types of cell. In the kidney, the basal lamina acts as a molecular filter. At the neuromuscular junction, the basal lamina that surrounds the muscle cells separates the nerve cell from the muscle cell at the synapse, helps to regenerate the synapse after injury, and helps to localise acetylcholine receptors.
The basal lamina provides support to the overlying epithelium, limits contact between epithelial cells and the other cell types in the tissue and acts as a filter allowing only water and small molecules to pass through. If the epithelial cells become transformed (cancerous) and become 'malignant', they are able to break through the basement membrane and invade the tissues beneath. This characteristic is one used in the diagnosis of malignant epithelial tumors.
Components of the basal lamina.
The basal lamina consists of a mixture of collagens, laminin (a glycoprotein), perlecan (a heparan sulphate proteoglycan) and entactin (a glycoprotein). These proteins can bind to each other to form a highly crosslinked extracellular matrix.
All epithelia have a basal lamina which lies between the cells and the underlying connective tissue. This layer is so thin that it is often difficult to see with conventional light microscopy and is usually only clearly defined under the electron microscope.
The basal lamina helps to attach and anchor the cells to the underlying connective tissue. Proteins (integrins and proteoglycans) in the cell membranes attach to proteins in the basal lamina, which in turn is linked to the extracellular matrix of connective tissue.
Viewed with the electron microscope, three distinct layers of the basal lamina can be described:
lamina lucida - electron lucent (very little staining in the EM).
lamina densa - electron dense.
lamina reticularis - can be associated with reticular fibres of the underlying connective tissue.
This picture shows the arrangement of these three layers of the basal lamina as it lies underneath an epithelial cell.
The integrins and proteoglycans in the cell membrane that attach to proteins in the extracellular matrix/basal lamina communicate with and signal to each other. Cells can organise their own extracellular matrix, and the extracellular matrix in turn helps to organise cells. Changes in the integrins can mean that cells stop being adhesive, and staying put, to moving away, or vice versa.
Dystrophin is a glycoprotein in the plasma membrane of muscle cells, that binds to laminin in the extracellular matrix. In muscular dystrophy, this protein is defective or missing, and reduces the attachment of muscle cells to their basal lamina. This reduced attachment results in muscle degeneration and muscle weakness.
This picture shows a duct from the kidney, stained with alcian blue, where the basement membrane can be seen more clearly, underlying the cuboidal epithelium.
See if you can identify the lumen of the duct, the simple cuboidal epithelium, and the basement membrane.
This diagram shows part of the cuboidal epithelium in the photographs opposite together with its basal lamina.
All of the genetic material in living organisms is made from nucleotides, the basic building blocks of nucleic acids. Each nucleotide consists of:
a phosphate group, a 5-carbon sugar, and a nitrogenous base
difference between ribonucleotides and deoxyribonucleotides
Ribonucleotides have a hydroxyl group bonded to their 2' carbon; deoxyribonucleotides have an H at the same location
The condensation reaction that forms nucleic acid polymers occurs between a _____ group on one nucleotide and a _____ group on a second nucleotide
hydroxyl; phosphate
In a nucleic acid polymer, the hydrogen bonds that help to hold regions of double-strandedness together occur between what parts of the nucleotide monomers?
the nitrogenous bases
Which of the following includes all of the pyrimidines found in RNA and/or DNA?
cytosine, uracil, and thymine
When nucleotides polymerize to form a nucleic acid _____.
a covalent bond forms between the sugar of one nucleotide and the phosphate of a second
Some viruses consist only of a protein coat surrounding a nucleic acid core. If you wanted to radioactively label the nucleic acids separately from the protein, you would use _____.
radioactive phosphorus (32P), since nucleic acids contain phosphorus but most proteins do not
Which of the following statements about nucleotide structure is false?
The phosphate group is bonded to the nitrogenous base.
Which of the following statements about DNA structure is true?
The nucleic acid strands in a DNA molecule are oriented antiparallel to each other
When double-stranded DNA is heated to 95°C, the bonds between complementary base pairs break to produce single-stranded DNA. Considering this observation, is the strand separation step required for replication of the double helix an endergonic or exergonic reaction? Why?
Endergonic. An input of energy is required
If a segment of DNA contains 28 percent T nucleotides, then the percentage of A nucleotides in that segment will be _____.
28 percent, because A pairs with T
Which of the following best describes DNA's secondary structure?
double antiparallel helical strands
In the context of chemical evolution, DNA's structure is interesting because it suggests a possible copying mechanism. What about DNA's structure facilitates copying?
The strands of the double helix are complementary.
Although DNA is the main hereditary material in all life-forms, it lacks one important characteristic of being a candidate for the first life-form. Why have researchers rejected the idea that DNA was found in the first life-form?
It does not function as a catalyst
A segment of nucleic acid that contains the sugar ribose and the nitrogenous bases A, G, U, and C would be called _____.
ribonucleic acid (RNA)
Which of these scientists was not directly involved in the discovery of DNA's structure?
Who was directly involved in the discovery of DNA's structure?
RNA and proteins combine in cells to form structures called ribosomes. Ribosomes contain the active site for peptide bond formation. Based on their chemical structures, do you think protein or RNA molecules actually form the active site within the ribosome?
It could be either, because both molecules have catalytic properties.
Why do researchers think the first self-replicating molecule was RNA?
RNA has the capacity to provide a template and is known to catalyze reactions (although no existing self-replicating molecules of RNA have been discovered).
Many researchers who study the origin of life propose that the first life-form capable of templating its own replication and catalyzing self-replication was made of _____.
RNA
The work of Bartel's group on the ribozyme RNA replicase supports which conclusion?
An RNA world could produce molecules that could self-replicate
If a strand of DNA has the nitrogen base sequence 5'-ATTTGC-3', what will be the sequence of the matching strand?
3'-TAAACG-5'
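Complementary-strand questions like this one can be checked with a few lines of Python; this is a generic helper sketch, not part of the original flashcards.

```python
# Watson-Crick base pairing: A-T and G-C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Complement of a 5'->3' strand, read 3'->5' (same left-to-right order)."""
    return "".join(COMPLEMENT[base] for base in strand)

def reverse_complement(strand):
    """The complementary strand written in its own 5'->3' direction."""
    return complement(strand)[::-1]

print(complement("ATTTGC"))          # TAAACG  (read 3'->5')
print(reverse_complement("ATTTGC"))  # GCAAAT  (read 5'->3')
```

The two printouts are the same strand; they differ only in which end you start reading from, which is why stating the 5' and 3' labels matters in the answer.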
If a DNA double helix is 100 nucleotide pairs long and contains 25 adenine bases, how many guanine bases does it contain?
75; 100 nucleotide pairs are a total of 200 nucleotides. Because of base pairing, if there are 25 adenine there must also be 25 thymine. This leaves 200-50 = 150 nucleotides to be divided evenly between guanine and cytosine.
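This kind of base-pair bookkeeping generalizes; here is a short sketch of the arithmetic (a generic helper written for illustration, not from the source material).

```python
def base_counts(num_pairs, num_adenine):
    """Counts of A, T, G, C in a double helix with the given number of
    base pairs and adenine bases (Chargaff pairing: A=T and G=C)."""
    total = 2 * num_pairs            # two nucleotides per base pair
    a = t = num_adenine              # every A pairs with a T
    g = c = (total - a - t) // 2     # remainder splits evenly between G and C
    return {"A": a, "T": t, "G": g, "C": c}

print(base_counts(100, 25))  # {'A': 25, 'T': 25, 'G': 75, 'C': 75}
```

Plugging in the flashcard's numbers (100 pairs, 25 adenine) reproduces the answer of 75 guanine bases.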
The two strands of a DNA double helix are held together by _____ that form between pairs of nitrogenous bases.
hydrogen bonds; Nitrogenous base pairs are joined by hydrogen bonds.
A nucleotide is composed of a(n) _____.
phosphate group, a nitrogen-containing base, and a five-carbon sugar
Teeth and bones are made up of calcium phosphate crystals. Sometimes, fluoride ions interact with these crystals. The best way to determine the three-dimensional arrangement of the crystals and fluoride ions is to use _____.
X-ray diffraction; X-ray crystallography provides information about molecular structures.
A scientist is studying the arrangement of neurons and glial cells in brain tissue. She labeled structures with specific fluorescent tags. Which technique would provide the best spatial images of the tissue?
confocal microscopy; Confocal microscopy is used to produce three-dimensional images of tissues
An entomologist is comparing the minute surface structures on beetle antennae from various species. Which microscope would give him the greatest resolution?
scanning electron microscopy; SEM would provide the best detail on the antennal structures
You are collecting information on Paramecium swimming motions in various solutions. Which type of microscope should you use?
compound light microscope; With this microscope, the Paramecium motions can be seen.
This figure illustrates the structural similarities and differences between eukaryotic and prokaryotic cells. What type of microscope was used to produce the images?
transmission electron microscope; The details inside the cells are visible by TEM; these include membranes and organelles.
Which statement is most accurate regarding similarities and differences between proteins and nucleic acids?
Proteins and nucleic acids both have a backbone and are formed by condensation reactions
Semiconservative replication
means that half of the helical structure is old and the other half is new
Primary Structure of DNA
sequence of deoxyribonucleotides; bases include adenine, thymine, cytosine, and guanine
Secondary Structure of DNA
Two anti-parallel strands twist into a double helix, stabilized by hydrogen bonding between complementary bases (A-T & G-C) and hydrophobic interactions.
Secondary structure of RNA
most common are hairpins, formed when a single strand folds back on itself to form a double-helix "stem" and a single-stranded "loop"
first step of RNA synthesis
complementary bases are added to 3' end of growing strand and polymerize.
second step of RNA synthesis
continued base pairing and polymerization form a complete complementary strand
third step of RNA synthesis
complementary and template strands separate, forming independent molecules
first step of enzyme action
initiation: substrates bind to the active site in a specific orientation
second step of enzyme action
transition state facilitation: interactions between the enzyme and substrate lower the activation energy required.
third step of enzyme action
termination: products have lower affinity for the active site and are released. The enzyme is unchanged after the reaction.
Competitive inhibition
is when another molecule with a shape similar to the substrate's occupies the active site
Hepatitis A is an infection of the liver caused by the hepatitis A virus (HAV). Most commonly transmitted by the fecal-oral route, such as through contaminated food, the disease is best prevented by vaccination and careful hand washing.
- Hepatitis A Basics
General information about hepatitis A, including symptoms, complications, vaccines, tests, and treatment.
- Hepatitis A
Annual rates of hepatitis A disease in Minnesota.
- Hepatitis Information
For Health Professionals
Information on hepatitis A for health professionals, including clinical information, immunization schedules, and treatment of cases and contacts. |
a situation in which a voter has cast more votes than the number allowed according to the rules of an election
'In the second phase, USA TODAY and The Miami Herald joined with five other newspapers to examine more than 110,000 overvotes – ballots that were disqualified because they registered more than one presidential vote when run through vote-counting machines.'USA Today 11th May 2001
'Comelec legal chief Ferdinand Rafanan also said voters should not overvote or voting more than is necessary. For instance, if one is voting for the President, the voter should only shade one oval opposite the candidate he's voting for.'Inquirer.net 9th February 2010
The result of the United States presidential election will represent the climax of many months of speculation about whether America's first black president will be elected for a second term of office. Anyone who has strong feelings about the results of this, or indeed any, election will be keen to cast their vote and must be careful not to jeopardize its validity by unintentionally submitting an overvote.
An overvote occurs when a voter, either accidentally or because they haven't understood the rules, marks their ballot paper in such a way that they've voted for more candidates than the number they're permitted to vote for in relation to a particular office. This has significant consequences, resulting in that person's ballot paper being deemed invalid and effectively cancelling their vote. Electronic voting systems usually have mechanisms in place which make it impossible to overvote, and so overvotes generally only occur in voting systems based on paper – either punch cards or ballot slips which are manually counted or optically scanned.
The concept of an overvote was of particular significance in the controversial presidential ballots which took place in Florida in November 2000. The results proved highly contentious in relation to the narrow margins by which former President George Bush defeated Democrat Al Gore. The vote was subjected to a recount which took centre stage in the election, it being argued that voter mistakes – such as incorrectly-punched voting cards (resulting in little pieces of paper famously described as hanging/pregnant chads) and overvotes – had significantly, and perhaps unfairly, influenced the outcome.
The word overvote is used both as a verb and a countable noun to refer respectively to the action of voting incorrectly or an instance of doing so. It is mainly used in American English, becoming more widely recognized after the recount controversy associated with the 2000 presidential election. The word actually made its first appearance in English long before this however, originally functioning as a synonym of outvote (to defeat someone by winning more votes), which later became obsolete.
Overvote is of course formed by affixation of the prefix over- in its sense of 'too much' (compare overheat, overreaction). Correspondingly, overvote also has an antonym undervote, where under- means 'not enough' (compare underestimate, undernourished). The word undervote, also used as a countable noun or verb, therefore refers to the situation of someone voting for fewer candidates than the number they are permitted to vote for in relation to a particular office. Unlike an overvote however, an undervote does not usually result in a cancelled vote, and is in fact considered a voter's right. This means that instructions on ballot papers often use wordings such as 'Vote for no more than three candidates', giving the voter the option to vote for one, two or a maximum of three different candidates. An undervote does of course become invalid if it relates to a single choice election.
The word vote is a productive animal in word formation, also occurring in phrasal verbs such as vote off/out (to remove someone from a position by voting), vote in (to give someone a position by voting), vote down (to stop something by voting) or vote through (to get something officially accepted by voting). A recent addition to this group is vote (something) up, now popularly used to refer to the action of voting for something that you like on the TV or Internet.
This article was first published on 27th March 2012.
Adding and Subtracting Signed Fractions - Remember Those Integer Rules!
Lesson 14 of 23
Objective: Students will be able to add and subtract signed fractions.
Opener: As students enter the room, they will immediately pick up and begin working on the opener. Please see my instructional strategy clip for how openers work in my classroom (Instructional Strategy - Process for openers). This method of working and going over the opener lends itself to allow students to construct viable arguments and critique the reasoning of others, which is mathematical practice 3.
Learning Target: After completion of the opener, I will address the day’s learning targets to the students. In today’s lesson, the intended target is, “I can add and subtract signed fractions.” Students will jot the learning target down in their agendas (our version of a student planner, there is a place to write the learning target for every day).
Instructional Strategy - How do table challenges work?: Fractions do not tend to be anyone’s favorite topic, so to get some much-needed practice I am going to conduct a table challenge using the white boards. Students will rotate the white board at their table; when it is not their turn to write on the board, they will work in the space provided on the back of their notes sheet. For this challenge, each person at the table works out the problem and then discusses it with the others, and one student is responsible for getting the work and answer on the board. When I call time, I will give a point to any table with the correct work and answer on its board. If several tables are incorrect, I will have an “expert” think aloud their steps for the problem at the board. This challenge gives students the opportunity to practice fractions, discussing steps and rationale with their peers as they go, and I find it is a great way to practice a new concept. It will be very important for students to be precise with their signs and calculations, as minor errors can create large mistakes (mathematical practice 6).
Comet Impact Into Jupiter
This artist's concept shows comet Shoemaker-Levy 9 heading into Jupiter in July 1994, while its dust cloud creates a rippling wake in Jupiter's ring. The comet, as imaged by NASA's Hubble Space Telescope, appears as a string of reddish fragments falling into Jupiter from the south. A later Hubble image shows the dark blotches where pieces of the comet had already collided with the planet. The faint ring, based on images obtained by NASA's Galileo mission, is normally very faint, but has been enhanced for this illustration. The streaks show the tracks of the comet's dust cloud. Impacts from these dust particles tilted the ring off its axis.
Image credit: copyright M. Showalter |
CRC 111 - Surfing the Internet
A hands-on introductory course on accessing the Internet using a browser program. Students will learn the history of the Internet and its impact on society. Students will be taught the basic tools of the World Wide Web for searching, uploading, and downloading. E-mail, newsgroups, and chat rooms will also be covered. Projects required. Basic knowledge of the PC, keyboard, mouse, and Windows is required. Five class hours per week for 3 weeks.
Course Learning Outcomes
1. Demonstrate knowledge of the origins of the Internet.
2. Recognize the process of connecting to the Internet.
3. Identify different features of a Web browser.
4. Develop effective Web search strategies to locate information on the Web.
5. Examine ways to communicate using various Webmail services.
6. Locate mailing lists, participate in chat sessions, explore and appraise science newsgroups, and create an identity in a virtual community.
7. Download software using FTP and Internet Explorer.
8. Identify security threats to their computer and personal privacy and implement basic countermeasures against each.
Course Offered Fall and Spring |
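In practice, outcome 4 (effective Web search strategies) comes down to combining keywords and exact phrases into a properly encoded query URL. A minimal sketch using Python's standard urllib.parse follows; the search-engine address is a placeholder chosen only for illustration.

```python
from urllib.parse import urlencode

# An exact phrase in quotes plus a loose keyword, percent-encoded
# the same way a browser encodes a search box submission.
params = {"q": '"world wide web" history'}
url = "https://www.example-search.com/search?" + urlencode(params)
print(url)
```

The quotes become %22 and the spaces become +, so the phrase survives the trip to the search engine intact.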
Definition - What does Dust mean?
Dust, in the context of occupational health and safety, refers to suspended organic or inorganic particles in the atmosphere. Some types of dust, such as those from chemicals, irritants or allergens, can have negative health effects.
Safeopedia explains Dust
Dust can be harmful to humans when inhaled, although the effects depend on the type of dust in question and the level of exposure. Dust can be deposited in the lungs, causing difficulty breathing, choking, coughing and even death. In the long term, exposure to dust can cause infections, various pulmonary diseases, acute toxic effects and even cancer. Dust can also cause irritation, and temporary or permanent harm to the eyes, skin and ears.
Dust in the workplace may be controlled by exhaust ventilation, water showers and good housekeeping. The impact of dust can be mitigated by the use of personal protective equipment. |