This section is from the book "Alcohol, Its Production, Properties, Chemistry, And Industrial Applications", by Charles Simmonds.
If for any reason the specific gravity of the alcohol is not taken at the standard temperature, it is necessary to include a correction to compensate for the deviation. The correction is greater at higher strengths than at lower, as will be seen from the table given below, which is used in the following manner:
The difference between the standard temperature and the actual temperature of the observation is multiplied by the appropriate factor, taken from the table. If the actual temperature is higher than the standard, the product is added to the observed specific gravity; if it is lower, the product is subtracted. The unit throughout is water at the standard temperature (60° F.).
Table of temperature corrections.
Example: At the temperature 65° F., the sp. gr. of a specimen of diluted alcohol is 0.9475, referred to water at 60° F. as unity. As this lies between 0.946 and 0.949, the appropriate factor is 0.00036, and the correction is 5 × 0.00036 = 0.0018. Hence the sp. gr. of the sample at the standard temperature, 60° F., is 0.9475 + 0.0018 = 0.9493.
Had the temperature of observation been 55° F. instead of 65° F., the product 0.0018 would have been subtracted, and the sp. gr. at 60° would then have been 0.9475 - 0.0018 = 0.9457.
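The correction rule can be expressed as a short calculation. The sketch below (Python, not part of the original text) assumes only the single factor, 0.00036, quoted in the worked example; the full table of factors for other gravity ranges is not reproduced in this excerpt.

```python
# A minimal sketch of the temperature correction described above.
# The factor 0.00036 is the one quoted in the worked example for
# specific gravities between 0.946 and 0.949; factors for other
# ranges would come from the (unreproduced) correction table.

def corrected_specific_gravity(observed_sg, observed_temp_f, factor,
                               standard_temp_f=60.0):
    """Correct an observed specific gravity to the standard temperature.

    If the observation was made above the standard temperature the
    correction is added; if below, it is subtracted.
    """
    correction = abs(observed_temp_f - standard_temp_f) * factor
    if observed_temp_f > standard_temp_f:
        return observed_sg + correction
    return observed_sg - correction

# The worked example: sp. gr. 0.9475 observed at 65 deg F, factor 0.00036.
print(corrected_specific_gravity(0.9475, 65, 0.00036))  # ~0.9493
print(corrected_specific_gravity(0.9475, 55, 0.00036))  # ~0.9457
```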
It may be pointed out that where the greatest accuracy is required the temperature of the spirit must be very carefully adjusted in taking the specific gravity. For ordinary purposes it suffices if the sp. gr. is correct to one unit in the fourth place of decimals, which corresponds with about 0.1 per cent. of proof spirit. To obtain a result correct to one unit in the fifth place of decimals it is necessary to adjust the temperature to 0.01° C. or 0.02° F.; and in general for fairly accurate work the temperature of the alcohol should be correctly adjusted to within 0.1° F. This is requisite for a result accurate within five units in the fifth decimal place of the specific gravity, corresponding with about 0.05 per cent. of proof spirit at medium and lower strengths. When special accuracy is required, the desired temperature should be obtained by means of a thermostat, and the form of pyknometer employed should be one in which the level of the liquid is adjusted to a mark on the neck, not the form with perforated stopper.
Fig. 33. - Geissler's form of pyknometer.
With ground-in thermometer and capped side-tube.
Urban Exceptionalism in the American South
- David Goldfield, Department of History, University of North Carolina
While colonial New Englanders gathered around town commons, settlers in the Southern colonies sprawled out on farms and plantations. The distinctions had more to do with the varying objectives of these colonial settlements and the geography of deep-flowing rivers in the South than with any philosophical predilections. The Southern colonies did indeed sprout towns, but these were places of planters’ residences, planters’ enslaved Africans, and the plantation economy, an axis that would persist through the antebellum period. Still, the aspirations of urban Southerners differed little from their Northern counterparts in the decades before the Civil War. The institution of slavery and an economy emphasizing commercial agriculture hewed the countryside close to the urban South, not only in economics, but also in politics. The devastation of the Civil War rendered the ties between city and country in the South even tighter. The South participated in the industrial revolution primarily to the extent of processing crops. Factories were often located in small towns and did not typically contribute to urbanization. City boosters aggressively sought and subsidized industrial development, but a poorly educated labor force and the scarcity of capital restricted economic development. Southern cities were more successful in legalizing the South’s culture of white supremacy through legal segregation and the memorialization of the Confederacy. But the dislocations triggered by World War II and the billions of federal dollars poured into Southern urban infrastructure and industries generated hope among civic leaders for a postwar boom. The civil rights movement after 1950, with many of its most dramatic moments focused on the South’s cities, loosened the connection between Southern city and region as cities chose development rather than the stagnation that was certain to occur without a moderation of race relations. The predicted economic bonanza occurred. Young people left the rural areas and small towns of the South for the larger cities to find work in the postindustrial economy and, for the first time in over a century, the urban South received migrants in appreciable numbers from other parts of the country and the world. The impact of spatial distinctions and historical differences (particularly those related to the Civil War) lingers in Southern cities, but exceptionalism is a fading characteristic.
What are 3 Ways humans interact with the environment?
Examples of Different Kinds of Human Environment Interactions
- The use of natural resources. …
- Deforestation. …
- Energy resources. …
- Oil and gas drilling. …
- Water resources. …
- Relationships between human activities and the surroundings. …
- Vehicle production. …
How are human beings dependent on the environment for their food?
Animals, including human beings, get their food from plants. We use forest products to fulfill most of our needs. Animals that depend completely on plants for their food are called herbivores.
How do humans depend on the environment to breakdown waste?
Humans depend on the environment to break down, recycle or safely store their waste, including rubbish, gaseous emissions from use of vehicles, effluent, industrial wastes and fertiliser run-off. A third environmental function is earth’s service function.
What are the 5 major impacts humans have on the environment?
In this video, we’ll learn about the important services the ecosystem provides (including biogeochemical cycles and food) as well as the top five negative impacts humans have had on the environment: deforestation, desertification, global warming, invasive species, and overharvesting.
How can humans reduce their impact on the environment?
Here are 10 easy ways you can gradually reduce your workplace’s impact on the environment (and save money).
- Watch your water usage. …
- Go paperless (if you can!). …
- Recycle if you can’t go paperless. …
- Use recycled products.
Why do we depend on animals?
Humans rely on animals for food, fiber, labor and companionship. So it makes sense that we need animal scientists to keep these animals healthy and productive. Animal scientists help put food on our tables. … When animals grow well and stay healthy, farmers can produce more meat, milk or eggs for our consumption.
Why life on earth is completely dependent on plants and how?
Sunlight is essential for life on Earth to exist. Plants use energy from sunlight to convert water and carbon dioxide gas into food. This releases essential, life-giving oxygen into the atmosphere. Virtually all other organisms rely on plants for energy to keep them alive.
Why are human beings considered the most important resource for a country?
Human beings are considered the most important resource for development because they alone are responsible for the progress of a country. … The machines work fast and efficiently, but these machines are operated by humans. Thus, a human with intelligence is the ultimate resource for driving development.
Do humans depend on the environment?
All living creatures, including human beings, depend on the environment. Human beings, animals, air, water, soil, vegetation, minerals, etc., found on the earth are the chief resources of the environment. Living things cannot survive in a polluted environment because it affects life adversely.
How does the environment support life?
These are the functions of the environment that support human life and economic activity. The second is the safe absorption (through breakdown, recycling or storage) of the wastes and pollution produced by production and human life (the Earth’s ‘sink’ function). …
Why is garbage bad for the environment?
Trash can travel throughout the world’s rivers and oceans, accumulating on beaches and within gyres. This debris harms physical habitats, transports chemical pollutants, threatens aquatic life, and interferes with human uses of river, marine and coastal environments.
How do humans destroy the nature?
Some human activities that cause damage (either directly or indirectly) to the environment on a global scale include population growth, overconsumption, overexploitation, pollution, and deforestation, to name but a few.
What is the most harmful thing to the environment?
Plastic bags are one of the most damaging sources of everyday pollution. By some estimates, 1 trillion non-biodegradable plastic bags are disposed of each year, breaking down in waterways, clogging landfill sites and releasing toxic chemicals when burned.
Cholecystitis is inflammation of the gallbladder, usually associated with gallstones, and it causes severe abdominal pain. Acute Cholecystitis can manifest as sharp cramping pain in the right upper quadrant of the abdomen. This pain can spread to the back or below the right shoulder blade. It usually appears after a fatty meal. Cholecystitis might also lead to nausea and vomiting and often jaundice. A person suffering from Cholecystitis might notice clay-colored stools and fever. Diagnostic tests prescribed to detect this condition include liver function tests, abdominal ultrasound and endoscopy.
In many cases, Cholecystitis can clear on its own, with the right low fat diet and antibiotics. But in other cases, Cholecystectomy may be done to remove the gallbladder. Acute Cholecystitis needs to be treated urgently lest it lead to complications such as a perforated gallbladder or gangrenous Cholecystitis where the gallbladder tissue dies. On the other hand, cholangitis involves infection of the bile ducts either due to biliary obstruction or bacterial infection.
Lead poisoning occurs when there is increased level of lead inside the body. Lead is toxic and can lead to many health problems such as headache, anemia, abdominal pain and irritability. Over time, lead poisoning can result in kidney failure, hypertension, learning difficulties, lethargy and behavioral problems. Children are at risk for lead poisoning when they are in contact with products containing lead. X-rays, blood count and CT or MRI of brain can help in identifying lead poisoning.
Lead poisoning in adults has often been traced to the use of lead-based glazes on potteries and contamination of herbal medicines. Sometimes lead pipes in older homes can leach lead into water. It is safer to let the water run for a few seconds before using it for consumption, because the more time water has been sitting in the pipes, the more lead it may contain. Hot water may also contain more lead than cold water. In adults, symptoms of lead poisoning are seen when the lead level in the blood exceeds 80 µg/dL for weeks.
Lead exposure is measured in micrograms per deciliter of blood (µg/dL). The following is a guide to the standards on lead exposure set by the CDC (Centers for Disease Control and Prevention):
High levels of lead in the blood may require Chelation Therapy (treatment with chemical agents that bind to the heavy metal lead so that it can be excreted through urine). There are 4 agents:
- Edetate Calcium Disodium (EDTA calcium) and Dimercaprol (BAL) are given through injections.
- Succimer (Chemet) and Penicillamine (Cuprimine, Depen) are taken orally.
Colonoscopy allows the doctor to look into the interior lining of the large intestine. Through this procedure, the doctor is able to detect inflamed tissue, abnormal growths, polyps, tumors and ulcers. Early signs of cancer in the colon and the rectum can also be detected through colonoscopy. This procedure is also used to study unexplained changes in bowel habits, to evaluate symptoms of abdominal pain, rectal bleeding and sudden weight loss. The colonoscope is a thin flexible instrument whose length ranges from 48 inches to 72 inches. A small video camera is attached to the colonoscope so that photographic, electronic or videotaped images of the large intestine can be taken. Colonoscope is used to view the entire colon as well as a small portion of the lower small intestine.
The colon should be completely empty for colonoscopy to be thorough and safe. The liquid diet should be clear of any food colorings. It should be fat free. The colonoscope is gradually inserted into the rectum and slowly guided into the colon. The scope transmits an image of the inside of the colon onto a video screen so that the doctor can carefully examine the lining of the colon. The scope blows air into the colon and inflates it so that the doctor has a better view of the colon. During the procedure, the doctor is able to remove abnormal growths like polyp in the colon.
Virtual colonoscopy: Here the technique that is adopted uses a CAT scan to construct virtual images of the colon. These images are similar to the views of the colon obtained by direct observation through colonoscopy. However, virtual colonoscopy cannot find small polyps which are less than 5 mm in size. These can be seen by the traditional colonoscopy. Virtual colonoscopy is not as accurate as colonoscopy in finding cancers or pre-malignant lesions that are not protruding. Virtual colonoscopy also cannot remove polyps.
Vertical Forest is a model for a sustainable residential building, a project for metropolitan reforestation contributing to the regeneration of the environment and urban biodiversity without the implication of expanding the city upon the territory. It is a model of vertical densification of nature within the city that operates in relation to policies for reforestation and naturalization of large urban and metropolitan borders. The first example of the Vertical Forest consisting of two residential towers of 110 and 76 m height, was realized in the centre of Milan, on the edge of the Isola neighborhood, hosting 800 trees (each measuring 3, 6 or 9 meters), 4,500 shrubs and 15,000 plants from a wide range of shrubs and floral plants distributed according to the sun exposure of the facade. On flat land, each Vertical Forest equals, in amount of trees, an area of 20,000 square meters of forest. In terms of urban densification it is the equivalent of an area of a single family dwelling of nearly 75,000 sq.m. The vegetal system of the Vertical Forest contributes to the construction of a microclimate, produces humidity, absorbs CO2 and dust particles and produces oxygen.
Biological habitats. Vertical Forest increases biodiversity. It helps to set up an urban ecosystem where a different kind of vegetation creates a vertical environment which can also be colonized by birds and insects, and therefore becomes both a magnet for and a symbol of the spontaneous re-colonization of the city by vegetation and by animal life. The creation of a number of Vertical Forests in the city can set up a network of environmental corridors which will give life to the main parks in the city, bringing together the green space of avenues and gardens and interweaving various spaces of spontaneous vegetation growth.
Mitigations. Vertical Forest helps to build a micro-climate and to filter dust particles which are present in the urban environment. The diversity of the plants helps to create humidity and absorbs CO2 and dust, produces oxygen, protects people and houses from harmful sun rays and from acoustic pollution.
Anti-sprawl. Vertical Forest is an anti-sprawl measure which aims to control and reduce urban expansion. If we think of them in terms of urban densification, each tower of the Vertical Forest is equivalent to an area of urban sprawl of family houses and buildings of up to 50,000 square metres.
Trees are a key element in understanding architectural projects and garden systems. In this case the choice of the types of trees was made to fit their positioning on the facades and by height, and it took two years to finalize it, alongside a group of botanists. The plants used in this project will be grown specifically for this purpose and will be pre-cultivated. Over this period these plants can slowly get used to the conditions they will find on the building.
Ecology billboards. Vertical Forest is a landmark in the city which is able to depict new kinds of variable landscapes changing their look over seasons, depending on the types of plants involved. The Vertical Forests will offer a changing view of the metropolitan city below.
Management. The management of the tree pots is under building regulation, as well as the upkeep of the greenery and the number of plants for each pot.
Irrigation. In order to understand the need for water, the plan for these buildings took into account the distribution of plants across the various floors and their positioning.
Source: Stefano Boeri Architetti / www.stefanoboeriarchitetti.net
What are harmful algal blooms?
Harmful algal blooms (HABs) are caused by cyanobacteria (also known as blue-green algae) which may or may not produce toxins. Cyanobacteria are common single-celled organisms that naturally exist in fresh waters, such as lakes and ponds, or slightly saline waters such as tidal rivers and estuaries (brackish water). The cyanobacteria utilize sunlight and nutrients from the water to grow and multiply. When there are too many nutrients in the water, the bacteria can grow rapidly or “bloom”. Blooms may turn the water a green, red, or brownish color. Blooms may also form a visible scum on the water surface, similar to the look of spilled paint. Blooms are more likely to occur in hot summer months.
What effects do harmful algal blooms have on humans?
Most cyanobacteria species are not able to produce toxin. Some species can produce one or more types of toxins such as neurotoxins (nerve toxins) or hepatotoxins (liver toxins) during blooms which may be harmful to humans or aquatic life. People may become exposed to cyanobacteria toxins in three ways: swallowing bloom water, direct skin contact, and breathing aerosolized toxins that are in the air.
What are the symptoms of harmful algal bloom exposures?
If water containing cyanobacteria toxin is swallowed, common gastrointestinal symptoms such as stomach pain, nausea, vomiting, and diarrhea may occur. If there is direct contact with cyanobacteria toxin, skin and eye irritation may result, along with tingling or numbness of the lips, fingers and toes, and dizziness. Respiratory irritation may include coughing or wheezing. Long-term exposure to cyanobacteria toxins may result in liver damage or other chronic health effects.
How soon after exposure do symptoms appear?
Symptoms of a neurotoxin HAB exposure may appear within 15-20 minutes while symptoms of a hepatotoxin HAB exposure may take hours or several days to appear. Telling your health care practitioner about contact with water may help him/her treat the illness properly.
What effects do harmful algal blooms have on animals and fish?
Mammals and birds exposed to cyanobacteria toxins may become ill or die. As other bacteria in the water break down dead cyanobacteria, the dissolved oxygen in the water may become depleted, which may cause a fish kill. Cyanobacteria bloom toxins at high concentrations can be directly harmful to fish and may cause fish kills as well. Dense bacterial blooms in the water column will block out sunlight necessary for other organisms to survive. Wildlife, pets, and livestock are also prone to exposure by wading in and drinking bloom water. A very small amount of toxin can cause illness in small animals if ingested.
Is it safe to eat seafood from waters with cyanobacteria blooms?
Internal organs (innards) of fish and crabs caught in bloom waters may be contaminated and, therefore, should not be consumed. It is safe to consume fish fillets that appear healthy when caught in bloom waters, provided you carefully clean the fish, discarding all guts and the carcass, thoroughly cook the fillet, and wash hands and surfaces with fresh, soapy water afterward. In waters with persistent, recurring blooms where toxin levels are high, consumption may not be advised.
How do I protect myself from the effects of harmful algal blooms?
- Observe signage indicating a harmful algal bloom is present and avoid contact with the water when instructed.
- Do not swim, wade, or waterski in water that has unusual color or where a cyanobacteria bloom has been identified.
- If direct contact with skin occurs with water containing cyanobacteria, wash off with fresh water. In some cases, skin irritation will appear after prolonged exposure. If symptoms persist, consult your health care provider or your local health department.
- Never drink untreated water. Boiling water taken from a waterbody with a cyanobacteria bloom will not destroy toxins.
- Do not let children, pets, or livestock wade, swim, or drink affected waters. If exposed, wash skin and fur thoroughly with fresh water.
- People who are prone to respiratory allergies or asthma should avoid areas with cyanobacteria blooms.
- Do not eat internal organs or use the carcass for stock of fish caught in HAB waters. If you have cleaned fish fillets caught from affected waters, thoroughly wash any of your skin that has come into contact with the fish, in addition to surfaces during cleaning and preparation.
- Use rubber gloves if contact with affected waters must be made.
What is Virginia doing about harmful algal blooms?
Several state agencies and municipal governments work together to regularly monitor the water and shellfish growing areas for the presence of cyanobacteria and their toxins, and to conduct surveillance for human health effects. This group is known as the Virginia Harmful Algal Bloom Task Force. The public will be notified if a cyanobacteria bloom that could affect human health is identified. The Algal Bloom map is regularly updated to reflect the status of waterways experiencing a bloom.
How does someone report an algal bloom?
If you are concerned that you have been exposed to a harmful algal bloom, please see your health care provider or call your local health department. Telling your doctor about contact with water may help him/her treat the illness properly. You may also report the exposure on the Harmful Algal Bloom Hotline (888-238-6154). Report algal blooms and fish kills online at http://www.swimhealthyva.com/.
How can I learn more about harmful algal blooms?
- If you have concerns about harmful algal blooms, contact your healthcare provider.
- Call your local health department. A directory of local health departments is located at https://www.vdh.virginia.gov/local-health-districts/.
- Visit www.SwimHealthyVa.com and click on Harmful Algal Blooms.
- Visit the Centers for Disease Control and Prevention HAB website at https://www.cdc.gov/habs/.
- Visit the Environmental Protection Agency HAB website.
Neuro-immunology is the part of Neuroscience that deals with immunological aspects of normal and abnormal functions in the body.
Multiple Sclerosis (MS) is one such disorder. The signs and symptoms of Multiple Sclerosis are so diverse that sometimes even an expert can be misled. Gulf Neurology Center follows established and updated diagnostic criteria for the diagnosis and treatment of MS. However, normal tests and a lack of objective findings do not rule out MS, and clinical findings become the decision maker.
The following information may better help understand the basics of Multiple Sclerosis.
Recognizing Multiple Sclerosis: Multiple Sclerosis symptoms generally appear between the ages of 20 and 40. The onset of MS may be dramatic or so mild that a person doesn’t notice any symptoms until far later in the course of the disease.
The most common early symptoms of MS:
- Tingling
- Numbness
- Loss of balance
- Weakness in one or more limbs
- Blurred or double vision, or other visual impairment
Less common symptoms of MS:
- Slurred speech
- Sudden onset of paralysis
- Lack of coordination
- Cognitive difficulties
As the disease progresses, other symptoms may include muscle spasms, sensitivity to heat, fatigue, changes in thinking or perception, and sexual disturbances.
Fatigue: this is the most common symptom of MS. It is typically present in the mid afternoon and may consist of increased muscle weakness, mental fatigue, sleepiness, or drowsiness.
Heat Sensitivity: Heat sensitivity (the appearance or worsening of symptoms when exposed to heat such as a hot shower) occurs in most people with MS.
Spasticity: Muscle spasms are a common and often debilitating symptom of MS. Spasticity usually affects the muscles of the legs and arms. It may interfere with a person's ability to move their muscles freely.
Dizziness: Many people with MS complain of feeling “off balance” or lightheaded. Occasionally they may experience the feeling that they or their surroundings are spinning. This is called vertigo. These symptoms are caused by damage within the complex nerve pathways that coordinate vision and other inputs into the brain that are needed to maintain balance.
Impaired Thinking: Problems with thinking occur in almost half of the people with MS. For most, this means slowed thinking, decreased concentration, or decreased memory. Approximately 10% of the people with this disease have severe impairments that significantly impair their ability to carry out tasks of daily living.
Vision Problems: Vision impairments are relatively common in people with MS. In fact, one of the most important vision problems is optic neuritis, which occurs in 55% of people with MS. However, most vision impairments do not lead to blindness.
Other Symptoms associated with Multiple Sclerosis (MS). Abnormal sensations: Many people with MS experience abnormal sensations such as “pins and needles”, numbness, itching, burning, stabbing, or tearing pains. Fortunately, most of these symptoms, while aggravating, are not life-threatening or debilitating and can be managed or treated.
Speech and swallowing problems: People with MS often have swallowing difficulties. In many cases, they are associated with speech problems as well. They are caused by damaged nerves that normally aid in performing these tasks.
Difficulty walking: Gait disturbance is among the most common symptoms of MS. This problem is mostly related to muscle weakness and/or spasticity. Having balance problems or numbness in your feet can also make walking difficult. Other rare symptoms include breathing problems and seizures.
What Are The Types Of Symptoms? It is helpful to divide the symptoms into three categories: Primary, Secondary, and Tertiary. Primary Symptoms are a direct result of the demyelination process. This impairs the transmission of electrical signals to the muscles (to allow them to move appropriately) and the organs of the body (allowing them to perform normal functions). Primary symptoms include: weakness, tremors, tingling, numbness, loss of balance, vision impairment, paralysis, and bladder or bowel problems. The use of medication, rehabilitation, and other treatments can help keep many of these symptoms under control. Secondary Symptoms result from Primary Symptoms. For example, paralysis (a primary symptom) can lead to bedsores (pressure sores), and bladder or urinary incontinence problems can cause frequent, recurring urinary tract infections. Secondary Symptoms can be treated, but the ideal goal is to avoid them by treating the primary symptoms. Tertiary Symptoms are the social, psychological and vocational complications associated with the primary and secondary symptoms. For example, people with MS often suffer from depression, which is considered to be a tertiary symptom.
What Causes MS Symptoms? Demyelination, or deterioration of the protective sheath that surrounds the nerve fibers, can occur in any part of the brain or spinal cord. People with MS experience different symptoms based on the area affected. Demyelination in the nerves that send impulses to the muscles tends to cause issues associated with movement (motor symptoms). Demyelination along the nerves that carry sensory impulses to the brain causes disturbances in sensation (sensory symptoms).
Are The Symptoms The Same In Every Person? Multiple Sclerosis follows a varied and unpredictable course. In many people, the disease starts with a single symptom, followed by months, even years, without any progression of symptoms. In others, the symptoms can become worse within weeks or possibly months. It is important to understand that although a wide range of symptoms can occur, any given individual may experience only some of the symptoms and never have others. Some symptoms may occur once, resolve, and never return. Because MS is such an individualized disease, it is not helpful to compare yourself with other people who have MS.
Reviewed by the doctors at the Mellen Center for Multiple Sclerosis Research at the Cleveland Clinic.
MS Patient Circle: Some of our patients have started a support group in the area and have been using an office within our Medical Complex. This support group is available to patients from all over the gulf coast region. Coordinators are selected by the patients. The meetings are planned and arranged by the group. For additional information on how to become involved please contact Crystal via email at [email protected]
As spring rains feed the flowers, trees, weeds and grasses, these things feed allergies.
What are allergies and why do we have them? Allergies are an overreaction of the immune system. People who have allergies have hyper-alert immune systems that overreact to substances in the environment called allergens.
Allergies affect at least 2 out of every 10 Americans.
Some of the most common types of allergies include, but are not limited to:
- Pollen
- Can trigger hay fever or seasonal allergies.
- Dust Mites
- Microscopic organisms that live in house dust, a mixture of fabric fibers, animal dander, bacteria, mold or fungus spores, food particles, bits of plants and others. Unlike pollen, which is seasonal, this exposure usually occurs year round.
- Mold
- Microscopic fungi spores that float in the air like pollen. These can be in damp areas of your home, such as basements and bathrooms. This also can occur seasonally, unless it's in your house.
- Animal Dander and Cockroaches
- Proteins secreted by oil glands in an animal's skin and present in its saliva. It can take 2 or more years to develop allergies to animals, and the allergy may subside months after ending contact with the animal.
- Insect Stings
- Everyone who gets stung by an insect will have pain, swelling and redness around the sting site. However, people who are allergic to stings can have a severe or even life-threatening reaction.
- Latex
- Rubber gloves, condoms, and certain medical devices contain latex. Reactions can range from skin redness and itching to difficulty breathing.
- Food
- Milk, fish, shellfish, nuts, wheat and eggs are among the most common food allergies. These reactions usually happen within minutes after eating the food.
- Medications
- Some people develop reactions to drugs such as Penicillin or Aspirin. These symptoms can range anywhere from mild to life threatening.
Here are symptoms of allergic reactions:
Mild Allergic Reactions
- Itchy, watery eyes
Mild reactions do not spread throughout the body.
Moderate Allergic Reactions
These can include symptoms that spread to other parts of the body.
- Difficulty breathing
Severe Allergic Reaction (Anaphylaxis)
This is a rare, life-threatening emergency in which the body's response to the allergen is sudden and affects the whole body. It may begin with sudden itching of the eyes and within minutes progress to more serious symptoms:
- Varying degrees of swelling that can make breathing and swallowing difficult
- Abdominal pain
- Mental confusion or dizziness
After studying this course, you should be able to:
explain in English and by using examples, the conventions and language used in graph drawing to someone not studying the course
use the following terms accurately, and be able to explain them to someone else: ‘time-series graph’, ‘conversion graph’, ‘directly proportional relationship’, ‘“straight-line” relationship’, ‘gradient’, ‘intercept’, ‘x-coordinate’, ‘y-coordinate’, ‘coordinate pair’, ‘variable’, ‘independent variable’, ‘dependent variable’, ‘average speed’, ‘velocity’, ‘distance-time graph’
draw a graph on a sheet of graph paper, from a table of data, correctly plotting the points, labelling the graph and scaling and labelling the axes
draw and use a graph to convert between a quantity measured in one system of units to the same quantity measured in a different system
write down the formula of a straight-line graph, and be able to explain, using sketches, the meaning of the terms ‘gradient’ and ‘intercept’.
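As a concrete illustration of the last outcome, the short Python sketch below shows the straight-line formula y = mx + c applied to a conversion graph. The miles-to-kilometres example and its gradient are assumptions for illustration; the course text does not prescribe particular units.

```python
# The straight-line formula y = m*x + c, used here for a conversion graph.
# The miles-to-kilometres conversion is an assumed example.

def straight_line(x, gradient, intercept):
    """Return the y-coordinate on a line with the given gradient and intercept."""
    return gradient * x + intercept

# Quantities related by a conversion graph are directly proportional,
# so the line passes through the origin and the intercept is zero.
MILES_TO_KM_GRADIENT = 1.609  # approximate kilometres per mile

for miles in (0, 10, 25, 50):
    km = straight_line(miles, MILES_TO_KM_GRADIENT, intercept=0.0)
    print(f"{miles} miles is about {km:.1f} km")
```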
Testicular cancer is the most common cancer in young men between the ages of 15 and 35, but the disease also occurs in other age groups, so all men should be aware of its symptoms. The testicles are a part of the male reproductive system and are contained within a sac of skin called the scrotum, which hangs beneath the base of the penis. Each testicle is somewhat smaller than a golf ball in adult males. The testicles produce and store sperm, and they also serve as the body's main source of male hormones. These hormones control the development of the reproductive organs and other male characteristics, such as body and facial hair and a low voice. In 2005, an estimated 8500 cases of testicular cancer were diagnosed in the United States, and about 350 men died of the disease. Caucasians are more likely to be diagnosed with testicular cancer than Hispanics, Blacks, or Asians. Of concern, the incidence of testicular cancer around the world has been steadily increasing, basically doubling in the past 30-40 years.
Most testicular cancer cases are found through a self-examination. The testicles are smooth, oval-shaped, and rather firm. Men who examine themselves routinely become familiar with the way their testicles normally feel. Any changes in the way they feel from month-to-month should be checked by a doctor. (See below for self-exam instructions.)
In about 90% of cases, men have a lump on a testicle that is often painless but slightly uncomfortable, or they may notice testicular enlargement or swelling. Men with testicular cancer often report a sensation of heaviness or aching in the lower abdomen or scrotum. In rare cases, men with germ cell cancer notice breast tenderness or breast growth. This symptom occurs because certain types of germ cell tumors secrete high levels of a hormone called human chorionic gonadotropin (HCG), which stimulates breast development. Blood tests can measure HCG levels. In the more rare non-germ cell testicular cancers, Leydig cell tumors produce androgens (male sex hormones) or estrogens (female sex hormones). These hormones may cause symptoms such as breast growth or loss of sexual desire, symptoms of estrogen-producing tumors. Androgen-producing tumors may not cause any specific symptoms in men, but in boys they can cause growth of facial and body hair at an abnormally early age.
Even when testicular cancer has spread to other organs, only about 1 man in 4 may experience symptoms related to the metastases prior to diagnosis. Lower back pain is a frequent symptom of later-stage testicular cancer. If the cancer has spread to the lungs and is advanced, shortness of breath, chest pain, cough, or bloody sputum may develop. Occasionally, men will complain of central abdominal discomfort, due usually to enlargement of abdominal lymph nodes. Rarely, men will complain of headache, which is associated with brain metastases (an uncommon pattern of spread, and usually associated with a certain type of testicular cancer called choriocarcinoma).
It is important to know that a number of noncancerous conditions, such as testicle injury or infection, can produce symptoms similar to those of testicular cancer. Inflammation of the testicle, known as orchitis, can cause painful swelling. Causes of orchitis include viral or bacterial infections.
Listed below are warning signs that men should watch for:
A lump in either testicle; the lump typically is pea-sized, but sometimes it might be as big as a marble or even an egg.
Any enlargement of a testicle;
A significant shrinking of a testicle;
A change in the consistency of a testicle (hardness);
A feeling of heaviness in the scrotum;
A dull ache in the lower abdomen or in the groin;
A sudden collection of fluid in the scrotum;
Pain or discomfort in a testicle or in the scrotum;
Enlargement or tenderness of the breasts.
A testicular self exam is best performed after a warm bath or shower. Heat relaxes the scrotum, making it easier to spot anything abnormal. The National Cancer Institute recommends following these steps every month:
1. Stand in front of a mirror. Check for any swelling on the scrotum skin.
2. Examine each testicle with both hands. Place the index and middle fingers under the testicle with the thumbs placed on top. Roll the testicle gently between the thumbs and fingers. Don't be alarmed if one testicle seems slightly larger than the other; this is normal.
3. Find the epididymis, the soft, tubelike structure behind the testicle that collects and carries sperm. If you are familiar with this structure, you won't mistake it for a suspicious lump. Cancerous lumps usually are found on the sides of the testicle but can also show up on the front.
4. If you find a lump, see a doctor right away. The abnormality may not be cancer, but if it is, the chances are great it can spread if not stopped by treatment. Only a physician can make a positive diagnosis.
Various tests are necessary to make the diagnosis of testicular cancer. Your doctor may order several imaging tests and also draw blood to aid in the diagnosis. If a mass is seen in the testicle, these tests are usually followed by surgery to remove the affected testicle(s). A summary of the tests and procedures are below:
An ultrasound can help doctors tell if a testicular mass is solid or fluid filled. This test uses sound waves to produce images of internal organs. The images can help distinguish some types of benign and malignant tumors from one another. This test is very easy to take and uses no radiation. When you have an ultrasound exam, you simply lie on a table and a technician moves the transducer over the part of your body being examined. Usually, the skin is first lubricated with jelly. The pattern of echoes reflected by tissues can be useful in distinguishing fluid buildup around the testicle (called a hydrocele) and certain benign masses from cancers. If the mass is solid, then it is probably either a tumor or cancer but still could be some form of infection, and thus it is essential to follow up with further tests.
Certain blood tests are sometimes helpful in diagnosing testicular tumors. Many testicular cancers make high levels of certain proteins, such as alpha-fetoprotein (AFP) and human chorionic gonadotropin (HCG). The tumors may also increase the levels of enzymes such as lactate dehydrogenase (LDH). These proteins are important because their presence in the blood suggests that a testicular tumor is present. However, they can also be found in conditions other than cancer.
Nonseminomas can sometimes raise AFP and HCG levels. Seminomas occasionally raise HCG levels, but never AFP levels. LDH is a non-specific blood test, and a very high LDH often (but not always) indicates widespread disease. Sertoli or Leydig cell tumors do not produce these substances. These proteins are not usually elevated in the blood if the tumor is small. Therefore, these tests are also useful in estimating how much cancer is present and in evaluating the response to therapy to make sure the tumor has not returned.
If a suspicious mass is present in the testicle, the testicle is usually removed in a procedure called a radical orchiectomy. Through an incision in the groin, the surgeons remove the entire tumor together with the testicle and spermatic cord. The spermatic cord contains blood and lymph vessels that may act as a pathway for testicular cancer to spread to the rest of the body. The entire specimen will be sent to the laboratory where a pathologist (a doctor specializing in laboratory diagnosis of diseases) examines the tissue under a microscope. If cancer cells are present, the pathologist sends back a report describing the type and extent of the cancer. The entire operation takes less than an hour and is usually an outpatient procedure. At the time of radical orchiectomy, a testicular prosthesis can be placed if desired by the patient.
Chest x-ray: This is a “plain” x-ray of your chest and can be taken in any outpatient setting. This test is done to see if your cancer has spread to your lungs or the lymph nodes in an area of the chest known as the mediastinum.
Computed tomography (CT): The CT scan is an x-ray procedure that produces detailed cross-sectional images of your body. Instead of taking one picture, as does a conventional x-ray, a CT scanner takes many pictures of the part of your body being studied as it rotates around you. A computer then combines these pictures into an image of a slice of your body.
CT scans are helpful in staging the cancer (determining the extent of its spread), to help tell if your cancer has spread into your abdomen, lungs, liver, or other organs. They show the lymph nodes and distant organs where metastatic cancer might be present.
Once cancer of the testicle has been found, more tests will be done to find out if the cancer has spread from the testicle to other parts of the body (staging). A doctor needs to know the stage of the disease to plan treatment. The following stages are used for cancer of the testicle:
Cancer is found only in the testicle.
Cancer has spread to the lymph nodes in the abdomen (lymph nodes are small, bean-shaped structures that are found throughout the body; they produce and store infection-fighting cells).
Cancer has spread beyond the lymph nodes in the abdomen. There may be cancer in parts of the body far away from the testicles, such as the lungs and liver.
Stage I Testicular Cancer
Treatment depends on what the cancer cells look like under a microscope (cell type).
If the tumor is a seminoma, treatment will probably be surgery to remove the testicle (radical inguinal orchiectomy), followed by either 1.) external-beam radiation to the lymph nodes in the abdomen, 2.) surveillance – i.e. close observation with chest x-rays, CT scans, and blood tests, or 3.) chemotherapy usually in the form of a drug called carboplatin.
If a tumor is a nonseminoma, treatment will be radical orchiectomy followed by one of the following:
- Removal of some of the lymph nodes in the abdomen (retroperitoneal lymph node dissection). This surgery removes the lymph nodes where the testicular cancer usually spreads. The surgery can be performed so that fertility is preserved.
- If there is no cancer found in these lymph nodes, then blood tests and imaging exams will be done on a regular basis to make sure that the cancer does not recur.
- If there is a small amount of cancer found in the lymph nodes (e.g. less than or equal to 2 lymph nodes with cancer), then the surgery is usually completely therapeutic; in other words, removing this small amount of cancer in the lymph nodes oftentimes cures the patient without further therapy. Of course, routine visits for blood tests and imaging exams are necessary in this case.
- If a larger amount of tumor is found in the lymph nodes (e.g. greater than 2 nodes), chemotherapy is usually given after the patient recovers from surgery.
- Surveillance: careful testing to see if the cancer comes back. The doctor must check the patient and do blood tests and x-rays very frequently, for example every 1-2 months for 2 years. This option is often chosen if the tumor has certain features suggesting a low chance of occult spread and if the patient is very reliable and compliant.
- Immediate chemotherapy. This is sometimes preferred in select patients who have cancer confined to the testicle, however, the doctors worry that there is a high likelihood that the cancer has spread, but the cancer is too small to detect by our imaging exams.
Stage II Testicular Cancer
Again, treatment depends on whether the cancer is a seminoma or nonseminoma.
If the tumor is a seminoma and the spread of cancer is felt to be small volume, then treatment will probably be surgery to remove the testicle (radical inguinal orchiectomy), followed by external-beam radiation to the lymph nodes in the abdomen. If the spread to the abdomen is felt to be more bulky (larger nodes), then the treatment will probably be a radical inguinal orchiectomy followed by systemic chemotherapy.
If a tumor is a nonseminoma, treatment will be radical orchiectomy followed by one of the following:
- If low volume lymph nodes, then possibly removal of the lymph nodes in the abdomen (lymph node dissection) followed by the same protocol as in Stage I testicular cancer treated by lymph node dissection (see above). The other primary alternative is chemotherapy instead of surgery. If the tumor markers remain abnormal, then the recommendation should be chemotherapy.
- If high volume lymph node spread on CT scan, then the patient is administered systemic chemotherapy.
- If there is a “complete response” to chemotherapy (i.e. complete shrinkage of the metastasis so that there is no evidence of metastases, ie. normal CT scan), then the patients are usually followed closely with blood tests and imaging exams.
- If x-rays following chemotherapy show that there are still lymph nodes that remain enlarged with potential cancer, then surgery may be done to remove these masses. If there is still residual cancer in these masses, then your doctor may recommend more chemotherapy. If there is no cancer in these masses, the doctor will check the patient at regular intervals with blood tests, chest x-rays, and CT scans.
Stage III Testicular Cancer
Stage III disease is universally treated with systemic chemotherapy. The number of cycles and exact chemotherapy regimen depends on whether the tumor is a seminoma or nonseminoma, how extensive the disease is, and presence/extent of tumor marker (AFP, HCG) elevation. Doctors will follow how the tumor is responding to the chemotherapy with multiple imaging exams. Oftentimes in Stage III testicular cancer, there are masses remaining after completion of the chemotherapy. Usually, these masses are removed, as they may harbor residual cancer or teratoma. If there is residual cancer in these masses after full chemotherapy, then your doctor may recommend more chemotherapy, sometimes even a different chemotherapy drug combination.
The survival rates for testicular cancer are excellent. Specifically, the survival rate for men diagnosed with Stage I seminoma is about 99%. The survival rate for men with Stage I non-seminoma is about 98%. Cure rates for Stage II tumors range above 90%, while cure rates for Stage III tumors vary between 50-80%. In addition to Stage, a variety of institutions have created classifications of Good and Poor risk tumors. Good risk tumors are generally those that have not spread outside of the retroperitoneal lymph nodes or lungs and do not have overly elevated tumor markers. Poor risk tumors generally have very high tumor markers or have spread outside of the lungs and lymph nodes. As you might expect, the survival rate for good risk tumors is high (more than 90%), while the survival rate for poor risk tumors is lower (50-60%).
WORLD HEALTH. 1991 Sep-Oct; 12. A researcher with WHO's Tropical Disease Research Programme reviews techniques used to diagnose malaria. Present techniques have not improved much since a French physician first used a microscope in 1880 to examine blood from a sick soldier and noticed the parasites of Plasmodium falciparum. Yet optical quality has improved, and special stains can now be used to color the parasites, making them more recognizable. In fact, at a magnification of 600-700 times, a scientist can identify all 4 plasmodia, identify the blood forms of the plasmodia, and count the plasmodia. Blood samples and a microscope allow physicians to monitor the ill person's progress after they begin treatment. Yet a microscope and the needed laboratory skills and other resources are not always present in health centers in villages in countries where malaria is endemic. It is here that simple and effective techniques are needed most. One approach is to detect antibodies to the plasmodia, but this takes much time. In addition, antibodies are only present after an individual has been infected for a relatively long time. Thus this technique cannot detect malaria early enough to provide proper treatment. Another approach readily identifies antigens. Yet the techniques required are complicated and require a lot of time. Besides, antigen techniques are not as reliable as microscopic diagnosis. Researchers are presently experimenting with simple visual methods which are quick, inexpensive, and reliable. Molecules from the plasmodia in a small amount of blood will either react or not react with reagents incorporated on a dipstick or card. Thus physicians can detect which plasmodia are present and estimate the parasite load. Another test can inform physicians which antimalarial to prescribe, how much, and whether the plasmodia are resistant to the antimalarial.
It was a massive heist that received little attention. Several hundred trillion joules of energy were disappearing every second. Investigators suspected the deep ocean was involved, but couldn’t find any leads. There’s no need to panic, though— a fresh look at the evidence shows that the energy may never have been missing in the first place.
Most people know that greenhouse gases trap heat near the Earth, warming the planet. We fixate on records set by temperatures of the near-surface atmosphere to track the warming caused by anthropogenic greenhouse gas emissions. But the atmosphere is only part of the picture. There are other reservoirs that take up heat energy as well—most notably, the ocean. In fact, about 90 percent of the energy added by the increase in greenhouse gases has gone into the ocean.
The 2000s saw lots of La Niñas, the cold phase of the El Niño/Southern Oscillation that lowers surface temperatures. If those temperatures are your only measure of global heat content, enough La Niñas may get you thinking that there's been a slowdown in the warming trend our planet has been experiencing. You get a much different picture when you look at the Earth as a whole, though. During La Niña years, the Earth actually gains more energy than it would otherwise. Conversely, El Niño years make surface temperatures warmer but slow the rise in total energy.
That’s mainly the result of changes in cloudiness, precipitation, and storm tracks that come along with La Niña or El Niño conditions. For example, clearer skies in the tropical Pacific (La Niña) can allow more solar radiation through, whereas increased evaporation (El Niño) moves heat from the ocean to the atmosphere while boosting cloudiness.
We now have satellite networks that measure the incoming solar radiation and the outgoing infrared radiation, so we can track the changes in the planet's heat content pretty well. If the incoming solar radiation is greater than the outgoing infrared, energy was added to the system. If that energy goes into the atmosphere, we can track it using a vast network of weather stations (and satellites, as well) that enable calculations of global near-surface atmospheric temperature.
The ocean is a tougher nut to crack. We used to rely on ship-based temperature profiles for the surface ocean, but the Argo program changed that in 2003. This array of 3,000 instrumented floats measures temperature (among other things) in the upper 2 kilometers of the ocean. That's a lot more detail, but creates a significant shift in the sorts of data we have.
In 2010, Kevin Trenberth and John Fasullo (of the National Center for Atmospheric Research) published an article in Science describing a discrepancy in our accounting of Earth’s energy budget. While the satellite tracking of incoming solar radiation and outgoing infrared radiation between 2004 and 2008 clearly showed that the net addition of energy was increasing, measurements of ocean heat content showed a decline. It was sort of like seeing that your checking account balance had gone down by $1000, but finding that you had only written checks totaling $300. You know the money is gone, but where did it go?
The missing energy didn’t show up in any of the other energy budget terms we can track. That may not be as exciting as a seemingly faster-than-light neutrino, but it was a very important disparity. Trenberth and Fasullo suggested that the energy could be moving into the deep ocean, which we currently can’t monitor. Lots of modeling had previously shown that, in a warming climate, the upper ocean heat content will occasionally decline as energy moves into the deep ocean. In a 2011 paper, they also mentioned an alternate possibility: the uncertainty in our ocean heat content measurements is simply very large, which would make the discrepancy an experimental error (and therefore much less interesting).
A large source of potential error was that the switch from the ship-based measurements of ocean heat content to the Argo array entailed all kinds of difficult-to-quantify uncertainties. (New instruments that were operated differently, uneven distribution and changing density of measurement points as floats were gradually deployed, etc.) A new paper in Nature Geoscience makes headway by re-examining the ocean heat content data and accounting for that complex uncertainty.
The group’s ocean heat content record differs slightly from other analyses (just as global surface temperature series from NASA and NOAA don’t come out exactly the same), but the pivotal bit is that the uncertainty during the Argo transition period really was quite large. In fact, the difference between the ocean heat content and net total energy data is not statistically significant—it’s well within the uncertainty. That suggests that the missing energy might not be so missing.
At least one thing remains clear in all the datasets—the Earth is steadily gaining energy. Between 2001 and 2010, the amount of energy reaching the Earth exceeded the amount leaving by an average of about 0.5 watts per square meter.
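A rough back-of-envelope check (not from the article) connects that figure to the opening line: multiplying the average imbalance by an assumed Earth surface area of about 5.1 × 10^14 m² gives a heating rate of a few hundred trillion joules per second.

```python
# Back-of-envelope check of the figures quoted above. The Earth surface
# area (~5.1e14 m^2) is an assumed round value, not taken from the article.

energy_imbalance_w_per_m2 = 0.5   # average imbalance, 2001-2010 (from the article)
earth_surface_area_m2 = 5.1e14    # assumed approximate surface area of Earth

total_rate_watts = energy_imbalance_w_per_m2 * earth_surface_area_m2
print(f"Net energy gain: {total_rate_watts:.2e} J/s")
# Roughly 2.6e14 J/s, i.e. a few hundred trillion joules every second,
# consistent with the figure quoted at the start of the article.
```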
Still, it’s critically important that our energy accounting improve, and that’s a formidable task. It would be encouraging to declare the case of the missing energy “solved,” but it’s not so encouraging that our measurements are too uncertain to settle the matter. As the authors conclude, “the large inconsistencies between independent observations of Earth’s energy flows points to the need for improved understanding of the error sources and of the strengths and weaknesses of the different analysis methods, as well as further development and maintenance of measurement systems to track more accurately Earth’s energy imbalance on annual timescales.”
This study guide assists teachers in increasing students' understanding of the prevalence and spread of nuclear weapons, familiarizes students with historic and contemporary measures to control nuclear proliferation, and stimulates their thinking about potential strategies for doing so in the future.
A primary security concern in today’s world is the threat of nuclear weapons proliferation. States beyond the five “original” nuclear weapons-possessing countries (Britain, China, France, Russia and the United States) are seeking to acquire, or have already acquired, nuclear materials, industrial systems to produce plutonium or uranium, and delivery systems, such as missiles and airplanes. Moreover, non-state actors are seeking to acquire nuclear materials and weaponry. What can be done to limit the proliferation of such dangerous weapons?
Objectives of the Teaching Guide
- To increase student understanding of the prevalence and spread of nuclear weapons;
- To familiarize students with historic and contemporary measures to control nuclear proliferation and stimulate their thinking about potential strategies for doing so in the future;
- To develop students’ analytical reading, writing, and research skills;
- To reinforce students’ abilities to collaborate and produce a work product with peers using traditional and electronic means of research, discussion, and document preparation;
- To enable classroom teachers, students, and contest coordinators to:
- Understand the overall theme of the National Peace Essay Contest (NPEC) topic;
- Define and understand the concepts contained in the essay question;
- Formulate a thesis for their essay;
- Review bibliographic resources and select qualified sources for their research;
- Write, edit, and submit essays to the United States Institute of Peace;
- To provide teachers with lesson plans, worksheets, bibliographic sources, and factual material to assist them in preparing students to write essays for submission to the National Peace Essay Contest.
The Teaching Guide includes all lesson plans, student handouts and instructions. |
Substituting the volumetric flow rate term in Equation 3-2 with the appropriate terms from Equation 3-1 allows the direct calculation of the mass flow rate:

ṁ = ρAv          (3-3)
The water in the pipe of the previous example had a density of 62.44 lbm/ft3. Calculate
the mass flow rate.
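As a quick illustration of how ṁ = ρAv is applied, here is a minimal Python sketch. Only the 62.44 lbm/ft³ density comes from the text; the pipe diameter and velocity are hypothetical placeholders standing in for the worked example referenced above.

```python
import math

# Hypothetical pipe conditions -- placeholders for the example referenced above.
density_lbm_ft3 = 62.44        # water density given in the text, lbm/ft^3
diameter_ft = 4.0 / 12.0       # assumed 4 in. inner diameter (placeholder)
velocity_ft_s = 10.0           # assumed average flow velocity (placeholder)

area_ft2 = math.pi * (diameter_ft / 2) ** 2                   # flow area A
mass_flow_lbm_s = density_lbm_ft3 * area_ft2 * velocity_ft_s  # m-dot = rho * A * v

print(f"Flow area:      {area_ft2:.4f} ft^2")
print(f"Mass flow rate: {mass_flow_lbm_s:.1f} lbm/s")
```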
Conservation of Mass
In thermodynamics, you learned that energy can neither be created nor destroyed, only changed
in form. The same is true for mass. Conservation of mass is a principle of engineering that
states that all mass flow rates into a control volume are equal to all mass flow rates out of the
control volume plus the rate of change of mass within the control volume. This principle is
expressed mathematically by Equation 3-4.
Σ ṁ_in = Σ ṁ_out + Δm/Δt          (3-4)

where Δm/Δt = the increase or decrease of the mass within the control volume over a specified time period
Steady-state flow refers to the condition where the fluid properties at any single point in the
system do not change over time. These fluid properties include temperature, pressure, and
velocity. One of the most significant properties that is constant in a steady-state flow system is
the system mass flow rate. This means that there is no accumulation of mass within any
component in the system. |
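The mass balance and the steady-state condition can be checked with a short sketch. The flow rates below are purely illustrative and not taken from the handbook; the point is simply that steady state means the inflows and outflows balance, so Δm/Δt is zero.

```python
# Mass balance on a control volume: sum(m_dot_in) = sum(m_dot_out) + dm/dt
inlet_flows_lbm_s = [54.5, 10.0]    # hypothetical inlet mass flow rates
outlet_flows_lbm_s = [64.5]         # hypothetical outlet mass flow rate

dm_dt = sum(inlet_flows_lbm_s) - sum(outlet_flows_lbm_s)  # rate of mass change inside

if abs(dm_dt) < 1e-9:
    print("Steady state: no accumulation of mass in the control volume.")
elif dm_dt > 0:
    print(f"Mass is accumulating at {dm_dt:.1f} lbm/s.")
else:
    print(f"Mass is being depleted at {-dm_dt:.1f} lbm/s.")
```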
In abstract algebra, the idea of inverse element generalises the concepts of negation, in relation to addition, and reciprocal, in relation to multiplication. The intuition is of an element that can 'undo' the effect of combination with another given element. While the precise definition of an inverse element varies depending on the algebraic structure involved, these definitions coincide in a group.
In a unital magma
Let S be a set with a binary operation * (i.e. a magma). If e is an identity element of (S,*) (i.e. S is a unital magma) and a*b=e, then a is called a left inverse of b and b is called a right inverse of a. If an element x is both a left inverse and a right inverse of y, then x is called a two-sided inverse, or simply an inverse, of y. An element with a two-sided inverse in S is called invertible in S. An element with an inverse element only on one side is left invertible, resp. right invertible. If all elements in S are invertible, S is called a loop.
Just like (S,*) can have several left identities or several right identities, it is possible for an element to have several left inverses or several right inverses (but note that their definition above uses a two-sided identity e). It can even have several left inverses and several right inverses.
If the operation * is associative then if an element has both a left inverse and a right inverse, they are equal. In other words, in a monoid every element has at most one inverse (as defined in this section). In a monoid, the set of (left and right) invertible elements is a group, called the group of units of S, and denoted by U(S) or H1.
A left-invertible element is left-cancellative, and analogously for right and two-sided.
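To make these definitions concrete, the short Python sketch below scans a finite operation table for a two-sided identity and then lists each element's left and right inverses. The example operation (multiplication modulo 4) is my own choice for illustration, not one drawn from the article.

```python
# Hypothetical finite magma: multiplication modulo 4 on {0, 1, 2, 3}.
elements = [0, 1, 2, 3]

def op(a, b):
    return (a * b) % 4

# Find a two-sided identity element e (here it is 1).
e = next(c for c in elements
         if all(op(c, x) == x and op(x, c) == x for x in elements))

# For each element, collect its left and right inverses relative to e.
for x in elements:
    left = [y for y in elements if op(y, x) == e]    # y * x == e
    right = [y for y in elements if op(x, y) == e]   # x * y == e
    print(f"{x}: left inverses {left}, right inverses {right}")

# 1 and 3 turn out to be invertible (3 * 3 = 9 = 1 mod 4), while 0 and 2 have
# no inverses at all -- so this monoid is not a group.
```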
In a semigroup
The definition in the previous section generalizes the notion of inverse in group relative to the notion of identity. It's also possible, albeit less obvious, to generalize the notion of an inverse by dropping the identity element but keeping associativity, i.e. in a semigroup.
In a semigroup S an element x is called (von Neumann) regular if there exists some element z in S such that xzx = x; z is sometimes called a pseudoinverse. An element y is called (simply) an inverse of x if xyx = x and y = yxy. Every regular element has at least one inverse: if x = xzx then it is easy to verify that y = zxz is an inverse of x as defined in this section. Another easy-to-prove fact: if y is an inverse of x then e = xy and f = yx are idempotents, that is ee = e and ff = f. Thus, every pair of (mutually) inverse elements gives rise to two idempotents, and ex = xf = x, ye = fy = y, and e acts as a left identity on x, while f acts as a right identity, and the left/right roles are reversed for y. This simple observation can be generalized using Green's relations: every idempotent e in an arbitrary semigroup is a left identity for R_e and a right identity for L_e. An intuitive description of this fact is that every pair of mutually inverse elements produces a local left identity and, respectively, a local right identity.
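The construction y = zxz can be verified computationally. The sketch below uses a hypothetical semigroup of my own choosing (all maps from {0,1} to itself under composition, every element of which happens to be regular), purely to illustrate the definitions of pseudoinverse and inverse given above.

```python
from itertools import product

# Hypothetical semigroup: all maps {0,1} -> {0,1} under composition,
# with each map encoded as the tuple (f(0), f(1)).
elements = list(product((0, 1), repeat=2))

def compose(f, g):
    """Return f after g, i.e. the map x -> f(g(x))."""
    return tuple(f[g[x]] for x in (0, 1))

for x in elements:
    # Find a z with x z x = x, so x is (von Neumann) regular ...
    z = next(c for c in elements if compose(compose(x, c), x) == x)
    # ... then build y = z x z, the inverse promised by the text.
    y = compose(compose(z, x), z)
    assert compose(compose(x, y), x) == x and compose(compose(y, x), y) == y
    print(f"x = {x}: pseudoinverse z = {z}, inverse y = {y}")
```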
In a monoid, the notion of inverse as defined in the previous section is strictly narrower than the definition given in this section. Only elements in H1 have an inverse from the unital magma perspective, whereas for any idempotent e, the elements of He have an inverse as defined in this section. Under this more general definition, inverses need not be unique (or exist) in an arbitrary semigroup or monoid. If all elements are regular, then the semigroup (or monoid) is called regular, and every element has at least one inverse. If every element has exactly one inverse as defined in this section, then the semigroup is called an inverse semigroup. Finally, an inverse semigroup with only one idempotent is a group. An inverse semigroup may have an absorbing element 0 because 000=0, whereas a group may not.
Outside semigroup theory, a unique inverse as defined in this section is sometimes called a quasi-inverse. This is generally justified because in most applications (e.g. all examples in this article) associativity holds, which makes this notion a generalization of the left/right inverse relative to an identity.
A natural generalization of the inverse semigroup is to define an (arbitrary) unary operation ° such that (a°)°=a for all a in S; this endows S with a type <2,1> algebra. A semigroup endowed with such an operation is called a U-semigroup. Although it may seem that a° will be the inverse of a, this is not necessarily the case. In order to obtain interesting notion(s), the unary operation must somehow interact with the semigroup operation. Two classes of U-semigroups have been studied:
- I-semigroups, in which the interaction axiom is aa°a = a
- *-semigroups, in which the interaction axiom is (ab)° = b°a°. Such an operation is called an involution, and typically denoted by a *.
Clearly a group is both an I-semigroup and a *-semigroup. Inverse semigroups are exactly those semigroups that are both I-semigroups and *-semigroups. A class of semigroups important in semigroup theory are completely regular semigroups; these are I-semigroups in which one additionally has aa° = a°a; in other words every element has commuting pseudoinverse a°. There are few concrete examples of such semigroups however; most are
The Science of the Big Bang
Many of the major questions that exist about the big bang model of cosmology—such as what came before the big bang, do other universes exist, and how did the fundamental forces unite in those first fractions of a second—are simply curiosities to many people outside of the science world.
And yet, the answers, when we find them, will fundamentally shape the way we understand how we came to be where we are today. Few questions matter more.
Before the Scientific Revolution, there were significant knowledge gaps and technological barriers to understanding the first moments of the universe. Understanding the universe’s early history required telescopes and spectroscopes, among other equipment. It required a nuanced understanding of the properties of light, chemical elements, and atoms.
It required a dramatic twentieth-century revision of the scope of the universe. All of these scientific elements came together when Albert Einstein proposed a new theory of gravity that raised curious questions about the future—and past—of the universe.
OPPONENTS of the BIG BANG THEORY
The big bang theory, like most scientific theories, has its opponents. While most of the opposition comes from the realm of religious people who favor a literal interpretation of their religious texts, there have been small camps of scientists who held out against the big bang model of cosmology into the twenty-first century.
The big bang theory of cosmology is the standard model accepted by a majority of scientists. However, the steady state theorist Fred Hoyle continued to oppose the big bang theory throughout his life, as did his Synthesis of the Elements in Stars coauthors Margaret and Geoffrey Burbidge.
Geoffrey Burbidge created a revision of the steady state theory called the “quasi-Steady State.” The new version of the theory proposes that the universe expands and contracts over one-hundred-billion-year cycles.
According to the Burbidges, if stars can eject new types of matter as their paper with Hoyle showed, perhaps galaxies could also eject huge collections of matter to create new galaxies. Margaret Burbidge spent years observing quasars, theorizing that they could be a candidate for these ejected collections of matter.
In a 2005 interview with Discover magazine, Geoffrey Burbidge said:
The present situation in cosmology is that most people like to believe they know what the skeleton looks like, and they’re putting flesh on the bones. And Fred [Hoyle] and I would continuously say, we don’t even know what the skeleton looks like.
We don’t know whether it’s got 20 heads instead of one, or 60 arms or legs. It’s probable that the universe we live in is not the way I think it is or the way the Big Bang people think it is.
In 200 years, somebody is going to say how stupid we were.

In other words, Burbidge believed that too many scientists have prematurely accepted the current big bang model of cosmology. Geoffrey Burbidge has since died, and Margaret Burbidge is in her late nineties. Few other scientists have continued to oppose the big bang theory.
The predominant opposition to the big bang comes from those who disagree due to religious reasons.
The Institute for Creation Research, which bills itself as a “leader in scientific research within the context of biblical creation,” publishes articles such as “The Big Bang Theory Collapses” that characterize the big bang theory as irreparably flawed—though scientific studies show otherwise.
The ICR ultimately argues against any scientific cosmology, including the quasi-steady state model, because they all contradict the ICR’s belief that the Christian god created heaven and Earth.
The ICR is just one example of literal religious thinkers who oppose the big bang theory. There are many others who dismiss the model for similar reasons.
There are also many religious people who do not dismiss the big bang theory. Georges Lemaitre was himself a Catholic priest, and many religious thinkers from various faiths see no opposition between the big bang model of cosmology and their religious beliefs.
Some see their creator as the creative force that sparked the big bang, while others, as Lemaitre, consider the religious and scientific realms as entirely separate and able to exist separately without conflict.
THE BIG BANG THEORY Chronology
13.8 billion years ago Our universe begins with a big bang followed by cosmic inflation; light elements form within the first three seconds
13.76 billion years ago Recombination takes place, and light can travel freely through the universe; the cosmic dark ages begin
13.57 billion years ago The first stars form, ending the cosmic dark ages
5 billion years ago The sun is born
3.8 billion years ago Earliest life-forms appear on Earth
13th century Glass lenses developed
1543 Nicolaus Copernicus publishes On the Revolutions of the Heavenly Spheres with his heliocentric theory of the solar system
1572 Tycho Brahe observes a supernova, which shows that changes do happen in the celestial realm
1577 Brahe observes a comet and calculates that it had passed by Venus, another crack in the Aristotelian model of unchanging, crystalline spheres
1608 First two patents are filed for telescope designs, both by spectacle makers in the Netherlands
1609 Galileo Galilei builds a telescope and begins observing the sky; Johannes Kepler publishes his first two laws of planetary motion
1610 Galileo discovers Jupiter’s four largest moons and the phases of Venus
1633 Galileo is found guilty of heresy for supporting the heliocentric model of the universe and sentenced to house arrest for the rest of his life
1665 Sir Isaac Newton shows that white light contains all of the colors of the rainbow
1668 Newton builds the first reflecting telescope
1676 Ole Römer measures the speed of light
1800 Sir William Herschel discovers infrared light
1911 Ernest Rutherford discovers the atomic nucleus
1915 Albert Einstein publishes his general theory of relativity
1924 Edwin Hubble discovers that other galaxies exist outside of the Milky Way
1927 Georges Lemaitre publishes his first paper on an expanding universe
1929 Hubble discovers that galaxies are receding at speeds directly correlated to their distance
1931 Lemaitre publishes an article in the journal Nature with his theory of the “primeval atom”
1948 George Gamow and Ralph Alpher publish “On the Origin of Chemical Elements” theorizing how elements are formed from the big bang
1957 Fred Hoyle and three colleagues publish “Synthesis of the Elements in Stars” showing how the heavier elements are formed
1965 Arno Penzias and Robert Wilson discover the cosmic microwave background radiation
1989 George Smoot’s team launches the COBE satellite to study the CMB for the seeds of galaxies
1992 Smoot announces that COBE’s data showed small fluctuations in the CMB
Scientists believe that the initial cosmic inflation should have magnified quantum fluctuations in the early universe’s gravitational field, resulting in gravitational waves.
Gravitational waves were first predicted by Albert Einstein in his 1915 general theory of relativity. Gravitational waves, from the collision of two black holes, were first detected by LIGO (the Laser Interferometer Gravitational-Wave Observatory) in 2015.
The gravitational waves LIGO picked up confirmed Einstein’s prediction and also showed that black holes do collide. However, scientists are still on the hunt for gravitational waves specifically from cosmic inflation.
Inflation-caused gravitational waves would be too weak for the LIGO detector to pick up, but they would slightly twist the orientation of light, creating an effect called polarization.
In 2014, a group behind the BICEP2 radio telescope at the South Pole thought they had detected big bang gravitational waves. Their data had shown a curlicue pattern in the polarization of the CMB, which greatly excited the science community and made news headlines around the world.
However, the pattern turned out to be from dust in the Milky Way, which emits polarized light with the same curling pattern.
Research is still underway with BICEP3, an upgraded version of the original instrument whose observation period ran through much of 2016. BICEP3 includes more detectors, finer resolution, and a broader spectrum of light that will help the team distinguish any signals from inflation from galactic dust.
One of the major questions about the history of our universe is whether our universe is the sole universe.
If you currently have a basic assumption that our universe is the only universe, it can be challenging to imagine what it means for other universes to exist. But at one time, people thought our planet was the only planet, and then that the sun was the only sun, and then that our galaxy was the only galaxy.
According to Brian Greene, one of the most prominent theoretical physicists who studies and speaks on the idea of multiple universes:
What we have found in research … is that our mathematical investigations are suggesting that what we have thought to be everything may actually be a tiny part of a much grander cosmos. And that grander cosmos can contain other realms that seem to rightly be called universe just as our realm has been called the universe.

[Illustration: gravitational waves (disruptions in space-time) caused by two black holes orbiting one another]
One relatively simple example Greene gives begins by considering whether the universe is finite or infinite. Currently, physicists do not know which is true. Thus, an infinite universe can be considered a viable option.
Next, consider shuffling a deck of cards an infinite number of times. Eventually, the order of the cards will begin to repeat. So, too, would the configuration of particles in an infinite universe. If space goes on for infinity, there would inevitably be repeating configurations of matter just as there are repeating configurations of cards.
While these other universes have not yet been observed, there have been other successful theories that started in a similar way. Einstein’s general theory of relativity, for example, started as a theoretical set of equations and was later tested in various ways before becoming a well-supported, accepted theory.
The collection of multiple universes is called a multiverse. It is also sometimes called a bubble universe because the term describes how physicists imagine multiple universes forming.
To imagine a bubble universe, picture a boiling pot of water. The pot has bubbles of varying sizes. Some appear and pop immediately, others grow larger and last longer.
In this model, our region of space underwent its early cosmic expansion, which ended 13.8 billion years ago. While inflation in our region ended, inflation continued in other regions or “bubbles.”
The different inflation regions separated, creating an infinite number of universes. As they inflate, the bubbles grow apart and make room for more inflating bubbles.
The idea of a multiverse or bubble universe is controversial, in part because initially, scientists had no way to prove or disprove that a multiverse exists.
A core component of the scientific method is the ability to test a hypothesis, and without that component, a hypothesis essentially becomes a question of philosophy, not of science.
However, in recent years, astrophysicists have thought of a way to test the multiverse theory. Consider again the pot of boiling water analogy—as water boils, some of the bubbles that rise up will collide. We don’t know how dense the theoretical multiverse would be, but it is possible that another universe could have collided with our own.
Astrophysicists think that such a collision would be observable as imprints or “bruises” on the cosmic microwave background. The collision point would be around the spot of either higher or lower radiation intensity.
While it’s not a guarantee that our (hypothetical) bubble universe has collided with another bubble universe, finding such an imprint would lend significant support to the multiverse theory.
The Big Bang Theory’s Influence Today
The big bang may have taken place almost fourteen billion years ago, but as a society, we’re far from past it. The big bang theory is the standard model taught in astronomy and cosmology courses around the world.
Significant funding and research time is dedicated to understanding more about the big bang and filling in the remaining questions in the cosmological model.
It’s currently impossible to look back past the time of recombination (the moment when free electrons paired up with nuclei to form neutral atoms and photons of light were finally able to travel freely) with telescopes.
That means the first four hundred thousand years or so of the universe can be studied only indirectly, such as by observing and analyzing the cosmic microwave background radiation for clues to the early universe or by re-creating the conditions of the big bang.
Physicists use particle accelerators to reproduce those incredibly hot, incredibly dense early conditions of the universe. Particle accelerators are powerful instruments that produce and accelerate a beam of particles, typically protons or electrons, but occasionally entire atoms such as gold or uranium.
The particles are accelerated inside a beam pipe to greater and greater energies. When the particles have reached the desired energy levels, they collide with another beam or a fixed target, such as a thin piece of metal.
The collision produces a shower of exotic particles. Detectors record the particles and the paths they take after the collision, which gives physicists a wealth of data to sort through in the aftermath.
The most famous particle accelerator studying the early conditions of the universe is the Large Hadron Collider, which is buried underground along the French and Swiss border at CERN (the European Organization for Nuclear Research). The Large Hadron Collider (LHC) is also the world’s largest particle accelerator with a ring 17 miles (27 km) long.
The LHC has four detectors at different collision points on the ring that physicists use for different purposes. ATLAS is a general-purpose detector designed to investigate new physics, such as searching for extra dimensions and dark matter. CMS looks for similar things as ATLAS using different technology.
ALICE is a heavy ion detector used to study the physics of strongly interacting matter at extreme energy densities similar to those just after the big bang. LHCb investigates the differences between matter and antimatter.
In one recent experiment, the scientists at CERN used the ALICE detector to study the collision of heavy ions (such as gold and lead nuclei) at energies of a few trillion electron volts each.
The resulting collision, which CERN described as a “miniscule fireball,” recreated the hot, dense soup of particles moving at extremely high energies in the early universe.
The particle mixture was primarily made up of subatomic particles called quarks and gluons that moved freely. (Quarks are particles that make up matter; gluons carry the strong force that binds quarks together.)
For just a few millionths of a second after the big bang, the bonds between quarks and gluons were weak and the two types of particles were able to move freely in what's known as a quark-gluon plasma. (The quark-gluon plasma existed for a few microseconds after the universe began before cooling and condensing to form protons and neutrons.)
The LHC’s man-made fireball cooled immediately, and the individual quarks and gluons recombined and created many different types of particles, from protons, neutrons, antiprotons, and antineutrons to tiny particles called pions and kaons.
One early finding from the analysis of the quark-gluon plasma showed scientists that the plasma behaves more like a fluid than a gas, contrary to many researchers’ expectations.
Scientists at CERN are also using the Large Hadron Collider’s LHCb detector to determine what caused the imbalance between matter and antimatter after the big bang.
Other Uses for Accelerators
Particle accelerators were invented by experimental physicists to study particle physics, but they have since been used in many useful applications.
From Splitting the Atom to the Atomic Bomb
The first particle accelerator was built in 1929 by John Cockcroft and Ernest Walton in pursuit of splitting the atom to study the nucleus. They succeeded in 1932 when they bombarded lithium with high-energy hydrogen protons. Their experiment was the first time humans split an atom, a process called fission.
The experiment also confirmed Einstein's law E = mc². Walton and Cockcroft found that their experiment produced two atoms of helium plus energy. The mass of the helium nuclei was slightly less than the mass of the combined lithium and hydrogen nuclei, but the loss in mass was accounted for by the amount of energy released.
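As a rough illustration of that mass-energy bookkeeping, here is a small Python sketch for the reaction hydrogen + lithium-7 → two helium-4 nuclei. The atomic masses are approximate modern values supplied for illustration; they are not figures quoted in the text.

```python
# Approximate atomic masses in unified atomic mass units (u).
m_hydrogen = 1.007825    # H-1
m_lithium = 7.016004     # Li-7
m_helium = 4.002602      # He-4

U_TO_MEV = 931.494       # energy equivalent of 1 u in MeV (from E = mc^2)

mass_before = m_hydrogen + m_lithium
mass_after = 2 * m_helium
mass_defect = mass_before - mass_after        # mass "lost" in the reaction

energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect:     {mass_defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV")   # roughly 17 MeV per reaction
```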
In 1939, German physicists discovered how to split a uranium atom. Scientists across the world feared that the Nazis would build an atomic bomb capable of terrible destruction. (When the uranium-235 isotope is split, the fission begins a chain reaction that can grow large enough to cause an enormous explosion.)
At the urging of Einstein and other top physicists, in 1941 the United States government launched an atomic bomb development effort code-named the Manhattan Project.
Over 120,000 Americans worked on the Manhattan Project, and the government spent almost $2 billion on research and development. The effort was so top secret that Vice President Harry Truman didn't learn about the Manhattan Project until President Franklin D. Roosevelt died in office and Truman became president.

[Illustration: nuclear fission creates a chain reaction]
When Japan refused to surrender in 1945, Truman authorized two atomic bombs that were dropped on Hiroshima and Nagasaki on August 6 and August 9, respectively.
The bombs effectively ended World War II, but hundreds of thousands of Japanese people were killed and many more suffered terrible health effects from the radiation.
For example, particle accelerators are used to deliver radiation therapy, which is one of the standard methods for treating cancer. In one form, high energy X-rays are generated by beaming high-energy electrons at a material such as tungsten.
These X-rays are then directed at the site of the patient’s cancerous tumor to kill the cancer cells. Healthy tissue is also damaged by the radiation beam, however, and researchers are continually looking for ways to deliver the right dose of radiation to destroy the tumor while minimizing the impact on healthy cells.
Particle accelerators are also used to generate X-rays for medical imaging, such as when we have our teeth X-rayed at the dentist’s office or have a full-body magnetic resonance imaging (MRI) scan.
Outside of the medical world, particle accelerators are used for industrial purposes, such as manufacturing computer chips and producing the plastic used in shrink-wrap, for security purposes, such as inspecting cargo, and in many other applications.
Physicists are able to study many aspects of the big bang using particle accelerators, but their work is by no means over. There are still many enormous questions about the beginning of the universe. The major questions include:
How did all four forces combine in the first fraction of a second?
What gave particles their mass?
Why did particles outnumber antiparticles?
How can we detect and study the neutrinos believed to have been created in the big bang, and what will they tell us if we find them?
How can we detect and study the gravitational waves that are believed to have been created by the big bang?
Is our universe the only universe?
The Four Forces
The Standard Model of particle physics has been developed since the 1930s, with significant help from particle accelerators and their cataclysmic investigations into atoms and their component parts.
According to the Standard Model, everything in the universe is made of a few fundamental particles (such as the building blocks of matter, quarks, and leptons), governed by four fundamental forces (the gravitational, electromagnetic, weak, and strong forces).
The Standard Model explains how these particles and three of the forces relate to one another.
The electromagnetic force, which governs the propagation of light and the magnetism that allows a magnet to pick up a paper clip, reaches over great distances, as evidenced by starlight reaching Earth.
The weak force governs beta decay (a form of natural radioactivity) and hydrogen fusion and acts at distances smaller than the atomic nucleus. The strong force holds together the nucleus and acts at very small distances.
The electromagnetic, weak, and strong forces result from the exchange of a force-carrying particle that belongs to a larger group of particles called bosons. Each force has its own boson: the strong force is carried by the gluon, the electromagnetic force is carried by the photon, and the weak force is carried by W and Z bosons.
The Standard Model is able to explain the forces other than gravity, all of which operate on microscopic scales. Gravity, however, operates across large distances, and as of yet, there is only a theoretical boson called the graviton that corresponds to the gravitational force.
Even without gravity, however, the Standard Model is able to explain particle physics very well because the gravitational force has little effect at the small scale of particles.
Research has shown that at very high energies, the electromagnetic and weak forces unite into a single force. Scientists believe that at even higher energies the strong force unifies with the electroweak force, as described by a grand unified theory (GUT).
It is thought that at the extreme conditions immediately after the big bang all four forces would have been unified, but scientists do not yet understand how this could work.
Figuring out this unified force could help scientists understand more about the big bang and where our universe came from.
Matter Versus Antimatter
Matter and antimatter particles are created in pairs, which means that the big bang should have created equal amounts of matter and antimatter.
Matter and antimatter annihilate one another upon contact, and in the first fractions of a second, the universe was filled with particle and antiparticle pairs popping in and out of existence.
At the end of this process, when all the annihilations were complete, the universe should have been filled with pure energy—and nothing else.
This is clearly not the case. We are made of matter, and we inhabit a world and universe made of matter. What, then, happened such that matter survived?
Scientists calculate that about one particle per billion particles of matter survived. It’s unknown why this is the case, but observations of particles at the LHC give one potential explanation:
Due to a weak interaction process, particles can oscillate between their particle and antiparticle state before decaying into other particles of matter or antimatter. It could be that in the early universe an unknown mechanism caused oscillating particles to decay into matter slightly more often than they decayed into antimatter.
The survival of matter over antimatter is a topic of an ongoing investigation at physics institutions across the world.
When you look up at the sky at night with your unaided eye, you can see a beautiful array of stars and constellations. Without a telescope, it’s hard to garner any information from those stars beyond their position in the night sky and their relative brightness.
A crucial precursor to the invention of the telescope was the invention of the glass lens within it that created the necessary magnifying effect.
Light permeates and illuminates our daily lives, but it also has the power to reveal vast amounts of information about the cosmos. Knowing the speed of light, the different types of light, and what light spectra can tell us has been essential to studying the beginnings of the universe.
The Speed of Light
The speed of light is a significant metric in cosmology because when we know how fast light travels, we can use that speed in calculations of how far away different stars, galaxies, nebulae (clouds of gas and dust), and other phenomena are.
Aristotle thought that light traveled instantly, but today we know that light does have a finite speed. The first person to measure the speed of light with relative accuracy was the Danish astronomer Ole Römer in 1676.
Römer had studied Jupiter’s moon Io (discovered a half-century earlier by Galileo), which is regularly eclipsed by Jupiter as Io moves behind Jupiter in its orbit. Sometimes, the eclipse happened sooner than expected or later than expected.
Römer realized that the early or late appearance of Io from behind Jupiter was due to the varying distance between Earth and Jupiter. When Earth was farther away from Jupiter, the light had to travel farther and thus arrived at Earth later than astronomers had expected. Though Io's orbit around Jupiter is regular, the timing of the eclipse varied by up to about twenty minutes over the course of the year due to the varying distance between the two planets.

[Illustration: white light is composed of different wavelengths of light that refract at different angles when passing through a prism]
Römer estimated that the speed of light was 186,000 miles per second (299,344 km/s), which isn’t far off from its modern-day measurement of 186,282 miles per second (299,792 km/s).
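A back-of-the-envelope version of Römer's reasoning can be written in a few lines of Python. The round numbers below (a delay of about 1,000 seconds and an orbital diameter of about 186 million miles) are my own approximations for illustration, not values from the text.

```python
# Roemer-style estimate: speed of light = extra distance / extra delay.
orbit_diameter_miles = 186_000_000   # ~2 AU, the extra distance light must cross
delay_seconds = 1_000                # ~16-17 minutes of accumulated eclipse delay

speed_miles_per_s = orbit_diameter_miles / delay_seconds
print(f"Estimated speed of light: {speed_miles_per_s:,.0f} miles per second")
# About 186,000 miles per second, the same order as the values quoted above.
```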
Visible Light Spectra
Another misconception of light was that it was a pure, white substance. As far back as Aristotle, scientists had believed that light creates a rainbow when passed through a prism because the light itself is modified.
In 1665, Isaac Newton proved differently with a simple but convincing experiment. On a sunny day at Cambridge University, Newton darkened his room and made a small hole in the shutter for a beam of light to pass through. He placed a prism in the beam's path, and the white light spread into a spectrum of colors.

Newton then took a second prism and placed it upside down in front of the first prism. The spectrum of light, with all of its component colors, passed through the second prism and combined back into white light. In this way, Newton showed that white light contains all of the colors of the rainbow.
A NEW GRAVITY
One of the most astounding developments in physics of the twentieth century was Albert Einstein’s general theory of relativity. Published in 1915, the theory presented a geometric theory of gravity that revised the common understanding of gravity based on Isaac Newton’s work centuries earlier.
Newton’s theory of gravity held that gravity is a tugging force between two objects that directly depends on the mass of each object and how far away those two objects are from one another. The moon and Earth both attract one another, for example, but the more massive Earth exerts a relatively stronger attractive force on the moon.
In 1905, Einstein had published his special theory of relativity, which in part stated that space and time are inextricably connected in a four-dimensional continuum called space-time.
In his 1915 general theory of relativity, Einstein hypothesized that a massive object will create a distortion in space-time much like a bowling ball distorts the surface of a trampoline.
Just like a marble placed on the trampoline will roll inward toward the bowling ball, objects in space follow the distortions in space-time toward more massive objects. In broader terms, matter tells space-time how to curve, and curved space-time tells matter how to move. Light, too, Einstein predicted, would follow any warps in space-time.
On May 29, 1919, a solar eclipse proved Einstein’s theory of gravity was more accurate than Newton’s. During this particular eclipse, astronomers knew the sun would be passing through the Hyades star cluster.
The light from the stars would have to pass through the sun’s gravitational field en route to Earth, and because of the darkness from the eclipse, scientists could observe and measure this light when it arrived.
The English physicist and astronomer Sir Arthur Eddington, leader of the experiment, first measured the true positions of the stars in January and February of 1919.
When the solar eclipse happened, he measured the star positions again. The star positions appeared shifted due to the path-altering effect, known as gravitational lensing, of the sun’s gravitational field.
Einstein’s theory of general relativity didn’t just change the way we view gravity. It also created enormous new cosmological questions. Depending on how the equations in his theory were solved, the universe was either expanding or contracting.
Einstein preferred to believe it was staying static, so he added a term he called the “cosmological constant” to his equations to force it into stability. He would later come to regret this modification of his otherwise elegant equations.
Georges Lemaitre was born in Charleroi, Belgium, in 1894. Lemaitre initially studied engineering before volunteering for the Belgian army and serving as an artillery officer during World War I. During the war, Lemaitre witnessed the first poison gas attack in history and was decorated with the Croix de Guerre (Cross of War).
Post-war, Lemaitre switched his scientific focus from engineering to mathematics and physics. He obtained a doctorate from the University of Louvain in 1920 and was ordained as a priest in 1923.
Lemaitre received a traveling scholarship from the Belgian government, awarded for a thesis he wrote on relativity and gravitation, that allowed him to spend the subsequent years studying at Cambridge University, the Harvard College Observatory, and the Massachusetts Institute of Technology.
He returned to the University of Louvain in 1925 and became a full professor of astrophysics there in 1927.
That same year, Lemaitre proposed that the universe had begun at a finite moment in a highly condensed state and had expanded ever since. He published his theory in the Annals of the Scientific Society of Brussels, which was not widely read outside of Belgium.
Some who did read it dismissed his work as influenced by his theological studies, as the idea of a beginning could imply a divine creator. Lemaitre disliked religious readings of his cosmology, however, arguing that his theory “remains entirely outside any metaphysical or religious question.”
Everything changed for Lemaitre when his former Cambridge University professor Sir Arthur Eddington began to champion Lemaitre’s work. Eddington, who had observed the 1919 eclipse, had seen the initial publication but forgot about it for some time.
In 1930, three years after Lemaitre first published his expansion theory and one year after Hubble released his data on the expanding universe, Eddington wrote a letter to the journal Nature drawing attention to Lemaitre’s work. In hindsight, with Hubble’s data as evidence, Lemaitre’s work was significantly easier to accept.
Einstein had read Lemaitre’s 1927 paper and originally told him that his math was correct but his physics were abominable. After Hubble’s data was published, however, Einstein was much more interested in what Lemaitre had to say about cosmology, and the two had many walks and talks together over the following years.
After publishing on his primeval atom theory, Lemaitre’s academic work included cosmic rays, celestial mechanics, and pioneering work on using computers to solve astrophysical problems. He received numerous awards, including the Royal Astronomical Society’s first Eddington Medal in 1953. Lemaitre died in 1966.
Einstein, Hubble, and Lemaitre laid the foundation for modern cosmology. Einstein’s general theory of relativity raised curious questions about the universe, Lemaitre published a theory of a universe with a finite beginning and original highly condensed state, and Hubble’s data provided evidence that the universe was indeed expanding over time.
Over the subsequent decades, numerous scientists would propose theories that built on that foundation and make discoveries that helped create the standard big bang model of cosmology. George Gamow and Ralph Alpher proposed a modification of Lemaitre’s work in which all of the elements were formed in the big bang.
Fred Hoyle opposed the big bang model, but his work on stellar nucleosynthesis helped fill in scientific gaps in the theory. Arno Penzias and Robert Wilson unintentionally discovered strong evidence of the big bang, and George Smoot designed a massive experiment to find whether that evidence could also explain the formation of stars and galaxies over time.
George Gamow was born in Odessa, Ukraine (it was part of the Russian Empire at the time), in 1904. He loved science from a young age, growing interested in astronomy when his father gave him a telescope for his thirteenth birthday.
Gamow graduated from the University of Leningrad in 1928 and moved to Göttingen, Germany, where he developed a theory of radioactive decay as a function of quantum mechanics. He was the first to successfully explain why some radioactive elements decay in seconds while others slowly decay over millennia.
Like Lemaitre, Alexander Friedmann also solved Einstein’s equations of general relativity and proposed an expanding model of the universe in the 1920s.
Friedmann and his theory received significantly less attention than Lemaitre, however, due to Friedmann’s background as a mathematician (not a physicist) and his death in 1925, before Hubble had shown that the universe was indeed expanding.
Friedmann was born in 1888 in St. Petersburg, Russia. As a student, Friedmann showed a remarkable talent for mathematics and coauthored a paper published in Mathematische Annalen in 1905. During World War I, Friedmann joined the volunteer aviation detachment and flew in bombing raids.
After the war, Friedmann worked in various positions including as head of the Central Aeronautical Station in Kiev, as a professor at the University of Perm, and as director of the Main Geophysical Observatory in Leningrad. The cosmologist George Gamow briefly studied under Friedmann at the observatory.
Friedmann became interested in Einstein's general theory of relativity and published an article, "On the Curvature of Space," in 1922 that proposed a dynamic, expanding universe. Einstein quickly rejected Friedmann's work in the same journal, Zeitschrift für Physik, though he retracted his rejection in that journal in 1923.
Friedmann’s equations for the expansion of space, known as the Friedmann equations, show the fate of the universe as either expanding forever, expanding forever at a decreasing rate, or collapsing backward (dependent on its density).
Friedmann’s career in cosmology was cut short in 1925 when he died of typhus.
Gamow became a professor at the University of Colorado at Boulder in 1956 and worked there until his death.
The theoretical physicist Niels Bohr offered Gamow a fellowship at the Theoretical Physics Institute of the University of Copenhagen where, among other work, Gamow worked on calculations of stellar thermonuclear reactions.
Gamow also convinced the experimental physicist Ernest Rutherford of the value in building a proton accelerator, which was later used to split a lithium nucleus into alpha particles.
As much of Europe faced the pressures of communism and fascism in the 1930s, many intellectuals fled the continent (including Einstein). Gamow made several attempts to escape the Soviet Union, including an attempted crossing of the Black Sea into Turkey via kayak in 1932.
He finally got his chance to escape when he was invited to give a talk in Brussels on the properties of the atomic nuclei. Gamow arranged for his wife, Rho, also a physicist, to accompany him.
From there, the Gamows traveled through Europe and then to America in pursuit of an academic career outside of the Soviet Union.
Though he hoped for a prestigious position at a school known for its physics program, Gamow ended up accepting a position at George Washington University, which at the time didn’t have a strong reputation in physics.
Gamow quickly changed that, however, as his terms of acceptance involved expanding the physics department at GWU and establishing a theoretical physics conference series.
In addition to developing a theory of element formation in the big bang, Gamow’s research included stellar evolution, supernovas, and red giants. In later years, Gamow made contributions in biochemistry as well as a foray into what he called “the physics of living matter.”
After reading about Watson and Crick’s work on the structure of DNA in the journal Nature, he wrote his own note to Nature proposing the existence of a genetic code within DNA that was determined by the “composition of its unique complement of proteins” made up of chains of amino acids. Gamow’s ideas inspired Watson, Crick, and many other researchers to begin researching how DNA coded proteins.
Gamow also wrote numerous popular books designed to give non-physicists access to complex topics, including the Mr. Tompkins series about a toy universe with properties different from our own and One, Two, Three…Infinity. Gamow died in 1968.
Arno Penzias was born in Munich, Germany, in 1933 to a Jewish family. His family was rounded up for deportation to Poland when he was a young boy, but they returned to Munich after a number of days. His parents, aware of the danger they faced, sent Arno and his younger brother on a train to England in 1939.
His parents were able to join the two boys in England and, after six months there, they moved to New York City. Penzias attended the City College of New York, a municipally funded college dedicated to educating the children of New York’s immigrants.
After college, he spent two years in the Army Signal Corps, which develops and manages communication and information systems for the command and control of the military.
When he began his graduate studies in physics at Columbia University in 1956, that army experience helped Penzias gain research projects in the Columbia radiation laboratory. For his thesis, he built a maser amplifier, a device that amplifies electromagnetic radiation, for a radio astronomy experiment.
Penzias and Wilson made their discovery of the CMB on the Holmdel Horn Antenna, which detects radio waves. After finishing his Ph.D., Penzias began working at Bell Labs in Holmdel, New Jersey. There, Penzias was able to continue his work in radio astronomy, which led to his work with fellow radio astronomer Robert Wilson.
In an attempt to measure the radiation intensity of the Milky Way, the two accidentally discovered the cosmic microwave background (CMB) radiation, the relic radiation left over from the big bang.
Penzias rose through numerous levels of leadership at Bell Labs, eventually becoming vice president of research. As his own astrophysics research wound down, he wrote a book called Ideas and Information on the creation and use of technology in society.
When he approached mandatory retirement age, Penzias left the research and development world for Silicon Valley, where he became involved in the venture capital world.
Robert Wilson was born in Houston, Texas, in 1936. His father worked for an oil well service company, and while in high school Robert often accompanied his father into the oil fields.
His parents were both “inveterate do-it-yourselfers,” Wilson wrote, and he gained a particular fondness for electronics from his father. As a high school student, Robert enjoyed repairing radios and television sets.
Wilson attended Rice University, where he majored in physics. He obtained his Ph.D. in physics at Caltech, where he worked with radio astronomer John Bolton on expanding a radio map of the Milky Way. After graduation, he joined Bell Labs’ radio research department.
Together, Wilson and Penzias made numerous discoveries using radio astronomy, including a surprising abundance of carbon monoxide in the Milky Way and their Nobel Prize-winning discovery of the CMB.
Today, Wilson continues to live in Holmdel, NJ, with his family.
George Smoot grew up attending university biology courses with his mother. Both parents had resumed their college educations after World War II and the arrival of two children, and watching his parents study, learn, and dedicate time to education had a strong influence on George.
After some financial difficulties, the family moved to Alaska, where George spent his time outside exploring and studying the night sky.
George’s father worked for the United States Geological Survey, and as his reputation, as a field scientist grew, he traveled around the world to gather data on the properties and water flow of rivers.
George’s parents played a shaping role in his life through high school, as his father tutored him in trigonometry and calculus while his mother gave him lessons in science and history.
Smoot attended the Massachusetts Institute of Technology (MIT), where he majored in mathematics and physics. He stayed at MIT for his Ph.D. and then moved to Berkeley to work on particle physics at the Lawrence Berkeley National Laboratory.
There, he worked on the High-Altitude Particle Physics Experiment, in which Smoot and his colleagues searched for evidence of the big bang using balloon-borne detectors that would look for antimatter in cosmic rays.
Smoot eventually changed his focus to studying the CMB for more information about the early universe, which led to his Nobel Prize-winning discovery of fluctuations in the CMB.
Putting It Together
Over the twentieth century, the big bang theory slowly fell into place. In the next chapter, we'll discuss the initial versions of the theory and the current big bang model of cosmology.

[Illustration: the structure in the universe has become increasingly complex over time]
The Discovery of the Big Bang
As scientists studied the origins of the universe, their cosmological research focused on three major pillars: the expanding universe, nucleosynthesis, and the cosmic microwave background radiation. Today, all three give strong support for the big bang model while also providing clarity into how the universe evolved over time.
The EXPANDING UNIVERSE
"The whole story of the world need not have been written down in the first quantum like a song on the disc of a phonograph. The whole matter of the world must have been present at the beginning, but the story it has to tell may be written step by step."
In the 1920s, two academics independently worked through Einstein’s general relativity equations and found that the solutions suggested an expanding universe. One was Alexander Friedmann, whose work in the field was cut short by his untimely death from typhoid fever in 1925.
The other was Georges Lemaitre, who went on to become the first scientist to propose a theory of an expanding universe with a discrete beginning. Today, Lemaitre is known as the father of the big bang theory.
The Primeval Atom
Lemaitre was an avid scholar of general relativity and studied under one of its foremost experts, Sir Arthur Eddington, at Cambridge University in England. Lemaitre began writing about an expanding universe in the 1920s. In the early 1930s, he added the concept of a discrete origin to his theory.
In a 1931 letter published in the journal Nature, Lemaitre began by writing, “Sir Arthur Eddington states that, philosophically, the notion of a beginning of the present order of Nature is repugnant to him …”
For many scientists like Eddington, any cosmology with a finite beginning had too much of a creation narrative, which harkened back to mythology and supernatural forces, to be scientifically acceptable.
To Lemaitre, the notion of a beginning of the universe was not only quite acceptable, but it was also the logical conclusion from quantum theory.
If there was a constant total amount of energy in the universe and the number of distinct quanta was increasing, as theory held, then the implication must be that there were once far fewer quanta, perhaps a single quantum, that held all of the energy in the universe.
In the letter to Nature, Lemaitre suggested the possibility of a single unique radioactive atom that held all the mass in the universe before decaying into smaller and smaller atoms.
“The last two thousand million years are slow evolution: they are ashes and smoke of bright but very rapid fireworks,” he wrote in a paper called “The Evolution of the Universe.”
In a later text, The Primeval Atom, he wrote: “We can compare space-time to an open, conic cup … The bottom of the cup is the origin of atomic disintegration; it is the first instant at the bottom of space-time, the now which has no yesterday because, yesterday, there was no space.”
Lemaitre was the first physicist to propose a widely discussed model of cosmology with a finite beginning and expansion from a single atom.
However, his theory of cosmology wasn’t the big bang we think of today, which involves an explosion of pure energy that converts into all of the matter in the known universe. Lemaitre’s model was a colder, disintegrating model of the universe. The hot big bang model that we know today arrived seventeen years later.
Lemaitre worked on his cosmology in the years between World War I and World War II. World War II temporarily interrupted astrophysics as it diverted many top physicists to war projects and isolated others.
Lemaitre himself was cut off in Belgium after the Germans invaded, and he nearly died in an Allied forces’ bombing of his apartment building.
After the war, Lemaitre turned his focus to other scientific pursuits, including mathematical computing. Einstein turned his attention to finding a unified field theory that would unite general relativity with quantum mechanics.
Sir Arthur Eddington died in 1944. One generation of scientists stepped away from the cosmological question; a new generation of scientists stepped up.
Almost two decades after Lemaitre proposed a primeval atom that decayed into all the matter in the universe, George Gamow and Ralph Alpher published a paper detailing the foundation of the modern big bang theory.
Gamow’s early work included studying radioactivity and stellar physics. When World War II broke out, Gamow had plenty of time to focus on the implications of nuclear physics on cosmology.
While other American scientists were drafted to support the war effort, Gamow was left out because he had briefly served in the Red Army before fleeing Ukraine.
Years later, as a professor at George Washington University, Gamow and his doctoral student Ralph Alpher published a paper outlining their theory of the beginning of the universe and synthesis of matter.
The physicists Carl von Weizsäcker and Hans Bethe had both shown how stars convert hydrogen into helium through what is called the carbon-nitrogen-oxygen cycle, but at the time no physicist could explain how heavier elements, or even carbon, were formed within stars. Gamow believed they could have formed at the beginning of the universe.
Gamow and Alpher proposed that the universe did not begin as a single super atom but as hot, highly compressed neutron gas that underwent a rapid expansion and cooling.
The initial primordial matter decayed into protons and electrons as the gas pressure dropped due to the expansion. (Gas pressure is a function of molecular collisions. As the gas density of the early universe decreased, the particles collided less frequently and the pressure dropped.)
This began a process called big bang nucleosynthesis in which protons “captured” neutrons to form deuterium (an isotope of hydrogen). Neutron capture continued and formed heavier and heavier elements by adding one neutron and one proton at a time.
The relative abundance of elements was determined by the time allowed by the universe’s expansion (that is, the time in which the universe had the right conditions for nucleosynthesis to proceed).
This capped window of time, Gamow and Alpher believed, explained why light hydrogen was so prevalent and heavy elements like gold so rare.
Their 1948 paper contained a model for only the abundances of hydrogen and helium, but as these two elements account for 99 percent of the atoms in the universe, it was enough to make their paper credible.
In Alpher’s Ph.D. thesis, he wrote that the nucleosynthesis of hydrogen and helium took just three hundred seconds.
Over time, large stars fuse successively heavier elements, creating “shells” of different elements that are eventually released into the universe.
Alpher’s calculations showed that there should be about ten hydrogen nuclei for every helium nucleus at the end of the big bang, which matches modern observed abundances and lent further support to the model.
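As a rough check on what that 10-to-1 number ratio implies, the short Python sketch below converts it into mass fractions, treating the hydrogen and helium-4 nuclear masses as roughly 1 and 4 atomic mass units (rounded assumptions); the result, a bit over a quarter of the mass in helium, is in the same neighborhood as modern measurements.

```python
# Back-of-the-envelope check of what a 10:1 hydrogen-to-helium number ratio
# implies for mass fractions. The nuclear masses are rounded assumptions:
# H ~ 1 u, He-4 ~ 4 u.
n_hydrogen = 10          # hydrogen nuclei per helium nucleus
n_helium = 1
m_hydrogen = 1.0         # approximate mass of a hydrogen nucleus (u)
m_helium = 4.0           # approximate mass of a helium-4 nucleus (u)

total_mass = n_hydrogen * m_hydrogen + n_helium * m_helium
helium_mass_fraction = n_helium * m_helium / total_mass

print(f"Helium mass fraction: {helium_mass_fraction:.2%}")  # roughly 28.6%
```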
In another 1948 paper, Alpher and his coauthor Robert Herman calculated that the radiation from the beginning of the universe should today be about 5 degrees K. This prediction provided a way to test the theory and provide strong supporting evidence for its validity.
Gamow and Alpher’s work created a buzz in the scientific community because it explained the origin of the most abundant elements and provided a compelling narrative of the big bang. Their work created the basic model of the big bang theory we know today.
The Age of the Universe
One other roadblock that had to be cleared before the big bang model was embraced by the scientific community was the age of the universe.
After Hubble's discovery of the expanding universe, astronomers used his measurements to calculate the age of the universe. Galaxies move away from each other at a velocity given by v = H0 x D, where v is the observed velocity of the galaxy as it moves away from us, D is the distance to the galaxy, and H0 is the Hubble constant.
The Hubble constant represents the expansion rate of the universe, and Hubble’s 1929 estimate of this value was about 500 kilometers per second per megaparsec (Mpc). (Parsecs are measurements of distance in astronomy. One parsec is 3.26 light-years long, and one megaparsec is 3.26 million light-years long.)
The Hubble constant can be used to infer the age of the universe. If the universe had been expanding at a rate of 500 km/s/Mpc all the way to the present day, it would be about 1.8 billion years old.
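To see where figures like 1.8 billion years come from, here is a minimal Python sketch of the Hubble-time estimate, t ≈ 1/H0. It treats the expansion rate as constant (a simplification) and uses rounded conversion constants, so it lands near 2 billion years for Hubble's original value rather than reproducing the quoted figure exactly.

```python
# Rough age-of-the-universe estimate from the Hubble constant, t ~ 1/H0,
# assuming the expansion rate has stayed constant (a simplifying assumption).
KM_PER_MPC = 3.086e19          # kilometers in one megaparsec (rounded)
SECONDS_PER_YEAR = 3.156e7     # seconds in one year (rounded)

def hubble_time_years(h0_km_s_mpc: float) -> float:
    """Return 1/H0 in years for H0 given in km/s/Mpc."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC   # convert H0 to units of 1/s
    return 1.0 / h0_per_second / SECONDS_PER_YEAR

print(f"H0 = 500 km/s/Mpc -> ~{hubble_time_years(500):.2e} years")  # ~2e9
print(f"H0 =  70 km/s/Mpc -> ~{hubble_time_years(70):.2e} years")   # ~1.4e10
```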
However, geologists had shown through examinations of radioactive rocks that Earth was older than 1.8 billion years, and it was assumed that stars were even older than our planet. This timescale difficulty, as it was called, was a major flaw in the big bang models proposed by Lemaitre and Gamow.
It turned out, however, that Hubble’s measurements weren’t entirely accurate. The German astronomer Walter Baade discovered that there were two major types of Cepheid variable stars, which Hubble didn’t know when he used Cepheid variables to calculate the distance to the Andromeda galaxy.
The younger Population I stars are hotter, brighter, and bluer than the older Population II stars. Hubble had observed Population I Cepheid variable stars in Andromeda but mistook them for dimmer Population II stars. He saw a relatively bright star and, with the dimmer stars in mind, thought it must be much closer than it really was.
Baade recalculated the distance to Andromeda using the knowledge of both types of Cepheid variables. His new calculation showed that Andromeda was twice as far away as previously thought.
It also opened up a new look at the big bang model’s timeline: if the recession speeds remained the same but the distances doubled, the age of the universe was now around 3.6 billion years.
Baade formally announced his results in 1952, just four years after Gamow and Alpher published their first paper on big bang nucleosynthesis.
This was much better for the big bang model as it allowed for a universe that was older than Earth, but it wasn’t yet a complete success. There were other elements of the universe thought to be older than 3.6 billion years.
Baade’s student Allan Sandage took on the task of measuring the distances to the farthest galaxies. Previously, due to technological limitations, astronomers had to use a variety of assumptions to measure the distance to very far-off galaxies. One of those assumptions rested on finding the brightest star in a faraway galaxy.
By comparing its apparent (observed) brightness to the apparent brightness of the brightest stars in a closer galaxy, astronomers could come up with a rough estimate of how far away the distant galaxy was. However, Sandage showed that what astronomers thought was the brightest star was actually often an enormous, very luminous cloud of hydrogen gas.
That meant that the actual brightest star in the distant galaxies was much dimmer than previously assumed and the galaxies were much farther off than previously calculated. Sandage revised the age of the universe, first to 5.5 billion years in 1954 and eventually to between 10 billion and 20 billion years.
The new timeline allowed for all of the planets, stars, and galaxies to form and thus made the big bang model compatible with observations of the universe.
Today, the age of the universe is estimated to be 13.8 billion years, within Sandage's later estimated range. (The Hubble constant, H0, is now estimated to lie somewhere between 45 and 90 km/sec/Mpc.)
The current age estimate has been calculated using a variety of methods, including measurements of stellar evolution, the expansion of the universe, and radioactive decay, with all three methods in agreement on the universe's age.
Mapping the CMB
Those who supported the big bang model believed that the early universe must not have been perfectly uniform, for otherwise stars and galaxies couldn’t have formed.
Instead, they imagined a universe where some areas were denser than others, creating regions where gravity would eventually attract more matter and cause the regions to collapse under their own weight.
There was no proof of these variations in density when Penzias and Wilson first discovered the CMB. The signal they picked up was uniform across time and space. The American astronomer George Smoot hoped that if he measured the CMB with more powerful instruments, he would find the predicted density variations.
Smoot worked at the University of California at Berkeley, where he participated in several 1970s experiments using giant balloons to lift radiation detectors tens of kilometers above Earth.
The scientists hoped that this high altitude would remove any radiation from microwaves in Earth’s atmosphere. However, the cold temperatures at that altitude could wreak havoc on the detectors and the balloons were prone to a crash-landing.
In an effort to find other means of studying the CMB from high altitudes, Smoot used a United States Air Force spy plane to carry a detector aloft. The data gathered ended up showing that the Milky Way is moving through the universe at a speed of about 600 kilometers per second, which was new and interesting information, but not the data Smoot had set out to find.
While his 1976 spy plane experiment was underway, Smoot began working on designing a satellite detector called COBE, or the Cosmic Background Explorer.
COBE contained several detectors including a Differential Microwave Radiometer (DMR) that measured the CMB radiation from two separate directions and found the difference. The DMR could thus detect whether the CMB was perfectly smooth or had small fluctuations.
COBE was scheduled to launch in 1988, but the experiment ran into a problem when the Challenger space shuttle exploded in January of 1986. NASA upended its flight schedule and called off the scheduled COBE launch.
The COBE team explored opportunities to launch on a foreign rocket, namely with the French, but NASA objected. Eventually, NASA agreed to send COBE up in a Delta rocket, which was much smaller than they had initially planned for.
The team quickly redesigned COBE to be smaller and lighter so the sophisticated equipment could fit in the rocket.
COBE launched on November 18, 1989. It took about six months to complete an initial rough, full-sky survey. The initial data showed no variations, but when the first thorough full-sky map was completed in December of 1991, the data showed something more.
The peak wavelength of the CMB radiation varied by 0.001 percent, a tiny variation but significant enough to show that the early universe was inhomogeneous. The variations were big enough to cause matter to clump and, eventually, galaxies to form.
Smoot’s team announced their results in April of 1992. It was one of the most significant discoveries in the history of cosmology, for the COBE results showed that the big bang model of cosmology could explain the history of the universe from its birth to the formation of galaxies to present day. Subsequent missions by the WMAP and Planck satellites confirmed and refined COBE’s measurements of the CMB.
By the 1990s, all three pillars of the big bang model were in place and the big bang became the standard cosmological model for our universe. Over the past few decades, scientists have arrived at a sophisticated, detailed standard model of those first few moments in the infant universe.
The Standard Model
In the first half of this chapter, we looked at how the discovery of the big bang fell into place, piece by piece. Now, let’s dig in deeper to the details of how it worked.
The standard model of the big bang today contains details of the first fractions of a second. The numbers at this time in the universe’s history are both astronomically small and astronomically large.
As a quick refresher on scientific notation, 10^-43 seconds is the equivalent of a decimal point followed by 42 zeroes and then a one. Conversely, 10^32 degrees K is a one followed by 32 zeroes.
The scale of these numbers shows the rapid speed at which the universe was changing and the extreme conditions present in that early period. As the universe expanded and cooled, the changes began to happen within more comprehensible timescales and environmental conditions. |
Video gamers are advancing the frontiers of science. Already, they've played games that ultimately help map the shapes of proteins. Now they're also advancing scientists’ knowledge of genetics.
A Web-based video game called Phylo allows game players to arrange sequences of colored blocks that represent nucleotides of human DNA. The game asks the players to recognize patterns and match them up in closely related species, comparing their results to a computer and scoring them. Phylo was developed by Dr. Jerome Waldispuhl of the McGill University School of Computer Science and collaborator Mathieu Blanchette.
By looking at the similarities and differences between these DNA sequences, scientists can get insight into genetically based diseases. For example, one part of the game shows a human and a mouse, and the challenge is to align the nucleotides correctly in a gene connected with familial Alzheimer’s disease. Once that is completed, the two sequences are compared with that of a dog and a new level of the game starts.
The trick is aligning the nucleotides: the order can't be changed, but figuring out where along the sequence each one should go is a challenge, especially with less closely related species. That kind of intuitive pattern recognition is not something computers are very good at. This doesn't mean humans can replace computers; in this instance, machines did a lot of the heavy lifting. But misaligned sequences were a problem the computers weren't always able to spot.
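The article does not spell out Phylo's scoring rules, so the toy Python sketch below uses assumed match, mismatch, and gap penalties just to illustrate the kind of trade-off players are optimizing when they slide sequences along and insert gaps.

```python
# Toy illustration of scoring one fixed alignment of two DNA sequences.
# The match/mismatch/gap values are illustrative assumptions, not Phylo's
# actual scoring rules; '-' marks a gap inserted to slide a sequence along.
MATCH, MISMATCH, GAP = 1, -1, -2

def alignment_score(seq_a: str, seq_b: str) -> int:
    assert len(seq_a) == len(seq_b), "aligned sequences must be equal length"
    score = 0
    for a, b in zip(seq_a, seq_b):
        if a == "-" or b == "-":
            score += GAP
        elif a == b:
            score += MATCH
        else:
            score += MISMATCH
    return score

# Different placements of a gap give different scores:
print(alignment_score("ACGTTCA", "ACGTACA"))   # mostly matches
print(alignment_score("ACGTTCA", "-ACGTAC"))   # gap shifts everything by one
```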
The game was launched in November 2010. Since then, 17,000 registered players have contributed more than 350,000 solutions to sequence alignment problems. An added perk: players can also choose which specific disease they'd like to help with, such as cancer or leukemia.
- Up to 73 million sharks are killed every year in support of the global shark fin industry.
- 926,645 square miles (2.4 million square kilometers) of the ocean is off limits to shark fishing.
- Losing sharks could have consequences that affect the entire ocean ecosystem.
- Bull sharks are among the few shark species that frequent freshwater, and they can remain there for long periods of time.
- Osmoregulation is the process that allows sharks to cellularly adapt their body fluids to cope with their environment.
- Bull Sharks can tolerate hypersaline water as high as 53 parts per thousand.
- 90% of a shark’s body secretion into the water is done by the gills, in order to balance internal pH.
- The spiral valve intestine in sharks increases the surface area of digestion and conserves space within the body.
- Some sharks will consume other sharks for food.
- 80% of the roughly 350 shark species grow to less than 5 feet, are unable to hurt people, and rarely encounter them.
- Adult bull sharks are not known to have any natural predators, except humans.
- The largest bull shark caught on rod and reel weighed 771 lb. 9 oz.
- Humans are more at risk of dying from a dog attack than a shark attack.
- Bull sharks get their name from their blunt, rounded snout and robust body, which they use to head-butt their prey before biting.
- Bull sharks can live up to 16 years.
The Olmec were the first true Mesoamerican civilization. Mesoamerica is a region and cultural area in the Americas extending approximately from central Mexico to Belize, Guatemala, El Salvador and the northern part of present-day Honduras, and before the Olmec it held only small villages and scattered groups of farming people. The Olmec flourished from about 1200 BCE to 400 BCE in the hot, tropical lowlands of southern Mexico, and they had already emerged and disappeared long before the Spaniards arrived; much of their history remains shrouded in mystery, and we know far less about the Olmec than we do about later peoples.

The Olmec built large cities that served as centers for their religious rituals. The oldest, San Lorenzo, contained pyramids and other stone monuments, while at La Venta a 30-foot-high pyramid towered above the city. Skilled Olmec artisans also carved a series of colossal stone heads, probably portraits of their rulers; examples survive at La Venta and San Lorenzo in the present-day states of Tabasco and Veracruz.

Olmec life was defined by farming based on maize, beans and squash; a stable diet helped the culture grow and made it possible to store food. Trade was equally important. Highly desirable items such as obsidian knives, animal skins and salt were routinely exchanged between neighboring cultures, and the Olmec created long-distance trade routes that put them in contact with peoples from the Valley of Mexico well into Central America.

That reach is one reason the Olmec are often called the mother culture of Mesoamerica. Their religion, led by shamans who were responsible for the well-being of the people and who battled demons and rival shamans, shaped the social development and mythological world view of the region, and echoes of Olmec supernatural figures appear in the religions of nearly all later pre-Columbian cultures. Their ritual ballgame and other cultural practices were likewise handed down to successive civilizations. A few writers have even claimed resemblances between Olmec iconography and Shang Chinese writing, though most Mesoamericanists do not accept such comparisons.

By the time Olmec civilization had collapsed, another people, the Zapotec, were developing an advanced society to the southwest, in what is now the Mexican state of Oaxaca, and the influence of the Olmec continued in the civilizations that followed them.
At the beginning of World War II, the Maginot Line was quickly outflanked (May, 1940). The development of airpower, heavy artillery, and mechanized warfare further proved the inefficacy of such massive defensive systems and brought them to an end. Despite the value of the German Siegfried Line, which long withstood heavy assault in 1944, and despite the usefulness of the Stalin line in channeling the German attack on Russia, field fortifications predominated over fixed fortifications in World War II. However, underground shelters were used for protection from air attack, and the Germans constructed large concrete shelters to protect submarines in harbor. The Japanese fortified Pacific islands with caves and with simply constructed pillboxes and bunkers. Similar fortifications were used in the Korean and Vietnam wars. The last years of the Korean War were virtually trench warfare. In Vietnam, the Viet Cong perfected underground complexes in the field, whereas the United States built a network of installations and artillery firebases protected by air forces and the usual land defenses.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. |
The mystery of Stonehenge is one of the longest-running in human history. Erected 5,000 years ago, the giant stones have been the subject of movies, conspiracy theories and a lot of research, yet have remained shrouded in mystery. Now, scientists can finally say where the stones came from—but they’re still totally stumped as to how, and why, they got to Stonehenge.
About a hundred years ago, scientists thought they had figured out the source of the stones. But a new analysis of the stones’ composition suggests that they came from an outcropping even further away. Tia Ghose at Live Science explains:
In 1923, geologist Herbert H. Thomas pinpointed the source of one type of the stones, known as dolerite bluestones, to a rocky outcropping known as Carn Meini on high ground in the Preseli Hills of western Wales. He became convinced the other bluestones (made from other types of igneous, or magmatic, rock) came from the nearby location of Carn Alw. That, in turn, lent credence to the theory that Stonehenge's builders transported the stones south, downhill, to the Bristol Channel, then floated them by sea to the site.
This new study, however, found that the levels of elements like chromium, nickel, magnesium oxide and iron oxide point to a slightly different location. According to this new analysis, about half the bluestones at Stonehenge actually came from a place called Carn Goedog, about 1.8 miles further north.
Adding two miles to the stones' trip is interesting, but it is dwarfed by how far the stones had to travel to reach their current resting place. Somehow, 5,000 years ago, people managed to transport the giant rocks 140 miles. How they did that is still a mystery.
The Great Vowel Shift refers to a period, roughly from the twelfth century to the eighteenth century in England, when all of the long vowels in English were raised (with the most substantial developments coming in the 15th and 16th centuries), creating a drastic shift away from the pronunciations found in Middle English.
Origins of the Vowel Shift
Many linguists believe that conflict with France during the Hundred Years' War (1337-1453) inspired the English to differentiate their pronunciation from the French in an act of defiance, thus creating a modern standard. A counter theory by Matthew Giancarlo, discussed in Seth Lerer's book Inventing English: A Portable History of the Language, hypothesizes that this standardization came about through social conditioning. Consequently, the vowel shift itself is seen as one of many possible dialects that could have been adopted during this period, and it may ultimately have been adopted to meet the needs of an increasingly print-oriented society.
Articulation Points of English Words
The vowel shift did not affect short vowels; however, each of the long vowels (classified as low, mid, high or back depending on where they are pronounced in the mouth) moved up a pronunciation class. For instance, low vowels became mid vowels, and high vowels became diphthongs, syllables containing two distinct vowel sounds. Back vowels also moved forward, encroaching on the pronunciation space of other vowel groupings until front and back vowels were pronounced similarly. Latin etymologies served as a basis for the respelling of a great many English words. In some instances the pronunciation of a given word was modified to phonetically reflect the new spelling, while other written words retained one or more silent consonants. These changes necessitated the construction of a modern alphabet which diverged from Latin.
How the Vowel Shift Impacted Spelling
Although the shift encompassed all of Britain and eventually the world, the transition to modern English occurred at varying speeds. The dialects which once defined a given area faded away as the London standard dialect gained traction with the advent of print. Modern English also spread throughout the world on account of Britain's colonization and trade efforts, and previously foreign words from Spain, Portugal and the Netherlands were integrated into English. This flexibility was not present in Old English, characterized by a lack of word endings and a dependency upon rough translations for foreign words. Printers in different areas had contradictory pronunciation guidelines, leading to many English words being spelled non-phonetically.
Development is the process by which a country improves its standards of living over time: what a country does to enhance the lives of its population. Several indicators are used to measure development across economic, environmental and social dimensions. Economic indicators include GDP per capita; as the economy grows, so do jobs and development. Social indicators include access to clean water. Indicators also tend to rise together, as shown in Figure A below, where the UK's life expectancy grows at the same time as its GDP per capita. In addition, different countries in the same region develop at a relatively similar rate, as in Figure B, where France's and Britain's life expectancy and GDP per capita rise at a similar, steady pace. All in all, development is the ability of a country's people to live happy, long, healthy lives; it is not just a country's income.

In the study of development, geographers also study patterns in countries' development. The major pattern is the North-South Divide: although there are anomalies such as Australia, countries in the north are generally more developed than countries in the south.

The HDI (Human Development Index), created in 1990, was a new way of measuring development, invented for the sole purpose of putting people's lives at the centre of the study of the development of the world. It combines several indicators into a summarised idea of a country's development, hiding extremes and creating a single figure for simple comparison, and it reflects heavily on the quality of people's lives rather than on income alone; GDP per capita was the generally used figure at the time. Another problem with using income is that income can be spent on things like gangs and the military just as easily as on medicine and schools. Because the HDI also includes measures such as life expectancy, its relation to income reflects only the good uses of income, in a way GDP per capita cannot. It ranks every country on three basic human needs, health, education and income, with figures between 0 and 1.

Using a single indicator shows only one aspect of development. GDP per capita, for example, can be inaccurate because a country may contain very poor people in absolute poverty alongside the very, very rich, and it concentrates on goods, income and trade rather than the quality of human lives. A composite indicator such as the HDI gives a much more balanced idea of development. It has weaknesses of its own: it takes each country in general, even though levels of development can differ greatly between areas or ethnic groups within a country, and it is not gender specific, even though women in some countries have a much worse standard of living than the men in the same country. Despite this, the HDI is still a better indicator of development than any single indicator. In summary, we use it so often because it combines the most important figures from all aspects of development into one number that hides odd extremes and makes it easy to compare different countries' development levels.
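As a rough sketch of how a composite index of this kind can be built, the Python below scales three indicators onto a 0-1 range and combines them with a geometric mean, in the spirit of the post-2010 HDI formula; the goalpost values and example inputs are illustrative assumptions, not the UNDP's official figures.

```python
import math

# Rough sketch of a composite development index in the spirit of the HDI.
# The min/max "goalposts" and the example values are illustrative assumptions,
# not the UNDP's official figures.
GOALPOSTS = {
    "life_expectancy": (20.0, 85.0),      # years
    "adult_literacy": (0.0, 100.0),       # percent
    "gdp_per_capita": (100.0, 75000.0),   # US dollars (log scale used below)
}

def dimension_index(value, low, high, use_log=False):
    """Scale a raw indicator onto 0-1 between its goalposts."""
    if use_log:
        value, low, high = math.log(value), math.log(low), math.log(high)
    return max(0.0, min(1.0, (value - low) / (high - low)))

def composite_index(life_expectancy, adult_literacy, gdp_per_capita):
    health = dimension_index(life_expectancy, *GOALPOSTS["life_expectancy"])
    education = dimension_index(adult_literacy, *GOALPOSTS["adult_literacy"])
    income = dimension_index(gdp_per_capita, *GOALPOSTS["gdp_per_capita"], use_log=True)
    # Geometric mean of the three dimension indices.
    return (health * education * income) ** (1 / 3)

# Hypothetical example values, not the actual 2009 report figures:
print(round(composite_index(79, 95, 22000), 3))   # a more developed country
print(round(composite_index(50, 70, 1100), 3))    # a less developed country
```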
For a comparison between two different countries, I chose Rwanda and Portugal. This is because my old school has a sister school in Rwanda, so I am interested in its level of development, and Portugal is the foreign country that I have visited the most.

I chose Life Expectancy, Adult Literacy and GDP per Capita as my three development indicators because each one represents, in my opinion, the key figure for a different aspect of development: social, economic and demographic. This means that the figures in the table above and the graph below cover all aspects of the two countries' development and give a good, balanced, overall picture. The data is from the 2009 report.

The HDI ranks of the two countries are 34 for Portugal and 167 for Rwanda, as shown in the table above, which shows that the two countries are very far apart in development. The HDI rank of Portugal has fallen by one place since the 2007/08 report, while Rwanda's rank has remained the same, with its HDI figure barely rising. This may be partly due to the First and Second Congo Wars: Rwanda and the surrounding African countries were at almost constant war between 1996 and 2003, during which money may have been spent on the military rather than on development, causing problems for development across Central Africa.

Both countries fit the general North-South Divide, with Rwanda in the South being less developed than Portugal in the North. Statistics from the UN's development reports also show that 32.9% of Rwanda's population live in poverty and 23% of children under 5 are underweight for their age, ranking the country 100th out of 135 on the Human Poverty Index. However, the gender-specific readings for both Rwanda and Portugal, which measure the development gap between males and females, show 99.8%, meaning the difference between genders in both countries is well under one per cent. This figure surprised me a little.

Figures that I did not include in the charts above include child mortality, which is just as significant. Rwanda has the 9th highest child mortality rate in the world, with 181 of every 1,000 children dying before the age of 5. On the physical side, Rwanda is hard to access, being a very elevated and mountainous country; in fact it is known by some as 'the land of a thousand hills'. This might have affected the ability of health care and food to reach its people, which may be why the child mortality rate is so high and life expectancy so low in Rwanda.

The figures from Portugal and Rwanda demonstrate the link between development and education, quality of health care and income. The adult literacy rates of the two countries reflect their HDI ranks, with Portugal, the more developed country, having a higher rate than Rwanda, the less developed country. Life expectancy shows the same link, with Portugal's being much higher than Rwanda's, and jobs and income are linked too, with GDP per capita being much, much higher in Portugal.
Statistical vs. Biological Significance
The conclusion that there is a statistically significant difference indicates only that the difference is unlikely to have occurred by chance. It does not mean that the difference is necessarily large, important, or significant in the common meaning of the word. An example is the measurements made to determine whether or not Surveillance Towed Array Sensor System Low Frequency Active (SURTASS LFA) sonar transmissions affect the singing of humpback whales near Hawaii. The following graphs show the distribution of humpback whale song length (in minutes) during control periods when no sounds were being played (top) and during experimental conditions when LFA sounds were being played (bottom).
The mean length of the whale songs was 29% greater during transmissions. Given the measurements that were made, there is only a 4.7% probability that this difference is due to chance, and the scientists doing the study therefore concluded that the result is statistically significant. However, these data are from measurements made on a small number of whales that were followed before, during, and after transmissions. Since the number of measurements is relatively small, the probability that the scientists could have made a false negative (Type II) error is 50%. The power of the measurements to detect a difference is low.
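The flavour of both calculations can be illustrated with a short Python sketch using scipy's two-sample t-test and statsmodels' power calculator; the sample sizes and song-length numbers below are made up for illustration and are not the actual whale measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Illustrative sketch of the two ideas in this example: a p-value from a
# two-sample comparison, and the power of that comparison. The numbers are
# made up for illustration; they are not the actual whale-song data.
rng = np.random.default_rng(0)
control = rng.normal(loc=15.0, scale=9.0, size=16)         # song lengths, minutes
exposed = rng.normal(loc=15.0 * 1.29, scale=9.0, size=16)  # ~29% longer on average

t_stat, p_value = stats.ttest_ind(exposed, control)
print(f"p-value: {p_value:.3f}")   # a small p-value is called "statistically significant"

# Power: the chance of detecting an effect of this size with samples this small.
effect_size = (exposed.mean() - control.mean()) / np.sqrt(
    (exposed.var(ddof=1) + control.var(ddof=1)) / 2
)
power = TTestIndPower().power(effect_size=effect_size, nobs1=16, alpha=0.05)
print(f"power: {power:.2f}")       # low power means a high risk of a Type II error
```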
Although the scientists concluded that the difference in the length of whale songs in the presence and absence of transmissions is statistically significant, the two graphs look similar and show that there is considerable variation in the length of humpback whale songs. The standard deviations of the distributions are considerably greater than the difference in the means. The response to the transmissions, that is the magnitude of the increase in song length, is well within the normal variation in the absence of the transmissions. The songs are sung exclusively by males and are thought to be displays to attract mates. Large changes in singing behavior might therefore have significant consequences to a humpback whale population. It seems unlikely, however, that changes in singing behavior that are well within the natural range of variability pose such a risk. The conclusion is that although the difference in song lengths is statistically significant, it is unlikely that it is biologically significant. |
The Treaty of Versailles was the controversial peace settlement between Germany and the Allied powers that officially ended World War I. The Paris Peace Conference opened in January 1919, with meetings held at various locations in and around Paris, and the treaty was signed at Versailles on June 28, 1919; a further protocol was signed by Germany at Paris on January 10, 1920. Germany resentfully signed what became the most famous treaty of the era, and years of readjusting its terms followed. The United States, where the treaty was submitted to the Senate but never ratified, later concluded a separate peace with Germany stipulating that it would enjoy the same rights as the other Allied powers.

The treaty's 440 articles had both an economic and a nationalistic effect on Germany. The blame for the war was placed entirely on Germany, and the substantial reparations that followed left the German economy close to bankruptcy. Opinion on the settlement was divided from the start: many people in France believed the treaty was too lenient with Germany, while in Germany it provoked unrest and lasting resentment, and later writers such as the French economist Étienne Mantoux disputed the most influential analyses of the treaty's economic impact.

Many historians count the treaty among the chief contributing causes of the Second World War, and assessing its impact on Germany between 1919 and the early 1930s remains a standard exercise in the study of the period.
Physics of Bumper Cars
Bumper cars have been a source of amusement and fascination for the young and young at heart for generations. Stepping into a bumper car, strapping oneself in, and letting loose on the other drivers is a pastime that continues to gain fans to this day. Surprisingly, this incredibly fun electric car ride has its roots in hard science. In some circles, bumper cars are considered a great real-world example of Newton’s Laws of Motion. Thankfully, understanding Newton’s laws is as easy as watching bumper-car drivers crash and burn rubber.
Newton’s Laws of Motion
When it comes to bumper cars, Newton’s laws are the driving force behind much of the fun that you have at amusement parks. Newton’s first law, the law of inertia, covers how objects move when they’re in motion. This law says that objects that are moving stay in motion unless they’re influenced by an outside force, and the same holds true for objects that are at rest. The law of acceleration, Newton’s second law, states that an object’s mass and the force applied to it will influence how much the object moves. Under this law, it’s understood that bigger objects take more force to move than smaller objects and when more force is applied, more acceleration can be witnessed. Lastly, Newton’s third law, the law of interaction, simply states that every action can be expected to produce an equal and opposite reaction. This law especially is what can give a bumper car its trademark jolts of fun.
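A tiny Python sketch of the second law makes the point concrete; the force and masses below are made-up illustrative values, not measurements from any real ride.

```python
# Newton's second law, a = F / m: the same push accelerates a lighter
# bumper car (car plus small driver) more than a heavier one.
# The force and masses are assumed illustrative values.
PUSH_FORCE_N = 300.0   # newtons, applied for the same instant to each car

cars = {
    "light car + small driver": 180.0,   # kilograms (assumed)
    "heavy car + large driver": 260.0,   # kilograms (assumed)
}

for name, mass_kg in cars.items():
    acceleration = PUSH_FORCE_N / mass_kg   # metres per second squared
    print(f"{name}: {acceleration:.2f} m/s^2")
```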
- Bumper Cars: The University of Virginia gives an academic overview of bumper cars and the physical forces that guide their movements and trajectories.
- Amusement Park Physics (PDF): Utah State University provides examples of how various amusement park and carnival rides illustrate known physical laws.
- Newton’s Laws of Motion: The National Aeronautics and Space Administration (NASA) gives a brief summary of Newton’s Laws of Motion and provides educational activities for all grade levels.
Though in the real world, collisions can mean serious accidents or injuries for people in vehicles, bumper cars are created with special rubber linings on the outsides of the cars to protect against damage. These rubber linings are what soften the impact and help the cars bounce off of each other. While electrical energy drives the cars to collide with each other, the rubber acts as a special barrier between cars, which can alter movement and angles of impact. In some cases, the rubber lining will readjust the direction of the bumper car to create an entirely new trajectory.
- Momentum and Collisions: The University of Louisville Department of Physics discusses momentum and collisions and illustrates the concepts using mathematical equations.
- Conservation of Momentum: Saint Ignatius High School illustrates the conservation of momentum using easy-to-understand illustrations.
- Types of Collisions: The University of Alaska Fairbanks discusses different types of collisions and how they are affected by basic laws of physics.
Momentum, Impulse, and Collisions
Several different variables can influence how someone will experience driving a bumper car and colliding into another. The two cars’ masses, the weights of the drivers, and the velocities at which each is traveling can affect how each car and driver reacts after a collision. If the two people in the cars have different masses, the larger driver will move around less upon impact. Similarly, the driver who is traveling fastest will move the other car more when contact is made.
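A short Python sketch shows how conservation of momentum plays out in an idealised head-on collision; real bumper cars lose some energy to their rubber bumpers, and the masses and speeds here are assumptions for illustration.

```python
# One-dimensional elastic collision between two bumper cars, using
# conservation of momentum and kinetic energy. Masses include the drivers;
# all numbers are illustrative assumptions.
def elastic_collision(m1, v1, m2, v2):
    """Return the velocities of the two cars after a head-on elastic collision."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

m1, v1 = 180.0, 3.0    # lighter car and driver, moving at 3 m/s
m2, v2 = 260.0, -1.0   # heavier car and driver, moving toward it at 1 m/s

v1_after, v2_after = elastic_collision(m1, v1, m2, v2)
print(f"before: total momentum = {m1 * v1 + m2 * v2:.1f} kg*m/s")
print(f"after:  total momentum = {m1 * v1_after + m2 * v2_after:.1f} kg*m/s")
print(f"lighter car: {v1:.1f} -> {v1_after:.2f} m/s")
print(f"heavier car: {v2:.1f} -> {v2_after:.2f} m/s")
```

The total momentum printed before and after the collision is the same, which is the conservation law at work; the lighter car's velocity changes far more than the heavier car's.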
- Energy, Momentum and Driving: Indiana University-Purdue University Indianapolis details the forces of energy and momentum while driving.
- A Guide to Momentum and Collisions (PDF): An excerpt from a textbook helps detail the concept of momentum during various types of collisions.
- Impulse and Momentum: The Codman Academy Charter Public School discusses impulse and momentum during crashes, provides demos, and gives a list of resources for further information.
Safety and Fun
Driving bumper cars can almost always guarantee a great time. The law of inertia, however, can have a negative effect on drivers who don’t follow basic safety rules and wear seat belts in bumper cars. This is because the body of a driver will keep moving in the way it was initially moving upon impact, which can result in the driver possibly being thrown from a bumper car if they aren’t properly restrained. Much like when driving in a vehicle on a street, it’s important to buckle up and prepare oneself for bumps along the road. |
Bullies are very cunning people and are expert at getting away with the things that they do.
Bullying can go on anywhere, including at school or after school activities, but it’s the way it’s dealt with which makes the difference between life being tolerable or a misery for those being bullied.
Bullying includes things like:
- People calling you names
- Making things up to get you into trouble
- Hitting, pinching, biting, pushing and shoving
- Taking things away from you
- Damaging your belongings
- Stealing your money
- Taking your friends away from you
- Posting insulting messages on Facebook or Twitter
- Spreading rumours
- Making silent or abusive phone calls
- Sending you offensive phone texts
The bullies will have worked out what buttons to push to make you upset.
They may make remarks about:
- Your weight
- Your looks
- The colour of your hair
- Your schoolwork
- If you work hard
- If you have a disability
- If you are a different religion, colour or culture
- If you wear spectacles or a hearing aid
So how do you solve the problem?
It is important to tell a friend, tell a teacher or tell your parents if you are being bullied. The bullying won’t stop unless you do. If you don’t feel you can do it in person it might be easier to write a note to your parents explaining how you feel, or perhaps it might be easier to confide in someone outside your immediate family, like a grandparent, aunt, uncle or cousin.
Also tell your form tutor what is going on. If you are worried about telling them, you could stay behind on the pretext of needing help with some work. If you don't feel you can do that, then go to the medical room and speak to the school nurse.
If a teacher catches the bullies red-handed you won’t get into bother from anyone for telling tales. Don’t be tempted to hit back at the bullies because you could get hurt or get into trouble. Remember hitting someone is an assault.
Make sure you stay in safe areas of the school at break and lunchtime, somewhere there are plenty of other people. Bullies don’t like witnesses. If you are hurt at school, tell a teacher immediately and ask for it to be written down. And make sure you tell your parents.
If people are making nasty remarks about you, it may be because they are jealous. Perhaps you're better looking than they are, or work harder, or perhaps the teachers like you better. One way of dealing with remarks is simply to say "yeah, whatever" each time, so that you show them the remarks aren't upsetting you in the way they think.
Bullying can make you feel sad
Bullying is very upsetting and if you feel you can't cope, tell your parents. It may be that you need to see a doctor, who may be able to write a note for the school explaining the effect that bullying is having on your health.
Aikido training will give you the confidence to look after yourself and to cope with the bullies better.
If you have any worries about being bullied, or would like to talk about how you feel, Sensei Rich and Sensei Kev are always there for you.
For more information you go the Bullying UK website, which has lots of useful information that can help you.
Cyberbullying is using the internet, email, online games or any digital technology to threaten, tease, upset or humiliate someone else. In order for someone's actions to be considered cyberbullying, their behaviour must:
- Occur more than once
- Cause harm to someone else (whether actual or perceived)
- Be conducted via a technologically-based source
Further information about cyberbullying can be found on the Childline website.
Aikido has helped me cope with a lot of things that used to make me feel very sad
Types of shallow foundations – Foundations are broadly classified into shallow foundations and deep foundations. This article is a complete overview of shallow foundations and the types of shallow foundations.
What is a shallow foundation ?
Shallow foundations transfer structural loads to the soil at a shallow depth, spreading them over a wide area. They are also sometimes called strip foundations. The depth of a shallow foundation is less than its width. Shallow foundations are adopted when the load acting on the structure is moderate and a competent soil layer capable of carrying the loads is available at a shallow depth.
A shallow foundation is placed close to the surface of the ground. Its depth can range anywhere between 1 meter and 3.5 meters, and sometimes more, while its width is greater than its depth.
Types of shallow foundations
There are different types of shallow foundations adopted as per site conditions and design requirements.
Shallow foundation – Spread footing or isolated footing
The spread footing is one of the most commonly used types of shallow foundations. It is also called an isolated footing or individual footing. Spread footings are further classified into simple, sloped, and stepped spread footings based on the shape of the footing.
- Simple spread footing
- Sloped spread footing
- Stepped spread footing
Simple spread footing
This is a common type of spread footing. Simple spread footing consists of a base footing with a single column over it. This type of foundation is used for structures with reasonable loads and bearing capacities.
Sloped spread footing
In this type of foundation the footing is sloped, as shown in the figure. The footing carries a single column, and its cross section is trapezoidal.
Stepped spread footing
When the loads are high steps are provided in the footings as shown in the figure.
Types of shallow foundations- Strip footing
Strip footings are also called wall footings. They are used to support load-bearing brick, stone, or RCC walls. Strip footings run continuously under the wall area of a building. They are also used when the spacing between columns is very small and individual footings would overlap each other.
Types of shallow foundations – Combined footing
A combined footing carries two or more columns on a single footing. These types of footings are adopted when the distance between two individual footings is so small that they would overlap each other. A combined footing is also provided where a column sits flush with the site boundary and its footing cannot be extended any further. Combined footings are classified as rectangular combined footings and trapezoidal combined footings.
Types of shallow foundations – Strap footing
Strap footings, also called cantilever footings, consist of two individual footings connected by a strap beam. The strap beam is designed as a rigid member. These types of foundations are more economical than combined footings.
Mat or Raft foundations
Raft foundation – One of the most commonly used types of foundation in construction is a continuous slab resting on the soil and covering the total area of the proposed structure. There are different types of raft foundations based on their applications. The selection of the type of raft foundation depends on a lot of factors like bearing capacity, loads, site conditions, etc.
A raft foundation, or mat foundation, is a solid slab placed at a designed depth and spreading over the entire area of the structure. It carries the columns and shear walls of the structure and transfers their loads to the ground. These types of foundations are mainly used when the bearing capacity of the soil is low and it becomes difficult for individual footings to negotiate the loads; the raft transfers the entire load of the structure to a larger area. Please read the full details about raft foundations in RAFT FOUNDATION- TYPES AND ADVANTAGES
Also read : RAFT FOUNDATION – TYPES & ADVANTAGES
Also read : PILE FOUNDATIONS – TYPES & ADVANTAGES
Types of shallow foundations – Suitability
Shallow foundations are very easy to construct and do not require highly skilled manpower or close professional supervision; they can even be built with medium-skilled workers. A shallow foundation is also very economical when compared with a deep foundation. Shallow foundations are bearing-type foundations that transfer loads through the base of the foundation to the soil beneath.
Shallow foundations are considered as the most preferred option when the safe bearing capacity of the soil is reasonable and the structural loads are within the permissible limits. |
The k nearest neighbor (KNN) algorithm is a very simple and easy machine learning algorithm used for pattern recognition and similarity search. Among the most effective machine learning tools, KNN follows a non-parametric approach to statistical estimation. It is a supervised machine learning model used for classification. Besides pattern recognition, data scientists use it for intrusion detection, statistical estimation, and data mining.
K nearest neighbor is widely used in real-life scenarios, from medical science to online stores. It predicts a value for a new point based on the values of the points surrounding it, and we use it because it is so easy to apply.
Why Do We Use the K Nearest Neighbor Algorithm?
We use machine learning to make predictions from data, and we have several models to choose from. K nearest neighbor is one of them.
Now let us discuss how we use the K Nearest Neighbor Algorithm. Suppose you show a picture of a cat to the machine. When you ask whether it is a dog, the system will tell you it is a cat.
This happens through prediction. The machine is given many photos of cats and dogs, and from those images it builds its prediction. When you show it a new image, it answers your question based on the nearest values.
All of us have used the Amazon website to purchase our favorite items. When we search for a product, it shows that product along with recommendations for other products. Machine learning studies your choices and shows recommendations based on methods like K Nearest Neighbor.
What is The K Nearest Neighbor Algorithm?
KNN, or the K Nearest Neighbor Algorithm, is one of the simplest supervised machine learning algorithms used for data classification. It classifies a data point based on the classes of its neighbors: it stores all available cases and classifies new cases based on a similarity measure. For example, whether someone is predicted to like plain juice or fresh juice will depend on the values of their nearest neighbors. As a user, you have to declare the value of k. If the value of k is 5, the algorithm considers the 5 nearest values, and based on the majority among them it predicts your choice.
How the KNN Algorithm Works
Consider a dataset having two variables, height and weight. Each point is classified as either normal or underweight. Consider the following data set:
Now suppose we give a new value of 157 cm for height, which does not appear in the data set. Based on the nearest values, the model predicts the weight class for 157 cm. This is the k nearest neighbor model at work.
We can implement the k nearest neighbor algorithm with the Euclidean distance formula, which measures the distance between two coordinates. If we plot the points (x, y) and (a, b) on a graph, the formula is:
dist(d) = √((x − a)² + (y − b)²)
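Since the data table itself is not reproduced here, the following Python sketch uses made-up heights and labels to show how the pieces fit together: the Euclidean distance, the k nearest points, and a majority vote.

```python
import math
from collections import Counter

# Minimal from-scratch KNN classifier for the height/weight example above.
# The training heights and labels are made-up illustrative values.
training_data = [
    (150, "underweight"), (153, "underweight"), (155, "underweight"),
    (158, "normal"), (160, "normal"), (163, "normal"), (165, "normal"),
]

def euclidean_distance(a, b):
    """Distance between two 1-D points (the same formula extends to (x, y) pairs)."""
    return math.sqrt((a - b) ** 2)

def knn_predict(query, data, k=3):
    # Sort training points by distance to the query and keep the k nearest.
    neighbors = sorted(data, key=lambda point: euclidean_distance(query, point[0]))[:k]
    labels = [label for _, label in neighbors]
    # Majority vote among the k nearest neighbors.
    return Counter(labels).most_common(1)[0][0]

print(knn_predict(157, training_data, k=3))   # predicted class for 157 cm
```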
What is K?
In K nearest neighbor, "K" is a constant defined by the user. The query or test point is an unlabeled vector, and the algorithm classifies it by assigning the most frequent label among the K training samples nearest to it. You can think of K as an imaginary boundary used to classify the data: when the machine sees new data, it looks at the values within or surrounding that boundary.
Finding the right value of K can be difficult. If the value of k is small, noise has a high influence on the outcome, and in general the value of K has a strong effect on the performance of KNN. In simple language, KNN assumes that similar things are near each other.
Is K Nearest Neighbor a Lazy Algorithm?
Data scientists consider KNN a lazy algorithm compared with other machine learning models. Some assume it is called lazy because it is easy to learn, but the real reason is that it does not learn a discriminative function; it simply "memorizes" the training dataset. A regression algorithm or similar tool needs training time, but KNN requires no training phase at all, so it is considered a lazy algorithm.
How to Set the Value of K?
The k nearest neighbor algorithm is based on feature similarity. Selecting the value of k is not easy; the process of selecting it is called parameter tuning, and it is very important for making the result accurate.
In this image, we can see the values of X1 and X2. The red star is the new, independent point. If we take the three nearest values, we get only one yellow dot. But if we set the value of k to 6, then we will get two yellow dots along with dots from the other class.
We usually set the value of k to an odd number, such as 3, 5, 7, 9, or 11. When you select k = 3 you will get one specific answer, but if k = 5 the prediction may change. The result is based on the neighbors in the surrounding area.
So, to choose the value of k:
- Take k ≈ sqrt(n), where n is the total number of data points (see the sketch after this list).
- To avoid ties between two classes, we usually choose an odd number.
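A minimal sketch of these two rules of thumb follows; this snippet is an illustrative addition, not a prescribed implementation.

```python
# A sketch of the sqrt(n) rule of thumb for choosing k.
import math

def choose_k(n_samples):
    """Return an odd k close to sqrt(n_samples) to avoid tie votes."""
    k = max(1, round(math.sqrt(n_samples)))
    return k if k % 2 == 1 else k + 1  # bump even values to the next odd number

print(choose_k(100))  # sqrt(100) = 10 -> 11
print(choose_k(25))   # sqrt(25) = 5 -> 5
```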
When Do We Use the K Nearest Neighbor Algorithm?
You can use the k nearest neighbor algorithm when your data is labeled. In supervised machine learning, we have seen that the main requirement is labeled data. Following our previous example, the machine takes input based on several labels: if you show it the image of a cat, that case will contain height, weight, dimensions, and many other features.
We use the k nearest neighbor algorithm when the data set is clean and fairly small. It does not learn a discriminative function from the training set, so it is known as a lazy learner algorithm.
Applications of the K Nearest Neighbor Algorithm
The k-nearest neighbor algorithm is used in various sectors of day-to-day life. It is easy to use, so data scientists and machine learning beginners apply it to simple tasks. Some of the uses of the k nearest neighbor algorithm are:
Finding the diabetes rate
Diabetes risk depends on age, health condition, family history, and food habits. In a particular locality, we can estimate the rate of diabetes using the K Nearest Neighbor Algorithm. If we gather data on age, pregnancies, glucose, blood pressure, skin thickness, insulin, body mass index, and other required features, we can easily plot the probability of diabetes at a certain age.
If we search for a product on an online store, it shows the product, and beside that particular product it recommends some other products. You might be astonished to learn that roughly 35% of Amazon's revenue reportedly comes from its recommendation system. Besides online stores, YouTube, Netflix, and search engines all use nearest-neighbor-style algorithms.
Concept search is an industrial application of the K Nearest Neighbor Algorithm. It means searching for similar documents simultaneously. The data on the internet increases every single second, and the main problem is extracting concepts from large databases. K-nearest neighbor helps find related concepts with a simple approach.
Predicting the Rate of Breast Cancer
In the medical sector, the KNN algorithm is widely used; for example, it is used to help predict breast cancer. Here the KNN algorithm serves as the classifier, and it is one of the easiest algorithms to apply. Based on a patient's history, locality, age, and other conditions, KNN is suitable whenever the data is labeled.
Is K Nearest Neighbor Unsupervised?
No, KNN is supervised machine learning. K-means is an unsupervised learning method, but KNN is used for classification or regression on labeled data. In unsupervised learning the data is not labeled, whereas the k nearest neighbor algorithm always works on labeled data.
KNN in Regression
Besides classification, KNN is used for regression, where it predicts a continuous value. The prediction is the (weighted) average of the k nearest neighbors, with the neighbors found using a distance measure such as the Euclidean or Mahalanobis distance.
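To make the regression case concrete, here is a minimal sketch using scikit-learn's KNeighborsRegressor, which averages the neighbors' target values and can weight them by inverse distance. The height and weight numbers are invented for illustration.

```python
# A minimal sketch of KNN regression: predict weight from height (invented data).
from sklearn.neighbors import KNeighborsRegressor

heights = [[150], [155], [160], [165], [170]]   # cm
weights = [45.0, 50.0, 55.0, 61.0, 68.0]        # kg (hypothetical)

# weights='distance' gives closer neighbors a larger say in the average.
reg = KNeighborsRegressor(n_neighbors=3, weights="distance")
reg.fit(heights, weights)

print(reg.predict([[157]]))  # a value interpolated from the 3 nearest heights
```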
Advantage of K Nearest Neighbor
The KNN algorithm is very easy to use. As a lazy learner, it is one of the simplest algorithms to apply in machine learning. It has some benefits, which are as follows:
- KNN algorithm is very easy to learn.
- It is flexible in the choice of distance metric and features.
- It can handle multi-class cases.
- With enough representative data, it can do well in practice.
Limitation of KNN Algorithm
- If you want to determine the nearest value you have to set the parameter value k.
- The computation cost of the KNN algorithm is very high.
- It computes the distance of each query for all training samples.
- Storage of data is another problem.
- As a user, you must know a meaningful distance function.
The K Nearest Neighbor Algorithm works on labeled data. To calculate a prediction, the value of k is predefined, and KNN then finds the answer based on the nearest neighbors or surrounding values. The algorithm works best on small, labeled datasets and is very easy to use. Because of these characteristics, it is known as the lazy algorithm of machine learning.
Reaching for objects in your car when driving accounts for roughly 2% of all distracted driving crashes. These crashes are particularly common and particularly fatal for teen drivers. The National Highway Traffic Safety Administration (NHTSA) reports that motor vehicle crashes are the leading cause of fatality in the 15 – 20 age group. Teen drivers reaching for items in the vehicle are seven times more likely to be involved in a crash.
A study conducted by the National Institutes of Health (NIH) found that when a teen driver’s eyes are not focused on the road, the risk of a crash increases by 28 percent. Teen drivers using a cell phone doubled the risk of a crash.
We should also note that teens are not the only drivers who reach for something in their vehicle while driving. Nor are these other (older) drivers significantly less likely to be in a crash. Most of us believe that experienced drivers can control the vehicle and reach for something elsewhere in the car. Sometimes we can.
Try an experiment, and ask your teen drivers to try it as well. Start your car’s engine (to engage the power steering). Ensure that the parking brake is set. Leave the car in park. Sit for a moment as you do when driving. Put your hand on the steering wheel at the top center. Then reach across the vehicle to pick up something from the passenger seat. Look where your hand is on the steering wheel. Has the steering wheel turned? Next, instead of reaching for something on the seat, reach for something on the floor in front of the passenger seat. Look at your hand on the wheel. In all likelihood, you will have turned the wheel.
Based on this experiment, we can imagine what would happen if we reached into the back seat for an item. We can probably imagine the result of the experiment when reaching for something on the floor at our feet.
Thousands of crashes have occurred because drivers believe they would not lose control of their vehicle when reaching for something.
There are two things we need to teach teen drivers. First, that nothing anywhere in the vehicle is as important as controlling the vehicle. Nor is anything so urgent that it is worth risking a crash at that moment. Second, there is a safe and responsible way to retrieve objects in the vehicle. That is to wait until you can safely pull off from the highway. Pull over to a safe place, put the car in park, and then retrieve the object.
If you or a loved one has been injured in a crash caused by a driver reaching for something, or by any other type of distracted driving, call Altizer Law, P.C., in Roanoke, VA. Bettina Altizer and her auto accident team have been helping people recover the financial compensation they are entitled to for more than 30 years. When you have been hurt, and you need to rebuild your life, we understand that it’s about the money. |
The impact of manifest destiny on Native Americans was conflict. Of course, this was inevitable given that the ideology involved American settlers holding to a determined belief that they had a god-given purpose of expanding from the Atlantic coast to the Pacific coast, thus taking and controlling lands that were not originally theirs, regardless of consequences.
The Impact of Manifest Destiny: How Did This Ideology Affect the Native Americans?
However, there were more specific consequences. One fundamental principle that formed the ideology was the Doctrine of Discovery. In 1823, the U.S. Supreme Court ruled in Johnson v. M’Intosh that this supposed international legal principle used by European colonizers was also applicable to the United States. Historians Robert J. Miller and Elizabeth Furse explained the implication of this ruling.
The doctrine stated that whenever European and Christian nations discovered new lands, the specific discovering country automatically gained sovereign and property rights over these territories held by non-European and non-Christian people despite the fact that these natives already owned, occupied, and used these lands.
According to the Supreme Court, when applied to the westward expansion of American settlers, the U.S. would acquire some sovereign rights over the native people and their governments that in turn, would restrict their tribal international political and commercial relationships. Take note that this apparent transfer of rights would transpire even without the knowledge of or consent from the Native Americans.
Below are the specific details of the consequences or impact of manifest destiny on Native Americans:
• Armed Conflict with the Settlers: Several wars transpired between the American settlers and the Native American tribes. One example was the Second Seminole War that happened between 1835 and 1842. It involved an armed conflict between different groups of Indians collectively called the Seminoles and the U.S. It resulted in the natives giving up their lands in Florida to the settlers and relocating in designated reservations.
• Displacement from Native Lands: In her reference handbook, Lorraine Hale mentioned that after the 1830s, the forcible removal of Native Americans from their homelands became commonplace. They were moved from reservations without taking into consideration whether these new lands were suited to their traditional way of life. Note that several Indian tribes moved further north and settled to territories in Canada.
• Exposure to Communicable Diseases: The arrival of Europeans in the Americas and the expansion of the American settlers exposed Native American populations to new infectious diseases. The review of K. B. Patterson and T. Runge noted that diseases such as measles and smallpox devastated these populations. It specifically noted that the arrival of smallpox correlated with the decline of Native American populations from the 15th to the 19th century.
• Reduction in Population Numbers: Estimates from Cherokee-American anthropologist Russell Thornton revealed that by 1800, the Native American population in the U.S. numbered about 600,000. By 1890, the population had declined to 250,000. The decline came from the warfare, diseases, and poverty introduced by the colonizers and settlers from the Old World. Furthermore, as noted in the study of J. D. Hacker and M. R. Haines, the further expansion of the settlers during the 19th century significantly reduced the number of Indians inhabiting the present-day U.S.
• Indoctrination and Assimilation: The federal government made several efforts to indoctrinate the Native Americans and assimilate them with the settlers. For example, as discussed by Laurence French, the Dawes Act of 1887 included provisions for special educational indoctrinations, including the mandatory use of English and the inculcation of patriotism in Indian schools. Furthermore, as noted in the paper of John Rhodes, the government also subjected them under religious persecution not only by grabbing their sacred lands but also by prohibiting them from exercising their religions.
FURTHER READINGS AND REFERENCES
- French, L. 2003. Native American Justice. Chicago, IL: Burnham Inc. Publishers. ISBN: 0-8304-1575-0
- Hacker, J. D. and Haines, M. R. 2005. “American Indian Mortality in the Late Nineteenth Century: the Impact of Federal Assimilation Policies on a Vulnerable Population.” Annales De Démographie Historique. 110(2). DOI: 10.3917/adh.110.0017
- Hale, L. 2002. Native American Education: A Reference Handbook. Santa Barbara, CA: ABC-CLIO. ISBN: 1-57607-363-8
- Miller, R. J. and Furse, E. 2006. “The Doctrine of Discovery.” Native America, Discovered and Conquered: Thomas Jefferson, Lewis & Clark, and Manifest Destiny. Westport, CT: Greenwood Publishing Group, Inc. ISBN: 0-275-99011-7
- Patterson, K. B. and Runge, T. 2002. “Smallpox and the Native Americans.” American Journal of the Medical Sciences. 323(4): 216-222. PMID: 12003378
- Rhodes, J. 1991. “An American Tradition: The Religious Persecution of Native Americans.” Montana Law Review. 52(1): 14-35. Available online
- Thornton, R. 1990. American Indian Holocaust and Survival: A Population History since 1492. Norman, OK: University of Oklahoma Press. ISBN: 0-8061-2220
Story of Indigo dye | The Blue Gold of India
Indigo, or indigotin, is a dyestuff extracted from the indigo and woad plants. Indigo was known all through the old world for its capacity to dye fabrics a dark blue. Indigo is an ancient dye and there is evidence for the use of indigo from the third millennium BC, and possibly much earlier for woad.
A frequently mentioned example is that of the blue stripes found in the borders of Egyptian linen mummy cloths from around 2400 BC.
Several sources claimed that ancient linen fabrics that are dyed blue are likely to have been dyed with indigo because indigo was thought to be superior to woad for dyeing linen. Another example was found on ancient tablets from Mesopotamia in 600 BC that explained a recipe for dyeing wool blue by repeatedly immersing and airing. The earliest example of indigo from Indigofera probably comes from the Harappan Civilization (3300 -1300 BC). Archaeologists also recovered remains of cloth dyed blue which dated back to 1750 BC from Mohenjo-Daro, now present day Sindh, Pakistan.
There are at least 50 different types of Indigofera in India. In the Northwest region, indigo has been processed into small cakes by producers for many centuries. It was exported through trade routes and reached Europe. Between 300 BC to 400 AD Greeks and Romans had small amounts of blue pigment in hard blocks, which they thought was of mineral origin. They considered it a luxury product and used it for paints, medicines and cosmetics.
Indigo was found to have high tinting strength although the colour faded rapidly when exposed to strong sunlight.
The Greeks called this blue pigment ‘indikon’, which translates into ‘a product from India’, this word then became Indigo in English. Another ancient term for the dye is ‘nili’ in Sanskrit which means dark blue from which the Arabic term for blue ‘al-nil’ was acquired. This word in Spanish was called anil and later made its way to Central and South America where it is simply referred to as indigo. The English word aniline is also derived from anil, and it is used to describe a class of synthetic dyes.
In the late 13th century Marco Polo returned from his voyages through Asia and described how indigo was not a mineral, but in fact was extracted from plants. Small quantities of indigo were available in Europe then, but they were very expensive due to the long journey required and the taxes imposed by traders along the route. Locally grown woad was the main blue dye used in Europe at the time. By the late 15th century Vasco da Gama had discovered a sea route to India, allowing indigo to be imported directly. Large scale cultivation of indigo started in India and in the 1600s large quantities of indigo were exported to Europe. The cost of indigo dropped considerably and by the end of the 17th century it had virtually replaced woad in Europe.
Indigo was often referred to as Blue Gold, as it was an ideal trading commodity: high in value, yet compact and long lasting.
The most widely recognised procedure for extracting indigo colour uses the bushes of the Indigofera tinctoria and Indigofera suffruticosa plants, which have been specifically cultivated to make dyes. The colour can be extracted from either the leaves or the roots; however, for reasons of sustainability, the colour from the leaves is used most of the time.
The leaves are then soaked in water so that they ferment. This stage extracts the colour material from the plant, although at this point it is a lighter shade. After this the leaves are removed and the remaining solution is beaten and exposed to the air to convert the indican into indigo dye. Excess water is poured off and the blue slime is dried. This sludge is then pressed into balls and left to dry. This is the conventional indigo colour powder.
Typically, in the dyeing process, cotton and linen threads are usually soaked and dried 15-20 times.
After dyeing, the yarn may be sun dried to deepen the colour. The process of indigo dyeing is completely different when done the traditional way: it is 100% natural and often organic. Instead of using heat and a mordant, it relies on a living fermentation process that naturally sets the dye into the textile. The fabric colouring process takes no less than seven days, from dyeing to drying. At first the fabric is dipped in a vat of dye and kept under the water for a couple of minutes. When brought out into the air, the colour is a brilliant green, and gradually it changes to the wonderfully deep and rich blue of indigo.
The procedure is repeated around six to ten times, depending upon the shade required. The fabric is then hung out to dry in the sun. By the end of the 19th century, natural indigo production was no longer able to meet the demands of the clothing industry, and a search for a simpler and easier way to procure indigo began. In 1865, Adolf von Baeyer, a German chemist, began working on the synthesis of indigo, and in 1897 synthetic indigo was launched. The world's production of natural indigo could not cope with the demand for this dye. However, environmental concerns and an increased demand for natural and sustainable dyes may lead to a resurgence of natural indigo production. Although the chemical formula for natural and synthetic indigo is the same, synthetic indigo is almost pure indigotin.
Natural indigo has a high proportion of impurities such as indirubins, that give beautiful colour variations and the blue you get depends on where the indigo was grown and the weather at the time.
Synthetic indigo, on the other hand, produces an even blue that never varies or fades with time. Natural indigo is a sustainable dye: after the pigment has been extracted, the plant residue can be composted and used as a fertiliser, while the water is reused to irrigate crops. The production of indigo generates a variety of by-products that must be handled carefully.
Some of these materials are considered to be hazardous and must be disposed of in consideration with local and federal chemical waste disposal guidelines. Such chemicals can enter the environment in at least three different ways. The first, during the actual manufacture of the molecule. The second is when the dye is applied to the yarn, and the third is when the dye is removed from the yarn and enters the wash water during the initial stone washing or wet processing of the fabric.
This last step is typically undertaken to produce denim. Manufacturers who use indigo in dyeing operations are also seeking to improve their use of the dye.
Compared to traditional methods of stone washing fabric dyed with indigo, their new process uses few, if any, pumice stones which help give the fabric its faded look. Therefore, pumice stone handling and storage costs are reduced, along with time required to separate pumice from garments after stone washing. It also uses much less bleach. Therefore, this new process not only reduces garment damage, but also reduces waste produced by the stones and bleach.
Natural indigo can often be traced to its country of origin, and even to the farm where it was produced.
By using natural indigo, we make a conscious effort to help provide sustainable employment to rural populations in third world countries. Not only that, but you would also be contributing towards helping the environment and reducing the use of petrochemicals.
Familiar spaces are comfortable places to learn and grow. There are many familiar places to take your child regularly, such as the library, the grocery store, or the doctor’s office. For you, they are routine, but for young children, these places are fascinating and new.
Infants and toddlers with and without disabilities naturally explore the world, and they are excited to discover the “new” in their spaces. Perhaps you have seen an infant looking intently at a toy that is just out of arm's reach. She might stretch her arm as far as she can until she finds herself rolling onto her belly and grasping the toy. You may have seen a toddler crouching on the sidewalk to watch ants crawl. He might point to the ants and look at his mom with a puzzled expression to let her know he wants to know more about these insects. Curiosity motivates all young children to explore the spaces around them. Opportunities to learn and grow happen naturally when we tune into this curiosity and share in the excitement of discovery with children.
Many families experience challenges when balancing household tasks, community obligations, early intervention, and work. Laundry, cooking, EI providers, and errands always seem to take more time than we expect. The day fills up quickly when you add busy children playing and making a mess to the mix. Families may feel even more time pressure when they try to think of ways to incorporate EI strategies into everyday routines in familiar and new spaces. Adults can more easily do this when they tune into the excitement and curiosity that infants and toddlers have about exploring their spaces. Your child is like a traveler in a new land, and you are the tour guide! A good tour guide talks about everything he sees, smells, touches, and tastes.
Want to make the most of your time with your child to help them grow? Look at your spaces and find many opportunities to explore and grow together. Awaken your senses as you go about your day. Here are some ideas to help you get started.
Enjoy the outdoors! Your child may notice the birds, squirrels, and plants outside.
- Watch your child’s face to see where she is looking. The outdoors is a great opportunity to build language skills.
- Talk with your child about what she is seeing. You can expose her to rich vocabulary words when describing the colors you see, sounds you hear, and scents you smell. You are introducing your child to concepts such as opposites when you describe the warm sun versus the cold snow. These conversations build her cognitive abilities.
- Make time to climb or cruise around the playground to build your child’s gross motor skills. Crawling is a new experience when you are moving on the soft grass.
Discover treasures indoors! Your home has treasures that your child will enjoy discovering.
- Your kitchen space may be filled with safe items to discover, such as wooden spoons, measuring cups, and unbreakable bowls. Practice stacking and nesting these items with your child. This builds his spatial awareness. Pretend to cook and feed each other with older infants and toddlers. Pretend play is a natural way to develop social skills such as turn-taking and manners.
- You might place a few “treasure baskets” in different rooms of your home where you can put items that are safe for your child to explore. Remember, even the laundry basket is full of interesting textures, colors, shapes, and sizes to talk about!
- Help your child master gross-motor spaces such as stairs, ramps, and furniture.
Tour new places and familiar spaces!
- Many places that you visit regularly are routine to you but may be fascinating and new to your child. You can explore the library, the grocery store, or the doctor’s office with your child. Explain what others might be doing as they move around you.
- Adventure out with your child to a new place that makes you curious. This may help you share the excitement of discovery that your child experiences in familiar places.
- Walk at a different playground, stroll around a museum, or explore a local cultural festival. Ask questions and wait for your child to answer or indicate his interest by turning his eyes toward you or pointing at things he sees. Respond to your own questions and be a language model for your child. |
Sever’s disease (calcaneal apophysitis) is the most common cause of heel pain in children, particularly in physically active adolescents who are about to begin puberty. Sever’s disease is a painful inflammation of the heel’s growth plate, which is located in the lower back of the heel, where the Achilles tendon (the heel cord that attaches to the growth plate) attaches. It typically affects children between the ages of 8 and 14 years old during periods of rapid growth, because the heel bone (calcaneus) is not fully developed until approximately 15. Until then, the repetitive stress of physical activity can irritate the growth plate resulting in pain.
Causes of Sever’s disease
Overuse and stress to the heel is the primary cause of Sever’s disease. The heel’s growth plate remains weak and sensitive until it is fully formed, and the repetitive stresses of sport and activities that involve a lot of heel movement can irritate it. Children going through periods of rapid growth are more at risk because the heel bone grows faster than the attaching muscles. This results in the Achilles tendon being pulled very tightly on the growth plate, causing pain and inflammation. Risk factors include:
Sports that involve running and jumping on hard surfaces (basketball, netball, athletics)
Standing for long periods, which places constant pressure on the achilles tendon
Flat feet (over pronated)
Symptoms of Sever’s disease
Symptoms of Sever’s disease include:
Pain, swelling and redness in the back or bottom of the heel
Walking on toes
Difficulty running, jumping, or participating in usual activities or sports
Pain when the sides of the heel are squeezed
Treatment of Sever’s disease
It is important that a podiatric assessment be performed to rule out other more serious conditions. It is also recommended that treatment begin early, as symptoms can quickly progress without the appropriate steps being taken. At OnePoint Podiatry a Biomechanical Assessment is performed. During the assessment our podiatrists obtain a thorough medical history and ask questions about recent activities. Examination of the child’s foot and leg also takes place, along with assessment of any potential causative or risk factors. X-rays are often used to evaluate the condition. Assessment findings are then used to develop a treatment plan specific to the child. Some treatment options may include:
Rest. Avoid activity that cause irritation and stress to the heel.
Ice. To ease pain and swelling. Do this for 20-30 minutes every 3 to 4 hours for 2 to 3 days, or until the pain is gone. At OnePointHealth we also provide Game Ready Ice Compression system.
Compression. Compression bandages help to reduce swelling.
Elevation. Raising the foot above the level of the heart helps to reduce swelling.
Physical therapies. Rehabilitation programs help to promote healing and range of motion.
Anti-inflammatory. Nonsteroidal anti-inflammatory drugs (NSAIDs), like ibuprofen, naproxen, or aspirin, will help with pain and swelling.
Immobilisation. In some severe cases of heel pain, a cast may be used to promote healing while keeping the foot and ankle totally immobile.
Adequate footwear. Supportive footwear may also be recommended to stabilise the foot and ankle to reduce stress on the heel bone.
Taping/Bracing. Taping or bracing can help to stabilise the foot and ankle and reduce excessive stress on the Achilles tendon and heel bone.
Orthotic therapy: A custom orthotic device placed in the shoe can help stabilise the foot and address possible causative factors such as flat feet.
Your health. In the right direction.
Our purpose is to make your health number one! Our integrated team approach allows us to guide your health in the right direction, taking away your worry of whom to see and where to find them. OnePointHealth, all your health needs, under one roof.
The class period started with a statement: COVID – 19 has had a disproportionately negative impact on certain populations.
The next hour was spent answering: How can we as mathematicians make sense of this statement using numerical evidence? What does disproportionality mean? How has COVID-19 impacted Latinx, Black, and white people?
The lesson explored what “certain populations” meant, from socioeconomic groups to racial groups, and what a negative impact would look like, from hospitalization rates to rates of death. The lesson structured a discussion of how, in the United States, the data show disparate outcomes along racial lines; students discussed what the root causes might be.
Co-created by Mathematics Department Chair Brad Meeder and Social Justice Assistant Sherman Goldblum, the lesson tied critical mathematics skills to real-life challenges facing our country. Sherman explained that “disproportionality is a tool that can be used to identify injustice. This lesson was built to make our students better question the world and what they see.”
The plan exemplifies a strength of the Middle School academic program: connecting real-world topics and ideas to classroom learning.
Mr. Meeder shared, “This lesson showed the ability of Rashi students to make a meaningful connection between mathematics and social justice. We discussed how data and statistics tell us what is happening, but it doesn’t necessarily tell us why something is happening. I was inspired by our students’ thoughtfulness and ability to empathize with different groups’ perspectives and life experiences.”
Mr. Meeder and Sherman guided students in using data to show evidence of disproportionate representation through three different methods: using a statistic of one out of every x…, x per one-hundred thousand, or measuring something as a percentage of the overall population.
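As an illustration of the arithmetic behind those three representations (the numbers below are placeholders, not the class's actual COVID-19 data), the conversions look like this:

```python
# A sketch of three ways to express the same rate (hypothetical numbers).
cases = 1_200          # e.g. hospitalizations in a group
population = 480_000   # size of that group

rate = cases / population

one_in_x = round(1 / rate)            # "one out of every x" people
per_100k = rate * 100_000             # "x per one-hundred thousand"
percent = rate * 100                  # "percentage of the overall population"

print(f"1 in {one_in_x}")             # 1 in 400
print(f"{per_100k:.0f} per 100,000")  # 250 per 100,000
print(f"{percent:.2f}%")              # 0.25%
```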
What does a lesson like this show about Rashi students? Sherman reflected that it “shows how Rashi students’ critical minds connect to their compassionate hearts. It gave students the opportunity to demonstrate their knowledge of current events and creative problem solving. They showed a sense of empathy, speaking about how Covid-19 has affected different communities in different ways.”
Mr. Meeder loved teaching this lesson, saying it allowed him “to see students through a different lens than many of our other classes. To hear them speak so thoughtfully about real societal issues made me proud to be their teacher and excited for their futures as mathematicians and civically engaged citizens.” |
What is Early Literacy?
It might seem like children don’t start building their literacy skills until they learn to read, but in fact the development of literacy skills begins at birth! We call these literacy skills, which children develop long before they are ready to read, “early literacy”.
Children with stronger early literacy skills have better foundations for beginning to read and write when they start school. They also have better foundations for being successful throughout their life. That’s why it’s so important to support the development of these skills.
Literacy at Home
We all know that play is a valuable part of childhood, but did you know that play is actually a critical part of a child’s learning, development, and well-being? It’s true! We have two free resources to help you support your child in playing and learning!
An important part of child development and early literacy is the encouragement and nurturing of a child’s imagination. This activity will encourage children to think about key ideas in a story through the use of “who, what, where, when, and why” questions, while stimulating their creative and imaginative side.
Books to Build Early Literacy Skills
Early Literacy Kits
Each kit includes 10 books, along with related materials like puzzles, puppets, or CDs, and suggestions for crafts and activities. |
The concept of the price level of a country refers to the weighted average of the prices of the main goods and services. Goods and services are normally weighted according to the importance given to them in the country, for example the preference consumers give them or their share of national production.
In other words, the price level refers to the average value of goods and services in a country at a given moment of time.
In order to calculate the price level we must take into account the value of all goods and services that are of relevant importance to the country's economy in a given period:
(P1+P2+P3+...+Pn) / N
Where P1, P2, P3, Pn = Value of the good or service 1, 2, 3, ... n
N = Total number of goods or services that are taken into account
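A small numerical sketch of this calculation follows, with invented prices and an optional weighted variant to match the weighted-average definition given above.

```python
# A sketch of the simple and weighted price level (invented prices and weights).
prices = [2.0, 15.0, 1.2, 30.0]      # P1..Pn for n goods/services
weights = [0.4, 0.2, 0.3, 0.1]       # importance of each good; weights sum to 1

simple_level = sum(prices) / len(prices)                  # (P1 + ... + Pn) / N
weighted_level = sum(p * w for p, w in zip(prices, weights))

print(simple_level)    # 12.05
print(weighted_level)  # 7.16
```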
It is important to mention that when the price level varies, we say the cost of living of the citizens of that country has changed. When this value rises above normal we face inflation, while if it falls below normal we face deflation.
On the other hand, it is important to differentiate between the price level and the price index. We have already seen the first concept; the second refers to the movement of the general price level between two periods. The index is calculated by comparing two price levels and expressing the change as a percentage. The main price indices are the consumer price index (CPI) and the GDP deflator.
Finally, we should mention that the value attributed to certain goods and services depends and varies according to the country in which we are located. Developing countries will value some goods more than others. |
Volcanoes are a Blast!
Today, there are many active volcanoes throughout the world. These colossal monuments represent fragmented windows of time that reveal our planet's primitive origin. Volcanoes are mountains that come in various shapes and sizes, some of which are found towering tens of thousands of feet high, while others are broad and flat and stretch for several miles across Earth's landscapes. During an eruption, we are reminded that our planet Earth is ever-changing and will continue to do so throughout the ages.
Tall, steep-sided volcanoes, composed of successive layers of different types of volcanic products, are called composite volcanoes. They may also be referred to as stratovolcanoes because these volcanoes are made up of alternating layers of pyroclastic debris and lava flows. Composite volcanoes are very common and form in parts of the world where viscous magma reaches Earth's surface. When they erupt, they often do so extremely violently. Many of the world's most famous volcanoes, such as Mount St. Helens and Etna are of this type. Composite volcanoes generally have a combination of volcanic eruptions that may include: ash and pyroclastic debris shot into the air, slow-moving lava flows, pyroclastic flows, steam, dangerous gases and lahars (volcanic mud flows).
Shield volcanoes are the giants of the world. These quiet erupting volcanoes are shaped like broad, upturned shields that are made up of layer after layer of runny lava that flowed over the surface and then solidified. Shield volcanoes are typically formed above hotspots. All volcanic islands are composed of an igneous lava rock called basalt. Shield volcanoes typically have fast-moving lava flows, erupting fissures, lava tubes, and lava fountains. As a shield volcano progressively approaches its time of dormancy, it may become more violent in nature. Such volcanoes have lava flows that are mafic in composition, very hot temperatures, and predictable paths for destruction.
Cinder cones are relatively small and composed mainly of loose volcanic cinders (glassy fragments of solidified lava) and ash. They are also called scoria cones because they will often produce the igneous rock scoria. Cinder cones can also form on the sides of both shield and composite volcanoes. Very seldom will lava flows occur, but if they do, they are most often flowing out from the flanks of the cones. There are many cinder cones found in New Mexico and Arizona. Cinder cones often start out as small fissures that suddenly appear out of nowhere in the ground and start spewing cinders and lava bombs.
A small number of volcanoes across the world are said to be supervolcanoes. Such volcanoes have produced cataclysmic eruptions in the past and are considered capable of similar future eruptions that could radically alter landscapes and severely impact the world's climates. A prime example is the Yellowstone Caldera, which makes up a large part of Yellowstone National Park in Wyoming. In this park, a large magma chamber lies about 5 miles below the caldera. Uplifting of the rock dome above the magma chamber or a big increase in earthquake activity could herald a new eruption that would directly affect most of North America.
Fissure eruptions are voluminous outpourings of lava and poisonous gases that come from linear cracks that appear above ground. Most eruptions are fairly quiet, that is without loud explosions, but their effects can be quite dramatic. In fact, in the past, they have caused climate changes and mass extinctions. Fissures occur mainly in parts of the world where the Earth's crust is stretching, usually at a divergent plate boundary where rifting occurs. The lava flows are thin and runny and can flow for considerable distances. The lava flows cool to form basalt.
Mud volcanoes are less well-known and do not erupt with lava and ash. Such volcanoes are channels through which large amounts of gas, salty water and mud are expelled from deep underground onto Earth's surface. Once these volcanoes dry, and additional eruptions occur later, the mud builds up into cones that can be up to several hundred feet high. Depending on the volcano, the mud varies in temperature and viscosity. The mud does not erupt from the hot mantle but from the crust; therefore, the mud flows are generally cold to lukewarm. When underground pressure is high, mud volcanoes can break rock formations and throw out chunks of rock with the mud.
Calderas measure anywhere from 0.6 to 60 miles wide. Most are formed by the collapse or subsidence of the central part of a volcano, while there are a few where the entire region has been excavated by a very explosive eruption. Such volcanic eruptions are comparable in violence to asteroid impacts. Calderas should not be confused with craters. Craters are significantly smaller and are formed by the building up of material around a vent rather than the collapse of material below within the magma chamber. The photo to the left is a caldera from the Andes mountains. Such calderas may eventually fill up with water.
Located in the Black Rock Desert of Nevada, this unique geothermal geyser consists of three colorful cones, each continuously spouting hot water. These formations formed accidentally in 1964 from a geothermal test well which was inadequately capped. The scalding water has erupted from the well since then, leaving calcium carbonate deposits growing at the rate of several inches per year. The brilliant red, orange, yellow, and green coloring on the mounds is from thermophilic algae thriving in the extreme micro-climate of the geysers. Unfortunately, these unique volcanic features are not open to the public.
Fumaroles are openings in the planet's crust that emit steam and a variety of volcanic gasses, such as carbon dioxide, sulfur dioxide, and hydrogen sulfide. These unique volcanic features often make loud hissing noises as the steam and gasses escape. Many fumaroles are foul smelling. Unlike hot springs, the water in fumaroles get heated up to such a high temperature that it boils into steam before reaching the surface. The main source of the steam emitted by fumaroles is ground water heated by magma lying relatively close to the surface. Fumaroles are present on active volcanoes during periods of relatively quiet between eruptions. Photo was taken at Yellowstone National Park. |
Most of what we have come to think of as our daily fruits, vegetables, and grains were domesticated from wild ancestors. Over hundreds and thousands of years, humans have selected and bred plants for traits that benefit us -- traits such as bigger, juicier, and easier-to-harvest fruits, stems, tubers, or flowers. For short-lived, or annual, plants, it is relatively easy to envision how such human-induced selection rapidly led to changes in morphology and genetics such that these plants soon become quite different from their wild progenitors.
But what about longer-lived, perennial crops, such as fruit or nut trees? How do these long-lived species respond to short-term selection processes, and will this information be helpful in predicting responses to rapid climate changes?
Dr. Allison Miller (Saint Louis University, MO) and Dr. Briana Gross (National Center for Genetic Resource Preservation, USDA-ARS, Fort Collins, CO) are interested in the diversity of plant genomes in domesticated crops and the evolution of their breeding systems under domestication. They undertook an extensive review of perennials, primarily long-lived tree crops, comparing their morphology and genetics in response to human selection pressures to that of natural tree populations and annual crops, which is something we know a lot about. They published their findings in the September issue of the American Journal of Botany (http://www.amjbot.org/content/98/9/1389.full).
"Since their origins roughly 10,000,000 years ago, agricultural societies have been based primarily on annual grains and legumes such as corn, wheat, rice, common beans, and lentils," notes Miller. "The importance of these crops is without question; however, every agricultural society has also domesticated perennial plants and these are less well-known than the annuals."
In their article, Miller and Gross point out that one of the challenges to domesticating long-lived species is that they have especially long juvenile phases. This imposes limits on farmers because they have to wait years before they can evaluate, select, and cultivate fruits, in contrast to annuals that can be grown from seed every year. Moreover, like many trees in nature, perennial tree crops are often obligate outcrossers, requiring pollination from another individual. Farmers have gotten around these "obstacles" by clonally propagating individuals with desirable traits.
While clonal propagation may seem like it would result in lower genetic variation, the authors observe that clonal propagation and a long juvenile phase means perennial tree-crops have actually gone through fewer sexual cycles since domestication and thus have remained closer, genetically, to their wild progenitors. Indeed, perennial fruit crops retain an average of 95% of the (neutral) genetic variation found in their wild counterparts, compared with annual fruit crops which retain about 60%.
Interspecific hybridization is very common in tree species in nature, and this ability to readily hybridize is an important trait in domestication -- once a hybrid is formed, it can become the basis for an entire new variety through clonal propagation. Thus, clonal reproduction can also result in rapid rates of change in domesticated systems because individuals with desirable traits can be reproduced exactly and extensively.
"The evolution of perennial plants under human influences results in significant changes in reproductive biology," notes Miller, "and in many cases, perennial crops have reduced fertility in cultivation."
While many annual crops were domesticated from self-compatible wild ancestors, few perennial crops were derived from selfing wild populations. Thus, domesticated perennials often encounter mate limitation barriers when one or just a few clones are planted across a geographic region. However, plants in these agricultural systems have responded by evolving alternative strategies to ensure fruit production. For example, grapes have shifted from having unisexual to bisexual flowers and to having self-compatible fertilization.
Genetic bottlenecks in cultivated populations occur when only a subset of wild individuals are brought under cultivation -- over time, the genetic base narrows as superior individuals are selectively propagated, resulting in elite cultivars that can be genetically depauperate. However, the authors found that many domesticated tree crops are derived from multiple areas, where seeds and cuttings were removed from geographically distinct wild populations. Moreover, many perennial species are highly heterozygous and clonal propagation maintains this heterozygosity at the individual level. Thus, perennial tree crops tend to have a much broader genetic bottleneck than annuals.
In light of the growing concern over monocultures and the loss of genetic diversity in our domesticated crops, Miller and Gross' review of perennial long-lived crops highlights the importance of maintaining long-lived perennials which may have lower environmental impacts as well as higher genetic variability within their populations.
"Understanding how basic evolutionary processes associated with agriculture (e.g., domestication bottlenecks, selective cultivation) impact plant species is critical for crop breeding and for the conservation of crop genetic resources," concludes Miller.
Scientists are also interested in how climate change might impact agriculture. In this framework, Miller is interested in exploring how perennial crops withstand heterogeneous climates over multiple years. "Little is known about the genomic basis of adaptation to climate in perennial plants, or how gene expression patterns may vary from year to year based on climatic conditions in a given location," she notes.
Cooperative algebra learners swim from the opening pool problem through inequality-infested waters and into life-altering decisions all within one class period. Modeling a pool scenario with an inequality in one variable opens this lesson, which eventually ends with our swimmers modeling a job-choice scenario with an inequality to decide which job is best. In between the pool and the job decision is a whole-class activity delving into the consequences of operations applied to the two sides of an inequality. Then a mix-and-match game gives our swimmers practice solving inequalities in one variable and matching each solution to its graph. The cooperative debate over which job to take is modeled by a one-variable inequality; learners solve the inequality, represent the solution set graphically, interpret it within the context of the problem, and make decisions based on their solution.
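For readers who want to see a job-choice model worked symbolically, here is a minimal sketch; the pay figures are invented placeholders, since the lesson supplies its own numbers.

```python
# A sketch of a job-choice inequality with invented pay rates (not the lesson's data):
# Job A pays a $50 weekly bonus plus $12/hour; Job B pays $15/hour.
# Job B beats Job A when 15*h > 50 + 12*h.
from sympy import symbols, solve_univariate_inequality, Gt

h = symbols("h", real=True)
solution = solve_univariate_inequality(Gt(15 * h, 50 + 12 * h), h)
print(solution)  # h > 50/3, i.e. Job B pays more once you work more than ~16.7 hours
```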
- Activity explores and addresses misconceptions that exist around inequalities and the steps to solve them
- Real-life scenarios are used to set up the inequalities in one variable
- May need to supplement with more directed practice solving inequalities in one variable |
Back to School Tips
Welcome back to School! Let's stay safe! Follow these safety tips:
Walking to school:
- Look both ways before crossing the street!
- Cross at crosswalks. Crosswalks are placed in areas that are generally safer to cross the street.
- Walk the route to school with your child to assure the route is safe. Teach your child to avoid hazards.
- Walk with a group, preferably with adults.
- Consider a "walking school bus." An adult begins the route by picking up the first child, then stops at each home to "pick up" other children. All walk to school together under the adult's supervision. Parents may trade this responsibility so it is not a burden on any one parent.
- Make sure children know the way to school. New neighborhoods, new construction, or even changing landmarks may confuse younger children.
- Wear bright clothing and have a brightly colored backpack. Reflective clothing, shoes, and stickers will increase visibility at dusk and dawn.
- Make eye contact with drivers to make sure they are stopping for you before crossing in front of vehicles. Vehicles sometimes stop for reasons other than pedestrians, though it may look like the vehicle is stopping for the child.
- Always wear a helmet!
- Always ride with traffic. Though it seems to make sense to ride facing vehicle traffic, it makes it harder for drivers to see the bicyclist at intersections and exiting driveways.
- Stop at red lights and stop signs. Bicyclists must follow all laws vehicles must follow.
- Walk bikes across intersections and crosswalks.
- Slow down around schools! Every school morning and afternoon brings a chaotic mix of kids walking, biking, and climbing into and out of vehicles to attend school. Be patient and increase your vigilance.
- Eliminate distractions! Driving is hard and requires total attention. Eating, talking on the phone, responding to passengers in your vehicle, and any other types of distractions divides your attention and creates an opportunity for you to have an accident. Focus on driving!
- Follow the rules of the road! Traffic violations increase your chances of getting into an accident.
- No passing stopped school buses! Not only is this a law, but it is a huge risk to children going to or getting off of the bus.
- If something goes wrong, focus on driving the car to a safe location. That something may be a spill, a child who has gotten out of a car seat or a seat belt, a bee that has gotten into the car, etc. Many crashes occur while the driver tries to solve a seemingly emergency situation before parking the car in a safe location. Drive to a safe location, then deal with the issue.
- Always set a good example for your children! They watch every move you make. Let them learn the right things from you. |
For a long time, it was considered that these interactions between molecules are always attractive. Now, for the first time, researchers have found that in many rather common situations in nature the Van der Waals force between two molecules becomes repulsive.
Introduced in 1930, the Van der Waals force is a term that describes the attractive intermolecular forces between molecules. It was thought to push back only when groups of molecules are under pressure. According to the new research, such a reversal can also occur in the real world, where crowds of molecules jostle freely.
Scientists applied a model that shows how the charges on particles become polarized under certain conditions. They then compared their findings with experimental results and found that the Van der Waals force can push when they were only expected to pull.
The model enables researchers to understand that Van der Waal force is like a little brother to the bonds that connect atoms to one another.
Electrons are negatively charged particles that move around a positively charged nucleus. While moving, they are more likely to occupy some areas around the atom than others, depending on what else happens to be pushing and shoving nearby.
Additionally, particles of the same charge repel one another. Thus, when electrons bunch up they make that part of a molecule more negative, and that region is then pulled towards more positively charged zones of neighbouring molecules.
Such loose bonding makes H2O molecules more sticky. This gives the liquid a high surface tension that makes belly flops in the swimming pool hurt so much.
Van der Waals forces are like these hydrogen bonds, but they apply even when the molecules' charges are not so imbalanced. Van der Waals forces are also much less powerful than other forms of chemical bonding, and they require molecules to be relatively close to one another.
And squeezing molecules close together can shift their electron charge density. The researchers wanted to know if the same rearrangements of electrons could sometimes be repulsive under other crowded conditions that weren’t under high pressure.
Alexandre Tkatchenko, a researcher from the University of Luxembourg said, “The textbooks so far assumed that the forces are solely attractive. For us, the interesting question is whether you can also make them repulsive.”
To find out, the scientists used a model called a Drude oscillator, which represents the fluctuating charge densities around particles in a confined space. It demonstrates the tiny tugs between molecules over short distances, and shows that these tugs sometimes turn into the occasional shove, even when the molecules are not being squeezed together.
Researcher and developer of the model, Mainak Sadhukhan said, “We could rationalize many previous experimental results that remained unexplained until now. Our new theory allows, for the first time, for an interpretation of many interesting phenomena observed for molecules under confinement.” |
Are you a twin?
About 2% of pregnancies produce twins. Roughly a third of these are identical twins, with identical genes, making them genetic clones. Non-identical twins share around half their genes, so they are no more alike than ordinary brothers and sisters. In twin studies, scientists assume that both sorts of twins usually share the same environment: upbringing, diet and so on. But this is not necessarily the case.
Are twins always the same?
Twins are born when two babies grow in their mother's womb at the same time. During fertilisation, a single sperm usually fertilises a single egg, which then starts to grow into an embryo, by dividing into two cells, then four, and so on. Occasionally, one embryo splits into two that then grow into genetically identical individuals – clones. Non-identical twins develop if two eggs are fertilised at the same time, by two separate sperm. They have no more genetic similarity than brothers and sisters.
Can twins be different heights?
Scientists can study twins to see how our genes and environment affect our appearance, health and behaviour. They have compared the height and weight of identical twins with those of non-identical twins, for example. Both depend on diet and genes, but to different extents. Your genes do affect your adult weight, but the amount you eat is more important. Your adult height depends on how tall your parents are, although it is also affected by your diet. |
The human body is made up of billions of cells, which in a healthy body are usually turning over slowly, in an organised way. Cancer is the term we use for a disease that occurs when these cells grow in an abnormal and uncontrolled way. This uncontrolled growth of cells can cause a lump or a mass to form, which is called a tumour. Tumours can be benign or malignant.
Benign tumours usually grow slowly and do not spread to other parts of the body. Benign tumours only become a problem if they grow very large, taking up space and affecting the way the body works. Malignant tumours are made up of cancer cells. They are usually faster growing, can destroy tissue and have the ability to spread to other parts of the body.
Cancer may also affect blood cells, causing blood cancers such as leukaemia, lymphoma or myeloma. These blood cancers also cause normal blood cell production to be reduced due to the uncontrolled growth of the abnormal (malignant) cells in the bone marrow.
Over time, the uncontrolled growth of cancer cells usually becomes too much for the body to cope with, or will spread to a part of the body that is essential for life. |
Chlamydia, genus of microorganisms that cause a variety of diseases in humans and other animals. Psittacosis, or parrot fever, caused by the species Chlamydia psittaci, is transmitted to people by birds, particularly parrots, parakeets, and lovebirds. In birds the disease takes the form of an intestinal infection, but in people it runs the course of a viral pneumonia. Different forms of Chlamydia trachomatis cause trachoma, an infection of the mucous membrane of the eyelids, and the sexually transmitted disease lymphogranuloma venereum. This same species also causes the sexually transmitted disease called chlamydia, the most common such disease in the United States. In women, chlamydia is a common cause of pelvic inflammatory disease, which can result in infertility and an increased risk of tubal pregnancy. Men are the primary carriers, but painful urination and discharge often prompt men to get treatment before the testes can be infected and male infertility can result. Chlamydial infections can be treated with antibiotics such as tetracycline.
The Columbia Electronic Encyclopedia Copyright © 2004.
Licensed from Columbia University Press |
Two books in the Old Testament. These books narrate the history of Israel from the rebellion of Adonijah, the fourth son of King David (about 1015 B.C.), to the final captivity of Judah (about 586 B.C.). They include the whole history of the northern kingdom (the ten tribes of Israel) from the separation until the Assyrians took them captive into the north countries. See also Chronology in the appendix.
Chapter 1 describes the final days of King David’s life. Chapters 2–11 record Solomon’s life. Chapters 12–16 tell of Solomon’s immediate successors, Rehoboam and Jeroboam. Jeroboam caused the division of the kingdom of Israel. Other kings are also mentioned. Chapters 17–21 record parts of the ministry of Elijah as he admonished Ahab, king of Israel. Chapter 22 records a war against Syria in which Ahab and Jehoshaphat, king of Judah, join forces. The prophet Micaiah prophesies against the kings.
Chapters 1:1–2:11 continue the life of Elijah, including Elijah’s rise to heaven in a chariot of fire. Chapters 2–9 relate Elisha’s ministry of faith and great power. Chapter 10 tells of Jehu, the king, and how he destroyed the house of Ahab and the priests of Baal. Chapters 11–13 record the righteous reign of Jehoash and the death of Elisha. Chapters 14–17 tell of various kings who reigned in Israel and Judah, often in wickedness. Chapter 15 records the Assyrian capture of the ten tribes of Israel. Chapters 18–20 record the righteous life of Hezekiah, the king of Judah, and the prophet Isaiah. Chapters 21–23 tell of the kings Manasseh and Josiah. According to tradition, Manasseh was responsible for the martyrdom of Isaiah. Josiah was a righteous king who reestablished the law among the Jews. Chapters 24–25 record the Babylonian captivity. |
Analysis of short story "Everyday Use" by Alice Walker
With her story, "Everyday Use," Alice Walker is saying that art should be a living, breathing part of the culture it arose from, rather than a frozen timepiece to be observed from a distance. To make this point, she uses the quilts in her story to symbolize art; and what happens to these quilts represents her theory of art.(thesis)
The quilts themselves, as art, are inseparable from the culture they arose from. (topic sentence) The history of these quilts is a history of the family. The narrator says, "In both of them were scraps of dresses Grandma Dee had worn fifty and more years ago. Bits and pieces of Grandpa Jarrell's Paisley shirts. And one teeny faded blue piece . . . that was from Great Grandpa Ezra's uniform that he wore in the Civil War." So these quilts, which have become an heirloom, not only represent the family, but are an integral part of the family. Walker is saying that true art not only represents its culture, but is an inseparable part of that culture. The manner in which the quilts are treated shows Walker's view of how art should be treated. Dee covets the quilts for their financial and aesthetic value. "But they're priceless!" she exclaims, when she learns that her mother has already promised them to Maggie. Dee argues that Maggie is "backward enough to put them to everyday use." Indeed, this is how Maggie views the quilts. She values them for what they mean to her as an individual. This becomes clear when she says, "I can 'member Grandma Dee without the quilts," implying that her connection with the quilts is personal and emotional rather than financial and aesthetic. She also knows that the quilts are an active process, kept alive through continuous renewal. As the narrator points out, "Maggie knows how to quilt."
The two sisters' values concerning the quilt represent the two main approaches to art appreciation in our society. Art can be valued for financial and aesthetic reasons, or it can be valued for personal and emotional reasons. When the narrator snatches the quilts from Dee and gives them to Maggie, Walker is saying that the second set of values is the correct one. Art, in order to be kept alive, must be put to "Everyday Use" -- literally in the case of the quilts,... |
Nov 6, 2019
There are no such things as magnetic field lines in space.
The Voyager 1 spacecraft encountered the Sun’s heliosheath in December 2004, followed by Voyager 2 in August 2007. It was Voyager 1 that first found fluctuations in electron density as it traveled through the heliosphere, while Voyager 2 made similar observations later in 2008. At the time, astrophysicists were surprised by the “frothy magnetic bubbles” that Voyager 1 detected.
Recently, a team from the University of Iowa announced that Voyager 2 is also within the interstellar medium (ISM). An increase in plasma density is evidence that it encountered the higher-density plasma of interstellar space. Voyager 1 previously experienced that plasma density jump.
Since NASA’s computer model of the heliosphere works only if the readings are assumed to come from flying in and out of the aforementioned bubbles, some means for their creation had to be concocted. Enter stage left, the old tried and true, magnetic reconnection.
As a press release from the time states, the Sun’s “twisted and wrinkled” magnetic field lines far out in the heliosphere “bunch up,” causing them to “reconnect” and explosively “reorganize” into long, sausage-shaped bubbles of magnetism.
Retired Professor of Electrical Engineering, Dr. Donald Scott's admonition about magnetic reconnection should be kept in mind when reading reports about Earth's interaction with the plasma stream (commonly called the solar wind) and electromagnetic energy radiating from the Sun. He notes that magnetic field lines are only convenient ideas and nothing more. They indicate a magnetic field's direction. Schematic diagrams consisting of magnetic field lines are useful for visualizing its shape and strength, although lines of force do not exist in space any more than lines of latitude or longitude do.
Magnetic field lines do not move because they do not exist, since the field is continuous. Consensus opinions ignore that fact and speak of field lines that can move, touch, merge, and “detonate.” Scott’s observation is that if this idea were to be applied to circles of longitude, they would come together and “merge” in the polar regions and could be theorized to be the source of gravitational energy.
There is no such thing as “magnetic merging” or “reconnection” of magnetic field lines in the real world. The energy comes from electrical currents, which can move, touch, merge, and detonate. The cellular structure confining electric currents in space is not directly observable, except by flying a space probe through them. They have been detected on Earth and in near-space.
Charged particles in motion comprise an electric current. That current wraps itself in a magnetic field. As more charged particles accelerate in the same direction the magnetic field gets stronger. A familiar idea to electrical engineers, but when astronomers find magnetism in space they are mystified. They resort to ironic ideas about voids with magnetic fields frozen inside them or so-called “magnetic reconnection.”
Electric Universe advocate Wal Thornhill wrote:
“…plasma in space forms a bubble, known as a ‘virtual cathode’. Effectively it is the heliopause. In plasma terms, the heliopause is not a result of mechanical shock but is a Langmuir plasma sheath that forms between two plasmas of different charge densities and energies…Such ‘bubbles’ are seen at all scales, from the comas of comets to the ‘magnetospheres’ of planets and stars.”
Earth’s magnetospheric “bubble” is known among plasma physicists as a Langmuir sheath. Langmuir sheaths are electrically charged double layers of plasma, in which opposite charges build up near each other, creating an electric field between them. Double layers can accelerate ions to extreme velocities that might easily be misinterpreted as high temperature.
The same conditions are most likely present where the solar magnetosphere, or heliosphere, meets the dissimilar charge of the Interstellar Medium (ISM). Two regions of dissimilar plasma will form a Langmuir sheath between them, which leads to a potential “bubble” formation.
by Johnny Sullivan, Graduate Research Assistant, Center for Research in Water Resources, The University of Texas at Austin, [email protected]
It is unclear at present exactly how climate change will affect global precipitation patterns on a long-term time scale. The Intergovernmental Panel on Climate Change has agreed on general trends, however, and projects in its 2007 report that certain regions will experience wetter climates, whereas other regions will receive less rainfall than ever before. It is the specific locations and magnitudes of these effects that are uncertain.
Last summer, Texas endured one of the worst droughts in the state’s history. It is not yet clear to what extent that drought was caused or intensified by climate change. But it is clear that having a system to better understand drought severity, especially within the context of previous droughts, could improve preparedness for such events in the future, whether climate change-related or otherwise.
Measuring the stage and flow of important rivers and water bodies is one way to analyze drought status. Another aspect is the amount of moisture present in soils. This facet is not easily observable and can thus easily be overlooked. Still, it is critical, for the moisture content relates directly to plant health, an important consideration in a state with such a large agricultural industry. Plants also preserve soil quality and prevent erosion. As such, a full picture of the drought status must include an analysis of the current state of soil moisture.
To this end, the goal of this project was to create a map, available online and updated in real-time, showing the extent and severity of drought across the state of Texas based on soil moisture content. Knowing only the amount of water currently present in the soils is not sufficient; one must have both the current water content and maximum available water capacity. When combined, these two pieces of data describe the drought status, for they detail how much water is contained in the soils compared to how much the soils can possibly hold. Stating that there is 4 cm of moisture in the top meter of soil means very little. It’s far more valuable to know that there is 4 cm present out of a possible 30 cm – or out of a possible 5 cm. The first case would likely be considered a fairly serious drought, while the second would cause little to no concern.
Both the current water storage and available water capacity are obtained from the North American Land Data Assimilation System (NLDAS), a collaboration between numerous governmental and academic institutions that publishes many types of land surface data in real-time in a 1/8th degree grid across the United States. The distribution of grid points over Texas, each of which represents a 1/8th degree quad, can be seen in the map below.
Figure 1. A map of Texas displaying the NLDAS data points across the state; the points are the centroids of the 1/8th degree quads to which NLDAS data is output.
One of the data types included in NLDAS is soil moisture content (a.k.a. current water storage) in kg/m2, for a depth of 0-100 cm. The Noah land-atmosphere model is used for this project’s analysis, and the files are available from the NLDAS FTP site in the GRIB file format. Unfortunately, GRIB files are not compatible with ArcGIS, so these files are converted to the netCDF format on-the-fly via Unidata’s THREDDS server. The available water capacity was obtained from the NLDAS soil parameters. NLDAS defines 18 different soil classes, each with unique values for field capacity and wilting point, and every 1/8th degree grid is assigned one of these classes.
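The article does not spell out how field capacity and wilting point are turned into an available water capacity, but a standard soil-water calculation for the 0-100 cm layer is sketched below. The soil-class names, their values, and the conversion shown are illustrative assumptions, not the project's actual NLDAS parameters.

```python
# Illustrative sketch only: convert a soil class's field capacity and
# wilting point (volumetric fractions) into an available water capacity
# in kg/m^2 for the 0-100 cm layer. The class values below are made up;
# NLDAS defines 18 classes with its own parameters.

DEPTH_M = 1.0           # thickness of the 0-100 cm layer
WATER_DENSITY = 1000.0  # kg/m^3, so 1 m^3/m^3 of water over 1 m depth = 1000 kg/m^2

SOIL_CLASSES = {
    "sandy loam": {"field_capacity": 0.21, "wilting_point": 0.09},
    "clay":       {"field_capacity": 0.36, "wilting_point": 0.21},
}

def available_water_capacity(soil_class: str) -> float:
    """Water (kg/m^2) the 0-100 cm layer can hold between wilting point and field capacity."""
    c = SOIL_CLASSES[soil_class]
    return (c["field_capacity"] - c["wilting_point"]) * DEPTH_M * WATER_DENSITY

print(available_water_capacity("sandy loam"))  # 120.0 kg/m^2
print(available_water_capacity("clay"))        # roughly 150 kg/m^2
```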
These two pieces of data are then used to map drought across the state. In order to do so, a soil wetness index (SWI) is calculated. SWI is a metric developed for this project in cooperation with David Mocko at NASA. It describes the amount of soil moisture present out of how much could potentially exist in the soil; in other words, the current water storage divided by the available water capacity. SWI ranges from 0 to 100% for each NLDAS quad, and this value is updated every day as new soil moisture data is obtained from NLDAS. Once the SWI has been calculated for each quad, the values are averaged on a county basis, as a county-level scale is most useful for disaster management and emergency operations. This yields a single SWI value for each county across the state. Finally, a map is updated with these new values. This analysis is carried out using a geoprocessing model in ArcGIS, as shown in the following figure.
Figure 2. The ArcGIS geoprocessing model created to carry out the analysis for this project.
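The geoprocessing model itself is an ArcGIS workflow, but the core arithmetic it performs can be sketched in a few lines of Python. The column names, values, and the pandas-based county averaging below are illustrative assumptions rather than the project's actual implementation.

```python
import pandas as pd

# Sketch of the core SWI arithmetic. Each row stands for one NLDAS
# 1/8-degree quad; the column names and values are hypothetical.
quads = pd.DataFrame({
    "quad_id":        [101, 102, 201, 202],
    "county":         ["Travis", "Travis", "Potter", "Potter"],
    "soil_moisture":  [80.0, 95.0, 10.0, 15.0],     # current storage, kg/m^2
    "water_capacity": [150.0, 160.0, 120.0, 110.0], # available capacity, kg/m^2
})

# SWI = current water storage / available water capacity, in percent
quads["swi"] = 100.0 * quads["soil_moisture"] / quads["water_capacity"]

# Average the quad values up to the county level, the scale used for the map
county_swi = quads.groupby("county")["swi"].mean().round(1)
print(county_swi)  # Potter comes out near 11%, Travis near 56%
```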
The resulting map is representative of the current drought status in Texas. The SWI values for September 17, 2012 can be seen in the map below. There is wide variation across the state, with the soil wetness index ranging from 0 in the northern panhandle to 75% in central Texas.
Figure 3. Map of Soil Wetness Index across Texas for September 17, 2012.
While this map is useful, it is particularly interesting to see how the SWI changes over time. The animation below depicts the SWI values on the first of every month from January of 2011 to June of this year. It clearly shows the progression of the record-setting drought that ravaged the state during the summer of 2011.
Going forward, a statistical analysis of historical soil moisture content will be carried out in order to determine how significant the SWI values are for different areas of the state. For instance, in some of the drier areas it is currently difficult to discern whether a low SWI value indicates drought, or if soil moisture content is simply perennially low in that region. Examining historical data will allow for such a determination to be made. The analysis described above will also be automated, and a new map will be available online every day.
Furthermore, when paired with STATSGO or SSURGO data, this service can be used to calculate the total amount of soil moisture across the state. By comparing this value to an estimate of water storage change from NASA’s GRACE mission, the role of soil moisture (as compared to rivers, reservoirs, etc.) in the overall “water balance” for the state can be quantified.
The author would like to thank his advisor, Dr. David Maidment, for his guidance and support and the Texas Natural Resources Information System (TNRIS) for funding the project. |
In 1976, the people of Cuba overwhelmingly approved a Constitution that established a political system based on popular participation. The Constitution established structures of “Popular Power,” where the highest authority resides in the National Assembly of Popular Power. The deputies of the National Assembly are elected by the delegates of the 169 Municipal Assemblies in the country, who are elected in elections with two to six candidates in voting districts of 1000 to 1500 voters. The candidates are nominated in a series of nomination meetings held in each voting district. They are not nominated by any political party, and focus at the nomination meetings is on the leadership qualities of the candidates. The deputies of the National Assembly are elected to five-year terms. As the highest political authority in the nation, the National Assembly enacts legislation, and it elects the 31 members of the Council of State and Ministers, including the President of the Council of State and Ministers, who is the chief of state.
The Cuban Constitution of 1976 also established requirements for consultation by the national, provincial, and municipal assemblies with mass organizations. The mass organizations are organizations of workers, women, students, peasants and cooperative members, and neighborhoods. They meet on a regular basis to discuss concerns of their members, and the discussions range from concrete problems to major global issues. The mass organizations have a participation rate of 85%.
There are other examples of revolutions and movements forming popular assemblies and popular councils: the Paris Commune of 1871, the Russian Revolution of 1917, the German Revolution of 1918, the Hungarian Revolution of 1919, the General Strike in Great Britain in 1926, and the Hungarian Revolution of 1956 (Grant 1997:61). Popular councils also have been developed in Vietnam (Ho 2007:162-76), and they are being developed today in Latin America, particularly in Venezuela, Bolivia, and Ecuador.
Popular assemblies and popular councils are structures of popular democracy. They are fundamentally different from bourgeois structures of representative democracy. Popular democracy is characterized by regular face to face meetings of small groups in places of work and study and in neighborhoods, where the people meet to discuss the challenges and issues that they confront. In such settings, if someone has a confused or distorted conception, those persons with a more informed and comprehensive understanding of the issue can explain further, thus reducing the tendency to distortion and confusion, helping the people to understand the issues. In this process, those with a capacity to explain and with a commitment to fundamental human values earn the respect, trust, and confidence of their neighbors, co-workers, and/or fellow students. It is an environment that gives space to natural and indigenous leadership, and many leaders are able to develop their leadership capacities in the various mass organizations, serving from the local to national level. In Cuba, for example, it is not uncommon to find informed, committed, and articulate persons serving as president of the neighborhood organization for a city block, or as president of a municipal assembly in a small rural town.
In contrast, representative democracy is an impersonal and anonymous process. The people vote, or they select from predetermined answers for an opinion survey, but they do not meet to discuss and to inform themselves. They respond not to arguments, reasons, and evidence presented in face to face conversations, but to slogans and sound bites presented in the mass media, sometimes in the form of political advertising. Representative democracy is a process in which organizations compete, vying to see which political party or particular interest can generate the most support in elections or opinion polls, or better said, to see which party or interest can more effectively manipulate the people, who never meet to argue, debate, and discuss. In such a context, with competing particular interests presenting different and opposed spins and manipulations, the development of a consensus that could be the basis of a constructive national project is no more than an idealistic and naïve hope.
The formation of popular councils is an integral and necessary dimension of a social transformation that seeks a just and democratic world.
August, Arnold. 1999. Democracy in Cuba and the 1997-98 Elections. Havana: Editorial José Martí.
Grant, Ted. 1997. Rusia—De la revolución a la contrarrevolución: Un análisis marxista. Prólogo de Alan Woods. Traducción de Jordi Martorell. Madrid: Fundación Federico Engels.
Ho Chi Minh. 2007. Down with Colonialism. Introduction by Walden Bello. London: Verso.
Lezcano Pérez, Jorge. 2003. Elecciones, Parlamento y Democracia en Cuba. Brasilia: Casa Editora de la Embajada de Cuba en Brasil.
The Functions Of The Uterus
The uterus has several important functions when it comes to reproduction. It has several different parts, including the body and the cervix. The uterus is located in the pelvis and is held in place by several different ligaments. These include the transverse ligament, the cardinal ligaments, the cervical ligaments, the uterosacral ligaments and, lastly, the pubocervical ligaments. One specific ligament, described as a fold of the peritoneum and known as the broad ligament, serves to cover and protect the uterus.
The uterus is the organ responsible for making menstruation possible. As hormones from the pituitary gland are released, they act upon the uterus, which responds immediately to this stimulation. The lining of the uterus is what is shed during menstruation, which is why the organ is so important in the female reproductive system. Aside from that, the uterus also allows the blood to flow or pass to the pelvis and eventually to the external genitals or the vagina.
The uterus is also responsible for housing the ovum that comes from the fallopian tube. The ovum implants into the endometrial lining of the uterus, where it is given an adequate supply of blood, oxygen and nutrients from the rich network of blood vessels in that area. The ovum will not survive if it does not receive adequate nourishment from the blood, which is why it implants in an area where it can definitely be supported: the uterus. The ovum, once it is fertilized, becomes an embryo, which attaches to the wall and eventually develops its placenta. The placenta will house the embryo as it grows later on. The embryo will then develop into a fetus until it is eventually ready for childbirth.
The uterus can be found inside the pelvis just above or specifically dorsal to the bladder. It is located ventral to the rectal area as well. If you are to observe the uterus, it can actually be described as something with a pear shape and is measuring about 7 to 8 centimeters in length. There are actually four different parts or segments of the uterus which are known as the corpus or the uterine body, the fundus, the cervix and lastly, the os.
The parts of the uterus are arranged as follows. The external part of the uterus is the cervix, also known as the neck of the uterus. The cervix is further divided into several areas: the external os of the uterus, the central canal of the cervix and, lastly, the internal os of the cervix, which is the part closest to the body of the uterus. After the cervix comes the body of the uterus itself. The body of the uterus, also known as the corpus, is further divided into two parts: the cavity of the corpus and the fundus. The fundus is the topmost area of the uterus and has a rich blood supply.
The Layers Of The Uterus
Aside from its several parts, the uterus also has several layers. The innermost layer, or lining, is known as the endometrium, which is divided into two areas: the basal endometrium and the functional endometrium. The functional endometrium is the part shed during menstruation. The next layer after the endometrial lining is the myometrium, which is mainly composed of the smooth muscle that makes up the bulk of the uterus. Next is the parametrium, the loose connective tissue that surrounds the uterus. The perimetrium is the outermost layer and covers and protects the fundus.
Position Of The Uterus
The uterus is supported by different structures that ensure its placement in the pelvic cavity: the perineal body, the pelvic diaphragm and the urogenital diaphragm. The position of the uterus is unusual in that it sits in anteversion in the cavity. The anteverted position means that the organ is angled forward relative to the axis of the vagina and that of the cervix. This angle measures about 90 degrees whenever the rectum and the bladder are empty.
Blood Supply Of The Uterus
The uterus must be adequately supplied with blood so that it can maintain its function, especially during pregnancy, when the uterus needs most of the blood supply it can get from the body. The uterus is supplied by the ovarian artery, the arcuate artery, the radial artery, the spiral artery and the basal artery. These comprise the arterial vasculature of the uterus. The main suppliers of blood to the uterus are the ovarian artery and the uterine artery.
These details simply show that the uterus is a very integral part of the female reproductive system. It serves important functions in maintaining normal female body functioning.
Rosalind Elsie Franklin (25 July 1920 – 16 April 1958) was a British biophysicist and X-ray crystallographer who made critical contributions to the understanding of the fine molecular structures of DNA, RNA, viruses, coal, and graphite. Franklin is best known for her work on the X-ray diffraction images of DNA, which led to the discovery of the DNA double helix.
Her data, according to Francis Crick, was “the data we actually used” to formulate Crick and Watson’s 1953 hypothesis regarding the structure of DNA. Franklin’s images of X-ray diffraction confirming the helical structure of DNA were shown to Watson without her approval or knowledge. Though this image and her accurate interpretation of the data provided valuable insight into the DNA structure, Franklin’s scientific contributions to the discovery of the double helix are often overlooked. Unpublished drafts of her papers (written just as she was arranging to leave King’s College London) show that she had independently determined the overall B-form of the DNA helix and the location of the phosphate groups on the outside of the structure. Moreover, Franklin personally told Crick and Watson that the backbones had to be on the outside, which was crucial since before this both they and Linus Pauling had independently generated non-illuminating models with the chains inside and the bases pointing outwards. However, her work was published third, in the series of three DNA articles in Nature, led by the paper of Watson and Crick which only hinted at her contribution to their hypothesis. |
Cyber Attacks and Their Culprits
Cybercrime is a big business around the world, affecting individuals, major corporations, and even government agencies. The statistics on cyber attacks are easy to find: in 2014 in the UK alone, online banking fraud rose by 48% and cost £60.4 million according to figures published by Financial Fraud Action UK. Another study by the UK government found that 90% of large and 74% of small organizations suffered a security breach in 2014. With cyber attacks on the increase, technology evolving every day, and each and every one of us a potential target, becoming wise to cyber security issues is more important than ever.
What are cyber attacks and where do they come from? A cyber attack is the deliberate exploitation of computer systems, infrastructures, computer networks, or personal computer devices in which attackers use malicious code to alter computer code, logic, or data. The ultimate aim is to either steal, alter, or destroy information. The people behind cyber attacks can have various motives, from cybercriminals interested in making money through fraud or from selling valuable information to “hacktivists” who attack companies for political or ideological reasons. There are even hackers who simply like the challenge of hacking into computer systems for fun.
Cyber attacks can be targeted or untargeted. In untargeted attacks, attackers indiscriminately target as many devices, services, or users as possible, exploiting the vulnerabilities in a system and taking advantage of the openness of the internet. “Phishing” is one such example; emails are sent to large numbers of people requesting sensitive information or encouraging users to visit a fake website. “Water holing” is another example, where cybercriminals set up a fake website or compromise a legitimate one to source personal information from unsuspecting users.
In a targeted attack, a deliberately chosen organization or individual is singled out for attack, and the results can be more damaging. “Spear-phishing” is an example—emails are sent to targeted individuals and contain an attachment or link holding malicious software. “Botnets” are another method. These are “zombie computers”—groups or networks of machines secretly taken over by cybercriminals who are then able to silently harvest sensitive information from users.
Over the past decade, there have been some high-profile cases of cyber attacks affecting global corporations. Whether the target is global businesses or individuals, attackers always look for vulnerabilities in IT systems, so the same guidance applies to all: the more protected you are, the less likely you are to fall prey to a cyber attack.
Firstly let us define the following quantities:
- 1. Force: a push or a pull.
- 2. Speed: distance covered per unit of time.
- 3. Velocity: speed in a given direction.
- 4. Uniform Velocity: constant speed in a straight line.
- 5. Acceleration: change of velocity per unit of time.
Newton's work on Gravitation can be summarised as follows:
(i) every body in the universe attracts every other body
(ii) the gravitational force between two bodies is directly proportional to the mass of each and inversely proportional to the square of the distance between them.
Newton's work in mechanics can be summarised in a statement of his three laws. The first tells us what happens in the case of a body on which the net force is zero.
I. Every body continues in a state of rest or of uniform velocity unless acted on by an external force.
The second law tells us how to deal with bodies that have a non-zero net force:
II. The acceleration of a body under the action of a net force is directly proportional to that force and inversely proportional to the mass of the body.
In mathematical terms, if F is the force, m the mass and a the acceleration, Newton's Second Law can be stated succinctly as F = ma - probably the most famous equation in all of Physics!
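As a quick numerical illustration (not part of the original course notes), the relation can be rearranged to compute the acceleration produced by a known net force:

```python
# Newton's second law, F = m * a, rearranged as a = F / m.
def acceleration(force_newtons: float, mass_kg: float) -> float:
    return force_newtons / mass_kg

print(acceleration(10.0, 2.0))   # a 10 N net force on a 2 kg mass gives 5.0 m/s^2
print(acceleration(10.0, 20.0))  # the same force on a 20 kg mass gives only 0.5 m/s^2
```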
The third law talks about the mutual forces that two bodies in contact exert on each other, and can be stated thus:
III. To every action there is an equal and opposite reaction.
The third law will not have much impact on the progress of this course, so we will not consider it further. However, if this confuses you (e.g. how can a body ever move if it experiences equal and opposite forces?) remember that the action and the reaction mentioned in the third law act on different bodies - I push on the earth (action) and the earth pushes back on me (reaction).
To give you an idea of the application of this new understanding of mechanics, let us look at the old question of whether, when two bodies are dropped from the same height, the heavier would reach the ground first, a question answered in the affirmative by Aristotle.
Let us consider the case of an elephant falling out of a tree. As Galileo showed, every falling body experiences the same acceleration in the absence of air resistance. However, when there is air resistance, the situation changes. Initially the elephant experiences only the force due to gravity, pulling it towards the centre of the earth. As its speed increases, however, air resistance also increases, opposing the force of gravity, which does not change. Eventually the force of the air resistance upwards equals the force of gravity downwards so that the net force on the elephant becomes zero. Newton's First Law then tells us that the elephant will continue from that point on with a constant speed. This speed is called the terminal velocity.
Now suppose that a feather drops from the tree at the same time as the elephant. In this case the gravitational force on the feather will be much less than it was on the elephant. So the air resistance on the feather will very quickly become equal to the force of gravity and the feather's terminal velocity will be much smaller than that of the elephant. Thus, in this case, Aristotle is correct, and the elephant will reach the ground long before the feather. Of course, in the absence of air resistance the situation is quite different. Although the force of gravity on the elephant is much greater than that on the feather, the elephant has a much greater inertial mass; and since acceleration is inversely proportional to the mass, it turns out that the acceleration of both is identical. |
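A minimal numerical sketch of the argument above is given below, using simple quadratic air resistance and invented masses and drag coefficients (so the numbers are purely illustrative, not real elephant or feather values). Both bodies settle at a constant terminal speed once the net force reaches zero, with the lighter, relatively draggier body settling at a much lower speed.

```python
# Crude falling-body simulation with quadratic air resistance.
# The masses and drag coefficients are invented purely for illustration.
G = 9.81  # gravitational acceleration, m/s^2

def speed_after(mass_kg: float, drag_coeff: float,
                t_max: float = 60.0, dt: float = 0.01) -> float:
    """Integrate m * dv/dt = m*g - c*v^2 from rest and return the speed at t_max."""
    v = 0.0
    for _ in range(int(t_max / dt)):
        net_force = mass_kg * G - drag_coeff * v ** 2   # gravity minus air resistance
        v += (net_force / mass_kg) * dt                 # Newton's second law: a = F/m
    return v

print(speed_after(mass_kg=5000.0, drag_coeff=15.0))  # "elephant": settles near 57 m/s
print(speed_after(mass_kg=0.005, drag_coeff=0.01))   # "feather": settles near 2.2 m/s
```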
More than a quarter century before Brown v. Board ended legal segregation in the United States, National Association for the Advancement of Colored People (NAACP) Field Secretary William Pickens (1881-1954) criticized the forced separation of white and black Americans. With logic typical of the NAACP's approach to fighting segregation counterbalanced by colorful phrasing and a strain of passion, Pickens anticipated many of the arguments African Americans in the coming decades would use to fight segregation. He attacked so-called separate but equal accommodations as "a mere legal fiction," scoffed at laws banning interracial marriage, and detailed segregation's potentially unhealthy effects on the black psyche. Pickens devoted much of his energy to this final point. Under the weight of segregation, light-skinned African Americans slipped under the color line to pass as white; African American children developed inferiority complexes they retained as adults; and the African American community splintered, internalizing the racism that kept segregation in place. To Pickens, segregation truly was subjugation.
How could African Americans escape this subjugation? Marcus Garvey, the founder of the United Negro Improvement Association (UNIA), preached voluntary separation. Born in Jamaica, Garvey (1887-1940) combined his international perspective with black nationalism, the belief in the need for African American political, economic, and social autonomy. Under him the UNIA urged African Americans to seek this independence by forming communities in places like Liberia, Sierra Leone, and the Ivory Coast. His message of independence and pride appealed to African Americans frustrated by the unmet promises of World War I. In the piece offered here, Garvey addresses white America, describing his desire to maintain "racial purity" by creating independent countries for people of African ancestry. "Will deep thinking and liberal white America help?" Garvey asks. If white Americans don't help, they're in for some trouble. Garvey threatens "riots, lynching, and mob rule,"—or worse, a black president—if the groups like the NAACP achieve their program of social equality. Statements like this show how Garvey's unique brand of black nationalism put him at odds with other black leaders of the time. He scoffed at the core idea of the growing civil rights movement—that with some convincing, white Americans would change their laws and minds to permit African Americans to live among them as equals. David Van Leeuwen's article, "Marcus Garvey and the Universal Negro Improvement Association" offers a convenient overview of Garvey's movement along with advice on how to teach it. Van Leeuwen notes important similarities between Garvey's message and that of Malcolm X. (18 pages.)
- What is the difference between segregation and separation?
- What effects does segregation have on the black community?
- What does Pickens's article suggest about segregation's effect on the white psyche?
- According to Pickens, why can't separate institutions be equal?
- What is the "Negro problem," according to Garvey?
- Compare Garvey's beliefs about social equality with those of Booker T. Washington, W. E. B. Du Bois, Martin Luther King, Jr., and Malcolm X.
- Why did Garvey seek the support of white Americans?
- How do we make sense of Garvey's talk of "the brotherhood of man" and his efforts to create a separate black territory?
- What might the United States be like had Garvey's vision of race relations been widely accepted?
- How do we reconcile Pickens's argument that the race problem is the greatest where segregation is the greatest with Garvey's argument that the race problem is a consequence of integration?
- What constitutes segregation?
- How did African Americans experience it?
- What is the difference between segregation and separation?
- What are the consequences of segregation? Separation?
Pickens: 6 pages
Garvey: 5 pages
Van Leeuwen: 8 pages
Total: 19 pages
William Pickens, The Heir of Slaves: An Autobiography, 1911, in Documenting the American South, from the University of North Carolina at Chapel Hill Library
William Pickens, "The Kind of Democracy the Negro Expects," address, 1919, in blackpast.org, from Dr. Quintard Taylor, University of Washington-Seattle
The Marcus Garvey and Universal Negro Improvement Association Papers Project, from the UCLA African Studies Center
Image: Drinking fountain on the county courthouse lawn, Halifax, North Carolina, April 1933. Courtesy of the Library of Congress, Prints & Photographs Division. |
Floods in the Ohio River Valley
Heavy rain and snow have swollen the rivers of Indiana, Illinois, and Kentucky, pushing many past flood stage during the first two weeks of January 2005. The flooding occurred after several days of rain and snow fell on the already saturated ground of the U.S. Midwest. Since the water could not be absorbed into the soaked ground, it ran off as flood water. The storms were followed by warm temperatures, which melted the snow and produced further flooding. By January 17, some of the flooding had started to recede, but large tracts of land along the Ohio and Wabash Rivers were still under water.
The Aqua MODIS instrument captured the top image of the flooded rivers on January 17. The Ohio and Wabash Rivers are the most noticeably flooded, but many other rivers are also much larger than they were on November 25, 2004. On November 25, the Wabash River measured less than 3 pixels across in the 500-meter-resolution MODIS image. On January 17, the river spanned 18 pixels at its widest point, increasing its width from approximately 1.5 kilometers to 9 kilometers. The Ohio River similarly grew to a width of 13.5 kilometers in the top image.
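The widths quoted here are simply pixel counts multiplied by the sensor's 500-meter ground resolution; a short sketch of that arithmetic, using the figures given above:

```python
# Convert a pixel count in the 500-meter-resolution MODIS image to a width in km.
def width_km(pixels: int, resolution_m: float = 500.0) -> float:
    return pixels * resolution_m / 1000.0

print(width_km(3))   # about 1.5 km, the Wabash River's width on November 25
print(width_km(18))  # 9.0 km at its widest point on January 17
```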
Floods along the Ohio are not unusual, but the timing of this flood is. The Ohio River and its tributaries often flood in the spring when winter's snow melts and runs into regional rivers. This flood, by contrast, is occurring in the middle of the winter. |
Definition of Ketonuria
Ketonuria: A condition in which abnormally high amounts of ketones and ketone bodies (a byproduct of the breakdown of cells) are present in the urine.
Ketonuria is a sign seen in diabetes mellitus that is out of control. Diabetics prone to ketonuria need to monitor their urine for signs of ketone buildup that could lead to life-threatening symptoms unless promptly treated. Ketonuria can also develop as a result of fasting, dieting, starvation and eating disorders.
Alternate names for ketonuria include ketoaciduria and acetonuria.
Digestion and the Role of Insulin
When food is digested, the body turns fats, proteins and carbohydrates into components that sustain and nurture the body. Fats are converted into fatty acids, proteins into amino acids and carbohydrates into glucose (a sugar) that enters the bloodstream. The body needs glucose as fuel to perform activities. However, glucose has to be delivered. It does not automatically route itself to body sites requiring fuel. Insulin, a hormone secreted by the pancreas, carries out this task, delivering glucose to cells throughout the body. Muscles and tissues then have the energy to do their jobs.
Ketones and Ketone Bodies: What They Are, How They Accumulate
In some people with diabetes mellitus, the pancreas releases insufficient amounts of insulin or no insulin at all. Consequently, glucose goes largely undelivered. In a desperate attempt to provide fuel, the body begins feeding on itself -- that is, it breaks down muscle and fat to burn as fuel. Ketone bodies are a byproduct of this process.
Ketone bodies consist chemically of three substances (beta-hydroxybutyric acid, acetoacetic acid, and acetone).
When ketone bodies are released, they enter the bloodstream, acidify the blood, and are eventually excreted mostly in urine. (One type of ketone body exits via the lungs.) Without treatment, glucose and ketone bodies may build to dangerous levels in the blood. Stress and illness can increase the risk of glucose and ketone buildup. When glucose and ketone bodies build to very high levels, the following conditions then exist:
1. Hyperglycemia: too much sugar in the blood.
2. Ketoacidosis: too many ketone bodies in the blood.
3. Ketonuria: accumulation of ketone bodies in the urine. When ketones are excreted, sodium is excreted along with them.
Symptoms and Treatment
Symptoms of glucose and ketone-body overload include thirst, frequent urination, dehydration, nausea, vomiting, heavy breathing, dilation of the pupils and confusion resulting from the toxic effects of ketone bodies and acid accumulation on the brain. In addition, the symptoms may also include a breath odor resembling the smell of fruit. (One type of ketone, acetone, is excreted through the lungs, causing the fruity smell.) This symptom-complex can progress to coma and death.
Treatment with insulin and intravenous fluids can restore normal levels of blood sugar and end ketoacidosis and ketonuria.
Prevention of emergencies in diabetics prone to ketonuria requires close monitoring of the levels of glucose in the blood and ketone bodies in the urine. Although ketone-body overload in the blood occurs primarily in type 1 diabetics, it can also occur in people with type 2 diabetes. Therefore, it is commonly recommended that all diabetics should closely monitor not only their glucose levels but also their ketone levels. Home test kits are available to check both glucose and ketone levels.
Ironically, ketonuria is a desired effect of a special "ketogenic diet" used to prevent or reduce the number of seizures in people with epilepsy (seizure disorders). Some physicians use this diet when conventional medications fail to control seizures or when the side effects of medications become intolerable.
The ketogenic diet, which is high in fats and low in protein and carbohydrates, mimics starvation and raises the level of ketone bodies in the blood. The ketone bodies can prevent or decrease the incidence of many types of seizures, including myoclonic (spastic) and atonic (drop) seizures. They may also limit other types of seizures, including so-called staring spells. Why ketone bodies may inhibit such seizures is not known.
The ketogenic diet is very strict and must be closely managed under a physician's supervision. Only a limited number of medical centers are equipped and trained to prescribe it. Source: MedTerms™ Medical Dictionary
“Healing” is a word that gets thrown around a lot and it’s important to understand exactly what it means. Healing means getting your body back into a balanced, functioning state. Think of it like balance scales – the kind you might see at a courthouse. When you’re sick, one side hangs lower than the other. When you’re healthy, they’re level.
Your body wants to be in balance and will seek to heal itself if it’s out of balance. Or, at least, it will try to. What’s the deciding factor? Oxygen. Oxygen is necessary for healing in injured tissues. Researchers at Ohio State University found that wounded tissue will convert oxygen into reactive oxygen species to encourage healing.
What Are Reactive Oxygen Species?
Reactive oxygen species, also known as oxygen radicals or pro-oxidants, are a type of free radical. A free radical is a molecule with an unpaired electron that is still able to maintain its structure.
To most people, that doesn’t mean much. We just hear from marketing messages that free radicals are bad. Which is true… when your body is not in control of them. When in balance, your body actually uses free radicals to heal. It has everything to do with the nature of oxygen.
Oxygen is an element with eight protons and eight electrons. In this state, oxygen is completely neutral. Oxygen likes to share its electrons; that makes it reactive. Sometimes when it shares an electron or two, it doesn't get them back. When that happens, oxygen becomes an ion, meaning it's missing an electron. Ionized oxygen wants to replace the electron it's missing. In this form, oxygen becomes singlet oxygen, superoxides, peroxides, hydroxyl radicals, or hypochlorous acid. These forms of oxygen try to steal an electron anywhere they can, which can be destructive.
Forms of Reactive Oxygen Species
This radical form of oxygen can act in one of two ways. It can trigger the genes inside a cell to start cell death. Or, if it encounters a lipid or fatty acid, it will oxidize the lipid. Think of it like corrosion.
We’re still learning about superoxides but it seems they affect how the body destroys cells and manages wound healing.
Hydrogen peroxide and hypochlorite help heal tissue. Oxygen radicals form when hydrogen peroxide interacts with reduced forms of metal ions or gets broken down, producing hydroxyl radicals. Hydroxyl radicals are destructive.
Hypochlorous acid contains oxygen and chloride. It can affect tissue through chlorination or oxidation.
Effects of Reactive Oxygen Species in the Body
Every time your muscles contract, you produce and use reactive oxygen species. High-intensity exercise causes reactive oxygen species levels to increase, leading to fatigue and muscle failure. Energy production in mitochondria also generates reactive oxygen species, as does exposure to tobacco smoke, alcohol, toxic metals, pollution, chemicals, germs, and stress.
When your body can keep up with and remove unneeded reactive oxygen species, you remain in balance. If reactive oxygen species become too abundant, the oxidative stress can be overwhelming. It’s at this point that antioxidants are a helpful defense against free radicals.
How Oxygen Fuels the Body
Every cell in your body requires oxygen. Every breath supplies blood with oxygen to be carried throughout your entire body. Oxygen is converted to energy in a process known as cellular metabolism.
What Else Does the Body Need for Healing?
Oxygen isn’t the only factor that contributes to the healing process. Your health can quickly fall apart if your body doesn’t eliminate waste and toxins. Accumulated waste in the intestines or colon means that toxins are lingering in your body.
Simple steps can help keep your digestive tract clear. Drink plenty of water, exercise, and regularly cleanse your colon, liver, and kidneys. Oxy-Powder® is an oxygen-based colon cleanse formula that releases monoatomic oxygen into the digestive tract to support digestion, soothe the colon, and ease occasional constipation.
by Dr. Edward Group DC, NP, DACBN, DCBCN, DABFM
- Sen CK. Wound Healing Essentials: Let There Be Oxygen. Wound Repair and Regeneration: official publication of the Wound Healing Society [and] the European Tissue Repair Society. 2009;17(1):1-18. doi:10.1111/j.1524-475X.2008.00436.x.
- Ohio State University Department of Medicine. Scientists Identify a New Role for Oxygen in Wound Healing. Last Accessed February 26, 2016.
- Triantaphylidès C, Krischke M, Hoeberichts FA, Ksas B, Gresser G, Havaux M, Van Breusegem F, Mueller MJ. Singlet oxygen is the major reactive oxygen species involved in photooxidative damage to plants. Plant Physiol. 2008 Oct;148(2):960-8. doi: 10.1104/pp.108.125690. Epub 2008 Aug 1.
- Chen Y, Azad MB, Gibson SB. Superoxide is the major reactive oxygen species regulating autophagy. Cell Death Differ. 2009 Jul;16(7):1040-52. doi: 10.1038/cdd.2009.49. Epub 2009 May 1.
- Jaimes EA, Sweeney C, Raij L. Effects of the reactive oxygen species hydrogen peroxide and hypochlorite on endothelial nitric oxide production. Hypertension. 2001 Oct;38(4):877-83.
- Aprioku JS. Pharmacology of Free Radicals and the Impact of Reactive Oxygen Species on the Testis. Journal of Reproduction & Infertility. 2013;14(4):158-172.
- Spickett CM, Jerlich A, Panasenko OM, Arnhold J, Pitt AR, Stelmaszyńska T, Schaur RJ. The reactions of hypochlorous acid, the reactive oxygen species produced by myeloperoxidase, with lipids. Acta Biochim Pol. 2000;47(4):889-99.
- Powers SK, Ji LL, Kavazis AN, Jackson MJ. REACTIVE OXYGEN SPECIES: IMPACT ON SKELETAL MUSCLE. Comprehensive Physiology. 2011;1(2):941-969. doi:10.1002/cphy.c100054.
- Pham-Huy LA, He H, Pham-Huy C. Free Radicals, Antioxidants in Disease and Health. International Journal of Biomedical Science: IJBS. 2008;4(2):89-96. |
History of Carrots - A Brief Summary & Timeline
Brief Carrot History and Timeline
The cultivated carrot is one of the most important root vegetables grown in temperate regions of the world. It was derived from the wild carrot, which has whitish/ivory coloured roots. Early writings in classical Greek and Roman times refer to edible white roots, but these may have also been parsnips, or both. There are white rooted carrots in existence today, often used as animal feed or a novelty crop. The earliest vegetable definitely known to be a carrot dates from the 10th century in Persia and Asia Minor and would have been quite unlike the orange rooted carrot of today. It is considered that Carrots were originally purple or white with a thin root, then a mutant occurred which removed the purple pigmentation resulting in a new race of yellow carrots, from which orange carrots were developed.
The centre of diversity for the carrot is in Central Asia, and the first cultivation of carrot for its storage root is reported to be in the Afghanistan region, approximately 1,100 years ago (Mackevic 1929). Long before carrot was domesticated, wild carrot had become widespread, as seeds were found in Europe dating back nearly 5,000 years ago. Today wild carrot is found around the world in temperate regions, particularly in wild areas, road sides and agricultural land.
Wild carrot appears in many temperate regions of the world, far beyond its Mediterranean and Asian centres of origin where this plant displays great diversity. Almost certainly those ancient cultures in these regions used wild and early forms of the domesticated carrot as a herb and a medicine before they were used as a root vegetable in the conventional sense of that term today. It is also quite likely that the seeds were used medicinally in the Mediterranean region since antiquity (Banga 1958).
There is good genetic evidence that wild carrot is the direct progenitor of the cultivated carrot (Simon 2000). Selection for a swollen rooted type suitable for domestic consumption undoubtedly took many centuries.
Carrot domestication transformed the relatively small, thin, white, heavily divided (forked or sprangled - spread in different directions) strong flavoured taproot of a plant with annual biennial flowering habit into a large, orange, smooth, good flavoured storage root of a uniformly biennial or “winter” annual crop we know today. Modern carrot breeders have further refined the carrot, improving flavour, sweetness, reducing bitterness and improving texture and colour. There have also been significant improvements in disease and pest reduction resulting in ever increasing yields. Flavour, nutritional and processing qualities are also uppermost in the minds of modern breeders.
The genus Daucus has more than 80 species.
These are centered North and South of the Mediterranean Sea, spreading to North Africa, SW. Asia and Ethiopia.
There are two main types of cultivated carrots:
1) Eastern/Asiatic carrots: These are often called anthocyanin carrots because or their purple roots, although some have yellow roots. They have pubescent leaves giving them a gray-green color, and bolt easily. The greatest diversity of these carrots is found in Afghanistan, Russia, Iran and India. These are possible centers of domestication, which took place around the 10th century.
Anthocyanin carrots are still under cultivation in Asia, but are being rapidly replaced by orange rooted Western carrots.
2) Western or Carotene carrots: These have orange, red or white roots. Most likely these carrots derived from the first group by selection among hybrid progenies of yellow Eastern carrots, white carrots and wild subspecies grown in the Mediterranean. The first two originated by mutation. These carrots may have originated in Turkey.
Carotene carrots are relatively recent, from the 16/17th century. Orange carrots were first cultivated in the Netherlands. Present cultivars seem to originate from long orange varieties developed there. Adaptation to northern latitudes has been accompanied by change in photoperiod response.
Both the wild and the cultivated carrots belong to the species Daucus carota. Wild carrot is distinguished by the name Daucus carota, Carota, whereas domesticated carrot belongs to Daucus carota, sativus.
The carrot has a somewhat obscure history, surrounded by doubt and enigma, and it is difficult to pin down when domestication took place. The wide distribution of Wild Carrot, the absence of carrot root remains in archaeological excavations and the lack of documentary evidence do not enable us to determine precisely where and when carrot domestication was initiated.
When carrot is grown in favourable conditions the roots of successive generations enlarge quickly. So the evolution of cultivars with enlarged roots can easily be explained, but what has puzzled historians is why it took so long for the modern cultivated, edible carrot to appear. The clue is that, although evidence of wild carrot seeds has been found in prehistoric cave dwellings and in Greek and Roman records, the plant was only used in medicinal applications and not for consumption of the root as a food.
Unravelling the progress of the peregrinating carrot through the ages is complex and inconclusive, but nevertheless a fascinating journey through time and the history of mankind.
The Wild Carrot is the progenitor (wild ancestor) of the domestic carrot (its direct descendant), and both still co-exist in the modern world. Wild Carrot is indigenous to Europe and parts of Asia and, from archaeological evidence, its seeds have been found dating from Mesolithic times, approximately 10,000 years ago. One cannot imagine that the root would have been used at that time, but the seeds are known to be medicinal and it is likely the seeds were merely gathered rather than actually cultivated.
Wild carrot has a small, tough, pale-fleshed, bitter white root; the modern domestic carrot has a swollen, juicy, sweet root, usually orange. Carrots were originally recorded as being cultivated in present-day Afghanistan about 1,000 years ago, probably as a purple or yellow rooted type. Carrot cultivation spread to Spain in the 1100s via the Middle East and North Africa. Purple, white and yellow carrots were brought into southern Europe in the 14th century and were widely grown in Europe into the 16th century. Purple and white carrots still grow wild in Afghanistan today, where they are used by some tribesmen to produce a strong alcoholic beverage. Over the ensuing centuries, orange carrots came to dominate and carrots of other colours were only preserved by growers in remote regions of the world.
Nature then took a hand, producing mutants and natural hybrids through crossing between cultivated and wild varieties. It is considered that purple carrots were then taken westwards, where it is now known, through modern genetic research, that yellow varieties were developed to produce orange. Some motivated Dutch growers then took these "new" orange carrots under their horticultural wings and developed them to be sweeter, more consistent and more practical. Finally we have the French to thank for popular modern varieties such as Nantes and Chantenay, with credit to the 19th century horticulturist Louis de Vilmorin, who laid the foundations for modern plant breeding. It's a long story.
The time frame and geographic region(s) of the first cultivation of carrots are unclear. Vavilov (1992, pp. 337–340) identified Asia Minor (eastern Turkey) and the inner Asiatic regions as the centers of origin of cultivated carrot and noted Central Asia (Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, Uzbekistan) as being "the basic center of Asiatic kinds of cultivated carrots" where "wild carrots … practically invited themselves to be cultivated".
Based on the presence of carrot seed at prehistoric human habitations 4000 to 5000 years ago (Newiler, 1931), it is speculated that wild carrot seed was used medicinally or as a spice (Andrews, 1949; Brothwell and Brothwell, 1969).
Carrot was cultivated and used as a storage root similar to modern carrots in Afghanistan, Iran, Iraq, and perhaps Anatolia beginning in the 10th century (Mackevic, 1932; Zagorodskikh, 1939). On the basis of historical documents, the first domesticated carrot roots were purple and yellow and recorded in Central Asia, Asia Minor, then in Western Europe and finally in England between the 11th and 15th centuries (Banga, 1963). Interestingly, orange carrots were not well documented until the 15th and 16th centuries in Europe (Banga, 1957a, b; Stolarczyk and Janick, 2011), indicating that orange carotenoid accumulation may have resulted from a secondary domestication event.
The cultivated carrot is believed to originate from Afghanistan before the 900s, as this area is described as the primary centre of greatest carrot diversity (Mackevic 1929), Turkey being proposed as a secondary centre of origin (Banga 1963). The first cultivated carrots exhibited purple or yellow roots. Carrot cultivation spread to Spain in the 1100s via the Middle East and North Africa. In Europe, genetic improvement led to a wide variety of cultivars. White and orange-coloured carrots were first described in Western Europe in the early 1600s (Banga 1963). Concomitantly, the Asiatic carrot was developed from the Afghan type and a red type appeared in China and India around the 1700s (Laufer 1919; Shinohara 1984). According to this history, it makes sense to envisage that colour should be considered as a structural factor in carrot germplasm.
Root types of these early carrots were categorized as yellow or purple and a flavour difference coincided with the colour. In Persia and Arabia, yellow carrots were generally regarded as more acrid in flavour and less succulent than purple carrots (Clement-Mullet 1864).
In the US Department of Agriculture circular dated March 1950 are listed 389 names that have been applied to orange-fleshed carrot varieties or strains. This gave a thorough classification of all varieties of orange-rooted carrots found in the US at the time. On the basis of their general or outstanding characteristics these varieties or strains were classified in 9 major groups, as follows: I, French Forcing; II, Scarlet Horn; III, Oxheart; IV, Chantenay; V, Danvers; VI, Imperator; VII, James' Intermediate; VIII, Long Orange; and IX, Nantes (Synonymy of Orange-Fleshed Varieties of Carrots, M. F. Babb, 1950).
Morphological characteristics lead to a division of the cultivated carrot (Daucus carota subsp. sativus) into two botanical varieties: var. atrorubens and var. sativus (Small 1978).
Var. atrorubens refers to carrots originating from the East, exhibiting yellow or purple storage roots and poorly indented, grey-green, pubescent foliage. Var. sativus refers to carrots originating from the West and exhibiting orange, yellow or sometimes white roots, and highly indented, non-pubescent, yellow-green foliage (Small 1978). Many intermediate variants exist between these two types.
Fossil pollen from the Eocene period (55 to 34 million years ago) has been identified as belonging to the Apiaceae (the carrot family).
Almost five thousand years ago, carrots were first cultivated in the Iranian Plateau and then in the Persian Empire. Western and Arabic literature, along with studies by the US Department of Agriculture (USDA), suggests that carrots originated in Afghanistan, Pakistan, and Iran. It should be noted, however, that neither Afghanistan nor Pakistan existed in those olden days, and the Iranian Plateau (a term which covers Afghanistan, Pakistan, and Iran) must be considered the land of origin of carrots.
Very early evidence of the consumption of carrots (seeds) has also been found in prehistoric Swiss lake dwellings (Brothwell and Brothwell, 1969). It is said that the cultivated and edible carrot dates back about 5,000 years, when the purple root was found to be growing in the area now known as Afghanistan. Temple drawings from Egypt in 2000 BC show a purple plant, which some Egyptologists believe to be a purple carrot. Egyptian papyruses containing information about treatments with seeds were found in pharaoh crypts, but there is no direct carrot reference. The Carrot Museum has visited several tomb paintings in the Valleys of Luxor and some images are compelling. It is known that ancient Egyptians did use other members of the Apiaceae (carrot) family, including anise, celery and coriander. None of these plants would have been used as root crops, but were rather leaf, petiole or seed crops. Several books on the subject make conjecture about this, but there is no proper documentary evidence that the Egyptians grew or ate carrots.
Many colourful varieties were later found in Asia and there is also evidence of their use in Greece during the Hellenistic period. However, it is not known whether the Egyptians or Greeks cultivated an edible carrot or whether they only grew wild carrots for their seeds; mostly they were used medicinally. The carrot likewise found a place as a medicinal plant in the gardens of ancient Rome, where it was used as an aphrodisiac and in some cases as part of a concoction to prevent poisoning. Mithridates VI, King of Pontus (120–63 BC), had a recipe including Cretan carrot seeds, which actually worked!
Carrots were said to be recognized as one of the plants in the garden of the Babylonian king Merodach-Baladan in the eighth century B.C.; once again, there is no documentary evidence for this, although many plants shown on the clay tablets held in the British Museum remain to be identified.
The Carrot was well known to the ancients. Pedanius Dioscorides (c. 40–90 AD) catalogued over 600 medicinal plant species during his first-century travels as a Roman army doctor, and accurately describes the modern carrot. The Greek herbal of Pedanios Dioskurides, latinized as Pedanius Dioscorides, is entitled Peri Ylis Iatrikis (PYI), latinized as De Materia Medica (On Medical Matters). It was written about the year 65. It was destined to be one of the most famous books on pharmacology and medicine, but is also rich in horticulture. This non-illustrated work contained descriptions of about 600 plants, 35 animal products, and 90 minerals, emphasizing their medicinal uses. The Greek Herbal of Dioscorides, illustrated by a Byzantine in A.D. 512 (the Codex Vindobonensis Medicus Graecus), refers to the orange carrot under the name Staphylinos Keras, the cultivated carrot.
Carrot was mentioned by Greek and Latin writers by various names, but it was Galen (circa second century A.D.) who called it Daucus to distinguish the Carrot from the Parsnip. Carrot and parsnip have often been confused in historical references and in many cases were interchangeable, as those early carrots which were "dirty white" were very similar (in looks at least) to parsnip. They are of course from the same family. In classical and mediaeval writings both vegetables seem to have been sometimes called pastinaca yet each vegetable appears to be well under cultivation in Roman times. Since in many cases only the written word exists, if the Medieval writer called the plants "pastinaca", it is difficult to know if they were referring to carrots or parsnips.
Throughout the Classical Period and the Middle Ages writers constantly confused carrots and parsnips. This may seem odd given that the average carrot is about six inches long and bright orange while a parsnip is off white and can grow 3 feet, but this distinction was much less obvious before early modern plant breeders got to work. The orange carrot is a product of the 16th and 17th centuries probably in the Low Countries. Its original colour varied between dirty white and pinkish purple. Both vegetables have also got much fatter and fleshier in recent centuries, and parsnips may have been bred to be longer as well. In other words early medieval carrots and parsnips were both thin and woody and mostly of a vaguely whitish colour. This being the case, almost everyone up to the early modern period can perhaps be forgiven for failing to distinguish between the two, however frustrating this may be for the food or agriculture historian. See separate page showing illustrations from ancient manuscripts.
The name Carota for the garden Carrot is found first in the Roman writings of Athenaeus in 200 A.D., and in a book on cookery by Apicius Caelius in 230 A.D.
After the fall of Rome, a period often referred to as the Dark Ages, carrots stopped being widely seen (or at least recorded) in Europe until the Arabs reintroduced them to Europe in the Middle Ages around 1100. Scribes continued to reproduce and embellish previous manuscripts, rather than observing and representing the existing contemporary native plants.
The third book of Dioscorides the Greek – Roots - sets out an account of roots, juices, herbs, and seeds — suitable both for common use and for medications. The Greek Herbal of Dioscorides: Illustrated by a Byzantine in A.D. 512. gives an illustration of an orange carrot, probably the first depiction and certainly well before other illustrations in the 16th century. (modern translation here)
Modern research has shown that there are two distinct groups of cultivated carrots from which the modern orange carrot derives, these are distinguished by their root colours and features of the leaves and flowers.
Eastern/Asiatic carrot (anthocyanin) - identified by its purple and/or yellow branched root, grey green leaves which are poorly dissected and an early flowering habit; they often have a habit to bolt easily. The greatest diversity of these carrots is found in Afghanistan, Russia, Iran and India. These are possible centers of domestication, which took place around the 10th century.
Eastern carrot was probably spread by Moorish invaders via Northern Africa to Spain in the 12th century. It is considered that the purple carrot was brought westward as far as the Arab countries from Afghanistan (where the purple carrots of antiquity are still grown).
Western or carotene carrot - identified by its yellow, orange, white or red unbranched root and yellowish green leaves more clearly dissected and slightly hairy. It is likely these carrots derived from the Eastern group by selection among hybrid progenies of yellow Eastern carrots, white carrots and wild subspecies grown in the Mediterranean. The first two originated by mutation.
It is thought that Western carrots may have originated later in Asia Minor, around Turkey and could have formed from a mutant which removed the anthocyanin (purple colour).
Carotene carrots are relatively recent, from the 16th and 17th centuries. Orange carrots were probably first cultivated in the Netherlands. Our present cultivars seem to originate from long orange varieties developed there. Adaptation to northern latitudes has been accompanied by a change in photoperiod response (the physiological reaction of organisms to the length of day or night).
The origin of the cultivated carrot is generally acknowledged to be the purple carrot of the Afghanistan region, mainly because it was known to exist there well before reliable literature references or paintings gave evidence of Western carotene carrots. It is thought the carotene carrot was domesticated in the regions around Turkey. The precise date is not known but is thought to be before the 8th century.
The purple carrot spread into the Mediterranean in the 10th century where it is thought a yellow mutant appeared. The purple and yellow carrots both gradually spread into Europe in subsequent centuries. It is considered that the white carrot is also a mutant of yellow varieties.
Orange carrots derived from yellow forms, and then from human selection and development, probably in the Netherlands. It is now proved through modern genetic study that humans made selections from a gene pool involving yellow rooted eastern carrots.
Some scholars think that orange carrots did not appear until the 16th century, although there is a Byzantine manuscript of 512 AD and an 11th century illuminated script, both of which depict an orange rooted carrot, suggesting it was around long before. (See here for a more detailed history of the orange carrot.)
After the fall of Rome, gardens and vegetables are rarely mentioned again until 795 ad, when King Charlemagne included carrots in the list of plants recommended for cultivation in the Frankish empire covering western and central Europe.
It is known that purple or red and yellow carrots were cultivated in Iran and Arabia in the 10th century and in Syria in the 11th.
Throughout the Medieval writings, carrots are confused with parsnips. When Linnaeus created scientific names, he called carrots Daucus carota and parsnips Pastinaca sativa, so the two are clearly different. Before Linnaeus, however, Pastinaca sativa was used for both plants.
Fuchs in 1542 described red and yellow garden carrots and wild carrots, but named them all Pastinaca (Meyer, Trueblood and Heller 1999).
Gerard (1633) uses the English name carrot, but calls it Pastinaca in Latin: Pastinaca sativa var. tenuifolia, the yellow carrot, and Pastinaca sativa atro-rubens, the red carrot. Gerard distinguishes parsnips from carrots, calling the parsnip Pastinaca latifolia sativa and P. latifolia sylvestris. He notes the name similarity and is dissatisfied with it. He gives daucus as a name for carrot in Galen, but notes that many Roman writers called it pastinaca or other names.
The plants were not confused on purpose, but since we have in many cases, only the written word, if the Medieval writer referred to "pastinaca", it is impossible to know if they were carrots or parsnips. There was rarely any mention of colour or taste which would have helped the modern researcher to distinguish the two plant relatives.
Many 16th century herbalists made reference to the cultivation and use of carrot roots and seeds, including its efficacy against the bites of venomous beasts and a whole manner of stomach ailments. (Carrots in Herbals/Herbalists here - Ancient Manuscripts page here)
The Spanish introduced the carrot on the island of Margarita, off the coast of Venezuela, in 1565.
North America, particularly the parts that would become the Thirteen Colonies, got its carrots somewhat later, with the arrival of the first English settlers in Virginia in 1607. When the English moved into Australia in 1788, carrots were with them there, as well. (History of carrots in the USA here.)
The modern orange carrot was developed and stabilised by Dutch growers in the 16th and 17th centuries, as evidenced by variety names and contemporary art works. (Art pages start here.) A tale, probably apocryphal, has it that the orange carrot was bred in the Netherlands in the seventeenth century to honour William of Orange. Though the orange carrot does appear to date from the Netherlands in the sixteenth century, it is unlikely that honouring William of Orange had anything to do with it! Some astute historian managed to install the myth that the growers' work on an unexpected mutation was undertaken especially as a tribute to William I of Orange for leading the Dutch revolt against the Spanish to gain independence from Spain. There is no documentary evidence for this story!
The purple carrots being consumed at the same time not only stained cookware and appeared quite unsightly, but also did not taste as good as orange carrots, and so the orange rooted varieties came to dominate the culinary world.
Whatever the origins, the Long Orange Dutch cultivar is commonly held to be the progenitor of the orange Horn carrot varieties (Early Scarlet Horn, Early Half Long, Late Half Long). All modern western carotene varieties ultimately descend from these varieties. The Horn carrot derives from the Netherlands town of Hoorn, in the neighbourhood of which it was probably developed. Hoornsche Wortelen (carrots of Hoorn) were common on the Amsterdam market in 1610. The earliest English seedsmen listed Early Horn and Long Orange.
In 1753 Carl Linnaeus, the Swedish botanist, published the "Species Plantarum" and established the foundations of the modern scheme for naming living organisms, called binomial nomenclature, which became universally accepted in the scientific world, including the name Daucus carota.
Vilmorin described and illustrated a range of these carrot varieties in "The Vegetable Garden" in 1856.
Research and development continues to take place to produce disease-resistant varieties, together with research into other uses for the root, such as biofuel and its use in construction as an alternative to fibreglass and carbon fibre.
The current yellow/orange varieties (containing carotene), developed through gradual selection in Europe, now form the basis of commercial cultivars around the world, mainly because of their superior taste, versatility, nutritional value and cultural acceptance.
There is a lot more detail of the history of carrots through the ages, and the next pages in the Carrot Museum go on to give the full history from pre-historic seeds through to how the Greeks and Romans used carrots, first in medicine and then food.
There is a more detailed analysis of the available evidence surrounding its origins, cultivation and domestication, and journey across Europe, also exploring the emergence of the ubiquitous orange carrot, here. The main colours of carrots now have their own pages - purple - black - white - yellow
Follow its steps through the dark ages and then enlightenment with 17th century herbalists who recommended carrots and their seeds for a wide variety of ailments. Finally after many years as a low class vegetable, mainly used for animal fodder, it came of age during the food scarcity of the two World Wars when people were forced to be more inventive with fewer resources.(WW2 page here)
It is a long and fascinating story.
There is a more comprehensive study and analysis of the various theories of the domestication of carrots and the arrival of the orange carrot on the page dedicated to the subject - the Colour Orange - here. |
Peripheral arterial disease (PAD)
Peripheral arterial disease, or PAD, refers to a reduction in blood flow to the legs and feet. PAD is caused by hardening of the arteries, or atherosclerosis, in which fatty deposits called plaques build up along the walls of the arteries. This buildup can reduce blood flow or block it completely. When blood flow to the legs and feet is reduced, two conditions can result:
Claudication, the most common symptom of PAD, results in pain in the muscles of the buttocks, thighs and/or calves when walking.
People with critical limb ischemia, on the other hand, feel pain in their feet even when they are at rest, or develop non-healing sores on their feet. Those with critical limb ischemia are at risk for amputation. |
This program teaches students the meanings of Latin and Greek prefixes, roots, and suffixes commonly used in English. Students who learn to use these word elements will dramatically improve their spelling and their ability to decode unfamiliar words, adding hundreds of words to their usable vocabulary. This richer vocabulary, in turn, gives greater depth to students' thinking and writing.
For each lesson students:
- learn the meanings of prefixes, roots, and suffixes
- divide or assemble known and unknown compound words based on their elements
- match word parts or whole words to their definitions by analyzing their meanings
- apply their new vocabulary in sentences
Includes a Pretest/Post-test, answers, and a dictionary of words derived from Latin/Greek roots. Grades 7-12.
The Industrial Revolution Begins
- What differences between the Western and non-Western worlds may have led to the industrial revolution emerging first in the former?
- In particular, what combination of advantages allowed Britain to experience industrialization first?
- Why did Britain have the capital necessary to invest in industry?
- How did the Agricultural Revolution help to bring about the Industrial Revolution in Britain?
New Markets, Machines, and Power
- Explain the interrelationship between growing markets, inventors, and entrepreneurs.
- How did inventions change the way cotton and its products were manufactured? Which were the most important inventions?
- In what ways did the production of iron change?
- How did steam engines transform the way factories functioned?
- What effect did the railroads have on the Industrial Revolution as well as on the landscape?
Industrialization Spreads to the Continent
- To which parts of the European continent did industrialization spread from Britain after 1830?
- What steps did these governments on the continent take to promote industrialization in their own countries?
- Why did countries in southern, central, and eastern Europe remain primarily agricultural and untouched by industrialization?
Balancing Benefits and Burdens of Industrialization
- Which group gained most from Britain's newfound prosperity? How did its social situation change?
- In industrializing Britain, what was life like for ordinary people? In the factories?
- What sorts of insecurities, risks, and lifestyle changes did factory laborers face?
- What types of organizations did workers form throughout this period of industrialization?
Life in the Growing Cities
- What factors were responsible for the growth of towns and cities in Europe between 1780 and 1850?
- In what ways were the occupations attracting people to towns and cities for work gender specific?
- What social problems were emerging in these expanding cities and why was this so?
Public Health and Medicine in the Industrial Age
- What health risks faced the new working class?
- How did medical commentators explain the outbreaks of disease plaguing cities?
- How effective were the treatments prescribed by physicians in the early 1800s?
- How did improvements in European diets help protect public health?
- In what ways did European doctors apply scientific methods to medicine and what improvements did this bring?
Family Ideals and Realities
- As the roles within the family changed, what values did the middle class associate with the proper family life?
- Describe the separate spheres of men and women and the responsibilities associated with them.
- What occupations were available to women outside of the home? Why were they limited in this manner?
- What type of "legal relationship" existed between married men and their wives and children?
- How did the lives and options of working and middle class women differ?
- How did industrialization change the working class family? |
Let's start with the basics. Subwoofers play low notes. Subwoofers have what's called voice coils. Voice coils create a magnetic field when energised, which moves the cone. You can have a sub with a few different configurations, typically SVC and DVC. Voice coils have an impedance, such as 4 ohm or 2 ohm. Some specialty speakers have 6 ohm voice coils.
Ohms are a measurement of resistance. For amplifiers, the lower the impedance, the bigger the load you put on the amp, and the more the distortion increases. Keep that in mind.
SVC stands for Single Voice Coil.
DVC stands for Dual Voice Coil. Dual voice coils MUST have both voice coils wired up, either in series or parallel.
Both positive leads are wired together, both negative leads are wired together, and leads go out to your amp or another speaker like so:
You'll notice that the speaker is a dual 4 ohm voice coil sub, wired in parallel. When wiring two identical voice coils in parallel, take the impedance of one voice coil, in this case 4 ohms, and divide it by 2. That'll give you your impedance. For this example, your impedance will be 2 ohms.
Positive from Coil A is jumpered to the Negative of Coil B. The negative from Coil A and Positive of Coil B go out to your amp or another speaker like so:
You'll notice that the speaker is a dual 4 ohm voice coil sub, wired in series. When wiring two identical voice coils in series, take the impedance of one voice coil, in this case 4 ohms, and double it. That'll give you your impedance. For this example, your impedance will be 8 ohms.
The most common dual voice configurations are dual 4 ohm and dual 2 ohm. A single dual 4 ohm sub can be wired to 8 ohm or 2 ohms. A single dual 2 ohm sub can be wired to 4 ohm or 1 ohm. Two dual 4 ohm subs can be wired to 4 ohm or 1 ohm, and two dual 2 ohm subs can be wired to 2 ohm or 1/2 ohm.
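To make the arithmetic above concrete, here is a minimal Python sketch (not part of the original post; the function names and example values are my own) that computes the net impedance of identical voice coils wired in series or parallel:

def series_impedance(coil_ohms, num_coils):
    # Identical coils in series: impedances simply add.
    return coil_ohms * num_coils

def parallel_impedance(coil_ohms, num_coils):
    # Identical coils in parallel: impedance divides by the number of coils.
    return coil_ohms / num_coils

# One dual 4 ohm sub: both coils in parallel gives 2 ohms, in series gives 8 ohms.
print(parallel_impedance(4, 2))  # 2.0
print(series_impedance(4, 2))    # 8
# Two dual 2 ohm subs with all four coils in parallel gives 0.5 ohm.
print(parallel_impedance(2, 4))  # 0.5

These shortcuts only hold when all the coils have the same impedance, which is the normal case for the DVC subs described here.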
Amplifiers come in many shapes, sizes, colors, makes and brands. We'll cover the most common ones. There are a few types. Class A/B amps are usually good for 2 ohm stereo or 1 ohm mono. Class D, X, Digital, etc. amps are usually monoblocks and good for 2 ohms or less.
MAKE SURE TO CHECK WITH YOUR AMPLIFIER MANUFACTURER TO SEE WHAT LOADS YOU CAN OR CANNOT PUT ONTO YOUR AMP AND PROPER WIRING
Monoblocks are 1-channel amps. Sometimes they have 2 sets of RCAs in, sometimes only one. Monoblocks are used primarily for subwoofers. These are your Class D, X, Digital, etc. amps. Most are good to 2 ohms. Some are good to 1 ohm, and very few are good to 1/4 ohm. Some monoblocks are called High Current; they are built to run at higher supply voltages, usually 14.4 volts.
Two Channel Amps:
These are stereo amps, with a left and a right channel. Most stereo amps are good for 2 ohms stereo, and most two channel amps can be bridged to play a subwoofer, usually good to 4 ohms bridged.
Four Channel Amps:
Four channel amps are used to run 4 sets of speakers, usually front/rear. They have independent gains on the front and rear. Some 4 channel amps can be bridged to 3 channels or 2 channels. Some can do 2 ohm stereo, and those that can be bridged to 3 or 2 channels can usually run at 4 ohms bridged.
Tuning your amps
The simplest and easiest way to get your amps tuned is to follow these simple instructions. Grab your digital multimeter and a calculator.
output = square root (watts * ohms)
First, take your amp's wattage at its rated load. For example, let's say your amp does 300 watts RMS at 4 ohms. That would be 1200. Take the square root of that and you'll get 34.64101615137755, so let's say 34.64. Take your multimeter and set it to AC volts. Disconnect your speakers from the amplifier. Grab this file here - http://www.realmofexcursion.com/audio/testtones/20Hz_to_120Hz.mp3 - and burn it to a CD. Turn your EQs off, turn your volume to 3/4 of the way up, hook up your multimeter to the + and - of the speaker outputs and play the sine wave. Your peak voltage should hit at the beginning of the CD - adjust your gain till it reads the voltage you figured out earlier, then leave it. Hook your speakers back up, and your gain is set. Don't turn your volume up above this point or else you'll clip.
Since your speaker outputs carry AC voltage, the signal is an AC wave with a peak and a valley, usually symmetrical. When your signal is clipped, the peaks and valleys get flattened off into a squared-off wave, which is deadly for speakers.
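As a quick sanity check on the gain-setting math, here is a small Python sketch of the same calculation (the helper name is my own; it uses the 300 watt / 4 ohm example from the text):

import math

def target_voltage(rms_watts, load_ohms):
    # AC voltage to aim for at the speaker terminals: V = sqrt(P * R).
    return math.sqrt(rms_watts * load_ohms)

print(round(target_voltage(300, 4), 2))  # 34.64 volts AC

Swap in your own amp's rated RMS power and the load you are wiring it to, and the result is the voltage to set your gain against.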
Storage devices/Energy makers
Everyone gets the wrong idea about capacitors. THEY ARE NOT A BATTERY. They do store energy, but for car audio they're used to fill in voltage dips and level out peaks in the supply when you have a long bass note. They are there to smooth out the power supply only.
Batteries: without them, your car won't start. Yellow Top Optimas and deep-cycle batteries are wonderful for car audio. They have higher storage capacity than regular batteries, and they can be discharged and recharged safely. They are also sealed batteries, and can safely be put inside a trunk without giving off fumes.
Our stock alternator puts out around 75 amps at idle and 120 at around 2000 rpm. They do make higher output alternators, but MAKE SURE YOU CHECK what each one puts out at idle. Some will put out only stock amperage at idle and reach their maximum at 3000 or 4000 rpm. Make sure the amperage is higher at idle or else you could cook your alternator, battery, or both.
The BIG 3
This post here - http://www.j-body.org/forums/read.php?f=4&i=117446&t=113629#117446 - written by Wysiwig, is a wonderful resource on the Big 3. You'll notice less dimming when the bass hits, quicker starting, and less strain on electrical devices.
This sums it up. If you should have any technical questions, myself, Wysiwig, Soundsgood, Lash, n8ball2013, cavi sedan ls, cavi in kc, unholysavage, and sweetnloud can help you out. If I forgot anything or anyone, I apologise. |
Jennifer Orr on Data Retrieval Charts
Young children take in ideas at an astounding rate. Because so much is new to them they have to process and make sense of much more than adults do. It is critical, therefore, that we help students with this process. Data retrieval charts offer a way to visually show the organization of and connections between information. This makes it easier for students to make sense of and more deeply understand what they are learning.
Data retrieval charts are exactly what the name suggests: a chart for data. I have used them frequently with my first graders because they are a wonderful visual tool for understanding concepts that are new or challenging.
At the beginning of first grade we explore the idea of past and present. Six-year-olds struggle with comprehending anything that happened before they were born. So we look at past and present through four areas: school, family life, community, and transportation; all ideas with which young children have some experience. The chart has those topics across the top with two rows below them, one for past and one for present. As we read books, look at images, watch videos, interview people, or gather data in any other way, we add it to our chart. We add words, phrases, sentences, and pictures.
One positive characteristic of a data retrieval chart is its visual nature. It hangs in our room throughout our study of past and present and often beyond it. That way students see it often, can add to it whenever we want, and can use it to help us understand the rest of our social studies curriculum. Therein lies the retrieval piece of a data retrieval chart.
We also study several famous Americans during the year: Ben Franklin, George Washington, Abraham Lincoln, and George Washington Carver. The curricular expectation is that students will understand the contributions made to our country by these men. Our data retrieval chart for this study has the names of the men down one side with several categories across the top: date of birth and death (with a timeline along the bottom to support their understanding of when each man lived), contributions, and interesting facts. I include interesting facts because my students inevitably learn things about these men that are not really contributions but that fascinate them. Some years my students have requested other columns added to fit their interests, such as one about family or childhood.
Students often return to our data retrieval charts as we learn new things about American history. They might notice new connections or find themselves asking questions about how certain information relates to what they are learning now. They also often look back at our charts during writing. They enjoy writing about things they are learning and the data retrieval charts help them recall information and synthesize their thinking.
Graphic organizers like data retrieval charts help students collect, connect, and visualize information. Pick up more tips and tricks for using graphic organizers in Ask a Master Teacher, see them in use in Teaching in Action, and learn scaffolding hints for diverse learners in Teaching English Language Learners. |
Advanced biofuels are highly touted as potential replacements for gasoline, diesel, and jet fuels. Equally touted is the synthesis of these fuels through the use of microbes. However, many of the best candidate compounds for advanced biofuels are toxic to microbes, which presents a “production versus survival” conundrum.
Researchers at the DOE’s Joint BioEnergy Institute (JBEI) have provided a solution to this problem by developing a library of microbial efflux pumps that were shown to significantly reduce the toxicity of seven representative biofuels in engineered strains of Escherichia coli.
“Working with all available microbial genome sequence data, we generated a library of largely uncharacterized genes and were able to devise a simple but highly effective strategy to identify efflux pumps that could alleviate biofuel toxicity in E. coli and, as a consequence, help improve biofuel production,” says Aindrila Mukhopadhyay, a chemist with JBEI’s Fuels Synthesis Division.
Research efforts are underway at JBEI and elsewhere to engineer microorganisms, such as E. coli, to produce advanced biofuels in a cost effective manner. These fuels, which encompass short-to-medium carbon-chain alcohols, such as butanol, isopentanol, and geraniol, can replace gasoline on a gallon-for-gallon basis and be used in today’s infrastructures and engines, unlike ethanol. Biofuels made from branched carbon-chain compounds, such as geranyl acetate and farnesyl hexanoate, would also be superior to today’s biodiesel, which is made from esters of linear fatty acids. Cyclic alkenes, such as limonene and pinene, could serve as precursors to jet fuel. Although biosynthetic pathways to the production of these carbon compounds in microbes have been identified, product toxicity to microbes is a common problem in strain engineering for biofuels and other biotechnology applications. |
Mississippi Dead Zone
Recent reports indicate that the large region of low oxygen water often referred to as the 'Dead Zone' has spread across nearly 5,800 square miles of the Gulf of Mexico again in what appears to be an annual event. NASA satellites monitor the health of the oceans and spot the conditions that lead to a dead zone.
Image to right: This image shows the outflow of the Neuse River. It is an example of contrast seen in the Gulf of Mexico when sediment filled water meets the ocean.
The Dead Zone
Image above: These images show how ocean color changes from winter to summer in the Gulf of Mexico. Summertime satellite observations of ocean color from MODIS/Aqua show highly turbid waters which may include large blooms of phytoplankton extending from the mouth of the Mississippi River all the way to the Texas coast. When these blooms die and sink to the bottom, bacterial decomposition strips oxygen from the surrounding water, creating an environment very difficult for marine life to survive in. Reds and oranges represent high concentrations of phytoplankton and river sediment.
Ships and Satellites Match Measurements
The National Oceanic and Atmospheric Administration (NOAA) ships measured low oxygen water in the same location as the highly turbid water in the satellite images. Most studies indicate that fertilizers and runoff from human sources are among the major stresses impacting coastal ecosystems. In this image, reds and oranges represent low oxygen concentrations.
Image to right: This image shows the data from the NOAA ship as described above.
River Outlets Support Life
Summer rains wash nutrients, dissolved organic matter and sediment out of the mouths of rivers into the sea, sparking large phytoplankton blooms. South America presents two excellent examples of river outlets where phytoplankton tends to thrive. Along the northern part of the continent the mouth of the Orinoco River opens into the Caribbean. Along the eastern side of South America, the mighty Amazon ends its thousand-mile journey.
Image to left: Image shows how river outlets help to support marine life.
Creeping Dead Zones
Enhanced phytoplankton blooms can create dead zones. Dead zones are areas of water so devoid of oxygen that sea life cannot live there. If phytoplankton productivity is enhanced by fertilizers or other nutrients, more organic matter is produced at the surface of the ocean. The organic matter sinks to the bottom, where bacteria break it down and release carbon dioxide. Bacteria thrive off excessive organic matter and absorb oxygen, the same oxygen that fish, crabs and other sea creatures rely on for life.
Image to right: This is a still from the animation showing a slow water cycle.
Goddard Space Flight Center |
Disadvantage of democracy
Advantages of democracy include protecting the interests of citizens: the citizens of a democratic government have the right to vote on political, social, and economic issues. In a democratic nation, it is the citizens who hold the right to elect their representatives and their governing authorities. Democracy is a type of government that is run by the people; this means that all citizens of the country have a say in the way the government is run. Throughout most of history, democracy existed only in small, manageable communities that were ethnically and culturally homogeneous. Direct democracy occurs where the will of the people is expressed directly rather than through elected representatives. Democracy also brings change from time to time: if people are not satisfied with the working of any party or its officials, they can always elect another party. 'Democracy,' asserts Lincoln, is 'the government of the people, by the people, for the people.'
All democracies (and every other structure of government) are bound to have a few structural flaws. Democracy (Greek: δημοκρατία, dēmokratía, literally 'rule of the people') is, in modern usage, a system of government in which the citizens exercise power. Democracy is a form of rule in which the people of the nation enjoy the utmost freedom of living, but there are also a few disadvantages of democracy to look into. What are the disadvantages of democracy? Is universal voting suitable? Are checks and balances always functional?
The factors representing the characteristics of liberal democracy include its advantages, disadvantages and structure. Indirect democracies are the most common type of democratic system in the world; this system of government can be compared with direct democracy. Workplace democracy means allowing employees to have a strong voice in the direction and decisions within an organization; it has become increasingly popular in recent years. Parliamentary systems, too, have their own advantages and disadvantages.
In a democracy the community has freedom of speech, and democracy can offer changes in government without hostility. Democracy is a way of organizing governments and organizations, and its advantages and disadvantages, along with the ways of balancing them, are worth exploring. In the modern world, democracy is the most widely accepted form of government.
It depends on where you stand. Are you among the majority? If you are, democracy is excellent: the advantage is that you get what you want. If you are among the minority, it is another matter. There are many forms of democracy, and one of them is direct democracy; in this form of government, the people can directly determine the laws and policies of the state.
We all have different views of how the government should be run; we do not have the same views, and therefore we will vote for different people. Democracy is best defined as 'government of the people, by the people, and for the people' (Abraham Lincoln). Democracy is a form of government by the people and for the people: elected representatives are chosen by the citizens of an area in order to represent them in government. Reform in India illustrates democracy's drawbacks: sustained growth in India would be all the more impressive if the government could pass its reforms, but the road is difficult. The advantages and disadvantages of the democratic system of government can be set out in the form of pros and cons lists. What were the advantages and disadvantages of the new politics of mass democracy, with such things as the spoils system, party machines, and hoopla-driven campaigns?
List of disadvantages of democracy: 1. It might allow misuse of public funds and time. Democratic governments can lead to wasted time and resources, since formulating laws and reaching decisions can take a long time.
Myths & Facts Online
The Mandatory Period
The British helped the Jews displace the native Arab population of Palestine.
The British allowed Jews to flood Palestine while Arab immigration was tightly controlled.
The British changed their policy after World War II to allow the survivors of the Holocaust to settle in Palestine.
As the Jewish population in Palestine grew, the plight of the Palestinian Arabs worsened.
Jews stole Arab land.
The British helped the Palestinians to live peacefully with the Jews.
The Mufti was not anti-Semitic.
The Irgun bombed the King David Hotel as part of a terror campaign against civilians.
“The British helped the Jews displace the native Arab population of Palestine.”
Herbert Samuel, a British Jew who served as the first High Commissioner of Palestine, placed restrictions on Jewish immigration in the 'interests of the present population' and the 'absorptive capacity of the country.'1 The influx of Jewish settlers was said to be forcing the Arab fellahin (native peasants) from their land. This was at a time when less than a million people lived in an area that now supports more than nine million.
The British actually limited the absorptive capacity of Palestine by partitioning the country.
In 1921, Colonial Secretary Winston Churchill severed nearly four-fifths of Palestine — some 35,000 square miles — to create a brand new Arab entity, Transjordan. As a consolation prize for the Hejaz and Arabia (which are both now Saudi Arabia) going to the Saud family, Churchill rewarded Sherif Hussein’s son Abdullah for his contribution to the war against Turkey by installing him as Transjordan’s emir.
The British went further and placed restrictions on Jewish land purchases in what remained of Palestine, contradicting the provision of the Mandate (Article 6) stating that "the Administration of Palestine...shall encourage, in cooperation with the Jewish Agency...close settlement by Jews on the land, including State lands and waste lands not acquired for public purposes." By 1949, the British had allotted 87,500 acres of the 187,500 acres of cultivable land to Arabs and only 4,250 acres to Jews.2
Ultimately, the British admitted the argument about the absorptive capacity of the country was specious. The Peel Commission said: "The heavy immigration in the years 1933-36 would seem to show that the Jews have been able to enlarge the absorptive capacity of the country for Jews."3
“The British allowed Jews to flood Palestine while Arab immigration was tightly controlled.”
The British response to Jewish immigration set a precedent of appeasing the Arabs, which was followed for the duration of the Mandate. The British placed restrictions on Jewish immigration while allowing Arabs to enter the country freely. Apparently, London did not feel that a flood of Arab immigrants would affect the country’s absorptive capacity.
During World War I, the Jewish population in Palestine declined because of the war, famine, disease and expulsion by the Turks. In 1915, approximately 83,000 Jews lived in Palestine among 590,000 Muslim and Christian Arabs. According to the 1922 census, the Jewish population was 84,000, while the Arabs numbered 643,000.4 Thus, the Arab population grew exponentially while that of the Jews stagnated.
In the mid-1920s, Jewish immigration to Palestine increased primarily because of anti-Jewish economic legislation in Poland and Washington's imposition of restrictive quotas.5
The record number of immigrants in 1935 (see table) was a response to the growing persecution of Jews in Nazi Germany. The British administration considered this number too large, however, so the Jewish Agency was informed that less than one-third of the quota it asked for would be approved in 1936.6
The British gave in further to Arab demands by announcing in the 1939 White Paper that an independent Arab state would be created within 10 years, and that Jewish immigration was to be limited to 75,000 for the next five years, after which it was to cease altogether. It also forbade land sales to Jews in 95 percent of the territory of Palestine. The Arabs, nevertheless, rejected the proposal.
Table: Jewish Immigrants to Palestine.7
By contrast, throughout the Mandatory period, Arab immigration was unrestricted. In 1930, the Hope Simpson Commission, sent from London to investigate the 1929 Arab riots, said the British practice of ignoring the uncontrolled illegal Arab immigration from Egypt, Transjordan and Syria had the effect of displacing the prospective Jewish immigrants.8
The British Governor of the Sinai from 1922-36 observed: "This illegal immigration was not only going on from the Sinai, but also from Transjordan and Syria, and it is very difficult to make a case out for the misery of the Arabs if at the same time their compatriots from adjoining states could not be kept from going in to share that misery."9
“The British changed their policy after World War II to allow the survivors of the Holocaust to settle in Palestine.”
The gates of Palestine remained closed for the duration of the war, stranding hundreds of thousands of Jews in Europe, many of whom became victims of Hitler's "Final Solution." After the war, the British refused to allow the survivors of the Nazi nightmare to find sanctuary in Palestine. On June 6, 1946, President Truman urged the British government to relieve the suffering of the Jews confined to displaced persons camps in Europe by immediately accepting 100,000 Jewish immigrants. Britain's Foreign Minister, Ernest Bevin, replied sarcastically that the United States wanted displaced Jews to immigrate to Palestine because they did not want too many of them in New York.11
Some Jews were able to reach Palestine, many by way of dilapidated ships that members of the Jewish resistance organizations used to smuggle them in. Between August 1945 and the establishment of the State of Israel in May 1948, 65 illegal immigrant ships, carrying 69,878 people, arrived from European shores. In August 1946, however, the British began to intern those they caught in camps in Cyprus. Approximately 50,000 people were detained in the camps, 28,000 of whom were still imprisoned when Israel declared independence.12
“As the Jewish population in Palestine grew, the plight of the Palestinian Arabs worsened.”
The Jewish population increased by 470,000 between World War I and World War II, while the non-Jewish population rose by 588,000.13 In fact, the permanent Arab population increased 120 percent between 1922 and 1947.14
This rapid growth was a result of several factors. One was immigration from neighboring states — constituting 37 percent of the total immigration to pre-state Israel — by Arabs who wanted to take advantage of the higher standard of living the Jews had made possible.15 The Arab population also grew because of the improved living conditions created by the Jews as they drained malarial swamps and brought improved sanitation and health care to the region. Thus, for example, the Muslim infant mortality rate fell from 201 per thousand in 1925 to 94 per thousand in 1945 and life expectancy rose from 37 years in 1926 to 49 in 1943.16
The Arab population increased the most in cities where large Jewish populations had created new economic opportunities. From 1922-1947, the non-Jewish population increased 290 percent in Haifa, 131 percent in Jerusalem and 158 percent in Jaffa. The growth in Arab towns was more modest: 42 percent in Nablus, 78 percent in Jenin and 37 percent in Bethlehem.17
“Jews stole Arab land.”
Despite the growth in their population, the Arabs continued to assert they were being displaced. The truth is that from the beginning of World War I, part of Palestine’s land was owned by absentee landlords who lived in Cairo, Damascus and Beirut. About 80 percent of the Palestinian Arabs were debt-ridden peasants, semi-nomads and Bedouins.18
Jews actually went out of their way to avoid purchasing land in areas where Arabs might be displaced. They sought land that was largely uncultivated, swampy, cheap and, most important, without tenants. In 1920, Labor Zionist leader David Ben-Gurion expressed his concern about the Arab fellahin, whom he viewed as “the most important asset of the native population.” Ben-Gurion said “under no circumstances must we touch land belonging to fellahs or worked by them.” He advocated helping liberate them from their oppressors. “Only if a fellah leaves his place of settlement,” Ben-Gurion added, “should we offer to buy his land, at an appropriate price.”19
It was only after the Jews had bought all of the available uncultivated land that they began to purchase cultivated land. Many Arabs were willing to sell because of the migration to coastal towns and because they needed money to invest in the citrus industry.20
When John Hope Simpson arrived in Palestine in May 1930, he observed: “They [Jews] paid high prices for the land, and in addition they paid to certain of the occupants of those lands a considerable amount of money which they were not legally bound to pay.”21
In 1931, Lewis French conducted a survey of landlessness and eventually offered new plots to any Arabs who had been “dispossessed.” British officials received more than 3,000 applications, of which 80 percent were ruled invalid by the Government’s legal adviser because the applicants were not landless Arabs. This left only about 600 landless Arabs, 100 of whom accepted the Government land offer.22
In April 1936, a new outbreak of Arab attacks on Jews was instigated by a Syrian guerrilla named Fawzi alQawukji, the commander of the Arab Liberation Army. By November, when the British finally sent a new commission headed by Lord Peel to investigate, 89 Jews had been killed and more than 300 wounded.23
The Peel Commission’s report found that Arab complaints about Jewish land acquisition were baseless. It pointed out that “much of the land now carrying orange groves was sand dunes or swamp and uncultivated when it was purchased....there was at the time of the earlier sales little evidence that the owners possessed either the resources or training needed to develop the land.”24 Moreover, the Commission found the shortage was “due less to the amount of land acquired by Jews than to the increase in the Arab population.” The report concluded that the presence of Jews in Palestine, along with the work of the British Administration, had resulted in higher wages, an improved standard of living and ample employment opportunities.25
In his memoirs, Transjordan’s King Abdullah wrote:
Even at the height of the Arab revolt in 1938, the British High Commissioner to Palestine believed the Arab landowners were complaining about sales to Jews to drive up prices for lands they wished to sell. Many Arab landowners had been so terrorized by Arab rebels they decided to leave Palestine and sell their property to the Jews.27
The Jews were paying exorbitant prices to wealthy landowners for small tracts of arid land. “In 1944, Jews paid between $1,000 and $1,100 per acre in Palestine, mostly for arid or semiarid land; in the same year, rich black soil in Iowa was selling for about $110 per acre.”28
By 1947, Jewish holdings in Palestine amounted to about 463,000 acres. Approximately 45,000 of these acres were acquired from the Mandatory Government; 30,000 were bought from various churches and 387,500 were purchased from Arabs. Analyses of land purchases from 1880 to 1948 show that 73 percent of Jewish plots were purchased from large landowners, not poor fellahin.29 Those who sold land included the mayors of Gaza, Jerusalem and Jaffa. As’ad elShuqeiri, a Muslim religious scholar and father of PLO chairman Ahmed Shuqeiri, took Jewish money for his land. Even King Abdullah leased land to the Jews. In fact, many leaders of the Arab nationalist movement, including members of the Muslim Supreme Council, sold land to Jews.30
“The British helped the Palestinians to live peacefully with the Jews.”
In 1921, Haj Amin el-Husseini first began to organize fedayeen (“one who sacrifices himself”) to terrorize Jews. Haj Amin hoped to duplicate the success of Kemal Atatürk in Turkey by driving the Jews out of Palestine just as Kemal had driven the invading Greeks from his country.31 Arab radicals were able to gain influence because the British Administration was unwilling to take effective action against them until they finally revolted against British rule.
Colonel Richard Meinertzhagen, former head of British military intelligence in Cairo, and later Chief Political Officer for Palestine and Syria, wrote in his diary that British officials "incline towards the exclusion of Zionism in Palestine." In fact, the British encouraged the Palestinians to attack the Jews. According to Meinertzhagen, Col. Waters-Taylor (financial adviser to the Military Administration in Palestine 1919-23) met with Haj Amin a few days before Easter, in 1920, and told him he had a great opportunity at Easter to show the world that Zionism was unpopular not only with the Palestine Administration but in Whitehall, and that if disturbances of sufficient violence occurred in Jerusalem at Easter, both General Bols [Chief Administrator in Palestine, 1919-20] and General Allenby [Commander of Egyptian Force, 1917-19, then High Commissioner of Egypt] would advocate the abandonment of the Jewish Home. Waters-Taylor explained that freedom could only be attained through violence.32
Haj Amin took the Colonel's advice and instigated a riot. The British withdrew their troops and the Jewish police from Jerusalem, allowing the Arab mob to attack Jews and loot their shops. Because of Haj Amin's overt role in instigating the pogrom, the British decided to arrest him. Haj Amin escaped, however, and was sentenced to 10 years imprisonment in absentia.
A year later, some British Arabists convinced High Commissioner Herbert Samuel to pardon Haj Amin and to appoint him Mufti. By contrast, Vladimir Jabotinsky and several of his followers, who had formed a Jewish defense organization during the unrest, were sentenced to 15 years imprisonment.33
Samuel met with Haj Amin on April 11, 1921, and was assured that the influences of his family and himself would be devoted to tranquility. Three weeks later, riots in Jaffa and elsewhere left 43 Jews dead.34
Haj Amin consolidated his power and took control of all Muslim religious funds in Palestine. He used his authority to gain control over the mosques, the schools and the courts. No Arab could reach an influential position without being loyal to the Mufti. His power was so absolute no Muslim in Palestine could be born or die without being beholden to Haj Amin.35 The Mufti's henchmen also ensured he would have no opposition by systematically killing Palestinians from rival clans who were discussing cooperation with the Jews.
As the spokesman for Palestinian Arabs, Haj Amin did not ask that Britain grant them independence. On the contrary, in a letter to Churchill in 1921, he demanded that Palestine be reunited with Syria and Transjordan.36
The Arabs found rioting to be an effective political tool because of the lax British attitude and response toward violence against Jews. In handling each riot, the British did everything in their power to prevent Jews from protecting themselves, but made little or no effort to prevent the Arabs from attacking them. After each outbreak, a British commission of inquiry would try to establish the cause of the violence. The conclusion was always the same: the Arabs were afraid of being displaced by Jews. To stop the rioting, the commissions would recommend that restrictions be placed on Jewish immigration. Thus, the Arabs came to recognize that they could always stop the influx of Jews by staging a riot.
This cycle began after a series of riots in May 1921. After failing to protect the Jewish community from Arab mobs, the British appointed the Haycraft Commission to investigate the cause of the violence. Although the panel concluded the Arabs had been the aggressors, it rationalized the cause of the attack: The fundamental cause of the riots was a feeling among the Arabs of discontent with, and hostility to, the Jews, due to political and economic causes, and connected with Jewish immigration, and with their conception of Zionist policy....37 One consequence of the violence was the institution of a temporary ban on Jewish immigration.
The Arab fear of being displaced or dominated was used as an excuse for their merciless attacks on peaceful Jewish settlers. Note, too, that these riots were not inspired by nationalistic fervor — nationalists would have rebelled against their British overlords — they were motivated by racial strife and misunderstanding.
In 1929, Arab provocateurs succeeded in convincing the masses that the Jews had designs on the Temple Mount (a tactic that would be repeated on numerous occasions, the most recent of which was in 2000 after the visit of Ariel Sharon). A Jewish religious observance at the Western Wall, which forms a part of the Temple Mount, served as a catalyst for rioting by Arabs against Jews that spilled out of Jerusalem into other villages and towns, including Safed and Hebron.
Again, the British Administration made no effort to prevent the violence and, after it began, the British did nothing to protect the Jewish population. After six days of mayhem, the British finally brought troops in to quell the disturbance. By this time, virtually the entire Jewish population of Hebron had fled or been killed. In all, 133 Jews were killed and 399 wounded in the pogroms.38
After the riots were over, the British ordered an investigation, which resulted in the Passfield White Paper. It said the immigration, land purchase and settlement policies of the Zionist Organization were already, or were likely to become, prejudicial to Arab interests. It understood the Mandatory's obligation to the non-Jewish community to mean that Palestine's resources must be primarily reserved for the growing Arab economy....39 This, of course, meant it was necessary to place restrictions not only on Jewish immigration but on land purchases.
“The Mufti was not anti-Semitic.”
In 1941, Haj Amin al-Husseini fled to Germany and met with Adolf Hitler, Heinrich Himmler, Joachim von Ribbentrop and other Nazi leaders. He wanted to persuade them to extend the Nazis' anti-Jewish program to the Arab world.
The Mufti sent Hitler 15 drafts of declarations he wanted Germany and Italy to make concerning the Middle East. One called on the two countries to declare the illegality of the Jewish home in Palestine. Furthermore, they accord to Palestine and to other Arab countries the right to solve the problem of the Jewish elements in Palestine and other Arab countries, in accordance with the interest of the Arabs and, by the same method, that the question is now being settled in the Axis countries.40
In November 1941, the Mufti met with Hitler, who told him the Jews were his foremost enemy. The Nazi dictator rebuffed the Mufti’s requests for a declaration in support of the Arabs, however, telling him the time was not right. The Mufti offered Hitler his thanks for the sympathy which he had always shown for the Arab and especially Palestinian cause, and to which he had given clear expression in his public speeches....The Arabs were Germany’s natural friends because they had the same enemies as had Germany, namely....the Jews.... Hitler replied:
In 1945, Yugoslavia sought to indict the Mufti as a war criminal for his role in recruiting 20,000 Muslim volunteers for the SS, who participated in the killing of Jews in Croatia and Hungary. He escaped from French detention in 1946, however, and continued his fight against the Jews from Cairo and later Beirut.
“The Irgun bombed the King David Hotel as part of a terror campaign against civilians.”
The King David Hotel was the site of the British military command and the British Criminal Investigation Division. The Irgun chose it as a target after British troops invaded the Jewish Agency on June 29, 1946, and confiscated large quantities of documents. At about the same time, more than 2,500 Jews from all over Palestine were placed under arrest. The information about Jewish Agency operations, including intelligence activities in Arab countries, was taken to the King David Hotel.
A week later, news of a massacre of 40 Jews in a pogrom in Poland reminded the Jews of Palestine how Britain’s restrictive immigration policy had condemned thousands to death.
Irgun leader Menachem Begin stressed his desire to avoid civilian casualties. In fact, the plan was to warn the British so they would evacuate the building before it was blown up. Three telephone calls were placed, one to the hotel, another to the French Consulate, and a third to the Palestine Post, warning that explosives in the King David Hotel would soon be detonated.
On July 22, 1946, the calls were made. The call into the hotel was apparently received and ignored. Begin quotes one British official who supposedly refused to evacuate the building, saying: “We don’t take orders from the Jews.”42 As a result, when the bombs exploded, the casualty toll was high: a total of 91 killed and 45 injured. Among the casualties were 15 Jews. Few people in the hotel proper were injured by the blast.43
In contrast to Arab attacks against Jews, which were widely hailed by Arab leaders as heroic actions, the Jewish National Council denounced the bombing of the King David.44
For decades the British denied they had been warned. In 1979, however, a member of the British Parliament introduced evidence that the Irgun had indeed issued the warning. He offered the testimony of a British officer who heard other officers in the King David Hotel bar joking about a Zionist threat to the headquarters. The officer who overheard the conversation immediately left the hotel and survived.45
... and the Arab World (NY: Funk and Wagnalls, 1970), p. 172; Howard Sachar, A History of Israel: From the Rise of Zionism to Our Time (NY: Alfred A. Knopf, 1979), p. 146.
|
Von Hippel-Lindau Syndrome
What is von Hippel-Lindau syndrome (VHL)?
Two eye healthcare providers—von Hippel in Germany and Lindau in Sweden—were the first to publish descriptions of tumors in patients' eyes and brains, hallmarks of this genetic condition. In the 1960s, the disease was named von Hippel-Lindau syndrome to recognize their contributions in characterizing the condition.
Von Hippel-Lindau syndrome is a rare genetic disorder characterized by an increased risk of developing the tumors listed below:
Hemangioblastomas. Benign (noncancerous) tumors made up of nests of blood vessels of the brain and spine.
Hemangioblastomas of the retina
Pheochromocytomas. A neuroendocrine tumor, usually benign (noncancerous), within or outside of the adrenal gland
Renal cell carcinoma. Cancerous tumor of the kidney that happens in about 70% of individuals with VHL.
Less commonly, some individuals develop endolymphatic sac tumors (ear tumors that can cause deafness if undetected), pancreatic tumors, and cystadenomas of the epididymis or broad ligament. Other signs include cysts (pockets of fluid) of the kidney and pancreas.
The VHL gene is a tumor suppressor gene located on chromosome 3. This usually controls cell growth and cell death. Both copies of a tumor suppressor gene must be changed, or mutated, before a person will develop cancer. In about 80% of VHL cases, the first mutation is inherited from either the mother or the father. It is present in all cells of the body at birth. This is called a germline mutation. Whether a person who has a germline mutation will develop a tumor and where the tumor(s) will develop depends on where (in which cell type) the second mutation happens. For example, if the second mutation is in the retina, then a retinal hemangioblastoma may develop. If it is in the adrenal gland, then a pheochromocytoma may develop. The process of tumor development actually needs mutations in multiple growth control genes. Loss of both copies of the VHL gene is just the first step in the process. What causes these additional mutations is unknown. Possible causes include chemical, physical, or biological environmental exposures or chance errors in cell replication.
Some individuals who have inherited a germline VHL mutation never develop cancer. This is because they never get the second mutation necessary to knock out the function of the gene and start the process of tumor formation. This can make the cancer appear to skip generations in a family. But, in reality, the mutation is present. Individuals with a VHL mutation, regardless of whether they develop cancer, have a 50/50 chance to pass the mutation on to each of their children. About 20% of VHL cases are new mutations, and not inherited from a parent.
It is also important to remember that the VHL gene is not located on the sex chromosomes. Therefore, mutations can be inherited from either the mother's side or the father's side of the family.
Molecular genetic testing of VHL is available and identifies a mutation in about 90% to 100% of affected people. Genetic testing is also considered part of the standard management for first-degree relatives (parent, siblings, children) of affected people. For people who are mutation-positive, annual screening to find tumors before severe complications develop is recommended. Genetic testing of unaffected relatives is useful only if a germline mutation has already been identified in an affected family member. |
What is Technology Integration?
Technology integration is when teachers design experiences that require students to use
technology as part of their learning activities in ways that make learning more active, collaborative,
constructive, authentic, and engaging.
Definitions for what constitutes effective technology
integration have changed over the last three decades. At one
time, placing computer labs in schools was thought to be a
quick solution for preparing students to use technology.
During these times, teachers largely used their single
classroom computers for administrative tasks and left student
technology training to computer lab teachers. Even when a
classroom had several computers, these machines were often
housed in the back of a room where they collected dust.
Students used computers in isolation from the classroom and
disconnected from the curriculum. It was soon realized that
for technology to be used meaningfully, teachers had to use it
in the classroom and use it with a definite purpose—to
engage students in learning. When this kind of use occurs,
technology becomes just another tool in a teacher's repertoire.
This evolution in “technology integration” presents us with
the substantial need for more effective teacher training, which
has remained a challenge to this day. While there seems to be
consensus that technology should be moved from computer
labs into the classroom, there are still many teachers who have yet to adopt the use of technology.
Undoubtedly, adding technology to the already complex task of teaching will not be accomplished
quickly but the rewards can be tremendous.
Why Do Teachers Need to Know How to Use Technology?
21st Century World and Workplaces – The World is Different
Because the world and workplaces are so different, today’s teachers need to know how to use
technology so they can prepare students for a 21st century world and workplaces that demand
greater skills. The world is much different than just a few decades ago and the demands of the 21st
century workplace have grown exponentially. Due to shifting demographics, the United States
workforce is growing at a slower rate than in the past. Other changes, such as the rapidly increasing
pace of technological change and the expanding economic globalization are making it necessary for
companies to recruit workers outside the U.S. At the same time, Asia and Europe are turning out
significantly more graduates than the U.S. in fields that are critical for economic growth, thus adding
to the competition for jobs (Karoly & Panis, 2004). Just as global competition has increased, there has been a steady growth in the share of jobs that require higher level 21st century skills and more education. 21st century skills include, among other things, abstract reasoning, analyzing, problem-solving, innovating, communicating, and creating. Jobs also call for individuals who have strong technological abilities, and these are rapidly changing as new technologies become more prevalent. For example, visual literacy skills are more important as most of our information is accessed on the visually-rich Internet. Also, new and powerful modeling software is used to solve problems in a variety of occupations requiring spatial literacy, or an awareness and understanding of how things work in relationship to space.

"Integrating technology into classroom instruction means more than teaching basic computer skills and software programs in a separate computer class. Effective tech integration must happen across the curriculum in ways that research shows deepen and enhance the learning process. In particular, it must support four key components of learning: active engagement, participation in groups, frequent interaction and feedback, and connection to real-world experts. Effective technology integration is achieved when the use of technology is routine and transparent and when technology supports curricular goals."
Edutopia (¶ 2, 2008)
The rapidity of technological change, which is only expected to accelerate in the future, demands that
we are adaptable, flexible, and ready to learn and relearn as required on the job. It is easy to see how
in a 21st century world there is a greater need than in the past for more workers who are well
educated, have high-levels of skills, are tech savvy, and adaptable.
21st Century Digital Students – Students are Different
Watch this video: A Vision of K-12 Students Today
Because students today have grown up with technology, they learn differently than in times past.
Many states understand the magnitude of this difference and believe that we must take serious
measures to refocus our schools for 21st century learners. The Partnership for 21st Century Skills
(P21), an advocacy organization whose goal is to define 21st century education to ensure every
child’s success as workers in the 21st century world, has led the initiative to help schools in this
refocusing effort. West Virginia, the second state in the nation to join P21, defined 21st century
learners appropriately and the role teachers will play in educating them.
A 21st century learner is part of a generation that has never known a world without
the Internet, without computers, without video games and without cell phones.
They are digital natives who have grown up with information technology.
To these students, life without digital technologies seems alien. Their aptitudes,
attitudes, expectations and learning styles reflect the stimulating environment in
which they were raised. For most, instant messaging has surpassed the telephone
and e-mail as the primary form of communication. Control, alt, delete is as basic to
them as learning their ABCs and 123s.
Twenty-first century learners are always on, always connected. They are
comfortable multitasking. They sit at the computer, working on their homework
while listening to an iPod. At the same time, they may have 10 different chat
windows open, be playing a video game or surfing the Web while a TV blares in the
background. To them, technology is only a tool that they can customize to access
information and communicate.
“School should be less about
preparation for life and more
like life itself.” John Dewey
Twenty-first century learners are multimedia oriented. Their world is Web-based.
They want instant gratification. They are impatient, creative, expressive and social.
They are risk-takers who thrive in less structured environments.
Constant exposure to digital media has changed not only how these students
process information and learn but how they use information. Children today are
fundamentally different from previous generations in the way they think, access and
absorb information, and communicate in a modern world.
To cross the digital divide and reach these students, teachers must change not only
what they teach, but how they teach. To do so, educators must acknowledge this
digital world and educate themselves about it. To truly understand them, educators
must immerse themselves in the digital landscape where the 21st century learner
lives (West Virginia Department of Education, n.d.).
While some states understand that today’s digital learners are different and departments of
education in these states are proactive about changing their education systems, there are still
reasons to be discouraged about the current state of education. Statistics show that many students in
this generation are not engaged in their learning. Amidst strong global competition, U.S. students
score much lower than many countries on achievement in mathematics, reading, science, analytical thinking, and problem-solving (OECD, 2010). The current model of education, which was built in the 20th century when individuals obtained their knowledge early in life and used that knowledge for careers that lasted many years (Karoly & Panis, 2004), is failing 21st century students who are living in a world of increasing change and greater demands. Students must learn to be lifelong learners, willing to adapt in a changing world. To add to the dilemma, while this generation is exposed to and regularly uses a variety of technologies—computers, the Internet, instant messaging, downloading music, social networking, cell phones, etc.—they are significantly deficient in the types of 21st century technological skills needed in the workplace (Lorenzo & Dziuban, 2006). We are living in critical
times when students need to know how to use 21st century skills that inherently utilize technology.
These concerns are expounded when one considers the narrow conception that our educational
system still has of technology. In most schools, where technology is used only as a means to develop
students’ computer skills in lab settings, meaningful content and learning is separated from the use
of technology. Technology is viewed simply as a set of tools that allows us to function in a digital
world. This perception of technology may be part of the reason that widespread classroom use is not
yet a reality. Despite what many people believe, educators are not widely using technology. In fact,
Vockley (2008, p. 3) noted, “It is shocking and inconceivable—but true—that technology is
marginalized in the complex and vital affairs of education." Besides this narrow conception, other
obstacles preventing widespread computer use include the scarcity of time for training, few
technology resources, insufficient computer access, inadequate technical or administrative support,
time given to standardized testing, and teachers’ attitudes and beliefs (Hew & Brush, 2007).
This situation is a travesty when research shows that students learn more when engaged in
meaningful, relevant, and intellectually stimulating schoolwork and that the use of technology can
increase the frequency for this type of learning (North Central Regional Educational Laboratory,
2003). Technology provides students with unique opportunities that would be impossible otherwise.
Among other things, students can tap into the knowledge of experts; visualize and analyze data; link
learning to authentic contexts; and participate in electronic, shared reflection (Bransford, Brown, &
Cocking, 1999). For this kind of learning to occur, technology must be made an integral part of
educational operations just as it is in the world of business. When this happens, teachers can offer a more rigorous, creative, relevant, and engaging curriculum where students must develop and practice 21st century skills.
21st Century Teacher Preparation – Teaching Should be Different
Because today’s world and students are different, teachers need to know how to use technology so
they can teach differently and model the appropriate use of technology for students in the
classroom. However, learning to use technology requires that teachers are given time for appropriate
training relevant to their classroom situations. Preservice teachers need effective training that
enables them to envision how technology can be an effective and motivating resource in their future
classrooms. In college courses where preservice teachers learn methods of instruction, professors
need to model the use of technology in their teaching and plan assignments that require preservice
teachers to do the same. Research shows that modeling the integration of technology is one of the
key factors that influences whether or not a preservice teacher will use technology in their future
classrooms (Brown & Warschauer, 2006; Fleming, Motamedi, & May, 2007). It is not enough simply to know how to use the technology; teachers need to know how to leverage the technologies to help their students develop 21st century skills (Lambert & Cuper, 2009). Only as this happens on a regular
basis will our youth be prepared for the changing demands of this 21st century world where
technology is indispensable.
Watch this video: Teaching the 21st Century Learner
[Link to http://www.youtube.com/watch?v=DTWTKDdw8f4&feature=related]
21st Century Standards – Standards are Different
According to professional standards, teachers need to know how to use technology so they can design learning experiences for students that make use of these tools. Any textbook should be based
on current standards in the field, and this book is no exception as it supports the national technology
standards for teachers and students (See standards at the end of the chapter). Carefully read and
reflect on the national technology standards to become acquainted with them and their relationship
to 21st century skills. The International Society for Technology in Education (ISTE) published the
National Educational Technology Standards for Students 2007 (NETS-S) (ISTE, 2007) and the
National Educational Technology Standards for Teachers 2008 (NETS-T) (ISTE, 2008). The most
recent versions of these standards represent a significant step forward in meeting the demands of
21st century learning. This textbook also requires students to investigate and support their respective
academic content standards in each activity. This practice will help students understand that the
course is not just for learning the technical skills of using technology, but rather, it is for learning to
use technology in the context of the classroom. In this way, preservice teachers will learn the real
meaning of technology integration when technology is simply the means to help students learn in
more exciting ways.
21st Century Skills – Skills are Different
Teachers need to know how to use technology because these are the tools that will compel students
to practice and learn 21st century skills, those skills needed in today's world and workplace. There is
a growing movement worldwide to redesign classrooms by focusing on 21st century skills
(Commission of European Communities, 2008; Partnership for 21st Century Skills, 2009a; Vockley,
2008). One example is the Partnership for 21st Century Skills that is working to design American high
schools for 21st century learning and achievement. In these schools, students would acquire
knowledge in their core subjects but they would also intentionally and purposefully acquire
21st century knowledge and skills in the context of learning academic content. 21st century skills are
those skills needed to be successful in today’s world. The Partnership for 21st Century Skills (2009b)
proposes that 21st Century curriculum and instruction:
Focuses on 21st century skills discretely in the context of core subjects and 21st century
Focuses on providing opportunities for applying 21st century skills across content areas and
for a competency-based approach to learning
Enables innovative learning methods that integrate the use of supportive technologies,
inquiry and problem-based approaches and higher order thinking skills
Encourages the integration of community resources beyond school walls
Curriculum should be designed to produce deep understanding and authentic application of 21st
century skills; include models of appropriate learning activities; clearly identify 21st century skills as
the goals for learning; and be embedded with performance-based assessments. Instruction should
connect essential concepts and skills, coach students from teacher-guided experiences toward
independence, offer real-world opportunities to demonstrate their mastery of key concepts and 21st
century skills, and connect curriculum to learners’ experiences (Partnership for 21st Century Skills,
2009b). In this textbook, the curriculum is designed around 21st century skills that will produce deep
understanding of what it means to integrate technology. 21st century goals are clearly identified in
each chapter so that preservice teachers can learn about them and understand how to promote these
same skills in their future classrooms. Watch the video below and then read about each of the 21st
century skills that you should integrate in your future classroom.
Watch this video: 21st Century Skills in Action: Critical Thinking,
Creative Thinking, and Problem Solving
[Link to http://www.youtube.com/watch?v=2s6PIrXwt7M]
A. Creativity and Innovation
Creativity is using existing knowledge and originality to generate and develop new ideas or
products. Innovation is acting on creative ideas to make a tangible contribution to
society.Today’s intelligence involves much more than acquired knowledge; rather, it is the
capacity to create, produce and apply learning to new situations.Students need these skills
so they can use their creative ideas and contribute new ideas and products for society.
B. Communication and Collaboration
Communication is the ability to convey one's thoughts effectively to others for a range of purposes using a variety of media and technologies. Collaboration is demonstrating the
ability to work together effectively, assuming shared responsibility for the work to be
accomplished, and contributing to a project team to solve problems. Students need to be able
to communicate and collaborate so they can interact and contribute to the teamwork in a
C. Research and Information Literacy
Research and information literacy involve the ability to analyze information critically;
determine what information is needed; locate, synthesize, evaluate, and use information
effectively. Students need these skills especially today to access the abundance of available
information efficiently and effectively, to use the information accurately, and understand the
ethical issues related to the use of this information.
D. Critical Thinking
Critical thinking requires the abilities to analyze, evaluate, synthesize, interpret, and make connections between bits of information. Bloom's early taxonomy of cognition included six graduated levels of thinking that move from knowledge to comprehension, application, analysis, synthesis and, finally, evaluation (Bloom, 1956). As Bloom's taxonomy was updated,
the higher levels of thinking were identified as analyzing, evaluating, and creating. Thus, the
same three skills continued to be considered the higher level thinking skills. The order of the
top two skills was reversed, and the name “synthesis” was changed to “creating” to reflect
the importance of the creative process (Anderson, Krathwohl, Airasian, Cruikshank, et al., 2000). The higher levels of thinking—analyzing, evaluating, and creating—are key to critical thinking and form the basis for developing all other 21st century skills (Levy & Murnane,
2004). Students need these skills to identify and ask questions, collect and analyze data, and
use multiple processes and diverse perspectives to obtain answers to solve problems.
E. Nonlinear Thinking
Linear thinking is a process of thought following a step-by-step progression in one
direction. Linear multimedia tools generally progress from one slide to the next and are commonly used by instructors as a supplementary teaching aid. This form of multimedia tends to limit learning potential because it does not require active participation. Nonlinear thinking is "human thought characterized by expansion in multiple directions, rather than in one direction, and based on the concept that there are multiple starting points from which one can apply logic to problem" (Chuck's Lamp, 2009). Nonlinear thinking is required when
reading or working in a hyperlinked environment such as the Internet where hyperlinks
allow a viewer to navigate in multiple directions.
Multimedia nonlinear environments, such as are found in electronic CDs or the Internet,
offer viewers the choice to navigate wherever they like using hyperlinks among information
containing a variety of complementing media such as text, audio, graphics, animation,
and/or video. These kinds of environments provide viewers interactivity, control of progress,
and choice in their construction of knowledge. While multimedia classroom tools offer
classroom teachers multiple ways of engaging students in the learning process, they also
present challenges for teachers. One of the challenges lies in the fact that certain multimedia
tools promote far more active learning and student decision-making than others (Jacobson
& Archodidou, 2000). Even with these challenges, students need to know how to navigate in
nonlinear multimedia environments because these are so prevalent today. Students also
need to know how to create their own nonlinear multimedia projects because this will allow
them to use their creativity, critical thinking skills, and construct their own knowledge.
F. Visual Literacy and Visual Thinking
Visual literacy is the ability to interpret, make meaning, and create messages from
information presented in the form of images (Wileman, 1993; Heinich, Molenda, Russell,
&Smaldino, 1999). Visual thinking is the ability to turn information of all types into
pictures, graphics, and other visual forms to communicate information by associating ideas,
concepts, and data or other verbal information with images. Visual forms of communication
include diagrams, maps, videos, gestures, street signs, time lines, flow charts, symbols, etc.
Increasingly, text-based language is being replaced by videos, images, audio, graphs,
illustrations, and other forms of electronic media as the Internet, handheld digital devices,
and social networks are the predominant modes of literacy for students. Visuals can be
powerful forms of communication as they capture attention, evoke emotion, engage, and
provoke inquiry and higher order thinking, provide creative outlets for writing, aid in
problem solving, and enhance reading. For example, McVicker (2007) uses comics for
instruction because they can help students develop visual literacy skills by inferring meaning
from text and images. Sorensen (2008) uses primary sources to teach world history enabling
students to look for patterns in historical events and evaluate the unspoken assumptions
that provide insight into a civilization. Moline (2006) uses graphic organizers because they
can provide an ideal framework for writing especially since even young readers can
interpret these long before they can read. Digital storytelling has become a compelling,
engaging, and interactive way of letting students express themselves. George Lucas, a
renowned filmmaker who made Star Wars and Raiders of the Lost Ark, envisions a new way
of learning that incorporates cinema in the classroom.
They [students] need to understand a new language of expression. The
way we are educating is based on nineteenth-century ideas and methods.
Here we are, entering the twenty-first century, and you look at our
schools and ask, 'Why are we doing things in this ancient way?' Our
system of education is locked in a time capsule. You want to say to the
people in charge, 'You're not using today's tools! Wake up!'
We must teach communication comprehensively, in all its forms. Today
we work with the written or spoken word as the primary form of
communication. But we also need to understand the importance of
graphics, music, and cinema, which are just as powerful and in some ways
more deeply intertwined with young people's culture. We live and work
in a visually sophisticated world, so we must be sophisticated in using all
the forms of communication, not just the written word. (Daly, 2004)
While the benefits of integrating visuals, particularly technology-based visual forms in the
classroom are numerous, visual forms of communication are being used to persuade, bias,
profit, and manipulate students. Media uses beautiful people to attract attention and sell
products. The tactics of fear, humor, sentiment, and intensity are used to stimulate feelings
and promote solutions to common problems. Flattery persuades viewers to love something
or some object. Names are associated with negative symbols to make you question the worth
of some idea. All these negative aspects of visual forms of media make it essential that
teachers incorporate visual literacy in their instruction. Students need these skills so they
can recognize, evaluate, and interpret the visuals they encounter and understand how these
images shape their personal lives as well as a culture and society.
G. Spatial Thinking
Spatialthinking is a set of cognitive skills that require individuals to have an awareness of
space (National Research Council, 2006). It is the concept of space that makes spatial
thinking a distinctive form of thinking. Students need to use spatial thinking to understand
space and its properties (e.g., dimensionality, continuity, proximity, and separation) that can
be used to interconnect all knowledge. Studentscan also understand how the properties of
space can help them structure problems, analyze information, find answers to problems,
predict patterns present in data, and express and communicate solutions to problems.
Silverman (2002) developed the concept of the visual-spatial learners to define those
learners who think mainly in images. Visual-spatial thinking is often characteristic of
creative individuals. Some at-risk students tend to have a preference for visual spatial
thinking, which is actually faster and more powerful than auditory sequential thinking.
Silverman found that some students had extraordinary abilities to solve problems presented
to them visually and excelled in the spatial tasks of intelligence tests. Thus, teachers
sometimes overlook some of their students’ potential if they do not teach in a way that
allows these students to capitalize on this ability.
H. Digital-Age Reflection
The concepts, “reflection” and “reflective practice”are entrenched in teacher education
literature (Ottesen, 2007) with good reason. Reflection is a vehicle for critical analysis and
problem solving and is at the heart of purposeful learning. Reflective observation focuses on
the knowledge being learned (i.e., curriculum) as well as the experiential practice (i.e.,
pedagogy); both are important aspects of the learning process (Kolb, 1984). Through
metacognitive examination of their own experiences, preservice teachers are encouraged to
take a closer look at what they are learning and to explore their own growth in greater
depth. Experiencing the power of reflection in their own learning, they are more likely to
encourage similar reflection on the part of their students. When reflection has been included
in instruction, it allows preservice teachers to address uncertainties in their own learning,
develop new approaches to learning, and document their growth as reflective practitioners
(Capobianco, 2007; Moran, 2007). While reflective activities have long included journal
entries or narrative writing, technology can facilitate and enhance the skills of reflection as
electronic reflections can be readily archived, revisited, updated and shared in exciting and
creative ways. Students need to know how to reflect on their learning so they can critically
analyze what they’ve learned, address uncertainties, examine misconceptions, develop new
approaches to learning, and document their learning. |
Hubble collects and stores its own power by using two solar arrays. For roughly one-third of each orbit, however, the Sun goes into eclipse as Hubble passes into the Earth’s shadow. At these times, the spacecraft relies on its six batteries to meet the spacecraft’s power requirement.
Hubble’s original array of six nickel hydrogen batteries were still functioning, when astronauts visited the observatory for Servicing Mission 4, 19 years after launch. That is pretty remarkable given the original design lifecycle was for just 5 years. Despite the power management skills employed by ground based technicians, the batteries were starting to lose their residual capacity however, so it was time for an upgrade.
Astronauts replaced all six batteries during SM4. The replacement batteries are also made of nickel hydrogen, but a different manufacturing process makes them more effective. In addition, each new battery has the added safety feature of an isolation switch that electrically dead faces each connector. This creates a safe environment for astronauts installing the battery modules.
Each of the 6 batteries begins its life on the ground with approximately 88 Ampere-hours of capacity. Due to limitations of Hubble’s thermal control system, the batteries can only be charged to 75 Ampere-hours once installed. The 6 new batteries began their life on-orbit by delivering a total of over 450 Ampere-hours of capacity to Hubble. This is actually less than the old batteries, but power savings elsewhere on the spacecraft have reduced the overall requirement.
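The quoted total follows from simple arithmetic. The short sketch below is only an illustration added here; the figures are the ones quoted above, not additional data from the fact sheet.

```python
# Battery capacity figures quoted above, in Ampere-hours (Ah)
capacity_as_built_ah = 88   # per battery, on the ground
on_orbit_limit_ah = 75      # per battery, limited by the thermal control system
num_batteries = 6

total_on_orbit_ah = num_batteries * on_orbit_limit_ah
print(total_on_orbit_ah)    # 450, consistent with the "over 450 Ampere-hours" figure
```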
Fact Sheet (PDF file)
Battery Module Assembly with lid removed, showing the cells and power isolation switches |
Although moles are very common in Western Washington, they are rarely seen due to their subterranean lifestyle. But you usually know when they're around. As moles excavate and maintain their underground burrow systems, excess soil is pushed to the surface forming molehills.
Two species—the Townsend's Mole and the Coast or Pacific Mole—are responsible for building the molehills in Washington State. A third species, the Shrew Mole, does not build extensive burrow systems, but spends time on the soil surface or under leaf litter.
Moles are highly specialized digging machines. They have broad, shovel-like forelimbs that allow them to power through soil. Although those who maintain gardens or lawns often view moles negatively, the burrowing is actually beneficial. It aerates and mixes soil layers and improves drainage. In addition, moles feed primarily on invertebrates, including insect larvae, such as those of Crane Flies, which damage roots.
Moles are generally solitary, and aggressively defend their burrow systems. Mating season, in January and February, is an exception, when males will seek out females. Females give birth about 4 to 6 weeks after mating. Young moles spend 30 to 36 days with their mothers before dispersing to find their own territories. When they disperse, the young moles usually move above ground at night where many fall prey to owls, coyotes and other nocturnal predators.
Solving and preventing conflicts
Most conflict situations have to do with the molehills. Moles do sometimes harm plants, although inadvertently, by uprooting or covering them up as they diligently excavate. Moles do occasionally eat plant matter such as roots, tubers and bulbs. The presence of moles, however, is likely to be more helpful than harmful to the health of the soil on your property.
Excluding moles from your entire yard is difficult, but there are ways to prevent them from gaining access to your flower or vegetable gardens.
- Create raised beds for your garden and ornamental plants. If you attach one-inch galvanized or vinyl coated hardware cloth to the bottom of the raised bed, moles will be effectively prevented from digging up from below.
- Use a mole repellant. According to the Washington Department of Fish and Wildlife (WDFW) there are commercially available castor oil-based repellents that have been scientifically tested on moles in the Eastern U.S. with some success. Or try this homemade repellant suggested by WDFW.
- Try other commercially available products such as mechanical "thumpers" that send vibrations into the ground that supposedly encourage moles to leave. Some anecdotal evidence suggests these work for small yards, but no scientific evaluation of the products has been done.
- The least expensive and most effective way to approach a "mole problem" is to learn to accept their presence. You can remove or tamp down molehills. Inspect your yard regularly and re-bury any exposed roots to mitigate damage to plants. You can transition your yard from a solid green mat of grass to a diverse habitat filled with native plants. The native plantings will thrive in the healthy soil that the moles have helped cultivate, and the local wildlife (including the moles) will thank you!
- Call PAWS Wildlife Center at 425.412.4040.
- Washington Department of Fish and Wildlife |
When a nuclear reactor experiences a catastrophic failure, an uncontrolled release of nuclear fission products such as radioactive iodine (eg, 131I) may occur (Fig. A42–1). Monitoring of the exposed population of Belarus, Russia, following the Chernobyl meltdown of the reactor in 1986 revealed almost a 100-fold increase in the incidence of thyroid cancer among children, echoing similar long-term effects of the nuclear destruction of Hiroshima and Nagasaki in 1945.4 Iodine is a solid, bimolecular halogen that sublimates under standard conditions. Potassium iodide (KI), the most commonly available iodide salt, has been "generally recognized as safe" by the Food and Drug Administration (FDA) for nearly 40 years. Potassium iodide is recommended to prevent the uptake of radioactive iodine into the thyroid in order to reduce the future risk of thyroid cancer.
The decay pathway that describes how 131I derives from nuclear fuel, whether in a bomb or a reactor, ultimately decaying to stable xenon.
Iodine, or its ionic form iodide (I–), is an essential nutrient present in humans in minute amounts of 15 to 25 mg. Iodine is required for the synthesis of the thyroid hormones L-triiodothyronine (T3) and L-thyroxine (T4), which in turn regulate metabolic processes and determine early growth of most organs, especially the brain. Radioactive tracer iodine is distributed in the neck only 3 minutes after ingestion in a fasting subject. Iodide is actively transported with sodium into thyroid follicular cells where it is concentrated 20- to 40-fold compared with its serum concentration. It is then transported into the follicular lumen where it iodinates thyroglobulin to form T3 and T4 (see Chap. 49). Thyroid hormones are metabolized in hepatic and other peripheral, extrathyroid tissues by sequential deiodination. Iodide is then excreted in sweat, feces, and urine, and the presence of iodine in the urine is considered a reliable indicator of adequate iodine intake.
Iodine deficiency is a worldwide health problem with large geographic areas deficient in iodine in the foods; this occurs predominantly in mountainous areas and regions far from the world's oceans. In 2003 the World Health Organization (WHO) estimated there were 1.9 billion people with insufficient iodine intake despite universal salt iodization.22 Iodine deficiency disorders include spontaneous abortions, congenital anomalies, endemic cretinism, goiter, subclinical or overt hypothyroidism, mental retardation, retarded physical development, decreased fertility, and increased susceptibility of the thyroid gland to radiation.
During the critical, immediate postnatal period and the prepubertal and pubertal growth periods, there is a progressive growth of the thyroid gland as well as an increase in thyroglobulin and iodothyronine stores. Insufficient iodine supply in the diet results in increased iodine trapping by the thyroid gland. That is, the thyroid gland accumulates a larger percentage of exogenous ingested iodide and more efficiently reuses iodine that ... |
x-ray imaging system converts ____ into ____
electric energy into electromagnetic energy
the study of stationary or fixed electric charges is known as?
what are the five laws of electrostatics?
4)inverse square law
force between 2 charges is directly proportional to the product of their charges and inversely proportional to the square of the distance between them is known as?
Charges reside on the external surface of conductors is known as?
Concentration Law: greatest distribution of charges on surface on ________ curve?
process of electric charges being added/subtracted from an object is known as what?
objects rub against one another and electrons travel from one to the other is known as?
two objects touch; electrons move from one object to another is known as?
Contact causes _______ of charges?
process of electrical fields acting on one another w/o contact is known as?
it is the most important method?
induction method used in the operation of _______devices?
study of electric charges in motion is known as?
movement of electrons or electricity results from the traveling of ______?
only _____ charges move along solid conductors
Positive charges are fixed in the ?
electrons move from _____ to _____ concentration
highest to lowest
______charge=object with more electrons
_____charge=object with weaker negative charge or an object with fewer electrons than another object
electric current travels from _____ to _____ poles
postive to negative
electric /electron flow travels from ____ to ____ poles
negative to positive
electric current occurs in:
1)vacuum (x-ray tube)
3) inonic solutions & metals
any substance through which electrons flow easily is known as?
Examples of conductors:
materials that resist electron flow is known as?
good insulators are:
materials with the ability to conduct electricity under certain conditions and insulate under other conditions is known as?
Two examples of semiconductors are:
silicon and germanium
allows electrons to flow freely w/no resistance below certain temperatures is known as?
superconductivity works with what kind of temepratures?
very cold (liquid nitrogen)
two examples of superconductivity is?
niobium and titanium wire, which are used in MRI
Nature of electron flow is?
1)direction of electron travel
2)quantity of electrons flowing
3)force of electron travel
4)opposition to current flow
what are the two currents for direction of electron flow?
Direct current (DC)
Alternating current (AC)
all electrons travel in the same direction is what type of current?
oscillating current is what type of current?
Quantity of electrons flowing=
quantity is measured in what unit of current?
Ampere = A
x-ray is measured in what unit?
causes number of electrons and x-rays produced to vary is known as what?
force of electron travel=
electric potential exists when the flow of electrons has?
unit of electric potential is?
electric potential determines the speed of electrons that determines penetrability which develops what?
amount of opposition to current flow is =
resistance or impedance
opposition is measured in
unit of resistance is called?
what are the 3 factors that impede the flow of electrons?
1)Length of the conductor
2)crosssectional diameter of the conductor
3)temperature of the conductor
Length of the conductor-as length doubles, resistance does what? and has what proportional relationship?
crosssectional diameter of the conductor-as diameter doubles, resistance does what? and has what type of relationship
temperature of the conductor-
causes increase or decreases of resistance depending on conductor, insulator or semiconductor
electric potential is sometimes called
unit of electric potential is
V for US household is
Electric power is known as
rate of doing work
Electric power is measured in
electric power-Household appliances tend to operate between
500 to 1500w
x-ray machines require kW (electric power)
20 to 150kW
x-ray machines require V (electric potential)
Ohm's law is an interrelationship of
current, potential, and resistance
for Ohms law voltage across the total circuit is equal to the
current x resistance
what is the Ohms law equation
electric potential in volts
electric power formula
P=current x voltage
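As a quick illustration of the last two formulas (V = I x R and P = I x V), here is a small Python sketch added as an example; the numbers are made up, not taken from any card.

```python
# Ohm's law: V = I * R; electric power: P = I * V
current_a = 0.5        # amperes (illustrative value)
resistance_ohm = 240   # ohms (illustrative value)

voltage_v = current_a * resistance_ohm   # 120 volts
power_w = current_a * voltage_v          # 60 watts

print(f"V = {voltage_v} V, P = {power_w} W")
```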
formed by controlling the resistance in the closed path of a conductor is known as?
all circuit elements are conected in a line are known as
what are some advantages to series circuits?
cheap and easy to fix, repair, or replace; greater potential difference = greater total voltage; current remains the same
what are some disadvantages to series circuits?
all resistances have to be operable; failure of one element cuts off the whole supply; resistance increases
each element has an individual branch is known as
what are some advantages to parallel circuits?
elements can operate at lower voltage because voltage doesn't change; failure of one element doesn't interrupt the others; resistance goes down
what are some disadvantages to parallel circuits?
current increases and can overheat, creating a fire risk
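The contrast between the two circuit types can also be seen numerically. The sketch below is an added illustration (not part of the original card set) using the standard equivalent-resistance formulas, and shows why total resistance rises in series and falls in parallel.

```python
resistors_ohm = [10, 20, 30]   # illustrative resistor values

# Series: resistances add, so total resistance increases with each element.
r_series = sum(resistors_ohm)                        # 60 ohms

# Parallel: reciprocals add, so total resistance drops below any single branch.
r_parallel = 1 / sum(1 / r for r in resistors_ohm)   # about 5.45 ohms

print(f"series: {r_series} ohms, parallel: {r_parallel:.2f} ohms")
```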
what are 2 safeguards that break the circuit before a dangerous temeprature is reached?
fuse, circuit breaker
metal tab that melts with increased heat is known as?
pops open and can be reused and reset
the voltage across the total circuit or any portion of the circuit is equal to the current time the resistance
Ohm's Law equation
A material that when combined with some other material can be turned into an insulator or a conductor.
a substance that readily conducts e.g. electricity and heat
a material that does not allow heat or electrons to move through it easily
what is the coulomb's law equation?
|
Note: This section uses a coordinate plane (that's what you graph equations on,
like graph paper) and the Pythagorean Theorem (used with right triangles). If you
really want to know, basic trigonometry is used at the end, but it's not as
important or even necessary for understanding the section. If you're a little shaky, read this section anyway. The next section is Free Body Diagrams
Just as a refresher:
To find what A and B are, there are two different methods, and both of them require a
lot of math. You should be familiar with the coordinate-plane and right triangles for
the first one and trigonometry for the second (don't worry if you don't know
trigonometry, though. The second method is only a bonus and not crucial).
For the first method, you need to know the coordinates of any two points along the line of the force, like in the example at right. You can then draw in two sides, parallel to the x and y axes, to form a triangle. These two sides represent the two component forces (introduced in The Parallelogram Law).
Though their positions aren't exactly right (they should all be coming out of the
same point of application), they're still the same length that they should be (ever looked at a parallelogram? Opposite sides are the same length). Notice
that this is a right triangle? Good, because now we can use a slight variation of the
Pythagorean Theorem, sometimes called the Distance Formula, to find the length
between the two points.
- General Form for a Vector: Ai+Bj
So, we subtract the x-coordinates: 8-2=6, squared is 36. For the y-coordinates:
4-1=3, squared is 9. Added together, we get 45. The square root of this is
approximately 6.708, the length between the two points. We then find the length of
each leg of the triangle. The vertical leg has length 4-1=3 (its top is at (8,4) and
its bottom is at (8,1), since it's straight down from point A and straight across
from point B). Using similar reasoning, the horizontal leg's length is 8-2=6. To find
the relative length of each leg as compared to the hypotenuse (like the relative
length of a component force as compared to the original), we divide each leg's length
by the length of the hypotenuse. So, the vertical leg is 3/6.708, which is
approximately .447, and the horizontal leg is 6/6.708, or about .894.
So far, this is what we have: .894i+.447j. This is the unit vector.
Now we need to incorporate the magnitudes of the forces into this. So far, we haven't
specified one, so let's say the original force was 100 Newtons. We know that the
horizontal force is .894 as compared to the original, so the component's magnitude is
.894*100, or 89.4 Newtons. The vertical force is .447*100, or 44.7 Newtons. Can you
believe we're done? We can now rewrite our original force in terms of its x and y
components, as a vector, in other words: 89.4i+44.7j. It's very easy to
add two forces represented this way by a vector. Add the i parts together and
then add the j parts together. As an example, we'll add a force,
500i+125j, to our other one. Their sum is
(89.4+500)i+(44.7+125)j, or 589.4i+169.7j. What could be
simpler? Notice that now we're completely using math, and don't have to bother with
the inaccuracies of trying to physically draw a parallelogram.
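The same arithmetic is easy to script. The following Python sketch, added here purely as an illustration of the example's numbers, builds the unit vector from the two points, scales it by the 100-Newton magnitude, and then adds the second force component-wise:

```python
import math

# Two points along the line of action of the force
x1, y1 = 2, 1
x2, y2 = 8, 4

dx, dy = x2 - x1, y2 - y1            # horizontal and vertical legs: 6 and 3
length = math.hypot(dx, dy)          # distance formula: about 6.708

ux, uy = dx / length, dy / length    # unit vector: about .894i + .447j

magnitude = 100                      # Newtons, as in the example
fx, fy = magnitude * ux, magnitude * uy   # about 89.4i + 44.7j

# Adding a second force given in component form is just component-wise addition
gx, gy = 500, 125
print(f"sum: {fx + gx:.1f}i + {fy + gy:.1f}j")   # about 589.4i + 169.7j
```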
The second method (there's another one? Don't worry, it's not long) is a take-off of
the first, though for it, you only need to know the angle that the original force
forms with the x-axis instead of the coordinates of two points. To understand it,
you'll also need to know about trigonometry (which isn't as bad as the name may sound).
If you know about trigonometry, you may have noticed that dividing the length of the
vertical leg by the hypotenuse would be the same as taking the sine of Angle O (see
example at right). Or that dividing the horizontal side by the hypotenuse would be the same
as taking the cosine. After all, the definition of sine is "opposite over
hypotenuse", and that of cosine is "adjacent over hypotenuse". You then proceed
like in the first example. Angle O is 26.57 degrees, so the sine is .447 and the cosine is .894. Then put the cosine in as the i part and the sine as the j part, .894i+.447j, and you've got the unit vector.
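A matching C sketch for this second method, assuming only the angle with the x-axis is known (C's trig functions work in radians, so the angle is converted first):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI = 3.14159265358979323846;

    /* Angle the force makes with the x-axis, in degrees (about 26.57 in the example). */
    double angle_deg = 26.57;
    double angle_rad = angle_deg * PI / 180.0; /* convert degrees to radians */

    double ux = cos(angle_rad); /* horizontal part of the unit vector, about .894 */
    double uy = sin(angle_rad); /* vertical part, about .447 */
    printf("Unit vector: %.3fi + %.3fj\n", ux, uy);
    return 0;
}
```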
- Information for Consumers
- Laws, Regulations & Standards
- Industry Guidance
- Other Resources
Microwave ovens heat food using microwaves, a form of electromagnetic radiation similar to radio waves. Microwaves have three characteristics that allow them to be used in cooking: they are reflected by metal; they pass through glass, paper, plastic, and similar materials; and they are absorbed by foods.
A device called a magnetron inside the oven produces microwaves. The microwaves reflect off the metal interior of the oven and cause the water molecules in food to vibrate. This vibration results in friction between molecules, which produces heat that cooks the food.
Microwaves are non-ionizing radiation, so they do not have the same risks as x-rays or other types of ionizing radiation. But, microwave radiation can heat body tissues the same way it heats food. Exposure to high levels of microwaves can cause skin burns or cataracts. Less is known about what happens to people exposed to low levels of microwaves.
To ensure that microwave ovens are safe, manufacturers are required to certify that their microwave oven products meet the strict radiation safety standard created and enforced by the FDA.
Microwave energy will not leak from a microwave in good condition. A damaged microwave oven may present a risk of microwave energy leaks. Contact your microwave’s manufacturer for assistance if your microwave oven has damage to its door hinges, latches, or seals, or if the door does not open or close properly.
- Risk of Burns from Eruptions of Hot Water Overheated in Microwave Ovens (November 28, 2007)
- Use Your Microwave Safely (November 12, 2008)
- Microwave Oven Radiation (July 14, 2006)
- Consumer Product Safety Commission (CPSC) Website
Manufacturers of electronic radiation emitting products sold in the United States are responsible for compliance with the Federal Food, Drug and Cosmetic Act (FFDCA), Chapter V, Subchapter C - Electronic Product Radiation Control.
Manufacturers of microwave ovens are responsible for compliance with all applicable requirements of Title 21 Code of Federal Regulations (Subchapter J, Radiological Health) Parts 1000 through 1005.
In addition, microwave ovens must comply with radiation safety performance standards in Title 21 Code of Federal Regulations (Subchapter J, Radiological Health) Parts 1010 and 1030.10.
Required Reports for the Microwave Oven Manufacturers or Industry
Industry Guidance - Documents of Interest
- Information Requirements for Cookbooks and User and Service Manuals (PDF Only) (PDF - 495KB)
- Guide for Establishing and Maintaining a Calibration Constancy Intercomparison System for Microwave Oven Compliance Survey Instruments (FDA 88-8264) (PDF Only) (PDF - 826KB)
- Procedures for Laboratory Testing of Microwave Ovens (PDF - 705KB)
- Procedures for Field Testing Microwave Ovens (PDF - 2.7MB)
- Date of Manufacture Label on Radiation-Emitting Consumer Electronics
- Information Requirements For Cookbooks, and User and Service Manuals (PDF - 233KB)
- Guidance for Industry and FDA Staff - Addition of URLs to Electronic Product Labeling
Introduction

In simplest terms, a list refers to a collection of data items of similar type arranged in sequence, that is, one after another; for example, a list of students' names or a list of addresses. One way to store such lists in memory is to use an array. However, arrays have certain problems associated with them. Since array elements are stored in adjacent memory locations, a sufficient block of memory is allocated to the array at compile time. Once the memory is allocated, it cannot be expanded; that is why an array is called a static data structure. If the number of elements to be stored increases or decreases significantly at run time, the array may require more memory space or may waste memory, both of which are unacceptable. Another problem is that insertion and deletion of an element in an array are expensive operations, since they may require a number of elements to be shifted. Because of these problems, arrays are not generally used to implement linear lists; instead, another data structure known as a linked list is used. A linked list is a linear collection of homogeneous elements called nodes. The successive nodes of a linked list need not occupy adjacent memory locations, and the linear order between nodes is maintained by means of pointers.
Singly Linked Lists

In a singly linked list (also called a linear linked list), each node consists of two fields: info and next (see Figure 5.1). The info field contains the data and the next field contains the address of the memory location where the next node is stored. The last node of the singly linked list contains NULL in its next field, which indicates the end of the list. A linked list has a list pointer variable Start that stores the address of the first node of the list. If Start contains NULL, the list is called an empty list or a null list. Figure 5.2 shows a singly linked list with four nodes.
Memory Representation

To maintain a linked list in memory, two parallel arrays of equal size are used. One array (say, INFO) is used for the info field and another array (say, NEXT) for the next field of the nodes of the list. Figure 5.3 shows the memory representation of a linked list where each node contains an integer. In this figure, the pointer variable Start contains 25, that is, the address of the first node of the list. That node stores the value 37 in array INFO, and its corresponding element in array NEXT stores 49, that is, the address of the next node in the list, and so on. Finally, the node at address 24 stores the value 69 in array INFO and NULL in array NEXT; thus, it is the last node of the list.
Memory Allocation

Since memory is allocated dynamically to the linked list, a new node can be inserted at any time in the list. For this, the memory manager maintains a special linked list known as the free-storage list, memory bank, or free pool, which consists of unused memory cells. This list keeps track of the free space available in memory, and a pointer to it is stored in a pointer variable Avail (see Figure 5.4). Note that the end of the free-storage list is also denoted by storing NULL in the last available block of memory. In this figure, Avail contains 22; hence, INFO[22] is the starting point of the free-storage list. Since the corresponding NEXT entry contains 26, INFO[26] is the next free memory location. Similarly, other free spaces can be accessed, and the NULL in NEXT indicates the end of the free-storage list.
Operations

Operations on a linked list include traversing, searching, inserting and deleting nodes, reversing, sorting, and merging linked lists. Creating a node means defining its structure, allocating memory to it, and initializing it. As discussed earlier, a node of a linked list consists of data and a pointer to the next node. To define a node containing an integer data field and a pointer to the next node in the C language, we can use a self-referential structure whose definition is shown here.
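The original slide's code listing is not reproduced in this text. A minimal C definition consistent with the description (an integer info field and a self-referential next pointer) would be:

```c
/* Node of a singly linked list: an integer info field and a pointer to the next node. */
struct node {
    int info;
    struct node *next;
};
```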
Traversing

Traversing a list means accessing its elements one by one to process all or some of them. For example, if we need to display the values of the nodes, count the number of nodes, or search for a particular item in the list, traversing is required. We can traverse the list by using a temporary pointer variable (say, temp), which points to the node currently being processed. Initially, we make temp point to the first node, process that element, then move temp to the next node using the statement temp = temp->next, process that element, and so on, as long as the last node has not been reached, that is, until temp becomes NULL.
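A sketch of this traversal in C, assuming the struct node definition shown above and a list pointer named start (the function name and the choice of printing each value are illustrative):

```c
#include <stdio.h>

/* Visit every node of the list, here simply printing its info field. */
void traverse(struct node *start) {
    struct node *temp = start;      /* temp points to the node being processed */
    while (temp != NULL) {          /* stop once the last node has been processed */
        printf("%d\n", temp->info); /* process the current element */
        temp = temp->next;          /* move to the next node */
    }
}
```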
Insertion at the Beginning

To insert a node at the beginning of the list, the next field of the new node (pointed to by nptr) is made to point to the existing first node, and the Start pointer is modified to point to the new node (see Figure 5.5).
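A possible C version of this insertion, offered as a sketch rather than the book's own code; nptr is the new node, and the list pointer is passed by address so that Start can be changed:

```c
/* Insert the node pointed to by nptr at the beginning of the list. */
void insert_at_beginning(struct node **start, struct node *nptr) {
    nptr->next = *start; /* new node points to the existing first node */
    *start = nptr;       /* Start now points to the new node */
}
```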
Insertion at the End

To insert a node at the end of a linked list, the list is traversed up to the last node and the next field of this node is modified to point to the new node. However, if the linked list is initially empty, the new node becomes the first node and Start points to it. Figure 5.6(a) shows a linked list with a pointer variable temp pointing to its first node, and Figure 5.6(b) shows temp pointing to the last node and the next field of the last node pointing to the new node.
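A corresponding C sketch, again assuming the struct node definition above; it is not the slide's original code:

```c
/* Insert the node pointed to by nptr at the end of the list. */
void insert_at_end(struct node **start, struct node *nptr) {
    nptr->next = NULL;             /* the new node will be the last node */
    if (*start == NULL) {          /* empty list: the new node becomes the first node */
        *start = nptr;
        return;
    }
    struct node *temp = *start;
    while (temp->next != NULL)     /* traverse up to the last node */
        temp = temp->next;
    temp->next = nptr;             /* last node now points to the new node */
}
```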
Deletion from the Beginning

To delete a node from the beginning of a linked list, the address of the first node is stored in a temporary pointer variable temp and Start is modified to point to the second node in the linked list. After that, the memory occupied by the node pointed to by temp is deallocated. Figure 5.8 shows the deletion of a node from the beginning of a linked list.
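One way to write this in C, as a sketch under the same assumptions as the earlier fragments:

```c
#include <stdlib.h>

/* Delete the first node of the list and free its memory. */
void delete_from_beginning(struct node **start) {
    if (*start == NULL)
        return;                 /* nothing to delete */
    struct node *temp = *start; /* save the address of the first node */
    *start = (*start)->next;    /* Start now points to the second node */
    free(temp);                 /* deallocate the old first node */
}
```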
Deletion from the End

To delete a node from the end of a linked list, the list is traversed up to the last node. Two pointer variables, save and temp, are used to traverse the list, where save points to the node previously pointed to by temp. At the end of the traversal, temp points to the last node and save points to the second-last node. Then the next field of the node pointed to by save is made to point to NULL, and the memory occupied by the node pointed to by temp is deallocated. Figure 5.9 shows the deletion of a node from the end of a linked list.
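A C sketch of the same steps, using the save and temp names from the text:

```c
#include <stdlib.h>

/* Delete the last node of the list and free its memory. */
void delete_from_end(struct node **start) {
    if (*start == NULL)
        return;                      /* empty list */
    if ((*start)->next == NULL) {    /* only one node: the list becomes empty */
        free(*start);
        *start = NULL;
        return;
    }
    struct node *save = NULL, *temp = *start;
    while (temp->next != NULL) {     /* traverse up to the last node */
        save = temp;                 /* save trails one node behind temp */
        temp = temp->next;
    }
    save->next = NULL;               /* the second-last node becomes the last node */
    free(temp);                      /* deallocate the old last node */
}
```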
Doubly Linked Lists

In a singly linked list, each node contains a pointer to the next node and has no information about its previous node. Thus, we can traverse only in one direction, that is, from beginning to end. However, it is sometimes required to traverse in the backward direction, that is, from end to beginning. This can be implemented by maintaining an additional pointer in each node of the list that points to the previous node. Such a linked list is called a doubly linked list. Each node of a doubly linked list consists of three fields: prev, info and next (see Figure 5.17). The info field contains the data, the prev field contains the address of the previous node, and the next field contains the address of the next node.
The structure of a node of a doubly linked list is shown here.
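The slide's structure listing is not included in this text; a C definition matching the description (prev, info and next fields) would be:

```c
/* Node of a doubly linked list: pointers to both the previous and the next node. */
struct dnode {
    struct dnode *prev;
    int info;
    struct dnode *next;
};
```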
Creating equations in Excel allows you to use operators and build the formulas you need for lab. Excel has a number of built-in functions that can be used in formulas, such as the average, sum, trigonometric functions, and the standard deviation. You can also name a cell and then use that name throughout the spreadsheet without having to rewrite a particular constant or formula. In addition, you can reference a cell that holds a constant; using a reference lets the constant be used in different parts of the spreadsheet without changing its value.
Why is this important:
Knowing how to write equations in excel is important when creating different formulas during lab. An important thing to remember is to use the equal sign (=) before starting any equation or when using an operator.
1. In order to create equations or use any operator, you have to start with an equal sign ("=") to indicate that what is written into the cell is a formula. This tells Excel that whatever follows the "=" is a formula rather than ordinary numbers or words.
2. If you do not remember a particular formula, you can click on the "Formulas" tab. This tab shows the different functions available in Excel, in case you forget how to write a formula during class or at home while writing your lab report.
1. Trigonometric functions (cos, sin, tan, etc.) in Excel expect angles in radians by default, so an angle measured in degrees must be converted to radians first. This is done by multiplying the degree measure by "π divided by 180" (π/180); for example, =SIN(30*PI()/180) gives the sine of 30 degrees.
2. In Excel, π is written as "pi()"
Maintenance of soil fertility is critical to sustainability of the food supply – crop rotation, fallowing and organic amendment, strict pollution management, and integration of the farming system into the local environment are all components of any well-managed agricultural system. Organic producers are not unique in practicing these, but organic farmers have strict standards under which they are required to practice, which usually means a more reliable outcome for these quality values.
Organic farmers can use a variety of management practices to conserve nutrients and enhance soil quality, including:
- Applications of composted animal manure and other organic residue to form a more uniform and chemically stable fertiliser material. The use of animal manure completes the nutrient cycle allowing for a return of energy and fertiliser nutrients to the soil.
- Use of crop rotations to help trap and recycle nutrients in the soil profile, increase soil tilth, and provide a diversity of crop residues. Crop rotation is an essential component of any organic farming system, creating diversity in space and time that disrupts the growth and development of weed, pest and disease populations.
- Use of green manures and cover cropping is a standard practice in organic farming. Which crops are chosen depends on the intended function of the crop.
- Use of compost is beneficial because it contains antibiotics and antagonists to soil pests allowing for increased plant resistance to attacks, increases crop yields, is important in weed control and builds up soil organic matter.
- Use of annual soil tests to help calculate appropriate amounts of organic fertilisers to add.
- Avoidance of surface application of manure prior to rain, or irrigation.
- Use of farming practices that enhance soil quality and reduce the potential for water runoff and wind and water erosion, e.g. controlled traffic.
- Use of vegetative buffers or filters between cropping areas and water bodies to protect against nutrient and sediment movement into rivers or lakes.
Last week we talked about the origins of fascism and our two key examples were Italy and Germany. Today, we want to bring together our understanding of fascism with Germany’s behavior between 1933 and 1945. As we have already discussed, Germany was a revisionist power from 1919 on. Rather than integrating Germany into a European-wide security system, the Versailles Treaty isolated Germany and repeatedly inflamed national passions. Had Germany been only as powerful as, say, Rumania, Versailles’ effects would not have been a problem. Germany was, however, Western Europe’s most powerful state, and as such its revisionist agenda could only cause trouble. When Germany’s power and grievances were wedded to an aggressive nationalist ideology, European and worldwide instability was the result. Things were supposed to have gone differently. The Treaty of Versailles was supposed to end all wars. It didn’t, and the main reason for this was the Nazi German state. When war broke out in 1939, no one in Europe, including many if not most Germans, wanted another war. In fact, no one could even conceive of another war. Unfortunately, Adolf Hitler understood this state of affairs better than anyone else, and as the head of a powerful and revisionist power, he was in a position to exploit the possibilities it presented.
In 1933, Adolf Hitler came to power in a country that confronted significant internal and external problems. Internally, its economy and politics were a disaster. Externally, it had a list of foreign policy grievances. Hitler would address all of Germany’s problems—one way or another. Internally, Hitler alleviated many of Germany’s economic problems, at least in the short term. To that end, he used public works programs, a massive rearmament program, and public scapegoating of Jews to convince Germans that he was their economic savior. Hitler’s economic policies were unsustainable over the long term, however, though most people failed to recognize that fact. And the personal effect on Germany’s Jews, who were often super patriots, was horrible. (One of history’s more vicious ironies is that contrary to what the Nazis may have believed, one effect of German unification was that Jews in central and eastern Europe embraced the glory of German culture completely, becoming the most conservative of German nationalists. On the one hand, many Jews were deeply sorry to see Imperial Germany disappear. There were Jews in Berlin, for example, who celebrated the Kaiser’s birthday every year, during the Weimar era. On the other hand, German Jews cultivated their own national hierarchy within German nationalism. Newly arrived Jews from eastern Europe who spoke German with a Yiddish accent and clung to Jewish culinary and sartorial traditions were referred to dismissively as Ostjuden, or Eastern Jews.) These realities aside, Hitler was not content with simply rearranging Germany in accord with his demented ideas about race; he also wanted to rework the post-World War I order. To understand how he achieved this, we must turn to the successes of his diplomacy and the failures of his counterparts’ diplomacy.
Adolf Hitler was a genius when it came to judging the strength of other people’s will. He understood that no one in Europe, least of all its major statesmen, was willing to challenge his demands. Rather than stand up to Hitler, other countries responded by trying to understand the nature of his grievances before giving him everything that he wanted. This was known as appeasement and was supposedly a more rational way of doing business. For many European policy makers war was the ultimate evil, which meant that it was better to talk and yield than enforce one’s will and risk war. (The term appeasement has an interesting history of its own. Before WWII it had a positive connotation. After WWII, of course, this word has taken on a negative connotation. Whatever else states may do, they do not appease other states, for fear that it will lead to further bad behavior.) Thus, Adolf Hitler brilliantly exploited the Western powers’ desire to avoid war by constantly threatening them with it.
In this diplomatic environment, revising the most onerous aspects of the Versailles treaty was the first item on Hitler’s agenda. His first big move came in 1933, when he pulled Germany out of the League of Nations. Germany had only been admitted in 1926 as part of the rapprochement unfolding between Germany and France. This was a fatal move for the League, since it now lacked Europe’s most powerful state. (It already lacked the world’s greatest power, the United States, since the U.S. Senate had refused to ratify American entry into the League.) The world had already seen how weak the League was, when Japan invaded Manchuria in 1931 and the League fulminated while doing nothing. This set the diplomatic tone for the 1930s. The Soviet Union gained admission in 1934, but by then Italy had stopped attending council meetings and would go even further. In 1935, Benito Mussolini sent troops into Ethiopia, essentially daring the League to do something about it. It did nothing. In 1939, the Soviet Union invaded Finland and was expelled from the League for its behavior. Stalin did not mind, and over the next few pages we will see exactly why.
An example of the League’s fundamental weakness was its relations with Germany and the Soviet Union. During the 1920s both countries were diplomatic pariahs. Germany had been saddled with responsibility for the First World War and was denied admission into the League of Nations. The Soviets, for their part, were reviled for being Communist. None of the other major powers wanted to do business with a state whose official ideology called for violent overthrow of the capitalist system. Thus, the two biggest powers on the Continent were isolated and angry, a situation that ultimately drove them together. In 1922, Germany and Russia reached a series of agreements now called retrospectively the Treaty of Rapallo. The essence of the deal was that Germany offered military training and industrial expertise to the Soviets in exchange for the right to train its armed forces and test its weapons on Russian soil. Thus, although the German army remained small, its tactics became ever more lethal, and when Adolf Hitler increased military spending in the 1930s, the German armed forces quickly became the world’s premier fighting force.
Against this backdrop we can better understand Hitler’s aggressive policies. His next step after leaving the League was to gain full control of German territory. In 1936, Hitler remilitarized the Rhineland by sending in German troops. The Treaty of Versailles had demilitarized the Rhineland, in response to France’s pronounced desire to secure its borders. This was a legacy of France’s defeat in 1870-1. When Prussia unified Germany, one practical outcome was that the French could not counter German power by themselves. They needed allies, and if they lacked them at any given moment, had to choose between backing down, or suffering military defeat. Nonetheless, Hitler’s move into the Rhineland is often considered one of the greatest lost opportunities in the history of international relations. Although the fundamental strategic situation was in Germany’s favor, the remilitarization was a giant bluff. Hitler knew that at that moment Germany was not strong enough militarily to oppose a concerted allied response. German troops were, in fact, given orders to retreat at the slightest sign of resistance. The allies, not knowing this, did nothing, and German troops entered the area amidst great fanfare.
Hitler’s political triumph had important diplomatic consequences. The Belgians were a French ally and had agreed to build an extension of the Maginot Line through their country to the sea. France’s lack of nerve and German diplomatic pressure, however, convinced the Belgians not only to cancel the alliance but also to pull out of the Maginot project. (In France’s defense, we should also note that the British had made quite clear that they were unwilling to go to war over the Rhineland issue.) The upshot was that France’s great defensive network simply stopped at the Belgian border, which made it easy for the Germans to sweep around it later and pin the French army against its own defenses. The French could have simply completed the line within their own borders, but here two problems arose. First, the French feared that cutting off the Belgians would drive them into Germany’s arms. Second, France completing the original section had already strained French finances and the potential diplomatic consequences of completing it made the cost seem extreme.
From the Rhineland, Hitler then turned to the next great diplomatic problem, Austria. Prussia had denied Austria a central role in German politics by its victory at the Battle of Königgrätz in 1866. Thus, from 1866 until 1918, Austria was a multi-national state centered on a German-speaking region that was excluded from Germany. After World War I, however, Austria-Hungary was broken up into a series of smaller states, the largest among which were Austria, Hungary, Czechoslovakia, and Yugoslavia. Austria was now a rump state without access to the ocean, and many believed that it was too small and isolated to be economically viable. This impression was false. Today's Austria has exactly the same boundaries and is quite wealthy. Austria's main problems were the Treaty of Versailles, which saddled Austria with a reparations bill that it could not pay, and the general economic crisis of the 1920s.
Adolf Hitler was, of course, an Austrian native, and adding Austria to his great German Reich was a heady dream. Austria's political and economic problems made it an easy target; like Weimar Germany, it had descended into chaos during the 1920s and 30s, which pushed Austrian politics to extremes, with rival left-wing and right-wing armies clashing in the streets of Vienna. One result of these troubles was the victory of authoritarianism. Austria had always been a conservative region, but during the 1930s politicians such as Engelbert Dollfuss arose who believed that authoritarianism was the only way to save Austria. Dollfuss was a member of the conservative Christian Social Party and became Chancellor in 1932. Borrowing heavily from Italian fascism, he founded an authoritarian umbrella organization called the Fatherland Front (Vaterländische Front) that was supposed to unite all the conservative parties against the left. Dollfuss kept the left at bay, but he was not able to control the right.
As the Nazi party became more powerful in Germany, so too did the Austrian Nazis. In 1934, they staged a coup and assassinated Dollfuss. The coup failed because Benito Mussolini forced Adolf Hitler to disavow the conspirators. Dollfuss's conservative successor, Kurt von Schuschnigg, enjoyed more success early on in controlling the right, but was unable to maintain Austrian independence against German pressure. In 1936, the Austrian government signed an agreement that unified its foreign policy with Germany's. In February 1938, Schuschnigg went to Berchtesgaden intent on getting Adolf Hitler to stop supporting Nazi plots in Austria. Adolf Hitler humiliated Schuschnigg, flying into a wild rage and demanding a series of unpalatable concessions, before sending him home. Schuschnigg tried to save his government by calling for a plebiscite on unification with Germany. Hitler responded quickly, however, ordering an invasion in March 1938. This is what the Germans call Anschluss: Austria was now part of the German Reich. In spite of the Versailles treaty's specific prohibition of such unification, the allies did nothing while Nazi Germany revised Europe's territorial arrangements.
As before, however, Hitler remained unsatisfied. Another one of the great historical problems left over from WWI was the presence of roughly 3 million Germans in Czechoslovakia. Created after the breakup of Austria-Hungary, Czechoslovakia was a multi-national state that included Czechs, Slovaks, and Hungarians, in addition to many Germans in a mountainous area known as the Sudetenland. Here we really begin to see Hitler’s genius for bluster as a negotiating tactic. He began by making vague threats against the Czechoslovak state, trumping up charges about discrimination and violence against the resident German minority. It is instructive to note that Hitler never actually asked for anything nor threatened any specific action. This would have made his position a matter of negotiation. No, instead he raged against a small neighboring state and waited for the western allies to give him everything he wanted—which, of course, they promptly did. In late September 1938, Neville Chamberlain, the British Prime Minister flew to Munich and betrayed the Czechoslovak state in the name of peace. Together with the French Premier, Édouard Daladier, Chamberlain gave Hitler everything he wanted, turning the entire Sudetenland over to Germany. Chamberlain then returned to Britain triumphantly waving the treaty he and Hitler had signed, proclaiming that it guaranteed “peace in our time.” In exchange, the Czechs got a promise that the British would defend what was left of their state. It was an empty promise.
One could argue that the Sudetenland was full of Germans and if they wanted to be in Germany they should be allowed to join. (National self-determination was, after all, a basic principle behind Wilson's Fourteen Points, though it had been unevenly applied with respect to Germans.) But whether there were sufficient Germans in the Sudetenland to justify this is beside the point, since true national determination by the Germans was a practical impossibility. Without the Bohemian Mountains under its control, the Czech state had no defensible borders. Hitler's charge that Germans were being sorely mistreated was bogus, but the Sudeten Germans did have legitimate grievances, as local Czech officials openly practiced ethnic discrimination against the German minority. Discontent over their treatment led to the rise of a German ethnic party called the Sudeten German Home Front (Sudetendeutsche Heimatfront) under the leadership of a man named Konrad Henlein. Henlein actively campaigned for German annexation of the Sudetenland and in 1935 his party received 2/3 of the Sudeten German vote, making it the second largest party in the Czech chamber. Under domestic and foreign pressure, the Czech government yielded to almost all German and Sudeten demands, granting the Sudetenland almost complete autonomy. Unfortunately, there was no reaching an accommodation with Adolf Hitler, especially after the Munich agreement. Annexing the Sudetenland was not Hitler's real goal; he wanted all of Czechoslovakia. On March 14, 1939, Nazi troops invaded the rest of Czechoslovakia while the West again did nothing. Britain did not want war, and France, fearful of confronting Germany all alone, let her Czech ally be dismantled.
Having reached this point, we need to consider how Hitler's triumphs emboldened not only Hitler but also all Germans. Hitler saw each of these victories as vindication of his foresight and diplomatic skills; that is, his head kept getting bigger. Domestically, too, Hitler was looking more and more like a genius. He had remilitarized German soil, brought distant Germans back into the Reich, and increased employment, and all of this was done without firing a shot. One historian has even suggested that had Hitler never gone to war, he would be considered an even greater statesman than Bismarck.
As you already know, this was hardly the end for Hitler, since the problem of Poland still existed. As part of the Versailles treaty a Polish state with access to the sea was created. The problem was, however, that in order to give this state access to the sea, the new Poland had to go through majority-German territories, specifically the city of Danzig. Thus, East Prussia was split away from Germany and Danzig was declared a free international city. The Danzig issue was a real thorn in the German nationalist eye. Not only was German territory being taken away, but it was also given to the Poles, a people whom many Germans had never liked. The feeling was, of course, mutual. The Poles, proud of their new independence, refused to return the so-called Polish Corridor, even though they did not really need it. Thus, national pride kept both sides from cutting a reasonable deal.
This was not, however, Poland's real problem. The bigger issue was that both Germany and the Soviet Union had designs on Poland. On August 23, 1939, Nazi Germany and the Soviet Union reached a wide-ranging accord on matters such as the future of Poland and economic cooperation. The accord had two parts. The first was a non-aggression pact that was to last for ten years and included a trade agreement that was very favorable to Germany. The second carved up Eastern Europe. Germany got 2/3 of Poland, while Russia took the other third, as well as the Baltic States and Finland. This accord shocked the world. Mortal enemies had signed it. That Nazism and Communism, two totalizing and hostile worldviews with a deep antipathy to each other, could make a deal threw everybody's worldview out of whack. Moreover, these two states had agreed to make the Polish state disappear from the map once again, and there was nothing that anyone could do about it. On September 1, 1939, Germany invaded Poland and occupied roughly half the country. Immediately thereafter, the Soviet Union invaded from the East, not only taking over the rest of Poland but also snuffing out the Baltic States' experiment with democratic freedom. When Britain and France responded with a declaration of war, World War II was officially underway.
The war’s early months of the war are best characterized by two German words, Blitzkrieg and Sitzkrieg. Blitzkrieg, or lightning war, was a method of attack that Germany had perfected during the Spanish Civil war, and which relied on heavy aerial bombardment and concentrated use of armor. Germany’s Blitzkrieg in Poland was savage and quick, as Nazi dive-bombers and artillery hammered Polish cities into submission, and German armored columns smashed brave Polish resistance. The battle for Poland war lasted ten days.
Confronted with yet another act of naked aggression, Britain and France were finally forced to fight a war their policies had encouraged. Only, once again, neither country could demonstrate sufficient will to fight. Instead, British and French troops hunkered down behind the Maginot Line, expecting that the German army would be smashed on the complicated network of defenses. Thus began what the Germans called Sitzkrieg, or sitting war, as the British and the French did nothing, while the Germans on the other side of the Rhine waited until the Wehrmacht and Luftwaffe were finished with Poland in the East.
Although he did not send troops to attack the Western Allies right away, Hitler kept busy in other areas. In April 1940, he launched attacks on Denmark and Norway. Denmark could offer no resistance and surrendered immediately. In Norway, the Germans launched a large amphibious invasion, but suffered heavy initial losses, due to determined Norwegian resistance. Nonetheless, German airpower was so overwhelming that the Norwegian resistance collapsed within a few days. On May 9 and 10, Hitler turned west, invading Belgium, the Netherlands, and Luxembourg. This invasion allowed German troops to swing around the Maginot line and cut off a host of British and French troops, who then fled to Dunkirk, where they were evacuated on anything that would float. The German army’s failure to pursue the retreating troops to the beach was an enormous blunder, as basically the entire French and British armies were evacuated to fight another day. The equipment that the allies left behind could be replaced, but dead soldiers and POW’s could not. The decision to halt the advance would haunt the German war effort, though on June 14 German troops still entered Paris. On June 22, France surrendered. The north of France became occupied territory and the south became a puppet state, led by Marshal Pétain in the city of Vichy.
The problem for Hitler now was, however, that the Brits refused to give up. Germany had no way to invade the British Isles, so it hoped that air attacks would force the Brits to their knees. Unfortunately for the Nazis, Winston Churchill had become Prime Minister on May 10, 1940, and under his leadership that simply was not going to happen. This set the stage for the great air war that became known as the Battle of Britain. The battle's early stages went rather well for the Germans. German bombing raids concentrated on airfields, factories and radar installations, which almost did bring Britain to its knees. But this was not working fast enough for the German leadership, so the Germans changed tactics, turning on British cities, the idea being that a terror campaign would break the British will. This shift in tactics allowed the British to survive, since they could now produce enough aircraft to meet their losses, find German planes with their radar, and send their own planes up on airfields that were still working. The Germans, by contrast, were flying over hostile territory. By the summer of 1941, the Brits had clearly won this battle; like Napoleon before him, Adolf Hitler found that invading Britain was an impossible task. A strategic stalemate ensued that would only be altered by the entry of two greater powers into the war, the United States and the Soviet Union. We will trace these events and what they meant over the next two lectures.
In the modern world, electricity is an essential part of day-to-day life. In fact, it is probably impossible to count all the ways we use electricity. From the moment we wake up we use electricity to toast our bread, listen to the radio or refrigerate our orange juice. Electricity powers the lights in the classrooms and offices where we work. The clothes we wear, even the cars we drive, are made by machines that use electricity.
To see where electricity comes from, all we need to do is look inside an aluminum wire. The problem is that what we are looking for is too small to see. But if you could look past the protective covering, past the aluminum wire's shiny surface, you would see that the wire is made up of tiny particles. These are atoms, the basic building blocks from which everything in the universe is made.
Atoms are so tiny that in a little dot (.) there are more than you could ever count.
In 1831, English scientist, Michael Faraday, produced electricity by moving a magnet inside a coil of wire, discovering the principle of magneto-electricity, which is how Manitoba Hydro generates electricity today.
If you could look closely at an atom you would see that the atom itself is made up of even smaller particles. Some of these particles are called electrons. Usually, electrons spin around the centre, or nucleus, of the atom. However, sometimes electrons are knocked out of the outer orbit of an atom. These electrons become "free" electrons.
All materials normally have free electrons that are capable of moving from atom to atom. Some materials, such as metal, contain a great number of free electrons and are called conductors. Conductors are capable of carrying electric current. Other materials, such as wood or rubber, have few free electrons and are called insulators.
If free electrons in a conductor can be made to jump in the same direction at the same time then a stream, or current, of electrons is produced. This is an electric current. In an electrified wire, the free electrons are jumping between atoms, creating an electric current from one end to the other. But, how can the electrons jump in the same direction at the same time? By using magnets.
Surrounding the end of every magnet are invisible lines of force called magnetic fields. If you move a straight wire through a magnetic field, the force will push the free electrons from one atom to another, creating electric current. If you move several coils of wire quickly and continuously through the field of a powerful magnet, a great quantity of electric current can be produced.
Manitoba Hydro uses machines called generators to produce electricity. In a generator, a huge electromagnet, or rotor, is rotated inside a cylinder, called a stator, containing coils and coils of electric wires. Some rotors are 12 metres across and weigh as much as eight railway cars, nearly 380 tonnes. A great deal of energy is needed to rotate something that size. Manitoba Hydro uses the province's abundant supply of water.
Virtually all of the electricity in Manitoba is generated using the energy of flowing water.
A watt is the unit used to measure electric power. It takes 100 watts to light a 100-watt light bulb.
Electricity generated using waterpower is called hydroelectricity. A hydroelectric generating station uses the natural force of a river as energy. The same water flow or current that pushes a floating canoe down a river can also turn a generator's rotor.
Typically, there are two components to a generating station. A powerhouse which houses the generators and a spillway that allows any water not being used to bypass the powerhouse.
At the heart of a hydroelectric generating station is the turbine runner. Looking like a giant propeller, some turbine runners are nearly eight metres across. Attached to the rotor by a five-metre shaft, the turbine runner converts the physical energy of the water into the mechanical energy that drives the generator.
Water flows into a station's powerhouse through the intake and enters into the scroll case. The scroll case is a spiral area surrounding the turbine. The spiral shape gives the incoming water the spiral movement which pushes the blades of the turbine. As the turbine is turned, the attached rotor also spins, generating electricity. The potential energy of the river is converted into the mechanical energy of a generator which produces electric energy. Just one of the ten generators at the Limestone Generating Station can produce 133 million watts or 133 megawatts of electricity. That's enough to supply power to over 12,000 homes.
When the natural flow of a river is adequate, a run-of-river plant is built. The run-of-river design reduces the need for a large reservoir of water, or forebay, behind the station. Instead, the water flowing into a generating station upstream is used immediately, not stored for later use. The Limestone Generating Station located on the Nelson River is an example of a run-of-river design.
When the natural flow of water is inconsistent or inadequate, a more extensive network of dams is constructed to create a large forebay to provide for times when the river's water level is low. The dam also creates a head of water, or waterfall, to ensure the water has enough force to spin the turbines. The Grand Rapids Generating Station on the Saskatchewan River is an example of a station that uses a water reservoir.
Cross-section view of the Long Spruce Generating Station intake and powerhouse.
When you plug a toaster or a stereo into a wall socket there is electricity waiting to toast your bread or play music. But have you ever wondered how that electricity gets from the generator in a hydroelectric station to the socket in your wall? For the answer, we need to take another look at those electrons in our aluminum wire.
Remember, magnets passing over a wire or coil of wire will push electrons causing them to jump between atoms. As the electrons jump they are transferring a charge to the next atom. As the next atom receives the charge its electron will jump. The magnets trigger a chain reaction which moves down the wire. The electric energy can travel down the wire because aluminum is a conductor. It is conducting the electricity. Manitoba Hydro has an extensive system of wires of varying sizes that conduct electricity throughout the province. But, that is only part of the answer.
In Manitoba, nearly 80 per cent of our electricity is produced by hydroelectric generating stations on the Nelson River in northern Manitoba. So, Manitoba Hydro must transmit the electricity it generates about 900 km to southern Manitoba where most people live and work.
But, electricity does not travel long distances easily. In fact, for many years the problems associated with transmitting electricity long distances prevented Manitoba Hydro from building stations on the Nelson River.
Then Manitoba Hydro turned to high voltage direct current (HVDC) technology to solve the problem of transmitting electricity from the north. Direct current (DC) is electric current that flows in one direction only. It is the type of power produced by batteries used in cameras, flashlights and cars. The electricity in your home is alternating current (AC), electric current which reverses direction approximately 60 times a second. The advantage of DC is that the power loss over long distances is considerably less than with AC. Also construction of a DC transmission line costs about one-third less than an AC transmission line.
A higher voltage is used with DC transmission to increase energy transmission and reduce losses. To explain why, we can make a comparison between the electricity flowing through a wire and water flowing through a pipe. Just as great quantities of water can be moved through a large diameter pipe, a great quantity of electricity can be moved through a large diameter wire. Great quantities of water can also be moved through a small diameter pipe, such as a garden hose, by increasing the pressure. Similarly, electricity can be moved in greater quantities through a small diameter wire by increasing the voltage.
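To put rough numbers on this analogy: for the same power, a higher voltage means a smaller current, and resistive loss falls with the square of the current. The relations power = voltage x current and loss = current squared x resistance, along with every figure in the sketch below, are assumptions added for illustration, not Manitoba Hydro data.

```c
#include <stdio.h>

int main(void) {
    double power = 1.0e9;     /* 1,000 MW to be transmitted (illustrative figure) */
    double resistance = 10.0; /* assumed line resistance in ohms */

    double voltages[] = {230e3, 500e3};               /* a lower and a higher line voltage */
    for (int i = 0; i < 2; i++) {
        double current = power / voltages[i];         /* same power, less current at higher voltage */
        double loss = current * current * resistance; /* resistive loss grows with the square of current */
        printf("%.0f kV: current %.0f A, loss %.1f MW\n",
               voltages[i] / 1e3, current, loss / 1e6);
    }
    return 0;
}
```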
Manitoba Hydro has built two HVDC transmission lines, known as Bipole I and Bipole II, to bring electricity from the north.
A volt is the unit of electrical force or potential that causes a current to flow in a circuit. One kilovolt (kV) is equal to 1,000 volts.
Today, Manitoba Hydro is known throughout the world as a leader in HVDC technology.
A transformer is an electromagnetic device for changing the voltage of alternating current.
When you turn on a light switch, electricity flows through the wire and lights the bulb — the circuit is closed. When you turn the light switch off, the flow of electricity stops — the circuit is open.
Let's say the electricity in your house has the same force as a baseball pitched towards you at 100 km per hour. The force of the electricity on the HVDC line would be over 4,000 times more powerful. Imagine trying to stop a car travelling 100 km per hour with a baseball glove.
As generators spin they produce AC electricity that has about 25 kV of force. So, the electricity generated by the hydroelectric stations on the Nelson River must be converted to DC and transmitted at an even higher voltage to reduce the power losses experienced over long distances. This conversion is accomplished at the Henday and Radisson converter stations located near Gillam, Manitoba.
Once the electricity has been converted it travels south to the Dorsey Converter Station. At Dorsey, the electricity is converted back to AC because the refrigerators and other appliances people use in their homes are designed to run on AC electricity. From Dorsey, eleven 230 kV lines supply southern Manitoba and interconnections to Saskatchewan, Ontario and the U.S.
The high voltage lines transport the electricity to substations which are located throughout the province. These substations contain a variety of equipment used to transform voltages to lower levels, switch current in a line on or off, and analyze and measure electricity.
The transformation of electricity from high voltage to low voltage is accomplished using the same principle as generation. The magnetic field of a coil of wire carrying an alternating, or fluctuating current, is capable of causing a fluctuating current in a second coil. In a transformer, two separate coils of wire are wrapped around a magnetic iron core. The electricity in the first coil of wire creates a fluctuation in the magnetic field of the iron core. That fluctuation then passes through the iron core, electrifying the second coil of wire. If the second coil of wire has half as many turns the electricity will have half the voltage. If the second coil has twice the number of turns, then the voltage will be doubled.
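The turns rule described in this paragraph amounts to secondary voltage = primary voltage x (secondary turns / primary turns) for an ideal transformer. A tiny C sketch with made-up numbers:

```c
#include <stdio.h>

/* Ideal transformer: output voltage scales with the ratio of turns. */
double secondary_voltage(double primary_v, double primary_turns, double secondary_turns) {
    return primary_v * (secondary_turns / primary_turns);
}

int main(void) {
    printf("%.0f V\n", secondary_voltage(240.0, 100.0, 50.0));  /* half the turns: half the voltage, 120 V */
    printf("%.0f V\n", secondary_voltage(240.0, 100.0, 200.0)); /* twice the turns: double the voltage, 480 V */
    return 0;
}
```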
From the substations, the electricity runs through overhead lines, or underground cables to transformers located near a customer's home or business. Located near the tops of hydro poles or at ground level where there is underground service, these transformers complete the voltage reduction.
From the pole, electricity travels through wire into your home, going first to the meter and the main switch. The wires then lead to a distribution panel. From there, circuits hidden inside the walls lead to the power outlets and light fixtures.
Next time you are boiling water, put a lid on the top of the pot. As the water boils, it expands and turns into steam. The pressure of the expanding steam will eventually shake or raise the lid. A thermal generating station uses this same energy to turn turbines that drive electric generators. The fuel used to heat the water can be coal, oil, natural gas or a nuclear energy source.
Manitoba Hydro maintains two small thermal generating stations, in Brandon and Selkirk. The thermal stations are used to help meet power demand during times of low water flows or to provide extra electricity during periods of high demand, particularly in winter.
Unlike hydroelectric generating stations, thermal generating stations can be built almost anywhere. However, the major disadvantage of thermal stations is that fossil fuel, like the natural gas used by Manitoba Hydro, is not self-renewing like water power.
When you're outside on a windy day, you can feel the wind push against your body. That push can also spin blades on a wind turbine which produces electricity.
The first windmill to produce electricity went into service in Denmark in 1890.
A kilowatt-hour is the amount of electricity used to light a 100-watt bulb for 10 hours.
Though not practical in all locations, wind generators are a good idea in those areas where they can be used because wind, like water, is a renewable resource. However, wind generators have two main drawbacks. First, they are expensive. Second, not all locations have consistent strong winds.
Biomass is a general term used to describe organic or living matter such as wood. Biomass generation means burning organic matter rather than fossil fuels to create electricity. Potential biomass fuels include residue from the forestry and agricultural industries. Everything from rice hulls to coffee grounds could be burned to create steam.
Plants are able to create food from the light of the sun. This process is called photosynthesis. The word "photo" means light and the word "synthesis" means to change. We can also use the sun's light to make electricity. Panels made from silicon are able to convert sunlight to electricity through the photovoltaic process. Voltaic is another word for electricity.
Photovoltaic (PV) panels can be used to power everything from calculators to appliances in your home. One of the advantages of solar energy is that it doesn't need fuel. Unfortunately, PV panels are very expensive. So, the widespread use of solar energy is not yet practical when compared to hydroelectric generation.
If you blow up a toy balloon and then let go of the neck, the balloon will shoot away. The force that pushes the balloon, expanding air, is the same force that drives a gas combustion turbine.
A combustion turbine looks and operates something like a jet engine. In a combustion turbine, fuel such as natural gas is mixed with compressed air and combusted. The gases produced during combustion are hot and under pressure. In most combustion turbines, the combustion gases can reach up to 1,300 degrees Celsius. The super hot, high pressure gases are pushed into the turbine section where they are allowed to expand and apply pressure across the blades of a rotating turbine that drives an electrical generator.
Over the last decade, combustion turbines have gained importance as an electric generation option. In fact, Manitoba Hydro operates two natural gas combustion turbines as part of its Brandon Generating Station.
The Thames in Stereographs
The stereoscopic viewer and the stereographic print derive from the "reflecting stereoscope" invented by the British physicist Charles Wheatstone in 1832. Wheatstone first made twin drawings of an object, each mimicking the perspective of the left or right eye, respectively. With the use of mirrors, Wheatstone's device then combined the pictures into a single, three-dimensional image. Soon after the arrival of photography in 1839, Wheatstone's drawings were replaced by photographs. Though initially the device used one-of-a-kind daguerreotypes, with the introduction of glass-plate negatives in 1851 stereographs could be mass-produced.
Generally, a stereograph was a four-by-seven-inch rectangular card with two photographs, usually albumen prints, mounted next to each other. As with Wheatstone's device, the pictures are made by a dual-lens camera with the centers of the two lenses placed at the same distance from each other as the centers of two human eyes. In the examples exhibited here, note how the photographs are not identical but show a slight lateral shift.
Between 1860 and 1890, as many as twelve thousand stereo-photographers took between 3.5 and 4.5 million individual images, which were printed on approximately 400 million stereographs. Often the name of the photographer or publisher, along with a short caption, was printed on the front of the card, with a longer text on the reverse. Stereographs were sold at tourist spots, from storefronts, through mail-order catalogues, and door to door. By the close of the nineteenth century, stereoscopic viewing had come within reach of a broad middle-class audience, fulfilling the London Stereoscopic Company's motto, "A Stereoscope in Every Home."
1) The equilibrium constant Kp for the reaction
2SO2(g) + O2(g) ⇌ 2SO3(g)
is 5.60 x 10^4 at 350 degrees C. The initial pressure of SO2 is 0.350 atm and the initial pressure of O2 is 0.762 atm at 350 degrees C. When the mixture equilibrates, is the total pressure less than or greater than the sum of the initial pressures (1.112 atm)?
3) The equilibrium constant Kc for the reaction
H2(g) + Br2(g) ⇌ 2HBr(g)
is 2.18 x 10^6 at 730 degrees C. Starting with 3.20 moles of HBr in a 12.0-L reaction vessel, calculate the concentrations of H2, Br2, and HBr at equilibrium.
4) Assuming equal concentrations of conjugate base and acid, which one of the following mixtures is suitable for making a buffer solution with an optimum pH of 4.6 - 4.8?
a. CH3COONa / CH3COOH (Ka = 1.8 x 10^-5)
b. NH3 / NH4Cl (Ka(NH4+) = 5.6 x 10^-10)
c. NaOCl / HOCl (Ka = 3.2 x 10^-8)
d. NaNO2 / HNO2 (Ka = 4.5 x 10^-4)
e. NaCl / HCl
5) You have 500.0 mL of a buffer solution containing 0.20 M acetic acid (CH3COOH) and 0.30 M sodium acetate (CH3COONa). What will the pH of this solution be after the addition of 20.0 mL of 1.00 M NaOH solution?
Ka = 1.8 x 10^-5
6) 50.00 mL of 0.10 M HNO2 (nitrous acid) was titrated with 0.10 M KOH solution. After 25.00 mL of KOH solution was added, what was the pH in the titration flask? (Given Ka = 4.5 x 10^-4)
7) The solubility product for CrF3 is Ksp = 6.6 x 10^-11. What is the molar solubility of CrF3?
8) The Ksp for Ag3PO4 is 1.8 x 10^-18. Determine the Ag+ ion concentration in a saturated solution of Ag3PO4.
9) Will a precipitate of MgF2 form when 300 mL of 1.1 x 10^-3 M MgCl2 solution are added to 500 mL of 1.2 x 10^-3 M NaF? Ksp (MgF2) = 6.9 x 10^-9
This solution explains:
1) How to determine concentration of reactants and products in an equilibrium reaction.
2) How to prepare a buffer solution with a specific pH.
3) How to calculate a molar solubility.
4) How to calculate ion concentration in a saturated solution.
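As an illustration of the buffer arithmetic behind problem 5, here is a small C sketch using the Henderson-Hasselbalch relation pH = pKa + log([base]/[acid]). The numbers follow the problem statement, but the relation and the strong-base neutralization step are standard chemistry added here, not part of the posted solution.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Problem 5: 500.0 mL of 0.20 M acetic acid and 0.30 M sodium acetate, Ka = 1.8e-5. */
    double ka = 1.8e-5;
    double mol_acid = 0.5000 * 0.20;   /* 0.100 mol CH3COOH */
    double mol_base = 0.5000 * 0.30;   /* 0.150 mol CH3COO- */

    /* Adding 20.0 mL of 1.00 M NaOH converts that many moles of acid into acetate. */
    double mol_naoh = 0.0200 * 1.00;   /* 0.020 mol OH- */
    mol_acid -= mol_naoh;              /* 0.080 mol acid remains */
    mol_base += mol_naoh;              /* 0.170 mol acetate */

    /* Henderson-Hasselbalch; the total volume cancels in the base/acid ratio. */
    double ph = -log10(ka) + log10(mol_base / mol_acid);
    printf("pH after NaOH addition: %.2f\n", ph); /* about 5.07 */
    return 0;
}
```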
Scientists are reporting development of a new method to recycle rare earth elements from wastewater. Many of today’s technologies, from hybrid car batteries to flat-screen televisions, rely on rare earth elements (REEs) that are in short supply.
The process is described in a study in the journal ACS Applied Materials & Interfaces. The new process could help alleviate economic and environmental pressures facing the REE industry and favorably affect consumer goods prices over time.
Zhang Lin and colleagues point out that REEs, such as terbium – a silvery metal so soft it can be cut with a knife – behave in unique ways as super magnets, catalysts or superconductors. That makes them irreplaceable in many of today’s tech gadgets and machines.
Market watchers expect global demand to rise to at least 185,000 tons by 2015. Although some of these elements are actually plentiful, others are indeed in short supply. According to reports, terbium and dysprosium supplies may only last another 30 years. Attempts so far to recycle them from industrial wastewater are expensive or otherwise impractical. A major challenge is that the elements are typically very diluted in these waters.
Lin’s team knew that a nanomaterial known as nano-magnesium hydroxide, or nano-Mg(OH)2, was effective at removing some metals and dyes from wastewater. So they set out to understand how the compound worked and whether it would efficiently remove diluted REEs, as well.
Related article: Helium: Abundant but Unavailable
To test their idea, they produced inexpensive nano-Mg(OH)2 particles, whose shapes resemble flowers when viewed with a high-power microscope. They showed that the material captured more than 85 percent of the REEs that were diluted in wastewater in an initial experiment mimicking real-world conditions. In addition, a method was developed to further separate the immobilized REEs and the residual magnesium hydroxide by varying the solution pH.
Nano-Magnesium Hydroxide Ion Exchange Action.
“Recycling REEs from wastewater not only saves rare earth resources and protects the environment, but also brings considerable economic benefits,” the researchers state. “The pilot-scale experiment indicated that the self-supported flower-like nano-Mg(OH)2 had great potential to recycle REEs from industrial wastewater.”
The REE issue is of more importance to the world economy than most people realize. The elements are used in a wide array of devices, and as more and more devices acquire electronic components, REEs are being used at ever-increasing rates.
Related article: Mine Tailings Hold Little Hope for US Rare Earth Industry
Everyone with a bit of sense should realize that sending items containing recyclable materials straight to landfills is a way to increase future costs, not to mention a way of dirtying and trashing our environment. Used lithium-ion batteries, for example, are not something anyone wants lying around disintegrating, for important and very toxic reasons.
Recycling, while critical, isn’t going to increase the inventory of devices for sale or in use. For that, more REEs need to be produced. The U.S. has a huge known inventory, but ‘not in my back yard’ pressures, government permitting and other barriers have essentially halted production.
If you enjoy the latest technology, realize that much of daily life depends on existing technology that will someday need to be replaced or upgraded, or simply have an interest in these devices and the power that makes them go, then REEs are vitally important to keep an eye on. It’s not just a matter of strategic minerals; it’s a matter of continuing life as we know it.
By. Brian Westenhaus |
The realization that global biodiversity is seriously threatened by human activities emerged as a primary international concern in the 1970s, although the history of human efforts to protect rare species is much older.
National Parks and Biodiversity Conservation
In the United States, efforts were made to prevent the extinction of the American bison at the end of the 1800s. Yellowstone National Park, the first national park in the world, was established in 1872, and it provided habitat to the only wild bison herd during that era. In 1900, the U.S. federal government passed the Lacey Act, which forbade interstate commerce in illegally harvested animals or their body parts, and likely helped prevent the extermination of snowy egrets and other birds that were being harvested for their feathers.
The national park system in the United States grew rapidly during the late 1800s and early 1900s. Its model for protecting nature was to draw a boundary around a particular area and restrict human uses within it. Most early parks were focused on places with geological, not biological, wonders, so they weren’t especially good at protecting biodiversity, but they established an important model for nature protection.
With adequate enforcement, the national park model can be very effective for conserving biodiversity, but it also raises questions of social justice. Even during the 1800s when the first parks were established, local residents complained about lost access to resources because of the restrictions that parks imposed. Among those most disenfranchised were Native American groups, such as the Blackfeet of Montana who lived within today’s Glacier National Park; they were told they were no longer allowed to do traditional hunting, fishing, and gathering within the park boundaries. Despite the social injustices that were a part of the U.S. national park movement, this model of nature conservation was adopted by many European nations that established national parks in their African colonies. Below, you will learn about efforts to balance biodiversity protection in key areas with the needs of humans who live nearby, and those efforts stem from social justice concerns about the original “fortress” model for nature conservation exemplified by national parks.
These early efforts were quite minimal compared to the global boom in protected areas since the 1960s. Today, there are over 100,000 individual protected areas that cover about 12% of the Earth’s total land surface. Over half of this area was protected just in the last decade.
Within the field of geography, particularly in a subfield called political ecology, there has been a lot of research in protected areas on the issue of balancing biodiversity protection with human needs. Political ecologists have looked at the political and economic interests of humans in protected areas, and how those interests relate to biodiversity and other ecological processes. The establishment of a new protected area invokes social justice concerns about the way that the fortress model of conservation displaces local people from their land and resources. At the same time, some parks have been operational for over 100 years and have their own unique set of political and ecological issues. An example can be seen in Yosemite National Park, where park visitation levels have become so high that efforts are underway by park managers to establish a park “capacity” for visitation to certain parts of the park with the goal of limiting human impacts on ecological processes (recall the “carrying capacity” concept from Module 2). As visitation to protected areas increases, the interface between environmental protection and levels of visitation becomes increasingly complex, and innovative management strategies are required to meet the given objective of a protected area.
IUCN Protected Area Categories
It’s important to remember, however, that protected areas receive very different levels of protection, and may have many more purposes than simply protecting biodiversity. The International Union for the Conservation of Nature (IUCN) has identified six different levels of protected areas:
Category 1: Strict Nature Reserves. These areas restrict motorized vehicles and extractive uses. They may be open to indigenous people for traditional gathering and hunting, but, in most cases, the only human activities are scientific research and monitoring and low-impact recreation. The federal wilderness system in the United States, established in 1964, is an example of this kind of reserve.
Category 2: National Parks. These are areas intended to balance ecosystem protection with human recreation, which is often a very difficult mandate for the managing agency to achieve. Extractive uses in these areas are prohibited. Many national parks, such as Tubbataha National Park in the Philippines, are sources of ecotourism income as well as breeding grounds for commercially important species. One problem with national parks in many developing countries is that there is little or no enforcement of regulations. One study showed that only 1% of parks in Africa and Latin America have adequate enforcement. We might think of these as “paper parks” that exist on a map but, in reality, are not protected.
Category 3: Natural Monuments. Protecting interesting natural or cultural features is the goal in these areas, but they are smaller than the areas in the two previous categories.
Category 4: Habitat/Species Management Areas. These are areas that are relatively heavily utilized by humans for agriculture or forestry but have been designated as important habitats for a particular species or natural community. Management plans and continual monitoring are important components to ensure that conservation goals are achieved.
Category 5: Protected Landscape/Seascape. These areas are intended to protect historically important interactions between people and nature. Examples include traditional farming areas, homelands of indigenous peoples, and significant religious landscapes. Endemic and rare species in these regions are often best protected by maintaining the traditional human land uses that have existed alongside them for many generations.
Category 6: Managed Resource Protected Area. Similar to Category 5, these areas are managed for long-term sustainable use by humans. In the Ngorongoro Crater Conservation Area in northern Tanzania, Masai pastoralists graze cattle on most of the land while living alongside Africa’s largest concentrations of megafauna.
One system of protected areas that has become particularly important for conserving biodiversity is “Biosphere Reserves.” In 1971, the United Nations Educational, Scientific and Cultural Organization (UNESCO) started the Man and the Biosphere Programme. Its major focus has been building a network of biosphere reserves. There are over 400 reserves in almost 100 countries today. Each reserve has to be large enough to contain three different “zones”: (1) a core area where the national government restricts essentially all human activities except scientific monitoring and research, (2) a buffer zone where tourist recreation and local resident usage for agriculture, sustainable logging, grazing, hunting, and fishing are allowed, as long as they don’t threaten the core area, and (3) a transition zone where more intensive uses of land are permitted. This model seeks to balance the needs of humans and the biosphere, as its name implies.
If you were designing a set of protected areas with the goal of preserving biodiversity, here are a few concepts that you would want to keep in mind:
Comprehensiveness: Include samples of different types of habitats and ecological processes.
Representativeness: It’s unlikely that you will be able to preserve much of each habitat type, so protect an area that is representative of the ecological processes contained within it.
Risk Spreading: Natural disasters, wars, or other disturbances can harm even the most well-protected areas, so it may be wise not to have all of your reserves connected and close to one another.
Connectivity: On the other hand, maintaining connections between protected areas is very important for several reasons, including the dispersal of genetic material, the ability for migrating and wide-ranging species to persist, and the possibility for species to adapt to climate changes or adjust their ranges after disturbance events.
Examples of Biodiversity Conservation Practices
Of course, creating a theoretical set of protected areas is much easier than doing it in the real world, but here are several examples of how these ideas are being implemented or advocated for in different parts of the world.
Costa Rica is perhaps the best example of a biodiversity-rich country making a commitment to protecting its natural endowments. While it is a small country, about the size of West Virginia, it is home to about 500,000 plant and animal species. Though Costa Rica experienced very serious deforestation driven by cattle ranching during the 1960s and 1970s, it has worked for the last 30 years to protect about 25% of its land in national parks and other forms of reserves. The protected areas are designed to ensure the survival of at least 80% of Costa Rica’s remaining biodiversity. Efforts have been made to facilitate connectivity between reserves and to ensure that they are as representative as possible. Beyond the reserves, the Costa Rican government has also halted subsidies that encourage forest clearing and has encouraged investment in ecotourism. Today, tourism is the largest industry in Costa Rica, and is very substantially focused on activities within and surrounding these reserves. Tourism has become so popular that the Costa Rican government and conservation biologists are now concerned about the impacts that so many visitors are having on the country’s biodiversity. Nevertheless, Costa Rica remains an example of the benefits that protected areas can have for biodiversity and local economies.
But connectivity between reserves is often necessary on a larger than national scale, and that was the goal of advocates for the “Paseo Pantera” (Panther Path) in Central America. Now known as the “MesoAmerican Biological Corridor,” this system of protected areas and corridors stretches from Mexico to Panama.
The Rewilding Institute advocates for the creation of even larger-scale connectivity between important ecosystems in North and Central America, focusing on the necessity for large carnivores like wolves, mountain lions, and grizzly bears to travel the long distances they require.
The primary goal of all of these corridor-based projects is to ensure landscape permeability, which means that even if a particular place is not designated as a protected area, wildlife is able to use the habitat and to travel freely through it. Elements that ensure landscape permeability include laws that regulate or restrict wildlife hunting or trapping, designing roads and railroads so that animals can cross safely, and establishing relationships between government wildlife agencies and local communities so that everyone feels that they benefit from protecting the biological integrity of the region. |
"It takes a long time to build and melt an ice sheet, but glaciers can react quickly to temperature changes," notes Eric Rignot, a glaciologist at NASA's Jet Propulsion Laboratory. "Greenland is probably going to contribute more and faster to sea level rise than predicted by current models."
Rignot partnered with Pannir Kanagaratnam of the University of Kansas to look at satellite data on Greenland's glaciers. New satellites and new techniques allowed the two to figure out how fast the glaciers were moving, thinning and even what the bedrock beneath them looked like. Based on this data, the researchers found that the glaciers were traveling faster than anyone had predicted. They also determined that even more northerly glaciers were on the move and that in just 10 years the amount of fresh water lost by all the glaciers had more than doubled from 90 cubic kilometers of ice loss a year to 224 cubic kilometers. "The amount of water Los Angeles uses over one year is about one cubic kilometer," Rignot points out. "Two hundred cubic kilometers is a lot of fresh water."
Current climate models do not take into account glacial flow and therefore underestimate the impact of glacial melt and iceberg calving, the researchers argue in a paper detailing the findings in today's Science. According to climate records stretching back a century, southern Greenland has warmed three degrees Celsius in just the past 20 years, driving melting that may help lubricate glacial flow along the bedrock, the two speculate. With the higher glacier speeds in mind, they calculate that Greenland currently contributes 0.57 millimeter of ocean level rise every year out of a total of three millimeters.
But Greenland contains an ice sheet that covers 1.7 million square kilometers--an area nearly the size of Mexico--and is as much as three kilometers thick in places. If it all melted, it would raise the world's oceans by seven meters, though that is not likely to happen anytime soon. "The southern half of Greenland is reacting to what we think is climate warming," Rignot adds. "The northern half is waiting, but I don't think it's going to take long." |
The American Eel
People have fished and farmed eels for thousands of years, but until recent years, little was known about the eel's complex life history. Eels have played a major role in the human diet in Europe and Asia. Glass eels, a young life phase of the American eel, cyclically fetch a high price on the Asian market and are also harvested in the United States.
The American eel is the only freshwater eel found in North America. They begin their lives as eggs hatching in the North Atlantic in the Sargasso Sea. Hundreds of millions of eggs hatch into larvae that drift with the Gulf Stream and take years to reach their freshwater, estuarine and marine habitats from Greenland south to Venezuela. In these habitats, the eels mature, changing color over time, and then, as adults, millions of them return to the Sargasso Sea to spawn and die.
American eels remain widely distributed throughout much of their historical range, despite reduced numbers over the past century and habitat loss from dams and other obstructions. In some coastal rivers, eels are the most commonly found fish, occupying more aquatic habitats than any other species. Harvest quotas and mechanisms restoring fish passage have reduced stressors on the species.
Considering Endangered Species Act Protection
The U.S. Fish and Wildlife Service reviewed the status of the American eel in 2007 and in 2015, finding both times that Endangered Species Act protection for the American eel is not warranted.
After examining the best scientific and commercial information available about the eel from Greenland south along the North American coast to Venezuela in South America and as far inland as the Great Lakes and the Mississippi River drainage, the Service found that the American eel is stable. While American eels still face local mortality from harvest and hydroelectric facilities, this is not threatening the overall species. Harvest quotas and mechanisms restoring eel passage around dams and other obstructions have also reduced these effects. |
bits and text (intro)
character sets and character encodings
To represent a character of text as bits, we represent it as a number, and so we must (arbitrarily) decide which numbers represent which characters. A character set is a standardized selection of characters given designated numbers.
When expressing characters as numbers, we need to decide how exactly to write the numbers as bits. How many bits do we use to represent each character? Do we use the same number of bits for every character? In other words, how should we encode the characters? A character encoding is a standardized way of encoding text.
ASCII and Unicode
ASCII (American Standard Code for Information Interchange) was the most widely used character set for several decades. ASCII contains just 128 characters: the English alphabet, English punctuation, numerals, and a few miscellaneous others.
The Unicode character set has now supplanted ASCII as the most widely used character set. Created in the 1990s, Unicode provides over a million code points and includes basically every symbol of every written language in history.
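As a small illustration of the “characters as numbers” idea (an added example, not from the original text), Python exposes each character’s designated number through the built-in ord() and chr() functions:

# A character set assigns every character a number (a "code point").
print(ord('A'))   # 65   -- ASCII and Unicode assign 'A' the same number
print(ord('a'))   # 97
print(chr(65))    # 'A'  -- chr() is the reverse mapping, number -> character
print(ord('€'))   # 8364 -- a Unicode character that has no ASCII number at all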
UTF-8, UTF-16, UTF-32
Unicode text is most commonly encoded in three standard encodings: UTF-8, UTF-16, and UTF-32. UTF-8, as the name implies, uses as few as 8 bits to represent a single character, though some characters require as many as 32 bits. In UTF-16, the most commonly used characters are represented in 16 bits and the rest in 32. In UTF-32, all characters are represented in 32 bits. The choice of which encoding to use comes down to a trade-off between space efficiency and processing efficiency.
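The space trade-off can be seen directly by encoding the same string three ways. This is a minimal sketch using Python’s built-in codecs; the sample string is an arbitrary choice, not something from the text.

text = "héllo"  # four ASCII characters plus one non-ASCII character ('é')

for encoding in ("utf-8", "utf-16", "utf-32"):
    data = text.encode(encoding)
    print(encoding, len(data), "bytes")

# Typical output (Python's utf-16/utf-32 codecs prepend a byte-order mark):
# utf-8    6 bytes   ('é' takes 2 bytes; the other 4 characters take 1 each)
# utf-16  12 bytes   (2-byte BOM + 2 bytes per character)
# utf-32  24 bytes   (4-byte BOM + 4 bytes per character)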
Learn about building a boundary surface.
- [Instructor] In this movie, we're going to be looking at a couple different ways to create a boundary surface. The first one's going to be pretty straightforward just using two simple curves. The first thing I want to do is make sure I have my model that's been sliced up into quadrants, which we did using the Split Line command in the previous movie. Because I have this quadrant up here, you can see I just have that arc I can easily select. I can do exact same thing at the top here and select that arc. That's where we're going to be creating the boundary surface between the two. Let's go over here to Surfaces. Click on Boundary Surface. As far as direction number one, let's go ahead and choose that upper edge right there, so we can zoom in right there, click on that edge.
There it is. Then, come over here and make sure we're choosing on the same side. Choose that one there. We can see we've got a boundary surface that's been created between the two. Now, instead of this None, I want to go down here and click on Curvature to Face, or Tangency to Face. We have a couple different options for how we can create that surface. Do the same thing over here. Curvature to Face. You can see there you got a nice little surface that's been created. Now, you can click on these arrows here and drag them in or out to create or modify that surface and define how much influence each one of those has on that surface.
You can come over here and then actually type in a value if you wanted to, or just grab the arrows and drag them around. If you choose the other surface here, or the other edge, you can then do the exact same thing. Whichever one is highlighted, you have the control to type in a value or drag the arrow out to modify that surface. Once you have something that you like, go ahead and click on the green check mark, and there's our very first surface. Now, down here at the bottom we have a little bit more complicated situation. We actually have a couple of these guides that we want to use to define where that surface actually lies.
The first thing I want to do is create a helper surface. To do that, I'm going to go ahead and choose this Right Plane, start a sketch, choose this line right here, and I'm going to convert that over. Then, I'm going to create a surface, which is going to be an extruded surface. Let's go ahead and just choose a Mid Plane. Let's type in two inches. The only reason I'm creating this is as a helper surface so I can project some lines onto it and create the edges of my new surface. Click on okay, there it is. Now, what I'd like to do is create a regular plane.
Go over here to Features, Reference Geometry, Plane, and I need three points. The points I'm going to be choosing are this little point right there, this one right over here, and then come down here where this intersects again, and choose this point, I'll spin around, here we go, right there. Those are the points we need to create that surface. Click on okay. Now, directly on that surface, let's create a sketch. So, start a sketch and just do some really simple lines, and just connect the dots.
Now, you could use a spline if you wanted to, but, in this case, a regular line will work just fine. Click there. All right, that's my first line. Then, exit out of that sketch. First things first. We want to project this line we just created. Actually, let's hide this plane 'cause it makes it a little bit hard to see what's going on. There's that line we just created. I want to project that line onto that surface. To do that, let's go over here to Features, go to Curves, Projected Curve.
I'm going to choose this sketch right there, and I want to project it onto that surface. Click on okay. Reverse the projection. Click on okay. And there is our curve. Pretty nice. Let's do exactly the same thing on the other side here. Let's go ahead and go back to that plane right there. Let's go ahead and start a sketch. Create a basic line. Now, I'm going to connect the dots from here up to there.
Exit out of that sketch, and come back over here to Curves, Project Curve. That sketch onto that surface there. Make sure we do reverse the projection so it's projecting directly to where we want to be. Click okay. Now, I have those two curves which are going to guide my boundary surface. It's kind of a lot of work to do that. You have to create the helper surface. You have to create the plane. You got to create the lines, and you have to project them up there. But, what that's allowing me to do is completely control that surface. Now I know that my surface is going to go along that line.
It's going to be following this line down here and this line over here. So, I have three guide-lines. If I hide the helper surface, this one here, let's hide it, you can see it's not there. I'm going to go ahead and hide these ones here, too, so you can see that there's our surfaces on the edges where they connect to, and then let's go back and show this one again. You can see on the bottom that's what's going to control that surface. Once we have those things, we're pretty much all set to create that boundary surface. Go to Surfaces, come over here to Boundary Surface.
My first direction's going to be this edge right here. My second one's going to be this one right here. Notice it creates the boundary surface. Now, as far as direction number two, what we want to choose is, let's go ahead and choose this edge right here. Look, it already brings it up, which looks good. I also want to choose the far edge, looks good. Then go ahead and choose that edge down there at the bottom. Click okay, and there it is. As you can see, we've controlled all edges of this boundary surface so it exactly matches the guide curves we were originally given with a perfect match boundary surface.
Once all that's okay you can, of course, change the way that the curvature is set up, to something like Tangency to Face. But, in this case, we don't really need to because all those lines were originally created exactly tangent. So, in this case, I can just go ahead and turn that back to None. Either way, doesn't really matter. Click on okay, and there is our perfect surface. And, if we look directly at the right plane, you can see this surface perfectly matches the two guide-curves that we were given and it looks really nice. There's a couple different ways to create these boundary surfaces. Obviously, the first option is way, way easier just by clicking and choosing the two individual curves, and then just applying the different direction vectors and the effect that they have on the surface.
The second one is a lot more control. It takes a lot more work, but you can get exactly what you're looking for in creating a boundary surface, and that's what we have.
A malaria vaccine would provide a much-needed way of alleviating the toll of this disease on the world. But one does not yet exist, and there is no scientific consensus on the best lines of research to pursue. This policy brief outlines the progress and challenges in vaccine development.
Malaria is caused by infection with a microbe called a protozoan, which is transferred to people through the bite of Anopheles mosquitoes. Distinct from viruses and bacteria, protozoans are single-celled eukaryotic micro-organisms. Four species of protozoan parasites cause malaria in humans — Plasmodium falciparum, P. vivax, P. ovale and P. malariae. No vaccine has ever been made against a protozoan that causes the disease in people.
P. vivax, P. malariae and P. ovale infections are as widespread as P. falciparum infection but they are not usually fatal. P. falciparum, however, is highly dangerous. People who are often infected by it — such as those living in rural Africa — develop a partial immunity to the disease. But infection can be fatal in other groups, such as children or people without naturally acquired immunity (e.g. tourists, aid workers and military personnel).
Methods for controlling and treating malaria have been fiercely debated since the earliest descriptions of malaria more than two thousand years ago. At present, disease control relies on three main efforts:
- Drug treatment of patients
Treatment to kill the Plasmodium infection is the mainstay of control in areas of high transmission in Africa and Asia. But drug resistance, cost, and inadequate infrastructure are severely hampering efforts. More targeted treatment of those most vulnerable to malaria is taking place in the form of intermittent preventative treatment for pregnant women and infants, involving the administration of regularly spaced doses of antimalarial drugs.
- Mosquito control
Measures to prevent the spread of malaria include using insecticide-impregnated bednets, insecticide treatment of mosquito habitats, killing mosquito larvae with chemicals at wetland breeding sites, and suppressing breeding sites by water drainage. Indoor residual spraying of homes with insecticides, although phased-out in many areas following concerns about the use of DDT, is now returning to favour, most notably in southern Africa and India.
- Post-eradication monitoring and treatment
When malaria is brought into areas where it was eradicated in the 20th century (such as Europe and North America), all those who might be carrying infection are rapidly treated to avoid re-establishment of transmission.
Ideally, a malaria vaccine that could greatly improve global public health would induce a rapid and protective immune response that completely eliminates the infection.
Less ideal, but more feasible and still highly desirable, would be a partially protective vaccine that does not completely prevent infection but would at least boost the immune system sufficiently to lessen the severity of disease caused by P. falciparum.
Why a vaccine?
Vaccines are the most cost-effective component of public health services. They are usually given orally or by injection of an inactivated (killed) or attenuated (live, but non-virulent) whole pathogen. When given together with a boosting substance called an adjuvant, a successful vaccination safely induces an immune response that leads the body to recognise and kill the infectious agent.
A malaria vaccine seems to offer the greatest hope of achieving significantly improved malaria control, particularly in Africa, where the ecological habitat is such that effective mosquito control has proved difficult or impossible to maintain.
The success of other vaccination campaigns demonstrates that the control of major infectious diseases is achievable on a global scale. The World Health Organization's (WHO) Expanded Program on Immunisation (EPI) was launched in 1974. This effort has successfully provided subsidised vaccination worldwide against six major childhood diseases: measles, diphtheria, pertussis (whooping cough), tetanus, polio and tuberculosis. Furthermore, smallpox has been eradicated as a result of vaccination, and polio seems close to being eliminated.
Additionally, the WHO immunisation programme has created the global skill base and infrastructure through which other protective vaccines could be administered. If funding were available, existing vaccines against diseases such as Hepatitis B, Haemophilus influenzae type b (Hib) and yellow fever could be introduced. Indeed, in view of the high rates of death and disease in children caused by malaria, great efforts would be made to incorporate a malaria vaccine into the WHO immunisation schedules, if an effective and affordable option were available.
So why is there not yet a malaria vaccine? The lack of a malaria vaccine today can be attributed to several complementary factors including lack of interest by governments and the private sector and significant scientific obstacles. This policy brief outlines the direction that any future malaria-vaccine research policy could take.
Inadequate attention and competing ideas
Until quite recently, malaria vaccine development has not been a global priority. Research in this area has had only a few adherents, who scraped together funding to test small batches of vaccine prototypes in early-stage clinical trials. The research field ran against the grain of conventional 'pure' academic immunological research, and at the same time was too risky to attract potential partners in either the small biotech sector or the established pharmaceutical industry.
Malaria vaccine research involves the unfamiliar combination of vaccinology, immunology and malariology. There are few established theoretical principles in this field to serve as guidelines. Instead, workers must contend with a mass of frequently inconsistent data from field and laboratory studies that confuse attempts to forge ahead in any one direction.
Some malaria researchers have noted the similarities with the effort to develop an HIV vaccine, which has also been affected by competing research priorities in the past 25 years.
'Whole organism' vaccines
Malaria researchers in the 1960s and 1970s followed the traditional approach of producing attenuated or killed whole organisms for inoculation. This method has yielded live attenuated vaccines that protect against smallpox, yellow fever, measles, mumps, rubella, polio (the Sabin oral vaccine) and varicella zoster (chicken pox).
The same approach has produced killed or inactivated viral vaccines against polio (the Salk injected vaccine) and Hepatitis A. Killed or inactivated whole-bacteria based vaccines are used safely and effectively to protect against pertussis (whooping cough), pneumonia and meningitis caused by Haemophilus influenzae type b (Hib), typhoid, plague and cholera.
All Plasmodium species have distinct forms in both the human and mosquito stages of their life cycle. The first malaria vaccine was made from fragmented and dissolved malaria parasites, chosen in the form that is ready to invade red blood cells (merozoites). Tested in the 1960s, this vaccine gave some protection in experimental monkey models. But inconsistent results, difficulties in producing immunising material, and dependence on toxic adjuvants led to a halt of such trials.
Instead, workers took an alternative approach using the emergent recombinant DNA/genetic engineering technologies to purify individual protective merozoite surface components that could be recognised by the immune system (antigens).
Meanwhile, other investigators attempted to create vaccines using attenuated sporozoites — the form of the malaria parasite that is transferred to humans by mosquitoes and that invades the liver. Live, infected mosquitoes were used to deliver the vaccine after having first been subjected to X-ray radiation to render the parasites unable to multiply. A very high proportion of volunteers bitten by the irradiated mosquitoes could subsequently fight off infection with normal untreated malaria sporozoites.
Many investigators consider this approach technically too difficult to pursue, and impossible to scale up — although there are advocates of novel attenuated sporozoite production programmes.
As with whole merozoite vaccines, attention on sporozoites largely moved to discovering and purifying the major sporozoite protective antigen or antigens.
The focus of malaria vaccine research, therefore, switched to vaccines based on one or more immunogenic components of the parasite; the so-called subunit vaccine approach. Early precedents for successful subunit vaccines are the bacterially secreted toxins used in the diphtheria and tetanus vaccines. The most influential recent precedent has been the Escherichia coli or yeast-produced Hepatitis B surface-protein vaccine, which is currently the only genetically engineered recombinant vaccine to be successfully produced and deployed.
Subunit vaccines: the challenge of target selection
A key problem for subunit vaccine development is how to choose the parasite component (i.e. the protective antigen) to induce immunity. There are three stages of the malaria parasite's life cycle that seem especially vulnerable to the immune system of infected human hosts and are thus prime vaccine targets. They are the:
- pre-erythrocytic stage — not only the circulating sporozoites transferred by mosquitoes but also those that continue to develop after entry into the liver.
- asexual blood stage — the merozoites that emerge from the liver to invade, and subsequently grow in, red blood cells.
- sexual blood stage (gametocytes) — taken up by the blood-feeding mosquito to continue the protozoan life cycle.
This complex biology gives rise to two target selection problems. First, there are several thousand proteins (plus carbohydrates and lipids) made by malaria parasites during human infection. These compounds can serve as antigen targets for two types of immune responses: the secretion of antibodies to attack parasites floating in the blood stream, and infected red blood cells; and the action of white blood cells known as T cells, which can attack infected liver cells, and also aid in boosting antibody production and specificity.
Second, many antigenic proteins vary between individual parasites within an infected person. Making matters even more complex, individual malaria parasites can also switch the selection of proteins that appear on the surface of infected red blood cells to evade the host's antibodies.
Malaria, thus, poses a formidable challenge to the human immune system. Indeed a successful vaccine might seem an impossible idea, were it not for the knowledge that people clearly can become immune. This natural immunity takes years to develop as people encounter a very diverse population of parasites. In Africa, tropical conditions sustain huge populations of mosquitoes and humans can be bitten by infected mosquitoes once or more every night. Every bite introduces 10 – 100 genetically diverse sporozoites. So far, however, it has not been possible to test the protective efficacy of immunisation with more than a handful of the 6000 or so proteins made by P. falciparum.
Sporozoite-protein based vaccines
At New York University in the late 1980s, Ruth Nussenzweig and her colleagues discovered the major surface antigen of sporozoites, the circumsporozoite protein (CSP). Several clinical trials in the late 1980s with chemically synthesised versions of CSP were unsuccessful.
Since then, a development programme run by the US Army and Smith Kline, now GlaxoSmithKline, has led to the development of the most successful CSP-based malaria vaccine so far — the RTS,S formulation.
The RTS,S formulation consists of 'virus-like particles', produced through recombinant DNA technology. The formulation contains a mixture of Hepatitis B surface antigens and a fragment of CSP and is given with a very potent adjuvant. The adjuvant, ASO2, is an oil-in-water emulsion of a lipid (fat particle) from the cell wall of salmonella bacteria (monophosphoryl lipid A) and a saponin-type detergent, and is essential for the vaccine's protective effect. RTS,S is thought to act by stimulating both the production of antibodies that block the parasite's invasion of the liver and T cells that kill the replicating parasites inside liver cells.
This vaccine has been shown to confer a substantial (40–70 per cent, depending on the trial) but short-lived protection in volunteers deliberately exposed to the bites of infected mosquitoes (usually five infected mosquitoes per volunteer).
However, the vaccine conferred only short periods of protection (less than six months) to naturally exposed adult Gambian volunteers. These volunteers were already semi-immune to malaria, in that they had been naturally exposed to the disease since childhood. In the most recent reported field trial in Africa, the vaccine gave around 30 per cent protection overall against the first clinical attack of malaria in Mozambican children and reduced the incidence of severe disease by 58 per cent.
Multi-epitope peptide vaccines
Also during the late 1980s and early 1990s, attention turned to the idea of creating vaccines containing multiple antigen targets — so-called multi-epitope peptide vaccines. Colombian researchers led by Manuel Patarroyo in Bogota created such a vaccine formulation known as SPf 66. The vaccine contained chemically synthesised protein fragments (peptides) representing portions of three merozoite surface antigens, linked together with a protein sequence matching CSP (a so-called multi-epitope synthetic vaccine). It showed great promise in Aotus monkey experiments and in early human clinical trials in South America.
However, further field trials in Africa and South East Asia failed to repeat the early success and this candidate has now been largely dropped from the global vaccine development effort. But the concept of multi-epitope vaccines remains valid.
Vaccines to trigger and boost T cells
This class of vaccine aims to elicit cytotoxic T cells that can kill malaria parasites in infected liver cells. Unlike red blood cells, liver cells can alert the immune system to the parasite invasion and thereby render themselves targets for killing.
These 'prime-boost' vaccines involve a two-step procedure in which volunteers are first given a DNA-based vaccine to instruct body cells to produce malaria proteins as if they were infected with the parasite. This first step provides the initial stimulus to T cells that can detect these liver-stage antigen targets.
The second, booster inoculation contains attenuated viruses that carry synthetic portions of various pre-erythrocytic, liver-stage antigens, including CSP and also another sporozoite protein known as TRAP.
This booster step magnifies the numbers of T cells capable of recognising and killing malaria-infected cells, so that in theory, there can be a rapid and powerful response to a parasite infection. However, although results of several Phase I trials have shown safety and some immunogenicity, Phase IIb trials on adult male Gambian volunteers have not shown significant protective efficacy of such vaccines. Further clinical trials of alternative prime-boost vaccines designed to produce stronger responses are underway.
For tourists or the military, blood-stage vaccines are less appealing than vaccines that would block all infection. Blood-stage vaccines aim to reduce the most damaging aspect of the malaria parasite life cycle — its uncontrolled asexual replication in human red blood cells. Research in this area has focused on proteins that enable the parasite to latch on to, and invade, red blood cells.
The lead candidate proteins of most of these types of vaccines contain fragments of the merozoite surface protein-1 (MSP-1) and the apical membrane antigen-1 (AMA-1), produced through genetic engineering. They exist as advanced prototypes in or near clinical trials, alone and in combination.
Other early-stage human trials involve vaccines containing other malaria parasite blood-stage antigens, including glutamate-rich protein (GLURP), Exported Protein 1 (EXP-1), 175 kilo-Dalton erythrocyte binding antigen (EBA-175), serine repeat antigen (SERA), ring-infected erythrocyte surface antigen (RESA), and Merozoite Surface Proteins 2 and 3 (MSP 2 and 3).
There is, as yet, no striking proof that these formulations confer high-level protection against clinical malaria but it remains an active area of research.
Preventing infected cells sticking to human tissue
Other blood-stage malaria vaccines aim to stop malaria-infected red blood-cells sticking to human tissues. Such adhesion can lead to serious and fatal disease; for example, during pregnancy, infected red cells accumulate in the placenta and impede the blood supply to the growing foetus.
The adhesion process involves malaria parasite proteins called P. falciparum erythrocyte membrane protein-1 (PfEMP-1) appearing on the surface of infected red cells, causing the cells to bind to protein 'receptors' on the surface of body tissues. This vaccine strategy is based on evidence that natural human immunity to malaria involves acquiring a range of antibodies against PfEMP-1 proteins.
Women who have acquired malaria immunity during childhood seem to lose this immunity during pregnancy. This loss of immunity, leading to heavy malaria parasite infection of the placenta, is a serious cause of low birth weight and infant mortality and morbidity.
The explanation for the loss of resistance, it seems, is that malaria parasites, when they infect pregnant women, begin to express a form of PfEMP-1 molecule that can only bind in the placenta. With no previous encounter with these antigens, women become malaria-susceptible during their first pregnancy.
Other severe malaria syndromes, such as cerebral malaria in children, may also be triggered by similar potentially blockable adhesion processes. Prototypes of PfEMP-1-based vaccines are being produced and their safe testing and delivery is under discussion. One possibility is to genetically engineer their co-delivery with other vaccines designed to protect unborn children against infectious disease, such as the rubella component of the MMR (measles, mumps and rubella) vaccine.
Transmission blocking vaccines
Although sexual fusion of malaria parasite gametes takes place in the female mosquito after a blood meal, this process could potentially be blocked by serum antibodies produced in the blood of humans before they are bitten.
Vaccines aimed at eliciting such antibodies might, therefore, prevent mosquitoes from becoming infected after feeding on people who have been vaccinated with transmission-blocking vaccines.
These vaccines benefit the entire community rather than a single individual, by reducing transmission from one person to another. Phase I trials of transmission blocking vaccines for both P. falciparum and P. vivax are underway.
The future: comparative testing of vaccines
Current malaria vaccine development projects concentrate mainly on the small number of parasite surface antigens involved in invasion, adhesion onto host tissue, or development in the liver, and are based on the assumption that such processes are vulnerable to disruption.
There has been much vague talk about the need for more 'outside the box' thinking in vaccine candidate selection, but as yet no significant challenge to the logic of concentrating on triggering antibodies that block receptors, or T cells that kill infected liver cells.
Additionally, an important technical constraint has been the lack of simple, standardised assays that allow vaccine candidates to be compared. There is no clear consensus on which vaccine antigens trigger the strongest immune responses, and what type of response (antibody or T cell), measured in in vitro assays, can serve as an accurate correlate of in vivo human immune responses to malaria.
Such assays have been difficult to develop as there is no small animal model — such as mice — of P. falciparum infection. Research in monkeys is possible, but is, on a practical level, increasingly difficult and expensive to do.
Hundreds of research papers report measurements of human antibody or T cell responses to particular antigens after natural infections. Dozens more studies show positive or negative correlations between immune responses to particular antigens and measures of human immune protection such as the absence of, or low concentrations of, parasites in the blood. However, these results are rarely comparable between studies and thus not clear enough to enable the vaccine-research community to focus resources on a short list of promising candidates.
Future progress might be aided by researchers working together to create and develop controlled, comparative immunisation experiments. Such work might allow researchers to compare vaccine-elicited immune responses (such as concentration of serum antibodies, the response to whole parasites, and inhibition of parasite growth) on a small scale.
Such comparisons would allow workers to better prioritise candidate vaccines before beginning the expensive production of vaccines that are fit for human clinical trials. Several groups in the USA and Europe have started to work together to achieve such improved pre-clinical comparative testing.
Despite only small-scale investment, malaria vaccine research has made significant progress in the past decade. Advocacy has increased and major charitable support from the Bill and Melinda Gates Foundation has prompted other foundations, national governments and organisations such as the European Union and the WHO, to consider increasing their financial commitment to finding a vaccine.
There is a widely held consensus on how to proceed: give support to vaccine development projects that are supported by a substantial body of peer-reviewed data; broker novel collaborations, in particular between academia and industry; and let the competitive research environment produce the most effective vaccine.
The pace of vaccine development is quickening, thanks to increased efforts on several fronts: basic research on antigen target selection, industrial research and development to optimise experimental vaccines, and clinical testing of the best candidates so far. As a result, ‘first generation’ malaria vaccines are on the horizon. Such vaccines are likely to give only partial rather than complete protection, however, and will supplement rather than replace vector control and drug treatment.
Although vaccine research looks ready to start contributing to the decline of malaria, it is likely to be gradual, occurring over decades rather than years – as the history of infectious disease indicates. By maintaining a broad-based research effort, the vaccine research community may one day be able to convince political players, philanthropic donors and even more importantly, key target populations such as African mothers, that a vaccine will provide worthwhile protection for their children.
Snow R. S., et al. The global distribution of clinical episodes of Plasmodium falciparum malaria. Nature, 434, 214-217 (2005).
United Nations Development Programme. Human Development Report 2003: Millennium Development Goals: A Compact Among Nations to End Human Poverty. Oxford University Press, Oxford, 2003.
Esparza J. The Global HIV Vaccine Enterprise. Internat Microbiol, 8, 93 – 101 (2005).
Nussenzweig R.S. et al. Protective immunity produced by the injection of X-irradiated sporozoites of Plasmodium berghei. Nature, 216, 160 – 62 (1967)
Luke T. C., Hoffman S.L. Rationale and plans for developing a non-replicating, metabolically active, radiation-attenuated Plasmodium falciparum sporozoite vaccine. J Experiment Biol 206, 3803 – 08 (2003).
Alonso P. L. et al. The efficacy of the RTS,S/ASO2A vaccine against Plasmodium falciparum infection and disease in young African children: randomised controlled trial. Lancet, 364, 1411 – 20 (2004).
Moorthy V.S. et al. A randomised, double blind, controlled vaccine efficacy trial of DNA/MVA ME-TRAP against malaria infection in Gambian adults. PLoS Medicine 1, (2004).
Salanti A. et al. Selective up-regulation of a single distinctly structured var gene in chondroitin sulphate A-adhering Plasmodium falciparum involved in pregnancy-associated malaria. Mol Microbiol, 49, 179–91 (2003).
Malkin E. M. et al. Phase 1 vaccine trial of Pvs25H: a transmission blocking vaccine for Plasmodium vivax malaria. Vaccine 23, 3131–38 (2005). |
Worksheet: What Men Live By
1. Why does Simon want to pass the stranger by?
2. Why does he change his mind?
3. What sign do you have that this man may be special in some way?
4. What reason does the man give for being there?
5. What does Simon’s wife expect of him?
6. Why is she angry with Simon?
7. Why does her heart soften?
8. How does she show her concern for the stranger?
9. Describe Michael’s behavior.
10. How does the rich gentleman behave?
11. Why does Michael make a pair of soft slippers for the man?
12. What happened to the mother of the two little girls?
13. What is the proverb Matrena quotes?
14. Why does Michael smile three times?
15. What are the three truths Michael learns? Is one lesson more important than another?
16. How have Simon and his wife been rewarded for their kindness?
17. What is the turning point of the story?
18. What indirect characterization does the author use to describe _____________
19. What are some of the characteristics of a folk tale? |
The first Black newspaper, Freedom's Journal
On this date in 1827, the Freedom’s Journal newspaper was founded. It was the first Black-owned and operated newspaper in the United States.
Started by a group of free Black men in New York City, the paper served to counter racist commentary published in the mainstream press. As a four-page, four-column standard-sized weekly, Freedom’s Journal was established the same year that slavery was abolished in New York State. Samuel E. Cornish and John B. Russwurm served as its senior and junior editors. The Journal consisted of news of current events, anecdotes, and editorials and was used to address contemporary issues such as slavery and "colonization," a concept that was conceived in 1816 to repatriate free Black people to Africa.
Initially opposed to colonization efforts, Freedom’s Journal denounced slavery and advocated for Black people’s political rights, the right to vote, and spoke out against lynchings. Freedom’s Journal provided its readers with regional, national, and international news and with news that could serve to both entertain and educate. It sought to improve conditions for the over 300,000 newly freed Black men and women living in the North. The newspaper broadened readers’ knowledge of the world by featuring articles on such countries as Haiti and Sierra Leone. As a paper of record, Freedom’s Journal published birth, death and wedding announcements.
To encourage Black achievement, it featured biographies of renowned Black figures such as Paul Cuffee, Toussaint L’Ouverture, and poet Phillis Wheatley. The paper also printed school, job, and housing listings. At various times, the newspaper employed between 14 and 44 agents to collect and renew subscriptions, including David Walker from Boston, the writer of "David Walker’s Appeal," which called for slaves to rebel against their masters. Freedom’s Journal was soon circulated in 11 states, the District of Columbia, Haiti, Europe, and Canada. A typical advertisement cost between 25 and 75 cents.
Russwurm became sole editor of Freedom’s Journal following the resignation of Cornish in September 1827, and began to promote the colonization movement. The majority of the newspaper’s readers did not support the paper’s radical shift in support of colonization, and in March 1829, Freedom’s Journal ceased publication. Freedom’s Journal’s two-year existence helped spawn other publications. By the start of the Civil War over 40 Black-owned and operated papers had been established throughout the United States.
Black Saga: The African American Experience, A Chronology
by Charles M. Christian
Copyright 1995, Civitas/Counterpoint
LESSON 13 The Special Senses.
LESSON ASSIGNMENT Paragraphs 13-1 through 13-24.
LESSON OBJECTIVES After completing this lesson, you should be able to:
13-1. Identify functions of structures related to the special senses.
13-2. Given a list of statements about the physiology of the special senses, identify the false statement.
SUGGESTION After completing the assignment, complete the exercises at the end of this lesson. These exercises will help you to achieve the lesson objectives.
THE SPECIAL SENSES
Section I. INTRODUCTION
13-1. GENERAL VERSUS SPECIAL SENSES
a. The human body is continuously bombarded by all kinds of stimuli. Certain of these stimuli are received by sense organs distributed throughout the entire body. These are referred to as the general senses.
b. Certain other stimuli (table 13-1) are received by pairs of receptor organs located in the head. These are the special senses.
Vision: bulbus oculi (eye)
Hearing and equilibrium: ear (membranous labyrinth)
Smell: olfactory hair cells in nose
Taste: taste buds in mouth
Table 13-1. The special senses.
c. Since the general senses respond to immediate contact, they are very short range. In contrast, the special senses are long range.
13-2. INPUT TO BRAIN
From the special sense organs, information is sent to the brain through specific cranial nerves. When this information reaches specific areas of the cerebral cortex, the sensations are perceived at the conscious level.
Section II. THE SPECIAL SENSE OF VISION
13-3. THE RETINA
Within the bulbus oculi (eyeball) is an inner layer called the retina. See Figure 13-1 for the location of the retina within the bulbus oculi. See Figure 13-2 for the types of cells found within the retina.
Figure 13-1. A focal-axis section of the bulbus oculi.
Figure 13-2. Cellular detail of the retina.
a. Visual Fields (Figure 13-3). When a human looks at an object, light from the right half of the visual field goes to the left half of each eye. Likewise, light from the left half of the visual field goes to the right half of each eye. Later, in paragraph 13-4, we will see how the information from both eyes about a given half of the visual field is brought together by the nervous system.
b. Photoreception and Signal Transmission. The cells of the retina include special photoreceptor cells in the form of cones and rods. The light ray stimulus chemically changes the visual chemical of the cones and rods. This produces a receptor potential which passes through the bodies of the rods and cones and which acts at the synapses to induce a signal in the bipolar cells. This signal is then transmitted to the ganglion cells.
Figure 13-3. Scheme of visual input.
(1) Cones. The cones of the retina are for acute vision and also receive color information. The cones tend to be concentrated at the rear of the eyeball. The greatest concentration is within the macula lutea at the inner end of the focal axis (Figure 13-1).
(2) Rods. Light received by the rods is perceived in terms of black and white. The rods are sensitive to less intensive light than the cones. The rods are concentrated to the sides of the eyeball.
(3) Signal transmission. The stimulus from the photoreceptors (cones and rods) is transferred to the bipolar cells. In turn, the stimulus is transferred to the ganglion cells, the cells of the innermost layer of the retina. The axons of the ganglion cells converge to the back side of the eyeball. The axons leave the eyeball to become the optic nerve, surrounded by a dense FCT sheath. There are no photoreceptors in the circular area where the axons of the ganglion cells exit the eyeball; thus, this area is called the blind spot.
13-4. NERVOUS PATHWAYS FROM THE RETINAS
a. The two optic nerves enter the cranial cavity and join in a structure known as the optic chiasma. Leading from the optic chiasma on either side of the brainstem is the optic tract. In the optic chiasma, the axons from the nasal (medial) halves of the retinas cross to the opposite sides. Thus, the left optic tract contains all of the information from the left halves of the retinas (right visual field), and the right optic tract contains all of the information from the right halves of the retinas (left visual field).
b. The optic tracts carry this information to the LGB (lateral geniculate body) of the thalamus. From here, information is carried to the posterior medial portions (occipital lobes) of the cerebral cortex, where the information is perceived as conscious vision. Note that the right visual field is perceived within the left hemisphere, and the left visual field is perceived within the right hemisphere.
c. The LGB also sends information into the midbrainstem. This information is used to activate various visual reflexes.
13-5. FOCUSING OF THE LIGHT RAYS
a. The light rays, which enter the eyeball from the visual field, are focused to ensure acute vision. The majority of this focusing is accomplished by the permanently rounded cornea.
b. Fine adjustments of focusing, for acuteness of vision, are provided by the crystalline lens (biconvex lens). See Figure 13-4. This is particularly important when changing one's gaze between far and near objects.
Figure 13-4. Bending of the light rays by a biconvex lens.
13-6. ACCOMMODATION
The additional focusing provided by the crystalline lens, mentioned above, is one of the processes involved in accommodation. Accommodation refers to the various adjustments made by the eye to see better at different distances.
a. The crystalline lens is kept in a flattened condition by the tension of the zonular fibers (zonule ligaments; fibers of the ciliary zonule) around its equator, or margin. Contraction of the ciliary muscle of the eyeball releases this tension and allows the elastic lens to become more rounded. Since the elasticity of the crystalline lens decreases with age, old people may find it very difficult to look at close objects.
b. A second process in accommodation is the constriction of the pupils. The diameter of the pupil (the hole in the middle of the iris) controls the amount of light that enters the eyeball. As a light source comes closer and closer, the intensity of the light increases greatly. Therefore, the pupils must be constricted to control the amount of light entering the eyeball as an object under view comes close to the individual.
c. A third process in accommodation is the convergence of the axes of the two eyeballs toward the midline. Since both eyes tend to focus on the same object (binocular vision), there is an angle between the two axes. As an object draws closer, the angle increases to enable the axes to still intersect the object.
13-7. EYE MOVEMENTS
a. Convergent and Conjugate Eye Movements. In a conjugate eye movement, both eyeballs move through an equal angle in one direction, such as right or left. In a convergent eye movement, both eyeballs turn toward the midline to focus upon a nearby object. In both cases, the movement of the left and right eyeballs is highly coordinated so that an object may be viewed by both eyes. Therefore, the object can be perceived within both cerebral hemispheres in a binocular fashion.
b. "Searching" and "Following" Eye Movements. "Searching" and "following" movements of the eyeball are also called, respectively, voluntary fixation movements and involuntary fixation movements. For the first type of movement, the eyeballs move in a searching pattern, without focusing on a particular object until it is located. Once an object is located, the eyeballs will continually fix on that object in a following-type motion.
c. Eye Movements During Reading. During reading of printed or written material, the eyeball demonstrates several physical characteristics. The amount of material that can be recognized at a given glance occupies a given width of a written line. Each glance is referred to as a fixation. During a fixation, the eyeball is essentially not moving, and each eyeball is oriented so that the image falls upon the macula lutea (the maximum receptive area). Reading is a series of motions in which the eyeballs fixate on a portion of the written line and then move very rapidly to the next portion.
d. Compensation for Head Movements (Vestibular Control of Eye Movements). Since the human body cannot be held absolutely still, the eyeballs must move in order to remain fixed upon an object. For this purpose, the eyeballs must be moved in the direction opposite to, and at the same speed as, the movement of the head. This is accomplished by a delicate and complicated mechanism. This mechanism includes the motor neurons of the muscles of the eyeball and the vestibular nuclei of the hindbrain (responsible for balance and spatial orientation).
13-8. VISUAL REFLEXES
In the sense of vision, one consciously perceives the various objects being looked upon. In addition to this, there are a number of protective reactions to visual input--the visual reflexes.
a. When an unexpected visual stimulus occurs within the visual field, the individual's response will often include movement and other types of reaction. This is a part of the startle reflex.
b. When there is a change in the amount of light entering the eyeball, the size of the pupil will change. This is the pupillary reflex. The muscles of the iris automatically constrict or dilate to control the amount of light entering the eyeball.
c. In the blink reflex, the eyelids automatically move over the exterior surface of the eyeball. This reflex results in the automatic washing of the exterior surface of the eyeball with the lacrimal fluids. It also helps to keep the surface moist.
13-9. LACRIMAL APPARATUS
The eyeball is suspended in the orbit and faces outward. Helping to fill the orbit are a number of structures associated with the eyeball; these are the adnexa. Among these other structures is the lacrimal apparatus.
a. The lacrimal gland is located in the upper outer corner in front. Via small ducts, it secretes the lacrimal fluid into the space between the external surface of the eyeball and the upper eyelid.
b. The inner surface of the eyelids and the outer surface of the eyeball are covered by a continuous membrane known as the conjunctiva. The lacrimal fluid keeps the conjunctiva transparent. Also, with the blink reflex, the lacrimal fluid washes away any foreign particles that may be on the surface of the conjunctiva.
c. The free margins of the upper and lower eyelids have special oil glands. The oily secretion of these glands helps prevent the lacrimal fluid from escaping.
d. With the movement of the eyeball and the eyelids, the lacrimal fluid is gradually moved across the exterior surface of the eyeball to the medial inferior corner. Here, the lacrimal fluid is collected into a lacrimal sac, which drains into the nasal chamber by way of the nasolacrimal duct. Thus, the continuous production of lacrimal fluid is conserved by being recycled within the body.
Section III. THE SPECIAL SENSE OF HEARING (AUDITORY SENSE)
If a medium is set into vibration within certain frequency limits (on average, between 25 cycles per second and 18,000 cycles per second), we have what is called a sound stimulus (Figure 13-5). The sensation of sound, of course, occurs only when these vibrations are interpreted by the cerebral cortex of the brain at the conscious level.
a. The human ear is the special sensory receptor for the sound stimulus. As the stimulus passes from the external medium (air, water, or a solid conductor of sound) to the actual receptor cells in the head, the vibrations are in the form of (1) airborne waves, (2) mechanical oscillations, and (3) fluid-borne pulses.
Figure 13-5. Characteristics of sound.
b. The ear (Figure 13-6) is organized in three major parts: external ear, middle ear, and internal (inner) ear. Each part aids in the transmission of the stimulus to the receptor cells.
Figure 13-6. A frontal section of the human ear.
13-11. THE EXTERNAL EAR
The external ear begins with a funnel-like auricle. This auricle serves as a collector of the airborne waves and directs them into the external auditory meatus. At the inner end of this passage, the waves act upon the tympanic membrane (eardrum). The external auditory meatus is protected by a special substance called earwax (cerumen).
13-12. THE MIDDLE EAR
a. Tympanic Membrane. The tympanic membrane separates the middle and external ears. It is set into mechanical oscillation by the airborne waves from the outside.
b. Middle Ear Cavity. Within the petrous bone of the skull is the air-filled middle ear cavity.
(1) Function of the auditory tube. Due to the auditory tube, the air of the middle ear cavity is continuous with the air of the surrounding environment. The auditory tube opens into the lateral wall of the nasopharynx. Thus, the auditory tube serves to equalize the air pressures on the two sides of the tympanic membrane. If these two pressures become moderately unequal, there is greater tension upon the tympanic membrane; this reduces (dampens) mechanical oscillations of the membrane. Extreme pressure differences cause severe pain. The passage of the auditory tube into the nasopharynx opens when one swallows; therefore, the pressure differences are controlled somewhat by the swallowing reflex.
(2) Associated spaces. The middle ear cavity extends into the mastoid bone as the mastoid air cells. The relatively thin roof of the middle ear cavity separates the middle ear cavity from the middle cranial fossa.
c. Auditory Ossicles. There is a series of three small bones, the auditory ossicles, which traverse the space of the middle ear cavity from the external ear to the internal ear. The auditory ossicles function as a unit.
(1) The first ossicle, the malleus, has a long arm embedded in the tympanic membrane. Therefore, when the tympanic membrane is set into mechanical oscillation, the malleus is also set into mechanical oscillation.
(2) The second ossicle is the incus. Its relationship to the malleus produces a leverage system which amplifies the mechanical oscillations received through the malleus.
(3) The third ossicle, the stapes, articulates with the end of the arm of the incus. The foot plate of the stapes fills the oval (vestibular) window.
d. Auditory Muscles. The auditory muscles are a pair of muscles associated with the auditory ossicles. They are named the tensor tympani muscle and the stapedius muscle. The auditory muscles help to control the intensity of the mechanical oscillations within the ossicles.
13-13. THE INTERNAL EAR
a. Transmission of the Sound Stimulus. The foot plate of the stapes fills the oval (vestibular) window, which opens to the vestibule of the internal ear (Figure 13-7A). As the ossicles oscillate mechanically, the stapes acts like a plunger against the oval window. The vestibule is filled with a fluid, the perilymph. These mechanical, plunger-like actions of the stapes impart pressure pulses to the perilymph.
Figure 13-7. Diagram of the scalae.
b. Organization of the Internal Ear. The internal ear is essentially a membranous labyrinth suspended within the cavity of the bony (osseous) labyrinth of the petrous bone (Figure 13-8). The membranous labyrinth is filled with a fluid, the endolymph. Between the membranous labyrinth and the bony labyrinth is the perilymph.
Figure 13-8. The labyrinths of the internal ear.
c. The Cochlea. The cochlea is a spiral structure associated with hearing. Its outer boundaries are formed by the snail-shaped portion of the bony labyrinth. The extensions of the bony labyrinth into the cochlea are called the scala vestibuli and the scala tympani (Figure 13-7B). These extensions are filled with perilymph.
(1) Basilar membrane (Figure 13-7B). The basilar membrane forms the floor of the cochlear duct, the spiral portion of the membranous labyrinth. The basilar membrane is made up of transverse fibers. Each fiber is of a different length, and the lengths increase from one end to the other. Thus, the basilar membrane is constructed similarly to a harp or piano. Acting like the strings of the instrument, the individual fibers mechanically vibrate in response to specific frequencies of pulses in the perilymph. Thus, each vibration frequency of the sound stimulus affects a specific location of the basilar membrane.
(2) Organ of Corti. Located upon the basilar membrane is the organ of Corti. The organ of Corti is made up of hair cells. When the basilar membrane vibrates, the hair cells are mechanically deformed so that the associated neuron is stimulated.
13-14. NERVOUS PATHWAYS FOR HEARING
The neuron (associated with the hair cells of the organ of Corti) then carries the sound stimulus to the hindbrainstem. Via a special series of connections, the signal ultimately reaches Brodmann's area number 41, on the upper surface of the temporal lobe (see para 12-36). Here, the stimulus is perceived as the special sense of sound. It is interesting to note that speech in humans is primarily localized in the left cerebral hemisphere, while musical (rhythmic) sounds tend to be located in the right cerebral hemisphere.
Section IV. THE SPECIAL SENSE OF EQUILIBRIUM, THE GENERAL BODY SENSE, AND POSTURAL REFLEXES
a. The human body is composed of a series of linkages, block on top of block. These blocks can be arranged in a multitude of patterns called postures. In order to produce and control these postures, the human brain utilizes a great number of continuous inputs telling the brain the instantaneous condition of the body posture. Overall, we refer to this process as the general body sense.
b. The internal ear provides one of the input systems for the general body sense. The internal ear responds to gravitational forces, of which there are two kinds--static and kinetic (in motion). Of the kinetic stimuli, the motion may be in a straight line (linear) or angular (curvilinear).
13-16. THE MACULAE
The membranous labyrinth of the internal ear has two sac-like parts--the sacculus and the utriculus. On the wall of each of these sacs is a collection of hair cells known as the macula (plural: maculae). The hairs of these hair cells move in response to gravitational forces, both static and linear kinetic. The maculae are particularly sensitive to small changes in the orientation of the head from an upright position. Thus, the maculae are very important in maintaining a standing or upright position.
13-17. THE SEMICIRCULAR DUCTS
a. In addition, three tubular structures are associated with the utriculus. The circle of each of these semicircular ducts is completed by the cavity of the utriculus. At one end of each semicircular duct is a crista, a ridge of hair cells across the axis of the duct.
b. When a jet takes off, a passenger tends to remain in place at first and can feel the resulting pressure of the seat against his back. Also, when the jet is no longer accelerating, the passenger can feel that the pressure of the seat against his back has returned to normal.
c. Likewise, in the appropriate semicircular duct, the endolymph ("passenger") tends to remain in place early during an acceleration. Because the duct ("seat") itself is moving with the body ("jet"), the hairs of the crista are affected by the change in movement. Later, when acceleration stops, the effect upon the hairs of the crista is also registered.
d. However, the cristae of the semicircular ducts detect rotation of the head (angular acceleration and angular velocity). Linear acceleration, as with our example of the passenger and the jet, is detected primarily by the maculae, discussed above.
13-18. RESULTING INPUTS FOR THE SPECIAL SENSE OF EQUILIBRIUM
The combined inputs from the maculae of the sacs and the cristae of the semicircular ducts provide continuous, instantaneous information about the specific location and posture of the head in relationship to the center of gravity of the earth. These inputs are transmitted by the vestibular neurons to the hindbrainstem.
13-19. INPUTS FOR THE GENERAL BODY SENSE
In addition to the inputs from the membranous labyrinth, various other inputs are used to continuously monitor the second-to-second posture of the human body.
a. We have already examined the proprioceptive sense, which monitors the condition of the muscles of the body.
b. Various other receptors are associated with the joint capsules, the integument, etc. They indicate the precise degree of bending present in the body.
c. A very important body sense is vision. Even when other inputs are lacking, if an individual can see his feet, he may still be able to stand and move.
13-20. POSTURAL REFLEXES
To automatically control the posture, the human nervous system has a number of special reflexes. These reflexes are coordinated through the cerebellum.
a. The head and neck tonic reflexes orient the upper torso in relationship to the head.
b. Another set of reflexes does likewise for the body in general. The righting reflexes come into play when the body falls out of balance or equilibrium.
c. A special set of reflexes connects the vestibular apparatus to the extraocular muscles of the eyeball. This was discussed earlier in the section on the special sense of vision.
Section V. THE SPECIAL SENSE OF SMELL (OLFACTION)
13-21. SENSORY RECEPTORS
Molecules of various materials are dispersed (spread) throughout the air we breathe. A special olfactory epithelium is located in the upper recesses of the nasal chambers in the head. Special hair cells in the olfactory epithelium are called chemoreceptors, because they receive these molecules in the air.
13-22. OLFACTORY SENSORY PATHWAY
The information received by the olfactory hair cells is transmitted by way of the olfactory nerves (cranial nerves I). It passes through these nerves to the olfactory bulbs and then into the opposite cerebral hemisphere. Here, the information becomes the sensation of smell.
Section VI. THE SPECIAL SENSE OF TASTE (GUSTATION)
13-23. SENSORY RECEPTORS
Molecules of various materials are also dispersed or dissolved in the fluids (saliva) of the mouth. These molecules come from the food ingested (taken in). Organs known as taste buds are scattered over the tongue and the rear of the mouth. Special hair cells in the taste buds are chemoreceptors that react to these molecules.
13-24. SENSORY PATHWAY
The information received by the hair cells of the taste buds is transmitted to the opposite side of the brain by way of three cranial nerves (VII, IX, and X). This information is interpreted by the cerebral hemispheres as the sensation of taste. |
A military is an organisation with the prerogative to defend a country, nation, or other entity by combating threats to that which it is defending. A military may also incorporate aspects of law enforcement, government, and other roles normally attributed to other organisations. In micronationalism, a military force may be entirely fictional, may incorporate fictional elements, or may be entirely factual, but it usually possesses no actual ability to carry out military operations due to the constraints of macronational law.
The first recorded use of 'military' was in the 15th Century. The word is derived from the Latin 'militaris', from the prefix 'milit-' and the word 'miles' (meaning 'soldier').
Common roles of the micronational military
Each military force in each micronation has different functions. In some, the police force is linked to or is entirely the military force of the micronation. Military roles are also dependent on resources available, manpower and legal constraints.
Some common roles of micronational military forces include:
- A small degree of law enforcement
- Parades and inspections
- Military exercises and training
- Live combat simulation
Militaries, both micronational and macronational, are almost always guided by a strict organisational structure.
Military commands are headed by an overall supreme authority. In some cases, this is a single person, such as a dictator or a similar all-powerful figure. In other cases, it is headed by some sort of supreme command group, which co-ordinates and runs an entire military. Names often given to these headquarters include:
- 'High Command'
- 'Supreme Command'
- 'Supreme Headquarters'
Below these commands are similar commands, depending on military units' names and resource allocations, but in a hierarchical structure. This 'top-down' organisation allows information to flow efficiently in both directions through the delegation of authority to suitable officers. Such commands may include Divisional Headquarters, Eastern Command and Battalion Headquarters.
All personnel of a military, including support staff, follow a strict ranking structure, which varies from military to military (a notable exception is the Armored Federation Army). Generally, the ranks of a military may be split into three groups - enlisted, non-commissioned officers and commissioned officers. Depending on the military, the exact ranks can change.
As a general guide, however, the rank system of an army usually follows these general guidelines (in order of superiority from lowest to highest). For the air force and naval branches of a military, different ranking systems are used.
- Lance Corporal
- Field Marshal |
Climate Change Lecture
news, 24 February 2012, 11:19
Professor Dorthe Dahl-Jensen
Exploring the Greenland Ice Sheet: Implications for Climate Change Past and Present.
The Greenland Ice Sheet is reacting to the recent climate change and is losing more and more mass every year. One of our challenges in the future is to adapt to rising sea level. Looking into the past gives us knowledge of how the ice sheets reacted to changing climates of the past, and this knowledge can be used to improve predictions of sea level rise in the future. The deep ice cores from Greenland contain information on the past climate more than 130,000 years back in time.
The first results from the new Greenland ice core from the drill site named NEEM are presented and combined with the results from the other deep ice cores from the Greenland Ice Sheet.
All the ice cores drilled through the Greenland Ice Sheet have been analyzed, and the results show that all the ice cores contain ice from the previous warm Eemian climate period, 130,000 to 155,000 years before present. It is thus clear that the Greenland Ice Sheet existed 120,000 years ago in this warm climate period, when it was 5°C warmer over Greenland and the sea level has been estimated to have been 5-8 m higher than the present sea level.
Worst-case performance: O(n²) (unbalanced); O(n log n) (balanced)
Best-case performance: O(n log n)
Average-case performance: O(n log n)
Worst-case space complexity: Θ(n)
A tree sort is a sort algorithm that builds a binary search tree from the elements to be sorted, and then traverses the tree (in-order) so that the elements come out in sorted order. Its typical use is sorting elements online: after each insertion, the set of elements seen so far is available in sorted order.
Adding one item to a binary search tree is on average an O(log n) process (in big O notation), so adding n items is an O(n log n) process, making tree sort a 'fast sort'. But adding an item to an unbalanced binary tree needs O(n) time in the worst-case, when the tree resembles a linked list (degenerate tree), causing a worst case of O(n²) for this sorting algorithm. This worst case occurs when the algorithm operates on an already sorted set, or one that is nearly sorted. Expected O(n log n) time can however be achieved in this case by shuffling the array.
The worst-case behaviour can be improved upon by using a self-balancing binary search tree. Using such a tree, the algorithm has an O(n log n) worst-case performance, thus being asymptotically optimal for a comparison sort. When using a splay tree as the binary search tree, the resulting algorithm (called splaysort) has the additional property that it is an adaptive sort, meaning that its running time is faster than O(n log n) for inputs that are nearly sorted.
The following tree sort algorithm in pseudocode accepts an array of comparable items and outputs the items in ascending order:
STRUCTURE BinaryTree
    BinaryTree:LeftSubTree
    Object:Node
    BinaryTree:RightSubTree

PROCEDURE Insert(BinaryTree:searchTree, Object:item)
    IF searchTree.Node IS NULL THEN
        SET searchTree.Node TO item
    ELSE IF item IS LESS THAN searchTree.Node THEN
        Insert(searchTree.LeftSubTree, item)
    ELSE
        Insert(searchTree.RightSubTree, item)

PROCEDURE InOrder(BinaryTree:searchTree)
    IF searchTree.Node IS NULL THEN
        EXIT PROCEDURE
    ELSE
        InOrder(searchTree.LeftSubTree)
        EMIT searchTree.Node
        InOrder(searchTree.RightSubTree)

PROCEDURE TreeSort(Array:items)
    BinaryTree:searchTree
    FOR EACH individualItem IN items
        Insert(searchTree, individualItem)
    InOrder(searchTree)
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x (Node t y s)
    | x <= y = Node (insert x t) y s
    | x > y  = Node t y (insert x s)

flatten :: Tree a -> [a]
flatten Leaf = []
flatten (Node t x s) = flatten t ++ [x] ++ flatten s

treesort :: Ord a => [a] -> [a]
treesort = flatten . foldr insert Leaf
In the above implementation, both the insertion algorithm and the retrieval algorithm have O(n²) worst-case scenarios.
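For readers who want something directly runnable, here is a minimal Python sketch of the same idea. It is not part of the original article; the class and function names are our own, and like the versions above it uses an unbalanced tree, so it keeps the O(n²) worst case on already-sorted input.

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    # Standard (unbalanced) binary-search-tree insertion.
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def in_order(root, out):
    # In-order traversal emits the stored values in ascending order.
    if root is not None:
        in_order(root.left, out)
        out.append(root.value)
        in_order(root.right, out)

def tree_sort(items):
    root = None
    for item in items:
        root = insert(root, item)
    out = []
    in_order(root, out)
    return out

print(tree_sort([5, 3, 8, 1, 4]))  # prints [1, 3, 4, 5, 8]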
- Heapsort: builds a binary heap out of its input instead of a binary search tree, and can be used to sort in-place (but not adaptively).
The Wikibook Algorithm Implementation has a page on the topic of Binary Tree Sort.
Colors and Color Spaces
Continuing with my previous post, where I discussed about the basic concepts of computer vision, here I will discuss all about colors – more about color depth, channels and color spaces.
More on Color Depth
You are already aware (as discussed in my previous post) that there are three ways to represent an image with respect to its color – binary, grayscale and color images. You are also aware that the depth of an image basically refers to the number of unique shades of color which constitute the image. Let us look at the following once again.
Depth is represented as “bits” like 1-bit, 8-bit, 16-bit, 24-bit, etc. An n-bit image simply means that there are 2^n unique shades of color in the image. For example, an 8-bit grayscale image will have 2^8 = 256 different shades of gray in between black (0) and white (1). Similarly, a 16-bit grayscale image will have 2^16 = 65,536 different shades of gray in between black (0) and white (1).
Relationship between Depth and Intensity
Next, an n-bit image also means that each of its pixels stores its intensity in n-bit fashion. Please read this carefully, since it will help you out in many places where you need to work with individual pixels. Each pixel of an image has an intensity (as discussed in the previous post). This intensity is directly related to the depth of the image. In the previous point, it was stated that depth represents different shades of a color. Here, I state that shades are nothing but different intensity levels. For example, for an 8-bit grayscale image, there are 2^8 = 256 different shades of gray. This also means that each pixel can have 256 different intensity levels, level 0 to level 255. Here level zero (0) corresponds to black (0) and level 255 corresponds to white (1). Any value in between 0 and 255 merely represents a gray shade. For instance, a pixel intensity of 127 corresponds to roughly 0.5 gray, 56 corresponds to 56 ÷ 256 = 0.219, 198 corresponds to 198 ÷ 256 = 0.773, etc. Similarly, for a 16-bit grayscale image, there are 2^16 = 65,536 different shades of gray. Here intensity level zero (0) corresponds to black (0) and level 65,535 corresponds to white (1). Again, any value in between 0 and 65,535 represents a gray shade.
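To make the arithmetic concrete, here is a small Python sketch of this normalization. It simply divides by the number of levels (256), matching the worked examples above; note that many libraries divide by the maximum level (2^n − 1 = 255) instead, which maps 255 exactly to 1.0.

def normalize(intensity, bits=8):
    # Map an integer intensity to the 0..1 range by dividing by the
    # number of levels (2**bits), as in the examples above.
    levels = 2 ** bits  # 256 levels for an 8-bit image
    return intensity / levels

for value in (0, 56, 127, 198, 255):
    print(value, round(normalize(value), 3))
# prints: 0 0.0, 56 0.219, 127 0.496, 198 0.773, 255 0.996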
So far, we have seen that each pixel has an intensity value between 0 and 2^n − 1. Considering an 8-bit image, each pixel has 256 possible intensities, which means that each pixel is represented in an 8-bit format. An 8-bit image is also known as a “Byte” image (because 8 bits = 1 byte, simple enough). The intensities, which are in between 0 and 255, are represented as 8-bit binary as demonstrated below.
If you are familiar with binary, then it should be easy to understand, and you would have spotted an error in it as well. ;) Each pixel intensity (0…255) is represented and stored in its 8-bit binary format. Thus, while working with individual pixels and manipulating their values, this little detail should always be kept in mind.
Still confused? The following pictures from Wikipedia should help. Here you can see a single image in five different depths. As the depth of the image increases, more colors are used in the construction of the image, which makes the image look smoother and more realistic. When we see something in the real world with our eyes, we see an infinite number of colors and shades in nature. But unfortunately, images, being digital in nature, cannot capture all of them, and hence are quantized to a finite number of colors determined by the depth of the image. We will discuss more about how cameras perform sampling and quantization of images in my upcoming posts.
Now let's discuss more about color images and how their color is actually represented. Every digital color image is represented according to a color space. There are many types of color spaces, some of which are – RGB, RGBA, HSV, HSL, CMYK, YIQ, YUV, YCbCr, YPbPr, etc. We are interested only in RGB and HSV, and a little bit of YCbCr. I will give you a brief idea about these three; for the rest, please refer to Wikipedia.
RGB Color Space
RGB stands for Red-Green-Blue. This color space utilizes a combination of the three primary colors viz. Red (R), Green (G) and Blue (B) to represent any color in the image. This makes it the most widely used, intuitive and easy to use color model. It uses the technique of additive color mixing to create new colors. By mixing different intensities of Red, Green and Blue, we can get any possible color. For 8-bit images, each channel (R, G and B) can have an intensity value in between 0 and 255. Thus a mix of these three colors can result in 256 × 256 × 256 = 16,777,216 different colors! You can see it for yourself! Open MS Office, and there you can have the following Color Mixer–
There you can choose your color from a palette. You can also choose from two different color models (RGB and HSL). In RGB model, you can view the individual values of R, G, and B. You can move marker in the right column up and down to adjust the intensity level of that color. In the bottom, there is a horizontal scroll bar which determines the opacity of the color.
Let’s delve deeper into this. Let’s take the following image which I created using (surprisingly) MS Paint! The picture contains several bands of colors. The top band represents pure red color, below that it shows how it fades into white color. The same is repeated with pure green and pure blue, followed by yellow, cyan and magenta (which are a combination of two of the primary colors) and then two bands of white and black color.
So basically what I did was feed this image into a code which I wrote using OpenCV 2.4.3 and split apart the red, blue and green channels. And this is what I got —
As you can see, in the Red (R) channel, the white areas indicate the presence of red color where black areas indicate its absence. As expected, the red, yellow, magenta and white areas of the input image have turned white whereas pure green, pure blue, pure cyan and black areas have turned black. In the other areas, you can see a transition from black to white, which means that the intensity of red color increases gradually in these areas. Similar explanations can be given for Green (G) and Blue (B) channels.
But this is merely a computer generated image. Upon running the same code on the Lena image, this is what you get —
As you can see, the Red (R) channel is quite bright as compared to other two channels. This indicates that the overall intensity of red color is higher as compared to other colors. Well, you can try it out yourself. I have shared a code called usingMouse.cpp in the Code Gallery. You can find the Windows executable here (I will post the Linux executable soon). There will be a file called
usingMouse.exe along with the image lena.jpg. Download both to your computer. Then open command prompt and go to the directory where you have downloaded the files, and then run it by typing the following:
For now, let's skip how to create the executable (which we will discuss later). After running the executable, you should see a window with the lena.jpg image displayed in it. If you hover the mouse over the image, it shows the (x,y) coordinate position. If you left-click somewhere on the image, it displays the RGB values of that pixel. If you right-click anywhere in the image, it displays the HSV values (which we will discuss next).
So, basically, this program helps you to extract the RGB information of each pixel. Feel free to replace the lena.jpg image with any other image of your choice and explore it! Have fun with it! :)
HSV Color Space
HSV stands for Hue-Saturation-Value. Before we describe it, I would like to show you the following picture.
Suppose I want to extract the yellow region of the ball. In this case, there is a lot of variation in the color intensity due to the ambient lighting. The top portion of the ball is very bright, whereas the bottom portion is darker as compared to the other regions. This is where the RGB color model fails. Due to such a wide range of intensity and color mix, there is no particular range of RGB values which can be used for extraction. This is where the HSV color model comes in. Just like the RGB model, in HSV model also, there are three different parameters.
Hue: In simple terms, this represents the “color”. For example, red is a color. Green is a color. Pink is a color. Light Red and Dark Red both refer to the same color red. Light/Dark Green both refer to the same color green. Thus, in the above image, to extract the yellow ball, we target the yellow color, since light/dark yellow refer to yellow.
Saturation: This represents the “amount” of a particular color. For example we have red (having max value of 255), and we also have pale red (some lesser value, say 106, etc).
Value: Sometimes represented as intensity, it differentiates between the light and dark variations of that color. For example light yellow and dark yellow can be differentiated using this.
This makes the HSV color space largely independent of illumination and makes the processing of images easier. But it isn't very intuitive, and some people may have some difficulty understanding its concepts.
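As a concrete illustration of why this matters, here is a hedged OpenCV-Python sketch of extracting a yellow region by thresholding in HSV. The file name ball.jpg and the hue/saturation/value bounds are illustrative assumptions, not values from this post; the bounds would need tuning for a real photo.

import cv2
import numpy as np

img = cv2.imread("ball.jpg")                   # hypothetical input file
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)     # OpenCV loads color images as BGR

# Rough bounds for "yellow" (OpenCV stores hue in the 0-179 range).
# These numbers are guesses and usually need per-image tuning.
lower = np.array([20, 100, 100])
upper = np.array([35, 255, 255])

mask = cv2.inRange(hsv, lower, upper)          # white where the pixel falls in the range
result = cv2.bitwise_and(img, img, mask=mask)  # keep only the yellow region
cv2.imwrite("yellow_only.jpg", result)

Because hue varies with lighting far less than the R, G and B values do, a single hue band can cover both the bright top of the ball and its darker underside.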
Now let's take the same color band image and convert it to HSV and split each channel. So here is what I got–
As you can see, each and every color has a separate hue value. It can be seen clearly that different shades of red have the same hue value. Saturation refers to the amount of color. That’s why you can see such a variation in the saturation channel. The value (or intensity) is the same here since it is a computer generated image. In real images, there will be variation in the intensity channel as well. So, applying the same on the Lena image, we get something like this–
You can get individual H S V values of any image by executing the
usingMouse.exe file as discussed above.
But how the heck can a color image be comprised of three grayscale images?
Well, this is a genuine question, which any beginner should ask when learning about color spaces. In the examples above (both the color band and the Lena image), we have seen that the three channels separated from the original image are grayscale images. Whether it is the red/green/blue channel of an RGB image, or the hue/sat/val channel of HSV image. Any great ideas?!
Well, you know that grayscale images are represented as a single channel. The intensity of each pixel is represented as a value in the range 0-255 (for 8-bit images). Well, this is exactly what happens here. If you use the
usingMouse.exe file to view the RGB/HSV values of each pixel, you will realize that each of the three channels can be represented separately having a value 0-255. Thus, when all the pixel values are taken together, we can have three separate channels, each having a separate intensity value for each of its pixels. This is exactly how a grayscale image is represented, and hence is stored in that format.
The converse is equally true. When you combine three grayscale images, it will result in a color image. And this is no magic as well! The three grayscale images are combined and represented as a color image.
Y’CrCb Color Space
We won’t go into its details. In short, this is another sophisticated color space and is used in video processing and transmission. This is because one of the components (Y) is the major uncompressed component, whereas the other two components are compressed a lot, thus saving bandwidth for transmission. In short,
Y = Luma or Luminescence or Intensity
Cr = RED component minus reference value
Cb = BLUE component minus reference value
Several modern frame grabbers return this type of image. There was a time when I was taking in frames from a camera, and I was confused because the image it returned had a blue-ish tint. At that time I was unfamiliar with this color space. After learning about it, I converted the frames to RGB format to get the actual desired color picture. So before using any camera, do check the type of image returned by it.
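If you ever run into the same blue-ish tint, the fix is a one-line conversion in the OpenCV Python bindings. This is a sketch: frame stands for whatever Y'CrCb image your grabber returned, and it is filled with a black placeholder here only so the snippet runs on its own.

import cv2
import numpy as np

# Placeholder standing in for a Y'CrCb frame from a grabber.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Convert Y'CrCb back to BGR so the image displays with its true colors.
bgr = cv2.cvtColor(frame, cv2.COLOR_YCrCb2BGR)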
Image Channels
Well, this is a concept we have been talking about all the way through this tutorial. For example, in an RGB image, there are three channels – the R, G and B channels; in an HSV image, there are three channels – the H, S and V channels. So this is not a new concept. As per Wikipedia, a channel is a grayscale image comprising only one of the components (R/G/B or H/S/V) of the image.
Software like OpenCV and MATLAB usually supports images with up to four channels. Usually we have three channels (like RGB, HSV), but sometimes we also have a fourth channel (like RGBA, CMYK).
Here we will discuss how to implement these things in software like MATLAB and OpenCV.
To convert an image from one color space to another, the keyword is
cvCvtColor(). The syntax is–
cvCvtColor ( <source>, <destination>, <conversion_code> );
cvCvtColor( src, dst, CV_BGR2HSV);
Important: OpenCV stores RGB images in BGR format (not RGB) by default. Please keep this mind before implementing it.
There are lots of conversion codes, some of which are–
CV_BGR2HSV, CV_HSV2BGR, CV_GRAY2BGR, CV_BGR2GRAY
If you want to split the different channels, then use
cvSplit( <source>, <dest0>, <dest1>, <dest2>, <dest3>);
cvSplit( src, b, g, r, NULL);
If you want to merge different channels, then use
cvMerge( <src0>, <src1>, <src2>, <src3>, <dest>);
cvMerge( h, s, v, NULL, dst);
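The calls above use the old OpenCV C API. For reference, the same three operations in the newer Python bindings (cv2) look roughly like this; it is a sketch that assumes lena.jpg sits in the working directory, as in the examples earlier in this post.

import cv2

img = cv2.imread("lena.jpg")                # loaded as BGR by default

hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)  # color-space conversion
b, g, r = cv2.split(img)                    # split into single-channel images
merged = cv2.merge((b, g, r))               # merge the channels back together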
In MATLAB, to convert an image from one form to another, there are separate functions for each type of conversion–
I = rgb2hsv(img);
I = hsv2rgb(img);
I = rgb2gray(img);
If you want to view the individual channels of the image, then try this out–
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
This is because in MATLAB, everything is represented as arrays and matrices, and thus the third index represents the third dimension of the matrix.
If you want to combine the different channels, then use the
cat() function, which basically concatenates the arrays and matrices–
rgbImage = cat (3, redChannel, greenChannel, blueChannel);
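If you work in Python rather than MATLAB, the same indexing and concatenation can be done with NumPy arrays. This sketch assumes the array is in RGB channel order, as MATLAB's imread returns; remember that cv2.imread returns BGR, in which case index 0 is the blue channel.

import numpy as np

# Placeholder H x W x 3 image in RGB order (stands in for a real image).
rgbImage = np.zeros((4, 4, 3), dtype=np.uint8)

redChannel   = rgbImage[:, :, 0]
greenChannel = rgbImage[:, :, 1]
blueChannel  = rgbImage[:, :, 2]

# Concatenate along the third dimension, like MATLAB's cat(3, ...).
rebuilt = np.dstack((redChannel, greenChannel, blueChannel))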
The codes used to generate the RGB/HSV images and channels used in this post can be found in the CV Code Gallery of maxEmbedded.
So folks, this was all about colors and color spaces. Let's summarize it.
- We discussed color depth in detail, including its relationship with intensity.
- We discussed three color spaces – RGB, HSV and Y'CrCb.
- In RGB and HSV color models, we discussed the role of each component along with two demonstrations.
- The RGB and HSV values were studied using the usingMouse.exe program from the code gallery.
- Then we defined the already known concept of image channels.
- Then we learnt about the software implementation of the techniques discussed in this tutorial in both OpenCV and MATLAB.
Thank you for reading this long post till the end! I would be glad if you post your feedback, doubts, queries, etc as comment below, so that I will be encouraged to write more tutorials for you! :)
Till then, please subscribe to maxEmbedded via email or RSS Feeds to stay updated.
VIT University, Vellore, India |
The Muscovy duck (Cairina moschata) is a large duck native to Mexico, Central, and South America. Small wild and feral breeding populations have established themselves in the United States, particularly in the lower Rio Grande Valley of Texas, as well as in many other parts of North America, including southern Canada. Feral Muscovy ducks are found in New Zealand and have also been reported in parts of Europe.
They are a large duck, with the males about 76 cm or 30 inches long, and weighing up to 7 kg or 15 pounds. Females are considerably smaller, and only grow to 3 kg or 7 pounds, roughly half the males' size. The bird is predominantly black and white, with the back feathers being iridescent and glossy in males, while the females are more drab. The amount of white on the neck and head is variable, as well as the bill, which can be yellow, pink, black, or any mixture of these. They may have white patches or bars on the wings, which become more noticeable during flight. Both sexes have pink or red wattles around the bill, those of the male being larger and more brightly colored.
Although the Muscovy duck is a tropical bird, it adapts well to cooler climates, thriving in weather as cold as −12°C (10°F) and able to survive even colder conditions. In general, Barbary duck is the term used for C. moschata in a culinary context.
The domestic breed, Cairina moschata forma domestica, is commonly known in Spanish as the pato criollo ("creole duck"). They have been bred since pre-Columbian times by Native Americans and are heavier and less able to fly long distances than the wild subspecies. Their plumage colors are also more variable. Other names for the domestic breed in Spanish are pato casero ("backyard duck") and pato mudo ("mute duck").
Neuroscience: A Journey Through the Brain
The Structure of a Neuron: Dendrites
The word "dendrite" comes from the Greek word for "tree", and reflects the appearance of the dendrites. Dendrites resemble the branches of a tree as they extend from the soma. The dendrites of a single neuron are collectively called the dendritic tree, and each branch is called a dendritic branch. Dendrites have a wide variety of shapes and sizes, and are used to help classify groups of neurons.
Dendrites function as the 'antennae' for the neuron, and are thus covered with thousands of synapses. The membrane of the dendritic tree has specialized proteins called receptors in it to detect neurotransmitters released by other neurons into the synapse. Some dendrites are also covered with small bumps called dendritic spines, which are believed to isolate various chemical reactions that are triggered by some types of synaptic activation.
Say What? A Parent's Guide to Forgotten Literacy Words
Have you ever sat down to help your child with a homework assignment or school project and felt like you needed a dictionary before you began? On occasion, a worksheet will come home with one of my children and I'll find myself attempting to recall information I learned 20+ years ago. When we're not using information on a regular basis, we tend to forget. I don't know about you, but I don't often discuss onomatopoeia or homophones in my everyday life. While we remember things like adjectives and verbs, we might find ourselves scratching our heads when it comes to things like diphthongs and digraphs.
Here is a handy refresher for some of those words we haven't thought much about since the fourth grade.
Synonym - A synonym is a word that has the same or nearly the same meaning as another word.
Example: Huge and enormous are synonyms.
Antonym - An antonym is a word that has the opposite meaning of another word.
Example: Wet and dry are antonyms.
Homonym - Homonyms are words that are pronounced the same and have the same spelling, but have different meanings.
Example: The tree is covered in bark. My dog likes to bark.
Homophone - Homophones are a type of homonym. Homophones are two words that are pronounced the same but have different meanings and, often, different spellings.
Example: I love to buy things on sale. Do you know how to sail a boat?
Homographs - Homographs are words that are spelled the same, but have different meanings.
Example: I went to the county fair. It's important to play fair.
Diphthong - A diphthong is when two vowel sounds are connected in a smooth, gliding manner. A diphthong is two adjacent vowel sounds in the same syllable. The combined sound doesn't necessarily match the sound of either vowel on its own.
Example: oi in boil, ou in round, au in sauce
Digraph - A digraph is two adjacent letters that make one sound.
Example: ck in black, ea in reach
Onomatopoeia - Onomatopoeia is the written representation of a sound.
Example: oink, crash, tweet
Predicate - The predicate is the part of the sentence or statement that tells something about the subject, and typically contains a verb.
Example: The baker made a chocolate cake.
Simile - A simile is a figure of speech that compares two things using the word like or as.
Example: sly like a fox, quick as a cat
Idiom - An idiom is an expression that cannot be understood from the individual words, but instead has a separate meaning of its own.
Example: You have a chip on your shoulder. It's raining cats and dogs.
Hyperbole - Hyperbole is an exaggeration. It is typically used to make a point, and is not meant to be taken literally.
Example: I'm so hungry I could eat a horse.
Alliteration - Alliteration is when a series of words begin with the same sound.
Example: Baby Billy borrowed a bottle.
Preposition - A preposition is a word that comes before a noun in a sentence to show its relationship to another word in the sentence.
Example: The fox burrowed under the ground. (The preposition under shows the relationship between the words burrowed and ground.)
Pronoun - A pronoun is a word that takes the place of a noun.
Example: She, he, it |