Typhoid is an infectious disease caused by the bacterium Salmonella typhi, which produces severe symptoms in the digestive system. It can be life-threatening, but antibiotics are effective if treatment begins early.
The disease is transmitted from person to person via food or drinking water, so it is mainly hygiene and sanitary conditions that determine its spread. For this reason it is mostly seen in areas with poor sanitation and living conditions.
The incubation period is 10 to 20 days and depends on, among other things, how large a dose of bacteria has been taken in.
In mild cases, the bacterium is eliminated very early in the course of the disease and there may be only mild symptoms. It is also possible to become a healthy carrier of the infection.
A more serious case of typhoid may include high temperature, sweating, cough, headache, vomiting and constipation (diarrhoea in children).
Hepatitis A is an infection of the liver caused by the hepatitis A virus. It is spread through contaminated water and food, especially shellfish or through person to person contact where personal hygiene is poor.
Hepatitis A occurs worldwide, mostly in countries where sanitation is poor. It is now rare in Western Europe, Scandinavia, North America, Japan, New Zealand and Australia. Most cases imported into Britain have been contracted in the Indian sub-continent.
The illness caused by all forms of hepatitis is similar. Symptoms include mild fever, gastro-intestinal upset, nausea/vomiting, diarrhoea and abdominal pain. Jaundice may also occur. Infection with hepatitis A results in lifelong immunity.
Vesicoureteral reflux is a common disorder of the urinary system. The urinary system is made up of the kidneys, ureters, bladder and urethra. The body has two kidneys that drain urine to the bladder through small tubes called ureters. Urine normally travels in only one direction, i.e., from the kidneys to the bladder. Vesicoureteral reflux (VUR) occurs when urine travels backward from the bladder through the ureters to the kidneys. Vesicoureteral reflux without urinary infection is by and large harmless. However, when associated with urinary infection, VUR may cause severe kidney infections (pyelonephritis), which can lead to kidney damage.
There are two types of VUR: primary and secondary. Primary VUR is the most common and is usually caused by an irregular embryological arrangement of the ureteral tube in the bladder early in the development of the fetus, before birth. When the ureter enters the bladder, the tunnel through which it travels in the bladder may be too short, or have too large a diameter, to allow the ureter to close sufficiently during bladder filling and prevent a backup of urine. This condition may resolve as the child grows, the bladder enlarges, and the ureter lengthens. Secondary VUR occurs when there is an associated condition, such as bladder outlet obstruction, overactive bladder, myelomeningocele, voiding abnormalities, or dysfunctional elimination problems.
VUR occurs in less than 1% of healthy children. In children with a urinary tract infection (UTI), the incidence is 25 to 50%. One study found that 38% of children with antenatal (before birth) kidney swelling (hydronephrosis) were diagnosed with VUR on subsequent studies after birth. While boys had a higher incidence antenatally, females still make up 85% of the children with VUR overall. Caucasian girls had 10 times the risk of VUR versus African-American girls.
Further studies have shown a higher incidence of VUR (30-40%) in siblings of children who were already diagnosed with VUR. If you have a child with vesicoureteral reflux, it is important to talk with your physician to determine if other siblings should be evaluated for VUR.
There are two groups of patients in whom VUR is diagnosed: (1) children with prenatally detected kidney swelling (hydronephrosis), and (2) children being evaluated for a urinary tract infection. Some children are identified before birth when hydronephrosis is discovered on a prenatal screening ultrasound. These children are frequently evaluated after birth with a renal ultrasound and a voiding cystourethrogram (VCUG). A VCUG is performed by placing a catheter in the urethra (the natural voiding channel) and injecting X-ray-visible dye into the bladder, allowing X-rays to delineate the flow of the urine.
The second group of children may require an evaluation for VUR after a urinary tract infection. While opinions vary, it is generally accepted that the following children with a UTI should be evaluated for VUR with a renal ultrasound and VCUG: any child less than 5 years of age, a child with a UTI and fever (regardless of age), and any boy with a UTI (unless they are sexually active or have a significant past history of genitourinary problems).
Your healthcare provider may recommend another form of imaging called a radionuclide scan. This procedure allows the provider to continue to monitor the VUR with minimal radiation exposure. A DMSA scan may be ordered to detect scarring of the kidney or an infection in the kidney (pyelonephritis).
The VCUG is important in helping to stage the severity of VUR.
Vesicoureteral reflux itself is usually asymptomatic, and a urinary infection is the presenting picture. Children may present initially with the following signs of a urinary tract infection: fever, malodorous urine, blood in the urine, urinary frequency, pain with urination, bedwetting, protein in the urine, lethargy or gastrointestinal symptoms. Newborns may have nonspecific symptoms such as poor feeding and irritability.
Vesicoureteral Reflux without urinary infection for the most part does not cause injury to the kidneys. However, VUR with infection can result in an infection of the kidney (pyelonephritis) which can result in scarring of the kidney. Fortunately, significant kidney scarring is rare. Significant scarring of the kidneys can result in high blood pressure, renal impairment, renal failure, and complications in pregnancy as an adult. Prophylactic antibiotic treatment to prevent urinary infections in children is begun immediately after diagnosis of VUR to try and decrease the risk for these complications.
The management and treatment of VUR depend on many factors, and the plan for your child should be individualized through an in-depth discussion with your health care provider. Lower grades of VUR (1-3) are frequently managed initially by a primary care provider. Higher grades of VUR, or complex and complicated cases, are usually managed jointly with a surgical specialist called a pediatric urologist.
VUR has a spontaneous resolution rate and is usually managed with prophylactic (preventative) antibiotics, in the hope that growth of the child will bring concomitant growth of the ureteral tunnel. Should the tunnel grow enough, the VUR may resolve without the need for a surgical procedure. Prophylactic antibiotics are given at very low doses daily to reduce possible side effects. Newborns are usually given amoxicillin or Keflex (cephalexin). Children older than 2 months can be given trimethoprim (Primsol) or Bactrim (trimethoprim-sulfamethoxazole). Follow-up X-rays are usually spaced 12-18 months apart so that the child has time to grow.
Spontaneous resolution of VUR has one major caveat. It is impossible to predict when or if the VUR will improve or resolve. Some children with high grade VUR can have resolution in a short time frame and some children with low grade VUR will never have spontaneous resolution. Fortunately most children with Grades I-III VUR will have improvement or resolve their urinary reflux by the time they are 2 to 5 years of age. Children with Grades IV & V urinary reflux have a lower resolution rate of VUR. These children too can be followed but frequently require a surgical procedure to bring closure to the VUR
If a child has a breakthrough infection (a urinary tract infection while on the preventative antibiotic), the conservative plan of monitoring the reflux must be abandoned and a surgical procedure is necessary to prevent further infections from injuring the kidneys. In general, infants are at greater risk for renal injury than older children.
Surgery is also an option if a child has had persistent VUR after years of follow-up with little or no improvement. However, if no infections have occurred surgery is not mandatory. In the older child, many families frequently select surgery to bring closure to the problem, allow the discontinuance of antibiotics, and avoid any further potential side effects of VUR.
Surgical treatment is offered in two ways: open ureteral reimplantation surgery and minimally invasive endoscopic Deflux injections. The gold standard is open surgery, which involves rearranging the ureters in the bladder in a non-refluxing, natural position. Open surgery is more than 95% successful and usually does not require a repeat VCUG X-ray after surgery. The procedure is performed through a 4 cm low abdominal incision, just above the pubic bone and below the underpants line. The child routinely spends only the night of surgery in the hospital and generally gets back to normal activity in 3-5 days (for children 4-6 years old). Infants and toddlers rarely need surgery but, if required, are frequently back to themselves within 1-2 days.
Minimally invasive Deflux injection involves performing a telescopic (endoscopic) exam of the bladder through the urethra as an outpatient procedure. This access allows direct injection of a dextranomer bead paste ("sugar beads") under the ureter, which improves or cures VUR in about 80% of cases. Children are usually back to normal activity the same day. A VCUG X-ray is necessary after the procedure to assess the treatment.
About the Author
Peter D. Furness III, M.D., FAAP, FACS:
Dr. Furness is Associate Professor of Surgery and Pediatrics at the University of Colorado Health Sciences Center and the Associate Chief of Pediatric Urology at the Children's Hospital in Denver, Colorado.
Copyright 2012 Peter D. Furness III, M.D., All Rights Reserved
Despite technological advances, cavities are one of the most common illnesses in our society. Cavities begin when harmful bacteria in your mouth react with sugars and starches in your food, producing acids that wear away tooth enamel. Cavities can go undetected for some time because you won’t feel pain from a cavity until it reaches the center of a tooth where the pulp (nerves) is located. Parents can even pass tooth decay bacteria on to their children by kissing them or sharing eating utensils! Parents and siblings alike can spread harmful bacteria by sharing straws, treats, and toothbrushes. There are essentially three types of cavities:
Smooth Surface Cavities
Found on the outside smooth sides of teeth, these are the most preventable type of cavity, and the most easily reversed. Developing slowly, they start as a white spot where bacteria dissolve the calcium on a tooth’s enamel. These start to develop in adult teeth between the ages of 20-30.
Pit and Fissure Cavities
These are also called “coronal” cavities, and they arise on the narrow grooves on the chewing surface of teeth. Starting as early as the teen years, they progress quickly and are more difficult to clean as they are harder to reach. Dental sealants on these back teeth can help prevent these cavities.
Root Cavities
Root cavities arise on the root surface of a tooth. Found in adults past middle age who have receding gums, these develop wherever parts of the root are exposed. These types of cavities develop from poor oral hygiene, dry mouth (limited saliva production which would otherwise neutralize bacterial acids) and a diet high in sugar and starches.
To prevent cavities, fluoride can help tooth enamel heal in the early stages. Once a cavity spreads inside the tooth it will require a filling to drill out the decay and fill the space left behind. Cavity causing culprits include poor oral hygiene, snacking constantly, and sipping sweetened drinks throughout the day.
What Can You Do?
–Brush your teeth using a soft-bristled toothbrush at least twice a day.
–Floss thoroughly at least once a day.
–Visit your dentist at least once a year for cleanings and checkups.
–Use a fluoride toothpaste if you don’t have access to fluoridated water.
–Consider dental sealants to protect your back teeth.
–Eat a healthy balanced diet.
–Avoid snacking between meals.
If you have questions, concerns, or would like to schedule your next dental cleaning, you can reach our team at Vela Dental Centers Southside by calling 361.994.4900.
“Nanowires are really the major building blocks of future nano-devices,” said postdoctoral researcher Parsian Mohseni, first author of the study. “Nanowires are components that can be used, based on what material you grow them out of, for any functional electronics application.” Li’s group uses a method called van der Waals epitaxy to grow nanowires from the bottom up on a flat substrate of semiconductor materials, such as silicon. The nanowires are made of a class of materials called III-V (three-five), compound semiconductors that hold particular promise for applications involving light, such as solar cells or lasers.
Thanks to its thinness, graphene is flexible, while silicon is rigid and brittle. It also conducts like a metal, allowing for direct electrical contact to the nanowires. Furthermore, it is inexpensive, flaked off from a block of graphite or grown from carbon gases. “One of the reasons we want to grow on graphene is to stay away from thick and expensive substrates,” Mohseni said. “About 80 percent of the manufacturing cost of a conventional solar cell comes from the substrate itself. We’ve done away with that by just using graphene. Not only are there inherent cost benefits, we’re also introducing functionality that a typical substrate doesn’t have.”
Summer winds are intensifying along the Marin Coast and climate change is a likely cause, a new study says.
The winds, which blow parallel to the shore and draw cold, nutrient-rich water from the deep ocean to the surface in a process known as "upwelling," have increased over the last 60 years in three out of five regions of the world, including the California Coast, according to an analysis published earlier this month in the journal Science.
At this point "we don't know what the implications are," said William Sydeman, president of the Farallon Institute for Advanced Ecosystem Research in Petaluma, who led the study by seven scientists in the U.S. and Australia. "On the one hand it could be good. On the other hand, it could be really bad."
The shift could already be having serious effects on some of the world's most productive marine fisheries and ecosystems off California, which benefit from upwelling.
The fertilizing nutrients necessary to begin the marine food chain originate in the cold, lower depths of the ocean. Those nutrients eventually must come near the surface and bask in sunlight for photosynthesis to occur. That, in turn, generates conditions in which zooplankton, tiny marine animals that include shrimp-like crustaceans called krill, flourish. Zooplankton are eaten by seabirds, fish and marine mammals.
To get to the surface, nutrients ride funnels of water that are created by winds. That process — upwelling — is the key element that begins the food chain. Too much or too little wind can upset the process.
Stronger winds have the potential to benefit coastal areas by bringing a surge of nutrients and boosting populations of plankton, fish and other species. But they could also harm marine life by causing turbulence in surface waters, disrupting feeding, worsening ocean acidification and lowering oxygen levels, the study says.
The windier conditions are occurring in important currents in the Pacific and off Marin, which brings an abundance of sea life to the area.
Scientists said their results lend support to a hypothesis made more than two decades ago by oceanographer Andrew Bakun. He suggested that rising temperatures from the human-caused buildup of greenhouse gases, by causing steeper atmospheric pressure gradients between oceans and continents, would produce stronger winds during summer and drive more coastal upwelling.
To test that claim, researchers reviewed and analyzed 22 published studies that tracked winds in the world's five major coastal upwelling regions using data from the 1940s to the mid-2000s.
Scientists found a trend of windier conditions in the California Current along the west coast of North America, the Humboldt Current off Peru and Chile and the Benguela Current off the west coast of southern Africa. In the Canary and Iberian currents off northern Africa and Spain, however, they found no clear signs of increasing winds.
Researchers can't say for sure that human-caused climate change is to blame, but they said finding a pattern that was consistent across several parts of the planet gives a strong indication it is a factor. The study also found that the increase in winds was more pronounced at higher latitudes, which is in line with other observed effects of climate change.
The study's conclusions are controversial among ocean scientists. They say the records used in the analysis do not go back far enough in time to rule out naturally occurring climate cycles such as the Pacific Decadal Oscillation, which shifts between warm and cool phases about every 20 to 30 years and also influences atmospheric conditions.
"It doesn't prove that global warming is driving this," said Art Miller, a climate scientist at Scripps Institution of Oceanography who was not involved in the study.
Distributed by MTC Direct. IJ staff writer Mark Prado contributed to this report.
The Ku Klux Klan in Reconstruction North Carolina: Methods of Madness in the Struggle for Southern Dominance
The Ku Klux Klan was known to be active in certain parts of North Carolina, but Klan activity and other white supremacist group activity became prominent in many counties of the state in the post-Reconstruction years. Once Union soldiers were no longer occupying the South, white supremacist groups began to dominate the Southern landscape and started a violent campaign to disenfranchise, lynch, and terrorize African Americans. When the Union soldiers left after Reconstruction, African Americans suddenly found themselves in very desperate and dangerous situations; sometimes life was even more tragic and terrifying than in the pre-Civil War slavery years. The drastically negative changes that took place in these African Americans' lives help modern historians understand why and how violence and disenfranchisement curtailed the rights, civil liberties, and safety of African Americans in the South until well into the 20th century, when the Civil Rights Movement began to gain a foothold in the U.S.
The sun flings out solar wind particles in much the same manner as a garden sprinkler throws out water droplets.
The artist's drawing of the solar wind flow was provided courtesy of NASA.
The Spiral of the IMF
The solar wind is formed as the Sun's top layer blows off into space. It carries magnetic fields still attached to the Sun.
Streams appear to flow into space as if they are spiraling out from the Sun, as shown in this figure. This is called the "spiral angle" of the IMF (interplanetary magnetic field).
Planets have to be in a stream's path to be affected.
What are the components of blood?
Your blood makes up about 7% of your body weight, and in an adult amounts to around 5 liters (about 1.3 gallons) [1]. If you lose too much blood, for instance during a trauma or an accident, it can be life-threatening, and you may need a blood transfusion. We are able to donate up to half a liter, which is about 10% of our total blood volume; however, such blood donation requires overall good health, and time to rest and recover afterwards. Roughly half of the blood volume consists of various blood cells, while the other half is blood plasma, the liquid which enables your blood to flow throughout your body. Every component in the blood has its own critical function, which will be introduced in this article.
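As a rough check of these figures, here is a minimal Python sketch of the arithmetic. The 7% figure and the half-liter donation come from the text above; the example body weight of 75 kg and a whole-blood density of about 1.06 kg/L are illustrative assumptions, not values given in the article.

```python
# Rough estimate of total blood volume from body weight, using the ~7% figure
# quoted above. Blood density (~1.06 kg/L) and the example body weight are
# illustrative assumptions, not values from the article.

BLOOD_FRACTION_OF_BODY_WEIGHT = 0.07   # ~7% of body weight is blood
BLOOD_DENSITY_KG_PER_L = 1.06          # approximate density of whole blood

def estimated_blood_volume_liters(body_weight_kg: float) -> float:
    """Return an approximate total blood volume in liters."""
    blood_mass_kg = body_weight_kg * BLOOD_FRACTION_OF_BODY_WEIGHT
    return blood_mass_kg / BLOOD_DENSITY_KG_PER_L

if __name__ == "__main__":
    weight = 75.0   # kg, example adult
    volume = estimated_blood_volume_liters(weight)
    donation = 0.5  # liters, a typical blood donation
    print(f"Estimated blood volume: {volume:.1f} L")
    print(f"A 0.5 L donation is about {100 * donation / volume:.0f}% of that volume")
```

For a 75 kg adult this gives roughly 5 liters, and a half-liter donation then comes out at about 10% of the total, consistent with the figures quoted above.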
The blood plasma consists of approximately 90% water [1]
Water is critical for life. A human can survive for days and even weeks without food, but only a few days without water. The reason is that water is a resource that constantly recycles. We lose water from our body both through urine and as evaporation from our skin through sweat. Neither of these processes is something we can consciously control, but they are important for temperature control as well as for getting rid of waste products. On the other hand, our water intake is under our control. We get water from what we drink, but also through food. Mild dehydration may lead to headache, overheating, or dizziness, but is not life-threatening under normal circumstances. In cases of extreme dehydration, you can get liquid through intravenous transfusions directly into your blood.
The blood plasma contains various soluble components
In addition to giving blood its fluidity, so that blood cells can be transported throughout the body, water is also important as a solvent for the transport of nutrients and waste products. Minerals, vitamins, glucose, and various types of proteins, along with the water, make up the blood plasma. Although the color of blood is red, the color of the blood plasma is actually yellow. The red color comes from the large number of red blood cells, as will be described below. The yellow color of the blood plasma comes from the various water-soluble components, such as nutrients and various signaling molecules. Furthermore, your blood is the carrier of various waste products that are filtered out from the blood to the urine through the kidneys. In addition, the blood contains various proteins that have structural as well as regulating or signaling roles. One important type of structural protein is the group of coagulation factors that are required for proper blood clotting. Insulin is an example of a signaling molecule. People suffering from diabetes must closely monitor and adjust the glucose and insulin levels in their blood, to ensure a proper balance.
The cells in our blood
The cells in our blood are divided into two main types: the red blood cells and the white blood cells. In addition, there are specialized cell fragments, called platelets, that are derived from a specific type of white blood cell, the megakaryocyte. The red blood cells (RBCs, also called erythrocytes) take up about 45% of the total blood volume [1]. The red color is due to abundant amounts of the protein hemoglobin, which binds and transports oxygen from the lungs throughout our body. The white blood cells are critical for our immune system, which can broadly be divided into the innate and the adaptive immune system. The innate immune system recognizes patterns that are associated with pathogens, and mounts a fast reaction towards infections. The adaptive immune cells recognize specific epitopes, and can be educated to recognize epitopes associated with disease. The response of adaptive immune cells is initially slower, but the education leads to a "memory" so that upon later encounters, we can quickly recognize and eliminate the threat. Immunization is based on the ability of the adaptive immune system to recognize the pathogen and develop a protective "memory", or immunity. Lastly, the platelets, also called thrombocytes, are not cells, but rather cell fragments. They are critical for blood coagulation, ensuring that upon a cut or damage to a blood vessel, the bleeding will stop.
Human blood has many well-defined components and is the major transport system for nutrients, signaling molecules, waste products and immune cells. Therefore, a simple blood sample can reveal many types of disease or imbalance in the body, and thereby help with initial diagnosis.
1. Molecular Biology of the Cell, Fourth Edition, 2002. Editors: Alberts et al. Publisher: Garland Science.
Adaptive Immune System: The part of the immune system that can adapt to and "remember" a pathogen. This feature is unique to higher vertebrates, and is based on a clonal expansion of cells that are able to react to a pathogen. Immunizations take advantage of the adaptive immune system by introducing the body to potential pathogens (for instance the inactivated flu virus) to elicit a memory response that protects you from future infections.
Epitope: A surface that can be recognized by the immune system. It is also called an antigenic determinant, as it is a part of an antigen, the unit that an antibody or immune receptor recognizes. The epitope can be a linear sequence of, for instance, amino acids (a small part of a protein, called a peptide), or it can be a 3-dimensional surface composed of different parts.
Innate Immune System: The part of the immune system that is innate, that we are born with. It generally recognizes so-called Pathogen-Associated Molecular Patterns (PAMPs), for instance common repetitive structures on the surface of bacteria and viruses.
Pathogen: A foreign organism that leads to damage or disease (pathology). Bacteria, viruses, and parasites are all examples of potential pathogens. However, the commensal bacteria in your gut, or the cultures in yoghurt, are not pathogens.
Acknowledgement: Thanks to Sharp for critical review of, and valuable feedback on, this article.
10/13-2012: Educational article: The Components of the Blood and the Immune System.
Poetry Portfolios: Using Poetry to Teach Reading
Grades: K – 2
Lesson Plan Type: Standard Lesson
Estimated Time: Five 15-minute sessions
MATERIALS AND TECHNOLOGY
- Computer with Internet access
- “A Good Poem Will Give You Goose Bumps!” by Kenn Nesbitt
- Poetry books
- Chart paper
- 8-½” x 11” paper for portfolios
- Markers, crayons, and pencils
- Pocket chart
- Sentence strips
- Highlighter tape or sticky notes
1. Read "A Good Poem Will Give You Goosebumps!" by Kenn Nesbitt. This article provides background information explaining why poetry is good for children and important in the classroom. The author also suggests five ways to engage your students with poetry.
2. Compile the following materials into one place for easy access:
3. Set up a "Poetry Corner" for storage of all of the materials used during the lesson (e.g., pointers, sentence strips, poetry books) and poems that your students study throughout the year. Allow students to visit this area and review previously learned poems by using the pointers and sentence strips to work on voice-to-print matching, sequencing, and poetic language.
4. Before beginning your poetry studies, have students illustrate the My Poetry Portfolio cover sheet. Share the poem on the handout and then ask students to illustrate the poem.
The East Pacific Rise is a mid-oceanic ridge, a divergent tectonic plate boundary located along the floor of the Pacific Ocean. It separates the Pacific Plate to the west from (north to south) the North American Plate, the Rivera Plate, the Cocos Plate, the Nazca Plate, and the Antarctic Plate. It runs south from the Gulf of California in the Salton Sea basin in southern California to a point near 55° S, 130° W, where it joins the Pacific-Antarctic Ridge, which trends west-southwest towards Antarctica near New Zealand (though in some uses the PAR is regarded as the southern section of the EPR). Much of the rise lies about 3200 km (2000 mi) off the South American coast and rises about 1,800–2,700 m (6,000–9,000 ft) above the surrounding seafloor.
The oceanic crust is moving away from the East Pacific Rise to either side. Near Easter Island the rate is over 15 cm (6 in) per year, which is the fastest in the world, but it is lower, at about 6 cm (2 1⁄2 in) per year, at the north end. On the eastern side of the rise the eastward-moving Cocos and Nazca plates meet the westward-moving South American Plate and the North American Plate and are being subducted under them. The belt of volcanoes along the Andes and the arc of volcanoes through Central America and Mexico are the direct results of this collision. Due east of the Baja California Peninsula, the Rise is sometimes referred to as the Gulf of California Rift Zone. In this area, newly formed oceanic crust is intermingled with rifted continental crust originating from the North American Plate.
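These spreading rates translate directly into crust age: crust a given distance from the ridge axis is roughly that distance divided by the speed at which one flank moves away. The sketch below assumes the quoted figures are full (both-flank) spreading rates, so each flank moves at about half that value, and the 300 km example distance is arbitrary; neither assumption comes from the text above.

```python
# Back-of-envelope seafloor age from spreading rate. Assumes the quoted rates
# are full (both-flank) spreading rates, so one flank moves away from the
# ridge axis at roughly half that speed. Distances are examples only.

def crust_age_million_years(distance_km: float, full_rate_cm_per_yr: float) -> float:
    """Approximate age of oceanic crust a given distance from the ridge axis."""
    half_rate_km_per_myr = (full_rate_cm_per_yr / 2) * 10  # 1 cm/yr = 10 km/Myr
    return distance_km / half_rate_km_per_myr

if __name__ == "__main__":
    # Near Easter Island (~15 cm/yr) versus the northern end (~6 cm/yr)
    for rate in (15.0, 6.0):
        age = crust_age_million_years(distance_km=300.0, full_rate_cm_per_yr=rate)
        print(f"At {rate} cm/yr, crust 300 km from the axis is ~{age:.0f} Myr old")
```

At the fast southern rates, crust 300 km from the axis is only a few million years old, while the same distance at the slower northern rate corresponds to roughly ten million years.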
Near Easter Island, the East Pacific Rise meets the Chile Rise at the Easter Island and Juan Fernandez microplates, trending off to the east where it subducts under the South American Plate at the Peru–Chile Trench along the coast of southern Chile. The southern extension of the East Pacific Rise (called the Pacific-Antarctic Ridge) merges with the Southeast Indian Ridge at the Macquarie Triple Junction south of New Zealand.
Along the East Pacific Rise the hydrothermal vents called black smokers were first discovered and have been extensively studied. These vents are forming volcanogenic massive sulfide ore deposits on the ocean floor. Many strange deep-water creatures have been found here. The southern stretch of the East Pacific Rise is one of the fastest-spreading sections of the Earth's mid-ocean ridge system.
According to a study published in a recent issue of Science, young adults who are at a higher genetic risk for Alzheimer’s disease may already show differences in how their brains handle spatial navigation.
Researchers say that it’s too early to tell whether the brain differences are an indication of Alzheimer’s: “That is still unclear and needs to be investigated in further studies,” says senior researcher Dr. Nikolai Axmacher of the German Center for Neurodegenerative Diseases in Bonn.
Even so, researchers are hopeful that study findings will help improve understanding of the earliest processes that lead to Alzheimer’s. Axmacher notes that if brain differences do predict Alzheimer’s disease, that information could be used to identify high-risk people earlier.
The study involved 75 young adults, half of whom carried a variant of the APOE gene that may boost the risk of Alzheimer’s. According to Axmacher, one in six people carry the variant APOE4 and they have a higher risk of developing Alzheimer’s compared to non-carriers.
An advanced form of MRI was used to study the brain area known as the entorhinal cortex, which contains “grid cells.” Axmacher explains that those cells are essential in spatial navigation.
The team tracked the activity in the grid cells as participants navigated a “virtual” task that measured their spatial memory. Participants were required to remember the spatial location of objects in a virtual arena, and place those objects in the correct place.
What did the team find? They found that, on average, those carrying the variant APOE4 showed less functioning in their grid cells during the task compared to participants who did not carry the variant.
Both groups still performed similarly on the test, which raised the question as to whether APOE4 carriers compensated by using other brain regions to navigate through the task.
“Indeed we found that the less intact the grid cell system was, the more active an adjacent brain area, the hippocampus, [was],” added Axmacher.
What was interesting, he added, was that APOE4 carriers showed a different strategy during the test: they navigated from a vantage point along the border of the virtual arena, while non-carriers navigated from the center.
Researchers note that other research suggests that excess activity in the hippocampus is part of the process that can lead to Alzheimer’s.
Dean Hartley, director of science initiatives for the Alzheimer’s Association, agrees that the results do hint at a possible new biological marker. He notes that the findings offer more clues about the roots of Alzheimer’s, which could help develop new therapies.
Source for Today’s Article:
Norton, A., “Brain Differences Seen in Young Adults at Genetic Risk of Alzheimer’s,” Medicine Net web site, October 22, 2015; http://www.medicinenet.com/script/main/art.asp?articlekey=191380.
Revised June 2018
What is tobacco?
Tobacco is a plant grown for its leaves, which are dried and fermented before being put in tobacco products. Tobacco contains nicotine, an ingredient that can lead to addiction, which is why so many people who use tobacco find it difficult to quit. There are also many other potentially harmful chemicals found in tobacco or created by burning it.
How do people use tobacco?
People can smoke, chew, or sniff tobacco. Smoked tobacco products include cigarettes, cigars, bidis, and kreteks. Some people also smoke loose tobacco in a pipe or hookah (water pipe). Chewed tobacco products include chewing tobacco, snuff, dip, and snus; snuff can also be sniffed.
How does tobacco affect the brain?
The nicotine in any tobacco product readily absorbs into the blood when a person uses it. Upon entering the blood, nicotine immediately stimulates the adrenal glands to release the hormone epinephrine (adrenaline). Epinephrine stimulates the central nervous system and increases blood pressure, breathing, and heart rate. As with drugs such as cocaine and heroin, nicotine activates the brain’s reward circuits and also increases levels of the chemical messenger dopamine, which reinforces rewarding behaviors. Studies suggest that other chemicals in tobacco smoke, such as acetaldehyde, may enhance nicotine’s effects on the brain.
What are other health effects of tobacco use?
Although nicotine is addictive, most of the severe health effects of tobacco use come from other chemicals. Tobacco smoking can lead to lung cancer, chronic bronchitis, and emphysema. It increases the risk of heart disease, which can lead to stroke or heart attack. Smoking has also been linked to other cancers, leukemia, cataracts, and pneumonia. All of these risks apply to use of any smoked product, including hookah tobacco. Smokeless tobacco increases the risk of cancer, especially mouth cancers.
Electronic cigarettes, also known as e-cigarettes or e-vaporizers, are battery-operated devices that deliver nicotine with flavorings and other chemicals to the lungs in vapor instead of smoke. E-cigarette companies often advertise them as safer than traditional cigarettes because they don't burn tobacco. But researchers actually know little about the health risks of using these devices. Read more about e-cigarettes in our Electronic Cigarettes (e-Cigarettes) DrugFacts.
Pregnant women who smoke cigarettes run an increased risk of miscarriage, stillborn or premature infants, or infants with low birth weight. Smoking while pregnant may also be associated with learning and behavioral problems in exposed children.
People who stand or sit near others who smoke are exposed to secondhand smoke, either coming from the burning end of the tobacco product or exhaled by the person who is smoking. Secondhand smoke exposure can also lead to lung cancer and heart disease. It can cause health problems in both adults and children, such as coughing, phlegm, reduced lung function, pneumonia, and bronchitis. Children exposed to secondhand smoke are at an increased risk of ear infections, severe asthma, lung infections, and death from sudden infant death syndrome.
How does tobacco use lead to addiction?
For many who use tobacco, long-term brain changes brought on by continued nicotine exposure result in addiction. When a person tries to quit, he or she may have withdrawal symptoms, including:
- problems paying attention
- trouble sleeping
- increased appetite
- powerful cravings for tobacco
How can people get treatment for nicotine addiction?
Both behavioral treatments and medications can help people quit smoking, but the combination of medication with counseling is more effective than either alone.
The U.S. Department of Health and Human Services has established a national toll-free quitline, 1-800-QUIT-NOW, to serve as an access point for anyone seeking information and help in quitting smoking.
Government Regulation of Tobacco Products
On May 5, 2016, the FDA announced that nationwide tobacco regulations now extend to all tobacco products, including:
- e-cigarettes and their liquid solutions
- hookah tobacco
- pipe tobacco
This ruling includes restricting sale of these products to minors. For more information, see the FDA's webpage, The Facts on the FDA's New Tobacco Rule.
Behavioral treatments use a variety of methods to help people quit smoking, ranging from self-help materials to counseling. These treatments teach people to recognize high-risk situations and develop strategies to deal with them. For example, people who hang out with others who smoke are more likely to smoke and less likely to quit.
Nicotine Replacement Therapies
Nicotine replacement therapies (NRTs) were the first medications the U.S. Food and Drug Administration (FDA) approved for use in smoking cessation therapy.
Current FDA-approved NRT products include chewing gum, transdermal patch, nasal sprays, inhalers, and lozenges. NRTs deliver a controlled dose of nicotine to relieve withdrawal symptoms while the person tries to quit.
Bupropion (Zyban®) and varenicline (Chantix®) are two FDA-approved non-nicotine medications that have helped people quit smoking. They target nicotine receptors in the brain, easing withdrawal symptoms and blocking the effects of nicotine if people start smoking again.
Can a person overdose on nicotine?
Nicotine is poisonous and, though uncommon, overdose is possible. An overdose occurs when the person uses too much of a drug and has a toxic reaction that results in serious, harmful symptoms or death. Nicotine poisoning usually occurs in young children who accidentally chew on nicotine gum or patches used to quit smoking or swallow e-cigarette liquid. Symptoms include difficulty breathing, vomiting, fainting, headache, weakness, and increased or decreased heart rate. Anyone concerned that a child or adult might be experiencing a nicotine overdose should seek immediate medical help.
Points to Remember
- Tobacco is a plant grown for its leaves, which are dried and fermented before being put in tobacco products. Tobacco contains nicotine, the ingredient that can lead to addiction.
- People can smoke, chew, or sniff tobacco.
- Nicotine acts in the brain by stimulating the adrenal glands to release the hormone epinephrine (adrenaline) and by increasing levels of the chemical messenger dopamine.
- Tobacco smoking can lead to lung cancer, chronic bronchitis, and emphysema. It increases the risk of heart disease, which can lead to stroke or heart attack. Smoking has also been linked to other cancers, leukemia, cataracts, and pneumonia. Smokeless tobacco increases the risk of cancer, especially mouth cancers.
- Secondhand smoke can lead to lung cancer and heart disease as well as other health effects in adults and children.
- For many who use tobacco, long-term brain changes brought on by continued nicotine exposure result in addiction.
- Both behavioral treatments and medication can help people quit smoking, but the combination of medication with counseling is more effective than either alone.
- Nicotine overdose is possible, though it usually occurs in young children who accidentally chew on nicotine gum or patches or swallow e-cigarette liquid.
- Anyone concerned that a child or adult might be experiencing a nicotine overdose should seek immediate medical help.
For more information about tobacco products and nicotine, visit our Tobacco/Nicotine webpage.
For more information about how to quit smoking, visit smokefree.gov.
For help from the National Cancer Institute: 1-877-44U-QUIT (1-877-448-7848)
The National Cancer Institute's trained counselors are available to provide information and help with quitting in English or Spanish, Monday through Friday, 8:00 a.m. to 8:00 p.m. Eastern Time.
Fusion happens between light nuclei. It cannot happen at room temperature and pressure; it needs very high energies in order to strip the electrons from the atoms and to overcome the electromagnetic repulsion of the positive charges.
The fusion reaction rate increases rapidly with temperature until it maximizes and then gradually drops off. The DT rate peaks at a lower temperature (about 70 keV, or 800 million kelvin) and at a higher value than other reactions commonly considered for fusion energy.
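As a check on the quoted temperature, a plasma temperature stated in keV converts to kelvin through Boltzmann's constant; the constant's value below is the standard one, not something given in the original answer.

```latex
% Converting a plasma temperature quoted in keV to kelvin via T = E / k_B,
% with k_B = 8.617 x 10^{-5} eV/K (standard value, not stated in the text).
T = \frac{E}{k_B}
  = \frac{7.0 \times 10^{4}\,\mathrm{eV}}{8.617 \times 10^{-5}\,\mathrm{eV\,K^{-1}}}
  \approx 8.1 \times 10^{8}\,\mathrm{K}
```

That is roughly 800 million kelvin, consistent with the figure quoted above.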
Experimentally, apart from the H-bomb, fusion has been sustained at JET, an experimental facility, by confining a plasma at the necessary high temperatures in a tokamak, a device with a specially designed magnetic field. This design is extended in ITER, which is a prototype fusion energy reactor, i.e., it will give out more energy than is spent in creating the magnetic field and the plasma. If you look at the links, these are huge constructs not suitable for bombs.
A second direction in creating the plasma temperatures necessary for sustained fusion is with very strong lasers. If you look at the photo of the system needed to reach fusion energies again you will realize that it cannot become the central part of a bomb.
That is where present-day technology is. Hopefully, by the time nanotechnology catches up with fusion, humanity will have matured enough to use fusion only for getting unlimited energy.
There are several ways to classify, discuss or define movement disorders. They can be discussed in reference to the conditions or diseases that might result in them. Alternately, they might be split into the types of movement caused and then further classified by the types of disorders that result in the varying movement anomalies. This second method can be very useful, and under it there are two main types of movement disorders, called hypokinetic and hyperkinetic. The former refers to impaired or reduced movement in one or more areas, and the latter to excess, unplanned movements.
A variety of hypokinetic movement disorder causes and diseases exist, and these range from mild illnesses that may be addressed to extremely severe and fatal illnesses. Some of those that tend to be most serious include multiple sclerosis and amyotrophic lateral sclerosis, or Lou Gehrig's disease. Over time both of these conditions further restrict movement and can result in paralysis in many different areas of the body. Another illness like this is progressive supranuclear palsy, which also results in continued movement loss and is ultimately fatal.
Other types of hypokinetic movement disorder are not as severe, though they do pose challenges. Dyspraxia is a group of conditions that is known for causing clumsiness and slow development of gross and fine motor skills. This is most felt during childhood, but can persist if ignored. Intervention with occupational therapy may help offset some movement problems resulting in relatively normal development later.
Limited-movement condition types contrast with hyperkinetic movement disorder varieties, where involuntary movements may occur frequently. Again, there are a variety of examples to choose from when discussing this set of illnesses or causes. Some people have tremors, especially in the hands or the voice, that have no underlying disease component.
This may be called essential tremor and it’s typically most noted when people try to do something, like write with a pencil, or it could be especially obvious when folks attempt to keep the hands in a certain posture. This can be caused suddenly by heavy metal exposure, thyroid disease, or by taking certain medications like lithium. Others will have the condition all their lives with first expression in early childhood.
Some conditions, like Tourette syndrome, may manifest in involuntary movements like tics, which with treatment might be lessened in number. In other conditions the excess movement cannot be controlled at all; examples include Huntington's disease, which results in jerky movements called chorea. Interestingly, some conditions are both hypo- and hyperkinetic. Parkinson's disease, for instance, causes both tremor and restricted movement, making it doubly challenging.
Movement disorder types may be mild and fixable, or extraordinarily difficult. They range in cause, expression, and treatment. Given the severity of some, any form of involuntary or restricted movement deserves medical attention. What may seem relatively harmless at first could progress to more severe symptoms at a later point.
The Tweet: “A single cup of gasoline, when ignited, has the same explosive power as five sticks of dynamite.”
In the case of an explosion or any other reaction, power depends on both the amount of energy released and the speed at which that energy is released. So, ‘explosive power’ has quite a specific meaning from a scientific point of view. The energy stored in a fuel is usually a less important factor in measuring the power of an explosion than the rate at which energy is released.
The difference between an explosive detonation and a loud, hot bang is the speed at which the detonation wave moves. If the detonation wave moves faster than the speed of sound it is considered an explosion. Usually in high explosives this speed is approximately 4500 metres per second (m/s).
So to understand whether a cup of petrol releases energy as explosively as five sticks of dynamite, we need to know how much energy is in that volume of petrol and how long it takes to burn. In 170 g of petrol there are 8,000,000 joules (or 8 MJ) of energy. This is how much energy is released when all 170 g are burnt. To burn, petrol must be mixed with air, or vaporised. Once vaporised, it releases this energy in 2 to 3 milliseconds, at a rate of 3 gigawatts (3 GW). A watt is a measurement of power and is equivalent to one joule per second.
Sticks of dynamite are all roughly 20 cm in length but vary in width from 3 to 5 cm and in weight from 0.5 to 4 pounds. Assuming that a stick of dynamite contains 0.5 pounds of explosive, there is 1 MJ of energy to be released. Dynamite burns much faster than petrol, however, in about 4 microseconds and at a rate of 250 GW. This is 83 times more powerful than a cup of petrol.
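The comparison is simply power = energy / release time. The short Python sketch below reproduces the arithmetic using only the figures quoted in the text (8 MJ of petrol over roughly 2-3 ms, 1 MJ of dynamite over about 4 microseconds); the exact 2.67 ms value is an assumption chosen so the petrol figure comes out at the stated 3 GW.

```python
# Reproduce the power comparison from the text: power = energy / release time.
# All energy and time figures come from the article; 2.67 ms is chosen to
# match the stated 3 GW petrol figure.

PETROL_ENERGY_J = 8e6         # ~8 MJ in 170 g (one cup) of petrol
PETROL_BURN_TIME_S = 2.67e-3  # vaporised petrol burns in roughly 2-3 ms
DYNAMITE_ENERGY_J = 1e6       # ~1 MJ in a 0.5 lb stick of dynamite
DYNAMITE_BURN_TIME_S = 4e-6   # dynamite detonates in about 4 microseconds

petrol_power_w = PETROL_ENERGY_J / PETROL_BURN_TIME_S        # ~3 GW
dynamite_power_w = DYNAMITE_ENERGY_J / DYNAMITE_BURN_TIME_S  # 250 GW

print(f"Petrol:   {petrol_power_w / 1e9:.1f} GW")
print(f"Dynamite: {dynamite_power_w / 1e9:.0f} GW")
print(f"One stick of dynamite is ~{dynamite_power_w / petrol_power_w:.0f}x more powerful")
```

Running this gives about 3 GW for the petrol and 250 GW for the dynamite, a factor of roughly 83, matching the figure in the text.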
So even assuming we are using a small stick of dynamite, the energy released is far more explosive than a cup of petrol. The essential fact is how fast the energy is released, not the amount of energy stored in it. For comparison, there are around 8 MJ in 1.2 pounds of wood, but wood burns much more slowly than petrol. In total the power of burning wood is about 30 kW, or roughly 1/100,000 of the power of petrol.
This post was first published on The Untweetable Truth (04/12/2014).
In geology, a crust is the outermost layer of a planet, part of its lithosphere. Planetary crusts are generally composed of a less dense material than that of its deeper layers. The crust of the Earth is composed mainly of basalt and granite. It is cooler and more rigid than the deeper layers of the mantle and core.
On stratified planets, such as Earth, the lithosphere is floating on fluid interior layers. Because of convection in the plastic, although non-molten, upper mantle and asthenosphere, the lithosphere is broken into tectonic plates that move. Oceanic crust is different from that of the continents. The oceanic crust (sima) is 5 to 10 km thick and is composed primarily of a dark, dense rock called basalt. The continental crust (sial) is 20-70 km thick and is composed of a variety of less dense rocks. The crust's temperature ranges from the air temperature at the surface to about 900°C near the upper mantle.
Origin of the Earth's Crust
The Earth is considered to have differentiated from an aggregate of planetesimals into its core, mantle and crust within ~100 million years of the formation of the planet, at 4.4 billion years ago. The primordial crust was very thin, and was likely recycled by much more vigorous plate tectonics and destroyed by significant asteroid impacts, which were much more common in the early stages of the solar system. Of particular note is a theory that the moon was formed by one such very large impact.
The Earth has likely always had some form of basaltic oceanic crust, but there is evidence that it has also had continental style crust for as long as 3.8 to 3.9 billion years. The oldest crust on Earth is the Narryer Gneiss Terrane in Western Australia at 3.9 Ga, and certain parts of the Canadian Shield and the Fennoscandian Shield are also of this age.
The majority of the current Earth's continental crust was formed primarily between 4.6 billion years and 3.9 billion years before present, in the Hadean. The vast majority of rocks of this age are located in cratons where the crust is up to 70 km thick. The lower density of the continental crust as compared to the oceanic crust prevents it being destroyed by subduction. Crust formation is linked to periods of intense orogeny or mountain building; these periods coincide with the formation of the supercontinents such as Rodinia, Pangaea and Gondwana. The crust forms not so much by accumulation of granite and metamorphic fold belts, but by depletion of the mantle to form buoyant lithospheric mantle.
Scheel, D., Godfrey-Smith, P., & Lawrence, M. (2016). Signal Use by Octopuses in Agonistic Interactions. Current Biology. DOI: 10.1016/j.cub.2015.12.033
As you may have noticed, last week Oceanbites was all about love in the ocean. Though finding a mate is important (unless you are a hermaphrodite), marine animals may not always be in a loving mood. A fight between members of the same species can break loose at any time, especially if it involves food, shelter or the aforementioned mate (Fig. 1). But how do two organisms decide when it is time to fight or if it is better to flee? Sound does not always work so well in the noisy ocean, making it hard for competitors to simply “talk it out”.
Cephalopods, such as cuttlefish and squid, have found a way to bridge this gap in communication by changing their body color, pattern and orientation before engaging in combat. These changes by the initiator may help cue the weaker competitor of the impending conflict and offer them a chance to flee the scene before things get ugly.
How do octopuses, another type of cephalopod, deal with competition from other members of their species? The answer was largely unexplored because octopuses were thought to be solitary and non-social creatures. Researchers in this study set out to verify this claim and attempt to record interactions between octopuses in the wild. To do this, 53 hrs of video footage was collected off the coast of Australia at a known octopus site, using staged underwater GoPro cameras.
It turns out octopuses are a lot more social than previously thought. From a total of 186 octopus interactions, researchers recorded 345 body colors and patterns, along with 512 different actions. Here is a breakdown of the most commonly observed body signals between interacting octopuses:
When two octopuses began to interact, the most common action by far was “reaching”. In this context, the term reaching refers to one octopus extending an arm towards another octopus. However, actual contact is never made, so this action is more of a precursor to combat – think of it as “feeling” out your competitor. Reaching often occurred when one octopus was in the safety of its den. Touching or grappling between two octopuses was extremely rare, especially when there was a large difference in body color (Fig. 2B). The initiator of the reach was dark and the receiver pale in color. Fighting was more likely if two octopuses had similar body colors.
2. Standing tall on higher ground:
Aside from being darker, the more dominant octopus would sometimes display a “stand tall” posture by raising its head and mantle and seeking a higher position (Fig. 2C). Interestingly enough, the relative darkness of interacting octopuses seemed to correlate with their choice to either withdraw from a fight or stand their ground. For instance, say one octopus was the initiator and approached another octopus called the reactor. When the reactor stood its ground (either by standing tall or reaching), the initiator was lighter and the reactor darker. If the initiator was darker and stood taller, the reactor would withdraw from the fight and flee the scene (Fig. 2D). Clearly, color seems to matter.
There was some love to be found after all, but not in large quantities. Mating attempts comprised only 11% of the recorded interactions and on average lasted 6 minutes. Though difficult to identify the sex of the octopuses on film, researchers could provide estimates based on the types of body signals they saw. Males were typically pale, with dark eyes and would reach the right third arm toward the female (Fig. 2A). The third arm would make contact with the female who had a mottled body pattern.
Discussion and Significance:
Prior to this study, octopuses were thought to be solitary creatures that rarely interacted with members of their own species. It appears that some octopuses are highly interactive and use a complex array of body patterns and colors to communicate. Researchers captured footage from mounted GoPro cameras that revealed social behaviors before the onset of a fight. Overall, the more dominant octopus was darker in color and raised its body position to appear taller, while the weaker octopus appeared paler and not as physically dominant (Fig. 3)
Regardless of who initiates, it seems octopuses have found a way to determine the outcome of a fight before it happens. Why may it be beneficial to have this sort of communication? If you are a weaker octopus, the costs of entering and losing a fight seem too risky. It may also benefit initiators to not actually engage in unnecessary combat because they may become injured themselves or may waste important resources. This unique type of signaling may be a way of minimizing the injury and/or death in the population that results from fighting. This exciting discovery begs the question: what other types of communication have we yet to uncover between members of other marine species?
A Molecular Biology Glossary
A Quick and Dirty Reference to Terms Used in Molecular Biology
Dr. Robert H. Lyons, Director
Some of the definitions refer to accompanying diagrams (Figures 1 through 4).
3' end/5' end: A nucleic acid strand is inherently directional, and the "5 prime end" has a free hydroxyl (or phosphate) on a 5' carbon and the "3 prime end" has a free hydroxyl (or phosphate) on a 3' carbon (carbon atoms in the sugar ring are numbered from 1' to 5'; see Figure 1). That's simple enough for an RNA strand or for single-stranded (ss) DNA. However, for double-stranded (ds) DNA it's not so obvious - each strand has a 5' end and a 3' end, and the 5' end of one strand is paired with the 3' end of the other strand (it is "antiparallel"; Figure 2). One would talk about the 5' end of ds DNA only if there was some reason to emphasize one strand over the other - for example if one strand is the sense strand of a gene. In that case, the orientation of the sense strand establishes the direction (see Figures 3 and 4).
3' flanking region: A region of DNA which is NOT copied into the mature mRNA, but which is present adjacent to the 3' end of the gene (see Figure 4). It was originally thought that the 3' flanking DNA was not transcribed at all, but it is now known to be transcribed into RNA and then quickly removed during processing of the primary transcript to form the mature mRNA. The 3' flanking region often contains sequences which affect the formation of the 3' end of the message. It may also contain enhancers or other sites to which proteins may bind.
3' untranslated region: A region of the DNA which IS transcribed into mRNA and becomes the 3' end of the message, but which does not contain protein coding sequence. Everything between the stop codon and the polyA tail is considered to be 3' untranslated (see Figure 4). The 3' untranslated region may affect the translation efficiency of the mRNA or the stability of the mRNA. It also has sequences which are required for the addition of the poly(A) tail to the message (including one known as the "hexanucleotide", AAUAAA).
5' flanking region: A region of DNA which is NOT transcribed into RNA, but rather is adjacent to the 5' end of the gene (see Figure 4). The 5'-flanking region contains the promoter, and may also contain enhancers or other protein binding sites.
5' untranslated region: A region of a gene which IS transcribed into mRNA, becoming the 5' end of the message, but which does not contain protein coding sequence. The 5'-untranslated region is the portion of the DNA starting from the cap site and extending to the base just before the ATG translation initiation codon (see Figure 4). While not itself translated, this region may have sequences which alter the translation efficiency of the mRNA, or which affect the stability of the mRNA.
Ablation experiment: An experiment designed to produce an animal deficient in one or a few cell types, in order to study cell lineage or cell function. The idea is to make a transgenic mouse with a toxin gene (often diphtheria toxin) under control of a specialized promoter which activates only in the target cell type. When embryo development progresses to the point where it starts to form the target tissue, the toxin gene is activated, and that specific tissue dies. Other tissues are unaffected.
Acrylamide gels: A polymer gel used for electrophoresis of DNA or protein to measure their sizes (in daltons for proteins, or in base pairs for DNA). See "Gel Electrophoresis". Acrylamide gels are especially useful for high resolution separations of DNA in the range of tens to hundreds of nucleotides in length.
Agarose gels: A polysaccharide gel used to measure the size of nucleic acids (in bases or base pairs). See "Gel Electrophoresis". This is the gel of choice for DNA or RNA in the range of thousands of bases in length, or even up to 1 megabase if you are using pulsed field gel electrophoresis.
Amp resistance: See "Antibiotic resistance".
Anneal: Generally synonymous with "hybridize".
Antibiotic resistance: Plasmids generally contain genes which confer on the host bacterium the ability to survive a given antibiotic. If the plasmid pBR322 is present in a host, that host will not be killed by (moderate levels of) ampicillin or tetracycline. By using plasmids containing antibiotic resistance genes, the researcher can kill off all the bacteria which have not taken up his plasmid, thus ensuring that the plasmid will be propagated as the surviving cells divide.
Anti-sense strand: See discussion under "Sense strand".
AP-1 site: The binding site on DNA at which the transcription "factor" AP-1 binds, thereby altering the rate of transcription for the adjacent gene. AP-1 is actually a complex between c-fos protein and c-jun protein, or sometimes is just c-jun dimers. The AP-1 site consensus sequence is (C/G)TGACT(C/A)A. Also known as the TPA-response element (TRE). [TPA is a phorbol ester, tetradecanoyl phorbol acetate, which is a chemical tumor promoter]
ATG or AUG: The codon for methionine; the translation initiation codon. Usually, protein translation can only start at a methionine codon (although this codon may be found elsewhere within the protein sequence as well). In eukaryotic DNA, the sequence is ATG; in RNA it is AUG. Usually, the first AUG in the mRNA is the point at which translation starts, and an open reading frame follows - i.e. the nucleotides taken three at a time will code for the amino acids of the protein, and a stop codon will be found only when the protein coding region is complete.
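To make the convention concrete, here is a toy Python sketch (the sequence and the six-entry codon table are invented for the example; a real translation table has 64 codons) that finds the first ATG and reads codons from there until a stop codon is reached:

# Minimal sketch: find the first ATG and translate codon by codon until a stop.
# The sequence and the tiny codon table are illustrative, not real data.
CODONS = {
    "ATG": "Met", "GGG": "Gly", "CCA": "Pro", "CGG": "Arg",
    "CTG": "Leu", "TGA": "stop", "TAA": "stop", "TAG": "stop",
}

def translate_from_first_atg(dna):
    start = dna.find("ATG")            # translation normally begins at the first ATG
    if start == -1:
        return []
    protein = []
    for i in range(start, len(dna) - 2, 3):
        codon = dna[i:i + 3]
        aa = CODONS.get(codon, "Xaa")  # unknown codons are shown as Xaa in this sketch
        if aa == "stop":
            break                      # the open reading frame ends at the stop codon
        protein.append(aa)
    return protein

print(translate_from_first_atg("CGCATGGGGCCACGGCTGTGACGT"))
# -> ['Met', 'Gly', 'Pro', 'Arg', 'Leu']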
BAC: Bacterial Artificial Chromosome; a cloning vector capable of carrying between 100 and 300 kilobases of target sequence. BACs are propagated as mini-chromosomes in a bacterial host. The size of the typical BAC is ideal for use as an intermediate in large-scale genome sequencing projects. Entire genomes can be cloned into BAC libraries, and entire BAC clones can be shotgun-sequenced fairly rapidly.
Band shift assay: see Gel shift assay.
Bacteriophage lambda: A virus which infects E. coli, and which is often used in molecular genetics experiments as a vector, or cloning vehicle. Recombinant phages can be made in which certain non-essential DNA is removed and replaced with the DNA of interest. The phage can accommodate a DNA "insert" of about 15-20 kb. Replication of that virus will thus replicate the investigator's DNA. One would use phage lambda rather than a plasmid if the desired piece of DNA is rather large.
Binding site: A place on cellular DNA to which a protein (such as a transcription factor) can bind. Typically, binding sites might be found in the vicinity of genes, and would be involved in activating transcription of that gene (promoter elements), in enhancing the transcription of that gene (enhancer elements), or in reducing the transcription of that gene (silencers). NOTE that whether the protein in fact performs these functions may depend on some condition, such as the presence of a hormone, or the tissue in which the gene is being examined. Binding sites could also be involved in the regulation of chromosome structure or of DNA replication.
Blotting: A technique for detecting one RNA within a mixture of RNAs (a Northern blot) or one type of DNA within a mixture of DNAs (a Southern blot). A blot can prove whether that one species of RNA or DNA is present, how much is there, and its approximate size. Basically, blotting involves gel electrophoresis, transfer to a blotting membrane (typically nitrocellulose or activated nylon), and incubating with a radioactive probe. Exposing the membrane to X-ray film produces darkening at a spot correlating with the position of the DNA or RNA of interest. The darker the spot, the more nucleic acid was present there. (see figure, below)
BP: Abbreviation for base pair(s). Double stranded DNA is usually measured in bp rather than nucleotides (nt).
Cap: All eukaryotes have at the 5' end of their messages a structure called a "cap", consisting of a 7-methylguanosine in 5'-5' triphosphate linkage with the first nucleotide of the mRNA. It is added post-transcriptionally, and is not encoded in the DNA.
Cap site: Two usages: In eukaryotes, the cap site is the position in the gene at which transcription starts, and really should be called the "transcription initiation site". The first nucleotide is transcribed from this site to start the nascent RNA chain. That nucleotide becomes the 5' end of the chain, and thus the nucleotide to which the cap structure is attached (see "Cap"). In bacteria, the CAP site (note the capital letters) is a site on the DNA to which a protein factor (the catabolite activator protein, CAP) binds.
CAT assay: An enzyme assay. CAT stands for chloramphenicol acetyl transferase, a bacterial enzyme which inactivates chloramphenicol by acetylating it. CAT assays are often performed to test the function of a promoter. The gene coding for CAT is linked onto a promoter (transcription control region) from another gene, and the construct is "transfected" into cultured cells. The amount of CAT enzyme produced is taken to indicate the transcriptional activity of the promoter (relative to other promoters which must be tested in parallel). It is easier to perform a CAT assay than it is to do a Northern blot, so CAT assays were a common method for testing the effects of sequence changes on promoter function. Largely supplanted by the reporter gene luciferase.
CCAAT box: (CAT box, CAAT box, other variants) A sequence found in the 5' flanking region of certain genes which is necessary for efficient expression. A transcription factor (CCAAT-binding protein, CBP) binds to this site.
cDNA clone: "complementary DNA"; a piece of DNA copied from an mRNA. The term "clone" indicates that this cDNA has been spliced into a plasmid or other vector in order to propagate it. A cDNA clone may contain DNA copies of such typical mRNA regions as coding sequence, 5'-untranslated region, 3' untranslated region or poly(A) tail. No introns will be present, nor any promoter sequences (or other 5' or 3' flanking regions). A "full-length" cDNA clone is one which contains all of the mRNA sequence from nucleotide #1 through to the poly(A) tail.
ChIP: See Chromatin Immunoprecipitation (below).
Chromatin Immunoprecipitation: This is a method for isolating and characterizing the specific pieces of DNA out of an entire genome, to which is bound a protein of interest. The protein of interest could for example be a transcription factor, or a specific modified histone, or any other DNA binding protein. This procedure requires an antibody to that protein of interest.
One isolates chromosomal material with all the proteins still bound to the genomic DNA. After fragmenting the DNA, you use the antibody to immunoprecipitate all chunks that contain your protein of interest. Isolate the DNA from those chunks, and you can characterize the specific DNA sites to which your protein was bound.
Chromosome walking: A technique for cloning everything in the genome around a known piece of DNA (the starting probe). You screen a genomic library for all clones hybridizing with the probe, and then figure out which one extends furthest into the surrounding DNA. The most distal piece of this most distal clone is then used as a probe, so that ever more distal regions can be cloned. This has been used to move as much as 200 kb away from a given starting point (an immense undertaking). Typically used to "walk" from a starting point towards some nearby gene in order to clone that gene. Also used to obtain the remainder of a gene when you have isolated a part of it.
Clone (verb): To "clone" something is to produce copies of it. To clone a piece of DNA, one would insert it into some type of vector (say, a plasmid) and put the resultant construct into a host (usually a bacterium) so that the plasmid and insert replicate with the host. An individual bacterium is isolated and grown and the plasmid containing the "cloned" DNA is re-isolated from the bacteria, at which point there will be many millions of copies of the DNA - essentially an unlimited supply. Actually, an investigator wishing to clone some gene or cDNA rarely has that DNA in a purified form, so practically speaking, to "clone" something involves screening a cDNA or genomic library for the desired clone. See also "Probe" for a description of how one might start a cloning project, and "Screening" for how the probe is used.
One can also clone more complex organisms, with considerable difficulty. The much-publicized Scottish research that resulted in the sheep Dolly exemplifies this approach.
Clone (noun): The term "clone" can refer either to a bacterium carrying a cloned DNA, or to the cloned DNA itself. If you receive a clone from a collaborator, you should first figure out whether they sent you DNA or bacteria. If it is DNA, your first job is to introduce it ("transform" it) into bacteria [see "Transformation (with respect to bacteria)"]. Occasionally, someone might send just the "insert", rather than the whole plasmid. "Your assignment, Jim, if you decide to accept it", is to splice that DNA into a convenient vector, and only then can you transform it into bacteria.
Coding sequence: The portion of a gene or an mRNA which actually codes for a protein. Introns are not coding sequences; nor are the 5' or 3' untranslated regions (or the flanking regions, for that matter - they are not even transcribed into mRNA). The coding sequence in a cDNA or mature mRNA includes everything from the AUG (or ATG) initiation codon through to the stop codon, inclusive.
Coding strand: an ambiguous term intended to refer to one specific strand in a double-stranded gene. See "Sense strand".
Codon: In an mRNA, a codon is a sequence of three nucleotides which codes for the incorporation of a specific amino acid into the growing protein. The sequence of codons in the mRNA unambiguously defines the primary structure of the final protein. Of course, the codons in the mRNA were also present in the genomic DNA, but the sequence may be interrupted by introns.
Consensus sequence: A nominal sequence inferred from multiple, imperfect examples; for instance, multiple lanes of shotgun sequence can be merged to show a consensus sequence. The term also refers to the optimal sequence of nucleotides recognized by some factor. A DNA binding site for a protein may vary substantially, but one can infer the consensus sequence for the binding site by comparing numerous examples. For example, the (fictitious) transcription factor ZQ1 usually binds to the sequences AAAGTT, AAGGTT or AAGATT. The consensus sequence for that factor is said to be AARRTT (where R is any purine, i.e. A or G). ZQ1 may also be able to weakly bind to ACAGTT (which differs by one base from the consensus).
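To make the inference concrete, here is a small Python sketch that derives the consensus from the fictitious ZQ1 sites above (only the IUPAC ambiguity codes needed for this toy example are included):

# Sketch: build a consensus from aligned binding-site examples of equal length.
IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C",
    frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R",   # R = any purine
    frozenset("CT"): "Y",   # Y = any pyrimidine
}

def consensus(sites):
    result = []
    for column in zip(*sites):        # walk the alignment column by column
        result.append(IUPAC.get(frozenset(column), "N"))   # N if too variable for this table
    return "".join(result)

print(consensus(["AAAGTT", "AAGGTT", "AAGATT"]))
# -> AARRTT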
Contig: Several uses, all nouns. The term comes from a shortening of the word contiguous. A contig may refer to a map showing placement of a set of clones that completely, contiguously cover some segment of DNA in which you are interested. Also called the minimal tiling path. More often, the term contig is used to refer to the final product of a shotgun sequencing project. When individual lanes of sequence information are merged to infer the sequence of the larger DNA piece, the product consensus sequence is called a contig.
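As a toy illustration of the merging idea (not a real assembler; real software must cope with sequencing errors, both orientations and thousands of reads), the following Python sketch joins two reads by their longest exact overlap:

# Toy illustration: merge two overlapping reads into one contig.
def merge(read_a, read_b, min_overlap=3):
    best = 0
    for k in range(min(len(read_a), len(read_b)), min_overlap - 1, -1):
        if read_a.endswith(read_b[:k]):          # try the longest overlap first
            best = k
            break
    return read_a + read_b[best:] if best else None

print(merge("ATGGGGCCACGG", "CCACGGCTGTGA"))
# -> ATGGGGCCACGGCTGTGA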
Cosmid: A type of vector used for cloning 35-45 kb of DNA. These are plasmids carrying a phage lambda cos site (which allows packaging into lambda capsids), an origin of replication and an antibiotic resistance gene. A plasmid of 40 kb is very difficult to put into bacteria, but can replicate once there. Cosmids, however, have a cos site, and thus can be packaged into lambda phage heads (a reaction which can be performed in vitro) to allow efficient introduction into bacteria (you'll have to look up the cos site elsewhere).
DNase: Deoxyribonuclease, a class of enzymes which digest DNA. The most common is DNase I, an endonuclease which digests both single and double-stranded DNA.
Dot blot: A technique for measuring the amount of one specific DNA or RNA in a complex mixture. The samples are spotted onto a hybridization membrane (such as nitrocellulose or activated nylon, etc.), fixed and hybridized with a radioactive probe. The extent of labeling (as determined by autoradiography and densitometry) is proportional to the concentration of the target molecule in the sample. Standards provide a means of calibrating the results.
Downstream: See "Upstream/Downstream".
E. coli: A common Gram-negative bacterium useful for cloning experiments. Present in human intestinal tract. Hundreds of strains of E. coli exist. One strain, K-12, has been completely sequenced.
Electrophoresis: See "Gel electrophoresis".
Endonuclease: An enzyme which digests nucleic acids starting in the middle of the strand (as opposed to an exonuclease, which must start at an end). Examples include the restriction enzymes, DNase I and RNase A.
Enhancer: An enhancer is a nucleotide sequence to which transcription factor(s) bind, and which increases the transcription of a gene. It is NOT part of a promoter; the basic difference being that an enhancer can be moved around anywhere in the general vicinity of the gene (within several thousand nucleotides on either side or even within an intron), and it will still function. It can even be clipped out and spliced back in backwards, and will still operate. A promoter, on the other hand, is position- and orientation-dependent. Some enhancers are "conditional" - in other words, they enhance transcription only under certain conditions, for example in the presence of a hormone.
ERE: Estrogen Response Element. A binding site in a promoter to which the activated estrogen receptor can bind. The estrogen receptor is essentially a transcription factor which is activated only in the presence of estrogens. The activated receptor will bind to an ERE, and transcription of the adjacent gene will be altered. See also "Response element".
Evolutionary Footprinting: One can infer which portions of a gene are important by comparing the sequence of that gene with its cognates from other species. A plot showing the regions of high conservation will presumably reflect the regions that are functional in all the test species. In theory, the more species involved in the comparison, the more stringent the result can be (i.e. the more the conserved regions will reflect truly important sequences). Care must be taken, however, to use species in which the function of the gene has not diverged excessively, or the outcome will be uninformative.
Exon: Those portions of a genomic DNA sequence which WILL be represented in the final, mature mRNA. The term "exon" can also be used for the equivalent segments in the final RNA. Exons may include coding sequences, the 5' untranslated region or the 3' untranslated region.
Exonuclease: An enzyme which digests nucleic acids starting at one end. An example is Exonuclease III, which digests only double-stranded DNA starting from the 3' end.
Expression: To "express" a gene is to cause it to function. A gene which encodes a protein will, when expressed, be transcribed and translated to produce that protein. A gene which encodes an RNA rather than a protein (for example, a rRNA gene) will produce that RNA when expressed.
Expression clone: This is a clone (a plasmid in a bacterium, or maybe a lambda phage in bacteria) which is designed to produce a protein from the DNA insert. Mammalian genes do not function in bacteria, so to get bacterial expression from your mammalian cDNA, you would place its coding region (i.e. no introns) immediately adjacent to bacterial transcription/translation control sequences. That artificial construct (the "expression clone") will produce a pseudo-mammalian protein if put back into bacteria. Often, that protein can be recognized by antibodies raised against the authentic mammalian protein, and vice versa.
Footprinting: A technique by which one identifies a protein binding site on cellular DNA. The presence of a bound protein prevents DNase from "nicking" that region, which can be detected by an appropriately designed gel.
Gel electrophoresis: A method to analyze the size of DNA (or RNA) fragments. In the presence of an electric field, larger fragments of DNA move through a gel slower than smaller ones. If a sample contains fragments at four different discrete sizes, those four size classes will, when subjected to electrophoresis, all migrate in groups, producing four migrating "bands". Usually, these are visualized by soaking the gel in a dye (ethidium bromide) which makes the DNA fluoresce under UV light.
Gel shift assay: (aka gel mobility shift assay (GMSA), band shift assay (BSA), electrophoretic mobility shift assay (EMSA)) A method by which one can determine whether a particular protein preparation contains factors which bind to a particular DNA fragment. When a radiolabeled DNA fragment is run on a gel, it shows a characteristic mobility. If it is first incubated with a cellular extract of proteins (or with purified protein), any protein-DNA complexes will migrate slower than the naked DNA - a shifted band.
Gene: A unit of DNA which performs one function. Usually, this is equated with the production of one RNA or one protein. A gene contains coding regions, introns, untranslated regions and control regions.
Genome: The total DNA contained in each cell of an organism. Mammalian genomic DNA (including that of humans) contains about 6 x 10^9 base pairs of DNA per diploid cell. There are on the order of 20,000-25,000 protein-coding genes (early estimates ran as high as a hundred thousand), each comprising coding regions, 5' and 3' untranslated regions, introns, and 5' and 3' flanking DNA. Also present in the genome are structural segments such as telomeric and centromeric DNAs and replication origins, and intergenic DNA.
Genomic blot: A type of Southern blot specifically used to analyze a mixture of DNA fragments derived from total genomic DNA. Because genomic DNA is very complicated, when it has been digested with restriction enzymes, it produces a complex set of fragments ranging from tens of bp to tens of thousands of bp. However, any specific gene will be reproducibly found on only one or a few specific fragments. A million identical cells will produce a million identical restriction fragments for any given gene, so probing a genomic Southern with a gene-specific probe will produce a pattern of perhaps one or just a few bands.
Genomic clone: A piece of DNA taken from the genome of a cell or animal, and spliced into a bacteriophage or other cloning vector. A genomic clone may contain coding regions, exons, introns, 5' flanking regions, 5' untranslated regions, 3' flanking regions, 3' untranslated regions, or it may contain none of these...it may only contain intergenic DNA (usually not a desired outcome of a cloning experiment!).
Genotype: Two uses: one is a verb, the other a noun. To 'genotype' (verb) is to examine polymorphisms (e.g. RFLPs, microsatellites, SNPs) present in a sample of DNA. You might be looking for linkage between a microsatellite marker and an unknown disease gene. With such information, you can infer the chromosomal location of the unknown gene, and can sometimes identify the gene. As a noun, a 'genotype' is the result of a genotyping experiment, be it a SNP or microsat or whatever.
GRE: Glucocorticoid Response Element: A binding site in a promoter to which the activated glucocorticoid receptor can bind. The glucocorticoid receptor is essentially a transcription factor which is activated only in the presence of glucocorticoids. The activated receptor will bind to a GRE, and transcription of the adjacent gene will be altered. See also "Response element".
Helix-loop-helix: A protein structural motif characteristic of certain DNA-binding proteins.
hnRNA: Heterogeneous nuclear RNA; refers collectively to the variety of RNAs found in the nucleus, including primary transcripts, partially processed RNAs and snRNA. The term hnRNA is often used just for the unprocessed primary transcripts, however.
Host strain (bacterial): The bacterium used to harbor a plasmid. Typical host strains include HB101 (general purpose E. coli strain), DH5α (ditto), JM101 and JM109 (suitable for growing M13 phages), XL1-Blue (general-purpose, good for blue/white lacZ screening). Note that the host strain is available in a form with no plasmids (hence you can put one of your own into it), or it may have plasmids present (especially if you put them there). Hundreds, perhaps thousands, of host strains are available.
Hybridization: The reaction by which the pairing of complementary strands of nucleic acid occurs. DNA is usually double-stranded, and when the strands are separated they will re-hybridize under the appropriate conditions. Hybrids can form between DNA-DNA, DNA-RNA or RNA-RNA. They can form between a short strand and a long strand containing a region complementary to the short one. Imperfect hybrids can also form, but the more imperfect they are, the less stable they will be (and the less likely to form). To "anneal" two strands is the same as to "hybridize" them.
Insert: In a complete plasmid clone, there are two types of DNA - the "vector" sequences and the "insert". The vector sequences are those regions necessary for propagation, antibiotic resistance, and all those mundane functions necessary for useful cloning. In contrast, however, the insert is the piece of DNA in which you are really interested.
Intergenic: Between two genes; e.g. intergenic DNA is the DNA found between two genes. The term is often used to mean non-functional DNA (or at least DNA with no known importance to the two genes flanking it). Alternatively, one might speak of the "intergenic distance" between two genes as the number of base pairs from the polyA site of the first gene to the cap site of the second. This usage might therefore include the promoter region of the second gene.
Intron: Introns are portions of genomic DNA which ARE transcribed (and thus present in the primary transcript) but which are later spliced out. They thus are not present in the mature mRNA. Note that although the 3' flanking region is often transcribed, it is removed by endonucleolytic cleavage and not by splicing. It is not an intron.
KB: abbreviation for kilobase, one thousand bases.
Kinase: A kinase is in general an enzyme that catalyzes the transfer of a phosphate group from ATP to something else. In molecular biology, it has acquired the more specific verbal usage for the transfer onto DNA of a radiolabeled phosphate group. This would be done in order to use the resultant "hot" DNA as a probe.
Knock-out experiment: A technique for deleting, mutating or otherwise inactivating a gene in a mouse. This laborious method involves transfecting a crippled gene into cultured embryonic stem cells, searching through the thousands of resulting clones for one in which the crippled gene exactly replaced the normal one (by homologous recombination), and inserting that cell back into a mouse blastocyst. The resulting mouse will be chimaeric but, if you are lucky (and if you've gotten this far, you obviously are), its germ cells will carry the deleted gene. A few rounds of careful breeding can then produce progeny in which both copies of the gene are inactivated.
Lambda: see Bacteriophage Lambda.
Leucine zipper: A motif found in certain proteins in which Leu residues are evenly spaced through an α-helical region, such that they would end up on the same face of the helix. Dimers can form between two such proteins. The Leu zipper is important in the function of transcription factors such as Fos and Jun and related proteins.
Library: A library might be either a genomic library, or a cDNA library. In either case, the library is just a tube carrying a mixture of thousands of different clones - bacteria or lambda phages. Each clone carries an "insert" - the cloned DNA.
A cDNA library is usually just a mixture of bacteria, where each bacteria carries a different plasmid. Inserted into the plasmids (one per plasmid) are thousands of different pieces of cDNA (each typ. 500-5000 bp) copied from some source of mRNA, for example, total liver mRNA. The basic idea is that if you have a large enough number of different liver-derived cDNAs carried in those bacteria, there is a 99% probability that a cDNA copy of any given liver mRNA exists somewhere in the tube. The real trick is to find the one you want out of that mess - a process called screening (see "Screening").
A genomic library is similar in concept to a cDNA library, but differs in three major ways - 1) the library carries pieces of genomic DNA (and so contains introns and flanking regions, as well as coding and untranslated); 2) you need bacteriophage lambda or cosmids, rather than plasmids, because... 3) the inserts are usually 5-15 kb long (in a lambda library) or 20-40 kb (in a cosmid library). Therefore, a genomic library is most commonly a tube containing a mixture of lambda phages. Enough different phages must be present in the library so that any given piece of DNA from the source genome has a 99% probability of being present.
Ligase: An enzyme, T4 DNA ligase, which can link pieces of DNA together. The pieces must have compatible ends (both of them blunt, or else mutually compatible sticky ends), and the ligation reaction requires ATP.
Ligation: The process of splicing two pieces of DNA together. In practice, a pool of DNA fragments is treated with ligase (see "Ligase") in the presence of ATP, and all possible splicing products are produced, including circularized forms and end-to-end ligation of 2, 3 or more pieces. Usually, only some of these products are useful, and the investigator must have some way of selecting the desirable ones.
Linker: A small piece of synthetic double-stranded DNA which contains something useful, such as a restriction site. A linker might be ligated onto the end of another piece of DNA to provide a desired restriction site.
Marker: Two typical usages:
Molecular weight size marker: a piece of DNA of known size, or a mixture of pieces with known size, used on electrophoresis gels to determine the size of unknown DNAs by comparison.
Genetic marker: A known site on the chromosome. It might for example be the site of a locus with some recognizable phenotype, or it may be the site of a polymorphism that can be experimentally discerned. See 'Microsatellite', 'SNP', 'Genotyping'.
Message: see mRNA.
Microsatellite: A microsatellite is a simple sequence repeat (SSR). It might be a homopolymer ('...TTTTTTT...'), a dinucleotide repeat ('....CACACACACACACA.....'), trinucleotide repeat ('....AGTAGTAGTAGTAGT...') etc. Due to polymerase slip (a.k.a. polymerase chatter), during DNA replication there is a slight chance these repeat sequences may become altered; copies of the repeat unit can be created or removed. Consequently, the exact number of repeat units may differ between unrelated individuals. Considering all the known microsatellite markers, no two individuals are identical. This is the basis for forensic DNA identification and for testing of familial relationships (e.g. paternity testing).
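A rough Python sketch of how such repeats can be spotted in a sequence (the sequence is invented, and real microsatellite-calling software is considerably more careful about overlapping and compound repeats):

import re

# Sketch: find runs of a repeated 1-3 nt unit (at least 4 copies) in a sequence.
def find_repeats(dna, min_copies=4):
    pattern = re.compile(r"([ACGT]{1,3}?)\1{" + str(min_copies - 1) + ",}")
    return [(m.start(), m.group()) for m in pattern.finditer(dna)]

print(find_repeats("GGTACACACACACGGAGTAGTAGTAGTCC"))
# -> [(3, 'ACACACACAC'), (15, 'AGTAGTAGTAGT')]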
mRNA: "messenger RNA" or sometimes just "message"; an RNA which contains sequences coding for a protein. The term mRNA is used only for a mature transcript with polyA tail and with all introns removed, rather than the primary transcript in the nucleus. As such, an mRNA will have a 5' untranslated region, a coding region, a 3' untranslated region and (almost always) a poly(A) tail. Typically about 2% of the total cellular RNA is mRNA.
M13: A bacteriophage which infects certain strains of E. coli. The salient feature of this phage is that it packages only a single strand of DNA into its capsid. If the investigator has inserted some heterologous DNA into the M13 genome, copious quantities of single-stranded DNA can subsequently be isolated from the phage capsids. M13 is often used to generate templates for DNA sequencing.
Nick translation: A method for incorporating radioactive isotopes (typically 32P) into a piece of DNA. The DNA is randomly nicked by DNase I, and then starting from those nicks DNA polymerase I digests and then replaces a stretch of DNA. Radiolabeled precursor nucleotide triphosphates can thus be incorporated.
Non-coding strand: Anti-sense strand. See "Sense strand" for a discussion of sense strand vs. anti-sense strand.
Northern blot: A technique for analyzing mixtures of RNA, whereby the presence and rough size of one particular type of RNA (usually an mRNA) can be ascertained. See "Blotting" for more information. After Dr. E. M. Southern invented the Southern blot, it was adapted to RNA and named the "Northern" blot.
NT: Abbreviation for nucleotide; i.e. the monomeric unit from which DNA or RNA are built. One can express the size of a nucleic acid strand in terms of the number of nucleotides in its chain; hence nt can be a measure of chain length.
Nuclear run-on: A method used to estimate the relative rate of transcription of a given gene, as opposed to the steady-state level of the mRNA transcript (which is influenced not just by transcription rates, but by the stability of the RNA). This technique is based on the assumption that a highly-transcribed gene should have more molecules of RNA polymerase bound to it than will the same gene in a less-active state. If properly prepared, isolated nuclei will continue to transcribe genes and incorporate 32P into RNA, but only in those transcripts that were in progress at the time the nuclei were isolated. Once the polymerase molecules complete the transcript they have in progress, they should not be able to re-initiate transcription. If that is true, then the amount of radiolabel incorporated into a specific type of mRNA is theoretically proportional to the number of RNA polymerase complexes present on that gene at the time of isolation. A very difficult technique, rarely applied appropriately from what I understand.
Nuclease: An enzyme which degrades nucleic acids. A nuclease can be DNA-specific (a DNase), RNA-specific (RNase) or non-specific. It may act only on single stranded nucleic acids, or only on double-stranded nucleic acids, or it may be non-specific with respect to strandedness. A nuclease may degrade only from an end (an exonuclease), or may be able to start in the middle of a strand (an endonuclease). To further complicate matters, many enzymes have multiple functions; for example, Bal31 has a 3'-exonuclease activity on double-stranded DNA, and an endonuclease activity specific for single-stranded DNA or RNA.
Nuclease protection assay: See "RNase protection assay".
Oncogene: A gene in a tumor virus or in cancerous cells which, when transferred into other cells, can cause transformation (note that only certain cells are susceptible to transformation by any one oncogene). Functional oncogenes are not present in normal cells. A normal cell has many "proto-oncogenes" which serve normal functions, and which under the right circumstances can be activated to become oncogenes. The prefix "v-" indicates that a gene is derived from a virus, and is generally an oncogene (like v-src, v-ras, v-myb, etc). See also "Transformation (with respect to cultured cells)".
Open reading frame: Any region of DNA or RNA where a protein could be encoded. In other words, there must be a string of nucleotides (possibly starting with a Met codon) in which one of the three reading frames has no stop codons. See "Reading frame" for a simple example.
Origin of replication: Nucleotide sequences present in a plasmid which are necessary for that plasmid to replicate in the bacterial host. (Abbr. "ori")
pBR322: A common plasmid. Along with the obligatory origin of replication, this plasmid has genes which make the E. coli host resistant to ampicillin and tetracycline. It also has several restriction sites (BamHI, PstI, EcoRI, HindIII etc.) into which DNA fragments could be spliced in order to clone them.
PCR: see Polymerase Chain Reaction.
Phagemid: A type of plasmid which carries within its sequence a bacteriophage replication origin. When the host bacterium is infected with "helper" phage, the phagemid is replicated along with the phage DNA and packaged into phage capsids.
Plasmid: A circular piece of DNA present in bacteria or isolated from bacteria. Escherichia coli, the usual bacteria in molecular genetics experiments, has a large circular genome, but it will also replicate smaller circular DNAs as long as they have an "origin of replication". Plasmids may also have other DNA inserted by the investigator. A bacterium carrying a plasmid and replicating a million-fold will produce a million identical copies of that plasmid. Common plasmids are pBR322, pGEM, pUC18.
PolyA tail: After an mRNA is transcribed from a gene, the cell adds a stretch of A residues (typically 50-200) to its 3' end. It is thought that the presence of this "polyA tail" increases the stability of the mRNA (possibly by protecting it from nucleases). Note that not all mRNAs have a polyA tail; the histone mRNAs in particular do not.
Polymerase: An enzyme which links individual nucleotides together into a long strand, using another strand as a template. There are two general types of polymerase: DNA polymerases (which synthesize DNA) and RNA polymerases (which make RNA). Within these two classes, there are numerous sub-types of polymerase, depending on what type of nucleic acid can function as template and what type of nucleic acid is formed. A DNA-dependent DNA polymerase will copy one DNA strand starting from a primer, and the product will be the complementary DNA strand. A DNA-dependent RNA polymerase will use DNA as a template to synthesize an RNA strand.
Polymerase chain reaction: A technique for replicating a specific piece of DNA in-vitro , even in the presence of excess non-specific DNA. Primers are added (which initiate the copying of each strand) along with nucleotides and Taq polymerase. By cycling the temperature, the target DNA is repetitively denatured and copied. A single copy of the target DNA, even if mixed in with other undesirable DNA, can be amplified to obtain billions of replicates. PCR can be used to amplify RNA sequences if they are first converted to DNA via reverse transcriptase. This two-phase procedure is known as RT-PCR.
Polymerase Chain Reaction (PCR) is the basis for a number of extremely important methods in molecular biology. It can be used to detect and measure vanishingly small amounts of DNA and to create customized pieces of DNA. It has been applied to clinical diagnosis and therapy, to forensics and to vast numbers of research applications. It would be difficult to overstate the importance of PCR to science.
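The arithmetic behind that amplification is simple enough to sketch in Python, assuming an idealized, constant per-cycle efficiency (real reactions fall short of perfect doubling, especially in later cycles):

# Idealized PCR arithmetic: copies ~ starting copies x (1 + efficiency)^cycles.
def pcr_copies(start_copies, cycles, efficiency=1.0):
    return start_copies * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))         # one molecule, 30 perfect cycles -> 1073741824
print(pcr_copies(1, 30, 0.9))    # the same at 90% efficiency -> roughly 2.3e8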
Post-transcriptional regulation: Any process occurring after transcription which affects the amount of protein a gene produces. Includes RNA processing efficiency, RNA stability, translation efficiency, protein stability. For example, the rapid degradation of an mRNA will reduce the amount of protein arising from it. Increasing the rate at which an mRNA is translated will increase the amount of protein product.
Post-translational processing: The reactions which alter a protein's covalent structure, such as phosphorylation, glycosylation or proteolytic cleavage.
Post-translational regulation: Any process which affects the amount of protein produced from a gene, and which occurs AFTER translation in the grand scheme of genetic expression. Actually, this is often just a buzz-word for regulation of the stability of the protein. The more stable a protein is, the more it will accumulate.
PRE: Progesterone Response Element: A binding site in a promoter to which the activated progesterone receptor can bind. The progesterone receptor is essentially a transcription factor which is activated only in the presence of progesterone . The activated receptor will bind to a PRE, and transcription of the adjacent gene will be altered. See also "Response element".
Primary transcript: When a gene is transcribed in the nucleus, the initial product is the primary transcript, an RNA containing copies of all exons and introns. This primary transcript is then processed by the cell to remove the introns, to cleave off unwanted 3' sequence, and to polyadenylate the 3' end. The mature message thus formed is then exported to the cytoplasm for translation.
Primer: A small oligonucleotide (anywhere from 6 to 50 nt long) used to prime DNA synthesis. The DNA polymerases are only able to extend a pre-existing strand along a template; they are not able to take a naked single strand and produce a complementary copy of it de-novo. A primer which sticks to the template is therefore used to initiate the replication. Primers are necessary for DNA sequencing and PCR.
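A common back-of-the-envelope calculation for a primer is its approximate melting temperature. The Python sketch below uses the simple Wallace rule (roughly 2 degrees C per A or T plus 4 degrees C per G or C), which is only a ballpark guide for short oligonucleotides; salt, length and mismatches all shift the real value:

# Rough melting-temperature estimate for a short oligonucleotide (Wallace rule).
def wallace_tm(primer):
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGGGGCCACGGCTGTGA"))   # -> 60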
Primer extension: This is a method used to figure out how far upstream from a fixed site the start of an mRNA is. For example, perhaps you have isolated a cDNA clone, but you don't think that the clone has all of the 5' untranslated region. To find out how much is missing, you would first sequence the part you have, and figure out which strand is the coding strand (usually the coding strand will have a large open reading frame). Next, you ask the DNA Synthesis Facility to make an oligonucleotide complementary to the 5'-most region of the coding strand (and thus complementary to the mRNA). This "primer" is hybridized to mRNA (say, a mixture of mRNA containing the one in which you are interested), and reverse transcriptase is added to copy the mRNA from the primer out to the 5' end. The size of the resulting DNA fragment shows how far away from the 5' end your primer is.
Probe: A fragment of DNA or RNA which is labeled in some way (often incorporating 32P or 35S), and which is used to hybridize with the nucleic acid in which you are interested. For example, if you want to quantitate the levels of alpha subunit mRNA in a preparation of pituitary RNA, you might make a radiolabeled RNA in-vitro which is complementary to the mRNA, and then use it to probe a Northern blot of the pit RNA. A probe can be radiolabeled, or tagged with another functional group such as biotin. A probe can be cloned DNA, or might be a synthetic DNA strand. As an example of the latter, perhaps you have isolated a protein for which you wish to obtain a cDNA or genomic clone. You might (pay to) microsequence a portion of the protein, deduce the nucleic acid sequence, (pay to) synthesize an oligonucleotide carrying that sequence, radiolabel it and use it as a probe to screen a cDNA library or genomic library. A better way is to call up someone who already has the clone.
Processing: The reactions occurring in the nucleus which convert the primary RNA transcript to a mature mRNA. Processing reactions include capping, splicing and polyadenylation. The term can also refer to the processing of the protein product, including proteolytic cleavages, glycosylation, etc.
Promoter: The first few hundred nucleotides of DNA "upstream" (on the 5' side) of a gene, which control the transcription of that gene. The promoter is part of the 5' flanking DNA, i.e. it is not transcribed into RNA, but without the promoter, the gene is not functional. Note that the definition is a bit hazy as far as the size of the region encompassed, but the "promoter" of a gene starts with the nucleotide immediately upstream from the cap site, and includes binding sites for one or more transcription factors which cannot work if moved farther away from the gene.
Proto-oncogene: A gene present in a normal cell which carries out a normal cellular function, but which can become an oncogene under certain circumstances. The prefix "c-" indicates a cellular gene, and is generally used for proto-oncogenes (examples: c-myb, c-myc, c-fos, c-jun, etc).
Pulsed field gel electrophoresis: (PFGE) A gel technique which allows size-separation of very large fragments of DNA, in the range of hundreds of kb to thousands of kb. As in other gel electrophoresis techniques, populations of molecules migrate through the gel at a speed related to their size, producing discrete bands. In normal electrophoresis, DNA fragments greater than a certain size limit all migrate at the same rate through the gel. In PFGE, the electrophoretic voltage is applied alternately along two perpendicular axes, which forces even the larger DNA fragments to separate by size.
Random primed synthesis: If you have a DNA clone and you want to produce radioactive copies of it, one way is to denature it (separate the strands), then hybridize to that template a mixture of all possible 6-mer oligonucleotides. Those oligos will act as primers for the synthesis of labeled strands by DNA polymerase (in the presence of radiolabeled precursors).
Reading frame: When mRNA is translated by the cell, the nucleotides are read three at a time. By starting at different positions, the groupings of three that are produced can be entirely different. The following example shows a DNA sequence and the three reading frames in which it could be read. Not only is an entirely different amino acid sequence specified by the different reading frames, but two of the three frames have stop codons, and thus are not open reading frames (asterisks indicate a stop codon).
A DNA sequence read in its three forward frames (an illustrative, made-up sequence):

5'-ATGGCATGCATAAGTAG-3'

Frame 1:  ATG GCA TGC ATA AGT ...     Met Ala Cys Ile Ser ...
Frame 2:   TGG CAT GCA TAA ...        Trp His Ala ***
Frame 3:    GGC ATG CAT AAG TAG       Gly Met His Lys ***

If we shift the grouping again, we will just get the first reading frame again. The reading frame that is actually used is determined by the first methionine codon (the initiation codon). Once that first AUG is recognized, the pattern of triplet groupings follows unambiguously.
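The same groupings can be generated mechanically; a small Python sketch, using the illustrative sequence above:

# Print the three forward reading frames of a DNA sequence as codon groupings.
def reading_frames(dna):
    frames = []
    for offset in range(3):
        codons = [dna[i:i + 3] for i in range(offset, len(dna) - 2, 3)]
        frames.append(" ".join(codons))
    return frames

for i, frame in enumerate(reading_frames("ATGGCATGCATAAGTAG"), start=1):
    print(f"Frame {i}: {frame}")
# Frame 1: ATG GCA TGC ATA AGT
# Frame 2: TGG CAT GCA TAA GTA
# Frame 3: GGC ATG CAT AAG TAG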
Repetitive DNA: A surprising portion of any genome consists not of genes or structural elements, but of frequently repeated simple sequences. These may be short repeats just a few nt long, like CACACA etc. Others, such as Alu repeats and other SINEs, are a few hundred nt long, and LINEs can run to several kb. The function of these elements is often unknown. In shorter repeats like di- and tri-nucleotide repeats, the number of repeating units can occasionally change during evolution and descent. They are thus useful markers for familial relationships and have been used in paternity testing, forensic science and in the identification of human remains.
Response element: By definition, a "response element" is a portion of a gene which must be present in order for that gene to respond to some hormone or other stimulus. Response elements are binding sites for transcription factors. Certain transcription factors are activated by stimuli such as hormones or heat shock. A gene may respond to the presence of that hormone because the gene has in its promoter region a binding site for hormone-activated transcription factor. Example: the glucocorticoid response element (GRE).
Restriction: To "restrict" DNA means to cut it with a restriction enzyme. See "Restriction Enzyme".
Restriction enzyme: A class of enzymes ("restriction endonucleases") generally isolated from bacteria, which are able to recognize and cut specific sequences ("restriction sites") in DNA. For example, the restriction enzyme BamHI locates and cuts any occurrence of:
5'-GGATCC-3'
   ||||||
3'-CCTAGG-5'
Note that both strands contain the sequence GGATCC, but in antiparallel orientation. The recognition site is thus said to be palindromic, which is typical of restriction sites. Every copy of a plasmid is identical in sequence, so if BamHI cuts a particular circular plasmid at three sites producing three "restriction fragments", then a million copies of that plasmid will produce those same restriction fragments a million times over. There are more than six hundred known restriction enzymes.
Bacteria produce restriction enzymes for protection against invasion by foreign DNA such as phages. The bacteria's own DNA is modified in such a way as to prevent it from being clipped.
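As an illustration of what "restricting" a sequence amounts to computationally, the following Python sketch (with an invented sequence) locates BamHI sites in a linear molecule and lists the fragment sizes a complete digest would give:

# Sketch: locate GGATCC (BamHI) sites and list fragment lengths for a complete
# digest of a linear sequence. BamHI cuts G^GATCC, i.e. one base into the site.
def bamhi_fragments(dna):
    site = "GGATCC"
    cuts = [i + 1 for i in range(len(dna)) if dna.startswith(site, i)]
    edges = [0] + cuts + [len(dna)]
    return cuts, [edges[j + 1] - edges[j] for j in range(len(edges) - 1)]

cuts, sizes = bamhi_fragments("AAATGGATCCTTTTTGGATCCGG")
print(cuts)    # -> [5, 16]
print(sizes)   # -> [5, 11, 7]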
Restriction fragment: The piece of DNA released after restriction digestion of plasmids or genomic DNA. See "Restriction enzyme". One can digest a plasmid and isolate one particular restriction fragment (actually a set of identical fragments). The term also describes the fragments detected on a genomic blot which carry the gene of interest.
Restriction fragment length polymorphism: See "RFLP".
Restriction map: A "cartoon" depiction of the locations within a stretch of known DNA where restriction enzymes will cut.
The map usually indicates the approximate length of the entire piece, as well as the positions within the piece at which designated enzymes will cut.
Restriction site: See Restriction enzyme.
Reverse transcriptase: An enzyme which will make a DNA copy of an RNA template - an RNA-dependent DNA polymerase. RT is used to make cDNA; one begins by isolating polyadenylated mRNA, providing oligo-dT as a primer, and adding nucleotide triphosphates and RT to copy the RNA into cDNA.
RFLP: Restriction fragment length polymorphism; the acronym is pronounced "riflip". Although two individuals of the same species have almost identical genomes, they will differ at many individual nucleotide positions. Some of these differences will produce new restriction sites (or remove them), and the banding pattern seen on a genomic Southern will thus be affected. For any given probe (or gene), it is often possible to test different restriction enzymes until you find one which gives a pattern difference between two individuals - a RFLP. The less related the individuals, the more divergent their DNA sequences are and the more likely you are to find a RFLP.
Ribonuclease: see "RNAse".
Riboprobe: A strand of RNA synthesized in-vitro (usually radiolabeled) and used as a probe for hybridization reactions. An RNA probe can be synthesized at very high specific activity, is single stranded (and therefore will not self anneal), and can be used for very sensitive detection of DNA or RNA.
Ribosome: A cellular particle which is involved in the translation of mRNAs to make proteins. Ribosomes are a complex consisting of ribosomal RNAs (rRNA) and several proteins.
RNAi: 'RNA interference' (a.k.a. 'RNA silencing') is the mechanism by which small double-stranded RNAs can interfere with expression of any mRNA having a similar sequence. Those small RNAs are known as 'siRNA', for short interfering RNAs. The mode of action for siRNA appears to be via dissociation of its strands, hybridization to the target RNA, extension of those fragments by an RNA-dependent RNA polymerase, then fragmentation of the target. Importantly, the remnants of the target molecule appear to then act as siRNAs themselves; thus the effect of a small amount of starting siRNA is effectively amplified and can have long-lasting effects on the recipient cell.
The RNAi effect has been exploited in numerous research programs to deplete the cell of specific messages, thus examining the role of those messages by their absence.
RNase: Ribonuclease; an enzyme which degrades RNA. It is ubiquitous in living organisms and is exceptionally stable. The prevention of RNase activity is the primary problem in handling RNA.
RNase protection assay: This is a sensitive method to determine (1) the amount of a specific mRNA present in a complex mixture of mRNA and/or (2) the sizes of exons which comprise the mRNA of interest. A radioactive DNA or RNA probe (in excess) is allowed to hybridize with a sample of mRNA (for example, total mRNA isolated from tissue), after which the mixture is digested with single-strand specific nuclease. Only the probe which is hybridized to the specific mRNA will escape the nuclease treatment, and can be detected on a gel. The amount of radioactivity which was protected from nuclease is proportional to the amount of mRNA to which it hybridized. If the probe included both intron and exons, only the exons will be protected from nuclease and their sizes can be ascertained on the gel.
rRNA: "ribosomal RNA"; any of several RNAs which become part of the ribosome, and thus are involved in translating mRNA and synthesizing proteins. They are the most abundant RNA in the cell (on a mass basis).
RT-PCR: See Polymerase Chain Reaction.
Run-off: see Nuclear run-on.
Run-on: see Nuclear run-on.
S1 end mapping: A technique to determine where the end of an RNA transcript lies with respect to its template DNA (the gene). Can't be described in a short paragraph. See "RNAse Protection assay" for a closely related technique.
S1 nuclease: An enzyme which digests only single-stranded nucleic acids.
Screening: To screen a library (see "Library") is to select and isolate individual clones out of the mixture of clones. For example, if you needed a cDNA clone of the pituitary glycoprotein hormone alpha subunit, you would need to make (or buy) a pituitary cDNA library, then screen that library in order to detect and isolate those few bacteria carrying alpha subunit cDNA.
There are two methods of screening which are particularly worth describing: screening by hybridization, and screening by antibody.
Screening by hybridization involves spreading the mixture of bacteria out on a dozen or so agar plates to grow several tens of thousands of isolated colonies. Membranes are laid onto each plate, and some of the bacteria from each colony stick, producing replicas of each colony in their original growth position. The membranes are lifted and the adherent bacteria are lysed, then hybridized to a radioactive piece of alpha DNA (the source of which is a story in itself - see "Probe"). When X-ray film is laid on the filter, only colonies carrying alpha sequences will "light up". Their positions on the membranes show where they grew on the original plates, so you can now go back to the original plate (where the remnants of the colonies are still alive), pick the colony off the plate and grow it up. You now have an unlimited source of alpha cDNA.
Screening by antibody is an option if the bacteria and plasmid are designed to express proteins from the cDNA inserts (see "Expression clones"). The principle is similar to hybridization, in that you lift replica filters from bacterial plates, but then you use the antibody (perhaps generated after olde tyme protein purification rituals) to show which colony expresses the desired protein.
Sense strand: A gene has two strands: the sense strand and the anti-sense strand. The sense strand is, by definition, the same 'sense' as the mRNA; that is, it can be translated exactly as the mRNA sequence can. Given a sense strand with the following sequence:
5' - ATG GGG CCA CGG CTG TGA - 3'
     Met Gly Pro Arg Leu stop
The anti-sense strand will read as follows (note that the strand has been reversed and complemented):
5' - TCA CAG CCG TGG CCC CAT - 3'
The duplex DNA will pair as follows:
5' - ATGGGGCCACGGCTGTGA - 3'
     ||||||||||||||||||
3' - TACCCCGGTGCCGACACT - 5'
Note however that when the RNA is transcribed from this sequence, the ANTI-SENSE strand is used as the template for RNA polymerization. After all, the RNA must base-pair with its template strand (see Figure 3), so the process of transcription produces the complement of the anti-sense strand. This introduces some confusion about terminology:
Some people use the terms coding strand and non-coding strand to refer to the sense and anti-sense strands, respectively. Unfortunately, many people interpret these terms in exactly the opposite way. I consider the terms coding strand and non-coding strand to be too ambiguous. Some people also use the exact opposite definition for sense and anti-sense from the one I have given here. Be aware of the possibility of a discrepancy. Textbooks I have consulted generally agree with the nomenclature given here, although some avoid defining these terms at all.
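For those who prefer to let a computer do the bookkeeping, here is a minimal Python sketch of the reverse-complement operation, applied to the sense strand used above:

# Compute the anti-sense strand (written 5'->3') from a sense strand:
# complement every base, then reverse the string.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(dna):
    return dna.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGGGGCCACGGCTGTGA"))
# -> TCACAGCCGTGGCCCCAT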
Sequence: As a noun, the sequence of a DNA is a buzz word for the structure of a DNA molecule, in terms of the sequence of bases it contains. As a verb, "to sequence" is to determine the structure of a piece of DNA; i.e. the sequence of nucleotides it contains.
Shotgun cloning: The practice of randomly clipping a larger DNA fragment into various smaller pieces, cloning everything, and then studying the resulting individual clones to figure out what happened. For example, if one was studying a 50 kb gene, it "may" be a bit difficult to figure out the restriction map. By randomly breaking it into smaller fragments and mapping those, a master restriction map could be deduced. See also Shotgun sequencing.
Shotgun sequencing: A way of determining the sequence of a large DNA fragment which requires little brainpower but lots of late nights. The large fragment is shotgun cloned (see above), and then each of the resulting smaller clones ("subclones") is sequenced. By finding out where the subclones overlap, the sequence of the larger piece becomes apparent. Note that some of the regions will get sequenced several times just by chance.
siRNA: Small Inhibitory RNA; a.k.a. 'RNAi'. See 'RNAi'.
Slot blot: Similar to a dot blot, but the analyte is put onto the membrane using a slot-shaped template. The template produces a consistently shaped spot, thus decreasing errors and improving the accuracy of the analysis. See Dot blot.
snRNA: Small nuclear RNA; forms complexes with proteins to form snRNPs; involved in RNA splicing, polyadenylation reactions, other unknown functions (probably).
snRNP: "snerps", Small Nuclear RiboNucleoProtein particles, which are complexes between small nuclear RNAs and proteins, and which are involved in RNA splicing and polyadenylation reactions.
SNP: Single Nucleotide Polymorphism (SNP) - a position in a genomic DNA sequence that varies from one individual to another. It is thought that the primary source of genetic difference between any two humans is due to the presence of single nucleotide polymorphisms in their DNA. Furthermore, these SNPs can be extremely useful in genetic mapping (see 'Genetic Mapping') to follow inheritance of specific segments of DNA in a lineage. SNP-typing is the process of determining the exact nucleotide at positions known to be polymorphic.
Solution hybridization: A method closely related to RNase protection (see "RNase protection assay"). Solution hybridization is designed to measure the levels of a specific mRNA species in a complex population of RNA. An excess of radioactive probe is allowed to hybridize to the RNA, then single-strand specific nuclease is used to destroy the remaining unhybridized probe and RNA. The "protected" probe is separated from the degraded fragments, and the amount of radioactivity in it is proportional to the amount of mRNA in the sample which was capable of hybridization. This can be a very sensitive detection method.
Southern blot: A technique for analyzing mixtures of DNA, whereby the presence and rough size of one particular fragment of DNA can be ascertained. See "Blotting". Named for its inventor, Dr E. M. Southern.
SSR: Simple Sequence Repeat. See 'Microsatellite'.
Stable transfection: A form of transfection experiment designed to produce permanent lines of cultured cells with a new gene inserted into their genome. Usually this is done by linking the desired gene with a "selectable" gene, i.e. a gene which confers resistance to a toxin (like G418, aka Geneticin). Upon putting the toxin into the culture medium, only those cells which incorporate the resistance gene will survive, and essentially all of those will also have incorporated the experimenter's gene.
Sticky ends: After digestion of a DNA with certain restriction enzymes, the ends left have one strand overhanging the other to form a short (typically 4 nt) single-stranded segment. This overhang will easily re-attach to other ends like it, and such ends are thus known as "sticky ends". For example, the enzyme BamHI recognizes the sequence GGATCC, and clips after the first G in each strand, leaving 5' GATC overhangs:

5' - G         GATCC - 3'
3' - CCTAG         G - 5'
The overhangs thus produced can still hybridize ("anneal") with each other, even if they came from different parent DNA molecules, and the enzyme ligase will then covalently link the strands. Sticky ends therefore facilitate the ligation of diverse segments of DNA, and allow the formation of novel DNA constructs.
Stringency: A term used to describe the conditions of hybridization. By varying the conditions (especially salt concentration and temperature) a given probe sequence may be allowed to hybridize only with its exact complement (high stringency), or with any somewhat related sequences (relaxed or low stringency). Increasing the temperature or decreasing the salt concentration will tend to increase the selectivity of a hybridization reaction, and thus will raise the stringency.
Sub-cloning: If you have a cloned piece of DNA (say, inserted into a plasmid) and you need unlimited copies of only a part of it, you might "sub-clone" it. This involves starting with several million copies of the original plasmid, cutting with restriction enzymes, and purifying the desired fragment out of the mixture. That fragment can then be inserted into a new plasmid for replication. It has now been subcloned.
Taq polymerase: A DNA polymerase isolated from the bacterium Thermus aquaticus and which is very stable to high temperatures. It is used in PCR procedures and high temperature sequencing.
TATA box: A sequence found in the promoter (part of the 5' flanking region) of many genes. Deletion of this site (the binding site of transcription factor TFIID) causes a marked reduction in transcription, and gives rise to heterogeneous transcription initiation sites.
Tet resistance: See "Antibiotic resistance".
Tissue-specific expression: Gene function which is restricted to a particular tissue or cell type. For example, the glycoprotein hormone alpha subunit is produced only in certain cell types of the anterior pituitary and placenta, not in lungs or skin; thus expression of the glycoprotein hormone alpha-chain gene is said to be tissue-specific. Tissue specific expression is usually the result of an enhancer which is activated only in the proper cell type.
Tm: The melting point for a double-stranded nucleic acid. Technically, this is defined as the temperature at which 50% of the strands are in double-stranded form and 50% are single-stranded, i.e. midway in the melting curve. A primer has a specific Tm because it is assumed that it will find an opposite strand of appropriate character.
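As a rough illustration of how a primer or probe Tm is often estimated in practice, the sketch below implements the Wallace rule (Tm of roughly 2°C per A/T plus 4°C per G/C), a common quick approximation for short oligonucleotides; it deliberately ignores salt concentration and other factors that also affect hybridization stringency (see "Stringency"):

```python
# Rough Tm estimate for a short oligonucleotide using the Wallace rule.
def wallace_tm(seq):
    """Tm (deg C) ~ 2*(A+T) + 4*(G+C); a quick rule of thumb for short primers."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

print(wallace_tm("ATGGGGCCACGGCTGTGA"))  # hypothetical 18-mer -> 60
```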
Transcription factor: A protein which is involved in the transcription of genes. These usually bind to DNA as part of their function (but not necessarily). A transcription factor may be general (i.e. acting on many or all genes in all tissues), or tissue-specific (i.e. present only in a particular cell type, and activating the genes restricted to that cell type). Its activity may be constitutive, or may depend on the presence of some stimulus; for example, the glucocorticoid receptor is a transcription factor which is active only when glucocorticoids are present.
Transcription: The process of copying DNA to produce an RNA transcript. This is the first step in the expression of any gene. The resulting RNA, if it codes for a protein, will be spliced, polyadenylated, transported to the cytoplasm, and by the process of translation will produce the desired protein molecule.
Transfection: A method by which experimental DNA may be put into a cultured mammalian cell. Such experiments are usually performed using cloned DNA containing coding sequences and control regions (promoters, etc) in order to test whether the DNA will be expressed. Since the cloned DNA may have been extensively modified (for example, protein binding sites on the promoter may have been altered or removed), this procedure is often used to test whether a particular modification affects the function of a gene.
Transformation (with respect to bacteria): The process by which a bacteria acquires a plasmid and becomes antibiotic resistant. This term most commonly refers to a bench procedure performed by the investigator which introduces experimental plasmids into bacteria.
Transformation (with respect to cultured cells): A change in cell morphology and behavior which is generally related to carcinogenesis. Transformed cells tend to exhibit characteristics known collectively as the "transformed phenotype" (rounded cell bodies, reduced attachment dependence, increased growth rate, loss of contact inhibition, etc). There are different "degrees" of transformation, and cells may exhibit only a subset of these characteristics. Not well understood, the process of transformation is the subject of intense research.
Transgenic mouse: A mouse which carries experimentally introduced DNA. The procedure by which one makes a transgenic mouse involves the injection of DNA into a fertilized embryo at the pro-nuclear stage. The DNA is generally cloned, and may be experimentally altered. It will become incorporated into the genome of the embryo. That embryo is implanted into a foster mother, who gives birth to an animal carrying the new gene. Various experiments are then carried out to test the functionality of the inserted DNA.
Transient transfection: When DNA is transfected into cultured cells, it is able to stay in those cells for about 2-3 days, but then will be lost (unless steps are taken to ensure that it is retained - see Stable transfection). During those 2-3 days, the DNA is functional, and any functional genes it contains will be expressed. Investigators take advantage of this transient expression period to test gene function.
Translation: The process of decoding a strand of mRNA, thereby producing a protein based on the code. This process requires ribosomes (which are composed of rRNA along with various proteins) to perform the synthesis, and tRNA to bring in the amino acids. Sometimes, however, people speak of "translating" the DNA or RNA when they are merely reading the nucleotide sequence and predicting from it the sequence of the encoded protein. This might be more accurately termed "conceptual translation".
Tumor suppressor: A gene that inhibits progression towards neoplastic transformation. The best-known examples of tumor suppressors are the proteins p53 and Rb.
tRNA: "transfer RNA"; one of a class of rather small RNAs used by the cell to carry amino acids to the enzyme complex (the ribosome) which builds proteins, using an mRNA as a guide. Fairly abundant.
Upstream activator sequence: A binding site for transcription factors, generally part of a promoter region. A UAS may be found upstream of the TATA sequence (if there is one), and its function is (like an enhancer) to increase transcription. Unlike an enhancer, it can not be positioned just anywhere or in any orientation.
Upstream/Downstream: In an RNA, anything towards the 5' end of a reference point is "upstream" of that point. This orientation reflects the direction of both the synthesis of mRNA, and its translation - from the 5' end to the 3' end. In DNA, the situation is a bit more complicated. In the vicinity of a gene (or in a cDNA), the DNA has two strands, but one strand is virtually a duplicate of the RNA, so its 5' and 3' ends determine upstream and downstream, respectively. NOTE that in genomic DNA, two adjacent genes may be on different strands and thus oriented in opposite directions. Upstream or downstream is only used in conjunction with a given gene.
Vector: The DNA "vehicle" used to carry experimental DNA and to clone it. The vector provides all sequences essential for replicating the test DNA. Typical vectors include plasmids, cosmids, phages and YACs.
Western blot: A technique for analyzing mixtures of proteins to show the presence, size and abundance of one particular type of protein. Similar to Southern or Northern blotting (see "Blotting"), except that (1) a protein mixture is electrophoresed in an acrylamide gel, and (2) the "probe" is an antibody which recognizes the protein of interest, followed by a radioactive secondary probe used for detection.
YAC: Yeast artificial chromosome. This is a method for cloning very large fragments of DNA. Genomic DNA in fragments of 200-500 kb are linked to sequences which allow them to propagate in yeast as a mini-chromosome (including telomeres, a centromere and an ARS - an autonomous replication sequence). This technique is used to clone large genes and intergenic regions, and for chromosome walking.
Zinc finger: A protein structural motif common in DNA binding proteins. Four Cys residues are found for each "finger" and one finger can bind a molecule of zinc. A typical configuration is: CysXxxXxxCys--(intervening 12 or so aa's)--CysXxxXxxCys.
Proposition 3: Given two unequal straight lines, to cut off from the greater a straight line equal to the less.
This proposition accomplishes subtracting from a given line, a line of shorter length. This construction relies heavily, almost solely, on Proposition 2.
This proof and step-by-step construction may seem almost trivial, or maybe it seems complicated for what we accomplish. This is again another instance in which analytic geometry proves much simpler; we would simply take the measure of length of the shorter of the given lines and cut that length from the longer line. To do this in Euclidean geometry we first construct a circle at center A using a radius of length C, which is given. This ensures that the circle constructed will intersect line AB at a point E, since AB is longer than the length C. This is an application of the line-circle property that Euclid takes for granted. Then we know that AE = C and that E lies between A and B.
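For comparison, here is a minimal coordinate-geometry sketch (illustrative only, not Euclid's construction) of cutting off from segment AB a length equal to the shorter given length c:

```python
# Analytic shortcut for Proposition 3: find E on AB with |AE| = c (assumes c <= |AB|).
import math

def cut_off(A, B, c):
    ax, ay = A
    bx, by = B
    ab = math.hypot(bx - ax, by - ay)   # length of AB
    t = c / ab                          # fraction of the way from A towards B
    return (ax + t * (bx - ax), ay + t * (by - ay))

print(cut_off((0, 0), (5, 0), 2))       # -> (2.0, 0.0)
```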
Let AB, C be the two given unequal straight lines, and let AB be the greater of them.
Thus it is required to cut off from AB the greater a straight line equal to C the less.
At the point A let AD be placed equal to the straight line C [1.2: to place at a given point (as an extremity) a straight line equal to a given straight line]; and with centre A and distance AD let the circle DEF be described [Post. 3: to describe a circle with any centre and distance].
Now, since the point A is the centre of the circle DEF, AE is equal to AD [Def. 15: a circle is a plane figure contained by one line such that all the straight lines falling upon it from one point among those lying within the figure are equal to one another].
But C is also equal to AD [Def. 15].
Therefore each of the straight lines AE, C is equal to AD; so that AE is also equal to C [C.N. 1: things which are equal to the same thing are also equal to one another].
Therefore, given the two straight lines AB, C, from AB the greater AE has been cut off equal to C the less.
(Being) what it was required to do.
Proof in Nuprl
Superconductivity or zero electrical resistance at room temperature is any physicist's dream, but so far the challenges have proven too great. Typically, metals like mercury become superconductive at temperatures close to absolute zero or -273 degrees Celsius. This means that we need to spend a lot of energy to refrigerate the material before we can exploit superconductivity, making the idea unfeasible as of now. Is stable room-temperature superconductivity possible? Just a few weeks ago I wrote about how this state was attained at room temperature, but only for a fleeting moment – just a millionth of a second. What's clear so far is that we won't ever be able to reach a stable state until we thoroughly understand what happens inside a superconductive material. Helping in this quest are Swiss scientists at the Ecole Polytechnique Fédérale de Lausanne who found that electron spin plays a major role in zero electrical resistance.
Look to the spin for the key
So far, the most promising superconductors are a class of materials called cuprates. These can achieve superconductivity at a much higher temperature – only minus 120 degrees Celsius. Typically, the superconducting state is achieved near absolute zero, when the electrons of the material join up to form electron couples called "Cooper pairs", and in this form they can flow without resistance. In cuprates, Cooper pairs do not form because vibrations of the atomic lattice nudge the electrons together, as they do in conventional superconductors, so clearly cuprates become superconductive under a different mechanism than other materials.
A team of researchers led by Marco Grioni at EPFL has used a cutting-edge spectroscopic technique to explore the unique superconductivity of cuprates. The scientists used a technique called Resonant Inelastic X-ray Scattering, which is used to investigate the electronic structure of materials. This high-resolution method was able to monitor what happens to the electrons of a cuprate sample as it turned into a superconductor.
“Normally, superconductors hate magnetism,” says team leader Marco Grioni. “Either you have a good magnet or a good superconductor, but not both. Cuprates are very different and have really surprised everyone, because they are normally insulators and magnets, but they become superconducting when a few extra electrons are added by gently tweaking its chemical composition.”
Starting in the 1920s, Otto Stern and Walther Gerlach of the University of Hamburg in Germany conducted a series of important atomic beam experiments. Knowing that all moving charges produce magnetic fields, they proposed to measure the magnetic fields produced by the electrons orbiting nuclei in atoms. Much to their surprise, however, the two physicists found that electrons themselves act as if they are spinning very rapidly, producing tiny magnetic fields independent of those from their orbital motions. Soon the terminology ‘spin’ was used to describe this apparent rotation of subatomic particles.
Spin is a bizarre physical quantity. It is analogous to the spin of a planet in that it gives a particle angular momentum and a tiny magnetic field called a magnetic moment. Thus, among other things, electron spin is a key ingredient involved in magnetism. When spins interact with each other, they form spin waves. When magnetic materials are disturbed, spin waves are created and spread in ripples throughout their volume. Such spin waves are telltale fingerprints of the magnetic interaction and structure.
Unlike other materials, cuprates retain their magnetic properties when becoming superconductive.
“Something of the magnet remains in the superconductor, and could play a major role in the appearance of superconductivity ” says Grioni. “The new results give us a better idea of how the spins interact in these fascinating materials.”
Scientists now have a much better picture of what happens inside cuprates, by revealing the role spin interactions have. Eventually, tiny findings like these might couple together so that we might finally come up with a stable room-temperature superconductor, one with the potential to transform forever the electronics industry. Imagine power lines with zero loss or high speed rail that floats on a magnetic quantum cushion. All of these and much more might be possible one day. Here’s to a superconductive future.
Findings appeared in Nature Physics.
Fast-growing silver maples can attain heights of 100 feet. Their natural habitats include wetland areas and floodplains. The tree’s attractive greenery makes it a desirable decoration.
However, it is difficult for farmers to grow these kinds of plants because of silver maple trees problems. Let’s scroll down to know more.
What Is The Silver Maple?
Before finding what causes silver maple tree problems, I will give you some fundamental information about these trees.
Silver Maple Tree Identification
The silver maple (Acer saccharinum), often known as the soft maple and white maple, is a fast-growing shade tree that belongs to the soapberry species (Sapindaceae).
Originally from the eastern part of North America, it has become a popular crop abroad.
Characteristic Of A Maple Tree
Rapid expansion in size is its defining feature. Its leaves, which are two different colors on their opposite sides, shimmer and move gracefully in the wind, and the tree is adaptable to many soil types.
Planting 10 feet away from roads, sidewalks, basements, and sewage lines is recommended due to the tree’s robust root structure.
Depending on the cultivar, it may reach heights of 50–80 feet and widths of 2–3 times its height. (zones 3-9)
They have 3-6″ leaves with five lobes divided by prominently deep, thin sinuses. Silver maples are a vibrant green in the spring and summer, but they transform into silvery yellow in the autumn.
Early in the spring, they develop clusters of tiny blooms in shades of silver, red, and yellow, and by the end of spring these have matured into pairs of winged seeds measuring more than 3 inches in length.
Among native maples, their seeds are the biggest you’ll find.
Once they mature, they form a vase shape. Besides, the roots of this plant extend far and wide, and the trunk has the potential to grow to enormous proportions.
How Fast Do Silver Maple Trees Grow
Although it thrives in wetter environments, this plant can adapt to various environments.
Don’t like red maple, a silver maple may add 4 to 6 feet to its height per year. It is considered one of the state’s tallest trees.
So, how tall do silver maple trees grow? The typical height of these healthy trees is around 100 feet, with an inner diameter of 4 to 5 feet.
The maple tree is one of the most adaptable species around. You may find these common trees in every state and province in Canada and the USA.
While maple trees may thrive in a wide range of temperatures, you’ll most often find them in the Northeast and North, where it’s colder.
What Are Silver Maple Trees Problems?
There are several main causes of silver maple trees problems, including Verticillium Wilt, Anthracnose, Tar Spot, Chlorosis, and Shoestring Root Rot.
The stem disease known as verticillium wilt (Verticillium albo-atrum) is responsible for the unexpected demise of silver maple trees.
Some branches, and the whole plant, may lose their leaves and become a sickly yellow.
The leaf margins in a badly diseased tree will curl inward before becoming brown. The greatest defense against Verticillium wilt is to grow disease-resistant cultivars.
This silver maple disease can’t thrive in the ground without an appropriate host plant for over three years.
Therefore, removing the sick trees will finally bring the disease under control in the affected area. The use of chemicals has shown some promise, albeit it may need an expert application to be successful.
All species of maple trees are prone to anthracnose (Gloeosporium spp.), a fungal infection that impacts a variety of shade trees.
Reddish-brown lesions appear first; subsequently, the anthracnose lesions merge into larger ones, eventually killing off vast sections of the damaged leaf.
After that, the tender shoots and leaflets wither and become black. Defoliation often follows severe infections. Infected tissue may then contaminate the fresh growth that occurs in the spring.
The overwintering populations of anthracnose may be reduced by removing leaves that have fallen and branches, which is the most effective management method.
Pruning away some live branches makes space for new growth and increases airflow through the tree's canopy.
Clean, water, and fertilize your silver maple regularly to keep it healthy. A severe infection will less impact a healthy silver maple tree than one under stress.
A fungus called tar spot is responsible for the ugly black blotches on the foliage of trees.
While this fungal infection seldom kills silver maple trees, it can occasionally cover most of an infected tree's foliage, severely diminishing the tree's visual appeal.
In most cases, spots appear at the beginning of summer, starting as yellow before changing to a much darker brown or black.
Good sanitary measures taken in the autumn may prevent and even eradicate this fungus. When dead leaves begin to fall, rake them up.
Trees of the silver maple species do best when planted in acidic soil (pH 6.5 or below). Chlorosis, caused by a lack of manganese, may affect these trees if grown in alkaline soil.
Silver maple trees with this deficiency develop young leaves that are pale green or yellow between the veins, while the veins themselves remain dark green.
The issue may be prevented by doing a soil test before planting the tree. Sulfur may be applied to soils with a pH greater than 7.0.
The soil must be acidified, which may take as long as twelve months, before the plant is placed.
Shoestring Root Rot
Shoestring root rot (Armillaria mellea) is a type of fungi that infects both evergreen and deciduous plants across the globe.
It may go undiagnosed for decades on the roots of its host species until it produces an abundance of mushrooms around the foot of the tree.
Earlier signs of trouble for the tree include stunted development, discoloration of the leaves, and the death of branches.
What differentiates this disease from others that cause similarly disturbed leaf or twig symptoms is the emergence of tan fungal growth at the base of the affected silver maple tree.
Disease-prevention efforts have failed thus far because the rhizomorphs responsible for spreading the fungus are so diverse.
What Does a Silver Maple Tree Look Like?
- All maple trees, including this one, have five-pointed leaves and generate abundant seeds in springtime.
- The seeds develop into a set of "wings." Once they reach maturity, they split apart and spiral to the ground, each on a single "wing."
- The silvery-white undersides of these leaves give this tree its common name. The leaves take on a silvery sheen when the outside temperature is mild.
- The outer layer of bark on young trunks and branches looks smooth. Yet, as the tree ages, the trunk bark develops a hard, matted texture.
Why Are Silver Maples Bad?
The silver maple has a bad reputation due to its propensity to shed large limbs, which may cause significant damage to structures, walls, and even power lines; it also drops smaller branches and layers of bark, which can clog mowers and other equipment.
Besides, Silver maples are prone to having their roots breach sewage systems, leading to backups and leaks, since they are always on the lookout for water. Fixing this might be expensive for homeowners.
What Makes Maple Trees Unique?
The leaves of each kind of maple tree appear completely distinct from other maple trees. The form of a maple leaf is very recognizable and helps the tree differentiate itself from others.
The vibrant autumn hues they display are another reason for their fame.
Some people call maple trees "sugar maples" because of the syrup and sugar that may be made from their sap. Breakfast pancakes, waffles, and ice cream are all improved by adding maple syrup.
Several fungal diseases can cause silver maple trees problems, as I mentioned. Therefore, you should pay attention if you plan to plant this tree.
However, the silver maple is a common ornamental tree because of its rapid growth rate and ability to provide cool shade.
The tree serves as a source of animal nutrition and may be used to produce syrup, among other things. So, it is worth it.
For the new work, Cracraft, Barrowclough, and their colleagues at the University of Nebraska, Lincoln, and the University of Washington examined a random sample of 200 bird species through the lens of morphology — the study of the physical characteristics like plumage pattern and color, which can be used to highlight birds with separate evolutionary histories. This method turned up, on average, nearly two different species for each of the 200 birds studied. This suggests that bird biodiversity is severely underestimated, and is likely closer to 18,000 species worldwide.
The researchers also surveyed existing genetic studies of birds, which revealed that there could be upwards of 20,000 species. But because the birds in this body of work were not selected randomly — and, in fact, many were likely chosen for study because they were already thought to have interesting genetic variation — this could be an overestimate. The authors argue that future taxonomy efforts in ornithology should be based on both methods. Paper (public access): George F. Barrowclough, Joel Cracraft, John Klicka, Robert M. Zink. How Many Kinds of Birds Are There and Why Does It Matter? PLOS ONE, 2016; 11 (11): e0166307. DOI: 10.1371/journal.pone.0166307
But why are the species to be classified only by appearance (taxonomy)? If they can interbreed, how many of them are in fact hybrids? Would we not need genome mapping?
From the Abstract:
Using a sample of 200 species taken from a list of 9159 biological species determined primarily by morphological criteria, we applied a diagnostic, evolutionary species concept to a morphological and distributional data set that resulted in an estimate of 18,043 species of birds worldwide, with a 95% confidence interval of 15,845 to 20,470. In a second, independent analysis, we examined intraspecific genetic data from 437 traditional avian species, finding an average of 2.4 evolutionary units per species, which can be considered proxies for phylogenetic species.
What are “evolutionary units”? Have the species’ ability to hybridize and produce fertile offspring been tested?
One asks because, in general, the concept of speciation is currently a mess:
See also: Mystery species depicted in cave art is buffalo-cattle hybrid?
Cichlid speciation attributed to “plasticity” now
Nothing says “Darwin snob” like indifference to the mess that the entire concept of speciation is in.
Treating Anomic Aphasia with Speech Therapy
Anomic aphasia presents as an inability to consistently produce the appropriate words for things a person wishes to talk about. This is particularly evident when the individual requires a noun or a verb. The disorder is known by several names, including amnesic aphasia, dysnomia, and nominal aphasia.
Typically, the patient will have fluent speech that is grammatically correct. However, their speech will be filled with vague words such as “thing” and a constant attempt to find the words which will accurately describe the word they want to use. Many people will describe this as feeling as though the word is on the tip of the tongue, a feeling most people occasionally experience. Individuals suffering from anomic aphasia feel this on a regular basis. These patients usually have no difficulty reading or understanding speech and are even able to accurately repeat sentences and words. The difficulty lies in finding words to express their own thoughts, whether verbally or in writing.
Simply put, speech therapy is assessment, diagnosis, and treatment for those who struggle with a wide variety of speech and communication disorders. Speech therapy is almost exclusively handled by a speech-language pathologist (SLP) who can be found in a variety of settings from hospitals to schools. Like other forms of therapists, speech-language pathologists give their patients and students a variety of different exercises meant to correct a specific issue that was previously diagnosed. Throughout speech therapy, an SLP will monitor progress and make any necessary changes to their intervention.
As previously mentioned, SLPs can be found in a variety of settings and, with the increased adoption of technology, speech therapy is now more accessible than ever. Schools and districts across the country are integrating speech teletherapy services into their special education programs; this is speech therapy conducted online by an SLP at an offsite location.
Speech Therapy for Anomic Aphasia
Earlier we discussed what anomic aphasia is and how it creates communication difficulties for those who struggle with this speech disorder. While there are several different methods of therapy for those with anomic aphasia, speech therapy is thought to be the most common and effective method. However, it is important to note that depending on the degree of the condition, someone with anomic aphasia may improve their communicating abilities but never fully regain their speech function.
Speech Therapy Activities & Exercises
The most promising treatments include increasing the ability to recall the words for nouns. This is done by repeatedly showing the patient an image of the object and offering praise when they recall the word. Alternatively, an image can be shown surrounded by words associated with it. Done regularly this has been shown to increase word retention, although once treatment ends patients tend to revert back to pre-intervention word retrieval levels.
Another exercise that speech pathologists can use to help with anomic aphasia is naming a list of words and having the person tell you what they mean. A more complex version of this activity is describing an object and having the person name it. For example, the object is something used to mow the grass, and the word is "lawnmower."
An SLP can name three or four nouns and have their student or patient try to describe how they are alike. For example, fish, cats, and dogs can all be house pets. As you can see, a majority of these activities revolve around nouns and the recognition of them.
Speech Therapy using Assistive Technology
With the popularity of smartphones and tablets, options for patients with anomic aphasia are rapidly increasing. There are numerous companies and individual apps specializing in making communication easier for patients with different forms of aphasia, including the Aphasia Company, Assistive Ware, and Tapgram.
Companies like VocoVision strictly use online videoconferencing technology to connect students with remote speech-language pathologists. This online technology uses interactive capabilities in conjunction with games, exercises, and other tools to further encourage student engagement. These teletherapy sessions can either occur during a student’s school day or even from the comforts of their own home.
How Long Does Speech Therapy for Anomic Aphasia Take?
Depending on the severity, there is no fixed amount of time it takes to recover from anomic aphasia. According to Aphasia.org, if aphasia symptoms last longer than two or three months, a full recovery is unlikely. However, this doesn't mean someone can't continue to improve for years, even decades. Often, regular speech therapy sessions are recommended for treatment. In addition to meeting with a speech therapist, someone who struggles with anomic aphasia will be encouraged to perform exercises on their own.
Getting Started with Speech Therapy for Anomic Aphasia
Before a patient or student receives any therapy for anomic aphasia they need to first be diagnosed. This typically occurs from a doctor and is done by a series of neurological tests or a series of questions. A speech-language pathologist is also experienced in identifying different types of aphasia. Once anomic aphasia is diagnosed, there are several different factors including the severity that determines a treatment and intervention plan.
The common cold is caused by several different viruses and is the most common human viral infection.
- Recognize the major viruses known to cause the common cold: rhinovirus, human parainfluenza virus and the human respiratory syncytial virus (RSV)
- Over 200 virus types have been found that cause the common cold, with rhinoviruses being the most common.
- Rhinoviruses are a sub-type of picornavirus, a non-enveloped RNA virus, which is very small in size.
- The symptoms of the common cold are not due to the viral infection directly but rather to the body's response to the virus.
- There is no cure for the common cold, and in fact antibiotics, which are often prescribed, are detrimental to patients.
- serotypes: A group of microorganisms characterized by a specific set of antigens; serovar.
- capsid: The outer protein shell of a virus.
The common cold (also known as nasopharyngitis, rhinopharyngitis, acute coryza, or a cold) is a viral infectious disease of the upper respiratory tract which affects primarily the nose. Symptoms include coughing, sore throat, runny nose, and fever which usually resolve in seven to ten days, with some symptoms lasting up to three weeks. Well over 200 viruses are implicated in the cause of the common cold. The most commonly implicated virus is a rhinovirus (30–80%), a type of picornavirus with 99 known serotypes. A picornavirus is a virus belonging to the family Picornaviridae. Picornaviruses are non-enveloped RNA viruses with an icosahedral capsid. The name is derived from pico, meaning small, and RNA, referring to the ribonucleic acid genome, so “picornavirus” literally means small RNA virus. Others include: coronavirus (10–15%), human parainfluenza viruses, human respiratory syncytial virus, adenoviruses, enteroviruses, and metapneumovirus. Frequently more than one virus is present.
The symptoms of the common cold are believed to be primarily related to the immune response to the virus. The mechanism of this immune response is virus specific. For example, the rhinovirus is typically acquired by direct contact; it binds to human ICAM-1 receptors through unknown mechanisms to trigger the release of inflammatory mediators. These inflammatory mediators then produce the symptoms. It does not generally cause damage to the nasal epithelium. The respiratory syncytial virus (RSV), on the other hand, is contracted by both direct contact and airborne droplets. It then replicates in the nose and throat before frequently spreading to the lower respiratory tract. RSV does cause epithelium damage. Human parainfluenza virus typically results in inflammation of the nose, throat, and bronchi. In young children, when it affects the trachea it may produce the symptoms of croup due to the small size of their airway.
No cure for the common cold exists, but the symptoms can be treated. Antibiotics have no effect against viral infections and thus have no effect against the viruses that cause the common cold. Due to their side effects they cause overall harm; however, they are still frequently prescribed. It is the most frequent infectious disease in humans, with the average adult contracting two to three colds a year and the average child contracting between six and twelve. These infections have been with humanity since antiquity.
Forests harbor enormous biodiversity and are a major carbon sink. Therefore, forest conservation and restoration can help mitigate climate change; however, climate change and land use decisions could fundamentally undermine this ability in many regions of the world.
Different modeling approaches complete the analysis
Predicting the impact of future climate change on forests is challenging because each scientific approach relies on assumptions and incomplete data. In this study, researchers compared results from three major modeling approaches that provide complementary information: a global mechanistic vegetation model, which estimates forest carbon loss; a climate envelope model, which provides information on species shifts; and empirical assessment of forest loss caused by disturbance using satellite imagery. By combining the outputs, despite large uncertainty in most regions, the study found that some forests are consistently at higher risk:
- Forests in the southern boreal belt of the northern hemisphere, such as in Canada and large parts of Russia, will be in danger, but also forests in tropical Africa and parts of the Amazon, says Thomas Pugh, principal investigator at BECC and researcher at MERGE.
In addition to Lund University, the following universities and organizations have participated in the study: University of Utah, University of Birmingham, Max Planck Institute for Biogeochemistry Jena, NOVA University Lisbon, Technical University of Munich, and Berchtesgaden National Park.
The study is published in Science:
A climate risk analysis of Earth's forests in the 21st century
Read an article about the study in Swedish:
Studie visar hur jordens skogar kommer att påverkas av klimatförändringar
Thomas Pugh's profile in the Lund University research portal
Thomas Pugh interviewed by SVT Nyheter
Så förändrar klimatförändringar världens skogar
A population pyramid graphically displays the age and gender make-up of a population. As we explained in our post ‘What is a Population Pyramid,’ the more rectangular the graph is shaped, the slower a population is growing; the more a graph looks like a pyramid, the faster that population is growing.
So what determines the shape of a population pyramid? Well, a lot of things. Events that took place during the 70 – 80 years depicted in the graph all have the potential to impact the graph’s shape. Economic variations (such as a depression or job growth), political changes (such as a new policy on family planning or a new tax break on dependents), conflicts (such as wars), public health trends and natural events (such as long-term droughts or an earthquake) can impact birth and/or death rates. These, in turn, influence the shape of the population pyramid.
For an example, let’s look at China’s pyramid from 2008 that is shown here. Where the graph shrinks in for the age cohorts around 50, this shows the slowing of population growth during the Great Chinese famine. After the famine, we see population started increasing again. But then, about 30 years ago, the pyramid starts becoming thinner again and this coincides with the institution of the “one child policy” that was put into law in 1979. The slight bulge in the 15-19 cohort is a reflection of the larger bulge at 35-44 (the lower bulge represents the children of the upper).
Once you know how to read a population pyramid, and understand the age-sex distribution graph, you can tell a lot about a country or region. Check out the U.S. Census Bureau for dynamic illustrations of changing pyramids for every country since 1950. (On the site, select ‘Population Pyramid Graph’ under Select Report.) Have students create population pyramids for several different countries using the activity, Power of the Pyramids. |
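For readers who would rather build a pyramid programmatically than on paper, here is a small illustrative sketch using matplotlib with made-up cohort counts; substitute real age-sex data (for example from the Census Bureau site mentioned above) to chart an actual country:

```python
# Illustrative population pyramid with hypothetical cohort counts (in millions).
import matplotlib.pyplot as plt

ages = ["0-9", "10-19", "20-29", "30-39", "40-49", "50-59", "60-69", "70+"]
males = [10, 11, 12, 11, 9, 7, 5, 3]      # hypothetical values
females = [9, 10, 12, 11, 9, 8, 6, 4]     # hypothetical values

fig, ax = plt.subplots()
ax.barh(ages, [-m for m in males], label="Male")   # males drawn to the left
ax.barh(ages, females, label="Female")             # females drawn to the right
ax.set_xlabel("Population (millions; male bars shown as negative)")
ax.set_ylabel("Age cohort")
ax.legend()
plt.show()
```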
Atomic Mass of Polonium
Atomic mass of Polonium is 209 u.
The atomic mass is the mass of an atom. The atomic mass or relative isotopic mass refers to the mass of a single particle, and therefore is tied to a certain specific isotope of an element. The atomic mass is carried by the atomic nucleus, which occupies only about 10^-12 of the total volume of the atom or less, but it contains all the positive charge and at least 99.95% of the total mass of the atom. Note that each element may have several isotopes; therefore, the resulting atomic mass is calculated from naturally-occurring isotopes and their abundance.
The size and mass of atoms are so small that the use of normal measuring units, while possible, is often inconvenient. Units of measure have been defined for mass and energy on the atomic scale to make measurements more convenient to express. The unit of measure for mass is the atomic mass unit (amu). One atomic mass unit is equal to 1.66 x 10^-24 grams. One unified atomic mass unit is approximately the mass of one nucleon (either a single proton or neutron) and is numerically equivalent to 1 g/mol.
For 12C the atomic mass is exactly 12u, since the atomic mass unit is defined from it. For other isotopes, the isotopic mass usually differs and is usually within 0.1 u of the mass number. For example, 63Cu (29 protons and 34 neutrons) has a mass number of 63 and an isotopic mass in its nuclear ground state is 62.91367 u.
There are two reasons for the difference between mass number and isotopic mass, known as the mass defect:
- The neutron is slightly heavier than the proton. This increases the mass of nuclei with more neutrons than protons relative to the atomic mass unit scale based on 12C with equal numbers of protons and neutrons.
- The nuclear binding energy varies between nuclei. A nucleus with greater binding energy has a lower total energy, and therefore a lower mass according to Einstein's mass-energy equivalence relation E = mc^2. For 63Cu the atomic mass is less than 63, so this must be the dominant factor.
Note that the rest mass of an atomic nucleus was found to be measurably smaller than the sum of the rest masses of its constituent protons, neutrons and electrons. Mass was no longer considered unchangeable in the closed system. The difference is a measure of the nuclear binding energy which holds the nucleus together. According to the Einstein relationship (E = mc^2), this binding energy is proportional to this mass difference and it is known as the mass defect.
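As a worked illustration of the mass defect and its associated binding energy, consider the deuteron (one proton plus one neutron), whose masses are simple and well measured; this example uses standard values rather than polonium data:

```latex
% Mass defect and binding energy of the deuteron; 1 u corresponds to about 931.5 MeV/c^2.
\begin{aligned}
\Delta m &= (m_p + m_n) - m_d
          = (1.007276\,\mathrm{u} + 1.008665\,\mathrm{u}) - 2.013553\,\mathrm{u}
          \approx 0.002388\,\mathrm{u},\\
E_b &= \Delta m\,c^2 \approx 0.002388 \times 931.5\ \mathrm{MeV} \approx 2.22\ \mathrm{MeV}.
\end{aligned}
```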
See also: Atomic Mass Number – Does it conserve in a nuclear reaction?
Mass Number of Polonium
Mass numbers of typical isotopes of Polonium are 208-210.
The total number of neutrons in the nucleus of an atom is called the neutron number of the atom and is given the symbol N. Neutron number plus atomic number equals atomic mass number: N+Z=A. The difference between the neutron number and the atomic number is known as the neutron excess: D = N – Z = A – 2Z.
Neutron number is rarely written explicitly in nuclide symbol notation, but appears as a subscript to the right of the element symbol. Nuclides that have the same neutron number but a different proton number are called isotones. The various species of atoms whose nuclei contain particular numbers of protons and neutrons are called nuclides. Each nuclide is denoted by the chemical symbol of the element (this specifies Z) with the atomic mass number as superscript. Therefore, we cannot determine the neutron number of uranium, for example; we can only determine the neutron number of a specific isotope. For example, the neutron number of uranium-238 is 238-92=146.
Density of Polonium
Density of Polonium is 9.196g/cm3.
Density is defined as the mass per unit volume. It is an intensive property, which is mathematically defined as mass divided by volume:
ρ = m/V
In words, the density (ρ) of a substance is the total mass (m) of that substance divided by the total volume (V) occupied by that substance. The standard SI unit is kilograms per cubic meter (kg/m3). The Standard English unit is pounds mass per cubic foot (lbm/ft3).
Density – Atomic Mass and Atomic Number Density
Since the density (ρ) of a substance is the total mass (m) of that substance divided by the total volume (V) occupied by that substance, it is obvious, the density of a substance strongly depends on its atomic mass and also on the atomic number density (N; atoms/cm3),
- Atomic Weight. The atomic mass is carried by the atomic nucleus, which occupies only about 10^-12 of the total volume of the atom or less, but it contains all the positive charge and at least 99.95% of the total mass of the atom. Therefore it is determined by the mass number (number of protons and neutrons).
- Atomic Number Density. The atomic number density (N; atoms/cm3), which is associated with atomic radii, is the number of atoms of a given type per unit volume (V; cm3) of the material. The atomic number density (N; atoms/cm3) of a pure material having atomic or molecular weight (M; grams/mol) and the material density (⍴; gram/cm3) is easily computed from the following equation using Avogadro's number (NA = 6.022×10^23 atoms or molecules per mole): N = ⍴ × NA / M. A numerical sketch of this formula, applied to polonium, is given below.
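The following small numerical sketch (illustrative only) applies that formula to polonium, using the density and atomic mass quoted in this article:

```python
# Atomic number density of polonium from N = rho * N_A / M (values as quoted above).
AVOGADRO = 6.022e23     # atoms per mole
density = 9.196         # g/cm^3 (polonium)
molar_mass = 209.0      # g/mol (polonium)

atom_density = density * AVOGADRO / molar_mass
print(f"{atom_density:.3e} atoms/cm^3")   # roughly 2.65e22 atoms/cm^3
```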
Since nucleons (protons and neutrons) make up most of the mass of ordinary atoms, the density of normal matter tends to be limited by how closely we can pack these nucleons and depends on the internal atomic structure of a substance. The densest material found on earth is the metal osmium, but its density pales by comparison to the densities of exotic astronomical objects such as white dwarf stars and neutron stars.
If we include man-made elements, the densest so far is Hassium. Hassium is a chemical element with symbol Hs and atomic number 108. It is a synthetic element (first synthesised in Darmstadt, in the German state of Hesse, after which it is named) and radioactive. The most stable known isotope, 269Hs, has a half-life of approximately 9.7 seconds. It has an estimated density of 40.7 x 10^3 kg/m^3. The density of Hassium results from its high atomic weight and from the significant decrease in ionic radii of the elements in the lanthanide series, known as lanthanide and actinide contraction.
Wouldn’t it be wonderful to live in a place where nobody ever dies or gets sick, a place where school shootings do not exist and natural disasters do not destroy? Yet the reality is that we live in a broken world where there is death and tragedy, and sadly, children are no strangers to loss and grief. While dealing with loss and grief can be difficult for anyone, children’s reactions may differ from those of adults. Here are some suggestions to consider when helping a child who is grieving.
The age of the child will affect the way they understand loss and grief. Younger children might not understand the permanence of death or may believe they are somehow responsible for a tragic event. Older children engage in concrete thinking and might ask for more details if they want to know more. They will understand the consequences of a loss and that a person who dies will not be around anymore. Different ages will need differing levels of explanation.
Children need to know they are safe and secure, so it is best to hear hard news from someone they feel safe with, someone they trust. While you might think it’s in their best interest to either wait or spare them bad news, it is important for children to hear as soon as possible so they can start facing the loss. Silence is seldom helpful, though often tempting.
It is also important to prepare them for what lies ahead. If this is the beginning of a long illness, telling children early helps them prepare. If there is a funeral coming up, explain what they will encounter in the days ahead. Even if a child was not directly exposed to a tragedy such as a school shooting or natural disaster, he or she may have heard the news or adult conversations and still feel stress or anxiety. Make a point of raising the topic to a depth appropriate to their level of understanding.
Children often go in and out of grief, so be patient. Answer any questions they have, even if they are hard questions. Some children might ask a lot of questions, some might communicate without words through actions and reactions. A grieving child might have a physical symptom of grief like a loss of appetite, or emotional symptoms like mood swings or severe crying. It is not uncommon for a child to revert to an earlier stage of development and suck their thumb or wet the bed. Children might become aggressive when angry or clingy when scared. Recognize bad behaviors as symptoms of underlying grief, and address the grief more than the behaviors.
When you do say something, be direct and especially honest so that there is less confusion for the child. Use simple concrete language, avoiding euphemisms. And always provide reassurance, letting the child know you care. Acknowledge the feelings of hurt, sadness, and fear. Here are some guidelines when you talk to a child:
If someone is dying, encourage children to say good-bye and express their emotions. And don’t feel you need to hide your own grief. Remembering the person who died is part of the healing process. Share memories and pictures, talking about the loved one. If a traumatic event means there are changes, involve children in those decisions as appropriate. Giving children choices whenever possible helps them regain a sense of control.
For a list of helpful resources for ministry leaders, click here.
Grief is a natural consequence of losing something or someone important. Acknowledge that pain as appropriate, validating your and the child's feelings of loss. May your shared loss let your relationship grow stronger.
Rev. Dr. Rob Toornstra
Climate change is a significant and lasting change in the Earth's climate over an extended period. In this article, we discuss the impact of a rise in the average temperature of the Earth's atmosphere and oceans from the 19th century to the present. Climate change is expected to have far greater negative effects on developing countries than on developed countries due to numerous factors including exposure to extreme weather and infrastructure considerations.
Possible solutions
The science of climate change
Although the existence of the greenhouse effect has been largely understood since 1896, there are still a tiny minority of scientists who remain critical of specifics written in some reports of the IPCCW and other organizations. These so-called climate change skepticsW are generally misinformed or are deliberately attempting to create doubt and uncertainty about the science.
Certain politicians, lobbyists and economists refer to disinformation from climate change 'skeptics' for their own advantage, portraying an image that climate change does not exist, poses but a minor problem, or may even be beneficial so as to be able to prevent action to reduce greenhouse gases (mostly resulting from the burning of fossil fuels). The GWPF in the UK is an example of a political lobbying group which is secretly funded by fossil fuel interests (see the Guardian article on the secret funding of the GWPF).
The impact of climate change
Although climate change itself has been proven, uncertainties still exist regarding predicting the effects. The IPCC is highly confident that impacts will increase as greenhouse gases and associated positive feedback effects kick in (e.g. methane release from melting permafrost, changing albedo), though their severity and the timescales may differ to some extent. Examples of impacts include:
- Increasing heat stresses as global temperatures rise. While some regions such as Canada may benefit to some degree from rising temperatures, the overall effects on the ability of the planet to support life will be negative. As temperatures rise, many equatorial regions will become hostile to life;
- Changing weather patterns, especially more extreme weather events and changing rainfall patterns, specifically increasing or decreasing precipitation levels. Increasing floods and drought periods will have a generally negative effect on farming;
- Natural disasters (i.e. mudslides, hurricanes, ...) are expected to increase in severity; the estimated death toll from climate change in 2003 alone was 150,000 people.
- Sea level rise will contaminate a very large percentage of the planet's agricultural fields with sea salt and make them no longer suitable for continued food production. In addition, many low-lying islands and coastlines will need to be abandoned, forcing many people to move.
- Increasing ocean acidity. As the pH of water decreases due to the input of carbonic acid (resulting from CO2 dissolving in water), life forms that rely on a chalk shell will find it increasingly impossible to survive. This will have negative effects on many ocean ecosystems, especially coral reefs which are the most bio-diverse of any ecosystem on the planet.
Historic changes in the earth's climate
The term "climate change" is often used to describe the impact of human-caused pollution on the earth's climate, but is important to understand that climate change is also caused by natural phenomena.
History of Ice Ages
The earth's climate is subject to large fluctuations. The climate is dominated by ice ages, which reduce the earth's surface temperature and cover large parts of the surface in ice sheets. There have been only five ice ages to date, each lasting from 30 million up to 300 million years. The most recent one occurred during the "Quaternary", which started about 2.5 million years ago and is still ongoing. In between ice ages, the poles are not covered by glaciers and the temperature is higher. The climate is quite stable in these periods; however, during ice ages strong variations occur. These variations are called glacial (cold) and interglacial (hot) periods. Glacial and interglacial periods alternate every 5,000-15,000 years. The current interglacial period started 14,000 years ago.
Climate change mitigation
Several options are available to reduce the impacts of a changing climate. Most of these (the most efficient ones) are lifestyle changes (i.e. stop the burning of fossil fuels, stop eating meat, etc.) and can be put in place today. We also do not need to wait for any specific technology to become available. Rather, the essential technology is already here today. Politicians often portray a different picture but it is rarely based in reality. Selected options include:
- Reducing the release of greenhouse gasesW (GHG's) into the atmosphere (ie through energy efficiency,...)
- Prevent carbon dioxide from being released into the atmosphere (ie through carbon capture and storage (CCS), biochar,...). With Carbon sequestration/CCS, after combusting a fuel, the CO2 is stored in a cavity underground.
- Remove carbon dioxide from the atmosphere, e.g. through geo-engineering ocean fertilisation, planting extra trees,...
- Shield some of the planet from the sun, or reflect a proportion of sunlight back into space (i.e. by painting roads, parking spaces and roofs white, spraying sulfate aerosols into the stratosphere,...)
- Climate change mitigation: build heat tolerant houses (passive solar with suitable insulation), flood control barriers,...
- Grin and bear it: put up with the inconveniences and the expected loss of biodiversity and increases in certain types of natural disasters and wait for extinction.
The IPCC already considers a 2°C temperature rise to be almost inevitable. It also advises the use of most of the other measures, but remains critical of geoengineering options due to the risks involved.
Climate change chain reactions
Changes in temperature and weather have a huge impact on wildlife, and this can become dangerous if the trend continues. Even a minor change in temperature can lead to insects hatching earlier, or to the early melting of the snow that protects snow rabbits from their natural predators. Such effects cause problems like insect overpopulation, or too few surviving rabbits to reproduce sustainably. Insect outbreaks are dangerous because the insects consume crops and other plants; in huge numbers they are able to consume a whole field of potential food. If this problem keeps growing, many species could die out, and even our own food-producing capacity may shrink dramatically.
Climate Change on the Ocean
Earth is called the blue planet because approximately 72 percent of it is covered by oceans. The oceans influence the weather on local to global scales, while changes in climate can fundamentally alter many properties of the oceans.
Ocean conveyor system
The thermohaline circulation, also called the great ocean conveyor belt or global ocean conveyor belt, is a large-scale ocean circulation that distributes vast quantities of heat and moisture around the planet.
Gulf Stream
The Gulf Stream is about 10,000 km long and is one of the largest and fastest ocean currents on Earth. Originating at the tip of Florida, it flows at up to 2 m/s and carries roughly 100,000,000 m³ of water per second towards Europe.
Sea level rise
As water gets warmer, it takes up more space. Each drop of water only expands by a little bit, but when you multiply this expansion over the entire depth of the ocean, it all adds up and causes sea levels to rise. Thermal expansion is not the only cause, however: sea level rise is also driven by the melting of glaciers and polar ice caps and by ice loss from Greenland and West Antarctica.
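For a rough sense of scale, the thermal-expansion contribution can be sketched with a one-line calculation; the expansion coefficient, warmed-layer depth and warming used below are assumed, order-of-magnitude values chosen only to illustrate the idea, not figures from this article.

```python
# Order-of-magnitude sketch of sea level rise from thermal expansion alone.
# All three values below are assumed, illustrative figures.
alpha = 2e-4           # thermal expansion coefficient of seawater, per deg C
layer_depth_m = 700    # depth of the upper ocean layer assumed to warm
warming_C = 1.0        # assumed average warming of that layer

rise_cm = alpha * layer_depth_m * warming_C * 100
print(f"Thermal expansion alone: roughly {rise_cm:.0f} cm of sea level rise")
```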
Ocean acidification
Ocean acidification is the ongoing decrease in ocean pH caused by human CO2 emissions. Ocean acidity (the concentration of hydrogen ions) has already increased by about 30%, corresponding to a drop of roughly 0.1 pH units, and if we continue emitting CO2 at the same rate, acidity will increase by about 150% by 2100, a rate of change that has not been experienced for at least 400,000 years. Such a monumental alteration in basic ocean chemistry is likely to have wide implications for ocean life, especially for those organisms that require calcium carbonate to build shells or skeletons.
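Because pH is a logarithmic scale, a small drop in pH corresponds to a large relative increase in hydrogen-ion concentration. The short sketch below shows the arithmetic; the pH values are the commonly cited pre-industrial and present-day surface-ocean figures, used here only for illustration.

```python
# pH is the negative base-10 logarithm of the hydrogen-ion concentration,
# so acidity changes exponentially with pH. Values below are commonly cited
# pre-industrial and present-day surface-ocean figures, for illustration only.
pre_industrial_pH = 8.21
present_pH = 8.10                    # roughly a 0.1-unit drop so far

increase = 10 ** (pre_industrial_pH - present_pH) - 1
print(f"Increase in [H+] concentration: {increase:.0%}")   # about 30%
```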
Impact of climate change on species
Rising temperatures shift climate zones and force species to adapt to new climatic conditions. When temperatures rise rapidly, there is a higher risk that species, especially plants, will not find areas with suitable living conditions early enough. Observations of several species show that they are moving their ranges polewards to keep up with the changing climate.
References
- Precipitation changes
- Climate Lab
- One solution is to grow more resilient crops, i.e. more resistant to changes in rainfall and disease
- 150,000 people killed by global warming up to 2003
- Climate Change Security
- Sea level rise: 2m rise expected by the year 2100, 6.5m by 2200
- Earth underwater documentary
- Note: this direct temperature reduction does not reduce carbon levels, so ocean acidification from higher carbon dioxide is still a problem
See also
- Renewable energy
- Incentives for sustainability
- Sea level rise
- Measures to stop global warming |
Japanese (日本語) is spoken by more than 130 million people worldwide, mainly in Japan but also in some emigrant communities. There are an estimated 1.5 million Japanese speakers in Brazil, and more than 1 million in the USA. Approximately 12% of Hawaiians have Japanese ancestry, due to large-scale migration in the late nineteenth century following the boom in the sugar industry.
Japanese is written in a mixture of three scripts: Chinese characters (kanji) and two syllabic scripts (using modified Chinese characters, called kana). The Latin alphabet is also increasingly used for company branding, such as names and logos, and also abbreviations and acronyms such as ‘CD’ or ‘NATO’. Kōgo is the dominant system for Japanese writing today, having gradually replaced bungo since the turn of the twentieth century.
Traditionally, Japanese is written from top to bottom, with columns running from right to left, following the traditional Chinese system. Modern Japanese, however, is often written like English, i.e. left to right, then top to bottom. There are tens of thousands of kanji in total, although only around 2,000 are in common everyday use, alongside 46 basic characters in each of the two kana syllabaries.
Standard Japanese (hyōjungo) came from the language spoken in upper-class Tokyo after the Meiji Restoration of 1868. It is the official language taught in schools and used in media and official communications. There are also several dialects of Japanese spoken throughout the country, but these are largely mutually intelligible.
Japanese has been heavily influenced by Chinese and therefore contains a significant number of Chinese loanwords borrowed since ancient times. Other countries that have come into contact with Japan, such as England, Portugal, the Netherlands, Germany and France, have also contributed loanwords. The words for ‘tobacco’ and ‘raincoat’ come from Portuguese, while the words for ‘scalpel’ and ‘sailor’ come from Dutch. English has also adopted some Japanese words, including ‘bonsai’, ‘geisha’ and ‘kamikaze’.
Japanese has an intricate grammatical system to express politeness and formality. Humble language is used to talk about oneself and one’s own family or company, whereas honorific language is used to talk about people outside the speaker’s group. Four common honorific suffixes are used: kun, chan, san and sama, and the differences in their usage are complex.
During the period when Japan was governed by the samurai class (twelfth to nineteenth centuries), some of the methods used to train soldiers developed into well-ordered martial arts, for example jujitsu and sumo. Some of these arts were then developed into modern sports, such as judo, and are widely practised today.
Business Language Services Ltd. (BLS) specialises in Japanese translation (both English to Japanese and Japanese to English). We have a broad network of highly experienced, qualified professional Japanese translators, who only translate into their mother tongue. What’s more, all our Japanese translations are proofread by a second, independent linguist. BLS has an extensive database of Japanese interpreters, selected according to their expertise, specialist knowledge, friendly attitude and professional reliability. BLS also works with some of the best Japanese language tutors, enabling us to offer you tailor-made courses to match your precise needs and suit your ongoing work commitments. |
Whiskers are also known as vibrissae (singular vibrissa), from the Latin vibrare, "to vibrate". Vibrissae are the specialized hairs on mammals and the bristle-like feathers near the mouths of many birds. Their resonant design is symbolic of the energies, good and bad, that are reverberating throughout the natural world. Every living thing is connected and, by birthright, deserves to exist.
Scientific Name: Anodorhynchus hyacinthinus
Where Hyacinth Macaws Live:
Bolivia, Brazil and Paraguay in semi-open habitats, usually in forests that have a dry season that prevents the growth of a tall closed-canopy tropical forest.
What Hyacinth Macaws Eat:
Seeds, grains and nuts but mostly palm nuts. Most feeding is done on the ground but macaws can climb trees and pick nuts as well.
How Long Hyacinth Macaws Live: Up to 50 years
Why Hyacinth Macaws are Awesome:
They are the largest of all parrots, at almost 40 inches in length, and weigh up to 3.5 pounds.
Conservation of Hyacinth Macaws:
The U.S. Fish and Wildlife Service has proposed listing the Hyacinth Macaw on the Endangered Species List because it faces significant threats, particularly due to high deforestation rates, which could destroy remaining native habitat as early as 2030. Hyacinth macaws nest in tree cavities and cliff cavities. Habitat loss, hunting and competition for food have adverse effects on macaw populations.
The Hyacinth Macaw is listed under the Convention on International Trade in Endangered Species (CITES) Appendix I and II, is protected under Brazilian and Bolivian law and banned from export in all countries of origin. Many property owners in the Hyacinth Macaw range no longer permit trappers on their properties.
The single most effective way any of us can help all parrot species is to avoid purchasing birds or other exotic animals, especially if their birth location is unknown. Purchasing locally bred exotic animals can increase the demand for wild born exotic animal species. If you really must have an exotic animal, look first at Petfinder.com, which is a resource for adoptable cats, dogs, bunnies, rodents, snakes, hamsters, parrots and more. |
Do you love watching cool timelapses of construction projects or other massive work done in minutes? Let's make one here.
We will be looking at an excavator digging a quarry, taking a picture each day to see the whole progress. Your task is to show us this process!
Quarry is defined by the width of its first layer.
Excavator is defined by its capability to dig in one day.
Width of quarry. Integer number, always >= 1.
Excavator dig speed. Integer number, always >= 1.
Output the progress of digging the quarry on each day, starting with flat untouched ground and finishing with the completed quarry.
On the last day there may be fewer units left to dig than the excavator is capable of. The excess capacity won't be used anywhere, so you should just output the fully dug quarry.
All days' progress must be present in the output at once. You can't clear or overwrite a previous day's progress in the output.
Trailing and leading newlines for each day's output are acceptable in any reasonable number.
This is code-golf, so make your code as compact as possible.
Work starts with flat ground. The length of the displayed ground is the width of the quarry + 2, so there will always be one underscore character on each side of the quarry.
A dug quarry looks like this for an even width:
_        _
 \      /
  \    /
   \  /
    \/
And like this for an odd width:
_       _
 \     /
  \   /
   \ /
    V
Here are examples of quarry progress:
_ _______
 V           dug 1 unit

_  ______
 \/          dug 2 units

_     ___
 \___/       dug 5 units

_       _
 \   __/     dug 10 units
  \_/
Full progress example. Quarry width: 8. Excavator speed: 4 units per day.
__________

_    _____
 \__/

_        _
 \______/

_        _
 \    __/
  \__/

_        _
 \      /
  \  __/
   \/

_        _
 \      /
  \    /
   \  /
    \/
In the example above, the excavator digs exactly its capability (speed) on the last day.
Width: 7, Speed: 3
Width: 10, Speed: 4
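Since this is a code-golf challenge, answers aim for brevity rather than clarity; purely as an illustration of the expected behaviour, here is an ungolfed Python reference sketch. It assumes digging proceeds layer by layer from the top, left to right within each layer, which matches the worked example above; the function names are mine, not part of the challenge.

```python
def render(width, dug):
    """Picture of a quarry of the given width after `dug` units have been
    removed, assuming digging goes layer by layer, left to right."""
    sizes = list(range(width, 0, -2))        # layer widths, top to bottom
    d = []                                   # units dug out of each layer
    for s in sizes:
        take = min(dug, s)
        d.append(take)
        dug -= take
    d.append(0)                              # sentinel "layer" below the last one

    cols = width + 2
    grid = [[' '] * cols for _ in range(len(sizes) + 1)]

    # Ground level: one '_' margin on each side plus undug layer-1 cells.
    grid[0][0] = grid[0][-1] = '_'
    for j in range(d[0] + 1, width + 1):
        grid[0][j] = '_'

    for r in range(1, len(sizes) + 1):       # row r shows the hole in layer r
        if d[r - 1] == 0:
            break
        # Floor: tops of exposed but still undug cells of the layer below.
        for j in range(d[r] + 1, d[r - 1]):
            grid[r][r + j] = '_'
        # Walls: a 1-unit hole is drawn as 'V', wider holes get two walls.
        if d[r - 1] == 1:
            grid[r][r] = 'V'
        else:
            grid[r][r] = '\\'
            grid[r][r + d[r - 1] - 1] = '/'

    rows = [''.join(row).rstrip() for row in grid]
    return '\n'.join(row for row in rows if row)

def quarry_timelapse(width, speed):
    """Return all daily states, starting from flat ground."""
    total = sum(range(width, 0, -2))
    frames, dug = [render(width, 0)], 0
    while dug < total:
        dug = min(dug + speed, total)
        frames.append(render(width, dug))
    return '\n\n'.join(frames)

print(quarry_timelapse(8, 4))
```

Calling quarry_timelapse(8, 4) reproduces the six frames of the width-8, speed-4 example above.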
The Asian citrus psyllid, Diaphorina citri, is a tiny, mottled brown insect about the size of an aphid. This insect poses a serious threat to California's citrus trees because it vectors the pathogen that causes huanglongbing disease (HLB). This disease is the most serious threat to citrus trees worldwide—including those grown in home gardens and on farms. The psyllid feeds on all varieties of citrus (e.g., oranges, grapefruit, lemons, and mandarins) and several closely related ornamental plants in the family Rutaceae (e.g., calamondin, box orange, Indian curry leaf, and orange jessamine/orange jasmine).
The Asian citrus psyllid (or ACP), damages citrus directly by feeding on newly developed leaves (flush). However, more seriously, the insect is a vector of the bacterium Candidatus Liberibacter asiaticus, associated with the fatal citrus disease HLB, also called citrus greening disease. The psyllid takes the bacteria into its body when it feeds on bacteria-infected plants. The disease spreads when a bacteria-carrying psyllid flies to a healthy plant and injects bacteria into it as it feeds.
HLB can kill a citrus tree in as little as 5 years, and there is no known cure or remedy. All commonly grown citrus varieties are susceptible to the pathogen. The only way to protect trees is to prevent the spread of the HLB pathogen by controlling psyllid populations and destroying any infected trees.
The Asian citrus psyllid is widely distributed throughout Southern California and is becoming more widespread in the Central Valley and further north. The first tree with HLB was found in March 2012 in a home garden in Los Angeles County and a few years later was found in residences in Orange and Riverside Counties. Spread of the disease began to rapidly accelerate in these areas in 2017. Removal of infected trees by the California Department of Food and Agriculture (CDFA) has occurred wherever they have been found.
The presence of HLB in pockets of Southern California emphasizes that it is critical to control psyllid populations so that disease spread is limited.
The Asian citrus psyllid and the HLB disease originated in eastern Asia or the Indian subcontinent and then spread to other areas of the world where citrus is grown. The psyllid was first found in the United States in 1998 in Palm Beach County, Florida on backyard plantings of orange jessamine, Murraya paniculata, and spread rapidly over a 3-year period. HLB spread equally rapidly in Florida.
In 2008, the Asian citrus psyllid was first detected in California. The psyllid spread throughout Southern California, particularly in urban and suburban environments, but also in commercial groves. The psyllid has since expanded its range to the Central Valley and the Central Coast, and has been found as far north as the San Francisco Bay Area and sites near Sacramento.
The first infected tree found in California is believed to have been the result of illegal grafting of an infected bud (taking plant tissue from one tree and inserting it into another to form a new branch). The infected tree was destroyed to prevent further spread of the bacterium. Since that time, additional infected trees have been found in southern California’s residential areas; these may have resulted from illegally imported diseased trees, illegal grafting of infected budwood, and, more recently, the natural spread of the bacterium by the psyllid. CDFA is continuing to detect and eliminate infected trees.
To protect the state's commercial and residential citrus from HLB, it is important to control the psyllid, prevent the accidental introduction of any infected host plant, and detect and remove any infected plants found in California as quickly as possible. The job of detecting infected trees is made difficult by the fact that it takes one to several years for symptoms of HLB to begin to show in the trees. Meanwhile, psyllids can pick up the HLB pathogen as nymphs and spread it only a few weeks after the tree is infected when they fly away as adults. Therefore, it is important to monitor and control psyllids in citrus trees and immediately report any suspected plant symptoms to the county agricultural commissioner.
Psyllid Life Stages
The adult Asian citrus psyllid is a small brownish-winged insect about the size of an aphid. Its body is 1/6 to 1/8 inch long with a pointed front end, red eyes, and short antennae. The wings are mottled brown around the outer edge except where a clear stripe breaks up the pattern at the back. The adult psyllid feeds with its head down, almost touching the leaf, and the rest of its body is raised from the surface at an almost 45-degree angle with its back end in the air. No other insect pest of citrus positions its body this way while feeding.
Adults typically live 1 to 2 months. Females lay tiny yellow-orange almond-shaped eggs in the folds of the newly developed leaves of citrus. Each female can lay several hundred eggs during her lifespan.
The eggs hatch into nymphs that are wingless, flattened, yellow or orange to brownish, and 1/100 to 1/14 inch long. Nymphs molt 4 times, increasing in size with each nymphal stage (instar), before maturing into adult psyllids. Late instar nymphs have distinctive red eyes. The nymphs can only feed on soft, young plant tissue and are found on immature leaves, stems and flowers of citrus.
The nymphs remove sap from plant tissue when they feed and excrete a large quantity of sugary liquid (honeydew). Each nymph also produces a waxy tubule from its back end to help clear the sugary waste product away from its body. The tubule's shape—a curly tube with a bulb at the end—is unique to the Asian citrus psyllid and can be used to identify the insect.
There are other psyllids such as Eucalyptus psyllids, tomato psyllids, and Eugenia psyllid that can be found in home gardens. The Asian citrus psyllid is easily distinguished from these in its adult stage by the brown band along the edge of its wing interrupted by a clear area, its characteristic body tilt, and, in the nymph stage, the shape of the waxy tubules it produces.
The Asian citrus psyllid damages citrus when its nymphs feed on new shoots and leaves. They remove sap from the plant tissue and inject a salivary toxin as they feed. This toxin can inhibit or kill new shoots, deforming new leaves by twisting and curling them. If the leaves mature after sustaining this damage then they will have a characteristic notch.
There are many other insect pests that can cause twisting of leaves, such as aphids, citrus leafminer, and citrus thrips. The twisting of leaves doesn't harm trees and can be tolerated, but the death of new growth will retard the growth of young trees that are less than 5 years old.
Excess sap (honeydew) that the psyllid nymphs excrete accumulates on leaf surfaces. This promotes the growth of sooty mold, which is unsightly but not harmful. Other insect pests of citrus also excrete honeydew, including aphids, whiteflies, and soft scales.
Most importantly, the Asian citrus psyllid, through its feeding activity, can inoculate the tree with the bacterium that causes HLB, ultimately killing the tree. It only takes a few psyllids to spread the disease.
An early symptom of HLB in citrus is the yellowing of leaves on an individual limb or in one sector of a tree's canopy. Leaves that turn yellow from HLB will show an asymmetrical pattern of blotchy yellowing or mottling of the leaf, with patches of green on one side of the leaf and yellow on the other side.
Citrus leaves can turn yellow for many other reasons and often discolor from deficiencies of zinc or other nutrients. However, the pattern of yellowing caused by nutrient deficiencies typically occurs symmetrically (equally on both sides of the midvein), between or along leaf veins.
As the disease progresses, the fruit size becomes smaller, and the juice turns bitter. The fruit may remain partially green, which is why the disease is also called citrus greening. The fruit becomes lopsided, has dark aborted seeds, and tends to drop prematurely.
Chronically infected trees are sparsely foliated with small leaves that point upward, and the trees have extensive twig and limb dieback. Eventually, the tree stops bearing fruit and dies. Fruit and tree health symptoms may not begin to appear for 2 or more years after the bacteria infect a tree.
MONITORING AND MANAGEMENT
In response to the establishment of ACP in California, CDFA began an extensive monitoring program to track the distribution of the insect and disease. This program involves CDFA and other personnel regularly checking thousands of yellow sticky traps for the psyllid, in both residential areas and commercial citrus groves throughout the state. The program also includes frequent testing of psyllids and leaf samples for the presence of the pathogen.
Monitoring results are being used to delimit quarantine zones, guide releases of biological control agents, intensify testing for HLB, and prioritize areas for chemical control programs. In areas where HLB has been found, home gardeners need to take an active role in controlling the psyllid throughout the year, by watching for disease symptoms and supporting disease testing and tree removal activities.
ACP and HLB Quarantines
ACP quarantine zones have been established throughout the state that restrict movement of citrus trees and fruit in order to prevent psyllids from being moved to new, uninfested areas of California. Additional, more restrictive HLB quarantine zones have been established to help keep HLB from spreading. Citrus trees and close relatives that could be hosts of the psyllid can't be taken out of quarantine areas. Fruit can be moved, but only if it is washed and free of stems and leaves that could harbor psyllids.
Whether you are inside or outside a quarantine area, it is very important to assist with the effort to reduce Asian citrus psyllids and report suspected HLB symptoms in your trees. Your efforts will slow the spread of HLB and provide time for scientists to work on finding a cure for the disease. For maps and information about the quarantine areas, see the UC ANR ACP Distribution and Management website.
How You Can Help
Residents and landscapers can help combat the psyllid by inspecting their citrus trees and reporting infestations of the Asian citrus psyllid in areas where they are not known to occur or suspected cases of the disease. For more photos of the Asian citrus psyllid and HLB symptoms, visit the California Citrus Threat website.
The best way to detect psyllids is by looking at tiny newly-developing leaves on citrus trees whenever flush (clusters of new leaves) is forming. Mature citrus trees typically produce most of their new growth in the spring and fall, but young trees and lemons tend to flush periodically year round during warm weather.
Slowly walk around each tree and inspect the flush. Look for signs of psyllid feeding and damage, including twisted or notched leaves, nymphs producing waxy deposits, honeydew, sooty mold, or adult psyllids. If you think psyllids are present, use a hand lens to look for small yellow eggs, psyllid nymphs with their waxy tubules, and adults. Immature stages (eggs and nymphs) are found on tender new leaves and they don't fly, so monitoring efforts are most effective when directed toward these stages on citrus flush.
If you think you have found the insect, immediately contact the CDFA Exotic Pest Hotline at 1-800-491-1899. CDFA staff will tell you if you are in an area that is new to the psyllid or if it is common in your area.
If you are in an area that is new to the psyllid, CDFA may come to your residence and take a sample. If the insect is identified as an Asian citrus psyllid, then the quarantine may expand to include that location, and citrus and other ACP host plants will be treated with insecticides by CDFA personnel to control the psyllid.
In areas known to be widely infested with the psyllid, you will need to treat for the psyllid yourself. This can be confirmed by calling the CDFA hotline. This publication provides information on how you can treat your infested trees. If you need further assistance, contact your local UC Master Gardener program or a landscaping and pest control professional for more information about the steps you can take to control the psyllid.
Monitoring citrus trees for symptoms of HLB disease is critical for early detection and management. Immediately report suspected cases of the disease to your county agricultural commissioner's office or call the CDFA hotline at 1-800-491-1899. If the tree is found to be infected with the HLB pathogen, it will be removed immediately to prevent further spread of the disease. It is critical that residents cooperate with tree removal.
Symptoms of HLB take a long time to develop after a tree is first infected, perhaps upwards of 2 years or more for mature trees. However, infected trees can be a source of the bacterium for the psyllid much sooner. This means that in areas where HLB is known to be present, just because a tree looks to be free of HLB symptoms does not mean that it hasn’t been infected. Therefore, for residential areas where HLB is becoming widespread it is worth considering proactive removal of your citrus trees, even if they have not yet tested positive for the disease, to help contain HLB spread. At the very least, in these high HLB risk areas, home gardeners should be discouraged from planting new citrus trees given the high potential for them to become infected in the near future.
A number of predators and parasites feed on ACP. The nymphs are killed by tiny parasitic wasps and various predators, including lady beetle adults and larvae, syrphid fly larvae, lacewing larvae, and minute pirate bugs. Some spiders, birds, and other general predators feed on adult psyllids.
Several species of tiny parasitoid wasps, collected by University of California researchers, have been brought to California for host-testing, mass-rearing, and release. The most promising of these, Tamarixia radiata, strongly prefers ACP nymphs, and under ideal conditions can significantly reduce psyllid populations.
Females of this tiny wasp, which poses no threat to people, lay their eggs underneath ACP nymphs, and after hatching, the parasitoid larvae feed on and kill the psyllid. To find evidence of this wasp at work, keep an eye out for ACP “mummies”, which look like hollowed-out nymphal shells. This wasp has been released at thousands of sites throughout Southern California since early 2012. More recently, T. radiata releases have been made in Central California and a second wasp (Diaphorencyrtus aligarhensis) that attacks the younger ACP nymphs was released in Southern California.
Tamarixia and other natural enemies have reduced ACP populations in Southern California, but they have not eradicated the pest and have not halted the spread of HLB. In the absence of ants, these beneficial insects will at least help to reduce psyllids, especially in areas where it is not possible or practical to institute chemical psyllid control measures. Visit the ACP Distribution and Management website to see a map of where these parasites have been released in California.
Ant Control to Protect Natural Enemies
Ants directly interfere with biological control of ACP, so it is very important for residents to control ants around their citrus trees. Ants “farm” the psyllid honeydew, feed it to their young and vigorously protect psyllids from predators and parasites (also called natural enemies). Ants do this to preserve this food source for their colony.
Ant control is especially important in areas of California where the very aggressive Argentine ant is found. Argentine ants can significantly reduce Tamarixia and Diaphorencyrtis attack rates on ACP. For information on ant identification and management in the landscape, see the UC IPM Pest Notes: Ants.
In areas where ACP has newly arrived, or where residential citrus trees are close to commercial citrus operations, CDFA conducts residential insecticide treatments to control psyllids. When a psyllid is found in these areas, all citrus and other ACP host plants on a property and nearby properties receive an application of two insecticides: a foliar pyrethroid insecticide to quickly kill adults and immature psyllids by direct contact and a soil-applied systemic insecticide to provide sustained control of nymphs tucked inside young leaves. This combination of treatments may protect trees against psyllids for many months. Home gardeners are encouraged to be vigilant and consider supplementary applications of their own when they see psyllids on their trees.
Because of the threat ACP poses to both backyard and commercial citrus and the urgency of containing this pest, home gardeners outside the areas that are part of the CDFA residential treatment program are encouraged to consider implementing their own psyllid control measures if psyllids are found in their area.
Home gardeners can hire a landscape pest control professional to apply insecticides, or make treatments themselves. Landscape professionals have access to the same pesticides applied by CDFA, which include the systemic imidacloprid and foliar applications of the pyrethroid beta-cyfluthrin.
Home gardeners can apply broad-spectrum foliar sprays (carbaryl, malathion) to rapidly control adults and protect plants for many weeks. The systemic insecticide imidacloprid (Bayer Advanced Fruit, Citrus & Vegetable and other products) is available for use as a soil drench, which moves through the roots to the growing tissues of the plant. This systemic insecticide provides good control of the nymphs for 1 to 2 months. Nymphs are hard to reach with foliar sprays because they are tucked inside the small, developing flush.
Apply the soil drench during summer or fall when roots are actively growing. Broad-spectrum foliar sprays and systemic insecticides are toxic to honey bees, so don't apply them when the citrus trees are blooming.
There are also a number of organic and "soft" foliar insecticides such as oils and soaps (horticultural spray oil, neem oil, insecticidal soap) that can help to reduce psyllids. These insecticides are generally lower in risk to beneficial insects (natural enemies and pollinators); however, they are also less persistent so applications need to be made frequently when psyllids are observed (every 7 to 14 days). Oil and soap insecticides must make direct contact with the psyllid so should be applied carefully to achieve full coverage of the tree. See the "Active Ingredients Compare Risks" button in this publication online for more information about potential hazards posed by these materials.
Grafton-Cardwell EE, Godfrey KE, Rogers ME, Childers CC, Stansly PA. 2006. Asian Citrus Psyllid. UC ANR Publication. 8205. Oakland, CA.
Polek M, Vidalakis G, Godfrey KE. 2007. Citrus Bacterial Canker Disease and Huanglongbing (Citrus Greening). UC ANR Publication. 8218. Oakland, CA.
Rust MK, Choe D-H. 2012. Pest Notes: Ants. UC ANR Publication 7411. Oakland, CA.
UC Agriculture and Natural Resources. Asian Citrus Psyllid Distribution and Management website. (Accessed on September 25, 2018.)
UC Agriculture and Natural Resources. California Master Gardeners website. (Accessed on September 25, 2018.)
United States Department of Agriculture. Citrus Greening Disease home page. (Accessed on September 25, 2018.)
Pest Notes: Asian Citrus Psyllid and Huanglongbing Disease (formerly titled Asian Citrus Psyllid)
TECHNICAL EDITOR: K Windbiel-Rojas
Produced by University of California Statewide IPM Program
Did you know that the United States shares many migratory songbird species with our neighbors in Central and South America?
Follow us along the migratory path of the beautiful Cerulean Warbler. And learn what the Rainforest Alliance is doing to conserve the bird’s breeding habitat in Appalachia and its wintering habitat in the forests and coffee farms of Colombia.
The lush Appalachian woodlands of the southeastern United States are an ecological treasure, home to more than 200 globally rare plants and animals. More than 15,000 documented wildlife species, including the caribou, moose, black bear, wild boar, fox, and numerous small mammals, thrive here. The Appalachian region is home to more salamander species than anywhere else in the world.
And like all living forests, the Appalachian woodlands are the lungs of the region, providing oxygen while also absorbing greenhouse gases and stabilizing the local climate. They provide safe drinking water and clean air for millions of people.
Forest and Coal Country
Nearly 60 percent of Appalachian forests are owned and managed by private landowners—families, small businesses, or a single person—who regard their forest as a living asset for recreation, refuge, and income generation. While many private woodland owners across the region have an excellent record of forest stewardship, irresponsible timber harvesting has been the norm, rather than the exception, for decades. These forests are also being decimated by mountaintop-removal coal mining.
Appalachia: Strong Forests for Strong Communities
To address these pressures on a critically important US landscape, the Rainforest Alliance has joined with businesses, foresters, the US Forest Service, and landowners in the southeastern United States to form the Appalachian Woodlands Alliance (AWA). The AWA works to conserve this forestland and enhance the local economy by advancing sustainable forest management practices tailored to local needs. These practices include the creation of vegetative buffers alongside waterways to enhance water quality, the management of invasive species, and the improvement of high-value habitats, including those that host migratory birds and other wildlife species.
Take a look at this video that highlights how Rainforest Alliance is working to balance sustainable use and protection of Appalachia.
A Refuge for Migratory Birds
Appalachia’s forests provide critical stopover refuge for migratory birds. The birds spend their breeding season in Canada and the United States and fly hundreds of miles south to winter in Mexico and Central and South America. These species include the Baltimore Oriole, Broad-winged Hawk, Red-eyed Vireo, Ruby-throated Hummingbird, Scarlet Tanager, Tennessee Warbler, and the Cerulean Warbler.
Cerulean Warblers spend the breeding season—late April through August—across the Midwest and southeastern regions of the United States, into Quebec and Ontario in Canada. They favor mature deciduous forests, those mostly made up of trees that lose their leaves each year. The birds eat insects and forage high up in the trees, moving rapidly from limb to limb in search of food. Although sizeable populations can be found throughout their migratory range, approximately 80 percent of the Cerulean Warbler population breeds in the forests of Appalachia in Tennessee, Kentucky, West Virginia, Ohio, and Pennsylvania.
In August, Cerulean Warblers begin their amazing journey south, flying across the Gulf of Mexico to the highlands of Central America. They’ll finally settle for the winter in the tropical Andean forests of Colombia, Venezuela, Ecuador, and Peru. Throughout their migratory range, birds depend on high-quality habitats with lots of available food so they can develop the fat deposits they need to fuel their journey.
Cerulean Warblers winter in Andean sub-montane forest, mainly between 3,280 – 6,562 feet (1,000 - 2,000 meters). Shade coffee plantations in Colombia are a critically important wintering habitat, supporting densities of warblers 3 to 14 times higher than those of neighboring primary forest. Shade coffee farms that have natural forest canopy provide migratory and resident birds with food sources in the form of insects and fruit nectar. These farms also provide important ecological services, such as erosion prevention, the protection of fresh water sources, soil nourishment, and carbon storage.
Shade Coffee Farms: Important Bird Habitat
There are more than 24,000 Rainforest Alliance Certified™ coffee farms, covering 452,000 acres (182,000 hectares) within the species’ wintering habitat in South America. In Colombia, there are more than 13,600 certified farms covering more than 280,500 acres (113,550 hectares).
On Rainforest Alliance Certified™ farms, coffee grows in harmony with nature: soils are healthy, waterways are protected, trash is reduced or recycled, and migratory bird habitat flourishes. The sustainable agriculture certification standard prohibits the clearing of any forestland for agricultural expansion, and it requires the conservation of on-farm natural habitat. It also includes requirements for shade cover and the number of tree species per hectare for agroforestry crops.
As a result of the training programs and certification systems, more than 1.3 million farmers are now using responsible methods that protect wildlife habitat—such as composting, the planting of native trees among shade-friendly crops, and manual and biological pest control. Farmers and wildlife also benefit from watershed conservation, buffer zones along streams to prevent erosion, and biological corridors for migratory species.
BirdNote is grateful to the Rainforest Alliance for sharing this information as part of our efforts to #BringBirdsBack.
Listen to these BirdNote shows to learn more about Cerulean Warblers and other birds that depend on Appalachia’s forests during migration:
Cerulean Warblers Link Conservation on Two Continents
World of Warblers
Recording Cerulean Warblers with Charlotte Goedsche
The Baltimore Oriole
How Much Birds Sing
Audubon and the Ruby-throated Hummingbird
Scarlet Tanagers Under the Canopy
Tanagers – Coffee Birds
Learn more about the Rainforest Alliance’s work: |
Every empire has unusual features; the Romans, for instance, were civil engineers extraordinaire, building aqueducts and roads still in use today, thousands of years later. The Mongol Empire was noted for its sheer military power, a rapid communication system based on relay stations, paper currency, diplomatic immunity and safe travel under Pax Mongolica. These features facilitated the growth, strength and flexibility of the Empire in responding to ever-changing circumstances.
The Yam or Ortoo, the communication/postal relay system, grew out of the Mongol army's need for fast communication. As the empire grew, it eventually incorporated some 12 million square miles, the largest contiguous land empire in world history. Genghis Khan set up a system of postal/relay stations every 20 to 30 miles. A large central building, corrals and outbuildings comprised the station. A relay rider would find lodging, hot food and rested, well-fed horses. The rider could hand his message to the next rider, or he could grab a fresh horse, food and go. By this method, messages traveled quickly across the vast acreage of the empire. At first, merchants and other travelers could use the postal stations, but they abused the system and the Empire rescinded the privilege.
When Marco Polo traveled through the Mongol Empire in 1274, he was astonished to find paper currency, which was completely unknown in medieval Europe at the time. Genghis Khan established paper money before he died; this currency was fully backed by silk and precious metals. Throughout the empire, the Chinese silver ingot was the money of public account, but paper money was used in China and the eastern portions of the empire. Under Kublai Khan, paper currency became the medium of exchange for all purposes.
The Mongols relied on trade and on diplomatic exchanges to strengthen the empire. To that end, Mongol officials gave diplomats a paiza, an engraved piece of gold, silver or bronze to show their status. The paiza was something like a diplomatic passport, which enabled the diplomat to travel safely throughout the empire and to receive lodging, food and transportation along the way. The Mongols sent and received diplomatic missions from all over the known world.
Safe Travel through the Empire
Along with diplomats, trade caravans, artisans and ordinary travelers were able to travel safely throughout the empire. Trade was essential to the empire, since the Mongols made very little themselves, and so safe conduct was guaranteed. When Karakorum, the Mongol capital, was being built, artisans, builders and craftsmen of all types were needed, so talented people were located and moved to Mongolia. Under the Mongols, the Silk Road, a series of interconnected trade routes from East to West, operated freely, facilitating a fertile exchange of ideas and goods from China to the West and vice versa.
A few years ago, Jerry Seinfeld did a very funny comedy routine about how parents and other adults always use the word “down” to direct kids. Calm down. Slow down. Sit down.
The whole concept of keeping our kids calm and controlled, particularly in the classroom, is now being proven wrong by science. Because of this latest information, classrooms look very different than in previous generations as educators turn to innovative approaches to help children learn. According to the Kaiser Family Foundation's 2010 survey “Generation M2: Media in the Lives of 8-18-year-olds”, the average U.S. student sits at school for about 4.5 hours a day. Add the hours they sit staring at screens — such as computers, tablets, phones, or television — and we find that our kids are sitting 85 percent of the time they are awake. That sure is a lot of sitting.
Up until now, it was believed that children needed to sit still in order to concentrate and succeed in school. Experts today find that kids are not wired to sit all day long. Instead, they benefit from breaks during which they are moving to help energize their brain and be more productive.
Why Movement is Better
Many studies in recent years helped educators realize forcing children to sit still is not the best approach; instead, moving around enhances their educational experience. A 2013 report from the Institute of Medicine found that children who are more active show greater attention, have faster cognitive processing speed, and perform better on standardized tests than those who are less active.
Nicholas Frankovits saw it in his own classrooms right here in Cleveland and Akron. Currently a professor of geology at the University of Akron, for many years he taught high school science to inner city students. Through his experience trying to engage his students, he developed a unique teaching technique called Inventucation that he describes as “a way for students to learn in a hands-on manner without being glued to their seats.”
When he entered the classroom in the beginning of the year, the students were bored, lethargic, and had no self-confidence. By providing them with creative problem-solving activities that got them out of their seats and directly interacting with science and engineering tools and objects, the students thrived.
“These projects got students out of their seats almost every day to see, touch and do. You can’t build something unless you really understand it physically,” he explains.
Movement breaks in the classroom also help address childhood obesity and the many other health concerns about children not getting enough physical activity. As we know, extensive medical evidence shows that regular physical activity is related to lower body fat, greater muscular strength, stronger bones, and improvements in cardiovascular and metabolic health. It also helps reduce anxiety and depression.
Changes in Classrooms to Encourage Movement
Educators are beginning to look for alternatives to the current classroom environment to improve how children learn. More and more schools are bringing active learning approaches into the classroom, including movement breaks, standing desks, and yoga ball seats.
- Movement Breaks
I don’t know about you, but my kids won’t stop talking about GoNoodle. Now used in more than 60,000 elementary schools in the U.S., it is one of several creative online programs that teachers are using to give their students active breaks throughout the school day. The idea is that kids need time between lessons to move around and give their mind a rest.
The unique aspect about these types of programs is that they are not intended to be focused solely on exercise. Instead, they are aimed to entertain the students, while at the same time getting them moving. For example, GoNoodle videos have kids running along their desks through a virtual obstacle course or following along with dance moves. The kids are laughing and having a blast without even realizing they are exercising.
- Standing Desks
Another new trend in classrooms to encourage movement is standing desks. These are raised desks that can be adjusted to each child’s height. Standing desks have been proven to be beneficial to children from both health and learning perspectives. A report in Pediatrics reviewed eight studies showing how standing desks in classrooms decreased sitting time by about an hour each day. Some of the studies also found that this improved the students’ behavior.
Next, a study in the International Journal of Health Promotion and Education published by scientists at Texas A&M found that students who used standing desks were more engaged in the classroom than those who sat during class. Nearly 300 children in second through fourth grade were observed during the school year. Their engagement level was measured by behaviors like answering a question, raising their hand, or participating in discussions. Researchers found a 12 percent rise in engagement by students using standing desks, which adds up to an extra seven minutes per hour of effective instruction time.
Frankovits found that students thought better when they were standing, so he always encouraged them to stand up and move around the classroom.
“There is something about walking around that gets creative juices flowing,” he says. “It’s more stimulating. When they are up and about, they are more energetic and have more ideas. My advice is to let kids breathe. Don’t corral them in. Give them freedom to think and experience what they are working on.”
Standing desks are becoming so popular now that organizations focusing on their benefits and use are sprouting up. Stand Up Kids and JustStand.org are both great resources to learn more about this effective classroom option.
- Yoga Ball Seats
Yoga balls also are rolling into more classrooms. They stabilize the core, promote better posture, and allow students to move and bounce around a bit at their work station when they feel antsy. Kids can essentially get a mini-workout just by sitting on the ball while they do their work. According to an article in California Educator, teachers have noticed that the yoga balls decrease unwanted movement, while students’ attention spans have risen. The children are thrilled because they have more freedom to move around.
What Parents Can Do
Some school districts have not yet added these inventive movement options to their curriculum for various reasons, such as funding and time constraints. It may take parents advocating in order to push schools to bring these options into the classroom.
There also are several ways that parents can incorporate movement into their children’s lives outside of school. When it’s time to focus on homework, it’s important for parents to be mindful that their kids have been sitting for much of the day.
Sarah Groves, M.Ed., LPCC-S, mental health therapist in the Executive Functioning Skills Building Program at Akron Children’s Hospital, works with families to identify simple ways to add movement to the children’s day.
“The more a child moves, the more engaged they are and the more they want to do and learn. Movement absolutely leads to better cognitive ability and emotional health,” she says.
Groves suggests finding small activities to sprinkle in some movement, even if it’s just a trip to the bathroom, kicking legs around at the desk, or playing with a fidget toy.
By incorporating some simple movement breaks both at home and at school, children will be more successful overall because, as Groves explains, “The reality is, if we don’t find a way for them to move, then they will find inappropriate ways to move. Let’s give them healthy ways to regulate themselves throughout the day rather than having them end up with extreme ups and downs.”
Reasoning in Artificial intelligence
In previous topics, we have learned various ways of knowledge representation in artificial intelligence. Now we will learn the various ways to reason on this knowledge using different logical schemes.
Reasoning is the mental process of deriving logical conclusions and making predictions from available knowledge, facts, and beliefs. In other words, "Reasoning is a way to infer facts from existing data." It is a general process of thinking rationally to find valid conclusions.
In artificial intelligence, reasoning is essential so that the machine can also think rationally like a human brain, and can perform like a human.
Types of Reasoning
In artificial intelligence, reasoning can be divided into the following categories:
Note: Inductive and deductive reasoning are forms of propositional logic.
1. Deductive reasoning:
Deductive reasoning is deducing new information from logically related known information. It is a form of valid reasoning, which means the argument's conclusion must be true when the premises are true.
Deductive reasoning is a type of propositional logic in AI, and it requires various rules and facts. It is sometimes referred to as top-down reasoning, in contrast to inductive reasoning.
In deductive reasoning, the truth of the premises guarantees the truth of the conclusion.
Deductive reasoning usually proceeds from general premises to a specific conclusion, as in the example below.
Premise-1: All humans eat veggies.
Premise-2: Suresh is human.
Conclusion: Suresh eats veggies.
The general process of deductive reasoning moves from given premises, through the application of inference rules, to a guaranteed conclusion.
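As a rough illustration (not part of the original tutorial), the step from premises to conclusion can be mimicked with a tiny forward-chaining loop; the fact and rule encoding below is invented for this example.

```python
# Illustrative encoding of the two premises; names are made up for the example.
facts = {("human", "Suresh")}                  # Premise-2: Suresh is human.
rules = [("human", "eats_veggies")]            # Premise-1: all humans eat veggies.

changed = True
while changed:                                 # apply rules until nothing new follows
    changed = False
    for premise, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("eats_veggies", "Suresh") in facts)     # True -> Suresh eats veggies.
```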
2. Inductive Reasoning:
Inductive reasoning is a form of reasoning that arrives at a conclusion from a limited set of facts by the process of generalization. It starts with a series of specific facts or data and reaches a general statement or conclusion.
Inductive reasoning is a type of propositional logic, which is also known as cause-effect reasoning or bottom-up reasoning.
In inductive reasoning, we use historical data or various premises to generate a generic rule, for which premises support the conclusion.
In inductive reasoning, premises provide probable supports to the conclusion, so the truth of premises does not guarantee the truth of the conclusion.
Premise: All of the pigeons we have seen in the zoo are white.
Conclusion: Therefore, we can expect all the pigeons to be white.
3. Abductive reasoning:
Abductive reasoning is a form of logical reasoning which starts with single or multiple observations then seeks to find the most likely explanation or conclusion for the observation.
Abductive reasoning is an extension of deductive reasoning, but in abductive reasoning, the premises do not guarantee the conclusion.
Implication: Cricket ground is wet if it is raining
Axiom: Cricket ground is wet.
Conclusion: It is raining.
4. Common Sense Reasoning
Common sense reasoning is an informal form of reasoning, which can be gained through experiences.
Common sense reasoning simulates the human ability to make presumptions about events that occur every day.
It relies on good judgment rather than exact logic and operates on heuristic knowledge and heuristic rules.
Such statements are examples of common sense reasoning, which a human mind can easily understand and assume.
5. Monotonic Reasoning:
In monotonic reasoning, once a conclusion is drawn, it will remain the same even if we add further information to the existing information in our knowledge base. In monotonic reasoning, adding knowledge does not decrease the set of propositions that can be derived.
To solve monotonic problems, we can derive the valid conclusion from the available facts only, and it will not be affected by new facts.
Monotonic reasoning is not useful for real-time systems because, in real time, facts change, so we cannot use monotonic reasoning.
Monotonic reasoning is used in conventional reasoning systems, and a logic-based system is monotonic.
Any theorem proving is an example of monotonic reasoning.
Once such a fact has been established, it cannot be changed even if we add other sentences to the knowledge base, such as "The moon revolves around the earth" or "Earth is not round."
Advantages of Monotonic Reasoning:
Disadvantages of Monotonic Reasoning:
6. Non-monotonic Reasoning
In Non-monotonic reasoning, some conclusions may be invalidated if we add some more information to our knowledge base.
Logic will be said as non-monotonic if some conclusions can be invalidated by adding more knowledge into our knowledge base.
Non-monotonic reasoning deals with incomplete and uncertain models.
"Human perceptions for various things in daily life, "is a general example of non-monotonic reasoning.
Example: Let us suppose the knowledge base contains the following knowledge:
- Birds can fly.
- Penguins cannot fly.
- Pitty is a bird.
So from the above sentences, we can conclude that Pitty can fly.
However, if we add one another sentence into knowledge base "Pitty is a penguin", which concludes "Pitty cannot fly", so it invalidates the above conclusion.
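A minimal sketch of this behaviour is shown below, assuming a tiny hand-rolled knowledge base; the predicate names are illustrative only, not a standard API.

```python
def can_fly(facts):
    """Default rule: a bird can fly, unless we also know it is a penguin."""
    if "penguin" in facts:        # the exception overrides the default
        return False
    return "bird" in facts        # default conclusion

facts = {"bird"}                  # knowledge base: Pitty is a bird
print(can_fly(facts))             # True  -> "Pitty can fly"

facts.add("penguin")              # new knowledge arrives: Pitty is a penguin
print(can_fly(facts))             # False -> the earlier conclusion is withdrawn
```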
Advantages of Non-monotonic reasoning: |
Earthquakes occur most often along the edges of oceanic and continental plates. Oceanic plates are the large pieces of the Earth's crust that are located beneath the oceans. Continental plates hold the Earth's large land masses.
Earthquakes occur when the Earth's plates move and rub against or bump into each other. Quakes occur most frequently along the edges of plates, but they are also common along faults, which can be far from the edge of a plate. Faults are cracks where smaller sections of the plate move against each other. Cracks may be defined as normal, reverse or strike-slip faults. One of the most well-known faults, the San Andreas Fault, is a strike-slip fault.
Earthquakes are usually triggered when rock located beneath the ground, on top of fault lines, breaks and suddenly releases a significant amount of energy. The immediate and rapid release of energy caused by earthquakes generates seismic waves, which cause shaking motions that start below the Earth’s surface and spread across large distances.
Most earthquakes occur along the boundaries between the Earth's tectonic plates. The crust of the Earth is divided into plates. When a plate collides with or slides past another plate, this causes earthquakes. For example, as the Pacific plate moves past the North American plate, many earthquakes occur along the coast of California.
Earthquakes happen daily across the United States, though they vary in magnitude. For instance, the United States had 3,836 earthquakes in 2012, but only five of them were above six on the Richter scale, and none of them exceeded seven.
Earthquakes are the result of two of the Earth's crustal plates slipping past each other, otherwise known as plate tectonics. The vibrations caused by this sudden movement reverberate through the surrounding rock structures, and they are felt as tremors. Earthquakes are most common among the geologically active regions at the borders between plates of the Earth's crust, also known as fault zones.
What do we mean by "Critical Thinking" and "Inquiry?"
Take a few minutes and answer these questions for yourself. Then, if you wish, let us know what you think using the submission form.
This issue was discussed in depth at the workshop and participants were polled for their answers to these questions.
Workshop participants' responses:
What are the key elements of critical thinking?
1. Process to create the ability to assemble knowledge of intellectual merit, to appraise, by using reflection with a healthy dose of skepticism, to logically formulate the 'big picture.'
2. [The ability to evaluate] different lines of evidence, synthesize, and apply to new situations.
What do we mean by the term "Inquiry"?
1. Inquiry is a five-step activity or process that includes:
- a. Posing a question
- b. Determining what information is needed
- c. Getting the information
- d. Evaluation of the information
- e. Communication of findings
2. Inquiry-based learning offers opportunities to formulate questions, and to use data and information to construct understanding.
How can we incorporate these steps and abilities into web resources?
We know it when we see it.
You can also view the opinions posted by other visitors to this site. |
Congestive heart failure is a condition in which the heart fails to pump enough oxygen-rich blood to the body, and fluid congests the tissues. In acute cases, the symptoms occur suddenly and the patient may need immediate hospitalization; in chronic cases, the symptoms develop slowly and are treatable. There are different symptoms at different stages of this condition. The causes of congestive heart failure are diseases which weaken the heart muscle, for example cardiomyopathy and coronary heart disease. Other contributors to heart failure are age, lifestyle, and alcoholism.
One of the most important organs of the human body is the heart, because of its function of pumping oxygenated blood to the body. If for some reason this ability to pump blood is hampered, it can lead to symptoms like shortness of breath. It becomes difficult for people with a congestive heart condition to do physical activity, because any kind of physical exertion leads to fatigue and weakness. In acute congestive heart failure, symptoms like severe chest pain appear suddenly, and the individual might need immediate medical attention.
Because the weakened heart cannot keep blood and fluid moving forward, people experience swelling in the extremities. Swollen legs are one of the most common symptoms of chronic congestive heart failure, with women experiencing swollen ankles more than men due to this condition. Retention of fluid in cells and tissues, known as edema, leads to weight gain and other weight-related problems. This weight gain occurs in spite of the loss of appetite experienced by people with a congestive heart condition.
Retention of fluid in the liver leads to problems like nausea and abdominal pain, which at times, can become unbearable. Cough and difficulty in breathing while lying down are also some of the symptoms of a congestive heart condition.
Statistics have attributed over 250,000 deaths every year to this condition. Aging is one of the causes of this condition, and the prognosis in the elderly depends on the nature of the underlying heart disease. The prognosis is variable and depends on the individual's health; almost 50 percent of people living with this condition have a life expectancy of 3-5 years.
Complications such as pulmonary edema indicate an advanced stage of the condition, with survival rates dropping to 15-20 percent among people over 50. In women, statistics have shown that one-third have died within the first 5 years of diagnosis. The outlook for stage 4 disease is bleak, and complications such as irregular heart rhythm can be fatal. These statistics are unreliable, however, and for an individual prognosis it is best to rely on your physician. Almost 2 percent of Americans suffer from this condition, and the end stage ultimately leads to death. In the end stages, the person's physical activity becomes restricted, and the person experiences chest pain and tiredness.
Congestive heart failure prognosis is generally poor, with almost 50 percent of patients dying within 5 years of being diagnosed. Habits like smoking and drinking alcohol also contribute to early demise in people with such heart conditions.
Disclaimer: This HealthHearty article is for informative purposes only, and should not be used as a replacement for expert medical advice. |
THE genome of the sea squirt, a distant cousin of animals with backbones, has been sequenced. It is helping reveal how the genome of vertebrates like us evolved.
Sea squirts, with their leathery, filter-feeding tubes, do not look much like long-lost relatives. But the larval form of sea squirts reveals their true ancestry. These free-swimming tadpoles have a stiffened rod, or notochord, running down their back, which in a developing vertebrate is the forerunner to the backbone. They also have a simple brain and heart.
The creature that gave rise to both sea squirts and vertebrates appeared on the planet during the Cambrian explosion, an orgy of evolutionary experimentation about 550 million years ago. And as modern sea squirts are thought to be similar to this common ancestor, comparing their genome with our own reveals how the vertebrate genome evolved.
A human skull, on average, is about 0.3 inches thick, or roughly the depth of the latest smartphone. Human skin, on the other hand, is about 0.1 inches, or about three grains of salt, deep.
While these dimensions are extremely thin, they still present major hurdles for any kind of imaging with laser light.
Why? Laser light contains photons, or minuscule particles of light. When photons encounter biological tissue, they scatter. Corralling the tiny beacons to obtain meaningful details about the tissue has proven to be one of the most challenging problems laser researchers have faced.
However, one research group at Washington University in St. Louis (WUSTL) decided to eliminate the photon roundup completely and use scattering to their advantage.
The result: An imaging technique that penetrates tissue up to about 2.8 inches. This approach, which combines laser light and ultrasound, is based on the photoacoustic effect, a concept first discovered by Alexander Graham Bell in the 1880s.
In his work, Bell found that a focused light beam produces sound when trained on an object and rapidly interrupted–he used a rotating, slotted wheel to create a flashing effect with sunlight.
Bell's concept is the foundation for photoacoustics, an area of a growing field known as biophotonics, which joins biology with the light-based science of photonics. Biophotonics bridges photonics principles, engineering and technology that are relevant for critical problems in medicine, biology and biotechnology.
“We combine some very old physics with a modern imaging concept,” says WUSTL researcher Lihong Wang, who pioneered the approach.
Wang and his WUSTL colleagues were the first to describe functional photoacoustic tomography (PAT) and 3-D photoacoustic microscopy (PAM). Both techniques follow the same basic principle: When the researchers shine a pulsed laser beam into biological tissue, it spreads out and generates a small, but rapid rise in temperature. This increase produces sound waves that are detected by conventional ultrasound transducers. Image reconstruction software converts the sound waves into high-resolution images.
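In code form, the chain from absorbed light to detected sound can be sketched very compactly. The snippet below is a deliberately simplified model, not the WUSTL group's software: it assumes an initial pressure rise proportional to the Grüneisen parameter, the optical absorption coefficient, and the local fluence, and treats each absorber as a point source whose echo arrives after a simple time-of-flight delay. All names and parameter values are illustrative assumptions.

```python
# A simplified, assumed model of the photoacoustic principle -- not the WUSTL software.
# Initial pressure: p0 = Gamma * mu_a * F (Grueneisen parameter x absorption x fluence);
# each absorber's echo reaches the detector after a time-of-flight delay.
import numpy as np

GRUENEISEN = 0.2          # dimensionless value assumed typical for soft tissue
SPEED_OF_SOUND = 1540.0   # m/s, typical for soft tissue

def initial_pressure(mu_a, fluence):
    """Initial pressure rise (arbitrary units) from absorbed optical energy."""
    return GRUENEISEN * mu_a * fluence

def detector_trace(absorbers, detector_pos, t):
    """Sum idealized spherical-wave arrivals from point absorbers at one detector."""
    trace = np.zeros_like(t)
    for pos, mu_a, fluence in absorbers:
        r = np.linalg.norm(np.asarray(pos) - np.asarray(detector_pos))
        delay = r / SPEED_OF_SOUND                         # time of flight
        amplitude = initial_pressure(mu_a, fluence) / r    # 1/r geometric spreading
        trace += amplitude * np.exp(-((t - delay) / 1e-7) ** 2)  # short acoustic pulse
    return trace

# Two absorbers at different depths produce echoes at different arrival times;
# image reconstruction software turns those arrival times into depth information.
t = np.linspace(0.0, 5e-5, 2000)
absorbers = [((0.0, 0.0, 0.01), 50.0, 10.0),   # ~1 cm deep -> echo near 6.5 microseconds
             ((0.0, 0.0, 0.03), 50.0, 10.0)]   # ~3 cm deep -> echo near 19.5 microseconds
trace = detector_trace(absorbers, (0.0, 0.0, 0.0), t)
print("strongest echo at t =", t[int(np.argmax(trace))], "s")
```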
Following a tortuous path
Wang first began exploring the combination of sound and light as a post-doctoral researcher.
At the time, he modeled photons as they traveled through biological material. This work led to an NSF CAREER grant to study ultrasound encoding of laser light to “trick” information out of the beam.
“The CAREER grant boosted my confidence and allowed me to study the fundamentals of light and sound in biological tissue, which benefited my ensuing career immensely,” he says.
Unlike other optical imaging techniques, photoacoustic imaging detects ultrasonic waves induced by absorbed photons no matter how many times the photons have scattered. Multiple external detectors capture the sound waves regardless of their original locations.
“While the light travels on a highly tortuous path, the ultrasonic wave propagates in a clean and well-defined fashion,” Wang says. “We see optical absorption contrast by listening to the object.”
The approach does not require injecting imaging agents, so researchers can study biological material in its natural environment. Using photoacoustic imaging, researchers can visualize a range of biological material from cells and their component parts to tissue and organs. It detects single red blood cells in blood, as well as fat and protein deposits.
While PAT and PAM are primarily used by researchers, Wang and others are working on multiple clinical applications. In one case, researchers use PAM to study the trajectory of blood cells as they flow through vessels in the brain.
“By seeing individual blood cells, researchers can start to identify what’s happening to the cells as they move through the vessels. Watching how these cells move could act as an early warning system to allow detection of potential blockage sites,” says Richard Conroy, director of the Division of Applied Science and Technology at the National Institute of Biomedical Imaging and Bioengineering.
Minding the gap
Because PAT and PAM images can be correlated with those generated using other methods such as magnetic resonance imaging or positron emission tomography, these techniques can complement existing ones.
“One imaging modality can’t do everything,” says Conroy. “Comparing results from different modalities provides a more detailed understanding of what is happening from the cell level to the whole animal.”
The approach could help bridge the gap between animal and human research, especially in neuroscience.
“Photoacoustic imaging is helping us understand how the mouse brain works. We can then apply this information to better understand how the human brain works,” says Wang, who along with his team is applying both PAT and PAM to study mouse brain function.
Wang notes that one of the challenges currently facing neuroscientists is the lack of available tools to study brain activity such as action potentials, which occur when electrical signals travel along axons, the long fibers that carry signals away from the nerve cell body.
“The holy grail of brain research is to image action potentials,” he says.
With funding from The BRAIN Initiative, Wang and his group are now developing a PAT system to capture images every one-thousandth of a second, fast enough to image action potentials in the brain.
“Photoacoustic imaging fills a gap between light microscopy and ultrasound,” says Conroy. “The game-changing aspect of this [Wang’s] approach is that it has redefined our understanding of how deep we can see with light-based imaging.” |
By Kyle Mizokami, The National Interest
Nearly a hundred years ago the U.S. Navy asked a question: if airplanes can fly through the air, why couldn’t a vessel carrying them fly through the air as well? The result was the Akron-class airships, the only flying aircraft carriers put into service in any country. Although promising, a pair of accidents—prompted by the airship’s limitations—destroyed the flying carrier fleet and ended development of the entire concept.
The Akron-class airships were designed and built in the late 1920s. The ships were designed, like conventional seagoing aircraft carriers, to reconnoiter the seas and search for the enemy main battle fleet. Once the enemy fleet was located, the U.S. Navy’s battleships would close with the enemy and defeat them. This was a primitive and limiting use of the aircraft carrier, which had not yet evolved into the centerpiece of U.S. naval striking power.
The airships of the Akron class, Akron and Macon, were ordered in 1926 before the Great Depression. The two ships were commissioned into U.S. Navy service in 1931 and 1933, respectively. The Akron class was a classic pill-shaped interwar airship design, with a rigid skin made of cloth and aluminum and filled with helium. The air vessel was powered by eight Maybach twelve-cylinder engines developing a total of 6,700 horsepower. At 785 feet, each was longer than a Tennessee-class battleship, had a crew of just sixty, and could cruise at fifty-five knots. The airships were lightly armed, with just eight .30 caliber machine guns.
Unique among airships, the Akron class carried fixed-wing aircraft and could launch and recover them in flight. Each airship carried up to five Curtiss F9C Sparrowhawk fighters, lightweight biplanes with a crew of one and armed with two .30 caliber Browning machine guns. The airships each concealed a hangar within their enormous airframe and launched and recovered the Sparrowhawks through a hook system that lowered them into the airstream, whereupon they would detach and fly off. The system worked in reverse to recover the tiny fighters.
The flying carrier concept had its advantages and disadvantages compared to the “traditional” seagoing carrier. Akron and Macon were twice as fast as surface ships, and could therefore cover more ground. By their very nature those onboard could see much farther over the horizon than surface ships, and their Sparrowhawks extended that range even farther. For just sixty men manning each airship the Navy had a powerful reconnaissance capability to assist the battle fleet in fighting a decisive naval battle.
The airships did have their disadvantages. Akron and Macon were both prone to the whims of weather, and could become difficult to handle in high winds: in February 1932 Akron broke away from its handlers just as a group of visiting congressmen were waiting to board. Three months later in San Diego, two sailors were thrown to their deaths and a third was injured trying to moor the airship to the ground. Bad weather grounded the airships entirely, weather a traditional seagoing warship could handle with relative ease.
On April 3, 1933 USS Akron was on a mission to calibrate its radio equipment off the coast of New Jersey when it ran into trouble. Strong winds caused the Akron to plunge 1,000 feet in a matter of seconds, and the crew made the snap decision to dump the water ballast to regain altitude. The airship ended up rising too quickly and the crew lost control. Akron crashed into the sea, killing seventy-three out of seventy-six personnel on board, including the head of the Navy’s Bureau of Aeronautics and the commander of Naval Air Station Lakehurst and the station’s Rigid Airship Training & Experimental Squadron.
On February 12, 1935 USS Macon was over the Pacific Ocean when a storm caused the upper fin to fail. Macon had suffered damage to the fin months earlier, but the Navy had failed to repair the damage. The collapse of the upper fin took approximately 20 percent of the ship’s helium with it, causing the airship to rapidly rise. The crew decided to release additional helium to make it sink again, but too much helium was lost and the ship descended into the ocean. Because Macon crashed more slowly than her sister ship Akron had, and because life jackets and life preservers were aboard the airship, eighty-one out of eighty-three passengers and crew survived the accident.
The loss of both airships effectively ended the flying aircraft carrier concept. It’s interesting to speculate what might have happened had the concept been further developed and survived until the Second World War. As scouts, airship carriers would not have lasted long had they accomplished their mission and located Japanese ships and bases. Oscar and Zero fighters of the Imperial Japanese Army and Navy would have made short work of the delicate airships and their lightweight fighters. On the other hand, airships could have adapted to become formidable antisubmarine warfare platforms for convoy escort duty in the Atlantic Ocean, standing guard over unarmed merchantmen and fending off German u-boats with a combination of fighters and depth charges.
Regardless of speculation, World War II was won without flying aircraft carriers, proving they weren’t a war-winning asset. The concept has lain dormant for decades, but recent Pentagon research into turning the C-130 Hercules transport into a flying aircraft carrier for pilotless drones means the concept is still alive and well. The flying aircraft carrier could indeed stage a comeback, though with considerably fewer pilots involved.
Kyle Mizokami is a defense and national security writer based in San Francisco who has appeared in the Diplomat, Foreign Policy, War is Boring and the Daily Beast. In 2009 he cofounded the defense and security blog Japan Security Watch. You can follow him on Twitter: @KyleMizokami.
Toddler color games are a great way to teach your child the colors. This game is wonderful for toddlers and preschoolers because it involves food coloring and water. What’s not to love?
To prepare for this color game, you will need to make several colored ice cubes. Simply fill a regular ice cube tray with water, then add 3-5 drops of food coloring into each cube. You only need the primary colors (red, yellow, and blue) for this game. I used about 4 ice cubes of each color. After I added the drops of food coloring, I mixed the water around a little bit with a toothpick. I don’t know if this was really necessary, but that is what I did. Once the food coloring is added and mixed in, let the ice cubes freeze in the freezer. I prepared the ice cubes the day before just to be sure that they would be ready when I needed them.
To play the game, have several bowls (white or clear if possible) filled halfway with water. You will want to play this game in a place that you can fill and empty the bowls quickly. We played it at our kitchen sink. A bathroom sink or the bathtub will also work very well. Take an ice cube out of the tray and ask your kids to guess what color the ice cube will turn the water. Put the ice cube into a bowl that is filled halfway with water and let your kids play with it until the ice cube is melted. Have them observe whether or not their guess was right.
Repeat this activity with two more ice cubes until you have three bowls filled with each of the three primary colors (red, yellow, and blue). Now, plug up your sink (or bathtub) and fill it slightly with water. Have your child choose two of the primary colors. Have them guess what color the two colors will make. Have your child pour the two primary colors into the sink (or bathtub) and play with the water until the two colors are combined. Have them observe what color the water turned into.
If you think your child is old enough, go ahead and explain the term “secondary color.” If not, just let them play with the water. When they are ready to make another color, empty that water down the sink and rinse the sink well. Now, get some more ice cubes and color the water in some bowls so you again have three bowls with each of the primary colors. Have your child pick two more colors and fill the sink with a little bit of water. Pour the bowls into the sink and see what color they make.
Continue until you have made all three secondary colors (orange, green, and purple). You can use this color mixing sheet for younger kids. Just have them color the circles with crayons as you go. For older kids, I suggest using the black and white color mixing sheet so they can choose what colors to mix.
This is a really fun toddler color game because of the sensory experience. Your children are sure to love it and the clean up is pretty easy because of all the water involved. Don’t forget to check out all of our other toddler color game posts! |
In computing, a printer is a peripheral which makes a persistent, human-readable representation of graphics or text on paper or similar physical media. The two most common printer mechanisms are black-and-white laser printers, used for common documents, and color inkjet printers, which can produce high-quality photographic output.
The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his difference engine. This system used a series of metal rods with characters printed on them and pressed a roll of paper against the rods to print the characters. The first commercial printers generally used mechanisms from electric typewriters and Teletype machines, which operated in a similar fashion. The demand for higher speed led to the development of new systems specifically for computer use. Among the systems widely used through the 1980s were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.
The introduction of the low-cost laser printer in 1984 with the first HP LaserJet, and the addition of PostScript in next year's Apple LaserWriter, set off a revolution in printing known as desktop publishing. Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were being created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as the laser printer in terms of flexibility, but produced somewhat lower quality output (depending on the paper) from much less expensive mechanisms. Inkjet systems rapidly displaced dot matrix and daisy wheel printers from the market. By the 2000s high-quality printers of this sort had fallen under the $100 price point and had become commonplace.
The rapid uptake of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Even the desire for printed output for "offline reading" while on mass transit or aircraft has been displaced by e-book readers and tablet computers. Today, traditional printers are being used more for special purposes, like printing photographs or artwork, and are no longer a must-have peripheral.
Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. These devices are in their earliest stages of development and have not yet become commonplace.
- 1 Types of printers
- 2 Technology
- 2.1 Modern print technology
- 2.2 Obsolete and special-purpose printing technologies
- 2.3 Other printers
- 3 Attributes
- 4 See also
- 5 References
- 6 External links
Types of printers
Personal printers are primarily designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. However, they are generally slow devices ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high; this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners.
Networked or shared printers are "designed for high-volume, high-speed printing." They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm.
A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer which produces a two-dimensional document by a similar process of depositing a layer of ink on paper.
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies don't work with certain types of physical media, such as carbon paper or transparencies.
A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.
Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected. The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.
Modern print technology
The following printing technologies are routinely found in modern printers:
A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor.
Liquid inkjet printers
Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers.
Solid ink printers
Solid ink printers, also known as phase-change printers, are a type of thermal transfer printer. They use solid sticks of CMYK-coloured ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. The printhead sprays the ink on a rotating, oil coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as colour office printers, and are excellent at printing on transparencies and other non-porous media. Solid ink printers can produce excellent results. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. In addition, this type of printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tek sold the printing business to Xerox in 2001.
A dye-sublimation printer (or dye-sub printer) is a printer which employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.
Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is the ZINK technology (Zero INK Technology).
Obsolete and special-purpose printing technologies
The following technologies are either obsolete, or limited to special applications though most were, at one time, in widespread use.
Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403 for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions in the same character position. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisy wheel printers, dot matrix printers, and line printers. Dot matrix printers remain in common use in businesses where multi-part forms are printed, such as car rental services. An overview of impact printing contains a detailed description of many of the technologies used.
Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.
The common teleprinter could easily be interfaced to the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second although a few achieved 15 CPS.
Daisy wheel printers
Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page. The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however the text is generally of poorer quality than impact printers that use letterforms (type).
Dot-matrix printers can be broadly divided into two major classes:
- Ballistic wire printers
- Stored energy printers
Dot matrix printers can either be character-based or line-based (that is, a single horizontal series of pixels across the page), referring to the configuration of the print head.
In the 1970s & 80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7 pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronised longitudinally ribbed platen behind the paper that rotated rapidly with a rib moving vertically seven dot spacings in the time it took to print one pixel column. 24-pin print heads were able to print at a higher quality and started to offer additional type styles and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.
Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long at high resolution mode.
Dot matrix printers are still commonly used in low-cost, low-quality applications like cash registers, or in demanding, very high volume applications like invoice printing. The fact that they use an impact printing method allows them to be used to print multi-part documents using carbonless copy paper, like sales invoices and credit card receipts, whereas other printing methods are unusable with paper of this type. Dot-matrix printers are now (as of 2005) rapidly being superseded even as receipt printers.
Line printers, as the name implies, print an entire line of text at a time. Four principal designs existed.
- Drum printers, where a horizontally mounted rotating drum carries the entire character set of the printer repeated in each printable character position. The IBM 1132 printer is an example of a drum printer. Drum printers were also found in adding machines and other numeric printers (POS), the dimensions were compact as only a dozen characters needed to be supported.
- Chain or train printers, where the character set is arranged multiple times around a linked chain or a set of character slugs in a track traveling horizontally past the print line. The IBM 1403 is perhaps the most popular, and came in both chain and train varieties. The band printer is a later variant where the characters are embossed on a flexible steel band. The LP27 from Digital Equipment Corporation is a band printer.
- Bar printers, where the character set is attached to a solid bar that moves horizontally along the print line, such as the IBM 1443.
- A fourth design, used mainly on very early printers such as the IBM 402, featured independent type bars, one for each printable position. Each bar contained the character set to be printed. The bars moved vertically to position the character to be printed in front of the print hammer.
In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper.
- Comb printers, also called line matrix printers, represent the fifth major design. These printers were a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers printed a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row could be printed, continuing the example, in just eight cycles. The paper then advanced and the next pixel row was printed. Because far less motion was involved than in a conventional dot matrix printer, these printers were very fast compared to dot matrix printers and were competitive in speed with formed-character line printers while also being able to print dot matrix graphics. The Printronix P7000 series of line matrix printers are still manufactured as of 2013.
Line printers were the fastest of all impact printers and were used for bulk printing in large computer centres. A line printer could print at 1100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operated with tight tolerances and required regular preventive maintenance (PM) to produce top quality print. They were virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many computer operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers.
Liquid ink electrostatic printers
Liquid ink electrostatic printers use a chemical coated paper, which is charged by the print head according to the image of the document. The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying. Color reproduction is very accurate, and because there is no heating the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.)
Worldwide, most survey offices used this printer before color inkjet plotters become popular. Liquid ink electrostatic printers were mostly available in 36 to 54 inches (910 to 1,370 mm) width and also 6 color printing. These were also used to print large billboards. It was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.
Pen-based plotters were an alternate printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and special purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology. Some plotters used roll-fed paper, and therefore had minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings.
A number of other sorts of printers are important for historical reasons, or for special purpose uses:
- Digital minilab (photographic paper)
- Electrolytic printers
- Spark printer
- Barcode printer (multiple technologies, including thermal printing, inkjet printing, and laser printing of barcodes)
- Billboard / sign paint spray printers
- Laser etching (product packaging) industrial printers
- Microsphere (special paper)
Printer control languages
Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers.
Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer proprietary PDLs such as ESC/P. The diversity in mobile platforms have led to various standardization efforts around device PDLs such as the Printer Working Group (PWG's) PWG Raster.
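To make the idea of a page description language more concrete, the short sketch below writes a tiny but complete PostScript job to a file; sending that file to a PostScript-capable printer, or rendering it with an interpreter such as Ghostscript, should produce a single page with one line of text. The font name, coordinates, and file name are illustrative assumptions, not requirements of any particular device.

```python
# A minimal sketch of what a page description language (PDL) job looks like.
# The embedded program is ordinary PostScript: select a font, move to a position
# on the page, draw a string, then emit the page.
postscript_job = """%!PS
/Times-Roman findfont 24 scalefont setfont
72 700 moveto
(Hello from a page description language) show
showpage
"""

with open("hello.ps", "w") as f:
    f.write(postscript_job)

print("Wrote hello.ps:", len(postscript_job), "bytes")
```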
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool, and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents, rather than dense pictures which usually print much more slowly, especially colour images. PPM figures usually refer to A4 paper in Europe and letter paper in the United States, resulting in a 5-10% difference.
The data received by a printer may be:
- A string of characters
- A bitmapped image
- A vector image
- A computer program written in a page description language, such as PCL or PostScript
Some printers can process all four types of data; others cannot.
- Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots.
- Pen plotters typically process vector images. Inkjet based plotters can adequately reproduce all four.
- Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all four. This is especially true of printers equipped with support for PCL or PostScript, which includes the vast majority of printers produced today.
Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
Monochrome, colour and photo printers
A monochrome printer can only produce an image consisting of one colour, usually black. A monochrome printer may also be able to produce various tones of that color, such as a grey-scale. A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film. Many can be used on a standalone basis without a computer, using a memory card or USB connector.
The page yield is the number of pages that can be printed from a toner or ink cartridge before the cartridge needs to be refilled or replaced. The actual number of pages yielded by a specific cartridge depends on a number of factors.
Cost per page
In order to fairly compare operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).
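As a rough illustration of that comparison, the arithmetic can be written out in a few lines. The cartridge prices and page yields below are invented example figures for demonstration only and do not describe any particular printer or manufacturer.

```python
# A minimal sketch of the cost-per-page (CPP) comparison described above.
def cost_per_page(cartridge_price, page_yield, paper_cost_per_page=0.0):
    """CPP = consumable price / pages it yields, optionally including paper."""
    return cartridge_price / page_yield + paper_cost_per_page

# Hypothetical inkjet: $25 cartridge rated for 300 pages.
inkjet_cpp = cost_per_page(25.00, 300)
# Hypothetical laser: $80 toner cartridge rated for 2,500 pages.
laser_cpp = cost_per_page(80.00, 2500)

print(f"Inkjet: ${inkjet_cpp:.3f} per page")   # about $0.083 per page
print(f"Laser:  ${laser_cpp:.3f} per page")    # about $0.032 per page
```

Even with a much higher purchase price, the larger cartridge can yield a substantially lower cost per page, which is the trade-off the text describes.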
Often the "razor and blades" business model is applied. That is, a company may sell a printer at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.
Other manufacturers, in reaction to the challenges from using this business model, choose to make more money on printers and less on the ink, promoting the latter through their advertising campaigns. Finally, this generates two clearly different proposals: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.
Printer steganography is a type of steganography – "hiding data within data" – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox brand color laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
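The general idea of encoding identifying data in a dot pattern can be sketched as follows. This is an illustrative toy only and is not the actual encoding used by any vendor (documented real-world schemes, such as the Xerox DocuColor pattern analyzed by the EFF, differ in layout and error handling); the bit widths and grid geometry here are assumptions.

```python
# Illustrative sketch: pack a serial number and timestamp into bits, then map
# each bit to the presence or absence of a tiny dot on a page grid.
from datetime import datetime

def to_bits(value, width):
    """Return `width` bits of an integer, most significant bit first."""
    return [(value >> i) & 1 for i in reversed(range(width))]

def dot_pattern(serial_number: int, when: datetime):
    """Lay out serial + date/time bits on an 8-column grid of dot positions."""
    bits = (to_bits(serial_number, 32) +
            to_bits(when.year, 12) + to_bits(when.month, 4) +
            to_bits(when.day, 5) + to_bits(when.hour, 5) + to_bits(when.minute, 6))
    # One grid cell per bit; a dot is printed only where the bit is 1.
    return [(i % 8, i // 8) for i, bit in enumerate(bits) if bit]

dots = dot_pattern(serial_number=123456789, when=datetime(2015, 6, 1, 14, 30))
print(len(dots), "yellow dots, e.g. at grid positions:", dots[:5])
```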
More than half of all printers sold at U.S. retail in 2010 were wireless-capable, but nearly three-quarters of consumers who have access to those printers weren't taking advantage of the increased access to print from multiple devices according to the new Wireless Printing Study.
- 3D printing
- Cardboard modeling
- List of printer companies
- Print (command)
- Printer driver
- Print screen
- Print server
- Printable version
- Label printer
- Printer friendly
- Printer point
- Printer (publishing)
- Babbage printer finally runs, BBC News, 13 April 2000
- Morley, Deborah (April 2007). Understanding Computers: Today & Tomorrow, Comprehensive 2007 Update Edition. Cengage Learning. p. 164. ISBN 9781305172425.
- Abagnale, Frank (2007). "Protection Against Cheque Fraud" (PDF). abagnale.com. Retrieved 2007-06-27.
- "Technology–How it works | ZINK | Zero Ink". ZINK. Retrieved 2012-11-02.
- Zable, J. L.; Lee, H. C. (November 1997). "An overview of impact printing". IBM Journal of Research and Development 41 (6): 651–668. doi:10.1147/rd.416.0651. ISSN 0018-8646. (subscription required)
- "MPS-801 printer". DenialWIKI. Retrieved 22 February 2015.
- "VIC-1525 Graphics Printer User Manual" (PDF). Commodore Computer. Retrieved 22 February 2015.
- Wolff, John. "The Olivetti Logos 240 Electronic Calculator - Technical Description". John Wolff's Web Museum. Retrieved 22 February 2015.
- IBM Corporation. IBM 1443 PRINTER for 1620/1710 Systems (PDF).
- IBM Corporation (1963). IBM 402, 403 and 419 Accounting Machines Manual of Operation (PDF).
- "Madison's website on Renn Zaphiropoulos". Cms.ironk12.org. Retrieved 2012-11-02.
- "Introduction to the 3M Scotchprint 2000 electrostatic printer". Wide-format-printers.org. Retrieved 2012-11-02.
- "HP Computer Museum".
- Lashinsky, Adam (March 3, 2009), "Mark Hurd's moment", CNNMoney.com
- "The Science Behind Page Counts, Cartridge Yields and The 5% Rule".
- "Measuring Yield: The ISO Standard for Toner Cartridge Yield for Monochrome LaserJet Printers". Hewlett-Packard.
- "Cost per page".
- "ISO Page Yields". quote: "Many original equipment manufacturers of printers and multifunction products (MFPs), including Lexmark, utilize the international industry standards for page yields (ISO/IEC 19752, 19798, and 24711)."
- "How We Test Printers: Cost Per Page Calculation". 2011. Computer Shopper.
- "Color laser vs inkjet printer". Ganson Engineering.
- Vincent Verhaeghe. "Color Laser Printers: Fast and Affordable". 2007.
- Rebecca Scudder. "Unit Cost of Printing: Laser v. Inkjet". 2011.
- Cost per Page versus Printer cost in currently available printers
- Artz, D (May–Jun 2001). "Digital steganography: hiding data within data". IEEE Xplore 5 (3): 75, 80. Retrieved April 11, 2013.
- "List of Printers Which Do or Do Not Display Tracking Dots". Electronic Frontier Foundation. Retrieved 11 March 2011.
- Wireless Printers: A Technology That's Going to Waste?, by The NPD Group: https://www.npd.com/wps/portal/npd/us/news/press-releases/pr_101101a/
- Media related to Printers at Wikimedia Commons |
General Science questions and answers for preparing PSC | UPSC | SSC | Civil Services | Railway Recruitment | Bank | Competitive Examinations.
1. The individual polymer unit of cotton is:
2. The fibre that is obtained by chemical treatment of wood pulp is:
Ans. Rayon or artificial silk
3. Which synthetic fibre used coal, water and air as its raw materials?
4. Parachutes and ropes for rock climbing are made out of the artificial fibre called ..........
5. What is polycot?
Ans. Mixture of polyester and cotton
6. Which polymer form is used widely for making bottles, utensils and films?
Ans. PET (polyethylene Terephthalate)
7. Plastics which get deformed easily on heating and can be bent are known as ..........
8. Once moulded, some plastics cannot be softened by heating. Such plastics are termed as ..........
Ans. Thermosetting plastics
9. Bakelite and Melamine are examples of .......... plastic.
Ans. Thermosetting plastic
10. The thermosetting plastic used for making floor tiles is ..........
11. Which is the plastic used for making nonstick coating on cookware?
12. Uniforms of fireman have been coated with a plastic so as to make them flame resistant. Which is the plastic form?
13. What is the “4R” principle associated with the sustainable use of plastics?
Ans. Reduce, Reuse, Recycle and Remove
14. The property of metals by which they can be beaten into thin sheets is called ..........
15. What is ductility?
Ans. Ability of metals to be drawn into wires
16. Which are the common metals those can be cut into pieces with a knife?
Ans. Sodium and Potassium
17. Why is the metal Sodium stored in kerosene?
Ans. Because it reacts vigorously with oxygen and water
18. Which non-metal, that catches fire if exposed to air, is kept under water?
19. Which element is the main constituent of coal?
20. The slow process of conversion of dead vegetation into coal is called .......... |
Gold During the Classical Period
"The metals obtained by, mining, such as silver, gold, and so on, come from water."
Science in the Classical period was confined mostly to speculation. The pre-Aristotelians explained the origin of the universe and natural phenomena in the light of four basic elements - earth, water, air, and fire. This theory probably originated in the ancient Indus Valley civilization, was taken up by the early Babylonian natural scientists, and passed on to the early Greek philosophers, among whom Empedocles (c. 490-430 B.C.) is usually credited with refining the concepts of the theory. Aristotle embraced the four element theory and added another concept, that of the ether.
During pre-Hellenistic and Hellenistic times, gold and silver were mined extensively throughout the Mediterranean, in Asia Minor, and elsewhere in western and northern Europe and in Africa and Asia. A large literature exists on ancient mining in these areas, all admirably summarized and synthesized by Rickard (1932), Davies (1935), and Healy (1978).
Initially it would seem from the evidence presented by many of the deposits and dumps in Egypt, the Aegean, Turkey (Anatolia), Iran, India, and contiguous regions that most of the gold came from eluvial and alluvial placers; only later as these placers approached exhaustion were the oxidized zones of gold quartz and sulphide deposits exploited, first by open-cut methods and then by underground workings. Fire-setting seems to have been widely employed by the Greeks and the Romans in underground mining. Pliny, in his Historia naturalis, claimed that vinegar was better than water in quenching and disintegrating the hot rock. Because of water and ventilation problems, the depth of exploitation of bedrock deposits was limited, probably to 200 m or less in most auriferous regions. Slaves and convicts did all mining. Prospecting knowledge was crude and was based principally on visual signs of the presence of gold in quartz float, in exposures in quartz outcrops, in gossans, nearby soils and weathered residuum, and in the sediments of streams and rivers. Much gophering was employed in prospecting, evidenced by an abundance of pits and shallow shafts in most of the ancient mining areas of the Aegean, Turkey, Egypt, India (Kolar), and elsewhere. The gold pan appears to have been employed for testing gossans, soils, and alluvium and for winning the metal since the earliest times. Likewise, the rocker and sluice with riffles, animal fleeces, or mats for trapping the gold seem to have been in use almost from the beginning of placer gold mining. The animal (sheep or goat) fleeces were dried and the gold was then shaken out of them into pans for further concentration. When ulex (a prickly plant of the furze or gorse family) was used, it was burned and the gold washed out of the ashes.
Some placer operations employed the boulder-riffle method of concentrating the gold; in this method boulders were arranged in such a way that as the water rushed along carrying the gold, it swirled around the boulders, depositing the nuggets and dust in the slack-water zones around and between the boulders. After an appropriate interval, the sluicing water was turned off or diverted to allow the cleanup. Stretches of streams with natural riffles, such as slate and schist beds and folia oriented at right angles to the stream direction, also appear to have been employed in some placer areas. In large placer operations, especially where high-level (terrace) gravels were exploited by the Romans, as along the Sil in Northwestern Spain and the Vrbas in Yugoslavia, hushing (booming) was employed. This method frequently required aqueducts and canals several kilometers in length for transport of the water to the crude monitors. Where bedrock quartz deposits were exploited, the separation of the native gold from the dross, mainly quartz, was accomplished by hand picking, followed by crushing and grinding in stone mortars, querns, and crude "hour glass" and other types of mills, and finally by washing on sloping boards or flat rocks. These "washeries" can be seen in pictorial form on walls and tablets in Egyptian tombs and in a dilapidated condition in a number of ancient mining regions in Greece, Egypt, and the Middle East.
GOLD DEPOSITS AND THEORIES OF THEIR ORIGIN
In early Classical times ancient gold placers and mines were known on many of the Aegean Islands, particularly Siphnos, in mainland Greece, along the southern shores of Pontus Euxinus (Black Sea), and near the western coast of Asia Minor; most were small and soon worked out by the fifth century B.C. Prospecting farther away, particularly in the region of Mt. Tmolus (the modern Boz Dag), revealed the rich electrum placers of the rivers Pactolus and Hermus (the modern Gediz). Legend has it that the Pactolus is the river in whose waters Midas, the mythical founder of the Phrygian kingdom, on the advice of Bacchus bathed to rid himself of the fatal faculty of turning everything he touched into gold. From the Pactolus came large stores of placer gold won mainly by the Lydian kings, of whom Ardys (c. 650 B.C.) minted at Sardis the earliest gold coins in existence. In the course of time the Lydian monarchs became the richest princes of their age, especially Croesus, who followed Alyattes (605-560 B.C.) on the throne and whose name we associate today with enormous wealth. Another source of gold described in early Classical and later Classical times was located in Thrace and Macedonia, where the mines at Dysoron and on Mt. Pangaeus provided great wealth for Thracians, Athenians, and Philip II of Macedonia, founder of Philippi near Mt. Pangaeus. It appears that much gold also reached the Aegean area in early Classical times from Egypt, Armenia (Turkey), Dacia (Romania, Hungary) and from as far as Spain, Siberia, and India.
During Hellenistic times, many of the gold mines of Macedonia and Thrace were exploited intensively as were also some of those in various auriferous regions of Asia Minor. The Ptolemies, the dynasty of Macedonian Kings that ruled Egypt (323-30 B.C.), prospected extensively in Egypt, Nubia (Sudan), and probably also in Arabia, winning considerable gold from these ancient mining regions. Many of the early Greek authors mention gold and silver, but without any particular geological reference. The Iliad and Odyssey of Homer (c. 1000 B.C.), the traditional epic poet of Greece, refer to gold and silver in numerous contexts and locations, the latter probably authentic in many cases. We have also the Greek myth of Jason and the Golden Fleece. According to this myth, the Golden Fleece was taken from the ram on which Phrixus and Helle escaped from being sacrificed. It was hung up in the grove of Ares in Colchis and recovered from King Aeetes by the Argonautic expedition under Jason, with the help of the sorceress Medea, the king's daughter. In actual fact the Argonauts were early prospectors who sought the source of the ancient placers on the Black Sea. At that time (1200 B.C.) the workers of auriferous placers recovered the gold by trapping the metallic particles on sheep's fleeces placed in crude sluices. The fleeces were then hung up to dry in nearby trees and were later shaken to collect the gold.
Rafal Swiecki, geological engineer
This document is in the public domain. |
Strategies for Supporting Second-Language Students (L2S): Best (and Worst) Practices. Elizabeth Visedo, Ph.D.
How/When to Speak
Don't:
- speak too fast
- hurry students' responses
- use unnatural speech, such as baby talk, shouting or excessively slow talking
- use too many idioms or colloquialisms
Do:
- speak at normal speed and clearly
- moderate your speed if you are a fast talker
- repeat yourself or rephrase what you said when necessary
- after asking a question, wait for a few seconds before calling on someone to respond
- provide students with enough time to formulate their responses, whether in speaking or in writing
- help to shape what the L2S wants to say
Wait Time:
- provides a needed period to formulate a response
- gives students an opportunity to think
Teaching English
Don't:
- treat English as a separate subject for L2Ss to learn only in ESL lessons
- put L2Ss on the spot by asking them to participate before they are ready
- feed your L2Ss on a diet of worksheets
Do:
- remember the English to which L2Ss are exposed in your classroom is of crucial importance to their academic language development
- correct content from the start; correct grammar or pronunciation later
- allow for the "silent period" that some students go through
- provide opportunities for L2Ss to use language and concepts in meaningful situations
- include a variety of ways of participating in your instruction, e.g. in cooperative groups
Non-Linguistic Cues
Don't:
- stand in front of the class and lecture
- rely on a textbook as your only visual aid
- assume L2Ss understand what you are saying or that they are already familiar with school customs and procedures
Do:
- use visuals and sketches
- use gestures and intonation
- use other non-verbal cues to make both language and content more accessible
Visual/auditory representations: using them to teach concepts can be hugely helpful to L2Ss.
Learning Environment
Don't:
- separate or isolate students away from the rest of the class, physically or instructionally
- limit your L2Ss' access to authentic, "advanced" materials
Do:
- make sure L2Ss are seated where they can see and hear well
- provide them with maximum access to the instructional and linguistic input
- encourage them to collaborate with native-English peers
- involve them in some manner in all classroom activities
- fill your classroom with print and with interesting things to talk, read, and write about
Creating a language-rich environment will allow your L2Ss to learn even when you aren't directly teaching them.
Directions
Don’t:
- just tell students what to do and expect them to do it
- act surprised if students are lost when you haven't given step-by-step directions
Do:
- give oral and written instructions
- model what they are expected to do or produce
- explain and demonstrate learning actions
- share your thinking processes aloud
- show good student work samples
Modeling promotes learning and motivation and increases student self-confidence.
Checks
Don’t:
- simply ask, "Are there any questions?"
- assume students are understanding because they are smiling and nodding their heads
- wait until mid-term to assess their literacy skills
Do:
- assess their literacy skills and course readiness during the drop/add period
- recommend LRC when necessary
- regularly check students’ understanding after a lesson or explanation
- say, "Please put thumbs up, thumbs down, or sideways to let me know if this is clear, and it's perfectly fine if you don't understand or are unsure -- I just need to know."
- or have students quickly answer on a Post-It note that they place on their desks, or have exit slips
Checking for understanding helps students monitor their own understanding, and helps ensure they are learning, thinking, understanding, comprehending, and processing at high levels.
Home Language (L1)
Don’t:
- "ban" L2Ss’ L1s from the classroom
Do:
- encourage students to build their L1 literacy skills
Research supports that learning to read in the L1 promotes reading achievement in the L2 as "transfer" occurs, including phonological awareness, comprehension skills, and background knowledge. Forbidding students from using their L1 discourages them from taking risks and making mistakes. It can also harm teacher-student relationships, especially if teachers act more like language "police" than language "coaches".
Respect
Don’t:
- laugh at their mistakes or make jokes at their expense
- allow other students to belittle L2Ss
- confuse low English proficiency with low intelligence
- confuse lack of knowledge of the classroom culture with uncooperativeness
- assume their previous school experience followed American standards
Do:
- encourage all students to work with and help L2Ss
- create success opportunities for L2Ss and praise their achievements
- treat L2Ss as full members of the classroom community
- help them to feel comfortable
- set your expectations high and clear
- ask for more participation and work as they become able to accomplish it
- learn as much about L2Ss and their backgrounds as you can
- use that knowledge to enrich the lives and learning of all the students
References:
Ferlazzo, L. (2012). Do's & don'ts for teaching English-language learners.
Shoebottom, P. (1996/2013). Classroom guidelines. |
Natural Language Processing (NLP) is the Artificial Intelligence (AI) method of making computer systems analyze, understand, and generate natural language, i.e. language spoken by humans, such as English or Spanish. Its best-known application is in the Turing Test - Alan Turing’s criterion for AI, which holds that a machine can be considered intelligent only if a human being cannot distinguish its replies from a person's when questions are put to both - where NLP has so far been used with limited success.
NLP works in two ways. Firstly, by understanding natural language to perform various tasks. It attempts to translate the meaning and structure into a machine representation format, in order to tell the computer what to do. Siri will be one example of this that will be familiar to most iPhone users, as it follows instructions spoken to it and tries to complete them, albeit within limits.
Another strand of NLP is Natural Language Generation (NLG). NLG produces language itself: it takes nonlinguistic input data, processes it, and generates natural language for the user. One example is textual weather forecasts generated from the set of numbers that would be used by meteorologists - temperature, precipitation, wind speed, and so forth.
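To make the weather-forecast example concrete, here is a minimal rule-based NLG sketch in Python; the thresholds, phrases, and function name are assumptions invented for illustration, not any real forecasting system.

def forecast_sentence(temp_c, precip_mm, wind_kmh):
    """Turn raw weather numbers into a short natural-language forecast."""
    # Temperature phrase, using simple (assumed) thresholds.
    if temp_c >= 28:
        temp_phrase = "hot"
    elif temp_c >= 18:
        temp_phrase = "mild"
    else:
        temp_phrase = "cool"

    # Precipitation phrase.
    if precip_mm == 0:
        rain_phrase = "no rain"
    elif precip_mm < 5:
        rain_phrase = "light showers"
    else:
        rain_phrase = "heavy rain"

    # Wind phrase.
    wind_phrase = "breezy" if wind_kmh >= 20 else "calm"

    return f"Expect a {temp_phrase}, {wind_phrase} day with {rain_phrase}."

print(forecast_sentence(temp_c=24, precip_mm=2, wind_kmh=10))
# Expect a mild, calm day with light showers.

Real NLG systems plan document structure and vary their wording, but the basic idea of mapping structured data to text is the same.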
One of the main tools NLP uses is machine learning, which it applies to reams of text to draw conclusions about language. This has evolved with computing, although the idea behind NLP predates computers and goes back as far as the 17th century, when a variety of more manual methods were used. The internet has been a clear boon to NLP, as it has made available so much more text to work with.
It is important for NLP to establish the context of each word. Words are ambiguous, and semantics and syntax are often difficult enough for humans to get to grips with, let alone machines. Noam Chomsky’s famous example of ‘colorless green ideas sleep furiously’ illustrates the problem of semantics: he compared that sentence with ‘furiously sleep ideas green colorless’ and noted that the first, though nonsensical, is grammatical, while the second is not. Context helps resolve ambiguity. For example, a good NLP system will learn that the word ‘hot’, when used in close proximity to the word ‘curry’, will usually mean ‘spicy’, as opposed to ‘hot’ in the more literal sense of burning - although there is still a chance it really does mean burning. One of the most common ways people thwart machines during the Turing test is to ask the same question twice in different ways.
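To make the ‘hot curry’ idea concrete, here is a toy word-sense sketch in Python; the hand-written clue lists are assumptions for illustration only - real systems learn these associations statistically from large bodies of text.

# Toy word-sense disambiguation: pick the sense of "hot" whose
# clue words overlap most with the rest of the sentence.
SENSE_CLUES = {
    "spicy": {"curry", "chili", "pepper", "sauce", "dish"},
    "burning": {"stove", "kettle", "sun", "touch", "weather"},
}

def guess_sense_of_hot(sentence):
    words = set(sentence.lower().replace(",", " ").split())
    scores = {sense: len(words & clues) for sense, clues in SENSE_CLUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_sense_of_hot("The curry was far too hot for me"))   # spicy
print(guess_sense_of_hot("Careful, the kettle is still hot"))   # burning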
As a form of AI, the applications of NLP are many, and businesses can easily put it to use. It can help greatly in the processing of large amounts of text. For example, one company applied NLP to Twitter in order to predict where riots were likely to occur next. It can also be used to classify text into categories, as well as to index, search, and translate languages. As machine learning progresses, and the technology becomes more finely attuned to the seeming contrariness of many languages, NLP is set to complete these tasks to an even more impressive degree and be an even greater resource to business. A small classification sketch follows.
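As an illustration of the text-classification use mentioned above, a minimal sketch with the scikit-learn library might look like the following; the tiny training set, labels, and test sentence are invented purely for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training examples: short texts labelled by topic.
texts = [
    "The match ended in a dramatic penalty shootout",
    "The striker scored twice in the second half",
    "Shares fell sharply after the earnings report",
    "The central bank raised interest rates again",
]
labels = ["sport", "sport", "finance", "finance"]

# TF-IDF features feeding a naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["Investors reacted to the rate decision"]))  # likely 'finance'

Real deployments would train on far larger labelled collections of text, but the overall workflow is the same. |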
Beyond Blocks: Text-Based Coding in Middle School
By Weston Hagan
“If we want to create real coders, they need to use real code.” That was the ‘big idea’ I recently shared with an audience of middle school and high school educators at a conference. If you’re a parent, informal educator, or camp leader, consider the following four points when deciding how to introduce your students to coding.
Block Code is an Entry Point, not a Pathway
Scratch, Snap!, Blockly, and other variants of block coding have become popular in both schools and stores. Block coding uses a series of colorful shapes that snap together on-screen. Each block contains some bit of code and allows a program to run. These are fine starting places, but they can’t offer a full introduction to coding.
Why not? Because some coding skills like commenting, code styling, and troubleshooting do not transfer directly from block to typed coding. Block coding tools are appropriate for students in K-5, but they’re only useful for the first one to two years of coding before moving on to text-based code.
Middle School is a Fork in the Road
Looking at computer science education broadly, there are ample resources for the very first days of coding (as mentioned above). There are great tools for more advanced coders, too, including AP Computer Science and online coding courses.
But middle schoolers are underserved, and it’s an especially critical time period to keep them engaged with computer coding. If students are bored by using the same simplistic tools they used in elementary school, they may stop identifying with computer science as a topic of interest, a trend that is hard to reverse once it has begun.
Middle school is the opportune time to expose students to real, typed coding and an open-ended world of challenges and creations that is limited only by their willingness to learn. They have the ability to learn typed code with its syntax, logic, and complexity; it’s up to educators and parents to give them the opportunity.
Typed Code is Transferrable to Any Coding Language
Troubleshooting, logical flow, statements and variables don’t change all that much between languages. The key is to get exposed to ideas in a real coding environment so that students build familiarity with real code in action. Learning code with Let’s Start Coding does not prevent coders from going on to code websites or video games; it builds transferrable skills that will help them do so! A brief illustration follows.
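As a small illustration of how these building blocks carry across languages, the snippet below (written in Python purely as an example) uses the same variables, loops, conditionals, and functions a student would meet in C, JavaScript, or the Arduino-style code many hardware kits use.

# The constructs below exist in essentially every typed language:
# a function, variables, a loop, and a conditional.

def count_hot_days(readings):
    """Count how many temperature readings are above a threshold."""
    threshold = 25          # a variable
    hot_days = 0
    for value in readings:  # a loop
        if value > threshold:   # a conditional
            hot_days += 1
    return hot_days

print(count_hot_days([21, 27, 30, 19, 26]))  # prints 3

Once a student can read this in one language, the same logic written in another language mostly differs in punctuation.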
Support is the Key to Success
Successful code learning tools are more than just a coding language; they include project ideas, instant feedback from the code, and fun tasks appropriate to the learner’s skill level. Even programs for adult learners perform better when there is a concrete output, like a working website, from all of the effort put forth.
In other words, the best tools to learn code are frameworks that support both the learner and instructor on their coding journey. A framework can include anything from a progression through projects, plain-English explanations of what’s happening, or videos that describe basic code concepts. The stronger the support for the teacher and learner, the more able students will be to continue learning.
What We’re Doing About It
At Let’s Start Coding, the ideas above are central to the kits we’ve created. We believe in the importance of text-based code without focusing on the peculiarities of any single language. With our step-by-step coding lessons, learners are exposed to the fundamentals of coding that apply to all typed coding languages.
Relevant, challenging tasks are key to keeping students interested in what they’re learning. That’s why students build handheld gadgets that are wearable, shareable, and fun with the Let’s Start Coding kit. When the outcome of a project is something concrete, students will stay engaged and experiment more.
We also know it takes more than a box of components to create a valuable learning experience. You’ll find dozens of pre-written programs on our website and free resources for teachers that include lesson plans and ties to common academic standards. |
The term "forage" is commonly used to refer to above-ground plant growth other than separated grain that is used as feed for domestic animals. Forage crops, or simply forages, are plant species grown for this purpose. Forage is most commonly consumed as pasture by grazing animals, but it also can be harvested and stored as hay or silage (dried or fermented plant material, respectively) and fed to livestock during times when pasture forage is not available. Ideally, forage is harvested or grazed when it consists mainly of green leaves and mostly immature stems rather than older, highly fibrous, and less digestible dead leaves and stemmy material. With its mild climate and abundant rural land, Alabama is well suited to the production of forage crops.
Most of the nutrition for domestic grazing animals is provided by forage. Researchers have estimated that in the United States, forages provide around 63 percent of the feed consumed by dairy cattle and 84 percent for beef cattle, on average. Close to one-third of the food consumed by the average American each year originates from cattle, sheep, and goats. All of these animals are ruminants, meaning that they have a four-stomach digestive system that is adapted for efficient extraction of nutrients from forage. As the primary source of nutrition for these animals, forage is therefore a major, although indirect, contributor to the human diet. Forage crops also contribute to the diets of other domestic nonruminant animals, including swine, horses, and sometimes even poultry.
Some forages are perennial, meaning that the plants have the ability to live for more than one year. Others are annuals that germinate, grow, and die within a 12-month period. Forages are also divided into warm-season crops, which grow most actively during the warmer months of the year (mainly summer), and cool-season crops, which grow during the cooler months of the year (mainly spring and autumn). Forages are further divided into grasses and forbs. Plants in these two categories differ in many ways, with many grasses being particularly well-suited for forage because they are hardy and productive. Most forbs used for forage are in the legume family, including clovers, alfalfa, lespedeza, and vetch, all of which are especially nutritious to grazing animals.
Scope and Geographic Distribution
In Alabama, more than four million acres are devoted to the production of forage crops, making them second only to forestry in commercial land use in the state and surpassing all other agronomic and horticultural crops combined. More than 40 different nonnative forage crops are commonly planted in Alabama.
Before European settlement of the Southeast, most of the land in Alabama was forested and there was little opportunity for forage species to evolve. A few warm-season perennial native species, such as big bluestem, little bluestem, eastern gamagrass, and switchgrass, have forage potential and occur naturally or can be grown in many areas within Alabama. Although most livestock producers find that these grasses require more effort than is justified by the benefits of growing them strictly for forage, such native species are often planted for wildlife purposes.
The economic value of forage production is largely expressed through the sale of the animals that consume pasture, hay, and silage. The exceptions are local growers who produce hay for sale to others, mainly livestock producers. On most livestock farms in Alabama, pasture or hay land is dominated by one or more perennial grasses.
Tall fescue, a cool-season perennial grass, is the dominant forage species in the northern half of Alabama, particularly in the Limestone Valleys and Uplands, Sand Mountain, the Appalachian Plateau, and the Piedmont. The warm-season perennial bermudagrass and bahiagrass are dominant species in the southern half of the state, especially in the Coastal Plain. Dallisgrass and Johnsongrass, also both warm season perennial grasses, and tall fescue, are significant forages in the Blackland Prairie. Orchardgrass, a cool-season perennial grass, is grown mainly in the Limestone Valleys and Uplands and the northern portion of the Appalachian Plateau and in the Sand Mountain region in northern Alabama.
White clover, a cool-season perennial legume, is commonly grown as a companion species to tall fescue, dallisgrass, and orchardgrass.
Sericea lespedeza is a warm-season perennial legume grown mostly in the northern half of the state. Various combinations of cool-season annual grasses, such as ryegrass and cereal grains (especially rye, wheat, or oats), and various annual legumes, including arrowleaf clover and crimson clover, are often grown either on a prepared seedbed or seeded into the dormant sods of warm-season perennial grasses. Cool-season annuals are grown throughout Alabama but are more productive and more widely used in the southern portions of the state.
Forage crops provide both nutrition and cover for many species of wildlife and game animals, including white-tailed deer, rabbits, and turkey. In fact, many thousands of acres of forage crops are planted for wildlife in Alabama each year. Many hunters do so to increase the populations and nutritional status of wild game animals and birds or to attract them for hunting. Forage crops, particularly perennial sod-forming grasses, are among the most effective erosion-preventing ground covers, and they therefore help to protect water quality. They are widely planted to stabilize road banks, reclaim mined areas, and otherwise protect and reclaim soil that has been disturbed or is prone to erosion.
Some Alabama farmers obtain a significant portion of their income by producing seeds of various forage crops, including sericea lespedeza, browntop millet, and several annual clovers. Forage crops are also an important source of nectar and pollen for the bee industry, and bees in turn are effective pollinators that benefit the production of many other crops, particularly fruits and vegetables. In addition, forage crops are often planted in rotations with row crops such as cotton and peanuts. Crop rotation interrupts pest cycles, increases soil organic matter, and improves soil structure, thus raising the yield of row crops. Finally, forage crops contribute much beauty to rural landscapes.
Alabama's climate is well suited to growing forage crops. However, a great deal of the land currently devoted to these crops could be much more productive if it were managed more intensively, especially through more precise fertilization, better grazing management, and weed control. Furthermore, many thousands of acres of land in the state are currently underused but would make excellent pasture or hay land with proper management. Although the potential of forage crops has not been fully realized, they already make many contributions. In addition, the potential for biomass energy production from switchgrass and other forage species is currently being explored.
|
Reading First-Ohio Resources for Families
As a parent, you know that reading with your child encourages his or her development of important literacy skills. Research has shown that kindergarten through grade 3 is a critical time when children develop a foundation in literacy. In addition to reading, playing games that build reading skills is a fun way to support your child's development.
The Reading-First Ohio Center offers online reading resources for families of children in kindergarten through grade 3. These include:
Tips and ideas for reading with your child;
Recommended reading lists for families and children;
Information on how you can help your child build language and literacy skills;
Fun and interactive literacy-based games you can play with your child. Each game is accompanied by detailed instructions, materials and an on-line video showing a family playing the game.
Reading First-Ohio is a federal grant program that helps low-performing, high-poverty school districts implement research-based literacy interventions in kindergarten through grade 3. The Reading First-Ohio Center is a consortium of three universities (Cleveland State University, John Carroll University and the University of Akron) that the Ohio Department of Education contracted with to provide professional development, technical assistance to Reading First districts, and online reading resources for families.
|
Learning how to tell time is a skill that students will use their whole life -- just like reading, counting and knowing their colors. Students begin learning this skill in kindergarten, but it can be difficult to grasp at first since the numbers on the clock have to be converted to minutes, which is a relatively complex task for this age. Activities for teaching kindergartners how to tell time should find ways to simplify the information.
Make a Giant Clock
Get kindergarteners up and moving by asking them to create a giant paper clock together. Using a large piece of craft paper that replicates a round clock face, invite students to use markers or paint to fill in the numbers of the clock. Mark each hour to make the marking easier for this grade. Include tags that show the corresponding minutes next to each number. Move the minute hand and ask students to shout out the numbers together. Students can also stand on the clock face and move their arms and legs to mark the hour and minutes.
Create an Events Clock
Help children associate the numbers on the face of the clock with times that they know well by creating an events clock. Assign students to make their own clocks on paper plates or construction paper that correspond to special times in the day, such as snack time, recess or story time. Label the clocks and mount them on the wall. Reference them throughout the day, asking students to point out the actual time and the event it signals. "What time is it? 10 o'clock -- that's snack time!"
Do a Countdown
Help students understand seconds with an engaging countdown in class. Tie the activity with a special holiday, such as New Year's Eve, or use it for a reveal, such as unveiling the votes for what you will serve at your class party. Create a clock face using a paper plate and construction paper arms mounted on brads. Move the second hand as you count down the time. Ask students to count along together, and build as much excitement around the activity as possible. Use the countdown frequently, such as when heading out to recess or to lunch.
Morning to Night
Part of telling the time is understanding the difference between a.m. and p.m. Though a.m. and p.m. won't affect how the hands fall on the clock face, it will affect how students tell others the time or how they read it in reports. Help students understand the difference between morning and afternoon by asking them about common activities they do each day. Use the clock face to draw that time on the board. Write "a.m." and "p.m." next to the activities. Help students understand that some things that happen when the sun is down actually happen in the early a.m., or the morning, and things that happen in the afternoon are in the p.m.
|
The Blue Iguana, Cyclura lewisi, is a critically endangered species of lizard that is found on the island of Grand Cayman. It is estimated that only 25 of these animals still survive. It is expected that the population will be extinct within the first decade of the twenty-first century. The demise of this animal is due largely to pets introduced by humans and, indirectly, to the destruction of its natural habitat.
Blue Iguanas occupy rock hole and tree cavity retreats, and as adults are primarily terrestrial. Younger iguanas tend to be more arboreal. The young are preyed upon by snakes, but the adult has no natural predators. The Blue Iguana sexually matures at three years old. While longevity in the wild is unknown, in captivity one individual lived to 67 years of age.
The Blue Iguana is primarily herbivorous, consuming leaves, flowers and fruits from over 100 different plant species. They very rarely eat insect larvae, crabs, slugs, dead birds and fungi. Mating occurs in May, and eggs are usually laid in June or July, in nests excavated in pockets of earth exposed to the sun. Individuals are aggressively territorial from the age of about 3 months. |
standardization
standardization, in industry, the development and application of standards that permit large production runs of component parts that can be readily fitted to other parts without adjustment. Standardization allows for clear communication between industry and its suppliers, relatively low cost, and manufacture on the basis of interchangeable parts.
A standard is that which has been selected as a model to which objects or actions may be compared. Standards for industry may be devices and instruments used to regulate colour, size, weight, and other product attributes, or they may be physical models. Standards may also be written mathematical or symbolical descriptions, drawings, or formulas setting forth the important features of objects to be produced or actions to be performed. Standards that are applied in an industrial setting include engineering standards, such as properties of materials, fits and tolerances, terminology, and drafting practices; and product standards intended to describe attributes and ingredients of manufactured items and embodied in drawings, formulas, materials lists, descriptions, or models.
Certain fundamental standards among firms are required to prevent conflict and duplication of effort. The standards activities of governmental departments, trade associations, and technical associations serve in part to meet national standards needs, but one specialized standardizing organization is needed to coordinate the diverse standardization activities of many different types of organizations and promote general acceptance of basic standards. In the United States the American National Standards Institute (ANSI) performs this function. It does not initiate or write standards but provides the means by which national engineering, safety, and industrial standards can be coordinated. All interested groups may participate in the decision-making process, and compliance with the national standard is voluntary. The international body that serves this function is the International Organization for Standardization (ISO). Developing an international standard presents the greater challenge because of the breadth of representation and the diversity of needs and viewpoints that must be reconciled.
|
The nasopharynx is the upper part of the pharynx (throat) behind the nose. Nasopharyngeal cancer most commonly starts in the squamous cells that line the nasopharynx.
The 3 parts of the pharynx are the nasopharynx, oropharynx and hypopharynx.
Key points about nasopharyngeal cancer:
1. Nasopharyngeal cancer is a type of Head and Neck cancer.
2. Risk factors may include the following:
- Being of Chinese or Asian ancestry.
- More common in men, 30-50 years old.
- Exposure to the Epstein-Barr virus, which has been associated with various cancers, including nasopharyngeal cancer and some lymphomas.
- Eating salt-cured foods; chemicals released in the steam when cooking salt-cured foods may also enter the nasal cavity, increasing the risk of nasopharyngeal cancer. |
Air Quality and Turfgrass
“Just one acre of grass can absorb hundreds of pounds of fossil-fuel created sulfur dioxide in a single year.”
In recent years progress seems to have been made in improving our air quality. But the levels of nitrogen oxide, sulfur dioxide and particulate matter in our atmosphere, primarily from the burning of carbon based fuels, are still a major concern.
Plants absorb these gaseous pollutants into their leaves and break them down, thereby cleaning the air. An acre of flourishing growth will absorb hundreds of pounds of sulfur dioxide during a year.
Grass also takes in carbon dioxide, hydrogen fluoride and peroxyacetyl nitrate, the worst group of atmospheric pollutants.
Grasses in the United States also trap an estimated 12 million tons of dust and dirt released annually into the atmosphere.
This dust, dirt and even smoke are trapped in part by the grass leaves, where they are washed into the soil system by water condensed on the leaves and by rainfall. Grassed areas significantly lower the levels of atmospheric dust and pollutants.
|
Over half of the English language is derived from Latin.
unicorn - cornu, horn
humble - humus, earth
gregarious - grex, flock
pantry - panis, bread
flamingo - flamma, flame
These and thousands of other words we use every day keep this "dead" language - a language of kings and poets, of scrolls and secrets - alive. And this means that when we study Latin, we're not just learning about Rome - we're learning about ourselves. Rediscover this time-honored language, which led classical education innovator Dorothy Sayers to declare that "Latin should be begun as early as possible . . . when the chanting of 'amo, amas, amat' is as ritually agreeable to the feelings as the chanting of 'eeny, meeny, miney, moe.'"
In Latin Primer 3, the language basics explored in Primers 1 and 2 continue, with an increasing emphasis on translation. Revised and expanded, this text introduces students (grades 5 and up) to Latin's final noun declensions and verb conjugations, as well as to perfect tense, indirect objects, simple prepositions, and more, opening up broad frontiers for their understanding and enjoyment of this early language. This updated Teacher's Edition includes new teacher's notes, new weekly quizzes, an English-Latin glossary, and a Latin-English glossary. |
A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Magnoliophyta, also called angiosperms). The biological function of a flower is to mediate the union of male sperm with female ovum in order to produce seeds. The process begins with pollination, is followed by fertilization, leading to the formation and dispersal of the seeds. For the higher plants, seeds are the next generation, and serve as the primary means by which individuals of a species are dispersed across the landscape. The grouping of flowers on a plant is called the inflorescence.
In addition to serving as the reproductive organs of flowering plants, flowers have long been admired and used by humans, mainly to beautify their environment but also as a source of food.
|
Lexile Measures at Home
Lexile measures defined
The Lexile Framework for Reading is a scientific approach to measuring readers and reading materials. A key component of the Lexile Framework is a number called the Lexile measure. A Lexile measure indicates both the difficulty of a text, such as a book or magazine article, and a student’s reading ability. Knowing the Lexile text measure of a book and the Lexile reader measure of a student helps to predict how the book matches the student’s reading ability – whether the book is too easy, too difficult or just right.
Both a Lexile reader measure and a Lexile text measure are denoted as a simple number followed by an "L" (e.g., 850L) and are placed on the Lexile scale. The Lexile scale ranges from below 200L for beginning readers and beginning-reading text to above 1700L for advanced readers and text.
The Lexile Framework, which comprises both the Lexile measure and Lexile scale, is not an instructional program any more than a thermometer is a medical treatment. But just as a thermometer is a useful diagnostic tool, the Lexile Framework is useful in managing your child’s reading development.
For additional information regarding the Lexile Framework for Reading, please view The Lexile Framework for Reading: A Web Session for Parents (Flash). This presentation provides families with an overview of the Lexile Framework for Reading.
Lexile measures are included in the SOL score reports that are sent to parents or guardians by school divisions. See a sample of the SOL Student Report (PDF)
Managing your child’s reading comprehension
Lexile measures allow you to manage your child’s reading comprehension by matching him or her to appropriately challenging text. Matching your child’s Lexile measure to a text with the same Lexile measure leads to an expected 75 percent comprehension rate – not too difficult to be frustrating, but difficult enough to encourage reading progress. You can further help your child by knowing his or her Lexile range. A reader’s recommended Lexile range is 50L above and 100L below his or her Lexile measure. These are the boundaries between the easiest kind of reading materials for your child and the most challenging level at which he or she should be able to read.
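Because the recommended range is just a fixed offset from the reader's measure, it is easy to compute; the short Python sketch below simply encodes the 50L-above and 100L-below rule described here (the function names are made up for illustration).

def lexile_range(reader_measure):
    """Return the (lowest, highest) recommended Lexile text measures."""
    return reader_measure - 100, reader_measure + 50

def book_fits(reader_measure, book_measure):
    """Check whether a book falls in the reader's recommended range."""
    low, high = lexile_range(reader_measure)
    return low <= book_measure <= high

print(lexile_range(850))     # (750, 900)
print(book_fits(850, 880))   # True  - appropriately challenging
print(book_fits(850, 1000))  # False - likely too difficult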
Finding books and articles that will help your child
Once you have your child’s Lexile measure, you can connect him or her with tens of thousands of books and tens of millions of articles with Lexile measures. Most public libraries have access to online periodical databases that you can use to search for newspaper and magazine articles by Lexile measure. It is important to note that the Lexile measure does not address the content or quality of the book. Many other factors affect the relationship between a reader and a book, including its content, the age and interests of the reader, and the design of the actual book. The Lexile measure is a good starting point in the book-selection process, but parents and educators should always consider these other factors when making a decision about which book to choose. The Look up a Book website is available to help you create customized reading lists. These free databases allow you to search for books based on Lexile measures and by interest categories or school assignment topics. With the Find a Book site, you can even check the availability of titles at your local library. Books listed are not endorsed or recommended by the Virginia Department of Education.
Using Lexile measures at home
- Ensure that your child gets plenty of reading practice, concentrating on material within his or her Lexile range (50L above and 100L below his or her Lexile measure). You can search the Lexile book database and find a list of books in your child’s range at the Find a Book website. Books listed are not endorsed or recommended by the Virginia Department of Education.
- Communicate with your child’s teacher and school librarian about his or her reading needs and accomplishments.
- When a reading assignment proves too challenging for your child, use activities to help. For example, review the words and definitions from the glossary, and the review questions at the end of a chapter before your child reads the text. Afterwards, be sure to return to the glossary and review the questions to make certain your child understood the material.
- Celebrate your child’s reading accomplishments. One of the great things about the Lexile Framework is that it provides an easy way for readers to keep track of their own growth and progress. You and your child can set goals for reading – sticking to a reading schedule, reading a book at a higher Lexile measure, trying new kinds of books and articles, or reading a certain number of pages per week. When your child hits the goal, make an occasion out of it!
For more information, visit the Lexile website's Frequently Asked Questions for Families. |
Definition of Turkish Empire
1. Noun. A Turkish sultanate of southwestern Asia and northeastern Africa and southeastern Europe; created by the Ottoman Turks in the 13th century and lasted until the end of World War I; although initially small it expanded until it superseded the Byzantine Empire.
Generic synonyms: Empire, Imperium
Group relationships: Africa, Asia, Europe
Literary usage of Turkish Empire
Below you will find example usage of this term as found in modern and/or classical literature:
1. Haydn's Dictionary of Dates and Universal Information Relating to All Ages by Joseph Haydn, Benjamin Vincent (1889)
"30 March, „ [See Loans 1854-5) TURKEY. Great Britain, Fr.mce, and Austria guarantee integrity of Turkish empire . ..."
2. The Encyclopedia Americana: A Library of Universal Knowledge (1920)
"(Munich 1917) ; Eversley, Lord, 'The Turkish Empire: Its growth and decay' (London 1917); Ferriman, ZD, 'Turkey and the Turks' (ib 1911); Freeman, EA, ..."
3. The New World: Problems in Political Geography by Isaiah Bowman (1921)
"Before the Balkan wars the Turkish Empire was as large as Russia in Europe; ... EDITS RIRA NEAN S DECLINE OF THE Turkish Empire Extreme limits ["^Losses by ..."
4. Woodrow Wilson and World Settlement by Ray Stannard Baker (1922)
"CHAPTER IV THE Turkish Empire AS BOOTY — TERMS OF THE SECRET TREATIES AND AGREEMENTS FOR THE PARTITION OF TURKEY WE COME now to the most illuminating of all ..."
5. The Theological and Literary Journal (1851)
"The Turkish empire, under the same designation which it had previously borne, is here manifestly represented, anew . . . Symbolized by the Euphrates, ..."
6. Essays on the Progress of Nations in Civilization, Productive Industry by Ezra Champion Seaman (1868)
"History and Probable Destiny of the Turkish Empire. The empire of the Saracens was declining and crumbling to pieces by reason of successful invasions, ..." |
The Exclusionary Rule is an addendum to the Fourth Amendment of the United States Constitution. While the Fourth Amendment declared a citizen’s right to privacy in home, papers, and property, there were no set rules about the collection of evidence. The prevalence of illegally seized evidence made this addendum a necessity. Further exceptions were added to help clarify this rule for law enforcement and citizens alike. The Exclusionary Rule and its exceptions provide benefits to citizens, but not without costs, and continue to be a subject of debate.
In principle, the Exclusionary Rule declares that evidence gathered as a result of an illegal search is inadmissible in court (Zalman, 2011). This concept is largely the product of constitutional monarchies and republics, as the will of a theocratic monarch would be above contestation. Further, any evidence that could only be retrieved as a result of that initial illegal search is considered inadmissible under the fruit of the poisonous tree doctrine. Historically, common law stated that a person had a right against self-incrimination; however, all evidence was admissible in court. This changed in 1914 with Weeks v. United States, when the Supreme Court declared that evidence seized without a warrant is a violation of the Fourth Amendment. This ruling only applied to federal courts. This became an issue in 1949 in the case of Wolf v. Colorado (Zalman, 2011). Under the federal rule, the evidence used to convict Dr. Wolf of performing abortions, which were illegal in Colorado, would have been considered inadmissible; however, the state was not obligated to adopt the Exclusionary Rule, and Dr. Wolf spent 18 months in prison. Eventually, the Exclusionary Rule was incorporated as part of the 1961 decision in Mapp v. Ohio.
While the adoption of the Exclusionary Rule has been essential to thwarting abuses of power, there are several exceptions by which law enforcement may still enter illegally seized or deceptively gathered evidence. Among these are good faith, evidence from an independent source, inevitable discovery, and attenuation. Good faith essentially means that the officer believed he or she was gathering evidence legally, even though the search was in fact unlawful. This heavily undermines the Exclusionary Rule, relegating it to a deterrent rather than an assertion of individual rights. Evidence from an independent source is admissible because the Exclusionary Rule only applies to government officials. Under this exception, evidence can be gathered by a third party and handed over to law enforcement without the legal trappings of the Fourth Amendment. The inevitable discovery exception applies if the evidence would eventually have been found anyway. For example, in the case of Nix v. Williams in 1984, the evidence of a girl's body was deemed admissible despite an unconstitutional interrogation because it would inevitably have been found by the search party following its instructions (Zalman, 2011). Finally, the attenuation exception is used when the link between the tainted evidence and the initial illegal search is tenuous.
While the Exclusionary Rule provides the benefit of protection to the United States citizen, it is not without the cost of letting criminals go free when a criminal justice professional acts illegally. These exceptions give the prosecution a way to enter such evidence despite the wrongdoing of the officer, possibly securing a conviction that otherwise would have been lost. Despite this, there are those that offer alternatives to the Exclusionary Rule, including federal tort cases and even contempt of court (Zalman, 2011).
However, with all of these exceptions, the necessity of the Exclusionary Rule has become an issue of debate. Yet this rule provides a necessary buffer between the United States citizen and those corrupt officials that may act as harbingers of tyranny. As technology advances, the right to privacy must be protected as law enforcement officials test the boundaries of what is considered legal and what is considered an illegal search. This has recently included the decision that a drug-sniffing dog is not performing an illegal search, but that the use of infrared imaging devices violated a person’s right to privacy (Sanchez, 2007). With boundaries constantly being tested, the American citizen needs the protection of the Exclusionary Rule to remind law enforcement that there are rules and that tyranny is not law.
The Exclusionary Rule is a necessary element of the Fourth Amendment. It reiterates the right to privacy that is occasionally abused by persons of authority. Combined with the exclusions, the Exclusionary Rule will continue to be controversial as long as elements of corruption and a passion for privacy are at play.
Sanchez, J. (2007). How we got to Caballes. Reason, 38(4), 24-25.
Zalman, M. (2011). Criminal Procedure: Constitution and society (6th ed.). Upper Saddle River, NJ: Pearson/Prentice Hall. |
The waves that pass through Earth's inner core are not behaving predictably.
Researchers look into Earth's interior by measuring how fast and in what form seismic waves travel from an earthquake, then through the mantle, the outer core, inner core, back into the mantle and then through the crust to a seismograph. For at least a decade, seismologists have known that the waves traveling north-south through the inner core do so much faster than those traveling east-west (so-called anisotropy). The telling data came from seismic waves that traveled from earthquakes in the Sandwich Islands to seismographs in Alaska.
This map of ray paths shows the travel time differences between waves that pass through the inner core and those that reflect off its boundary. Global and regional networks of seismometers measure how quickly the waves travel from their source, usually an earthquake. Circles indicate where those passing through traveled faster than reflecting from it, and triangles indicate where they traveled slower. The travel time differences vary between the western and eastern hemispheres, revealing that the inner core's structure is different on either side. Numbers mark different seismic networks. Courtesy of L. Wen, SUNY-Stony Brook
They have been working ever since to determine what this speed difference implies about the structure of the inner core. Recent work with new and refined data is showing that the strange behavior pervades the inner core, east and west.
Xinlei Sun and Xiaodong Song published work in the Feb. 19 online Geophysical Research Letters presenting evidence from shorter waves that bolsters their theory that the inner core is layered. In a 1998 paper in Science, Song, of the University of Illinois, and Donald Helmberger of the California Institute of Technology co-authored a paper proposing that the top layer was isotropic and the bottom layer anisotropic. "Previously, people assumed anisotropy is uniform," Song says. In his Feb. 19 paper, Song says, "We're seeing smaller, finer structures with the short-period data."
Song's 1998 study and recent work focus on the core's western side, relying on the Sandwich Islands-Alaska waves. But even in 1998, Song and Helmberger suspected something was strange in the eastern hemisphere too. Their 1998 diagram of the possible inner layer did not show a perfectly round circle, but rather an uneven blob, allowing that the transition zone in the eastern hemisphere might be deeper.
That's not to say that the outer core has varying regions of isotropy and anisotropy, says Lianxing Wen, a researcher at the State University of New York in Stony Brook. "We don't see any evidence that this top 80 kilometers are anisotropic. But even in this outer layer, the eastern hemisphere is faster, and the western hemisphere is slower." Wen and colleague Fenglin Niu published work in the April 26, 2001, Nature showing that waves passing through this upper layer traveled at different speeds beneath Asia and South America.
Adding to the mystery, Wen says, the waves traveling faster through the eastern hemisphere arrive at the endpoints with less energy. This is counterintuitive. And it eliminates temperature variations as the main cause of the hemispheric differences.
One possible cause, then, is structure, such as, Wen says, the presence of a melt pocket with an irregular shape and volume between hemispheres. "What is surprising for us is you have this hemisphere variation. We are also surprised that there could be differing fractions of melt between the two hemispheres."
Kenneth Creager, a researcher at the University of Washington whose 1992 Nature paper generated discussion about the strange travel times of north-south waves, is also surprised at the difference between east and west. Another potential culprit, he says, is the alignment of the crystal structure in different parts of the core. Where the crystals are not well aligned, the waves travel slower. A well-aligned structure will send the waves through faster. "It looks like the western hemisphere is better aligned than the eastern hemisphere," Creager says.
Also related is the mysterious boundary between the core and mantle. Waves traveling through this region have also thrown seismologists for a loop over the past decade. "What's going on at the core-mantle boundary could be affecting heat flow in the core," Wen says. Something is stabilizing the outer core's temperature long enough for its structure to form differently than that of the lower core.
Waves that travel fast beneath the core-mantle boundary under Asia and slower at the boundary beneath the Pacific mirror their speed differences in the inner core, according to work being published in this month's Geophysical Research Letters by Helmberger and co-authors Sheng-Nian Luo and Sidao Ni. The velocities could reveal connections in structure between the core-mantle boundary and inner core, they report. "The heat transfer out of the mantle is different underneath the Pacific than it is underneath Asia," Helmberger says. |
The function of the piriform cortex relates to olfaction, which is the perception of smells. Sometimes called the olfactory cortex, olfactory lobe or paleopallium, piriform cortical regions are present in the brains of amphibians, reptiles and mammals.
The piriform cortex is among three areas that emerge in the telencephalon of amphibians, situated caudally to a dorsal area, which is caudal to a hippocampal area. Farther along the phylogenic timeline, the telencephalic bulb of reptiles as viewed in a cross section of the transverse plane extends with the archipallial hippocampus folding toward the midline and down as the dorsal area begins to form a recognizable cortex.
As mammalian cerebrums developed, the volume of the dorsal cortex increased in slightly greater proportion than overall brain volume, until it enveloped the hippocampal regions. Recognized as neopallium or neocortex, enlarged dorsal areas envelop the paleopallial piriform cortex in humans and Old World monkeys.
Among taxonomic groupings of mammals, the piriform cortex and the olfactory bulb become proportionally smaller in the brains of phylogenically younger species. The piriform cortex occupies a greater proportion of the overall brain and of the telencephalic brains of insectivores than in primates. The piriform cortex continues to occupy a consistent albeit small and declining proportion of the increasingly large telencephalon in the most recent primate species while the volume of the olfactory bulb becomes less in proportion.
|
Smog is made up of many air pollutants. The main ones are ground-level ozone and fine particulate matter. Smog can also contain sulphur dioxide, nitrogen dioxide, total reduced sulphur, and carbon monoxide.
Ground-level ozone (O3)
The ozone found high in the earth’s atmosphere is called “good ozone” because it helps protect us from the sun's rays. But ozone at the ground level is not good for human health. Ground-level ozone is not emitted directly into the air, but forms when nitrogen oxide and volatile organic compounds (VOCs) from vehicle exhaust, factory emissions and other sources react with sunlight.
It’s called the “bad ozone” because if you breathe it in, it can cause health problems. Ground-level ozone usually peaks between noon and 6 p.m. during the summer months.
Health effects of ground-level ozone:
- worsening symptoms for people with asthma, COPD and other lung diseases, and for people with cardiovascular (heart) disease
- swollen, irritated airways
- irritation to your eyes, nose and throat
- coughing, wheezing
Over time, ozone can cause permanent lung damage.
Sources of ground-level ozone
- burning fossil fuels (gas, oil or coal) for industry and transportation
- consumer products (paints, wood laminates)
- natural sources (plants, trees, lightning)
Fine particulate matter (PM)
Fine particulate matter is a broad name given to particles of liquids and solids that pollute the air. These particles come in different sizes and are made of different things.
PM 2.5 is particulate matter that is very small (2.5 microns or less – that’s about the width of human hair). PM 2.5 can be breathed deeply into your lungs and will stay there, causing health problems. PM 2.5 can stay in the air longer and travel farther than larger particles.
Health effects of fine particulate matter:
- coughing or sneezing
- irritation in your eyes, throat, and lungs
- wheezing and breathing problems in people with asthma, COPD, and other lung diseases
- cardiovascular health problems, including heart attacks in people with certain pre-existing heart diseases
Sources of fine particulate matter:
- vehicle exhaust
- road dust from paved and unpaved roads
- wood stoves, fireplaces, and other kinds of wood burning
- forest fires
Sulphur dioxide (SO2)
Sulphur dioxide is a colourless gas that smells like burnt matches. It is one of the main ingredients in acid rain. Sulphur dioxide, combined with volatile organic compounds (VOCs) and sunlight, creates ground-level ozone, the main ingredient in smog.
Health effects of SO2:
- irritation in your nose and throat
- breathing problems
- new cases of lung disease
- worsening symptoms in people with asthma, COPD, and other long-term lung diseases
- worsening cardiovascular (heart) disease
- changes in the lung's natural defences
Sources of SO2:
- fossil fuels burned in petroleum refineries
- pulp and paper mills
- steel mills
- electricity generating plants, including coal-fired power plants
- non-iron ore smelters
- diesel vehicles
- volcanoes and hot springs
Nitrogen Oxides (NOx)
Nitrogen dioxide, the most familiar of the nitrogen oxides, is a reddish-brown gas with a sharp, unpleasant smell.
Health effects of NOx:
- can lower your resistance to lung infections
- can cause shortness of breath and irritate the upper airways, especially in people with lung disease, such as asthma and chronic obstructive pulmonary disorder (COPD)
Sources of NOx:
- burning fossil fuels in motor vehicles, homes, and industries
- oil, gas, and coal-fired power plants
- metal production
- forest fires, lightning and decaying vegetation
Carbon monoxide (CO)
Carbon monoxide is an odourless, tasteless, colourless gas. At high levels, carbon monoxide is poisonous.
Health effects of carbon monoxide:
- shortness of breath
- slower reflexes and reduced perception
- at high levels: seizures, unconsciousness, coma, respiratory failure and death.
Sources of carbon monoxide:
- burning fossil fuels in vehicles
- metal production
- emissions from heating devices (gas heaters, etc.) |
Chapter 14: Wildlife Identification
It is crucial for all hunters to have wildlife identification skills, in order to ensure they only take legal game on their hunt. In this chapter we will explore some of Florida's wildlife which hunters might encounter in the field. Remember that hunting regulations change on a yearly basis. Always check hunting regulations before the hunting season to get the most current regulations on the game you will be hunting.
There is no hunting of fully protected animals. You cannot possess a fully protected animal, reptile, bird, or any part thereof, without a permit from the Florida Fish and Wildlife Conservation Commission and/or the U.S. Fish and Wildlife Service.
The Florida panther is also called a puma, cougar and mountain lion. The panther is the Florida state animal as chosen by the school children of the state. The panther is a large cat, tawny to gray in color, never black. Males grow up to seven feet in length with a weight of 130 pounds. Females are smaller - six feet in length - weighing 60 to 80 pounds. The Florida species is identified by a cowlick, or swirl of fur, in the middle of its back. The Florida panther has irregular flecking with patches of white hairs on the neck and shoulders, and its tail has a crooked tip. Adult panthers need up to 200 square miles of territory for their home range. The species is endangered due to habitat loss and inbreeding. The panther’s diet consists mainly of deer and hogs.
Source: FWC Hunter Education Program |
How do we help children become responsible learners? Better yet, how do we help children become responsible? First and foremost, children must possess the ability to regulate their own behavior. Think about when you were a child and your parents told you to sit quietly when you went to visit your grandparents. For the most part, you could do it. Why do today’s children have such difficulty sitting still?
First, it has to do with the input they receive. Children of yesteryear played long and hard with make-believe. They would spend hours in a dirt pile with their trucks or hours with their baby dolls playing house. They had to find things that would make the play real, like sticks to make roadways, or they would turn the front porch into different rooms of the house. Today’s children get everything all together. Dollhouses have multiple rooms and lots of furniture. Trucks come in sets complete with roadways and bridges. There is no longer the need to invent. Therefore, children quickly become bored with such games. The ability to remain on task for a period of time is the initial foundation of self-regulation.
Today’s children sit for hours without gross motor activity playing video games or watching TV. They are entertained rather than developing the ability to entertain themselves. Further as we have discussed in the past, such activities are comparable to sleeping. Very little cognitive activity occurs in the brain during these activities. There is no thinking happening in their brains when they are engaged in these activities.
So, how do you work with your children to develop a sense of self-regulation and responsibility? First, model the behavior. Children will value responsibility if you value responsibility. They require guidance in the act of responsibility through loving discipline and follow-through. Key concepts include teaching compassion for others, honesty, courage, self-control and self-respect. All can be addressed in your child’s daily life. Consider, for instance, making a get-well card with your child for an ailing neighbor or making some cookies to take to the nursing home. Over time, your child will begin to see your values in being compassionate and will begin to mirror the behavior. The same holds true for each of the concepts. If you do it and do it consistently, your children will begin to do it. And don’t forget using books that teach these lessons. Your discussions with your child as you read the story can help bring the point home.
Just remember that children need lots of practice with new concepts and you will have to work on these over time and in a variety of ways. But before long, your child will begin to demonstrate the signs of responsibility. And, as they continue to grow, will learn to regulate their own behavior through demonstrating responsible behavior.
Mary Rockey, Ph.D., BCBA is the Director of Pupil Service at Randolph Central School.
Wednesday, March 11, 2009
One of the major differences between conventional agriculture and natural ecosystems is in the number and complexity of components. Conventional agriculture is a fairly simple and linear system with very few components and pathways. In contrast, natural ecosystems are highly complex, with hundreds, if not thousands, of active components (microbes, insects, plants, animals) as well as numerous pathways through the system for energy and materials.
The needs of each component in a natural ecosystem are fulfilled by the outputs of multiple components, and in return, each provides multiple useful inputs for others. Furthermore, each need can often be fulfilled through multiple pathways. These needs are not only material requirements like nutrients and water, but also environmental requirements, like the right amount of sunlight, humidity, pest control, pollination and physical support.
This complexity has two direct benefits -
- Since everything is useful for something else, there is no 'waste' in the system, increasing energy and resource use efficiency.
- Due to multiple pathways for fulfilling each need, removal of a few components does not debilitate the system.
In permaculture, such complexity is intentionally designed into the system, creating a "food forest" with as many as 500-600 plant species, a few domesticated animal species, and attracting a large number of beneficial wild insects and birds. With this approach, a permaculture farm can produce an abundance of food practically throughout the year, without any chemical or mechanical inputs, and requiring far less labor than a conventional farm.
We need to shift our point of view significantly to realize how this is possible - we know that managing a farm with five plant species takes work. Managing a farm with fifty productive plant species will probably take even more work. What is perhaps a little less obvious is that a farm with hundreds of plant species, a number of animal species, and numerous insect and bird species can be completely self-regulating and self-sufficient, provided the components are chosen well to work together with each other and the local climate and landscape.
On a related note, the BBC series 'Natural World' recently showed the documentary A Farm for the Future, produced by wildlife film maker Rebecca Hosking:
Realising that all food production in the UK is completely dependent on abundant cheap fossil fuel, particularly oil, [Rebecca] sets out to discover just how secure this oil supply is. Alarmed by the answers, she explores ways of farming without using fossil fuel. With the help of pioneering farmers and growers, Rebecca learns that it is actually nature that holds the key to farming in a low-energy future.
If you're in the UK, you can watch this program on the internet. Otherwise, it may be available on your local BBC channel.
Vine, a plant with a trailing or climbing stem. A trailing stem is one that grows along the ground; it is too weak to stand upright and does not have any means of climbing. The trailing arbutus is an example. A climbing stem has some means of ascending trees or other objects. It may climb by tendrils, as does the grape; by aerial roots, as poison ivy; by twining, as the morning glory; by adhesive disks, as the Virginia creeper; or by hooked spines, as the rattan. Some climbers, such as English ivy, will grow along the ground like trailers if no support is available. Honeysuckle is an ornamental vine.
Vines may be either annuals or perennials. Some, such as the arbutus, have woody stems; others, such as the pea, have herbaceous stems.
A few of the climbers, such as the dodder, are parasites that feed on the plants they cling to. Most climbers, however, attach themselves to other plants for support only. Even these may kill the supporting plants by strangling them or blocking out light.
Many vines are ornamental plants. They are used to cover walls, fences, trellises, and arbors, and to conceal unsightly objects. Some vines are important food plants.
Evolution by natural selection, as set out in Charles Darwin’s On the Origin of Species from 1859, is often thought of as a constant and bloody struggle for survival. Here, living organisms are engaged in an incessant fight to survive long enough to reproduce, and over the long stretches of geological time species adapt to their environment so as to enhance their prospects of leaving descendants. Those that don’t leave descendants become extinct.
This view of evolution as red in tooth and claw mischaracterizes the totality of Darwin’s thought. In the Origin of Species, Darwin briefly refers to what he calls sexual selection, which he says depends not on “a struggle for existence, but on a struggle between the males for possession of the females”. The peacock is perhaps the clearest example of the concept with its beautiful tail feathers, which it flourishes to attract the attention of a peahen. Its beauty aside, the peacock’s tail must be an ungainly hindrance to the bird in its daily life; its length when trailing on the ground and visibility when displayed must make it easier for a predator to catch. If evolution were just a struggle to survive, why did the peacock have such an impediment to its chances of survival?
So striking was the example of the peacock in suggesting that there was more to evolution than just a struggle for existence that in 1860, Darwin wrote, “[T]he sight of a feather in a peacock’s tail, whenever I gaze at it, makes me sick!” Peacocks and other examples of such displays led to Darwin developing his ideas on sexual selection in a later volume entitled The Descent of Man, and Selection in Relation to Sex in 1871, yet this aspect of his theory of evolution has never received the same degree of attention as the concept of natural selection by adaptation, in part due to resistance by some of his original supporters.
Birds offer some of the most obvious illustrations of the concept, including the birds of paradise with their extraordinary plumage, saturated in colour. Bowerbirds are another striking example. Male bowerbirds make great efforts to create elaborate bowers, or nests, ornamented with stones, shells and even human debris such as bits of plastic. The females inspect them and, after a number of visits to different bowers, elect to mate with the male they deem to have created the best.
These characteristics do not directly assist in survival except in the limited sense of attracting a mate. If the characteristic leads to an increase in opportunities to mate that outweighs the risks of additional visibility to predators, it assists in the species’ survival.
Sexual selection plays a role in mammal courtship as well. In many species, the larger a male is, the more likely he is to mate. Females generally choose larger males to mate with, presumably on the basis that larger babies are more likely to survive, dominate others and then mate in turn. Male characteristics that suggest larger size add to their attractiveness to females. Male koalas and hyenas, for example, give out bellows and barks to attract a mate and it seems the deeper the sound, the more attractive it is to females as this suggests a larger male.
In deer, sheep and cattle, the males grow horns or antlers, which can be large and unwieldy. The larger the antlers or horns, the more likely the male animal is to mate. This illustrates how such characteristics can become so pronounced; a feedback loop sets in whereby females choose to mate with the male with the largest antlers, the best bower and so forth, and so males are born which inherit that characteristic. The modification is selected for across the generations, becoming more pronounced.
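A toy numerical sketch can make this feedback loop concrete. The simulation below is illustrative only and not drawn from the text; the population size, the number of males each female compares, and the inheritance noise are invented parameters, chosen simply to show how a consistent female preference for the largest ornament ratchets the average ornament size upward over generations.

```python
import random

# Toy model of the runaway feedback loop described above: females prefer the
# largest-ornamented male they encounter, and sons inherit the father's
# ornament size with some random variation. All parameters are illustrative.
random.seed(1)

POP = 200          # males per generation (assumed)
GENERATIONS = 30   # generations to simulate (assumed)
CHOOSINESS = 5     # each female compares this many males (assumed)

ornaments = [random.gauss(10.0, 1.0) for _ in range(POP)]  # arbitrary starting sizes

for _ in range(GENERATIONS):
    # Each mating: a female samples a few males and picks the largest ornament.
    fathers = [max(random.sample(ornaments, CHOOSINESS)) for _ in range(POP)]
    # Offspring inherit the father's ornament size plus small random variation.
    ornaments = [f + random.gauss(0.0, 0.5) for f in fathers]

mean_size = sum(ornaments) / POP
print(f"Mean ornament size after {GENERATIONS} generations: {mean_size:.1f}")
```

Even with no survival benefit at all, the preferred trait grows steadily more pronounced, which is the pattern the peacock's tail and the bowerbird's bower illustrate.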
It is fascinating to consider human evolution from this perspective. The most profound aspect of human evolution, the factor that really defines what it is to be human, is being bipedal. No other primate, or indeed mammal, has the same bipedal form of locomotion that we do. It is possible that our higher level of intelligence emerged only as a consequence of first becoming fully bipedal.
Full bipedalism in our human ancestors followed the divergence between humans and chimpanzees from a common ancestor, around six million years ago. A number of theories have been suggested for why our ancestors became fully bipedal and one of the first was that climatic changes in Africa led to the opening up of savannas and the reduction in forests. As a consequence, our human ancestors descended from their formerly arboreal habitats, spent more time on the ground and, as a result, became bipedal from striding around their new habitat.
Yet paleoclimatic evidence suggests this environmental change, if it occurred, did not happen around the time that fossil evidence points to bipedalism developing. Other theories include the idea that being upright allows humans to be cooler as more of their body is higher off ground level and also decreases the amount of body area that is directly exposed to the sun. This regulatory control offered advantages in maintaining body temperature and an increased chance of survival.
It is also possible that partial bipedalism is a characteristic of our older ancestors, including the joint ancestors of chimpanzees and humans, and that chimpanzees are the ones that have changed the most by becoming functionally quadrupedal. Gibbons, for example, are often bipedal when climbing in trees and walking along thin branches and provide an illustration of the nature of our possible original partial bipedalism. According to this theory, human ancestors were already partially bipedal when living an arboreal lifestyle and perfected this when travelling on ground level, whilst chimpanzees took a different course.
The fossil record for human ancestors is far from complete and it is challenging to draw a conclusion from it that definitively establishes why our ancestors became fully bipedal. This is especially so if the reasons for this were primarily behavioural as it is likely that this will have to be inferred from the fossil evidence.
Bipedalism has a number of disadvantages to humans compared to our quadrupedal great ape relatives. Most significantly, it makes childbirth more difficult and dangerous for both mother and baby. It also makes humans more susceptible to back and knee problems and meant that our ancestors sacrificed the foot’s opposable “thumb” and its ability to grasp objects.
If standing and walking upright has such significant drawbacks, it might be as a result of factors that are not directly related only to survival. This suggests that our bipedal gait is due to sexual selection instead, at least in part. Sexual selection can radically alter an organism’s form and behaviour, as it did with the peacock, so it is not out of the question.
Richard Dawkins has suggested that it might be as simple as that – standing upright became an attractive trait at some point amongst our ancestors. In his book The Ancestor’s Tale: A Pilgrimage to the Dawn of Life he suggests that at some point in our evolution, those who engaged in the new “fashion” of standing and walking on two legs were attractive to potential mates, gained reproductive success and left descendants whereas, over time, those who weren’t bipedal failed to mate. As described above, this trait could quickly become established and grow more and more pronounced as the generations passed by. At least initially, the practical utility of being bipedal may have been irrelevant.
Standing upright might have also been important in terms of sexual selection as being bipedal helps in displaying the body to potential mates. An example of such a display could be human female breasts, which are larger than those of other primates. Chimpanzee breasts are flatter unless they happen to be lactating, in which case they are enlarged to store milk. Human breasts are permanently in this larger condition and it is possible this is a display to attract a mate. If so, this would be an unusual example of a female displaying to attract a mate. Standing upright helps display the breasts more easily.
Similarly, it has been suggested that standing upright helps the male display his genitals to females. Dawkins has elaborated on this point by noting that male humans, unlike other great apes, lack a penis bone or baculum. An erection in the human penis is entirely due to blood pressure and Dawkins suggests that the lack of a baculum has evolved so that the erect penis can act as an indicator to a potential mate of both physical and mental health. This might seem speculative, but points such as these are potentially fundamental to human reproductive success and so could have played an important role in our evolution.
An interesting point to consider is that human evolution may have been subject to much more randomness than either natural or sexual selection might imply. At the genetic level, evolution is due to random mutations. Given a large population and a long period of time, we can interpret changes in the organism’s form or behaviour as making it better suited to survive or attract a mate. Yet if the population is small, there may not be enough individuals for such changes to produce a pronounced difference, and the random background of mutations and events can prevent natural or sexual selection from playing the role in the species’ evolution that we would normally expect.
It seems that for a long time, our human ancestors suffered from just this perilous condition. In the six million years or so since we diverged from chimpanzees, for the vast majority of the time, the line of animals that led to the chimpanzees outnumbered our ancestors. We have, until very recently, been generally unsuccessful animals, with an insignificant population. Genetic studies of chimpanzees indicate they are more evolved animals than we are, as evolution has had more material to act upon. As a result, random mutation may be a more important factor in human evolution than we might care to think.
The transition to full bipedalism may have arisen as a result of many of these factors acting together; it is unlikely that it was just one cause. For example, perhaps sexual selection started the process but when the practical advantages gave our bipedal ancestors a better chance of survival, the change really took hold. The incompleteness of the fossil record and the generally interpretative context of much of these issues suggest it is possible that there will never be a definitive answer as to how or why humans became fully bipedal. As more fossils are uncovered, more will be learned about our ancestors and our evolution and it will be fascinating to see the story unfold.
Many people in Nepal see federalism as a way to empower communities and regions marginalised by the centralization of power, and to acknowledge and further promote the country’s religious, linguistic and ethnic diversity.
Understanding federalism and discussing its key issues and options are at the core of the current constitution making debate.
This Glossary builds on an earlier edition, published in 2009, and offers definitions for some 300 federal terms and their translation into Nepali. It has been created to help foster a common understanding of the key terms used in federal arrangements. It also serves as a tool to explain federal concepts to individuals and groups as they engage in the process of preparing submissions and proposals to the Constituent Assembly.
The Glossary is a joint publication produced by International IDEA, the United Nations Development Programme (through its Support to Participatory Constitution Building in Nepal project) and the Forum of Federations (Canada), and is part of an attempt to standardize Nepali constitutional terminology and contribute to the development of plain language drafting of legal and constitutional texts in the country.
Foreword (Forum of Federations)
Foreword (International IDEA)
Learning style is an individual's natural or habitual pattern of acquiring and processing information in learning situations. A core concept is that individuals differ in how they learn. The idea of individualized learning styles originated in the 1970s, and has greatly influenced education.
Proponents of the use of learning styles in education recommend that teachers assess the learning styles of their students and adapt their classroom methods to best fit each student's learning style. Although there is ample evidence for differences in individual thinking and ways of processing various types of information, few studies have reliably tested the validity of using learning styles in education. Critics say there is no evidence that identifying an individual student's learning style produces better outcomes. There is evidence of empirical and pedagogical problems related to the use of learning tasks to "correspond to differences in a one-to-one fashion". Well-designed studies contradict the widespread "meshing hypothesis", that a student will learn best if taught in a method deemed appropriate for the student's learning style.
- 1 David Kolb's model
- 2 Learning Modalities
- 3 Peter Honey and Alan Mumford's model
- 4 Anthony Gregorc's model
- 5 Sudbury model of democratic education
- 6 Neil Fleming's VAK/VARK model
- 7 Other models
- 8 Assessment methods
- 9 Criticism
- 10 Learning styles in the classroom
- 11 See also
- 12 References
David Kolb's model
David A. Kolb's model is based on the Experiential Learning Theory, as explained in his book Experiential Learning. The ELT model outlines two related approaches toward grasping experience: Concrete Experience and Abstract Conceptualization, as well as two related approaches toward transforming experience: Reflective Observation and Active Experimentation. According to Kolb's model, the ideal learning process engages all four of these modes in response to situational demands. In order for learning to be effective, all four of these approaches must be incorporated. As individuals attempt to use all four approaches, however, they tend to develop strengths in one experience-grasping approach and one experience-transforming approach. The resulting learning styles are combinations of the individual's preferred approaches. These learning styles are as follows:
David Kolb’s Experiential Learning Model (ELM)
|                            | Active Experimentation | Reflective Observation |
| Concrete Experience        | Accommodating          | Diverging              |
| Abstract Conceptualization | Converging             | Assimilating           |
1. Accommodators: Concrete Experience + Active Experiment
- "Hands-on" and concert
- Wants to do
- Discovery method
- Sets objectives/schedules
- Asks questions fearlessly
- Challenges theories
- Receive information from others
- Gut feeling rather than logic
2. Converger: Abstract Conceptualization + Active Experiment
- "Hands-on" and theory
- Specific problems
- Tests hypothesis
- Best answer
- Works alone
- Problem solving
- Technical over interpersonal
3. Diverger: Concrete Experience + Reflective Observation
- Real life experience and discussion
- More than one possible solution
- Brainstorming and groupwork
- Observe rather than do
- Background information
4. Assimilator: Abstract Conceptualization + Reflective Observation
- Theories and facts
- Theoretical models and graphs
- Talk about rationale rather than do
- Defines problems
- Logical Formats
Kolb's model gave rise to the Learning Style Inventory, an assessment method used to determine an individual's learning style. An individual may exhibit a preference for one of the four styles—Accommodating, Converging, Diverging and Assimilating—depending on their approach to learning via the experiential learning theory model.
Although Kolb's model is the most widely accepted with substantial empirical support, recent studies suggest that the Learning Style Inventory (LSI) is seriously flawed.
"Sensory preferences influence the ways in which students learn ... Perceptual preferences affect more than 70 percent of school-age youngsters" (Dunn, Beaudry, & Klavas, 1989, p. 52). There are three Learning Modalities adapted from Barbe and Swassing(writers of the book, teaching through modality strengths: concept and practices):
3. Tactile (Kinesthetic)
Descriptions of Learning Modalities:
Learning modalities can occur independently or in combination, can change over time, and become integrated with age.
Peter Honey and Alan Mumford's model
Two adaptations were made to Kolb's experiential model. Firstly, the stages in the cycle were renamed to accord with managerial experiences of decision making/problem solving. The Honey & Mumford stages are:
- Having an experience
- Reviewing the experience
- Concluding from the experience
- Planning the next steps.
Secondly, the styles were directly aligned to the stages in the cycle and named Activist, Reflector, Theorist and Pragmatist. These are assumed to be acquired preferences that are adaptable, either at will or through changed circumstances, rather than being fixed personality characteristics. The Honey & Mumford Learning Styles Questionnaire (LSQ) is a self-development tool and differs from Kolb's Learning Style inventory by inviting managers to complete a checklist of work-related behaviours without directly asking managers how they learn. Having completed the self-assessment, managers are encouraged to focus on strengthening underutilised styles in order to become better equipped to learn from a wide range of everyday experiences.
A MORI survey commissioned by The Campaign for Learning in 1999 found the Honey & Mumford LSQ to be the most widely used system for assessing preferred learning styles in the local government sector in the UK.
Anthony Gregorc's model
Dennis W. Mills discusses the work of Anthony F. Gregorc and Kathleen A. Butler in his article entitled "Applying What We Know: Student Learning Styles". Gregorc and Butler worked to organize a model describing how the mind works. This model is based on the existence of perceptions—our evaluation of the world by means of an approach that makes sense to us. These perceptions in turn are the foundation of our specific learning strengths, or learning styles.
In this model, there are two perceptual qualities 1) concrete and 2) abstract; and two ordering abilities 1) random and 2) sequential. Concrete perceptions involve registering information through the five senses, while abstract perceptions involve the understanding of ideas, qualities, and concepts which cannot be seen. In regard to the two ordering abilities, sequential involves the organization of information in a linear, logical way and random involves the organization of information in chunks and in no specific order. Both of the perceptual qualities and both of the ordering abilities are present in each individual, but some qualities and ordering abilities are more dominant within certain individuals.
There are four combinations of perceptual qualities and ordering abilities based on dominance: 1) Concrete Sequential; 2) Abstract Random; 3) Abstract Sequential; 4) Concrete Random. Individuals with different combinations learn in different ways—they have different strengths, different things make sense to them, different things are difficult for them, and they ask different questions throughout the learning process.
Sudbury model of democratic education
Some critics (Mazza) of today's schools, of the concept of learning disabilities, of special education, and of response to intervention, take the position that every child has a different learning style and pace and that each child is unique, not only capable of learning but also capable of succeeding.
Sudbury Model democratic schools in the United States assert that there are many ways to study and learn. They argue that learning is a process you do, not a process that is done to you, and that this is true of everyone; it's basic. The experience of Sudbury model democratic schools shows that there are many ways to learn without the intervention of teaching, that is to say, without the intervention of a teacher being imperative. In the case of reading for instance in the Sudbury model democratic schools, some children learn from being read to, memorizing the stories and then ultimately reading them. Others learn from cereal boxes, others from games instructions, others from street signs. Some teach themselves letter sounds, others syllables, others whole words. Sudbury model democratic schools assert that in their schools no one child has ever been forced, pushed, urged, cajoled, or bribed into learning how to read or write. None of their graduates are real or functional illiterates, and no one who meets their older students could ever guess the age at which they first learned to read or write. In a similar manner, students learn all the subjects, techniques, and skills in these schools.
Critics describe current instructional methods as homogenization and lockstep standardization, and propose alternative approaches such as the Sudbury model of democratic education schools, in which children, by enjoying personal freedom and thus being encouraged to exercise personal responsibility for their actions, learn at their own pace and style rather than following a compulsory and chronologically based curriculum. Proponents of unschooling have also claimed that children raised in this method learn at their own pace and style, and do not suffer from learning disabilities.
Gerald Coles asserts that there are partisan agendas behind the educational policy-makers and that the scientific research that they use to support their arguments regarding the teaching of literacy is flawed. These include the idea that there are neurological explanations for learning disabilities.
Neil Fleming's VAK/VARK model
One of the most common and widely used categorizations of the various types of learning styles is Fleming's VARK model (sometimes VAK), which expanded upon earlier neuro-linguistic programming (VAK) models:
- visual learners;
- auditory learners;
- reading-writing preference learners;
- kinesthetic learners or tactile learners.
Fleming claimed that visual learners have a preference for seeing (think in pictures; visual aids such as overhead slides, diagrams, handouts, etc.). Auditory learners best learn through listening (lectures, discussions, tapes, etc.). Tactile/kinesthetic learners prefer to learn via experience—moving, touching, and doing (active exploration of the world; science projects; experiments, etc.). Its use in pedagogy allows teachers to prepare classes that address each of these areas. Students can also use the model to identify their preferred learning style and maximize their educational experience by focusing on what benefits them the most.
Cognitive approach to learning styles
Anthony Grasha and Sheryl Reichmann, in 1974, formulated the Grasha-Reichmann Learning Style Scale. It was developed to analyze the attitudes of students and how they approach learning. The test was originally designed for college students. Grasha's background is in cognitive processes and coping techniques. The scale describes six learning styles: independent, dependent, competitive, collaborative, avoidant, and participant.
The conclusion of this model was to provide teachers with insight on how to approach instructional plans.
Aiming to explain why aptitude tests, school grades, and classroom performance often fail to identify real ability, Robert J. Sternberg listed various cognitive dimensions in his book Thinking Styles (1997). Several other models are also often used when researching learning styles. This includes the Myers Briggs Type Indicator (MBTI) and the DISC assessment.
A more recent evidence-based model of learning
Chris J Jackson's neuropsychological hybrid model of learning in personality argues that Sensation Seeking provides a core biological drive of curiosity, learning and exploration. A high drive to explore leads to dysfunctional learning consequences unless cognitions such as goal orientation, conscientiousness, deep learning and emotional intelligence re-express it in more complex ways to achieve functional outcomes such as high work performance. The model aims to explain many forms of functional behaviour (such as entrepreneurial activity, work performance and educational success) as well as dysfunctional behaviour (such as delinquency and anti-social behaviour). The wide applicability of the model and its strong grounding in the academic literature suggest that this evidence-based model of learning has much potential, and the evidence for it is considerable. Siadaty and Taghiyareh (2007) report that training based on Conscientious Achievement increases performance but that training based on Sensation Seeking does not. These results strongly support Jackson's model, since the model proposes that Conscientious Achievement will respond to intervention whereas Sensation Seeking (with its biological basis) will not.
NASSP Learning Style Model
Learning style is a gestalt that tells us how a student learns and prefers to learn. Keefe (1979) says that: “Learning styles are characteristic cognitive, affective, and physiological behaviors that serve as relatively stable indicators of how learners perceive, interact with, and respond to the learning environment."
There are three broad categories of learning style characteristics:
- Cognitive styles are preferred ways of perception, organization and retention.
- Affective styles represent the motivational dimensions of the learning personality; each learner has a personal motivational approach.
- Physiological styles are traits deriving from a person's gender, health and nutrition, and reaction to school physical surroundings, such as preferences for levels of light, sound, and temperature.
Styles are hypothetical constructs that help to explain the learning (and teaching) process. "Because learning is an internal process, we know that it has taken place only when we observe a relatively stable change in learner behavior resulting from what has been experienced" (Keefe, 1979). Similarly, learning style reflects underlying learning behavior. We can recognize the learning style of an individual student only by observing his or her behavior.
Learning Style Inventory
The Learning Style Inventory (LSI) is connected with Kolb's model and is used to determine a student's learning style. The LSI assesses an individual's preferences and needs regarding the learning process. It does the following:
1. Allows students to designate how they like to learn and indicates how consistent their responses are
2. Provides computerized results which show the student's preferred learning style
3. Provides a foundation upon which teachers can build in interacting with students
4. Provides possible strategies for accommodating learning styles
5. Provides for student involvement in the learning process
6. Provides a class summary so students with similar learning styles can be grouped together.
A completely different Learning Styles Inventory is associated with a binary division of learning styles, developed by Felder and Silverman. In this model, learning styles are a balance between four pairs of extremes: Active/Reflective, Sensing/Intuitive, Verbal/Visual and Sequential/Global. Students receive four scores describing these balances. Like the LSI mentioned above, this inventory provides overviews and synopses for teachers.
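As a rough illustration of how such balance scores might be computed (a minimal sketch only; the item counts, the 'a'/'b' answer encoding and the scoring rule below are assumptions for illustration, not the published Felder-Silverman instrument), each dimension can be scored by tallying answers toward either pole and reporting the signed difference:

```python
# Minimal sketch of balance scores for four style dimensions.
# The dimension names follow the Felder-Silverman model described above; the
# item counts and 'a'/'b' answer encoding are illustrative assumptions.

DIMENSIONS = ["active/reflective", "sensing/intuitive",
              "visual/verbal", "sequential/global"]

def balance_scores(answers):
    """answers maps each dimension to a list of 'a'/'b' choices, where 'a'
    leans toward the first pole and 'b' toward the second. Returns a dict of
    signed scores; positive values favor the first pole, negative the second."""
    scores = {}
    for dim, choices in answers.items():
        scores[dim] = choices.count("a") - choices.count("b")
    return scores

# Example: five hypothetical items per dimension.
example = {dim: ["a", "b", "a", "a", "b"] for dim in DIMENSIONS}
print(balance_scores(example))
# {'active/reflective': 1, 'sensing/intuitive': 1, 'visual/verbal': 1, 'sequential/global': 1}
```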
NASSP Learning Style Profile
The NASSP Learning Style Profile (LSP) is a second-generation instrument for the diagnosis of student cognitive styles, perceptual responses, and study and instructional preferences. The Profile was developed by the NASSP research department (Keefe and Monk, 1986) in conjunction with a national task force of learning style experts. The task force spent almost a year reviewing the available literature and instrumentation before deciding to develop a new instrument. The Profile was developed in four phases, with initial work undertaken at the University of Vermont (cognitive elements), Ohio State University (affective elements), and St. John's University (physiological/environmental elements). Rigorous validation and normative studies were conducted using factor analytic methods to ensure strong construct validity and subscale independence. The Learning Style Profile contains 24 scales representing four higher order factors: cognitive styles, perceptual responses, study preferences and instructional preferences (the affective and physiological elements). The LSP scales are as follows:
- Analytic Skill
- Spatial Skill
- Discrimination Skill
- Categorizing Skill
- Sequential Processing Skill
- Simultaneous Processing Skill
- Memory Skill
- Perceptual Response: Visual
- Perceptual Response: Auditory
- Perceptual Response: Emotive
- Persistence Orientation
- Verbal Risk Orientation
- Verbal-Spatial Preference
- Manipulative Preference
- Study Time Preference: Early Morning
- Study Time Preference: Late Morning
- Study Time Preference: Afternoon
- Study Time Preference: Evening
- Grouping Preference
- Posture Preference
- Mobility Preference
- Sound Preference
- Lighting Preference
- Temperature Preference
The LSP is a first-level diagnostic tool intended as the basis for comprehensive style assessment. Extensive readability checks, reliability and validity studies, and factor analyses of the instrument, combined with the supervisory efforts of the task force, ensure valid use of the instrument with students in the sixth to twelfth grades. Computer scoring is available.*
- Current versions of the LSP are available from GAINS Education Group, 1699 East Woodfield Road, Suite 007A, Schaumburg, IL 60173; Phone: 847-995-0403.
Other methods (usually questionnaires) used to identify learning styles include Fleming's VARK Learning Style Test, Jackson's Learning Styles Profiler (LSP), and the NLP meta programs based iWAM questionnaire. Many other tests have gathered popularity and various levels of credibility among students and teachers.
Ilene Thiel introduced LLL, or Lifelong Love of Learning, as a preferred approach to learning style.
Learning style theories have been criticized by many scholars and researchers. Some psychologists and neuroscientists have questioned the scientific basis for and the theories on which they are based. According to Susan Greenfield the practice is "nonsense" from a neuroscientific point of view: "Humans have evolved to build a picture of the world through our senses working in unison, exploiting the immense interconnectivity that exists in the brain."
Many educational psychologists believe that there is little evidence for the efficacy of most learning style models, and furthermore, that the models often rest on dubious theoretical grounds. According to Stahl, there has been an "utter failure to find that assessing children's learning styles and matching to instructional methods has any effect on their learning." Guy Claxton has questioned the extent that learning styles such as VARK are helpful, particularly as they can have a tendency to label children and therefore restrict learning.
Critique made by Coffield, et al.
A non-peer-reviewed literature review by authors from the University of Newcastle upon Tyne identified 71 different theories of learning style. This report, published in 2004, criticized most of the main instruments used to identify an individual's learning style. In conducting the review, Coffield and his colleagues selected 13 of the most influential models for closer study, including most of the models cited on this page. They examined the theoretical origins and terms of each model, and the instrument that purported to assess individuals against the learning styles defined by the model. They analyzed the claims made by the author(s), external studies of these claims, and independent empirical evidence of the relationship between the learning style identified by the instrument and students' actual learning. Coffield's team found that none of the most popular learning style theories had been adequately validated through independent research, leading to the conclusion that the idea of a learning cycle, the consistency of visual, auditory and kinesthetic preferences and the value of matching teaching and learning styles were all "highly questionable."
One of the most widely known theories assessed by Coffield's team was the learning styles model of Dunn and Dunn, a VAK model. This model is widely used in schools in the United States, and 177 articles have been published in peer-reviewed journals referring to this model. The conclusion of Coffield et al. was as follows:
Despite a large and evolving research programme, forceful claims made for impact are questionable because of limitations in many of the supporting studies and the lack of independent research on the model.
Coffield's team claimed that another model, Gregorc's Style Delineator (GSD), was "theoretically and psychometrically flawed" and "not suitable for the assessment of individuals."
The critique regarding Kolb's model
Mark K. Smith compiled and reviewed some critiques of Kolb's model in his article, "David A. Kolb on Experiential Learning". According to Smith's research, there are six key issues regarding the model. They are as follows: 1) the model doesn't adequately address the process of reflection; 2) the claims it makes about the four learning styles are extravagant; 3) it doesn't sufficiently address the fact of different cultural conditions and experiences; 4) the idea of stages/steps doesn't necessarily match reality; 5) it has only weak empirical evidence; 6) the relationship between learning processes and knowledge is more complex than Kolb draws it.
Coffield and his colleagues and Mark Smith are not alone in their judgements. Demos, a UK think tank, published a report on learning styles prepared by a group chaired by David Hargreaves that included Usha Goswami from Cambridge University and David Wood from the University of Nottingham. The Demos report said that the evidence for learning styles was "highly variable", and that practitioners were "not by any means frank about the evidence for their work."
Cautioning against interpreting neuropsychological research as supporting the applicability of learning style theory, John Geake, Professor of Education at the UK's Oxford Brookes University, and a research collaborator with Oxford University's Centre for Functional Magnetic Resonance Imaging of the Brain, commented that
We need to take extreme care when moving from the lab to the classroom. We do remember things visually and aurally, but information isn't defined by how it was received.
2009 APS critique
In late 2009, the journal Psychological Science in the Public Interest of the Association for Psychological Science (APS) published a report on the scientific validity of learning styles practices (Pashler et al., 2009). The panel was chaired by Hal Pashler (University of California, San Diego); the other members were Mark McDaniel (Washington University), Doug Rohrer (University of South Florida), and Robert Bjork (University of California, Los Angeles). The panel concluded that an adequate evaluation of the learning styles hypothesis—the idea that optimal learning demands that students receive instruction tailored to their learning styles—requires a particular kind of study. Specifically, students should be grouped into the learning style categories that are being evaluated (e.g., visual learners vs. verbal learners), and then students in each group must be randomly assigned to one of the learning methods (e.g., visual learning or verbal learning), so that some students will be "matched" and others will be "mismatched". At the end of the experiment, all students must sit for the same test. If the learning style hypothesis is correct, then, for example, visual learners should learn better with the visual method, whereas auditory learners should learn better with auditory method. Notably, other authors have reached the same conclusion (e.g., Massa & Mayer, 2006).
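A small worked example may help show what the panel's required design looks like in practice. The numbers below are invented purely for illustration (they are not data from any study named here); the point is that the meshing hypothesis predicts a crossover interaction, in which each style group scores higher under its matched method.

```python
# Hypothetical mean test scores for a 2x2 (style group x teaching method) study.
# All values are invented to illustrate how the predicted crossover is read.
means = {
    ("visual_learners", "visual_method"): 78,
    ("visual_learners", "verbal_method"): 70,
    ("verbal_learners", "visual_method"): 69,
    ("verbal_learners", "verbal_method"): 77,
}

# Benefit of being "matched" within each style group.
visual_gain = means[("visual_learners", "visual_method")] - means[("visual_learners", "verbal_method")]
verbal_gain = means[("verbal_learners", "verbal_method")] - means[("verbal_learners", "visual_method")]

# The meshing hypothesis predicts both gains are positive (a crossover interaction);
# the studies reviewed by the panel generally found one method best for everyone.
print(f"Gain from matching, visual group: {visual_gain}")
print(f"Gain from matching, verbal group: {verbal_gain}")
print("Crossover pattern present:", visual_gain > 0 and verbal_gain > 0)
```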
As disclosed in the report, the panel found that studies utilizing this essential research design were virtually absent from the learning styles literature. In fact, the panel was able to find only a few studies with this research design, and all but one of these studies were negative findings—that is, they found that the same learning method was superior for all kinds of students (e.g., Massa & Mayer, 2006).
Furthermore, the panel noted that, even if the requisite finding were obtained, the benefits would need to be large, and not just statistically significant, before learning style interventions could be recommended as cost-effective. That is, the cost of evaluating and classifying students by their learning style, and then providing customized instruction would need to be more beneficial than other interventions (e.g., one-on-one tutoring, after school remediation programs, etc.).
As a consequence, the panel concluded, "at present, there is no adequate evidence base to justify incorporating learning styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have strong evidence base, of which there are an increasing number."
The article incited critical comments from some defenders of learning styles. The Chronicle of Higher Education reported that Robert Sternberg from Tufts University spoke out against the paper: "Several of the most-cited researchers on learning styles, Mr. Sternberg points out, do not appear in the paper's bibliography." This charge was also discussed by Science, which reported that Pashler said, "Just so…most of [the evidence] is 'weak.'"
Learning styles in the classroom
Various researchers have attempted to hypothesize ways in which learning style theory can be used in the classroom. Two such scholars are Dr. Rita Dunn and Dr. Kenneth Dunn, who follow a VARK approach.
Although learning styles will inevitably differ among students in the classroom, Dunn and Dunn say that teachers should try to make changes in their classroom that will be beneficial to every learning style. Some of these changes include room redesign, the development of small-group techniques, and the development of Contract Activity Packages. Redesigning the classroom involves locating dividers that can be used to arrange the room creatively (such as having different learning stations and instructional areas), clearing the floor area, and incorporating student thoughts and ideas into the design of the classroom.
Their so-called "Contract Activity Packages" are educational plans that use: 1) a clear statement of the learning need; 2) multisensory resources (auditory, visual, tactile, kinesthetic); 3) activities through which the newly mastered information can be used creatively; 4) the sharing of creative projects within small groups; 5) at least three small-group techniques; 6) a pre-test, a self-test, and a post-test.
Another scholar who believes that learning styles should have an effect on the classroom is Marilee Sprenger in Differentiation through Learning Styles and Memory. Sprenger bases her work on three premises: 1) Teachers can be learners, and learners teachers. We are all both. 2) Everyone can learn under the right circumstances. 3) Learning is fun! Make it appealing. She details various ways of teaching, visual, auditory, or tactile/kinesthetic. Methods for visual learners include ensuring that students can see words written, using pictures, and drawing time lines for events. Methods for auditory learners include repeating words aloud, small-group discussion, debates, listening to books on tape, oral reports, and oral interpretation. Methods for tactile/kinesthetic learners include hands-on activities (experiments, etc.), projects, frequent breaks to allow movement, visual aids, role play, and field trips. By using a variety of teaching methods from each of these categories, teachers cater to different learning styles at once, and improve learning by challenging students to learn in different ways.
James W. Keefe and John M. Jenkins (2000; 2008) have incorporated learning style assessment as a basic component in their "Personalized Instruction" model of schooling. Six basic elements constitute the culture and context of personalized instruction. The cultural components (teacher role, student learning characteristics, and collegial relationships) establish the foundation of personalization and ensure that the school prizes a caring and collaborative environment. The contextual factors (interactivity, flexible scheduling, and authentic assessment) establish the structure of personalization. These six elements constitute the state of the art in personalized instruction.
Cognitive and learning style analysis have a special role in the process of personalizing instruction. Style elements are relatively persistent qualities in the behavior of individual learners. They reflect genetic coding, personality, development, motivation, and environmental adaptation. Second only to the more flexible teacher role, the assessment of student learning style, more than any other element, establishes the foundation for a personalized approach to schooling: for student advisement and placement, for appropriate retraining of student cognitive skills, for adaptive instructional strategy, and for the authentic evaluation of learning. Some learners respond best in instructional environments based on an analysis of their perceptual and environmental style preferences. Most individualized and personalized teaching methods reflect this point of view. Other learners, however, need help to function successfully in any learning environment. If a youngster cannot cope under conventional instruction, enhancing his cognitive skills may make successful achievement possible.
Many of the student learning problems that learning style diagnosis attempts to solve relate directly to elements of the human information processing system. Processes such as attention, perception and memory, and operations such as integration and retrieval of information are internal to the system. Any hope for improving student learning necessarily involves an understanding and application of information processing theory. Learning style assessment is an important window to understanding and managing this process.
Some research evaluating teaching styles and learning styles, however, has found that congruent groups have no significant differences in achievement from incongruent groups (Spoon & Schell, 1998). Furthermore, learning style in this study varied by demography, specifically by age, suggesting a change in learning style as one gets older and acquires more experience. While significant age differences did occur, as well as no experimental manipulation of classroom assignment, the findings do call into question the aim of congruent teaching-learning styles in the classroom.
- Theory of multiple intelligences
- Big Five personality traits
- Cognitive styles
- Constructivism (learning theory)
- Forer effect
- Montessori method
- Reading comprehension
- James, W.; Gardner, D. (1995). "Learning styles: Implications for distance learning". New Directions for Adult and Continuing Education 67.
- Pashler, H.; McDaniel, M.; Rohrer, D.; Bjork, R. (2008). "Learning styles: Concepts and evidence". Psychological Science in the Public Interest 9: 105–119. doi:10.1111/j.1539-6053.2009.01038.x.
- Klein, P. (2003). "Rethinking the multiplicity of cognitive resources and curricular representations:Alternative to learning styles and multiple intelligences.". Journal of Curriculum Studies 35 (1).
- Kolb, David (1984). Experiential learning: Experience as the source of learning and development. Englewood Cliffs, NJ: Prentice-Hall. ISBN 0-13-295261-0.
- http://www2.le.ac.uk/departments/gradschool/training/resources/teaching/theories/kolb Retrieved October 28, 2012.
- Kolb,D. (1985). Learning Style Inventory: Self Scoring Inventory and Interpretation Booklet. Boston, MA: McBer & Company.
- Manolis, C.; Burns, D.; Assudani, R.; Chinta, R. (2012). "Assessing experiential learning styles: A methodological reconstruction and validation of the Kolb Learning Style Inventory". Learning and Individual Differences. doi:10.1016/j.lindif.2012.10.009.
- Honey, P & Mumford, A (2006). The Learning Styles Questionnaire, 80-item version. Maidenhead, UK, Peter Honey Publications
- Mills, D. W. (2002). Applying what we know: Student learning styles. Retrieved October 17, 2008, from: http://www.csrnet.org/csrnet/articles/student-learning-styles.html
- Greenberg, D. (1987) The Sudbury Valley School Experience Back to Basics.
- Greenberg, D. (1987) Free at Last, The Sudbury Valley School, Chapter 5, The Other 'R's.
- Greenberg, D. (1992), Education in America, A View from Sudbury Valley, "Special Education" -- A noble Cause Sacrificed to Standardization.
- Greenberg, D. (1992), Education in America, A View from Sudbury Valley, "Special Education" -- A Noble Cause Run Amok.
- Greenberg, D. (1987), Free at Last, The Sudbury Valley School, Chapter 1, And 'Rithmetic.
- Greenberg, D. (1987), Free at Last, The Sudbury Valley School, Chapter 19, Learning.
- Gerald Coles (1987). The Learning Mystique: A Critical Look at "Learning Disabilities". Accessed November 7, 2008.
- Leite, Walter L.; Svinicki, Marilla; and Shi, Yuying: Attempted Validation of the Scores of the VARK: Learning Styles Inventory With Multitrait–Multimethod Confirmatory Factor Analysis Models, pg. 2. SAGE Publications, 2009.
- Thomas F. Hawk, Amit J. Shah (2007) "Using Learning Style Instruments to Enhance Student Learning" Decision Sciences Journal of Innovative Education doi:10.1111/j.1540-4609.2007.00125.x
- LdPride. (n.d.). What are learning styles? Retrieved October 17, 2008
- Grasha, Anthony (1996). Teaching with Style. Pittsburg, PA: Alliance Publishers.
- Jackson, C. J. (2009). Using the hybrid model of learning in personality to predict performance in the workplace. 8th IOP Conference, Conference Proceedings, Manly, Sydney, Australia, 25–28 June 2009 pp 75-79.
- Jackson, C. J. (2005). An applied neuropsychological model of functional and dysfunctional learning: Applications for business, education, training and clinical psychology. Cymeon: Australia
- Jackson, C. J. (2008). Measurement issues concerning a personality model spanning temperament, character and experience. In Boyle, G., Matthews, G. & Saklofske, D. Handbook of Personality and Testing. Sage Publishers. (pp. 73–93)
- Jackson, C. J., Hobman, E., Jimmieson, N., and Martin. R. (2008). Comparing Different Approach and Avoidance Models of Learning and Personality in the Prediction of Work, University and Leadership Outcomes. British Journal of Psychology, 1-30. Preprint. doi:10.1348/000712608X322900
- O'Connor, P. C. & Jackson, C. J. (2008). Learning to be Saints or Sinners: The Indirect Pathway from Sensation Seeking to Behavior through Mastery Orientation. Journal of Personality, 76, 1–20
- Jackson, C. J., Baguma, P., & Furnham, A. (In press). Predicting Grade Point Average from the hybrid model of learning in personality: Consistent findings from Ugandan and Australian Students. Educational Psychology
- Jackson, C. J. How Sensation Seeking provides a common basis for functional and dysfunctional outcomes. Journal of Research in Personality (2010), doi:10.1016/j.jrp.2010.11.005
- Siadaty, M. & Taghiyareh, F. (2007). PALS2: Pedagogically Adaptive Learning System based on Learning Styles. Seventh IEEE International Conference on Advanced Learning Technologies (ICALT 2007)
- Dunn, R, & Dunn, K (1978). Teaching students through their individual learning styles: A practical approach. Reston, VA: Reston Publishing Company.
- Felder, Richard. "Learning styles". North Carolina State University. Retrieved 1 November 2012.
- Soloman, Barbara A.; Felder, Richard M. "Index of learning styles questionnaire". North Carolina State University. Retrieved 1 November 2012.
- Henry, Julie (29 July 2007). "Professor pans 'learning style' teaching method". The Telegraph. Retrieved 29 August 2010.
- Curry, L. (1990). "One critique of the research on learning styles". Educational Leadership 48: 50–56.
- Stahl, S. A. (2002). Different strokes for different folks? In L. Abbeduto (Ed.), Taking sides: Clashing on controversial issues in educational psychology (pp. 98-107). Guilford, CT, USA: McGraw-Hill.
- "Guy Claxton speaking on What's The Point of School?". dystalk.com. Retrieved 2009-04-23.
- Coffield, F., Moseley, D., Hall, E., Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre.
- Dunn, R., Dunn, K., & Price, G. E. (1984). Learning style inventory. Lawrence, KS, USA: Price Systems.
- Smith, M. K. (2001). David A. Kolb on experiential learning. Retrieved October 17, 2008, from: http://www.infed.org/biblio/b-explrn.htm
- Hargreaves, D., et al. (2005). About learning: Report of the Learning Working Group. Demos.
- Revell, P. (2005). Each to their own. The Guardian.
- Massa, L. J.; Mayer, R. E. (2006). "Testing the ATI hypothesis: Should multimedia instruction accommodate verbalizer-visualizer cognitive style?". Learning and Individual Differences 16: 321–336. doi:10.1016/j.lindif.2006.10.001.
- Glenn, David. Matching Teaching Style to Learning Style May Not Help Students. Retrieved on February 24, 2010, from http://chronicle.com/article/Matching-Teaching-Style-to/49497/
- Holden, Constance. Learning with Style. Retrieved on February 24, 2010, from http://www.sciencemag.org/content/vol327/issue5962/r-samples.dtl
- Sprenger, M. (2003). Differentiation through learning styles and memory. Thousand Oaks, CA: Corwin Press
- "TEACHING STRATEGIES/METHODOLOGIES: Advantages, Disadvantages/Cautions, Keys to Success". Retrieved 14 December 2012.
- Khademi, M., Motallebzadeh, K., & Ashraf, H. (2013). "The relationship between Iranian EFL instructors’ understanding of learning styles and their students’ success in reading comprehension". English Language Teaching, 6(4). doi:10.5539/elt.v6n4p
- Spoon, J. C., & Schell, J. W. (1998). Aligning student learning styles with instructor teaching styles. Journal of Industrial Teacher Education, 35, 41-56.
- Keefe, J. W. (1979). Learning style: An overview. In Student learning styles — Diagnosing and prescribing programs. Reston, VA: National Association of Secondary School Principals.
- Keefe, J. W., & Jenkins, J. M. (1997). Instruction and the learning environment. Larchmont, NY: Eye on Education.
- Keefe, J. W., & Jenkins, J. M. (2000). Personalized instruction: Changing classroom practice. Larchmont, NY: Eye on Education.
- Keefe, J. W., & Jenkins, J. M. (2008). Personalized instruction: The key to student achievement. 2nd edition. Lanham, MD: Rowman & Littlefield Education.
- See also: North American regional phonology
American English (AmE, AE, AmEng, USEng, en-US), also known as United States English or U.S. English, is a set of dialects of the English language used mostly in the United States. Approximately two thirds of native speakers of English live in the United States.
The use of English in the United States was inherited from British colonization. The first wave of English-speaking settlers arrived in North America in the 17th century. During that time, there were also speakers in North America of Dutch, French, German, Norwegian, Spanish, Swedish, Scots, Welsh, Irish, Scottish Gaelic, Finnish, as well as numerous Native American languages.
In many ways, compared to English English, North American English is conservative in its phonology. Some distinctive accents can be found on the East Coast (for example, in Eastern New England and New York City), partly because these areas were in contact with England, and imitated prestigious varieties of English English at a time when those varieties were undergoing changes. In addition, many speech communities on the East Coast have existed in their present locations longer than others. The interior of the United States, however, was settled by people from all regions of the existing U.S. and, as such, developed a far more generic linguistic pattern.
Most North American speech is rhotic, as English was in most places in the 17th century. Rhoticity was further supported by Hiberno-English, Scottish English, and West Country English. In most varieties of North American English, the sound corresponding to the letter r is a retroflex [ɻ] or alveolar approximant [ɹ] rather than a trill or a tap. The loss of syllable-final r in North America is confined mostly to the accents of eastern New England, New York City and surrounding areas, South Philadelphia, and the coastal portions of the South. In rural tidewater Virginia and eastern New England, 'r' is non-rhotic in accented (such as "bird", "work", "first", "birthday") as well as unaccented syllables, although this is declining among the younger generation of speakers. Dropping of syllable-final r sometimes happens in natively rhotic dialects if r is located in unaccented syllables or words and the next syllable or word begins in a consonant. In England, the lost r was often changed into [ə] (schwa), giving rise to a new class of falling diphthongs. Furthermore, the er sound of fur or butter is realized in AmE as a monophthongal r-colored vowel (stressed [ɝ] or unstressed [ɚ] as represented in the IPA). This does not happen in the non-rhotic varieties of North American speech.
Some other British English changes in which most North American dialects do not participate:
- The shift of /æ/ to /ɑ/ (the so-called "broad A") before /f/, /s/, /θ/, /ð/, /z/, /v/ alone or preceded by a homorganic nasal. This is the difference between the British Received Pronunciation and American pronunciation of bath and dance. In the United States, only eastern New England speakers took up this modification, although even there it is becoming increasingly rare.
- The realization of intervocalic /t/ as a glottal stop [ʔ] (as in [bɒʔəl] for bottle). This change is not universal for British English and is not considered a feature of Received Pronunciation. This is not a property of most North American dialects. Newfoundland English is a notable exception.
On the other hand, North American English has undergone some sound changes not found in Britain, especially not in its standard varieties. Many of these are instances of phonemic differentiation and include:
- The merger of /ɑ/ and /ɒ/, making father and bother rhyme. This change is nearly universal in North American English, occurring almost everywhere except for parts of eastern New England, hence the Boston accent.
- The merger of /ɒ/ and /ɔ/. This is the so-called cot-caught merger, where cot and caught are homophones. This change has occurred in eastern New England, in Pittsburgh and surrounding areas, and from the Great Plains westward.
- For speakers who do not merge caught and cot: The replacement of the cot vowel with the caught vowel before voiceless fricatives (as in cloth, off [which is found in some old-fashioned varieties of RP]), as well as before /ŋ/ (as in strong, long), usually in gone, often in on, and irregularly before /g/ (log, hog, dog, fog [which is not found in British English at all]).
- The replacement of the lot vowel with the strut vowel in most utterances of the words was, of, from, what and in many utterances of the words everybody, nobody, somebody, anybody; the word because has either /ʌ/ or /ɔ/; want has normally /ɔ/ or /ɑ/, sometimes /ʌ/.
- Vowel merger before intervocalic /ɹ/. Which vowels are affected varies between dialects. One such change is the laxing of /e/, /i/ and /u/ to /ɛ/, /ɪ/ and /ʊ/ before /ɹ/, causing pronunciations like [pɛɹ], [pɪɹ] and [pjʊɹ] for pair, peer and pure. The resulting sound [ʊɹ] is often further reduced to [ɝ], especially after palatals, so that cure, pure, mature and sure rhyme with fir.
- Dropping of /j/ after alveolar consonants so that new, duke, Tuesday, suit, resume, lute are pronounced /nu/, /duk/, /tuzdeɪ/, /sut/, /ɹɪzum/, /lut/.
- æ-tensing in environments that vary widely from accent to accent; for example, for many speakers, /æ/ is approximately realized as [eə] before nasal consonants. In some accents, particularly those from Philadelphia to New York City, [æ] and [eə] can even contrast sometimes, as in Yes, I can [kæn] vs. tin can [keən].
- The flapping of intervocalic /t/ and /d/ to alveolar tap [ɾ] before unstressed vowels (as in butter, party) and syllabic /l/ (bottle), as well as at the end of a word or morpheme before any vowel (what else, whatever). Thus, for most speakers, pairs such as ladder/latter, metal/medal, and coating/coding are pronounced the same. For many speakers, this merger is incomplete and does not occur after /aɪ/; these speakers tend to pronounce writer with [əɪ] and rider with [aɪ]. This is a form of Canadian raising but, unlike more extreme forms of that process, does not affect /aʊ/.
- Both intervocalic /nt/ and /n/ may be realized as [n] or [ɾ̃], making winter and winner homophones. This does not occur when the second syllable is stressed, as in entail.
- The pin-pen merger, by which [ɛ] is raised to [ɪ] before nasal consonants, making pairs like pen/pin homophonous. This merger originated in Southern American English but is now found in parts of the Midwest and West as well.
Some mergers found in most varieties of both American and British English include:
- The merger of the vowels /ɔ/ and /o/ before 'r', making pairs like horse/hoarse, corps/core, for/four, morning/mourning, etc. homophones.
- The wine-whine merger making pairs like wine/whine, wet/whet, Wales/whales, wear/where, etc. homophones, in most cases eliminating /ʍ/, the voiceless labiovelar fricative. Many older varieties of southern and western AmE still keep these distinct, but the merger appears to be spreading.
North America has given the English lexicon many thousands of words, meanings, and phrases. Several thousand are now used in English as spoken internationally; others, however, died within a few years of their creation.
Creation of an American lexicon
The process of coining new lexical items started as soon as the colonists began borrowing names for unfamiliar flora, fauna, and topography from the Native American languages. Examples of such names are opossum, raccoon, squash and moose (from Algonquian). Other Native American loanwords, such as wigwam or moccasin, describe artificial objects in common use among Native Americans. The languages of the other colonizing nations also added to the American vocabulary; for instance, cookie, cruller, stoop, and pit (of a fruit) from Dutch; levee, portage ("carrying of boats or goods") and (probably) gopher from French; barbecue, stevedore, and rodeo from Spanish.
Among the earliest and most notable regular "English" additions to the American vocabulary, dating from the early days of colonization through the early 19th century, are terms describing the features of the North American landscape; for instance, run, branch, fork, snag, bluff, gulch, neck (of the woods), barrens, bottomland, notch, knob, riffle, rapids, watergap, cutoff, trail, timberline and divide. Already existing words such as creek, slough, sleet and (in later use) watershed received new meanings that were unknown in England.
Other noteworthy American toponyms are found among loanwords; for example, prairie, butte (French); bayou (Louisiana French); coulee (Canadian French, but used also in Louisiana with a different meaning); canyon, mesa, arroyo (Spanish); vlei, kill (Dutch, Hudson Valley).
The word corn, used in England to refer to wheat (or any cereal), came to denote the plant Zea mays, the most important crop in the U.S., originally named Indian corn by the earliest settlers; wheat, rye, barley, oats, etc. came to be collectively referred to as grain (or breadstuffs). Other notable farm related vocabulary additions were the new meanings assumed by barn (not only a building for hay and grain storage, but also for housing livestock) and team (not just the horses, but also the vehicle along with them), as well as, in various periods, the terms range, (corn) crib, truck, elevator, sharecropping and feedlot.
Ranch, later applied to a house style, derives from Mexican Spanish; most Spanish contributions came indeed after the War of 1812, with the opening of the West. Among these are, other than toponyms, chaps (from chaparreras), plaza, lasso, bronco, buckaroo, rodeo; examples of "English" additions from the cowboy era are bad man, maverick, chuck ("food") and Boot Hill; from the California Gold Rush came such idioms as hit pay dirt or strike it rich. The word blizzard probably originated in the West. A couple of notable late 18th century additions are the verb belittle and the noun bid, both first used in writing by Thomas Jefferson.
With the new continent developed new forms of dwelling, and hence a large inventory of words designating real estate concepts (land office, lot, outlands, waterfront, the verbs locate and relocate, betterment, addition, subdivision), types of property (log cabin, adobe in the 18th century; frame house, apartment, tenement house, shack, shanty in the 19th century; project, condominium, townhouse, split-level, mobile home, multi-family in the 20th century), and parts thereof (driveway, breezeway, backyard, dooryard; clapboard, siding, trim, baseboard; stoop (from Dutch), family room, den; and, in recent years, HVAC, central air, walkout basement).
Ever since the American Revolution, a great number of terms connected with the U.S. political institutions have entered the language; examples are run, gubernatorial, primary election, carpetbagger (after the Civil War), repeater, lame duck and pork barrel. Some of these are internationally used (e.g. caucus, gerrymander, filibuster, exit poll).
The rise of capitalism, the development of industry and material innovations throughout the 19th and 20th centuries were the source of a massive stock of distinctive new words, phrases and idioms. Typical examples are the vocabulary of railroading (see further at rail terminology) and transportation terminology, ranging from names of roads (from dirt roads and back roads to freeways and parkways) to road infrastructure (parking lot, overpass, rest area), and from automotive terminology to public transit (e.g. in the sentence "riding the subway downtown"); such American introductions as commuter (from commutation ticket), concourse, to board (a vehicle), to park, double-park and parallel park (a car), double decker or the noun terminal have long been used in all dialects of English. Trades of various kinds have endowed (American) English with household words describing jobs and occupations (bartender, longshoreman, patrolman, hobo, bouncer, bellhop, roustabout, white collar, blue collar, employee, boss [from Dutch], intern, busboy, mortician, senior citizen), businesses and workplaces (department store, supermarket, thrift store, gift shop, drugstore, motel, main street, gas station, hardware store, savings and loan, hock [also from Dutch]), as well as general concepts and innovations (automated teller machine, smart card, cash register, dishwasher, reservation [as at hotels], pay envelope, movie, mileage, shortage, outage, blood bank).
Already existing English words —such as store, shop, dry goods, haberdashery, lumber— underwent shifts in meaning; some —such as mason, student, clerk, the verbs can (as in "canned goods"), ship, fix, carry, enroll (as in school), run (as in "run a business"), release and haul— were given new significations, while others (such as tradesman) have retained meanings that disappeared in England. From the world of business and finance came breakeven, merger, delisting, downsize, disintermediation, bottom line; from sports terminology came, jargon aside, Monday-morning quarterback, cheap shot, game plan (football); in the ballpark, out of left field, off base, hit and run, and many other idioms from baseball; gamblers coined bluff, blue chip, ante, bottom dollar, raw deal, pass the buck, ace in the hole, freeze-out, showdown; miners coined bedrock, bonanza, peter out, pan out and the verb prospect from the noun; and railroadmen are to be credited with make the grade, sidetrack, head-on, and the verb railroad. A number of Americanisms describing material innovations remained largely confined to North America: elevator, ground, gasoline; many automotive terms fall in this category, although many do not (hatchback, SUV, station wagon, tailgate, motorhome, truck, pickup truck, to exhaust).
In addition to the above-mentioned loans from French, Spanish, Mexican Spanish, Dutch, and Native American languages, other accretions from foreign languages came with 19th and early 20th century immigration; notably, from Yiddish (chutzpah, schmooze and such idioms as need something like a hole in the head) and German —hamburger and culinary terms like frankfurter/franks, liverwurst, sauerkraut, wiener, deli(catessen); scram, kindergarten, gesundheit; musical terminology (whole note, half note, etc.); and apparently cookbook, fresh ("impudent") and what gives? Such constructions as Are you coming with? and I like to dance (for "I like dancing") may also be the result of German or Yiddish influence.
Finally, a large number of English colloquialisms from various periods are American in origin; some have lost their American flavor (from OK and cool to nerd and 24/7), while others have not (have a nice day, sure); many are now distinctly old-fashioned (swell, groovy). Some English words now in general use, such as hijacking, disc jockey, boost, bulldoze and jazz, originated as American slang. Among the many English idioms of U.S. origin are get the hang of, take for a ride, bark up the wrong tree, keep tabs, run scared, take a backseat, have an edge over, stake a claim, take a shine to, in on the ground floor, bite off more than one can chew, off/on the wagon, stay put, inside track, stiff upper lip, bad hair day, throw a monkey wrench, under the weather, jump bail, come clean, come again? and will the real x please stand up?
American English has always shown a marked tendency to use substantives as verbs. Examples of verbed nouns are interview, advocate, vacuum, lobby, expense, room, pressure, rear-end, transition, feature, profile, buffalo, weasel, express (mail), belly-ache, spearhead, skyrocket, showcase, merchandise, service (as a car), corner, torch, exit (as in "exit the lobby"), factor (in mathematics), gun ("shoot"), author (which disappeared in English around 1630 and was revived in the U.S. three centuries later) and, out of American material, proposition, graft (bribery), bad-mouth, vacation, major, backpack, backtrack, intern, ticket (traffic violations), hassle, blacktop, peer-review, dope and OD.
Compounds coined in the U.S. are for instance foothill, flatlands, badlands, landslide (in all senses), overview (the noun), backdrop, teenager, brainstorm, bandwagon, hitchhike, smalltime, deadbeat, frontman, lowbrow and highbrow, hell-bent, foolproof, nitpick, about-face (later verbed), upfront (in all senses), fixer-upper, no-show; many of these are phrases used as adverbs or (often) hyphenated attributive adjectives: non-profit, for-profit, free-for-all, ready-to-wear, catchall, low-down, down-and-out, down and dirty, in-your-face, nip and tuck; many compound nouns and adjectives are open: happy hour, fall guy, capital gain, road trip, wheat pit, head start, plea bargain; some of these are colorful (empty nester, loan shark, ambulance chaser, buzz saw, ghetto blaster, dust bunny), others are euphemistic (differently abled, human resources, physically challenged, affirmative action, correctional facility).
Many compound nouns have the form verb plus preposition: add-on, stopover, lineup, shakedown, tryout, spinoff, rundown ("summary"), shootout, holdup, hideout, comeback, cookout, kickback, makeover, takeover, rollback ("decrease"), rip-off, come-on, shoo-in, fix-up, tie-in, tie-up ("stoppage"), stand-in. These essentially are nouned phrasal verbs; some prepositional and phrasal verbs are in fact of American origin (spell out, figure out, hold up, brace up, size up, rope in, back up/off/down/out, step down, miss out on, kick around, cash in, rain out, check in and check out (in all senses), fill in ("inform"), kick in ("contribute"), square off, sock in, sock away, factor in/out, come down with, give up on, lay off (from employment), run into and across ("meet"), stop by, pass up, put up (money), set up ("frame"), trade in, pick up on, pick up after, lose out).
Noun endings such as -ee (retiree), -ery (bakery), -ster (gangster) and -cian (beautician) are also particularly productive. Some verbs ending in -ize are of U.S. origin; for example, fetishize, prioritize, burglarize, accessorize, itemize, editorialize, customize, notarize, weatherize, winterize, Mirandize; and so are some back-formations (locate, fine-tune, evolute, curate, donate, emote, upholster, peeve and enthuse). Among syntactical constructions that arose in the U.S. are as of (with dates and times), outside of, headed for, meet up with, back of, convince someone to…, not to be about to and lack for.
Americanisms formed by alteration of existing words include notably pesky, phony, rambunctious, pry (as in "pry open," from prize), putter (verb), buddy, sundae, skeeter, sashay and kitty-corner. Adjectives that arose in the U.S. are for example, lengthy, bossy, cute and cutesy, grounded (of a child), punk (in all senses), sticky (of the weather), through (as in "through train," or meaning "finished"), and many colloquial forms such as peppy or wacky. American blends include motel, guesstimate, infomercial and televangelist.
English words that survived in the United States
A number of words and meanings that originated in Middle English or Early Modern English and that always have been in everyday use in the United States dropped out in most varieties of British English; some of these have cognates in Lowland Scots. Terms such as fall ("autumn"), pavement (to mean "road surface", where in Britain, as in Philadelphia, it is the equivalent of "sidewalk"), faucet, diaper, candy, skillet, eyeglasses, crib (for a baby), obligate, and raise a child are often regarded as Americanisms. Gotten (past participle of get) is often considered to be an Americanism, although there are some areas of Britain, such as Lancashire and Yorkshire, that still continue to use it and sometimes also use putten as the past participle for put.
Other words and meanings, to various extents, were brought back to Britain, especially in the second half of the 20th century; these include hire ("to employ"), quit ("to stop," which spawned quitter in the U.S.), I guess (famously criticized by H. W. Fowler), baggage, hit (a place), and the adverbs overly and presently ("currently"). Some of these, for example monkey wrench and wastebasket, originated in 19th-century Britain.
The mandative subjunctive (as in "the City Attorney suggested that the case not be closed") is livelier in AmE than it is in British English; it appears in some areas as a spoken usage, and is considered obligatory in contexts that are more formal. The adjectives mad meaning "angry", smart meaning "intelligent", and sick meaning "ill" are also more frequent in American than British English.
While written AmE is standardized across the country, there are several recognizable variations in the spoken language, both in pronunciation and in vernacular vocabulary. General American is the name given to any American accent that is relatively free of noticeable regional influences. It is not a standard accent in the way that Received Pronunciation is in England.
After the Civil War, the settlement of the western territories by migrants from the Eastern U.S. led to dialect mixing and leveling, so that regional dialects are most strongly differentiated along the Eastern seaboard. The Connecticut River and Long Island Sound is usually regarded as the southern/western extent of New England speech, which has its roots in the speech of the Puritans from East Anglia who settled in the Massachusetts Bay Colony. The Potomac River generally divides a group of Northern coastal dialects from the beginning of the Coastal Southern dialect area; in between these two rivers several local variations exist, chief among them the one that prevails in and around New York City and northern New Jersey, which developed on a Dutch substratum after the British conquered New Amsterdam. The main features of Coastal Southern speech can be traced to the speech of the English from the West Country who settled in Virginia after leaving England at the time of the English Civil War, and to the African influences from the African Americans who were enslaved in the South.
Although no longer region-specific, African American Vernacular English, which remains prevalent among African Americans, has a close relationship to Southern varieties of AmE and has greatly influenced everyday speech of many Americans.
A distinctive speech pattern was also generated by the separation of Canada from the United States, centered on the Great Lakes region. This is the Inland North Dialect—the "standard Midwestern" speech that was the basis for General American in the mid-20th Century (although it has been recently modified by the northern cities vowel shift). Those not from this area frequently confuse it with the North Midland dialect treated below, referring to both collectively as "Midwestern."
In the interior, the situation is very different. West of the Appalachian Mountains begins the broad zone of what is generally called "Midland" speech. This is divided into two discrete subdivisions, the North Midland that begins north of the Ohio River valley area, and the South Midland speech; sometimes the former is designated simply "Midland" and the latter is reckoned as "Highland Southern." The North Midland speech continues to expand westward until it becomes the closely related Western dialect which contains Pacific Northwest English as well as the well-known California English, although in the immediate San Francisco area some older speakers do not possess the cot-caught merger and thus retain the distinction between words such as cot and caught which reflects a historical Mid-Atlantic heritage. Mormon and Mexican settlers in the West influenced the development of Utah English.
The South Midland or Highland Southern dialect follows the Ohio River in a generally southwesterly direction, moves across Arkansas and Oklahoma west of the Mississippi, and peters out in West Texas. It is a version of the Midland speech that has assimilated some coastal Southern forms (outsiders often mistakenly believe South Midland speech and coastal South speech to be the same). The island state of Hawaii has a distinctive Hawaiian Pidgin.
Finally, dialect development in the United States has been notably influenced by the distinctive speech of such important cultural centers as Boston, Chicago, Philadelphia, Charleston, New Orleans, and Detroit, which imposed their marks on the surrounding areas.
Differences between British English and American English
American English and British English (BrE) differ at the levels of phonology, phonetics, vocabulary, and, to a lesser extent, grammar and orthography. The first large American dictionary, An American Dictionary of the English Language, was written by Noah Webster in 1828; Webster intended to show that the United States, which was a relatively new country at the time, spoke a different dialect from that of Britain.
Differences in grammar are relatively minor, and normally do not affect mutual intelligibility; these include, but are not limited to: different use of some verbal auxiliaries; formal (rather than notional) agreement with collective nouns; different preferences for the past forms of a few verbs (e.g. learn, burn, sneak, dive, get); different prepositions and adverbs in certain contexts (e.g. AmE in school, BrE at school); and whether or not a definite article is used in a few cases (AmE to the hospital, BrE to hospital). Often, these differences are a matter of relative preferences rather than absolute rules; and most are not stable, since the two varieties are constantly influencing each other.
Differences in orthography are also trivial. Some of the forms that now serve to distinguish American from British spelling (color for colour, center for centre, traveler for traveller, etc.) were introduced by Noah Webster himself; others are due to spelling tendencies in Britain from the 17th century until the present day (e.g. -ise for -ize, programme for program, skilful for skillful, chequered for checkered, etc.), in some cases favored by the francophile tastes of 19th century Victorian England, which had little effect on AmE.
The most noticeable differences between AmE and BrE are at the levels of pronunciation and vocabulary.
- Dictionary of American Regional English
- IPA chart for English
- Regional accents of English speakers
- Bartlett, John R. (1848). Dictionary of Americanisms: A Glossary of Words and Phrases Usually Regarded As Peculiar to the United States. New York: Bartlett and Welford.
- Ferguson, Charles A.; & Heath, Shirley Brice (Eds.). (1981). Language in the USA. Cambridge: Cambridge University Press.
- Finegan, Edward. (2004). American English and its distinctiveness. In E. Finegan & J. R. Rickford (Eds.), Language in the USA: Themes for the twenty-first century (pp. 18-38). Cambridge: Cambridge University Press.
- Finegan, Edward; & Rickford, John R. (Eds.). (2004). Language in the USA: Themes for the twenty-first century. Cambridge: Cambridge University Press.
- Frazer, Timothy (Ed.). (1993). Heartland English. Tuscaloosa: University of Alabama Press.
- Glowka, Wayne; & Lance, Donald (Eds.). (1993). Language variation in North American English. New York: Modern Language Association.
- Garner, Bryan A. (2003). Garner's Modern American Usage. New York: Oxford University Press.
- Kenyon, John S. (1950). American pronunciation (10th ed.). Ann Arbor: George Wahr.
- Kortmann, Bernd; Schneider, Edgar W.; Burridge, Kate; Mesthrie, Rajend; & Upton, Clive (Eds.). (2004). A handbook of varieties of English: Morphology and syntax (Vol. 2). Berlin: Mouton de Gruyter.
- Labov, William; Sharon Ash; Charles Boberg (2006). The Atlas of North American English. Berlin: Mouton de Gruyter. ISBN 3-11-016746-8.
- Lippi-Green, Rosina. (1997). English with an accent: Language, ideology, and discrimination in the United States. New York: Routedge.
- MacNeil, Robert; & Cran, William. (2005). Do you speak American?: A companion to the PBS television series. New York: Nan A. Talese, Doubleday.
- Mathews, Mitford M. (ed.) (1951). A Dictionary of Americanisms on Historical Principles. Chicago: University of Chicago Press.
- Mencken, H. L. (1936, repr. 1977). The American Language: An Inquiry into the Development of English in the United States (4th edition). New York: Knopf. (1921 edition online: www.bartleby.com/185/).
- Simpson, John (ed.) (1989). Oxford English Dictionary, 2nd edition. Oxford: Oxford University Press.
- Schneider, Edgar (Ed.). (1996). Focus on the USA. Philadelphia: John Benjamins.
- Schneider, Edgar W.; Kortmann, Bernd; Burridge, Kate; Mesthrie, Rajend; & Upton, Clive (Eds.). (2004). A handbook of varieties of English: Phonology (Vol. 1). Berlin: Mouton de Gruyter.
- Thomas, Erik R. (2001). An acoustic analysis of vowel variation in New World English. Publication of American Dialect Society (No. 85). Durham, NC: Duke University Press.
- Thompson, Charles K. (1958). An introduction to the phonetics of American English (2nd ed.). New York: The Ronald Press Co.
- Trudgill, Peter and Jean Hannah. (2002). International English: A Guide to the Varieties of Standard English, 4th ed. London: Arnold. ISBN 0-340-80834-9.
- Wolfram, Walt; & Schilling-Estes, Natalie. (1998). American English: Dialects and variation. Malden, MA: Basil Blackwell.
History of American English
- Algeo, John (Ed.). (2001). The Cambridge history of the English language: English in North America (Vol. 6). Cambridge: Cambridge University Press.
- Bailey, Richard W. (1991). Images of English: A cultural history of the language. Ann Arbor: University of Michigan Press.
- Bailey, Richard W. (2004). American English: Its origins and history. In E. Finegan & J. R. Rickford (Eds.), Language in the USA: Themes for the twenty-first century (pp. 3-17). Cambridge: Cambridge University Press.
- Bryson, Bill. (1994). Made in America: An informal history of the English language in the United States. New York: William Morrow.
- Finegan, Edward. (2006). English in North America. In R. Hogg & D. Denison (Eds.), A history of the English language (pp. 384-419). Cambridge: Cambridge University Press.
- Kretzschmar, William A. (2002). American English: Melting pot or mixing bowl? In K. Lenz & R. Möhlig (Eds.), Of dyuersitie and change of language: Essays presented to Manfred Görlach on the occasion of his sixty-fifth birthday (pp. 224-239). Heidelberg: C. Winter.
- Mathews, Mitford. (1931). The beginnings of American English. Chicago: University of Chicago Press.
- Read, Allen Walker. (2002). Milestones in the history of English in America. Durham, NC: Duke University Press.
- Allen, Harold B. (1973-6). The linguistic atlas of the Upper Midwest (3 Vols). Minneapolis: University of Minnesota Press.
- Atwood, E. Bagby. (1953). A survey of verb forms in the eastern United States. Ann Arbor: University of Michigan Press.
- Carver, Craig M. (1987). American regional dialects: A word geography. Ann Arbor: University of Michigan Press. ISBN 0-472-10076-9
- Kurath, Hans, et al. (1939-43). Linguistic atlas of New England (6 Vols). Providence: Brown University for the American Council of Learned Societies.
- Kurath, Hans. (1949). A word geography of the eastern United States. Ann Arbor: University of Michigan Press.
- Kurath, Hans; & McDavid, Raven I., Jr. (1961). The pronunciation of English in the Atlantic states. Ann Arbor: University of Michigan Press.
- McDavid, Raven I., Jr. (1979). Dialects in culture. W. Kretzschmar (Ed.). Tuscaloosa: University of Alabama Press.
- McDavid, Raven I., Jr. (1980). Varieties of American English. A. Dil (Ed.). Stanford: Stanford University Press.
- Metcalf, Allan. (2000). How we talk: American regional English today. Houghton Mifflin Company. ISBN 0-618-04362-4
- Pederson, Lee; McDaniel, Susan L.; & Adams, Carol M. (eds.). (1986-92). Linguistic atlas of the gulf states (7 Vols). Athens, Georgia: University of Georgia Press.
- Bailey, Guy; Maynor, Natalie; & Cukor-Avila (Eds.). (1991). The emergence of Black English: Text and commentary. Philadelphia: John Benjamins.
- Green, Lisa. (2002). African American English: A linguistic introduction. Cambridge: Cambridge University Press.
- Labov, William. (1972). Language in the inner city: Studies in Black English Vernacular. Philadelphia: University of Pennsylvania Press.
- Lanehart, Sonja L. (Ed.). (2001). Sociocultural and historical contexts of African American English. Philadelphia: John Benjamins.
- Mufwene, Salikoko; Rickford, John R.; Bailey, Guy; & Baugh, John (Eds.). (1998). African American Vernacular English. London: Routledge.
- Rickford, John R. (1999). African American Vernacular English: Features, evolution, and educational implications. Oxford: Blackwell.
- Wolfram, Walt. (1969). A sociolinguistic description of Detroit negro speech. Urban linguistic series (No. 5). Washington, D.C.: Center for Applied Linguistics.
- Wolfram, Walt; & Thomas, Erik. (2002). The development of African American English: Evidence from an isolated community. Malden, MA: Blackwell.
- Leap, William L. (1993). American Indian English. Salt Lake City: University of Utah Press.
- Bayley, Robert; & Santa Ana, Otto. (2004). Chicano English grammar. In B. Kortmann, E. W. Schneider, K. Burridge, R. Mesthrie, & C. Upton (Eds.), A handbook of varieties of English: Morphology and syntax (Vol. 2, pp. 167-183). Berlin: Mouton de Gruyter.
- Fought, Carmen. (2003). Chicano English in context. New York: Palgrave Macmillan.
- Galindo, Letticia D. (1987). Linguistic influence and variation of the English of Chicano adolescents in Austin, Texas. (PhD dissertation, University of Texas at Austin).
- Santa Ana, Otto. (1993). Chicano English and the Chicano language setting. Hispanic Journal of Behavioral Sciences, 15 (1), 1-35.
- Santa Ana, Otto; & Bayley, Robert. (2004). Chicano English phonology. In E. W. Schneider, B. Kortmann, K. Burridge, R. Mesthrie, & C. Upton (Eds.), A handbook of varieties of English: Phonology (Vol. 1, pp. 407-424). Berlin: Mouton de Gruyter.
- Wolfram, Walt. (1974). Sociolinguistic aspects of assimilation: Puerto Rican English in New York City. Washington, D.C.: Center for Applied Linguistics.
- Cran, William (Producer, Director, Writer); Buchanan, Christopher (Producer); & MacNeil, Robert (Writer). (2005). Do you speak American? [Documentary]. New York: Center for New American Media.
- Kolker, Andrew; & Alvarez, Louis (Producers, Directors). (1987). American tongues: A documentary about the way people talk in the U.S. [Documentary]. Hohokus, NJ: Center for New American Media.
- ↑ Crystal, David (1997). English as a Global Language. Cambridge: Cambridge University Press. ISBN 0-521-53032-6.
- ↑ North American English (Trudgill, p. 2) is a collective term used for the varieties of the English language that are spoken in the United States and Canada.
- ↑ Trudgill, pp. 46-47.
- ↑ Labov, p. 48.
- ↑ According to Merriam-Webster Collegiate Dictionary, Eleventh Edition. For speakers who merge caught and cot, /ɔ/ is to be understood as the vowel they have in both caught and cot.
- ↑ A few of these are now chiefly found, or have been more productive, outside of the U.S.; for example, jump, "to drive past a traffic signal;" block meaning "building," and center, "central point in a town" or "main area for a particular activity" (cf. Oxford English Dictionary).
- ↑ The Maven's Word of the Day, Random House. Retrieved February 8, 2007.
- ↑ Trudgill, Peter (2004). New-Dialect Formation: The Inevitability of Colonial Englishes.
- ↑ Oxford Advanced Learner's Dictionary. Retrieved April 24, 2007.
- ↑ Trudgill, p. 69.
- ↑ British author George Orwell (in English People, 1947, cited in OED s.v. lose) criticized an alleged "American tendency" to "burden every verb with a preposition that adds nothing to its meaning (win out, lose out, face up to, etc.)."
- ↑ Trudgill, p. 69.
- ↑ Possible entries for pavement
- ↑ Oxford Advanced Learner's Dictionary. Retrieved March 23, 2007.
- ↑ Cf. Trudgill, p.42.
- ↑ Algeo, John (2006). British or American English?. Cambridge: Cambridge University Press. ISBN 0-521-37993-8.
- ↑ Peters, Pam (2004). The Cambridge Guide to English Usage. Cambridge: Cambridge University Press. ISBN 0-521-62181-X, pp. 34 and 511.
- American Regional Accent Map based on results from online quizzes
- Do You Speak American: PBS special
- Dialect Survey of the United States, by Bert Vaux et al., Harvard University. The answers to various questions about pronunciation, word use etc. can be seen in relationship to the regions where they are predominant.
- Linguistic Atlas Projects
- Phonological Atlas of North America at the University of Pennsylvania
- The American•British British•American Dictionary
- Speech Accent Archive
- World English Organization
- English Speaking Union of the United States
- British, American, Australian English - Lists and Online Exercises
- Listen to spoken American English (midwest)
- Dictionary of American Regional English
- The Great Pop Vs. Soda Controversy
“An artist is above all a human being, profoundly human to the core. If the artist can’t feel everything that humanity feels, if the artist isn’t capable of loving until he forgets himself and sacrifices himself if necessary, if he won’t put down his magic brush and head the fight against the oppressor, then he isn’t a great artist.”
Considered the greatest Mexican painter of the twentieth century, Diego Rivera had a profound effect on the international art world. Among his many contributions, Rivera is credited with the reintroduction of fresco painting into modern art and architecture. His radical political views and tempestuous romance with the painter Frida Kahlo were then, and remain today, a source of public intrigue. In a series of visits to America, from 1930 to 1940, Rivera brought his unique vision to public spaces and galleries, enlightening and inspiring artists and laymen alike.
Diego Rivera was born in Guanajuato, Mexico in 1886. He began to study painting at an early age and in 1907 moved to Europe. Spending most of the next fourteen years in Paris, Rivera encountered the works of such great masters as Cézanne, Gauguin, Renoir, and Matisse. Rivera was searching for a new form of painting, one that could express the complexities of his day and still reach a wide audience. It was not until he began to study the Renaissance frescoes of Italy that he found his medium. It was with a vision of the future of the fresco and with a strong belief in public art that Rivera returned to Mexico.
Frescoes are mural paintings done on fresh plaster. Using the fresco form in universities and other public buildings, Rivera was able to introduce his work into the everyday lives of the people. Rivera concerned himself primarily with the physical process of human development and the effects of technological progress. For him, the frescoes’ size and public accessibility made them the perfect canvas on which to tackle the grand themes of the history and future of humanity. A lifelong Marxist, Rivera saw in this medium an antidote to the elite walls of galleries and museums. Throughout the twenties his fame grew with a number of large murals depicting scenes from Mexican history. His work appealed to the people’s interest in the history of technology and progress. The desire to understand progress was visible in the growing industrial societies of the 1930s, and Rivera saw the workers’ struggle as a symbol of the fragile political ground on which capitalism trod.
In 1930, Rivera made the first of a series of trips that would alter the course of American painting. In November of that year, Rivera began work on his first two major American commissions: for the American Stock Exchange Luncheon Club and for the California School of Fine Arts. These two pieces firmly but subtly incorporated Rivera’s radical politics, while maintaining a sense of simple historicity. One of Rivera’s greatest gifts was his ability to condense a complex historical subject (such as the history of California’s natural resources) down to its most essential parts. For Rivera, the foundation of history could be seen in the working class, whose lives were spent by war and industry in the name of progress. In these first two commissions and all of the American murals to follow, Rivera would investigate the struggles of the working class.
In 1932, at the height of the Great Depression, Rivera arrived in Detroit, where, at the behest of Henry Ford, he began a paean to the American worker on the walls of the Detroit Institute of Arts. Completed in 1933, the piece depicted industrial life in the United States, concentrating on the car plant workers of Detroit. Rivera’s radical politics and independent nature had begun to draw criticism during his early years in America. Though the fresco was the focus of much controversy, Edsel Ford, Henry’s son, defended the work and it remains today Rivera’s most significant painting in America. Rivera, however, did not fare nearly so well in his association with the Rockefellers in New York City.
In 1933 the Rockefellers commissioned Rivera to paint a mural for the lobby of the RCA building in Rockefeller Center. “Man at the Crossroads” was to depict the social, political, industrial, and scientific possibilities of the twentieth century. In the painting, Rivera included a scene of a giant May Day demonstration of workers marching with red banners. It was not the subject matter of the panel that inflamed the patrons, but the clear portrait of Lenin leading the demonstration. When Rivera refused to remove the portrait, he was ordered to stop and the painting was destroyed. That same year, Rivera used the money from the Rockefellers to create a mural for the Independent Labor Institute that had Lenin as its central figure.
Rivera remained a central force in the development of a national art in Mexico throughout his life. In 1957, at the age of seventy, Rivera died in Mexico City. Perhaps one of his greatest legacies, however, was his impact on America’s conception of public art. In depicting scenes of American life on public buildings, Rivera provided the first inspiration for Franklin Delano Roosevelt’s WPA program. Of the hundreds of American artists who would find work through the WPA, many continued on to address political concerns that had first been publicly presented by Rivera. Both his original painting style and the force of his ideas remain major influences on American painting.
A team of researchers at the Lawrence Berkeley National Laboratory has developed a method for chemically producing two-dimensional transistors and circuits that may aid in the development of more advanced computers.
The research began in early 2015 and was published Monday in the journal Nature Nanotechnology. To produce its atomically thin, “two-dimensional” transistors, the team used graphene — a collection of carbon atoms in a single, one-atom-thin sheet — as a conductor for another two-dimensional substance used as a semiconductor.
After graphene's development won the 2010 Nobel Prize in Physics, the main question became how to produce it, according to project researcher and campus doctoral student Mervin Zhao. Zhao said that the technology to grow graphene has now been developed, and the next step is to integrate it with other two-dimensional substances to form devices.
“Our work addresses a lot of the issues with how we can complete the chemical toolbox for making two-dimensional materials (and) how we can actively use two-dimensional materials to make transistors and computers in the future,” Zhao said.
Transistors, Zhao said, are switches that function in the same way as light-switches: By applying voltage, the transistor can be turned on or off, providing the foundation for modern electronics. In this case, the transistors are composed of graphene and a transition-metal dichalcogenide, or TMDC, with the graphene measuring the voltage conducted by the TMDC.
“There’s been a lot of work in making transistors of these materials,” said Jeffrey Bokor, a campus professor of electrical engineering and computer sciences. “This work is a nice demonstration of growing (TMDC) and developing transistors.”
Zhao said the need for two-dimensional transistors has arisen to address the inadequacies of silicon as a semiconductor. He noted that silicon will always have some thickness to it, which poses a problem for developing new, smaller transistors — as called for by the rule known as Moore’s law.
Moore’s law, Bokor said, stipulates that the number of transistors used in computers should roughly double every two years. The law helps to keep up with the expected progress of the industry overall.
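As a rough, back-of-the-envelope illustration of that doubling rule (this sketch is not part of the original article, and the starting count and time spans are arbitrary assumptions), the projection can be written in a few lines of Python:

```python
# Illustrative sketch of Moore's law as described above: transistor counts
# roughly double every two years. The starting count and the horizons are
# hypothetical values chosen only to show the arithmetic.

def projected_transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming a doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 1e9  # hypothetical chip with one billion transistors today
    for years in (2, 4, 10):
        count = projected_transistors(start, years)
        print(f"after {years:>2} years: roughly {count:.2e} transistors")
```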
According to Zhao, development of two-dimensional transistors on a large scale may be difficult because of limitations imposed by current industry infrastructure and its predominant focus on silicon-based technology. Bokor, however, said progress has continued in the industry over its 50-year history despite other potential blocks, and future advancements may still be made.
“Why do they even keep doing Moore’s law? It costs lots of money — even just for research with silicon — and requires huge research and development,” Bokor said. “They follow it because there’s so much money to be made.” |
Troubling new research shows warm waters rushing towards world’s biggest ice sheet in Antarctica
3 Aug 2022 6:18 am AEST
Warmer waters are flowing towards the East Antarctic ice sheet, according to our alarming new research which reveals a potential new driver of global sea-level rise.
Laura Herraiz Borreguero
Physical oceanographer, CSIRO
Alberto Naveira Garabato
Professor, National Oceanography Centre, University of Southampton
Jess Melbourne-Thomas
Transdisciplinary Researcher & Knowledge Broker, CSIRO
The research, published today in Nature Climate Change, shows changing water circulation in the Southern Ocean may be compromising the stability of the East Antarctic ice sheet. The ice sheet, about the size of the United States, is the largest in the world.
The changes in water circulation are caused by shifts in wind patterns, and linked to factors including climate change. The resulting warmer waters and sea-level rise may damage marine life and threaten human coastal settlements.
Our findings underscore the urgency of limiting global warming to below 1.5℃, to avert the most catastrophic climate harms.
Ice sheets and climate change
Ice sheets comprise glacial ice that has accumulated from precipitation over land. Where the sheets extend from the land and float on the ocean, they are known as ice shelves.
It’s well known that the West Antarctic ice sheet is melting and contributing to sea-level rise. But until now, far less was known about its counterpart in the east.
Our research focused offshore of a region known as the Aurora Subglacial Basin in the Indian Ocean. This area of glacial ice forms part of the East Antarctic ice sheet.
How this basin will respond to climate change is one of the largest uncertainties in projections of sea-level rise this century. If the basin melted fully, global sea levels would rise by 5.1 metres.
Much of the basin is below sea level, making it particularly sensitive to ocean melting. That’s because deep seawater requires lower temperatures to freeze than shallower seawater.
What we found
We examined 90 years of oceanographic observations off the Aurora Subglacial Basin. We found unequivocal ocean warming of up to 2℃ to 3℃ since the earlier half of the 20th century. This equates to 0.1℃ to 0.4℃ per decade.
The warming trend has tripled since the 1990s, reaching a rate of 0.3℃ to 0.9℃ each decade.
So how is this warming linked to climate change? The answer relates to a belt of strong westerly winds over the Southern Ocean. Since the 1960s, these winds have been moving south towards Antarctica during years when the Southern Annular Mode, a climate driver, is in a positive phase.
The phenomenon has been partly attributed to increasing greenhouse gases in the atmosphere. As a result, westerly winds are moving closer to Antarctica in summer, bringing warm water with them.
The East Antarctic ice sheet was once thought to be relatively stable and sheltered from warming oceans. That’s in part because it’s surrounded by very cold water known as “dense shelf water”.
Part of our research focused on the Vanderford Glacier in East Antarctica. There, we observed the warm water replacing the colder dense shelf water.
The movement of warm waters towards East Antarctica is expected to worsen throughout the 21st century, further threatening the ice sheet’s stability.
Why this matters to marine life
Previous work on the effects of climate change in the East Antarctic has generally assumed that warming first occurs in the ocean’s surface layers. Our findings – that deeper water is warming first – suggests a need to re-think potential impacts on marine life.
Robust assessment work is required, including investment in monitoring and modelling that can link physical change to complex ecosystem responses. This should include the possible effects of very rapid change, known as tipping points, that may mean the ocean changes far more rapidly than marine life can adapt.
East Antarctic marine ecosystems are likely to be highly vulnerable to warming waters. Antarctic krill, for example, breed by sinking eggs to deep ocean depths. Warming of deeper waters may affect the development of eggs and larvae. This in turn would affect krill populations and dependent predators such as penguins, seals and whales.
Limiting global warming below 1.5℃
We hope our results will inspire global efforts to limit global warming below 1.5℃. To achieve this, global greenhouse gas emissions need to fall by around 43% by 2030 and to near zero by 2050.
Warming above 1.5℃ greatly increases the risk of destabilising the Antarctic ice sheet, leading to substantial sea-level rise.
But staying below 1.5℃ would keep sea-level rise to no more than an additional 0.5 metres by 2100. This would enable greater opportunities for people and ecosystems to adapt.
Laura Herraiz Borreguero received funding from the European Research Council Horizon 2020 Marie Skłodowska-Curie Individual Fellowship, through grant number 661015, and the Centre for Southern Hemisphere Oceans Research (CSHOR, Hobart, Australia); She receives funding from the Australian government through CSIRO and the Australian Antarctic Partnership Program (AAPP). She is affiliated with CSIRO, the AAPP.
Alberto Naveira Garabato received funding from the Royal Society through a Wolfson Research Merit Award.
Jess Melbourne-Thomas receives funding from the Climate Systems Hub of the Australian Government’s National Environmental Science Program, the Fisheries Research and Development Corporation and The Pew Charitable Trusts. |
A grapheme is a letter or a group of letters that represent a sound (phoneme) in a word. The grapheme 'tch' represents the phoneme /ch/.
Download our Sketch: ‘tch’ sounds activity below.
This activity introduces your child to the grapheme 'tch' with example words and sentences. Sound out each example word. Can you think of any other words with the grapheme 'tch'? This activity has also been designed for handwriting practice.
Common Core alignment:
CCSS.ELA-LITERACY.L.1.2.D Use conventional spelling for words with common spelling patterns and for frequently occurring irregular words.
CCSS.ELA-LITERACY.RF.1.3.A Know the spelling-sound correspondences for common consonant digraphs (or trigraphs). |
By the time they reach pre-school age, children are keen learners, but exactly how do they learn?
Pre-schoolers are defined as children between the age of three and five years old and it refers to the time before they attend school at five years old.
From the age of three years old, children largely learn new skills and abilities through doing things and sharing their world with adults. Toys can help their learning, but you don’t need to spend loads of money on big or expensive toys, as they learn a lot from everyday activities and what’s going on in the world around them.
Pre-schoolers have already come a long way in their learning and you’ll have seen great changes in their ability as they progress from being a toddler to being a pre-schooler. Doing things is one of the most vital elements of early childhood education and learning for pre-schoolers – although to them, it’s all about fun. So incorporating lots of fun activities into their day, including plenty of new things to try and explore, will significantly help your child learn more at this age.
Talking And Listening
Pre-schoolers will be mastering the art of talking and slowly building up their vocabulary of words. They learn more about the art of talking and our language through listening to other people who speak to them and the speech going on around them. Plus, actually speaking themselves and putting it into action will improve their learning of language enormously.
You can help your pre-schooler by talking to them lots during the day and encouraging them to talk to other people too. This could be to siblings, grandparents or your friends. The more opportunities they have to talk, the more they’ll be learning about how to put ideas into spoken words.
It’s also helpful for you to ask your child questions, both simple ones that will require a yes or no answer, and questions which will make them think and have to come up with their own thought out answer. For example, this could be a question such as, “What did you do at Grandma’s house?” where they’ll have to explain what they’ve done during the day or, “What did you think of the farm?” where they’ll have to put their thoughts into words.
Pre-schoolers will also be learning and developing their motor skills, which involve using their fingers and hands and developing coordination. You can add to this learning ability by encouraging them to do simple tasks that involve the use of their hands and fingers or play games that involve these skills.
For example, you could thread plastic beads onto elastic to make a necklace (girls will like this, in particular) or you could have a go at making play dough models or doing a puzzle.
There are also personal tasks and dressing skills that involve good motor skills, such as brushing teeth, doing up zips or doing up buttons on clothes that help these skills too.
The development of good motor skills will help a child with their ability to write and draw too, as they need to have a good and steady grip to be able to hold a crayon, pencil or pen properly in order to learn to write their name.
Research from the University of Washington shows that endangered blue whales are present and singing off the southwest coast of India. The results suggest that conservation measures should include this region, which is considering expanding tourism.
“The Indian Ocean is clearly important habitat for blue whales — an endangered species that is only very slowly recovering from 20th-century commercial and illegal whaling, especially in the Indian Ocean,” said senior author Kate Stafford, an oceanographer at the UW Applied Physics Laboratory.
Analysis of recordings from late 2018 to early 2020 in Lakshadweep, an archipelago of 36 low-lying islands west of the Indian state of Kerala, detected whales, with peak activity in April and May.
“The presence of blue whales in Indian waters is well known from several strandings and some live sightings of blue whales,” said lead author Divya Panicker, a UW doctoral student in oceanography. “But basic questions such as where blue whales are found, what songs do they sing, what do they eat, how long do they spend in Indian waters and in what seasons are still largely a mystery.”
Answers to those questions will be important for the region, which is also experiencing effects of climate change.
“This study provides conclusive evidence for the persistent occurrence of blue whales in Lakshadweep,” Panicker said. “It is critical to answer these questions to draw up science-based management and conservation plans here.”
While enormous blue whales feed in the waters around Antarctica, smaller pygmy blue whale populations are known to inhabit the Indian Ocean, the third-largest ocean in the world.
In previous preliminary research, Panicker — who grew up in Cochin, India — talked to local fishers who reported seeing whale blows during the spring months.
But since whales surface only occasionally and soundwaves travel well in water, the best way to study whales is the same way they communicate.
The typical blue whale song is a series of one to six low moans, each up to 20 seconds long, below the threshold of human hearing. The pattern and number of moans varies for different populations. Songs provide insights into this poorly studied population — a possible new song was recently reported in the central Indian Ocean and off the coasts of Madagascar and Oman.
For the new study, scuba divers placed underwater microphones at two ends of Kavaratti Island. Other studies in nearby waters suggested that the presence of blue whales would be seasonal, and recordings confirmed their presence between the winter and summer monsoons.
“Our study extends the known range of this song type a further 1,000 kilometers (620 miles) northwest of Sri Lanka,” Panicker said. “Our study provides the first evidence for northern Indian Ocean blue whale songs in Indian waters.”
The researchers believe that the whales are likely resident to the northern Indian Ocean, and come to the Lakshadweep atoll seasonally.
Future work by another UW research group will use recordings of blue whales in the Indian Ocean to calculate their historic numbers and better understand how historic whaling affected different populations in this region.
This research was funded by the U.S. Navy’s Office of Naval Research through its Marine Mammal and Biology Program. |
We all know that blood is the most important part of our life, but what exactly is blood and what is it composed of? Since we know a little about blood, I’ll be explaining it in brief.

Blood is a body fluid found in humans and other animals. The combining form 'haema' (as in haematology) is derived from the Greek word for blood. Although blood is a liquid, it also contains solids known as corpuscles. In this post we are going to discuss the composition of blood, and we will also look at its functions.
Composition of blood:
Blood is composed of plasma, which is the liquid part, and corpuscles, which are the solid part. We are going to discuss both plasma and blood corpuscles in detail. The study of blood is called haematology.
Plasma forms 50-60% of your blood and the rest is corpuscles. It is a faint yellow or straw-coloured, slightly alkaline, viscous fluid, and it is extracellular. Plasma is composed of 90-92% water and 8-10% solutes.
Breaking the composition down, plasma consists of about 90% water and 7% plasma proteins, such as serum albumin (regulates osmotic pressure), serum globulins (maintain immunity), prothrombin and fibrinogen (help in blood clotting), haemagglutinins, immunoglobulins, etc.
The remaining 3% includes glucose, amino acids, cholesterol, mineral salts (bicarbonates, carbonates, chlorides, sulphates and phosphates of Na, K, Ca, Mg, etc.), other organic compounds, gases in dissolved form such as O2 and CO2, and wastes such as urea, creatinine and uric acid.
Blood corpuscles are the solid part of the blood and are suspended in the plasma. They are important for the proper functioning of the body. Corpuscles make up about 45% of the blood volume; this fraction is known as the haematocrit.
There are three types of corpuscles: erythrocytes (which contain haemoglobin and are termed RBCs), leucocytes (which do not contain haemoglobin and are termed WBCs) and thrombocytes (known as platelets, found only in mammals).
Erythrocytes or Red Blood Corpuscles:
Erythrocytes are circular, biconcave, enucleated (only in mammals) cells that give blood its red colour. The red colour is due to the iron-containing pigment haemoglobin. An erythrocyte is about 7 μm in diameter and 2.5 μm thick, with an average lifespan of 100-120 days. Each erythrocyte is covered by a plasma membrane made up of lecithin and cholesterol.
Haemoglobin, found in the cytoplasm of the RBC, is made up of four haem groups attached to a globin molecule. Among vertebrates, amphibians have the largest RBCs and mammals the smallest.
The RBC count differs between females and males: about 4.3 to 5.2 million RBCs/cu mm in females and 5.1 to 5.8 million RBCs/cu mm in males.
Like the RBC count, the haemoglobin content also differs: about 13-18 g/100 ml of blood in adult males and 11.5-16.5 g/100 ml in adult females. The number of RBCs is counted with a haemocytometer.
An increase in the RBC count can lead to polycythaemia, and a decrease results in anaemia. Anaemia is a condition in which the body lacks enough healthy red blood cells to carry oxygen from the lungs to the tissues. It can be chronic or acute.
In the foetus, RBCs are formed in the liver and spleen; in adults they are formed in the bone marrow. The process of RBC formation is known as erythropoiesis. In the developing stage they are colourless and have a large nucleus. By the time they are released into the bloodstream they have lost the nucleus and instead contain haemoglobin; hence they are referred to as corpuscles rather than cells.
Every second about 2.5 million RBCs are destroyed, mainly in the spleen and also in the liver. During destruction they undergo haemolysis, splitting into haem and globin. The iron of the haem is stored as haemosiderin and returned to the bloodstream for reuse, the rest of the haem is converted into the bile pigments bilirubin and biliverdin, and the globin is broken down into amino acids.
Functions of RBC:
- Transports oxygen from lungs to tissues.
- Transports carbon dioxide from tissues to lungs.
- Maintains the pH of the blood, as haemoglobin acts as a buffer (it has a neutralising effect).
- It maintains the viscosity of the blood.
Leucocytes or White Blood Corpuscles:
Leucocytes have no pigment, so they are colourless; they are nucleated, phagocytic cells. They show amoeboid (crawling) movement, which helps them squeeze out of the capillaries and reach infected tissue; this process is called diapedesis. They have no definite shape.
They are larger than RBCs (8 to 15 μm). The total WBC count is 5,000 to 9,000 WBCs/cu mm of blood, and the average lifespan is 3 to 4 days. They are far fewer in number than RBCs (roughly 1 WBC for every 600 RBCs). There are two types of leucocytes – granulocytes and agranulocytes.
A decrease in the number of WBCs leads to leucopenia, and an increase leads to leucocytosis. In some acute infections the number of WBCs can rise. A pathological increase in WBCs results in leukaemia, commonly known as blood cancer.
WBCs are formed in the red bone marrow, spleen, lymph nodes, tonsils, thymus and Peyer's patches. The process of WBC formation is known as leucopoiesis. Old WBCs are destroyed in the blood, liver and lymph nodes by phagocytosis.
Functions of leucocytes:
- To protect the body against infection as part of the immune system.
- To fight against diseases.
Granulocytes are cells that have granules in their cytoplasm. They contain a nucleus that is large and irregular in shape but lobed; hence they are also called polymorphonuclear cells. They are produced in the red bone marrow from myeloblasts.
Granulocytes fight everyday challenges such as common infections and allergens like dust. There are three types of granulocytes – neutrophils, eosinophils and basophils.
The granules of neutrophils take up neutral dyes when stained, hence the name. They constitute 54 to 62% of the total WBCs. The nucleus is multilobed, with 3 to 5 lobes, so they are called polymorphonuclear leucocytes or polymorphs. The cytoplasm contains fine granules.
They are 12 to 15 μm in diameter. They are phagocytic in nature and show amoeboid movement, which lets them move about 40 μm per minute. The average lifespan of neutrophils is 10 to 12 hours. They give protection against infection and form the first line of defence against microbes.
The granules of eosinophils take up acidic dyes, such as eosin, when stained. The granules are large and stain red. Eosinophils constitute 2-3% of total WBCs, have an average lifespan of about 14 hours, and are 10 to 15 μm in size. They are non-motile and only slightly phagocytic, if at all.
The nucleus is bilobed and stains a deep purple colour. Eosinophils show anti-histamine properties (used to fight allergies). An increase in the number of eosinophils is called eosinophilia; allergic disorders can cause such an increase.
Functions of eosinophils:
- Destroys toxins of protein origin produced by invading microbes.
- Involved in healing of wounds.
The granules of basophils are stained by basic dyes such as methylene blue. They constitute 0.5-1.0% of total WBCs, are 10 to 15 μm in diameter, are non-phagocytic, and live for about 8 to 12 hours.
The nucleus is twisted, with 2-3 lobes, and stains a dark purple colour. The granules are few in number. Basophils release heparin (an anticoagulant) and histamine; hence they are significant in allergic reactions.
Agranulocytes do not show granules in their cytoplasm. Their nucleus is single, simple and not lobed. They fight major diseases. There are two types of agranulocytes – lymphocytes and monocytes.
Lymphocytes are small cells with a large, round or indented nucleus. They are 8 to 16 μm in size and constitute 25-33% of the total WBCs. They are less motile than other leucocytes, and the amount of cytoplasm is small relative to the size of the nucleus.
Lymphocytes are produced in the lymph nodes, spleen, thymus, etc., derived from lymphoblasts (their precursors). They produce antibodies and antitoxins and are responsible for the immune response of the body. There are two types of lymphocytes – small lymphocytes and large lymphocytes.
- Small lymphocytes – They are slightly larger than RBCs, 7 to 10 μm in size. The nucleus is large and stains with basic dyes.
- Large lymphocytes – They are 10 to 14 μm in size. They have large quantity of cytoplasm.
Monocytes are the largest of all the WBCs, 12 to 15 μm in size. They have clear cytoplasm that is more abundant than the nucleus. They are phagocytic in nature, removing invading microbes and cell debris, and because of this they are also known as scavengers. They have an average lifespan of 10 to 12 hours.
The nucleus is large, kidney-shaped or oval, and not lobed. Monocytes are produced in the lymph nodes and spleen, derived from monoblasts. They constitute 1-5% of total WBCs and are actively motile.
Platelets or Thrombocytes:
Platelets are non-nucleated, round and biconvex. They are the smallest formed elements of the blood, measuring 2.5 to 5 μm, and are cell fragments rather than complete cells. The total count is 2.5 to 4.5 lakh (250,000 to 450,000) per cu mm of blood, and the average lifespan is 5 to 10 days.
Platelets are formed as fragments of large bone-marrow cells called megakaryocytes. The formation of platelets is called thrombopoiesis. An increase in the number of platelets is called thrombocytosis and a decrease is called thrombocytopenia.
Platelets take part in the coagulation (clotting) of blood. They form a platelet plug at the site of injury and release thromboplastin, which helps the blood to clot.
Functions of blood:
Following are the functions of the blood:
- Transports oxygen and carbon dioxide from the lungs to tissues and vice versa.
- Transports digested food from the intestine, first to the liver and then to all the tissues of the body.
- Transports waste products to the kidneys, lungs, skin and intestine so that they can be eliminated.
- Transports hormones to vital tissues which are secreted by the endocrine gland.
- It maintains the pH.
- Balances water.
- Transports heat from deeper tissues of the body to the surface.
- Gives defence against infection.
- It regulates the temperature.
- Helps to support the tissues by maintaining pressure within the arteries.
- Prevents loss of blood by clotting.
- Heals wounds. |
A primate is any member of the biological order Primates, the group that contains all the species related to lemurs, monkeys and apes; it is important to mention that this last category includes humans. It is a very wide and varied order of mammals that is subdivided into several categories, and its fossil evidence dates back more than 58 million years.
What are the primates?
Primates are an order of plantigrade mammals that have five digits on each limb, including opposable thumbs, and are popularly known as monkeys and apes.
Characteristics of primates
The main characteristics that we can observe in primates are the following:
- Primates have both stereoscopic and colour vision, which allows them to judge the distances of things.
- They have hands adapted for grasping.
- Humans and their closest relatives belong to this group known as primates.
- They have five digits, a dental pattern common to the group, and a generalised body plan.
- With the exception of humans, who live throughout the world, most primates inhabit the tropical and subtropical regions of America, Africa and Asia.
- They have plantigrade feet.
- They have an opposable thumb on the hands and feet.
- Their fingers can also flex, and show divergence and convergence.
- They have clavicles, nails, teeth, and well-developed shoulder and elbow joints.
- They have binocular vision and eye sockets surrounded by bone.
Primates are believed to have originated from arboreal insectivorous mammals, a way of life that favoured the development of characteristics related to movement through the trees. The name comes from Linnaeus, who introduced it in 1758 in his taxonomic ordering of animals. Linnaeus included in the order Primates humans, apes, Old World monkeys and New World monkeys, recognising that monkeys and apes are the animals most similar to humans and considering them the most developed members of the animal kingdom.
The evolution of primates dates back roughly 65 million years, to the end of the Cretaceous period, in Africa. The oldest fossils date from the late Eocene, and at the beginning of the Miocene different species begin to appear. It was during the Tertiary that primates began to proliferate. They lived in the trees and fed on fruit, but with the passage of time some had to abandon the arboreal habitat to which they were accustomed. Over time Australopithecus appeared, which began to acquire an upright posture and a larger cranial capacity.
The order of primates is divided into two different suborders, each with different characteristics, these are:
- Suborder Strepsirrhini: known as the lower primates, they were the first to appear on Earth. They are capable of producing their own vitamin C. Known as prosimians, they retain archaic characters: their sense of smell is much better developed than their sight and they have a greater number of teeth. Most species are nocturnal. This suborder is in turn subdivided into:
- Chiromyiformes: have long fingers used to catch insects on tree trunks.
- Lemuriformes: includes the lemurs, which inhabit only Madagascar.
- Suborder Haplorhini: known as the higher primates, this group includes monkeys, great apes and humans. It contains the infraorder Simiiformes, which is divided into two groups according to geographical distribution: the Platyrrhini, also known as the New World monkeys (the Americas), and the Catarrhini, the Old World monkeys (Africa and Asia). In this group we can find several families:
- Cercopithecidae family: includes species related to baboons, such as geladas, baboons and the Barbary or Gibraltar macaque.
- Hylobatidae family: hominoid primates, the gibbons, belong to this group. The species in this family are characterised by having very long arms.
Primates can be found inhabiting practically the entire world, excluding Australia and Antarctica. Each species is native to the place where it lives and has adapted to the local climate and food supply for its survival. They generally inhabit jungles and savannas. One of the places with the greatest diversity is the island of Madagascar, in the case of lemurs. In general they are found in Africa, Asia and America.
As for their diet, primates take advantage of a wide variety of foods from nature. Many of them feed on fruits and leaves, and some also eat insects. There are primates considered carnivorous; they are nocturnal hunters that use their limbs to catch their prey before eating it. Their diet can also include lizards, bird eggs, fruit and plant sap.
What were the first primates
The first primates descended from a small arboreal mammal very similar to a squirrel, known as Plesiadapis, which evolved little by little, giving rise to species of greater size and more robust build.
There are several early primates for which a fossil record exists. Among them:
- Archicebus achilles: lived approximately 55 million years ago and was discovered by an international team of researchers in an ancient lake bed in central China's Hubei province.
- Microchoerus hookeri: a small arboreal animal with nocturnal habits, discovered in a coal mine in Sossís, in Lleida. It is believed to have fed on fruits and resins. Experts consider that the habitat of this primate extended throughout the Iberian Peninsula and into Central Europe.
- Nyanzapithecus alesi: an infant specimen that lived approximately 13 million years ago, in the Miocene. It had features similar to those of gibbons, with a small, retracted snout, although it also had characteristics similar to those of chimpanzees and humans.
Kinship with man
Studies of primate and human genomes provide a wealth of information about human biology and evolution. Experts consider that humans and their closest primate relatives share roughly 99% of the basic DNA sequence. Humans and other primates also share many physical similarities when compared with other animals: the opposable thumb, forward-facing eyes and other characteristics point to our common primate ancestry. There are also several similarities in behaviour and anatomy.
The importance of primates lies in the knowledge and research opportunities they provide to scientists. Thanks to them, it has been possible to identify a series of behaviours and characteristics that lead experts to conclude that humans descend directly from earlier primates. They are the basis of many studies and investigations that seek to describe, in ever greater detail, the origin and evolution of the human being.
Examples of current primates
Some examples of primates that we can find today are the following:
- Titi monkey
Primates that currently have a tail
Not all primates have a tail. The only primates that have a prehensile tail are those found on the American continent. The use of the tail is the basis of the locomotion of the primates of South America since they are mainly arboreal and need the tail to be able to maintain balance in the trees. The tails generally have small vertebrae and joints to provide flexibility and movement. They also have elongated and thick muscles in a dorsal position. |
Pixels are the individual building blocks of every digital photograph and most other digital images. In addition to pixels in a digital image, pixels can refer to pixels in a digital display, a digital camera sensor, and other devices. But, in this article, we’ll mainly be talking about the pixels in digital images.
What is Pixel?
Let's first understand what a pixel is. The word 'pixel' was coined by combining the words 'picture element'. You can think of the pixels in digital images as colored squares. The images you see on screens usually have hundreds of thousands (and often millions) of pixels — when enough of these colored squares are placed next to one another and displayed at a small enough size, you see continuous images instead of individual pixels. If you zoom in far enough to see the pixels, they once again appear as colored squares.
How Big is a Pixel?
Pixels themselves don’t really have a size. While physical units like inches or centimeters have an exact, real-world size, a pixel is more of a logical unit than a physical one. However, the pixels of digital images are most often displayed at a size so small as to not be visible, so they usually exist as very small elements.
How are Pixels Related to Physical Units?
After understanding what a pixel is, the next question that comes to mind is how it relates to real-world measurements. Although pixels don't have a size of their own, their relationship to physical units becomes important when you want to print an existing image, or when you want to create a new image to be printed at a particular size. For example, if you'd like to print an image at 6 inches wide and 4 inches tall, that says nothing about how many pixels it has or should have.
At this point, it's important to make sure that, just like when displaying an image on a screen, the individual pixels are small enough not to be visible. The standard for high-quality prints is 300 pixels per inch of a page. How many pixels fit into an inch is called the resolution of the image. And with a resolution of 300 pixels per inch (PPI), 6 inches now equal 1,800 pixels and 4 inches equal 1,200 pixels.
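To make that arithmetic concrete, here is a minimal sketch in Python (the function name and values are illustrative, not part of any particular tool) showing how a print size and a chosen PPI translate into pixel dimensions:

```python
def print_size_to_pixels(width_in, height_in, ppi=300):
    """Convert a physical print size in inches to pixel dimensions at a given resolution (PPI)."""
    return round(width_in * ppi), round(height_in * ppi)

# A 6 x 4 inch print at the standard 300 PPI
width_px, height_px = print_size_to_pixels(6, 4, ppi=300)
print(width_px, height_px)  # 1800 1200
```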
How Do Pixels Come to Exist?
Pixels can come to exist in a few different ways. For example, digital cameras have sensors made up of light-detecting pixels. When you press the shutter, the sensor captures the subject you're photographing, and the information detected by each pixel of the sensor is translated into the pixels of a digital image. If, instead of capturing a photo, you create a new image in Pixelmator Pro, the first step is setting its size, so you immediately specify how many pixels you'd like your image to have. Either way, all digital images start as collections of pixels, and as long as they exist in digital form, they continue being collections of pixels.
How Do Pixels Work?
Pixels are the vehicle for transforming binary data into an image on a screen. Each pixel in an RGB monitor is sent a piece of code that tells it which color to display. Because each pixel has red, green, and blue lighting elements, the code is made up of a triplet of eight-digit binary (8-bit) numbers. These tell each of the three color elements at what intensity to light up, and when all three are combined, they produce the desired color. Each pixel displays one color, and the colors together make an image.
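As a simplified illustration (a sketch of the general idea, not a description of any particular monitor's internals), an RGB pixel can be modelled as a triplet of 8-bit values, each of which can also be written as an eight-digit binary number:

```python
# One pixel as a red/green/blue triplet, each channel an 8-bit value (0-255)
pixel = (255, 165, 0)  # an orange-ish color

# The same triplet written as eight-digit binary numbers, the form in which
# the intensity of each lighting element is encoded
binary_triplet = [format(channel, "08b") for channel in pixel]
print(binary_triplet)  # ['11111111', '10100101', '00000000']
```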
Resolution vs Pixel Density
When talking about image or screen quality, the term resolution is often used. Screen resolution can be defined as the number of pixels on a screen. For example, a 13-inch MacBook Air has a resolution of 2560 x 1600, which means the screen is made up of over 4,000,000 pixels. The higher the resolution, the higher the potential image quality.
While resolution is important when assessing screen quality, it is also important to consider pixel density. Pixel density is measured either in PPI (pixels per inch) or PPC (pixels per centimeter). For example, if you look at a photo on an iPhone and on a movie screen, both with a 1792 x 828 resolution, the iPhone image will look better because the PPI is higher on the iPhone screen. When the picture is blown up on a movie screen with the same resolution as the iPhone, the image will appear pixelated.
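To see why the same resolution can look sharp on a phone and pixelated on a cinema screen, pixel density can be estimated from the resolution and the diagonal screen size. The sketch below assumes illustrative screen sizes (a 6.1-inch phone and a 300-inch cinema screen), and the function name is hypothetical:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Estimate pixel density (PPI) from a screen's resolution and its diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)  # diagonal length in pixels
    return diagonal_px / diagonal_in

# The same 1792 x 828 resolution on a ~6.1-inch phone screen vs. a 300-inch cinema screen
print(round(pixels_per_inch(1792, 828, 6.1)))  # roughly 324 PPI -> individual pixels invisible
print(round(pixels_per_inch(1792, 828, 300)))  # roughly 7 PPI -> visibly pixelated
```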
Around 10 percent of adults and 8 percent of children in the United States have been diagnosed with severe food allergies. However, those numbers double when you take into account people who have food sensitivities or intolerances. Food allergies and sensitivities are quite common, and their numbers are increasing every day.
This article is only about food allergies triggering a severe immune system response. It is not about food intolerances or food sensitivities.
What is a food allergy?
A food allergy triggers a cascade of responses from our immune system. This happens when our immune system mistakenly identifies certain proteins present in food to be harmful. As a result, our bodies take measures to protect us from these “harmful” chemicals. One of the most common measures is the release of histamine. Histamine, while helpful on the one hand, causes inflammation in our bodies.
When a person with food allergies is exposed to even the smallest amount of allergenic food, their body starts displaying symptoms. Symptoms of food allergies include difficulty breathing, swelling of the mouth, cheeks, and tongue, a sudden drop in blood pressure, nausea, vomiting, diarrhea, rashes, and hives. In severe cases, the body responds to these allergenic foods by going into anaphylactic shock, which can be fatal.
Note that food allergies and food intolerances/sensitivities are not the same thing. Food intolerance does not affect your immune system.
Six foods that trigger the worst food allergies
Cow's milk contains a protein that an allergic person's body mistakes for a harmful chemical, triggering the body's immune response. This form of allergy is most common in young children, from infants to toddlers. In the past, doctors did not consider this a serious allergy in young children, as they thought most of them would outgrow it. However, today we see that this allergy can persist and often stays with them through adulthood.
The reaction to cow’s milk usually includes swelling, rashes, vomiting, and the development of hives. In the most severe cases, anaphylaxis can also occur. Those who are allergic to cow’s milk should avoid the liquid milk itself, milk in powder form, cheese, margarine, butter, yogurt, ice-cream, and all sorts of dairy creams. Mothers who have an allergic baby will have to remove the cow’s milk from their diet.
An egg allergy, after cow’s milk, is the second most common allergy in children. Studies show that many of these children will outgrow this allergy. However, many others will continue to have this allergy through adulthood. The most common symptoms of egg allergy include respiratory issues, skin rashes and hives, stomach aches, and anaphylaxis.
The protein that commonly triggers the allergy is found in egg whites. The treatment for an egg allergy, similar to the cow’s milk allergy, is to make sure that your diet is egg-free.
However, egg-related foods, such as biscuits and cakes, or foods with cooked eggs in them, are not as allergenic to some allergic people because cooking the egg destroys the protein that causes the allergies.
Tree nut allergies are common in the United States, affecting almost 1% of the population. Tree nuts include almonds, walnuts, pistachios, cashews, macadamia nuts, Brazil nuts, and pine nuts. Peanuts are not tree nuts; they are legumes. However, peanut allergies are considered a worldwide concern, and their symptoms are the same as those of tree nut allergies.
People with this allergy will be affected by any form of tree nut or peanut, even if they consume something made from their oils or butter. The only way to avoid a reaction is to abstain from nuts. Tree nut and peanut allergies are responsible for half of all anaphylaxis-related deaths. That is why people who have this allergy are advised to carry an epi-pen, which can save their lives by injecting a shot of adrenaline into their bloodstream. The adrenaline counteracts and reverses the effects of the allergy if administered immediately after exposure. Strict avoidance is necessary for tree nut and peanut allergies.
Wheat allergies are also very common in both children and adults, with symptoms including digestive trouble, vomiting, nausea, skin rashes, swelling, and possible anaphylactic shock. People often mistake wheat allergies for celiac disease or gluten sensitivity because they have similar symptoms. However, a wheat allergy causes an immune system response that can prove to be fatal. It is one of the worst foods for allergies and requires the allergic person to avoid wheat and all foods made with wheat proteins.
Soy allergies were thought to be more common in children, though that belief is changing. Foods like soybeans, soy milk, soybean oils, soy sauce, and for some people soy lecithin and tofu made from soybeans are the prime sources of soy that could be triggering an allergic reaction. The symptoms include itchiness all over the body, tingling in the mouth, runny nose, coughing, difficulty breathing, asthma, and anaphylaxis.
Finned Fish and Shellfish
Researchers define seafood as either “finned fish” or “shellfish”. Around three percent of adults are affected by seafood allergies. They are two exclusive allergies, as finned fish and shellfish don’t carry the same kind of allergenic protein. That is why someone could be allergic to one and yet might not be allergic to the other. Finned fish and shellfish allergies primarily develop in adulthood, and both can be fatal.
The symptoms of all seafood allergies are similar: stomach pain, diarrhea, vomiting, swelling of the lips and tongue, and anaphylaxis.
As you may have noticed, despite the difference in the food types, most allergic symptoms are similar. This is because the proteins that cause reactions in our immune system tend to trigger the same symptoms in both children and adults. Unfortunately, current research appears to show that although some children still outgrow a food allergy, for many others it will continue through adulthood.
Other highly allergenic foods include linseed, mustard seeds, chamomile, avocados, bananas, peaches, kiwi, garlic, aniseed, sesame seed, and cocoa. The best way to avoid complications with severe food allergies is to avoid those foods at all costs. If you have consumed an allergenic food and are experiencing symptoms, use your epi-pen and/or contact your doctor immediately before the symptoms get worse.
If you suffer from food sensitivities or other treatable allergies, and you have been unsuccessful with allergy treatments or are looking for something different, St. Louis Allergy Relief Center treats allergies holistically without the use of pain or pills. We are an allergy wellness center specializing in natural treatments. We specialize in holistic, natural allergy treatments using Advanced Allergy Therapeutics (AAT). We provide you with a detailed treatment plan after completing a comprehensive assessment to determine environmental stressors that may be triggering allergies or allergy-like symptoms. Visit our website https://stlouisallergyrelief.com/ to learn more or call us at 314-384-9304. |
A large percentage of Americans are found to be deficient in Vitamin D. It's an incredibly important vitamin, yet it can be hard to obtain through diet alone, and the body relies on regular sun exposure to produce it. We need vitamin D to help the body absorb calcium and phosphate from our diet. Vitamin D has many roles in the body. It helps to:
- promote healthy bones and teeth
- support immune, brain, and nervous system health
- regulate insulin levels to support diabetes management
- support lung function and cardiovascular health
- influence the expression of genes for prevention of cancers
Vitamin D is sometimes referred to as the sunshine vitamin. This is because vitamin D is produced by the body in response to sun exposure. Spending about 15-20 minutes in the sun, three days per week, can allow a body to produce sufficient vitamin D. This can vary depending on how fair-skinned you are. Darker skinned people with more melanin will need more sun exposure. Other factors, such as smog and even the time of year, can also affect how much UVB rays are present. In addition, due to the very real concerns surrounding skin cancer, many people avoid sun exposure.
Very few foods naturally contain vitamin D. Fatty fish, such as salmon and swordfish, offer some vitamin D, but most dietary sources of vitamin D are fortified foods such as milk, cereal and orange juice. Vitamin D3 supplements may be an easy and effective way to meet your vitamin D needs, especially if you're at risk of deficiency. Check with your healthcare provider about what would be recommended for you.
As ocean temperatures rise, some species of corals are likely to succeed at the expense of others, according to a report published online on April 12 in the Cell Press journal Current Biology that details the first large-scale investigation of climate effects on corals.
"The good news is that, rather than experiencing wholesale destruction, many coral reefs will survive climate change by changing the mix of coral species as the ocean warms and becomes more acidic," said Terry Hughes of James Cook University in Australia. "That's important for people who rely on the rich and beautiful coral reefs of today for food, tourism, and other livelihoods."
In an attempt to understand the sorts of changes that may take place as the world's oceans warm, the researchers examined the coral composition of reefs along the entire length of Australia's Great Barrier Reef. Earlier studies of climate change and corals have been done on a much smaller geographical scale, with a primary focus on total coral cover or counts of species as rather crude indicators of reef health.
"We chose the iconic Great Barrier Reef as our natural laboratory because water temperature varies by 8 to 9 degrees Celsius along its full length from summer to winter, and because there are wide local variations in pH," Hughes explained. "Its regional-scale natural gradients encompass the sorts of conditions that will apply several decades from now under business-as-usual greenhouse gas emissions."
In total, the researchers identified and measured more than 35,000 coral colonies on 33 reefs. Their survey revealed surprising flexibility in the assembly of corals. As they saw one species decline in abundance, some other species would tend to rise. The waxing or waning of any given coral species the researchers observed as they moved along the coastline occurred independently of changes to other coral species.
Hughes concludes that corals' response to climate change is likely to be more complicated than many had thought. Although he now believes that rising temperatures are unlikely to mean the end of the coral reef, critical issues remain.
"If susceptible table and branching species are replaced by mound-shaped corals, it would leave fewer nooks and crannies where fish shelter and feed," he said. "Coral reefs are also threatened by much more local impacts, especially by pollution and overfishing. We need to address all of the threats, including climate change, to give coral reefs a fighting chance for the future."
Hughes et al.: "Assembly rules of reef corals are flexible along a steep climatic gradient." |
Last month, Comet Siding Spring showered several tons of dust and ice particles onto the Red Planet. The event produced thousands of shooting stars.
Recently, NASA combined data from two of its spacecraft, revealing significant information about the structure of Comet Siding Spring and the atmosphere of Mars.
The data showed that the large amount of dust and other debris produced a meteor shower, which in turn caused several changes in the Martian atmosphere.
Jim Green, the director of NASA's Planetary Science Division, said that scientists were surprised by the large quantity of particles the comet produced.
The magnesium broke down the electric charge present in the Martian atmosphere. The sodium, on the other hand, gave the night sky a yellowish tint, like the glow people usually see from the lights in their parking lots at night.
Moreover, scientists said that this is the first time they have observed this kind of ionization in the atmosphere caused by a comet. It added to the layer of ions in the topmost level of the atmosphere; some of the most noteworthy ions were sodium, iron and magnesium.
Nick Schneider, the lead instrument scientist on one of NASA's satellites, stated that it was indeed an amazing sight for the human eye.
The best place to witness this meteor shower was the Martian surface, though it was impossible for experts to view the event from there in person. NASA therefore used two of its rovers on Mars to try to record the meteor shower. Unfortunately, the rovers failed to document it due to a technical fault.
Haemophilus influenzae type b (Hib)
Haemophilus influenzae type b (Hib) is a bacterium that can cause a number of serious illnesses, especially in young children.
Hib infections used to be a serious health problem in the UK, but the routine immunisation against Hib in infants since 1992 means these infections are now rare.
Of the small number of cases that do occur nowadays, most affect adults with long-term (chronic) underlying medical conditions, rather than young children.
What problems can it cause?
Hib bacteria can cause several serious infections, including:
meningitis – infection of the lining of the brain and spinal cord
septicaemia – blood poisoning
pneumonia – infection of the lungs
pericarditis – infection of the lining surrounding the heart
epiglottitis – infection of the epiglottis (the flap that covers the entrance to your windpipe)
septic arthritis – infection of the joints
cellulitis – infection of the skin and underlying tissues
osteomyelitis – infection of the bones
Many of the children who get Hib infections become very ill and need treatment with antibiotics in hospital.
Meningitis is the most severe illness caused by Hib and, even with treatment, one in every 20 children with Hib meningitis will die. Of those who do survive, many will have long-term problems such as hearing loss, seizures and learning disabilities.
How is Hib spread?
Hib bacteria can live in the nose and throat of healthy people and usually do not cause any symptoms.
The bacteria are usually spread in a similar way to cold and flu viruses – through droplets of fluid in coughs and sneezes. The bacteria can be spread by healthy people who carry them, as well as by those who are ill with a Hib infection.
Inhaling these infected droplets, or transferring them into your mouth from a contaminated surface, can allow the bacteria to spread further into your body and cause one of the infections mentioned above.
Vaccinating children against Hib has been very successful in cutting rates of Hib infections. From more than 800 confirmed cases a year in England in the early 1990s, the number of Hib infections has now fallen to fewer than 20 cases a year.
Babies have three separate doses of Hib vaccine – at two, three and four months of age – as part of the combined 5-in-1 vaccination.
A booster dose is also offered at 12-13 months as part of the combined Hib/MenC booster, to provide longer-term protection.
How were large exotic animals caught and transferred to rich collectors before tranquilizers?
The possession, display, and gifting of “exotic” animals was a constant of the medieval and early modern elite from England to China. Unfortunately, sources don’t provide the clearest picture of the mechanics of this social trade. As you can imagine, chroniclers were much more interested in enumerating the splendor and exoticness of the massive (and minor–Matthew of Paris loved him some porcupine) beasts. Still, every now and then we catch some glimpses, although more pertaining to the upkeep of animals than their original capture.
Texts of veterinary science from the Arab world outsource the problem. While these books offer solutions for what ails common animals (especially horses), when it comes to elephants and “special” donkeys, a.k.a. zebras, they mention the importance of having a trained keeper arrive with the animal from India or sub-Saharan Africa. Europeans used outsourcing, too. The lions who arrived in London in 1775 from Senegal were proudly presented by English soldiers–who had stolen them from the original trappers and then killed the people.
Medieval people were well aware of the art of luring, and also of provoking. Jean de Joinville tells a tale of a lion hunt in Caesarea (popular among crusaders), which might offer some insight into trapping:
The King set to work with his people to hunt lions, so that they captured many. But in doing so they incurred great bodily danger. The mode of taking them was this: They pursued them on the swiftest horses. When they came near one they shot a bolt or arrow at him, and the animal, feeling himself wounded, ran at the first person he could see, who immediately turned his horse’s head and fled as fast as he could. During his flight he dropped a piece of his clothing, which the lion caught up and tore, thinking it was the person who had injured him. And while the lion was engaged thus the hunters again approached the infuriated animal and shot more bolts and arrows at him. Soon the lion left the cloth and madly rushed at some other hunter, who adopted the same strategy as before. This was repeated until the animal succumbed, becoming exhausted by the wounds he had received.
In most cases the point was surely the lion’s death, but one can imagine a similar bait-and-subdue method used for capturing. Medieval bestiaries (texts describing the traits of animals, moralized for religious instruction) are not known for their rigorous devotion to realism, but this English bestiary illumination shows goats instead of humans used to lure the lions, who are then trapped into a pre-prepared hole or ditch.
Medieval animal-keepers were well aware of the use of chains and muzzles to control their charges.
Gaston Febus in his Livre de chasse divides animals into two groups: those who dogs “commonly and willingly hunt”, and the rest of them. That group, notably, does not include the great exotic beasts. Nevertheless, the English monarchs, at least, took great sport in using dogs to lion-bait for their entertainment.
Transport would have been situationally dependent, as the journey of the 18-19th century giraffes show. The giraffes headed from Sudan to Vienna and Paris seem to have been transported the same way from the African interior to the coast, strapped to the backs of camels. At that point, they were transported by ship to Europe. The giraffe bound for Vienna was carried across the Alps in a cart, a journey which ultimately damaged its skeleton and internal organs enough that it died soon after arrival. Zarafa the Giraffe, headed for Paris, did better. Her keepers elected to land in Marseilles and have her walk to Paris, protected from weather by an oilskin blanket.
As far as transport and keeping goes, the biggest logistical issue was feeding the animals! Most of our menagerie records, in fact, concern either the money used for food or the amount of food. The (probable) polar bear in 13th century London, when kept on “one long and strong cord,” was allowed to fish in the Thames for its meals! It seems that many of the carnivorous beasts in the London menagerie ate primarily sheep. Much later, Zarafa the Giraffe traveled with 25 cows to provide her daily requirements for milk. (Apparently giraffes drink milk?)
We should not understate either the skill of medieval animal-keepers–or the inherent danger. Arab rulers imported keepers with their beasts for a reason; English kings designated “Master of Lyons and Bears” for a reason (even if they did not always pay on schedule). They reached certain levels of taming with their charges, who nevertheless remained wild animals. A tragic example from 17th century London demonstrates this:
Mary Jenkinson, living with the Person who keeps the Lyons in the Tower, going into the Den to show them to some Aquaintance of hers, one of the Lyons (being the Greatest there) putting out his paw, she was so venturous as to stroak him as she used to do, but suddenly catched her by the middle of the Arm with his Claws and mouth, and most miserably tore her Flesh from the Bone…They thrust several lighted Torches at him, but at last they got her away…She died not many Hours after. |
VIII. The Measure of a Man
Although America did not invent slavery, slavery did invent America. Slavery invented America by providing the delegates to the Constitutional Convention the frame within which they considered each of the critical questions they sought to answer. This was already clear on May 29, 1787, barely two weeks into the convention, when Edmund Randolph introduced the resolutions, known as the Virginia Plan, out of which our constitution would emerge. For even those resolutions that did not mention slavery explicitly had slavery written all over them. Nor is it difficult for us to figure out why this was so. The answer is already right there in Aristotle’s definition of a slave: “Any human being that by nature belongs not to himself but to another is by nature a slave; and a human being belongs to another whenever, in spite of being a man, he is a piece of property, i.e., a tool having a separate existence and meant for action.”
“In spite of being a man. . . .” So, were slaves to be counted as property, in which case they should be taxed, but not represented; or should they be counted men, in which case they should be represented, but not taxed. Here, in a man—a human being who belonged not to himself but to another—was contained the very cause for the American Revolution itself: taxation without representation. And, yet, the problem does not end there.
On May 29, 1787, Edmund Randolph set out five defects to the Articles of Confederation, defects that in the judgment of all those present warranted the creation of “a more perfect Union.” To begin with, “the Confederation produced no security against foreign invasion.” Congress could not “prevent a war, nor . . . support it by their own authority.” Congress also had no authority, should any state violate a treaty, or “the law of nations,” to punish that state. More importantly, noted Randolph, “particular states might, by their conduct, provoke war without control.” And, should they do so, “neither militia nor drafts being fit for defence on such occasions, enlistments only could be successful, and these could not be executed without money.” In short, the federal government was entirely incapable of either defending itself or of raising revenues with which to do so. This suggested a strengthening of the federal government at the expense of the states.
The federal government needed revenue in order to raise an army. Southern delegates objected. They objected, first, because, unlike their colleagues to the north, their property consisted not in real estate, ships, merchandise, and fixed capital, but in men, women, and children—in slaves. However, they also objected because they had reason to believe that, were the federal government to use their taxes to raise an army, that army might well be used against the southern states themselves.
The second defect to which Randolph called the attention of his fellow delegates was the fact that the federal government lacked authority to “check the quarrel between states, nor a rebellion in any, not having constitutional power, nor means, to interpose according to the exigency.” Again, the federal government was too weak to prevent civil war. And, again, southern delegates feared that the next rebellion might not be launched by Daniel Shays and his men, but by southern slaves eager to join their free brothers and sisters in the north. On whose side would federal troops side, on the side of southern planters, or on the side of southern slaves?
The third defect was that the Confederation was unable to secure the “fruits of liberty,” which, in this case, meant chiefly the economic fruits. But, once again, the problem was the weakness of the center. Without the ability to regulate interstate commerce or to impose restrictions on the generation of currency, the federal government was powerless to take advantage of the abundance of wealth and labor enjoyed by the new republic. But, once again, as well, property and, hence, slavery was the underlying issue. This is because the southern states wanted slaves to be treated as men for purposes of representation, but treated as property when it came to law. This placed run-away slaves in the odd position of being charged with theft, in effect stealing themselves. Interstate commerce was jeopardized by the countless caveats and exceptions that states were forced to make on account of slavery.
Randolph’s fourth objection to the Articles of Confederation was that they left no means for the federal government to defend itself “against encroachments from the states.” But, fifthly, Randolph pointed out, there was no clause making the federal constitution “paramount to the state constitutions.”
In both of these cases, slavery was the underlying issue. Against whom would the federal government need to defend itself if not from troops charged with protecting the South’s most peculiar institution? Or, which states sought preemption of their state constitutions over the U.S. Constitution? Again, the answer was clear.
And, yet, the northern states knew that economically they needed the South just as much or even more than the South needed them. We often overlook the immense windfall slavery provided all of the states and not simply those in the south. Economically, slavery drove wages down not only for unskilled workers, who often competed for employment with slaves, but for all workers, both skilled and unskilled. But, for this reason, we also often overlook the tremendous boost to overall productivity slavery provided, not simply for the South, but for all of the United States. This helps to explain why, for all their moral posturing, delegates to the convention from northern states did not simply walk away from the table when their southern colleagues refused to budge. And this, in itself, points to what was truly unique about the debates over slavery at the Constitutional Convention.
Aristotle, we will recall, discussed slavery in the context of the household economy or private enterprise, the principal aim of which was to maintain the household. Indeed, Aristotle explicitly warned against those who would seek to expand their wealth indefinitely. The owners of such households were at risk, believed Aristotle, of mistaking life (bios) for creation (zoe) and so of substituting mere biological existence for living well. Such owners, Aristotle warned, would quickly lose their bearings and would begin to view all of human action as mere means for accumulating endless wealth. Soon they would see courage as only another means for making money, instead of an indispensable component in winning battles. Or they would begin to see war as only another way of accumulating wealth. Or they would view health care as a means for making money rather than, as it should be, a means to care for the sick and make them healthy.
Clearly, the impact that endless accumulation of wealth would have upon the enslaved would be disastrous. For let us suppose that the owner of this form of property could increase his yield not by preserving or maintaining his human property, but by using, discarding, and purchasing another slave. Let us suppose that, like any commodity, this commodity too could carry a value that made its use and replacement more economical than its care and preservation. Here, in fact, was the ultimate perversion: that human beings would come to view other human beings not as a means to sustain a closed economy, i.e., a self-sustaining oikos or oikonomia, which is the case in every despotism; but, rather, that they would come to view other human beings as a means of unending accumulation of wealth. At this point, human beings, whose bodies and minds are not limitless, are made subject to a process that is. And the destruction of their finite bodies and minds for the sake of something limitless comes to be viewed as a price worth paying.
This, clearly, is what had happened in America, not only in the South, but also in the North. And it is what linked the fates of mere landless laborers in the North to the fates of enslaved Africans in the South. In both instances, their lives and labor had come to be viewed solely within the context of the private household economy. The natural condition of both was viewed as “the tool” of another human being within an overall despotic relationship.
But, since delegates from northern states shared this general outlook with their southern counterparts, northern delegates found it difficult to out-maneuver their southern colleagues in discussions about taxation, property, security, representation, and slavery. They wished—all of them, North and South—to be true republicans, true advocates and protectors of public life, the life we hold in common and where alone we are free and equal. And, yet, such was the nature of the despotism attached to the private household economy (oikonomia)—to which, in turn, the delegates were themselves attached—that this despotism could not help but seep through the cracks left open by the framers.
Initially, of course, it created but a trickle. Since it intersected every one of the resolutions named in the Virginia Plan, the topic of slavery came up almost every day the convention met. But, on most days, since it was always possible to reach consensus on whatever matter was explicitly on the table—length of term of service, direct or indirect election of representatives, interstate commerce, taxation—the treatment of slavery could be kicked down the road indefinitely, or so it seemed. But, then, beginning in July the matter of slavery could not be put off any longer. And by August it threatened to undo all that had thus far been accomplished. Would slavery wreck the convention? Would it deprive the new nation of what it most needed: a new constitution?
Republicanism presumes common wealth shared among political, economic, and intellectual equals. The men meeting in Philadelphia in 1787 to “form a more perfect Union” were under no illusions. They knew full well that this republican vision would not sit well with most of those who had actually picked up muskets, fought, and suffered hardship for their country. Most of their fellow countrymen lacked the wealth, education, and property they would have needed to actively participate in public life. And, although some of them might hope some day to own their own household (oikos) and thereby become the masters (despotes) of their own household economy (oikonomia), most of them were and would forever remain the doulos or “slaves” within some other man’s private household economy.
For those of European descent, there was a way out of such bondage. Hard work, saving, wise investment, and eventually land and education might earn them the right to actively participate in public life. For those of African descent, the same did not hold true. Hard work might earn them a transfer from work in the fields to work in the foundry or lumber mill or to the household itself. But no amount of labor would be enough to earn them their political freedom. They too remained doulos or slaves within some other man’s private household economy; but without any hope for redemption. They were bound to work for a master (despotes) for life.
Just as it was elsewhere in the Americas, despotism was thus an indispensable component of economic life in the United States as well. And this fact, in and of itself, made advocacy for republican values and institutions highly problematic. For, as the inhabitants of the island of Hispaniola (Haiti) would soon demonstrate, there was nothing to prevent slaves from embracing these same values and institutions. And, owing to the inherent conflict between republican institutions and values and despotic ones, this made the formation of a more perfect Union that explicitly included despotism a leading problem for those meeting in Philadelphia.
The topic of slavery came up nearly every day during the convention. It came up on July 11, because southern states wished to have their slaves counted (at least numerically) in the formula that would determine representation in the House. It came up on August 8, because Massachusetts’ Rufus King, an outspoken federalist, wished to revisit the matter of importing slaves, which he considered “a most grating circumstance.” King, who had just read a disturbing committee report on the matter, struck a despairing note:
In two great points, the hands of the legislature were absolutely tied. The importation of slaves could not be prohibited. Exports could not be taxed. Is this reasonable? What are the great objects of the general system? First, defence against foreign invasion; secondly, against internal sedition. Shall all the states, then, be bound to defend each, and shall each be at liberty to introduce a weakness which will render defence more difficult? Shall one part of the United States be bound to defend another part, and that other part be at liberty, not only to increase its own danger, but to withhold the compensation for the burden? If slaves are to be imported, shall not the exports produced by their labor supply a revenue the better to enable the general government to defend their masters? . . . Either slaves should not be represented, or exports should be taxable.
Here, again, questions of taxation, representation, and national defense are clearly bound together with the entire matter of slavery. King had earned his wealth from manufacturing, shipping, and investing in public securities. He therefore well appreciated the intimate relationship between slave labor, manufacturing, and the export business. But he also appreciated the expense the federal courts and magistrates had incurred and would continue to incur policing the slave trade and its issue.
At which point, Gouverneur Morris of Pennsylvania was recognized. The only thing striking about Morris’ human geography is the praise it earned from his colleagues.
Compare the free regions of the Middle States, where a rich and noble cultivation marks the prosperity and happiness of the people, with the misery and poverty which overspread the barren wastes of Virginia, Maryland, and the other states having slaves. Travel through the whole continent, and you behold the prospect continually varying with the appearance and disappearance of slavery. The moment you leave the Eastern States, and enter New York, the effects of the institution become visible. Passing through the Jerseys and entering Pennsylvania, every criterion of superior improvement witnesses the change. Proceed southwardly, and every step you take, through the great regions of slaves, presents a desert increasing with the increasing proportion of these wretched beings. Upon what principle is it that the slaves shall be computed in the representation? Are they men? Then make them citizens, and let them vote. Are they property? Why, then, is no other property included?
What precisely Morris meant by referring to the human landscape as a “desert” is difficult to discern. Was he referring to the poverty, the hovels in which slaves were forced to live? Was he referring to the profound lack of commerce, when compared to the north? However, it is Morris’ aim that is truly astounding. Morris was not demanding representation for slaves. Far from it. Rather, he was pointing out that slaves could not and should not be represented (thus depriving the south of dozens of potential representatives in the House).
But then Morris turns to the matter of property and therefore of taxation. Southern states wanted it both ways. When taxation was on the table, they wanted their slaves to be counted as human beings and therefore not taxed. When representation was on the table, they also wanted their slaves to be counted as human beings, at least for purposes of representation. But, then, when the matter of voting was on the table, all bets were off. Suddenly, once again, slaves became property. What other form of property was thus exempted from taxation?
Morris then called attention to what all of the others had ignored. Slavery, he pointed out, is the foundation of aristocracy. “Domestic slavery is the most prominent feature in the aristocratic countenance of the proposed Constitution.” Of course he was right. But, he was not finished. “The vassalage of the poor,” he declared, “has ever been the favorite offspring of aristocracy.” Right again. But he was speaking to a room filled with men for whom “aristocracy” was not necessarily a term of derision. And, so, in conclusion Morris returned to the issues of security, revenue, and representation.
And what is the proposed compensation to the Northern States, for a sacrifice of every principle of right, of every impulse of humanity? They are to bind themselves to march their militia for the defence of the Southern States, for their defence against those very slaves of whom they complain. They must supply vessels and seamen, in case of foreign attack. The legislature will have indefinite power to tax them by excises, and duties on imports, both of which will fall heavier on them than on the southern inhabitants; for the bohea tea used by a northern freeman will pay more tax than the whole consumption of the miserable slave, which consists of nothing more than his physical subsistence and the rag that covers his nakedness. On the other side, the Southern States are not to be restrained from importing fresh supplies of wretched Africans, at once to increase the danger of attack and the difficulty of defence; nay, they are to be encouraged to it, by an assurance of having their votes in the national government increased in proportion; and are, at the same time, to have their exports and their slaves exempt from all contributions for the public service.
Here, finally, Morris tied all the themes together: despotism, security, revenue, and representation.
Aristotle, too, had recognized the interrelationship among these matters. Despotism—or, as we prefer to call it, private enterprise—was the very foundation for the freedom of the republic. Absent the household economy’s generation of wealth, there could be no revenue to share in common. And absent common wealth, there could be no freedom and independence for the citizens of the republic.
There was, however, a huge difference between fourth century Athens and eighteenth century North America. Where Aristotle had viewed the unending accumulation of wealth as a perversion of private enterprise, the framers of the constitution were inclined to praise this potentially endless source of wealth. Here, we cannot overlook the fact that, while the first Library of Congress gave Plato and Aristotle pride of place—first and second under the general heading of “Politics”—it grouped them together with Adam Smith’s Wealth of Nations, which, given its novelty, is quite impressive. Published a mere eleven years earlier, in 1776, Smith’s book had already made it onto the all-time best seller list alongside Plato’s Republic and Aristotle’s Politics.
Evidently the framers still believed that it would be possible to steer clear of the perversions Aristotle had mentioned. They would be able to prevent military courage from being leveraged for private financial gain. They would be able to prevent wars from being waged to accumulate wealth. And they would be able to prevent medicine from becoming a cash cow. But, mostly, they would be able to prevent the despotism of the private household economy from infecting the republic.
Yet, given their discussion of slavery, it already seems clear that republicanism itself was on the block. For, all of the delegates, not excluding Morris, were now entirely comfortable equating the value of human beings with the private labor they performed in one another’s households, as though a republic could possibly be based and built upon such a thoroughly despotic principle. Each of them had taken the measure of a man—a slave—and had made that measure the cornerstone of their so-called republic.
That cornerstone has a name and that name is Civil War. |
The recently discovered Higgs boson, which helps give particles their mass, could have destroyed the cosmos shortly after it was born, causing the universe to collapse just after the Big Bang. But gravity, the force that keeps planets and stars together, might have kept this from happening, scientists say.
In 2012, scientists confirmed the detection of the long-sought Higgs boson, also known by its nickname the "God particle," at the Large Hadron Collider (LHC), the most powerful particle accelerator on the planet. This particle helps give mass to all elementary particles that have mass, such as electrons and the quarks that make up protons. Elementary particles that do not have mass, such as the photons that make up light, do not get mass from the Higgs boson.
The experiments that detected the Higgs boson revealed it had a mass of 125 billion electron-volts, or more than 130 times the mass of the proton. However, this discovery led to a mystery — at that mass, the Higgs boson should have destroyed the universe just after the Big Bang.
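As a rough sanity check on that "more than 130 times" figure, one can divide by the proton mass. The 0.938 GeV value used below is the standard textbook number and is an assumption on my part, not a figure given in the article:

\[ \frac{m_H}{m_p} \approx \frac{125\ \mathrm{GeV}}{0.938\ \mathrm{GeV}} \approx 133 \]

which is indeed a little more than 130 proton masses.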
This is because Higgs particles attract each other at high energies. For this to happen, the energies must be extraordinarily high, "at least a million times higher than the LHC can reach," study co-author Arttu Rajantie, a theoretical physicist at Imperial College London, told Space.com.
Right after the Big Bang, however, there was easily enough energy to make Higgs bosons attract each other. This could have led the early universe to contract instead of expand, snuffing it out shortly after its birth.
"The Standard Model of particle physics, which scientists use to explain elementary particles and their interactions, has so far not provided an answer to why the universe did not collapse following the Big Bang," Rajantie said in a statement.
A number of scientists had suggested that new laws of physics or as-yet-undiscovered particles might have stabilized the universe from the peril posed by the Higgs boson. Now Rajantie and his colleagues have found that gravity could solve this mystery instead.
Gravity is a consequence of masses warping the fabric of space and time. To imagine this, think of how bowling balls deform the rubber mats they sit on.
The early universe was very dense because it had not had a chance to expand much yet. This meant that space-time was greatly curved back then.
The researchers' calculations revealed that when space-time is greatly curved, the Higgs boson increases in mass. This would have also raised the amount of energy needed to make Higgs bosons attract each other, preventing any instability that might have collapsed the early universe.
Now that Rajantie and his colleagues have revealed that the interaction between gravity and the Higgs played a major role in the early universe, they want to learn more about the strength of this interaction. This could include looking at how the early universe developed using data from current and future European Space Agency missions that aim to measure the cosmic microwave background radiation, which constitutes the echoes left over from the Big Bang, Rajantie said. It could also include studying gravitational waves, which are invisible ripples in the fabric of space-time given off by accelerating masses, he said.
The research is detailed in the Nov. 17 edition of the journal Physical Review Letters. |
Day Two (Tuesday 12th May)
Draw a picture and write a sentence about a part of the story you liked. Was there a part you didn't like? Explain why!
Day Three (Wednesday 13th May)
Think back to ‘Jasper’s Beanstalk’. Did it remind you of any other stories? Write down questions that you have about the story.
Day Four (Thursday 14th May)
Practice writing down the days of the week in order. Remember to use a capital letter! Write another list of weekday names, cut them out and play snap or the memory game.
Day Five (Friday 15th May)
Draw a big beanstalk with lots of leaves. Inside the beanstalk and leaves, write down and draw the names of as many insects as you can think of! |
The Greenhouse Effect DVD
Environmental Educational Materials
Environmental Educational Videos
Explore Causes and Effects of Global Warming
Give your students an overview of the theory that combustion of fossil fuels, deforestation, and inefficient agricultural practices are making our planet warmer. A computer model simulating the climate changes Earth may undergo in the future is supported by photography from around the world. You’ll also get a teacher’s guide and suggested class activities. Duration: 17 minutes.
This article covers the basics of digestion from eating to producing a bowel movement.
Do you know when digestion first begins? You might respond "When you chew," but actually, the digestive process begins in your brain upon thinking about the meal you are going to eat. The hypothalamus, a gland at the base of the brain, begins to send signals to the salivary glands in the mouth and to other parts of the digestive tract to prepare the body for the meal.
Once you are sitting down to eat, chewing the food thoroughly is obviously essential, and it is often lacking. The yogis say you should chew once for every tooth you have. If you do this, you'll find your food is a paste before you swallow. This is a good exercise to do if you are seeking better health and digestion. Only by chewing the food into a paste and coating it in salivary enzymes is the beginning of the digestive process optimized. To enhance digestion at this stage, I suggest setting your fork down between bites and chewing food until it is a paste before swallowing.
Enzymes, organic molecules that act like a key unlocking the molecules of food so they can be better broken down and absorbed, are released by salivary glands in the mouth. There are many types of enzymes ranging from ones that break down starches (amylases) to ones that break down fats (lipases), to ones that break down proteins (proteases). When we chew food and coat it in saliva, these enzymes go to work. This mixture is then swallowed and enters the esophagus.
The food, coated with enzymes and saliva, will ideally sit in the bottom of the esophagus and upper stomach for 20-30 minutes. During this time the salivary enzymes are digesting the food and the stomach is creating an acid bath as the stomach expands to receive the meal. To enhance digestion at this phase, you may take a digestive enzyme supplement like Allegany Nutrition's Digestive Enzymes AL90 or AL270 with your meal. This adds extra digestive enzymes to help enhance the breakdown of foods to nutrients.
The acid bath, composed of hydrochloric acid, potassium chloride and sodium chloride, is critical for the digestion of proteins and minerals because it activates pepsin. Without an adequate acid bath in the stomach, absorption of proteins and minerals may be insufficient for optimal health. Acid also kills bad bugs you might pick up from travel or from a bad salad bar. Most important to people who suffer from acid reflux: the acid signals the closing of the esophageal sphincter. Without this closing, the acid that is made will erupt from the stomach when the stomach uses its other digestive action: mechanical churning. The stomach has strong muscles in its lining that help churn the food to support better digestion.
Many people with heartburn or acid reflux have insufficient stomach acid and contrary to popular belief, will get relief from doing things to encourage more acid like:
- Adding 1 Tablespoon of apple cider vinegar to a small glass of water with a meal
- Taking a Betaine HCl supplement with meals to help increase acid production. I suggest Ortho Molecular's Betaine and Pepsin or Empirical Labs Betaine HCl.
- Increasing sea salt in their diet like Baja Gold sea salt. (check out this article for signs you need more salt)
If the above items do not agree with the person, or make the reflux worse, then this is a sign that there is likely too much inflammation occurring in the upper digestive tract for the proper acidic environment to be supported. The solution in this case is to reduce the inflammation. I have found the Metagenics formula Glutagenics to be very helpful in resolving this stage. Once inflammation is reduced, steps to encourage stomach acid production may be resumed.
The stomach churns the food and bathes it in acid until it becomes a liquid we call "chyme." The chyme then is released at the bottom of the stomach into the small intestine. While this process is occurring, the body is taking inventory of the composition of the chyme: is it fat? protein? Whatever kind of enzymes are needed to break down the chyme are sent from the pancreas to the small intestine. The pancreas makes a variety of enzymes to break down various nutrients and will send this "cocktail" to the gut where the enzymes coat the chyme. The pancreas also releases sodium bicarbonate to the small intestine.
Simultaneously, bile is secreted from the liver and gallbladder through the bile duct into the small intestine. Bile helps break fats apart so that the lipase made in the pancreas can break fats down further to be absorbed across the intestinal lining. Bile also helps trigger the bowel movement so that any food waste lower down in the system continues to move out. One other thing to note about bile and sodium bicarbonate is their alkalinity (meaning they're high on the pH scale, the opposite of acid). Alkaline bile and bicarbonate mixing with acidic chyme (it's acidic because it just came from the stomach) creates a pop and sizzle not unlike those volcanoes at elementary science fairs where baking soda and vinegar are combined. This chemical environment allows for the further breakdown of food into basic nutrients that can be moved across the gut lining in the small intestine.
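For readers who want the chemistry behind that "pop and sizzle" analogy, a minimal sketch of the neutralization is below; it uses sodium bicarbonate reacting with the stomach's hydrochloric acid as a representative example (the article itself does not write out an equation):

\[ \mathrm{NaHCO_3 + HCl \longrightarrow NaCl + H_2O + CO_2\uparrow} \]

The carbon dioxide gas is the fizz, and the products are far less acidic than the chyme that left the stomach.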
Many people's issues with digesting fat can be traced back to a problem with making or releasing bile when it's needed. Besides increasing sea salt in the diet (like Baja Gold) and eating adequate fats, there are supplements like BetaPlus by Biotics Research that are very helpful with this process. Our employee, Lucy, wrote a great article all about problems with fat digestion.
When the gallbladder struggles to release the bile, a person may have gallbladder attacks. While most people choose surgery to remove the gallbladder, there are natural remedies and changes in diet that can help some people avoid surgery. I come from a line of women with gallbladder attacks and have experienced them myself, and I have found that natural supplements and diet changes can bring relief and resolution of this issue.
The gut lining is only one cell layer thick. This allows for the easy transfer of nutrients into the blood stream below. In this way, the digestive system is like a filter, only allowing nutrients across while indigestible things and bad bugs we might be exposed to continue to move through the colon to be excreted. Many people experience "leaky gut" or hyper-permeable intestines which sets the stage for food sensitivities, infections, auto-immunity and disease to develop. To learn more about leaky gut, check out these other articles (Depression-leaky gut connection, Down in the Bowels of Leaky Gut). One of my favorite products that helps leaky gut is Restore by Biomic Sciences.
The gut biome comprises the bacteria and other microbes that reside throughout the entire digestive tract. The mouth has different species of bacteria than the esophagus, which is very different from the acid-tolerant bacteria in the stomach, for example. The bacteria throughout the gut help with the breakdown and absorption of food and also with the production of stool to be excreted as a bowel movement from the colon. Due to the use of antibiotics, poor diet, and other stressors, it can be very helpful to take a probiotic supplement to help recover digestive health. I suggest a number of probiotics that can be beneficial in this article: Best Probiotic to Stop Sugar Cravings and Other Top Probiotic Picks.
Nutrients and water from the food continue to be absorbed as the stool is moved from the small intestine through the colon and finally excreted through the anus. When regular bowel movements (such as 2-3 per day) do not occur, the waste from this process can become a stress to the body, which may be overburdened by waste materials that are designed to be moved out but instead become re-absorbed into the bloodstream. The key to addressing bowel regularity lies in supporting the steps of digestion already discussed, and often in diet change. Natural laxatives may remedy the problem but may be habit-forming and cause problems with prolonged use.
Those with loose stools and diarrhea may also need assistance with proper digestive secretions. However, they also typically require a shift in diet and lifestyle to promote a healthier gut biome and reduced inflammation.
There are many other detailed processes that occur with digestion, but this is it in a "nutshell". As a digestion specialist, I'm here to help you with the best foods to promote proper digestion as well as evaluate your chemistry and digestion for signs that you might benefit from natural support. Click here to schedule an appointment. |
Next Article: Elements of a Copyright
Back to: INTELLECTUAL PROPERTY RIGHTS
What are the rights of the holder of a copyright?
Copyrights, like other forms of intellectual property, allow the holder to exclude others from using or copying the protected work. A copyright holder has exclusive rights to the following:
• Reproduce – The holder of a copyright has the ability to reproduce and distribute the protected work. For example, the holder of the copyright may distribute copies of the work or license the work for reproduction in any format.
⁃ Example: The holder of a copyright on lyrics to a song may license the rights to perform the song to a recording artist.
• Derivative Works – The holder may prepare derivative works based on the original work. For example, the holder may employ images, characters, words, or notes from the original work, or adapt the work into other formats.
⁃ Example: The holder of copyrights in an image may incorporate that image in a separate work. Similarly, the holder of copyrights in a song may use any portions of a song (lyrics or notes) in a separate song.
• Distribution – The holder may distribute copies or reproductions of the work by sale, lease or other transfer of ownership. This generally includes the right to commercially publish the protected work and distribute it through any medium or method of commerce.
⁃ Example: Jay-Z has the exclusive right to sell and distribute any of the songs or albums to which he holds the master rights.
• Performance – The holder may publicly perform the work. For example, the holder of a copyright in a song may perform the song in public.
⁃ Example: Michael writes a song. He is the only individual who can perform that song without violating his copyright. Other examples include showings of movies, performances of plays, recitations of literary works or skits, etc.
• Public Display – The holder may publicly display the work.
⁃ Example: An example would include displaying art, such as paintings or photography, in galleries or exhibits.
In summary, the owner of the copyright may sell, assign, or otherwise transfer her copyright. She may also license any of these rights to third parties. A license is any transfer of rights that is less than complete ownership of the copyright.
• Note: A transfer of rights in the copyright must be done in writing and signed by the copyright holder.
• Discussion: How do the rights of the copyright holder compare to those of patent and trademark holders? Do you believe these rights are adequate? Why or why not?
• Practice Question: Frank is a musician. He regularly composes new musical arrangements. As such, he is the copyright holder of these original creative works. Can you explain to Frank the rights associated with his copyrights? |
1. We begin at rest, when neither inspiration nor expiration is occurring. The intrapleural pressure is (and will always be) subatmospheric. There is no movement of air into or out of the lungs because the alveolar and atmospheric pressures are the same.
2. The respiratory centre in the brainstem generates action potentials which stimulate…
3. The diaphragm contracts to increase the vertical diameter of the thoracic cavity. Remember that the parietal pleura is adhered to the superior aspect of the diaphragm and the visceral pleura is adhered to the lung - so when the diaphragm flattens, the volume of the pleural cavity increases and consequently the pressure in the pleural cavity decreases. Note that in forced inspiration, the external intercostal muscles contract to increase the transverse and anteroposterior diameters of the pleural cavity, moving the ribs upwards and outwards. Yet again there is an increase in the volume (and decrease in the pressure) of the pleural cavity.
4. The result? The decreased intrapleural pressure (sometimes called intrathoracic pressure) pulls the visceral pleura (and the lungs) downwards, to sit beside the parietal pleura. This action increases the volume of the alveoli and consequently decreases the alveolar pressure. In fact, the alveolar pressure also becomes subatmospheric. Atmospheric air then rushes into the alveoli, moving down its pressure gradient.
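The pressure-volume relationship underlying steps 3 and 4 is Boyle's law, which the summary above uses implicitly rather than by name: at a given temperature, the pressure and volume of a gas are inversely related, so enlarging the pleural cavity or the alveoli necessarily lowers the pressure inside them.

\[ P_1 V_1 = P_2 V_2 \quad\Rightarrow\quad P_2 = P_1 \frac{V_1}{V_2}, \qquad \text{so } V_2 > V_1 \text{ implies } P_2 < P_1 \]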
‘Intrapleural’ and ‘intrathoracic’ pressures are the same thing, even though they sound like they should be different things!
Dear Ms. Flores and 4th graders,
There’s wind in the Alton, Illinois, forecast this week, so it’s prime time for an answer. In fact, it looks like there’s wind all around Earth and even some gusts out on other planets.
My friend Nic Loyd, a meteorologist at Washington State University, studies all kinds of weather. He told me the simple answer: Wind is moving air.
We thought you might want to know more about how it works.
Wind happens both because of how the Sun heats up the Earth and because of the tiny air molecules that move around us all the time. You can’t see these air molecules, but they still have weight and take up space.
Oxygen and nitrogen make up most of Earth’s air molecules. As air heats up and cools down, it also exerts air pressure, which is like the weight of the air pushing down. You may have experienced a change in air pressure if your ears popped in a car or airplane.
As the Sun heats up the Earth’s surface, differences in air pressure cause air to move. As it moves, it also balances out different air temperatures.
“A lack of wind would mean that some areas would get very hot, while other areas would become very cold,” Loyd explained.
As air warms up, it expands and its molecules spread apart. The air tends to weigh less and so it doesn’t exert too much air pressure. When air is cold, the molecules are packed tighter. The air weighs more and can exert more pressure.
Air usually flows from an area of high pressure to an area of low pressure. The colder air moves in to replace the warm air and we feel the wind.
We know that the warmest place on Earth is around the equator—the invisible belt around the middle of the planet, down by Ecuador, Kenya, and Indonesia. Warm air rises and flows out from the belt. It cools down and sinks as it moves toward the north and south poles.
If the Earth wasn’t spinning, the north winds would just go north and the south winds would go south.
“Of course, because the Earth does spin, things get a little more complicated,” Loyd said.
Wind wants to travel in that one direction, but the Coriolis effect makes it curve to the left or the right. Winds in the northern hemisphere are deflected to the right, and winds in the southern hemisphere are deflected to the left.
As the wind moves, you can hear leaves rustling, watch it lift a kite into the sky, see it fill a ship’s sails, or feel it drive hurricanes and storms. It’s pretty spectacular what tiny, invisible air molecules can do under pressure and changing temperatures.
Taking air in and out of the lungs
Living organisms need oxygen for respiration. Breathing is the process of taking oxygen into the body and breathing out the waste carbon dioxide. The human body breathes by flattening out the diaphragm at the bottom of the lungs, which causes an increase in volume of the chest cavity. Air is then brought into the cavity to take up the extra volume. |
Groups are not lectures; they entail play-based, experiential group work.
Although many children benefit from individual therapy, in some cases group therapy is more effective.
Group therapy is not only cost effective but also contains certain key therapeutic factors:
- In group counselling relationships, children experience the therapeutic releasing qualities of discovering that their peers have problems, too, and a diminishing of the barriers of feeling all alone.
- Group members recognize that other members’ success can be helpful and they develop optimism and a sense of hope and empowerment.
- A feeling of belonging, trust and togetherness develops and new interpersonal skills are learned practically.
- In groups children are afforded the opportunity for immediate reactions and feedback from peers as well as the opportunity for vicarious learning.
- Group members have the opportunity to re-enact critical family/peer dynamics with other group members in a corrective manner.
- Children also develop sensitivity towards others and receive a boost to their self-concept through being helpful to someone else. For children who have poor self-concepts and a life history of experiencing failure, discovering they can be helpful to someone else may be the most profound therapeutic quality possible.
- In groups, children also discover that they are worthy of respect and that their own worth is not dependent on what they do or what they produce, but rather on who they are.
- Group members begin to accept responsibility for life decisions.
Play as a means towards emotional well-being
“To ‘play it out’ is the most natural self-healing measure childhood affords.” – Erikson, 1977
Playing is how children try out and learn about their world and is essential for healthy development. For children, play is serious, purposeful business through which they develop mentally, physically and socially.
Play helps them to discover who they are and who they are not. It teaches them how to identify, understand and manage emotions. Play is the child’s form of self-therapy through which problems, confusions, anxieties and conflicts are often worked through. Through the safety of play, children can try out their own new ways of being, practice roles and explore situations.
Play performs a vital function for the child. It is far more than just frivolous, lighthearted, pleasurable activity that adults usually make of it. Play serves as a symbolic language. Children communicate through play and activity. Play and expressive therapies serve to create a necessary therapeutic distance for clients who are often unable to express their pain in words.
Play and expressive therapies can be effective in overcoming resistance because they are generally non-threatening, engaging and captivating. Play overpowers verbalization, rationalization and/or intellectualization used as defences.
Play and expressive media are effective interventions for traumatized clients since the neuro-biological effects of trauma point to inhibitions on cognitive processing and verbalization. Traumatized children often experience loss of emotional, psychological and even physiological control. A crucial goal for these children is thus empowerment. Play is a child’s work and world. They are the experts in play, and can thus experience and regain a sense of mastery and control through play.
So you built a computer? Before you can play All Your Base or watch “Daisy’s Web Dev Diary”, you need to install an operating system. No, it’s not a medical procedure. An operating system is the interface that connects your computer hardware to applications like your web browser and text editor. You’ve probably heard of Apple OSX and Microsoft Windows. If you have a smartphone, it might be an iOS iPhone or Android. Those are examples of operating systems. Our favorite is Linux.
How Does an Operating System Work?
Think of your computer like a candy-coated piece of milk chocolate. At the core is the milk chocolate kernel. You can’t communicate directly with the kernel because it will melt in your hand. So you need a hard candy user interface, or shell. The shell is how you tell your computer what to do. The kernel decides the best way to do it. It controls the CPU for program execution, RAM for memory management, and interacts with hardware devices. There are two types of shell you can use to access the kernel: command line interface (CLI) and graphical user interface (GUI). A CLI looks like this:
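(The original screenshot is not reproduced here, so the lines below are a hypothetical sketch of a shell prompt with a couple of common commands; the directory and file names are invented for illustration.)

$ pwd
/home/daisy
$ ls
projects  notes.txt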
It's here that you enter text at the prompt. In this case, our prompt is a dollar sign. You can also run multiple commands in a file together. This is called a shell script.
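Here is a minimal sketch of such a script, assuming a POSIX-style shell; the file name, the files it touches, and the commands chosen are purely illustrative:

#!/bin/sh
# backup.sh - copy notes.txt into a dated backup file
mkdir -p backups                              # create the backups folder if it doesn't exist
cp notes.txt "backups/notes-$(date +%F).txt"  # %F expands to YYYY-MM-DD
echo "Backup complete."

You could run it with sh backup.sh, or mark it executable with chmod +x backup.sh and then run ./backup.sh.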
A GUI, on the other hand, uses windows and icons to navigate your computer’s file system, which makes it a friendly entry point for beginners. The command line might look boring, but as you become a superuser, you will find that some tasks are more easily accomplished with the CLI.
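As one hypothetical example of the kind of job that is quicker at the prompt than by pointing and clicking, searching for text and renaming files each take a single line; the paths and file names here are made up:

$ grep -r "TODO" ~/projects          # list every file under ~/projects containing "TODO"
$ mv report.txt report-final.txt     # rename a file without opening a single window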
The Science Of Sanitizing
CDC recommends washing hands with soap and water whenever possible because handwashing reduces the amounts of all types of germs and chemicals on hands. But if soap and water are not available, using a hand sanitizer with at least 60% alcohol can help you avoid getting sick and spreading germs to others. The guidance for effective handwashing and use of hand sanitizer in community settings was developed based on data from a number of studies.
Alcohol-based hand sanitizers can quickly reduce the number of microbes on hands in some situations, but sanitizers do not eliminate all types of germs.
Why? Soap and water are more effective than hand sanitizers at removing certain kinds of germs, like Cryptosporidium, norovirus, and Clostridium difficile 1-5. Although alcohol-based hand sanitizers can inactivate many types of microbes very effectively when used correctly 1-15, people may not use a large enough volume of the sanitizers or may wipe it off before it has dried 14.
Hand sanitizers may not be as effective when hands are visibly dirty or greasy.
Why? Many studies show that hand sanitizers work well in clinical settings like hospitals, where hands come into contact with germs but generally are not heavily soiled or greasy 16. Some data also show that hand sanitizers may work well against certain types of germs on slightly soiled hands 17,18. However, hands may become very greasy or soiled in community settings, such as after people handle food, play sports, work in the garden, or go camping or fishing. When hands are heavily soiled or greasy, hand sanitizers may not work well 3,7,16. Handwashing with soap and water is recommended in such circumstances.
Hand sanitizers might not remove harmful chemicals, like pesticides and heavy metals, from hands.
Why? Although few studies have been conducted, hand sanitizers probably cannot remove or inactivate many types of harmful chemicals. In one study, people who reported using hand sanitizer to clean hands had increased levels of pesticides in their bodies 19. If hands have touched harmful chemicals, wash carefully with soap and water (or as directed by a poison control center).
If soap and water are not available, use an alcohol-based hand sanitizer that contains at least 60% alcohol.
Why? Many studies have found that sanitizers with an alcohol concentration between 60–95% are more effective at killing germs than those with a lower alcohol concentration or non-alcohol-based hand sanitizers 16,20. Hand sanitizers without 60-95% alcohol 1) may not work equally well for many types of germs; and 2) merely reduce the growth of germs rather than kill them outright.
When using hand sanitizer, apply the product to the palm of one hand (read the label to learn the correct amount) and rub the product all over the surfaces of your hands until your hands are dry.
Swallowing alcohol-based hand sanitizers can cause alcohol poisoning.
From 2011 to 2015, U.S. poison control centers received nearly 85,000 calls about hand sanitizer exposures among children 25. Children may be particularly likely to swallow hand sanitizers that are scented, brightly colored, or attractively packaged. Hand sanitizers should be stored out of the reach of young children and should be used with adult supervision. Child-resistant caps could also help reduce hand sanitizer-related poisonings among young children 24. Older children and adults might purposefully swallow hand sanitizers to become drunk 26.