The airship USS Akron was built for the United States Navy by the Goodyear Tire and Rubber Company; at the time, it was the largest airship ever built in the United States. The Navy began flying the Akron in 1931, mostly around Lake Erie and the upper Midwest. On October 10, 1931, the Akron was scheduled to fly from Detroit to Huntington, West Virginia. What happened that day remains unexplained.
In an age of radio and newspapers, people along the flight path of the USS Akron waited with anticipation just to get a brief glimpse of the mighty airship. The residents of Gallipolis, Ohio, were no exception. They scanned the sky for the Akron, and then they saw it. The Akron was approaching; it had already been seen by witnesses in Point Pleasant, just across the Ohio River.
Then, to everyone's shock, the USS Akron exploded and crashed into the rugged West Virginia terrain. Witnesses said it appeared that three people managed to parachute to safety. An extensive search found nothing, and it was soon revealed that whatever observers on the ground had seen, it was not the USS Akron. The airship was still in Detroit; the Navy had canceled the flight due to bad weather along its flight path.
Less than two years later, in the early morning hours of April 4, 1933, the USS Akron was flying off the coast of New Jersey when it encountered a sudden and powerful storm. The captain ordered full speed ahead, but the Akron could not outrun the storm. The airship exploded and crashed into the Atlantic. Of the 76 men on board, only three survived.
The USS Akron, as it turns out, was not the first airship built by Goodyear to carry the name Akron. In 1911, the company built a blimp that was to be used in an attempt to cross the Atlantic Ocean. The first try failed, and on the second attempt disaster struck. While off the coast of New Jersey, the old Akron exploded and crashed into the sea, very near the location where the USS Akron would meet its end about 20 years later.
A novel telescope that uses the Antarctic ice sheet as its window to the cosmos has produced the first map of the high-energy neutrino sky.
The map, unveiled for astronomers here today (July 16, AEST) at a meeting of the International Astronomical Union, provides astronomers with their first tantalizing glimpse of very high energy neutrinos, ghostly particles that are believed to emanate from some of the most violent events in the universe – crashing black holes, gamma ray bursts, and the violent cores of distant galaxies.
"This is the first data with a neutrino telescope with realistic discovery potential," said Francis Halzen, a University of Wisconsin-Madison professor of physics, of the map compiled using AMANDA II, a one-of-a-kind telescope built with support from the National Science Foundation (NSF) and composed of arrays of light-gathering detectors buried in ice 1.5 kilometers beneath the South Pole. "To date, this is the most sensitive way ever to look at the high-energy neutrino sky."
The ability to detect high-energy neutrinos and trace them back to their points of origin remains one of the most important quests of modern astrophysics.
Because cosmic neutrinos are invisible, uncharged, and have almost no mass, they are next to impossible to detect. Unlike photons, the particles that make up visible light and other kinds of radiation, neutrinos can pass unimpeded through planets, stars, the vast magnetic fields of interstellar space, and even entire galaxies. That quality, which makes them very hard to detect, is also their greatest asset: the information they harbor about cosmologically distant and otherwise unobservable events remains intact.
The map produced by AMANDA II is preliminary, Halzen emphasized, and represents only one year of data gathered by the icebound telescope. Using two more years of data already harvested with AMANDA II, Halzen and his colleagues will next define the structure of the sky map and sort out potential signals from statistical fluctuations in the present map to confirm or disprove them.
The significance of the map, according to Halzen, is that it proves the detector works. "It establishes the performance of the technology and it shows that we have reached the same sensitivity as telescopes used to detect gamma rays in the same high-energy region" of the electromagnetic spectrum. Roughly equal signals are expected from objects that accelerate cosmic rays, whose origins remain unknown nearly a century after their discovery.
Sunk deep into the Antarctic ice, the AMANDA II (Antarctic Muon and Neutrino Detector Array) Telescope is designed to look not up, but down, through the Earth to the sky in the Northern Hemisphere. The telescope consists of 677 glass optical modules, each the size of a bowling ball, arrayed on 19 cables set deep in the ice with the help of high-pressure hot water drills. The array transforms a cylinder of ice 500 meters in height and 120 meters in diameter into a particle detector.
The glass modules work like light bulbs in reverse. They detect and capture faint and fleeting streaks of light created when, on occasion, neutrinos crash into ice atoms inside or near the detector. The subatomic wrecks create muons, another species of subatomic particle that, conveniently, leaves an ephemeral wake of blue light in the deep Antarctic ice. The streak of light matches the path of the neutrino and points back to its point of origin.
The map will be of intense interest to astronomers because it provides the first glimpse of the high-energy neutrino sky and because, said Halzen, "we still have no clue how cosmic rays are accelerated or where they come from."
The fact that AMANDA II has now identified neutrinos up to one hundred times the energy of the particles produced by the most powerful earthbound accelerators raises the prospect that some of them may be kick-started on their long journeys by some of the most supremely energetic events in the cosmos. The ability to routinely detect high-energy neutrinos will provide astronomers not only with a lens to study such bizarre phenomena as colliding black holes, but with a means to gain direct access to unedited information from events that occurred hundreds of millions or billions of light years away and eons ago.
"This map could hold the first evidence of a cosmic accelerator," Halzen said. "But we are not there yet."
The hunt for sources of cosmic neutrinos will get a boost as the AMANDA II Telescope grows in size as new strings of detectors are added. Plans call for the telescope to grow to a cubic kilometer of instrumented ice. The new telescope, to be known as IceCube, will make scouring the skies for cosmic neutrino sources highly efficient.
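For a rough sense of the planned scale-up, here is a back-of-the-envelope sketch (using only the cylinder dimensions quoted above, not official project figures) comparing AMANDA II's instrumented volume with the cubic kilometer planned for IceCube:

```python
import math

# Illustrative estimate based on the dimensions quoted in this article:
# a cylinder of ice 500 meters tall and 120 meters in diameter.
height_m = 500.0
diameter_m = 120.0

volume_m3 = math.pi * (diameter_m / 2) ** 2 * height_m  # ~5.7 million cubic meters
volume_km3 = volume_m3 / 1e9                            # ~0.006 cubic kilometers

print(f"AMANDA II instrumented volume: {volume_km3:.4f} km^3")
print(f"A 1 km^3 detector such as IceCube would be roughly {1 / volume_km3:.0f} times larger")
```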
"We will be sensitive to the most pessimistic theoretical predictions," Halzen said.
"Remember, we are looking for sources, and even if we discover something now, our sensitivity is such that we would see, at best, on the order of 10 neutrinos a year. That's not good enough."
Materials provided by CSIRO Australia.
April 3, 2014
Knowing the difference between primary and secondary sources is essential for students working on research projects. In the video below, LeFever from Commoncraft explains this difference using the example of the Great Storm that hit England in the 18th century. Here is how he went about doing it:
To learn about this storm, students will use two major sources of information to establish the facts and represent the most accurate version of the event:
Primary sources are first-hand accounts of exactly what happened, told by the people who directly experienced and lived through the event. These accounts can come in different formats, such as documents, memoirs, interview scripts, scientific documents, and many more. These primary materials are the best sources for writing a research paper on the event under study, as they come from people who were actually there when the event happened.
However, while primary sources are essential for doing research on this event, students may also draw on information provided by secondary sources.
Secondary sources include documentation and analysis of primary sources and other relevant information after the fact. For instance, a research paper in which the researcher gathered and analyzed the different primary sources about the event and came up with new insights through comparing the event with other similar events in history is considered a secondary source.
Watch the video to learn more about the difference between primary and secondary sources.
Supernovas Begin the Third Evening
Some stars flare up and dim periodically. From afar a star that suddenly flares up may seem to be a new star, a “nova,” because before the star became so bright perhaps it was too dim to see. But a star that flares up brighter than a galaxy is a supernova. A supernova exploded 150,000 years ago in a nearby companion of our galaxy, the Large Magellanic Cloud, and the light of its flare-up reached us in 1987. Some five or six billion years ago, other supernovas exploded, and dispersed their insides over a great region in space. The material quickly cooled and the light of the star remnants went out. The cooled interiors of the stars were just a dry dust.
When a hot iron is taken out of the fire and quenched in water, the light disappears with the heat. In a similar way, the extinction of the star remnants and the sudden cooling of their material brought on the third evening. The dust contained the iron that would later be the center of the Earth. The dust also had all the elements found in the crust of the Earth. They provide a rich chemistry, the basis of life.
We cannot photograph the end of our own second morning and the beginning of our own third evening, because the Earth itself is made of material ejected from other stars that exploded long ago. The photograph taken in 1987 of a supernova shows the end of the second morning and the beginning of the third evening for some other planet near some other star.
This chart illustrates the relative masses of super-dense cosmic objects, ranging from white dwarfs to the supermassive black holes found at the cores of most galaxies. The first three "dead" stars at left all form when stars more massive than our sun explode; the more massive the star, the more massive the stellar remnant, or compact object, left behind.
While neutron stars -- which are created from the explosions of stars more than about 10 times the mass of the sun -- are low in mass compared to black holes, they are still quite hefty and incredibly dense. A spoonful of a neutron star would weigh as much as all of the humans on Earth.
Researchers suspect that a class of intermediate-mass black holes exists, with masses up to more than 100,000 times that of our sun, but their existence has yet to be confirmed.
Supermassive black holes at the hearts of galaxies are formed together with their nascent galaxies out of giant, collapsing clouds of matter. They can weigh up to the equivalent of 10 billion or more suns. Like all of the objects depicted in this chart, supermassive black holes grow in size as they gorge on surrounding matter.
Signs and Symptoms of Influenza
• Fever
• Cough
• Sore Throat
• Body Aches
• Fatigue
In addition, some people have reported diarrhea and vomiting associated with influenza.
Symptoms often begin suddenly. Fever typically will last a few days, and cough and fatigue the better part of one week, though some symptoms may persist longer than one week.
A small number of people experience complications such as sinus infection, bronchitis, and pneumonia.
Good Health Habits to Help Stop Spread of Flu Virus
Here are the best ways to avoid getting or spreading influenza:
• Practice good hand hygiene by washing your hands with soap and water, especially after coughing or sneezing. Alcohol-based hand cleaners also are effective.
• Practice respiratory etiquette by covering your mouth and nose with a tissue when you cough or sneeze. If you don’t have a tissue, cough or sneeze into your elbow or shoulder, not into your hands. Avoid touching your eyes, nose, or mouth.
• Know the signs and symptoms of the flu. A fever is a temperature taken with a thermometer that is equal to or greater than 100 degrees Fahrenheit or 38 degrees Celsius. Look for possible signs of fever: if the person feels very warm, has a flushed appearance, or is sweating or shivering.
• Stay home if you have flu or flu-like illness for at least 24 hours after you no longer have a fever (100 degrees Fahrenheit or 38 degrees Celsius) or signs of a fever (have chills, feel very warm, have a flushed appearance, or are sweating). This should be determined without the use of fever-reducing medications (any medicine that contains ibuprofen or acetaminophen). Don’t go to class or work.
• Be vaccinated annually for influenza. The CDC recommends that everyone 6 months of age and older should get vaccinated against the flu. It is the first and most important step in protecting against flu viruses. People at highest risk for complications include young children, pregnant women, people with chronic health conditions like asthma, diabetes, or heart and lung disease, and people 65 years and older.
CDC Celebrates World No Tobacco Day
Several actions have been taken to reduce tobacco use. Learn how you can help eliminate smoking as the leading preventable cause of disease and death in the world and save lives.
What Is World No Tobacco Day?
World No Tobacco Day (WNTD) is an annual awareness day celebrated around the globe that:
- Draws worldwide attention to the tobacco epidemic
- Highlights the need for effective policies to reduce tobacco use
World No Tobacco Day is celebrated each year on May 31 and is sponsored by the World Health Organization (WHO).
What Is the Focus of This Year's Event?
For WNTD 2014, WHO and its partners are calling upon countries to raise taxes on tobacco products. This recommendation is among several actions identified in the 2014 Surgeon General's Report, The Health Consequences of Smoking: 50 Years of Progress, for reducing tobacco use more quickly and dramatically.
How Smoking Rates Fell After Turkey Raised Taxes and Prices on Cigarettes
As reported in the May 30, 2014, issue of Morbidity and Mortality Weekly Report, the average price paid per 20 cigarettes in Turkey increased by 42% between 2008 and 2012. The rise in prices reflected a 2010 increase in the country's Special Consumption Tax on Tobacco. As the cost of cigarettes increased, the average smoking rate fell from 30.1% in 2008 to 25.7% in 2012, a relative decline of 14.6%.
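To make the arithmetic behind the 14.6% figure explicit, here is a minimal sketch (not part of the original report) showing that the decline is measured relative to the 2008 rate, not as a drop of 14.6 percentage points:

```python
# Check of the Turkey smoking-rate figures quoted above.
rate_2008 = 30.1  # adult smoking rate in 2008, percent
rate_2012 = 25.7  # adult smoking rate in 2012, percent

absolute_drop = rate_2008 - rate_2012            # 4.4 percentage points
relative_drop = absolute_drop / rate_2008 * 100  # ~14.6% of the 2008 rate

print(f"Absolute drop: {absolute_drop:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}%")
```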
Following the 2010 increase in tobacco taxes:
- The average price paid for cigarettes increased.
- Cigarettes became less affordable.
- A significant drop in smoking rates occurred, with the largest reduction occurring among people of low socioeconomic status.
This shows the potential role tobacco tax increases can play in helping to reduce health gaps between different socioeconomic groups by reducing tobacco use—especially among low-income populations.
What Proven Strategies Work to Reduce U.S. Smoking Rates?
Smoking has been the number one cause of preventable death and disease in this country for decades. The death and disease from tobacco—which claims more than 480,000 lives each year—is overwhelmingly caused by cigarettes and other burned tobacco products.
There are many ways to reduce smoking rates quickly and dramatically. Among those strategies proven to work are:
- Higher prices on cigarettes and other tobacco products that discourage young people from starting in the first place and encourage adult smokers to quit
- Affordable quit-smoking treatments that are easily available to people who want to quit
- Comprehensive smokefree and tobacco-free policies in public places that protect nonsmokers and make smoking the exception rather than the norm
- Mass media campaigns, such as CDC's Tips From Former Smokers campaign, that inform people of the dangers of smoking and tell them about resources to help them quit
- State and community programs that help integrate tobacco control into medical, retail, education, and public health environments that reach groups of people who might not otherwise be exposed to tobacco control initiatives
How Can We Accelerate Progress?
The good news is that:
- Smoking rates have been cut in half in the United States since 1964.
- More recently, other countries, such as Turkey, have also cut smoking rates.
The challenge is that the current rate of progress to reduce tobacco use in the United States is not fast enough.
As indicated in The Health Consequences of Smoking: 50 Years of Progress, progress can be accelerated by:
- Raising the average excise cigarette taxes to prevent youth from starting smoking and encouraging smokers to quit
- Expanding national media campaigns so that ads air more frequently and for longer periods of time (12 months a year for at least the next 10 years)
- Extending proven programs and policies to more states and cities to make smoking less accessible, less affordable, and less attractive
- Helping everyone who wants to quit by providing quit-smoking resources that are readily available and affordable
- Making cigarettes less addictive and less appealing to youth by using federal regulatory authority
- Expanding tobacco control and prevention research efforts
- Fully funding comprehensive statewide tobacco control programs at CDC-recommended levels
- Extending comprehensive smokefree indoor protections to 100% of the U.S. population
A combined approach—implementing proven strategies and those listed above—has the potential to:
- Save millions of lives in the coming decades
- Keep hundreds of millions of people from suffering the effects of tobacco use
- Eliminate smoking as the leading preventable cause of death and disease
The following Web sites provide free, accurate, evidence-based information and professional assistance to help support the immediate and long-term needs of people trying to quit smoking. If you want to quit, here's where you can find help:
- Tips From Former Smokers Web site provides more information about the Tips campaign, including additional videos and links to podcasts by participants.
- CDC's Smoking & Tobacco Use Web site is CDC's one-stop shop for information about tobacco and smoking cessation.
- BeTobaccoFree.gov is the Department of Health and Human Services' comprehensive Web site that provides one-stop access to tobacco-related information from across its agencies. This consolidated resource includes general information on tobacco as well as federal and state laws and policies, health statistics, and evidence-based methods on how to quit.
- Smokefree.gov provides free, accurate, evidence-based information and professional assistance to help support the immediate and long-term needs of people trying to quit smoking.
- SmokefreeWomen provides free, accurate, evidence-based information and professional assistance to help support the immediate and long-term needs of women trying to quit smoking.
- Quit Tobacco: Make Everyone Proud is a Department of Defense-sponsored Web site for military personnel and their families.
- Smokefree Teen (SfT) is a site devoted to helping teens quit smoking.
- SmokefreeTXT is a teen texting site.
- espanol.smokefree.gov is a Spanish-language quitting site.
- How to Quit provides more useful information from CDC to help you quit.
- Million Hearts™ is a national initiative to prevent 1 million heart attacks and strokes by 2017. Million Hearts™ brings together communities, health systems, nonprofit organizations, federal agencies, and private-sector partners from across the country to fight heart disease and stroke.
Who Were the Gracchi?
The Gracchi, Tiberius Gracchus and Gaius Gracchus, were Roman brothers who tried to reform Rome's social and political structure to help the lower classes, in the 2nd century B.C.
Events surrounding the politics of the Gracchi led to the decline and eventual fall of the Roman Republic. From the Gracchi to the end of the Roman Republic, personalities dominated Roman politics; major battles were not with foreign powers, but civil. The period of the decline of the Roman Republic begins with the Gracchi meeting their bloody ends and ends with the assassination of Caesar. This was followed by the rise of the first Roman emperor, Augustus Caesar.
Gracchi is the plural of Gracchus.
Family of the Gracchi
The mother of Tiberius (168-133 B.C.) and Gaius (159-121 B.C.) was Cornelia (c. 190-100 B.C.), who was held up as a paragon of Roman womanly virtue. Their father, Tiberius Sempronius Gracchus, had been consul twice and had even been censor in 169 B.C.
The Death and Suicide of the Gracchi
To Tiberius Gracchus, the biggest problem was that there were not enough small farmers. He wanted to give land to some of the many free, poor unemployed. These men would be happy to farm, but there wasn't enough land, so he proposed that the state should take over land held illegally by large landholders and distribute it to the poor. Unfortunately for the plan of Tiberius, the people illegally holding the land were the powerful nobles whose families had held the land for generations. They didn't want to give it up.
In 133 B.C., Tiberius Gracchus was killed during rioting. Gaius Gracchus took up his brother's reform issues when he became tribune in 123 B.C., 10 years after Tiberius' death. He created a coalition of poor free men and equestrians who were willing to go along with his proposals. His political successes enraged the nobles. Gaius Gracchus lost control of his coalition and, faced with armed opposition, fell on a slave's sword.
Date: 1901 – 1940s
Location: Texas, United States
Also known as: Gusher Age
The Texas oil boom, sometimes called the gusher age, was a period of dramatic change and economic growth in the U.S. state of Texas during the early 20th century that began with the discovery of a large petroleum reserve near Beaumont, Texas. The find was unprecedented in its size and ushered in an age of rapid regional development and industrialization that has few parallels in U.S. history. Texas quickly became one of the leading oil producing states in the U.S., along with Oklahoma and California; soon the nation overtook the Russian Empire as the top producer of petroleum. By 1940 Texas had come to dominate U.S. production. Some historians even define the beginning of the world's Oil Age as the beginning of this era in Texas.
The major petroleum strikes that began the rapid growth in petroleum exploration and speculation occurred in Southeast Texas, but soon reserves were found across Texas and wells were constructed in North Texas, East Texas, and the Permian Basin in West Texas. Although limited reserves of oil had been struck during the 19th century, the strike at Spindletop near Beaumont in 1901 gained national attention, spurring exploration and development that continued through the 1920s and beyond. Spindletop and the Joiner strike in East Texas, at the outset of the Great Depression, were the key strikes that launched this era of change in the state.
This period had a transformative effect on Texas. At the turn of the century, the state was predominantly rural with no large cities. By the end of World War II, the state was heavily industrialized, and the populations of Texas cities had broken into the top 20 nationally. The city of Houston was among the greatest beneficiaries of the boom, and the Houston area became home to the largest concentration of refineries and petrochemical plants in the world. The city grew from a small commercial center in 1900 to one of the largest cities in the United States during the decades following the era. This period, however, changed all of Texas' commercial centers and developed the Beaumont/Port Arthur area, where the boom began.
H. Roy Cullen, H. L. Hunt, Sid W. Richardson, and Clint Murchison were the four most influential businessmen during this era. These men became among the wealthiest and most politically powerful in the state and the nation.
Several events in the 19th century have been regarded as a beginning of oil-related growth in Texas, one of the earliest being the opening of the Corsicana oil field in 1894. The Spindletop strike of 1901, at the time the world's most productive petroleum well ever found, is considered by most historians as the beginning point. This single discovery began a rapid pattern of change in Texas and brought worldwide attention to the state.
By the 1940s, the Texas Railroad Commission, which had been given regulatory control of the Texas oil industry, managed to stabilize American oil production and eliminate most of the wild price swings that were common during the earlier years of the boom. Many small towns, such as Wortham, which had become boomtowns during the 1920s, saw their booms end in the late 1920s and early 1930s as their local economies, dependent on relatively limited petroleum reservoirs, collapsed. As production peaked in some of these smaller fields and the Great Depression lowered demand, investors fled. In the major refining and manufacturing centers such as Beaumont, Houston, and Dallas, the boom continued to varying degrees until the end of World War II. By the end of the war, the economies of the major urban areas of the state had matured. Though Texas continued to prosper and grow, the extreme growth patterns and dramatic socioeconomic changes of the earlier years largely subsided as the cities settled into more sustainable patterns of growth. Localized booms in West Texas and other areas, however, continued to transform some small communities during the post-war period.
Following the American Civil War, Texas's economy began to develop rapidly, centered heavily on cattle ranching and cotton farming, and later lumber. Galveston became the world's top cotton shipping port and Texas' largest commercial center. By 1890, however, Dallas had exceeded Galveston's population, and in the early 1900s the Port of Houston began to challenge Galveston's dominance.
In 1900 a massive hurricane struck Galveston, destroying much of the city. That storm and another in 1915 shifted the focus of investors away from Galveston and toward nearby Houston, which was seen as a safer location for commercial operations. Because of these events, the coming oil boom became heavily centered on the city of Houston, both as a port and a commercial center.
In the 1850s, the process to distill kerosene from petroleum was invented by Abraham Gesner. Demand for petroleum as a fuel for lighting quickly grew around the world. Petroleum exploration developed in many parts of the world, with the Russian Empire, particularly the Branobel company in Azerbaijan, taking the lead in production by the end of the 19th century.
In 1859, Edwin Drake of Pennsylvania invented a drilling process to extract oil from deep within the earth. Drake's invention is credited with giving birth to the oil industry in the U.S. The first oil refinery in the United States opened in 1861 in Western Pennsylvania, during the Pennsylvania oil rush. Standard Oil, which had been founded by John D. Rockefeller in Ohio, became a multi-state trust and came to dominate the young petroleum industry in the U.S.
Texans knew of the oil that lay beneath the ground in the state for decades, but this was often seen more as a problem than a benefit because it hindered the digging of water wells. Rancher William Thomas Waggoner (1852–1934), who later became an influential oil businessman in Fort Worth, struck oil while drilling for water in 1902. He was quoted as having said, "I wanted water, and they got me oil. I tell you I was mad, mad clean through. We needed water for ourselves and for our cattle to drink."
Despite the negative associations with oil among many ranchers and farmers, demand for kerosene and other petroleum derivatives drove oil prospecting in Texas after the American Civil War, at known oil-producing springs and through accidental finds while drilling for water. One of the first significant wells in Texas was developed near the town of Oil Springs, near Nacogdoches. The site began production in 1866. The first oilfield in Texas with a substantial economic impact was developed in 1894 near Corsicana. In 1898, the state's first modern refinery was built to serve the field. The success of the Corsicana field and increasing demand for oil worldwide led to more exploration around the state.
In 1879, Karl Benz was granted the first patent on a reliable gasoline-powered engine in Germany. In 1885, he produced the first true gasoline automobile, the Benz Patent Motorwagen. The new invention was quickly refined and gained popularity in Germany and France, and interest grew in the United Kingdom and the United States. In 1902, Ransom Olds created the production line concept for mass-producing lower-cost automobiles. Henry Ford soon refined the concept so that by 1914, middle-class laborers could afford automobiles built by Ford Motor Company.
Automobile production exploded in the U.S. and in other nations during the 1920s. This, and the increasing use of petroleum derivatives to power factories and industrial equipment, substantially increased worldwide demand for oil.
After years of failed attempts to extract oil from the salt domes near Beaumont, a small enterprise known as the Gladys City Oil, Gas, and Manufacturing Company was joined in 1899 by Croatian/Austrian mechanical engineer Anthony F. Lucas, an expert in salt domes. Lucas joined the company in response to the numerous ads the company's founder Pattillo Higgins placed in industrial magazines and trade journals. Lucas and his colleagues struggled for two years to find oil at a location known as Spindletop Hill before making a strike in 1901. The new well produced approximately 100,000 barrels of oil per day, an unprecedented level of production at the time. The 1902 total annual production at Spindletop exceeded 17 million barrels. The state's total production in 1900 had been only 836,000 barrels. The overabundance of supply led oil prices in the U.S. to drop to a record low of 3 cents per barrel, less than the price of water in some areas.
Beaumont almost instantly became a boomtown with investors from around the state and the nation participating in land speculation. Investment in Texas speculation in 1901 reached approximately $235 million US (approximately $6.7 billion in present-day terms). The level of oil speculation in Pennsylvania and other areas of the United States was quickly surpassed by the speculation in Texas. The Lucas gusher itself was short-lived; production fell to 10,000 barrels per day by 1904. The strike, however, was only the beginning of a much larger trend.
Exploration of salt domes across the plains of the Texas Gulf Coast took off with major oil fields opening at Sour Lake in 1902, Batson in 1903, Humble in 1905, and Goose Creek (modern Baytown) in 1908. Pipelines and refineries were built throughout much of Southeast Texas, leading to substantial industrialization, particularly around Houston and the Galveston Bay. The first offshore oilfield in the state opened in 1917 at Black Duck Bay on the Goose Creek field, although serious offshore exploration did not begin until the 1930s.
Initially, oil production was conducted by many small producers. The early exploration and production frenzy produced an unstable supply of oil, which often resulted in overproduction. In the early years, a few major finds led to easy availability and major drops in prices, but were followed by limited exploration and a sudden spike in prices as production dwindled. The situation led exploration to spread into the neighboring states of Oklahoma, Louisiana, and Arkansas, which competed with Texas for dominance in oil production. The strike at Glenn Pool near Tulsa, Oklahoma in 1905 established Tulsa as the leading U.S. oil production center until the 1930s. Though Texas soon lagged behind Oklahoma and California, it was still a major producer.
During the late 1910s and 1920s, oil exploration and production continued to expand and stabilize. Oil production became established in North Texas, Central Texas, the Panhandle, and the Permian Basin in western Texas. The finds in North Texas, beginning with the 1917 strike in Ranger west of Dallas-Fort Worth, were particularly significant, bringing substantial industrialization to the area. Texas soon became dominant as the nation's leading oil producer. By 1940, Texas production was twice that of California, the next largest U.S. producer.
In 1930, Columbus Marion Joiner, a self-educated prospector, discovered the East Texas Oil Field, the largest oil discovery that had ever been made. Because East Texas had not been significantly explored for oil before then, numerous independent prospectors, known as "wildcatters", were able to purchase tracts of land to exploit the new field. This new oil field helped to revive Dallas's economy during the Great Depression, but sharply decreased interest in West Texas as the new supply led to another major drop in oil prices. The uncontrolled production in the eastern field destabilized the state's oil industry, which had been trying to control production levels to stabilize prices. Overproduction in East Texas was so great that then-governor Ross Sterling attempted to shut down many of the wells. During one of the forced closures, he ordered the Texas National Guard to enforce the shutdown. These efforts at controlling production, intended to protect both the independent operators and the major producers, were largely unsuccessful at first and led to widespread oil smuggling. In the later 1930s, the federal government intervened and brought production to sustainable levels, leading to a stabilization of price fluctuation. The income provided by the stabilization allowed less populated West Texas and the Panhandle to be more fully explored and exploited.
The first refining operations at Corsicana were built by Joseph S. Cullinan, a former manager for Standard Oil in Pennsylvania. His company, which was later absorbed by Magnolia Petroleum Company and then acquired by Standard Oil of New York, built the first modern refinery west of the Mississippi River. Following the strike at Spindletop, Cullinan partnered with Arnold Schlaet to form the Texas Fuel Company in Beaumont with funding from an investment group run by former Texas governor James S. Hogg and other investors. In 1905, as the new company rapidly expanded its operations, it moved its corporate headquarters to Houston. The company's strength in the oil industry established Houston as the center of the industry in Texas. The company was later absorbed into the Texas Company and then renamed Texaco.
The interests in the Lucas operation at Spindletop were purchased by J. M. Guffey and his associates, creating the Guffey Petroleum Company and Gulf Refining Company of Texas. These companies later became Gulf Oil Corporation, which in turn was bought by Chevron. Guffey's company became the largest oil producer in the state during the boom period. Standard Oil initially chose not to become directly involved in oil production in Texas, and instead formed Security Oil Company as a refining operation utilizing Guffey-Gulf and Texas Company as suppliers. Following state lawsuits related to anti-trust statutes, Security Oil was reorganized into Magnolia Petroleum Company in 1911. That same year, the Humble Oil Company (today Exxon Corporation) was formed by Ross Sterling and Walter William Fondren in Humble, Texas. The headquarters were moved to Houston, and the company eventually sold half of its shares to Standard Oil of New Jersey, establishing a long-term partnership that lasted for decades. The company built the Baytown Refinery, which became Texas' largest refining operation. In the post-World War II period, Humble became the largest crude oil transporter in the United States, and built pipelines connecting Baytown to Dallas-Fort Worth and West Texas to the Gulf of Mexico.
In spite of the few major operations, the first decade of the boom was dominated by numerous small producers. As production expanded and new companies were formed, consolidation occurred. By the late 1920s, ten companies produced more than half of the oil in the state: Gulf Production Company, Humble Oil and Refining Company, Southern Crude Oil Purchasing Company (later absorbed by Amoco which was later absorbed by BP), the Texas Company, Shell Petroleum Corporation, Yount-Lee Oil Company, Magnolia Petroleum Company, J. K. Hughes Oil Company, Pure Oil Company, and Mid-Kansas Oil and Gas Company (later Marathon Oil).
During the 1930s, a Dallas company known as the General American Finance System, struggling through the Great Depression, began to finance drilling operations in the state using oil reserves as collateral. This allowed Dallas to establish itself as the financing center for the oil industry. The General American Finance System eventually reorganized itself as the General American Oil Company of Texas, which became an oil producer in its own right and, decades later, was purchased by Phillips Petroleum.
At the start of the 20th century, agriculture, timber, and ranching were the leading economic engines of Texas. This was changed by the boom, which led to rapid industrialization. Though refineries were initially concentrated around the Beaumont and Houston areas, refining operations gradually grew throughout the state by the end of the 1920s. By 1940, the value of petroleum and natural gas produced in Texas exceeded the value of all agricultural products in the state.
The opening of Houston Ship Channel in 1914 led to the Port of Houston overtaking the Port of Galveston as the state's dominant seaport. The situation led Houston to also overtake Galveston as the primary shipping center for cotton. The large quantities of oil and gas moving through Houston, Baytown, Texas City, and surrounding communities made the area around the ship channel attractive for industrial development. Chemical plants, steel factories, cement plants, automobile manufacturing, and many other types of heavy industry that could benefit from a ready supply of cheap fuel rapidly developed in the area. By the 1930s, Houston had emerged as the state's dominant economic center, though it continued to compete with Dallas throughout the 1900s. The effects of the boom helped offset the effects of the Depression so much that Houston was called the "city the Depression forgot." Dallas and other Texas communities were also able to weather the Depression better than many American cities because of oil.
The boom in the oil industry also helped promote other industries in other areas of the state. Lumber production thrived as demand climbed for construction of railroads, refineries, and oil derricks, and, in 1907, Texas was the third largest lumber producer in the United States. Growing cities required many new homes and buildings, thus benefiting the construction industry. Agriculture and ranching grew stronger as the rapidly expanding population created more demand for their produce.
The major commercial centers in the state grew tremendously during this period. The city of Houston grew by 555% between 1900 and 1930, reaching a population of 292,352. Other cities, from Beaumont to El Paso, saw similar growth rates. By contrast, New York City grew by 101% and Detroit, where the automotive boom was occurring, grew by 485%.
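As a quick consistency check (a sketch inferred only from the figures quoted above; the 1900 population is not stated in the text), 555% growth implies that Houston began the period with roughly 45,000 residents:

```python
# Back-of-the-envelope check of the Houston growth figures quoted above.
pop_1930 = 292_352  # population in 1930, from the text
growth_pct = 555    # stated growth from 1900 to 1930, in percent

# 555% growth means the 1930 population is (1 + 5.55) times the 1900 population.
implied_pop_1900 = pop_1930 / (1 + growth_pct / 100)
print(f"Implied 1900 population: {implied_pop_1900:,.0f}")  # about 44,600
```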
The populations of many small Texas towns had even greater population increases when oil discoveries brought prospectors, investors, field laborers, and businessmen. Between 1920 and 1922, the town of Breckenridge in rural North Texas grew from about 1,500 people to nearly 30,000. Between 1925 and 1929, the town of Odessa in the Permian Basin grew from 750 to 5,000. Between 1924 and 1925, the town of Wortham in northern Texas grew from 1,000 to about 30,000. The town of Kilgore in eastern Texas grew from about 500 to 12,000 between 1930 and 1936 following the discovery of the East Texas field.
The growth for many towns was only temporary. Growth in some communities was often driven by exploitation of limited oil resources, so once wells ran dry or demand slowed, their populations rapidly declined. When Wortham's boom ended, the population crashed from its 1927 peak of 30,000 to 2,000 people in 1929. The population of Breckenridge dropped from a similar high to 7,569 in 1930.
One of the most significant demographic changes in the state was the percentage of urban dwellers. Between 1910 and 1930, the percentage of urban dwellers (those living in towns of greater than 2,500 people) increased by 32%, resulting in 41% of Texans living in urban areas in 1930. World War II pushed the urban population over 50%.
The urban landscape of the cities changed dramatically during this period. The Praetorian Building in Dallas (1907) and the Amicable Life Insurance Company building in Waco (1911) were among the first skyscrapers in Texas. The Perlstein Building in Beaumont was the first skyscraper built as a direct result of the boom. Beaumont's downtown grew rapidly during the first decade after the 1901 strike. After a second major strike at Spindletop in 1925, Beaumont had the largest skyline of any city between Houston and New Orleans by the end of the decade. (See Beaumont Commercial District.) The twenty-two-story Edson Hotel, completed in 1929, was the tallest hotel building in Texas for several years.
Despite Beaumont's importance during the early boom period, the nearby and already-established commercial center of Houston became the leading city of the period. Houston's status was boosted by the 1914 completion of the Houston Ship Channel, allowing the Port of Houston to service large ships. Refineries and related operations were built along the Houston Ship Channel between Houston and Goose Creek. Heavy industry grew in the area and gradually created one of the world's largest industrial complexes. By the 1930s, Houston emerged as the state's largest city and the hub of the rail and road network. The effects of the petroleum-related growth helped offset the effects of the Great Depression substantially, particularly after the discovery of the East Texas field. Texans who became wealthy from the boom established upscale communities including River Oaks, which became a model for community planning in the U.S. Oil-related growth led to the creation of many new institutions, including the University of Houston, the Museum of Fine Arts, Hermann Park, the Houston Zoo, and the Houston Symphony Orchestra.
Dallas and Fort Worth experienced one of their greatest oil-related construction booms in 1930 and 1931, when the opening of the East Texas oil field helped establish Dallas as the financial center for the oil industry in Texas and Oklahoma. New business offices and municipal buildings appeared in the city, including the Highland Park Village shopping center, one of the nation's earliest shopping malls. The Depression slowed population growth in the Dallas area somewhat during the later 1930s, but rapid growth patterns returned again during the 1940s. By this time, though, Dallas had already begun to rediversify, becoming a center for aircraft manufacturing and electronics technology in addition to a variety of other industries.
Cheap gasoline encouraged automobile ownership, which provided a substantial revenue source to the government, leading to the rapid expansion of highway development. Despite the state's geographical size and its rural nature at the turn of the century, the state's road systems developed to a level comparable with the more established industrial areas of the United States.
The oil boom helped the expansion of several Texas ports, including four ports currently ranked among the top twenty busiest in the United States in terms of cargo tonnage. The Houston Ship Channel and Port of Houston became the state's busiest shipping resources; both rank 2nd in the United States in terms of cargo tonnage. Although Houston took the lead, the oil boom benefited other areas. The Sabine–Neches Waterway, located in the Beaumont/Port Arthur area, saw growth as a result of the oil boom. The existing ship channel was deepened following the 1901 Spindletop discovery and has been deepened several times since then. That waterway serves two of the United States' ports ranked in the top twenty in terms of cargo tonnage: the Port of Beaumont is ranked 4th and the Port of Port Arthur is ranked 18th. As of December 2013, the Sabine–Neches Waterway is the third-busiest waterway in the United States in terms of tons of cargo, behind the Port of South Louisiana and the Houston Ship Channel. The Sabine–Neches Waterway is also the top bulk crude oil importer, the top bulk liquid waterway, and is projected to become the largest LNG exporter in the United States. The discovery of oil also helped the eighth-ranked Port of Corpus Christi. Oil discoveries in nearby counties in the early 1930s resulted in the construction of refineries near the port, and the principal cargo shifted from cotton to petroleum products. The Port of Texas City was another port that benefited from the oil boom; it is currently ranked 14th in terms of cargo tonnage.
The university system in Texas improved dramatically because of the boom. Before the boom, the University of Texas consisted of a small number of crude buildings near Austin. Oil speculation on university land in western Texas led to the creation of the Santa Rita oil well, giving the University of Texas, and later Texas A&M University, access to a major source of revenue and leading the university to become among the wealthiest in the United States. Other universities in the state, especially the University of Houston, were also able to benefit from state-owned oil production and donations from wealthy oil investors, fueling substantial growth and development in their campuses.
Primary and secondary education improved as well, though the extreme growth in the new boomtowns initially caused severe strain on school systems unprepared for the rapid influx of students. Even as money was rapidly flowing in the communities, obtaining tax revenue efficiently where it was needed was often complex. Communities dealt with these problems by establishing independent school districts, education districts formed independently of city or county government with their own independent taxing authority. This type of school district is still the standard in Texas today.
One of the most significant developments in Texan government resulted from the creation of a state oil production tax in 1905. The revenue generated by the tax made funds available for development in the state without the need for income taxes and similar revenue mechanisms adopted in other states. In 1919, tax revenue from oil production surpassed $1 million ($13.7 million in today's terms) and in 1929 it reached $6 million ($82.8 million in today's terms). By 1940, the oil and gas industry accounted for approximately half of all taxes paid in the state.
Politics in Texas during the early 1900s was defined by a spirit of Progressivism. Oil money funded the expansion of the highway system and the educational system. In general, however, the attitude toward business was laissez-faire. There were few regulations on issues such as minimum wages and child labor.
The permissive attitude toward business did not always extend toward large corporations. A lack of venture capital in the state became a significant problem with the early industry. Civic and business leaders, and even ordinary citizens, worried that the influx of capital from outside the state would lead to a loss of political power, revenue, and business opportunities. This sentiment led to a series of antitrust lawsuits by the state Attorney General starting in 1906. The lawsuits easily succeeded and limited the ability of outside investors, most notably Standard Oil, to gain control of the state oil companies.
The mistrust of Standard Oil was partially the result of a suspicion toward carpetbaggers, which ironically was also the source of skepticism regarding labor unions. Union organizers were frequently seen as attempting to support a Northern agenda of promoting opportunities for African Americans at the expense of the white population. Because of the situation this created, labor reform was slow to develop. Despite the anti-union sentiments, groups like the International Oil Workers Union attracted membership and held some influence in the industry and state government.
An enduring theme during and after the oil boom has been a reluctance among Texans to relinquish their identity and a stubbornness in maintaining their cultural heritage in the face of drastic changes to the state brought by the sudden wealth. Despite its growth and industrialization, Texas culture in the mid-20th century remained distinct from the other industrial centers of the nation.
The possibility of becoming wealthy from oil created a "wildcatter" culture in many areas of the state. Independent entrepreneurs chased dreams of wealth by purchasing land and equipment to find oil. Ranchers and farmers, from both inside and outside of the state, turned to prospecting. The Oil and Gas Journal remarked on this wildcatter culture at the time.
Though many failed in their endeavors, there were many success stories. The majority of the pioneering and searching for new oilfields in this era was done by these independents, not big business interests. Competition with large oil interests would lead to the establishment of the Independent Petroleum Association of Texas as a lobbying group for these small businessmen.
Houston pioneered American car culture in the early 1900s thanks to the ready availability of inexpensive gasoline. By the 1920s traffic congestion had become so serious that the city became the first in the nation to install interconnected traffic lights. Visitors to the city were often astonished at the lack of pedestrian access to shopping venues and the importance of the automobile within the city. Efforts aimed at promoting mass transit and urban planning were largely defeated in Houston because of opposition by the public, which favored public investment in roads over mass transit. Urban concepts pioneered in Houston, like establishing shopping centers outside the city's core and encouraging suburban sprawl, became major trends adopted in many cities, both within the state and around the country.
Another indirect effect of the boom was the growth of gambling and prostitution in many communities. These activities had not been uncommon in Texas before the boom, but the wealth brought by the oil industry, as well as difficulties in enhancing the laws and the law enforcement agencies, created many new opportunities for illegal businesses and organized crime. Many communities developed casino and red-light districts, including the gaming empire in Galveston, which attracted wealthy businessmen from Houston. Some of these districts operated until the 1950s. Prostitution, which had always been present in the state, flourished in the boom towns, which were crowded with single men earning relatively high wages. The onset of Prohibition and the state government's reluctance to enforce vice laws only encouraged the growth of gambling and bootlegging during this period.
The rapid social changes during this period, especially the 1920s, led to the reemergence of the Ku Klux Klan in the urban centers of Texas, with their strongest presence in Dallas. As in other states, the new Klan was not outwardly focused on suppression of black civil rights, but instead supported traditional morality including opposition to bootlegging, gambling, and other vices that had grown during the period. Bigotry, though, was never far from the group's agenda. During the Depression, anti-New Deal sentiments among some leaders, such as John Kirby, led to their becoming loosely associated with the Klan and its ideals.
The petroleum industry influenced long-term trends in Texas and American culture. Conservative views among the early business leaders in Texas led them to help finance the emergence of the modern Christian right and the American conservative movement.
Although from the outset of the oil boom there were efforts at conservation and protection of the environment, they generally enjoyed little success. Because of the ease of finding oil in the early decades, wells were often not fully developed before prospectors began looking for more productive wells. The wildcatters not only wasted the valuable resource, but created avoidable environmental contamination with the numerous oil strikes. The rush to extract oil frequently led to the construction of poor storage facilities where leaks were common and water pollution became a serious issue. After heavy logging during the 19th century, the clearing of fields for oil exploration and the demand for lumber to be used in new construction destroyed most of the remaining forest lands in the state.
Industrial activities, which had little regulation, created substantial air pollution. The practice of burning off gas pockets in new oil fields was common, thus increasing the problem. As the Houston area came to be the most heavily industrialized area in the state, it accumulated the most serious air quality issues. By the 1950s, airline pilots were able to follow lines of haze in the air into the city.
Another serious effect created by the oil-related industries has been the pollution around the Houston Ship Channel and in Galveston Bay. By the 1970s, these waterways were among the most polluted waters in the United States. Though industry was a major source of pollution, urbanization around the bay also contributed significantly to pollution levels. In recent decades, most of the pollution in the bay has been the result of storm run-off from various smaller commercial, agricultural, and residential sources, as opposed to the major industrial complexes. Conservation efforts in the mid to late 20th century by area industries and municipalities have helped to dramatically improve water quality in the bay, reversing at least some of the earlier damage to the ecosystem.
By the 1940s, production in the East Texas Oil Field and oil prices stabilized. Though the major urban areas continued to grow, the extreme growth patterns of the first three decades began to slow. As western Texas and the panhandle region began to be more fully explored, the Permian Basin gradually became the top producing area of the state. Though independent oil companies were still an important part of the industry for some time, the major new strikes were increasingly made by established companies. World War II helped complete the state's transition to an industrialized and urbanized state with oil facilitating the transition.
During the 1960s and 1970s, as a result of both production peaks in some nations and political instability in others, the world's supply of petroleum tightened leading to an energy crisis during the 1970s and early 1980s. Petroleum prices rose dramatically, greatly benefiting Texas, particularly as compared to other parts of the U.S. that faced recession during this time. A new economic boom emerged which, though not as transformative as the early 1900s, pushed the population of Texas to the point that, by the end of the century, Texas was the second most populous state in the nation. Some sources, in fact, use the phrase Texas oil boom to refer to this later period rather than the earlier period that followed Spindletop.
Four businessmen were emblematic of the 1920s and 30s boom years — H. Roy Cullen, H. L. Hunt, Sid W. Richardson, and Clint Murchison. Cullen was a self-educated cotton and real-estate businessman who moved to Houston in 1918 and soon began oil prospecting. Cullen's success led to his founding the South Texas Petroleum Company (with partner Jim West Sr.) and Quintana Oil Company. Cullen and his wife established the Cullen Foundation, which became one of the largest charitable organizations in the state, and donated heavily to the University of Houston, the Texas Medical Center, and numerous other causes in Texas, particularly in the Houston area.
Hunt's first successes were in the oilfields of Arkansas, but he lost most of his fortune by the outset of the Depression as overproduction depleted his fields and his speculation on land and oil drained his resources. He joined in Columbus Joiner's venture which opened the East Texas Oil Field. Hunt bought most of Joiner's interests in eastern Texas and his company, Placid Oil, owned hundreds of wells. He became established in Dallas and was labeled the richest man in the nation in 1948 by Fortune Magazine. A scandal emerged in 1975, after his death, when it was discovered that he had had a hidden bigamous relationship, with his second wife living in New York.
Richardson was a cattle trader who established an independent oil production business in Fort Worth in 1919. He soon expanded into numerous businesses and owned the Texas City Refining Company, cattle ranches, radio and television networks, among other businesses. He was a very private man who was sometimes referred to as the "bachelor billionaire." Murchison, who began his career at his father's bank, soon became an oil lease trader working with Richardson. He expanded into exploration and production in northern Texas, then around San Antonio, and finally the Dallas area. He went on to create the Southern Union Gas Company and became a developer on the East Texas field. He expanded his business into international oil and gas operations in Canada and Australia. His son Clint Jr. went on to form the Dallas Cowboys football franchise. For their part, Murchison and Richardson were known to have been major national political operatives and had close ties to President Dwight D. Eisenhower and his vice president Richard M. Nixon, as well as FBI chief J. Edgar Hoover and President Lyndon B. Johnson.
Other wealthy Texans involved in the oil industry, though not as influential, became well known, often as much for their eccentricities as their wealth. Glenn McCarthy was a modest oil worker who pioneered wells around the Houston area. In 1932, he struck oil at Anahuac near Galveston Bay. Over the next decade, he made dozens of other strikes and quickly became one of Texas' richest men. His extravagance was legendary, leaving him $52 million in debt by 1952 ($464 million in present-day terms). His love of bourbon led him to establish the WildCatter bourbon label. His excesses made him an unwilling national celebrity during the 1940s and 1950s as the media became enamored with tales of Texas oil wealth.
Jim West Jr. was the heir to the fortune of Jim West Sr., an early Houston businessman who helped shape the city and the state before the boom and during the early years of the boom. Known as "Silver Dollar Jim", for his habit of carrying silver dollars and tossing them to doormen, the poor, and anyone that waited on him, West Jr. is regarded by many as the most flamboyant of Houston oilmen. His lavish spending habits and his proclivity for amateur law enforcement were well known. Using his many cars, which were kept loaded with weapons, sirens, and radios, he regularly chased criminals in Houston alongside the police.
Though the general public of the United States was aware of oil production in Texas, the wealth that it generated in the state for the first three decades after Spindletop was largely unknown. Of the four most prominent oil businessmen in Texas at the end of World War II — Murchison, Cullen, Richardson, and Hunt — only three articles about them appeared in the New York Times during their lifetime, despite their philanthropy and influence in Washington D.C. Stereotypes about Texas in the American imagination generally revolved around cowboys and cattle.
By the late 1940s, the national media began to report the extreme wealth of some Texans in magazines such as Life and Fortune. A stereotype emerged of the nouveau-riche Texas oil millionaire, popularized by the media. The popular image often was characterized by a rough and combative personality, heavy drinking, and extravagant spending. In 1956, the motion picture Giant helped to crystallize the image of Texans in the popular imagination as comical, eccentric figures. Glenn McCarthy was the inspiration for the character of Jett Rink in the film. Other films such as Boom Town and War of the Wildcats, and books such as The Lusty Texans of Dallas and Houston: Land of the Big Rich, also contributed to public perceptions of oil's influence in Texas and surrounding states.
Thesis Statement: The Sun is fascinating because it has so many layers.
The Sun has different layers that all play an important role in producing its energy. Its Core is 27 million degrees Fahrenheit and contains 6% of the Sun's mass. The Core is located in the center of the Sun and it produces all of the Sun's energy.
On top of the Core sits the Radiative Zone. In the Radiative Zone, energy travels in the form of photons. The Radiative Zone is also very dense and is about 300,000 km thick. Atoms are so closely packed together that light takes millions of years to pass through.
Another very important layer of the Sun is the Convective Zone. It is about 200,000 km thick; hot gases rise from inside while cooler gases sink toward the center. It is a region where energy is carried by convection cells.
The Photosphere sits on top of the Convective Zone and is the Sun's visible surface. This layer is only about 300 miles thick, and its gases are dense enough to be seen.
The Chromosphere is an irregular layer of the Sun's atmosphere above the Photosphere. It is about 3,000 km thick and glows a deep red.
The Sun has many fascinating layers that all have their own characteristics.
Computing Architecture
Last updated on 2008 1 10, a full moon day
Basically, an instruction is a combination of a flag bit, an opcode, and data. The flag bit indicates whether the value is signed or unsigned. The opcode is the control field and is also known as the mnemonic; opcodes can be categorized as ALU oriented, general purpose oriented, streaming data oriented, and so on. The data can either be held inside the data bits of the instruction or can sometimes occupy a whole word of bits on its own.
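To make this concrete, here is a minimal Python sketch of how such an instruction word might be decoded into its flag, opcode, and data fields. The 16-bit layout used here (1 flag bit, 5 opcode bits, 10 data bits) is an assumption chosen purely for illustration, not the encoding of any particular machine.

```python
# Assumed, illustrative 16-bit layout: [flag:1][opcode:5][data:10]
FLAG_SHIFT = 15          # top bit holds the sign/unsign flag (assumed)
OPCODE_SHIFT = 10        # next 5 bits hold the opcode (assumed)
DATA_MASK = 0x3FF        # low 10 bits hold immediate data (assumed)

def decode(instruction: int) -> dict:
    """Split a 16-bit instruction word into its flag, opcode, and data fields."""
    return {
        "flag": (instruction >> FLAG_SHIFT) & 0x1,
        "opcode": (instruction >> OPCODE_SHIFT) & 0x1F,
        "data": instruction & DATA_MASK,
    }

if __name__ == "__main__":
    word = 0b1_00101_0000101010   # flag=1, opcode=5, data=42
    print(decode(word))           # {'flag': 1, 'opcode': 5, 'data': 42}
```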
Computing architecture varies depending on hardware specifications. For example, a supercomputer specification uses not only an instruction diagram but also a vector diagram. In addition to instruction and vector diagrams, a quantum computer specification uses a particle diagram, and so on.
Time <> number <> architecture
Voting Rights on the Eve of the Revolution
The basic principle that governed voting in colonial America was that voters should have a “stake in society.” Leading colonists associated democracy with disorder and mob rule, and believed that the vote should be restricted to those who owned property or paid taxes. Only these people, in their view, were committed members of the community and were sufficiently independent to vote. Each of the thirteen colonies required voters either to own a certain amount of land or personal property, or to pay a specified amount in taxes.
Many colonies imposed other restrictions on voting, including religious tests. Catholics were barred from voting in five colonies and Jews in four.
The right to vote varied widely in colonial America. In frontier areas, seventy to eighty percent of white men could vote. But in some cities, the percentage was just forty to fifty percent.
The Impact of the Revolution
The American Revolution was fought in part over the issue of voting. The Revolutionaries rejected the British argument that representation in Parliament could be virtual (that is, that English members of Parliament could adequately represent the interests of the colonists). Instead, the Revolutionaries argued that government derived its legitimacy from the consent of the governed.
This made many restrictions on voting seem to be a violation of fundamental rights. During the period immediately following the Revolution, some states replaced property qualifications with taxpaying requirements. This reflected the principle that there should be “no taxation without representation.” Other states allowed anyone who served in the army or militia to vote. Vermont was the first state to eliminate all property and taxpaying qualifications for voting.
By 1790, all states had eliminated religious requirements for voting. As a result, approximately 60 to 70 percent of adult white men could vote. During this time, six states (Maryland, Massachusetts, New York, North Carolina, Pennsylvania, and Vermont) permitted free African Americans to vote.
The Constitution and Voting Rights
The US Constitution left the issue of voting rights up to the states. The only thing that the Constitution said about voting was that those entitled to vote for the “most numerous Branch of the state legislature” could vote for members of the House of Representatives.
During the first half of the nineteenth century, the election process changed dramatically. Voting by voice was replaced by voting by written ballot. This was not the same thing as a secret ballot, which was instituted only in the late nineteenth century; parties printed ballots on colored paper, so that it was still possible to determine who had voted for which candidate.
The most significant political innovation of the early nineteenth century was the abolition of property qualifications for voting and officeholding. Hard times resulting from the Panic of 1819 led many people to demand an end to property restrictions on voting and officeholding. In 1800, just three states (Kentucky, New Hampshire, and Vermont) had universal white manhood suffrage. By 1830, ten states permitted white manhood suffrage without qualification. Eight states restricted the vote to taxpayers, and six imposed a property qualification for suffrage. In 1860, just five states limited suffrage to taxpayers and only two still imposed property qualifications. And after 1840, a number of states, mainly in the Midwest, allowed immigrants who intended to become citizens to vote.
Pressure for expansion of voting rights came from propertyless men; from territories eager to attract settlers; and from political parties seeking to broaden their base.
Ironically, the period that saw the advent of universal white manhood suffrage also saw new restrictions imposed on voting by African Americans. Every new state that joined the Union after 1819 explicitly denied blacks the right to vote. In 1855, only five states—Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont—allowed African Americans to vote without significant restrictions. In 1826, only sixteen black New Yorkers were qualified to vote.
The era of universal white manhood suffrage also saw other restrictions on voting. In New Jersey, the one state that had allowed women property holders to vote, women lost the right to vote. Twelve states forbade paupers from voting and two dozen states excluded felons. After 1830, interest in voting registration increased. There were also some attempts to impose literacy tests and prolonged residence requirements (ranging up to twenty-one years) in the 1850s.
The Dorr War
The transition from property qualifications to universal white manhood suffrage occurred gradually, without violence and with surprisingly little dissension, except in Rhode Island, where lack of progress toward democratization provoked an episode known as the Dorr War.
In 1841, Rhode Island, still operating under a Royal Charter granted in 1663, restricted suffrage to landowners and their eldest sons. The charter lacked a bill of rights and grossly underrepresented growing industrial cities, such as Providence, in the state legislature. As Rhode Island grew increasingly urban and industrial, the state’s landless population increased and fewer residents were eligible to vote. By 1841, just 11,239 out of 26,000 adult males were qualified to vote.
In 1841, Thomas W. Dorr, a Harvard-educated attorney, organized an extralegal convention to frame a new state constitution and abolish voting restrictions. The state’s governor declared Dorr and his supporters guilty of insurrection, proclaimed a state of emergency, and called out the state militia. Dorr tried unsuccessfully to capture the state arsenal at Providence. He was arrested, found guilty of high treason, and sentenced to life imprisonment at hard labor. To appease popular resentment, the governor pardoned Dorr the next year, and the state adopted a new constitution in 1843. This constitution extended the vote to all taxpaying native-born adult males (including African Americans). But it imposed property requirements and lengthy residence requirements on immigrants.
Rhode Island was unusual in having a large urban, industrial, and foreign-born working class. It appears that fear of allowing this group to attain political power explains the state’s strong resistance to voting reform.
The Civil War and Reconstruction
Although Abraham Lincoln had spoken about extending the vote to black soldiers, opposition to granting suffrage to African American men was strong in the North. Between 1863 and 1870, fifteen Northern states and territories rejected proposals to extend suffrage to African Americans.
During Reconstruction, for a variety of reasons, a growing number of Republicans began to favor extending the vote to African American men. Many believed that African Americans needed the vote to protect their rights. Some felt that black suffrage would allow the Republican party to build a base in the South.
The Reconstruction Act of 1867 required the former Confederate states to approve new constitutions, which were to be ratified by an electorate that included black as well as white men. In 1868, the Republican Party went further and called for a Fifteenth Amendment that would prohibit states from denying the vote based on race or previous condition of servitude. A proposal for a stronger amendment that would have prohibited states from denying or abridging the voting rights of adult males of sound mind (with the exception of felons and those who had engaged in rebellion against the United States) was defeated.
A variety of methods—including violence in which hundreds of African Americans were murdered, property qualification laws, gerrymandering, and fraud—were used by Southern whites to reduce the level of black voting. The defeat in 1891 of the Federal Elections Bill, which would have strengthened the federal government’s power to supervise elections, prevent suppression of the black vote, and overturn fraudulent elections, ended congressional efforts to enforce black voting rights in the South.
The Mississippi Plan
In 1890, Mississippi pioneered new methods to prevent African Americans from voting. Through lengthy residence requirements, poll taxes, literacy tests, property requirements, cumbersome registration procedures, and laws disenfranchising voters for minor criminal offenses, Southern states drastically reduced black voting. In Mississippi, just 9,000 of 147,000 African Americans of voting age were qualified to vote. In Louisiana, the number of black registered voters fell from 130,000 to 1,342.
Meanwhile, grandfather clauses in these states exempted whites from all residence, poll tax, literacy, and property requirements if their ancestors had voted prior to enactment of the Fifteenth Amendment.
The Late Nineteenth Century
Fears of corruption and of fraudulent voting led a number of northern and western states to enact “reforms” similar to those in the South. Reformers were especially troubled by big-city machines that paid or promised jobs to voters. Reforms that were enacted included pre-election registration, long residence qualifications, revocation of state laws that permitted non-citizens to vote, disfranchisement of felons, and adoption of the Australian ballot (which required voters to place a mark by the name of the candidate they wished to vote for). By the 1920s, thirteen northern and western states barred illiterate adults from voting (in 1924, Oregon became the last state to adopt a literacy test for voting). Many western states prohibited Asians from voting.
In 1848, at the first women’s rights convention in Seneca Falls, New York, delegates adopted a resolution calling for women’s suffrage. But it would take seventy-two years before most American women could vote. Why did it take so long? Why did significant numbers of women oppose women’s suffrage?
The Constitution speaks of “persons”; only rarely does the document use the word “he.” The Constitution did not explicitly exclude women from Congress or from the presidency or from juries or from voting. The Fourteenth Amendment included a clause that stated, “No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States.”
In the presidential election of 1872, supporters of woman suffrage, including Susan B. Anthony, appeared at the polls, arguing that if all citizens had the right to the privileges of citizenship, they could certainly exercise the right to vote. In Minor v. Happersett (1875) the Supreme Court ruled that women could only receive the vote as a result of explicit legislation or constitutional amendment, rather than through interpretation of the implications of the Constitution. In a unanimous opinion, the Court observed that it was “too late” to claim the right of suffrage by implication. It also ruled that suffrage was a matter for the states, not the federal government, to decide.
One group of women led by Elizabeth Cady Stanton and Susan B. Anthony sought a constitutional amendment. Another group, led by Lucy Stone, favored a state-by-state approach. In 1890, the two groups merged to form the National American Woman Suffrage Association. Rather than arguing in favor of equal rights, the NAWSA initially argued that women would serve to uplift politics and counterbalance the votes of immigrants. Meanwhile, opponents of women’s suffrage argued that it would increase family strife, erode the boundaries between masculinity and femininity, and degrade women by exposing them to the corrupt world of politics.
Women succeeded in getting the vote slowly. Wyoming Territory, eager to increase its population, enfranchised women in 1869, followed by Utah, which wanted to counter the increase in non-Mormon voters. Idaho and Colorado also extended the vote to women in the mid-1890s. A number of states, counties, and cities allowed women to vote in municipal elections, for school boards or for other educational issues, and on liquor licenses.
During the early twentieth century, the suffrage movement became better financed and more militant. It attracted growing support from women who favored reforms to help children (such as increased spending on education) and the prohibition of alcohol. It also attracted growing numbers of working-class women, who viewed politics as the way to improve their wages and working conditions.
World War I helped to fuel support for the Nineteenth Amendment to the Constitution, extending the vote to women. Most suffragists strongly supported the war effort by selling war bonds and making clothing for the troops. In addition, women’s suffrage seemed an effective way to demonstrate that the war truly was a war for democracy.
At first, politicians responded to the Nineteenth Amendment by increasingly favoring issues believed to be of interest to women, such as education and disarmament. But as it became clear that women did not vote as a bloc, politicians became less interested in addressing issues of particular interest to them. It would not be until the late twentieth century that a gender gap in voting would become a major issue in American politics.
Declining Participation in Elections
Voter turnout began to fall after the election of 1896. Participation in presidential elections fell from a high of about 80 percent overall to about 60 percent in the North in the 1920s and about 20 percent in the South. Contributing to the decline in voter participation was single-party dominance in large parts of the country; laws making it difficult for third parties to appear on the ballot; the decline of urban political machines; the rise of at-large municipal elections; and the development of appointed commissions that administered water, utilities, police, and transportation, reducing the authority of elected officials.
Voting Rights for African Americans
In 1944, in Smith v. Allwright, the US Supreme Court ruled that Texas’s Democratic Party could not restrict membership to whites only and bar blacks from voting in the party’s primary. Between 1940 and 1947, the proportion of Southern blacks registered to vote rose from 3 percent to 12 percent. In 1946, a presidentially appointed National Committee on Civil Rights called for abolition of poll taxes and for federal action to protect the voting rights of African Americans and Native Americans.
At the end of the 1950s, seven Southern states (Alabama, Georgia, Louisiana, Mississippi, North Carolina, South Carolina, and Virginia) used literacy tests to keep blacks from voting, while five states (Alabama, Arkansas, Mississippi, Texas, and Virginia) used poll taxes to prevent blacks from registering. In Alabama, voters had to provide written answers to a twenty-page test on the Constitution and on state and local government. Questions included: “Where do presidential electors cast ballots for president?” And “Name the rights a person has after he has been indicted by a grand jury.” The Civil Rights Act of 1957 allowed the Justice Department to seek injunctions and file suits in voting rights cases, but it only increased black voting registrations by 200,000.
In an effort to bring the issue of voting rights to national attention, Martin Luther King Jr. launched a voter registration drive in Selma, Alabama, in early 1965. Even though blacks slightly outnumbered whites in this city of 29,500 people, Selma’s voting rolls were 99 percent white and 1 percent black. For seven weeks, King led hundreds of Selma’s black residents to the county courthouse to register to vote. Nearly 2,000 black demonstrators, including King, were jailed by County Sheriff James Clark for contempt of court, juvenile delinquency, and parading without a permit. After a federal court ordered Clark not to interfere with orderly registration, the sheriff forced black applicants to stand in line for up to five hours before being permitted to take a “literacy” test. Not a single black voter was added to the registration rolls.
When a young black man was murdered in nearby Marion, King responded by calling for a march from Selma to the state capital of Montgomery, fifty miles away. On March 7, 1965, black voting-rights demonstrators prepared to march. As they crossed a bridge spanning the Alabama River, 200 state police with tear gas, nightsticks, and whips attacked them. The march resumed on March 21 with federal protection. The marchers chanted, “Segregation’s got to fall . . . you never can jail us all.” On March 25, a crowd of 25,000 gathered at the state capitol to celebrate the march’s completion. Martin Luther King Jr. addressed the crowd and called for an end to segregated schools, poverty, and voting discrimination. “I know you are asking today, ‘How long will it take?’ . . . How long? Not long, because no lie can live forever.”
Two measures adopted in the mid-1960s helped safeguard the voting rights of black Americans. On January 23, 1964, the states completed ratification of the Twenty-fourth Amendment to the Constitution barring a poll tax in federal elections. At the time, five Southern states still had a poll tax. On August 6, 1965, President Johnson signed the Voting Rights Act, which prohibited literacy tests and sent federal examiners to seven Southern states to register black voters. Within a year, 450,000 Southern blacks registered to vote.
The Supreme Court ruled that literacy tests were illegal in areas where schools had been segregated, struck down laws restricting the vote to property-owners or tax-payers, and held that lengthy residence rules for voting were unconstitutional. The court also ruled in the “one-man, one-vote” Baker v. Carr decision that states could not give rural voters a disproportionate sway in state legislatures. Meanwhile, the states eliminated laws that disenfranchised paupers.
Reducing the Voting Age
The war in Vietnam fueled the notion that young people who were young enough to die for their country were old enough to vote. In 1970, as part of an extension of the Voting Rights Act, a provision was added lowering the voting age to eighteen. The Supreme Court ruled that Congress had the power to reduce the voting age only in federal elections, not in state elections. To prevent states from having to maintain two different voting rolls, the Twenty-sixth Amendment to the Constitution barred the states and the federal government from denying the vote to anyone eighteen or older.
An Unfinished History
The history of voting rights is not yet over. Even today, debate continues. One of the most heated debates is whether convicted felons who have served their time should be allowed to vote. Today, a handful of states bar convicted felons from voting unless they successfully petition to have their voting rights restored. Another controversy—currently being discussed in San Francisco—is whether non-citizens should have the right to vote, for example, in local school board elections. Above all, the Electoral College arouses controversy, with critics arguing that our country’s indirect system of electing a president overrepresents small states, distorts political campaigning, and thwarts the will of a majority of voters. History reminds us that even issues that seem settled sometimes reopen as subjects for debate. One example might be whether the voting age should be lowered again, perhaps to sixteen. In short, the debate about what it means to be a truly democratic society remains an ongoing, unfinished story.
Definition of Unit
1. Meaning of Unit
A unit is a quantity taken as expressing uniqueness in number (one) or in quality (e.g., "the group has unity of criteria"). It serves as a reference for comparison and cannot be divided without its essence being destroyed or its functionality lost.
Measurable things take the unit as the basis of measurement. The unit is always one and can never be zero, because what does not measure cannot be counted. One is the unit itself; two is the sum of two units. The first-order units from which all numbers are formed are called absolute units. Tens are units of the second order, hundreds of the third order, thousands of the fourth order, and so on.
A unit of measure is a constant, immutable, and repeatable quantity of the same kind as the quantity to be measured. A set of interrelated units forms a system of units. The International System of Units contains seven base units, corresponding to the following quantities: the unit of length is the meter, of mass the kilogram, of time the second, of electric current the ampere, of thermodynamic temperature the kelvin, of amount of substance the mole, and of luminous intensity the candela.
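As a small illustration, the seven base quantities and their SI units listed above can be written down as a simple lookup table. The sketch below just encodes that list in Python; the dictionary name and layout are arbitrary choices for illustration.

```python
# The seven SI base quantities and their units, as listed above.
SI_BASE_UNITS = {
    "length": ("meter", "m"),
    "mass": ("kilogram", "kg"),
    "time": ("second", "s"),
    "electric current": ("ampere", "A"),
    "thermodynamic temperature": ("kelvin", "K"),
    "amount of substance": ("mole", "mol"),
    "luminous intensity": ("candela", "cd"),
}

for quantity, (unit, symbol) in SI_BASE_UNITS.items():
    print(f"{quantity}: {unit} ({symbol})")
```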
The monetary unit is the standard of legal tender of a country, from which all other denominations are derived.
In astronomy, the (astronomical) unit is the roughly 149,500,000 km that separate the Earth from the Sun.
In the military sphere, an armed group under the command of a single chief is called a military unit.
In teaching and learning, a unit comprises a set of subtopics grouped around one central theme, which relate systematically to one another and to the preceding and following units. For example, "this year's history course includes seven thematic units."
2. Definition of Unit
The word comes from the Latin unitas and refers to the property of any being by virtue of which it cannot be divided without its essence being destroyed or altered. It also denotes a quantity taken as a standard for measurement or comparison with others of the same kind.
In statistical terms, it refers to each of the elementary facts that make up the study as a whole. In mathematics, it is the name given to the first natural number, 1. In physics, an absolute unit is one that can be defined directly from the fundamental units of length, mass, and time.
The term also denotes an army formation that can act under the orders of a single commander (the head of unit), who also holds disciplinary authority. Formations from brigade level upward are called large units; the rest are called small units.
In computer science, a unit (or drive) is the general name given to any complete device that forms part of a data-processing system and has been assigned a well-defined task. For example, a disk drive is the name given to a computer peripheral that uses diskettes (floppy disks) as a medium for data storage.
It consists of a mechanical system that spins the disc and a head for recording and reading magnetic data. Disk drives derive from magnetic drums (a form of mass memory), which, unlike disks, do not have movable recording heads, although their use is very similar.
There are two main types: the hard drive, which uses hard disks that are larger and more expensive than flexible ones, and the floppy drive, which uses floppy disks.
3. Concept of Unit
The concept of unity is an abstract concept used to designate all that is uniform, united, and whole in the world. The idea of a unit comes precisely from the one: one thing, a single element. Thus, for example, a unit of matter is present when different parts unite to become something larger or more complex that encompasses them. In scientific terms, unity represents order, but in social terms unity can often be understood as negative if it is taken to mean the annulment of what is different or distinct.
When we speak in scientific, biological, chemical, or physical terms, a unit represents the conjunction of elements occurring naturally or artificially in different circumstances. For example, when several specimens of the same animal species join together in a group (a flock of birds), they become something more complex, a unit, since all act and move together. A unit created artificially by people arises, for example, when several ingredients are combined to prepare a dish; each element is different from the rest, but together they become one unit, something new.
In the case of social phenomena, the idea of unity has two aspects, one positive and one negative. As in science, unity can represent something positive when it means order and working together, for example when all the members of a company or a family act in an organized and dependable way, forming a unit in which each plays a role. Social unity has to do with the idea of uniting everyone, despite our differences, for a common purpose, for example, peace.
However, unity can also be negative when, under certain forms of government, regimes, or social and cultural ideologies, whatever is separate or different is treated as something bad or dangerous that should be eliminated. This tends toward the disappearance of the differences that make us who we are, pushing us into a homogeneity in which none of us stands out for our own traits, characteristics, or achievements.
Because reduced uranium species have a much smaller solubility than oxidized uranium species and because of the strong association of organic matter (a powerful reductant) with many uranium ores, reduction has long been considered to be the precipitation mechanism for many types of uranium deposits. Organic matter may also be involved in the alterations in and around tabular uranium deposits, including dolomite precipitation, formation of silicified layers, iron-titanium oxide destruction, dissolution of quartz grains, and precipitation of clay minerals. The diagenetic processes that produced these alterations also consumed organic matter. Consequently, those tabular deposits that underwent the more advanced stages of diagenesis, including methanogenesis and organic acid generation, display the greatest range of alterations and contain the smallest amount of organic matter. Because of certain similarities between tabular uranium deposits and Precambrian unconformity-related deposits, some of the same processes might have been involved in the genesis of Precambrian unconformity-related deposits. Hydrologic studies place important constraints on genetic models of various types of uranium deposits. In roll-front deposits, oxidized waters carried uranium to reductants (organic matter and pyrite derived from sulfate reduction by organic matter). After these reductants were oxidized at any point in the host sandstone, uranium minerals were reoxidized and transported further down the flow path to react with additional reductants. In this manner, the uranium ore migrated through the sandstone at a rate slower than the mineralizing ground water. In the case of tabular uranium deposits, the recharge of surface water into the ground water during flooding of lakes carried soluble humic material to the water table or to an interface where humate precipitated in tabular layers. These humate layers then established the chemical conditions for mineralization and related alterations. In the case of Precambrian unconformity-related deposits, free thermal convection in the thick sandstones overlying the basement rocks carried uranium to concentrations of organic matter in the basement rocks.
The roles of organic matter in the formation of uranium deposits in sedimentary rocks
265 pages, 13 b/w illustrations
How can we unravel the evolution of language, given that there is no direct evidence about it? Rudolf Botha addresses this intriguing question in his fascinating new book. Inferences can be drawn about language evolution from a range of other phenomena, serving as windows into this prehistoric process. These include shell-beads, fossil skulls and ancestral brains, modern pidgin and creole languages, homesign systems and emergent sign languages, modern motherese, language use of modern hunter-gatherers, first language acquisition, similarities between language and music, and comparative animal behaviour. The first systematic analysis of the Windows Approach, it will be of interest to students and researchers in many disciplines, including anthropology, archaeology, linguistics, palaeontology and primatology, as well as anyone interested in how language evolved.
"In 2006, Rudie Botha launched an all out attack on the legitimacy of the claim that the South African archaeological site of Blombos had evidence of 'fully syntactic' language 75,000 years ago. No one has been able to counter the logic of his argument, and this book applies that same relentless, illuminating logic to other claims in the study of language origins. In doing so, Botha shows just how carefully any claims must be justified, and just how powerful his Windows Approach is. Students and researchers in archaeology, primatology, linguistics, and comparative ethology cannot ignore this book."
– Iain Davidson, University of New England
"This book will prove to be a milestone in the field [...] a meticulous, rigorous, and yet highly readable guide."
– Paul T. Roberge, University of North Carolina, Chapel Hill
Part I. Preliminaries:
1. The Windows Approach
2. Conceptual foundations of the approach
Part II. Correlate Windows:
3. Sea shells, ancient beads, and Middle Stone Age symbols
4. Fossil skulls and ancestral brains
Part III. Analogue Windows:
5. Incipient pidgins and creoles
6. Homesign systems and emergent sign languages
7. Modern motherese
8. Hunter-gatherers' use of language
9. Language acquisition
Part IV. Abduction Windows:
10. Modern music and language
11. Comparative animal behaviour
Part V. Epilogue:
12. A tool fit for demystifying language evolution?
Rudolf Botha is Emeritus Professor of General Linguistics at the University of Stellenbosch, South Africa, and Honorary Professor of Linguistics at Utrecht Institute of Linguistics, Utrecht University, The Netherlands. |
- (UK) enPR: kŏm'ə, IPA(key): /ˈkɒm.ə/
- (US) enPR: kŏm'-ə, IPA(key): /ˈkɑm.ə/
- Rhymes: -ɒmə
- Punctuation mark (,) (usually indicating a pause between parts of a sentence or between elements in a list).
- (by extension) A diacritical mark used below certain letters in Romanian.
- A European and North American butterfly, Polygonia c-album, of the family Nymphalidae.
- (music) a minute difference in pitch that arises when nearly identical intervals are calculated in different ways.
- (genetics) A delimiting marker between items in a genetic sequence.
- In Ancient Greek rhetoric, a comma (κόμμα) is a short clause, something less than a colon, originally denoted by comma marks. In antiquity, a comma was defined as a combination of words of no more than eight syllables. The term was later applied to longer phrases, e.g. the Johannine comma.
- Comma (punctuation) on Wikipedia
- Comma (butterfly) on Wikipedia
- (in grammar) a division of a period smaller than a colon
- (in verse) a caesura
Third declension neuter.
- In the works of Cicero and Quintilian, the untransliterated Greek κόμμα (kómma) is used for comma in the grammatical sense of “a division…of a period smaller than a colon”.
- (comma: division of a period): incīsum (pure Latin)
~ Rays ~
Rays are a type of flattened fish closely related to sharks; they evolved from sharks and belong to the same group, called Elasmobranchii. These social animals live in seas all over the world. Some rays congregate in large groups of up to thousands of individuals, but others live alone.
Unlike other fish, rays have no bones, instead their skeleton is made of cartilage. Cartilage is a tough, fibrous substance, not nearly as hard as bone. Sharks also have skeletons made of cartilage.
The earliest known rays date from about 150 million years ago during the Jurassic period. Since rays have only cartilage, fossil rays are rare, but like sharks, their teeth, composed of very hard enamel, fossilize well. Many fossilized teeth and some fossilized spines have been found.
Many species of rays have spines on their tails. The spines can envenomate other animals when they sting. Some rays have long tails while other species have short tails. Some rays have a series of thorns on their body as a further defense against predators. The ray's tail varies from species to species, ranging from stubby on species like the short-tailed electric rays to incredibly long on rays like the sting rays.
Rays defend themselves from predators in many different ways. Some use their whip-like tail to lacerate, some sting with a venomous tail, electric rays give electrical shocks of up to 200 volts, and some have hard, bony spines that puncture their victims. Teeth are not used much by rays as a defense, but some can bite. A ray can burrow itself into the sea bed and be completely camouflaged by the sand, which is probably its best defense. Rays don't normally attack humans, though some have a sting that can be deadly. Most stings to humans happen when a ray is stepped on or when it is being removed from a fishing line or net.
There is a huge range of color variation among rays. Colors can even vary from male to female in the same species of some rays. The size of rays can vary as much as their color, from just a few inches to over 25 feet wide. The short-nose electric ray is the smallest of rays and is about the size of a small plate (approximately 4 inches across and weighs about 1 pound). The manta ray is the largest of rays and can measure over 25 feet wide and can weigh thousands of pounds. Most rays measure and weigh between these two extremes. More than half of all ray species are over 20 inches long. Rays are some of the largest fish in the oceans.
All rays have a flattened body and an elongated tail. The pectoral fins are large and connected to the body to form the ray's disc-like form. The shape of the ray's body differs from species to species and may be circular, oval, wedge-shaped or triangular. Some body shapes are adapted for living on the sea bed and others are adapted for almost constant swimming.
There are about 500 different species of rays and skates, which are divided into 18 families. These different families of rays are very different in the way they look, live, and hunt. They have different shapes, sizes, color, fins, teeth, habitat, diet, personality, method of reproduction, and other attributes.
Rays are carnivores. Just a few examples of their diet would be fish, crustaceans, and mollusks. Rays mostly hunt on or near the bottom of the ocean. The manta ray, however, is a filter feeder, sieving small prey as they swim almost continuously. Other rays are active hunters searching for bottom-dwelling animals like mollusks and crustaceans. The electric ray stuns its prey with electricity.
Rays have a high ratio of brain weight to body weight; they are probably very intelligent, even smarter than sharks. They are known to be very curious animals, often approaching a diver and simply observing the intruder.
Rays live in oceans and seas all over the world. Rays live mostly on or near the sea bed. Different ray species are found in habitats ranging from close to shore to extreme depths of over 10,000 feet deep.
Rays swim very differently than other fish. They are propelled through the water by their powerful pectoral fins, which ripple and flap much like a bird's wings. Their large pectoral fins also let them glide through the water. Some rays can even jump above the water. Other ray species are coated with a slimy mucus which reduces the surface tension and drag of the water and increases their swimming speed. Rays lack a swim bladder and use their oily liver to maintain buoyancy just like sharks do. When a ray stops swimming, it sinks down to the sea bed.
Some rays are oviparous (laying eggs) while others reproduce via ovoviviparity (the young hatch from eggs retained inside the female's body and develop there; there is no placenta to nourish the pups). All skates are oviparous. Fertilization is always internal. Both rays and skates have a long gestation period and produce relatively few young, so the growth of ray populations is slow. Reproduction and growth rates of ray populations are similar to those of sharks.
Success in life is directly correlated with the degree to which people believe they are capable and independent. And how do we learn to be capable and independent? We practice the necessary skills until we no longer need help and can act on our own.
Allowing children to gain independence and self-discipline is the purpose of the Practical Life activities in the Montessori classroom and at home. I say “home” deliberately, because these skills cannot be practiced only at school. What happens when a child is allowed to prepare their own snack, slice their own apples, pour their own drink, and wash and dry their own dishes in the Montessori classroom, but at home is told “Oh, you’re much too young to use a knife. You will spill that if you pour it. Let me do it for you”? The mixed message is clear.
The skills that are being taught at school are not allowed at home, thus creating a dichotomy in the child’s thinking: I am capable and independent at school, but at home I am not. Later, when Montessori teachers comment about how independent a child is, how he enjoys taking care of his environment and keeps his work area neat and tidy, the parents shake their heads and wonder why these skills are not being demonstrated at home. The answer is clear; the well-meaning and loving parents have done for the child what he is clearly able to do himself.
Montessori Practical Life Activities, In the Classroom and at Home
Practical Life activities are the traditional works of the family and home. They can be broken down into four categories:
1. Preliminary activities – carrying a tray, pouring water, spooning grain, walking on the line, etc.
2. Care of the environment – cleaning, sweeping, dusting, gardening, raking, polishing.
3. Care of self – washing, dressing, brushing teeth, preparing and serving food.
4. Grace and courtesy – using table manners, greeting others, saying “please” and “thank you”, learning to control one’s own body.
Each activity is carefully analyzed and broken down into successive steps so that the child may practise each step repeatedly until he has mastered the skill. Adults must model these activities, not just the mechanics of the process, but also the joy that is to be found in a job well done. If the adults lack enthusiasm, the child will learn that it is not a worthwhile task and will not want to continue. We can delight together in dishes that are clean and ready for use at our next meal or in a well-set table.
So, what can be done to extend the Practical Life activities in the home? First off, make sure that the materials you use are child-size. Why is this important? Well, I think about it this way. As an adult, I have several paring knives that I have bought or received over the years. My favorite, however, is the very first one I ever received, even though the tip is broken off and the blade is wobbly. Why is it my favorite? Because it fits my hands just right. The other ones just don’t “feel” right to me. This is the difference between a child learning how to work using materials that fit her just right and trying to adapt an adult-size tool to a child-size body.
Remember that Practical Life activities are the routines and rituals that adults perform daily in order to maintain their environment. Here are a few examples of how to invite your child to continue these valuable Practical Life lessons at home:
- Pouring and transferring liquids and dry ingredients without spilling
- Using scissors
- Opening and closing lids
- Screwing and unscrewing jar lids
- Wringing a wet cloth
- Washing a table or counter top
- Sweeping the floor with a broom and dustpan
- Mopping the floor
- Polishing silver or brass
- Polishing wood furniture
- Polishing shoes
- Sorting laundry by color
- Matching socks
- Folding towels and wash cloths
- Folding napkins
- Ironing handkerchiefs or pillowcases
- Sewing on buttons
- Washing dishes: pots and pans; plastic-ware; silver (flat) ware; glasses; plates
- Watering and caring for houseplants
- Flower arranging
- Caring for pets
- Cleaning up spills
- Putting materials and toys away
- Sorting recycling materials
- Washing hands
- Washing face
- Washing hair
- Blowing nose and properly throwing away the tissue
- Brushing teeth
- Combing hair
- Trimming fingernails
- Running water in the bath
- Hanging up towels after use
- Dressing oneself (including learning how to button, zip, snap, tie, buckle, Velcro)
- Putting on a jacket
- Hanging a jacket on a low hook
- Putting clean clothes in a drawer
- Measuring liquid and dry ingredients
- Peeling fruits and vegetables
- Using kitchen tools (fork, spoon, grater, blunt knife, ice cream scoop, bulb baster, peeler, chopping board, rolling pin, whisk, pitcher, cookie cutters, melon baller, apple corer, etc.)
- Spreading (like butter, peanut butter, a mixture)
- How to greet someone
- How to answer the telephone
- How to get up from the table
- How to carry a chair properly
- How to open and shut a door quietly
- How to interrupt when necessary
- How to excuse oneself when passing or bumping into another
- How to hand someone something
- Table manners
- Carrying objects without dropping or spilling
- Walking without bumping objects or people
Link to recent blogs:
- The Importance of Practical Life Activities in the Montessori Preschool Classroom
- The Importance of Practical Life Activities in the Montessori Elementary Classroom
As much as possible, NAMC’s web blog reflects the Montessori curriculum as provided in its teacher training programs. We realize and respect that Montessori schools are unique and may vary their schedules and offerings in accordance with the needs of their individual communities. We hope that our readers will find our articles useful and inspiring as a contribution to the global Montessori community. © the North American Montessori Center - originally posted in its entirety at Montessori Teacher Training on Tuesday, July 8, 2008. |
Bug's nose helps crabs smell
As crabs evolved from living in the sea to the land, they developed an insect "nose" to detect airborne chemicals, scientists show.
They would have had to adapt from sniffing hydrophilic or water-loving molecules in a liquid to mainly hydrophobic molecules in a gas, placing dramatically new demands on their sensory equipment.
A Swedish and Australian team looked at whether crabs reached the same solution to sniffing as other land-living creatures, or evolved a unique system of their own.
They publish their research in today's issue of the journal Current Biology.
The researchers studied the robber crab (Birgus latro), also known as the coconut crab because it climbs palm trees and cracks open coconuts with its massive claws.
It is fully terrestrial and will drown if held under water, so its olfactory system makes it perfect for examining the success of the transition from sea to land.
Sniffing for food
First the scientists used aromatic baits to confirm the crabs use smell to find food. They then looked at how they detect these odours, via sensory hairs on their second antenna.
To assess how sophisticated and sensitive the olfactory system is, the researchers cut an antenna from the crab, laid it across two electrodes and monitored the electrical activity in response to stimulants.
The scientists found robbers are sensitive to carbon dioxide, water vapour and odourants representing favourite food sources, like coconuts and bananas.
Measurements made on insect antennae have produced identical results and the receptors are similar in structure and function.
Moreover robber crabs, like insects, flick their antennae when they detect a whiff of something in the air.
In both creatures, their receptors probably evolved as an adaptation to terrestrial conditions, the researchers say.
Carbon dioxide and food odour receptors are important for finding food, and water receptors are important for monitoring the crabs' habitat. For instance, robbers need high humidity to live.
Australian co-author Professor Peter Greenaway, from Sydney's University of New South Wales, says this is "an excellent example of convergent evolution where similar structures have been developed in two essentially unrelated groups of animals at different times and from different starting points". |
On July 9, 1868, the 14th Amendment to the Constitution was ratified, granting citizenship and its benefits to “all persons born or naturalized in the United States” – a right that was previously denied to formerly enslaved persons.
Although the Emancipation Proclamation and the 13th Amendment ended slavery, at the end of the Civil War people still had many questions about what would happen to those who had only recently gained their freedom. Along with the 13th and 15th Amendments – collectively known as the “Reconstruction Amendments” – the 14th Amendment greatly expanded the rights of former slaves in the United States.
The authors of the amendment took care to ensure that those civil rights would remain protected, forbidding states from denying anyone “life, liberty or property, without due process of law” or the “equal protection of the laws.”
Commonly referenced by that second phrase, the 14th Amendment has played a key role in many important Supreme Court cases that have shaped the past two centuries.
Brown v. Board of Education (1954), for example, struck down the “separate but equal” doctrine – which structured the Jim Crow south – because it violated the “equal protection” clause of the 14th Amendment. Based on cases against segregated schools in Kansas, South Carolina, Virginia and Delaware, Brown challenged the widely enforced Jim Crow laws that, here, limited black children’s access to the same quality education that their white peers experienced. The court ruled that, even if the schools had access to the same tangible factors (like pencils, science lab equipment, or teachers), the act of separation itself was an act of discrimination that violated the 14th Amendment.
The amendment was a milestone in the history of abolition and civil rights in the United States and has continued to protect people from discrimination throughout the decades. Because of the 14th Amendment, our Constitution upholds the idea that “all” – not just white males – “are created equal”. Learn more about the 14th amendment in From Slavery to Freedom, located on the third floor.
Marketing and Communications Intern
Photo: Freedom Center exhibit From Slavery to Freedom explores the 14th Amendment in its historical context.
Functionalism is the dominant theory of mental states in modern philosophy. It was developed as an answer to the mind-body problem, in response to objections to both identity theory and logical behaviourism. Its core idea is that mental states can be characterized without reference to the underlying physical medium (such as neurons) that realizes them; what matters is the functional role played by higher-level states such as beliefs, desires, and emotions.
According to functionalism, mental states are constituted by their causal relations to one another and to sensory inputs and behavioral outputs. Because these states are not limited to a particular physical medium, they can be realized in multiple ways, including, theoretically, within non-biological systems.
Varieties of functionalism
Functionalism comes in many different varieties. The first formulation of a functionalist theory was put forth by Hilary Putnam. This formulation, which is now called machine-state functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers capable of computing any given algorithm which were developed by Alan Turing (called universal Turing machines).
In non-technical terms, a Turing machine can be visualized as an infinitely long tape divided into squares (the memory) with a box-shaped scanning device that sits over and scans one square of the memory at a time. Each square is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:
- Halt: Do nothing.
- R: move one square to the right.
- L: move one square to the left.
- B: erase whatever is on the square.
- 1: erase whatever is on the square and print a '1'.
An extremely simple example of a Turing machine which writes out the sequence '111' after scanning three blank squares and then stops is specified by the following machine table:
|   | State One | State Two | State Three |
|---|-----------|-----------|-------------|
| B | write 1; stay in state 1 | write 1; stay in state 2 | write 1; stay in state 3 |
| 1 | go right; go to state 2 | go right; go to state 3 | [halt] |
This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and also go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it's in state two and reads a 1, it will move one square to the right and go into state three. Finally, if it is in state three and reads a B, it prints a 1 and remains in state three.
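To make the machine table concrete, here is a minimal Python sketch (an illustrative addition, not part of the original article) that encodes the table above and runs it on a blank tape; the tape representation and state labels are arbitrary choices:

```python
# A minimal simulation of the three-state Turing machine described above.
# Each state is defined purely by what it does on each input (B or 1),
# i.e. by its causal role, not by what it is "made of".

# (symbol read, current state) -> (symbol to write or None, head move, next state)
TABLE = {
    ("B", 1): ("1", 0, 1),        # state one, blank square: write 1, stay in state one
    ("1", 1): (None, +1, 2),      # state one, reads 1: move right, go to state two
    ("B", 2): ("1", 0, 2),        # state two, blank square: write 1, stay in state two
    ("1", 2): (None, +1, 3),      # state two, reads 1: move right, go to state three
    ("B", 3): ("1", 0, 3),        # state three, blank square: write 1, stay in state three
    ("1", 3): (None, 0, "halt"),  # state three, reads 1: halt
}

def run(tape, state=1, head=0, max_steps=100):
    """Run the machine on a dict-backed tape (missing squares read as blank, 'B')."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "B")
        write, move, state = TABLE[(symbol, state)]
        if write is not None:
            tape[head] = write
        head += move
    return tape

if __name__ == "__main__":
    result = run({})                                  # start on an all-blank tape
    print([result.get(i, "B") for i in range(4)])     # ['1', '1', '1', 'B']: it wrote '111' and halted
```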
The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.
According to machine-state functionalism, the nature of a mental state is just like the nature of the automaton states described above. Just as state one simply is the state in which, given an input B, such and such happens, so being in pain is the state which disposes one to cry "ouch", become distracted, wonder what the cause is, and so forth.
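Because such a description mentions only inputs, outputs, and transitions, it is substrate-neutral. The following toy sketch (again an illustrative addition, with invented class names) realizes one and the same "pain role" in two unrelated systems:

```python
# One and the same functional role, realized by two unrelated "substrates".
# A state counts as pain-like here just in case the right input produces
# the right outputs, whatever the system is made of.

PAIN_OUTPUTS = ["cry 'ouch'", "become distracted", "wonder what the cause is"]

class Brain:
    """Stands in for a biological realizer (imagine C-fibre firings)."""
    def react(self, stimulus):
        return PAIN_OUTPUTS if stimulus == "tissue damage" else []

class SiliconController:
    """Stands in for a non-biological realizer with the same causal profile."""
    def react(self, stimulus):
        return PAIN_OUTPUTS if stimulus == "tissue damage" else []

def occupies_pain_role(system):
    return system.react("tissue damage") == PAIN_OUTPUTS

print(occupies_pain_role(Brain()), occupies_pain_role(SiliconController()))  # True True
```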
A second form of functionalism is based in the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psychofunctionalism. The fundamental idea of psychofunctionalism is that psychology is an irreducible science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of a kidney is to filter it and to maintain certain chemical balances and this is what counts for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.
A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur. In the case of ordinary language terms, such as "belief", "desire" or "hunger", the idea is that they get their meanings from our common-sense "folk psychological" theories about them. Such terms are subject to conceptual analyses which take something like the following form:
- Mental state M is the state that is caused by P and causes Q.
For example, the state of pain is caused by sitting on a tack (for example) and causes one to moan in pain. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the mental states and propositional attitudes they describe. Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities, while the former claims that such identities are necessary and not subject to empirical scientific investigation.
A fourth form of functionalism, often called homuncular functionalism, was developed by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenge that Ned Block's China Brain and John Searle's Chinese Room thought experiments posed for the more traditional forms of functionalism. In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, with each one carrying out the functional or causal role that would normally be ascribed to the mental states of an individual mind, many functionalists simply bit the bullet and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; that is, it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerges at the higher level must necessarily supervene on the individual minds of each member of the Chinese nation, to stick with Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher level M and the set of mental facts that occur at the lower level M1. Given the transitivity of supervenience, if M supervenes on M1 and M1 supervenes on P (the physical base), then M and M1 both supervene on P, even though they are (allegedly) totally different sets of mental facts.
Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homonculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the completely stupid and physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become stupider and simpler as one works one's way down the hierarchy.
Functionalism and physicalism
There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists--indeed, some of them, such as David Lewis have claimed to be strict reductionist-type physicalists.
Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").
On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.
In the case of David Lewis, there is a distinction in the concepts of "having pain" (a rigid designator true in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.
Putnam's Twin Earth thought experiment
Hilary Putnam is also responsible for one of the main arguments used against functionalism: the Twin Earth thought experiment, which was originally intended as an argument against semantic internalism. The experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water is not H2O but a different substance, XYZ. It is absolutely critical, however, to note that XYZ on Twin Earth is still called "water", even though it is a different substance from the H2O we call "water" on Earth. Since these worlds are identical in every way but one, you and your Twin Earth Doppelganger see exactly the same things, meet exactly the same people, have exactly the same jobs, and behave exactly the same way. In other words, you share the same inputs, outputs, and relations between inputs and outputs. But there is one crucial difference. You know (or at least believe, if we wish to make a weaker claim or avoid epistemological issues) that water is H2O. Your Doppelganger knows that water is XYZ. Therefore, you differ in mental states even though the causal properties that define your mental states are identical.
Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of input and outputs to include the objects that are the causes of mental representations in the external world.
A main criticism of functionalism is the inverted spectrum scenario, in which a person is born with colour experiences that are systematically inverted relative to everyone else's: where a "normal" perceiver sees red, the invert sees violet; where the normal perceiver sees orange, the invert sees blue, and so forth. When both are shown a strawberry and asked about its "redness", their inputs and outputs are the same (both say "it is red"), so on a functionalist account their mental states should be the same as well. Yet intuitively the two differ in their qualia: in saying "it is red", each is referring to a different experienced red.
Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor (1980) referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share nothing in common in any of their mental states. A possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning, it faces the difficulty of formulating a distinction between relevant and non-relevant contents without invoking the analytic-synthetic distinction.
Searle's Chinese Room
- Main article: Chinese room
The Chinese room argument by John Searle is a more direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. It attacks the idea that thought can be equated with following a set of syntactic rules.
As noted above, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "systems reply"). But this response runs into a more general difficulty: if functionalism ascribes minds to things that don't have them, then it is obviously too liberal. In response to this problem, Lycan (1987) suggested that much of human physiology be included in functional characterizations. But this, in turn, leads to a problem of chauvinism. The question of where exactly to draw the line here does not constitute an objection so much as a challenge to functionalists to come up with clearer ideas on what is mental and what is not.
- Levin, Janet. "Functionalism", The Stanford Encyclopedia of Philosophy (Fall 2004 Edition), Edward N. Zalta (ed.). (online)
- Block, Ned. "What is functionalism?" in Readings in Philosophy of Psychology, 2 vols., Vol. 1. Cambridge, MA: Harvard University Press, 1980.
- Brown, Curtis. "Functionalism in Philosophy of Mind." 2000.
- Mandik, Pete. "Fine-grained supervenience, cognitive neuroscience, and the future of functionalism." 1998.
- Philosophy of mind
- Cognitive science
- Simulated consciousness
- Personhood theory
This page uses Creative Commons Licensed content from Wikipedia.
A rare and little-studied species, much of the blue shiner’s biology is, as yet, undescribed. However, like many other related species, the blue shiner is believed to feed on a variety of terrestrial insects floating on the water surface, often foraging around river margins with overhanging vegetation. This species has a relatively prolonged breeding period, from early May through late August, suggesting females produce multiple clutches each season (5). In common with other shiners, males probably attract mates by defending territories and using courtship displays, with spawning taking place in a cavity in submerged wood (4) (5). The juvenile fish grow to reach maturity at around two years of age, and have a lifespan of three years (2) (4). |
Students explore and investigate naturally - as people. This literacy/science lesson is designed to help them notice the personality traits that they possess, and will need to continue to strengthen, as they become successful scientists.
In this unit, students will explore three types of folktales while practicing their critical thinking skills. In day one, students work in two groups to analyze texts, determine commonalities, and label their learning.
After weeks of research, students have decided that three main factors contributed to the tragedy of the Titanic. They will analyze their data to determine which factor contributed the most and support their opinion with evidence.
Reading informational text, making sense of it, and connecting learning gleaned from the information is heavy duty for third graders. This lesson explores a step by step strategy to combine science, reading, and writing. |
CHAPTER 23 NUCLEAR CHEMISTRY
Contents:
23.1 The Nature of Nuclear Reactions
23.2 Nuclear Stability
23.3 Natural Radioactivity
23.4 Nuclear Transmutation
23.5 Nuclear Fission
23.6 Nuclear Fusion
23.7 Uses of Isotopes
23.8 Biological Effects of Radiation

Nuclear chemistry is the study of reactions involving changes in atomic nuclei. This branch of chemistry began with the discovery of natural radioactivity by Antoine Becquerel and grew as a result of subsequent investigations by Pierre and Marie Curie and many others. Nuclear chemistry is very much in the news today. In addition to applications in the manufacture of atomic bombs, hydrogen bombs, and neutron bombs, even the peaceful use of nuclear energy has become controversial, in part because of safety concerns about nuclear power plants and also because of problems with the disposal of radioactive wastes. In this chapter we will study nuclear reactions, the stability of the atomic nucleus, radioactivity, and the effects of radiation on biological systems.

23.1 THE NATURE OF NUCLEAR REACTIONS

With the exception of hydrogen (¹₁H), all nuclei contain two kinds of fundamental particles, called protons and neutrons. Some nuclei are unstable; they emit particles and/or electromagnetic radiation spontaneously (see Section 2.2). The name for this phenomenon is radioactivity. All elements having an atomic number greater than 83 are radioactive. For example, the isotope of polonium, polonium-210 (²¹⁰₈₄Po), decays spontaneously to ²⁰⁶₈₂Pb by emitting an α particle.
Another type of radioactivity, known as nuclear transmutation, results from the
bombardment of nuclei by neutrons, protons, or other nuclei. An example of a nuclear
transmutation is the conversion of atmospheric 14 N to 14 C and 1 H, which results when
the nitrogen isotope captures a neutron (from the sun). In some cases, heavier elements
are synthesized from lighter elements. This type of transmutation occurs naturally in
outer space, but it can also be achieved artificially, as we will see in Section 23.4.
Radioactive decay and nuclear transmutation are nuclear reactions, which differ
significantly from ordinary chemical reactions. Table 23.1 summarizes the differences.
TABLE 23.1 Comparison of Chemical Reactions and Nuclear Reactions

Chemical reactions:
1. Atoms are rearranged by the breaking and forming of chemical bonds.
2. Only electrons in atomic orbitals are involved in the breaking and forming of bonds.
3. Reactions are accompanied by absorption or release of relatively small amounts of energy.
4. Rates of reaction are influenced by temperature, pressure, concentration, and catalysts.

Nuclear reactions:
1. Elements (or isotopes of the same elements) are converted from one to another.
2. Protons, neutrons, electrons, and other elementary particles may be involved.
3. Reactions are accompanied by absorption or release of tremendous amounts of energy.
4. Rates of reaction normally are not affected by temperature, pressure, and catalysts.

BALANCING NUCLEAR EQUATIONS

In order to discuss nuclear reactions in any depth, we need to understand how to write and balance the equations. Writing a nuclear equation differs somewhat from writing equations for chemical reactions. In addition to writing the symbols for the various chemical elements, we must also explicitly indicate protons, neutrons, and electrons. In fact, we must show the numbers of protons and neutrons present in every species in such an equation.

The symbols for elementary particles are as follows:

    ¹₁p or ¹₁H     proton
    ¹₀n            neutron
    ⁰₋₁e or ⁰₋₁β    electron
    ⁰₊₁e or ⁰₊₁β    positron
    ⁴₂He or ⁴₂α     α particle

In accordance with the notation used in Section 2.3, the superscript in each case denotes the mass number (the total number of neutrons and protons present) and the subscript is the atomic number (the number of protons). Thus, the "atomic number" of a proton is 1, because there is one proton present, and the "mass number" is also 1, because there is one proton but no neutrons present. On the other hand, the "mass number" of a neutron is 1, but its "atomic number" is zero, because there are no protons present. For the electron, the "mass number" is zero (there are neither protons nor neutrons present), but the "atomic number" is -1, because the electron possesses a unit negative charge.

The symbol ⁰₋₁e represents an electron in or from an atomic orbital. The symbol ⁰₋₁β represents an electron that, although physically identical to any other electron, comes from a nucleus (in a decay process in which a neutron is converted to a proton and an electron) and not from an atomic orbital. The positron has the same mass as the electron, but bears a +1 charge. The α particle has two protons and two neutrons, so its atomic number is 2 and its mass number is 4.
In balancing any nuclear equation, we observe the following rules:

• The total number of protons plus neutrons in the products and in the reactants must be the same (conservation of mass number).
• The total number of nuclear charges in the products and in the reactants must be the same (conservation of atomic number).

If we know the atomic numbers and mass numbers of all the species but one in a nuclear equation, we can identify the unknown species by applying these rules, as shown in the following example, which illustrates how to balance nuclear decay equations.
EXAMPLE 23.1

Balance the following nuclear equations (that is, identify the product X):

(a) ²¹²₈₄Po → ²⁰⁸₈₂Pb + X
(b) ¹³⁷₅₅Cs → ¹³⁷₅₆Ba + X

Answer: (a) The mass number and atomic number are 212 and 84, respectively, on the left-hand side and 208 and 82, respectively, on the right-hand side. Thus, X must have a mass number of 4 and an atomic number of 2, which means that it is an α particle. The balanced equation is

    ²¹²₈₄Po → ²⁰⁸₈₂Pb + ⁴₂α

(b) In this case the mass number is the same on both sides of the equation, but the atomic number of the product is 1 more than that of the reactant. The only way this change can come about is to have a neutron in the Cs nucleus transformed into a proton and an electron; that is, ¹₀n → ¹₁p + ⁰₋₁β (note that this process does not alter the mass number). Thus, the balanced equation is

    ¹³⁷₅₅Cs → ¹³⁷₅₆Ba + ⁰₋₁β

(We use the ⁰₋₁β notation here because the electron came from the nucleus.)

Comment: Note that the equations in (a) and (b) are balanced for nuclear particles but not for electrical charges. To balance the charges, we would need to add two electrons on the right-hand side of (a) and express barium as a cation (Ba⁺) in (b). Similar problems: 23.5, 23.6.

PRACTICE EXERCISE
Identify X in the following nuclear equation: ₃₃As → ⁰₋₁β + X
In studying the stability of the atomic nucleus, it is helpful to know something about
its density, because it tells us how tightly the particles are packed together. As a sample calculation, let us assume that a nucleus has a radius of 5 10 3 pm and a mass
of 1 10 22 g. These figures correspond roughly to a nucleus containing 30 protons
and 30 neutrons. Density is mass/volume, and we can calculate the volume from the
known radius (the volume of a sphere is 4 r3, where r is the radius of the sphere).
First we convert the pm units to cm. Then we calculate the density in g/cm3:
density 5 mass
2 To dramatize the almost incomprehensibly high density, it has
been suggested that it is equivalent to packing the mass of all the
world’s automobiles into one
thimble. 10 3 pm
1 1 10 12 m
1 pm 10 22 g
3 100 cm
13 5 10 13 cm g
cm)3 1014 g/cm3 This is an exceedingly high density. The highest density known for an element is
22.6 g/cm3, for osmium (Os). Thus the average atomic nucleus is roughly 9 1012
(or 9 trillion) times more dense than the densest element known!
The enormously high density of the nucleus prompts us to wonder what holds the
particles together so tightly. From Coulomb’s law we know that like charges repel and
unlike charges attract one another. We would thus expect the protons to repel one another strongly, particularly when we consider how close they must be to each other.
This indeed is so. However, in addition to the repulsion, there are also short-range attractions between proton and proton, proton and neutron, and neutron and neutron. The
stability of any nucleus is determined by the difference between coulombic repulsion
and the short-range attraction. If repulsion outweighs attraction, the nucleus disintegrates, emitting particles and/or radiation. If attractive forces prevail, the nucleus is stable.
The principal factor that determines whether a nucleus is stable is the neutron-toproton ratio (n/p). For stable atoms of elements having low atomic number, the n/p
value is close to 1. As the atomic number increases, the neutron-to-proton ratios of the
stable nuclei become greater than 1. This deviation at higher atomic numbers arises
because a larger number of neutrons is needed to counteract the strong repulsion among
the protons and stabilize the nucleus. The following rules are useful in predicting nuclear stability:
Nuclei that contain 2, 8, 20, 50, 82, or 126 protons or neutrons are generally more
stable than nuclei that do not possess these numbers. For example, there are ten stable isotopes of tin (Sn) with the atomic number 50 and only two stable isotopes of
antimony (Sb) with the atomic number 51. The numbers 2, 8, 20, 50, 82, and 126
are called magic numbers. The significance of these numbers for nuclear stability
is similar to the numbers of electrons associated with the very stable noble gases
(that is, 2, 10, 18, 36, 54, and 86 electrons).
• Nuclei with even numbers of both protons and neutrons are generally more stable
than those with odd numbers of these particles (Table 23.2).
• All isotopes of the elements with atomic numbers higher than 83 are radioactive.
All isotopes of technetium (Tc, Z 43) and promethium (Pm, Z 61) are radioactive.
• Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 23.2 NUCLEAR STABILITY 907 TABLE 23.2 Number of Stable Isotopes with Even and Odd
Numbers of Protons and Neutrons
PROTONS NEUTRONS Odd
Even NUMBER OF STABLE ISOTOPES 4
157 Figure 23.1 shows a plot of the number of neutrons versus the number of protons
in various isotopes. The stable nuclei are located in an area of the graph known as the
belt of stability. Most radioactive nuclei lie outside this belt. Above the stability belt,
the nuclei have higher neutron-to-proton ratios than those within the belt (for the same
number of protons). To lower this ratio (and hence move down toward the belt of stability), these nuclei undergo the following process, called -particle emission:
1 88n 1 p
1 Beta-particle emission leads to an increase in the number of protons in the nucleus and
a simultaneous decrease in the number of neutrons. Some examples are
6 C 88n 7 N
19 K 88n 20 Ca
40 Zr 88n 41 Nb FIGURE 23.1 Plot of neutrons
versus protons for various stable
isotopes, represented by dots.
The straight line represents the
points at which the neutron-toproton ratio equals 1. The
shaded area represents the belt
of stability. 0
1 120 100 Number of neutrons 80
Belt of stability 60 Neutrons/Protons = 1 40 20 0 Back Forward Main Menu 20 40
Numbers of protons TOC Study Guide TOC 80 Textbook Website MHHE Website 908 NUCLEAR CHEMISTRY Below the stability belt the nuclei have lower neutron-to-proton ratios than those in
the belt (for the same number of protons). To increase this ratio (and hence move up
toward the belt of stability), these nuclei either emit a positron
1p 88n 1 n
1 or undergo electron capture. An example of positron emission is
19 K We use 0e rather than 0 here
because the electron came from
an atomic orbital and not from the
1 88n 38 Ar
18 Electron capture is the capture of an electron — usually a 1s electron — by the nucleus.
The captured electron combines with a proton to form a neutron so that the atomic
number decreases by one while the mass number remains the same. This process has
the same net effect as positron emission:
1 e 88n 17 Cl
1 e 88n 25 Mn 37
26 Fe NUCLEAR BINDING ENERGY A quantitative measure of nuclear stability is the nuclear binding energy, which is the
energy required to break up a nucleus into its component protons and neutrons. This
quantity represents the conversion of mass to energy that occurs during an exothermic
The concept of nuclear binding energy evolved from studies of nuclear properties
showing that the masses of nuclei are always less than the sum of the masses of the
nucleons, which is a general term for the protons and neutrons in a nucleus. For example, the 19 F isotope has an atomic mass of 18.9984 amu. The nucleus has 9 protons
and 10 neutrons and therefore a total of 19 nucleons. Using the known masses of the
1 H atom (1.007825 amu) and the neutron (1.008665 amu), we can carry out the following analysis. The mass of 9 1 H atoms (that is, the mass of 9 protons and 9 elec1
9 1.007825 amu 9.070425 amu and the mass of 10 neutrons is
10 1.008665 amu Therefore, the atomic mass of a
trons, protons, and neutrons is 19
9F 9.070425 amu
There is no change in the electron’s mass since it is not a
nucleon. 10.08665 amu atom calculated from the known numbers of elec10.08665 amu 19.15708 amu which is larger than 18.9984 amu (the measured mass of 19 F) by 0.1587 amu.
The difference between the mass of an atom and the sum of the masses of its protons, neutrons, and electrons is called the mass defect. Relativity theory tells us that
the loss in mass shows up as energy (heat) given off to the surroundings. Thus the formation of 19 F is exothermic. According to Einstein’s mass-energy equivalence rela9
tionship (E mc2, where E is energy, m is mass, and c is the velocity of light), we can
calculate the amount of energy released. We start by writing
E ( m)c2 (23.1) where E and m are defined as follows: Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 23.2 E energy of product m mass of product NUCLEAR STABILITY 909 energy of reactants
mass of reactants Thus we have for the change in mass
m 18.9984 amu 19.15708 amu 0.1587 amu Because 19 F has a mass that is less than the mass calculated from the number of elec9
trons and nucleons present, m is a negative quantity. Consequently, E is also a negative quantity; that is, energy is released to the surroundings as a result of the formation of the fluorine-19 nucleus. So we calculate E as follows:
E 108 m/s)2 ( 0.1587 amu)(3.00
16 1.43 10 22 amu m /s With the conversion factors
1 kg 6.022 1026 amu
22 1 kg m /s 1J we obtain
10 11 amu m2
s2 1.00 kg
6.022 1026 amu 1J
1 kg m2/s2 J This is the amount of energy released when one fluorine-19 nucleus is formed from 9
protons and 10 neutrons. The nuclear binding energy of the nucleus is 2.37 10 11 J,
which is the amount of energy needed to decompose the nucleus into separate protons
and neutrons. In the formation of 1 mole of fluorine nuclei, for instance, the energy
E ( 2.37 10 11 J)(6.022 1.43 1013 J/mol 1.43 1023/mol) 1010 kJ/mol The nuclear binding energy, therefore, is 1.43 1010 kJ for 1 mole of fluorine-19 nuclei, which is a tremendously large quantity when we consider that the enthalpies of
ordinary chemical reactions are of the order of only 200 kJ. The procedure we have
followed can be used to calculate the nuclear binding energy of any nucleus.
As we have noted, nuclear binding energy is an indication of the stability of a nucleus. However, in comparing the stability of any two nuclei we must account for the
fact that they have different numbers of nucleons. For this reason it is more meaningful to use the nuclear binding energy per nucleon, defined as
nuclear binding energy per nucleon nuclear binding energy
number of nucleons For the fluorine-19 nucleus,
nuclear binding energy per nucleon 2.37 10 11 J
1.25 Back Forward Main Menu TOC Study Guide TOC 10 12 J/nucleon Textbook Website MHHE Website NUCLEAR CHEMISTRY FIGURE 23.2 Plot of nuclear
binding energy per nucleon versus mass number. Nuclear binding energy per nucleon (J) 910 56Fe
4 1.5 × 10–12 He 238U 1.2 × 10–12
9 × 10–13
6 × 10–13
3 × 10–13 2H 0 20 40 60 80 100 120 140 160 180 200 220 240 260 Mass number The nuclear binding energy per nucleon allows us to compare the stability of all
nuclei on a common basis. Figure 23.2 shows the variation of nuclear binding energy
per nucleon plotted against mass number. As you can see, the curve rises rather steeply.
The highest binding energies per nucleon belong to elements with intermediate mass
numbers—between 40 and 100—and are greatest for elements in the iron, cobalt, and
nickel region (the Group 8B elements) of the periodic table. This means that the net
attractive forces among the particles (protons and neutrons) are greatest for the nuclei
of these elements.
Nuclear binding energy and nuclear binding energy per nucleon are calculated for
an iodine nucleus in the following example.
EXAMPLE 23.2

The atomic mass of ¹²⁷₅₃I is 126.9004 amu. Calculate the nuclear binding energy of this nucleus and the corresponding nuclear binding energy per nucleon.

Answer: There are 53 protons and 74 neutrons in the nucleus. The mass of 53 ¹₁H atoms is

    53 × 1.007825 amu = 53.41473 amu

and the mass of 74 neutrons is

    74 × 1.008665 amu = 74.64121 amu

Therefore, the predicted mass for ¹²⁷₅₃I is 53.41473 + 74.64121 = 128.05594 amu, and the mass defect is

    Δm = 126.9004 amu - 128.05594 amu = -1.1555 amu

The energy released is

    ΔE = (Δm)c² = (-1.1555 amu)(3.00 × 10⁸ m/s)² = -1.04 × 10¹⁷ amu·m²/s²

Converting to joules (1 kg = 6.022 × 10²⁶ amu; 1 J = 1 kg·m²/s²) gives ΔE = -1.73 × 10⁻¹⁰ J. Thus the nuclear binding energy is 1.73 × 10⁻¹⁰ J, and the nuclear binding energy per nucleon is

    (1.73 × 10⁻¹⁰ J)/(127 nucleons) = 1.36 × 10⁻¹² J/nucleon

(The neutron-to-proton ratio is 1.4, which places iodine-127 in the belt of stability.) Similar problems: 23.19, 23.20.
12 J/nucleon PRACTICE EXERCISE Calculate the nuclear binding energy (in J) and the binding energy per nucleon of
83 Bi (208.9804 amu). 23.3 NATURAL RADIOACTIVITY Nuclei outside the belt of stability, as well as nuclei with more than 83 protons, tend
to be unstable. The spontaneous emission by unstable nuclei of particles or electromagnetic radiation, or both, is known as radioactivity. The main types of radiation are:
particles (or doubly charged helium nuclei, He2 ); particles (or electrons); rays,
which are very-short-wavelength (0.1 nm to 10 4 nm) electromagnetic waves; positron
emission; and electron capture.
The disintegration of a radioactive nucleus is often the beginning of a radioactive
decay series, which is a sequence of nuclear reactions that ultimately result in the formation of a stable isotope. Table 23.3 shows the decay series of naturally occurring
uranium-238, which involves 14 steps. This decay scheme, known as the uranium decay series, also shows the half-lives of all the products.
It is important to be able to balance the nuclear reaction for each of the steps in
a radioactive decay series. For example, the first step in the uranium decay series is
the decay of uranium-238 to thorium-234, with the emission of an particle. Hence,
the reaction is
92 U 88n 234 Th
2 The next step is represented by
90 Th 88n 234 Pa
1 and so on. In a discussion of radioactive decay steps, the beginning radioactive isotope is called the parent and the product, the daughter.
KINETICS OF RADIOACTIVE DECAY All radioactive decays obey first-order kinetics. Therefore the rate of radioactive decay at any time t is given by
rate of decay at time t N where is the first-order rate constant and N is the number of radioactive nuclei present at time t. (We use instead of k for rate constant in accord with the notation used
by nuclear scientists.) According to Equation (13.3), the number of radioactive nuclei
at time zero (N0) and time t (Nt) is Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 912 NUCLEAR CHEMISTRY TABLE 23.3 The Uranium Decay Series
109 yr 4.51
24.1 days 234
1.17 min 234
86 104 yr 1.60 230
90 105 yr 103 yr Th Ra Rn
3.82 days 218
3.05 min 0.04%
82 At Pb 2s 26.8 min
83 Bi 99.96%
1.6 10 4 214
84 19.7 min
81 Po Tl s 1.32 min
20.4 yr 210
83 Bi 100%
84 5.01 days
81 Po Tl
4.20 min 138 days
82 Pb ln N0
Nt t and the corresponding half-life of the reaction is given by Equation (13.5):
We do not have to wait 4.51
109 yr to make a half-life measurement of uranium-238. Its
value can be calculated from the
rate constant using Equation
(13.5). Back Forward 0.693 The half-lives (hence the rate constants) of radioactive isotopes vary greatly from nucleus to nucleus. For example, looking at Table 23.3, we find two extreme cases: Main Menu 238
84 Po TOC 88n 234 Th
82 Pb Study Guide TOC 4
2 4.51 t1
2 1.6 Textbook Website 109 yr
10 4 s MHHE Website 23.3 NATURAL RADIOACTIVITY 913 The ratio of these two rate constants after conversion to the same time unit is about
1 1021, an enormously large number. Furthermore, the rate constants are unaffected
by changes in environmental conditions such as temperature and pressure. These highly
unusual features are not seen in ordinary chemical reactions (see Table 23.1). DATING BASED ON RADIOACTIVE DECAY The half-lives of radioactive isotopes have been used as “atomic clocks” to determine
the ages of certain objects. Some examples of dating by radioactive decay measurements will be described here.
Radiocarbon Dating The carbon-14 isotope is produced when atmospheric nitrogen is bombarded by cosmic rays:
0n 88n 14 C
1H The radioactive carbon-14 isotope decays according to the equation
1 88n 14 N
7 This decay series is the basis of the radiocarbon dating technique described on p. 527.
Dating Using Uranium-238 Isotopes
We can think of the first step as
the rate-determining step in the
overall process. Because some of the intermediate products in the uranium series have very long halflives (see Table 23.3), this series is particularly suitable for estimating the age of rocks
in the earth and of extraterrestrial objects. The half-life for the first step (238 U to 234 Th)
is 4.51 109 yr. This is about 20,000 times the second largest value (that is, 2.47
105 yr), which is the half-life for 234 U to 230 Th. Therefore, as a good approximation
we can assume that the half-life for the overall process (that is, from 238 U to 206 Pb) is
governed solely by the first step:
92 U 238U t1
_ 238U 206Pb 206 g/2
238 g/2 4.51 × 109 yr FIGURE 23.3 After one halflife, half of the original uranium238 is converted to lead-206. Forward Main Menu 8 4
2 6 0
2 4.51 109 yr In naturally occurring uranium minerals we should and do find some lead-206
isotopes formed by radioactive decay. Assuming that no lead was present when the
mineral was formed and that the mineral has not undergone chemical changes that
would allow the lead-206 isotope to be separated from the parent uranium-238, it is
possible to estimate the age of the rocks from the mass ratio of 206 Pb to 238 U. The
above equation tells us that for every mole, or 238 g, of uranium that undergoes complete decay, 1 mole, or 206 g, of lead is formed. If only half a mole of uranium-238
has undergone decay, the mass ratio Pb-206/U-238 becomes 2 Back 88n 206 Pb
82 0.866 and the process would have taken a half-life of 4.51 109 yr to complete (Figure 23.3).
Ratios lower than 0.866 mean that the rocks are less than 4.51 109 yr old, and higher
ratios suggest a greater age. Interestingly, studies based on the uranium series as well
as other decay series put the age of the oldest rocks and, therefore, probably the age
of Earth itself at 4.5 109, or 4.5 billion, years. TOC Study Guide TOC Textbook Website MHHE Website 914 NUCLEAR CHEMISTRY Dating Using Potassium-40 Isotopes This is one of the most important techniques in geochemistry. The radioactive
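The uranium-lead estimate above reduces to a one-line first-order decay calculation. The sketch below is illustrative only; it assumes, as the text does, that no lead-206 was present initially and none has been lost, and the function name is hypothetical:

```python
import math

# First-order decay: N(t) = N0 * exp(-lam * t), with t_half = ln 2 / lam.
T_HALF_U238 = 4.51e9   # years, the half-life used in the text for 238U -> 206Pb

def age_from_pb_u_mass_ratio(mass_ratio_pb206_per_u238):
    """Estimate a rock's age from the measured mass ratio m(Pb-206)/m(U-238)."""
    lam = math.log(2) / T_HALF_U238
    # Convert the mass ratio to a mole ratio of daughter to remaining parent.
    mole_ratio = mass_ratio_pb206_per_u238 * 238.0 / 206.0
    # N0/Nt = 1 + (moles of daughter)/(moles of remaining parent)
    return math.log(1.0 + mole_ratio) / lam

print(f"{age_from_pb_u_mass_ratio(0.866):.2e} years")  # ~4.5e9: one half-life, as in the text
```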
potassium-40 isotope decays by several different modes, but the relevant one as far as
dating is concerned is that of electron capture:
19 K 88n 40 Ar
2 1.2 109 yr The accumulation of gaseous argon-40 is used to gauge the age of a specimen. When
a potassium-40 atom in a mineral decays, argon-40 is trapped in the lattice of the mineral and can escape only if the material is melted. Melting, therefore, is the procedure
for analyzing a mineral sample in the laboratory. The amount of argon-40 present can
be conveniently measured with a mass spectrometer (see p. 76). Knowing the ratio of
argon-40 to potassium-40 in the mineral and the half-life of decay makes it possible
to establish the ages of rocks ranging from millions to billions of years old. 23.4 NUCLEAR TRANSMUTATION The scope of nuclear chemistry would be rather narrow if study were limited to natural radioactive elements. An experiment performed by Rutherford in 1919, however,
suggested the possibility of producing radioactivity artificially. When he bombarded a
sample of nitrogen with particles, the following reaction took place:
2 88n 17O
1p An oxygen-17 isotope was produced with the emission of a proton. This reaction
demonstrated for the first time the feasibility of converting one element into another,
by the process of nuclear transmutation. Nuclear transmutation differs from radioactive decay in that the former is brought about by the collision of two particles.
The above reaction can be abbreviated as 17N( ,p)17O. Note that in the parenthe8
ses the bombarding particle is written first, followed by the ejected particle. The following example illustrates the use of this notation to represent nuclear transmutations.
EXAMPLE 23.3

Write the balanced equation for the nuclear reaction ⁵⁶₂₆Fe(d,α)⁵⁴₂₅Mn, where d represents the deuterium nucleus (that is, ²₁H).

Answer: The abbreviation tells us that when iron-56 is bombarded with a deuterium nucleus, it produces the manganese-54 nucleus plus an α particle, ⁴₂He. Thus, the equation for this reaction is

    ⁵⁶₂₆Fe + ²₁H → ⁵⁴₂₅Mn + ⁴₂α

Similar problems: 23.33, 23.34.

PRACTICE EXERCISE
Write a balanced equation for ¹⁰⁶₄₆Pd(α,p)¹⁰⁹₄₇Ag.
carbon-14 isotope can be prepared by bombarding nitrogen-14 with neutrons. Tritium,
1H, is prepared according to the following bombardment: Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 23.4 6
3 Li Alternating voltage
± ± Magnetic field Tritium decays with the emission of
1H Target 88n 3 H
88n 3 He
2 12.5 yr Many synthetic isotopes are prepared by using neutrons as projectiles. This approach is particularly convenient because neutrons carry no charges and therefore are
not repelled by the targets — the nuclei. In contrast, when the projectiles are positively
charged particles (for example, protons or particles), they must have considerable
kinetic energy in order to overcome the electrostatic repulsion between themselves and
the target atoms. The synthesis of phosphorus from aluminum is one example: Dees
FIGURE 23.4 Schematic diagram of a cyclotron particle accelerator. The particle (an ion) to
be accelerated starts at the center
and is forced to move in a spiral
path through the influence of
electric and magnetic fields until
it emerges at a high velocity. The
magnetic fields are perpendicular
to the plane of the dees (socalled because of their shape),
which are hollow and serve as
0n 915 NUCLEAR TRANSMUTATION 27
13 Al 4
2 88n 30 P
0n A particle accelerator uses electric and magnetic fields to increase the kinetic energy
of charged species so that a reaction will occur (Figure 23.4). Alternating the polarity
and ) on specially constructed plates causes the particles to accelerate
along a spiral path. When they have the energy necessary to initiate the desired nuclear reaction, they are guided out of the accelerator into a collision with a target substance.
Various designs have been developed for particle accelerators, one of which accelerates particles along a linear path of about 3 km (Figure 23.5). It is now possible
to accelerate particles to a speed well above 90 percent of the speed of light. (According
to Einstein’s theory of relativity, it is impossible for a particle to move at the speed of
light. The only exception is the photon, which has a zero rest mass.) The extremely
energetic particles produced in accelerators are employed by physicists to smash atomic
nuclei to fragments. Studying the debris from such disintegrations provides valuable
information about nuclear structure and binding forces. FIGURE 23.5 A section of a
linear particle accelerator. Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 916 NUCLEAR CHEMISTRY TABLE 23.4 The Transuranium Elements
ATOMIC NUMBER NAME 93
Meitnerium SYMBOL Np
0n 88n 93Np
93Np 88n 94Pu
0n 88n 95Am
2 88n 96Cm
2 88n 97Bk
2 88n 98Cf
U 15 0n 88n 253Es
170n 88n 100Fm
2 88n 101Md
6C 88n 102No
5B 88n 103Lr
Cf 12C 88n 257Rf
7N 88n 105Ha
8O 88n 106Sg
24Cr 88n 107Ns
26Fe 88n 108Hs
26Fe 88n 109Mt 0
0n THE TRANSURANIUM ELEMENTS Particle accelerators made it possible to synthesize the so-called transuranium elements, elements with atomic numbers greater than 92. Neptunium (Z 93) was first
prepared in 1940. Since then, 20 other transuranium elements have been synthesized.
All isotopes of these elements are radioactive. Table 23.4 lists the transuranium elements and the reactions through which they are formed. 23.5 NUCLEAR FISSION Nuclear fission is the process in which a heavy nucleus (mass number > 200) divides
to form smaller nuclei of intermediate mass and one or more neutrons. Because the
heavy nucleus is less stable than its products (see Figure 23.2), this process releases a
large amount of energy.
The first nuclear fission reaction to be studied was that of uranium-235 bombarded
with slow neutrons, whose speed is comparable to that of air molecules at room temperature. Under these conditions, uranium-235 undergoes fission, as shown in Figure
23.6. Actually, this reaction is very complex: More than 30 different elements have
92 143 Xe
54 FIGURE 23.6 Nuclear fission of U-235. When a U-235 nucleus captures a neutron (red dot), it
undergoes fission to yield two smaller nuclei. On the average, 2.4 neutrons are emitted for every
U-235 nucleus that divides. Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website 23.5 NUCLEAR FISSION 917 Relative amounts of fission product been found among the fission products (Figure 23.7). A representative reaction is
92U 80 100 120 140 160 Mass number
FIGURE 23.7 Relative yields of
the products resulting from the fission of U-235, as a function of
mass number. TABLE 23.5 Nuclear
Binding Energies of 235U
and Its Fission Products
0n 88n 90 Sr
54 Xe 3 1n
0 Although many heavy nuclei can be made to undergo fission, only the fission of naturally occurring uranium-235 and of the artificial isotope plutonium-239 has any practical importance. Table 23.5 shows the nuclear binding energies of uranium-235 and
its fission products. As the table shows, the binding energy per nucleon for uranium235 is less than the sum of the binding energies for strontium-90 and xenon-143.
Therefore, when a uranium-235 nucleus is split into two smaller nuclei, a certain amount
of energy is released. Let us estimate the magnitude of this energy. The difference between the binding energies of the reactants and products is (1.23 10 10 1.92
10 10) J (2.82 10 10) J, or 3.3 10 11 J per uranium-235 nucleus. For 1 mole
of uranium-235, the energy released would be (3.3 10 11)(6.02 1023), or 2.0
1013 J. This is an extremely exothermic reaction, considering that the heat of combustion of 1 ton of coal is only about 8 107 J.
The significant feature of uranium-235 fission is not just the enormous amount of
energy released, but the fact that more neutrons are produced than are originally captured in the process. This property makes possible a nuclear chain reaction, which is
a self-sustaining sequence of nuclear fission reactions. The neutrons generated during
the initial stages of fission can induce fission in other uranium-235 nuclei, which in
turn produce more neutrons, and so on. In less than a second, the reaction can become
uncontrollable, liberating a tremendous amount of heat to the surroundings.
Figure 23.8 shows two types of fission reactions. For a chain reaction to occur,
enough uranium-235 must be present in the sample to capture the neutrons. Otherwise,
many of the neutrons will escape from the sample and the chain reaction will not occur, as depicted in Figure 23.8(a). In this situation the mass of the sample is said to be
subcritical. Figure 23.8(b) shows what happens when the amount of the fissionable
material is equal to or greater than the critical mass, the minimum mass of fissionable
material required to generate a self-sustaining nuclear chain reaction. In this case most
of the neutrons will be captured by uranium-235 nuclei, and a chain reaction will occur. FIGURE 23.8 Two types of nuclear fission. (a) If the mass of
U-235 is subcritical, no chain reaction will result. Many of the
neutrons produced will escape to
the surroundings. (b) If a critical
mass is present, many of the neutrons emitted during the fission
process will be captured by other
U-235 nuclei and a chain reaction will occur. (a) Back Forward Main Menu TOC Study Guide TOC (b) Textbook Website MHHE Website 918 NUCLEAR CHEMISTRY TNT explosive THE ATOMIC BOMB The first application of nuclear fission was in the development of the atomic bomb.
How is such a bomb made and detonated? The crucial factor in the bomb’s design is
the determination of the critical mass for the bomb. A small atomic bomb is equivalent to 20,000 tons of TNT (trinitrotoluene). Since 1 ton of TNT releases about 4
109 J of energy, 20,000 tons would produce 8 1013 J. Earlier we saw that 1 mole, or
235 g, of uranium-235 liberates 2.0 1013 J of energy when it undergoes fission. Thus
the mass of the isotope present in a small bomb must be at least
Subcritical U-235 wedge
FIGURE 23.9 Schematic cross
section of an atomic bomb. The
TNT explosives are set off first.
The explosion forces the sections
of fissionable material together to
form an amount considerably
larger than the critical mass. 8 1013 J
2.0 1013 J 1 kg For obvious reasons, an atomic bomb is never assembled with the critical mass already
present. Instead, the critical mass is formed by using a conventional explosive, such
as TNT, to force the fissionable sections together, as shown in Figure 23.9. Neutrons
from a source at the center of the device trigger the nuclear chain reaction. Uranium235 was the fissionable material in the bomb dropped on Hiroshima, Japan, on August
6, 1945. Plutonium-239 was used in the bomb exploded over Nagasaki three days later.
The fission reactions generated were similar in these two cases, as was the extent of
the destruction. NUCLEAR REACTORS In Europe, nuclear reactors
provide about 40 percent of the
electrical energy consumed. A peaceful but controversial application of nuclear fission is the generation of electricity using heat from a controlled chain reaction in a nuclear reactor. Currently, nuclear reactors provide about 20 percent of the electrical energy in the United States.
This is a small but by no means negligible contribution to the nation’s energy production. Several different types of nuclear reactors are in operation; we will briefly discuss the main features of three of them, along with their advantages and disadvantages. Light Water Reactors Most of the nuclear reactors in the United States are light water reactors. Figure 23.10
is a schematic diagram of such a reactor, and Figure 23.11 shows the refueling process
in the core of a nuclear reactor.
An important aspect of the fission process is the speed of the neutrons. Slow neutrons split uranium-235 nuclei more efficiently than do fast ones. Because fission reactions are highly exothermic, the neutrons produced usually move at high velocities.
For greater efficiency they must be slowed down before they can be used to induce
nuclear disintegration. To accomplish this goal, scientists use moderators, which are
substances that can reduce the kinetic energy of neutrons. A good moderator must satisfy several requirements: It should be nontoxic and inexpensive (as very large quantities of it are necessary); and it should resist conversion into a radioactive substance
by neutron bombardment. Furthermore, it is advantageous for the moderator to be a
fluid so that it can also be used as a coolant. No substance fulfills all these requirements, although water comes closer than many others that have been considered.
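The text does not put numbers on how quickly different nuclei slow a neutron down, but simple elastic-collision kinematics (not from the text, and ignoring scattering angles, thermal motion, and absorption cross sections) gives a feel for why hydrogen-rich moderators work so well: in a head-on elastic collision with a nucleus of mass number A, a neutron keeps roughly the fraction ((A − 1)/(A + 1))² of its kinetic energy.

    # Illustrative only: best-case (head-on) energy loss of a neutron per elastic
    # collision with a stationary nucleus of mass number A. Real moderation also
    # depends on scattering angles and cross sections, which this sketch ignores.

    def energy_retained_fraction(A: float) -> float:
        """Fraction of kinetic energy a neutron keeps in a head-on elastic collision."""
        return ((A - 1) / (A + 1)) ** 2

    for name, A in [("H-1 (light water)", 1), ("H-2 (heavy water)", 2), ("C-12 (graphite)", 12)]:
        print(f"{name:>20}: keeps {energy_retained_fraction(A):.3f} of its energy per head-on collision")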
Nuclear reactors that use light water (H2O) as a moderator are called light water reactors because ¹H is the lightest isotope of the element hydrogen.

FIGURE 23.10 Schematic diagram of a nuclear fission reactor. The fission process is controlled by cadmium or boron rods. The heat generated by the process is used to produce steam for the generation of electricity via a heat exchange system. (Diagram labels: shield, steam, water, pump, to steam turbine.)

The nuclear fuel consists of uranium, usually in the form of its oxide, U3O8 (Figure
23.12). Naturally occurring uranium contains about 0.7 percent of the uranium-235 isotope, which is too low a concentration to sustain a small-scale chain reaction. For effective operation of a light water reactor, uranium-235 must be enriched to a concentration of 3 or 4 percent. In principle, the main difference between an atomic bomb
and a nuclear reactor is that the chain reaction that takes place in a nuclear reactor is
kept under control at all times. The factor limiting the rate of the reaction is the number of neutrons present. This can be controlled by lowering cadmium or boron rods
between the fuel elements. These rods capture neutrons according to the equations
    ¹¹³Cd + ¹n → ¹¹⁴Cd + γ
    ¹⁰B + ¹n → ⁷Li + ⁴He

where γ denotes gamma rays. Without the control rods the reactor core would melt from the heat generated and release radioactive materials into the environment.

FIGURE 23.11 Refueling the core of a nuclear reactor.

FIGURE 23.12 Uranium oxide, U3O8.
Nuclear reactors have rather elaborate cooling systems that absorb the heat given
off by the nuclear reaction and transfer it outside the reactor core, where it is used to
produce enough steam to drive an electric generator. In this respect, a nuclear power
plant is similar to a conventional power plant that burns fossil fuel. In both cases, large
quantities of cooling water are needed to condense steam for reuse. Thus, most nuclear
power plants are built near a river or a lake. Unfortunately this method of cooling
causes thermal pollution (see Section 12.4).

Heavy Water Reactors

Another type of nuclear reactor uses D2O, or heavy water, as the moderator, rather than
H2O. Deuterium absorbs neutrons much less efficiently than does ordinary hydrogen.
Since fewer neutrons are absorbed, the reactor is more efficient and does not require
enriched uranium. The fact that deuterium is a less efficient moderator has a negative
impact on the operation of the reactor, because more neutrons leak out of the reactor.
However, this is not a serious disadvantage.
The main advantage of a heavy water reactor is that it eliminates the need for
building expensive uranium enrichment facilities. However, D2O must be prepared by
either fractional distillation or electrolysis of ordinary water, which can be very expensive considering the amount of water used in a nuclear reactor. In countries where
hydroelectric power is abundant, the cost of producing D2O by electrolysis can be reasonably low. At present, Canada is the only nation successfully using heavy water nuclear reactors. The fact that no enriched uranium is required in a heavy water reactor
allows a country to enjoy the benefits of nuclear power without undertaking work that
is closely associated with weapons technology.

Breeder Reactors

A breeder reactor uses uranium fuel, but unlike a conventional nuclear reactor, it produces more fissionable materials than it uses.
We know that when uranium-238 is bombarded with fast neutrons, the following
reactions take place:
    ²³⁸U + ¹n → ²³⁹U
    ²³⁹U → ²³⁹Np + β⁻     (t½ = 23.4 min)
    ²³⁹Np → ²³⁹Pu + β⁻    (t½ = 2.35 days)

(Plutonium-239 forms plutonium oxide, which can be readily separated from uranium.)

In this manner the nonfissionable uranium-238 is transmuted into the fissionable isotope plutonium-239 (Figure 23.13).
In a typical breeder reactor, nuclear fuel containing uranium-235 or plutonium-239 is mixed with uranium-238 so that breeding takes place within the core. For every
uranium-235 (or plutonium-239) nucleus undergoing fission, more than one neutron is
captured by uranium-238 to generate plutonium-239. Thus, the stockpile of fissionable
material can be steadily increased as the starting nuclear fuels are consumed. It takes about 7 to 10 years to regenerate the sizable amount of material needed to refuel the
original reactor and to fuel another reactor of comparable size. This interval is called
the doubling time.
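The doubling time defined above can be turned into a rough growth estimate. The sketch below simply applies exponential doubling to an initial fissile inventory; the 8.5-year figure is just the midpoint of the 7-to-10-year range quoted in the text, and the starting inventory is hypothetical. The calculation is an idealization that ignores reprocessing losses and reactor downtime.

    # Idealized growth of a breeder reactor's fissile stockpile, assuming a fixed
    # doubling time (midpoint of the 7-10 yr range in the text). Illustrative only.

    initial_inventory_kg = 1000.0   # hypothetical starting fissile inventory
    doubling_time_yr = 8.5

    for years in (0, 10, 20, 30, 40):
        inventory = initial_inventory_kg * 2 ** (years / doubling_time_yr)
        print(f"After {years:2d} yr: about {inventory:7.0f} kg of fissionable material")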
Another fertile isotope is ²³²Th. Upon capturing slow neutrons, thorium is transmuted to uranium-233, which, like uranium-235, is a fissionable isotope:

    ²³²Th + ¹n → ²³³Th
    ²³³Th → ²³³Pa + β⁻    (t½ = 22 min)
    ²³³Pa → ²³³U + β⁻     (t½ = 27.4 days)

Uranium-233 is stable enough for long-term storage.

FIGURE 23.13 The red glow of the radioactive plutonium-239 isotope. The orange color is due to the presence of its oxide.
Although the amounts of uranium-238 and thorium-232 in Earth’s crust are relatively plentiful (4 ppm and 12 ppm by mass, respectively), the development of breeder
reactors has been very slow. To date, the United States does not have a single operating breeder reactor, and only a few have been built in other countries, such as France
and Russia. One problem is economics; breeder reactors are more expensive to build
than conventional reactors. There are also more technical difficulties associated with
the construction of such reactors. As a result, the future of breeder reactors, in the
United States at least, is rather uncertain.
Hazards of Nuclear Energy

(Photo: Molten glass is poured over nuclear waste before burial.)

Many people, including environmentalists, regard nuclear fission as a highly undesirable method of energy production. Many fission products such as strontium-90 are dangerous radioactive isotopes with long half-lives. Plutonium-239, used as a nuclear fuel
and produced in breeder reactors, is one of the most toxic substances known. It is an
alpha emitter with a half-life of 24,400 yr.
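To get a sense of what a 24,400-year half-life means in terms of activity, the sketch below estimates the number of alpha decays per second from one gram of plutonium-239. Avogadro's number and the seconds-per-year conversion are standard values assumed for the example; they are not given in the text.

    import math

    # Estimate the specific activity of Pu-239 (alpha decays per second per gram).
    HALF_LIFE_YR = 24_400
    SECONDS_PER_YEAR = 3.156e7
    AVOGADRO = 6.022e23
    MOLAR_MASS = 239.0  # g/mol

    decay_constant = math.log(2) / (HALF_LIFE_YR * SECONDS_PER_YEAR)  # per second
    atoms_per_gram = AVOGADRO / MOLAR_MASS
    activity = decay_constant * atoms_per_gram                        # decays/s per gram

    print(f"Decay constant: {decay_constant:.2e} per second")
    print(f"Activity of 1 g of Pu-239: {activity:.1e} alpha decays per second")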
Accidents, too, present many dangers. An accident at the Three Mile Island reactor in Pennsylvania in 1979 first brought the potential hazards of nuclear plants to public attention. In this instance very little radiation escaped the reactor, but the plant remained closed for more than a decade while repairs were made and safety issues
addressed. Only a few years later, on April 26, 1986, a reactor at the Chernobyl nuclear plant in Ukraine surged out of control. The fire and explosion that followed released much radioactive material into the environment. People working near the plant
died within weeks as a result of the exposure to the intense radiation. The long-term
effect of the radioactive fallout from this incident has not yet been clearly assessed, although agriculture and dairy farming were affected by the fallout. The number of potential cancer deaths attributable to the radiation contamination is estimated to be between a few thousand and more than 100,000.
In addition to the risk of accidents, the problem of radioactive waste disposal has
not been satisfactorily resolved even for safely operated nuclear plants. Many suggestions have been made as to where to store or dispose of nuclear waste, including burial underground, burial beneath the ocean floor, and storage in deep geologic formations. But none of these sites has proved absolutely safe in the long run. Leakage of
radioactive wastes into underground water, for example, can endanger nearby communities. The ideal disposal site would seem to be the sun, where a bit more radiation
would make little difference, but this kind of operation requires 100 percent reliability in space technology.
Because of the hazards, the future of nuclear reactors is clouded. What was once hailed as the ultimate solution to our energy needs in the twenty-first century is now being debated and questioned by both the scientific community and laypeople. It seems likely that the controversy will continue for some time.

Chemistry in Action: Nature's Own Fission Reactor
It all started with a routine analysis in May 1972 at
the nuclear fuel processing plant in Pierrelatte, France.
A staff member was checking the isotope ratio of
U-235 to U-238 in a uranium ore and obtained a puzzling result. It had long been known that the relative
natural occurrence of U-235 and U-238 is 0.7202
percent and 99.2798 percent, respectively. In this
case, however, the amount of U-235 present was only
0.7171 percent. This may seem like a very small deviation, but the measurements were so precise that this
difference was considered highly significant. The ore
had come from the Oklo mine in the Gabon Republic,
a small country on the west coast of Africa. Subsequent
analyses of other samples showed that some contained
even less U-235, in some cases as little as 0.44 percent.
The logical explanation for the low percentages
of U-235 was that a nuclear fission reaction at the
mine must have consumed some of the U-235 isotopes.
But how did this happen? There are several conditions
under which such a nuclear fission reaction could take
place. In the presence of heavy water, for example,
a chain reaction is possible with unenriched uranium.
Without heavy water, such a fission reaction could still
occur if the uranium ore and the moderator were
arranged according to some specific geometric constraints at the site of the reaction. Both of the possibilities seem rather farfetched. The most plausible explanation is that the uranium ore originally present in
the mine was enriched with U-235 and that a nuclear
fission reaction took place with light water, as in a
conventional nuclear reactor.
As mentioned earlier, the natural abundance of
U-235 is 0.7202 percent, but it has not always been
that low. The half-lives of U-235 and U-238 are 700
million and 4.51 billion years, respectively. This means
that U-235 must have been more abundant in the past,
because it has a shorter half-life. In fact, at the time
Earth was formed, the natural abundance of U-235
was as high as 17 percent! Since the lowest concentration of U-235 required for the operation of a fission reactor is 1 percent, a nuclear chain reaction
could have taken place as recently as 400 million
years ago. By analyzing the amounts of radioactive fission products left in the ore, scientists concluded that the Gabon "reactor" operated about 2 billion years ago.
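The claim that natural uranium was once enriched enough to sustain a chain reaction follows directly from the two half-lives quoted above. The sketch below runs the decay of both isotopes backward in time from today's 0.7202 percent abundance, using only the half-lives given in the essay. With these numbers the ore crosses the 1 percent threshold roughly 400 million years ago and stands near 4 percent at the time the Oklo deposit was active, consistent with the essay; the abundance at Earth's formation depends on the age assumed.

    import math

    # Back-calculate the natural abundance of U-235 at earlier times from the
    # half-lives quoted in the essay (U-235: 7.0e8 yr, U-238: 4.51e9 yr).
    LAMBDA_235 = math.log(2) / 7.0e8      # per year
    LAMBDA_238 = math.log(2) / 4.51e9     # per year
    RATIO_NOW = 0.7202 / 99.2798          # atom ratio U-235 : U-238 today

    def u235_abundance_percent(years_ago: float) -> float:
        """Percent U-235 (by atoms) at a given time in the past."""
        ratio_then = RATIO_NOW * math.exp((LAMBDA_235 - LAMBDA_238) * years_ago)
        return 100 * ratio_then / (1 + ratio_then)

    for t in (0, 4.0e8, 2.0e9):
        print(f"{t / 1e9:4.1f} billion years ago: U-235 was {u235_abundance_percent(t):.2f} percent")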
Having an enriched uranium sample is only one
of the requirements for starting a controlled chain reaction. There must also have been a sufficient amount
of the ore and an appropriate moderator present. It
appears that as a result of a geological transformation, uranium ore was continually being washed into
the Oklo region to yield concentrated deposits. The
moderator needed for the fission process was largely
water, present as water of crystallization in the sedimentary ore.
Thus, in a series of extraordinary events, a natural nuclear fission reactor operated at the time when
the first life forms appeared on Earth. As is often the
case in scientific endeavors, humans are not necessarily the innovators but merely the imitators of nature.

(Photo: the natural nuclear reactor site (lower right-hand corner) at Oklo, Gabon Republic.)

23.6 NUCLEAR FUSION

In contrast to the nuclear fission process, nuclear fusion, the combining of small nuclei into larger ones, is largely exempt from the waste disposal problem.
Figure 23.2 showed that for the lightest elements, nuclear stability increases with
increasing mass number. This behavior suggests that if two light nuclei combine or
fuse together to form a larger, more stable nucleus, an appreciable amount of energy
will be released in the process. This is the basis for ongoing research into the harnessing of nuclear fusion for the production of energy.
Nuclear fusion occurs constantly in the sun (Figure 23.14). The sun is made up
mostly of hydrogen and helium. In its interior, where temperatures reach about 15 million degrees Celsius, the following fusion reactions are believed to take place:

    ¹H + ²H → ³He
    ³He + ³He → ⁴He + 2 ¹H
    ¹H + ¹H → ²H + β⁺

Because fusion reactions take place only at very high temperatures, they are often called thermonuclear reactions.

FIGURE 23.14 Nuclear fusion keeps the temperature in the interior of the sun at about 15 million °C.
FUSION REACTORS

A major concern in choosing the proper nuclear fusion process for energy production is the temperature necessary to carry out the process. Some promising reactions are

    REACTION                      ENERGY RELEASED
    ²H + ²H → ³H + ¹H             6.3 × 10⁻¹³ J
    ²H + ³H → ⁴He + ¹n            2.8 × 10⁻¹² J
    ⁶Li + ²H → 2 ⁴He              3.6 × 10⁻¹² J

These reactions take place at extremely high temperatures, on the order of 100 million
degrees Celsius, to overcome the repulsive forces between the nuclei. The first reaction
is particularly attractive because the world’s supply of deuterium is virtually
inexhaustible. The total volume of water on Earth is about 1.5 × 10²¹ L. Since the natural abundance of deuterium is 1.5 × 10⁻² percent, the total amount of deuterium present is roughly 4.5 × 10²¹ g, or 5.0 × 10¹⁵ tons. The cost of preparing deuterium
is minimal compared with the value of the energy released by the reaction.
In contrast to the fission process, nuclear fusion looks like a very promising energy source, at least “on paper.” Although thermal pollution would be a problem, fusion has the following advantages: (1) The fuels are cheap and almost inexhaustible
and (2) the process produces little radioactive waste. If a fusion machine were turned
off, it would shut down completely and instantly, without any danger of a meltdown.
If nuclear fusion is so great, why isn’t there even one fusion reactor producing
energy? Although we command the scientific knowledge to design such a reactor, the
technical difficulties have not yet been solved. The basic problem is finding a way to
hold the nuclei together long enough, and at the appropriate temperature, for fusion to occur.

FIGURE 23.15 A magnetic plasma confinement design called tokamak. (Labels: plasma, magnet.)

At temperatures of about 100 million degrees Celsius, molecules cannot exist,
and most or all of the atoms are stripped of their electrons. This state of matter, a
gaseous mixture of positive ions and electrons, is called plasma. The problem of containing this plasma is a formidable one. What solid container can exist at such temperatures? None, unless the amount of plasma is small; but then the solid surface would
immediately cool the sample and quench the fusion reaction. One approach to solving
this problem is to use magnetic confinement. Since a plasma consists of charged particles moving at high speeds, a magnetic field will exert force on it. As Figure 23.15
shows, the plasma moves through a doughnut-shaped tunnel, confined by a complex
magnetic field. Thus the plasma never comes in contact with the walls of the container.
Another promising design employs high-power lasers to initiate the fusion reaction. In test runs a number of laser beams transfer energy to a small fuel pellet, heating it and causing it to implode, that is, to collapse inward from all sides and compress
into a small volume (Figure 23.16). Consequently, fusion occurs. Like the magnetic
confinement approach, laser fusion presents a number of technical difficulties that still
need to be overcome before it can be put to practical use on a large scale.
THE HYDROGEN BOMB

The technical problems inherent in the design of a nuclear fusion reactor do not affect
the production of a hydrogen bomb, also called a thermonuclear bomb. In this case the
objective is all power and no control. Hydrogen bombs do not contain gaseous hydrogen or gaseous deuterium; they contain solid lithium deuteride (LiD), which can be
packed very tightly. The detonation of a hydrogen bomb occurs in two stages—first a
fission reaction and then a fusion reaction. The required temperature for fusion is
achieved with an atomic bomb. Immediately after the atomic bomb explodes, the following fusion reactions occur, releasing vast amounts of energy (Figure 23.17):
    ⁶Li + ²H → 2 ⁴He
    ²H + ²H → ³H + ¹H

FIGURE 23.16 This small-scale
fusion reaction was created at
the Lawrence Livermore National
Laboratory using the world’s most
powerful laser, Nova.

There is no critical mass in a fusion bomb, and the force of the explosion is limited only by the quantity of reactants present. Thermonuclear bombs are described as being "cleaner" than atomic bombs because the only radioactive isotopes they produce are tritium, which is a weak β-particle emitter (t½ = 12.5 yr), and the products of the
fission starter. Their damaging effects on the environment can be aggravated, however,
by incorporating in the construction some nonfissionable material such as cobalt. Upon
bombardment by neutrons, cobalt-59 is converted to cobalt-60, which is a very strong
γ-ray emitter with a half-life of 5.2 yr. The presence of radioactive cobalt isotopes in
the debris or fallout from a thermonuclear explosion would be fatal to those who survived the initial blast.

FIGURE 23.17 Explosion of a thermonuclear bomb.

23.7 USES OF ISOTOPES

Radioactive and stable isotopes alike have many applications in science and medicine.
We have previously described the use of isotopes in the study of reaction mechanisms
(see Section 13.5) and in dating artifacts (p. 527 and Section 23.3). In this section we
will discuss a few more examples.
STRUCTURAL DETERMINATION

The formula of the thiosulfate ion is S2O3²⁻. For some years chemists were uncertain as to whether the two sulfur atoms occupied equivalent positions in the ion. The thiosulfate ion is prepared by treatment of the sulfite ion with elemental sulfur:

    SO3²⁻(aq) + S(s) → S2O3²⁻(aq)

When thiosulfate is treated with dilute acid, the reaction is reversed. The sulfite ion is re-formed and elemental sulfur precipitates:

    S2O3²⁻(aq) → SO3²⁻(aq) + S(s)     (23.2)

If this sequence is started with elemental sulfur enriched with the radioactive sulfur-35 isotope, the isotope acts as a "label" for S atoms. All the labels are found in the sulfur precipitate in Equation (23.2); none of them appears in the final sulfite ions. Clearly, then, the two atoms of sulfur in S2O3²⁻ are not structurally equivalent, as would be the case if the two sulfur atoms occupied equivalent positions in the structure. Otherwise, the radioactive isotope would be present in both the elemental sulfur precipitate and the sulfite ion. Based on spectroscopic studies, we now know that the structure of the thiosulfate ion is a tetrahedral arrangement with one sulfur atom at the center bonded to three oxygen atoms and to the second sulfur atom.

STUDY OF PHOTOSYNTHESIS

The study of photosynthesis is also rich with isotope applications. The overall photosynthesis reaction can be represented as
    6CO2 + 6H2O → C6H12O6 + 6O2

In Section 13.5 we learned that the ¹⁸O isotope was used to determine the source of
O2. The radioactive 14C isotope helped to determine the path of carbon in photosynthesis. Starting with 14CO2, it was possible to isolate the intermediate products during
photosynthesis and measure the amount of radioactivity of each carbon-containing compound. In this manner the path from CO2 through various intermediate compounds to
carbohydrate could be clearly charted. Isotopes, especially radioactive isotopes that
are used to trace the path of the atoms of an element in a chemical or biological
process, are called tracers.

ISOTOPES IN MEDICINE

(Technetium was the first artificially prepared element.)

Tracers are used also for diagnosis in medicine. Sodium-24 (a β emitter with a half-life of 14.8 h) injected into the bloodstream as a salt solution can be monitored to trace
the flow of blood and detect possible constrictions or obstructions in the circulatory
system. Iodine-131 (a β emitter with a half-life of 8 days) has been used to test the activity of the thyroid gland. A malfunctioning thyroid can be detected by giving the patient a drink of a solution containing a known amount of Na131I and measuring the radioactivity just above the thyroid to see if the iodine is absorbed at the normal rate. Of
course, the amounts of radioisotope used in the human body must always be kept small;
otherwise, the patient might suffer permanent damage from the high-energy radiation.
Another radioactive isotope of iodine, iodine-123 (a γ-ray emitter), is used to image
the brain (Figure 23.18).
Technetium is one of the most useful elements in nuclear medicine. Although technetium is a transition metal, all its isotopes are radioactive. Therefore, technetium does
not occur naturally on Earth. In the laboratory it is prepared by the nuclear reactions
    ⁹⁸Mo + ¹n → ⁹⁹Mo
    ⁹⁹Mo → ⁹⁹ᵐTc + β⁻

where the superscript m denotes that the technetium-99 isotope is produced in its excited nuclear state. This isotope has a half-life of about 6 hours, decaying by γ radiation to technetium-99 in its nuclear ground state. Thus it is a valuable diagnostic tool. The patient either drinks or is injected with a solution containing 99mTc. By detecting the γ rays emitted by 99mTc, doctors can obtain images of organs such as the heart,
liver, and lungs.
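The 6-hour half-life quoted above is short enough that very little of the isotope lingers in the patient, which is part of what makes 99mTc so convenient. A quick sketch, using only the half-life from the text, shows how fast the activity falls off.

    # Fraction of Tc-99m remaining at various times after administration,
    # using the ~6 h half-life quoted in the text.
    HALF_LIFE_H = 6.0

    for hours in (6, 12, 24, 48):
        remaining = 0.5 ** (hours / HALF_LIFE_H)
        print(f"After {hours:2d} h: {remaining * 100:5.2f} % of the original Tc-99m remains")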
A major advantage of using radioactive isotopes as tracers is that they are easy to
detect. Their presence even in very small amounts can be detected by photographic
techniques or by devices known as counters. Figure 23.19 is a diagram of a Geiger
counter, an instrument widely used in scientific work and medical laboratories to detect radiation.

FIGURE 23.18 A compound labeled with iodine-123 is used to image the brain. Left: A normal brain. Right: The brain of a patient with Alzheimer's disease.

FIGURE 23.19 Schematic diagram of a Geiger counter.
Radiation (α, β, or γ rays) entering through the window ionizes the argon gas to generate a small current flow between the electrodes. This current is amplified and is used to flash a light or operate a counter with a clicking sound. (Diagram labels: cathode, anode, insulator, window, argon gas, amplifier and counter, high voltage.)

23.8 BIOLOGICAL EFFECTS OF RADIATION

In this section we will examine briefly the effects of radiation on biological systems.
But first let us define quantitative measures of radiation. The fundamental unit of radioactivity is the curie (Ci); 1 Ci corresponds to exactly 3.70 × 10¹⁰ nuclear disintegrations per second. This decay rate is equivalent to that of 1 g of radium. A millicurie (mCi) is one-thousandth of a curie. Thus, 10 mCi of a carbon-14 sample is the quantity that undergoes

    (10 × 10⁻³)(3.70 × 10¹⁰) = 3.70 × 10⁸

disintegrations per second. The intensity of radiation depends on the number of disintegrations as well as on the energy and type of radiation emitted. One common unit for the absorbed dose of radiation is the rad (radiation absorbed dose), which is the amount of radiation that results in the absorption of 1 × 10⁻⁵ J per gram of irradiated material. The biological effect of radiation depends on the part of the body irradiated and the type of radiation. For this reason the rad is often multiplied by a factor called RBE (relative biological effectiveness). The product is called a rem (roentgen equivalent for man):

    1 rem = (1 rad)(1 RBE)

Of the three types of nuclear radiation, α particles usually have the least penetrating
power. Beta particles are more penetrating than α particles, but less so than γ rays. Gamma rays have very short wavelengths and high energies. Furthermore, since they carry no charge, they cannot be stopped by shielding materials as easily as α and β particles. However, if α or β emitters are ingested, their damaging effects are greatly aggravated because the organs will be constantly subject to damaging radiation at close range. For example, strontium-90, a β emitter, can replace calcium in bones, where it does the greatest damage.
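As a small worked illustration of the units just defined, the sketch below converts a source activity in millicuries to disintegrations per second and then applies the rem = rad × RBE relation for a couple of RBE values. The absorbed dose and the RBE figures (about 1 for gamma rays, about 20 for alpha particles) are typical values assumed for the example; they are not from the text.

    # Unit bookkeeping for the radiation quantities defined above.
    # The absorbed dose and RBE values here are assumed, illustrative figures.

    CURIE = 3.70e10          # disintegrations per second (definition of 1 Ci)

    activity_mci = 10.0      # a 10-mCi source, as in the text's carbon-14 example
    decays_per_second = activity_mci * 1e-3 * CURIE
    print(f"{activity_mci} mCi corresponds to {decays_per_second:.2e} disintegrations per second")

    absorbed_dose_rad = 2.0  # hypothetical absorbed dose
    for label, rbe in [("gamma rays (RBE ~ 1)", 1.0), ("alpha particles (RBE ~ 20)", 20.0)]:
        dose_rem = absorbed_dose_rad * rbe
        print(f"{absorbed_dose_rad} rad of {label}: {dose_rem:.0f} rem")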
Table 23.6 lists the average amounts of radiation an American receives every year.
It should be pointed out that for short-term exposures to radiation, a dosage of 50–200 rem will cause a decrease in white blood cell counts and other complications, while a dosage of 500 rem or greater may result in death within weeks. Current safety standards permit nuclear workers to be exposed to no more than 5 rem per year and specify a maximum of 0.5 rem of human-made radiation per year for the general public.

TABLE 23.6 Average Yearly Radiation Doses (mrem/yr)*: cosmic rays, 20–50; medical and dental X rays, 50–75; ground and surroundings, fallout from weapons tests, and the human body† also contribute; total, 133–188.
*1 mrem = 1 millirem = 1 × 10⁻³ rem.
†The radioactivity in the body comes from food and air.
The chemical basis of radiation damage is that of ionizing radiation. Radiation of
either particles or γ rays can remove electrons from atoms and molecules in its path,
leading to the formation of ions and radicals. Radicals (also called free radicals) are
molecular fragments having one or more unpaired electrons; they are usually short-lived and highly reactive. For example, when water is irradiated with γ rays, the following reactions take place:

    H2O → H2O⁺ + e⁻
    H2O⁺ + H2O → H3O⁺ + ·OH (hydroxyl radical)

The electron (in the hydrated form) can subsequently react with water or with a hydrogen ion to form atomic hydrogen, and with oxygen to produce the superoxide ion, O2⁻ (a radical):

    e⁻ + O2 → ·O2⁻

(Chromosomes are the parts of the cell that contain the genetic material, DNA.)

In the tissues the superoxide ions and other free radicals attack cell membranes and a
can themselves be directly ionized and destroyed by high-energy radiation.
It has long been known that exposure to high-energy radiation can induce cancer
in humans and other animals. Cancer is characterized by uncontrolled cellular growth.
On the other hand, it is also well established that cancer cells can be destroyed by
proper radiation treatment. In radiation therapy, a compromise is sought. The radiation
to which the patient is exposed must be sufficient to destroy cancer cells without killing
too many normal cells and, it is hoped, without inducing another form of cancer.
Radiation damage to living systems is generally classified as somatic or genetic.
Somatic injuries are those that affect the organism during its own lifetime. Sunburn,
skin rash, cancer, and cataracts are examples of somatic damage. Genetic damage means
inheritable changes or gene mutations. For example, a person whose chromosomes
have been damaged or altered by radiation may have deformed offspring.

Chemistry in Action: Food Irradiation
If you eat processed food, you have probably eaten
ingredients exposed to radioactive rays. In the United
States, up to 10 percent of herbs and spices are irradiated to control mold, zapped with X rays at a
dose equal to 60 million chest X rays. Although food
irradiation has been used in one way or another for
more than 40 years, it faces an uncertain future in this country.

Back in 1953 the U.S. Army started an experimental program of food irradiation so that deployed
troops could have fresh food without refrigeration. The
procedure is a simple one. Food is exposed to high
levels of radiation to kill insects and harmful bacteria.
It is then packaged in airtight containers, in which it
can be stored for months without deterioration. The
radiation sources for most food preservation are
cobalt-60 and cesium-137, both of which are γ emitters, although X rays and electron beams can also be
used to irradiate food.
The benefits of food irradiation are obvious — it
reduces energy demand by eliminating the need for
refrigeration, and it prolongs the shelf life of various
foods, which is of vital importance for poor countries.
Yet there is considerable opposition to this procedure.
First, there is a fear that irradiated food may itself become radioactive. No such evidence has been found.
A more serious objection is that irradiation can destroy the nutrients such as vitamins and amino acids.

(Photo: Strawberries irradiated at 200 kilorads (right) are still fresh after 15 days' storage at 4°C; those not irradiated are moldy.)
Furthermore, the ionizing radiation produces reactive
species, such as the hydroxyl radical, which then react with the organic molecules to produce potentially
harmful substances. Interestingly, the same effects are
produced when food is cooked by heat.

Food Irradiation Dosages and Their Effects†

Low dose (up to 100 kilorads):
  Inhibits sprouting of potatoes, onions, garlic.
  Inactivates trichinae in pork.
  Kills or prevents insects from reproducing in grains, fruits, and vegetables.

Medium dose (100–1000 kilorads):
  Delays spoilage of meat, poultry, and fish by killing spoilage microorganisms.
  Reduces salmonella and other food-borne pathogens in meat, fish, and poultry.
  Extends shelf life by delaying mold growth on strawberries and some other fruits.

High dose (1000 to 10,000 kilorads):
  Sterilizes meat, poultry, fish, and some other foods.
  Kills microorganisms and insects in spices and seasoning.

†Source: Chemical & Engineering News, May 5 (1986).

SUMMARY OF FACTS AND CONCEPTS

1. For stable nuclei of low atomic number, the neutron-to-proton ratio is close to 1. For
heavier stable nuclei, the ratio becomes greater than 1. All nuclei with 84 or more protons
are unstable and radioactive. Nuclei with even atomic numbers tend to have a greater
number of stable isotopes than those with odd atomic numbers.
2. Nuclear binding energy is a quantitative measure of nuclear stability. Nuclear binding energy can be calculated from a knowledge of the mass defect of the nucleus.
3. Radioactive nuclei emit α particles, β particles, positrons, or γ rays. The equation for a
nuclear reaction includes the particles emitted, and both the mass numbers and the atomic
numbers must balance.
4. Uranium-238 is the parent of a natural radioactive decay series that can be used to determine the ages of rocks.
5. Artificial radioactive elements are created by bombarding other elements with accelerated
neutrons, protons, or α particles.
6. Nuclear fission is the splitting of a large nucleus into two smaller nuclei and one or more
neutrons. When the free neutrons are captured efficiently by other nuclei, a chain reaction occurs.
7. Nuclear reactors use the heat from a controlled nuclear fission reaction to produce power.
The three important types of reactors are light water reactors, heavy water reactors, and breeder reactors.
8. Nuclear fusion, the type of reaction that occurs in the sun, is the combination of two light
nuclei to form one heavy nucleus. Fusion takes place only at very high temperatures, so
high that controlled large-scale nuclear fusion has so far not been achieved.
9. Radioactive isotopes are easy to detect and thus make excellent tracers in chemical reactions and in medical practice.
10. High-energy radiation damages living systems by causing ionization and the formation of
free radicals.

KEY EQUATION

    E = (Δm)c²     (23.1)     Relation between mass defect and energy released.
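Equation (23.1) is easy to apply numerically. The sketch below uses it to estimate the nuclear binding energy of deuterium; the proton (hydrogen-1), neutron, and deuterium masses and the amu-to-kilogram conversion are standard values assumed for the example, not figures from this summary.

    # Nuclear binding energy of deuterium (H-2) from the mass defect, Eq. (23.1).
    # Masses are standard atomic masses in amu (assumed, not from the text).

    AMU_TO_KG = 1.6605e-27
    C = 2.998e8                # speed of light, m/s

    m_H1 = 1.007825            # hydrogen-1 atom
    m_n = 1.008665             # neutron
    m_H2 = 2.014102            # deuterium atom

    mass_defect_amu = (m_H1 + m_n) - m_H2
    delta_E = mass_defect_amu * AMU_TO_KG * C**2     # joules per nucleus

    print(f"Mass defect: {mass_defect_amu:.6f} amu")
    print(f"Binding energy: {delta_E:.3e} J per nucleus "
          f"({delta_E / 2:.3e} J per nucleon)")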
KEY WORDS

Breeder reactor, p. 920 • Critical mass, p. 917 • Mass defect, p. 908 • Moderators, p. 918 • Nuclear binding energy, p. 908 • Nuclear chain reaction, p. 917 • Nuclear fission, p. 916 • Nuclear fusion, p. 923 • Nuclear transmutation, p. 904 • Plasma, p. 924 • Positron, p. 905 • Radical, p. 929 • Radioactive decay series, p. 911 • Thermonuclear reaction, p. 923 • Tracer, p. 926 • Transuranium elements, p. 916

QUESTIONS AND PROBLEMS
Review Questions

23.1 How do nuclear reactions differ from ordinary chemical reactions?
23.2 What are the steps in balancing nuclear equations?
23.3 What is the difference between ⁰₋₁e and ⁰₋₁β?
23.4 What is the difference between an electron and a positron?
Problems

23.5 Complete the following nuclear equations and identify X in each case:
    (a) ²⁶Mg + ¹p → ⁴α + X
    (b) ⁵⁹Co + ²H → ⁶⁰Co + X
    (c) ²³⁵U + ¹n → ⁹⁴Kr + ¹³⁹Ba + 3X
    (d) ⁵³Cr + ⁴α → ¹n + X
    (e) ²⁰O → ²⁰F + X

23.6 Complete the following nuclear equations and identify X in each case:
    (a) ¹³⁵I → ¹³⁵Xe + X
    (b) ⁴⁰K → β⁻ + X
    (c) ⁵⁹Co + ¹n → ⁵⁶Mn + X
    (d) ²³⁵U + ¹n → ⁹⁹Sr + ¹³⁵Te + 2X

Problems

23.23 Fill in the blanks in the following radioactive decay
series: NUCLEAR STABILITY
Review Questions (a) 23.7 State the general rules for predicting nuclear stability.
23.8 What is the belt of stability?
23.9 Why is it impossible for the isotope 2 He to exist?
23.10 Define nuclear binding energy, mass defect, and nucleon.
23.11 How does Einstein’s equation, E mc2, allow us to
calculate nuclear binding energy?
23.12 Why is it preferable to use nuclear binding energy
per nucleon for a comparison of the stabilities of different nuclei? (b) H(g) H(g) 88n H2(g) H° 436.4 kJ calculate the change in mass (in kg) per mole of H2
23.18 Estimates show that the total energy output of the sun
is 5 1026 J/s. What is the corresponding mass loss
in kg/s of the sun?
23.19 Calculate the nuclear binding energy (in J) and the
binding energy per nucleon of the following isotopes:
(a) 7 Li (7.01600 amu) and (b) 35 Cl (34.95952 amu).
23.20 Calculate the nuclear binding energy (in J) and the
binding energy per nucleon of the following isotopes:
(a) ⁴He (4.0026 amu) and (b) ¹⁸⁴W (183.9510 amu).

    (a) ²³²Th → _____ → _____ → ²²⁸Th
    (b) ²³⁵U → _____ → _____ → ²²⁷Ac
    (c) _____ → ²³³Pa → _____ → _____
TIME (DAYS) 23.25 23.26 23.27
23.28 23.29 23.30 MASS (g) 0
6 Problems 23.13 The radius of a uranium-235 nucleus is about 7.0
10 3 pm. Calculate the density of the nucleus in
g/cm3. (Assume the atomic mass is 235 amu.)
23.14 For each pair of isotopes listed, predict which one is
less stable: (a) 6 Li or 9 Li, (b) 23 Na or 25 Na,
(c) 48 Ca or 48 Sc.
23.15 For each pair of elements listed, predict which one
has more stable isotopes: (a) Co or Ni, (b) F or Se,
(c) Ag or Cd.
23.16 In each pair of isotopes shown, indicate which one
you would expect to be radioactive: (a) 20 Ne and
10 Ne, (b) 20 Ca and 20 Ca, (c) 42 Mo and 43 Tc,
(d) 80 Hg and 80 Hg, (e) 83 Bi and 96 Cm.
23.17 Given that 232 500
112 Calculate the first-order decay constant and the halflife of the reaction.
The radioactive decay of T1-206 to Pb-206 has a halflife of 4.20 min. Starting with 5.00 1022 atoms of
T1-206, calculate the number of such atoms left after 42.0 min.
A freshly isolated sample of 90Y was found to have
an activity of 9.8 105 disintegrations per minute at
1:00 P.M. on December 3, 1992. At 2:15 P.M. on
December 17, 1992, its activity was redetermined and
found to be 2.6 104 disintegrations per minute.
Calculate the half-life of 90Y.
Why do radioactive decay series obey first-order kinetics?
In the thorium decay series, thorium-232 loses a total of 6 particles and 4 particles in a 10-stage
process. What is the final isotope produced?
Strontium-90 is one of the products of the fission of
uranium-235. This strontium isotope is radioactive,
with a half-life of 28.1 yr. Calculate how long (in yr)
it will take for 1.00 g of the isotope to be reduced to
0.200 g by decay.
Consider the decay series
A 88n B 88n C 88n D NATURAL RADIOACTIVITY
Review Questions 23.21 Discuss factors that lead to nuclear decay.
23.22 Outline the principle for dating materials using radioactive isotopes. Back Forward Main Menu TOC where A, B, and C are radioactive isotopes with halflives of 4.50 s, 15.0 days, and 1.00 s, respectively,
and D is nonradioactive. Starting with 1.00 mole of
A, and none of B, C, or D, calculate the number of
moles of A, B, C, and D left after 30 days.

NUCLEAR TRANSMUTATION

Review Questions

23.31 What is the difference between radioactive decay and
23.32 How is nuclear transmutation achieved in practice?

USES OF ISOTOPES

Problems

23.47 Describe how you would use a radioactive iodine isotope to demonstrate that the following process is in dynamic equilibrium:

    PbI2(s) ⇌ Pb²⁺(aq) + 2I⁻(aq)

Problems

23.33 Write balanced nuclear equations for the following reactions and identify X:
    (a) X(p, α)¹²C, (b) ²⁷Al(d, α)X, (c) ⁵⁵Mn(n, γ)X
23.34 Write balanced nuclear equations for the following
reactions and identify X:
    (a) ⁸⁰Se(d, p)X, (b) X(d, 2p)⁹Li, (c) ¹⁰B(n, α)X
23.35 Describe how you would prepare astatine-211, starting with bismuth-209.
23.36 A long-cherished dream of alchemists was to produce
gold from cheaper and more abundant elements. This
dream was finally realized when 198Hg was converted
into gold by neutron bombardment. Write a balanced
equation for this reaction.
Review Questions 23.37 Define nuclear fission, nuclear chain reaction, and
23.38 Which isotopes can undergo nuclear fission?
23.39 Explain how an atomic bomb works.
23.40 Explain the functions of a moderator and a control
rod in a nuclear reactor.
23.41 Discuss the differences between a light water and a
heavy water nuclear fission reactor. What are the advantages of a breeder reactor over a conventional nuclear fission reactor?
23.42 No form of energy production is without risk. Make
a list of the risks to society involved in fueling and
operating a conventional coal-fired electric power
plant, and compare them with the risks of fueling and
operating a nuclear fission-powered electric plant.
Review Questions 23.43 Define nuclear fusion, thermonuclear reaction, and
23.44 Why do heavy elements such as uranium undergo fission while light elements such as hydrogen and
lithium undergo fusion?
23.45 How does a hydrogen bomb work?
23.46 What are the advantages of a fusion reactor over a
fission reactor? What are the practical difficulties in
operating a large-scale fusion reactor?

23.48 Consider the following redox reaction:

    IO4⁻(aq) + 2I⁻(aq) + H2O(l) → I2(s) + IO3⁻(aq) + 2OH⁻(aq)

When KIO4 is added to a solution containing iodide
ions labeled with radioactive iodine-128, all the radioactivity appears in I2 and none in the IO3 ion.
What can you deduce about the mechanism for the
23.49 Explain how you might use a radioactive tracer to
show that ions are not completely motionless in crystals.
23.50 Each molecule of hemoglobin, the oxygen carrier in
blood, contains four Fe atoms. Explain how you
would use the radioactive 59 Fe (t 1 46 days) to show
that the iron in a certain food is converted into hemoglobin.
ADDITIONAL PROBLEMS 23.51 How does a Geiger counter work?
23.52 Nuclei with an even number of protons and an even
number of neutrons are more stable than those with
an odd number of protons and/or an odd number of
neutrons. What is the significance of the even numbers of protons and neutrons in this case?
23.53 Tritium, 3H, is radioactive and decays by electron
emission. Its half-life is 12.5 yr. In ordinary water the
ratio of 1H to 3H atoms is 1.0 1017 to 1.
(a) Write a balanced nuclear equation for tritium decay. (b) How many disintegrations will be observed
per minute in a 1.00-kg sample of water?
23.54 (a) What is the activity, in millicuries, of a 0.500-g
sample of 237 Np? (This isotope decays by -particle
emission and has a half-life of 2.20 106 yr.) (b)
Write a balanced nuclear equation for the decay of
23.55 The following equations are for nuclear reactions that
are known to occur in the explosion of an atomic
bomb. Identify X.
    (a) ²³⁵U + ¹n → ¹⁴⁰Ba + 3 ¹n + X
    (b) ²³⁵U + ¹n → ¹⁴⁴Cs + ⁹⁰Rb + 2X
    (c) ²³⁵U + ¹n → ⁸⁷Br + 3 ¹n + X
    (d) ²³⁵U + ¹n → ¹⁶⁰Sm + ⁷²Zn + 4X
23.56 Calculate the nuclear binding energies, in J/nucleon,
for the following species: (a) 10B (10.0129 amu), Study Guide TOC Textbook Website MHHE Website 934 NUCLEAR CHEMISTRY 23.57 23.58 23.59
23.61 23.62 23.63 23.64 23.65 23.66 23.67 Back (b) 11B (11.00931 amu), (c) 14N (14.00307 amu), (d)
Fe (55.9349 amu).
Write complete nuclear equations for the following
processes: (a) tritium, ³H, undergoes β decay; (b) ²⁴²Pu undergoes α-particle emission; (c) ¹³¹I undergoes β decay; (d) ²⁵¹Cf emits an α particle.
The nucleus of nitrogen-18 lies above the stability
belt. Write an equation for a nuclear reaction by
which nitrogen-18 can achieve stability.
Why is strontium-90 a particularly dangerous isotope
How are scientists able to tell the age of a fossil?
After the Chernobyl accident, people living close to
the nuclear reactor site were urged to take large
amounts of potassium iodide as a safety precaution.
What is the chemical basis for this action?
Astatine, the last member of Group 7A, can be prepared by bombarding bismuth-209 with α particles.
(a) Write an equation for the reaction. (b) Represent
the equation in the abbreviated form as discussed in
To detect bombs that may be smuggled onto airplanes, the Federal Aviation Administration (FAA)
will soon require all major airports in the United
States to install thermal neutron analyzers. The thermal neutron analyzer will bombard baggage with
low-energy neutrons, converting some of the nitrogen-14 nuclei to nitrogen-15, with simultaneous
emission of rays. Because nitrogen content is usually high in explosives, detection of a high dosage of
rays will suggest that a bomb may be present.
(a) Write an equation for the nuclear process.
(b) Compare this technique with the conventional Xray detection method.
Explain why achievement of nuclear fusion in the laboratory requires a temperature of about 100 million
degrees Celsius, which is much higher than that in
the interior of the sun (15 million degrees Celsius).
Tritium contains one proton and two neutrons. There
is no proton-proton repulsion present in the nucleus.
Why, then, is tritium radioactive?
The carbon-14 decay rate of a sample obtained from
a young tree is 0.260 disintegration per second per
gram of the sample. Another wood sample prepared
from an object recovered at an archaeological excavation gives a decay rate of 0.186 disintegration per
second per gram of the sample. What is the age of
The usefulness of radiocarbon dating is limited to objects no older than 50,000 years. What percent of the
carbon-14, originally present in the sample, remains
after this period of time? Forward Main Menu TOC 23.68 The radioactive potassium-40 isotope decays to argon-40 with a half-life of 1.2 109 yr. (a) Write a
balanced equation for the reaction. (b) A sample of
moon rock is found to contain 18 percent potassium40 and 82 percent argon by mass. Calculate the age
of the rock in years.
23.69 Both barium (Ba) and radium (Ra) are members of
Group 2A and are expected to exhibit similar chemical properties. However, Ra is not found in barium
ores. Instead, it is found in uranium ores. Explain.
23.70 Nuclear waste disposal is one of the major concerns
of the nuclear industry. In choosing a safe and stable
environment to store nuclear wastes, consideration
must be given to the heat released during nuclear dedecay of 90Sr
cay. As an example, consider the
38Sr The 90 88n 90Y
39 23.72 23.73 23.74 23.75 t1
2 28.1 yr Y (89.907152 amu) further decays as follows:
39Y 23.71 0
1 88n 90Zr
2 64 h Zirconium-90 (89.904703 amu) is a stable isotope.
(a) Use the mass defect to calculate the energy released (in joules) in each of the above two decays.
(The mass of the electron is 5.4857 10 4 amu.) (b)
Starting with one mole of 90Sr, calculate the number
of moles of 90Sr that will decay in a year. (c) Calculate
the amount of heat released (in kilojoules) corresponding to the number of moles of 90Sr decayed to
Zr in (b).
Which of the following poses a greater health hazard: A radioactive isotope with a short half-life or a
radioactive isotope with a long half-life? Explain.
[Assume same type of radiation ( or ) and comparable energetics per particle emitted.]
As a result of being exposed to the radiation released
during the Chernobyl nuclear accident, the dose of
iodine-131 in a person’s body is 7.4 mC (1 mC
N to cal1 10 3 Ci). Use the relationship rate
culate the number of atoms of iodine-131 this radioactivity corresponds. (The half-life of I-131 is
Referring to the Chemistry in Action essay on p. 930,
why is it highly unlikely that irradiated food would
From the definition of curie, calculate Avogadro’s
number. Given that the molar mass of Ra-226 is
226.03 g/mol and that it decays with a half life of
1.6 103 yr.
Since 1994, elements 110, 111, and 112 have been
synthesized. Element 110 was created by bombarding 208Pb with 62Ni; element 111 was created by bom- Study Guide TOC Textbook Website MHHE Website QUESTIONS AND PROBLEMS 23.76 23.77 23.78 23.79 23.80 Back barding 209Bi with 64Ni; and element 112 was created by bombarding 208Pb with 66Zn. Write an equation for each synthesis. Predict the chemical properties of these elements. Use X for element 110, Y for
element 111, and Z for element 112.
Sources of energy on Earth include fossil fuels, geothermal, gravitational, hydroelectric, nuclear fission,
nuclear fusion, solar, wind. Which of these have a
“nuclear origin,” either directly or indirectly?
A person received an anonymous gift of a decorative
cube which he placed on his desk. A few months later
he became ill and died shortly afterward. After investigation, the cause of his death was linked to the
box. The box was air-tight and had no toxic chemicals on it. What might have killed the man?
Identify two of the most abundant radioactive elements that exist on Earth. Explain why they are still
present? (You may need to consult a handbook of
(a) Calculate the energy released when an U-238 isotope decays to Th-234. The atomic masses are given
by: U-238: 238.0508 amu; Th-234: 234.0436 amu;
He-4: 4.0026 amu. (b) The energy released in (a) is
transformed into the kinetic energy of the recoiling
Th-234 nucleus and the particle. Which of the two
will move away faster? Explain.
Cobalt-60 is an isotope used in diagnostic medicine
and cancer treatment. It decays with ray emission.
Calculate the wavelength of the radiation in nanometers if the energy of the ray is 2.4 10 13 J/photon. Forward Main Menu TOC 935 23.81 Am-241 is used in smoke detectors because it has a
long half-life (458 yr) and its emitted particles are
energetic enough to ionize air molecules. Given the
schematic diagram of a smoke detector below, explain how it works.
Current 241Am Battery 23.82 The constituents of wine contain, among others, carbon, hydrogen, and oxygen atoms. A bottle of wine
was sealed about 6 years ago. To confirm its age,
which of the isotopes would you choose in a radioactive dating study? The half-lives of the isotopes
are: ¹⁴C: 5730 yr; ¹⁵O: 124 s; ³H: 12.5 yr. Assume
that the activities of the isotopes were known at the
time the bottle was sealed.
23.83 Name two advantages of a nuclear-powered submarine over a conventional submarine.
23.1 78 Se. 23.2 2.63
J/nucleon. 23.3 106 Pd 4 88n
2 Answers to Practice Exercises: 10 10 J; 1.26
1 p. Study Guide TOC 10 12 Textbook Website MHHE Website C HEMICAL M YSTERY The Art Forgery of the Century H an van Meegeren must be one of the few forgers ever to welcome technical analysis of his work. In 1945 he was captured by the Dutch police and accused of selling a painting by the Dutch artist Jan Vermeer
(1632 – 1675) to Nazi Germany. This was a crime punishable by death.
Van Meegeren claimed that not only was the painting in question, entitled The Woman Taken in Adultery, a forgery, but he had also produced other
To prove his innocence, van Meegeren created another Vermeer to demonstrate
his skill at imitating the Dutch master. He was acquitted of charges of collaboration
with the enemy, but was convicted of forgery. He died of a heart attack before he could
serve the one-year sentence. For twenty years after van Meegeren’s death art scholars
debated whether at least one of his alleged works, Christ and His Disciples at Emmaus,
was a fake or a real Vermeer. The mystery was solved in 1968 using a radiochemical technique.

White lead, lead hydroxy carbonate [Pb3(OH)2(CO3)2], is a pigment used by
artists for centuries. The metal in the compound is extracted from its ore, galena (PbS),
which contains uranium and its daughter products in radioactive equilibrium with it.
By radioactive equilibrium we mean that a particular isotope along the decay series is
formed from its precursor as fast as it breaks down by decay, and so its concentration
(and its radioactivity) remains constant with time. This radioactive equilibrium is disturbed in the chemical extraction of lead from its ore. Two isotopes in the uranium decay series are of particular importance in this process: 226Ra (t 1 1600 yr) and 210Pb
(t 1 21 yr). (See Table 23.3.) Most 226Ra is removed during the extraction of lead
from its ore, but 210Pb eventually ends up in the white lead, along with the stable isotope of lead (206Pb). No longer supported by its relatively long-lived ancestor, 226Ra,
Pb begins to decay without replenishment. This process continues until the 210Pb
activity is once more in equilibrium with the much smaller quantity of 226Ra that survived the separation process. Assuming the concentration ratio of 210Pb to 226Ra is
100:1 in the sample after extraction, it would take 270 years to reestablish radioactive
equilibrium for 210Pb.
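One way to see where the 270-year figure comes from is to convert the 100:1 concentration ratio into an activity ratio (activities scale as N divided by the half-life) and then count how many 21-year half-lives the unsupported 210Pb excess needs to die away to the level sustained by the remaining 226Ra. The sketch below does exactly that; it treats the 226Ra activity as constant over the interval, a reasonable approximation given its 1600-year half-life.

    import math

    # Rough check of the ~270-yr re-equilibration time for Pb-210 in white lead.
    HALF_LIFE_PB210 = 21.0     # yr
    HALF_LIFE_RA226 = 1600.0   # yr
    CONC_RATIO = 100.0         # N(Pb-210) : N(Ra-226) just after extraction

    # Activity is proportional to N / t_half, so the initial activity ratio is:
    activity_ratio = CONC_RATIO * (HALF_LIFE_RA226 / HALF_LIFE_PB210)   # ~7600

    # The unsupported excess of Pb-210 activity must decay by that factor
    # before Pb-210 is again controlled by (in equilibrium with) Ra-226.
    years = HALF_LIFE_PB210 * math.log2(activity_ratio)
    print(f"Initial activity ratio Pb-210 : Ra-226 is about {activity_ratio:.0f}")
    print(f"Time to approach radioactive equilibrium is about {years:.0f} yr")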
If Vermeer did paint Emmaus around the mid-seventeenth century, the radioactive
equilibrium would have been restored in the white lead pigment by 1960. But this was
not the case. Radiochemical analysis showed that the paint used was less than one hundred years old. Therefore, the painting could not have been the work of Vermeer.

(Photo: "Christ and His Disciples at Emmaus," a painting attributed to Han van Meegeren.)

CHEMICAL CLUES

1. Write equations for the decay of 226Ra and 210Pb.

2. Consider the following consecutive decay series:
    A → B → C

where A and B are radioactive isotopes and C is a stable isotope. Given that the
half-life of A is 100 times that of B, plot the concentrations of all three species versus time on the same graph. If only A was present initially, which species would
reach radioactive equilibrium?
3. The radioactive decay rates for 210Pb and 226Ra in white lead paint taken from
Emmaus in 1968 were 8.5 and 0.8 disintegrations per minute per gram of lead
(dpm/g), respectively. (a) How many half lives of 210Pb had elapsed between 1660
and 1968? (b) If Vermeer had painted Emmaus, what would have been the decay
rate of 210Pb in 1660? Comment on the reasonableness of this rate value.
4. To make his forgeries look authentic, van Meegeren re-used canvases of old paintings. He rolled one of his paintings to create cracks in the paint to resemble old
works. X-ray examination of this painting showed not only the underlying painting, but also the cracks in it. How did this discovery reveal to the scientists that
the painting on top was of a more recent origin? 937 Back Forward Main Menu TOC Study Guide TOC Textbook Website MHHE Website ...
Unlike innate immune responses, which remain essentially unchanged upon exposure to a recurrent challenge with the same stimulus, adaptive immune cells possess the ability to learn and remember. Adaptive immunity relies on the capacity of immune cells to distinguish between the body's own cells and unwanted invaders. Healthy cells present peptides from normal cellular proteins on their cell surface, and lymphocytes will not be activated in response to them. When a cell is infected by viruses or other pathogens, the foreign peptides generated are recognized by lymphocytes that are activated and destroy the infected cell.
Viruses employ different strategies to inhibit presentation of viral-derived peptides. One way consists of the modulation of proteasome activity which generate the peptides from full-length proteins. Some viruses also directly interact with and inhibit the machinery responsible for the generation and transport of loaded MHC molecules. For instance, TAP and tapasin are common targets of viral proteins. |
Key Hearing Proteins Identified
Researchers have found what appear to be 2 key components of the long-sought-after mechanotransduction channel in the inner ear—the place where sound waves are transformed into the electrical signals that the brain recognizes as sound.
Sensory cells in the inner ear called hair cells are crucial for transforming sound into electrical signals. Hair cells also underlie our sense of balance. Sitting atop hair cells are tiny bristly structures called stereocilia. Microscopic tethers connect the tips of shorter stereocilia to the sides of adjacent taller stereocilia. Most scientists believe that as the stereocilia move, the tethers open ion channels—tiny openings in the cell that let electrically charged molecules (ions) pass in and out. The ions rushing inside begin an electrical signal that travels to the brain.
While researchers have gained many insights into mechanotransduction, the ion channels involved have remained elusive. A team of researchers led by Dr. Andrew J. Griffith of NIH's National Institute on Deafness and Other Communication Disorders (NIDCD) and Dr. Jeffrey R. Holt of Harvard Medical School decided to focus on 2 proteins. Griffith and other collaborators had previously found that mutations in the TMC1 gene cause hereditary deafness in both humans and mice. The TMC1 protein sequence suggests that it could span the cell's outer membrane and act as a channel. Another protein, TMC2, has a similar structure. The scientists deleted both genes in mice. Their findings appeared on December 1, 2011, in the Journal of Clinical Investigation.
Mice with no functional copies of TMC1 or TMC2 had the classic behaviors of dizzy mice—head bobbing, neck arching, unstable gait and circling movements. They were also deaf. The TMC1 deficient mice were deaf as well, but had no balance issues. Mice without TMC2 had no problems with hearing or balance.
The scientists examined when the TMC1 and TMC2 genes are expressed (turned on) in the inner ears of mice. The 2 genes were expressed from birth in hair cells in both the cochlea, which is responsible for hearing, and the vestibular organs, which are responsible for balance. When mice were a week old, TMC2 appeared to be turned off in the cochlea but not in the vestibular organs. TMC1 continued to be expressed in mature cochlear hair cells. Taken together, these findings suggest that TMC1 is essential for hearing, but TMC2 is not. For balance, however, TMC2 can substitute for TMC1.
In laboratory tests, hair cells lacking functional TMC1 or TMC2 had no detectable mechanotransduction currents, even though the rest of the cells’ structure and function appeared normal. By using a gene therapy technique that adds proteins back into cells, the researchers were able to restore transduction to both vestibular and cochlear hair cells. This finding suggests that it might be possible to reverse these genetic deficits.
The researchers found that TMC1 and TMC2 cluster at the tips of the stereocilia, where one might expect to see proteins that play a prominent role in mechanotransduction. In future work, the scientists intend to explore how TMC1 and TMC2 interact with each other as well as with other known proteins at the stereocilia tip.
* The above story is reprinted from materials provided by National Institutes of Health (NIH)
** The National Institutes of Health (NIH) , a part of the U.S. Department of Health and Human Services, is the nation’s medical research agency—making important discoveries that improve health and save lives. The National Institutes of Health is made up of 27 different components called Institutes and Centers. Each has its own specific research agenda. All but three of these components receive their funding directly from Congress, and administrate their own budgets. |
Fighting The Common Cold: A World War II poster from the Office for Emergency Management's War Production Board in the United States offers some good advice about reducing the threat of catching a cold.
Photo Credit: Office for Emergency Management's War Production Board
An article by Beth Mole in Nature News confirms what many mothers have been saying for years, namely, that cold winter temperatures create better conditions for humans to catch the common cold or, more specifically, rhinoviruses.
A team from Yale University in New Haven, Connecticut, found that low temperatures dampen natural defences against rhinoviruses, the leading causes of seasonal colds, in mice and in human airway cells. “What we show here is a temperature-dependent interaction between the host and the virus,” says team leader Ellen Foxman, who presented the data on 19 May at a conference of the American Society for Microbiology.
Colds are most common in winter, and researchers have known for decades that many rhinoviruses thrive in low temperatures: they replicate better in the upper respiratory tract than in the warmer environment of the lungs. But efforts to link the viruses’ apparent temperature preference to seasonal fluctuations in the incidence of colds have produced mixed results.
In 2005, for example, researchers at Cardiff University, UK, dunked healthy people’s feet into icy water to show that exposure to cold could cause an upper-respiratory infection [1]. But they could not explain why that was the case. Other studies have found no connection between temperature and rates of infection [2].
Cold versus warm
In an attempt to solve the cold conundrum, Foxman and her colleagues studied mice susceptible to a mouse-specific rhinovirus. They discovered that at warmer temperatures, animals infected with the rhinovirus produced a burst of antiviral immune signals, which activated natural defenses that fought off the virus. But at cooler temperatures, the mice produced fewer antiviral signals and the infection could persist.
The researchers then grew human airway cells in the lab under both cold and warm conditions and infected them with a different rhinovirus that thrives in people. They found that warm infected cells were more likely than cold ones to undergo programmed cell death—cell suicide brought on by immune responses aimed at limiting the spread of infections.
Foxman says that the data suggest that these temperature-dependent immune reactions help to explain rhinoviruses' success at lower temperatures, and explain why winter is the season for colds. As temperatures drop outside, humans breathe in colder air that chills their upper airways just enough to allow rhinoviruses to flourish, she says.

That mothers were right is an interesting finding; that science can show why is even more interesting. This shows that it is always important to dress warmly in cold weather, paying particular attention to the upper body. Mom will be proud of you.
You can read the rest of the article at [Nature News] |
Moore's Law makes things useful. By increasing the number of transistors on integrated circuits to several billions and reducing their size to mere nanometers, engineers can produce ever-faster microprocessors that are the same size as, or even smaller than, the ones in today's computers. At the same time, Moore's Law increases efficiency and reduces costs of production. This consistent improvement in processing speed and memory capacity has paved the way for numerous advances -- without it, we wouldn't have technologies such as more pixels on high-definition televisions and digital cameras. Intel's Web site lists some of the more impressive things electronics might achieve in the future with the help of Moore's Law, including facial recognition software and real-time language translation [source: Intel].
While some expect Moore's Law to continue for at least another decade and others -- especially Intel -- think it will hold true for much longer, some have questioned if the statement will continue to matter. Piling transistors onto computer chips, according to critics, doesn't really matter in the end. One of the most prominent critics of Moore's Law is Niklaus Wirth, a prominent Swiss computer scientist who introduced his own "law" as a sort of counterproposal.
Wirth, born in 1934, is an expert in software engineering and is known for developing the programming language Pascal and other notable computer languages during the 1960s and 1970s. He is an important voice on computer engineering topics, and in 1995 he published a paper titled "A Plea for Lean Software." In it, Wirth called attention to two statements, with tongue slightly in cheek. This is what he said:
- Software expands to fill the available memory.
- Software is getting slower more rapidly than hardware becomes faster.
Both statements are important in Wirth's thinking, but it's the second statement that we associate with Wirth's Law. In the paper, Wirth actually attributes the sentence to Martin Reiser, so the popular statement we know as Wirth's Law is really a paraphrasing of something Reiser supposedly said at one point. Ironically, Reiser felt he had nothing to do with the idea whatsoever, saying: "It is not the first time I am accused of having said something that I cannot remember having said -- and most likely never have said" [source: IEEE].
So what do these two statements, and especially the second one, have to say about computer engineering? |
Photo: Steve Hunnisett (flickr)
Today, a fever is an uncomfortable nuisance, but a hundred-plus years ago, fevers were often fatal. The difference between then and now is the class of drugs known as antibiotics.
What Are Antibiotics?
As the name implies, “anti-biotics” work “against life,” or, more specifically, against living cells.
While other drugs, such as aspirin, ease the symptoms of a disease, antibiotics attack the living bacteria that are causing the symptoms.
History of Antibiotics
The modern discovery of antibiotics is usually attributed to Alexander Fleming, who was the first to isolate and name “penicillin.”
But the basis for Fleming’s work had begun over fifty years before. In 1874, the British scientist William Roberts noticed that some fungi resisted contamination by bacteria.
Later on, the French scientist Louis Pasteur noticed that bacteria stopped growing if they became infected with a microscopic fungus called “penicillium.”
Bacteria And Penicillin
In 1928, Alexander Fleming was studying the bacterium, Staphylococcus, when some of the bacteria became contaminated with Penicillium fungus and stopped growing.
Fleming decided that some chemical produced by the Penicillium fungus must be stopping the growth of the Staphylococcus bacteria. Fleming isolated that chemical and called it “penicillin.”
Fleming’s own attempts to treat patients with penicillin were not very successful, but during the next few decades, scientists were able to isolate purer doses of penicillin and the new drug turned out to be one of the most significant medical advances of the twentieth century.
Since antibiotics kill living cells, one of the problems is finding antibiotics that will kill bacteria cells without killing the patient’s own cells. But that’s our topic for next time. |
Music: the sound of a chord as determined by the selection of the component notes and the way the notes are distributed to the instruments
- The act, practice, or production of one that voices.
- Music Tonal quality or blend of an instrument in an ensemble, especially a jazz ensemble, or of the ensemble as a whole.
- Linguistics The vibration of the vocal cords during the production of speech or a speech sound.
- Present participle of voice.
- (music) the final regulation of the pitch and tone of any sound-producing entity, especially of an organ or similar musical instrument
- (music) a particular arrangement of notes to form a chord.
- (phonetics) the articulatory process in which the vocal cords vibrate
- (phonetics, phonology) a classification of speech sounds that tend to be associated with vocal cord vibration |
The educational profile for a student with Williams syndrome is a unique blend of strengths in language and nonverbal reasoning and weakness in visuospatial construction (the ability to see an object or picture as a set of parts and then to construct a replica of the original from these parts). In general, students with WS learn best with consistency, structured instructional routines, clear and realistic expectations, social stories, scripts and visual schedules, and technology. In particular, students with WS are often very effective users of computers and iPads/tablets. They also benefit from “chunking” of material into manageable parts, audio and dynamic visual supports, rhyme, rhythm and cadence, music and/or performing, finding materials that they have an emotional connection with, and specific praise. Above all, it is important to provide the material in a variety of ways. Adapting strategies to pre-teach, teach, and then re-teach, in order to reinforce concepts, can be very helpful. |
The process model discussed in previous tutorials described a process as an executable program with a single thread of control. Most modern operating systems now offer features that enable a process to contain multiple threads of control. This tutorial covers the many concepts associated with multithreaded computer structures, the issues related to multithreaded programming, and how multithreading affects the design of operating systems. You will then learn how the Windows XP and Linux operating systems manage threads at the kernel level.
What is a thread?
A thread is a stream of execution through the process code. It has its own program counter, which keeps track of which instruction to execute next, and its own set of system registers, which hold its current working variables. Threads are also termed lightweight processes. A thread uses parallelism, which provides a way to improve application performance.
Major Types of Threads
Let us take an example: a web browser may have one thread to display images or text while another thread retrieves data from the network. Another example is a word processor, which may have one thread for displaying the UI or graphics, another thread for responding to keystrokes received from the user, and a third thread performing spelling and grammar checking in the background. In some cases, a single application may be required to perform several similar tasks.
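To make the word-processor example concrete, here is a minimal, hypothetical Python sketch (not part of the original tutorial): the main thread keeps accepting simulated keystrokes while a background thread performs a lengthy "spell check". Both threads share the same queue, illustrating resource sharing within a single address space; all names and timings are illustrative only.

```python
import queue
import threading
import time

keystrokes = queue.Queue()      # shared by both threads (same address space)
done = threading.Event()

def spell_checker():
    """Background thread: periodically scans whatever has been typed so far."""
    typed = []
    while not done.is_set():
        while not keystrokes.empty():
            typed.append(keystrokes.get())
        time.sleep(0.5)         # stand-in for a lengthy operation
        print(f"[checker] scanned {len(typed)} characters")

def ui_loop():
    """Foreground work: keeps accepting 'keystrokes' without waiting for the checker."""
    for ch in "hello world":
        keystrokes.put(ch)
        time.sleep(0.1)
    done.set()

checker = threading.Thread(target=spell_checker)
checker.start()
ui_loop()                       # the main thread plays the role of the UI thread
checker.join()
```

Because the checker runs in its own thread, the "UI" never has to wait for the lengthy scan to finish, which is exactly the responsiveness benefit discussed below.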
Advantages / Benefits of Threads in Operating System
The advantages of multithreaded programming can be categorized into four major headings -
- Responsiveness: Multithreading allows an interactive application to continue running even when part of it is blocked or carrying out a lengthy operation, which increases responsiveness to the user.
- Resource sharing: Threads share the memory and the resources of the process to which they belong. The advantage of sharing code is that it allows an application to have several different threads of activity within the same address space.
- Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads.
- Utilization of multiprocessor architectures: The benefits of multithreading are greatly amplified in a multiprocessor architecture, where threads may run in parallel on different processors.
A relationship must be established between user threads and kernel threads. Here are the three common ways of establishing this relationship.
- Many-to-One Model: The many-to-one model maps several user-level threads to a single kernel thread.
- One-to-One Model: The one-to-one model maps each user thread to a kernel thread and provides more concurrency than the many-to-one model.
- Many-to-Many Model: The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine. |
Organs of sound reception in invertebrates
It has long been believed that at least some insects can hear. Chief attention has been given to those that make distinctive sounds (e.g., katydids, crickets, and cicadas) because it was naturally assumed that these insects produce signals for communication purposes. Organs suitable for hearing have been found in insects at various locations on the thorax and abdomen and, in one group (mosquitoes), on the head.
Among the many orders of insects, hearing is known to exist in only a few: Orthoptera (crickets, grasshoppers, katydids), Homoptera (cicadas), Heteroptera (bugs), Lepidoptera (butterflies and moths), and Diptera (flies). In the Orthoptera, ears are present, and the ability to perceive sounds has been well established. The ears of katydids and crickets are found on the first walking legs; those of grasshoppers are on the first segment of the abdomen. Cicadas are noted for the intensity of sound produced by some species and for the elaborate development of the ears, which are located on the first segment of the abdomen. The waterboatman, a heteropteran, is a small aquatic insect with an ear on the first segment of the thorax. Moths have simple ears that are located in certain species on the posterior part of the thorax and in others on the first segment of the abdomen. Among the Diptera, only mosquitoes are known to possess ears; they are located on the head as a part of the antennae.
All the insects just mentioned have a pair of organs for which there is good evidence of auditory function. Other structures of simpler form that often have been considered to be sound receptors occur widely within these insect groups as well as in others. There is strong evidence that some kind of hearing exists in two other insect orders: the Coleoptera (beetles) and the Hymenoptera (ants, bees, and wasps). In these orders, however, receptive organs have not yet been positively identified.
Types of insect auditory structures
Four structures found in insects have been considered as possibly serving an auditory function: hair sensilla, antennae, cercal organs, and tympanal organs.
Many specialized structures on the bodies of insects seem to have a sensory function. Among these are hair sensilla, each of which consists of a hair with a base portion containing a nerve supply. Because the hairs have been seen to vibrate in response to tones of certain frequencies, it has been suggested that they are sound receptors. It seems more likely, however, that the sensilla primarily mediate the sense of touch and that their response to sound waves is only incidental to that function.
Antennae and antennal organs
Many sensory functions have been attributed to the antennae of insects, and it is believed that they serve both as tactual and as smell receptors. In some species, the development of elaborate antennal plumes and brushlike terminations has led to the suggestion that they also serve for hearing. This suggestion is supported by positive evidence only in the case of the mosquito, especially the male, in which the base of the antenna is an expanded sac containing a large number of sensory units known as scolophores. These structures, found in many places in the bodies of insects, commonly occur across joints or body segments, where they probably serve as mechanoreceptors for movement. When the scolophores are associated with any structure that is set in motion by sound, however, the arrangement is that of a sound receptor.
In the basic structure of the scolophore, four cells (base cell, ganglion cell, sheath cell, and terminal cell), together with an extracellular body called a cap, constitute a chain. Extending outward from the ganglion cell is the cilium, a hairlike projection that, because of its position, acts as a trigger in response to any relative motion between the two ends of the chain. The sheath cell with its scolopale provides support and protection for the delicate cilium. Two types of enclosing cells (fibrous cells and cells of Schwann) surround the ganglion and sheath cells. The ganglion cell has both a sensory and a neural function; it sends forth its own fibre (axon) that connects to the central nervous system.
In the mosquito ear the scolophores are connected to the antenna and are stimulated by vibrations of the antennal shaft. Because the shaft vibrates in response to the oscillating air particles, this ear is of the velocity type. It is supposed that stimulation is greatest when the antenna is pointed toward the sound source, thereby enabling the insect to determine the direction of sounds. The male mosquito, sensitive only to the vibration frequencies of the hum made by the wings of the female in his own species, flies in the direction of the sound and finds the female for mating. For the male yellow fever mosquito, the most effective (i.e., apparently best heard) frequency has been found to be 384 hertz, or cycles per second, which is in the middle of the frequency range of the hum of females of this species. The antennae of insects other than the mosquito and its relatives probably do not serve a true auditory function.
The cercal organ, which is found at the posterior end of the abdomen in such insects as cockroaches and crickets, consists of a thick brush of several hundred fine hairs. When an electrode is placed on the nerve trunk of the organ, which has a rich nerve supply, a discharge of impulses can be detected when the brush is exposed to sound. Sensitivity extends over a fairly wide range of vibration frequencies, from below 100 to perhaps as high as 3,000 hertz. As observed in the cockroach, the responses to sound waves up to 400 hertz have the same frequency as that of the stimulus. Although the cercal organ is reported to be extremely sensitive, precise measurements remain to be carried out. It is possible, nevertheless, that this structure, which is another example of a velocity type of sound receptor, is primarily auditory in function. |
1. A carefully constructed assessment instrument designed to help individuals reflect on their skills profile. The output comprises a person’s score on nine major traits, 27 sub-traits, and seven themes.
2. The Assessment Instrument is valuable for individuals to reflect on their skills profile and can be highly effective in team settings where team members assess each other to know how colleagues view each other's skills profile.
3. Well-designed measures of personality and skills traits like the Skills Studio Assessment Tool can predict highly consequential outcomes, including enabling a roadmap of learning and refining key skills.
About Formative Assessments: a tool to identify misconceptions, struggles, and learning gaps along the way and assess how to close those gaps. It includes effective tools for helping to shape learning and can even bolster students’ abilities to take ownership of their learning when they understand that the goal is to improve learning, not apply final marks. It can include students assessing themselves, peers, or even the instructor, through writing, quizzes, conversation, and more. In short, Formative Assessment occurs throughout a class or course and seeks to improve student achievement of learning objectives through approaches that enable the conscious reflection of behavior. |
In the last blog post, we discussed the 5 FACTORS that determine and affect the Rate of Reactions.
Today we shall discuss the last factor – Effect of Catalysts on Rate of Reaction
A catalyst is a substance that speeds up a reaction and remains chemically unchanged at the end of the reaction.
A catalyst works by allowing the reaction to proceed by an alternative pathway that involves a lower activation energy.
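How much a lower activation energy speeds a reaction up can be illustrated with the Arrhenius equation, k = A·e^(−Ea/RT). This goes beyond the O Level syllabus, and the activation energies below are made-up illustrative values rather than data for any real catalysed reaction – treat it as a rough sketch only.

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 298     # temperature, K (about room temperature)

def speed_up_factor(ea_uncatalysed_j, ea_catalysed_j):
    """Ratio of catalysed to uncatalysed rate constants at the same temperature,
    assuming the Arrhenius pre-exponential factor A stays the same."""
    return math.exp((ea_uncatalysed_j - ea_catalysed_j) / (R * T))

# Illustrative values only: lowering Ea from 75 kJ/mol to 50 kJ/mol
print(f"Roughly {speed_up_factor(75_000, 50_000):,.0f} times faster at {T} K")
```

Even a modest drop in activation energy produces a dramatic speed-up at room temperature.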
Characteristics of a Catalyst:
1. Only a small amount is needed to speed up a chemical reaction
2. It lowers the activation energy of a reaction
3. It is not used up during the reaction. Same amount of catalyst is present at beginning & at the end of reaction
4. It is selective in action. One catalyst cannot act on or speed up all types of reactions. Different catalysts speed up different reactions
5. The physical appearance of the catalyst may change at the end of the reaction, but its chemical properties remain unchanged
6. A catalyst increases the speed, and NOT the yield, of a chemical reaction, i.e. the same amount of products is formed whether a catalyst is used or not
7. Impurities can prevent catalysts from working, i.e. the catalyst is poisoned or inactivated
In O Level Chemistry exams, there has been a recent trend of more questions associated with catalysts and how they affect the Rate of Reaction.
Let’s check out an exam-based question to see how much you understand:
Which statement about catalysts is NOT true?
A. They speed up chemical reactions.
B. They are used up in a chemical reaction.
C. Different catalysts catalyse different reactions.
D. Many transition metals are good catalysts.
- O Level Chemistry – Rate of Reaction Mini Series Part 1
- O Level Chemistry – Rate of Reaction Mini Series Part 2
- O Level Chemistry: Energy Changes (Exo/Endo) & Bond Energy
- O Level Chemistry: Acids,Bases & Salts / Organic Chemistry
- O Level Chemistry – Strategies to Predict Products of Electrolysis for Aqueous Solutions |
Super Therm and Solar Panels – the Perfect Solar Team
Too much heat can be bad for solar panels, reducing their efficiency by 10%-25%, says a US solar supplier. It sounds counterintuitive, but too much heat can reduce the ability of solar to produce power, says the World Economic Forum.
Like an iPhone that switches off when left in the heat, the output of solar panels falls by 0.5 percentage points for every degree rise in temperature above 25 degrees, it says. Most rooftops in Sydney exceed 50°C on a hot day, said Dr Peter Irga, an atmospheric scientist in the University of Technology Sydney’s Faculty of Engineering. “Solar works better on a rooftop in Scotland or Wales than it does in Sydney,” he said (source).
How does extreme heat affect solar panels?
Heat can “severely reduce” the ability of solar panels to produce power, according to CED Greentech, a solar equipment supplier in the United States. Depending on where they’re installed, hot temperatures can reduce the output efficiency of solar panels by 10%-25%, the company says.
According to the American renewable energy website EnergySage, solar panels are tested at 25°C (77°F) and generally have a temperature range of between 15°C and 35°C. Solar cells – the electronic devices that convert sunlight into electricity that are connected together to build solar panels – produce solar power most efficiently within this range. But solar panels can get as hot as 65°C (149°F), EnergySage says. This can affect the efficiency of solar cells.
Why do solar panels struggle in very hot weather?
The impact of heat on solar panels is to do with the laws of thermodynamics – the science of heat and how it affects things. The electricity generated by solar panels comes from a flow of particles called electrons inside the electrical circuit, explains news site Euronews. When temperatures soar, these electrons can bounce around too much – and this reduces voltage, or the amount of electricity generated (source).
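As a rough sketch of the effect described above, the rule of thumb quoted earlier in this article (output falling by about 0.5 percentage points for every degree above the 25°C test condition) can be turned into a small calculation. The coefficient varies from panel to panel, so the numbers are illustrative only.

```python
def output_fraction(cell_temp_c, rated_temp_c=25.0, loss_per_degree=0.005):
    """Approximate fraction of rated output at a given cell temperature,
    using the ~0.5 percentage point per degree rule of thumb."""
    excess = max(0.0, cell_temp_c - rated_temp_c)
    return max(0.0, 1.0 - loss_per_degree * excess)

for temp in (25, 35, 50, 65):
    print(f"{temp} degC cell temperature -> {output_fraction(temp):.0%} of rated output")
```

At the 65°C panel temperature mentioned above, this simple rule gives roughly 80% of rated output, which sits within the 10%-25% loss range quoted earlier.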
Super Therm and Solar Panels are the perfect team for multiple reasons:
- Both save energy
- Both reduce CO2
- Both work with the sun
- Both bring better value
- Both can protect the building
- Both increase each other’s growth
The cost of investment and the returns – in dollars as well as in environmental terms – are multi-beneficial. An energy saving coating offers higher value and high performance, and the energy saving returns are there.
In fact, Super Therm® and Solar Panels work beautifully with each other – one blocking solar heat from entering your home while the other converts sunlight into electricity.
The increased returns, benefits and longevity of the best performing ceramic solar coating is a solid investment that will last over 20 years. Let’s compare a solar system alone, then combined.
‘Cool roofs’ increase power yield in bifacial rooftop PV systems by 8.6%
An Algerian-Spanish research team has looked at how cool roofs (CR) help increase power yield in bifacial rooftop PV systems and has found that the proposed combination offers higher energy yields than bifacial counterparts deployed on conventional roofs.
“When a cool roof coating is used beneath the bifacial modules, PV production can be substantially boosted (+8.6%) compared to the monofacial solution, meaning an economic benefit about 18.3 €/kWp/year,” the group said. “Moreover, the [cool roof] coating can help reduce floor temperatures in unshaded areas during summer (−3.8 °C), potentially leading to significant energy savings in cooling the building.”
The scientists said that cool roofs could be either added to the floor of existing bifacial rooftop PV systems or used for new installations. “It is an already available market product with reasonable costs that offers dual advantages: increased energy yield and reduced building cooling consumption, both yielding economic benefits.
Solar Panels with Cool Roofs Review – UNSW – JAN 2022
- The annual energy yield of PV increases by 0.71%-1.36%.
- The cool roof performance increases by 14%.
- The roof surface temperature decreases by 3.1-5.2 °C. A decrease of 1 °C in the roof surface temperature increases PV system efficiency by 0.2-0.9% (report).
Citywide Impacts of Cool Roof and Rooftop Solar Photovoltaic Deployment on Near-Surface Air Temperature and Cooling Energy Demand
During the day, cool roofs are more effective at cooling than rooftop solar photovoltaic systems, but during the night, solar panels are more efficient at reducing the UHI effect. For the maximum coverage rate deployment, cool roofs reduced daily citywide cooling energy demand by 13–14%, while rooftop solar photovoltaic panels by 8–11 % (without considering the additional savings derived from their electricity production). The results presented here demonstrate that deployment of both roofing technologies have multiple benefits for the urban environment, while solar photovoltaic panels add additional value because they reduce the dependence on fossil fuel consumption for electricity generation (report).
An experimental study of the impact of cool roof on solar PV electricity generations on building rooftops in Sharjah, UAE
The preliminary findings of the experimental study indicated that there is a likely impact of 5–10% improvement of electricity generation with the cool roof applications (report).
A 5kW solar system is approximately 30m2
Cost is approximately $5,630, or $187/m2 installed + input credits
Super Therm® is $750 for 30m2, or $25/m2 + application
Super Therm® is an Energy Saving Coating!
Unique to its formulated compounds, Super Therm®’s testing will blow you away. As the environment gets hotter – driving up demand, political conversation and environmental protection – Cool Roof coatings will be the next big wave of thermal protection for all living spaces and other heat-related uses.
Blocking 96.1% of the solar heat, it ensures you’re protected from the blazing sun, warmer days and nights, and UV over 20+ years.
It has been tested by the Department of Energy and reduces energy consumption by 20-50% (industry tested), so the payback is much quicker and the lifespan longer, with current testing showing 32 years.
*Performance and results are based on industry averages and locations.
What is the environmental Impact?
The other element of solar is the impact on our environment.
As solar panels last between 10 and 25 years, they reduce in efficiency over time, which reduces savings, and they eventually end up in landfill. They will expire!
Currently, almost all broken or expired solar panels go into landfill, and experts have been warning for some time that more than 100,000 tonnes of modules will end up there by 2035 (ABC). The global outlook for solar panels is bleak from an environmental standpoint. It’s economic cost versus environmental impact.
Solar by Area
Source: Solar Calculator
The 5kW solar system is the most popular system in Australia with the sheer demand for systems driving down the cost in comparison to smaller system sizes. The cost of a 5kW solar system varies depending on where you reside as there are three major price variables:
- Retail competition in your local market
- The solar rebate applicable to your geographic location
- The quality of the system you wish to install
The table below shows the average price of a 5kW system in major capital cities:
5KW SYSTEM PRICES IN MAJOR CITIES:
The prices above are for the full cost of installation of a 5kW system and include GST, the federal solar rebate and assume the system is comprised of good quality components.
5kW solar inverter cost
The prices shown above include the cost of the inverter for a 5kW system. The quality of the inverter is one of the variants in prices between different systems.
Savings and payback numbers
Both the prices and output of 5kW systems vary according to location, and so too do the payback and savings figures. A 5kW system in Melbourne, for example, can pay back within six years, while in Brisbane it can be less than five years. Below is a snapshot from the Solar Calculator tool which shows the savings and payback results for a 5kW system in Sydney:
Internal roof space in Australian home
Guide: Original Temperature: 40°C; New Temperature 30°C
ASHRAE Formula: Original Temp. × Temperature Difference / 24 = tons of cooling; tons × 12,000 BTU per ton = BTU Savings. Here: 40 × 10 / 24 × 12,000 = 200,000 BTU
Change BTU into kW to find COST SAVINGS per hour / day / week / month / year. (1 BTU = 0.293 Watt)
200,000 × 0.293 = 58,600 Watts / 1,000 = 58.6 kWh per hour; at the Australian average rate (34 cents per kWh) that is $19.92 per hour in savings
3 months (December to February – Summer) of net savings of $19.92 = $1,792.80 (per year savings) or 20 years = $35,856
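The arithmetic above can be re-run as a short script. The 34 c/kWh price and the 0.293 conversion factor come from the text; the 90 hours used for the summer figure is inferred from the document's own numbers ($1,792.80 ÷ $19.92), so treat it as an assumption about how the savings hours were counted.

```python
original_temp_c = 40      # internal roof space before coating
new_temp_c = 30           # internal roof space after coating
price_per_kwh = 0.34      # Australian average quoted above, $/kWh
btu_to_watt = 0.293       # conversion factor used above

btu_saved = original_temp_c * (original_temp_c - new_temp_c) / 24 * 12_000
kwh_saved_per_hour = btu_saved * btu_to_watt / 1000
savings_per_hour = round(kwh_saved_per_hour * price_per_kwh, 2)

print(f"BTU saved: {btu_saved:,.0f}")                          # 200,000 BTU
print(f"Energy saved: {kwh_saved_per_hour:.1f} kWh per hour")  # 58.6 kWh
print(f"Savings: ${savings_per_hour:.2f} per hour")            # $19.92

hours_per_summer = 90     # implied by $1,792.80 / $19.92 in the text
summer_savings = savings_per_hour * hours_per_summer
print(f"Summer savings: ${summer_savings:,.2f}")               # $1,792.80
print(f"Over 20 years: ${summer_savings * 20:,.2f}")           # $35,856.00
```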
Savings vs payback
Solar Calculator recommend considering lifetime savings of a 5kW solar system in conjunction with the payback period. The payback calculation tends to favour lower cost systems that may pay back faster, but do not necessarily generate greater savings over the long term. A better quality 5kW system will perform better and not deteriorate at the rate of a lower cost system.
Initial Cost for Improvement (¥10,000) / Energy Saving Effect (¥10,000/year) / Pay-Back Period (year), Excluding Interest
Ceramic Insulation Coating: 6,850
Bituminous Coating: 5,680
Return on Investment
Thirteen (13) month payback of savings against the investment, measured over the difference in cost compared with applying Bituminous Coating, which has no insulation payback. See more >
Whether or not it’s lucky, Varsity team member Sven likes the number seven, so he is playing with patterns created by seven points lying on the perimeter of a circle.
The first thing that Sven does is draw the heptagon inside the circle whose vertices are the seven points. Then he starts to think about drawing in all the diagonals of that heptagon (line segments that join two non-adjacent vertices). It seems like it might take a long time…
How many diagonals does this heptagon have?
In the end Sven decides to draw all the diagonals, and he notices that between the heptagon itself and the diagonals, the original circle is broken up into many regions. (He also notices that for the seven points he’s chosen, there is no point through which three or more diagonals pass.) He wonders how many regions there are, and begins thinking about it by noticing that drawing all the lines between 2, 3, or 4 points on a circle splits the circle up into either 2, 4, or 8 regions (as shown in the diagram).
Into how many regions do Sven’s heptagon and all its diagonals split the original circle?
Solutions to week 54
Watch for Falling Nuts. The second nut falls twice as far away from the tree as the first one, and it is critical to remember that the difference in distance means that it falls for twice as much time but it does not necessarily mean that it falls from twice the height. In fact, it has to fall from considerably higher than that in order to take twice the time to fall, since it is constantly speeding up due to the influence of gravity as it falls. In particular, the fact that the acceleration due to gravity is constant means that for any fall, the average speed of the nut is equal to the speed at the midpoint of the trip. So imagine that the first nut takes s seconds to fall, so that it has average speed 6/s meters per second. Then the second nut takes 2s seconds to fall, and its average speed is the same as its speed after s seconds, which has to be the same as the first nut’s speed after s seconds, which is its final speed. But since the first nut is constantly accelerating, its final speed is twice its average speed. Hence, its final speed is 12/s meters per second. Therefore, the second nut’s average speed is 12/s meters per second, and it falls for 2s seconds, so it must have started from 24 meters up.
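The same answer drops out compactly from the constant-acceleration relation d = ½gt², taking the first nut's 6-meter fall from the solution above (average speed 6/s over s seconds): doubling the fall time quadruples the fall distance.

```latex
d_1 = \tfrac{1}{2} g s^2 = 6\ \text{m}, \qquad
d_2 = \tfrac{1}{2} g (2s)^2 = 4\left(\tfrac{1}{2} g s^2\right) = 4 \times 6\ \text{m} = 24\ \text{m}.
```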
Fall Out of Line. This problem really led to a lot of creativity on the part of the Varsity Math team; there are lots of different possible ways to arrange the dominoes so that pushing A will make B fall but not the other way around. The conceptually simplest way is to simply rotate one of the middle dominoes 90 degrees so that it is lying on its long edge, and move it farther away from its neighbor toward A and closer to its neighbor toward B, so that when the dominoes are falling from A to B, the neighbor toward A will reach and tip over the sideways one, which is then close enough to tip over its neighbor toward B, but if it is tipped over from the other direction, its falling over will not reach the neighbor toward A. Another team member, wanting to know if it could be done with all of the dominoes vertical, made an extra-large gap and stacked dominoes three high vertically, so that falling from B to A can’t possibly reach A, but falling from A sends the top of the stack flying to knock over the dominoes toward B. A third team member, not satisfied with stacking dominoes, got them all standing in the ordinary way by creating a big chain reaction that would only be triggered in the falling direction from B which would knock away some intermediate tiles on the route from B to A before the chain could get there. And finally, a fourth member, concerned about the somewhat unpredictable dynamics of the “right-angle” knockout in the third solution, devised a way (using two forks of the type near B in that layout) so that when B is tipped over, the path to A is cancelled out by dominoes falling in the opposite direction, for the most reliable “one-way fall” of all. Photos of all four layouts, as actually tested by the Varsity Math team members, are below (the second two layouts are accompanied by pictures of what they look like after B is pushed over). In every photo, A is the leftmost domino and B the rightmost.
Links to all of the puzzles and solutions are on the Complete Varsity Math page.
Come back next week for answers and more puzzles. |
Rabies is a fatal viral infection that is transmitted primarily through bite wounds. Skunks, bats, raccoons, and foxes are the primary carriers. Rabies is also fatal to humans; there has been only one case of a person surviving rabies when treatment was started after clinical signs were present. Puppies are vaccinated when three to four months of age and then one year later.
Each state varies in its rabies law; most states require the rabies vaccine every three years for adult pets, but some states still require it annually. If a person or a pet is bitten by an unknown or unvaccinated animal (dog, cat, or wild animal), the local health department or your veterinarian should be consulted.
The animal that did the biting should be apprehended, if possible, and your veterinarian or local health official should be contacted immediately. A test can be done to see if rabies is present, but it does require that the animal be euthanized because the test can be done only on the brain. Rabies is preventable through regular vaccination of dogs and cats. |
What is Education and Examples?
Education is a process of acquiring knowledge, skills, values, and attitudes through various formal and informal means. It is an essential tool that empowers individuals to lead a meaningful and productive life by providing them with the necessary tools and resources to succeed in various spheres of life. Education can take many forms, including formal education such as attending schools, colleges, and universities, as well as informal education such as learning through life experiences, reading, and online courses. It plays a crucial role in personal growth, career development, and social mobility, and can lead to higher levels of achievement, better employment opportunities, and a higher standard of living. There are various types of education, such as: Formal Education: Formal education is a structured form of education that takes place in schools, colleges, and universities. It provides students with a structured curriculum that covers various subjects, including math, science, |
As warmer winter temperatures become more common, one way for some animals to adjust is to shift their ranges northward. But a new study of 59 North American bird species indicates that doing so is not easy or quick — it took about 35 years for many birds to move far enough north for winter temperatures to match where they historically lived.
“This is a problem, because birds are among the most mobile of animals, and yet they take decades to respond to warming,” said Frank La Sorte, a postdoctoral researcher at the Cornell Lab of Ornithology and lead author of the study, which was published online by the Journal of Animal Ecology this month. “Climatic conditions are steadily moving northward, whether particular animals come along or not. As conservation biologists we need to know how well animals are keeping up.”
Earlier studies of responses to climate change examined shifts in species’ geographic ranges. “Our work adds important realism and a temporal dimension to these models for a critical aspect of climate: minimum winter temperature,” said co-author Walter Jetz of Yale University.
The researchers used 35 years of data from the North American Christmas Bird Count to match winter temperatures to where birds were seen. They tested 59 bird species individually and found that they responded differently to climate change. When summarized across bird species, there was evidence for a strong delay lasting about 35 years.
For example, black vultures have spread northward in the last 35 years and now winter as far north as Massachusetts, where the minimum winter temperature is similar to what it was in Maryland in 1975. On the other hand, the endangered red-cockaded woodpecker did not alter its range at all despite the warming trend, possibly because its very specific habitat requirements precluded a range shift.
Both of these scenarios could represent problems for birds, La Sorte said. Species that do not track changes in climate may wind up at the limits of their physiological tolerance, or they may lose important habitat qualities, such as favored food types, as those species pass them by. But they also can’t move their ranges too fast if the habitat conditions they depend on also tend to lag behind climate.
“When you think about it, it makes sense that species move slower than the rate at which climate is changing,” La Sorte said. “They’re not just tracking temperature — many of them need to follow a prey base, a type of vegetation, or they need certain kinds of habitat that will create corridors for movement.”
Variability in climate warming is likely to affect how species respond, too, La Sorte said. If warming trends weaken, as they did over the past few years, birds may be able to catch up. But accelerated warming, which is likely as global carbon emissions continue to increase, may put additional strain on birds. The study highlights these challenges and the high potential climate change has for disrupting natural systems. It also underscores the challenges ecologists face in predicting the long-term consequences of climate change for many species simultaneously. |
A recent study found that West Antarctic ice shelves are fracturing at an ever-faster rate, thus losing hold of the rocky walls that slow their flow out to sea. Bottom line: our warming globe is depleting the South Pole's ice cap faster than it can be replenished.
Traditionally, as ice slowly moves out to sea, it bunches up and creates a bottleneck that acts as a doorstop to impede the ice's flow (Check out Extreme Ice Survey to see time lapse photography of ice flow.) But the observed increase in fracturing leads to an increase in calving (when a big chunk of ice simply splits off). Result? An increase in icebergs, and a decrease in ice that sits on (or clings to) the continent.
Unfortunately, this news, from glaciologists at The University of Texas at Austin's Institute for Geophysics, is not an anomaly. Melting, cracking, splitting and calving in Earth's cryosphere (the parts of the globe where water is frozen) may be the most dramatic embodiments of the climatic changes our actions are triggering. Here is just a sampling of some other similarly troubling findings:
- Two Canadian ice shelves -- in place before Europeans settled the area -- have been dramatically melting. One is on the verge of melting away completely.
Dramatic, right? But why should all this melting ice matter to us? Here are a few reasons:
Dr. Martin Sommerkorn, senior climate change adviser for the World Wide Fund for Nature's international Arctic program:
Remove the Arctic ice cap and we are left with a very different and much warmer world. [It will] set in motion powerful climate feed-backs which will have an impact far beyond the Arctic itself, [and could] lead to flooding affecting one quarter of the world's population, substantial increases in greenhouse gas emission from massive carbon pools, and extreme global weather changes.
A recent study published on Climate Central:
Arctic Warming -- which is happening twice as fast as the rest of the Northern Hemisphere -- is altering weather patterns and leading to high-impact, extreme weather events in the United States and Europe.
The Geological Society of America, 2002:
"The recent collapse of several Antarctic Peninsula ice shelves has been linked to rapid regional atmospheric warming during the twentieth century."
Science Daily reporting on two 2007 studies:
Scientists consider that the acceleration of the melting of the Greenland ice cap could play an important role in the future stability of ocean circulation and, hence, in the development of climate change.
Science tells us that Earth's cryosphere is very important, and that it's being severely compromised by anthropogenic climate change. Does all this melting and cracking bring any good news? Sled dogs will finally get a rest? Titanic historical reenactment societies can put all the extra icebergs to good use? I'll keep working on it. |
If You Could Change One Thing to Better Your Community, What Would It Be?
Every community has its strengths and weaknesses, and often, its residents have ideas on how to improve their surroundings. If given the opportunity to change one thing to better your community, what would it be? This question prompts people to reflect on the challenges they face and envision a brighter future for their neighborhoods. Let’s explore the possibilities and potential impact of making a positive change.
One common desire to better communities is improved access to education. By investing in education, we can empower individuals and enrich the community as a whole. This could involve building new schools or renovating existing ones, providing more resources for students, and offering educational programs for adults. A well-educated community is more likely to thrive economically, socially, and culturally.
Another crucial aspect often mentioned is enhancing public spaces. Parks, playgrounds, and recreational areas are essential for fostering community engagement and promoting physical and mental well-being. By creating safe and inviting spaces, we encourage people to come together, exercise, and connect with nature, ultimately strengthening the sense of belonging within the community.
Additionally, addressing poverty and homelessness is a priority for many. Establishing programs that provide affordable housing, job training, and support services can significantly improve the lives of those struggling to make ends meet. By addressing these issues head-on, we can create a community that takes care of its most vulnerable members.
1. How can improved access to education benefit the community?
Improved access to education can lead to a more educated workforce, higher employment rates, and increased economic opportunities. It also fosters a culture of lifelong learning and personal growth.
2. Why are public spaces important for communities?
Public spaces serve as gathering spots, promoting social interaction and community cohesion. They also provide opportunities for physical activity, relaxation, and the appreciation of nature.
3. How can addressing poverty and homelessness improve the community?
By providing support services and affordable housing options, we can help individuals and families escape the cycle of poverty and homelessness. This, in turn, leads to a safer, more stable community for all residents.
4. What are the potential challenges in implementing these changes?
Challenges may include funding limitations, resistance to change, and ensuring the sustainability of the initiatives. Collaboration and community involvement are crucial in overcoming these obstacles.
5. How can individuals contribute to these changes?
Individuals can volunteer their time, donate resources, advocate for policy changes, and support local organizations working towards community improvement.
6. Can changing one thing really make a significant difference?
Yes, even a single change can have a ripple effect. By focusing on one area and implementing meaningful improvements, it can inspire further positive changes and motivate others to get involved.
7. How can community members voice their suggestions for change?
Community members can participate in town hall meetings, join local organizations, and engage in conversations with elected officials. They can also utilize online platforms to share their ideas and concerns with a wider audience.
In conclusion, envisioning positive change for our communities is a powerful exercise. By focusing on areas such as education, public spaces, and addressing poverty, we can create more vibrant, inclusive, and prosperous communities for everyone to thrive in. It’s essential to remember that change begins with individuals, and together, we can shape a better future for our communities. |
Hello 1-GSM Visitors, Are you having trouble understanding genetics x linked genes answer key? Don’t worry, you’re not alone. It can be a challenging topic to navigate, but with the right resources, you can master it. To help you out, let’s take a closer look at what we found.
Understanding X-Linked Genes
X-linked genes are genes that are located on the X chromosome. Since males only have one X chromosome, they are more likely to be affected by X-linked traits than females. This is because females have two X chromosomes, so if one X chromosome has a mutated gene, the other X chromosome can often compensate for it.
Answer Key for X-Linked Genes
When it comes to X-linked gene answer keys, it’s important to first understand the basics of genetics. Each gene has two copies, one from each parent. These copies can either be the same or different. When it comes to X-linked genes, males only have one copy of each gene since they only have one X chromosome. Females, on the other hand, have two copies of each gene since they have two X chromosomes. To understand X-linked gene answer keys, it’s important to understand the difference between dominant and recessive genes. Dominant genes only need one copy to be expressed, while recessive genes need two copies to be expressed.
Example X-Linked Gene Answer Key Problems
Let’s take a look at a few examples of X-linked gene answer key problems.
Example 1: Hemophilia A is an X-linked recessive disorder. If a woman who is a carrier for hemophilia A has a child with a man who does not have hemophilia A, what is the chance that their son will have hemophilia A?
Answer: There is a 50% chance that their son will have hemophilia A.
Example 2: Color blindness is an X-linked recessive disorder. If a woman who is a carrier for color blindness has a child with a man who is colorblind, what is the chance that their daughter will be colorblind?
Answer: There is a 50% chance that their daughter will be colorblind and a 50% chance that she will be an unaffected carrier.
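As a quick check on both example answer keys, here is a minimal Python sketch (purely illustrative, not part of any official answer key) that enumerates the four equally likely offspring outcomes of an X-linked cross; "B" stands for the normal allele and "b" for the recessive disease allele.

```python
from itertools import product

def x_linked_cross(mother_alleles, father_alleles):
    """Enumerate offspring genotypes for an X-linked gene.

    mother_alleles: the mother's two X-linked alleles, e.g. ("B", "b") for a carrier.
    father_alleles: the father's single X-linked allele plus "Y", e.g. ("b", "Y").
    Returns {(allele from mother, allele or Y from father): probability}.
    """
    outcomes = {}
    for m, f in product(mother_alleles, father_alleles):
        outcomes[(m, f)] = outcomes.get((m, f), 0) + 0.25
    return outcomes

# Example 1: carrier mother x unaffected father.
# Sons are the (_, "Y") outcomes; half of them inherit "b" and are affected.
print(x_linked_cross(("B", "b"), ("B", "Y")))

# Example 2: carrier mother x colorblind father.
# Daughters are the non-"Y" outcomes: half ("b", "b") affected, half ("B", "b") carriers.
print(x_linked_cross(("B", "b"), ("b", "Y")))
```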
Understanding X-linked genes and their answer keys can be challenging, but with the right resources and practice, you can master it. We hope this article has helped you gain a better understanding of X-linked genes and their answer keys. See you again at our other interesting article. |
The blooming of the Kyoto cherry blossoms is already a week earlier than average and 11 days earlier than if the climate crisis had not been exacerbated by human-induced climate change. This culturally significant event is already a symptom of the climate crisis. But how does climate change affect cherry blossoms?
Kyoto cherry blossoms reached full bloom a week earlier than all previous averages.
According to a study in the journal Environmental Research Letters, the peak date of cherry blossoms in Kyoto has shifted a week earlier. The shift is primarily due to climate change, with temperatures in the city center rising by several degrees since pre-industrial times. Climate change also aggravates the heat island effect, making cities more likely to experience warmer temperatures than their surroundings.
The early flowering dates of cherry blossoms in Kyoto are attributed to climate change, as the city has been warming up much more quickly than the rural area. Kyoto’s central part warmed up faster than the rural areas so that now the cherry blossoms bloom 11 days earlier than previous averages. However, the difference between the two areas has leveled off since the middle of the 20th century, indicating that urban warming has been responsible for most of Kyoto’s recent warming.
Rising temperatures are changing nature’s timing.
Rising temperatures have affected the timing of the flowering of cherry blossoms across the world, and scientists believe the early flowering dates of this year are the result of urban warming and climate change.
Extreme droughts have reduced the number of cherry blossoms in some areas, and extreme heat and drought have destroyed the natural habitats of many species.
They are a culturally significant event in Japan.
The climate crisis is causing Kyoto cherry blossoms to bloom earlier each year. Due to urbanization, buildings absorb heat more quickly than natural landscapes. This phenomenon is known as the “heat island effect” and contributes to rising temperatures. By the end of this century, scientists predict that the cherry blossoms in Kyoto will bloom an average of one week earlier than they do today.
The traditional peak of the cherry blossom season is at the end of March. But in 2021, the ancient capital of Japan has seen the blossoms bloom even earlier than usual. The cherry blossom season began on 11 March, four days earlier than usual in Kyoto.
They are a symptom of the climate crisis.
Scientists say that the early blooming of Kyoto cherry blossoms indicates a more significant crisis: global warming. The cherry blossoms typically peak in the spring months, but temperatures have increased earlier than last year. The cherry blossoms are especially sensitive to weather conditions.
They will reach full bloom an additional week earlier by 2100
By the end of this century, the Kyoto cherry blossoms are projected to begin blooming an additional week earlier than today. Scientists blame global warming and urbanization for the early flowering event. Buildings absorb solar radiation more efficiently than rural landscapes, leading to the “heat island effect” responsible for warming cities. Climate change also affects other areas, such as farming and land management practices. Because plants depend on one another for growth, changing one area can lead to a chain reaction.
The scientists used data from 58 benchmark cherry trees in Japan to estimate the full bloom date in Kyoto. Of these, 40 have already reached peak bloom this year, and 14 have bloomed on a record early date. Typically, cherry trees bloom for two weeks. But because cherry trees are sensitive to changes in temperature, the timing of their blooming is crucial for climate change studies. Kyoto’s average temperature has increased several degrees Celsius since pre-industrial times.
Imagine a world where you can have a conversation with your furry best friend. A world where you can say a word, and your dog understands exactly what you mean. Sounds incredible, doesn’t it? Well, the question of whether dogs can learn to recognize specific words has been a subject of fascination and curiosity for pet lovers everywhere. In this article, we will explore this intriguing topic and delve into the possibility of dogs acquiring language skills beyond simple commands. So, grab your pup, sit back, and let’s explore the wonderful world of canine communication.
Table of Contents
The Intelligence of Dogs
When it comes to intelligence, dogs are an incredibly diverse bunch. Different breeds have different abilities, and some are known for their exceptional intelligence. One such breed is the Border Collie, which has long been recognized for its remarkable intelligence and problem-solving skills. In fact, the Border Collie is often referred to as a natural wordsmith due to its ability to understand and respond to verbal commands with precision and accuracy.
The Psychological Mechanisms
To understand how dogs are able to recognize specific words, we need to delve into the psychological mechanisms that underlie their learning and understanding. Two key processes are at play here: associative learning and operant conditioning.
Associative learning is the process by which an animal forms connections between two events or stimuli. Dogs are masters of associative learning, as they are able to make associations between certain words or commands and the actions or rewards that follow. This means that when you repeatedly pair a specific word with a particular action or reward, your dog will eventually learn to associate that word with the desired behavior.
Operant conditioning refers to the process of learning through consequences. Dogs learn through this mechanism by performing certain behaviors and experiencing the positive or negative outcomes that follow. When it comes to word recognition, operant conditioning plays a vital role. Through consistent reinforcement and reward for correctly responding to verbal commands, dogs are motivated to listen and understand the words they are being taught.
Numerous studies have provided experimental evidence of dogs’ ability to recognize and understand words. Some noteworthy studies include speech discrimination studies, the Chaser experiment, the Rico study, and the Dog Project.
Speech Discrimination Studies
Research has shown that dogs have the capacity to discriminate between different words and understand their meanings. In one study, dogs were trained to determine whether a spoken word matched a picture of an object. The results revealed that dogs were able to successfully select the correct object based on the word they heard, demonstrating their ability to comprehend specific words.
The Chaser experiment, conducted by Dr. John Pilley, focused on a Border Collie named Chaser. Chaser was trained to identify and retrieve over 1,000 different toys by their names. This experiment showcased the remarkable word recognition abilities of dogs, particularly the Border Collie breed.
The Rico Study
The Rico study was another groundbreaking experiment that examined word recognition in a border collie. Rico was taught to associate specific words with different objects and demonstrated an impressive ability to retrieve the correct item when given the corresponding verbal command. This study further highlighted the capabilities of dogs to understand and respond to words.
The Dog Project
The Dog Project, led by Dr. Claudia Fugazza, explored dogs’ ability to understand human communication through gesture and word cues. The results of this project demonstrated that dogs have a unique talent for comprehending both verbal and non-verbal cues from humans, reinforcing their ability to recognize and interpret specific words.
Imitation and Understanding
Beyond associative learning and operant conditioning, dogs possess additional cognitive abilities that contribute to their word recognition skills. Two important mechanisms in this regard are the mirror neuron system and theory of mind.
Mirror Neuron System
The mirror neuron system refers to a network of neurons in the brain that activate both when an individual performs an action and when it observes the same action in others. This mechanism plays a crucial role in imitation and understanding. Some researchers believe dogs possess a comparable mirror-neuron-like system, which may enable them to mimic human actions and help them grasp the meaning of words through observation and imitation.
Theory of Mind
Theory of mind refers to the ability to attribute mental states to oneself and others, allowing individuals to understand that others may have different thoughts, beliefs, or intentions. While dogs may not possess the same level of theory of mind as humans, they do exhibit some understanding of others’ intentions and motivations, which can aid in word recognition.
Context and Familiarity
In addition to their inherent cognitive abilities, context and familiarity play significant roles in dogs’ ability to recognize words.
The Importance of Context
Dogs are highly sensitive to the context in which words are spoken. They rely on environmental cues, body language, and the surrounding circumstances to understand the meaning behind the words they hear. For example, if you say the word “sit” while holding a treat and pointing towards the ground, your dog will likely associate the word with the action of sitting.
The Role of Familiarity
Familiarity with specific words and commands is crucial for dogs to recognize and respond to them accurately. Through consistent training and repetition, dogs become familiar with the meaning and expectations associated with certain words. This familiarity enhances their ability to understand and execute verbal commands.
Training Dogs to Recognize Words
To train dogs to recognize specific words, several steps can be followed to build their vocabulary and develop their word discrimination skills.
Building a Vocabulary
The first step in training dogs to recognize words is to establish a vocabulary of relevant commands and words. Start with basic commands such as “sit,” “stay,” and “come,” and gradually introduce more complex words as your dog progresses.
Teaching Word Discrimination
Once the vocabulary is established, it’s important to teach dogs to discriminate between different words. This can be done through repetition and reinforcement. For example, say the word “sit” while gently guiding your dog into a seated position, then praise and reward them for the correct response. Repeat this process consistently until your dog associates the word with the action.
Generalization and Transfer
After dogs have learned to recognize individual words, it’s essential to encourage generalization and transfer of their word recognition skills to various contexts and situations. Practice using the words in different locations, with different people, and under different circumstances to ensure that your dog can understand and respond to the words in a wide range of scenarios.
Factors Influencing Word Recognition
Several factors can influence dogs’ ability to recognize specific words, including tone and vocal inflection, visual cues, and contextual information.
Tone and Vocal Inflection
Dogs are highly attuned to vocal cues and can pick up on subtle variations in tone and vocal inflection. Using a consistent tone and inflection when giving verbal commands can help dogs better understand the intended meaning behind the words.
Visual Cues

In addition to verbal cues, dogs also rely on visual cues to understand words. Incorporating gestures or physical prompts alongside verbal commands can enhance word recognition and facilitate comprehension.
Contextual Information

Providing dogs with contextual information can significantly aid in word recognition. By associating words with particular situations or contexts, dogs can better understand the meaning and intent behind the commands they hear.
Limitations and Challenges
While dogs possess remarkable word recognition abilities, there are certain limitations and challenges associated with this skill.
Limited Vocabulary Size
Dogs’ vocabulary is limited by their cognitive capacity and training. While they can learn numerous words and commands, there is a limit to how many they can recognize and understand consistently.
Humans and dogs are different species with distinct ways of perceiving and understanding the world. Dogs may not comprehend words in the exact same way humans do, but their ability to recognize and respond to specific words is still impressive.
Variability in Individual Abilities
Just like humans, individual dogs may have varying levels of word recognition abilities. Some dogs may excel in learning and understanding words, while others may require more time and effort. It’s important to consider and respect the unique abilities and potential limitations of each dog.
Practical Applications

The ability of dogs to recognize specific words has numerous practical applications in various fields. Some notable examples include the use of assistance dogs, search and rescue operations, canine communication studies, and therapy dogs.
Assistance Dogs

Assistance dogs, such as guide dogs for the visually impaired, rely on their exceptional word recognition skills to perform tasks and assist their handlers. By understanding and responding to specific words, these dogs are able to navigate the world and provide invaluable support to their owners.
Search and Rescue
In search and rescue operations, dogs are trained to respond to specific verbal commands to locate missing persons or detect certain scents. Their ability to recognize and understand words plays a critical role in these life-saving missions.
Canine Communication Studies
Studying dogs’ word recognition abilities contributes to our understanding of canine communication and cognition. By unraveling how dogs comprehend specific words, researchers gain insights into the remarkable intelligence and capabilities of these incredible animals.
Therapy Dogs

Therapy dogs provide comfort, companionship, and support to various individuals in need, such as patients in hospitals or individuals with mental health conditions. By recognizing and responding to specific words, these dogs can fulfill their role as therapy animals and positively impact the lives of those they encounter.
In conclusion, dogs are capable of learning and recognizing specific words through associative learning, operant conditioning, imitation, and understanding. Experimental evidence and research studies have demonstrated their impressive word recognition abilities, with breeds like the Border Collie standing out for their exceptional intelligence in this regard. Factors such as context, familiarity, and various cues influence dogs’ ability to understand specific words. While there are limitations and challenges associated with word recognition, the practical applications of dogs’ abilities extend to fields like assistance work, search and rescue, and therapy. The intelligence of dogs and their ability to comprehend specific words continues to amaze and inspire us, highlighting the remarkable bond and communication that exists between humans and their canine companions. |
In light of the Novel Coronavirus (COVID-19) outbreak, our curriculum team has designed a guide specifically for parents to extend their children's knowledge and understanding of COVID-19 through reading the information and conducting meaningful activities with their child.
We believe that it is important for such learning to be extended and reinforced at home. Our educators will also be using this guide to talk with the children about COVID-19, adapting it creatively for the different age groups within the school. With this, teachers will be able to use the pictures and information provided to explain and discuss sensitive issues such as COVID-19 with ease.
In this crucial time, we should stay vigilant and monitor the health of our children and ourselves. Let us start educating our children about the virus by downloading the guide below.
A protease (also known as a proteolytic enzyme, peptidase or proteinase) is an enzyme that helps digest different kinds of proteins in a process called proteolysis. Proteases are a category of enzymes; some are produced by the body, some are found in foods, and some are produced by bacteria and other microbes. Proteases assist with many different body processes including digestion, immune system function, and blood circulation.
How Do Proteases Work?
Proteases break down a protein’s bonds by hydrolysis, a chemical process that converts proteins into smaller chains called polypeptides and even smaller units called amino acids.
Proteins have a complex folded structure and require protease enzymes to disassemble the molecule in very specific ways. Without proteases the intestinal lining would not be able to digest proteins, causing serious consequences to your health.
What Do Proteases Do?
Proteases play a key role in many physiological processes: they are important for DNA replication and transcription, cell housekeeping and repair, immune function, stopping the flow of blood, and many other critical body functions – all of which involve breaking down proteins.
Where Do Proteases Come From?
Proteases, including trypsin and chymotrypsin, are produced by the pancreas. You will also find them in fruit like papaya (papain) and pineapple (bromelain).
Bacteria and other microbes also produce proteases. Sometimes pathogenic bacteria produce proteolytic enzymes that mimic human proteases, and these can have negative consequences for health. Also, when out of balance, your body may produce too many or not enough proteases, which can lead to cardiovascular, metabolic and immune system conditions.
The Health Benefits of Protease Enzymes
Proteolytic enzymes have many health benefits. The first that comes to mind is digestion. Proteases are extremely important for the digestion of foods, but their intestinal duties go even further. They also digest the cell walls of unwanted harmful organisms in the body and break down unwanted wastes such as toxins, cellular debris, and undigested proteins. In this way, proteases help digest the small stuff, so that our immune system can work hard to avoid toxin overload. By breaking down proteins, protease activities give our cells the amino acids they need to function.
In this way, digestion plays a huge role in overall health, and enzymes are a big part of digestive health. With the distinct ability to break down peptide bonds and liberate amino acids, proteolytic enzymes are now being studied by modern science and medicine for their clinical and therapeutic use in the realms of general oncology and overall immune function.
The following list describes some health benefits of protease, as well some of the exciting research on the functions of the body’s protease enzymes and their applications to human health.
1. Supports Gut Health
A 2010 U.S. study on inflammatory bowel disease found that the proteolytic enzyme bromelain from fresh pineapple juice could help reduce chronic inflammation in the colon. Research suggests that bromelain counteracts intestinal pathogens like Vibrio cholerae and Escherichia coli, but the mode of action is still unclear. It may prevent the bacteria from sticking to the intestinal walls, or it may interact with the body's secretion signaling, keeping diarrhea in check.
2. Soothes Skin Burns and Stomach Ulcers
A 2010 Brazilian study in Burns journal found that protease helped cellular repair of third-degree skin burns and stomach ulcers in laboratory mice. The study looked at a protease from the mountain papaya.
3. Helps the Body Recover From Bruises, Fractures, and Tissue Injuries
Clinical trials have shown that protease enzymes can speed the healing of sprains, bruises, fractures and tissue injuries.[5, 6] Some natural healthcare providers use a bromelain cream to bring more blood flow to a wound. In one trial, bromelain also reduced bruising and swelling from episiotomy wounds after childbirth.
4. Slows or Stops Irritation
The body's natural protease enzymes respond to irritation in the body, particularly irritation associated with allergies, harmful organisms, intestinal issues, and the restoration of tissue after blood flow to an area has been interrupted. Interestingly, invasive bacteria also emit proteases that mimic and hijack our own, which the body then has to counter through a complex series of physiological reactions. Some research suggests that bromelain, the enzyme from pineapples, may also reduce irritation inside the body.
5. Eases Bone and Joint Discomfort
Although the findings are preliminary and not to be interpreted as a new therapy, a few studies have found that proteolytic enzymes helped ease osteoarthritis symptoms. These studies found that enzyme therapy that included bromelain and trypsin was as effective as, and better tolerated than, the non-steroidal anti-inflammatory drug (NSAID) diclofenac, with fewer side effects.[10, 11]
6. Assists Recovery From Sprains and Sports-Related Injuries
Research suggests that protease enzyme combinations may aid in the recovery of sports injuries. A small German study of 44 people with sports-related ankle injuries found that those given the protease bromelain recovered faster and needed less time away from training. A more recent study found that enzyme therapy reduced exhaustion and sped muscle recovery after exhaustive eccentric exercise.
7. Digests Proliferating Cells
In laboratory studies, the proteolytic enzyme bromelain digested cells that were growing excessively in both mouse and human cell lines. Bromelain reduced this cell growth and played a role in regulating the expression of proteins that support the immune system's ability to fight serious health issues.
8. Helps the Circulatory and Lymph Systems
Protease enzymes help to cleanse debris out of our circulatory and lymph system. In patients who did not have healthy lifestyle habits, using bromelain along with medication improved the medication’s effects by 121%. Research on animal and human models has found this protease enzyme improves circulation and reduces risk factors associated with cardiovascular events.[3, 13]
9. Helps Blood Clot Normally
10. May Have Antioxidant Properties
Some proteases have been found to have antioxidant properties. Studies have tested for the safety of papain isolated from the unripe papaya latex for use in healing wounds and discovered that not only is it safe and effective but that it also has antioxidant properties.[15, 16]
How to Read the Units of Measurement for Protease
Most dietary enzyme supplements contain between 30,000 and 60,000 HUT of protease. HUT stands for Hemoglobin Units on a Tyrosine Basis and measures the hydrolysis, or breakdown, of proteins into smaller polypeptides and amino acids. HUT values tell you the activity level of the enzyme. This test, or assay, for protease activity is based on a 30-minute hydrolysis (breakdown) of a hemoglobin protein molecule at pH 4.7 and 40 degrees Celsius.

Never buy an enzyme that lists the amount only by weight, like milligrams (mg), because weight alone fails to tell you anything about the enzyme's effectiveness.
The United States Pharmacopeia (USP) creates the standard measurements for supplements. These are published in the USP’s Foods Chemical Codex (FCC), an internationally accepted compendium of standards for the quality of food ingredients, supplements, and additives.
Where Can I Find The Best Source of Supplementary Protease?
VeganZyme® contains a 100% vegan form of Protease. It comes from all vegetarian, non-GMO sources, is kosher certified, gluten-free, made in the USA from globally sourced ingredients, contains no animal products, and is great for vegetarians and vegans.
VeganZyme is the most advanced full-spectrum systemic and digestive enzyme formula in the world. It is free from fillers and toxic compounds. This formula contains protease and other digestive enzymes that help digest proteins, fats, sugars, carbohydrates, gluten, fruits and vegetables, cereals, legumes, bran, nuts and seeds, soy, dairy, and all other food sources.
VeganZyme also provides a comprehensive blend of systemic enzymes to break down excess mucus, fibrin, toxins, and environmental irritants.
- López-Otín C, Bond CS. "Proteases: Multifunctional Enzymes in Life and Disease." J Biol Chem. 2008; 283(45),30433–30437.
- Hale LP, et al. "Dietary supplementation with fresh pineapple juice decreases inflammation and colonic neoplasia in IL-10-deficient mice with colitis." Inflamm Bowel Dis. 2010;16(12),2012-21.
- Pavan R, et al. "Properties and Therapeutic Application of Bromelain: A Review." Biotechnol Res Int. 2012;2012,976203.
- Gomes FS, et al. "Wound-healing activity of a proteolytic fraction from Carica candamarcensis on experimentally induced burn." Burns. 2010;36(2),277-83.
- Baumuller M. "The application of hydrolytic enzymes in blunt wounds to the soft tissue and distortion of the ankle joint—a double-blind clinical trial [in German]." Allgemeinmedizin.1990;19:178-182.
- "Conditions: Injuries, Minor. Principal Proposed Natural Treatments." EBSCO.
- Howat RCL, Lewis GD. "The effect of bromelain therapy on episiotomy wounds—a double blind controlled clinical trial." Journal of Obstetrics and Gynaecology of the British Commonwealth. 1972;79(10),951–953.
- Antalis TM, et al. "Mechanisms of Disease: protease functions in intestinal mucosal pathobiology." Nat Clin Pract Gastroenterol Hepatol. 2007;4(7),393–402.
- Taussig SJ, Batkin S. "Bromelain, the enzyme complex of pineapple (Ananas comosus) and its clinical application. An update." J Ethnopharmacol. 1988;22(2),191-203.
- Akhtar NM, et al. "Oral enzyme combination versus diclofenac in the treatment of osteoarthritis of the knee--a double-blind prospective randomized study." Clin Rheumatol. 2004;23(5),410-5.
- Klein G, et al. "Efficacy and tolerance of an oral enzyme combination in painful osteoarthritis of the hip. A double-blind, randomised study comparing oral enzymes with non-steroidal anti-inflammatory drugs." Clin Exp Rheumatol. 2006;24(1),25-30.
- Marzin T. "Effects of a systemic enzyme therapy in healthy active adults after exhaustive eccentric exercise: a randomised, two-stage, double-blinded, placebo-controlled trial." BMJ Open Sport Exerc Med. 2016;2(1), e000191.
- Juhasz B, et al. "Bromelain induces cardioprotection against ischemia-reperfusion injury through Akt/FOXO pathway in rat myocardium." American Journal of Physiology. 2008;294(3),H1365–H1370.
- Lotz-Winter H. "On the pharmacology of bromelain: an update with special regard to animal studies on dose-dependent effects." Planta Med. 1990;56(3),249-53.
- Manosroi A, et al. "Antioxidant and Gelatinolytic Activities of Papain from Papaya Latex and Bromelain from Pineapple Fruits." Chiang Mai J Sci. 41(3),635-648.
- da Silva CR, et al. "Genotoxic and Cytotoxic Safety Evaluation of Papain (Carica papaya L.) Using In Vitro Assays." J Biomed Biotechnol. 2010; 2010,197898.
Flamingos in the Rainforest
Flamingos are one of the most ancient species of bird still living. Their bright colours and plumage along with their odd feeding habits distinguish them from other tropical species.
A staple of zoos across the world, wild flamingos can be found in tropical and temperate regions near bodies of water and often near rainforests.
Flamingos can be found living in tropical and temperate regions of Africa, the Mediterranean region, India, Caribbean coasts, the highlands of the Andes in South America and on the Galapagos Islands. The six types of flamingos -- Caribbean, greater, Chilean, lesser, Andean and James' -- congregate near shallow salt lakes or lagoons, whether it is near the coast, connected to the sea or far inland. Lesser flamingos can even live in volcanic soda lakes, which are considered inhospitable to most animal life.
Diet and Feeding
In the rainforest, flamingos feast primarily on algae or invertebrates, such as mollusks. Flamingos' long legs allow them to wade into deeper water in search of food. To find a meal, a flamingo dips its bill upside down under the water, sucks both water and mud in through the front, and then pushes any unwanted particles out the sides. According to the San Diego Zoo, the bird's lamellae, which are briny plates on the sides of the bill, act as a filter to keep the edible portions in the bill. The foods flamingos eat, such as shrimp and blue-green algae, are what give them their bright pink, red and orange colours.
Standing on One Leg
One of the quirks scientists and observers have noted about flamingos is the birds often stand on only one leg while in the water. Researchers at Saint Joseph's University in Philadelphia have suggested that the birds do this as a way to conserve body heat, according to a 2009 BBC article. Having both legs in the water could lower the bird's body temperature to an unhealthy level, even in temperate or tropical climates.
Flamingos are social creatures; they flock in colonies and form pair bonds. The uncertain nature of the rainforest and other regions where flamingos live makes the birds' mating season irregular, as resources and weather can vary. Rainfall is thought to trigger displays and coupling, according to a 2007 project at Davidson College. One unique trait of flamingos is that they are monogamous, and the male and female take turns protecting the egg and maintaining the nest.
Allison Edrington is a freelance journalist based out of Eureka, Calif., specializing in crafts, science fiction and gaming. She has written for the "Eureka Times-Standard," covering education, business and city government, and previously worked for the "Chico Enterpise-Record." Edrington graduated from California State University, Chico, with a bachelor's degree in journalism and a minor in history. |
Hydra is a very small, solitary polyp not exceeding six millimetres in length. It lives in ponds and rivers attached to stones and weeds. Under a lens it looks like a short tube fixed at one end by a sticky disc, with the other end free and conical. The cone is called the hypostome or manubrium, at the summit of which is the mouth leading into the cavity of the tube - the gastrovascular cavity - where digestion of food takes place. Surrounding the base of the hypostome is a circlet of thread-like tentacles armed with stinging cells. Each tentacle has numerous such cells, each of which contains a coiled whip bathed in a poisonous fluid. At the slightest pressure the cell bursts open and shoots out the whip, piercing the prey. The tentacles can extend, capture prey and carry the food into the mouth.
The body wall of hydra is of two layers - the outer one protective and the inner one digestive. The wall is firm and muscular and the animal can stretch or contract its body. It can also move slowly from place to place by looping. From the normal erect position the animal bends like a horseshoe until the tentacles touch the ground. Holding the ground firmly with the tentacles, it releases the basal disc and contracts. Then it stretches its body, bends in another direction and fixes the disc at a convenient point. Then, releasing the free end, it assumes its normal form. Thus it moves like a measuring worm. Some species move by gliding slowly on the basal disc.
From the attached position a hydra can hold or adjust its body in such a way as to secure maximum oxygen and food supply. A hydra living at the bottom of a tank stands erect; attached to the side of a piling it grows horizontally; and if on floating weeds it hangs directly downwards.
Some hydras are symbiotic: green algae live in their cells and feed on the waste of the hydras which in their turn receive a copious supply of oxygen from the photosynthetic activity of the algae. Hydras have great power of regeneration. Even if cut into pieces each part develops into a whole. Biologists have produced hydras with many heads by grafting pieces of a cut animal to the trunk of a living hydra. |
The structure of a Web page is imposed by the grid or page template you choose for your page design. The grid is a conceptual layout device that organizes the page into columns and rows. You can impose a grid to provide visual consistency throughout your site. You can use the grid to enforce structure, but you also can break out of the grid to provide variety and highlight important information. Web pages that respect the grid and consistently align different elements have a more polished look than pages that have scattered alignments.
The World Health Organization Web site main page (www.who.int) in Figure 2-9 has a strong four-column grid. All of the text and graphic elements on the page align within the grid to create an orderly layout. Most current Web sites use tables in one form or another to give their pages structure and consistency. With table borders turned off, users cannot tell that the layout is held together by a table; they simply see a coherent, well-structured page. The reliance on tables as a design tool will eventually wane as more users adopt newer browsers that support CSS, which allows columnar positioning without tables.
This post offers a free math chart covering prime numbers, square numbers, composite numbers, and factor pairs, along with a plastic lids activity that gives students a hands-on view of these number categories. The printable will help create the necessary numbers for this learning.
My free printable today is an activity-based 100s chart that can also be used while solving math problems. Although I've been thinking about making this printable for a long time, I still don't have enough plastic lids to make the entire 100s chart. I hope the photo gives you a sense of the idea behind this activity for students. I would suggest trying to collect enough lids for the whole chart or using the paper numbers from the printable.
How To Use This Math Chart
I made the different categories of numbers in different colors to help students place them correctly, and think about what the chart is showing. It is quite interesting to see the numbers this way, and students will have many observations about the chart.
The printable chart could be used without the activity, but I think working with the numbers as a chart will assist students in seeing what they are working on in math. Higher grades could continue the chart to 200 or 300 to see even more patterns.
Here is a sample of one of the pages, the first prime numbers page. The circles can be cut out and placed on the plastic lids.
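For teachers or parents who are comfortable with a little code and want to extend the chart to 200 or 300 as suggested above, the categories on the chart are easy to generate automatically. The short Python sketch below is not part of the printable; the function names are my own, chosen for illustration. It simply labels each number as prime, square, or composite and lists its factor pairs, so you can check or extend a chart of any size.

```python
def is_prime(n):
    """Return True if n is a prime number."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))


def is_square(n):
    """Return True if n is a perfect square (1, 4, 9, 16, ...)."""
    return int(n ** 0.5) ** 2 == n


def factor_pairs(n):
    """Return the factor pairs of n, e.g. 12 -> [(1, 12), (2, 6), (3, 4)]."""
    return [(d, n // d) for d in range(1, int(n ** 0.5) + 1) if n % d == 0]


def labels(n):
    """Label n the way the chart color-codes it."""
    result = []
    if is_prime(n):
        result.append("prime")
    if is_square(n):
        result.append("square")
    if n > 1 and not is_prime(n):
        result.append("composite")
    return result  # note: 1 ends up labeled only as a square number


if __name__ == "__main__":
    top = 100  # change to 200 or 300 to extend the chart
    for n in range(1, top + 1):
        print(n, ", ".join(labels(n)), factor_pairs(n))
```

Running it with `top = 100` prints one line per number, which should match the categories and factor pairs shown on the printable chart.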
(Natural News) It looks like Nature is still weird and wonderful. Researchers have looked into a phenomenon where shrew heads shrink in winter and return to normal size come spring, reported the Daily Mail.
Like other animals, shrews sometimes struggle to survive during winter. It can be hard to gather enough food during this harsh season, and a group of researchers has discovered that shrew skulls shrink by at least 20 percent before winter approaches. While the reason for this is still shrouded in mystery, it appears that shrews shrink their skulls to prepare for cold weather.
Researchers at the Max Planck Institute for Ornithology in Germany have found that the size of a shrew’s head will change depending on the season. This occurrence is also known as “Dehnel’s phenomenon.” The lead author of the study, Dr. Javier Lazaro, said, “We found that each shrew undergoes a dramatic decrease in braincase size from summer to winter.”
He continued, “Then, in spring, the braincase regrows, almost reaching the original size in the second summer.”
Previous studies have already noted the occurrence of shrew heads shrinking as the seasons change, but this study is the first to monitor individual animals and record the phenomenon. The study took place from the summer of 2014 to the autumn of 2015.
Using live traps, researchers captured 12 shrews. The animals that underwent observation were anesthetized before their skulls were X-rayed. Researchers also implanted microchips in the shrews’ skulls for easier identification. Based on the X-ray results, the shrews’ heads did indeed shrink as the seasons changed.
The twelve shrews were caught during all three stages. The animals exhibited the same pattern: summer meant a peak head size, their skulls shrank in winter, and the skulls returned to normal size during spring.
Despite the data collated, researchers are still trying to determine the exact reason for Dehnel’s phenomenon. And it’s not just their skulls that shrink – the entire body of a shrew decreases in size, along with its major organs, spine, and brain. Shrews are notable for their high metabolism, and the researchers believe that the phenomenon helps shrews survive during winter.
Because shrews don’t migrate or hibernate for winter like other animals, Dr. Lazaro posited that the phenomenon may be due to the shrew’s efforts to increase their probability of survival. He added, “The partial regrowth into the adult phenotype in spring may then increase competitiveness during their only reproductive period when both sexes expand and aggressively defend their territories.”
It is yet to be determined exactly how the shrew skulls shrink, but some evidence suggests that the braincase shrinks as tissue is reabsorbed.
Fast facts on shrews
- The Etruscan shrew (scientific name: Suncus etruscus), measuring an estimated 3.5 cm and weighing about two grams, is the smallest living terrestrial mammal.
- The heart of the masked or common shrew (scientific name: Sorex cinereus) beats 800 times a minute. This is notably faster than a hummingbird's heartbeat.
- Shrews need to consume at least 80 to 90 percent of their own body weight in food daily. Most shrews can starve to death if deprived of food for half a day. These animals will eat anything, but shrews prefer small animals. Shrews also help destroy insects and slugs that harm crops.
- Shrews are easily startled. They will jump, faint, or drop dead at sudden noises.
1. Teaching and learning require a healthy, safe, and orderly environment that supports cooperation and collaboration.
2. Parents must be actively engaged in the education of their children.
3. Students learn at different rates.
4. Parent and community involvement is essential to a successful school.
5. Curriculum should focus on enhancing student learning.
6. All staff must continue to learn, and all schools must continue to improve.
7. Schools must have effective leaders that are lifelong learners.
8. Everyone can learn when learning is differentiated to meet individual needs.
9. If we work in harmony with others in the education of our children and show that we take pride in our work, students will learn from our attitudes.
10. Experiencing success is crucial to every learner.
The vision of Lawrenceburg Public School is to provide an authentic learning experience that will enable and empower students to become lifelong learners and productive citizens. |
Breast cancer is a cancer that develops in the tissues of the breast. According to the American Cancer Society, one in eight women will be diagnosed with breast cancer in their lifetime. Second to skin cancer, breast cancer is the most common cancer among women in the United States, and it is the second leading cause of cancer death in women, after lung cancer. Although breast cancer is far more prevalent in women, the disease can also affect men.
There are two main types of breast cancer: ductal carcinoma and lobular carcinoma. Ductal carcinoma accounts for the majority of breast cancers and develops in the ducts that carry milk from the breast to the nipple. Lobular carcinoma forms in the milk-producing glands, or lobules, of the breast.
Breast cancer can be invasive or noninvasive. Invasive breast cancer means it has spread from the point of origin to surrounding lymph nodes and tissues in the breast. Noninvasive breast cancer, which is in its early stages and has yet to metastasize, is referred to as "in situ."
Scheduling recommended yearly mammograms and performing monthly breast self-exams are important for early detection of breast cancer. Statistics show that 97 percent of women with breast cancer survive if the disease is discovered and treated before it progresses. There are currently more than 2.8 million breast cancer survivors in the U.S.
Breast specialist Dr. Kristi Funk recommends getting a mammogram at age 35, and if that test shows no abnormalities, begin scheduling annual breast screenings at age 40. "Cancers basically double in size every three to four months," Dr. Funk says. "So, if you go two years between mammograms, it's dangerous."
There are many factors associated with an increased risk of breast cancer, some of which cannot be controlled. For example, being a woman and growing older are the primary risk factors for breast cancer. In addition, particular races are more affected by breast cancer than others. According to Breastcancer.org, Caucasian women have a slightly higher chance of developing breast cancer than African-American women, but African-American women tend to be diagnosed at younger ages and are more likely to die from the disease. Women of Asian, Hispanic and Native-American descent have a lower risk of developing breast cancer and dying from it.
Women with a personal or family history of breast cancer also have an elevated risk; however, statistics show that approximately 85 percent of breast cancers occur in women who have no family history of breast cancer. Inherited genetic mutations, namely the BRCA1 and BRCA2 genes, are responsible for about five to 10 percent of breast cancers and about 10 to 15 percent of ovarian cancers.
Other risk factors for breast cancer, such as obesity, smoking and alcohol consumption, can be reduced with healthy lifestyle changes.
Common Signs and Symptoms
Since the advent of mammography screening, the majority of breast cancers are found at an early stage, before symptoms present; however, while mammograms are considered the gold standard for breast cancer detection, they may not always provide accurate results, particularly for women with large or dense breasts.
The warning signs of breast cancer are not uniform for all women, which is why doctors stress the importance of knowing your breasts so you can notice any changes or abnormalities. Common symptoms of breast cancer may include:
• A lump, hard knot or thickening of tissue in the breast or underarm area
• Sharp pain or tenderness in the breast
• A change in the size or appearance of the breast
• Clear or bloody discharge from the nipple
• Inversion of the nipple or dimpling of the breast skin
• Soreness, inflammation and scaliness of the breast skin, areola or nipple
Certain symptoms of breast cancer can be caused by other benign breast conditions, such as fibroadenomas and cysts, so it's important to consult a health care professional for a proper diagnosis and treatment.
Methods used to treat breast cancer include:
• Chemotherapy and other drugs that target and destroy cancer cells.
• Targeted radiation therapy
• Lumpectomy: Also referred to as a breast-sparing surgery, this procedure involves the excision of only the cancerous tissue, leaving the rest of the breast(s) untouched.
• Removal of the entire breast: Known as a mastectomy, this surgery removes all of the breast tissue. In many cases, the skin covering the breast is left intact for a reconstructive breast surgery, which can be done during the same operation or at a later date.
Sometime around 6000 BCE a nomadic herding people settled into villages in the mountainous region just west of the Indus River. There they grew barley and wheat using sickles with flint blades, and they lived in small houses built with adobe bricks. After 5000 BCE the climate in their region changed, bringing more rainfall, and apparently they were able to grow more food, for they grew in population. They began domesticating sheep, goats and cows and then water buffalo. Then after 4000 BCE they began to trade beads and shells with distant areas in central Asia and areas west of the Khyber Pass. And they began using bronze and working metals.
The climate changed again, bringing still more rainfall, and on the nearby plains, through which ran the Indus River, grew jungles inhabited by crocodiles, rhinoceros, tigers, buffalo and elephants. By around 2600 BCE, a civilization as grand as those in Mesopotamia and Egypt had begun on the Indus Plain and surrounding areas. By 2300 BCE this civilization had reached maturity and was trading with Mesopotamia. Seventy or more cities had been built, some of them upon buried old towns. There were cities from the foothills of the Himalayan Mountains to Malwan in the south. There was the city of Alamgirpur in the east and Sutkagen Dor by the Arabian Sea in the west.
One of these cities was Mohenjo-Daro, on the Indus river some 250 miles north of the Arabian Sea, and another city was Harappa, 350 miles to the north on a tributary river, the Ravi. Each of these two cities had populations as high as around 40,000. Each was constructed with manufactured, standardized, baked bricks. Shops lined the main streets of Mohenjo-Daro and Harappa, and each city had a grand marketplace. Some houses were spacious and with a large enclosed yard. Each house was connected to a covered drainage system that was more sanitary than what had been created in West Asia. And Mohenjo-Daro had a building with an underground furnace (a hypocaust) and dressing rooms, suggesting bathing was done in heated pools, as in modern day Hindu temples.
The people of Mohenjo-Daro and Harappa shared a sophisticated system of weights and measures, using an arithmetic with decimals, and they had a written language that was partly phonetic and partly ideographic. They spun cotton and wove it into cloth. They mass-produced pottery with fine geometric designs as decoration, and they made figurines sensitively depicting their attitudes. They grew wheat, rice, mustard and sesame seeds, dates and cotton. And they had dogs, cats, camels, sheep, pigs, goats, water buffaloes, elephants and chickens.
Being agricultural, the people of Mohenjo-Daro and Harappa had religions that focused on fertility, on the earth as a giver of life. They had a fertility goddess, whose naked image as a figurine sat in a niche in the wall of their homes. Like the Egyptians they also had a bull god. They worshiped tree gods, and they had a god with three heads and an erect phallus, which they associated with fertility. Like some others, including the Egyptians, they buried objects with their dead. And they had taboos, especially about cleanliness.
The Disappearance of the Mohenjo-Daro and Harappa Civilization
Between the years 1800 and 1700 BCE, civilization on the Indus Plain all but vanished. What befell these people is unknown. One suspected cause is a shift in the Indus River. Another is that people dammed the water along the lower portion of the Indus River without realizing the consequences: temporary but ruinous flooding up river, flooding that would explain the thick layers of silt thirty feet above the level of the river at the site of Mohenjo-Daro. Another suspected cause is a decline in rainfall.
Agriculture declined and people abandoned the cities in search of food. Later, a few people of a different culture settled in some of the abandoned cities, in what archaeologists call a "squatter period." Then the squatters disappeared. Knowledge of the Mohenjo-Daro civilization died -- until archaeologists discovered the civilization in the twentieth century. |
The confluence of the St. Lawrence Estuary and the Saguenay River, where the waters of the Great Lakes, the Saguenay basin and the Atlantic Ocean meet, is recognized as an ecologically exceptional region.
The oceanographic conditions that occur at the confluence of the Saguenay encourage the emergence of life and the concentration of species at the bottom of the food chain.
The uneven underwater topography, the estuarine circulation and the regular upwelling of cold water make it a very distinctive region. The upwelling of cold water at the head of the Laurentian channel is the most important oceanographic process of the Marine Park. This phenomenon brings nutrients and zooplankton to the surface and encourages the water’s oxygenation. The upwelling of cold water following the rhythm of the tides somewhat acts as the heart and lungs of the Marine Park.
The abundance of food in the Marine Park’s ecosystems attracts many species of birds, whales and seals. As well, numerous types of algae, benthic animals and fish have been observed in the Marine Park. Together, these species form a complex food chain supporting the significant biodiversity present in the Marine Park. |
The production of books in great quantity had to await the mechanical processes of printing from movable type. Printing was invented in China, where the first book printed by means of woodblocks is thought to date from the 9th cent. Korea developed movable metal type during the 13th cent. In the West movable metal type was developed by Johann Gutenberg of Mainz, and to a very large extent the history of the book is henceforth the history of printing.
Book production developed very rapidly, the craft becoming enormously sophisticated by the 16th cent. Italian printers set the standards of format and quality retained in Europe until the 19th cent. Great printing houses also arose in France and the Netherlands and, after a general decline in the 17th cent., in England and the United States. The 19th cent. witnessed machine replacement of all the old manual processes. By the end of the century printing quality had been so debased that a revolution, led by William Morris during the arts and crafts movement in England, was necessary to restore the concept of beauty to bookmaking.
The change of seasons from winter to spring is known to increase happiness. Research suggests that this seasonal change boosts mood as well as energy. One theory is that increased exposure to daylight boosts dopamine, a naturally produced neurotransmitter known to promote feelings of pleasure. Teachers can take advantage of these beneficial seasonal changes by spending more time with their students on classroom activities that take place outdoors. Here's how.
This is a no-brainer. Going outdoors is known to have many health benefits. Exposure to green spaces contributes to a healthier well-being, and it has a positive impact on young children. Research suggests that outdoor classroom activities can have a positive impact on a student's confidence, self-esteem, communication skills and social skills. Children are able to connect with nature, get fresh air, increase their vitamin D levels, and get that much-needed physical exercise. Go on a field trip, take a nature hike, and put your students' bodies in motion. You will find that your students' energy levels will be boosted.
This time of year, children tend to have a lot of energy which leads them to be much more curious about things. Promote your students’ natural curiosity by encouraging them to let it lead them. The increase in dopamine helps them remember information better, so now is a great time to push some boundaries and really challenge students. Take a walk outdoors and let students’ natural curiosity lead you. Take turns choosing a leader and allow them to choose where the walk takes you. You never know what you’ll find in nature. You may come upon a bird that you have never seen before, and this bird can lead your students to a new lesson or activity that will teach them more about it. This is a great way to really ignite students’ interest in learning.
While springtime means extended daylight and a boost in your mood, it also means a change in your sleep schedule. Daylight saving time and spring break vacation can have a negative impact on your sleep pattern. As the hours of daylight extend, students can have a difficult time falling and staying asleep, which can interfere with attention, memory, and other cognitive functions. Make students aware of these effects and teach them to follow a consistent sleep and homework schedule to help their bodies stay regular. Encourage students to get an adequate amount of sleep, and to study and play before it's dark out.
The flowers are blooming, the birds are chirping, and the grass is as green as an emerald. All of these beautiful parts of nature are a surefire way to get your students' creative minds flowing. Give students plenty of opportunities to be creative this spring. All students have to do is look outside for a little inspiration. Challenge students to create a spring craft, art piece, drawing, painting, or anything that they wish. Use this time of year to give them choices and let them be creative. You will find that your students are more confident and willing to work this time of year.
The harsh winter season is behind us, and your students have a lot of pent-up energy that they need to release. Springtime is the perfect time to get your students up and moving. If you were one to give your students brain breaks throughout the school year, now is the time to try them outdoors. All you have to do is take 5-10 minutes out of your day and go outside and let the students get that energy out. Try having students take turns following a leader and do some jumping jacks, run in place, or crazy dance. Try anything that will get them moving and get some fresh air.
Use the switch from winter to spring to your advantage. You will find your students will be happier, have more energy, and want to work. Use the effects of these seasonal changes to get students outdoors and being creative. Heed your own advice too; it's just as important for you to get enough rest and get outdoors as it is for your students.
What do you do to boost classroom learning in springtime? Do you have any tips or fun activities to share? Please share your comments with us in the comment section below; we would love to hear your ideas.
Janelle Cox is an education writer who uses her experience and knowledge to provide creative and original writing in the field of education. Janelle holds a master's of science in education from the State University of New York College at Buffalo. She is also the elementary education expert for About.com, as well as a contributing writer to TeachHUB.com and TeachHUB Magazine. You can follow her at Twitter @Empoweringk6ed, on Facebook at Empowering K6 Educators, or visit her website at Empoweringk6educators. |
This book by R. Schoch addresses the evolutionary history of amphibians, the earliest vertebrates to emerge onto land, with the aim of providing a factual framework to the analysis of a series of significant biological issues, such as morphogenesis, heterochrony, and adaptation, to name a few. It also furnishes a fascinating overview of how the changes in the theoretical basis of biological thinking have impacted the interpretation of raw evidence. Because of the biphasic life of many amphibians, they constitute plausible model-organisms to study the morphological and physiological features involved in the change of habits and habitat.
The book is divided into ten chapters. The Introduction begins with the discussion of the definition of Amphibia, a name that herein is applied to lissamphibians and all the taxa on their stem. The author argues against the past view that salamanders constitute a reasonable model to study the passage from water to land and criticizes the ecological scenarios that putatively forced vertebrates out of the ancestral pond, in view of the most recent studies suggesting the fundamentally aquatic lifestyle of early tetrapods. Some basic topics on the cladistic method to reconstruct the interrelationships of taxa are also briefly described in this chapter. In Chapter 2, the author overviews the history of tetrapods, which spans over 300 million years, including the latest discoveries. He comments on the significance of 12 exaptations in the origin of tetrapods and the appearance of other features that might be interpreted as synapomorphies but, as the fossil record demonstrates, result from convergent evolution. Several significant basal taxa, including Eusthenopteron, Panderichthys, Tiktaalik, Ventastega, Acanthostega, and Ichthyostega, are described. These taxa are mostly from Devonian rocks of scattered high latitude localities of the northern hemisphere and form a crownward series with increasing degree of relatedness to tetrapods. Carboniferous fossils already include representatives of the two diverging lineages of tetrapods that possibly originated the lissamphibian and amniote clades. Stem-group taxa of the former include dissorophoids; it is within this group that a lifecycle with a brief period of marked morphological change, or metamorphosis, emerges. The climatic and environmental conditions represented in the most important amphibian fossil localities ranging from the Devonian to the Cenozoic, together with their paleogeographic locations, are reviewed exhaustively in Chapter 3. Soft structures, such as cephalic musculature, respiratory organs, and hearing organs, are described and interpreted phylogenetically in Chapter 4. Special attention is paid to the origin of the impedance matching system (middle ear) to maximize sensitivity to airborne sound, owing to the higher impedance of the fluids of the inner ear with respect to that of the air. The evolution of this system is thoroughly described in relation to anatomical changes in the otic region that preceded the acquisition of terrestriality by tetrapods according to available paleontological data. Chapter 5 deals with the evolution of functional systems. Inferences from the use of extant phylogenetic brackets, as in the previous chapter, are complemented by evidence furnished by experimental data and observations in extant exemplars. Accordingly, a scenario of feeding and respiration, tightly coupled in tetrapodomorph fish, is described. Line drawings illustrate the changes of the hyoid arch, which had a pivotal role in the movements of the cheek and operculum involved in the breathing and feeding cycles, during the fish-tetrapod transition. Transformation of the fins into limbs is also analyzed taking into consideration the available picture of the phylogenetic relationships of relevant fossils. Two chapters, 6 and 8, deal with developmental aspects. Not only do these chapters examine the life-cycles of extant lissamphibians but also the rich fossil record of ontogeny for amphibians. Especially interesting is the latter, although, unfortunately, growth series for stem tetrapods are practically unknown. 
In contrast, hundreds of specimens of a branchiosaurid species that belong to different growth stages have been described; these specimens show subtle morphological changes throughout ontogeny, with larvae and adults living in the aquatic milieu. In turn, some small dissorophoids underwent more radical changes correlated with a drastic change of habitat. Other examples among temnospondyls and stem-amniotes are reviewed. In addition, the neotenic and metamorphosing developmental trajectories of some Paleozoic amphibians are compared with those of the living groups. The interplay between ontogeny and evolution, how development evolves and what the outcome of this change is and how it affects morphological evolution, are explored. A key concept in this relation is heterochrony, understood as the changes in the rates and timing of developmental processes with respect to the ancestral ontogeny. Changes in the sequences of developmental events document heterochrony. In this regard, comments on developmental sequences of temnospondyls are included in this chapter alongside with the role of heterochrony in the origin of lissamphibians. Also, similar features in the skull ossification sequence of branchiosaurids and that of salamanders, also resembling the sequence in some fishes and amniotes call for a non-adaptive, and broader, explanation. Ecological aspects of the amphibian history are addressed in Chapter 7, whereas ecological aspects of development are explored in Chapter 8. Factors that might affect life-history, with extant and extinct amphibians providing examples of developmental and morphological plasticity, are examined in this chapter as well. Main hypotheses on the origin of the three modern groups are reviewed in Chapter 9, and the last chapter is devoted to macroevolutionary patterns of amphibian evolution and underlying processes.
The treatment of the taxonomy, at odds with the current use of several well known taxon names based on phylogenetic definitions, is noteworthy. Many names that designate total groups (i.e., stem plus crown) are applied only to the stem and, thus, to paraphyletic groupings. For example, such is the case of Salientia (which includes Anura as crown clade) and Tetrapodomorpha (which includes Tetrapoda, the last common ancestor of amniotes and lissamphibians and all of its descendants, as crown clade). Because of the possible utility of this book in college, it is a pity that several errors (e.g., the anuran urostyle is a rod composed of fused tail vertebrae (rudimentary caudal vertebral elements plus hypochord); the opercular muscle is attached to the scapula (for suprascapula); outgrowth of the tail (fleshy outgrowth of the cloaca) forms the intromittent organ in Ascaphus; metapterygoid (for metapterygial) axis; in anurans and amniotes digits holding the highest number form first (digit 5 may be the last to form)) passed unnoticed through revisions. High quality photographs, some of which lack scales, illustrate different topics. Abundant and updated literature is included at the end of each chapter. In summary, this is a comprehensive work that could be used as a guide to focus on specific aspects of one of the most exciting chapters of vertebrate history. |
Stereoviews (also known as stereographs or stereoscopic cards) are among the first form of 3D photography. The pictures are taken with a special stereoscopic camera, which has two lenses, simulating the views received by the left and right eye. Two nearly identical pictures are then developed and printed, and are mounted next to each other, usually on a piece of card stock (although occasionally glass). When looked at through a stereoscope, the image can then be seen in 3D.
An early stereoview
Source: Wikimedia Commons
A brief history of stereoviews
The first stereoscopic viewer (stereoscope) was patented in 1838 by Sir Charles Wheatstone. His device used mirrors to create and project a 3-dimensional image to the person looking into the device. It was rather bulky and ungainly, however, and could only be used to view drawings. In 1844, the art of taking stereoscopic photographs was first demonstrated in Germany, and David Brewster, in Scotland, created the first of the modern stereoscopic viewers.
In 1851, Queen Victoria viewed and praised the stereoscopic views presented at the Great Exhibition. Suddenly, they were a must-have item in Europe, and companies such as the London Stereoscopic Company developed methods for mass production of images. It took a few more years for the views to catch on in the United States, but they eventually did. Shortly thereafter, Doctor Oliver Wendell Holmes developed the Holmes Stereopticon, a hand viewer, which is still produced in limited numbers today.
Stereoviews were regularly produced until the 1940s, when they were, for the most part, supplanted by film.
(Source: HubPages, The geek girl: http://thegeekgirl.hubpages.com/hub/An-Overview-of-Stereoviews)
See also Making a Positive: Stereoview |
Environmental flows are a measure of the amount and quality of water flowing in a freshwater stream or river over time. This measure is based on how well the overall water flow supports and sustains a freshwater ecosystem and the life (including humans) that depends on it.
Environmental Flows and River Ecology
When studying environmental flows, five elements of a river's or stream's ecology need to be addressed.
Frequently Asked Questions About Environmental Flows
Why are environmental flows important?
Maintaining sufficient environmental flows
- ensures there is a secure supply of drinking water
- maintains healthy aquatic ecosystems
- provides a reliable supply of water for a sustainable economy
Environmental flows help maintain a healthy fishery. For example, sufficient water flow supports the natural sediment balance of rivers and provides fish with enough water to move up and downstream for spawning.
If environmental flows are not maintained, the river can become slower, narrower and shallower. This can change the river's suitability as fish habitat, meaning that as the environmental flows change, the species of fish that can thrive there will change.
What is a natural flow regime?
There are five elements to the natural flow regime:
- Magnitude
- Frequency
- Duration
- Timing
- Rate of change
Flow in a river is naturally variable, with changes in flow within a year and changes in flow from one year to the next. All elements of the natural flow regime play a critical role in sustaining native biodiversity and overall ecosystem integrity in rivers.
All rivers have variable flow values (e.g., high flows, low flows) and this variability is critical to their well-being. Variation in flow is important since it periodically restores different physical, chemical, and biological functions essential to the ecosystem.
In any given river, some species do well in high flow years and other species do well in low flow years. Therefore, a single flow value (minimum, optimal, or otherwise) cannot simultaneously meet the requirements for all species or maintain a fishery.
What are examples of human water use that affect environmental flows?
People use or manage river flows and lake levels for a number of reasons, including:
- Flood protection
- Industrial processing
- Irrigation for agriculture
- Power generation
- Water supply for drinking
How does removing water from the river affect fish and other aquatic organisms?
Removing water from a river affects all five elements of riverine ecology. For example, water withdrawals can affect the water chemistry, such as temperature and dissolved oxygen.
Fish have specific tolerances for temperature, and dissolved oxygen is what they need to breathe. A change in temperature or dissolved oxygen can have important consequences for fish and other aquatic organisms.
Taking too much water out of a river or lake causes stress to fish and other organisms that rely on this water. The Environmental Flows Program works to understand what is "too much" and works with others to protect these flows.
How do dams affect environmental flows?
Dams can remove natural variability in river flows. For example, they can eliminate the high water flows that are necessary to move the sediment that maintains the shape and structure of river channels. |
Translated by Carl Ipsen.
This short book provides a succinct and masterly overview of the history of migration, from the earliest movements of human beings out of Africa into Asia and Europe to the present day, exploring along the way those factors that contribute to the successes and failures of migratory groups. Separate chapters deal with the migration flows between Europe and the rest of the world in the 19th and 20th centuries and with the turbulent and complex migratory history of the Americas.
Livi Bacci shows that, over the centuries, migration has been a fundamental human prerogative and has been an essential element in economic development and the achievement of improved standards of living. The impact of state policies has been mixed, however, as states have each established their own rules of entry and departure - rules that today accentuate the differences between the interests of the sending countries, the receiving countries, and the migrants themselves. Lacking international agreement on migration rules owing to the refusal of states to surrender any of their sovereignty in this regard, the positive role that migration has always played in social development is at risk.
This concise history of migration by one of the world's leading demographers will be an indispensable text for students and for anyone interested in understanding how the movement of people has shaped the modern world. |
In this tutorial, we'll explain how to create a cute earth illustration. We'll use basic shapes and some Illustrator knowhow to make this. It's a fun look at the earth, as seen from space, and interpreted in a cartoon vector style. Let's get started!
Final Image Preview
Below is the final image we will be working towards.
- Program: Adobe Illustrator CS4
- Difficulty: Beginner
- Estimated Completion Time: 1 hour
Open up a new document and create a rectangle with the Rectangle Tool (M). Fill it with black. This will be the universe background.
Next, create a circle with the Ellipse Tool (L) and fill it with a blue.
Make a copy of the ellipse. Then select the Pencil Tool (N) and start drawing a wave-like shape. Place it so it overlaps with the circle.
Select both the circle and the wave, then choose Add to Shape Area in the Pathfinder Palette.
This will be the basic background for the earth resembling the water.
Make a copy of the earth/water shape on top (Command + C, then Command + F to paste in front) and fill it with a radial black to white gradient. Set the layer mode to Multiply at 41% Opacity.
Select the original circle and duplicate it. Overlap both copies until you can create a sickle-like shape (choose Subtract from the Pathfinder Palette). Fill it with a radial black to white gradient and set the layer mode to Multiply at 27% Opacity.
Place the first full circle on top of the earth and then the sickle-like shape on top of that.
Let's create the land pieces. Create another circle (the same size as the earth) and set the stroke to blue. Then start drawing shapes with the Pencil Tool (N) resembling land shapes. Make sure they overlap with the circle outline.
Select them all and fill them with blue, no stroke. You can easily achieve this by pressing Shift + X to swap the Stroke and Fill colors. Now select all the shapes and hit the Divide button in the Pathfinder Palette.
Delete the shapes you do not need. Now fill the remaining ones with a color of your choice. I chose a green gradient to represent the land masses.
Select the land shapes and go to Object > Path > Offset Path and apply the settings below.
Make a copy of the circle and then select it with the bigger offset shapes. Hit the Divide button in the Pathfinder Palette.
Use the Direct Selection Tool to select the overlapping parts, then delete them. Now make sure the shapes that are slightly bigger are placed behind the original shapes of the land. Then change the fill color to a light blue.
Select the light blue shapes and apply a Feather Effect and set the layer mode to 55% Opacity.
Place the earth shape with the radial gradient on top of all shapes.
Let's move on to the smaller elements. We want to create some trees that are oversized and whimsical. Create a tree trunk with the Pen Tool (P) and fill it with brown.
Then create the leaf part and fill it with a radial green gradient.
Group both and apply a Drop Shadow (Effect > Stylize > Drop Shadow).
Make several copies of the tree and change the green gradient slightly. Overlap them and place them on top of one of the grass shapes. Rotate them as needed.
Let's move on to the light house part. Create several red rectangles and place them apart, just like you see in the image below.
Then draw on top of the rectangles a light house shape filled with red.
Select all of the rectangles and the light house shape and hit the Divide button in the Pathfinder Palette.
Start deleting the unnecessary shapes and fill each second shape with white.
Add a path point to the bottom red rectangle with the Pen Tool (P). Then select the point with the Direct Selection Tool (A) and drag it downwards. Add handles to the path point by holding Alt, then clicking and dragging once you are over the point. Adjust the handles so the bottom of the rectangle becomes rounded.
Add some more shapes to complete the lighthouse. Now select all and group (Command + G). Skew or rotate the lighthouse.
Place the lighthouse on top of another land shape.
You can add as many fun objects as desired. For example, a house, a boat, some bushes etc. All can be achieved with simple shapes.
Let's give the scene some drama. Create a funny shape like you see in the images below and fill it with a black to white radial gradient.
Place it on top of all the other shapes including the background. Then set the layer mode to Overlay at 100% Opacity.
Create a fairly big circle, fill it with black and place it behind the lighthouse, but on top of the background. Now add one mesh point in the middle with the Mesh Tool (U). Select the mesh point with the Direct Selection Tool (A) and fill the point with a dark grey.
Create another circle and repeat the same. But instead of selecting the middle mesh point, select the outer mesh point and fill the point with a lighter grey. This will make the circle look like another far away planet.
This is it. Pretty straightforward and simple. I hope you had fun with this tutorial. I would like to thank Sean Geng, whose earth illustration inspired me to write this tutorial. Please check out his portfolio, DesignSpasm.net.
Sir Isaac Newton developed three laws of motion. The first law, the law of inertia, says that an object's speed will not change unless something makes it change. The second law says that the strength of the force equals the mass of the object times the resulting acceleration. Finally, the third law says that for every action there is an equal and opposite reaction. In some classes, these laws are taught by having the students memorize the words rather than by showing what the laws mean. Here are a few ways to demonstrate the laws and gain a better understanding.
Newton's First Law of Motion
Place the hard boiled egg on its side and spin it. Put your finger on it gently while it is still spinning in order to stop it. Remove your finger when it stops.
Place the raw egg on its side and spin it. Place your finger gently on the egg until it stops, then remove your finger; the egg should start to spin again. The liquid inside the egg has not stopped moving, so the egg will continue to spin until enough force is applied.
Push an empty shopping cart and stop it. Then push a loaded shopping cart and stop it. It takes more effort to push the loaded cart than an empty one.
Newton's Second Law of Motion
Drop a rock or marble and a wadded-up piece of paper at the same time. They fall at roughly the same rate, but the rock's mass is greater, so it hits with greater force.
Push the roller skates or toy cars at the same time.
Push one harder than the other. The one with greater force applied to it moves faster (the sketch below puts numbers on this).
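To put rough numbers on the cart and push comparisons, here is a minimal Python sketch of the second law, F = m × a. The masses and the acceleration are made-up illustrative values, not measurements.

```python
# A minimal sketch of Newton's second law, F = m * a.
# The masses and the acceleration below are made-up illustrative values.

def force(mass_kg, acceleration_m_s2):
    """Return the net force in newtons needed to give a mass the stated acceleration."""
    return mass_kg * acceleration_m_s2

# An empty shopping cart versus a loaded one, pushed to the same acceleration:
empty_cart = force(mass_kg=10.0, acceleration_m_s2=1.5)   # 15.0 N
loaded_cart = force(mass_kg=40.0, acceleration_m_s2=1.5)  # 60.0 N

print(f"Empty cart:  {empty_cart:.1f} N")
print(f"Loaded cart: {loaded_cart:.1f} N")
```

The same arithmetic explains the drop demonstration: for the same acceleration, the heavier object needs (and delivers) the greater force.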
Newton's Third Law of Motion
Pull back one ball in a row of hanging balls (or a swing) and let it go.
It will swing into the other balls, making the ball at the other end swing out.
Explain how this represents an equal and opposite reaction. |
What are style guides? Style guides contain standards of style and formatting for various fields (e.g., biology, chemistry, medicine, humanities, engineering).
What are style guides good for? Editors must make decisions based on context and their knowledge of the subtleties of language. Style guides can inform those decisions by answering questions about formatting and usage conventions. Examples of such questions include:
- Should a number be spelled out or written as a numeral? When are units spelled out vs. abbreviated? For example, it is correct to write “Three milligrams of…” or “After mixing, 3 mg of…” but not “Three mg of…” (abbreviated units should not be combined with spelled-out numbers) or a sentence that begins “3 mg of…” (numerals are not used at the beginning of a sentence).
- How are multiple units presented? For example, “100 mg/kg/d” is incorrect (multiple slashes should not be used). This expression could be written “100 mg·kg⁻¹·d⁻¹” or “100 mg/(kg·d)”. (A small automated check for this rule is sketched after this list.)
- When are geographic terms capitalized? For example, Laohun Mountain, not Laohun mountain; the Nakdong River, but the Nakdong and Seomjin rivers (the rule here is that geographic terms are capitalized when they follow a name but not when they are used in the general, plural sense).
- What symbols and variables should be italicized, and which should be roman (non-italic) (e.g., P < 0.05, n = 8, e^(x² – 1))?
- Is the correct abbreviation for “second” written “sec” or “s,” and is “year” abbreviated “yr” or “y” in SI notation? SI style abbreviates second as “s” and year as “y.”
- Which taxonomic levels are capitalized? What is the correct way to write cultivar names?
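As one illustration of how a rule such as the “no multiple slashes in units” convention could be checked mechanically, here is a minimal Python sketch. The regular expression and the sample sentences are assumptions for illustration only; they are not part of any style guide's official tooling, and a human editor still has to judge each flagged case.

```python
# A minimal sketch of an automated check for the "no multiple slashes in units" rule.
# The pattern and the sample strings are illustrative assumptions, not any
# published style guide's tooling.
import re

# Flag expressions like "mg/kg/d" that chain two or more slashes between unit symbols.
MULTI_SLASH_UNIT = re.compile(r"\b[a-zA-Zµ]+/[a-zA-Zµ]+/[a-zA-Zµ]+\b")

samples = [
    "The dose was 100 mg/kg/d for two weeks.",    # should be flagged
    "The dose was 100 mg/(kg·d) for two weeks.",  # acceptable form, not flagged
]

for text in samples:
    match = MULTI_SLASH_UNIT.search(text)
    if match:
        print(f"Check units: {match.group(0)!r} uses more than one slash.")
```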
How do you know which style guide(s) to consult? Journal formatting guidelines often stipulate points of style and sometimes request that authors and editors follow a specific guide. The “Instructions to Authors” of many scientific journals specify that SI notation should be used, and these instructions often give other guidance on formatting of numbers, units, abbreviations, and other items. There are many useful guides and books on grammar and language usage. Here are some of the style manuals that we use on a regular basis when editing scientific manuscripts:
Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers, from the Council of Science Editors (the CSE guide), covers style, nomenclature, and symbol usage for biological and physical sciences and provides links and references to other discipline-specific resources. Highlights include:
- Definitions of commonly misused (“imprecisely applied”) scientific terms
- Recommended substitutes for unnecessarily wordy phrases
- Guidance on writing units
- In-depth chapters on taxonomy and genetic nomenclature and conventions
- Coverage of conventions in the physical and earth sciences
The American Medical Association (AMA) Manual of Style provides a well-organized treatment of important elements of grammar, formatting, usage, and style. Highlights include:
- Chapters on genetic, virus, and disease terminology
- Units of measure and reference ranges for clinical laboratory measurements
- Guidance on statistical methods, study design (especially in relation to medical studies), and a glossary of statistical terms
- In-depth discussion of manuscript preparation
The ACS Style Guide, Effective Communication of Scientific Information, is a publication of the American Chemical Society. Highlights include:
- A guide to the peer-review process
- Important grammar rules, with examples
- A list of “tricky plurals” with correct spellings
- Recommended spellings for words that have multiple accepted spellings
- A chapter on mathematical style and the use of numbers and units
- A chapter on names and numbers for chemical compounds
- Advice on preparing figures and tables
The Chicago Manual of Style is loaded with guidance on the publishing process, style, usage, and documentation. It is an important reference for editors and writers of books, journal articles, and most other types of manuscripts. A few highlights include:
- A section on the process of manuscript editing
- An extensive table on hyphenation, with rules and examples
- A glossary of problematic words and phrases
- A bibliography of works on writing, editing, publishing, and related references
The Gregg Reference Manual calls itself “the primary reference for professionals in all fields who are looking for authoritative guidance on matters of style, grammar, usage, and formatting.” While other books (to be discussed in another post) cover usage in more detail, The Gregg manual is an excellent guide to fundamentals and fine points of writing. Subjects covered in detail include:
- Punctuation, capitalization, compound words
- Grammar (parts of speech, sentence structure)
- Structure and format of professional letters
- Essays on style
The United States Government Printing Office (GPO) Style Manual was originally developed (it was first released in 1894) to standardize word and type treatment. The GPO Manual has evolved to become a useful resource for editors on points of usage including:
- Capitalization rules and examples
- Names, capitals, and governmental details for many of the world’s countries
- Names of regions and geographic features
- Geological terms
The National Institute of Standards and Technology (NIST) Guide for the Use of the International System of Units (SI) is a guide to the “modern metric system.” The NIST guide includes a checklist of basic SI principles to use when editing or reviewing manuscripts. However, requirements specified by the author’s target journal overrule the conventions presented in this (or any other) guide. Some points of SI notation (e.g., detachment of the “%” sign from its number, as in “10 %”) differ from common usage (“10%”). Editors must make judgment calls and should always check the dominant usage of the target journal or publisher when settling stylistic or formatting questions.
What are style sheets? Professional editors maintain “style sheets” in which they define points of style, formatting, and usage; editors refer and add to these style sheets while working on assignments. Style sheets are personalized guides that are tailored to a specific assignment, subject area, or discipline and that provide rapid answers to formatting or other questions. Style sheets can include elements from an array of reference materials. We will discuss style sheets in more detail in a future post.
What else do style guides cover? Many style guides discuss grammar, word usage, and composition. Overlap and contradictions occur among style and usage manuals. For example, the CSE and AMA guides specify that a hyphen should be used with abbreviated units used as modifiers (e.g., 100-mL beaker), while the American Heritage Guide to Contemporary Usage and Style rejects the hyphen in these cases (100 mL beaker). In such cases, the editor or writer should follow the conventions typically used in the field of study or by the target journal. We will discuss usage guides in another post.
Please take a moment to share your thoughts and comments. What style guides do you use? What advice do you have about using style guides?
Like this article? Share it with your colleagues. |
The city of Biloxi has had quite a colorful past, with everything from changes of ownership, to air bases, to hurricane warnings, to tourism. This city has seen and experienced it all.
Let’s delve into the history of Biloxi, and see where it’s come from, and where it’s going.
Biloxi was founded between 1710 and 1725. Some historians state that it was founded in 1699, but that settlement is now referred to as “Old Biloxi.”
The city was founded by the French during the European rush to colonize the New World, and it was even the administrative capital of French Louisiana for a time before the French moved the capital inland due to fear of flooding and storms. However, the French ownership of Biloxi didn't last. After the Seven Years' War between England and France, Biloxi was given to the British, who ruled it from 1763 to 1779, after which the Spanish had their chance to claim ownership over it. Despite that, the French roots remained strong, up until Biloxi joined the United States in 1817.
Even back then, Biloxi was starting to make a name for itself as a tourist location, with many people making the journey from New Orleans for their summer vacations. It was shortly after joining the United States that the famous Biloxi Lighthouse was built, and it is still standing to this day.
During the American Civil War, Ship Island was captured by Union Forces, which meant that Biloxi was a Union City early on. Thankfully, no major battles took place in or near Biloxi, so many of the landmarks were left undamaged.
During World War II, the Keesler Air Base was built; at the time, it was simply called Keesler Field. It was meant to be a base for training, as well as a site for aircraft maintenance. The other benefit of Keesler Field was that it brought a lot of new faces to Biloxi, and in a tourism city, new faces meant more tourists.
The downside to Biloxi being a coastal city is that it runs the risk of being battered by storms. Although storms are commonplace in most coastal regions, Biloxi has had bad luck taking the brunt of the storms.
The two major storms were Hurricane Camille in 1969 and Hurricane Katrina in 2005. The devastation from Hurricane Katrina was massive, with 90% of the coastal buildings in Biloxi damaged by the gale-force winds. Even now, 12 years after the damage occurred, both the residents and the government of Biloxi are still making repairs. Much of the waterfront was destroyed and had to be rebuilt.
Because of the extent of the damage and trauma, the government of Biloxi erected monuments to memorialize those who lost their lives and loved ones during Hurricane Katrina. They also erected a monument designed to show tourists how high the waves reached before they came crashing down on the sea front.
The city of Biloxi is an incredibly popular tourist destination. As it's located along the Mississippi gulf coast, there are scores of beachfront hotels and casinos. Casinos have operated in Biloxi since the 1940s, when illegal gambling took place at the Broadwater Beach Resort. Gambling has since been made legal, and Biloxi is now one of the leading gambling centers in the Southern United States.
There are more reasons than gambling for why you may have heard of Biloxi, however:
The movie “Biloxi Blues,” starring Matthew Broderick, was an American drama film released in the late ‘80s. This film focused on the lives of soldiers at Keesler Field. Thanks to the film, the tourism industry in Biloxi grew exponentially.
However, Biloxi’s fame doesn’t just come from that one film. Many of John Grisham’s novels are based in Biloxi, which also increased tourism as avid readers traveled there to “feel the environment” for themselves.
With beachfront hotels, museums, casinos, and offshore islands to explore, Biloxi is a must-see city that all tourists should visit at least once in their lives to experience it for themselves. |
The geological history of western North America has been, and continues to be, shaped by its position on the eastern rim of the Pacific Ocean. The modern Pacific Ocean’s basin is the successor of the original ocean which split Laurentia - our continent’s cratonic core - away from the rest of the Precambrian supercontinent Rodinia, an ocean that widened until in late Paleozoic time, it became Panthalassa, the World Ocean. Unlike the eastern side of the continent, where continental collision was followed by re-opening of the Atlantic Ocean - the “Wilson cycle” - western North America has always faced the same active ocean basin. Its tectonic evolution has always been that of an active margin, affected first by multi-episodic rifting, and then by plate-margin subduction and transcurrent faulting, over a 700 million year period of time. Throughout this long interval, fluctuating regimes determined by relative plate vectors have created the complex and varied geology and topography of the region; its thrust belts, volcanoes, and granite canyons; its scarps, plateaus, and cordilleras.
The geological history of this area can be viewed as four distinct plate-tectonic phases. The Rifting/Open Margin phase lasted from initial breakup at about 700 Ma (late Proterozoic) until 400 Ma (Middle Devonian), when widespread subduction began along the margin. The Oceans and Islands phase commenced as subduction built arcs and the continent retreated away from them, creating a scenario like the modern southeastern Pacific ocean. This phase lasted until about 180 Ma (Early Jurassic), when opening of the Atlantic ocean reversed the motion of North America such that it drove strongly westward relative to the long-standing subduction zones off its west coast, creating a broad zone of compression in the offshore arcs and ocean basins as well as its own miogeocline (Collisional/Orogenic phase). The final, Post-Collisional phase commenced during the Early Tertiary when parts of the East Pacific Rise subducted under the continent and turned the plate margin from pure subduction to a regime with both transcurrent faulting and continued subduction of short, remnant segments. Industrial mineral deposits formed during each of these tectonic phases. Combined effects of two or more phases were required to form some of these deposits.
The Cordilleran miogeocline developed during the Rifting/Open Margin phase, from its inception with deposition of the late Proterozoic, syn-rifting Windermere Supergroup, through the deposition of the thick sequences of Paleozoic-early Mesozoic carbonate and terrigenous siliciclastic strata that are now beautifully exposed in the Canadian Rocky Mountains. The Mt. Brussilof magnesite deposit is hosted by Middle Cambrian carbonate within this continental shelf sequence. Equivalent platformal strata are best exposed in the southwestern United States, the Grand Canyon being a world-renowned example. The opening of Panthalassa was not a single event in western North America: convincing Cambrian as well as late Proterozoic rift-related sequences occur, and alkalic to ocean-floor basalts in the miogeocline range through Ordovician into Devonian age. The implied protracted nature of this rift event, in contrast to the short-lived and efficient opening of the Atlantic Ocean, continues to puzzle. The exact identity of the missing twin continent or continents, also provides grounds for lively debate, with Australia, Australia/Antarctica, and Siberia attracting the most adherents.
The Oceans and Islands phase began in Devonian-Mississippian time with the first widespread arc volcanism and plutonism along the continent margin: these rocks are recognized from southern California to Alaska. West of the North American miogeocline, a large portion of California, Oregon, Washington, British Columbia, and most of Alaska are made up of rocks of intra-oceanic island arc to oceanic affinity that occur in relatively coherent packages separated from each other by faults. These assemblages, famously termed a collage of “suspect terranes” by Peter Coney and Jim Monger, had uncertain relationships to the North American continent during at least part of their history. Most of them have now been shown either to contain faunas of eastern Pacific affinity (an excellent example can be viewed at the Lafarge limestone quarry near Kamloops, B.C.), or to exhibit sedimentological, geochemical and/or historical aspects that link them, however distally, to the continent. Some, however, are more convincingly exotic imports: the Cache Creek Terrane of central British Columbia with its Tethyan, Japanese-Chinese, late Permian fusulinid fauna; Wrangellia and Alexander, a linear belt on the coasts of B.C. and southeastern Alaska, with its late Paleozoic cold-water, Baltic-affinity fauna; and fragments of continental crust in Alaska with Precambrian ages that are unknown in North America.
At present, the Pacific Ocean is highly asymmetric, its west side festooned with island arcs, its east side bare, bordering a continental margin made up of fragments of just such arcs and marginal oceans. It is reasonable to suppose that both sides of the Pacific were once mirror images. However, opening of the northern Atlantic Ocean at about 180 Ma destroyed that symmetry. Although earlier compressional events affected the suspect terranes, their thrusting on top of the North American miogeocline dates from the latest Early Jurassic, roughly 183 Ma - the same age as early rift basalts on the eastern seaboard. From that time until the end of the Mesozoic, both suspect terranes and sedimentary strata of the miogeocline were stacked into a complex but overall easterly-tapering thrust wedge. Oceanic terranes incorporated into the wedge have provided both asbestos (Cassiar Mine) and jade deposits. During this Collisional/Orogenic phase, successive Jurassic and Cretaceous magmatic arcs draped across the growing accretionary collage. Numerous dimension stone quarries exploit granites from this phase. The notably voluminous mid-Cretaceous arc was probably linked to an episode of rapid subduction around the entire northern Pacific Rim. Broad plutonic provinces of this age occur from China, through Russia, into Alaska and British Columbia and south into California and Mexico - a spectacular example of the global geological consequences of relative plate motion.
Towards the end the Collisional/Orogenic phase, dextral (Pacific-northward) transcurrent motion became an increasingly important part of tectonic development, with at least hundreds of kilometres of displacement along faults such as the Denali, Tintina, Pinchi and Fraser. By early Tertiary time, thrusting in the Canadian Rockies and Foreland Belt had ceased, and crustal extension was accompanied by volcanism and graben development in southern British Columbia and northern Washington-Idaho. At about 38 Ma ago, the edge of North America began to impinge on the East Pacific Rise, shutting off subduction along increasingly long sections of the margin. Major dextral faults - the San Andreas, Queen Charlotte, and Denali faults - began to express the strong component of lateral motion between the American and Pacific plates. Some 300 kilometres of crustal extension related to this lateral motion generated the Basin and Range Province in the western United States. Limited subduction continues today off the coast of Washington, giving rise to the Cascade volcanoes; and the westward turn of the continent margin in Alaska creates a continuing collision zone in the Wrangell and Alaska Ranges, resulting in North America’s highest summits. Late Cenozoic uplift, perhaps due to mantle heating events, has rejuvenated the Canadian Rockies and the Coast Mountains. In terms of industrial mineral deposits, late Cretaceous and Cenozoic volcanic and terrestrial deposits of bentonite, diatomaceous earth, pumice, zeolites, and opals have attracted exploration and mining interest.
The saga continues, with the recent February 28, 2001 earthquake in Washington State rattling our knick-knack shelves and causing short term cell-phone and 911 overloads. This west coast is by nature Pacific Rim, not just in cultural orientation, but in terms of day-to-day tectonic reality as well.
Canadian Cordilleran Geoscience
Cannings, S., Nelson, J, and Cannings, R., 2011. Geology of British Columbia, A Journey through Time (new edition). Greystone Books, Vancouver, 154p.
Geological Survey of Canada, Canadian Cordillera Geoscience
Nelson, J. and Colpron, M. (2007): Tectonics and Metallogeny of the British Columbia, Yukon and Alaskan Cordillera, 1.8 Ga to the present, Special Publication No. 5, p. 755-791.
Bringing Fruit Flies in from the Cold
DECEMBER 21, 2009
Based on the University of Western Ontario press release
Using a microscope the size of a football field, researchers from The University of Western Ontario, the University of Nevada-Las Vegas, Argonne National Laboratory, and Virginia Polytechnic Institute and State University are studying why some insects can survive freezing, while others cannot.
Why is this important? Because the common fruit fly (Drosophila melanogaster) is one of the bugs that cannot survive freezing, and the little creature happens to share much of the same genetic makeup as humans, so finding a way to freeze the flies for research purposes is a top priority for geneticists the world over (about 75% of known human disease genes have a recognizable match in the genetic code of fruit flies).
And why the large microscope?
“It’s the only one in the world that’s set up for this kind of imaging on insects,” said lead researcher Brent Sinclair of his team’s use of the U.S. Department of Energy’s Advanced Photon Source (APS), at Argonne National Laboratory. The APS generates high-energy x-rays that allow Sinclair and his collaborators to film the formation and spread of ice in real time as the maggots freeze.
Sinclair explained that the physical processes of ice formation seem to be consistent among species that do and don’t survive freezing. However, it seems that the insects that survive freezing have some control over the process of ice formation. They freeze at consistently higher temperatures than those that don’t.
Sinclair said this implies that the main adaptations required to survive freezing are at the cellular or biochemical level, rather than because of fundamental structural differences.
“We’re comparing Chymomyza amoena, an insect native to Ontario that survives freezing, with Drosophila melanogaster, because they’re very close relatives,” he said. “The idea is to find the magic bullet that allows some bugs to survive freezing and some not. That’s the goal here.” |
Optic Nerve Atrophy is a permanent visual impairment caused by damage to the optic nerve. The optic nerve functions like a cable carrying information from the eye to be processed by the brain. The optic nerve is comprised of over a million small nerve fibers (axons). When some of these nerve fibers are damaged through disease, the brain doesn’t receive complete vision information and sight becomes blurred.
Atrophy (wasting away) may be partial, in which some axons are damaged, or profound, in which most axons are damaged. A child's ability to see clearly (visual acuity) is affected due to nerve damage that occurs in the central part of the retina responsible for detail and color vision (the macula). These areas of the eye are more vulnerable to the effects of atrophy. ONA is the end result of damage to the optic nerve. It can affect one or both eyes. It may also be progressive, depending on the cause.
Many diseases and conditions may lead to optic atrophy. Tumors of the visual pathway, inadequate blood or oxygen supply (hypoxia-ischemia) before or shortly after birth, trauma, hydrocephalus, heredity, and rare degenerative diseases have been identified as causes of ONA. When hereditary, the pattern is dominant, meaning that one parent with the condition would pass the gene to 50% of his or her children. If caused by a tumor, the progression of ONA may be halted by removal of the tumor.
Additional information is available in the Blind Children’s Center Pediatric Visual Diagnosis Fact Sheets. |
Exploring Reflections 1
Lesson 5 of 16
Objective: SWBAT understand reflections, their properties, and the symmetry that results.
Access Prior Knowledge
To begin this lesson I pair up my students, give each pair a Reflections APK slip, and ask that they answer each question in Parts I and II. Most 8th graders should have worked with symmetry before. Yet I allow them to use any resource they wish to answer the questions. I encourage the use of their cell phones or iPads, which, when turned off, can be used as mirrors placed along the "reflecting" line of the figures in Part II. This concrete demonstration can help learners see whether the given line is a reflecting line (or not) for a figure.
As students work I walk around assessing their work. Some may want to use their phones for the letters in Part I, but find them too small, so I encourage the drawing of vertical or horizontal lines (using pencils) for these cases. I also encourage folding the paper, showing that if it is folded along the symmetry line, the pre-image and image of the letter should coincide. I hold up and demonstrate this with a letter on a sheet of paper for all to see.
Some students may want to rewrite a letter on another sheet of paper big enough to easily work with. When students are done, I project the Reflections APK document on the whiteboard so we can go over the questions together. I will ask students to share how they were able to determine if the symmetry line, if any, was horizontal or vertical for a letter. I also ask for explanations on how they determined the answers to Part II.
Common mistake: Students may think that a letter like N, for example, has a vertical line of symmetry, because when drawn, both sides of the letter are identical (same with the letter S). I ask these students to fold the paper along the vertical line and see for themselves that the image and pre-image do not coincide, and that therefore N has no vertical symmetry line. I ask that they test a horizontal line through N as well.
To open this section of the lesson, I hand out the About Face graphic organizer. The first two transformations are a translation and a dilation, which students have seen in our previous lessons. I ask the class to complete the entire page with their partners. Although I have not formally discussed reflections with the class, I ask that they analyze the reflection in Figure 3 and make at least three conclusions based on what they see. Before calling on students, I project this reflection on the board. When they step up to write their responses, here are some of the answers that students have given:
(1) "The images have the same size and shape"
(2) "The tip of the nose on the pre-image and image is the same distance to the vertical line"
(3) "The images face opposite direction"
(4) "The distance between two points on one face is the same as between the same two points on the image."
Students may not make any conclusions with respect to angles, so I will be prepared to ask the class if they can conclude anything about the images with respect to angles. I've had students state that the angle that the girl's chin, or lower jaw, makes with her neck looks the same in both images. I ask if these corresponding angles should be equal and why. I don't expect students to see that the line connecting a point and its reflection image is perpendicular to the reflecting line. They will discover this in the following section.
For this activity I pass out the resource Exploring Reflections. Then, I ask students to work with their partners and take the measurements required in Question 1. I remind students to use their ruler and protractor. I walk around the room assessing the students' proficiency and answering students' questions.
I am most interested in the conclusions students make at the end of Question 1. I ask the class to stop at the end of Question 1. I then choose a pair of students with correct answers and ask that they share their responses. This question is usually not difficult for students to answer and discuss.
After discussing this question, I tell students to reflect on these properties when answering the rest of the questions. In Question IV, students encounter a good "flip" reflection. I ask if anyone recalls the informal word for translations (slide) and I state that "flip" is the informal word for reflections.
In Question V, I added some triangles that are not reflections of any pre-image on the plane. I encourage students to use their phones or iPads as mirrors to find the reflections. Students often find that using rulers to measure distances is easier. I allow each student to choose the method that works best for him or her. Some students may forget how to state the equation of a horizontal or vertical line when stating the reflecting line in reflections A>>B and F>>H, so I may decide to provide students with a quick refresher.
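For teachers who want to connect Question V to coordinate arithmetic, here is a minimal Python sketch of reflecting points across vertical (x = a) and horizontal (y = b) lines. The triangle coordinates and the reflecting lines are made-up examples, not taken from the worksheet itself.

```python
# A minimal sketch of reflecting points across vertical (x = a) and
# horizontal (y = b) lines; the sample points are made up for illustration.

def reflect_across_vertical(point, a):
    """Reflect (x, y) across the vertical line x = a."""
    x, y = point
    return (2 * a - x, y)

def reflect_across_horizontal(point, b):
    """Reflect (x, y) across the horizontal line y = b."""
    x, y = point
    return (x, 2 * b - y)

triangle = [(1, 1), (4, 1), (4, 3)]

# Reflect across the line x = 5: each image point lands as far to the right
# of the line as its pre-image point sits to the left.
print([reflect_across_vertical(p, 5) for p in triangle])    # [(9, 1), (6, 1), (6, 3)]

# Reflect across the line y = 0 (the x-axis).
print([reflect_across_horizontal(p, 0) for p in triangle])  # [(1, -1), (4, -1), (4, -3)]
```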
To close this lesson, I like to have a group discussion and have students share answers with the whole class. I've found that students that have been struggling can benefit a lot when listening to how other students got their answers. After going around the room assessing students' work on Exploring Reflections, and getting a good idea of who these students are, I make sure I call on these students to share an answer to one of the questions. I may even create a question based on a concept already discussed. This informal assessment helps me, and helps those struggling students. If time runs out, I tell my students to finish this activity for homework and I make sure I discuss Question V during the next class. |
Meaning and history of the St. George Ribbon
The history of the St. George ribbon is inextricably linked with the heroic past of Russia. The ribbon was an integral part of the three insignia established in the name of St. George the Victorious, the patron saint of the Russian army: the order, the cross, and the medal. In addition, the ribbon adorned the caps of the sailors who served in the Imperial Guards crew and on ships awarded the St. George flag, and it fluttered on the banners of the tsarist army.
What does the St. George ribbon mean? The story of its appearance
During the military campaign of 1768-1774, a special award, the St. George ribbon, was instituted to honor those who showed courage and prudence for the benefit of Russia. Its motto became the words "For service and courage." A corresponding award badge also appeared: a white equilateral cross or a four-pointed golden star.
The order has four degrees. Cavaliers of the first degree were awarded a cross, a star, and a ribbon decorated with stripes of black and orange. Heroes awarded the second degree of the order also had a star and a separate cross, which was worn around the neck. The next degree gave the right to wear a small cross on the neck, and the fourth, in the buttonhole. Since the establishment of the order, black and yellow have been symbols of military valor and courage. Thus, the story of the appearance of the St. George ribbon can only be considered in conjunction with the history of the order itself.
How the ribbon looked, how it was put on
The ribbon was worn differently depending on the class of the awarded cavalier. Three options were envisaged: in the buttonhole, on the neck, or over the shoulder. The history of the St. George ribbon includes a curious fact: those who were awarded it received a lifetime salary from the treasury, and after their death the heirs became the owners of the award. But the statute of the order also provided for stripping the award from anyone who had tarnished the honor of a George Knight by some unseemly act.
Initially, the St. George ribbon was made of silk and decorated with stripes of black and yellow, as provided for in the order's statute of 1769. But if you look at the samples from those early years that have come down to us, you can see that even then the yellow on them was clearly closer to orange, which would only be formally approved in 1913. For a long time there have been discussions about what the St. George ribbon means.
The story of its appearance is connected with war, so many believe that black stands for smoke and orange for flame. This version, of course, has the right to exist, but the one expressed by the well-known expert in phaleristics S. Andolenko is more likely. He draws attention to the correspondence between the colors of the ribbon and the national emblem of Russia: a black eagle on a gold background.
St. George Ribbon. History, meaning and features
There are many order ribbons, but only a few of them have independent status. The history of the St. George ribbon includes periods when it was used as a full-fledged substitute for the order or the cross. For example, during the Crimean War, the defenders of Sevastopol could not receive the insignia themselves, so they were given the ribbons instead. Another example is the First World War, when those who had been awarded the order pinned a ribbon to the side of an overcoat. There is also a known case when the St. George ribbon was presented without an order and had independent significance.
This happened in 1914. One of the highest-ranking officers of the General Staff was awarded the ribbon for having managed to mobilize the army in the shortest possible time. Neither the order nor the cross could be given to him, because they were awarded only to participants in hostilities. The ribbon was granted to supplement an order he had received earlier, and thus the general gained the right to wear that order on the St. George ribbon, a unique event in the history of Russia.
Two kinds of ribbons
In the reign of Emperor Alexander I, it became a tradition to reward units that particularly distinguished themselves in military actions with St. George banners. These award standards differed from others in that the St. George cross was placed at their tops, and under it hung a black and gold ribbon with standard tassels, bearing no inscriptions. Over time, these began to be called "narrow St. George ribbons."
In contrast, an imperial decree of 1878 introduced wide ribbons inscribed with the specific services for which a military unit had received the award banner. Such a ribbon became an integral part of the standard and was not removed from it under any circumstances. Their history begins at the end of the military campaign of 1877-1878, when Alexander II wished to reward the most distinguished units and subunits of the Danube and Caucasian armies that took part in the battles.
Unique rewards for combat regiments
Army commanders provided information about two regiments that fought under their command. A detailed list of their exploits was attached to the report. But when the relevant commission began to consider the issue of awarding, it turned out that these regiments already had all the awards that existed at that time. It was for them that the wide St. George ribbon was established, listing their merits.
No more such ribbons were ever awarded, and these two regiments remained the only ones to receive this honor. It is also known that at the end of the Crimean War, a decree of the emperor introduced a presentation weapon decorated with straps in the colors of the St. George ribbon. Such an award was considered no less honorable than the order itself. Samples of these golden weapons can be seen today in many museums in the country.
Hall of the Palace, dedicated to the gentlemen of the Order
In St. Petersburg, at the end of the 18th century, the Great Throne Hall was opened in the royal residence. Its consecration took place on November 26, the day of the celebration of the memory of St. George the Victorious, and so it was named after him. Since then, all protocol events related to the awards were held within its walls. The commission that considered the candidacies of prospective cavaliers also sat there, and receptions were held annually in honor of the order's cavaliers.
Awarding ribbon in the White Guard troops
After the seizure of power in 1917, the Bolsheviks abolished the old award system, and the black and gold ribbon was used only in parts of the White Army. An example is its presentation along with the badge "For the Ice Campaign," used in the award system of Kornilov's Volunteer Army. On the Eastern Front, it was also attached to the medal "For the Great Siberian Campaign."
In addition, the history of the St. George ribbon includes many instances of its use as a patriotic symbol by White Guard units and formations. Ribbons with black and orange stripes decorated the banners, chevrons, and hats of soldiers and commanders. This was especially characteristic of the participants in the Yaroslavl uprising. The famous ataman Annenkov obliged the veterans of his movement to wear St. George ribbons to distinguish them from newly conscripted soldiers.
Allies of enemies and fighters against Bolshevism
In 1943, the German command formed the so-called Russian Corps, consisting of émigrés and former citizens of the USSR who had gone over to the side of the enemy. It was used to suppress the resistance of the Yugoslav partisans, and its most distinguished members were awarded St. George crosses and ribbons. Unfortunately, the history of the St. George ribbon contains not only heroic pages: Vlasov's men, who fought in the ranks of the Wehrmacht, also often wore this sign of valor on their chests.
In 1944, a collaborationist organization called the Union Against Bolshevism was created in Bobruisk. On its banner, decorated with two-color ribbons, an image of the St. George cross was embroidered in silver. The same ribbons served as armbands and distinguishing signs of its leaders. Among the numerous unions created in the West by Russian émigrés, all sorts of symbols were popular, including the St. George ribbon. One of these organizations was the Russian All-Military Union.
The continuation of the patriotic tradition
The St. George ribbon, whose history is closely connected with the heroic pages of the Russo-Turkish wars, eventually entered the symbolism of the Soviet army. In 1942, at the height of the battles against fascism, the Guards ribbon was established, corresponding in appearance to the well-known St. George ribbon. This was a continuation of the glorious patriotic tradition.
It was used on Red Navy caps and on the "Sea Guard" breast badge. Images of the ribbon decorated the banners of guards units, formations, and ships. In 1943, the ribbon of the Order of Glory was established by government decree; its appearance is identical to that of the St. George ribbon. It was also used for the suspension ribbon of the medal "For the Victory over Germany."
The revival of glorious awards
With the advent of democratic changes in the country, the attitude towards the monuments of our history has changed in many ways. A government decree of March 2, 1992 restored the Order of St. George and the insignia of the Cross of St. George. In 2005, in honor of the sixtieth anniversary of the victory over fascism, a public action called the St. George Ribbon was held. It was initiated by the RIA Novosti news agency and the Student Community ROOSPM.
From that time on, the Guards ribbon was again called the St. George ribbon, and the actions dedicated to it became annual. Thousands of activists hand out ribbons to those who wish to express their gratitude to our veterans in this way. Black and gold ribbons, symbolizing the courage and heroism of Russian soldiers, are attached to clothes, bags, and car antennas. The action is held under the motto "I remember, I am proud." Thus, the story of the St. George ribbon, briefly outlined in this article, continues. |
Key Events in String Theory History
Although string theory is a young science, it has had many notable achievements. What follows are some landmark events in the history of string theory:
1968: Gabriele Veneziano originally proposes the dual resonance model.
1970: String theory is created when physicists interpret Veneziano’s model as describing a universe of vibrating strings.
1971: Supersymmetry is incorporated, creating superstring theory.
1974: String theories are shown to require extra dimensions. An object similar to the graviton is found in superstring theories.
1984: The first superstring revolution begins when it’s shown that anomalies are absent in superstring theory.
1985: Heterotic string theory is developed. Calabi-Yau manifolds are shown to compactify the extra dimensions.
1995: Edward Witten proposes M-theory as unification of superstring theories, starting the second superstring revolution. Joe Polchinski shows branes are necessarily included in string theory.
1996: String theory is used to analyze black hole thermodynamics, matching earlier predictions from other methods. |
Pernicious anemia (PA) is a decrease in red blood cells that occurs when the intestines cannot properly absorb vitamin B12. Red blood cells provide oxygen to body tissues. There are many types of anemia; pernicious anemia is a type of vitamin B12 anemia. The body needs vitamin B12 to make red blood cells. You get this vitamin from eating foods such as meat, poultry, shellfish, eggs, and dairy products. A special protein, called intrinsic factor (IF), helps your intestines absorb vitamin B12. This protein is released by cells in the stomach. When the stomach does not make enough intrinsic factor, the intestine cannot properly absorb vitamin B12.

Common causes of pernicious anemia include a weakened stomach lining (atrophic gastritis) and an autoimmune condition in which the body's immune system attacks either the intrinsic factor protein itself or the cells in the lining of your stomach that make it.

Very rarely, pernicious anemia is passed down through families. This is called congenital pernicious anemia. Babies with this type of anemia do not make enough intrinsic factor, or they cannot properly absorb vitamin B12 in the small intestine.

In adults, symptoms of pernicious anemia are usually not seen until after age 30. The average age of diagnosis is 60. Patients usually do well with treatment, and it is important to start treatment early. Nerve damage can be permanent if treatment does not start within 6 months of symptoms. |
The most comprehensive assessment tool of its kind, this diagnostic battery of tests is easy for busy teachers to administer and to interpret. It provides valid, reliable procedures for individual, in-depth assessment in seven areas: Emergent Reading, Word Identification and Phonics, Comprehension, Spelling, English as a Second Language, Writing, and Oral Processing. Unique to this inventory is an arithmetic screening test so reading and language skills can be compared with math skills for better overall assessment, and visual and auditory discrimination screening is included. Chapters include: The Bader Reading and Language Inventory; Administering the Inventory; Student Priorities and Interest; English as a Second Language (ESL) Quick Start; English as a Second Language (ESL) Checklist; Word Recognition Lists; Graded Reading Passages; Spelling Tests; Visual Discrimination Test; Preliteracy and Emerging Literacy Assessment; Semantic and Syntactic Evaluation: Cloze Tests; Phonics and Structural Analysis Test; Oral Language; Writing; Arithmetic Test; Open Book Reading Assessment; Student Background Information; Case Study: Jackie; Developing and Validating the Inventory.
Table of Contents
I. THE BADER READING AND LANGUAGE INVENTORY.
The Bader Reading and Language Inventory.
Administering the Inventory.
II. TEST BATTERY.
Student Priorities and Interest.
English as a Second Language (ESL) Quick Start.
English as a Second Language (ESL) Checklist.
Word Recognition Lists.
Graded Reading Passages.
Visual Discrimination Test.
Auditory Discrimination Test.
Preliteracy and Emerging Literacy Assessment.
Semantic and Syntactic Evaluation: Cloze Tests.
Phonics and Structural Analysis Test.
Open Book Reading Assessment.
III. RECORDING, SUMMARIZING, INTERPRETING.
Student Background Information.
Case Study: Jackie.
IV. DEVELOPMENT OF THE INVENTORY: VALIDITY AND RELIABILITY.
Developing and Validating the Inventory.
To the extent that students' needs are understood, they can be helped to learn. As a result of the dedication of many teachers, specialists, and researchers, a great deal has been learned about factors that influence achievement in basic areas of functioning such as reading, writing, and arithmetic. This knowledge is tempered with the realization that assessment of achievement, as well as of the factors underlying achievement, is a complex process requiring conclusions that must be considered tentative. Teachers of children, adolescents, and adults, however, have instructional decisions to make. They need to make referrals to specialists in vision, hearing, and language development, as appropriate, to meet the needs of individual students. Yet most teachers have teaching responsibilities that make individual, in-depth evaluation difficult to do. Most reading and learning specialists have demands on their time, too. Thus, this inventory was developed for teachers and specialists who need a diagnostic battery that encompasses vital areas of evaluation based on research and practice, is efficient in administration and interpretation, and is relatively inexpensive to use. Concerns about "high stakes testing" that results in children, teens, and adults being erroneously failed or excluded have been raised repeatedly in the popular media and in professional publications. Professional authorities have long recommended individual tests for children and adults who fail group-administered or pencil-and-paper tests. This inventory provides valid, reliable procedures for individual assessment. The instruments within can assist teachers and specialists to discover inhibiting conditions that can be ameliorated with appropriate instruction.

NEW IN THIS EDITION: The fourth edition of the Bader Reading and Language Inventory has been revised to improve its organization, increase its passages' appeal, provide more guidance for subtest selection and sequencing, expand English as a second language record keeping, highlight phonemic and emergent literacy assessment, and provide a model for diagnostic and instructional decision making. The flowcharts are more accessible. Flowcharts for quick screening, basic assessment, and diagnostic testing are on the inside front cover. A flowchart for preliteracy and emergent literacy is on the inside back cover. Page numbers on the charts provide for easier location of the tests. An instructor's checklist now supplements the English as a second language screening test. These instruments have been used with thousands of migrant families, refugees, and immigrants in the last eight years and found to be practical, useful, and accurate for initial program placement and for monitoring development in English, both by teachers and by tutors. A case study presented by Michelle Johnston, Ph.D., provides a clear example of the reasoning of a diagnostician as he or she plans and carries out an assessment and makes recommendations for instruction. Although earlier editions of the inventory contained tests to assess phonemic awareness and emergent literacy, the subtests pertaining to these areas are now highlighted through a revised sequence and with more explanation. The remainder of the inventory has undergone minor changes based on recommendations of reviewers who are literacy authorities from universities across the nation. These reviewers, who have used the inventory, report that they are satisfied with its content.
As has been previously explained, due to the absence of an atmosphere no optical barrier exists in empty space to prevent using telescopes of unlimited sizes. But also from the standpoint of construction, conditions are very favorable for such instruments due to the existing weightlessness. The electrical power necessary for remotely controlling the instruments and their components is also available in the space station.
Thus, for example, it would be possible to build even kilometer-long reflecting telescopes simply by positioning electrically adjustable, parabolic mirrors at proper distances from the observer in empty space. These and similar telescopes would be tremendously superior to the best ones available today on Earth. Without a doubt, it can be stated that almost no limits would exist at all for the performance of these instruments, and consequently for the possibilities of deep space observations.
The wealth of data generated from intensive study of the brown tree snake, prompted by the need to control introduced populations of this pest species, allows several important conclusions. First, the snakes on Guam are extraordinary in terms of their absolute abundance and their ability to exploit a broad prey base. Our data suggest an exceptionally high reproductive success on Guam for a snake with an otherwise unnoteworthy reproductive capability and life history (i.e. small clutch size, typical ontogenetic shift from small heterothermic prey to larger homeotherms). Especially important was the snake's versatility in taking advantage of extremely common prey on islands; population expansion was slow but survival was maximal, ultimately leading to high population levels. The brown tree snake shares many attributes with other snakes that could cause similar biodiversity crises in a wide variety of contexts in which they lack coevolutionary histories (especially formerly snake-free island environments). Despite their relatively poor history as over-water dispersers, snakes may be especially problematic as travelers in increasing ship and air traffic between widely separated geographic regions of the world.
The disappearance of Guam's wildlife: new insights for herpetology, evolutionary ecology, and conservation |
Options & More Options...
You have many options available to you when it comes to grading your kids. Some parents prefer to stick to the familiar letter-grades and percentage points because it makes sense to them or maybe their state requires it. Others rely on a portfolio system or simply hand out home-made awards and certificates. And then there are those parents who believe a job well-done and a concept learned is enough reward for their kids and steer clear of grades altogether. What kind of homeschooler are you?
So you’ve decided to keep grades – or maybe your state requires that you show them graded progress at the end of the year. You may be wondering how to get started. How do those teachers come up with A’s and B’s anyway? Here’s a quick guide to recording grades:
- Grading a Worksheet: Simply divide the number of problems correct by the total number of problems. For example, if the page has 14 problems and your child got 12 correct, divide 12 by 14 to get .857, or 86%. See the chart to the right to translate percentages into letter grades, or you can download our free printable version of this Grading Chart.
- Grading an Essay: Grading an essay or project can be much trickier because you're not dealing with a simple correct or incorrect answer. In these cases, you need to clearly explain to your child what you will expect from them and then decide how close to that expectation they've come. Rubrics are a great way to grade written essays. Rubrics break down every element that is being considered and assign points to each. Click here for some sample rubrics.
- Grading an Entire Year of Work: Your state may require that you show them grades for an entire year (or quarter or semester) of work. In this case, all you do is record the percentage points for each worksheet, quiz, or test, then at the end of the year simply add up the points and divide by the total number of assignments. For example, if you assigned 10 worksheets, 6 quizzes, and 4 tests, just add up the points and divide by 20. Once you have a final percentage score, consult our grades chart above to convert to a letter grade.
There is a problem with this system, though. All of the assignments are equally important here so that if your child does poorly on a few quizzes but always pulls through on the tests, they may still come up with a low grade. To solve this, count all important tests or projects twice (or even three times). This is called weighting the test so that it counts for more of the grade.
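For those who like to see the arithmetic spelled out, here is a minimal sketch of the percentage, letter-grade, and weighting calculations described above. The 90/80/70/60 letter-grade cut-offs are a common convention used as a stand-in for the printable chart, which is not reproduced here.

```python
def percent(correct: int, total: int) -> float:
    return 100.0 * correct / total

def letter(pct: float) -> str:
    # Common 90/80/70/60 cut-offs -- an assumption standing in for the chart.
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    if pct >= 60:
        return "D"
    return "F"

def weighted_average(scores, weights):
    """Year-end grade: each score is counted 'weight' times (e.g. tests twice)."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

worksheet = percent(12, 14)                        # the 12-out-of-14 example above
year = weighted_average([86, 72, 95], [1, 1, 2])   # the test score (95) counts twice
print(round(worksheet), letter(worksheet))         # 86 B
print(round(year), letter(year))                   # 87 B
```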
Portfolios are a great way to keep track of your child’s progress. Some homeschoolers keep portfolios for their own records and others are required to by their state. There are usually four main aspects to a portfolio system:
- Journal: Keep a record of your child’s homeschool lessons in a journal. This can be as brief or as detailed as you want it to be, but be sure to write something every day or you might forget what you covered. This journal is also a good place to set goals for the upcoming year and record which text books (if any) you plan to use.
- Photo Album: Take pictures of field trips, homeschool support group meetings, competitions, projects, etc.
- Sample Papers: You can either keep all of your child’s completed papers in a large file box or simply save representative pieces throughout the year in a three-ring binder.
- Summary: Most state reviewers want a summary of what your child has accomplished over the year and it can be helpful for you as well.
Awards & Certificates
Whether you’re keeping grades, portfolios, or nothing at all, kids still love to receive home-made awards and certificates to commemorate a job well done. Check out our printable Awards for Kids.
A Note on Content Standards
So what is a grade, anyway? Well, if you think about it, it's an indication of how closely your child has met a standard. But who sets that standard? Most public schools follow set lesson plans called Content Standards, and many homeschoolers follow the same standards because they want to make sure that their kids are "keeping up" with the public schools.
Content standards are an official guide to what children of certain age groups should be taught. Every state has different standards, and every private or charter school in each state uses the standards differently. Each state sets up its standards in the way it feels is best. To get a copy of your state's content standards, call your local or district superintendent's office. Tell them that you're a homeschooling parent and you're looking for easy-to-read content standards for your state. What you'll most likely get is a binder of somewhere between thirty and fifty pages.
Use content standards as the wonderful resource tool they are. See them as a compass pointing you in the right direction, but remember that they're by no means an outline of what you have to teach your child. They can simply help you to be certain that something important isn't being left out of your children's education.
Hurricane Season is June 1 to November 30, 2017
According to Elizabeth Dunn, adjunct faculty member in the USF College of Public Health’s Department of Global Health, 12 named storms have been predicted this season to come out of the Atlantic Ocean, including six hurricanes and two major hurricanes.
“Regardless of the anticipation of an active or moderate hurricane season, knowing how to prepare you and your family for a hurricane could save lives as you reduce risks to potential hazards that may arise when a severe storm impacts the area,” said Dunn, who specializes in disaster preparedness.
Understanding common terminology used during the hurricane season can help the public be more informed and ready to respond in the event that an evacuation is issued.
“The intensity, wind speed, and size of the storm can vary and the impact to your neighborhood could be influenced by the direction in which the storm may approach the area,” Dunn said. “Listening to the forecasters, news reporters, emergency managers, and your elected officials is vital when an approaching storm is heading your direction.”
According to Dunn, there are various levels of storm strength, including tropical depressions, tropical storms, and hurricanes.
Each of these is determined by its wind speed, and the Saffir-Simpson Hurricane Wind Scale (SSHWS) is used to classify the different categories of hurricane.
Tropical depressions are storms with sustained wind speeds under 38 mph while tropical storms will reach between 39-73 mph winds. Hurricanes have winds 74 mph and greater ranging from a Category 1 (74-95 mph) to a Category 5 (> 155 mph).
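As a rough illustration of these thresholds, the sketch below encodes the wind-speed boundaries quoted above. The Category 2-4 cut-offs (96, 111, and 131 mph) are the commonly published SSHWS values and are an addition here, since the article itself only quotes the Category 1 and Category 5 ranges.

```python
def classify_storm(wind_mph: float) -> str:
    """Classify a storm by sustained wind speed, per the boundaries quoted above."""
    if wind_mph < 39:
        return "Tropical depression"
    if wind_mph < 74:
        return "Tropical storm"
    if wind_mph <= 95:
        return "Hurricane, Category 1"
    if wind_mph <= 110:   # Category 2-4 cut-offs assumed from the standard SSHWS
        return "Hurricane, Category 2"
    if wind_mph <= 130:
        return "Hurricane, Category 3"
    if wind_mph <= 155:
        return "Hurricane, Category 4"
    return "Hurricane, Category 5"

print(classify_storm(45))   # Tropical storm
print(classify_storm(160))  # Hurricane, Category 5
```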
Here are some important terms:
- Mitigation: To eliminate or reduce the impacts and risks of hazards by taking preventative measures before a severe weather event.
- Sandbags: Bags filled with sand or soil to help prevent or reduce the impact of water damage in the event of flooding in the area.
- Hurricane watch: Hurricane conditions are possible in the area when a watch is declared. At this time, homeowners and businesses should start preparing their home and developing an evacuation plan in the case there is a warning issued for the area. Watches are usually issued about 48 hours before projected occurrence of tropical storm force winds to reach the area.
- Hurricane warning: When a warning is issued for a community it means hurricane conditions are expected. During an issued warning, it is important to follow the directions of officials, and to leave the area of impact if it is advised. Warnings are typically issued 36 hours in advance of anticipated tropical storm force winds.
- Eye of the storm: The period during the storm when conditions are calmer; the eye is usually a well-defined center. People may believe that the storm is over at this moment and be lulled into a false sense of security. It is important to know that strong, destructive winds will pick back up and send debris flying once the eye wall approaches the area again.
- Eye Wall: Surrounding the eye of the storm is the eye wall. This is where some of the most severe weather occurs as this is the point in the storm with the highest wind speed and largest amount of precipitation. It is important to remain inside during the eye of the storm to ensure safety as the eye wall approaches.
- Rain bands: Bands that come off of the hurricane that can produce severe weather conditions such as heavy rain, wind and even tornadoes.
- Storm surge: Water that rises quickly and floods coastal areas and inland rivers as ocean water is pushed ashore when the storm makes landfall. Storm surge is often underestimated, as water levels increase rapidly, and it can be extremely deadly.
“Predicting a hurricane’s path can be a challenge since there are many factors that could impact the movement of the storm,” Dunn said. “The size of the storm and path can influence wind patterns that may escalate or impede the growth of the storm.”
According to Dunn, computer models are utilized by weather forecasters to manage large amounts of data that have been collected in an attempt to predict the direction of the storm.
In many cases, Dunn said, the National Hurricane Center is able to calculate the path of a tropical storm or hurricane two to three days out from landfall to an area with accuracy. They have the most up-to-date information on any developments and weather alerts to help monitor the storm as it approaches.
Get a Plan, Make a Hurricane Kit
“Keep in mind that emergency response vehicles may not respond to a 911 call if wind speeds are over 39 mph to ensure the safety of first responders,” Dunn said. “Citizens are encouraged to be prepared and self-sufficient for at least three days after a storm has passed, and ideally seven days or longer in extreme circumstances. With downed power lines, trees, and major debris, it may take a few days for roads to be cleared and bridges to be opened to allow assistance to get to your area.”
Dunn said priorities for government agencies during the first 72 hours are to get search and rescue efforts underway, establish security, and address any immediate life-safety hazards that could cause further harm.
“It could take days for humanitarian assistance from non-governmental disaster relief organizations and governmental agencies to get established,” she said.
In preparation for hurricane season, it is important to create a supply kit for use during a time of evacuation or loss of power.
“Keep in mind that once a storm is approaching many people may rush to gather the supplies they need and preparing ahead of time can alleviate the stress of having to rush out to collect items last minute and possibly dealing with items not in stock,” she said.
Dunn urges the need to know where current storm surge evacuation zones are located. It is also important to know whether a structure is secure enough to withstand a hurricane if one were to approach.
“If required to evacuate, make a plan for where you would go and how you would get there. Many local public transportation lines have evacuation routes to get residents to pre-identified hurricane shelters,” she said. “Many local shelters do not allow pets unless you have pre-registered to come to a designated pet-friendly shelter. Lastly, make a point to sit down with family members to talk through your plan of action in case you are unable to get in contact with each other and are separated due to the storm.”
When creating a kit, the bag should contain items that are easily portable.
“Ensure to pack items that are needed for those in your care including small children, the elderly, those with medical needs and, of course, pets,” Dunn said.
Some recommended items to include in the kit are:
- Non-perishable food to last at least three days per person
- Water, one gallon of water per person per day for at least three days for drinking and sanitation
- First-aid kit with prescription medications that may be needed
- Personal hygiene and sanitation items (i.e. moist towelettes, garbage bags, soap, bleach)
- Dust mask to help filter contaminated air and plastic sheeting and duct tape to shelter-in-place
- Manual can opener for canned foods
- Lighter or matches
- Flashlights with extra batteries
- Battery operated or hand crank radio with extra batteries
- Local maps
- Chargers or solar chargers for cell phone
- Pet supplies and baby supplies, if applicable
- Cooler with ice packs
- Waterproof container with cash and important documents
- Family and friends addresses and phone numbers
- Bank cards, credit card numbers and phone numbers
- Copy of passport, driver’s licenses, social security cards, credit cards
- Adoption/foster records or naturalization/immigration documents if applicable
- Insurance information
- Immunization records
- Current prescription list
- Deed and titles for home and/or vehicles, or lease information
- Birth certificates, wedding licenses, wills, death certificates
Mitigation: Protect Your Home
Hurricane mitigation comprises the actions and measures that are taken before a hurricane strikes in an effort to protect a structure, Dunn said.
Knowing if a structure is vulnerable to storm surge, flooding, and wind is the first step to assessing the potential risks in the case of an approaching storm.
There are online hazard and vulnerability assessment tools that are available to identify risks and to take the proper mitigation measures to secure structures.
Check for flood risk with the FEMA Flood Map, or rate flood risk with the FloodSmart.gov portal.
“Homeowners can take steps to protect their property and help alleviate the impact of storm surge, flooding, and wind damage by taking into consideration protective measures to reduce vulnerability,” she said.
Some of these measures include the following:
- Utilize sandbags to create a barrier to stop floodwater from entering a structure.
- Cover all windows, either with hurricane shutters or plywood
- Although tape can prevent glass from shattering, keep in mind that this does not prevent windows from breaking
- If possible, secure straps or clips to securely fasten the roof to the structure of the home
- Make sure all trees and shrubs are trimmed to reduce potential debris and clear rain gutters
- Reinforce garage doors, if applicable
- Bring in all outdoor furniture, garbage cans, decorations, and anything else that is not tied down. Clearing the yard of these items will reduce the amount of debris that can fly into the home or vehicle
Flood Water Safety
Storm surge and flooding are a major threat to communities, while high winds allow debris to damage structures or cause personal injuries, according to Dunn.
“During a hurricane, the number one cause of death is drowning due to storm surge and flooding. In the City of Tampa, some neighborhoods could actually experience 20 foot storm surge,” she said.
Dunn said during Hurricane Katrina, some areas experienced upwards of 28-foot storm surges along the Louisiana and Mississippi coast that led to high mortality and morbidity rates within the neighborhoods that were impacted by the flooding. With Hurricane Sandy, areas along the coast experienced 6- to 10-foot storm surge with waves reaching up to 29 feet.
“First and foremost, it is important to know that the water currents from storm surge and flash flooding are too powerful for even the strongest swimmers,” Dunn said. “There are various risks to consider when navigating water with low visibility, including the fact that in many cases the flood waters conceal storm drains, uncovered manholes, or potholes that people can fall into, leading to injury or death by drowning.”
Electrical lines could be down following strong winds and severe storms.
If there is flooding with live downed power lines on the ground, the area may become charged, causing electric shock.
Snakes, rats, and biting insects may also be present in flood waters and pose a threat.
Furthermore, being trapped in a vehicle could be a major concern as it is a leading cause of death during flooding events.
“It is important to avoid flooded intersections; making the decision to turn around and find another route to your final destination could save your life and protect your property. Water depths as low as 6 to 10 inches are strong enough to ruin an engine or carry away your vehicle if the flood waters are moving fast,” according to Dunn.
According to Dunn, another consideration is that flood waters may contain chemicals released into the waterway by storm water runoff, and products stored in homes, garages, and local businesses can end up in the flood waters as well.
Fecal matter and other by-products of sewage are often detected, so wading through flood waters should be avoided if possible.
In the event of severe weather that cuts power, there are a few points of consideration for being prepared and staying safe beyond typical hurricane preparedness measures.
- Gas: Keep gas tank full in advance of an approaching storm since most people wait until the last minute—when everyone then rushes to get extra gas reserves for their vehicles and generators, gas stations can run out early
- ATMs: Make sure to have extra cash on hand in the event electricity goes out and ATMs are not working
- Cell phones: Charge cell phone and use it sparingly after power is out
- A/C: Losing power during a storm will result in the loss of A/C. Reduce the amount of light coming into the home to keep temperatures down—exposure to extreme heat can result in heat stroke, heat exhaustion, heat cramps, or heat rashes
- Water: Toilets will stop working when the electricity is out—fill bathtubs and large containers with water for washing hands and flushing only, not for consumption
- Food: Freeze any food or drinking water that can be frozen if a potential power outage could occur—check out this guide on properly freezing food: Freezing and Food Safety
- Health/Safety: Indoor use of portable generators, charcoal grills, or camp stoves can lead to carbon monoxide poisoning.
“Remember, preparation ahead of time and listening to the advice of officials in regards to evacuations is one of the best practices for preventing unnecessary injuries or death,” Dunn said. “Whether experiencing tropical storm wind speeds or a category 5 hurricane, any storm can be fatal and destructive. Preparing a hurricane kit, securing your home, finding a safe shelter to ride out the storm, and knowing how to remain safe after the hurricane has passed are essential.”
Story by Elizabeth Dunn, USF College of Public Health
These collisions had three major results. The first was that the universe reached a condition called thermal equilibrium. To give you an idea about what this is we'll look at a glass of water at 40 degrees. The temperature of an object is a reflection of the amount of energy present in that sample of matter. However, not every molecule present has the energy that corresponds to 40 degrees. The total energy is actually spread out over a range of energies, so that there are some that have more energy than the corresponding temperature, and others that have less. This is what it looks like on a graph:
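The graph referred to here is not reproduced. As a rough stand-in, the short sketch below (an illustration added to this copy, assuming the intended curve was the Maxwell-Boltzmann energy distribution) prints a text profile of how molecular energies spread around the average; energies are expressed in units of kT, so no particular temperature scale needs to be assumed.

```python
import math

def boltzmann_energy_pdf(x):
    """Probability density for molecular kinetic energy E = x * kT
    (Maxwell-Boltzmann form, assumed here as the shape of the missing graph)."""
    return (2.0 / math.sqrt(math.pi)) * math.sqrt(x) * math.exp(-x)

# Crude text plot: the distribution peaks well below the average energy (1.5 kT)
# and has a long tail of molecules carrying much more than the average.
for x in [0.1, 0.5, 1.0, 1.5, 2.0, 3.0, 5.0]:
    bar = "#" * int(40 * boltzmann_energy_pdf(x))
    print(f"E = {x:>3.1f} kT  |{bar}")
```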
These molecules are constantly colliding with the surrounding molecules, and this results in exchanges of energy. This causes the number of molecules at certain energies to change. If something is in thermal equilibrium, every molecule that changes its energy level will result in another molecule changing its energy level to replace it (not necessarily in the same collision). In a way, this means that the energy present in a system is spread out among particles in such a way that the population of molecules at different energies doesn't change, even though the molecules themselves are constantly changing energy levels. In the early universe, as a result of the rapid collisions between particles, there was a state of thermal equilibrium. The reason this is so important is that things in thermal equilibrium can be quantified. This means that the system can be described by mathematical formulas, and that predictions can be made as to how the system would change with time. Therefore, we can follow the evolution of the early universe through these formulas, even though we weren't there.
The other two consequences of these collisions involve interactions between particles as they collided.
The first interaction to be considered was the constant annihilation and re-creation of electrons and positrons. One of the most famous scientific discoveries of this century is the equivalence of matter and energy. The basic concept is that under the proper conditions, energy can be turned into matter, or vice versa. This is not something common to our experience because of the conditions in which we now live (it's too cold and there's not enough pressure). But in the early universe, with its high temperature and density, this was common. Photons were converted into electrons and positrons. (Known as PAIR PRODUCTION) They could not be converted into heavier particles (protons and neutrons) because they didn't have enough energy. These electrons and positrons would eventually collide with their respective anti-particle, and then be changed back into radiation. (Referred to as ANNIHILATION)
The second interaction was the conversion between protons and neutrons. These heavier atomic particles were already present In the Beginning. They were continually changing back and forth by means of the following two reactions:
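The reactions themselves are not reproduced in this copy of the text. The pair usually given for this interconversion (restored here as an assumption about what the original showed) are the weak interactions

$$ n + \nu_e \;\leftrightarrow\; p + e^- , \qquad\qquad n + e^+ \;\leftrightarrow\; p + \bar{\nu}_e . $$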
In the beginning, because of the high energy density, the collisions between particles happened so rapidly that the proton- and neutron-creating reactions balanced each other out, and the relative numbers of protons and neutrons, though small, were equal. But the equality between protons and neutrons was broken almost immediately. A neutron is slightly heavier than a proton. Therefore, it requires a little more energy to change a proton into a neutron than vice versa. Initially this didn't matter because there was plenty of energy to go around. But, because the energy density was decreasing as the universe expanded, there was less energy available for each collision. This started to tip the balance in favor of the proton-forming reactions. This led to an increase in the number of protons compared to neutrons, and as the temperature dropped more, this effect became more exaggerated. (The final numbers will be mentioned later on.)
According to the formulas, at 13.82 seconds after the Beginning, the temperature had dropped to 3,000,000,000 K. At this point there was a drastic reduction in the population of electrons and positrons. The reason for this was once again the expansion of the universe. As electrons and positrons were annihilated, the radiation that formed was stretched (specifically its wavelength) by the growing universe. This reduced the energy carried by the photons below the level which would allow them to be converted back into electrons and positrons.
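A compact way to state this (added here as a brief aside, not part of the original text): a photon's energy is $E = hc/\lambda$, and expansion stretches the wavelength in proportion to the scale factor $a$ of the universe, so

$$ \lambda \propto a \quad\Rightarrow\quad E = \frac{hc}{\lambda} \propto \frac{1}{a}, $$

which is why the photons eventually fell below the threshold energy for pair production.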
Up to this time (just over three minutes past the Beginning) there had been no nucleosynthesis. This was a result of the high energy density. In order to form atomic nuclei, the nucleons (the scientific word for protons and neutrons) must be able to collide and stick together. In the early universe the key reaction was the collision of a proton and a neutron to form a deuterium nucleus (an isotope of hydrogen). Collisions between protons and neutrons had been happening continuously since the Beginning, but their energies were too high to allow them to stick together to form deuterium nuclei.
This prevented further nuclear reactions leading to heavier nuclei. This type of situation where an intermediate product is the weak link in the overall synthesis is sometimes called a "bottleneck." This concept also applies in nucleosynthesis of heavier elements. Once the bottleneck is overcome, the remaining reactions are able to be completed. In the early universe, once the deuterium bottleneck was cleared, the newly formed deuterium could undergo further nuclear reactions to form Helium.
This could happen by means of two different reaction pathways described below.
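The pathways are not shown in this copy; the two chains usually cited for this step (an assumption about what the original listed) are

$$ d + n \rightarrow {}^{3}\mathrm{H} + \gamma, \qquad {}^{3}\mathrm{H} + p \rightarrow {}^{4}\mathrm{He} + \gamma $$
$$ d + p \rightarrow {}^{3}\mathrm{He} + \gamma, \qquad {}^{3}\mathrm{He} + n \rightarrow {}^{4}\mathrm{He} + \gamma $$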
He nuclei were the heaviest to form. This was the result of the energy density being too low to allow heavier nuclei to collide with enough energy to stick. At the time that nucleosynthesis began, the relative abundance of protons to neutrons was 13% neutrons and 87% protons. When nucleosynthesis began, all the neutrons present were incorporated into He nuclei. When all the neutrons were used up, the remaining protons remained as hydrogen nuclei. So, when this first wave of nucleosynthesis was completed, the universe consisted of roughly 25% He and 75% H (by weight).
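The 25%/75% split can be checked with a short calculation. Each helium-4 nucleus binds two neutrons and two protons, so every neutron effectively carries one proton with it into helium, and the helium mass fraction is roughly twice the neutron mass fraction:

$$ Y_{\mathrm{He}} \;\approx\; 2 \times 0.13 \;=\; 0.26 \;\approx\; 25\%, \qquad Y_{\mathrm{H}} \;=\; 1 - Y_{\mathrm{He}} \;\approx\; 75\% . $$

(This neglects the small proton-neutron mass difference and the nuclear binding energy, which is why the figure is approximate.)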
Below is a graphical summation of nucleosynthesis in the early universe. The graph shows the relative abundances of different nuclei (vertical axis) during the first three hours of creation. The horizontal axis has been labeled using both time (top) and the equivalent temperature (bottom). For those not used to using a logarithmic scale, a dashed line has been added at the 1% abundance level. Anything below this line would be less than 1% of the total mass present.
As can be seen from the curves, at the higher temperatures only neutrons and protons exist, with there being more protons than neutrons. But, as the temperature decreases, there is an increase in the amount of deuterium and helium nuclei. Just below 1 billion degrees there is a significant increase in deuterium and helium, and a decrease in the abundance of protons and neutrons. This is the deuterium bottleneck mentioned previously. This uses up all the free neutrons and some protons, and causes the neutron line to drop off and the proton line to dip (relatively few protons are used up). The deuterium abundance only increases to a point because it is an intermediate to the formation of helium. So as it is created, it is quickly consumed to complete the process of helium nucleosynthesis. Once all the neutrons have been used up, its presence drops off.
The final step in the formation of elements was capture of the proper number of free electrons to form neutral atoms.
But, the remaining electrons still had plenty of energy, so it took about 700,000 years of cooling until this was able to occur. The capture of electrons to form atoms resulted in an important change in the universe. At that moment, without free electrons to interact with the photons present, the universe became transparent to radiation. This means that the photons were freely able to expand with the universe. These photons had high energies, which means that they had short wavelengths. But the expansion of the universe caused the wavelengths to get stretched out as the universe grew. These stretched out photon wavelengths are what we now refer to as the Cosmic Microwave Background (CMB). They are a leftover from the Big Bang. We have been able to measure the intensity of this background radiation, and it has closely matched that which is predicted from theoretical calculations. This has been strong evidence in support of the "Big Bang" theory of the creation of the universe.
“The systematic assessment of student learning outcomes is essential to monitoring quality and providing the information that leads to improvement.” -Middle States Standard XIV
STUDENTS & CLASSROOM ASSESSMENT
The framework of assessment is the educator's task of assessing student learning from all aspects, using both direct and indirect methods. We often assess students through exam results converted into grades, and sometimes we fail to acknowledge the factors that affect the results of the assessment. Assessment is the teacher's diagnosis of the student. It is a process of coming to understand the student's current learning needs well enough to plan for the best possible instructional processes and outcomes for each learner whose academic welfare is the teacher's full responsibility. Unfortunately, teachers often do prescribe without a diagnosis. To start with our classroom management, it is important to think about assessment as an instrumental element of classroom practice.
Classroom assessment is the process of collecting, synthesizing, and interpreting information in a classroom for the purpose of aiding a teacher’s decision making. It includes a broad range of information that helps teachers understand their students, monitor teaching and learning, and build an effective classroom community. Teachers use assessment to do the following: diagnose student problems, make judgments about student academic performance, form student work groups, develop instructional plans, and effectively lead and manage a classroom. There are essentially two kinds of classroom assessments: formative and summative. Formative assessment is sometimes called on-going assessment. It is a process used to guide, mentor, direct, and encourage student growth. Teachers use on-going or formative assessment to consistently monitor students’ developing knowledge, understanding, and skill related to the topic at hand in order to know how to proceed with instruction in a way that maximizes the opportunity for student growth and success with key content. An assessment can be considered formative if a teacher gathers evidence about student performance, interprets the evidence, and uses the evidence to make decisions about next steps in instruction that are likely to be better focused or informed than the decisions would have been without the evidence. Formative assessment implies a pragmatic intent—to improve the precision of instructional plans; and an immediacy—to improve those plans in the near term. Summative assessment, by contrast, has a different tone and purpose than formative assessment. Whereas the intent of formative assessment is to help teachers and students change course when warranted to improve instructional outcomes, summative assessment is intended to measure and evaluate student outcomes. Thus whereas formative assessment should rarely be graded, summative assessment suggests that a grade will be given and a student’s performance will be evaluated based, to some degree, on the information produced.
Effective differentiation requires teachers to assess student status before a unit of study begins. A diagnostic assessment helps determine a student’s starting point with learning targets as well as with prerequisite knowledge, understandings, and skills that are essential to continued progress in a content sequence. Pre-assessment is also useful in developing awareness about students’ interests and learning preferences. Formative assessment lets teachers closely monitor a student’s evolving knowledge, understanding, and skills—including any misunderstandings a student may have or develop about key content. As with diagnostic assessment, formative assessment also plays a role in revealing students’ various interests and approaches to learning. Summative assessment evaluates a student’s status with the learning targets at designated endpoints or checkpoints in a unit of study. Assessment in an effectively differentiated classroom will be both informal and formal. Informal assessments include things like talking with students as they enter and leave the room, observing students as they work on a task or in groups, watching students on the playground or at lunch, asking students to use hand signals to indicate their degree of confidence with a skill they have just practiced, or making note of informative comments made by parents at a back-to-school night. Informal assessments are useful in giving a teacher a sense of what makes a student tick, providing a big-picture look at how the class as a whole seems to be doing at a given moment, and amassing a growing sense of how specific students work in particular contexts.
Students vary in at least three ways that affect learning: readiness, interest, and learning profile. Readiness has to do with a student’s current proximity to current learning targets; interest has to do with topics, ideas, or skills that attract a student, generate enthusiasm, or align with a student’s passion; and learning profile relates to a preferred mode of learning or learning preference. Teachers can better focus their planning if they understand their students’ differences in these areas; therefore, teachers should assess all three. Of the three, understanding student readiness calls for more persistent assessment and analysis of assessment information in order to plan curriculum and instruction that moves each student forward from his current point of record.
Assessment of learning is summative and is especially useful in determining the degree to which a student has mastered an extended body of content at a concluding point in a sequence of learning. Summative assessments result in grades that should reveal that degree of mastery. Assessment for learning, in contrast, emphasizes a teacher’s use of information derived from assessments to do instructional planning that can effectively and efficiently move students ahead from their current points of knowledge, understanding, and skill. It can also be useful in understanding and addressing students’ interests and approaches to learning. Assessment for learning should rarely be graded, and feedback that helps students clearly understand areas of proficiency and areas that need additional attention is generally more useful than grading, because students are still practicing and refining competencies, and untimely grading or judgment creates an environment that feels unsafe for students to engage in learning.
Effective classroom management demonstrates important connections between assessment and the learning environment, and between assessment and classroom leadership. When teachers regularly use assessment to help students develop competence and a sense of independence rather than to judge them, the environment feels safer and more predictable to students. When teachers help students understand that differentiated tasks often branch from assessment information, students come to understand that the teacher’s goal is to help each learner take the next appropriate step in learning. With clear and dynamic learning goals, student progress monitored by persistent formative assessment, and instruction tailored to extend the likelihood that each student will develop the proficiency necessary for growth, a student’s prospects for success are greatly enhanced, even when the summative or more judgmental aspects of assessment come into play.
Curriculum Standards for Social Studies
National Council for the Social Studies
Theme II: Time, Continuity and Change
- Standard A - The student demonstrates an understanding that different scholars may describe the same event or situation in different ways but must provide reasons or evidence for their views.
- Standard C - The student identifies and describes selected historical periods and patterns of change within and across cultures, such as the rise of civilizations, the development of transportation systems, the growth and breakdown of colonial systems, and others.
Theme VI: Power, Authority, and Governance
- Standard B - The student describes the purpose of government and how its powers are acquired, used, and justified.
- Standard E - The student identifies and describes the basic features of the political systems in the United States, and identifies representative leaders from various levels and branches of government.
Theme X: Civic Ideals and Practices
- Standard A - The student examines the origins and continuing influence of key ideals of the democratic republican form of government, such as individual human dignity, liberty, justice, equality, and the rule of law.
- Standard C - The student locates, accesses, analyzes, organizes, and applies information about selected public issues - recognizing and explaining multiple points of view.
- Standard D - The student practices forms of civic discussion and participation consistent with the ideals of citizens in a democratic republic. |
The region of space within our Solar System is called interplanetary space, also known as the interplanetary medium. Most people are so fascinated by the planets, the Sun, and other celestial objects that they do not pay any attention to the space between them. After all, there is nothing in outer space, right? A common misconception is that outer space is a perfect vacuum, but there are actually particles in space, including dust, cosmic rays, and hot plasma spread by the solar wind. Particles in interplanetary space have a very low density, approximately 5 particles per cubic centimeter around Earth, and the density decreases farther from the Sun. The density of these particles is also affected by other factors, including magnetic fields. The temperature of the interplanetary medium is about 99,727°C.
Interplanetary space extends to the edge of the Solar System where it hits interstellar space and forms the heliosphere, which is a kind of magnetic bubble around our Solar System. The boundary between interplanetary space and interstellar space is known as the heliopause and is believed to be approximately 110 to 160 astronomical units (AU) from the Sun. The solar winds that blow from the Sun, and are part of the material in interplanetary space, flow all the way to the edge of the Solar System where they hit interstellar space. The magnetic particles in these solar winds interact with interstellar space and form the protective sphere.
The way that interplanetary space interacts with the planets depends on the nature of the planets’ magnetic fields. The Moon has no magnetic field, so the solar winds can bombard the satellite. Astronomers study rocks from Earth’s Moon to learn more about the effects of solar winds. So many particles have hit the Moon that it emits faint radiation. Some planets, including Earth, have their own magnetospheres where the planets’ magnetic fields override the Sun’s. The Earth’s magnetic field deflects dangerous cosmic rays that would otherwise damage or kill life on Earth. Material leaking from the solar winds is responsible for auroras in our atmosphere. The most famous aurora is the Aurora Borealis, which appears in the sky and is only visible in the Northern Hemisphere.
The interplanetary medium also causes a number of phenomena, including the zodiacal light, which appears as a faint, broad band of light seen only before sunrise or after sunset. This light, brightest near the horizon, occurs when sunlight bounces off dust particles in the interplanetary medium near Earth. In addition to interplanetary space, there is also interstellar space, which is the space in a galaxy between stars.
Astronomy Cast has an episode on the heliosphere and interstellar medium. |
Gametes are mature sexual cells with unpaired chromosomes. The gamete is the basic male or female sexual unit in reproduction. Both the egg and the sperm are forms of gametes, with specialized biological structures and functions for their roles. The gametes fuse, forming a zygote as the initial cell after combination. In humans and other mammals, every egg carries an X chromosome, while a sperm carries either an X or a Y; the zygote develops as female (XX) or male (XY) depending on which chromosome combination results from fusion.
Examples of Gametes:
Egg and sperm (mammals)
Algae and fungi: Identical gametes
Plant pollination duplicates the same basic process, but plants may either specialize sexually or produce both types of gametes.
The human sex cells (gametes) are produced by meiosis. |
Heat and light are both different types of energy. Light energy can be converted into heat energy. A black object absorbs all wavelengths of light and converts them into heat, so the object gets warm. A white object reflects all wavelengths of light, so the light is not converted into heat and the temperature of the object does not increase noticeably.
Different wavelengths (colors) of light have different amounts of energy. Violet light has more energy than red light. If we compare an object that absorbs violet light with an object that absorbs the same number of photons (particles of light) of red light, then the object that absorbs violet light will absorb more heat than the object that absorbs red light.
The amount of heat absorbed is also affected by how light or dark an object is. A dark object of a given color will absorb more photons than a light object of the same color, so it will absorb more heat and get warmer.
Note about how the color of an object appears: the color an object appears is the complementary color to the color the object absorbs. If an object absorbs yellow light, then it will reflect all of the other colors of light and it will look violet.
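As a rough check on the red-versus-violet comparison, the sketch below computes photon energies from E = hc/λ. The 700 nm (red) and 400 nm (violet) wavelengths are representative values assumed for this illustration, not taken from the text.

```python
H = 6.626e-34   # Planck constant, J*s
C = 3.0e8       # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """E = h*c / wavelength, with the wavelength given in nanometres."""
    return H * C / (wavelength_nm * 1e-9)

red = photon_energy_joules(700)     # ~2.8e-19 J per photon
violet = photon_energy_joules(400)  # ~5.0e-19 J per photon
print(f"violet/red energy ratio: {violet / red:.2f}")  # ~1.75
```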
Life: September 2, 1883 – August 11, 1957
Why you should know him: Rudolf Weigl was a Polish biologist who not only invented the vaccine for epidemic typhus, but also saved the lives of countless Jews during the Holocaust.
Born in 1883 in Moravia (then part of the Austro-Hungarian Empire), Rudolf Weigl later moved to Lwow, Poland (now Lviv, Ukraine), where he completed his training in biology, zoology, and anatomy.
Dr. Weigl's work was in the field of vaccinations. During the 1930s and 1940s, he developed the first successful vaccine for typhus, a disease that ravaged Europe – particularly the ghettos and camps that imprisoned the continent's Jews.
Dr. Weigl's research came to the attention of the Nazis after Poland was occupied. The Nazis ordered Dr. Weigl to set up a vaccine production plant. This Nazi-mandated facility, however, would save many from the Nazis, as Dr. Weigl employed many of Poland's Jews and members of the country's underground at his institute. The vaccines themselves also saved many of Poland's Jews, as doses were smuggled into the disease-ravaged ghettos of Lviv and Warsaw.
While Dr. Rudolf Weigl died in 1957, Israel honored him for the numerous Jewish lives his actions and his vaccine saved by naming him Righteous Among the Nations in 2003. |
Enzymes are vital for processes in our body; without them, these processes could not take place. What exactly are enzymes? We have many enzymes in our body, from our saliva to our pancreas. Enzymes are specialized proteins produced by living cells to catalyze reactions in the body. Protein in the form of an enzyme acts as a catalyst: a substance that affects the speed of a reaction without being permanently altered by the reaction. For a chemical or biochemical reaction to occur, a certain amount of energy is required, called the activation energy. Energy can be transformed from one state to another. The role of an enzyme is to decrease the amount of energy needed to start the reaction. Exactly how enzymes lower activation energies is not completely and fully understood, but it is known that an enzyme attaches itself to one of the reacting molecules; this is called an enzyme-substrate complex. Thousands of enzymes exist, but each kind can attach ONLY to one kind of substrate. The enzyme molecule must fit exactly with the substrate molecule (just as pieces in a jigsaw puzzle have to fit in their specific space in the picture). If the substrate and enzyme don't fit properly, no reaction takes place. When they do fit perfectly, the substrate molecule can react with other molecules in a synthesis reaction, and when the reaction is completed the enzyme is free to move on elsewhere to connect with another substrate molecule. This whole process takes place quickly. Clearly, enzymes are essential to the body's overall homeostasis. (In order to lead a healthy life, we need to bring balance to the way we lead our lifestyle. Homeostasis is a mechanism that helps the human body maintain a balance between the internal and external environment.) Enzymes quickly catalyze chemical reactions and they also govern the reactions that occur. Enzymes are named by adding the suffix "ase" to the name of their substrates. For example there is:
The breaking down of starches = amylase. (Know this about amylase: it is present in human saliva, where it begins the chemical process of digestion that starts in our mouth. Foods that contain much starch but little sugar, such as rice and potato, taste slightly sweet as they are chewed because amylase turns some of their starch into sugar in the mouth. The pancreas also makes amylase (alpha-amylase) to hydrolyse dietary starch into disaccharides and trisaccharides, which are converted by other enzymes to glucose to supply the body with energy. There are also beta- and gamma-amylases. The end product of enzymes breaking down starches or carbohydrates is one thing only: sugar.)
The breaking down of sugars, like sucrose = sucrase. The end result is that complex sugars are broken down into simpler sugars in the body.
The breaking down of fats (lipids) = lipase. Lipases perform essential roles in the digestion, transport, and processing of dietary lipids (for example triglycerides, fats, and oils) in most if not all living organisms. Most lipases act at a specific position on the glycerol backbone of the lipid substrate (A1, A2, or A3 in the small intestine). For example, human pancreatic lipase (HPL), the main enzyme that breaks down dietary fats in the digestive system, converts triglyceride substrates found in ingested oils into monoglycerides and two fatty acids. Know that glycerol is a simple sugar compound. Enzymes deal with breaking down our foods because they take a major role in the process we call digestion in the human body, and notice what the end result of nearly every ingredient from three of our food groups is: SUGAR. That is partly because the food already contains some sugar, but more importantly the chemical reaction with the enzyme breaks the food down into smaller, simpler sugar compounds that can be utilized in the body, which also plays a part in the entire digestion process.
So know that sugar in the body is our fuel for energy. As for our digestion process, it works like this: when the body gets a meal, digestion starts in the stomach within about an hour and is complete in 6 to 8 hours, depending on how large the meal is, especially with 3 large meals a day. Foods containing starches, fats, and lipids all break down to simple sugars that transfer to the bloodstream, and whatever energy the body needs at that point the tissues and cells utilize. When enough sugar has been used and there is excess in the blood, the body stores the extra sugar, first converting the glucose (active sugar) to glycogen (inactive sugar) in our liver. The liver is only so big, and when it reaches its optimal level of storage the sugar gets stored in our fat tissue = WEIGHT GAIN. This is the problem with people in America not understanding this process. Plus, as most people get older, from 30 to 40 years old and every 10 years after that, we put cellulite on the body for 2 major reasons: not eating as healthily, because fitting into a bikini or speedo is no longer the priority in life and getting the feet up after a hard day's work is; and not being as active as when we were 20 or 30 years old, so the metabolism naturally slows down, unless you're a Jack LaLanne.
How do we deal with this to prevent obesity? Do what I did and go on a 6-small-meal diet. Eat a meal every 3 hours, keeping fat, calories/sugar, and carbohydrates in proper proportions to prevent excess sugar in the meals and so not allow fat storage = weight gain. Of course, some exercise or activity daily or every other day helps tone the muscle and keep it from going flabby due to cellulite. Make healthier habits of living not a month, 3 months, or 6 months, but your daily routine, while treating yourself to foods you don't eat daily, to maintain a good weight and increase your health status and allow you to live a happier, longer, and more exciting life. Dr. Anderson's book "Dr. A's Habits of Health" is a great one to check out, with so many others, and then the network. You learn how all 4 food groups are divided up in your meals.
Let's not forget that enzymes also break proteins down in our body. The breaking down of proteins = trypsin. Proteins are large biological molecules consisting of one or more chains of amino acids. Proteins perform a vast array of functions within living organisms, including catalyzing metabolic reactions, replicating DNA, responding to stimuli, and transporting molecules from one location to another. Trypsin is an enzyme catalyst, which allows the catalysis of chemical reactions. The end product of this breakdown is amino acids, not sugar. Know that staying on a high-protein diet continuously for years can also hurt the body.
There are risks with eating just high protein diets for long periods of time. You put yourself at risk for: Osteoporosis: Research shows that women who eat high protein diets based on meat have a higher rate of bone density loss than those who don’t. Women who eat meat lose an average of 35% of their bone density by age 65, while women who don’t eat meat lose an average of 18%. In the long run, bone density loss leads to osteoporosis.
Kidneys: A high protein diet puts strain on the kidneys. It is well known that patients with kidney problems suffer from eating a high protein diet, due to the high amino acid levels. A high-protein diet may worsen kidney function in people with kidney disease because your body may have trouble eliminating all the waste products of protein metabolism.
However, the risks of using a high-protein diet with carbohydrate restriction for the long term are still being studied. Several health problems may result if a high-protein diet is followed for an extended time:
Some high-protein diets restrict carbohydrate intake so much that they can result in nutritional deficiencies or insufficient fiber, which can cause health problems such as constipation and diverticulitis.
Some high-protein diets promote foods such as red meat and full-fat dairy products, which may increase your risk of heart disease.
If you want to follow a high-protein diet, do so only as a short-term weight-loss aid. Also, choose your protein wisely. Good choices include fish, skinless chicken, lean beef, pork and low-fat dairy products. Choose carbs that are high in fiber, such as whole grains and nutrient-dense vegetables and fruit.
It’s always a good idea to talk with your doctor before starting a weight-loss diet. And that’s especially important in this case if you have kidney disease, diabetes or other chronic health condition.
So if you want to continue on a high-protein diet longer than 6 months, know how to alkalize the body's chemistry to offset the protein load; there are supplements that can do that, available through the pharmacy or even online.
Before changing your diet check with your doctor to make sure its cleared ok by the doctor since he knows your entire medical history. |
“Our first teacher is our own heart”
Introduction: The people known as the Quinault Indian Nation lived on the Olympic Peninsula as members of individual family groups thousands of years before a small portion of their ancient lands became the Quinault Indian Reservation. They lived off the land in harmony with nature, their spirits acquiring strength from many bonds to the creatures and plants, which shared the environment. Material needs were met by the ocean, rivers, and land. The Tribe used chitem, the Western Red cedar, to make longhouses for shelter, canoes for transportation, baskets for storage, and clothing for their bodies. The rivers and beaches were highways to move from place to place in pursuit of food and commerce. Their longhouses, sheltering family groups, were built along the river banks convenient to the abundant salmon which were so important to their lives.
Land: The land was blessed with food and the people were one of the most successful hunting and gathering societies. They harvested fish, whales, and seals from the sea; clams, mussels, and sea bird eggs from the beaches; elk, bear, and other animals from the forests and meadows; and berries, tea and roots from the prairies. Despite the ready abundance of these foods, their lives were closely tied to the salmon which returned to the rivers each year. The runs of Chinook, coho, chum, steelhead, and blueback were the basis of the culture and economy.
Government: No formal structure of government was needed. The people relied upon tradition and loyalty and conscience for social order. Those who lived along the coast had the most contact with others who shared the same watershed drainage’s and together, they formed a loosely knit, larger body of organization that came to be regarded by other governments as a tribe.
Resources: Since resources were so plentiful, farming was not essential for survival, and land ownership as such was virtually unknown. They shared the land and its resources with one another. The land was there for all to use, but it belonged to no one. There was considerable rivalry between tribes however, and territories were to be respected. The present boundaries of the Quinault Reservation were established by executive order of President Grant in 1873.
Explorers: Beginning in the late 1700s, the Quinault way of life began to change drastically and quickly when Spanish, English, and Russian explorers searched the Pacific Coast for furs and the mythical Northwest Passage.
Changing Lands: Allotments on the Quinaielt Indian Reservation in Washington began in 1905 under the General Allotment Act of 1887. The first roll of 119 names was submitted in 1907, the second of 327 names in 1908, and the third of 300 names in 1910. Thus, 748 Indians were allotted on the Quinaielt reservation before the passage of the Act of March 4, 1911.
The Treaty: In 1855, the Treaty of Olympia was signed by the Quinault and Quileute and their bands, the Hoh and Queets; they ceded nearly a third of the Olympic Peninsula to the United States in exchange for a "tract or tracts of land sufficient for their wants". The Quinault Indian Nation was formed from the original signers of the treaty: the Quinault, Queets, Hoh, and Quileute. During this time, representatives, interpreters, certifiers, witnesses, and members of the Quinault Reservation took part in handling the affairs of the Quinault Reservation. In 1921 the Superintendent of Indian Affairs requested that the Quinault Tribe form a Council to administer the affairs of the Quinaielt Reservation. A Council was elected; the first officers were: President, Harry Shale; Secretary, Frank W. Law; Treasurer, William Garfield. The Constitution and by-laws were written. (Blanche S. McBride, Elder, and Lelani Jones-Chubby, Museum)
Location and Lands: The Quinault Indian Reservation is located on the Pacific coast in Northwest Washington State. It is a land of magnificent forests, swift-flowing rivers, gleaming lakes and 23 miles (37 kilometers) of unspoiled Pacific coastline. Its boundaries enclose over 208,150 acres (84,271 hectares) of some of the most productive conifer forest lands in the United States.
Resources: Located on the southwestern corner of the Olympic Peninsula, its rain-drenched lands embrace a wealth of natural resources. Conifer forests composed of western redcedar, western hemlock, Sitka spruce, Douglas-fir, Pacific silver fir and lodgepole pine dominate upland sites, while extensive stands of hardwoods, such as red alder and Pacific cottonwood, can be found in the river valleys. Roosevelt elk, black bear, blacktail deer, bald eagle, cougar, and many other animals make these forests their home.
Villages & Neighbors: The reservation is located within a seasonal tourist haven surrounded by the rainforests of the Olympic National Park, the National Forest Service, and the Quinault Lake. The Reservation is primarily contained within Grays Harbor County, with a small portion also in Jefferson County. The Reservation includes three primary villages: Taholah and Queets (primarily Native) and Amanda Park (primarily non-Native). There are also several areas with very small but growing populations. The recent construction of approximately 45 homes in a new area called "Qui-Nai-Elt Village" is, perhaps, the start of a new Native community.
Demographics: According to the 2010 Census, the Reservation population is comprised of 1,408 residents, of which are between the ages of 19 and 65. Over 73% are native residents. Overall, the QIN is comprised of close to 2,900 tribal members. Roughly half of its members live off-Reservation and half live on-Reservation. A majority of the off-Reservation members live within 60 miles of the Reservation in the towns of Moclips, Pacific Beach, Hoquiam, Aberdeen, and Ocean Shores.
Poverty Rate: In 1990 the poverty rate for all residents of the Quinault Indian Reservation was 31%, with 425 persons living in households with income below the poverty line. For American Indians living on the reservation the poverty rate was 37%, with 375 persons living in households with income below the poverty line. Key contributors to the high poverty rate are 1) the rural location of the reservation; 2) a timber industry that has been poor and declining for decades; and 3) declining fish runs due, in large part, to declining river spawning habitat above and off the reservation. These poverty numbers will change once the 2010 census is fully tabulated and available to the public.
Tribal Government: The Quinault Indian Nation is governed by the Quinault Business Committee (QIN Tribal Council). It consists of the president, vice president, secretary, treasurer, and seven council members. Council members are elected at the annual general council meeting and serve staggered, 3-year terms. Several council members served on an interim board to govern the Taala Fund once the organization was formally established. This interim board was responsible for identifying and recruiting seven people who became the self-perpetuating permanent board of governors for Taala Fund, effectively removing Taala Fund from under the tribal government’s wings and establishing Taala Fund as an independent nonprofit.
Economy/Employment: Currently, tribal government and tribal enterprises—including a casino, a seafood plant, two convenience stores and timber enterprises—make up a large majority of the employment on the Reservation. These jobs are an important part of the local economy, but they do not represent growth opportunities. Existing private-sector jobs are related to fisheries, including fishermen, fish processing, and fishing guides. Most jobs are seasonal. QIN is committed to expanding private business development, asset-building, and financial literacy on the Reservation and among our tribal members |
Letter Writing Tips
Every educated person should know how to write a good letter. All of us have to write letters of some sort at some point in time.
There are several different kinds of letters. For example, there are personal letters and business letters. The form of each letter is determined by its kind. For example, personal letters are written in a friendly tone. Business letters, on the other hand, are written in a formal style.
Parts of a letter
There are six important parts to all letters. They are:
1. Heading
2. Salutation or greeting
3. Body of the letter
4. Subscription or leave taking
5. Signature
6. Superscription on the envelope
The heading usually consists of two elements – the writer’s address and the date. The purpose of the heading is to inform the reader where the letter was written and when.
The heading should give the full postal address of the writer to which the reader may reply. The heading is usually given in the top right-hand corner of the first page. The date is given below the heading. Don’t put your name with the address. The address and the date may alternatively go on the left.
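For instance, a heading might look like this (the address is purely hypothetical, and it is shown flush left here although it would normally sit at the top right of the page):

47 Lake Road
Springfield 520001

18 October 2003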
The date may be written in any of the following formats:
18 October 2003
18th October 2003
October 18, 2003
The date may also be written entirely in figures.
All-figure dates are interpreted differently in British and American English. For example, 12.10.2003 means 12th October 2003 to British people. To an American it means 10th December 2003. Americans put the month before the day.
Salutation or greeting
The form of greeting depends upon the relationship between the writer and the reader of the letter.
In letters written to family members and close friends, the greeting could be –
Dear Father, My Dear Mother, Dear Uncle, Dear John etc.
In business letters the greeting should be Dear Sir/Dear Madam/Dear Sirs etc.
Note that here the use of the term dear does not imply any special affection. It is merely a polite expression.
Put the salutation at the left-hand corner of the page. It should be put at a lower level than the heading. |
The Hyperbolic Toolbox:
Non-Euclidean Constructions in Geometer's Sketchpad
Example 6.2: "Proving the Parallel Postulate"
The geometry that essentially arises from Euclid's first four postulates (see the Introduction) is called Neutral (or Absolute) Geometry. Historically, before the discovery of hyperbolic geometry, there were numerous attempts to prove Euclid's fifth postulate in neutral geometry. Of course, these attempts were doomed to failure, since hyperbolic geometry provides an example of a neutral geometry in which Euclid's fifth postulate is violated (noting that hyperbolic geometry is no less consistent than Euclidean geometry).
One type of activity that can help students understand the axiomatic distinction between hyperbolic and Euclidean geometry is considering historical attempts to prove Euclid's fifth postulate in neutral geometry. Each of these flawed attempts contains at least one unjustifiable (in neutral geometry) statement, else the proof would be valid and Euclid's fifth postulate would be a theorem. The flawed statement logically implies, and most often is equivalent to, the parallel postulate. A terrific exercise for students is to consider one of these proofs with the justifications omitted.
Below is an attempted proof of the parallel postulate in neutral geometry. This "proof" was the work of Farkas Bolyai, who was the father of Janos Bolyai, one of the discoverers of hyperbolic geometry. Your job is to justify all the steps that can be justified in neutral geometry and to identify the statement that is equivalent to the parallel postulate (i.e., find the flaw!). Be complete! Some statements might require more than one justification.
Given: point P not on line k.
Where is the flaw?
Let Q be the foot of the perpendicular from P to k.
Let m be the line through P perpendicular to line PQ.
Line m is parallel to line k.
Let n be any line through P distinct from m and line PQ.
Let ray PR be a ray of n between ray PQ and a ray of m emanating from P.
There is a point A between P and Q.
Let B be the unique point such that Q is between A and B and AQ is congruent to QB.
Let S be the foot of the perpendicular from A to n.
Let C be the unique point such that S is between A and C and AS is congruent to SC.
A, B, and C are not collinear.
There is a unique circle G passing through A, B, and C.
k is the perpendicular bisector of AB, and n is the perpendicular bisector of AC.
k and n meet at the center of G.
The parallel postulate has been proven.
How the tools can help:
Students with some experience in geometry from an axiomatic standpoint will typically be able to recognize many of the above statements as propositions in neutral geometry and justify those statements. However, they may have difficulty precisely identifying the unjustifiable statement and how it is flawed. When students are stuck, it helps to encourage them to try reproducing the proof in one of their hyperbolic models. The statement that is unjustifiable in neutral geometry becomes false in hyperbolic geometry.
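If Sketchpad is not at hand, a quick computational experiment can serve a similar purpose. In the Klein model a hyperbolic line is simply a chord of the unit disk, so checking whether two hyperbolic lines meet reduces to an ordinary segment-intersection test. The sketch below is purely illustrative (it is not part of the Hyperbolic Toolbox), and it assumes chord endpoints are given in Klein-disk coordinates:

def cross(o, a, b):
    # 2D cross product of vectors OA and OB; its sign says which side of line OA the point B lies on
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def klein_lines_meet(p1, p2, q1, q2):
    # In the Klein model a hyperbolic line is an open chord of the unit disk, so two
    # hyperbolic lines meet exactly when the chords p1p2 and q1q2 cross inside the disk.
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

# The diameter along the x-axis versus a chord that crosses it: these hyperbolic lines meet.
print(klein_lines_meet((-1, 0), (1, 0), (0.5, -0.866), (0.9, 0.436)))   # True
# The same diameter versus a chord lying entirely above it: these hyperbolic lines never meet,
# even though the Euclidean lines containing them do intersect (outside the disk).
print(klein_lines_meet((-1, 0), (1, 0), (0.8, 0.6), (0.6, 0.8)))        # False

Manipulating chord endpoints this way mirrors what students do with the free points in the applet: constructions that Euclidean intuition says must succeed (such as two perpendicular bisectors meeting) can visibly fail in the hyperbolic model.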
Below is a Java Sketchpad script that illustrates the proof's constructions in the Klein model. (This particular applet may take a few moments to load.) Click on "Construction Steps", then click on each step in turn to see the construction carried out. After observing the construction, manipulate the free (red-colored) points to find the flaw:
The dynamic figures on this page were produced using JavaSketchpad, a World-Wide-Web component of The Geometer's Sketchpad. Copyright ©1990-1998 by Key Curriculum Press, Inc. All rights reserved.
At the beginning of the 20th century, the British Empire was one of the largest the world had ever seen. With the Japanese alliance of 1902, Britain, it could be suggested, ended the tradition of 'Splendid Isolation', as the policy was categorised by Lord Salisbury. Debate exists over the intentions behind British policy in the years that followed: was it genuinely reoriented towards Europe, or did it remain primarily colonial?
During a period of rapprochement, Britain became more closely tied to the French. This, as has been suggested by Philip Pedley, was in part due to the personalities of the 'new men'. Edward VII was a Francophile (he certainly liked their prostitutes) and he embarked upon a state visit which went down very well. At the Foreign Office at the same time there was emerging a generation of civil servants who were distinctly anti-German in temperament. Eyre Crowe didn't like the Germans very much at all (despite having been born in Germany and speaking with a German accent himself); he constantly warned against what he perceived to be the German threat to British dominance. Lord Lansdowne was not exactly anti-German, but he was far more willing to be friendly with the French; he was related to Talleyrand, for one thing, and this gave him a kind of cultural affinity with one of Britain's oldest enemies. This argument implies that the interest of British statesmen in the European project was more derived from personality than politics. It also casts doubt on the true intentions of British political figures when they acted with regard to Europe. If personal dislike or national affinity played a part, the idea that British policy was inspired by a genuine desire to act on the European stage is lessened to some extent. But when we assess the extent to which British policy pointed in a particular direction, the personalities of those involved can be largely discounted; reasons for action can be largely irrelevant when discussing the shape that action took.
In 1902, Britain and Japan signed a treaty – the symbolic end of ‘Splendid Isolation’ – and a naval alliance was created. According to the treaty, if there was a war between one of the signatories and two major European powers, the other would intervene. Britain would count on Japanese support if she was attacked by France and Russia; Japan also secured British favour in case a Russian attack on Japan threatened to bring in the French.
On the back of this agreement, Britain was able to withdraw a lot of ships from the Far East. This meant that she could bring them back to Europe – and just in time for the beginning of the naval race. It must be remembered that at this time Britain controlled a great deal of the Far East; the majority of the whole empire was east of Suez. This being so, it does seem that British policy during this period saw a European re-orientation. But it could also be argued that Britain was still looking out for colonial interests at this time; Japan, after all, could be useful in protecting colonial territories from the Russians, a perennial threat to Britain's empire.
In 1904-5 Russia and Japan went to war. Japan struck the first blow, and Russia promptly lost. France – another formidable foe of the British as colonies and influence went – kept out of the matter. This, it could be argued, signalled the strength of the British alliance with Japan. France did not join with its ally in fighting in the Far East; the war was not extended but contained.
The 1904 Anglo-French Entente (the Entente Cordiale) was agreed. It was not, however, an alliance. Primarily it concerned colonial matters, which is significant. Britain and France settled their rivalry in Egypt, which had existed since the 1880s, when Britain took control there (Disraeli having bought a major stake in the Suez Canal back in 1875). (Also in the 1880s, Britain had taken over the running of Egypt's bank, potentially monopolising the financial and business opportunities which that nation presented to willing colonial powers.) Morocco was regarded as a future economic bonanza – it had nitrates and other mineral resources – and the British said the French could have primacy there. Many said that the British did worse out of the agreement: Britain already effectively controlled Egypt, after all, and did not desperately need to have this status confirmed. This ending of colonial friction was necessary in order to achieve a better relationship with Russia, which Britain could be argued to have greatly needed. Another aspect was the perception of such a deal in Germany.
British manoeuvres of this era do seem to have a colonial tilt, and not just in the events themselves. Britain and France ended some sources of colonial tension, and together they also allocated areas of colonial dominance. By and large, this can be seen as an effort to safeguard British colonial interests in the long term, even if that meant slight temporary disadvantage. Similarly, British attempts to achieve closer and more harmonious diplomatic relations with Russia can be viewed through the prism of the 19th century ‘Great Game’, in which both powers jockeyed to dominate the nation of Afghanistan in order to achieve a commanding position with regard to India. Britain fought numerous Afghan Wars and expended much treasure in the defence of the ‘Jewel in the Crown’. It is reductionist to rule out entirely the possibility that Indian considerations played a central role in British diplomacy of this period, even in ostensibly European matters.
The First Moroccan Crisis was a deliberate German attempt to break the Anglo-French Entente in 1905-6. The idea was to provoke an international crisis, during which latent Franco-British rivalry would re-emerge. The timing seemed good for Germany: Russia was mired in fallout from the Russo-Japanese War of 1904-5, tumult which was only exacerbated by the 1905 Revolution and Bloody Sunday. Thus there was no real chance of war between Germany and France. Ideally, Germany wanted to break the Franco-Russian alliance as well. This demonstrates that Germany was worried about the Anglo-French Entente; it seemed to her that this arrangement concerned the European balance of power. In this, it can be argued, British foreign policy at least appeared to have a European focus, which suggests that the nation and her statesmen truly were concerned with the European balance of power.
At the Algeciras Conference in 1906, everything went badly for Germany. She secured only the backing of Morocco and Austria-Hungary; the master plan had failed. The other nations (Britain, the United States, Spain, Russia and Italy) all backed France. The Germans expected the British to revert to type, but, despite press opinion going the other way, Britain did not. Britain's leaders maintained the Entente agreement on protectorates, seeing Germany as a dangerous international troublemaker. Germany also expected Spain and Italy to back her proposition, and the Americans to act against European imperialism. They did not. This is interesting, as this British action was again about colonial affairs – at least it seemed so first and foremost. Germany felt humiliated, however, and her policymakers worried about the state of affairs in Europe, perhaps signalling that the focus of this action again appeared to be local.
Signed in August 1907, the Anglo-Russian Entente represented a continuation of the retreat from isolation. It was not an alliance, and it was primarily colonial in essence. The Russians pledged to give up all claims to Afghanistan and to recognise British interests in Tibet. Further, Persia would be divided into Russian and British sections, and there was to be a buffer zone in between. Britain got southern Persia, where all the oil was and still is; what later became BP was initially called Anglo-Persian oil. Both nations recognised Chinese sovereignty over Tibet. But the new British foreign minister, Sir Edward Grey, was on record as saying that the new Russian alliance, in combination with the Anglo-French Entente, could be the thing to stop Germany – if such stopping were needed. The nature of this confession suggests that Britain was interested in the European situation, but the actual action itself need not. An alignment with Russia could be seen as a wise move in purely colonial terms, not just as an aspect of European triangulation.
In summary, it appears that Britain’s major foreign policy concern remained colonial. All actions aimed at either deterring Russia or allying with her could be seen to have ulterior imperial motivations. Similarly, actions leading up to or promoting Anglo-French alliance can also be seen as a planned route to detente between the two participants of the ‘Great Game’. The shift in personalities of personages involved in the Foreign Office, as described by Philip Pedley, does not necessitate a change of national objectives. Britain’s interests remained, at least for this period, tied up with India and the empire east of Suez, as illustrated by the alliance with Japan in 1902. |
Seeing blood in your urine can be alarming. While in many instances the cause is harmless, blood in urine (hematuria) can indicate a serious disorder.
Blood that you can see is called gross hematuria. Urinary blood that's visible only under a microscope (microscopic hematuria) is found when your doctor tests your urine. Either way, it's important to determine the reason for the bleeding.
Treatment depends on the cause.
Gross hematuria produces pink, red or cola-colored urine due to the presence of red blood cells. It takes little blood to produce red urine, and the bleeding usually isn't painful. Passing blood clots in your urine, however, can be painful.
Bloody urine often occurs without other signs or symptoms.
When to see a doctor
Make an appointment to see your doctor anytime you notice blood in your urine.
Some medications, such as the laxative Ex-lax, and certain foods, including beets, rhubarb and berries, can cause your urine to turn red. A change in urine color caused by drugs, food or exercise might go away within a few days.
Bloody urine looks different, but you might not be able to tell the difference. It's best to see your doctor anytime you see red-colored urine.
In hematuria, your kidneys — or other parts of your urinary tract — allow blood cells to leak into urine. Various problems can cause this leakage, including:
- Urinary tract infections. These occur when bacteria enter your body through the urethra and multiply in your bladder. Symptoms can include a persistent urge to urinate, pain and burning with urination, and extremely strong-smelling urine.
For some people, especially older adults, the only sign of illness might be microscopic blood in the urine.
- Kidney infections (pyelonephritis). These can occur when bacteria enter your kidneys from your bloodstream or move from your ureters to your kidney(s). Signs and symptoms are often similar to bladder infections, though kidney infections are more likely to cause a fever and flank pain.
- A bladder or kidney stone. The minerals in concentrated urine sometimes form crystals on the walls of your kidneys or bladder. Over time, the crystals can become small, hard stones.
The stones are generally painless, so you probably won't know you have them unless they cause a blockage or are being passed. Then there's usually no mistaking the symptoms — kidney stones, especially, can cause excruciating pain. Bladder or kidney stones can also cause both gross and microscopic bleeding.
- Enlarged prostate. The prostate gland — which is just below the bladder and surrounding the top part of the urethra — often enlarges as men approach middle age. It then compresses the urethra, partially blocking urine flow. Signs and symptoms of an enlarged prostate (benign prostatic hyperplasia, or BPH) include difficulty urinating, an urgent or persistent need to urinate, and either visible or microscopic blood in the urine. Infection of the prostate (prostatitis) can cause the same signs and symptoms.
- Kidney disease. Microscopic urinary bleeding is a common symptom of glomerulonephritis, an inflammation of the kidneys' filtering system. Glomerulonephritis may be part of a systemic disease, such as diabetes, or it can occur on its own. Viral or strep infections, blood vessel diseases (vasculitis), and immune problems such as IgA nephropathy, which affects the small capillaries that filter blood in the kidneys (glomeruli), can trigger glomerulonephritis.
- Cancer. Visible urinary bleeding may be a sign of advanced kidney, bladder or prostate cancer. Unfortunately, you might not have signs or symptoms in the early stages, when these cancers are more treatable.
- Inherited disorders. Sickle cell anemia — a hereditary defect of hemoglobin in red blood cells — causes blood in urine, both visible and microscopic hematuria. So can Alport syndrome, which affects the filtering membranes in the glomeruli of the kidneys.
- Kidney injury. A blow or other injury to your kidneys from an accident or contact sports can cause visible blood in your urine.
- Medications. The anti-cancer drug cyclophosphamide and penicillin can cause urinary bleeding. Visible urinary blood sometimes occurs if you take an anticoagulant, such as aspirin and the blood thinner heparin, and you also have a condition that causes your bladder to bleed.
- Strenuous exercise. It's rare for strenuous exercise to lead to gross hematuria, and the cause is unknown. It may be linked to trauma to the bladder, dehydration or the breakdown of red blood cells that occurs with sustained aerobic exercise.
Runners are most often affected, although anyone can develop visible urinary bleeding after an intense workout. If you see blood in your urine after exercise, don't assume it's from exercising. See your doctor.
Often the cause of hematuria can't be identified.
Almost anyone — including children and teens — can have red blood cells in the urine. Factors that make this more likely include:
- Age. Many men older than 50 have occasional hematuria due to an enlarged prostate gland.
- A recent infection. Kidney inflammation after a viral or bacterial infection (post-infectious glomerulonephritis) is one of the leading causes of visible urinary blood in children.
- Family history. You might be more prone to urinary bleeding if you have a family history of kidney disease or kidney stones.
- Certain medications. Aspirin, nonsteroidal anti-inflammatory pain relievers and antibiotics such as penicillin are known to increase the risk of urinary bleeding.
- Strenuous exercise. Long-distance runners are especially prone to exercise-induced urinary bleeding. In fact, the condition is sometimes called jogger's hematuria. But anyone who works out strenuously can develop symptoms.
Diatom of the month - March 2018: Afrocymbella barkeri
by Heather Moorhouse*
The tropical diatom genus Afrocymbella has only 12 known species, all of which are found in the African Rift Valley lakes1. They have been observed both free-living in the water column and attached to rocks and plants, and are solitary or colonial1. One such species is Afrocymbella barkeri Cocquyt & Ryken sp. nov. (19.9-63.8 µm) (Fig. 1), a newly described diatom found in Lake Chala1 (Fig. 2), a tropical crater lake that lies directly on the border of Kenya and Tanzania, just south of the equator (Fig. 3).
This species was named after Prof. Philip Barker whose seminal work on diatoms in the East African Rift valley lakes has helped understand past climate and environmental change in the region. Afrocymbella barkeri is common at the end of the dry and windy season in Lake Chala, which corresponds to the northern hemisphere summer. The summer winds mix the lake water column and cause nutrient-rich deep water to rise to the surface, providing the nutrients that fuel diatom blooms. As paleolimnologists, we can use observations of how the current climate shapes diatom communities to help reconstruct historical environmental changes using fossilized diatoms found in lake sediment records.
Fig. 1. SEM images of Afrocymbella barkeri Cocquyt & Ryken sp. nov. found in the sediments of Lake Chala. Image taken by H.Moorhouse.
Lake Chala is a deep crater lake (maximum depth of 97 metres, 4.2 km2 in size) which lies on the lower eastern flanks of Mount Kilimanjaro, Africa’s tallest mountain and a dormant volcano. Lake Chala is a hydrologically simple system as it has no major river inflows or outflows, which makes it an ideal study site to investigate changes in precipitation versus evaporation of lake water. Currently, more water is lost from evaporation than is replaced by rainfall at Chala, but the lake water level is maintained by subsurface or groundwater flows from rainfall that has fallen on the Mount Kilimanjaro area and seeped underground.
Fig. 2. Lake Chala, a tropical crater lake. Image courtesy of Loes van Bree.
Fig. 3. Location of Lake Chala in eastern Africa.
Interestingly, the diversity of diatoms in Lake Chala is extremely low, with assemblages dominated by Nitzschia and Afrocymbella species. This may be a result of the lake’s isolation or lack of in-lake habitat diversity. Nevertheless, whilst we still know relatively little about the ecology of the diatoms in this lake, we can use other information on the chemistry of diatom cells to reconstruct environmental change.
Fig. 4. SEM images of Nitzschia spp. found in the sediments of Lake Chala, which, along with Afrocymbella spp., dominate this lake's diatom community. Image taken by H. Moorhouse.
This is one of the motives behind the DeepCHALLA project, an International Continental Scientific Drilling Programme which aims to study over 214 meters of sediment cores retrieved from the depths of Lake Chala. The sediment in these cores is estimated to have been deposited as far back as >250,000 years ago, allowing scientists the opportunity to investigate long-term changes in climate, the lake and surrounding terrestrial ecosystems, volcanic activity, and the role of these in shaping human evolution. We are particularly interested in the African Megadroughts period that occurred between 90,000 and 130,000 years ago and is thought to have caused the dispersal and evolution of our modern human ancestors. The fossilized diatoms of Lake Chala will be key to discerning the nature of the African Megadroughts and will help identify how the lake and surrounding terrestrial ecological communities responded to prolonged aridity.
My role in the DeepCHALLA project is to clean sediment samples so that all that remains are pure fossilized diatoms. We will then look at the stable isotopes of oxygen and carbon found in the diatom silica. Isotopes are forms of the same element with the same number of protons but different numbers of neutrons; stable isotopes are those that do not undergo radioactive decay. Oxygen and carbon isotopes provide snapshots of what the ambient environment of the diatoms was like at certain points in time, because we know that different environmental conditions affect their cell isotopic composition. Diatoms are useful hosts of stable isotopes because their silica frustule (cell wall) acts as protection against degradation. Different layers of the diatom's silica frustule host different isotopes; oxygen is found in its inner, isotopically homogeneous silica layer, protected by the outer layer, while carbon is found in the organic inclusions or proteins that form within the silica frustule.
By looking at the oxygen isotopes captured by the diatoms, we can estimate the amount of precipitation relative to that of evaporation of the lake water. We measure the oxygen isotope composition (δ18Odiatom), which is calculated from the 18O:16O ratio3. Higher values mean that more of the lighter 16O has evaporated out of the lake; the heavier 18O left behind is then incorporated by diatoms into their frustules3. This is a great way to reconstruct the past hydro-climate, as we can derive periods of aridity from heavier oxygen values.
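For readers unfamiliar with delta notation, the value is conventionally reported in per mil (‰) relative to a reference standard; the definition has the standard form below (which reference standard the Lake Chala work uses is not stated here and is left as an assumption):

\[
\delta^{18}\mathrm{O} = \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
\]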
We are also going to investigate the occluded carbon (δ13Cdiatom) within the diatom silica, which is determined by the ratio of 12C and 13C and broadly tells us about changes to the supply and demand of carbon over time4. For example, previous work found that high values of δ13Cdiatom during the Last Glacial Maximum (which resulted in dry conditions), may be explained by not only high diatom productivity, but also inputs of drought-adapted terrestrial vegetation4. This will help us to understand how climate can shape carbon cycling in lakes, which are integral components of the global carbon cycle.
Ultimately, the stable isotopes of oxygen and carbon found in diatom silica will help provide unique insights into tropical climate and carbon cycling over two glacial-interglacial cycles. Understanding long-term variation in climate and its impact on ecosystems is important to more accurately predict future climate change in this drought-sensitive region. Diatoms greatly help us understand environmental history; environmental geochemistry techniques, such as stable isotope analysis, can further complement diatom taxonomy and ecology.
*Diatom Isotope Research Technician, Lancaster University
1. Cocquyt, C. and Ryken, E., (2016). Afrocymbella barkeri sp. nov.(Bacillariophyta), a common phytoplankton component of Lake Challa, a deep crater lake in East Africa. European Journal of Phycology 51: 217-225.
3. Leng, M.J. and Barker, P.A., (2006). A review of the oxygen isotope composition of lacustrine diatom silica for palaeoclimate reconstruction. Earth-Science Reviews 75: 5-27.
4. Barker, P.A., Hurrell, E.R., Leng, M.J., Plessen, B., Wolff, C., Conley, D.J., Keppens, E., Milne, I., Cumming, B.F., Laird, K.R. and Kendrick, C.P., (2013). Carbon cycling within an East African lake revealed by the carbon isotope composition of diatom silica: a 25-ka record from Lake Challa, Mt. Kilimanjaro. Quaternary Science Reviews 66: 55-63. |
In evaluating available temperature data, a new correlation has been found. In the graph to the right (click to enlarge), you will see the world population (human only, not including animal life) plotted (in red) along with the global average temperature anomalies (in blue) from approximately 1850 until the present day (the data is available at the Climatic Research Unit and the UK Met. Office Hadley Centre web site). The temperature data represents the "anomalies" vs. the arithmetic mean over 1960 - present (2007). The population data (available at the US Census Bureau web site) is the world population divided by 10 billion (i.e., plotted in tenths-of-a-billion) in order to fit on the same scale plot as the temperature anomaly data. Note the correlation - this is remarkable evidence in support of the conclusion that the world is being overpopulated, leading to rising global temperatures. (This theory, overpopulation leading to rising global average temperature, has recently been proposed by Ted Turner, who completed part of the requirements for a degree in economics, thus qualifying his statements on the topic.) This is in contrast to the many available charts of CO2 level vs. global average temperature, which do not show a high correlation (the reader is left to research this topic on his own).
The area of the graph marked with the yellow arrow corresponds to roughly 1960, where the average rate of temperature rise seems to outpace the population rise. Researching this time period, we noticed that the World Wildlife Fund was started in 1961. Part of the work of this organization is to help prevent extinction of endangered species. It would seem that, as the human population increases, the population of other species tends to decrease, thus the total global population (human and non-human) remains relatively stable. However, the WWF (note: this is the World Wildlife Fund, not the World Wrestling Federation) seeks to offset this natural balance in the overall world population by preserving the species that otherwise would have dwindled or gone extinct. This leads to even more "creature-heat" being supplied into the environment, thus causing the global average temperature to increase even more.
We need to find a solution to the earth's overpopulation problem; this leads to the series on solar system colonization. As previously mentioned, prior Venus inhabitants apparently also had a runaway population issue, combined with the desire to preserve all species of creatures on the tropical paradise (at the time) planet, and failed to realize the fate they were going to endure. We should look into these possibilities to alleviate anthropogenic global warming:
- find alternate places to house the excess population (such as colonies on other bodies throughout the solar system; it has not been studied whether this would lead to "solar system warming" or not)
- reduce the population (we refer the reader to Jonathan Swift's "A Modest Proposal" - apparently he saw this issue back in 1729 and came up with an ideal solution which also offers a solution to world hunger issues; this would preclude the cannibalism that Ted Turner says will follow once the population and temperature increases reach catastrophic levels, in essence being a "controlled cannibalism" in order to make population adjustments in an orderly fashion instead of in an out-of-control, willy-nilly fashion)
- disband the WWF (World Wildlife Fund; however, in this case, the suggestion may also apply to the World Wrestling Federation, as their actions tend to cause high levels of energy expenditure that may be impacting the environment negatively as well; in fact, the WWF - World Wrestling Federation - has been known to impact the environment negatively even without taking anthropogenic global warming into consideration - we'll consider this in a future article); the reduction of the animal population in opposition to the increasing human population should help to reduce the impact on global average temperatures; however, we would need to catch up on nearly 50 years of sub-optimal animal extermination very quickly, and even surpass the intervening numbers of animals that were saved since we need to reduce the global average temperature; endangered species would be the first targets since they would be quick to eliminate and thus mitigate their reproduction as well, while the remainder could be more quickly reduced in numbers
However, whichever method we take, we need to decide and move quickly before the earth turns into Venus II.
This is not intended to be an invitation to eradicate species of creatures from the earth, although the author wouldn't mind if mosquitoes and ticks were eliminated (we can send them to Jupiter if someone really doesn't want to make them completely extinct - and whether it's the person or the pests we send to Jupiter, I don't really care). It is intended to show how easy it is to correlate data that doesn't necessarily indicate a causal relationship, which unfortunately seems to be the case with a lot of the touted "anthropogenic (human-caused) global warming" data these days. Yes, the earth is warming, but it's not due to anything that humans have done - we're on the upside of an ice-age, so temperatures will tend to warm; in addition, the sunspot cycles recently have been above average, which tends to correlate with periods of increased temperature as well, and the recent low sunspot activity seems to have coincided with a sudden reduction in temperature over the last year. Don't let anthropogenic global warming activists scare you - or tell you what you think - go out and look at the data yourself and make up your own mind. We hope to bring you more data in the future regarding this hot topic (sorry, but yes, that was an intentional pun).
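To make the spurious-correlation point concrete, here is a minimal sketch using synthetic stand-in data (not the actual CRU or Census figures): any two series that both trend upward over the same interval will show an impressive Pearson correlation, whether or not one causes the other.

import numpy as np

years = np.arange(1850, 2008)

# Purely illustrative stand-ins for the two curves in the graph:
# a smoothly growing "population" and a noisy, upward-trending "temperature anomaly".
population = 1.2e9 * np.exp(0.012 * (years - 1850))
rng = np.random.default_rng(0)
temperature = -0.4 + 0.005 * (years - 1850) + rng.normal(0.0, 0.1, years.size)

r = np.corrcoef(population, temperature)[0, 1]
print(f"Pearson correlation: {r:.2f}")   # typically around 0.9: striking, but causally meaningless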
Once implanted, a pacemaker can stay in the body for about five to 15 years before its battery life runs out. At that point, a surgeon must replace the battery or insert a new pacemaker. Such procedures could be averted entirely, however, if pacemakers drew power from the heart beat itself. Researchers at the University of Michigan have concluded that the heartbeat could supply 10 times more than enough piezoelectricity to power a current generation pacemaker. The technology could also be used to power devices such as implantable defibrillators.
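For a rough sense of scale, the figures below are illustrative assumptions chosen to match the "10 times more than enough" claim, not measurements reported by the Michigan team:

# Illustrative power budget: both numbers are assumptions, not reported measurements.
pacemaker_draw_w = 1e-6       # a modern pacemaker is often quoted as drawing on the order of 1 microwatt
harvested_power_w = 10e-6     # assume the heartbeat-driven piezoelectric harvester delivers ~10 microwatts

margin = harvested_power_w / pacemaker_draw_w
print(f"Harvested power covers the pacemaker load {margin:.0f}x over")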
Interestingly, the research was a spinoff from research to power wireless sensors from the vibration of aircraft wings. The technology works by harvesting energy from the vibration of the chest cavity that is a result of the heart beat. That vibration can then be captured and transduced into electrical energy. Led by M. Amin Karami, PhD, a research fellow in the Department of Aerospace Engineering at the University of Michigan, the researchers are using the money from a grant from the university's medical school to develop a prototype device using data gathered from open-heart surgeries.
Brian Buntz is the editor-at-large at UBM Canon's medical group. Follow him on Twitter at @brian_buntz.
Lesson 4 of 9
Objective: SWBAT give examples and non examples of vocabulary words.
Last summer I attended an Anita Archer training to help struggling readers. She introduced us to word diagrams, and I think they are a really great tool to help students understand complex words in a simple way. These diagrams also double as a great student made study tool as well. I find that many of my students come to me with very poor study skills. They have no idea how to commit something to memory. I like to expose them to many different study tools throughout the year, so that when I am not with them anymore, they have their own tool box.
In this particular strategy, students start by generating a simple definition of a word. For this example, I will use the word devastate. It can be defined as simply as "to destroy". Next, they will decide what it is like and record it. It is like demolishing. Next they will tell what it is not like. For example, devastate is not like rebuilding. Finally, they will give an example. Devastate is like the destruction seen after a hurricane. You could also have students do a quick picture or symbol to help connect to the word.
I have found that after carefully examining words in this way, and coming up with their own examples, students are much more comfortable using the words. In fact, my vocabulary test scores were much higher (most students scored a full letter grade higher) after making word diagrams and using them as study tools.
Students do struggle to decide what the word is like and not like, so support is often needed in those areas. Making those connections will allow them to see that the word is actually useful and applicable in their lives. Helping students find real-life relevance in their learning is one of the most important ways to foster motivation to learn in middle level students. If they can't see its relevance, they are not going to want to learn it!
My big push with vocabulary strategies this year is to increase my students' Lexile ranges in order to help them comprehend more complex texts. Using word diagrams forces students to think about vocabulary words in their own terms. They have to process all of the information that they are given by me and then put their own spin on it. Students must compare and contrast the word to something in their own experiences. I feel like this higher level of thinking helps my students really understand new and complex words. I have used different forms in the past, but I prefer the one that I am attaching to this lesson. The reason I like it is that the boxes are small, so students cannot (1) copy a whole dictionary definition or (2) give complicated answers. It is short, simple, and perfect to use as a study tool.
How does NASA drive Mars rover Curiosity?
Share This article
How exactly do you drive the one-ton Mars rover Curiosity, when the driver is, on average, 150 million miles away? With a one-way time delay of around 13 minutes, it certainly isn’t a matter of sitting down in front of a monitor and waggling a joystick.
While we’ve tackled just about every aspect of NASA’s Curiosity rover, from the radiation-hardened on-board computers through to its nuclear-powered laser, we’ve never really discussed navigation — a rather important aspect, as most of Curiosity’s two-year prime mission will be spent driving the few miles to Mount Sharp.
In short, there are two ways that Curiosity can navigate the surface of Mars: NASA can transmit a series of specific commands, which the rover then dutifully carries out — or NASA can give Curiosity a target, and then trust the rover to autonomously find its own way there. In both cases, the commands are transmitted to Curiosity via NASA’s Deep Space Network — the worldwide network of big-dish antennae that NASA uses to communicate with spacecraft, and carry out some radio astronomy on the side.
To decide which navigation method to use, NASA uses the Rover Sequencing and Visualization Program (RSVP), which is basically a Mars simulator. RSVP shows Curiosity’s current position on Mars, along with surface topology, obstacles (rocks), and so on. RSVP can then be used to plot a move (go forward 10 meters, turn 30 degrees right, go forward 3 meters) — or to pick an end point, which Curiosity will dutifully, autonomously navigate to. To safely navigate Mars, Curiosity uses its Hazcams (hazard avoidance cameras) to build a stereoscopic map of its environment, identifies which objects are too large to drive over, and then plots out a course to the end point.
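As a rough illustration of the autonomous mode (this is a toy sketch, not NASA's RSVP or the rover's flight software): once the Hazcam stereo imagery has been reduced to a grid of safe and blocked cells, finding a route to the target is a standard graph-search problem, for example a breadth-first search over the grid:

from collections import deque

def plan_route(grid, start, goal):
    # Breadth-first search over an occupancy grid.
    # grid[r][c] is True where the terrain is blocked (e.g. a rock too large to drive over).
    # Returns a list of (row, col) cells from start to goal, or None if no route exists.
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc] and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

# Tiny example: a 4x5 patch of terrain with a small boulder field in the middle.
terrain = [
    [False, False, False, False, False],
    [False, True,  True,  True,  False],
    [False, False, False, True,  False],
    [False, True,  False, False, False],
]
print(plan_route(terrain, (0, 0), (3, 4)))

A real planner would also need to weigh slope, wheel clearance, and uncertainty in the terrain map, but the underlying idea (build a local obstacle map, then search it for a drivable path) is the same.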
When Curiosity finishes its drive, it transmits a bunch of thumbnail images from its on-board cameras to NASA, which are then used to work out Curiosity’s exact location on Mars. This data is fed into RSVP, the next day’s movements are plotted, and so on and on.
In other news, Curiosity is now deep within Yellowknife Bay, a shallow depression on the surface of Mars where NASA will hopefully find an interesting rock that will become the first victim of Curiosity’s percussive hammer. Yellowknife Bay is pictured above, along with the step (about 2ft high) that Curiosity had to cross to descend into the bay. Curiosity will spend the next few days in Yellowknife Bay, while the NASA/JPL engineers enjoy a long-overdue break, and then begin the long trek to Mount Sharp, which will probably take up most of 2013.
Finally, a beautiful bonus image — think of it as a Christmas gift from ExtremeTech. What you see above is Saturn, all of its rings, and its moons Enceladus and Tethys (bottom left). This unique image, which is a mosaic of hundreds of images, was captured by the NASA/ESA/ASI Cassini orbiter as it passed through Saturn’s shadow, roughly 500,000 miles (800,000km) from the planet. With Saturn between the Sun and Cassini, and the dramatic viewing angle, this is probably the best view of Saturn’s rings that you will ever see.
Cassini has only taken an image from the shadow of Saturn once before, in 2006 — and that time, Earth was visible (10 o’clock, at the edge of the rings). |
The Society of Friends, or the Quakers as they are most commonly known, trace their origins back to northern England in 1652. This religious group formed amid the religious upheaval of the Protestant Reformation of the sixteenth and seventeenth centuries. Many people questioned the belief in religious authority, the interpretation of the Scriptures, and other common practices such as the role of clergy and sacraments. So when George Fox began preaching beliefs that combined the English Bible, Calvinist theology, and Puritan ethics in 1652, people proved quite receptive to his views.
The earliest Quakers did not believe in a hierarchical structure and held that God's word, according to the Scriptures and the Spirit, was more important than human ideas and wishes. Like the Anabaptists and English Baptists, Quakers also rejected the belief in predestination and infant baptism. Their foremost concern lay in peaceful living, and they dedicated themselves to eradicating war and promoting toleration. They referred to themselves as "the Camp of the Lord" as they struggled against evil and taught that victory lay within human hearts, not killing. But in 1660, the restoration of the English monarchy and the severe persecution of nonconforming worship forced Friends to alter their pursuits. They no longer sought to usurp the Church of England, and the Camp of the Lord evolved into a less-threatening organization - the Society of Friends.
Quakers saw themselves as a nonconformist group who worked to ensure a more peaceful, orderly, and prosperous society. They formed stable communities which were regulated by Monthly and Yearly Meetings. Others admired them for their success in the merchant and banking fields as well as their keen interest in scientific knowledge and education. Yet regardless of this general material success, Friends taught members to disdain material accumulation and live plainly. Over the years, Quakers have established themselves all around the world as a group dedicated to lobbying for equal rights for all and providing aid to the suffering.
A Monthly Meeting for Quakers in Cache Valley was not established until 1972, which is indicated in the minutes of that year. Members in the first recorded minutes expressed their wish to create a regular monthly meeting. The initial concerns of the Logan Friends revolved around educating the public about Quakerism and pondering which social concerns they could "take on." Throughout the collection of minutes, the organization's budget was reported. For the most part, costs were accrued by members traveling to various Quaker Meetings around the country and the world, and bringing well-known members in to speak to Logan Friends. Those that attended other Meetings were expected to report their experiences to the Logan Meeting. Many of these visits are described in the newsletter.
The collection also contains information on the administrative activities of Friends such as the regulation of worship practices, and locating places to worship every month. Meetings were held in members' homes, the Logan Public Library, the local prison, Sunshine Terrace, and even other churches such as the local Presbyterian Church. Additional information deals with the new membership or "clearance" of prospective members as well as births, marriages, and deaths. The minutes and newsletter also describe the liberal ideals of Quakers. Their social concerns covered numerous topics both locally and worldwide. They supported homosexuals, international amnesty, gun control, euthanasia, and the construction of Logan's Planned Parenthood. They donated goods and offered their own homes to orphans from Vietnam during the 1970s and Cuban and Salvadoran refugees in the 1980s. During the refugee crisis of the 1980s, members of the local meeting felt that they needed to reestablish an "underground railroad" to help those fleeing from oppression, just as Quakers had for runaway slaves prior to the American Civil War. Logan Friends frowned on capital punishment, nuclear testing, and the negative stereotypes of Middle Easterners circulating during Desert Storm in the early 1990s. Quakers boycotted USU's eating facilities in the late 1970s because they bought lettuce imported from Brazilian farmers who were being exploited by the Brazilian government. They also boycotted Nestlé products in the early 1980s because the company advised mothers in third world countries to discontinue nursing their infants and use Nestlé formulas. Throughout the collection members of the meeting were encouraged to write state legislators and other government officials to express their sentiment towards various social issues. Some of these letters are included in the newsletter as are a few replies they received.
There were no reports of animosity between this group and the dominant religious group of the area - the L.D.S. Church. In fact many friends expressed their appreciation when Mormons attended conferences with them and agreed on certain topics. In the first minutes of 1972 members even voiced their concern over the anti-Mormon sentiment many non-Mormons expressed and wanted to ease these tensions.
This collection contains the monthly business minutes and monthly newsletter of the Logan Friends' Meeting. The collection of the newsletters begins in 1977, whereas the minutes of the business meetings begin in November of 1972. The same basic topics are discussed in both the newsletter and minutes. The content of the newsletter does provide more detailed descriptions of the Logan Meeting, however, as well as the musings of the editors and clippings of articles written by or about Quakers in the Herald Journal, Cache Valley's local newspaper.
Restrictions on Access:
No restrictions on use, except: not available through interlibrary loan.
Restrictions on Use:
It is the responsibility of the researcher to obtain any necessary copyright clearances.
Permission to publish material from the Logan Society of Friends Records must be obtained from the Special Collections Manuscript Curator and/or the Special Collections Department Head.
Preferred Citation:
Initial Citation: Logan Society of Friends Records USU_COLL MSS 193, Box [ ]. Special Collections and Archives. Utah State University Merrill-Cazier Library. Logan, Utah.
Following Citations: USU_COLL MSS 196, USUSCA.
Arrangement:
This collection is arranged in chronological order.
Processing Note:
Processed in October 2001.
Acquisition Information:
This collection was donated by Jim Boone of Lewiston, Utah, in 1992. He was the editor of the Cache Valley Quaker Newsletter at that time.
Related Materials:
Papers of Allen W. Stokes (COLL MSS 215), the Meeting's first Chairman of Ministry and former Utah State University professor.
Bibliography: Sources:
Detailed Description of the Collection |
Black holes (BHs) progressed from a theoretical concept to a necessary ingredient in extragalactic astronomy with the discovery of quasars by Schmidt (1963). Radio astronomy was a growth industry at the time; many radio sources were identified with well-known phenomena such as supernova explosions. But a few were identified only with "stars" whose optical spectra showed nothing more than broad emission lines at unfamiliar wavelengths. Schmidt discovered that one of these "quasi-stellar radio sources" or "quasars", 3C 273, had a redshift of 16% of the speed of light. This was astonishing: the Hubble law of the expansion of the Universe implied that 3C 273 was one of the most distant objects known. But it was not faint. This meant that 3C 273 had to be enormously luminous - more luminous than any galaxy. Larger quasar redshifts soon followed. Explaining their energy output became the first strong argument for gravity power (Zel'dovich 1964; Salpeter 1964).
Studies of radio jets sharpened the argument. Many quasars and lower-power active galactic nuclei (AGNs) emit jets of elementary particles that are prominent in the radio and sometimes visible at optical wavelengths. Many are bisymmetric and feed lobes of emission at their ends (e.g., Fig. 1). Based on these, Lynden-Bell (1969, 1978) provided a convincing argument for gravity power. Suppose that we try to explain the typical quasar using nuclear fusion reactions, the most efficient power source that was commonly studied at the time. The total energy output of a quasar is at least the energy stored in its radio halo, E ~ 10^54 J. Via E = mc^2, this energy weighs 10^7 solar masses (M☉). But nuclear reactions have an efficiency of only 0.7%. So the mass that was processed by the quasar in order to convert 10^7 M☉ into energy must have been 10^9 M☉. This waste mass became part of the quasar engine. Meanwhile, rapid brightness variations showed that quasars are tiny, with diameters 2R ≲ 10^13 m. But the gravitational potential energy of 10^9 M☉ compressed inside 10^13 m is GM^2/R ~ 10^55 J. "Evidently, although our aim was to produce a model based on nuclear fuel, we have ended up with a model which has produced more than enough energy by gravitational contraction. The nuclear fuel has ended as an irrelevance" (Lynden-Bell 1978). This argument convinced many people that BHs are the most plausible quasar engines.
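These order-of-magnitude steps are easy to verify; a quick sketch with round values for the constants reproduces Lynden-Bell's numbers:

G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8        # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

E_radio = 1e54                     # energy stored in the radio halo, J
m_equiv = E_radio / c**2           # rest-mass equivalent of that energy
print(m_equiv / M_sun)             # ~6e6 solar masses, i.e. ~10^7 M_sun

m_processed = m_equiv / 0.007      # mass processed at 0.7% nuclear efficiency
print(m_processed / M_sun)         # ~8e8 solar masses, i.e. ~10^9 M_sun

M = 1e9 * M_sun                    # the accumulated "waste mass" engine
R = 1e13 / 2                       # radius from the variability size 2R ~ 10^13 m
print(G * M**2 / R)                # ~5e55 J, far more than the 10^54 J we set out to explain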
Figure 1. Cygnus A at 6 cm wavelength (Perley, Dreher, & Cowan 1984). The central point source is the galaxy nucleus; it feeds oppositely directed jets (only one of which is easily visible at the present contrast) and lobes of radio-emitting plasma. The resolution of this image is about 0."4.
Jets also provide more qualitative arguments. Many are straight over ~10^6 pc in length. This argues against the most plausible alternative explanation for AGNs, namely bursts of supernova explosions. The fact that jet engines remember ejection directions for 10^6 yr is suggestive of gyroscopes such as rotating BHs. Finally, in many AGNs, jet knots are observed to move away from the center of the galaxy at apparent velocities of several times the speed of light, c. These can be understood if the jets are pointed almost at us and if the true velocities are almost as large as c (Blandford, McKee, & Rees 1977). Observations of superluminal motions provide the cleanest argument for relativistically deep potential wells.
By the early 1980s, this evidence had resulted in a well-established paradigm in which AGNs are powered by BHs accreting gas and stars (Rees 1984). Wound-up magnetic fields are thought to eject particles in jets along the rotation poles. Energy arguments imply masses M ~ 10^6 to 10^9.5 M☉, so we refer to these as supermassive BHs to distinguish them from ordinary-mass (several-M☉) BHs produced by the deaths of high-mass stars. But despite the popularity of the paradigm, there was no direct dynamical evidence for supermassive BHs. The black hole search therefore became a very hot subject. It was also dangerous, because it is easy to believe that we have proved what we expect to find. Standards of proof had to be very high.
On February 20, 1986, a Proton rose off its launchpad in Kazakhstan bound for low Earth orbit. Its payload was a module designated 17KS, known better as the core stage of the Mir space station. In the 15 years that followed, modules were added and rearranged, prompting some to liken history’s first modular space station to a Tinker Toy. But however non-traditional it was at the time, a lot can be gleaned from its name. “Mir” roughly translates to “peace” or “world,” but a more nuanced translation is “village.” If the Americans and Soviets behind their space programs were a village, Mir brought everyone together for the sake of the mission.
Mir’s story begins in 1976 with a Soviet pledge to improve on the Salyut space station program. Like the American Skylab, Salyut was a single-module station, so all experiments and systems had to be launched ready to go inside the monolithic structure. Once in orbit, there was no real way to resupply these stations or add modules to extend their capabilities. But both the Americans and the Soviets knew this would be a valuable capability, not to mention a means to build a bigger, more complete space station for a larger mission; Wernher von Braun was among the first proponents of constructing a space station in orbit as early as the 1950s. Mir became the first proof of concept of this novel idea.
The core module of Mir reflects its early history as part of Salyut. The module is similar to the Salyut-6 and Salyut-7 space stations but different in one key respect: internal clutter. Under the new model for a space station, additional capabilities and research stations arrived with every new module, so the core didn’t need to be cluttered. Where Salyut was packed with instrument sections and payload, Mir’s core featured two small crew cabins.
From there, modules made Mir bigger and more capable. On April 12, 1987, Kvant-1 docked to the core module, adding instruments to measure the electromagnetic spectra and X-ray emissions of distant galaxies, quasars, and neutron stars to the growing station. This module also housed an attitude control system using gyrodynes rather than propellant-fed reaction controls, making the whole station far more maneuverable. A second Kvant module, Kvant-2, brought a second set of gyrodynes to Mir on December 6, 1989, as well as an airlock for simpler spacewalks, a jetpack akin to NASA’s Manned Maneuvering Unit, and a new life-support system capable of recycling water and generating breathable oxygen. With this module, the station started becoming truly self-reliant, or at least more so than it had been to this point.
The Kristall module was added next. When it docked on June 10, 1990, it gave Mir two androgynous docking ports that could accept both the Soyuz spacecraft and the Buran shuttle, though the latter never visited the station. Kristall did receive a different shuttle, though; it was the docking point for NASA’s space shuttle beginning in 1995. The last two research modules, Spektr and Priroda, were added in June of 1995 and April of 1996, respectively. By this time, the Soviet Union had fallen and Mir was owned and operated by the Russian Federal Space Agency.
Over the course of its decade under construction, Mir wasn’t empty. The first crew boarded in mid-March of 1986, while the station was still just a core module. From this first crew to when the last crew departed in 2000, Mir hosted 28 long-duration crews. These largely Russian expeditions lasted about six months, with some cosmonauts launching with one crew and returning with another.
But there were also other nations on board through collaborative programs. Intercosmos ran from 1978 to 1988 and saw Mir host visitors from Warsaw Pact nations, other socialist nations, and pro-Soviet non-aligned nations. Euromir began in the 1990s as a collaborative effort between the Russian Federal Space Agency and the European Space Agency. The Shuttle–Mir Program was a collaboration between Russia and the United States that saw astronauts taking Soyuz rides to Mir and cosmonauts riding on space shuttles that docked with Mir via the Kristall module.
Mir far outlived its planned five-year operational lifetime and ultimately hosted 125 cosmonauts and astronauts from 12 different nations on 17 expeditions. But technical problems and wear from age eventually took their toll. In November of 2000, the Russian government announced that Mir would be decommissioned and deorbited. On January 24, 2001, a Progress cargo ship loaded with fuel was launched to rendezvous with Mir; two months later it fired its engines to start the station’s controlled descent through the atmosphere. The station crashed in the South Pacific Ocean more than 1,500 miles from New Zealand.
There is a lot more to say about Mir, including some really interesting close calls and near disasters. Before you comment: I’m getting to those, rest assured! I just wanted to start off with an overview article. Sources: NASA; NASA; NASA; Russian Space Web; Universe Today.
Word Problems Worksheets – Dynamically Created Word Problems
This word problems worksheet will produce a great handout to help students learn the symbols for different words and phrases in word problems. Addition word problems worksheets using 1 digit with 2 addends: these worksheets will produce 1-digit problems with two addends, ten problems per worksheet. These word problems worksheets are appropriate for 3rd grade, 4th grade, and 5th grade.
Word Problems – DadsWorksheets
28 word problems worksheets. These story problems deal with travel time, including determining the travel distance, travel time, and speed using miles (customary units). This is a very common class of word problem, and specific practice with these worksheets will prepare students for when they encounter similar problems on standardized tests.
Math Word Problem Worksheets – K5 Learning
Grade 3 word problems worksheets: simple addition word problems (numbers under 100), addition in columns (numbers under 1,000), mental subtraction, subtraction in columns (2-3 digits), mixed addition and subtraction, simple multiplication (1 digit by 1 or 2 digits), multiplying multiples of 10, multiplication in columns, and simple division.
Math Word Problems Worksheets
Read, explore, and solve math word problems based on addition, subtraction, multiplication, division, fractions, decimals, ratios, and more. These word problems help children hone their reading and analytical skills, understand the real-life application of math operations, and explore other math topics.
Word Problems Worksheets – Free Printables – Education
Word problems are the best math problems, and we're here to help you solve them. Try our word problem worksheets to increase vocabulary and improve your child's reading and math skills. With fun activities like place value puzzles and themed holiday and sports problems, your child won't want to stop doing math. Word problems worksheets come in varying levels of difficulty for students of all ages.
Word Problem Worksheets – Worksheets & Lesson Plans
Word Problem Worksheet Basic 1: we use very basic numbers to work on all operations. Basic 2: common scenarios that most kids will run into at some point. Basic 3: mostly simple addition and subtraction on these. Basic 4: we break out the multiple-choice problems for this 2-pager. Easter-Related Word Problems 5: all problems are related to the bunny and jelly beans. Money-Related Word Problem Basic.
Math Word Problem Worksheets – HelpingWithMath
The word problem worksheets listed below will provide help for students who need to practice solving math word problems. Before working through the worksheets, discuss with your children any phrases or vocabulary that they may be unsure of.
Math Word Problem Worksheets
Multiple-Step Word Problems: word problems where students use reasoning and critical thinking skills to solve each problem. Math Word Problems (Mixed): mixed word problem stories for skills working on subtraction, addition, fractions, and more. Math Worksheets Full Index: a full index of all math worksheets on this site.
3rd Grade Math Word Problems – Free Worksheets with Answers
The following collection of free 3rd grade math word problems worksheets covers topics including addition, subtraction, multiplication, division, and measurement. These free 3rd grade math word problem worksheets can be shared at home or in the classroom, and they are great for warm-ups and cool-downs, transitions, extra practice, homework, and credit assignments.
Word Problem Worksheets. The worksheet is an assortment of four intriguing activities that will enhance your kid's knowledge and abilities. The worksheets are offered in developmentally appropriate versions for kids of different ages. Adding and subtracting integers worksheets come in many ranges, including a number of options for parentheses use.
You can begin with the uppercase cursives and then move on to the lowercase cursives. Handwriting for kids will also be rather simple to develop in such a fashion. If you're an adult and wish to improve your handwriting, it can be accomplished. So, if you really wish to improve your kid's handwriting, hurry to explore the advantages of an intelligent learning tool now!
Consider how you wish to compose your personal faith statement. Sometimes letters have to be adjusted to fit in a particular space. When a letter does not have any verticals, like a capital A or V, the very first diagonal stroke is regarded as the stem. The connected and slanted letters will be quite simple to form once the various shapes are learnt well. Even something as easy as guessing the beginning letter of long words can help your child improve his phonics abilities. Word Problem Worksheets.
There isn't anything like a superb story, and nothing like being the person who started a renowned urban legend. As for deciding on the ideal approach: cursive writing is basically joined-up handwriting. Practice reading by yourself as often as possible.
Research urban legends to get a sense of what's out there before making a new one. You may still not be sure the radicals have the right idea. Naturally, you won't use the majority of your ideas. If you've got an idea for a tool, please let us know. That means you can begin right where you are, no matter how little you might feel you've got to give. You may also be quite suspicious of any revolutionary shift. In earlier times you've stated that the move to independence may be too early.
Each handwriting lesson should start on a fresh new page, so the little one gets enough room to practice. Every handwriting lesson should begin with the alphabet. Handwriting is one of the most important learning needs of a kid. Learning how to read isn't just challenging, but fun too.
The use of grids is vital in helping your child learn to improve handwriting. Also, bear in mind that your very first try at brainstorming may not bring anything relevant, but don't stop trying. Once you are able to work, you might be surprised how much you get done. Take into consideration how you feel about yourself. Being able to modify the tracking helps fit more letters into a little space or spread out letters if they're too tight. Perhaps you must enlist the aid of another person to encourage you or help you keep focused.
Word Problem Worksheets. Try to remember, you always have to treat your child with great care, compassion, and affection to be able to help him learn. You may also ask your kid's teacher for extra worksheets. Your son or daughter will not just learn a different sort of font but will also learn how to write elegantly, because cursive writing is quite beautiful to look at. If a kid is already suffering from ADHD, his handwriting will definitely be affected. Accordingly, if children are taught to form different shapes in a suitable fashion, it will enable them to compose the letters in a really smooth and easy way. Although it can be cute every time a youngster says he "runned" on the playground, students need to understand how to use the past tense in order to speak and write correctly. Say you would like to improve your son's or daughter's handwriting: it is but obvious that you need to give your child plenty of practice; as they say, practice makes perfect.
Without phonics skills, it's almost impossible, especially for kids, to learn how to read new words. Techniques to handle attention issues: it is extremely important that, should you discover your kid is inattentive to his learning, especially when it has to do with reading and writing, you begin working on various ways to improve it. Use a student's name in every sentence so there's a single sentence for each kid. Because he or she learns at his or her own rate, there is some variability in the age when a child is ready to learn to read. Teaching your kid to form the alphabet is quite a complicated process.