Urolithiasis is a disease in which stones form in the kidneys, ureters, bladder, and urethra, the organs where urine is produced, transported, stored, and excreted. Because stones can occur anywhere in the urinary tract, they are classified by location as kidney stones, ureteral stones, bladder stones, and urethral stones. Urolithiasis occurs across a wide age range, from people in their 30s to those in their 60s. In Korea, it occurs most often in men in their 40s and women in their 60s.
Reasons why Urolithiasis is more frequent in summer
In summer, the number of urolithiasis patients rises sharply. Although urolithiasis can be caused by various factors, the risk is known to be higher in hot environments.
The more you sweat, the less you urinate. As a result, stone-forming substances become more concentrated in the urine and stay in the urinary tract longer, which promotes stone formation.
Expenses are the costs that relate to the earning of revenue. Another way to think of expenses is as the cost of doing business. Just as revenues represent the inflow into the organization, expenses represent the outflow: a stream of expenditures flowing out of the organization.
A direct cost can be specifically associated with a particular unit, department, or patient. The critical distinction for the manager is that the cost is directly attributable: remember, a direct cost can be traced. An indirect cost, on the other hand, cannot be specifically associated with a particular cost object. (A short illustrative sketch follows the practice exercises below.)
· Complete Practice Exercise 5–1: Grouping Expenses by Cost Center
· Complete Practice Exercise 6–1: Identifying Direct and Indirect Costs
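For readers who like to see the grouping mechanically, here is a minimal Python sketch of sorting expenses by cost center and by direct versus indirect status. The cost centers, items, and amounts are hypothetical and are not drawn from the practice exercises.

```python
# Hypothetical expense records: each is tagged with a cost center and whether
# the cost can be traced directly to that cost center (direct) or not (indirect).
expenses = [
    {"item": "nursing salaries",   "cost_center": "ICU",        "amount": 120_000, "direct": True},
    {"item": "IV supplies",        "cost_center": "ICU",        "amount": 15_000,  "direct": True},
    {"item": "building utilities", "cost_center": "facility",   "amount": 40_000,  "direct": False},
    {"item": "lab reagents",       "cost_center": "laboratory", "amount": 22_000,  "direct": True},
]

# Group totals by (cost center, direct/indirect).
totals = {}
for e in expenses:
    key = (e["cost_center"], "direct" if e["direct"] else "indirect")
    totals[key] = totals.get(key, 0) + e["amount"]

for (center, kind), amount in sorted(totals.items()):
    print(f"{center:<12} {kind:<9} ${amount:,}")
```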
Revenue represents amounts earned by an organization: that is, actual or expected cash inflows due to the organization’s major business. In the case of health care, revenue is mostly earned by rendering services to patients. Revenue flowing into the organization is sometimes referred to as the revenue stream.
· Review the following key terms: (Students may see the key terms on a quiz)
What do you believe the proportion of revenues from different sources is for your current health care organization?
Do you believe that this proportion (payer mix) will change in the future? Why?
Please make your initial post by Wednesday! You will need to reply to at least two classmates by Sunday. Your initial post must contain at least 200 words. In your replies to your classmates you will need at least 100 words.
As technology has advanced, it has driven component and PCB formats smaller, to the point that the phone in your pocket has more computing power than the room-filling IBM mainframes NASA used to calculate orbital projections. However, as technology has packed more power into smaller form factors, it has changed some of the requirements for production, requirements you need to stay on top of to ensure you get the best possible results from your printed circuit boards.
When you’re told that your printed circuit board designs will require microvias to provide layer-to-layer high-density interconnects and support multiple features on a single board, you may wonder what these features are and how they improve functionality on small electronic boards. By providing interconnects between layers, microvias make it easier to run multiple features without increasing the overall area of the board. Here’s a quick look at what microvias are, how they work, how reliable they are, and similar concerns.
What are microvias in printed circuit boards?
What are considered microvias?
Under IPC standards, a microvia is technically defined as a hole with an aspect ratio of 1:1 or less, where the aspect ratio compares the hole’s depth to its diameter, and with a depth not exceeding 0.25 millimeters. Some tiny microvias are as small as 15 micrometers, which is why they are most commonly cut using a laser. As technology has become smaller while becoming more powerful, circuit boards have evolved from single boards to multilayer boards, with microvias connecting the printed circuit layers so that multiple functions can operate at the same time.
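Expressed as a quick check, and assuming the aspect ratio is taken as depth divided by diameter with the 0.25 mm depth cap quoted above, the criterion might look like the following sketch; the example dimensions are purely illustrative.

```python
def is_microvia(diameter_mm: float, depth_mm: float, max_depth_mm: float = 0.25) -> bool:
    """True if the hole meets the quoted criteria: aspect ratio (depth/diameter)
    of 1:1 or less, and depth no greater than max_depth_mm."""
    aspect_ratio = depth_mm / diameter_mm
    return aspect_ratio <= 1.0 and depth_mm <= max_depth_mm

# Illustrative holes: a 0.10 mm laser-drilled via that is 0.08 mm deep qualifies;
# a 0.20 mm hole that is 0.40 mm deep does not.
print(is_microvia(diameter_mm=0.10, depth_mm=0.08))  # True
print(is_microvia(diameter_mm=0.20, depth_mm=0.40))  # False
```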
The fabrication process has four steps: layer lamination, via formation, via metallization, and via filling. Via filling can be handled using different materials, including epoxy resin, non-conductive or conductive fill material, electroplated copper, and similar materials. Microvias buried within the layers of the circuit board must be filled, while those on the surface may be left open, depending on your needs.
What is a microvia in a PCB?
Modern printed circuit boards that use microvias are typically printed on both sides and then layered to create a sandwich of processing power. The microvias connect the different layers of the circuit board; they do not make connections within a single layer. In other words, they carry signals vertically from one layer to another within the same PCB rather than routing them horizontally across a layer.
Because the reliability of the entire assembly is vital to its performance and the device’s overall reliability, microvia reliability is one of the constraints on large-scale adoption. Many factors can affect it, including the dielectric properties of the material used, the geometry parameters of the microvia, and the production parameters in the manufacturing facility.
Other factors that affect a microvia’s reliability over time include the stress and strain loads in single-layer microvias and the thermomechanical stresses that drive fatigue, from which an estimated fatigue life, and thus a projected point of failure, can be derived. These depend on the trace or conductor thickness, the dielectric layers around the microvia, the ductility coefficient of the conductor, and the strain concentration factor.
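The ductility coefficient and strain concentration factor mentioned above are the usual ingredients of strain-life (Coffin-Manson style) fatigue estimates. The sketch below is only a rough illustration of that kind of calculation; the CTE values, strain concentration factor, and fatigue constants are assumed round numbers, not values from IPC or any material datasheet.

```python
# Illustrative Coffin-Manson style estimate of copper fatigue life in a microvia.
# All constants below are assumed, round-number values for demonstration only.

def thermal_strain_range(cte_dielectric_ppm: float, cte_copper_ppm: float,
                         delta_t_c: float, strain_concentration: float = 1.6) -> float:
    """Approximate cyclic strain range driven by the CTE mismatch over one thermal cycle."""
    mismatch = (cte_dielectric_ppm - cte_copper_ppm) * 1e-6  # strain per degree C
    return strain_concentration * mismatch * delta_t_c

def cycles_to_failure(strain_range: float,
                      fatigue_ductility_coeff: float = 0.3,   # eps_f' (ductility coefficient)
                      fatigue_ductility_exp: float = -0.6) -> float:
    """Solve delta_eps / 2 = eps_f' * (2 * N_f) ** c for N_f."""
    return 0.5 * (strain_range / (2.0 * fatigue_ductility_coeff)) ** (1.0 / fatigue_ductility_exp)

d_eps = thermal_strain_range(cte_dielectric_ppm=60.0, cte_copper_ppm=17.0, delta_t_c=100.0)
print(f"strain range ~{d_eps:.4f}, estimated cycles to failure ~{cycles_to_failure(d_eps):,.0f}")
```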
There are some challenges to be considered in producing printed circuit boards with microvias, especially the issue of microvia voiding. This happens when there is an incomplete filling of the microvia, causing a void to be formed. This void can cause increased stress in the microvia which, in turn, lowers the overall useful lifespan of the printed circuit board.
Because a void can prevent the board from operating properly, voiding can have a significant impact on the board’s reliability over time. Studies have shown that small spherical voids can slightly increase the overall reliability and lifespan of the printed circuit board, but larger ones can drastically reduce both, to the point of causing a failure in quality assurance testing.
Microvias are an excellent tool when creating multi-layer printed circuit boards, allowing your company to keep up with advancements in technology while reducing the size of your circuit boards. However, they must be processed carefully during the filling and metallization process to ensure that voids and poor plating adhesion are prevented in manufacturing. Working with a precision laser microvia drilling company can help improve your odds of success.
Contact Micron Laser Technology to see how our microvia laser drilling processes can advance your PCB designs and avoid the pitfalls of underprocessing, overprocessing, and poor plating adhesion that lead to downstream reliability issues.
cross posted at http://blogs.parisisd.net/dmartin
TCEA session 1
Language Arts Activities using Microsoft Office
1. Sentence elaboration
Start with a simple sentence, and have students copy and paste it each time they elaborate, so they can see where they started and where they ended up: a concrete visual of the improvement.
The dog ran.
Copy and paste the sentence, then add an adjective:
The spotted dog ran.
How did the dog run?
The spotted dog ran swiftly.
Be more descriptive: what breed?
The spotted Dalmatian ran swiftly.
The spotted Dalmatian swiftly trotted.
This covers several language arts skills while also teaching students to open Word and to copy and paste.
Find and Replace
Give students a prepared sentence or paragraph.
Use Find and Replace to find passive or weak verbs like "is". In the Replace box, enter "is" again, but click More > Format and set the replacement format to highlighted text, so every unhighlighted "is" is replaced with a highlighted "is". Then click Replace All.
Students can quickly see just how many times "is" appears in their writing.
The same trick helps with varying sentence structure: search for "The " (capital T plus a space) with Match Case checked. This highlights every sentence that starts with "The". You can change the highlight colors.
Parts of Speech
Use formatting and/or highlighting, on a prepared passage or on students' own writing.
Have them go through and highlight, bold, or change the font of all the verbs (or another part of speech).
Format Painter speeds this up. To find all the nouns: select a noun, apply a highlight, then double-click Format Painter; it stays active, so students can click each noun in turn. Click Format Painter again to turn it off, change the formatting, and repeat, perhaps for the verbs, clicking Format Painter once more with the new format.
Random sentence generator (Excel)
Go to Tools > Add-Ins and check the first two boxes to turn on the Analysis ToolPak.
The spreadsheet holds lists of words; press F9 and it generates a new sentence.
It uses a combination of an IF statement and the RANDBETWEEN function.
You can highlight the word lists and format the font to white so kids don't see the words.
The file is available online.
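The spreadsheet itself is not reproduced here, but the idea behind it can be sketched in a few lines. This Python version is only an illustration of what the IF/RANDBETWEEN spreadsheet does; the word lists are made up.

```python
import random

# Hypothetical word lists standing in for the hidden columns in the spreadsheet.
adjectives = ["spotted", "sleepy", "enormous", "curious"]
nouns = ["dog", "dalmatian", "turtle", "robot"]
adverbs = ["swiftly", "quietly", "happily", "clumsily"]
verbs = ["trotted", "wandered", "raced", "tiptoed"]

def new_sentence() -> str:
    """Pick one word from each list, like pressing F9 to recalculate RANDBETWEEN."""
    return (f"The {random.choice(adjectives)} {random.choice(nouns)} "
            f"{random.choice(adverbs)} {random.choice(verbs)}.")

print(new_sentence())
```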
Design the sentences (database version)
Set up fields for ID, student name, adjective, noun 1, noun 2, adverb, and past tense verb, then add data for each field.
She used the Label Wizard and typed in what the sentence might look like: "The", a space, the adjective field, a space, the noun field, a space, the verb field, then "to the" and the second noun field.
It will put the words in there for you.
Detailed instructions are online.
Drawn objects with text (PowerPoint)
Students click on the noun, or whatever part of speech you want them to click on.
Create a text box, type in the sentence, and enlarge the font.
Create buttons to tell the student whether they clicked right or wrong. You can create the prompts in WordArt, e.g. "Try again", "You are right", or maybe a star that says "Correct!"
If they click anywhere on the sentence besides the noun, we want it to say "Try again": add an animation effect to the "Try again" prompt, then attach the animation to the sentence (right-click the effect, the Timing box comes up, click Triggers). At first, everything will trigger "Try again".
Next create boxes over the nouns as exceptions. Make the fill transparent so the word shows through (and use no outline). Add an animation effect for the "Correct" prompt, again right-click > Timing > Triggers, and choose the rectangle number for whichever box is correct.
You can use the same setup for finding an error in a sentence (TAKS practice).
(This one is in the book, Gamewise for Language Arts.)
Booths 935 and 2479.
Conference price for the book: $27.00.
New Zealand’s freshwater fish have strong connections with fish of other southern lands. The shortfin eel, īnanga and kōaro are found in eastern Australia, and īnanga in Patagonian South America and the Falkland Islands. In the past some researchers suggested that this spread was because New Zealand, like Australia, was once part of the supercontinent Gondwana. Recent DNA research indicates that it is far more likely that these fish are more recent arrivals, carried around the southern hemisphere on oceanic currents. Some endemic groups such as the pencil galaxias may have an ancient Gondwana heritage.
Evolving from marine species
Some species that evolved as marine fish have established themselves in fresh water. Just how this happens is unknown, but at some stage an event must have caused a shift into fresh water. Perhaps a lack of fish diversity in river rapids provided an opportunity for a marine species to invade this environment.
The torrentfish still retains its marine connections by living at sea during larval and early juvenile life. The black flounder must still return to sea for spawning and early juvenile life. Several flounder (mainly marine) can also live in river estuaries and lowland lakes. But the black flounder has taken the process a little further – it may be found many kilometres up some rivers.
Links to the sea
Nearly half the native freshwater species are found in the sea at some life stage. This may be as larvae and juveniles (as with whitebait species and several bullies), after which they return to fresh water. Some adults (such as eels) may migrate to sea to spawn. In another example, the smelts living in rivers spend most of their lives at sea before returning to fresh water as adults, to spawn.
Fish migrate between rivers and the sea at most times of the year, but especially in spring and autumn. These species are known as diadromous (from Greek words meaning ‘running across’).
The distribution of the migratory species depends on how far upstream they can move. Rapids and waterfalls are not necessarily barriers. Some species have extraordinary climbing abilities, and can be found upstream of waterfalls tens of metres high. Eels are able to climb like this, and some of the whitebait species, especially kōaro, banded kōkopu and shortjaw kōkopu.
These fish climb mostly when small, moving up the wet margins of falls, and using their fins to hold onto rocks by surface adhesion. Some are well known for climbing out of buckets, and if in captivity, often climb out of aquariums (they can climb glass as long as it is damp).
Stuck in the Nevis
For the past half million years Otago’s Nevis River has flowed north into the Kawarau River, which then flows into the Clutha River. But it is thought that the Nevis once flowed south, into Southland’s Mataura River. Supporting evidence is that the Nevis has a Galaxiid species (Galaxias gollumoides) that is otherwise found only in the Mataura and other Southland waterways. The fish is found only in one other isolated locality in the Clutha catchment.
Many species can vary their behaviour. Although the ancestral pattern is for them to go to sea, they can establish landlocked populations in the open water of lakes rather than the sea – mostly at the juvenile stage.
A high number of native species are nocturnal, moving from under cover to be active at night. Why they are so nocturnal is not understood. The most likely explanation is that it might minimise predation by aquatic birds, especially shags, and perhaps herons. But if it is an avoidance strategy then a paradox emerges. Some fish are habitual prey for large eels, which are also more active at night, emerging from cover to feed.
Wind chimes produce clear, pure tones when struck by a mallet or a suspended clapper. A wind chime usually consists of a set of individual alloy rods, tuned by length to a series of intervals considered pleasant. These are suspended from a frame in such a way that a centrally suspended clapper can reach and strike all of the rods. When the wind blows, the clapper is set in motion and randomly strikes one or more of the suspended rods, causing the rod to vibrate and emit a tone.
The pitch of that tone is governed by the length of the rod, but the perceived loudness is affected by many factors: the force of the clapper's impact, the alloy's density and structure, and the speed and direction of the wind, to name a few. The loudness is also limited by the lack of a resonating chamber or of a hard connection between the rods and the frame. The chime would certainly be louder, for instance, if each rod were paired with a small chamber containing a volume of air whose fundamental resonance matched that of the rod: when struck, the rod would transfer vibration to the enclosed air as well as directly to the atmosphere, resulting in a louder tone. A hard connection between rods and frame would accomplish a similar result to some degree; the vibrations of each rod would be transmitted to the others, producing more vibrating surface area (and hence more volume).
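As an aside on "tuned by length": for a uniform rod vibrating freely at both ends, the fundamental frequency scales roughly as the inverse square of its length, so candidate rod lengths for a chosen set of intervals can be estimated directly. The reference length and interval ratios in the sketch below are assumptions for illustration, not values from this text.

```python
# Rough tuning sketch assuming fundamental frequency ~ 1 / length**2 for a free-free rod.
REF_LENGTH_MM = 300.0  # assumed length of the rod tuned to the root note
INTERVALS = {"root": 1.0, "major 2nd": 9 / 8, "major 3rd": 5 / 4, "5th": 3 / 2, "6th": 5 / 3}

for name, ratio in INTERVALS.items():
    length_mm = REF_LENGTH_MM / ratio ** 0.5  # higher pitch -> shorter rod
    print(f"{name:>9}: cut rod to ~{length_mm:.1f} mm")
```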
The transmission of the chime's sound, without the above alterations, is quite simple: each rod radiates longitudinal waves outward from its long axis (excepting deviations caused by deformation or impurities in the metal), and these travel until they are absorbed or reflected by an independent surface. The waves travel at a speed governed by the temperature of the atmosphere: the warmer the air, the faster the sound travels.
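For the temperature dependence just mentioned, a common approximation for dry air is v ≈ 331.3 + 0.606·T metres per second, with T in degrees Celsius; a short sketch:

```python
def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) as a linear function of temperature."""
    return 331.3 + 0.606 * temp_c

for t in (-10, 0, 20, 35):
    print(f"{t:>4} degC: ~{speed_of_sound(t):.1f} m/s")
```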
The waves that are not absorbed can be perceived by the human ear. Of equal importance to the directly intercepted waves are those reflected before interception, as these allow an animal or human to locate the sound emitter relative to itself. These intercepted waves, reflected or not, are processed by the ear in a remarkable way.
Sound waves vibrate the eardrum, causing the minute movement of three tiny bones (hammer, then anvil, then stirrup) in the middle ear. The bone chain, having converted air vibration to mechanical vibration, systematically disturbs the fluid (perilymph) in the inner ear (cochlea). Hair cells along the basilar membrane (which runs the length of the cochlea) sense the disturbances and convert them into auditory signals that are transmitted to the nervous system. With pure tones such as those created by a wind chime, certain groups of hair cells are agitated more than others, and the position of that group along the basilar membrane corresponds directly to the relative pitch of the tone.
Flag of the Osage Nation of Oklahoma
Regions with significant populations: United States (historically Missouri, Oklahoma, Arkansas, and Kansas; now only Oklahoma)
Religion: Christianity, traditional spirituality
Related ethnic groups: Siouan peoples, Dhegihan peoples, esp. Ponca, Otoe, Iowa
The Osage Nation (OH-sayj; Osage: Ni-u-kon-ska, "People of the Middle Waters") is a Midwestern Native American tribe of the Great Plains. The tribe developed in the Ohio and Mississippi river valleys around 700 BC along with other groups of its language family. They migrated west of the Mississippi after the 17th century due to wars with the Iroquois, who were invading the Ohio Valley from New York and Pennsylvania in search of new hunting grounds. The nations separated at that time, and the Osage settled near the confluence of the Missouri and the Mississippi rivers.
Osage is a Siouan language spoken by the Osage people of Oklahoma.
The Midwestern United States, also referred to as the American Midwest, Middle West, or simply the Midwest, is one of four census regions of the United States Census Bureau. It occupies the northern central part of the United States. It was officially named the North Central Region by the Census Bureau until 1984. It is located between the Northeastern United States and the Western United States, with Canada to its north and the Southern United States to its south.
Native Americans, also known as American Indians, Indigenous Americans and other terms, are the indigenous peoples of the United States, except Hawaii. There are over 500 federally recognized tribes within the US, about half of which are associated with Indian reservations. The term "American Indian" excludes Native Hawaiians and some Alaska Natives, while Native Americans are American Indians, plus Alaska Natives of all ethnicities. Native Hawaiians are not counted as Native Americans by the US Census, instead being included in the Census grouping of "Native Hawaiian and other Pacific Islander".
The term "Osage" is a French version of the tribe's name, which can be roughly translated as "warlike". The Osage people refer to themselves in their indigenous Dhegihan Siouan language as Wazhazhe, or "Mid-waters".
An indigenous language or autochthonous language, is a language that is native to a region and spoken by indigenous people. This language is from a linguistically distinct community that originated in the area. Indigenous languages are not necessarily national languages and national languages are not necessarily indigenous to the country.
At the height of their power in the early 19th century, the Osage had become the dominant power in the region, feared by neighboring tribes. The tribe controlled the area between the Missouri and Red rivers, the Ozarks to the east and the foothills of the Wichita Mountains to the south. They depended on nomadic buffalo hunting and agriculture.
The Missouri River is the longest river in North America. Rising in the Rocky Mountains of western Montana, the Missouri flows east and south for 2,341 miles (3,767 km) before entering the Mississippi River north of St. Louis, Missouri. The river drains a sparsely populated, semi-arid watershed of more than 500,000 square miles (1,300,000 km2), which includes parts of ten U.S. states and two Canadian provinces. Although nominally considered a tributary of the Mississippi, the Missouri River above the confluence is much longer and carries a comparable volume of water. When combined with the lower Mississippi River, it forms the world's fourth longest river system.
The Red River, or sometimes the Red River of the South, is a major river in the southern United States of America. It was named for the red-bed country of its watershed. It is one of several rivers with that name. Although it was once a tributary of the Mississippi River, the Red River is now a tributary of the Atchafalaya River, a distributary of the Mississippi that flows separately into the Gulf of Mexico. It is connected to the Mississippi River by the Old River Control Structure.
The Ozarks, also called the Ozark Mountains or Ozark Plateau, is a physiographic region in the U.S. states of Missouri, Arkansas, Oklahoma, and extreme southeastern Kansas. The Ozarks cover a significant portion of northern Arkansas and most of the southern half of Missouri, extending from Interstate 40 in Arkansas to the Interstate 70 in central Missouri.
The 19th-century painter George Catlin described the Osage as "the tallest race of men in North America, either red or white skins; there being ... many of them six and a half, and others seven feet."
George Catlin was an American painter, author, and traveler, who specialized in portraits of Native Americans in the Old West. Travelling to the American West five times during the 1830s, Catlin was the first white man to depict Plains Indians in their native territory.
The missionary Isaac McCoy described the Osage as an "uncommonly fierce, courageous, warlike nation" and said they were the "finest looking Indians I have ever seen in the West".
Isaac McCoy was a Baptist missionary among the Native Americans in present-day Indiana, Michigan, Missouri and Kansas. He was an advocate of Indian removal from the eastern United States, proposing an Indian state in what is now Kansas, Nebraska, and Oklahoma. He also played an instrumental role in the founding of Grand Rapids, Michigan and Kansas City, Missouri.
In the Ohio Valley, the Osage originally lived among speakers of the same Dhegihan language stock, such as the Kansa, Ponca, Omaha, and Quapaw. Researchers believe that the tribes likely became differentiated in languages and cultures after leaving the lower Ohio country. The Omaha and Ponca settled in what is now Nebraska, the Kansa in Kansas, and the Quapaw in Arkansas.
The Kaw Nation are a federally recognized Native American tribe in Oklahoma and parts of Kansas. They come from the central Midwestern United States. The tribe known as Kaw have also been known as the "People of the South wind", "People of water", Kansa, Kaza, Kosa, and Kasa. Their tribal language is Kansa, classified as a Siouan language.
The Ponca are a Midwestern Native American tribe of the Dhegihan branch of the Siouan language group. There are two federally recognized Ponca tribes: the Ponca Tribe of Nebraska and the Ponca Tribe of Indians of Oklahoma. Their traditions and historical accounts suggest they originated as a tribe east of the Mississippi River in the Ohio River valley area and migrated west for game and as a result of Iroquois wars.
The Omaha are a federally recognized Midwestern Native American tribe who reside on the Omaha Reservation in northeastern Nebraska and western Iowa, United States. The Omaha Indian Reservation lies primarily in the southern part of Thurston County and northeastern Cuming County, Nebraska, but small parts extend into the northeast corner of Burt County and across the Missouri River into Monona County, Iowa. Its total land area is 796.355 km2 (307.474 sq mi) and a population of 5,194 was recorded in the 2000 census. Its largest community is Macy.
In the 19th century, the Osage were forced to remove from Kansas to Indian Territory (present-day Oklahoma), and the majority of their descendants live in Oklahoma. In the early 20th century, oil was discovered on their land. Many Osage became wealthy through leasing fees generated by their headrights. However, during the 1920s, they suffered manipulation and numerous murders by whites eager to take over their wealth. In the 21st century, the federally recognized Osage Nation has about 20,000 enrolled members, 6,780 of whom reside in the tribe's jurisdictional area. Members also live outside the nation's tribal land in Oklahoma and in other states around the country, including Kansas.
The Osage are descendants of cultures of indigenous peoples who had been in North America for thousands of years. Studies of their traditions and language show that they were part of a group of Dhegihan-Siouan speaking people who lived in the Ohio River valley area, extending into present-day Kentucky. According to their own stories (common to other Dhegihan-Siouan tribes, such as the Ponca, Omaha, Kaw and Quapaw), they migrated west as a result of war with the Iroquois and/or to reach more game.
Scholars are divided as to whether the Osage and other groups left before the Beaver Wars of the Iroquois. Some believe that the Osage started migrating west as early as 1200 CE and are descendants of the Mississippian culture in the Ohio and Mississippi valleys. They attribute their style of government to effects of the long years of war with invading Iroquois. After resettling west of the Mississippi River, the Osage were sometimes allied with the Illiniwek and sometimes competing with them, as that tribe was also driven west of Illinois by warfare with the powerful Iroquois.
Eventually the Osage and other Dhegihan-Siouan peoples reached their historic lands, likely developing and splitting into the above tribes in the course of the migration to the Great Plains. By 1673, when they were recorded by the French, many of the Osage had settled near the Osage River in the western part of present-day Missouri. They were recorded in 1690 as having adopted the horse (a valuable resource often acquired through raids on other tribes). The desire to acquire more horses contributed to their trading with the French. They attacked and defeated indigenous Caddo tribes to establish dominance in the Plains region by 1750, with control "over half or more of Missouri, Arkansas, Oklahoma, and Kansas," which they maintained for nearly 150 years. They lived near the Missouri River. Together with the Kiowa, Comanche, and Apache, they dominated western Oklahoma. They also lived near the Quapaw and Caddo in Arkansas.
The Osage held high rank among the old hunting tribes of the Great Plains. From their traditional homes in the woodlands of present-day Missouri and Arkansas, the Osage would make semi-annual buffalo hunting forays into the Great Plains to the west. They also hunted deer, rabbit, and other wild game in the central and eastern parts of their domain. The women cultivated varieties of corn, squash, and other vegetables near their villages, which they processed for food. They also harvested and processed nuts and wild berries. In their years of transition, the Osage had cultural practices that had elements of the cultures of both Woodland Native Americans and the Great Plains peoples. The villages of the Osage were important hubs in the Great Plains trading network served by Kaw people as intermediaries.
In 1673 French explorers Jacques Marquette and Louis Joliet were among the first Europeans to encounter the Osage as they explored southward from present-day Canada in their expedition along the Mississippi River. Marquette and Joliet claimed all land in the Mississippi Valley for France. Marquette's 1673 map noted that the Kanza, Osage, and Pawnee tribes controlled much of modern-day Kansas.
The Osage called the Europeans I'n-Shta-Heh (Heavy Eyebrows) because of their facial hair. As experienced warriors, the Osage allied with the French, with whom they traded, against the Illiniwek during the early 18th century.
The first half of the 1720s was a time of more interaction between the Osage and French. Étienne de Veniard, Sieur de Bourgmont founded Fort Orleans in their territory; it was the first European fort on the Missouri River. Jesuit missionaries were assigned to French forts and established missions to the Osage, learning their language. In 1724, the Osage allied with the French rather than the Spanish in their fight for control of the Mississippi region.
In 1725, Bourgmont led a delegation of Osage and other tribal chiefs to Paris. The Native Americans were shown the wonders and power of France, including a visit to Versailles, Château de Marly and Fontainebleau. They hunted with Louis XV in the royal forest and saw an opera. During the French and Indian War (the North American front of the Seven Years' War in Europe), France was defeated by Great Britain and in 1763 ceded its lands east of the Mississippi River to that nation. France made a separate deal with Spain, which took nominal control of much of the Illinois Country west of the great river.
By the late 18th century, the Osage did extensive business with the French Creole fur trader René Auguste Chouteau, who was based in St. Louis; the city was part of territory under nominal Spanish control after the Seven Years' War but was dominated by French colonists. They were the de facto European power in St. Louis and other settlements along the Mississippi, building their wealth on the fur trade. In return for the Chouteau brothers' building a fort in the village of the Great Osage 350 miles (560 km) southwest of St. Louis, the Spanish regional government gave the Chouteaus a six-year monopoly on trade (1794–1802). The Chouteaus named the post Fort Carondelet after the Spanish governor. The Osage were pleased to have a fur trading post nearby, as it gave them access to manufactured goods and increased their prestige among the tribes.
Lewis and Clark reported in 1804 that the peoples were the Great Osage on the Osage River, the Little Osage upstream, and the Arkansas band on the Verdigris River, a tributary of the Arkansas River. The Osage then numbered some 5,500.
The Osage and Quapaw suffered extensive losses due to smallpox in 1801-1802. Historians estimate up to 2,000 Osage died in the epidemic.
In 1804 after the United States made the Louisiana Purchase, they appointed the wealthy French fur trader Jean Pierre Chouteau, a half-brother of René Auguste Chouteau, as the US Indian agent assigned to the Osage. In 1809 he founded the Saint Louis Missouri Fur Company with his son Auguste Pierre Chouteau and other prominent men of St. Louis, most of whom were of French-Creole descent, born in North America. Having lived with the Osage for many years and learned their language, Jean Pierre Chouteau traded with them and made his home at present-day Salina, Oklahoma, in the western part of their territory.
The Choctaw chief Pushmataha, based in Mississippi, made his early reputation in battles against the Osage tribe in the area of southern Arkansas and their borderlands.
In the early 19th century, some Cherokee, such as Sequoyah, voluntarily removed from the Southeast to the Arkansas River valley under pressure from European-American settlement in their traditional territory. They clashed there with the Osage, who controlled this area. The Osage regarded the Cherokee as invaders. They began raiding Cherokee towns, stealing horses, carrying off captives (usually women and children), and killing others, trying to drive out the Cherokee with a campaign of violence and fear. The Cherokee were not effective in stopping the Osage raids, and worked to gain support from related tribes as well as whites. The peoples confronted each other in the "Battle of Claremore Mound," in which 38 Osage warriors were killed and 104 were taken captive by the Cherokee and their allies. As a result of the battle, the United States constructed Fort Smith in present-day Arkansas. It was intended to prevent armed confrontations between the Osage and other tribes. The US compelled the Osage to cede additional land to the federal government in the treaty referred to as Lovely's Purchase.
In 1833, the Osage clashed with the Kiowa near the Wichita Mountains in modern-day south-central Oklahoma, in an incident known as the Cutthroat Gap Massacre. The Osage cut off the heads of their victims and arranged them in rows of brass cooking buckets. Not a single Osage died in this attack. Later, Kiowa warriors, allied with the Comanche, raided the Osage and others. In 1836, the Osage prohibited the Kickapoo from entering their Missouri reservation, pushing them back to ceded lands in Illinois.
After the US acquired Louisiana Territory in 1803, the government became interested in relations with the various tribal nations of the territory. President Thomas Jefferson commissioned the Lewis and Clark Expedition to survey the territory and report on its peoples, plants and animals, at the same time that it sought a route via the Missouri River to the Pacific Ocean. It encountered the Osage in their territory along the Osage River.
The major part of the tribe moved to the Three-Forks region of what would become Oklahoma soon after the encounter with the Lewis and Clark Expedition, wanting to maintain distance from European Americans. They were buffered for a period from interaction with the United States settlers and representatives. This part of the tribe did not participate in negotiations for the treaty of 1808, but their assent was obtained in 1809.
After the expedition was completed in 1806, Jefferson appointed Meriwether Lewis as Indian Agent for the territory of Missouri and the region. There were continuing confrontations between the Osage and other tribes in this area. Lewis anticipated that the US would have to go to war with the Osage, because of their raids on eastern Natives and European-American settlements. However, the U.S. lacked sufficient military strength to coerce Osage bands into ceasing their raids. It decided to supply other tribes with weapons and ammunition, provided they attack the Osage to the point they "cut them off completely or drive them from their country."
For instance, in September 1807, Lewis persuaded the Potawatomie and Sac and Fox to attack an Osage village; three Osage warriors were killed. The Osage blamed the Americans for the attack. One of the Chouteau traders intervened, and persuaded the Osage to conduct a buffalo hunt rather than seek retaliation by attacking Americans.
Lewis also tried to control the Osage by separating the friendly members from the hostile. In a letter dated August 21, 1808, President Jefferson told Lewis that he approved of the measures Lewis had taken to ally with the friendly Osage and separate them from those deemed hostile. Jefferson writes, "we may go further, & as the principal obstacle to the Indians acting in large bodies is the want of provisions, we might supply that want, & ammunition also if they need it."
But the goal foremost pursued by the US was to push the Osage out of areas being settled by European Americans, who began to enter the Louisiana Territory after the US acquired it. The lucrative fur trade stimulated the growth of St. Louis and attracted settlers there. The US and Osage signed their first treaty on November 10, 1808, by which the Osage made a major cession of land in present-day Missouri. Under the Osage Treaty, they ceded 52,480,000 acres (212,400 km2) to the federal government. This treaty created a buffer line between the Osage and new European-American settlers in the Missouri Territory. It also established the requirement that the U.S. President had to approve all future land sales and cessions by the Osage.
The Treaty of Ft. Osage states the U.S. would "protect" the Osage tribe "from the insults and injuries of other tribes of Indians, situated near the settlements of white people ...". As was common in Native American relations with the federal government, the Osage found that the US did not carry through on this commitment.
The Osage also occupied land in present-day Kansas and in Indian Territory. In the 1830s the US government promised some of this land to the Cherokee and four other southeastern tribes under Indian Removal. When the Cherokee arrived to find that the land was already occupied, many conflicts arose with the Osage over territory and resources.
Between the first treaty with the US and 1825, the Osages ceded their traditional lands across Missouri, Arkansas, and Oklahoma in the treaties of 1818 and 1825. In exchange they were to receive reservation lands and supplies to help them adapt to farming and a more settled culture.
They were first relocated to a southeast Kansas reservation called the Osage Diminished Reserve. The city of Independence later developed here. The first Osage reservation was a 50 by 150-mile (240 km) strip. The United Foreign Missionary Society sent clergy to them, supported by the Presbyterian, Dutch Reformed, and Associate Reformed churches. They established the Union, Harmony, and Hopefield missions. Cultural differences often led to conflicts, as the Protestants tried to impose their culture. The Catholic Church also sent missionaries. The Osage were attracted to their sense of mystery and ritual, but felt the Catholics did not fully embrace the Osage sense of the spiritual incarnate in nature.
During this period in Kansas, the tribe suffered from the widespread smallpox pandemic of 1837–1838, which caused devastating losses among Native Americans from Canada to New Mexico. All clergy except the Catholics abandoned the Osage during the crisis. Most survivors of the epidemic had received vaccinations against the disease. The Osage believed that the loyalty of Catholic priests, who stayed with them and also died in the epidemic, created a special covenant between the tribe and the Catholic Church, but they did not convert in great number.
Honoring this special relationship, as well as the Catholic sisters who taught their children on reservations, numerous Osage elders went to St. Louis in 2014 to celebrate the 250th anniversary of the city's European founding. They participated in a mass partially conducted in Osage at St. Francis Xavier (College) Catholic Church of St. Louis University on April 2, 2014, as part of the planned activities. One of the concelebrants was Todd Nance, the first Osage ordained as a Catholic priest.
In 1843 the Osage asked the federal government to send "Black Robes", Jesuit missionaries, to educate their children; the Osage considered the Jesuits better able to work with their culture than the Protestant missionaries. The Jesuits also established a girls' school operated by the Sisters of Loretto from Kentucky. During a 35-year period, most of the missionaries were new recruits from Ireland, Italy, the Netherlands and Belgium. They taught, established more than 100 mission stations, built churches, and created the longest-running school system in Kansas.
White squatters continued to be a frequent problem for the Osage, but they recovered from population losses, regaining a total of 5,000 members by 1850. The Kansas–Nebraska Act resulted in numerous settlers arriving in Kansas; both abolitionists and pro-slavery groups were represented among those trying to establish residency in order to vote on whether the territory would have slavery. The Osage lands became overrun with European-American settlers. In 1855, the Osage suffered another epidemic of smallpox, because a generation had grown up without getting vaccinated.
Subsequent US treaties and laws through the 1860s further reduced the lands of the Osage in Kansas. During the years of the Civil War, they were buffeted by both sides, as they were located between Union forts in the North, and Confederate forces and allies to the South. While the Osage tried to stay neutral, both sides raided their territory, taking horses and food stores. They struggled simply to survive through famine and the war. During the war, many Caddoan and Creek refugees from Indian Territory came to Osage country in Kansas, which further strained their resources.
Although the Osage favored the Union by a five to one ratio, they made a treaty with the Confederacy to try to buy some peace. As a result, after the war, they were forced to make a new treaty with the US during Reconstruction. They were forced to give up more territory in Kansas to European-American settlers. By a treaty in 1865, they ceded another 4 million acres (16,000 km2) to the United States and were facing the issue of eventual removal from Kansas to Indian Territory.
In 1867, Lt. Col. George Armstrong Custer chose Osage scouts for his campaign against Chief Black Kettle and his band of Cheyenne and Arapaho Indians in western Oklahoma. He chose the Osage because of their scouting expertise, excellent knowledge of the terrain, and military prowess. Custer and his soldiers took Chief Black Kettle and his peaceful band by surprise in the early morning near the Washita River on November 27, 1868. They killed Chief Black Kettle, and the attack resulted in additional deaths on both sides. This incident became known as the Battle of Washita River or, more accurately, the Washita Massacre, an ignominious part of the United States' Indian Wars.
Following the American Civil War and victory of the Union, the Drum Creek Treaty was passed by Congress July 15, 1870 during the Reconstruction era and ratified by the Osage at a meeting in Montgomery County, Kansas, on September 10, 1870. It provided that the remainder of Osage land in Kansas be sold and the proceeds used to relocate the tribe to Indian Territory in the Cherokee Outlet. By delaying agreement with removal, the Osage benefited by a change in administration. They sold their lands to the "peace" administration of President Ulysses S. Grant, for which they received more money: $1.25 an acre rather than the 19 cents previously offered to them by the US.
The Osage were one of the few American Indian nations to buy their own reservation, and as a result they retained more rights to the land and greater sovereignty. The reservation, of 1,470,000 acres (5,900 km2), is coterminous with present-day Osage County, Oklahoma, in the north-central portion of the state between Tulsa and Ponca City. The Osage established three towns: Pawhuska, Hominy and Fairfax, each dominated by one of the major bands at the time of removal. The Osage continued their relationship with the Catholic Church, which established schools operated by two orders of nuns, as well as mission churches.
It was many years before the Osage recovered from the hardship suffered during their last years in Kansas and their early years on the reservation in Indian Territory. For nearly five years during the depression of the 1870s, the Osage did not receive their full annuity in cash. Like other Native Americans, they suffered from the government's failure to provide full or satisfactory rations and goods as part of their annuities during this period. Middlemen made profits by shorting supplies to the Indians or giving them poor-quality food. Some people starved. Many adjustments had to be made to their new way of life.
During this time, Indian Office reports showed nearly a 50 percent decline in the Osage population. This resulted from the failure of the US government to provide adequate medical supplies, food and clothing. The people suffered greatly during the winters. While the government failed to supply them, outlaws often smuggled whiskey to the Osage and the Pawnee.
In 1879, an Osage delegation went to Washington, DC and gained agreement to have all their annuities paid in cash; they hoped to avoid being continually shortchanged in supplies, or by being given supplies of inferior quality - spoiled food and inappropriate goods. They were the first Native American nation to gain full cash payment of annuities. They gradually began to build up their tribe again, but suffered encroachment by white outlaws, vagabonds, and thieves.
By the start of the 20th century, the federal government and progressives were continuing to press for Native American assimilation, believing this was the best policy for them. Congress passed the Curtis Act and Dawes Act, legislation requiring the dismantling of other reservations. They allotted communal lands in 160-acre portions to individual households, declaring the remainder as "surplus" and selling it to non-natives.
As the Osage owned their land, they were in a stronger position than other tribes. The Osage were unyielding in refusing to give up their lands and held up statehood for Oklahoma before signing an Allotment Act. They were forced to accept allotment, but retained their "surplus" land after allotment to households, and apportioned it to individual members. Each of the 2,228 registered Osage members in 1906 (and one non-Osage) received 657 acres, nearly four times the amount of land (usually 160 acres) that most Native American households were allotted in other places when communal lands were distributed. In addition, the tribe retained communal mineral rights to what was below the surface. As development of resources occurred, members of the tribe received royalties according to their headrights, paid according to the amount of land they held.
In 1906, the Osage Allotment Act was passed by U.S. Congress, as part of its effort to extinguish Native American tribal rights and structure, and to prepare the territories for statehood as Oklahoma. In addition to breaking up communal land, the Act replaced tribal government with the Osage National Council, to which members were to be elected to conduct the tribe's political, business, and social affairs.
Although the Osage were encouraged to become settled farmers, their land was the poorest in the Indian Territory for agricultural purposes. They survived by subsistence farming, later enhanced by stock raising. They discovered they were fortunate to have lands covered with the rich bluestem grass, which proved to be the best grazing in the entire country. They leased lands to ranchers for grazing and earned income from the resulting fees.
The Osage had learned about negotiating with the US government. Through the efforts of Principal Chief James Bigheart, in 1907 they reached a deal which enabled them to retain communal mineral rights on the reservation lands. These were later found to hold large quantities of crude oil, and tribal members benefited from royalty revenues from oil development and production. The government leased lands on their behalf for oil development, and the companies and government sent the Osage members royalties that, by the 1920s, had dramatically increased their wealth. In 1923 alone, the Osage earned $30 million in royalties. The Commissioner of the Bureau of Indian Affairs called them "the richest people in the nation."
They are the only tribe since the early 20th century within the state of Oklahoma to retain a federally recognized reservation.
In 2000 the Osage sued the federal government over its management of the trust assets, alleging that it had failed to pay tribal members appropriate royalties, and had not historically protected the land assets and appreciation. The suit was settled in 2011 for $380 million, and a commitment to make numerous changes to improve the program.
In August 2016 the Osage nation bought Ted Turner's 43,000-acre (17,000 ha) Bluestem ranch.
In 1889, the US federal government claimed to no longer recognize the legitimacy of a governing Osage National Council, which the people had created in 1881, with a constitution that adopted some aspects of that of the United States. In 1906, as part of the Osage Allotment Act, the US Congress created the Osage Tribal Council to handle affairs of the tribe. It extinguished the power of tribal governments in order to enable the admission of the Indian Territory as part of the state of Oklahoma.
Under the Act, initially each Osage male had equal voting rights to elect members of the Council, and the principal and assistant principal chiefs. Because the Osage owned their land, they negotiated under the Allotment Act to keep their communal land, above the then-common allotment which the government was making of 160 acres per person. They allocated this land as well, so that each of the 2,228 Osage members and one non-Indian on the 1906 tribal roll received 657 acres. The rights to these lands in future generations were divided among legal heirs, as were the mineral headrights to mineral lease royalties. Under the Allotment Act, only allottees and their descendants who held headrights could vote in the elections or run for office (originally restricted to males). The members voted by their headrights, which generated inequalities among the voters.
A 1992 US district court decision ruled that the Osage could vote in a process to reinstate the Osage National Council as citizens of the Osage Nation, rather than being required to vote by headright. But this decision was reversed in 1997 by a United States Court of Appeals ruling that ended the restoration of that government. In 2004 Congress passed legislation to restore sovereignty to the Osage Nation and enable them to make their own decisions about government and membership qualifications for their people.
In March 2010, the United States Court of Appeals for the Tenth Circuit held that the 1906 Allotment Act had disestablished the Osage reservation established in 1872. This ruling potentially affected the legal status of three of the seven Osage casinos, including the largest one in Tulsa, as it meant the casino was not on federal trust land. Federal Indian gaming law allows tribes to operate casinos only on trust land.
The Osage Nation's largest economic enterprise, Osage Casinos,officially opened newly constructed casinos, hotels and convenience stores in Skiatook and Ponca City in December 2013.
In the late 19th century, the Osage discovered oil on their reservation lands. This resource generated great wealth through the 1920s for people who held headrights.
In 1894 large quantities of oil were discovered to lie beneath the vast prairie owned by the tribe, land which had been unsuitable for the subsistence farming urged by the federal government. Because of his recent work in developing oil production in Kansas, Henry Foster, a petroleum developer, approached the Bureau of Indian Affairs (BIA) to request exclusive privileges to explore the Osage Reservation in Oklahoma for oil and natural gas. The BIA granted his request in 1896, with the stipulation that Foster was to pay the Osage tribe a 10% royalty on all sales of petroleum produced on the reservation.
Foster found large quantities of oil, and the Osages benefited greatly monetarily. But this discovery of "black gold" eventually led to more hardships for tribal members. In preparation for statehood, the US government pressed the Osage to accept allotment and end tribal government. Before having a vote within the tribe on the question of allotment, the Osage demanded that the government purge their tribal rolls of people who were not legally Osage. The Indian agent had been adding names of persons who were not approved by the tribe, and the Osage submitted a list of more than 400 persons to be investigated. Because the government removed few of the fraudulent people, the Osage had to share their land and oil rights with people who did not belong.
The US Congress passed the Osage Allotment Act on June 28, 1906. Because the Osage owned their land, they kept control of it all. The government made the allocation of land extremely complicated, in a way that prevented most Osage from owning contiguous parcels. This was intended to increase their incentive to sell or lease portions of land, and the takers were mostly whites.
But the Osage had negotiated keeping communal control of the mineral rights. The act stated that all persons listed on tribal rolls prior to January 1, 1906 or born before July 1907 (allottees) would be allocated a share of the reservation's subsurface natural resources, regardless of blood quantum. The headright could be inherited by legal heirs. This communal claim to mineral resources was due to expire in 1926. After that, individual landowners would control the mineral rights to their plots. This provision heightened the pressure for those whites who were eager to gain control of Osage lands before the deadline.
Although the Osage Allotment Act protected the tribe's mineral rights for two decades, any adult "of a sound mind" could sell surface land. Between 1907 and 1923, Osage individuals sold or leased thousands of acres of formerly restricted land to non-Indians. At the time, many Osage did not understand the value of such contracts, and often were taken advantage of by unscrupulous businessmen, con artists, and others trying to grab part of their wealth. Non-Native Americans also tried to cash in on the new Osage wealth by marrying into families with headrights.
Alarmed about the way the Osage were using their wealth, in 1921 the US Congress passed a law requiring any Osage of half or more Indian ancestry to be appointed a guardian until proving "competency". Minors with less than half Osage ancestry were required to have guardians appointed, even if their parents were living. This system was not administered by federal courts; rather, local courts appointed guardians from among white attorneys and businessmen. By law, the guardians provided a $4000 annual allowance to their charges, but initially the government required little record keeping of how they invested the difference. Royalties to persons holding headrights were much higher: $11,000–12,000 per year during the period 1922–1925. Guardians were permitted to collect $200–1000 per year, and the attorney involved could collect $200 per year, which was withdrawn from each Osage's income. Some attorneys served as guardians and did so for four Osage at once, allowing them to collect $4800 per year.
The guardianship program created an incentive for corruption, and many Osage were legally deprived of their land, headrights, and/or royalties. Others were murdered, in cases the police generally failed to investigate. The coroner's office colluded by falsifying death certificates, for instance claiming suicides when people had been poisoned. The Osage Allotment Act did not entitle the Native Americans to autopsies, so many deaths went unexamined.
The tribe auctioned off development rights of their mineral assets for millions of dollars. According to the Commissioner of Indian Affairs, in 1924 the total revenue of the Osage from the mineral leases was $24,670,483. After the tribe auctioned mineral leases and more land was explored, the oil business on the Osage reservation boomed. Tens of thousands of oil workers arrived, more than 30 boom towns sprang up and, nearly overnight, Osage headright holders became the "richest people in the world." When royalties peaked in 1925, annual headright earnings were $13,200. A family of four who were all on the allotment roll earned $52,800, comparable to approximately $600,000 in today's economy.
In the early 1920s there was a rise in murders and suspicious deaths of Osage, called the "Reign of Terror", and the Osage Indian Murders. In one plot, in 1921, Ernest Burkhart, a European American, married Molly Kyle, an Osage woman with headrights. His uncle William "King of Osage Hills" Hale, a powerful businessman who led the plot, and his brother Byron hired accomplices to murder Kyle family heirs. They arranged for the murders of Molly Kyle's mother, two sisters and a brother-in-law, and a cousin, in cases involving poisoning, bombing, and shooting.
With local and state officials unsuccessful at solving the murders, in 1925 the Osage requested the help of the Federal Bureau of Investigation. It was the bureau's first murder case. By the time it started investigating, Molly Kyle was already being poisoned. This was discovered and she survived. She had inherited the headrights of the rest of her family. The FBI achieved the prosecution and conviction of the principals in the Kyle family murders. From 1921 to 1925, however, an estimated 60 Osage were killed, and most murders were not solved. John Joseph Mathews, an Osage, explored the disruptive social consequences of the oil boom for the Osage Nation in his semi-autobiographical novel Sundown (1934).
As a result of the murders and increasing problems with trying to protect Osage oil wealth, in 1925 Congress passed legislation limiting inheritance of headrights only to those heirs of half or more Osage ancestry. In addition, they extended the tribal control of mineral rights for another 20 years; later legislation gave the tribe continuing communal control indefinitely. Today, headrights have been passed down primarily among descendants of the Osage who originally possessed them. But the Bureau of Indian Affairs (BIA) has estimated that 25% of headrights are owned by non-Osage people, including other American Indians, non-Indians, churches, and community organizations. It continues to pay royalties on mineral revenues on a quarterly basis.
Beginning in 1999, the Osage Nation sued the United States in the Court of Federal Claims (dockets 99-550 and 00-169) for mismanaging its trust funds and its mineral estate. The litigation eventually included claims reaching into the 19th century. In February 2011, the Court of Federal Claims awarded $330.7 million in damages in partial compensation for some of the mismanagement claims, covering the period from 1972 to 2000. On October 14, 2011, the United States settled the outstanding litigation for a total of $380 million. The tribe has about 16,000 members.
The settlement includes commitments by the United States to cooperate with the Osage to institute new procedures to protect tribal trust funds and resource management.
The Osage Tribal Council was created under the Osage Allotment Act of 1906. It consisted of a principal chief, an assistant principal chief, and eight council members. The mineral estate consists of more than natural gas and petroleum. Although these two resources have yielded the most profit, the Osage have also earned revenue from leases for the mining of lead, zinc, limestone, and coal deposits. Water may also be considered a profitable asset that is controlled by the Mineral Council.
The first elections for this council were held in 1908 on the first Monday in June. Officers were elected for a term of two years, which made it difficult for them to accomplish long-term goals. If for some reason the principal chief's office becomes vacant, a replacement is elected by the remaining council members. Later in the 20th century, the tribe increased the terms of office of council members to four years.
In 1994 by referendum, the tribe voted for a new constitution; among its provisions was the separation of the Mineral Council, or Mineral Estate, from regular tribal government. According to the constitution, only Osage members who are also headright holders can vote for the members of the Mineral Council. It is as if they were shareholders of a corporation.
The Osage wrote a constitution in 1881, modeling some parts of it after the United States Constitution.
The Osage Allotment Act of 1906, mentioned in more detail under the previous section Natural Resources and Headrights, provided for election of a principal chief, assistant principal chief and an eight-member tribal council as the recognized governing body of the Osage Tribe. Each allottee received 657 acres (2.66 km²) of surface rights and mineral rights were reserved to the Osage Tribe. Only allottees and their descendants with headrights, considered shareholders, could vote or run for office in the tribe. Over generations, headrights and votes became highly fractionated.
By a new constitution of 1994, the Osage voted that original allottees and their direct descendants, regardless of blood quantum, were citizen members of the Nation. Due to court challenges, this constitution was overruled. At the time of allotment, the Osage had challenged some of the allottees listed by the Bureau of Indian Affairs, but the BIA had never cleaned up their records according to the tribal position. Later this challenge was brought up again.
The Osage appealed to Congress for support to create their own government and membership rules. In 2004, President George W. Bush signed Public Law 108-431, "An Act to Reaffirm the Inherent Sovereign Rights of the Osage Tribe to Determine Its Membership and Form a Government." From 2004 to 2006, the Osage Government Reform Commission formed and worked to develop a new government. During that process, "sharply differing visions arose of the new government's goals, the Nation's own history, and what it means to be Osage. The primary debates were focused on biology, culture, natural resources, and sovereignty."
The Reform Commission held weekly meetings to develop a referendum that Osage members could vote upon in order to develop and reshape the Osage Nation government and its policies. On March 11, 2006, the people ratified the Constitution in a second referendum vote. Its major provision was to provide "one man, one vote" to each citizen of the nation. Previously, based on the allotment process, persons voted proportionally as shareholders. By a 2/3 majority vote, the Osage Nation adopted the new constitutional form of government. It also ratified the definition of membership in the Nation.
Today, the Osage Nation has 13,307 enrolled tribal members, with 6,747 living within the state of Oklahoma. Since 2006, it has defined membership based on a person's lineal descent from a member listed on the Osage Rolls at the time of the Osage Allotment Act of 1906. A minimum blood quantum is not required. But, as the Bureau of Indian Affairs restricts federal education scholarships to persons who have 25% or more blood quantum in one tribe, the Osage Nation tries to support higher education for its students who do not meet that requirement.
The tribal government is headquartered in Pawhuska, Oklahoma and has jurisdiction in Osage County, Oklahoma. The current governing body of the Osage Nation contains three separate branches: an executive, a judicial, and a legislative. These three branches parallel the United States government in many ways.
The tribe operates a monthly newspaper, Osage News. The Osage Nation has an official website and uses a variety of communication media and technology.
The judicial branch maintains courts to interpret the laws of the Osage Nation. It has the power to adjudicate civil and criminal matters, resolve disputes, and exercise judicial review. The highest court is the Supreme Court, which has a Chief Justice, currently Meredith Drent, who replaced former Chief Justice Charles Lohah. There is also a lower Trial Court and other inferior courts as allowed by the tribal constitution.
The executive branch is headed by a Principal Chief, followed by an Assistant Principal Chief. The current Principal Chief is Geoffrey Standing Bear, and Raymond Red Corn is the Assistant Principal Chief, who were both sworn in on July 2, 2014. Administrative offices also fall under this executive branch.
The legislative branch consists of a Congress that works to create and maintain Osage laws. In addition to this role, its mission is to preserve the checks and balances within the Osage government, carry out oversight responsibilities, support tribal revenues, and preserve and protect the nation's environment. This Congress is made up of twelve individuals who are elected by the Osage constituency and serve four-year terms. They hold two regular Congressional sessions and are headquartered in Pawhuska.
The Osage Nation issues its own tribal vehicle tags and operates its own housing authority. The tribe owns a truck stop, a gas station, and ten smoke shops. In the 21st century, it opened its first gaming casino and, as of December 2013, has seven casinos. Casinos are located in Tulsa, Sand Springs, Bartlesville, Skiatook, Ponca City, Hominy and Pawhuska. The tribe's annual economic impact in 2010 was estimated to be $222 million. Osage Million Dollar Elm, the casino management company, is encouraging employees in education, paying for certificate classes related to their business, as well as for classes leading to BA and master's business degrees.
Located in Pawhuska, Oklahoma, the Osage Nation Museum provides interpretations and displays of Osage history, art, and culture. The continuously changing exhibits convey the story of the Osage people throughout history and celebrate Osage culture today. Highlights include an extensive photograph collection, historical artifacts, and traditional and contemporary art.
Founded in 1938, the ONM is the oldest tribally owned museum in the United States. Historian Louis F. Burns donated much of his extensive personal collection of artifacts and documents to the museum.
The Missouria or Missouri are a Native American tribe that originated in the Great Lakes region of United States before European contact. The tribe belongs to the Chiwere division of the Siouan language family, together with the Iowa and Otoe.
As general terms, Indian Territory, the Indian Territories, or Indian country describe an evolving land area set aside by the United States Government for the relocation of Native Americans who held aboriginal title to their land. In general, the tribes ceded land they occupied in exchange for land grants in the territory acquired by the United States in 1803. The concept of an Indian Territory was an outcome of the 18th- and 19th-century policy of Indian removal. After the Civil War (1861–1865), the policy of the government was one of assimilation.
Osage County is the largest county by area in the U.S. state of Oklahoma. Created in 1907 when Oklahoma was admitted as a state, the county is named for and is home to the federally recognized Osage Nation. The county is coextensive with the Osage Nation Reservation, established by treaty in the 19th century when the Osage relocated there from Kansas. The county seat is in Pawhuska, Oklahoma, one of the first three towns established in the county. The total population of the county is 47,987.
The Iowa or Ioway, known as the Báxoǰe in their own language, are a Native American Siouan people. Today, they are enrolled in either of two federally recognized tribes, the Iowa Tribe of Oklahoma and the Iowa Tribe of Kansas and Nebraska.
The Peoria are a Native American people. Today they are enrolled in the federally recognized Peoria Tribe of Indians of Oklahoma. Historically, they were part of the Illinois Confederation.
The Oklahoma Indian Welfare Act of 1936 is a United States federal law that extended the 1934 Wheeler-Howard or Indian Reorganization Act to include those tribes within the boundaries of the state of Oklahoma. The purpose of these acts was to rebuild Indian tribal societies, return land to the tribes, enable tribes to rebuild their governments, and emphasize Native culture. These Acts were developed by John Collier, Commissioner of Indian Affairs from 1933 to 1945, who wanted to change federal Indian policy from the "twin evils" of allotment and assimilation, and support Indian self-government.
The Wyandotte Nation is a federally recognized Native American tribe in Oklahoma. They are descendants of the Wendat Confederacy and Native Americans with territory near Georgian Bay and Lake Huron. Under pressure from Iroquois and other tribes, then from European settlers and the United States government, the tribe gradually moved south and west to Ohio, Michigan, Kansas and finally Oklahoma in the United States.
The Kickapoo Tribe of Oklahoma is one of three federally recognized Kickapoo tribes in the United States. There are also Kickapoo tribes in Kansas, Texas, and Mexico. The Kickapoo are a Woodland tribe, who speak an Algonquian language. They are affiliated with the Kickapoo Traditional Tribe of Texas, the Kickapoo Tribe in Kansas, and the Mexican Kickapoo.
The Shawnee Tribe is a federally recognized Native American tribe in Oklahoma. Also known as the Loyal Shawnee, they are one of three federally recognized Shawnee tribes. The others are the Absentee-Shawnee Tribe of Indians of Oklahoma and Eastern Shawnee Tribe of Oklahoma.
The Otoe–Missouria Tribe of Indians is a single, federally recognized tribe, located in Oklahoma. The tribe is made up of Otoe and Missouria Indians. Traditionally they spoke the Chiwere language, part of the Siouan language family.
The Ponca Tribe of Indians of Oklahoma, also known as the Ponca Nation, is one of two federally recognized tribes of Ponca people. The other is the Ponca Tribe of Nebraska. Traditionally, peoples of both tribes have spoken the Omaha-Ponca language, part of the Siouan language family.
A Half-Breed Tract was a segment of land designated in the western states by the United States government in the 19th century specifically for Métis of American Indian and European or European-American ancestry, at the time commonly known as half-breeds. The government set aside such tracts in several parts of the Midwestern prairie region, including in Iowa Territory, Nebraska Territory, Kansas Territory, Minnesota Territory, and Wisconsin Territory.
The Iowa Reservation of the Iowa Tribe of Kansas and Nebraska straddles the borders of southeast Richardson County in southeastern Nebraska and Brown and Doniphan Counties in northeastern Kansas. Tribal headquarters are west of White Cloud, Kansas. The reservation was defined in a treaty from March 1861. Today the tribe operates Casino White Cloud on the reservation.
The Cherokee Commission was a three-person bipartisan body created by President Benjamin Harrison to operate under the direction of the Secretary of the Interior, as empowered by Section 14 of the Indian Appropriations Act of March 2, 1889. Section 15 of the same Act empowered the President to open land for settlement. The Commission's purpose was to legally acquire land occupied by the Cherokee Nation and other tribes in the Oklahoma Territory for non-indigenous homestead acreage.
An Organic Act is a generic name for a statute used by the United States Congress to describe a territory, in anticipation of being admitted to the Union as a state. Because of Oklahoma's unique history, an explanation of the Oklahoma Organic Act needs a historic perspective. In general, the Oklahoma Organic Act may be viewed as one of a series of legislative acts, from the time of Reconstruction, enacted by Congress in preparation for the creation of a unified State of Oklahoma. The Organic Act created Oklahoma Territory and Indian Territory as organized incorporated territories of the United States out of the old "unorganized" Indian Territory. The Oklahoma Organic Act was one of several acts whose intent was the assimilation of the tribes in Oklahoma and Indian Territories through the elimination of tribal reservations and the elimination of the tribes' communal ownership of property.
On the eve of the American Civil War in 1861, a significant number of Indigenous peoples of the Americas had been relocated from the Southeastern United States to Indian Territory, west of the Mississippi. The inhabitants of the eastern part of the Indian Territory, the Five Civilized Tribes, were suzerain nations with established tribal governments, well established cultures, and legal systems that allowed for slavery. Before European contact, these tribes were generally matriarchal societies, with agriculture being the primary economic pursuit. The bulk of the tribes lived in towns with planned streets, residential and public areas. The people were ruled by complex hereditary chiefdoms of varying size and complexity with high levels of military organization.
United States v. Ramsey, 271 U.S. 467 (1926), was a U.S. Supreme Court case in which the Court held that the government had the authority to prosecute crimes against Native Americans (Indians) on reservation land that was still designated Indian Country by federal law. The Osage Indian Tribe held mineral rights that were worth millions of dollars. A white rancher, William K. Hale, devised a plot to kill tribal members to allow his nephew, who was married to a tribal member, to inherit the mineral rights. The tribe requested the assistance of the federal government, which sent Bureau of Investigation agents to solve the murders. Hale and several others were arrested and tried for the murders, but they claimed that the federal government did not have jurisdiction. The district court quashed the indictments, but on appeal, the Supreme Court reversed, holding that the Osage lands were Indian Country and that the federal government therefore had jurisdiction. This put an end to the Osage Indian murders.
The Kickapoo Tribe of Indians of the Kickapoo Reservation in Kansas is one of three federally recognized tribes of Kickapoo people. The other Kickapoo tribes in the United States are the Kickapoo Traditional Tribe of Texas and the Kickapoo Tribe of Oklahoma. The Tribu Kikapú are a distinct subgroup of the Oklahoma Kickapoo and reside on a hacienda near Múzquiz, Coahuila, Mexico; they also have a small band located in the Mexican states of Sonora and Durango.
A recent study suggests that ocean acidification may affect coral less than was originally thought, reports Steph Yin for The New York Times. Coral relies on a process called calcification, the creation of a thin layer of skeletal protection each day, to survive. Common belief is that ocean acidification inhibits this process, which is why coral reefs have suffered so much in recent years. As oceans acidify, coral reefs struggle to undergo calcification. But a new study suggests that calcification is still possible in the face of lowering pH levels.
The study has been controversial, with many scientists disagreeing with the new hypothesis. But the process is still relatively unknown, despite extensive research. Both sides of the argument agree, though, that warming oceans are the major concern for coral reefs, which bleach when water temperatures become unlivable. Whether or not ocean acidification affects coral, warming oceans do, and both can be traced to carbon dioxide emissions.
The Demodex mite is a type of parasite that lives on humans and can reside in hair follicles and sebaceous glands. These mites are arachnids (eight-legged) and invisible to the naked eye, varying in size from 0.1 mm to 0.4 mm long. They typically live on the face and in the hair follicles of the eyebrows, eyelids, roots of the eyelashes, facial hair, and around the ears and are associated with various skin problems of the eyes and face, such as blepharitis and acne rosacea.
Demodex can affect humans at any age, but their presence increases in prevalence with increasing age. Immunocompromised patients, such as diabetics, patients on long-term corticosteroids or chemotherapy, or patients who have HIV/AIDS, also have increased risk and prevalence of Demodex infection. When the immune system is weakened and the parasite population has become established, the infestation can badly damage the skin.
Transmission of mites from one person to another requires direct contact with hair or the sebaceous glands of the nose, or exposure to dust containing eggs. Because the disease process begins only when there is an overpopulation of Demodex, the vast majority of infestations go unnoticed and cause no adverse symptoms. However, in certain cases, the mite populations migrate and multiply in the eyelashes.
There are two types of Demodex mites: the longer kind, Demodex folliculorum, which lives in the hair follicles, and the shorter kind, Demodex brevis, which lives in the sebaceous (oil) glands of the skin.
In the early stages, there are often no noticeable symptoms, but if left untreated Demodex can progress. Symptoms vary among patients and may include dry eye, red eyes, severe itching along the eyelid margin and eyebrow, especially in the morning, eyelid irritation, burning sensation, foreign body sensation that seems to originate beneath the eyelids, heavy lid, and blurry vision. One of the earliest signs of mite infestation is cylindrical dandruff (CD), which is the accumulation of fine, waxy, dry debris that collects at the base of the lash and extends up to 2 mm along the length of the lashes and is most noticeable on the upper lashes.
Demodex mites can be diagnosed by a slit-lamp evaluation or by carefully removing and viewing an epilated eyelash under the microscope.
Initial treatment involves an in-office lid scrub/débridement which starts with a drop or two of long-lasting anesthetic being instilled. The lashes and eyebrows are then thoroughly débrided. Next, an antibiotic/steroid ointment is applied to help keep the mites from moving and also possibly suffocate them. The steroid also helps in calming down the inflammation secondary to the chemical and mechanical irritation of the in-office treatment, in addition to suppressing any possible inflammatory cascade associated with the decaying mites. The patient should return in 2 weeks and repeat the in-office treatment.
Patients diagnosed with Demodex should follow a few simple instructions:
Immediately wash bedding and pillowcases in hot water and dry in a heated dryer before beginning treatment, and once a week thereafter.
Wash face, nostrils, hair, external ear and neck with a non-soap cleanser twice daily.
Scrub the eyelids with a mild (baby) shampoo.
Avoid using makeup for at least 1 week and discard all old makeup.
Avoid oil-based cleansers, greasy makeup, lotions, and sunscreens which can provide further "food" for the mites.
Exfoliate face once or twice a week to remove dead skin cells and trapped sebum. Keep pets away from sleeping surfaces.
With the proper medical care, treatment, and hygiene, the Demodex count usually drops to zero in 4-6 weeks without recurrence in the majority of cases. Patients receiving therapy show dramatic improvements in symptoms, eye inflammation, tear film stability and vision.
In this topic you are going to learn how to integrate any rational function (a ratio of polynomials) by expressing it as a sum of simpler fractions, called partial fractions, which are easy to integrate. The process of taking a rational expression and decomposing it into simpler rational expressions that we can add or subtract to get the original rational expression is called partial fraction decomposition. It is an important algebraic technique because many integrals involving rational expressions can be evaluated if we first apply partial fractions to the integrand.
This method is based on the simple concept of adding fractions by getting a common denominator.
The Partial Fractions decomposition for ¾ is
This concept can also be used with functions of x. For example,
so that we can now say that a partial fractions decomposition for is
Integrating the function:
METHOD: In general, consider a rational function of the form f(x) = P(x)/Q(x),
where P and Q are polynomials. It is possible to express f as a sum of simpler fractions provided that the degree of P is less than the degree of Q. Such a rational function is called proper. If f is improper, then we must take the preliminary step of dividing Q into P (by long division) until a remainder R(x) is obtained such that deg(R) < deg(Q). The division statement is f(x) = P(x)/Q(x) = S(x) + R(x)/Q(x),
where S and R are also polynomials. Sometimes this preliminary step is all that is required to get the integral.
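As a quick sanity check on this workflow, a computer algebra system can carry out both the decomposition and the term-by-term integration. The sketch below uses SymPy on a hypothetical rational function (not one of the examples that follow):

```python
# Minimal sketch of the general method: decompose a proper rational function
# into partial fractions, then integrate term by term.
# The function below is a hypothetical illustration, not an example from the text.
from sympy import symbols, apart, integrate

x = symbols('x')
f = (4*x + 3) / ((x + 1) * (x + 2))

decomposition = apart(f, x)        # 5/(x + 2) - 1/(x + 1)
antiderivative = integrate(f, x)   # 5*log(x + 2) - log(x + 1)

print(decomposition)
print(antiderivative)
```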
Example 1. Evaluate
SOLUTION: Since the degree of the numerator is greater than the degree of the denominator, we first perform the long division. This enables us to write
The next step is to factor the denominator Q(x) as far as possible. It can be shown that any polynomial Q can be factored as a product of linear factors (of the form ax + b) and irreducible quadratic factors (of the form ax² + bx + c, where b² – 4ac < 0). For instance,
- x² – 1 = (x – 1)(x + 1)
- 10x² – 11x – 6 = (2x – 3)(5x + 2)
- x³ + 1 = (x + 1)(x² – x + 1)
- x³ + 5 = (x + 5^(1/3))(x² – 5^(1/3)x + 25^(1/3))
- x³ + x² + x + 1 = x²(x + 1) + (x + 1) = (x + 1)(x² + 1)
- 3x³ + 14x² + 7x – 4 = 3x²(x + 1) + 11x(x + 1) – 4(x + 1) = (x + 1)(3x – 1)(x + 4)
- x⁴ – 16 = (x²)² – 4² = (x² – 4)(x² + 4) = (x – 2)(x + 2)(x² + 4)
- x⁴ + 16 = x⁴ + 8x² + 16 – 8x² = (x² + 4)² – 8x² = (x² + 2√2x + 4)(x² – 2√2x + 4)
- x⁵ + 1 = (x + 1)(x⁴ – x³ + x² – x + 1) = ¼(x + 1)(2x² – x + √5x + 2)(2x² – x – √5x + 2)
- x⁶ – 3x⁵ + 10x³ – 15x² + 9x – 2 = (x + 2)(x – 1)⁵
- x⁹ + 6x⁸ + 21x⁷ + 51x⁶ + 81x⁵ + 87x⁴ + 32x³ – 63x² – 108x – 108 = (x – 1)(x + 2)²(x² + x + 3)³
- x² + 1, x² + 2x + 5, etc., are irreducible quadratic factors
The third step is to express the proper rational function R(x)/Q(x) as a sum of partial fractions of the form A/(ax + b)^i or (Ax + B)/(ax² + bx + c)^j.
A theorem in algebra guarantees that it is always possible to do this. There are four possible cases:
Case I: The denominator Q(x) is a product of distinct linear factors.
This means that we can write Q(x) = (a1x + b1)(a2x + b2) ··· (akx + bk),
where no factor is repeated (and no factor is a constant multiple of another). In this case the partial fraction theorem states that there exist constants A1, A2, . . . , Ak such that R(x)/Q(x) = A1/(a1x + b1) + A2/(a2x + b2) + ··· + Ak/(akx + bk).
Example 2: Consider ∫ (x² + 2x – 1)/(2x³ + 3x² – 2x) dx.
SOLUTION: Since the degree of the numerator is less than the degree of the denominator,
we don't need to divide. We factor the denominator as 2x³ + 3x² – 2x = x(2x – 1)(x + 2).
As you can see, the denominator has three distinct linear factors; therefore, the partial fraction decomposition of the integrand has the form (x² + 2x – 1)/(x(2x – 1)(x + 2)) = A/x + B/(2x – 1) + C/(x + 2).
To determine the values of A, B, and C, we multiply both sides of this equation by the product of the denominators, x(2x – 1)(x + 2), obtaining x² + 2x – 1 = A(2x – 1)(x + 2) + Bx(x + 2) + Cx(2x – 1).
Expanding the right side of this equation and writing it in the standard form for polynomials, we get x² + 2x – 1 = (2A + B + 2C)x² + (3A + 2B – C)x – 2A.
The polynomials in this equation are identical, so their coefficients must be equal. The coefficient of x² on the right side, 2A + B + 2C, must be equal to the coefficient of x² on the left side, which is 1. Likewise, the coefficients of x are equal and the constant terms are equal. This gives the following system of equations for A, B, and C:
2A + B + 2C = 1
3A + 2B – C = 2
-2A = -1
Solving, we get A = ½, B = ⅕, C = –1/10. We then integrate the partial fractions:
At this point, you should already be able to do simple integrations mentally. In integrating the middle term you can make the mental substitution u = 2x – 1, which gives du = 2 dx and dx = ½ du.
Note: Another way of finding the coefficients A, B, and C is by substituting convenient values of x that eliminate terms of the equation:
Let x = 0: the second and third terms are eliminated, and the equation becomes –1 = –2A; therefore A = 1/2.
Let x = 1/2: the first and third terms are eliminated, so 1/4 = (5/4)B; then B = 1/5.
Let x = –2: the first and second terms are eliminated, so –1 = 10C; we get C = –1/10.
Important: After getting the coefficients A, B, and C, substitute them and proceed to integrate the partial fractions. As you can see, this method of getting the coefficients is much simpler and easier than the previous one, but which method you use is up to you.
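For readers who want to verify either route numerically, the sketch below solves the printed system of equations with SymPy and recombines the resulting partial fractions. The integrand named in the comments is reconstructed from those equations (the original display is not reproduced here), so treat it as an assumption:

```python
# Sketch: solve the linear system from Example 2 for A, B, C, then verify that
# A/x + B/(2x - 1) + C/(x + 2) recombines into a single rational function.
# The implied integrand (x**2 + 2*x - 1)/(x*(2*x - 1)*(x + 2)) is a
# reconstruction and therefore an assumption.
from sympy import symbols, Eq, solve, together, simplify

x, A, B, C = symbols('x A B C')

system = [Eq(2*A + B + 2*C, 1),   # coefficients of x**2
          Eq(3*A + 2*B - C, 2),   # coefficients of x
          Eq(-2*A, -1)]           # constant terms
coeffs = solve(system, [A, B, C])
print(coeffs)                      # {A: 1/2, B: 1/5, C: -1/10}

decomposition = coeffs[A]/x + coeffs[B]/(2*x - 1) + coeffs[C]/(x + 2)
print(simplify(together(decomposition)))   # (x**2 + 2*x - 1)/(x*(2*x - 1)*(x + 2)), up to expansion
```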
Case II: The denominator Q(x) is a product of linear factors, some of which are repeated.
This means that we can write
Instead of the single term A1/(a1x + b1) in Case I, we would use A1/(a1x + b1) + A2/(a1x + b1)² + ··· + Ar/(a1x + b1)^r.
Example 3: Find ∫ (3x² – x + 4)/(x – 1)³ dx.
Solution: The partial fraction decomposition is (3x² – x + 4)/(x – 1)³ = A/(x – 1) + B/(x – 1)² + C/(x – 1)³.
Putting the right side over the common denominator (x – 1)³, you have 3x² – x + 4 = A(x – 1)² + B(x – 1) + C.
Equating numerators (I am going to use the second method first, then the first method):
Let x = 1: the first and second terms are eliminated, and the equation becomes 3 – 1 + 4 = C; thus C = 6.
Equating coefficients of x²: 3 = A
Equating coefficients of x: -1 = -2A + B ; B = -1 + 2(3) ; thus B = 5
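A quick SymPy check of this repeated-factor case; the integrand in the sketch is reconstructed from the coefficient equations above (leading coefficient 3, x-coefficient –1, value 6 at x = 1), so treat it as an assumption:

```python
# Sketch for Case II (repeated linear factor). The integrand is reconstructed
# from the coefficient equations of Example 3, so treat it as an assumption.
from sympy import symbols, apart, integrate

x = symbols('x')
f = (3*x**2 - x + 4) / (x - 1)**3

print(apart(f, x))
# 3/(x - 1) + 5/(x - 1)**2 + 6/(x - 1)**3
print(integrate(f, x))
# equivalent to 3*log(x - 1) - 5/(x - 1) - 3/(x - 1)**2 (algebraic form may differ)
```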
Case III: The denominator Q(x) contains irreducible quadratic factors, none of which is repeated.
For example: x² + 1; x(2x – 3)(x² + x + 1); x²(x² + 4)(x² + 3x + 8); etc.
In this case, if Q(x) has the factor ax² + bx + c, where b² – 4ac < 0, then, in addition to the partial fractions in Case I and Case II, the expression for R(x)/Q(x) will have a term of the form (Ax + B)/(ax² + bx + c).
Example 4: Evaluate ∫ (2x² – x + 4)/(x³ + 4x) dx.
SOLUTION: Factoring the denominator, x³ + 4x = x(x² + 4); thus, the partial fraction decomposition is (2x² – x + 4)/(x(x² + 4)) = A/x + (Bx + C)/(x² + 4).
Multiplying by x(x² + 4), we have 2x² – x + 4 = A(x² + 4) + (Bx + C)x
= (A + B)x² + Cx + 4A
Equating these coefficients
Coefficients of x²: 2 = A + B
Coefficients of x: –1 = C
Coefficients of x⁰ (constant terms): 4 = 4A
Then, from the third equation, A = 1; from the first equation, using A = 1, we get B = 1; and from the second equation we have C = –1.
Important: To integrate the second term, split it into two parts. Make the u-substitution u = x² + 4 for the second integral, so that du = 2x dx. Evaluate the third integral using the inverse tangent form (see the Trigonometric Table/Formulas).
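The decomposition and the three integrals can be checked with SymPy; the numerator 2x² – x + 4 is inferred from the expansion (A + B)x² + Cx + 4A with the solved coefficients, so it is a reconstruction:

```python
# Sketch for Case III (one distinct irreducible quadratic factor). The numerator
# 2*x**2 - x + 4 is reconstructed from the coefficient equations of Example 4.
from sympy import symbols, apart, integrate

x = symbols('x')
f = (2*x**2 - x + 4) / (x**3 + 4*x)

print(apart(f, x))
# (x - 1)/(x**2 + 4) + 1/x
print(integrate(f, x))
# log(x) + log(x**2 + 4)/2 - atan(x/2)/2
```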
Case IV: The denominator Q(x) contains a repeated irreducible quadratic factor.
For example: (x² + x + 2)²; (2x + 7)(x² + 4)³; x(x – 2)(x + 1)²(2x² + 3)(x² + 1)³(x² + 4x + 5)⁴; etc.
In this case, if Q(x) has the factor (ax² + bx + c)^r, where b² – 4ac < 0, then, instead of the single partial fraction in Case III, the sum (A1x + B1)/(ax² + bx + c) + (A2x + B2)/(ax² + bx + c)² + ··· + (Arx + Br)/(ax² + bx + c)^r
occurs in the partial fraction decomposition of R(x)/Q(x). Each of the terms in the decomposition can be integrated by using a substitution or by first completing the square if necessary.
SOLUTION: The form of the partial fraction decomposition is A/x + (Bx + C)/(x² + 1) + (Dx + E)/(x² + 1)². Multiplying both sides by the common denominator x(x² + 1)², the numerator of the original integrand equals A(x² + 1)² + (Bx + C)x(x² + 1) + (Dx + E)x
= A(x⁴ + 2x² + 1) + B(x⁴ + x²) + C(x³ + x) + Dx² + Ex
= (A + B)x⁴ + Cx³ + (2A + B + D)x² + (C + E)x + A
Equating the coefficients,
Coefficients of x⁴: 0 = A + B
Coefficients of x³: C = –1
Coefficients of x²: 2 = 2A + B + D
Coefficients of x¹: –1 = C + E
Coefficients of x⁰: 1 = A
Then, solving, we get A = 1, B = –1, C = –1, D = 1, and E = 0.
Important: Again, you will notice that I split the second integral. Try some practice exercises to familiarize yourself with evaluating integrals using this method: Integration by Partial Fractions – Set 1 Problems.
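As a final check on Case IV, the sketch below verifies the decomposition and the integral with SymPy; the integrand is reconstructed from the expanded numerator and the solved coefficients above (A = 1, B = –1, C = –1, D = 1, E = 0), so treat it as an assumption:

```python
# Sketch for Case IV (repeated irreducible quadratic factor). The integrand is
# reconstructed from the expanded numerator and the solved coefficients, so
# treat it as an assumption.
from sympy import symbols, apart, integrate

x = symbols('x')
f = (1 - x + 2*x**2 - x**3) / (x * (x**2 + 1)**2)

print(apart(f, x))
# 1/x - (x + 1)/(x**2 + 1) + x/(x**2 + 1)**2
print(integrate(f, x))
# equivalent to log(x) - log(x**2 + 1)/2 - atan(x) - 1/(2*(x**2 + 1))
```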
credit: D. A. Kouba (UC Davis), Kiryl Tsishchanka, James Stewart ©2013 www.PinoyBIX.com
ADD/ADHD – Attention Deficit Disorder and Attention Deficit Hyperactivity Disorder
But you're so calm – you can't possibly have ADHD! While many people affected by ADD/ADHD exhibit symptoms of hyperactivity and lack of focus, not all those who suffer from the condition present these same difficulties. Attention Deficit Disorder (ADD) and Attention Deficit Hyperactivity Disorder (ADHD) are terms used to refer to the same mental health condition; professionals tend to use ADHD, as ADD is the older, outdated appellation.
While sufferers may exhibit a range of symptoms and behaviours, the condition is typically characterized by inattention, hyperactivity and/or impulsivity. We most frequently hear of ADHD in children; however, adults can also be affected by the disorder, with many sufferers not diagnosed until adulthood. Males also tend to present the classic symptoms and are therefore more frequently diagnosed with ADHD than females.
Sufferers with inattentive symptoms may miss important details or make careless mistakes at work, home, or school. They have trouble maintaining sustained attention, are forgetful, and may have organization problems, often losing important items. They generally avoid tasks that require prolonged attention and effort. Those with the more stereotypical hyperactivity or impulsivity symptoms tend to fidget frequently, having difficulty sitting for long periods. They talk excessively and interrupt frequently, generally answering a question before it has been fully asked. It is difficult for them to complete any task calmly or quietly.
We do not yet fully understand all the causes of ADHD – experts believe that there could be a genetic factor; however, environmental factors may also contribute to the onset of symptoms. There are high rates of correlation with exposure to toxins in utero or at a young age; low birth weight; brain injuries; and maternal drug, alcohol or tobacco use. While people with ADHD frequently come from lower-income families where there is a history of abuse, this does not seem to play a major role in the development of the disorder. Researchers have discovered several differences between the brains of healthy individuals and those with ADHD – people with ADHD typically have significantly lower levels of dopamine. Dopamine is an important neurotransmitter, a chemical released by neurons (nerve cells) to send signals to other nerve cells; it helps begin physical movement, and is critical to other nervous system functions such as pleasure, attention, mood, and motivation.
There are several treatments for ADD/ADHD – both behavioural and medical. Behaviour modification or behavioural therapy is a psychological practice that helps a person with ADHD develop practices and coping mechanisms for their daily life, and for handling school, work, or family life. Often, couples therapy or family therapy is recommended in order to create a supportive environment for the patient. Regular exercise and sleep routines are also crucial for managing the symptoms of ADHD; a healthy diet with enough protein, complex carbohydrates, zinc, iron and magnesium is also important. Studies also show that omega-3 fatty acids improve mental focus in people with ADHD. Medications with stimulant properties such as Ritalin and Adderall help stimulate the central nervous system; there are also non-stimulant medications that can be effective at treating ADHD. Experts agree that most people with ADHD thrive when the proper balance of treatments is found.
Programs like Valiant Behavioral Health are the ideal place to find that balance, and treat your ADHD.
Call today to speak with one of our staff: 1-855-795-7380. Follow Valiant Recovery Inc and Valiant Behavioural Health.
The iris is a part of the eye: the circular, pigmented membrane that gives the eye its colour. The opening in the centre of the iris is known as the pupil of the eye.
The iris adjusts the amount of light entering the eye to achieve maximum clarity of vision. Its muscle fibres contract or relax in order to decrease or increase the size of the pupil and so adjust the light.
When the iris becomes inflamed due to one of the following causes, the condition is known as iritis.
Iritis is part of a broader disease category known as uveitis. Uveitis affects about 1 in 4,500 people, mostly in the age group of 20 to 60 years, and it affects men and women equally. Uveitis results in blindness in 10% to 15% of affected people in the USA.
Blunt trauma to the eye can cause injury to the iris resulting in traumatic iritis.
Associated with autoimmune disease:
Many times this condition is associated with autoimmune diseases such as psoriasis, spondylitis, Reiter syndrome, and sarcoidosis.
It can also be caused by certain infectious diseases such as Lyme disease, syphilis, herpes, and tuberculosis.
It can also occur as a side effect of certain drugs.
Patients may present with the following symptoms:
- Pain in the eye
- Burning sensation in the eye
- Floaters in the eye
- Worsening of the pain after exposure to bright light (photophobia)
- Redness in the eyes especially near the iris
- Reduced size of the pupil
- Distorted shape of the pupil
- Diminution of the vision
- Watering of the eye
- Headache
The diagnosis is confirmed by slit-lamp examination of the eye. A slit lamp is a specialised type of microscope developed for the examination of the eyes. In a case of iritis, flare is seen on slit-lamp examination.
Other tests, such as rheumatoid factor, HLA testing, and chest X-ray, may be useful in diagnosing concurrent conditions.
It is very important to seek medical treatment by visiting an ophthalmologist.
The following prescription-only treatment modalities are prescribed by the ophthalmologist to treat iritis:
They are given either as a topical treatment, such as drops, or as oral therapy. They help in controlling the inflammatory symptoms.
Mild analgesics are given by the doctor to reduce the pain and inflammation.
These medications are given if the iritis is caused by viral or bacterial infections.
Wearing dark sunglasses:
Wearing UV-protective sunglasses will reduce glare and expose the affected eye to less bright sunlight, which will help in reducing the symptoms.
The condition should not be ignored, or it may result in the following serious complications:
- Macular oedema
- Permanent loss of vision
Surface radiative fluxes have been derived with the objective of supplementing top-of-atmosphere (TOA) radiative fluxes being measured under NASA’s Clouds and the Earth’s Radiant Energy System (CERES) project. This has been accomplished by using combinations of CERES TOA measurements, parameterized radiative transfer algorithms, and high-quality meteorological datasets available from reanalysis projects. Current CERES footprint-level products include surface fluxes derived from two shortwave (SW) and three longwave (LW) algorithms designated as SW models A and B and LW models A, B, and C. The SW and LW models A work for clear conditions only; the other models work for both clear and cloudy conditions. The current CERES Edition-4A computed surface fluxes from all models are validated against ground-based flux measurements from high-quality surface networks like the Baseline Surface Radiation Network and NOAA’s Surface Radiation Budget Network (SURFRAD). Validation results as systematic and random errors are provided for all models, separately for five different surface types and combined for all surface types as tables and scatterplots. Validation of surface fluxes is now a part of CERES processing and is used to continually improve the above algorithms. Since both models B work for clear and cloudy conditions alike and meet the accuracy requirement, their results are considered to be the most reliable and most likely to be retained for future work. Both models A have limited use given that they work for clear skies only. Models B will continue to undergo further improvement as more validation results become available.
Changes in the radiation budget of the Earth–atmosphere system, both at the surface and at the top of the atmosphere (TOA), are the first indicators of the perturbation of climate caused by human-induced changes in greenhouse gases, aerosols, and other environmental factors (Wild and Roeckner 2006). Estimates of the components of radiation budget at the surface and at the TOA were identified by the Global Climate Observing System (GCOS) as essential climate variables (GCOS 2003). The NASA Clouds and the Earth’s Radiant Energy System (CERES) is a satellite project designed for deriving accurate estimates of those Earth radiation budget parameters for use in investigations of the climate system and cloud–radiation interactions (Loeb et al. 2018; Wielicki et al. 1996). Shortwave (SW) and longwave (LW) radiative fluxes at the TOA are derived from corresponding radiances directly measured by CERES instruments. Surface radiative fluxes, which, along with fluxes of latent and sensible heat, constitute the surface energy budget, are not directly measurable from satellites, although they are just as essential for climate system studies as TOA fluxes (Kato et al. 2013; Wild et al. 1995). In view of such a need, combinations of TOA measurements, radiative transfer models, and high-quality meteorological datasets available from reanalysis projects have been used for developing algorithms for deriving surface radiative fluxes (Rose et al. 2013; Kato et al. 2018). Two SW (Li et al. 1993a,b; Gupta et al. 2001) and three LW (Inamdar and Ramanathan 1997; Gupta et al. 1992; Zhou et al. 2007) algorithms from among these are currently being used within the surface-only flux algorithms (SOFA) segment of CERES processing and are respectively labeled as SW models A and B and LW models A, B, and C. These algorithms produce two SW and three LW flux estimates on an instantaneous-footprint basis for each scanning instrument and are all provided in the CERES Single Scanner Footprint (SSF) product. Validation of these fluxes against ground-based flux measurements and intercomparison among products of different algorithms is helpful in assessing their strengths and weaknesses and identifying the most robust among them. The purpose for the use of several algorithms at the same time and the extensive validation is to identify the best algorithms among them.
The CERES project has launched seven instruments on five separate low-Earth-orbiting satellites starting with the preflight model (PFM) on the Tropical Rainfall Measuring Mission (TRMM) in November 1997, followed by flight models 1 and 2 (FM1 and FM2) on Terra in December 1999, FM3 and FM4 on Aqua in May 2002, FM5 on Suomi National Polar-Orbiting Partnership (Suomi-NPP) in October 2011, and FM6 on Joint Polar Satellite System-1 (JPSS-1)/NOAA-20 in November 2017. PFM was deorbited in June 2015, and FM1–FM5 are currently producing data. FM6 data are presently going through the calibration/validation phase. Each instrument on Terra, Aqua, and Suomi-NPP (FM1–FM5) has scanning radiometers with three spectral channels defined as the total (0.2–200 μm), SW (0.2–5 μm), and LW window (8–12 μm) channels. The LW window channel on FM6 has been replaced by a broadband LW (5–100 μm) channel. Results from Terra and Aqua instruments constitute the longest data record and have undergone the most extensive characterization and calibration (Loeb et al. 2016). Also, these data have gone through progressively refined editions of the CERES processing system. The current version of the system used with Terra and Aqua data is designated as Edition 4A. Imager data from the Moderate Resolution Imaging Spectroradiometer (MODIS) that flies aboard both Terra and Aqua are used for deriving cloud properties (Menzel et al. 2008; Minnis et al. 2010) and scene identification information (Loeb et al. 2005; Su et al. 2015a,b) required for surface and TOA flux computations.
CERES-derived SSF data have been intercompared with one another and extensively validated using surface-based flux measurements collected from high-quality networks like the Baseline Surface Radiation Network (BSRN) and NOAA’s Surface Radiation Budget Network (SURFRAD). This paper presents validation of SSF fluxes from each algorithm separately for clear and cloudy footprints. Clear footprints are defined here as those with <0.1% cloud amount with all others defined as cloudy. Section 2 presents brief descriptions of the algorithms and input data sources. These algorithms were chosen for processing and validation at the recommendation of the CERES Science Team (CST) and can be removed only on a CST recommendation. Section 3 presents a discussion of the sources of surface validation data. Results of those intercomparisons and validation are presented in section 4, followed by summary and conclusions in section 5.
2. Surface flux models and input data
Numerous improvements have been incorporated into some of these models since their initial application to CERES processing. Such improvements are a direct consequence of the continual validation of SOFA SSF products (Gupta et al. 2004; Kratz et al. 2010). Only details of those model improvements will be provided in this section. Other models will be only briefly mentioned since adequate description of those has been presented in earlier publications (e.g., Kratz et al. 2010). For example, SW model A proposed by Li et al. (1993a,b) is a linear parameterization for estimating net SW surface flux. Downward SW flux (DSF) was to be derived by using surface albedo values from Li and Garand (1994). Although considered promising at the time it was introduced, it produces clear-sky surface net SW fluxes only and has remained that way without any further enhancements. LW model A proposed by Inamdar and Ramanathan (1997) derives broadband downward LW flux (DLF) at the surface as a sum of its components in the 8–12-μm window and the nonwindow regions by making use of the window region TOA measurements from CERES instruments. This model was also a clear-sky-only model at the time of introduction and remained so without any further improvements. SW and LW models B are both all-sky models and have undergone substantial improvements over the years. LW model C is newly introduced in CERES with Edition-4A processing. Improvements made to these models are separately described below.
a. SW model B
The SW model B is also known as the Langley parameterized shortwave algorithm (LPSA), which derives downward SW flux (DSF) as
DSF = S0d⁻²μ Ta Tc,
where S0d⁻²μ together represent the incoming solar irradiance at the TOA and Ta and Tc represent atmospheric and cloud transmittances computed separately. A comprehensive description of LPSA is available in Gupta et al. (2001) and Kratz et al. (2010).
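A minimal sketch of this relation in code is given below; the transmittance parameterizations themselves are not reproduced here, so Ta and Tc enter as placeholder values chosen purely for illustration.

```python
# Minimal sketch of the LPSA relation DSF = S0 * d**-2 * mu * Ta * Tc.
# The transmittance parameterizations are not reproduced here, so Ta and Tc
# enter as placeholder values chosen purely for illustration.

def lpsa_dsf(s0, d, mu, t_atm, t_cld):
    """Downward SW surface flux (W m-2).

    s0    : total solar irradiance at 1 AU (W m-2)
    d     : Earth-Sun distance (AU)
    mu    : cosine of the solar zenith angle
    t_atm : broadband atmospheric transmittance
    t_cld : broadband cloud transmittance (1.0 for clear sky)
    """
    return s0 * d**-2 * mu * t_atm * t_cld

# Hypothetical values, not CERES data:
print(lpsa_dsf(s0=1361.0, d=1.0, mu=0.8, t_atm=0.75, t_cld=0.6))
```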
The LPSA model has undergone a number of significant improvements since the Kratz et al. (2010) study on CERES Edition-2B results. For example, the clear-sky surface albedo climatology used in DSF computation has been updated several times using newer editions of CERES clear-sky measurements (unpublished results). These updates substantially improved the agreement between CERES-derived DSF with ground-based measurements, especially at polar sites (Kratz et al. 2010). Another important improvement made to the LPSA model since the Edition-2B processing was the replacement of the original World Climate Programme (WCP)-55 aerosol properties (Deepak and Gerber 1983). This was initially accomplished by using a monthly climatology of aerosol optical depths (AODs) derived from 10 years of the Model for Atmospheric Transport and Chemistry (MATCH; Collins et al. 2001; Rasch et al. 1997) data, and subsequently by near-real-time daily AODs, also from MATCH. Corresponding values of single scattering albedo and asymmetry parameter were derived from the Optical Properties of Aerosols and Clouds (OPAC) database (Hess et al. 1998). The use of daily aerosol properties has allowed for the effects of transient aerosol events to be captured in the derived SSF fluxes.
b. LW model B
This model makes use of the Langley parameterized longwave algorithm (LPLA) as initially outlined in Gupta (1989) with revisions presented in Gupta et al. (1992) for computing DLF. By incorporating the surface emissivity map (Wilber et al. 1999), upward and net surface fluxes can be computed from downward fluxes. A more detailed discussion of the LPLA model is provided in Kratz et al. (2010).
As with the LPSA, the LPLA model has also undergone two important improvements since its use in the Edition-2B processing. Validation of the LPLA-derived DLF over arid sites, especially during daytime, revealed substantial overestimations in the DLF. Further analysis showed that these overestimations occurred when surface skin temperature far exceeded the temperature of the atmospheric layers just above the surface, causing the temperature lapse rates near the surface to be unsustainably high. Detailed discussion of this problem and a formulated solution were presented in Gupta et al. (2010). The solution consisted in limiting the temperature lapse rate between the surface and the lowest atmospheric layer close to the adiabatic lapse rate of ≈10 K km−1.
Another problem observed during the validation of DLF over polar regions, especially for Antarctica, caused an effect that was exactly opposite to that described above, where substantial underestimations occurred when the surface skin temperature was much lower (by 20–30 K) than the temperature of the lowest atmospheric layers. This condition occurred frequently over polar and high-altitude regions where extremely low water vapor amounts allowed the surface to radiate essentially directly to space and surface temperature to plummet. This situation was remedied by limiting the temperature lapse rate to −10 K km−1, a value chosen after considerable numerical experimentation.
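One way to apply the lapse-rate limits described in these two paragraphs is sketched below; the names are illustrative, and clamping the skin temperature is an assumption about where the adjustment enters, not a description of the operational CERES code.

```python
# Sketch of the near-surface lapse-rate limiting described above: the skin
# temperature fed into the DLF calculation is kept consistent with a lapse
# rate of roughly +/-10 K per km between the surface and the lowest layer.
# Names are illustrative; applying the clamp to the skin temperature is an
# assumption about implementation.

MAX_LAPSE_K_PER_KM = 10.0

def limit_skin_temperature(t_skin_k, t_lowest_layer_k, layer_height_km):
    """Clamp the skin temperature so the near-surface lapse rate stays bounded."""
    max_delta = MAX_LAPSE_K_PER_KM * layer_height_km
    upper = t_lowest_layer_k + max_delta   # hot-surface (arid, daytime) case
    lower = t_lowest_layer_k - max_delta   # cold-surface (polar inversion) case
    return min(max(t_skin_k, lower), upper)

# Hypothetical desert footprint with a 0.5-km-deep lowest layer:
print(limit_skin_temperature(t_skin_k=325.0, t_lowest_layer_k=305.0, layer_height_km=0.5))  # 310.0
```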
c. LW model C
This model is based on work reported in Zhou and Cess (2001) and Zhou et al. (2007). It was incorporated into the SOFA SSF processing because the CERES FM6 instrument, which is now flying aboard the JPSS-1/NOAA-20 satellite, has been modified by replacing the 8–12-μm window channel by a broadband (5–100 μm) LW channel, thereby rendering the LW model A inoperative with the FM6 data. The early version of this model (Zhou and Cess 2001) demonstrated that clear-sky DLF can be represented in terms of surface upward LW flux (ULF) and column water vapor. That version was revised later based on the results of validation studies and also extended to cover cloudy-sky conditions (Zhou et al. 2007). Separate expressions were developed for the clear-sky component,
where w is the column water vapor, a0–a3 are regression coefficients, and ULF is the upward LW flux computed with surface skin temperature and surface emissivity of unity, and the cloudy-sky component,
where lwp and iwp are the liquid and ice water paths present in the cloud, respectively, and b0–b5 are regression coefficients. Total DLF was then computed as the sum of clear and cloudy components weighted by their respective fractions.
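The final weighting step can be written in a few lines. The sketch below assumes the clear-sky and cloudy-sky components have already been evaluated; the regression functions themselves (with coefficients a0–a3 and b0–b5) are not reproduced here.

```python
# Sketch of how LW model C combines its two components: the total DLF is the
# cloud-fraction-weighted sum of the clear-sky and cloudy-sky regressions.
# The regressions themselves are not reproduced, so the component fluxes
# enter as precomputed inputs.

def total_dlf(cloud_fraction, dlf_clear, dlf_cloudy):
    """Weight the clear- and cloudy-sky DLF components by their fractions."""
    f_cld = min(max(cloud_fraction, 0.0), 1.0)
    return (1.0 - f_cld) * dlf_clear + f_cld * dlf_cloudy

# Hypothetical footprint: 40% cloud cover, illustrative component fluxes (W m-2).
print(total_dlf(cloud_fraction=0.4, dlf_clear=310.0, dlf_cloudy=355.0))  # 328.0
```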
d. Input data
The input parameters necessary to run the SW and LW models have been obtained from the CERES processing stream. Temperature and humidity profiles along with the column integrated ozone amounts have been made available through the Meteorology, Ozone, and Aerosol (MOA) database. For the CERES Edition-4A processing, MOA profiles were produced from a frozen version of the Goddard Earth Observing System data assimilation product (GEOS-5.2, also known as G5-CERES) obtained from the Global Modeling and Assimilation Office (GMAO; Rienecker et al. 2008). Having a frozen version of the GEOS dataset was essential for ensuring climate quality stability for CERES data products. Fractional cloud amounts and cloud-base heights have been made available through the Clouds subsystem within CERES processing (Minnis et al. 2010) where cloud properties were derived using high-resolution imager data obtained from MODIS aboard Terra and Aqua satellites. A significant improvement to the CERES Edition-4A processing was made through the introduction of the daily total solar irradiance (TSI) data, obtained from the Solar Radiation and Climate Experiment (SORCE; Kopp and Lean 2011). These TSI measurements have provided highly precise and accurate values of total solar irradiance rather than relying on an estimated static value of 1365 W m−2. These TSI values are substantially lower than the static value, lying in the 1360–1362 W m−2 range with a median value close to 1361 W m−2. The vast majority of daily TSI values were obtained from SORCE (Kopp 2014), version 15, data available online starting with 25 February 2003. Since the CERES Terra data record began production on 1 March 2000, the period prior to the start of SORCE data has been covered by the World Radiation Center (WRC) Davos (Fröhlich 2012) dataset from the file composite_d41_62_0906.dat. The substantial offset between WRC and SORCE data was removed by applying an offset correction of −4.4389 W m−2 to WRC data as suggested by providers of SORCE data (G. Kopp 2009, personal communication). The SORCE data stream was interrupted in the middle of July 2013 because of a battery problem on board the SORCE spacecraft. The CERES team decided to fill the gap by using the Royal Meteorological Institute of Belgium (RMIB) Composite data (Dewitte et al. 2004) starting on 1 July 2013. Even though SORCE data became available again as early as March 2014, CERES processing continued to use the RMIB-Composite data until 31 October 2014. Starting on 1 November 2014, CERES began using a version of SORCE data that had been revised slightly from version 15. To maintain consistency with the earlier SORCE data, however, the most recent SORCE data have been offset in the CERES processing to match the version-15 values and have continued to be used in this manner.
3. Surface validation data
Ground-based measurements of surface radiative fluxes required for validation of satellite retrievals are currently being acquired by numerous high-quality networks that have come into existence during the last 20–25 years. Notable among these are BSRN (Ohmura et al. 1998), the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) Program (Stokes and Schwartz 1994), and the SURFRAD (Augustine et al. 2000) network operated by NOAA’s Global Monitoring Division (GMD). Flux measurements from 28 BSRN sites; Southern Great Plains (SGP), North Slope of Alaska (NSA), and two Tropical Western Pacific (TWP) sites from the ARM Program; and seven sites of the SURFRAD network were compiled under the CERES–ARM Validation Experiment (CAVE) banner and were made available to the worldwide science community online (https://www-cave.larc.nasa.gov). Measurements from an ocean site operated by NASA’s Langley Research Center (LaRC) at the Chesapeake Lighthouse under the CERES Ocean Validation Experiment (COVE; Jin et al. 2002) were also included in the CAVE database. After discontinuation of the COVE site in 2016, that operation was shifted to the Chemistry and Physics Atmospheric Boundary Layer Experiment (CAPABLE) site already operational on the grounds of NASA LaRC. The surface sites included in the CAVE database were chosen on the basis of their availability, reliability, and diversity in representing a variety of different surface types (e.g., coastal, continental, desert, island, and polar). A map showing the locations of surface sites used in this study is displayed in Fig. 1.
Many network sites provide surface insolation measurements produced by two methods. In the first method, the direct component is measured by a normal incidence pyrheliometer (NIP) and combined with diffuse component measured by a shaded pyranometer. In the other method, the total irradiance is measured by an unshaded pyranometer. In general, measurements by the first method are considered higher quality (Michalsky et al. 1999), but those can be seriously affected by tracking errors in NIP measurements and also exhibit large gaps. The unshaded measurements, although not affected by tracking problems, show far fewer gaps but are subject to cosine errors (Gupta et al. 2004). In view of this, some networks (e.g., GMD) apply strict quality controls and provide the better of the two measurements to the outside user. The pyrgeometers measuring surface LW fluxes present no such problems.
Satellite-derived and ground-measured fluxes are spatially and temporally matched prior to their comparisons. Spatial matching is ensured by using the half-width of a nadir-viewing footprint (~10 km) as the maximum allowable distance between the site location and center of the footprint(s). Temporal matching is ensured by imposing the highest temporal resolution of ground site data (1 min for most sites) as the maximum allowable time difference between the overpass time and site measurement (Gupta et al. 2004; Kratz et al. 2010). Thus, all CERES footprint fluxes from within 1 min of the overpass time and within 10 km of the surface site were averaged together for comparison with the site measurement. A special consideration was made for cloudy footprint SW comparisons as those showed a large scatter contributed by the large spatial variability of clouds in many cloudy footprints. This scatter was substantially reduced by compensating for the spatial variability by averaging 1-min SW measurements over longer intervals. A 60-min interval (±30 min) was found to be optimum and was adopted (Gupta et al. 2004).
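A minimal sketch of this matching rule is given below, assuming simple in-memory records rather than the actual CERES SSF and CAVE file formats; for the cloudy-sky SW comparison described above, the time window would be widened to ±30 min.

```python
# Sketch of the matching rule: average all footprints whose centers lie within
# 10 km of the site and within 1 min of the ground measurement time. The
# record layout is illustrative, not the actual CERES/CAVE format.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
    return 2.0 * 6371.0 * asin(sqrt(a))

def matched_mean_flux(footprints, site, max_km=10.0, max_minutes=1.0):
    """footprints: dicts with 'lat', 'lon', 'time_min', 'flux'; site: 'lat', 'lon', 'time_min'."""
    matched = [fp['flux'] for fp in footprints
               if haversine_km(fp['lat'], fp['lon'], site['lat'], site['lon']) <= max_km
               and abs(fp['time_min'] - site['time_min']) <= max_minutes]
    return sum(matched) / len(matched) if matched else None
```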
4. Output product and validation results
Validation results presented here cover a period of 204 months (March 2000–February 2017) for Terra and 176 months (July 2002–February 2017) for Aqua. March 2000 and July 2002 mark the beginning of Terra and Aqua operations, respectively, and February 2017 marks the end of MODIS imager collection-5 data. Beyond February 2017, collection-5 data were superseded by collection 6 and subsequently by collection 6.1 to remedy the worsening cross-talk problem between certain MODIS channels (Platnick et al. 2017; Moeller et al. 2017) that began to affect the quality of cloud properties retrieved from the imager data. Use of cloud properties derived from collection 6.1 would have affected the climate-quality stability of the CERES product time series. As a result, Edition-4A fluxes acquired beyond February 2017 were not included in this analysis. All fluxes presented in the present study were derived using cross-track scanning mode observations, even though both Terra and Aqua have been deployed in other scanning modes at various times.
a. SW model results
All clear-sky SW flux comparisons are shown in Fig. 2, with Aqua results for SW model A shown in Fig. 2a. Selection of clear-sky footprints is made on the basis of satellite-derived cloud amount. Since the total number of points is very large, points on the plot were put into two-dimensional 20 W m−2 bins and the frequency of each bin was color coded on the density plot. Figure 2b shows the mean and standard deviation of model derived fluxes for each 20 W m−2 interval of surface measured fluxes and provides a measure of the scatter in each flux interval. Figures 2c and 2d show clear-sky results from SW model B. Cloudy-sky results are from SW model B only and are shown in Fig. 3. Results shown in Figs. 2 and 3 were taken from Aqua observations only to limit the number of points on each plot. Corresponding results from Terra were found to be very similar and are not presented here. Detailed statistical comparisons for both Aqua and Terra, including a breakdown of results into different surface categories (island, coastal, polar, continental, and desert), are presented in Table 1 for clear-sky fluxes from SW models A and B. The global category is defined here to represent combined results for all surface categories. Statistics on each row represent combined results for all sites in that category. Bias in all tables presented in this paper is defined as satellite-derived minus ground-measured flux. Percentage differences for both systematic and random errors are provided along with flux values as measures of the relative magnitude of errors. The same system has been adopted in the discussion of all SW and LW fluxes presented in this work.
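These statistics can be computed from the matched satellite–ground flux pairs along the lines of the sketch below; treating the random error as the standard deviation of the flux differences is an assumption, since the exact estimator is not spelled out here.

```python
# Sketch of the validation statistics: bias is satellite-derived minus
# ground-measured flux; "random error" is treated here as the standard
# deviation of the differences (an assumption), and both are also expressed
# as percentages of the mean ground-measured flux.
import numpy as np

def validation_stats(satellite_flux, ground_flux):
    sat = np.asarray(satellite_flux, dtype=float)
    gnd = np.asarray(ground_flux, dtype=float)
    diff = sat - gnd
    bias = diff.mean()
    random_error = diff.std(ddof=1)
    return {'bias_wm2': bias,
            'bias_pct': 100.0 * bias / gnd.mean(),
            'random_wm2': random_error,
            'random_pct': 100.0 * random_error / gnd.mean()}

# Hypothetical matched fluxes (W m-2):
print(validation_stats([480.0, 510.0, 250.0], [470.0, 500.0, 262.0]))
```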
Systematic errors for SW model A clear-sky fluxes for most surface types are within 20 W m−2 (Suttles and Ohring 1986) except for the polar surface type for both satellites and for island surface type for Terra only. Corresponding errors for SW model B clear-sky fluxes are also within 20 W m−2 and generally lower than those for SW model A. Random error is very large, especially for the island surface type for both satellites but comparable to clear-sky errors found in other studies (Sun et al. 2014). Statistics for SW model B cloudy-sky fluxes are shown in Table 2. Systematic errors for cloudy-sky fluxes are larger, exceeding 35 W m−2 for the island and polar surface types. These large systematic errors may be attributable to corresponding errors in satellite cloud retrievals over island and polar surfaces. Random errors, although large for both satellites, are comparable to those for instantaneous comparisons of cloudy-sky fluxes found in other studies (Sun et al. 2012; Gautier and Landsfeld 1997) and exceed 100 W m−2 for the island surface type only.
b. LW model results
Comparisons of clear-sky LW fluxes from all models are shown in Fig. 4. These results were also taken from Aqua observations only to limit the number of points on each plot. Density plots for LW data were prepared by putting fluxes into 10 W m−2 bins since the dynamic range for LW fluxes is much smaller than for SW fluxes, while the frequency in each bin was color coded as before. Figures 4a, 4c, and 4e show scatterplots of clear-sky fluxes for the three models (A, B, and C) while corresponding mean and standard deviation plots are shown in Figs. 4b, 4d, and 4f, respectively. Figures 4b, 4d, and 4f clearly show the substantial departure of flux averages from the 45° line for very low fluxes for all models. This departure can be attributed to the deficiency in all models in handling extremely low water vapor amounts in the atmosphere. Although present in all models, this deficiency is most prominent in model C. LW results from Terra were also found to be very similar and are not presented here.
Details of statistical comparisons are shown in Table 3 for both Aqua and Terra clear-sky fluxes including the breakdown of results for different surface types and the global category. Clear-sky LW model A results for both Terra and Aqua (Table 3, top third) show small systematic errors of less than ±7.0 W m−2 for nearly all surface types with the exception of deserts for Aqua for which the bias reached a value slightly greater than 10 W m−2. Corresponding comparisons from LW model B (Table 3, middle third) also show systematic errors in both Terra and Aqua results to be within 10 W m−2 except for the desert surface type for which biases for both satellites fell in the 10–15 W m−2 range. Results for LW model C (Table 3, bottom third) show that biases for both desert and polar surface types are substantially larger for Terra, reaching as high as 30 W m−2. Specific causes of these large errors may be difficult to identify, although large errors in scene identification remain a possible reason.
Cloudy-sky fluxes are available from LW models B and C only. Their density scatterplots are shown in Figs. 5a and 5c along with the standard deviation plots in Figs. 5b and 5d, respectively. Figures 5b and 5d show, for both models, substantial deviation from the 45° line in the low flux range, indicating overestimation by the models. Once again, this overestimation is more severe in model C than in model B. Statistical results for cloudy-sky fluxes are presented in Table 4. Systematic errors for both models are very small, all staying within 10 W m−2. Random errors for both models are larger, mostly between 20 and 30 W m−2, with the highest values occurring for polar surface types. These larger random errors can be attributed to the high spatial and temporal variability of clouds along with other reasons that impart errors to clear-sky fluxes.
An earlier study on the validation of CERES-derived surface fluxes (Gupta et al. 2004) showed an 8–10 W m−2 difference in systematic errors between daytime and nighttime clear-sky LW flux comparisons. The primary cause of these differences was determined to be a negative bias during daytime and positive bias during nighttime in surface temperatures in the input meteorological data taken from the GEOS-4 reanalysis product. Based on that experience, systematic and random errors for clear-sky LW fluxes were carefully examined in this study separately for daytime and nighttime for global (combined for all surface types) results. Statistical results of that examination are presented in Table 5 for both daytime and nighttime for LW models A, B, and C. Scatterplots for clear-sky LW fluxes in this study are shown in Fig. 6 for daytime only. A comparative look at day and night systematic errors shows that those are small and negative for models A (Table 5, top third) and B (Table 5, middle third) while being slightly larger and positive for model C (Table 5, bottom third). A look at random errors for clear-sky LW fluxes shows that, whereas random errors for models A and B are in the 13–18 W m−2 range, those for model C are in the 23–27 W m−2 range. While there is no clear indication of large day–night differences in systematic errors, as was seen in Edition-2B results (Gupta et al. 2004), model C clear-sky LW fluxes show substantially larger systematic and random errors relative to the other two models.
5. Summary and conclusions
The main objective of this study was to validate the derived surface fluxes and the algorithms used to derive them. To accomplish that objective, surface SW and LW fluxes for clear and cloudy skies were derived on an instantaneous footprint basis under the SOFA segment of CERES project using TOA measurements, cloud parameters derived from MODIS collection-5 radiances and aerosol properties derived from MATCH products. Temperature and humidity profiles and ozone came from the CERES MOA, which is primarily based on GEOS-5 reanalysis products. All of these constituted the inputs for the radiative transfer algorithms used for deriving surface fluxes. The period for this validation covers 204 months (March 2000–February 2017) for Terra and 176 months (July 2002–February 2017) for Aqua. The cutoff date was imposed by the discontinuation of MODIS collection 5 at the end of February 2017 and the desire in this validation to use a totally consistent dataset for which all algorithms and inputs remain unchanged throughout the period.
Clear-sky fluxes derived by SW model A (Li et al. 1993b) met the established accuracy criteria for four of the five surface types but not for polar surfaces where random errors far exceeded the established criteria (Table 1, top half). Clear-sky SW model B fluxes follow a similar pattern although error magnitudes are much smaller except for the large random error over island-type sites for Terra observations (Table 1, bottom half). Cloudy-sky SW fluxes provided only by SW model B (Table 2) exhibit larger systematic errors for the island and polar surfaces and a large random error only for islands. Island site comparisons are especially prone to large errors because of the spatial mismatch between point observations from sites and gridbox values from satellite retrievals caused by frequent formation of low clouds over islands during certain times of the day (Long and McFarlane 2012). Random errors for other surface types are similar to those found in other studies (Sun et al. 2012, 2014; Gautier and Landsfeld 1997). Potential causes of these errors were discussed in the previous section.
Systematic errors for LW models A and B are very modest, exceeding 10 W m−2 only for desert surfaces. For LW model C, however, systematic errors are much larger for desert surfaces and even larger for polar surfaces. Also, while random errors for models A and B are mostly within 20 W m−2, those for model C are in excess of 20 W m−2 for desert surfaces and exceed 30 W m−2 for polar surfaces. For cloudy-sky fluxes, both systematic and random errors are modest for both models B and C. In summary, both SW and LW models A have limited value because these are clear-sky-only models. Models B, both SW and LW, perform well for both clear and cloudy conditions and will likely be preferred models for future studies. LW model C does not perform well for clear conditions.
This research has been supported by the NASA CERES project. The CERES instantaneous Single Scanner Footprint (SSF) Ed4A data are available online (https://ceres.larc.nasa.gov/products-info.php?product=SSF-Level2). The NASA Langley Atmospheric Sciences Data Center processed the SSF data. The surface validation data are also available online (https://www-cave.larc.nasa.gov/pages/sfcobs.html). For land-based observations, we include sites from the Baseline Surface Radiation Network (BSRN), NOAA’s Global Monitoring Division (GMD) and SURFRAD network, and the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Program. The authors also thank D. A. Rutan (SSAI) for providing the CAVE database and Joanne Saunders for providing the formatting of the equations in this document. |
The first Sinhalese arrived in Sri Lanka late in the 6th century B.C., probably from northern India. Buddhism was introduced circa 250 B.C., and a great civilization developed at the cities of Anuradhapura (kingdom from circa 200 B.C. to circa A.D. 1000) and Polonnaruwa (from about 1070 to 1200). In the 14th century, a south Indian dynasty established a Tamil kingdom in northern Sri Lanka. The Portuguese controlled the coastal areas of the island in the 16th century and the Dutch in the 17th century. The island was ceded to the British in 1796, became a crown colony in 1802, and was formally united under British rule by 1815. As Ceylon, it became independent in 1948; its name was changed to Sri Lanka in 1972. |
Diabetic retinopathy refers to damage to the blood vessels of the eye retina due to diabetes. The condition is progressive and results in a gradual weakening of eyesight, at times causing complete vision loss. While this predominantly affects patients suffering from type 1 diabetes, approximately 60 percent of those suffering from type 2 diabetes also experience the condition after having lived with diabetes for a long period of time.
While it is known that diabetics need to monitor their diet to control their sugar levels, a recent study shows that particular foods are also associated with an increased risk of diabetic retinopathy. Hence, avoiding these foods can slow down the progression of the condition and help to manage and even prevent the development of diabetic retinopathy.
Foods That Reduce Risk of Diabetic Retinopathy
Researchers found that the Mediterranean diet can be greatly beneficial in reducing complications that can arise from diabetes. It can help to reduce insulin resistance, oxidative stress, and inflammation, all of which are responsible for damaging the retina of the eye.
What Is the Mediterranean Diet?
The Mediterranean diet is one that includes foods eaten by people in the Mediterranean countries. It incorporates fruits and vegetables that have high fiber content, olive oil, oily fish, and red wine, all of which are considered good for health. Therefore, the Mediterranean diet is believed to be one of the healthiest diets.
In addition to the general health benefits, fruits and vegetables not only contain fiber but are also rich in antioxidants and have a low glycaemic index. This means that they are digested slowly and cause a slow increase in blood sugar levels.
Foods That Increase Risk of Diabetic Retinopathy
Scientists found that high-calorie diets that are rich in carbohydrates increase the risk of diabetic retinopathy. This is because the metabolic burden of these diets is high and they also cause oxidative stress. Therefore, the quality of carbohydrates is crucial in the diet of diabetics. Monitoring this as well as overall calorie intake can reduce the risk of developing diabetic retinopathy.
Shortcomings of the Research Findings
Researchers were unable to observe the effect of vitamin D and sodium on diabetic retinopathy. They were also not able to establish the effect of antioxidants, alcohol, protein, and fatty acids on the condition.
Recommendations by Experts
The research that led to the findings involved 31 observational and interventional studies. Researchers carried out controlled trials to observe the effect of food and beverage intake, dietary patterns, and micro and macronutrients on diabetic retinopathy. The observations helped them draw a correlation between the diet of a person and the risk of retinopathy.
The findings indicate that the disease can be prevented if not managed by monitoring food intake. Doctors recommend reducing calories and increasing the quantity of dietary fiber and oily fish in your diet. Alternatively, the Mediterranean diet was found to have a positive impact in lowering the risk of diabetic retinopathy and preserving the eyesight of diabetics.
Related: Diabetic retinopathy eye disease causes, prevention and treatment |
A certain string-processing language offers a primitive operation which splits a string into two pieces. Since this operation involves copying the original string, it takes n units of time for a string of length n, regardless of the location of the cut. Suppose, now, that you want to break a string into many pieces. The order in which the breaks are made can affect the total running time. For example, if you want to cut a 20-character string at positions 3 and 10, then making the first cut at position 3 incurs a total cost of 20 + 17 = 37, while doing position 10 first has a better cost of 20 + 10 = 30.
Give a dynamic programming algorithm that, given the locations of m cuts in a string of length n, finds the minimum cost of breaking the string into m + 1 pieces. |
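One possible solution, sketched below in Python, treats the piece between two chosen boundaries as a subproblem: the cost of fully cutting a piece equals its length plus the cheapest way of cutting the two halves produced by the first cut inside it, minimized over that first cut. The positions 0 and n are added as sentinel boundaries, and the table is filled in order of increasing piece size, giving an O(m^3) algorithm for m cuts.

def min_cut_cost(n, cuts):
    # Minimum total cost of making all the given cuts in a string of length n.
    pts = [0] + sorted(cuts) + [n]                 # piece boundaries, incl. ends
    m = len(pts)
    # cost[i][j]: minimum cost to fully cut the piece between pts[i] and pts[j]
    cost = [[0] * m for _ in range(m)]
    for span in range(2, m):                       # pieces containing >= 1 cut
        for i in range(m - span):
            j = i + span
            best = min(cost[i][k] + cost[k][j] for k in range(i + 1, j))
            cost[i][j] = best + (pts[j] - pts[i])  # pay for cutting this piece
    return cost[0][m - 1]

# The example above: a 20-character string cut at positions 3 and 10 costs 30.
assert min_cut_cost(20, [3, 10]) == 30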
Impact of a Handicapped Child on the Family, by Marcia A. Cohen
Guide Entry to 82.06.08:
This unit addresses some of the issues and complexities faced by a family having a handicapped child. The focus is not on a specific disability but rather how a family might cope and adjust to the realization that their child will always be “exceptional.” There is a need to address these issues because handicapped individuals are becoming increasingly visible in our schools and communities. With the advancements of modern science most disabled children will live a normal life span. This has significant implications for the family as it matures. Teaching a unit on this topic will help sensitize students to the feelings and needs of disabled persons and create an awareness of the stress and joys experienced by a family having such a child. Bibliographies are offered for teachers to read more about specific disabilities as well as the relationships between family members and the handicapped child.
(Recommended for 8th through 12th grade Social Studies and Health.) |
This topic center covers parenting and child development of preschool children (early childhood, ages 3 to 7). For a complete review of the theories of child development upon which this article is based, please visit our Child and Adolescent Development topic center. For coverage of child development and parenting topics applicable to infant children (ages 0-2), please visit our Infant Parenting and Child Development topic center.
Nurturing is vital to children's development, a secret ingredient that enables children to grow physically, mentally, socially, emotionally, culturally, and spiritually. It doesn't matter if children are provided with a healthy diet, adequate shelter and medical care. If they are not adequately nurtured as well, their health and development will generally suffer. It's a good thing, then, that nurturing children is one of the more fun and rewarding parts of raising a family. Nurture activities allow parents to express their creative, loving, and playful sides as they help their children grow and learn.
There is no one single and proper way to nurture a child. Some parents, anxious to do their best, worry that they may not be adequately nurturing their children, or that they may not be nurturing their children the "right way". For the most part, such concerns are unfounded. By its nature nurturing is a creative and spontaneous activity that can take many forms. Most any activity parents engage in that shows children that they are loved will be an effective act of nurture. It is important that parents encourage and select nurturing activities that will help young children to develop properly, but in most cases, parents will naturally and spontaneously be drawn to select and provide children with nurturing activities that will accomplish this goal. Children will just think that Mom, Dad, and Grandpa want to play and to enjoy time together. They won't know that parents are actually trying to teach loving lessons.
Some parents fail to bond adequately with their children, and as a result find the act of nurturing their children to be something they have to force rather than something that comes naturally. Parents who feel this way are not necessarily bad at caregiving. Instead, it may be the case that the maternal-infant bond (the core relationship between primary caregiver and child) was prevented from developing due to circumstances outside of the caregiver's control. A promising but not-yet-definitively-studied form of psychotherapy is available to help repair such disturbed maternal-infant bonds. Interested parents should listen to this podcast, and then take a look at this article for more information.
This article provides general guidance on how parents can best create a nurturing environment in which young children can grow. We accomplish this goal by reviewing various important aspects of child development, including children's physical, cognitive, social, emotional, cultural, and spiritual development, and providing examples of nurturing activities that can spur growth in each area. As you read this article, remember that there is no one "right way" to nurture your child. We are not intending to provide a "how-to" guide which will enable your child to achieve (or exceed) developmental milestones. Rather, we are hoping to provide examples of ways that parents can encourage development while simultaneously expressing their love and enjoyment of their younger children. |
Wetlands Prairie Discovery
Instructors: You will be walking the students to the Prairie Platform, through the identification area and out into the grasses. Please have bug spray and closed toe shoes.
Grades: 4 - 5th, but can be adapted to all ages.
Time: 3 hours
TEKS Correlation: 4.1A, 4.2B, 4.4A, 4.5A, 4.7B, 5.1A, 5.2B, 5.4A, 5.5A
Objectives:
- Learn the difference in plants that live in the forest and the prairie.
- Learn that the American prairie is endangered and why.
- Learn the characteristics of the prairie and why they are important.
- Learn what restoration is and how prairie restoration is done.
- Learn some of the animals that live on the prairie.
- Lesson plan for American Prairie
- Grass and wildflower/insect/bird/identification guides
- Soil and grass sample bags
- Sweep nets/hand lenses/white pans/collection jars
- For soil sampling: 1" PVC �" pipe/hammer
- Flower and grass presses/blotter paper/rubber bands/scissors
As you begin your walk to the prairie platform from the interpretive building, you will pass the McGovern Discovery Area under construction. Point out the surrounding trees, vines and grasses, (palmettos, oaks, sphagnum mosses, poison ivy, Arizona ash...) in order to contrast this with the flat grassland looming in the distance. Show them the bold Golden Silk Spiders if they are currently out.
Did You Know the American Prairie is Endangered?
When you approach the prairie, tell the students that 740 million acres of the United States used to be covered in prairie, but less than 1% of that prairie land remains. Why do the students think this rapid decline in prairie has occurred? (Invasive species of trees such as the Chinese tallow tree, woody vines and scrub trees, natural changes in the weather, but mostly DEVELOPMENT).
Coastal Wetlands Prairie Characteristics
Coastal wetlands prairie such as is found here at ABNC is similar to the Northern tallgrass prairie yet the species of wildflowers and grasses are different. The Coastal Prairie is also home to the endangered Attwater's Prairie Chicken, whose population currently rests at 52 individuals. There is a preserve for these birds in Texas City, TX and one near Eagle Lake. Conservationists are trying to keep this species alive. Many Native Indian ceremonial dances are derived from the "booming" dances of the chickens in Early January when their mating season begins. You can show the children this behavior and they enjoy being "chickens."
Winter wetlands are exemplified in the coastal prairie. The black, dense soil in the prairie, often called "gumbo" works like a sponge when the rains come. This soil, though hard to work, swells and absorbs a great deal of water, thus preventing flooding. The grasses help to hold it in place, so there is little runoff. When the soil dries out due to evaporation from the heat of the sun, it cracks and shrinks becoming a self plowing field.
Prairie Restoration at ABNC
ABNC is currently carrying out a restoration project to bring back the acres formerly occupied by coastal prairie grasses. From time to time, you will notice the grasses being mowed, then set afire. This is part of the "controlled burn" process. Trained groups of firefighters and land managers gather on just the right day to burn out an area of the prairie. Why do you think they do this? [Historically, lightning, Native Americans, and farmers would start fires in the prairie. The fires eliminate many of the invasive species that are competing for the food sources in the area. If you have invasive plant species, you probably have invasive animal species, too. In order to maintain a grass-dominant ecosystem, improve the soil with the ash from the fires, remove woody underbrush and satisfy the seed dormancy requirements, these controlled fires are started periodically. The weather conditions must be just right for this to happen.] When a prairie has been burned, many animals, plants, birds and insects will leave or hide, but most will return.
One example of the grazing animals on the prairie is the bison.
Who Lives in the Prairie
Sweep Netting Activity. Since more than grasses and bison exist in the prairie, we'll be taking a look at the insects that are dependent on the grasses for survival. It is easier to catch insects in the summer when they are in season, but many are still around. At this point, demonstrate the sweeping techniques and do some catch and release exercises. Talk about how the animals here have adapted to living in the grasses. (Camouflage, wings, sounds, size, length of body parts...)
Fall is a quiet and beautiful time of year to stand still in the prairie. Just looking at the different grasses and wildflowers can be a wonderful experience. Gather a few samples with your class and go back to the identification garden. The students can glue or tape a small sample of the different grasses in a journal or notebook to identify. If you have some artists in the group, they might like to do a drawing or sketch of the view. There have even been dances and music composed on a prairie theme (e.g., Martha Graham and Aaron Copland).
Birds & Binoculars Activity. Use the binoculars to look for birds on the prairie. Use the identification guides to determine the kind of birds seen.
Soil Samples. Talk about the necessity of good soil for things to grow. Remember that plants are the only PRODUCERS. Everything else is a CONSUMER. There are just different levels of consumers and finally, there are the DECOMPOSERS. It sounds like it could be the name of a band, but is actually a necessary step in the finest breakdown of materials. Decomposers are bacteria and fungi.
Hammer your soil sampler into the ground and, twisting it, withdraw a sample. Have classmates try different areas. Try to remove your sample in one piece and place it on a piece of paper. Compare the samples with those in your class. Look at color, texture, whether it is coarse or grainy, sand content, clay content, and whether it is sticky.
Review by means of questions and answers. Gather your supplies and return to the class room and have everyone wash hands before dismissal.
AC induction motors, also known as asynchronous motors, use a rotating magnetic field to produce torque. Three-phase motors are widely used because they are reliable and economical. The rotating magnetic field is easily achieved in three-phase asynchronous motors because the phase angle offset between the individual phases is 120 degrees. However, single-phase AC motors require external circuitry which creates the phase angle offset in order to produce a rotating magnetic field. This circuitry can be realized using advanced power electronics, or more simply using a motor capacitor.
The video below shows an easy to understand explanation of the working principle of the AC induction motor.
AC single-phase induction motors
Single-coil AC induction motors
AC induction motors usually use two or more coils to generate a rotating magnetic field, which produces torque on the rotor. When a single coil is used, it will generate a pulsating magnetic field, which is enough to sustain rotation, but not sufficient to start the motor from a standstill. Motors with a single coil have to be started by using an external force, and can rotate in either direction. The direction of the rotation depends on the external force. If the motor was started in a clockwise direction, it will continue to rotate and build up speed in the clockwise direction, until it reaches a maximum speed which is defined by the power source frequency. Similarly, it will continue rotating counter-clockwise if the initial rotation was counter-clockwise. These motors are not practical due to their inability to reliably start rotation on their own.
Start capacitor AC induction motors
One way to improve on the single coil design is by using an auxiliary coil in series with a motor starting capacitor. The auxiliary coil, also called the starting coil, is used to create an initial rotating magnetic field. In order to create a rotating magnetic field, the current flowing through the main winding must be out of phase with respect to the current flowing through the auxiliary winding. The role of the starting capacitor is to shift the phase of the current in the auxiliary winding, bringing these two currents out of phase. When the rotor reaches sufficient speed, the auxiliary coil is disconnected from the circuit by means of a centrifugal switch, and the motor remains powered by a single coil creating a pulsating magnetic field. In this sense, the auxiliary coil in this design can be regarded as a starting coil, since it is only used during motor startup.
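To see why the series capacitor produces the required phase split, consider the illustrative phasor calculation below. All winding and capacitor values are made-up example numbers rather than data for any particular motor: without the capacitor both winding currents lag the supply voltage by similar angles, while the capacitor pulls the auxiliary current toward a leading angle, so the two currents, and therefore the two magnetic fields, end up widely separated in phase.

import cmath
import math

f = 50.0                        # assumed supply frequency, Hz
w = 2.0 * math.pi * f
V = 230.0                       # supply voltage phasor (reference, 0 degrees)

R_main, L_main = 4.0, 0.10      # assumed main-winding resistance (ohm), inductance (H)
R_aux, L_aux = 8.0, 0.08        # assumed auxiliary-winding values
C_start = 100e-6                # assumed start capacitor, farads

Z_main = complex(R_main, w * L_main)
Z_aux = complex(R_aux, w * L_aux)
Z_aux_cap = Z_aux + 1.0 / complex(0.0, w * C_start)   # capacitor in series

for label, Z in (("main winding", Z_main),
                 ("aux, no capacitor", Z_aux),
                 ("aux + start capacitor", Z_aux_cap)):
    I = V / Z
    print(f"{label:22s} |I| = {abs(I):5.1f} A, phase = {math.degrees(cmath.phase(I)):6.1f} deg")
# With these example numbers the main current lags the voltage by roughly 80 degrees
# and the capacitor-fed auxiliary current leads by roughly 40 degrees, a split large
# enough to produce a rotating rather than a merely pulsating field.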
Start/run capacitor AC induction motors
Another way to further improve on the single-coil single-phase induction motor design is to introduce an auxiliary coil, which remains powered not only during the motor startup phase, but also during normal operation. As opposed to an AC motor using only a motor start capacitor, which creates a pulsating magnetic field during normal operation, AC motors using a motor start capacitor and a motor run capacitor create a rotating magnetic field during normal operation. The function of the motor start capacitor remains the same as in the previous case – it gets disconnected from the circuit after the rotor reaches a predetermined speed by means of a centrifugal switch. After that point, the auxiliary winding remains powered through a motor run capacitor. The figure below describes this type of design.
Motor start and motor run capacitors
Motor start capacitors are used during the motor startup phase and are disconnected from the circuit once the rotor reaches a predetermined speed, which is usually about 75% of the maximum speed for that motor type. These capacitors usually have capacitance values of over 70 µF. They come in various voltage ratings, depending on the application they were intended for.
Some single phase AC motor designs use motor run capacitors, which are left connected to the auxiliary coil even after the start capacitor is disconnected by the centrifugal switch. These designs operate by creating a rotating magnetic field. Motor run capacitors are designed for continuous duty, and remain powered whenever the motor is powered, which is why electrolytic capacitors are avoided, and low-loss polymer capacitors are used instead. The capacitance value of run capacitors is usually lower than the capacitance of start capacitors, and is often in the range of 1.5 µF to 100 µF. Choosing a wrong capacitance value for a motor can result in an uneven magnetic field, which can be observed as uneven motor rotation speed, especially under load. This can cause additional noise from the motor, performance drops and increased energy consumption, as well as additional heating, which can cause the motor to overheat.
Motor start and run capacitors are used in single-phase AC induction motors. Such motors are used whenever a single-phase power supply is more practical than a three-phase power supply, such as in domestic appliances. They are not as efficient as three-phase AC induction motors, however. In fact, single-phase AC motors are 2 to 4 times less efficient than three-phase AC motors, which is why they are used only for less powerful motors. Typical applications which utilize start and run motor capacitors include power tools, washing machines, tumble dryers, dishwashers, vacuum cleaners, air conditioners and compressors. |
Proteins are a very important component of the cell; in fact, they make up about 50% of the cell.
*Proteins are polymers made up of amino acid monomers
*The most important type of proteins are enzymes; enzymes regulate metabolism by acting as catalysts(chemical agents that selectively speed up chemical reactions in the cell)
*Proteins consist of one or more polypeptides folded and coiled into specific conformations
Amino Acids are organic molecules that contain a carboxyl group and an amino group, as well as an R group (Variable Group), that gives each amino acid its identity and property. There are 20 amino acids that make up protein molecules. You should be able to recognize from their names that they are amino acids, because most amino acids end in –ine. Ex: Glycine, Glutamine
*The physical and chemical properties of the side chains determine the characteristics of the Amino Acid
*Acidic Amino Acids have carboxyl groups in their side chains that are negative in charge
*Basic Amino Acids have amino groups in their side chains that are positive in charge
The chain with the amino end is called the N- Terminus, and the carboxyl end is called the C-terminus.
*In proteins, amino acids are joined by peptide bonds in a dehydration synthesis reaction. The function of proteins depends on how many amino acids and what type of amino acids are joined together.
Protein Conformation and Function
*There are four levels of protein structure. We can recognize three superimposed levels of structure, known as primary, secondary, and tertiary structure. The fourth
level, quaternary structure, arises when a protein consists of 2 or more polypeptide chains
The Primary Structure:
·This structure is the protein’s unique sequence of amino acids.
The Secondary Structure:
* Refers to one of the two three-dimensional shapes that the protein can have due to its hydrogen bonding. One shape is a coiled shape called an alpha helix, and the second shape is an accordion shape called a beta pleated sheet.
~ Alpha Helix Ex: Hair
* These coils and folds are the result of hydrogen bonds between the repeating constituents of the polypeptide backbone.
*Both the oxygen and the nitrogen atoms of the backbone are electronegative, with partial negative charges
The Tertiary Structure:
· Refers to interactions between side chains of the protein. These interactions involve hydrophobic interactions, Van der Waals forces, and disulfide bridges.
- As a polypeptide folds into its functional conformation, amino acids with hydrophobic (nonpolar) side chains usually end up in clusters at the core of the protein, out of contact with water.
·Van der Waals forces, disulfide bridges and hydrophobic interactions can all occur in one protein
The Quaternary Structure
· Refers to the association of 2 or more polypeptide chains into one giant macromolecule, or functional protein.
·Ex: Collagen, a fibrous protein that has helical subunits intertwined into a larger triple helix, giving the long fibers great strength.
· Ex: Hemoglobin consists of 4 polypeptide subunits: two alpha chains and two beta chains
· When a protein is denatured, upon heating or the introduction of a pH change or other disturbance, it becomes inactive. Denaturation causes the protein to lose its shape, or conformation.
· Most proteins become denatured if they are transferred from an aqueous environment to an organic solvent, such as ether or chloroform
· Other denaturation agents include chemicals that disrupt the hydrogen bonds, ionic bonds, and disulfide bridges that maintain a protein’s shape.
·When a protein in a test-tube solution has been denatured by heat or chemicals, it will often return to its functional shape when the denaturing agent is removed. However, in the crowded environment inside a cell, correct folding may be more of a problem than it is in a test tube.
Nucleic Acids-Informational Polymers
· The last group of important biological molecules we’ll discuss is the nucleic acids. The two Nucleic Acids are DNA (deoxyribonucleic acid) and RNA (ribonucleic acid).
· Each gene along the length of a DNA molecule directs the synthesis of a specific protein.
· There are two types of nitrogenous bases, purines and pyrimidines. The purines are adenine, abbreviated A, and Guanine (G), and the pyrimidines are cytosine (C), thymine (T) and uracil (U).
· Purines are larger, with a six-membered ring fused to a five-membered ring.
· Thymine is found only in DNA, while uracil is found only in RNA.
Multiple Choice Questions:
1. All of the following are found in the structure of an Amino Acid, except:
A) Amino Group
B) Carboxyl Group
C) Phosphate Group
D) Hydrogen Atom
E) Variable Group, R
2. Which of the following correctly describes the primary structure of a protein?
A) There is a random linking of amino acids
B) The pattern of amino acids is the same for every protein
C) There are always 15 amino acids in a protein
D) The precise primary structure is determined by inherited genetic information
3. All of the following can lead to the denaturing of a protein, except
A) Extreme heat
B) Sudden change in environment
C) Certain Chemicals
D) Interaction with other proteins |
Teach short story writing
Find and save ideas about story elements activities on pinterest teaching story elements with a pixar short to draw or write the story out teach your. Teaching a short story can be easy if you give your students the basic how to teach short stories if they want to write a good short story. Lesson plan #942 write a short story in one class. Monthly lesson plan - april 2014: planning and april 2014: planning and writing a story 3 to develop writing skills in the context of writing a short story. Creative writing lesson plans this indicates resources located on the teacher's corner start a story grades various help students with creative writing.
Here you can find worksheets and activities for teaching story writing to kids, teenagers or adults, at beginner, intermediate or advanced levels. Teaching the short story to improve L2 reading and writing skills: the short story is a compact literary genre in which much is left unsaid in order for the reader to.
Writing short stories lesson plans and worksheets from thousands of teacher-reviewed resources to help you inspire students learning. Looking for some lesson plans on writing short stories lesson plans provide the necessary structure needed by both the teacher and the students in order to better. Printable resources and ideas to support your children when writing fiction a range of writing and story story machine' teaching ideas and.
Creative writing in the it's about unpacking the emotions and finding ways to let the reader see the story for themselves when teaching all adverbs must. This lesson plan will get your students fluent with the five elements of a short story five elements of a story with our writing academic rhymes lesson. Ready to get writing here are seven steps on how to write a short story from start to finish. These short stories will help you teach the elements of literature while cov creative writing lesson plan: using using short stories to teach elements of.
Teach short story writing
Get an answer for 'teaching short story writingi am teaching creative writing for the first time, and we have finally reached the short story do you have any. Teaching the short story provides participants with a detailed approach to teaching students to write short stories. This guided writing lesson on esl story writing is intended to help bridge the gap from simply helping students write a creative story writing a short story.
Short stories make versatile and useful subjects for teaching students many aspects of literature read an idea of teaching a short story unit help with writing. 6 ways to teach writing creatively teach your students the fun aspects of writing students of all ages write short stories and papers, from younger elementary-school. Gotham Writers Workshop is a creative home we teach the craft of writing in a the story goes that Ernest Hemingway won a bet by writing a short story that. Students write stories to go along with these fun cartoon pictures free, printable worksheets include a picture page and lined paper for writing. Pardede, using short stories to teach language skills 15 introduction in the nineteenth century, the grammar translation method (GTM) predominated ESL.
Creative writing print use this lesson to assign a short story writing activity as well as to illustrate the critical steps of short story composition. Use a teaching guide that helps students analyze the elements of short stories, their responses to the selection, and the craft of the genre. Whether you are a teacher or a parent, teaching children to write a story is one of the most important tools you can give them once your child is. Students then read short stories as a whole class, in small groups emphasizing the connection between reading and writing, this lesson combines collaborative. |
Shakespeare: A Study Guide summarizes and analyzes the plays and poems of William Shakespeare. It lists themes, describes characters, identifies figures of speech and allusions, discusses writing techniques, and provides a wealth of other background information. What caused the feud between the families of Romeo and Juliet? Why didn’t Hamlet succeed to the throne of Denmark after the murder of his father? What do the witches in Macbeth mean when they say “fair is foul and foul is fair”? This guide answers all those questions. It also discusses in detail the format and meaning of Shakespeare’s sonnets and other poems.
After teaching English in public schools, Michael J. Cummings entered journalism in 1968, serving as a reporter, news editor, editorial writer, and managing editor of a national publication. In 1984, he became a freelance writer, publishing several thousand articles over the next two decades. While freelancing, Cummings also began teaching college English part time. Over the years, he has maintained a passionate interest in Shakespeare, reading and analyzing his complete works. To share his knowledge, he developed a free web site, Shakespeare, as a study guide. Its success led to publication of this book as a guide for students.
As an intense dust storm rages on Mars, many are wondering — how bad can a Martian storm really be?
Tuesday (June 12), NASA's Opportunity rover stopped communications amid a severe dust storm on the Red Planet. But while the storm hasn't killed the rover yet — Opportunity could still revive once the skies clear — how dangerous can storms on Mars get?
For fans of "The Martian" novel by Andy Weir, or the film based on that book, the answer may be a disappointment. Storms on Mars aren't quite as dramatic as the book or the film adaptation portray them to be. While Martian winds at the planet's surface can reach up to about 60 mph (about 97 km/h), this is less than half the speed of some hurricane winds here on Earth and probably not strong enough to rip apart or tip any major equipment, NASA officials said in a statement. [Mars Dust Storm 2018: How It Grew & What It Means for Opportunity]
However, even when winds on the Red Planet reach their highest speeds, wind on Mars isn't quite as powerful as it is on Earth. "Mars' atmospheric pressure is a lot less [than Earth's]. So, things get blown [around], but it's not with the same intensity," William Farrell, a scientist at NASA's Goddard Space Flight Center in Maryland, said in the statement.
So, "The Martian" film accurately shows Mark Watney sweeping dust off of his solar panels every day, because Martian dust particles accumulate and stick easily because they're slightly electrostatic. But dust storms on Mars aren't as powerful as they might seem based on the movie.
Still, Martian storms still could pose risks to humans. In a special NASA teleconference on June 13, researchers said that, while Mars' atmosphere is thin, there is still dust being raised. This could hypothetically complicate regular functioning and visibility for future crewed missions. Additionally, dust storms create "sort of a greenhouse effect in which the radiation that otherwise would be lost to space is trapped," heating up the planet, Rich Zurek, Mars Program Office chief scientist at NASA's Jet Propulsion Laboratory, said in the conference. Humans on the Martian surface will already have to contend with radiation, and this effect will only increase the risk.
Plus, Martian storms can grow to epic scale: The researchers said the current storm is expanding and could potentially stretch across the entire planet, which humans have seen happen on Mars before.
So, a Martian dust storm likely won't strand any future space colonists or rip any antennas off of equipment, like what happened in "The Martian." Instead, the dangers to humans would more likely range from radiation to dust accumulation (because of the static electricity), or possibly less-dramatic risks associated with winds, researchers said in the conference. Solar-powered tech will also continue to struggle against the dust that sticks to solar panels on rovers like Opportunity.
NASA is already seriously considering these potential threats to future space explorers. "We really need to understand these storms to the degree that we can have some level of forecasting ability," Jim Watzin, director of the Mars Exploration Program at NASA Headquarters in Washington, D.C., said in the conference.
So, it turns out that dust storms on Mars aren't as cinematically dramatic as fans of "The Martian" may have thought. Still, NASA is working to protect future crewed missions from the dangers that may arise from Martian weather. |
In multi-electron systems these are split into different energy levels; in that case, subshells with lower angular momentum values are usually lower in energy (i.e., S becomes lower than P).
The energy levels of multi-electron atoms are mostly found by spectroscopy. Understanding why the levels are arranged that way can be solved by computers using wave equations, but "a computer confirmed it" isn't a fun answer. A less reliable answer, but one more relatable to humans, can be found by considering orbital penetration.
Consider the radial distribution functions shown below; these show, for each orbital, how the probability of finding an electron varies with distance from the nucleus.
You can view more at the orbitron, but you should be able to see a pattern in these. Going up in principal quantum number, the first S has one hump, the second S has two humps, and so on. The other subshells follow the same pattern.
For an atom containing 2S and 2P electrons, such as carbon, we know from A-level chemistry that the S orbitals are filled first. The reason is electron shielding: electrons close to the nucleus reduce the attraction felt by electrons farther out, since negative charges repel. 2S electrons experience relatively less shielding than 2P electrons because they have an extra hump close to the nucleus, and you can see it by superimposing their radial distribution functions:
Figure: superimposition of the 2S and 2P radial distribution functions
The penetration of the 2S subshell allows electrons in it to experience more nuclear charge, which is enough to dip the orbital lower in energy level. I'm aware that it isn't entirely obvious from the above graph, but this answer is enough for many undergrad exams and textbooks.
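For readers who want to check the penetration argument numerically, the Python sketch below evaluates the hydrogen-like 2S and 2P radial distribution functions (in units of the Bohr radius) and compares how much probability each places close to the nucleus. It is only a single-electron illustration of the screening argument, not a multi-electron calculation.

import numpy as np

r = np.linspace(1e-6, 20.0, 4000)          # radius in Bohr radii (a0 = 1)
dr = r[1] - r[0]

R_2s = (1.0 / np.sqrt(8.0)) * (2.0 - r) * np.exp(-r / 2.0)   # hydrogen 2s radial part
R_2p = (1.0 / np.sqrt(24.0)) * r * np.exp(-r / 2.0)          # hydrogen 2p radial part

P_2s = r**2 * R_2s**2                      # radial distribution functions
P_2p = r**2 * R_2p**2

for cutoff in (1.0, 2.0):                  # probability of being found inside r < cutoff
    inner = r < cutoff
    p_s = P_2s[inner].sum() * dr
    p_p = P_2p[inner].sum() * dr
    print(f"r < {cutoff:.0f} a0:  2s {p_s:.3f}   2p {p_p:.3f}")
# The 2s electron puts roughly ten times more probability inside one Bohr radius
# than the 2p electron, i.e. it penetrates the inner region and is screened less.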
Note: The S subshell has the unique property of having a non-zero chance of being found on the nucleus itself. This means you should draw the functions touching the Y-axis at just above 0, with other orbitals hitting it right on 0. It is common for some exam marks to based on this. |
Since ancient times, fish have been held in a wide variety of man-made structures. These structures were built using simple methods and readily available materials. The fish or other aquatic crop were cared for by the fish farmer and relied upon as an important source of protein for their families. The typical fish farm was developed by forming small ponds by hand, or an even simpler method of trapping tidal water flow in estuaries by building simple water retaining structures.
In less developed parts of the world today, the basic earthen pond design system is still the most important and affordable type of design. Not surprisingly, there have been considerable technical advances over the last few decades that have transformed the aquaculture industry, yet the basic earthen pond system remains mostly unchanged and still highly relevant in less developed countries.
The size of earthen ponds built today can vary anywhere from 20 square meters to 20 hectares (about 49 acres) or more. Pond size is determined by the type of species cultured, the intensity of the system, size and maturity of the species being farmed, access to capital, land availability, water availability, the harvesting method, and even the marketing and sales goals of the project.
The species being farmed and the size of animal as it grows through various stages of development plays a big role in pond size and farm design. For example, a commercially oriented tilapia farming operation typically utilizes 0.1 or 0.2 hectare ponds for nursery phases and 0.3 to 0.5 hectare ponds for growout. Semi-intensive shrimp farms generally use 7 to 20 hectare ponds, while more intensive shrimp farms generally use ponds less than 7 hectares and quite often less than 1-2 hectares in size. Most ponds are rectangular in shape, but there are also square, circular, and irregularly shaped ponds in existence. Most farms build the ponds to maintain a minimum water depth of at least 1 meter with levels of around 1.5 meters considered ideal. Ponds are also used for many different purposes: spawning, broodstock conditioning, nursery, growout, or finishing. Quite often, the expected use of the pond dictates the design.
Once a high potential site has been thoroughly analyzed and found suitable for fish farming or shrimp farming, it must be surveyed. Based on this detailed survey and the targeted production strategy, farm design plans are then drawn up by an experienced aquaculture engineer and the project manager. The possible ways to design a farm are endless, but certain designs are definitely more efficient and effective than others. Farm design is always an exciting period, but it takes a skilled and experienced aquaculture engineer to put together the best shrimp or fish farm design for a given site. It may look easy, but it is really a very involved and complicated process.
Once detailed aquaculture engineering designs and drawing are in place, the actual construction of the farm can begin. The slope of a pond is always less than 1% and usually closer to 0.1%, particularly when the pond is large. Cut and fill volumes are determined from the topographic survey of the area and equipment operators are guided by elevation stakes set on-site. Earth movement is ideally accomplished using tractor drawn scrapers, but the use of bulldozers is the more common method. Tractor drawn scrapers that are guided by lasers can give a perfect slope and high rate of compaction, resulting in a perfectly constructed pond under ideal soil conditions. This kind of equipment is expensive and not always available, but it is our strong recommendation to use this method of shrimp and fish pond construction whenever possible, especially on larger projects where the purchase of new or used earth movement machinery is warranted.
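A quick back-of-the-envelope calculation shows the kind of numbers such a survey and design have to accommodate. The figures below are purely illustrative (an assumed 1-hectare rectangular grow-out pond), not design values for any particular site.

pond_length_m = 200.0          # assumed long side of a rectangular grow-out pond
pond_width_m = 50.0            # 200 m x 50 m = 1 hectare
bottom_slope = 0.001           # 0.1% slope toward the drain, as described above
water_depth_m = 1.2            # within the 1.0-1.5 m range mentioned earlier

elevation_drop_m = pond_length_m * bottom_slope
water_volume_m3 = pond_length_m * pond_width_m * water_depth_m

print(f"Fall along the pond bottom: {elevation_drop_m:.2f} m")
print(f"Water volume at {water_depth_m} m average depth: {water_volume_m3:,.0f} m^3")
# About 0.20 m of fall across a 200 m pond and about 12,000 m^3 of water per
# hectare: numbers like these feed the cut/fill and gravity-drainage planning.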
Construction of the dikes is one of the most important tasks in the fish or shrimp farm construction process. Slopes are normally set at 2:1, but a wide range of slope ratios have been used over the years. Proper soil compaction and sufficient clay content are very important to maintaining the slope of the dike during the actual operations in the years ahead. The top of the dike is usually made wide enough to facilitate truck movement. Wherever possible, dikes are shared between ponds to reduce earth movement costs. In areas of low elevation that are at risk of flooding or storm surge, a high elevation farm perimeter levee should be specified.
After the pond bottom and dikes are completed, the inlet and outlet structures are constructed. These water movement structures are sometimes referred to as monks. The monks are used to control the amount of water coming in and going out of the pond. In modern aquaculture engineering, monk inlet and outlet structures are usually constructed of formed and poured concrete. There are many different styles and installation points for monks. Our preference is to locate outlet monks in the center of the short side and build them directly into the dike for easy access. Reinforcing steel is used to frame the structural dimensions of the monk. The floor is poured and leveled to the correct slope. Boards are used to shape and level the concrete during the pour. Grooves created in the floor and sides of the monk and the grooves will be fitted later with boards that will be used to control the flow of water in and out of the pond. Some of the vertical space is also fitted with filters to prevent the entry of predators and the escape of the cultured crop. Concrete pipe of the proper diameter is laid on the floor. The entire monk box is framed around this pipe. The size of these structures varies with the size of the pond and with the amount of water that needs to be moved in or out of the pond. In some cases, these structures are used to concentrate the animals for harvest. This is especially true in shrimp farming. A recessed area in the monk box is constructed to give a point where mechanized harvests can take place.
Obtaining the proper grade and slope of the pond bottom is extremely important. If done properly, the ponds will drain completely and can be harvested quickly and efficiently, and the pond bottoms can be prepared in between crops with much greater ease and efficiency. In areas that have changes in elevation, some pond bottoms may require considerable cutting while others may require very little bottom cut. The overall goal is to develop an entire farming system that is capable of filling and draining using the natural gravitational forces of moving water.
Before we take a look at the causes of elevated bilirubin in adults, it is necessary to understand what exactly bilirubin is. Bilirubin is a yellow colored pigment that the liver produces when red blood cells are broken down and recycled. It is a byproduct that occurs after the breakdown of hemoglobin. The red blood cells in the body are constantly building and breaking down and as a result many by products are released as waste.
Bilirubin is one such byproduct. The liver processes bilirubin into bile, which is released from the body through the digestive system. The levels of bilirubin in the body are normally low, so what causes elevated bilirubin in adults? There are several possible reasons, described below. Keep in mind that any disorder that destroys a large number of liver cells or disrupts their normal functioning can cause elevated bilirubin levels.
Some of the causes of elevated bilirubin in adults are as follows:-
- Tumors affecting the gall bladder, liver or bile ducts could be responsible for elevated levels.
- An allergic reaction to the blood received during a transfusion can also cause the levels of bilirubin to rise in adults.
- Cirrhosis of the liver is another reason for elevated bilirubin levels.
- Acute hepatitis caused by Hepatitis A and Hepatitis B is another reason.
- Hemolysis - red blood cell destruction.
- Liver failure or any liver disease that worsens over a period of time.
- Choledocholithiasis or presence of gall stones in the bile duct
- A very large obstruction in the bile duct.
- Blood related disorders like sickle cell anemia can cause rapid destruction of the red blood cells and thus cause elevated levels.
- Chronic liver disease such as chronic hepatitis C, hemochromatosis, nonalcoholic steatohepatitis, autoimmune hepatitis, alcoholic hepatitis also cause elevated bilirubin levels.
- Dubin-Johnson syndrome, Gilbert's syndrome, Rotor's syndrome and other inherited disorders of bilirubin metabolism cause the levels to rise as well.
- Crigler Najjar syndrome or another rarely found disorder which affects the metabolism of bilirubin.
- Antibiotics and some medicines like phenytoin, indomethacin, flurazepam, diazepam and some kinds of birth control pills can also give rise to elevated levels.
- Pancreatic cancer.
Each and every one of us is made up of thousands of different ingredients, which all combine together to create something amazing; life. Perhaps the most important of these are proteins.
Each protein in the body has its own special job to do. From making our muscles contract to controlling blood sugar, proteins are an essential ingredient in life.
In MND research we have identified a number of MND causing genes. These are genes that are found to be mutated in some people living with MND, which somehow causes the motor neurones to die. But, how does this happen? How does a gene form a protein? This blog post explains how an MND causing gene becomes a protein.
As simple as baking a cake
Here at the MND Association we love our cake. So, I thought what better way is there to describe how we make proteins?
Every cell in our body contains 23 pairs of chromosomes (46 in total), except for the egg and sperm cells that contain 23 chromosomes each.
Like a recipe book, these chromosomes hold all of our genetic material in the form of genes, and everyone inherits two copies of each gene (one from each parent).
Humans have approximately 24,000 genes, which each consist of their own DNA recipe to make a protein. Like cakes, proteins come in a range of different shapes and sizes, that come together to create you and me.
These DNA recipes are read by different cells to create the right protein for the job. For example, you would only make a wedding cake for a wedding, and the type of cake (chocolate or fruit) would depend on the wedding couple.
This is what happens in nerve cells (or motor neurones). A motor neurone will make a specific type of protein to help it grow, or to help it survive in low oxygen levels.
Following the recipe
A nerve cell creates a protein by finding the exact DNA recipe amongst the genes within the cell’s control centre, known as the nucleus. Once the recipe has been found the cell has a problem… The nucleus does not have the right tools to make a protein! The cell instead needs a specialised machine, or food mixer, which is only found outside of the nucleus called a ribosome.
In order to make the protein the DNA recipe needs to travel from the nucleus to the ribosome and this is done by means of a messenger. The DNA recipe can’t leave the nucleus so the cell ‘copies’ it into a messenger version, called mRNA.
The cell does this by leaving out certain parts of the recipe that do not affect the finished protein, known as introns or ‘non-coding DNA’. This is known as ‘RNA splicing’ and is like removing raisins from a fruit cake. The cake is still made and still contains fruit, but the raisins are not essential to the finished cake.
The mRNA can then carry the recipe safely from the nucleus to the ribosome, where it is finally made into a protein. Once made, this protein can then go on to do its specific job (or, in cake terms, be a wedding cake!).
Changing and ruining the recipe
Sometimes the DNA recipe in our genes can change through means of a mutation. Most of these are harmless spelling mistakes (sugarr instead of sugar) that do not affect the finished protein. However, sometimes these mutations can be so big and harmful (salt instead of sugar) that they do.
These kind of mutations are so big that the size, shape and structure of the protein can be changed – meaning that the protein can no longer do the job it was designed to do (our wedding cake is now no longer sweet and tasty, but ruined and salty!)
This is what happens in some of the MND causing genes. A big mutation occurs in the DNA recipe in a specific gene that causes the structure and shape of that protein to change. This change can then cause the proteins to ‘clump’ together in the motor neurones as they can no longer do the job they were designed to do.
An understanding of genes and how proteins are connected is essential for understanding how they can go wrong in MND. The Association funds a number of exciting research projects investigating the MND causing genes, along with the proteins they form.
To help raise awareness of MND you can bake your own cake as part of our ‘Bake it!’ fundraising campaign. For more information and to request a fundraising pack please see our website. |
When tracing an ancestry it is common to encounter records filled with obsolete, archaic, or legal terms that can be difficult to interpret. Misinterpreting these terms can make the difference between linking persons to the right generation, parents, spouse or children. Understanding exactly what is stated in any record is vital before attempting to move to the next generation. Inexperienced or impatient genealogists undervalue the quality of their research by applying present-day definitions to documents created in an earlier century. Take the time to use the glossaries provided here and other excellent dictionaries, genealogical reference books and encyclopedias to interpret documents correctly.
Includes the following:
- Abbreviations: These are those most commonly used in genealogical records. It is not unusual to find, within the pages of one record, different variations used, but care should be taken to ensure that in these instances, it is a variation and not meant to indicate something else.
- Censuses: This describes what is listed on the census forms in each of the census years. Few, if any, records reveal as many details about individuals and families as do government census records. Substitute records can be used when the official census is unavailable.
- Illnesses: This describes the various old time Illnesses and Diseases that you will find in old documents, medical records or listed as causes of death on old death certificates or in old family Bibles.
- Occupations: The following list describes various old occupations, many of which are now archaic. They are useful to genealogists, since surnames often originated from someone's occupation. Ships' passenger lists, census returns and other documents used in genealogy may give an ancestor's occupation, and this list gives more modern interpretations of those terms. They are also useful to historians in general. The list is by no means complete.
- Terms: This page defines the genealogical terms you will find in the documents used in genealogical research.
- Nickname Meanings
- Worldwide Epidemics
- Tombstone Symbols
This is in addition to another Encyclopedia of Genealogy (eogen) by Dick Eastman |
In Washington, D.C., humanitarians Clara Barton and Adolphus Solomons found the American National Red Cross, an organization established to provide humanitarian aid to victims of wars and natural disasters in congruence with the International Red Cross.
Barton, born in Massachusetts in 1821, worked with the sick and wounded during the American Civil War and became known as the "Angel of the Battlefield" for her tireless dedication. In 1865, President Abraham Lincoln commissioned her to search for lost prisoners of war, and with the extensive records she had compiled during the war she succeeded in identifying thousands of the Union dead at the Andersonville prisoner-of-war camp.
She was in Europe in 1870 when the Franco-Prussian War broke out, and she went behind the German lines to work for the International Red Cross. In 1873, she returned to the United States, and four years later she organized an American branch of the International Red Cross. The American Red Cross received its first U.S. federal charter in 1900. Barton headed the organization into her 80s and died in 1912. |
As we learnt in the previous post, simple covalent molecules with higher relative molecular masses (Mr) should have higher m.p./b.p. as compared to those with lower Mr. However, this is not the case for all simple covalent molecules.
When two atoms of different elements are joined together in covalent bonding, the sharing of electrons is not always equal, creating permanent partial charges on each atom. (Fancy A level term = permanent dipole). These types of molecules are known as polar molecules.
Why are the electrons not shared equally?
The atom that exhibits a higher affinity for electrons would “pull” the shared pair of electrons closer to itself, creating a partial negative charge on itself, leaving the other atom with a positive charge.
The relative affinity for electrons in turn depends on the electronegativity of the atom / effective nuclear charge (ENC) of the atom.
Effective Nuclear Charge ≈ No. of protons – No. of inner shell shielding electrons
This is an intuitive formula. The protons in the atom are trying to attract electrons (unlike charges attract) while the inner shell electrons are trying to repel away electrons that are being added to the valence shell (like charges repel). Thus, ENC measures the net attraction that an atom has on the valence electrons.
Using this concept, we calculate the ENC of Hydrogen and Fluorine. As a reminder, Hydrogen has 1 proton (electronic arrangement: 1) while Fluorine has 9 protons (electronic arrangement: 2,7).
ENC of H = 1 – 0 = 1
ENC of F = 9 – 2 = 7
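The same arithmetic can be written as a tiny helper function. This is only a sketch of the simplified formula above; the electron arrangements used are the H and F examples from the text.

```python
# Minimal sketch of the simplified ENC formula: protons minus inner-shell (shielding) electrons.
def effective_nuclear_charge(protons, shells):
    """shells lists electrons per shell, innermost first; the last entry is the valence shell."""
    inner_shell_electrons = sum(shells[:-1])
    return protons - inner_shell_electrons

print(effective_nuclear_charge(1, [1]))     # Hydrogen: 1 - 0 = 1
print(effective_nuclear_charge(9, [2, 7]))  # Fluorine: 9 - 2 = 7
```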
Since Fluorine has a higher ENC than Hydrogen, the shared pair of electrons is pulled closer to the Fluorine atom, leaving the Fluorine atom with a permanent partial negative charge (δ−) and the Hydrogen atom with a permanent partial positive charge (δ+).
On the other hand, the electrons in a fluorine molecule (F2) are shared equally between the two atoms because the ENC of both atoms is the same, so a fluorine molecule has no permanent partial positive or negative charges. The intermolecular bonds in fluorine are therefore the result of instantaneous dipole-induced dipole interactions (id-id).
How does all of this explain the difference in b.p./m.p.?
The intermolecular bonds between HF molecules are the result of the electrostatic forces of attraction between the permanent partial positive portion of one molecule and the permanent partial negative portion of another molecule (fancy A level term: permanent dipole-permanent dipole interactions (pd-pd)).
This is stronger than the id-id interactions that exist between the F2 molecules, since in HF, the dipoles are permanent as opposed to fleeting. Since the intermolecular bonds are stronger in HF, they require more energy to break and HF will have a higher m.p./b.p. than F2. |
Because you must remember to read the volume with your eye at the same level as the bottom of the meniscus. Thanks.
A graduated cylinder, measuring cylinder or mixing cylinder is a piece of laboratory equipment used to measure the volume of a liquid. Graduated cylinders are generally more accurate and precise than laboratory flasks and beakers. However, they are less accurate and precise than volumetric glassware, such as a volumetric flask or volumetric pipette. For these reasons, graduated cylinders should not be used to perform volumetric analysis. Graduated cylinders are sometimes used to indirectly measure the volume of a solid by measuring the displacement of a liquid.
Often, the largest graduated cylinders are made of polypropylene for its excellent chemical resistance or polymethylpentene for its transparency, making them lighter and less fragile than glass. Polypropylene (PP) is easy to repeatedly autoclave; however, autoclaving in excess of about 130 °C (266 °F) (depending on the chemical formulation: typical commercial grade polypropylene melts in excess of 160 °C (320 °F)) can warp or damage polypropylene graduated cylinders, affecting accuracy.
Fluid mechanics is the branch of physics that studies fluids (liquids, gases, and plasmas) and the forces on them. Fluid mechanics can be divided into fluid statics, the study of fluids at rest; fluid kinematics, the study of fluids in motion; and fluid dynamics, the study of the effect of forces on fluid motion. It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms, that is, it models matter from a macroscopic viewpoint rather than from a microscopic viewpoint. Fluid mechanics, especially fluid dynamics, is an active field of research with many unsolved or partly solved problems. Fluid mechanics can be mathematically complex, and can best be solved by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach to solving fluid mechanics problems. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow.
Laboratory glassware refers to a variety of equipment, traditionally made of glass, used for scientific experiments and other work in science, especially in chemistry and biology laboratories.
In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer.
Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an interview with a survivor of a natural disaster, a random act of kindness or profile of someone known for a career achievement. |
A labeled monocot stem is a diagram that features the cross section of a monocot plant stem. In this diagram, the parts of the monocot stem are labeled and usually consist of the vascular bundle, the parenchyma, the cortex, the epidermis, the xylem, and the phloem.
Monocot stems differ from those of dicots in regards to the organization of the vascular bundles within the plant stem. In monocots, the vascular bundles are situated sporadically inside the stem. However in dicots, the vascular bundles are organized in more of a donut pattern.
In addition to the differences in stems, monocot and dicot plants also have comparable characteristics in their root structures, leaves, and seeds. Monocot roots tend to spread out in many different directions in order to populate the surface soil. However, dicot roots tend to grow downward and are normally situated around a centralized taproot, from which other smaller roots branch off.
There is also a difference between the two in terms of the arrangement of the veins within the leaves of the respective plants. In monocot leaves, the veins are aligned parallel to one another, whereas in dicot leaves the veins are situated in a more branching pattern.
In regards to the differences in seeds between the two, monocot seeds have a single cotyledon, compared to dicot seeds, which have two.
Using Transponders on the Moon to Increase Accuracy of GPS
- Thursday, 27 March 2008
Ranging to the Moon would be unaffected by the terrestrial atmosphere.
It has been proposed to place laser or radio transponders at suitably chosen locations on the Moon to increase the accuracy achievable using the Global Positioning System (GPS) or other satellite-based positioning system. The accuracy of GPS position measurements depends on the accuracy of determination of the ephemerides of the GPS satellites. These ephemerides are determined by means of ranging to and from Earth-based stations and consistency checks among the satellites. Unfortunately, ranging to and from Earth is subject to errors caused by atmospheric effects, notably including unpredictable variations in refraction.
The proposal is based on exploitation of the fact that ranging between a GPS satellite and another object outside the atmosphere is not subject to error-inducing atmospheric effects. The Moon is such an object and is a convenient place for a ranging station. The ephemeris of the Moon is well known and, unlike a GPS satellite, the Moon is massive enough that its orbit is not measurably affected by the solar wind and solar radiation.
According to the proposal, each GPS satellite would repeatedly send a short laser or radio pulse toward the Moon and the transponder(s) would respond by sending back a pulse and delay information. The GPS satellite could then compute its distance from the known position(s) of the transponder(s) on the Moon.
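As a rough illustration of that last step (not part of the proposal itself), the satellite-to-transponder distance follows from the round-trip travel time of the pulse once the transponder's known response delay is subtracted. The numbers below are placeholders.

```python
# Sketch of two-way ranging: distance = c * (round-trip time - transponder delay) / 2
C = 299_792_458.0  # speed of light in m/s

def one_way_range(round_trip_time_s, transponder_delay_s):
    """Distance from a satellite to a lunar transponder, from pulse timing."""
    return C * (round_trip_time_s - transponder_delay_s) / 2.0

# Placeholder values: a ~2.56 s round trip with a 1 microsecond transponder delay
print(f"{one_way_range(2.56, 1e-6) / 1000:,.0f} km")
```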
Because the same hemisphere of the Moon faces the Earth continuously, any transponders placed there would remain continuously or nearly continuously accessible to GPS satellites, and so only a relatively small number of transponders would be needed to provide continuous coverage. Assuming that the transponders would depend on solar power, it would be desirable to use at least two transponders, placed at diametrically opposite points on the edges of the Moon disk as seen from Earth, so that all or most of the time, at least one of them would be in sunlight. |
1. A lunar eclipse occurs when the moon passes behind the earth so that the earth blocks the sun's rays from striking the moon.
2. A solar eclipse occurs when the Moon passes between the Sun and the Earth, and the Moon fully or partially covers the Sun as viewed from some locations on Earth.
3. A Solar eclipse always occurs two weeks before or after a lunar eclipse.
4. Lunar eclipses can only occur during a full moon. Solar eclipses can only occur during a new moon.
5. Eclipses very often occur in threes, alternating lunar, solar and lunar.
6. The maximum time a lunar eclipse can last is 3 hours and 40 minutes. The maximum time for a total solar eclipse is 7 minutes and 40 seconds.
7. Lunar eclipses can occur up to 3 times a year.
Solar eclipses can occur at least 2 and no more than 5 times a year.
8. Lunar eclipses are visible over an entire hemisphere. Solar eclipses are visible in a narrow path a maximum of 167 miles wide.
9. The cycle of eclipses repeats approximately every 18 years and 11 days, a period called the saros.
10. The eclipse shadow moves at 2,000 mph at the Earth's poles and 1,000 mph at the Earth's equator.
Eclipse 2009: Africa, Europe and Central Asia
Goddard Space Flight Center: Eclipse Page
Introductory Eclipse Tutorial
Lunar Eclipse Computer – Locations Worldwide
Lunar Eclipse Computer – U.S. Cities and Towns
Lunar Eclipses 2000 - 2020
NASA: Eclipse 99
Sky and Telescopes: Eclipse
Solar Eclipse Information
Total Solar Eclipse
Total Solar Eclipse 1
What Causes a Lunar Eclipse?
In this exercise, various calculations of the electronic band structure of a one-dimensional crystal are performed with the Kronig-Penney (KP) model. This model has an analytical solution and therefore allows for simple calculations. More realistic models always require extensive numeric calculations, often on the fastest computers available. The electronic band structure is directly related to many macroscopic properties of the material and therefore of large interest. Nowadays, hypothetical (nonexistent) materials are often investigated by band structure calculations – and if they show attractive properties, researchers try to prepare these materials experimentally. The KP model is a strongly simplified one-dimensional quantum mechanical model of a crystal. Despite of the simplifications, the electronic band structure obtained from this model shares many features with band structures that result from more sophisticated models.
The exercise will enable you to learn the following:
1. Understand the concept of energy bands and energy gaps, their variations as a function of the size of the periodic potentials and energy
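To give a flavor of what such a calculation looks like, the sketch below uses the delta-potential (Dirac-comb) limit of the KP model, in which the allowed energies are those satisfying |P sin(αa)/(αa) + cos(αa)| ≤ 1. The barrier strength P and the energy grid are illustrative choices, not values prescribed by the exercise.

```python
# Kronig-Penney model in the delta-potential limit: allowed bands are where |f(E)| <= 1,
# with f = P*sin(alpha*a)/(alpha*a) + cos(alpha*a) and alpha*a proportional to sqrt(E).
import numpy as np

P = 3.0 * np.pi / 2.0                         # dimensionless barrier strength (illustrative)
alpha_a = np.linspace(1e-6, 4 * np.pi, 4000)  # energy axis in units of alpha*a

f = P * np.sin(alpha_a) / alpha_a + np.cos(alpha_a)
allowed = np.abs(f) <= 1.0                    # True inside an energy band, False inside a gap

# Approximate band edges: points where 'allowed' switches between True and False
edges = alpha_a[np.flatnonzero(np.diff(allowed.astype(int)))]
print("band edges at alpha*a ≈", np.round(edges, 2))
```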
States of Matter Poster Assignment
By: Michael McInroy
A solid is the state of matter in which materials have a definite shape and a definite volume.
- Almost all solids have some type of orderly arrangement of particles at the atomic level.
- Solids are often referred to as condensed phases because the particles are very close together.
- Does not flow easily.
A liquid is the state of matter in which material has a definite volume but not a definite shape.
- A liquid always has the same shape as its container and can be poured from one container to another.
- The forces between liquid particles are weaker than the forces between solid particles.
- Liquid particles are further apart and can move about more easily.
A gas is the state of matter in which a material has neither a definite shape nor a definite volume.
- A gas takes the shape and volume of its container.
- Pure gases are made up of just one atom.
- Gas pressure is measured in pascals.
- Hydrogen sulfide
Plasma is a highly ionized gas containing an approximately equal number of positive ions and electrons.
- Plasma is different from a gas, because it is made up of groups of positive and negatively charged particles.
- Natural plasmas aren't found around you that often.
- Man-made plasmas are everywhere.
- Welding arcs
- Fireball of a nuclear explosion
- Interstellar gas clouds
A Bose-Einstein condensate (BEC) is a state of matter of a dilute gas of bosons cooled to temperatures very close to absolute zero.
- Bose-Einstein condensate atoms are unexcited and super cold.
- The BEC happens at super low temperatures.
- You can create a BEC with a few special elements.
- Liquid helium
- Rubidium atoms |
New Zealand Mudsnail
Species and Origin: A tiny snail that reproduces asexually. Native to New Zealand, it was accidentally introduced with imported rainbow trout in Idaho in the 1980s and into the Great Lakes via ballast water from ocean going ships.
Impacts: Densities can reach 100,000 to 700,000 per square meter. They outcompete species that are important forage for native trout and other fishes and provide little nutrition to fish that eat them.
Status: First discovered in the late 1980s in the Snake, Idaho, and Madison Rivers, they quickly spread to other western rivers. They were discovered in Lake Ontario, and later in Thunder Bay, Lake Superior in 2001. In fall of 2005, they were discovered in the Duluth-Superior harbor. See US map.
Where to look: Look on docks, rocks, and other hard surfaces along the shorelines and bottoms of lakes, rivers, and streams.
Regulatory classification (agency): It is proposed as a prohibited invasive species (DNR), which means import, possession, transport, and introduction into the wild will be prohibited.
Means of spread: They likely spread by attaching to recreational fishing gear, other types of equipment placed in the water, or in fish shipments.
How can you help?
- Inspect and remove visible animals, plants, and mud from waders, recreational fishing equipment, research gear, and other field equipment.
- Rinse everything with 120° F water, or dry equipment for 5 days.
- Report suspected infestations.
(Information provided by the Minnesota Department of Natural Resources) |
Colored by lights at night, an incredible amount of water plunges continuously over the sheer rock wall of Niagara Falls. If you stand close to the falls, the roar of the water is almost deafening, and the rushing water drenches you with spray. No wonder Niagara Falls was once named one of the seven natural wonders of the world! It’s amazing how such a common substance—water—can be so impressive.
Water and Other Liquids
Water is the most common substance on Earth, and most of it exists in the liquid state. A liquid is one of four well-known states of matter, along with solid, gas, and plasma states. The particles of liquids are in close contact with each other but not as tightly packed as the particles in solids. The particles can slip past one another and take the shape of their container. However, they cannot pull apart and spread out to take the volume of their container, as particles of a gas can. If the volume of a liquid is less than the volume of its container, the top surface of the liquid will be exposed to the air, like the vinegar in the bottle pictured in the Figure below .
Q: Why does most water on Earth’s surface exist in a liquid state? In what other states does water exist on Earth?
A: Almost 97 percent of water on Earth’s surface is found as liquid salt water in the oceans. The temperature over most of Earth’s surface is above the freezing point (0°C) of water, so relatively little water exists as ice. Even near the poles, most of the water in the oceans is above the freezing point. And in very few places on Earth’s surface do temperatures reach the boiling point (100°C) of water. Although water exists in the atmosphere in a gaseous state, water vapor makes up less than 1 percent of Earth’s total water.
A liquid has intermolecular forces that are weaker than those of a solid.
What happens to ice if you heat it? As the temperature of ice increases, the intermolecular forces (hydrogen bonds) holding the molecules in place give way. The molecules of ice are no longer held in a fixed location: they start to move around, turning into liquid water.
Have you ever seen a water strider bug resting on the surface of water? What keeps the bug from falling in? The intermolecular forces of the water are strong enough to hold the bug on the surface of the water.
Surface Tension and Viscosity
Two unique properties of liquids are surface tension and viscosity. Surface tension is a force that pulls particles at the exposed surface of a liquid toward other liquid particles. Surface tension explains why water forms droplets, like the water droplet that has formed on the leaky faucet pictured in the Figure below . You can learn more about surface tension at this URL: http://io9.com/5668221/an-experiment-with-soap-water-pepper-and-surface-tension .
Water drips from a leaky faucet.
Viscosity is a liquid’s resistance to flowing. You can think of it as friction between particles of liquid. Thicker liquids are more viscous than thinner liquids. For example, the honey pictured in the Figure below is more viscous than the vinegar. You can learn more about viscosity at this URL: http://chemed.chem.wisc.edu/chempaths/GenChem-Textbook/Viscosity-840.html .
Q: Which liquid do you think is more viscous: honey or chocolate syrup?
A: The viscosity of honey and chocolate syrup vary by brand and other factors, but chocolate syrup generally is more viscous than honey.
- A liquid is a state of matter in which particles can slip past one another and take the shape of their container. However, the particles cannot pull apart and spread out to take the volume of their container.
- Surface tension is a force that pulls particles at the exposed surface of a liquid toward other liquid particles. Viscosity is a liquid’s resistance to flowing.
- liquid : State of matter that has a fixed volume but not a fixed shape.
Table below shows the viscosity of water at different temperatures. Use the data in the table to answer the questions below. The meaning of the units of viscosity is not necessary to appreciate the relationship between temperature and viscosity.
| Temperature [°C] | Viscosity [mPa·s] |
- Describe in words what the data in the table show.
- If you were to draw a line graph of temperature and viscosity, what would it look like? Make a rough sketch to show how it would look. (Assume that the x-axis represents temperature and the y-axis represents viscosity. A plotting sketch follows these questions.)
- Write a hypothesis to explain the relationship between temperature and viscosity of water.
- State the properties of matter in the liquid state.
- What property of liquids explains why water beads up on the car surface pictured in the Figure below ?
- Predict which liquid has greater viscosity: olive oil or motor oil (SAE 40). Then do online research to find out if your prediction is correct. |
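To check a sketch for the graphing question above, the short script below plots viscosity against temperature. The values in it are approximate literature figures for water, not the values from the table referenced above, so treat the curve only as an illustration of the general trend.

```python
# Illustrative only: approximate literature values for the viscosity of water,
# NOT the data from the table referenced in the questions above.
import matplotlib.pyplot as plt

temperature_c = [0, 20, 40, 60, 80, 100]
viscosity_mpa_s = [1.79, 1.00, 0.65, 0.47, 0.35, 0.28]

plt.plot(temperature_c, viscosity_mpa_s, marker="o")
plt.xlabel("Temperature (°C)")
plt.ylabel("Viscosity (mPa·s)")
plt.title("Viscosity of water decreases as temperature rises")
plt.show()
```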
Lesson 2: Summarizing Data
Answers to Self-Assessment Quiz
- Line list or line listing. A line listing is a table in which each row typically represents one person or case of disease, and each column represents a variable such as ID, age, sex, etc.
- Sex: A, D, F
- Age: B, G, H
- Lymphocyte count: B, G, H
Sex is a nominal variable, meaning that its categories have names but not numerical value. Nominal variables are qualitative or categorical variables.
Age and lymphocyte count are ratio variables because they are both numeric variables with true zero points. Ratio variables are continuous and quantitative variables.
- A. Because the centers of each distribution line up, they have the same measure of central location. But because each distribution is spread differently, they have different measures of spread.
- B, C, E. Right/left skewness refers to the tail of a distribution. Because the "hump" of this distribution is on the left and the tail is on the right, it is said to be skewed positively to the right. A skewed distribution is not symmetrical.
- C. For a distribution such as that shown in Figure 2.12, with its hump to the left, the mode will be smaller than either the median or the mean. The long tail to the right will pull the mean upward, so that the sequence will be mode < median < mean.
- B. The mode is the value that occurs most often.
- C. The median is the value that has half the observations below it and half above it.
- D. The mean is the value that is statistically closest to all of the values in the distribution
- D. The geometric mean is the value that is statistically closest to all of the values in the distribution on a log scale.
- C, E. The mode is the value that occurs most often. A distribution can have one mode, more than one mode, or no mode. In this distribution, both 38.0°C and 38.5°C appear 3 times.
- D. The median is the value that has half the observations below it and half above it. For a distribution with an even number of values, the median falls between 2 observations, in this situation between the 7th and 8th values. The 7th value is 38.2°C and the 8th value is 38.5°C, so the median is the average of those two values, i.e., 38.35°C.
- C. The mean is the average of all the values. Given 14 temperatures that sum to 531.6, the mean is calculated as 531.6 ⁄ 14, which equals 37.97°C, which should be rounded to 38.0°C.
- A. The midrange is halfway between the smallest and largest values. Since the lowest and highest temperatures are 35.1°C and 39.6°C, the midrange is calculated as (35.1 + 39.6) ⁄ 2, or 37.35°C.
- B. In epidemiology, the measure of central location generally preferred for summarizing skewed data such as incubation periods is the median.
- A. The measure of central location generally preferred for additional statistical analysis is the mean, which is the only measure that has good statistical properties.
- A, C, D, E. Interquartile range, range, standard deviation, and variance are all measures of spread. A percentile identifies a particular place on the distribution, but is not a measure of spread.
- B. The range is the difference between the extreme values on either side, so it is most directly affected by those values.
- B. The interquartile range covers the central 50% of a distribution.
- C. The interquartile range usually accompanies the median, since both are based on percentiles. The interquartile range covers from the 25th to the 75th percentile, while the median marks the 50th percentile.
- A. The standard deviation usually accompanies the arithmetic mean.
- A. The standard deviation is the square root of the variance.
- A, D. Use of the mean and standard deviation are usually restricted to data that are more-or-less normally distributed. Calculation of the standard deviation requires squaring differences and then taking the square root, so you need a calculator that has a square-root function.
- B. Distributions A, B, and C all range from 1 to 39 and have two central values of 20. Considering the eight values other than the smallest and largest, distribution C has values close to 20 (from 15 to 25), Distribution A has values from 10 to 30, and Distribution B has values from 3 to 37. So Distribution B has the broadest spread among the first 3 distributions. Distribution D has larger values than the first 3 distributions (41–49 rather than 1–39), but they cluster rather tightly around the central value of 45.
- A and E. The area from the 2.5th percentile to the 97.5th percentile includes 95% of the area below the curve, which corresponds to ± 1.96 standard deviations along the x-axis.
- A. The primary use of the standard error of the mean is in calculating a confidence interval.
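For readers who want to reproduce these measures, the short script below computes them with Python's statistics module. The temperature list is constructed so that its summary values agree with the answers above; it is not necessarily the original quiz dataset.

```python
# Summary measures from the answers above, computed on an illustrative dataset.
import statistics

temps = [35.1, 35.9, 36.3, 38.0, 38.0, 38.0, 38.2,
         38.5, 38.5, 38.5, 38.8, 39.0, 39.2, 39.6]  # 14 values, sum = 531.6

mean = statistics.mean(temps)                  # arithmetic mean (38.0 after rounding)
median = statistics.median(temps)              # 38.35, between the 7th and 8th values
modes = statistics.multimode(temps)            # [38.0, 38.5], two modes occurring 3 times each
midrange = (min(temps) + max(temps)) / 2       # (35.1 + 39.6) / 2 = 37.35
std_dev = statistics.stdev(temps)              # sample standard deviation (square root of variance)
q1, q2, q3 = statistics.quantiles(temps, n=4)  # quartiles; interquartile range = q3 - q1

print(mean, median, modes, midrange, round(std_dev, 2), q3 - q1)
```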
We are going to watch these videos about the inclined plane and the screw.
After watching these videos you have to answer these questions:
1. Energy is the ability to do WHAT?
2. Work = _______ × _______
3. Why did he slice the barrel into four barrels?
4. Lifting the barrel exerts the same amount of energy as using the plank. So why is the plank a better idea? Why is it considered a simple machine? (A short worked example appears at the end of this lesson.)
5. The pyramids were built using the same principle of standing on an escalator. Explain why.
1. Why is the mountain road considered a simple machine?
2. Why can’t cars go straight up the side of the mountain?
3. What type of inclined plane is the road that goes up a mountain or a screw?
4. What does a screw help you do?
Now we are going to review the inclined plane by filling in the gaps in activity 21 from this topic, made by Carles Egusquiza Bueno.
Finally, you have to answer activity 28 from this topic.
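The worksheet itself does not include any calculations, but the tiny sketch below (with made-up numbers) illustrates the idea behind question 4: an inclined plane does not reduce the work, which is force multiplied by distance; it only spreads that work over a longer distance so that less force is needed at any moment.

```python
# Made-up example: raising a 600 N barrel to a height of 1 m,
# either by lifting it straight up or by pushing it along a 4 m plank.
weight_n = 600.0        # force needed to lift the barrel straight up
height_m = 1.0
plank_length_m = 4.0

work_j = weight_n * height_m                 # work = force x distance (same either way)
force_on_plank_n = work_j / plank_length_m   # same work spread over a longer distance

print(work_j, "J of work in both cases")
print(force_on_plank_n, "N needed along the plank, ignoring friction")
```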
PASADENA, Calif., Sept. 28 (UPI) -- NASA says its Curiosity rover has found evidence an ancient stream may have flowed vigorously on Mars.
While there has been earlier evidence of the presence of water on Mars, images returned by the rover of rocks containing ancient streambed gravels are the first of their kind, NASA's Jet Propulsion Laboratory in Pasadena, Calif., reported.
The images show stones cemented into a layer of conglomerate rock, and the sizes and shapes of stones offer clues to the speed and distance of the long-ago stream's flow, JPL scientists said.
"From the size of gravels it carried, we can interpret the water was moving about 3 feet per second, with a depth somewhere between ankle and hip deep," said Curiosity science co-investigator William Dietrich of the University of California, Berkeley. "This is the first time we're actually seeing water-transported gravel on Mars. This is a transition from speculation about the size of streambed material to direct observation of it."
The ancient stream bed lies between the north rim of Gale Crater and the base of Mount Sharp, a mountain inside the crater that is Curiosity's main research destination because clay and sulfate minerals detected there from orbit can be good preservers of carbon-based organic chemicals that are potential ingredients for life, JPL said. |
A team of French and German researchers report in the May 2008 print issue of The FASEB Journal (http://www.fasebj.org) that people with limb-girdle muscular dystrophy are missing a protein called c-FLIP, which the body uses to prevent the loss of muscle tissue. By targeting the cellular and molecular mechanisms responsible for creating this protein, scientists could develop new drugs to stop muscle wasting from limb-girdle muscular dystrophy and other conditions.
“Unfortunately, rare diseases like limb-girdle muscular dystrophy don’t get the attention or funding they deserve,” said Gerald Weissmann, M.D., Editor-in-Chief of The FASEB Journal. “I hope that the breakthrough described in this study—the discovery of what regulates a protein that determines which muscle tissue stays and goes in our bodies—will lead to a range of new drugs for this form of muscular dystrophy and many others.”
To identify c-FLIP as a culprit in limb-girdle muscular dystrophy, the researchers used tissue from human biopsies to analyze the molecular pathways involved at each step of the disorder’s progression. The researchers found that the c-FLIP protein, which is responsible for blocking the death of muscle cells, is not produced as it should in people with limb-girdle muscular dystrophy, and that the creation of the c-FLIP protein is controlled by another protein called calpain-3. According to the authors, this finding may have implications for other types of muscular dystrophy and other situations that cause the death of muscle fibers, such as long-term immobilization, denervation, aging, or cachexia.
“Limb-girdle muscular dystrophy is a rare and devastating condition that robs people of movements that the rest of us take for granted,” Weissmann added. “Fortunately, this study should provide researchers with a much-needed target for developing drugs to treat at least one of these conditions.”
According to the U.S. Muscular Dystrophy Association, limb-girdle muscular dystrophy is a group of disorders affecting voluntary muscles around the hips and shoulders, and it is caused by mutations in at least 15 genes responsible for making proteins needed for normal muscle function. As the disease progresses, people with limb-girdle muscular dystrophy may lose their ability to walk, get in and out of chairs, comb their hair, and feed themselves.
The FASEB Journal (http://www.fasebj.org) is published by the Federation of American Societies for Experimental Biology (FASEB) and is consistently ranked among the top three biology journals worldwide by the Institute for Scientific Information. FASEB comprises 21 nonprofit societies with more than 80,000 members, making it the largest coalition of biomedical research associations in the United States. FASEB advances biological science through collaborative advocacy for research policies that promote scientific progress and education and lead to improvements in human health.
Many know that mammals (including humans) grow hair, differentiating them from other types in the animal world. External variations in skin type provide a good starting point. However, several other differences separate mammals from reptiles and amphibians. Body temperature, metabolism, hearts and breathing methods also stand as distinctions between the three types. These variations often determine the environments necessary for these animals to survive.
Amphibians begin as larvae (such as tadpoles) and undergo a metamorphic entrance into adulthood (frogs). Mammals give live birth and reptiles generally lay hard-shell eggs upon land. Amphibians lay their gel-like eggs in the water.
Body Temperature and Metabolism
Mammals, classified as warm-blooded, regulate their own body temperature. External temperature determines the body temperatures of reptiles and amphibians, both known as cold-blooded creatures. Warm-blooded animals mostly use food as energy to maintain body temperature rather than for size. Cold-blooded animals' food goes mostly toward their body mass, resulting in a smaller need for food. Changing body temperatures help make cold-blooded creatures less susceptible to viruses, which find it difficult to grow in fluctuating conditions.
Mammals grow hair and fur on their skin. Reptiles feature scales on their skin. Amphibians possess a moist skin, essential to their existence. Should an amphibian's skin dry out, the animal will die.
Mammals and reptiles breathe through lungs throughout their lives. Amphibians breathe through gills in the water as larvae. As adults, amphibians absorb oxygen through their skin.
Amphibians need a moist environment to survive. Both amphibians and reptiles demand warmer climates, as they do not regulate their own body temperatures. Mammals can thrive in cold climates.
Hearts of mammals contain four chambers: two atria and two ventricles. Reptiles and amphibians possess three-chambered hearts, bearing two atria and one ventricle.
The duck-billed platypus, defined as a mammal, lays eggs rather than giving live birth to its young. The crocodile, classified as a reptile, runs on a four-chambered heart. Not all mammals completely maintain their body temperature; bats cool down when inactive, and animals such as bears and gophers can lose up to 10 degrees C when hibernating.
Terms for Bibliography
1. Bar- handle to be pulled
2. Bed-The part of a press on which type is placed for printing.
3. Bibliography-The study of books, including their texts, materials, history, production and
distribution (Also an account, list or description of books or works).
4. Binder’s Ticket-A stamped or printed identification of a book’s binder, generally
appearing (if used) on a paste-down endpaper.
5. Binding-The process or product of folding, gathering and fastening together the printed
sheets of a book and enclosing them in covers.
6. Binding Cloth- Cloth used in binding especially since the 1820s, when publishers began
issuing books in prefabricated casings rather than leaving binding to the bookseller or
purchaser. The cloth may be embossed with a variety of patterns or grains, that in
descriptive bibliography may be designated diaper, rib, ripple, bead, sand, pansy, among others.
7. Black Letter Type- A group of angular, script-like type-faces represented by textura.
Rotunda and bastarda are no longer commonly used, although one bastarda type (Fraktur)
was used in Germany until the mid 1900s. Gothic type is sometimes used as a synonym
but confusingly also refers to recent sans-serif typefaces.
8. Book Plate- a slip, often decorated, pasted to an end paper to show ownership of a book.
9. Boards-The wood, cardboard, or other material used as stiff covers or to stiffen the covers
of a binding.
10. Case-A compartmented tray in which type is kept for composition; a type case. Also, a
cover or binding; used especially to refer to bindings made up separately and
subsequently affixed to books.
11. Casting Off-Estimating the space, including number of pages, to be occupied by copy
when it has been set into type (copy fitting).
12. Catchword- The first word of a page appearing also at the foot of the preceding page as a
guide to assembling the pages in correct order. Catchwords were in common use in
English printed books from the mid-16th century to the 18th century.
13. Chase-A metal frame in which pages of type are arranged and locked up for printing or
for making plates.
14. Codex-A book (as opposed say to a papyrus roll); in particular, a manuscript book. The
plural is codices.
15. Common Press- The wood handpress in use throughout the handpress period (1450-1800)
consisting of a wood frame in which a screw-driven platen impressed the paper onto an
inked form of type.
16. Composing Stick-A handheld tray into which the compositor places the types. In early sticks the measure was fixed, so a compositor would have to have several of various standard lengths. Later composing sticks had an adjustable end that allowed one stick to serve for setting lines of varying measure.
17. Composition-The process of setting type, spaces, rules, headings and the like.
18. Compositor-A person who sets type.
19. Deckle Edge-The untrimmed, uneven edge of a sheet of paper as it comes from the mold
in papermaking by hand or from the web in papermaking by machine. The deckle is the frame around the mold used in making paper by hand; it is a rubber dam or strap in papermaking by machine.
20. Distribution-The process of removing pieces of type from the chase and returning them to
the type case.
21. Edition Binding-The binding up of books before the publisher supplies them to book-
sellers. The practice became common in the early 19th century.
22. Format- In the most general sense, the design and layout of a book. More particularly the
scheme by which type pages have been arranged (imposed) within a forme so that when a
printed sheet is folded, it produces a particular number and sequence of leaves.
See “duodecimo,” “folio,” “octavo,” “quarto,” and “sixteenmo.” Also, a designation of book size, since the size depends on the number of times the sheet is folded (and the size of the sheet).
23. Forme-The assemblage, or imposition of type pages for the printing of one side of a
sheet. The outer forme includes the two pages that will come first and last when the sheet
is printed and folded correctly; The inner forme is the opposite side. Also, especially in
American usage: FORM.
24. Foul Case- A compositor’s case in which some pieces of type have been distributed into
the wrong compartments and wait for the opportunity to create a typographic error.
25. Frisket- A frame covered with parchment or paper in which holes have been cut to
expose the areas to be printed and to mask the areas of the chase that are not to be printed
(the furniture, q.v.).
26. Furniture-In printing, wood or metal spacing material placed around type pages within a chase.
27. Gilt/Gilded- Of a book, having gold leaf applied to its edges; sometimes used to refer to
various kinds of stamping on bindings. |
By examining the mouthparts of a river insect, Australian researchers have shown that rivers can bounce back after years of damaging pollution.
RIVERS HAVE THE ABILITY to bounce back to health from years of pollution, offering hope for rivers worldwide, according to the results of a new Victorian study.
Melbourne University researchers Vincent Pettigrove and Bryant Gagliardi studied the rivers in the Ovens Valley, north east Victoria, which had been polluted with pesticides from 150 years of tobacco farming.
They found that by examining the mouthparts of a common river insect, the non-biting midge, or 'chironomid', they could ascertain whether pesticides — for example DDT, a now-banned carcinogenic toxin — were present in the river. Deformities included missing, additional or fused 'teeth'. The researchers said that insect deformity can be a cheaper measure of river health than traditional chemical analysis and can assist agriculture in managing land more sustainably.
"We have seen that land-managers can make decisions about how they use their land that positively impact aquatic ecosystems because current farming practices in the area are having a lesser impact on the Ovens River," Dr Pettigrove said.
The Ovens River Basin is located in the Murray Darling catchment. Tobacco cultivation occurred from the 1850s and peaked during the 1970s. The use of pesticides such as DDT was banned in Victoria in 1981, and, after years of declining profit margins, the industry completely closed in 2006.
The study examined midge larvae at tobacco sites, in 1988 and again in 2010, as well as sites 10km upstream and downstream, to see changes over time. The 1988 research showed a significantly higher amount of deformities in the larvae collected at the tobacco sites compared with upstream and downstream samples. But in 2010, the amount of deformities was not significantly different between the different locations.
Midge larvae are commonly found in the fine sediments of slow-flowing rivers, which is also where pesticides and heavy metals are likely to accumulate.
In addition to DDT, the sediments were also examined for other pollutants (organochlorides). In 1988, significant levels of DDT were detected at the tobacco zone but the tests were not considered sensitive enough to be reliable.
In 2010, the sediment analysis was much more comprehensive, including screening for numerous different pesticides and heavy metals over a much larger area. DDT and related organochlorides were found only at the tobacco sites.
The authors acknowledged the reduction in the midges' deformities might have indicated an adaptation to the pesticides since 1988, but dismissed this idea as previous studies into the Australian sheep blowfly found that adaptation occurred within a 10 year timeframe. Since pesticides were used in the Ovens valley between the 1950s and 1981, any adaptations would have occurred prior to 1988.
According to the researchers, using the midge as a 'bioindicator' is an inexpensive and reliable method for indicating the presence of pollutants. The results in the Ovens Valley showed that improved farming practices can significantly improve the health of local rivers within a short space of time. The findings could be used to help rivers worldwide.
"The improvement in health of the chiromonid is significant as an indicator of river health as they are found in every type of freshwater environment worldwide and are an important food resource for fish," said Pettigrove.
The findings were published in the journal Agriculture, Ecosystems and Environment.
Sarah Hardgrove wrote this article as part of her science communication studies at the University of Melbourne. |
13 December 2013
Scientists have been puzzled by the hole in the ozone layer, which forms each year over Antarctica, but has been changing in size from year to year. Now Nasa researchers say the changes are mostly due to the weather - not the ozone-destroying chemicals in the atmosphere. The findings were presented at the American Geophysical Union Fall Meeting in San Francisco.
Click to hear the report
Ozone-damaging substances - such as CFCs - began to be phased out 20 years ago. Since then, the hole in the ozone layer has stopped getting bigger. However there haven't yet been signs of a full recovery - and damaging UV rays from the sun are still streaming through.
Now scientists from Nasa believe that the weather also plays a key role. Satellite images show that fluctuating air temperatures and winds change the amount of ozone that sits above Antarctica. This means the size of the ozone hole changes year on year.
The team thinks the weather will continue to be the dominant driver in the process until 2030. But after that, as the long-lasting chemicals in the atmosphere finally start to clear, the layer could recover by 2070.
Click here to hear the vocabulary
- phased out
used less and less
- streaming through
continuously flowing from one side to the other (here, from outside the earth's atmosphere to inside it)
- plays a key role
has a strong influence
- dominant driver
the main influence on a process
- to clear
to become less and less until they disappear |
Seal Script (篆書)
Seal script (zhuanshu) exists in two major forms. The earlier form, known as large-seal script (dazhuan), derived from symbols cast on bronze ritual vessels from the Shang and Zhou dynasties of the eleventh to third century BCE. As its linear composition became more regular, seal-script inscriptions were used mostly for commemorative records. Its later and more unified form, called small-seal script (xiaozhuan), was specifically devised as a standardized system of writing under the first emperor of the Qin dynasty, who reigned from 221 to 209 BCE. Often used for official inscriptions on stone monuments, small-seal script is characterized by a symmetrical structure formed with thin, even lines executed with balanced movements (see right). |
It seems impossible that a tiny creature in the sea could someday be an effective treatment for hearing loss, but one group of researchers says they have all the right stuff. The Center for Hearing and Communication estimates 48 million people in the U.S. have hearing problems and many of them are elderly. Age-related hearing loss affects one in every three people over the age of 65. These are the individuals that will likely benefit from the studies being done on the sea anemones.
What is a Sea Anemone?
Sea anemones are the exotic creatures often seen in ocean-based photography. It’s a group of sea animals that get their name from a flowering plant called the anemone. Similar to the plant, sea anemones have a Medusa-like quality that consists of a columnar trunk surrounded by flowing tentacles.
These are highly predatory creatures that use their tentacles for hunting. They pull the arms in to draw in prey and then expand when it comes time to catch their next meal. The tentacles also help propel them through the water, although, they tend to remain stationary for weeks at a time.
What kind of food do they eat? The sea anemones are not picky eaters. They pull the tentacles out to catch just about any animal that comes within reach and will fit in its mouth.
How the Sea Anemone can Help the Hearing Impaired
A 2016 study published in the Journal of Experimental Biology reports that the sea anemone has tiny hair cells that allow them to sense vibrations in the ocean when catching prey. The core of these hair cells is similar to what humans use to hear.
The inner ear consists of a labyrinth structure filled with delicate hair cells that resemble what the sea anemone use to detect vibrations. The hair cells transduce the vibrations of sound into something the brain can understand. Without them, there is no way for you to comprehend what you hear.
The problem with the very tiny hair cells found in both humans and sea anemones is that they tend to break. These broken hairs are the basis for the hearing loss that occurs as people get older. Decades of listening to people talk, to your favorite TV show and to the local band that plays every weekend will catch up to you. The tiny hair cells break down after years of service and hearing is diminished.
For humans, when the hair cells are gone, there is no way to get them back. The sea anemone, however, has a built-in repair system. It’s the key to their survival because they need their hair cells to live. The sea anemone's reproduction is a traumatic event that requires it to tear its body in two, breaking its hair cells in the process. In response, it produces mucus that covers its body and aids in healing. In that mucus is a protein that repairs the hair cells.
The Sea Anemone Study
University of Louisiana biology professor Glen Watson and his colleagues decided to look closer at the healing process of the sea anemone to see if those same repair proteins might work for different species. The researchers used mice in the study because their ears have similar hair cells — called stereocilia — that enable hearing. They destroyed the stereocilia in the test mice and then treated them with repair protein taken from a starlet sea anemone. The result was significant repair of the stereocilia.
Does This Mean Protein From the Sea Anemone Will Work on Humans?
The study shows that repair of these very delicate hair cells is possible in other animals, but mice are not humans. Mice have proteins that are related to the ones the sea anemones use for repair. Humans are not quite as lucky. The next step is to find a way to harness that same repair power either using human protein or something taken out of nature that can give people with this kind damage back their hearing.
It’s likely that the therapeutic use of repair proteins to heal damaged hair cells in humans is years away. This study is good news, though, because it does show that some species have this ability and more research might put it to use for humans. |
Credit: Chris Sergeant/ZSL
Bacteria living on the skin of frogs could save them from a deadly virus, new research suggests.
Ranavirus kills large numbers of European common frogs – the species most often seen in UK ponds – and is one of many threats facing amphibians worldwide.
Scientists from the University of Exeter and ZSL’s Institute of Zoology compared the bacteria living on frogs – known as their “microbiome” – from groups with varying history of ranavirus.
They found that populations with a history of outbreaks had a “distinct” skin microbiome when compared to those where no outbreaks had occurred.
“Whether a population of frogs becomes diseased might depend on the species of bacteria living on their skin,” said Dr Lewis Campbell.
“Ranavirus is widespread, but its presence in the environment doesn’t necessarily mean frogs become diseased – there appears to be some other factor that determines this.
“The skin is often the first infection point in ranavirus, and the first stage of the disease can be skin sores.
“It’s possible that the structure of a frog’s microbiome – the mix of bacteria on its skin – can inhibit the growth and spread of the virus so it can’t reach a level that causes disease.
“While the results of our study demonstrate a clear link between the frog skin microbiome and disease, further research will be needed to understand the exact mechanisms which cause this relationship to form.”
Laboratory trials will help establish whether a history of ranavirus infection causes the microbiome differences, or whether these are pre-existing differences that predispose some populations to infection.
The scientists tested the skin bacteria of more than 200 wild adult European common frogs (Rana temporaria) from ten populations.
They found that the microbiome of individual frogs is usually most similar to that of others in the same population (those living in the same geographical area), but that populations with the same disease history were more similar to each other than to populations of the opposite disease history.
Even though amphibians can partially “curate” their microbiome by producing proteins that benefit specific bacteria, they are limited to those bacteria which are available in their environment.
Ranavirus can wipe out entire common frog populations and, though the new findings need further investigation, the researchers hope their work could help the species.
Dr Xavier Harrison said: “There’s growing evidence that skin bacteria may protect amphibians from lethal pathogens such as chytrid fungus, and that we can develop cocktails of probiotic bacteria to prevent vulnerable individuals from contracting disease.
“Our work suggests that given enough effort and research, similar probiotic therapies may be effective against ranavirus.”
The research was funded by the Natural Environment Research Council, the Royal Society and the Marie Curie Foundation.
The paper, published in the journal Frontiers in Microbiology, is entitled: “Outbreaks of an emerging viral disease covary with differences in the composition of the skin microbiome of a wild UK amphibian.”
(ORDO NEWS) — You will be forgiven if you think that our nearest planetary neighbor is Venus. In a sense, you are right – Venus is closer to Earth than any other planet in the solar system.
Likewise, its orbit is closer to ours than any other. However, in another sense you would be wrong. At least this argument is presented in an article published in Physics Today.
To identify our nearest neighbor, engineers partnering with NASA, Los Alamos National Laboratory, and the US Army's Engineer Research and Development Center built a computer model to calculate the Earth’s average proximity to the three closest planets (Mars, Venus, and Mercury) over a 10,000-year period.
Because of the way the planets align during their orbits, the model shows that Earth spends more time closer to Mercury than either Venus or Mars.
“In other words, Mercury is on average closer to Earth than Venus because it orbits the Sun more closely,” the authors explain.
Indeed, it is not just Earth. Further calculations show that all seven planets in the solar system spend most of their orbit closer to Mercury than any other planet. Sounds impossible? Here’s how they figured it out.
The results are based on a method called the point-circle method (PCM) – essentially a mathematical equation that treats the orbits of two planets as circular, concentric and coplanar and calculates the average distance between the two planets as they orbit the sun.
“From PCM, we observed that the average distance between two orbiting bodies is at a minimum when the inner orbit is at a minimum,” the authors explain.
“This observation leads to what we call the whirly-dirly corollary: for two bodies with roughly coplanar, concentric circular orbits, the average distance between the two bodies decreases as the radius of the inner orbit decreases.”
“It becomes clear that Mercury (average orbital radius 0.39 AU), and not Venus (average radius 0.72 AU), is the closest planet to Earth on average” (AU is an astronomical unit equal to the distance between the Earth and the Sun).
To test their hypothesis, they built a computer model that tracked the positions of all four planets over a 10,000-year period and calculated the average distance between them.
The results of this simulation differed from traditional calculations (determined by subtracting the average radius of the inner orbit from the average radius of the outer orbit) by a staggering 300 percent.
It turned out that the average distance between the Earth and Venus is 1.136 astronomical units (0.28 according to the “old method”). By comparison, the average distance between Earth and Mercury was 1.039 astronomical units (0.61 by the “old method”).
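For readers who want to check the claim themselves, here is a minimal sketch in TypeScript (not the authors' code) that averages the Earth-planet distance over many evenly spaced times, assuming circular, concentric, coplanar orbits; the orbital radii and periods are rounded textbook values supplied here for illustration.

// Minimal sketch: average Earth-planet distance for circular, concentric,
// coplanar orbits, sampled at many evenly spaced times over 10,000 years.
function averageDistanceAU(rOther: number, pOtherYears: number,
                           years = 10000, samples = 1000000): number {
  const rEarth = 1.0;   // Earth's orbital radius in AU
  const pEarth = 1.0;   // Earth's orbital period in years
  let total = 0;
  for (let i = 0; i < samples; i++) {
    const t = (i / samples) * years;
    const angleEarth = 2 * Math.PI * t / pEarth;       // Earth's position angle
    const angleOther = 2 * Math.PI * t / pOtherYears;  // other planet's position angle
    const dx = rEarth * Math.cos(angleEarth) - rOther * Math.cos(angleOther);
    const dy = rEarth * Math.sin(angleEarth) - rOther * Math.sin(angleOther);
    total += Math.hypot(dx, dy);
  }
  return total / samples;
}

// Rounded orbital radii (AU) and periods (years)
console.log("Earth-Mercury:", averageDistanceAU(0.39, 0.241).toFixed(3), "AU");
console.log("Earth-Venus:  ", averageDistanceAU(0.72, 0.615).toFixed(3), "AU");
console.log("Earth-Mars:   ", averageDistanceAU(1.52, 1.881).toFixed(3), "AU");

With these rounded inputs, the Earth-Mercury average comes out near 1.04 AU and the Earth-Venus average near 1.14 AU, in line with the figures quoted above.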
The hypothesis has not yet been presented in a peer-reviewed article and will no doubt be subjected to scrutiny by experts in the field.
“Blue Carbon” is the scientifically recognized term defining carbon stored by coastal ecological systems. These systems of seagrasses, mangroves, salt marshes, and seaweed cover less than 0.5% of the seabed and hold the equivalent of only about 0.05% of the plant biomass on land, yet they are responsible for over 50% of all carbon storage in ocean sediment. Through photosynthesis, carbon is captured in the plants and roots as the plants grow, ending up in the sediment where the carbon is stored (sequestered) for up to millions of years. One acre of seagrass can remove the carbon emitted from 4,000 miles of car exhaust each year.
There are about 72 species of flowering plants collectively called seagrasses that evolved from sea algae, moved onto land, and then, about 100 million years ago, transitioned back to the sea. Forming beautiful water-based meadows along coastal floodplains and in water up to 150’ deep, seagrass is a critical food source and habitat for wildlife, supporting a diverse community of fish, snails, sea turtles, crabs, shrimp, oysters, clams, squid, sea urchins, sponges, and anemones.
Seagrasses have been called “the lungs of the sea”, capturing and storing large amounts of carbon and releasing oxygen into the water as they build their leaves and roots through the process of photosynthesis—similar to how trees take carbon from the air to build their trunks. As parts of the seagrass die and decay, they collect on the seafloor and become buried and trapped in the sediment. Under the effects of time and pressure the sediment forms sedimentary rock, which the actions of Earth’s tectonic plates bury in the upper crust; the carbon is thus effectively sequestered for decades and up to millions of years, resurfacing only through volcanic activity and the uplift and erosion of that rock.
But Blue Carbon ecosystems are being lost at a rate of 2-7% per year—faster even than the rainforests. This loss adds to atmospheric carbon by removing the sequestration that seagrasses provide, and it also destroys habitat that is vital to the health and viability of our climate, our coasts, our own health, and a multitude of plants and animals.
Seagrass habitats are threatened by many anthropogenic (human) activities. Paved surfaces cause heavy runoffs of dirty, unfiltered water from storms and waste-waters. Increased fertilizer use from farms and housing developments, nitrous oxides from various fossil fuel burning activities—auto, factory, electricity generation, home heating—all cause excess nutrient accumulations flowing to the seas. These activities encourage algae blooms like red tide which deplete oxygen, cloud the sunlight, increase the water’s temperature, and release deadly toxins which kill animals, people, and the seagrass meadow.
Dragging weighted fishing nets over the meadows uproots the plants. Global warming causes sea levels to rise and oceans to acidify, reducing light penetration and hindering photosynthesis, which in turn decreases plant growth, health, and biodiversity. (A “wasting disease” in the early 1930s caused a die-off of up to 90 percent of a seagrass species called eelgrass growing in temperate North America, leading to the extinction of a snail species.)
Another cause of seagrass depletion is the reduction of predatory actions upon the herbivore community, allowing these grass-eating animals unfettered access to seagrasses. This situation is mostly caused by over-fishing of the herbivore-eaters. For example, Chesapeake Bay Blue Crabs eat grazing snails. Over-harvesting the crabs allows the snails to flourish, destroying the seagrasses.
Seagrass loss has accelerated over the past few decades, from 0.9% per year prior to 1940 to 7% per year in 1990, with about a 1/3 global loss since WWII. An increase in awareness and protections are required to eliminate these losses and to ensure the health and survival of not only these magnificent habitats, but of us too.
A global assessment, done by the National Academy of Sciences, involving 215 studies found that seagrass habitat has been disappearing at an increasing rate since 1940; 29% has disappeared since seagrass data began in 1879, with 25-50% in the past 55 years. Seagrass loss rates are comparable to those reported for mangroves and coral reefs, and place seagrass meadows among the most threatened ecosystems on earth.
So, vegetative coastal ecological systems are emerging as the most carbon-rich ecosystems in the world, and one of the most effective methods for long-term carbon storage: they bury carbon 35 times faster than tropical forests and contribute 50% of the total carbon buried in ocean sediments. Because of their remarkable speed and effectiveness to sequester carbon for millions of years, Blue Carbon storage should be a key strategy to combat Global Warming.
https://www.thebluecarboninitiative.org/library#Mangroves; https://www.pnas.org/content/106/30/12377; https://mousamwaylandtrustmaine.files.wordpress.com/2019/12/44722-atwood-et-al.-2015.pdf; https://www.fws.gov/verobeach/MSRPPDFs/Seagrass.pdf; https://ocean.si.edu/ocean-life/plants-algae/seagrass-and-seagrass-beds#element_37; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4633871/ |
The logical operators expect their operands to be boolean values, and they perform "boolean algebra" on them. In programming, they are usually used with the comparison operators to express complex comparisons that involve more than one variable.
The && operator evaluates to true if and only if its first operand and its second operand are both true. If the first operand evaluates to false, then the result will be false, and the && operator doesn't even bother to evaluate the second operand. This means that if the second operand has any side effects (such as those produced by the ++ operator) they might not occur. In general, it is best to avoid expressions like the following that combine side effects with the && operator:
(a == b) && (c++ < 10) // increment may or may not happen
The || operator evaluates to true if its first operand or its second operand (or both) are true. Like the && operator, the || operator doesn't evaluate its second operand when the result is determined by the first operand (i.e., if the first operand evaluates to true, then the result will be true regardless of the second operand, and so the second operand is not evaluated). This means that you should generally not use any expression with side effects as the second operand to this operator.
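For example (an illustrative snippet, not taken from the original text):

let counter = 0;
const isCached = true;

// The first operand is true, so || returns true without ever evaluating
// the second operand; the side effect (counter++) never happens.
const result = isCached || (counter++ < 10);

console.log(result);   // true
console.log(counter);  // 0 – the increment was skipped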
The ! operator is a unary operator; it is placed before a single operand. Its purpose is to invert the boolean value of its operand. For example, if the variable a has the value true, then !a has the value false. And if p && q evaluates to false, then !(p && q) evaluates to true. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
Click here for a complete list of Reading Like a Historian lessons, and click here for a complete list of materials available in Spanish. |
It’s known that airplane crews at high altitude are exposed to potentially harmful levels of radiation from cosmic rays, but could these cosmic rays pose hazards even at sea level? A new study into these potential terrestrial effects has just been published in the Journal of Geophysical Research.
Study co-author Adrian Melott, professor of physics and astronomy at the University of Kansas, says previous research suggests congenital birth defects down on Earth’s surface could be caused by these “solar particle events” – spikes in cosmic rays from the sun that touch off the northern lights and sometimes hamper communications or the electric power grid.
“We looked at two different studies,” said co-author Melott. “Both of them indicated a connection between cosmic rays and the rate of birth defects. One also associated mutations in cells growing in a petri dish with a 1989 solar particle event.”
But Melott says calculations to estimate the dose of radiation from a solar particle show no danger. “We have a contradiction,” he explained. “Our estimates suggest that the radiation on the ground from these solar events is very small. And yet the experimental evidence suggests that something is going on that causes birth defects. We don’t understand this.”
Investigating further, Melott and his colleagues looked at how cosmic rays from the sun create hazardous “secondaries” by reacting with the Earth’s atmosphere. “Cosmic rays are mostly protons,” he said. “Basically, they are the nuclei of atoms – with all the electrons stripped off. Some come from the sun. Others come from all kinds of violent events all over the universe. Most of the ones that hit the Earth’s atmosphere don’t reach the ground, but they set off ‘air showers’ in which other particles are created, and some of them reach the ground.”
These air showers, Melott suggests, pose the most serious threat for the health of humans and other biology on the Earth’s surface via “ionizing radiation.”
The researchers looked carefully at two forms of radiation formed by solar particle events – muons and neutrons – finding that muons are the most dangerous to biology at the Earth’s surface.
“Muons are a kind of heavy cousin of the electron,” Melott said. “They’re produced in great abundance by cosmic rays and are responsible for most of the radiation we get on the ground from cosmic rays. Neutrons can do a lot of damage. However, very few of them ever reach the ground. We checked this because some of them do reach the ground. We found that they’re likely responsible for a lot less damage than muons, even during a solar particle event.”
Of particular interest to the authors was a massive dose of solar radiation around the years 773-776 A.D. “Carbon-14 evidence was found in tree rings in 2012 that suggests a big radiation dose came down around 775, suggesting a huge solar particle event, at least 10 times larger than any in modern times,” Melott said. “Our calculations suggest that even this was mostly harmless, but maybe there is something wrong with our assumptions. We used ordinary understandings of how muons may cause damage, but perhaps there is some new physics here which makes the muons more dangerous.”
He adds the next step in the investigation should be honing an understanding of how much exposure to muons DNA can withstand. “In calculating the effect of muons, we used standard assumptions about what the effect of muons should be,” Melott said. “Their physics is pretty simple, just that of an electron with a lot of mass. But no one has ever actually done much experimentation to measure the effect of muons on DNA, because under normal conditions they are not a dominant player. They are not important, for example, in nuclear reactor accidents. We would like to put some synthetic DNA in a muon beam and actually measure the effect.”
This is a quick counting, adding and subtracting activity.
You need a number line or a tape measure.
Talk about the numbers on a tape measure, choose a small number (like 5). Have your child count up to it from one, and then count on past it to 10.
Mark the number with your finger or a pen.
Now, ask a few questions about the number:
What number is 1 more than...?
What number is 2 more than...?
What number is 1 less than...?
What number is 2 less than...?
Next, swap roles so your child chooses a number and asks you a question.
Finally, you can take it in turns to play an "I'm Thinking of a Secret Number" game.
Place a pen pointing at a number, and say:
"This is the number 8. I'm thinking of a Secret Number that is 3 more than 8. What is my number?"
Help your child to count along the tape-measure to find your 'Secret Number'.
Then, swap roles again. Can you find their 'Secret Number'?
We used a tape-measure here. We chose to use inches because on our tape this was the clearer side. The children are not measuring with the tape, just counting and adding. You could use a ruler or a printed number-line, if you wish. Or simply write the numbers out on paper. |
flying the thousands of miles to the islands from Asia or the Americas, especially if aided by strong winds. Birds, in turn, often carry seeds and other organisms in their guts or stuck to their feathers, beaks, and feet. Insects, spiders, snails, and other small organisms likely rafted to the islands on floating branches or mats of vegetation. And fishes, mollusks, seaweeds, and other marine organisms found new homes on the underwater flanks of the volcanoes after swimming to the islands or being carried there by oceanic currents.
Thus each new Hawaiian island was colonized by a variety of plant and animal species. But because of the islands’ isolation in the middle of the Pacific, only a small fraction of the species from surrounding landmasses likely reached Hawaii. For example, about 2,500 species of bony fishes live in the near-shore waters of the Philippines, but only about 530 occupy Hawaiian waters. Only a single genus of palm, the loulu palm, became established in Hawaii before the arrival of humans, though up to 100 genera of the family occur on other islands in the southwestern Pacific. And only 6 of 174 families of songbirds worldwide are native to Hawaii.
Once a newly introduced species became established in the Hawaiian islands, it could remain part of a widely distributed species found both there and elsewhere. For example, many of the fish species that live in Hawaii receive continued immigrants from surrounding regions and remain genetically linked to species distributed throughout the Pacific.
Alternatively, a newly established species in Hawaii could evolve into one or more new species. In some cases, this resulted in just a few new species. For example, several known species of flightless ducks, all now extinct, appear to be descended from a single duck species that colonized the islands, probably from North America. In other cases, the conditions encountered by colonizing species led to an explosive proliferation of new species, as demonstrated by the flies known as drosophilids.
An Adaptive Radiation Has Led to a Dramatic Diversification of the Drosophilids in Hawaii
In an area of just 16,700 square kilometers (about 6,500 square miles), the Hawaiian islands have the most diverse collection of drosophilid flies found anywhere in the world (see Figure 10). Different species range in body length from less than 1.5 millimeters (a sixteenth of an inch) to more than 20 millimeters (three-quarters of an inch). Their heads, forelegs, wings, and mouthparts have very different appearances and functions. Hawaiian drosophilids live everywhere from sea-level rainforests to subalpine meadows. Some species produce one egg at a time while others produce hundreds.
The approximately 800 native drosophilid species in Hawaii belong to two genera—Drosophila and Scaptomyza—which in turn are part of the family Drosophilidae. Drosophila and Scaptomyza are two of approximately 10,000 genera in the order Diptera, which includes flies, gnats, and mosquitoes. It is a tremendously diverse and successful group of
organisms: the fly species on earth far outnumber all of the vertebrate species combined. But the native insects of the Hawaiian islands include very few separate fly genera, and most of the native fly species are drosophilids.
When biologists began to study the evolutionary history of the Hawaiian drosophilids, they first examined the physical similarities and differences of the species. If two species have very similar appearances, scientists might hypothesize that both are descended from an ancestral species that lived quite recently. If two species are physically quite distinct, scientists could infer that they are more distantly related. Researchers then would seek additional evidence to support or reject these hypotheses. For example, two species can develop similar adaptations if they live in similar environments and therefore can appear to be more closely related than they actually are.
In recent decades, biologists have gained an additional way of examining the relationships among species. Each individual fly has a particular sequence of the chemical units that make up the DNA in its cells. In general, these sequences are more similar among the members of a single species than they are between the members of different species. Similarly, DNA sequences generally are more similar between closely related species than they are between more distantly related species. Genetic sequences accumulate changes over the generations as DNA randomly mutates and is influenced by natural selection or other evolutionary processes. If the DNA sequences of two Drosophila species are more similar, the two species are more likely to be descended from a relatively recent ancestral species, because their DNA has not had much time to diverge. If the DNA sequences are less similar, the two species had more time to accumulate genetic changes, indicating that their common ancestral species lived in the more distant past.
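As a toy illustration of that logic (not the researchers' actual analysis, which relies on careful sequence alignment and statistical models of evolution), one could score how many positions two aligned DNA fragments share and read higher scores as evidence of closer relatedness; the sequences below are invented for the example.

// Fraction of matching positions between two aligned, equal-length sequences.
function sequenceIdentity(a: string, b: string): number {
  if (a.length !== b.length) throw new Error("sequences must be aligned to equal length");
  let matches = 0;
  for (let i = 0; i < a.length; i++) {
    if (a[i] === b[i]) matches++;
  }
  return matches / a.length;
}

// Hypothetical fragments for three species (invented for illustration only)
const speciesA = "ATGGCCATTGTAATGGGCCG";
const speciesB = "ATGGCCATTGTCATGGGCCG"; // one difference from A
const speciesC = "ATGACCTTTGTCATGAGCCG"; // several differences from A

console.log(sequenceIdentity(speciesA, speciesB)); // 0.95 – likely a closer relative
console.log(sequenceIdentity(speciesA, speciesC)); // 0.8  – a more distant relative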
Study of the physical and genetic differences among the hundreds of species of native
drosophilids in Hawaii has led scientists to a remarkable conclusion. All of the native Drosophila and Scaptomyza species in Hawaii appear to be descended from a single ancestral species that colonized the islands millions of years ago! In fact, all of the approximately 800 species of drosophilids in Hawaii could be descended from a single fertilized fly that somehow reached the islands—perhaps blown there by a storm, or carried to the islands in a scrap of fruit stuck to the feathers of a bird.
Since that time, the descendants of the original colonists have undergone what evolutionary biologists call an adaptive radiation. New species have evolved and have occupied a wide range of ecological niches. Several interacting factors have contributed to this adaptive radiation. An especially important factor for the Hawaiian drosophilids has been what is called the founder effect. Many new populations of drosophilids in Hawaii must have become established in much the same way as did the original population. A few individuals or a single fertilized female must have journeyed or been transported from one area of suitable habitat within an island to another such area, or from one island to another. These founders carried with them just a subset of the total genetic variability within their species. As a result, the physical characteristics and behaviors of the founders could differ from those typical of the parental population. Under such circumstances, a founder population can diverge from the ancestral population and eventually may become a new species.
The great ecological diversity of the Hawaiian islands also plays a role in adaptive radiations. Drosophila species continually expanded into wetter or drier areas, higher or lower elevations, and regions of differing vegetation. The members of a species able to survive in these new areas can acquire new adaptations that set them apart from the original species.
Finally, the lack of competitors in island settings can spur the evolution of new species. In Hawaii, the drosophilids could move to new islands or into ecological niches that on the continents would already have been filled by other species. For example, many Hawaiian drosophilids lay eggs in decaying leaves on the ground, an ecological niche that
is filled by many organisms on the continents but in early Hawaii was almost empty.
As one species diversifies into many, a variety of different evolutionary paths can be taken (see Figure 11). An ancestral species can give rise to a daughter species while remaining relatively unchanged itself. Or a succession of single species can lead from an ancestral species to a single current species. Or an ancestral species can undergo repeated divisions, producing complex networks of evolutionary relationships (Panel 2).
The speciation of drosophilid flies in Hawaii is continuing to occur. For example, a species known as Drosophila silvestris occupies several discrete patches of forest on the Big Island (see Figures 12a and 12b), living in cool, wet forests above 750 meters (2,500 feet) in elevation and laying its eggs in the decaying bark of trees. Males of D. silvestris have a series of hairs on their forelegs that they brush against females during courtship. On the northeastern half of the island (known as the Hilo side), the males have many more of these hairs than do the males on the southwestern side (the Kona side). These two populations are developing physical and behavioral differences that over time might split a single species into separate species. |
The asteroid 16-Psyche contains an estimated $10,000 quadrillion worth of precious metals.
Psyche Mission Key Takeaways:
- Purpose: to study the chemical composition of the 16-Psyche asteroid.
- For the first time ever, study a terrestrial world not made of rock and ice, but made of metal.
- To determine the asteroid’s age.
- To analyze the topography
- Launch Date: August 1, 2022 from the NASA Kennedy Space Center on Merritt Island, Florida
- Estimated arrival date: January 2026
- Trip Length: 3.5 years
- Mission Length: After reaching the asteroid, the plan is to spend 21 months in orbit studying and analyzing 16-Psyche.
The Psyche Mission: Why it Matters
“What makes the asteroid Psyche unique is that it appears to be the exposed nickel-iron core of an early planet, one of the building blocks of our solar system.” – NASA Jet Propulsion Laboratory
The mission is planning to explore 16-Psyche, a metallic asteroid, and will launch from a SpaceX Falcon Heavy rocket in August 2022.
Sending a robotic spacecraft beyond Earth orbit into deep space is always a big milestone. When dealing with distances in the neighborhood of millions of miles, we’re talking about so long of a voyage that the spacecraft components will never come back to Earth.
The Psyche mission may present unique insights for a future industry in asteroid mining.
Most asteroids are made of rock or ice. Psyche is special, as it is composed almost entirely of metal.
Metal is much harder than rock, so the morphology and crater formation on a metallic asteroid may be quite different from those of a rocky object.
Psyche holds one of the big mysteries of the universe – as one of the only mainly metallic objects in space, how did an asteroid like this form?
The metallic composition is interesting because Earth’s core is made of up to 95% metal (iron and nickel) as well. Unfortunately, Earth’s core is 1,864 miles below the crust and mantle, so we can’t directly study it.
Seeking answers to the origins of an asteroid like psyche may help us unlock answers to our own planet’s formation – how might planetary cores have formed?
The metallic core of Earth is unreachable, so we can only indirectly observe its unique properties, magnetic field, etc. By exploring a metal core that resembles that of Earth, but isn’t surrounded by the mantle and crust, we may gain a better understanding of our own planet, and even the formation of other rocky planets like Earth.
The Asteroid Belt
Psyche is located within the asteroid belt, an aggregation of rocky debris of various sizes between the orbits of Mars and Jupiter.
Jupiter, with such a strong gravitational field, plays a large role in protecting Earth from experiencing too many asteroid impacts. Jupiter’s proximity to the asteroid belt means it attracts a large percentage of rogue asteroids, keeping Earth out of harm’s way.
While moving through space, asteroids and comets smash into each other at 11000 miles per hour, causing the surfaces to have contours and craters from these impacts.
What Does 16-Psyche Look Like?
Humans have never visited a celestial object like this up close, so we literally can only guess what features the images might show.
16-Psyche is one of the largest metallic asteroids – an M-type asteroid, meaning it is made up primarily of metals such as iron and nickel, along with other constituents.
Given that the asteroid is made primarily of nickel/iron metal, scientists can hypothesize that 16-Psyche may resemble nickel/iron meteorites that have hit Earth.
The "16" indicates that it was the 16th asteroid to be discovered, back in the 19th century.
Comparing the unexplored asteroid to meteorites that we have directly observed, we may expect to find octahedral crystal structures (known as Widmanstätten patterns), and potentially even crystals embedded within the rock.
How big is 16-Psyche?
Not a perfect sphere, Psyche has an average diameter of 139 miles and is 3% the mass of the moon.
Technologies used during the Psyche Mission
The spacecraft, named Psyche after the asteroid itself, is being built for NASA by Maxar Technologies in Palo Alto, California.
With the ultimate purpose of testing hypotheses for how 16-Psyche was formed, the spacecraft will use the following tools to study the 16-Psyche asteroid:
- multispectral imager
- gamma-ray spectrometer
- neutron spectrometer
- X-Ray / radio instrument (for gravity measurement)
- solar electric propulsion mechanism (ion thrusters)
Multispectral imaging technologies are able to capture images including wavelength data within and beyond the visible light spectrum.
Humans are able to see between 400-700 nm wavelengths of light; however, imaging beyond these wavelengths into the UV or infrared range can allow scientists to gather information about the greater electromagnetic spectrum.
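As a rough illustration of how those bands divide up (the cutoffs below simply use the approximate 400-700 nm visible range mentioned above and are not an instrument specification):

// Classify a wavelength (in nanometres) into a broad spectral band.
function spectralBand(wavelengthNm: number): string {
  if (wavelengthNm < 400) return "ultraviolet";
  if (wavelengthNm <= 700) return "visible";
  return "infrared";
}

console.log(spectralBand(250)); // ultraviolet
console.log(spectralBand(550)); // visible (green light)
console.log(spectralBand(950)); // infrared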
Spectrometers are tools that measure light. Although there are many different types of spectrometers, the name is usually glossed as “light measuring”: the root word “spectrum” comes from Latin, meaning an image or appearance, and the word “meter” comes from Greek, meaning “a measure”.
The spectrometers used on this mission will identify the way that light reflects off the asteroid to identify its physical and chemical composition.
A magnetometer is an instrument used for measuring magnetic forces, especially the Earth’s magnetism.
The spacecraft will use a magnetometer to measure 16-Psyche’s magnetic field and see how it might resemble Earth’s.
Although NASA has used ion thrusters in the past for deep space missions, such as the one to Ceres, another body in the asteroid belt, this will be the first time a mission has used Hall thrusters to go into deep space, and it will use xenon gas as propellant.
Hall thrusters are commonly used in Earth orbiting satellites. SpaceX Starlink satellites famously use hall thrusters to alter their orbit and trajectory.
Ion thrusters are the ideal propulsion system for long-term missions because they allow for a slow but consistent and energy efficient acceleration, allowing the spacecraft to reach a higher max velocity. These ion thrusters are often solar powered via electricity.
Whereas chemical propulsion (which rockets use to take off from Earth) is useful for short bursts of power to reach orbit, such systems are not ideal for sustaining long-distance space travel because the fuel would take up much more space than we have room for.
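A back-of-the-envelope calculation shows why that trade-off favors ion propulsion on long cruises; the thrust and mass below are assumed round numbers for illustration, not official Psyche figures.

// Illustrative only: thrust and spacecraft mass are assumed round numbers,
// and propellant mass loss and gravity are ignored.
const thrustNewtons = 0.25;     // a Hall thruster delivers roughly a quarter of a newton
const spacecraftMassKg = 2500;  // assumed spacecraft mass
const secondsPerYear = 3.156e7;

const accel = thrustNewtons / spacecraftMassKg;   // about 1e-4 m/s^2
const deltaVPerYear = accel * secondsPerYear;     // velocity gained per year of thrusting

console.log(`acceleration: ${accel.toExponential(1)} m/s^2`);
console.log(`delta-v after one year of thrust: ${Math.round(deltaVPerYear)} m/s`);
// Roughly 3 km/s per year from a thrust you could barely feel – the engine can
// keep this up for months on a modest amount of xenon, which chemical rockets cannot.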
If the mission leaves in August 2022 as planned, it will take 3.5 years to reach Psyche, arriving in January 2026.
In addition to determining the feasibility of possible mining missions in the future, scientists hope that studying a metal based asteroid will uncover insights into Earth’s core, which is also composed mainly of metal.
How Much Money is 16-Psyche Worth?
According to one of NASA’s principal investigators for the mission, Lindy Elkins-Tanton, the fact that an asteroid contains trillions of dollars worth of precious metals doesn’t mean that it’s going to make everyone on Earth rich.
Elkins-Tanton, who was recently interviewed by the Miami Herald, stated that although Psyche contains massive amounts of iron, nickel, copper, even gold and platinum, humans will not be able to benefit financially from it for two reasons:
- The logistics of mining and transporting that amount of cargo back to Earth is impossible from a practical standpoint. It would take us decades or centuries to develop and start this process.
- If we were to magically have all that metal on Earth, it would crash the markets due to an oversupply, making metals practically worthless.
Although Elkins-Tanton’s logic is sound, perhaps there is a scenario where humans are able to overcome the logistics of asteroid mining for our benefit without the downside of flooding the markets.
The way this scenario could play out is through exponential technological progress.
In the 1950s, for example, people never imagined that we would carry computers around in our pockets, let alone the industries that would grow up around them.
In 2021, the industry for computer chips is worth around $500 billion. Anyone who says we could have predicted this is lying.
Similarly, it is impossible to imagine what industries and markets will exist, or what manufacturing and commerce will look like as humans embark into the Space Age.
Perhaps someday, within a couple generations or even sooner, humans will be mining asteroids and using the materials to build unimaginable technologies that only exist in the world of science fiction today.
- NASA JPL
- Psyche Mission site
- The Size of Psyche asteroid
- Psyche Asteroid Miami Herald
- Computer chips
- The Psyche mission blog on Medium |
The following is a re-publication, in substantial part, of the history of the Piqua Shawnee Tribe of Alabama as originally published by the Alabama Indian Affairs Commission, except as edited and modified herein.
History of the Early Shawnee People in Alabama:
Most historians classify early Shawnee Tribes as a nomadic people because historians have found credible evidence of Shawnees moving about in North America, settling in various places, and often retaining small family units for long periods of time.
(Note: Blue Jacket, or Weyapiersenwah (1743 – 1810), was a war chief of the early Shawnee in the Ohio Valley who fought in defense of Shawnee land. He was an important predecessor to the famous Shawnee leader Tecumseh. There is no known historical record of Blue Jacket having lived in Alabama.)
(Image: Tecumseh, by Benson Lossing in 1848, based on an 1808 drawing.)
The State of Alabama has long been the home of many Shawnee people. In fact, some historians report that perhaps the Shawnees have inhabited Alabama for a longer period of time than any other geographic region. Some archaeologists set the date of 1685 as the first evidence of Shawnee settlement in Alabama. However, oral tradition recounts that the Shawnee have lived in Alabama much longer than that. Ancient burial sites that used burial methods common to the Shawnee have been located in several sections of the State of Alabama. Early accounts can be confusing since what is now called Alabama was once a part of the Georgia Territory. Several early maps show Shawnee settlements in what is now the State of Alabama.
Early French and English maps show several Shawnee towns in what would be considered Upper Creek territory in Alabama. Some of the most notable were near modern Alabama towns. One village was near present day Talladega and was known in English as Shawnee Town. Another town was near Sylacauga. In 1750 the French took a census mentioning the Shawnee at Sylacauga as well as enumerating another Shawnee town called Cayomulgi (currently spelled Kyamulga Town) that was located nearby. Kiamulgatown was also listed in an 1832 census. A 1761 English census names Tallapoosa Town. This town was also named in a 1792 census by Marbury. There are French military records that mention a Shawnee presence at Wetumpka near Fort Toulouse. In most cases the traders called Alabama Indians "Creeks" because they lived on the numerous creeks and waterways in the area. Many of these "Creeks" were not of the same tribe or nation. Rather they went by a large number of names. Each group maintained their own unique heritage while living side by side with their neighbors.
Piqua Shawnee Tribe, Today:
Now, in the 21st century, there are many descendants who still call the State of Alabama home. Many of their family stories are varied. Some avoided the forced march of the Trail of Tears. Some families escaped into the Cumberland mountains, others hid
in swamps or less traveled places. A careful study of southeastern history shows that not all settlers agreed with President Andrew Jackson's removal policy. While many people did not escape the forced removal, some did. After the turmoil subsided some families returned. Many families chose to live in outlying rural areas where there was little government scrutiny and their neighbors were not too curious. While a lot was lost, family histories and ways were passed down.
It is out of that background that the members of the Piqua Shawnee Tribe of Alabama live and work to preserve their unique heritage. The tribe consists of clans that live in several states and Canada. The majority of the Piqua Shawnee Tribe lives in the State of Alabama, with members also living in Tennessee, Kentucky, Ohio, Indiana, Missouri, Texas, Maryland, and South Carolina. Because they are so widely dispersed, they hold at least four tribal gatherings each year in alternating geographic locations, thereby preventing any of their people from having to travel much further than the others.
If you would like to read more about the Shawnee people, you may find the following books helpful:
1. Shawnee!!, James Howard, Ohio University Press, |
Centrifugal Compressors are the turbomachines also known as turbo-compressors, and belong to the roto-dynamic class of compressors. In these compressors the required pressure rise takes place due to the continuous conversion of angular momentum imparted to the working fluid by a high-speed impeller into pressure. These compressors are used in small gas-turbines, turbochargers, chiller units, in the process and paper industries, oil & gas industries and others.
The design and manufacturing of such compressors are always challenging because of their 3-dimensional shapes, high rotational speeds that interact with different loss mechanisms, and stringent working environments. In many circumstances, it is necessary to analyze an existing compressor, with the end goal being to redesign it, enhance its performance, or use it in completely different applications. In order to meet such requirements, reverse engineering is a viable option. With reverse engineering, one can also review a competitor's design to remain competitive in the market.
Reverse engineering allows us to collect incomplete or non-existing design data and manufacture an accurate recreation, safely, of the original product or component.
Sometimes, it is also referred to as back engineering, in which centrifugal compressors or any other product are deconstructed to extract design information from them. Oftentimes, reverse engineering involves deconstructing individual components like the impeller or diffuser of larger compressors. End-users often use this approach when purchasing a replacement impeller or any other compressor part from an OEM is not an option. In some cases, for older impellers that have not been manufactured for 20 years or more, the original 2D drawings are no longer available. When this is the case, the only way to obtain the design of an original compressor is through reverse engineering.
Reverse engineering requires a series of steps to gather precise information on a product’s dimensions. Once collected, the data can be stored in digital archives. Figure 1 (left) shows the typical process of reverse engineering. In figure 1 (right), one can see the scanning process of the centrifugal impeller using a laser scanner.
To reverse engineer an impeller or any other part of a compressor, an organization will typically acquire the component and take it apart to examine its internal mechanisms. This way, engineers can unveil information about the original design and construction of the product. One can start by analyzing the dimensions and attributes of the impeller and making measurements of the blade widths, diameters and angles, as these dimensions often relate to the compressor's performance.
In the current scenario, 3D scanning technologies are usually preferred to extract the information of existing models. With three-dimensional scanners, engineers can acquire accurate readings of the compressor specifications and have this information automatically logged in their databases. After all the pertinent information has been gathered and recorded, engineers can use this data to create CAD drawings for subsequent analysis and development. These digital models help to unveil design intent and inform the creation of a reverse-engineered component. After the CAD model generation, manufacturing is the final step in which the object can be re-created to replace an original impeller to provide like-new performance.
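As a simplified, hypothetical example of how raw scan data becomes a design dimension, the snippet below estimates an impeller's outer diameter from (x, y) points sampled around its rim; it assumes the points cover the full circumference fairly evenly, and the coordinates are invented.

// Estimate an outer diameter from scanned rim points (all values in millimetres).
type Point = { x: number; y: number };

function estimateDiameter(rimPoints: Point[]): number {
  const n = rimPoints.length;
  // Approximate the centre as the centroid of the sampled points
  const cx = rimPoints.reduce((sum, p) => sum + p.x, 0) / n;
  const cy = rimPoints.reduce((sum, p) => sum + p.y, 0) / n;
  // The mean distance from that centre estimates the rim radius
  const meanRadius =
    rimPoints.reduce((sum, p) => sum + Math.hypot(p.x - cx, p.y - cy), 0) / n;
  return 2 * meanRadius;
}

// Invented sample of scanned rim points
const scannedRim: Point[] = [
  { x: 120.1, y: 0.2 },   { x: 85.0, y: 85.1 },   { x: -0.1, y: 120.0 },  { x: -84.9, y: 84.8 },
  { x: -120.2, y: -0.1 }, { x: -85.1, y: -84.9 }, { x: 0.2, y: -119.9 },  { x: 84.8, y: -85.0 },
];

console.log(`estimated outer diameter: ${estimateDiameter(scannedRim).toFixed(1)} mm`); // about 240 mm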
Is Reverse Engineering really an Easy Task?
As mentioned above, the reverse-engineering process has several steps, with the most challenging task being to convert the scanned data of an impeller or any other component into a useful form. Generally, this is done by using CAD programs which process scanned point clouds into 3D surface models, but this is a very time- and resource-consuming task. Accomplishing this process quickly presents a serious challenge. A designer can, however, save time if a program like that found in the AxSTREAM platform is used to extract the impeller's geometrical and cascade information and create a full 3D model as shown in figure 2. After creating the full model, the designer can perform 1D/2D/3D analysis of the existing turbomachine, and redesign it to enhance the performance or for completely different applications.
While the concept of a digital twin has been around since 2002, it is only thanks to the Internet of Things (IoT) that it has become cost-effective to implement. Quite simply, a digital twin is a virtual model of a process or product, such as a compressor, and its operation. This pairing of the virtual and physical worlds allows data analysis and system monitoring to head off problems before they even occur. Digital twins can also be used to prevent downtime, develop new opportunities and even plan for the future through simulation. The digital twin enables engineers to monitor a turbomachine's performance over time, thanks to advances in digital technology.
Designers can make more informed choices for future designs and ensure simulations are accurate and reflective of the real world. More importantly, digital twins make predictive maintenance possible. Instead of redundant servicing and maintenance in an effort to avoid downtime, designers can visualize exactly when and where maintenance is needed, instead of making blind guesses and needless budget expenditures. Once any turbomachine has been reverse engineered, a digital twin of it can be created in a program such as AxSTREAM, and engineers can pinpoint what kind of maintenance and servicing needs to happen, and when it needs to be done.
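A toy sketch of that predictive-maintenance idea is shown below; the expected pressure ratio, tolerance, and readings are all invented for illustration and do not represent AxSTREAM functionality.

// Flag operating points that have drifted too far below the digital twin's expectation.
interface Reading { hoursRun: number; pressureRatio: number }

function flagForInspection(readings: Reading[], expectedRatio: number, maxDrop = 0.05): Reading[] {
  // Keep any reading whose pressure ratio is more than maxDrop (fractionally)
  // below what the model predicts for this operating point.
  return readings.filter(r => (expectedRatio - r.pressureRatio) / expectedRatio > maxDrop);
}

const history: Reading[] = [
  { hoursRun: 1000, pressureRatio: 3.48 },
  { hoursRun: 5000, pressureRatio: 3.41 },
  { hoursRun: 9000, pressureRatio: 3.22 }, // noticeably below expectation
];

flagForInspection(history, 3.5).forEach(r =>
  console.log(`inspect compressor near ${r.hoursRun} h of operation (ratio ${r.pressureRatio})`));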
Reverse engineering is a viable option to develop new compressor designs based on older design strategies, to upgrade existing machines, ensure easy maintenance and much more. Reverse engineering along with creating a digital twin can enhance design quality, reduce product development time, and lower compressor maintenance cost.
Here, AxSTREAM can be a great choice for designers to reverse engineer turbomachinery and create digital twins in a very short timespan, generating valuable data that can be used to show the path forward for design upgrades, and assure engineers that their machines will continue to run well into the future. |
Biology can be an interesting subject for students to learn, especially Class 9 Biology. In Class 9 Biology students will learn concepts like the chemical changes of food, movement and location, the mystery of growth, towards a healthy life and so on. It can also be a bit of a tough subject to learn, and the main aim of Class 9 Biology is to bring back the interest of the students in the subject, so that they can continue learning it in their higher studies. A student who wishes to pass the exams with flying colours will now look beyond the SCERT Textbooks Class 9 and solve previous year question papers and even model question papers. Adding to it is the Kerala Class 9 Biology Important Questions.
These questions help to build the confidence of the students, as they will be more acquainted with the type of questions asked. They will also be able to gauge their performance and study accordingly, thus working to bridge their knowledge gaps in the subject. Most of the Biology important questions are created after analyzing the often-repeated questions and those that we believe are more likely to be repeated again. One can also use the Kerala Class 9 Syllabus to see where they stand on a subject, what they have learnt as per expectations, how much more they need to learn for the exams, and so on.
Meanwhile, here is a list of the important questions we have compiled on Class 9 Biology. Do take a look:
- (a) Name the teeth from the pics below:
(b) Write their functions.
2. Name the scientists responsible for the following contributions:
- Oxygen is formed as a result of photosynthesis
- Source of oxygen evolved during photosynthesis is water
3. From the given figure, prepare a flowchart of the passage of food from the mouth to the end of the rectum.
4. What is the chemistry behind the sweetness of rice when chewed for some time without any curries?
5. Prepare a poster for the awareness of the subject “Air Pollution and Disease”
6. Analyze the illustration and answer the questions:
- Identify and write the name of the enzyme indicated as “X”.
7. Identify the blood vessel given in the pic below:
- Write the name
8. Complete the illustration showing the chemical changes of glucose in plants.
9. Analyze the diagram of chloroplast given below:
Answer the questions.
- Identify A and B and write their names
- Explain the process of photosynthesis that takes place within A.
10. Give an explanation for the following statement.
“Consumption of fatty food causes thrombosis”.
11. Analyze the news paper report and answer the questions:
Oceans turning into a waste bin: Tonnes of waste materials reach the ocean every day. Due to these wastes, the plants and animals of the ocean __________
- Oceanic pollution does not affect aquatic organisms alone. Why?
- Prepare a message against water pollution
12. Observe the diagram and answer the questions:
- Which part is indicated here?
- Which are the nutrients that are absorbed into lacteal and blood capillaries?
13. Diagram of a human tooth is given below:
Copy the diagram. Identify the parts using the following hints and labels.
- Soft connective tissue
- Hardest part
14. Complete the illustration related to functions of blood
15. Analyze the following statements and find the correct option related to bile juice.
- Bile is the digestive juice created by pancreas
- Bile contains an enzyme called Amylase
- Bile makes the food alkaline
16. Draw a well-labelled and neat diagram of a plant or an animal cell. Label any 6 parts.
17. Explain why respiration is said to be the opposite of photosynthesis
18. Draw the emblem of the Red cross. State 2 main functions of the Red Cross.
19. (i)Why is fertilization in plants known as double fertilization?
(ii) What happens to the following after fertilization?
(a) Ovules (b) Calyx (c) Petals (d) Stamens
20. Name the chief pollinating agent of the following plants:-
(a) Maize (b) Sweet Pea (c) Vallisneria (d) Dahlia |
An ovarian cyst is a fluid-filled sac that develops on a woman's ovary. They're very common and don't usually cause any symptoms.
Most ovarian cysts occur naturally and disappear in a few months without needing any treatment.
The ovaries are two small, bean-shaped organs that are part of the female reproductive system. A woman has two ovaries – one each side of the womb (uterus).
The ovaries have two main functions:
- to release an egg approximately every 28 days as part of the menstrual cycle
- to release the female sex hormones, oestrogen and progesterone, which play an important role in female reproduction
Ovarian cysts may affect both ovaries at the same time, or they may only affect one.
Types of ovarian cyst
The two main types of ovarian cyst are:
- functional ovarian cysts – cysts that develop as part of the menstrual cycle and are usually harmless and short-lived; these are the most common type
- pathological ovarian cysts – cysts that form as a result of abnormal cell growth; these are much less common
Ovarian cysts can sometimes also be caused by an underlying condition, such as endometriosis.
The vast majority of ovarian cysts are non-cancerous (benign), although a small number are cancerous (malignant). Cancerous cysts are more common in women who have been through the menopause.
Read more about the causes of ovarian cysts.
Treating ovarian cysts
Whether an ovarian cyst needs to be treated will depend on:
- its size and appearance
- whether you have any symptoms
- whether you've been through the menopause
In most cases, the cyst disappears after a few months. A follow-up ultrasound scan may be used to confirm this.
As post-menopausal women have a slightly higher risk of ovarian cancer, regular ultrasound scans and blood tests are usually recommended over the course of a year to monitor the cyst.
Surgical treatment to remove the cysts may be needed if they're large, causing symptoms, or potentially cancerous.
Read more about treating ovarian cysts.
Ovarian cysts and fertility
Ovarian cysts don't usually prevent you from getting pregnant, although they can sometimes make it harder to conceive.
If you need an operation to remove your cysts, your surgeon will aim to preserve your fertility whenever possible. This may mean removing just the cyst and leaving the ovaries intact, or only removing one ovary.
In some cases, surgery to remove both your ovaries may be necessary, in which case you'll no longer produce any eggs. Make sure you talk to your surgeon about the potential effects on your fertility before your operation.
An ovarian cyst usually only causes symptoms if it splits (ruptures), is very large, or blocks the blood supply to the ovaries.
In these cases, you may have:
- pelvic pain – this can range from a dull, heavy sensation to a sudden, severe and sharp pain
- pain during sex
- difficulty emptying your bowels
- a frequent need to urinate
- heavy periods, irregular periods or lighter periods than normal
- bloating and a swollen tummy
- feeling very full after only eating a little
- difficulty getting pregnant – although fertility is unaffected in most women with ovarian cysts (see ovarian cysts and fertility)
See your doctor if you have persistent symptoms of an ovarian cyst.
If you have sudden, severe pelvic pain you should immediately contact either:
- your doctor or local out-of-hours service
- non-emergency medical services by telephone
- your nearest accident and emergency (A&E) department
Causes and types
Ovarian cysts often develop naturally in women who have monthly periods.
They can also affect women who have been through the menopause.
Types of ovarian cyst
There are many different types of ovarian cyst, which can be categorised as either:
- functional cysts
- pathological cysts
Functional ovarian cysts are linked to the menstrual cycle. They affect girls and women who haven't been through the menopause, and are very common.
Each month, a woman's ovaries release an egg, which travels down the fallopian tubes into the womb (uterus), where it can be fertilised by a man's sperm.
Each egg forms inside the ovary in a structure known as a follicle. The follicle contains fluid that protects the egg as it grows and it bursts when the egg is released.
However, sometimes a follicle doesn't release an egg, or it doesn't discharge its fluid and shrink after the egg is released. If this happens, the follicle can swell and become a cyst.
Functional cysts are non-cancerous (benign) and are usually harmless, although they can sometimes cause symptoms such as pelvic pain. Most will disappear in a few months without needing any treatment.
Pathological cysts are cysts caused by abnormal cell growth and aren't related to the menstrual cycle. They can develop before and after the menopause.
Pathological cysts develop from either the cells used to create eggs or the cells that cover the outer part of the ovary.
They can sometimes burst or grow very large and block the blood supply to the ovaries.
Pathological cysts are usually non-cancerous, but a small number are cancerous (malignant) and are often surgically removed.
Conditions that cause ovarian cysts
In some cases, ovarian cysts are caused by an underlying condition such as endometriosis.
Endometriosis occurs when pieces of the tissue that line the womb (endometrium) are found outside the womb in the fallopian tubes, ovaries, bladder, bowel, vagina or rectum. Blood-filled cysts can sometimes form in this tissue.
Polycystic ovary syndrome (PCOS) is a condition that causes lots of small, harmless cysts to develop on your ovaries. The cysts are small egg follicles that don't grow to ovulation and are the result of altered hormone levels.
If your doctor thinks you may have an ovarian cyst, you'll probably be referred for an ultrasound scan, carried out by using a probe placed inside your vagina.
If a cyst is identified during the ultrasound scan, you may need to have this monitored with a repeat ultrasound scan in a few weeks, or your doctor may refer you to a gynaecologist (a doctor who specialises in female reproductive health).
If there's any concern that your cyst could be cancerous, your doctor will also arrange blood tests to look for high levels of chemicals that can indicate ovarian cancer. However, having high levels of these chemicals doesn't necessarily mean you have cancer, as high levels can also be caused by non-cancerous conditions such as:
- a pelvic infection
- your period
In most cases, ovarian cysts disappear in a few months without the need for treatment.
Whether treatment is needed will depend on:
- its size and appearance
- whether you have any symptoms
- whether you've had the menopause – as post-menopausal women have a slightly higher risk of ovarian cancer
In most cases, a policy of "watchful waiting" is recommended.
This means you won't receive immediate treatment, but you may have an ultrasound scan a few weeks or months later to check if the cyst has gone.
Women who have been through the menopause may be advised to have ultrasound scans and blood tests every four months for a year, as they have a slightly higher risk of ovarian cancer.
If the scans show that the cyst has disappeared, further tests and treatment aren't usually necessary. Surgery may be recommended if the cyst is still there.
Large or persistent ovarian cysts, or cysts that are causing symptoms, usually need to be surgically removed.
Surgery is also normally recommended if there are concerns that the cyst could be cancerous or could become cancerous.
There are two types of surgery used to remove ovarian cysts:
- a laparoscopy
- a laparotomy
These are usually carried out under general anaesthetic.
Most cysts can be removed using laparoscopy. This is a type of keyhole surgery where small cuts are made in your tummy and gas is blown into the pelvis to allow the surgeon to access your ovaries.
A laparoscope (a small, tube-shaped microscope with a light on the end) is passed into your abdomen so the surgeon can see your internal organs. The surgeon then removes the cyst through the small cuts in your skin.
After the cyst has been removed, the cuts will be closed using dissolvable stitches.
A laparoscopy is preferred because it causes less pain and has a quicker recovery time. Most women are able to go home on the same day or the following day.
If your cyst is particularly large, or there's a chance it could be cancerous, a laparotomy may be recommended.
During a laparotomy, a single, larger cut is made in your tummy to give the surgeon better access to the cyst.
The whole cyst and ovary may be removed and sent to a laboratory to check whether it's cancerous. Stitches or staples will be used to close the incision.
You may need to stay in hospital for a few days after the procedure.
After the ovarian cyst has been removed, you'll feel pain in your tummy, although this should improve in a day or two.
Following laparoscopic surgery, you'll probably need to take things easy for two weeks. Recovery after a laparotomy usually takes longer, possibly around six to eight weeks.
If the cyst is sent off for testing, the results should come back in a few weeks and your consultant will discuss with you whether you need any further treatment.
Contact your doctor if you notice the following symptoms during your recovery:
- heavy bleeding
- severe pain or swelling in your abdomen
- a high temperature (fever)
- dark or smelly vaginal discharge
These symptoms may indicate an infection.
If you haven't been through the menopause, your surgeon will try to preserve as much of your reproductive system as they can. It's often possible to just remove the cyst and leave both ovaries intact, which means your fertility should be largely unaffected.
If one of your ovaries needs to be removed, the remaining ovary will still release hormones and eggs as usual. Your fertility shouldn't be significantly affected, although you may find it slightly harder to get pregnant.
Occasionally, it may be necessary to remove both ovaries in women who haven't been through the menopause. This triggers an early menopause and means you no longer produce any eggs.
However, it may still be possible to have a baby by having a donated egg implanted into your womb. This will need to be discussed with specialists at a centre that specialises in assisted reproduction techniques.
In women who have been through the menopause, both ovaries may be removed because they no longer produce eggs.
Make sure you discuss your fertility concerns with your surgeon before your operation.
If your test results show that your cyst is cancerous, both of your ovaries, your womb (uterus) and some of the surrounding tissue may need to be removed.
This would trigger an early menopause and mean that you're no longer able to get pregnant.
Read more about treating ovarian cancer.
Wildfires burning large swaths of Russia are generating so much smoke, they're visible from space, new images from NASA's Earth Observatory reveal.
Since June, more than 100 wildfires have raged across the Arctic, which is especially dry and hot this summer. In Russia alone, wildfires are burning in 11 of the country's 49 regions, meaning that even in fire-free areas, people are choking on smoke that is blowing across the country.
The largest fires — blazes likely ignited by lightning — are located in the regions of Irkutsk, Krasnoyarsk and Buryatia, according to the Earth Observatory. These conflagrations have burned 320 square miles (829 square kilometers), 150 square miles (388 square km) and 41 square miles (106 square km) in these regions, respectively, as of July 22.
The above natural-color image, taken on July 21, shows plumes rising from fires on the right side of the photo. Winds carry the smoke toward the southwest, where it mixes with a storm system. The image was captured with the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP, a weather satellite operated by the U.S. National Oceanic and Atmospheric Administration.
The Russian city of Krasnoyarsk is under a layer of haze, the Earth Observatory reported. And while Novosibirsk, Siberia's largest city, doesn't have any fires as of now, smoke carried there by the winds caused the city's air quality to plummet.
Wildfires are also burning in Greenland and parts of Alaska, following what was the hottest June in recorded history. It's common for fires to burn during the Arctic's summer months, but the number and extent this year are "unusual and unprecedented," Mark Parrington, a senior scientist at the Copernicus Atmosphere Monitoring Service (CAMS), a part of the European Union's Earth observation program, told CNN.
These fires are taking a toll on the atmosphere; they've released about 100 megatons of carbon dioxide from June 1 to July 21, which is roughly equivalent to the amount of carbon dioxide Belgium released in 2017, according to CAMS, CNN reported.
The Arctic is heating up faster than other parts of the world, making it easier for fires to thrive there. In Siberia, for example, the average June temperature this year is nearly 10 degrees Fahrenheit (5.5 degrees Celsius) hotter than the long-term average between 1981 and 2010, Claudia Volosciuk, a scientist with the World Meteorological Organization, told CNN.
Many of this summer's fires are burning farther north than usual, and some appear to be burning in peat soils, rather than in forests, Thomas Smith, an assistant professor of environmental geography at the London School of Economics, told USA Today. This is a dangerous situation, because whereas forests might typically burn for a few hours, peat soils can blaze for days or even months, Smith said.
Moreover, peat soils are known carbon reservoirs. As they burn, they release carbon, "which will further exacerbate greenhouse warming, leading to more fires," Smith said.
Originally published on Live Science. |
In mathematics, equality is a relationship between two quantities or, more generally two mathematical expressions, asserting that the quantities have the same value, or that the expressions represent the same mathematical object. The equality between A and B is written A = B, and pronounced A equals B. The symbol "=" is called an "equals sign". Thus there are three kinds of equality, which are formalized in different ways.
- Two symbols refer to the same object.
- Two sets have the same elements.
- Two expressions evaluate to the same value, such as a number, vector, function or set.
These may be thought of as the logical, set-theoretic and algebraic concepts of equality respectively.
Equality in mathematical logic
Equality is defined so that things which have the same properties are equal. If some form of Leibniz's law is added as an axiom, the assertion of this axiom rules out "bare particulars"—things that have all and only the same properties but are not equal to each other—which are possible in some logical formalisms. The axiom states that two things are equal if they have all and only the same properties. Formally:
- Given any x and y, x = y if, for every predicate P, P(x) if and only if P(y).
In this law, the connective "if and only if" can be weakened to "if"; the modified law is equivalent to the original.
Instead of considering Leibniz's law as an axiom, it can also be taken as the definition of equality. The property of being an equivalence relation, as well as the properties given below, can then be proved: they become theorems. If a=b, then a can replace b and b can replace a.
Some basic logical properties of equality
The substitution property states:
- For any quantities a and b and any expression F(x), if a = b, then F(a) = F(b) (if both sides make sense, i.e. are well-formed).
Some specific examples of this are:
- For any real numbers a, b, and c, if a = b, then a + c = b + c (here F(x) is x + c);
- For any real numbers a, b, and c, if a = b, then a − c = b − c (here F(x) is x − c);
- For any real numbers a, b, and c, if a = b, then ac = bc (here F(x) is xc);
- For any real numbers a, b, and c, if a = b and c is not zero, then a/c = b/c (here F(x) is x/c).
The reflexive property states:
- For any quantity a, a = a.
This property is generally used in mathematical proofs as an intermediate step.
The symmetric property states:
- For any quantities a and b, if a = b, then b = a.
The transitive property states:
- For any quantities a, b, and c, if a = b and b = c, then a = c.
These three properties were originally included among the Peano axioms for natural numbers. Although the symmetric and transitive properties are often seen as fundamental, they can be proved if the substitution and reflexive properties are assumed instead.
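To see why, here is a short sketch (added for illustration, using the substitution and reflexive properties exactly as stated above, with F an arbitrary expression):

```latex
% Symmetry: assume a = b and take F(x) to be the statement "x = a".
% F(a) is "a = a", which holds by reflexivity; substitution then gives F(b), i.e. "b = a".
(a = b) \;\wedge\; (a = a) \;\Longrightarrow\; (b = a)

% Transitivity: assume a = b and b = c, and take F(x) to be the statement "x = c".
% F(b) is "b = c", which holds by assumption; substitution (using a = b) gives F(a), i.e. "a = c".
(a = b) \;\wedge\; (b = c) \;\Longrightarrow\; (a = c)
```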
Equalities as predicates
When A and B are not fully specified or depend on some variables, equality is a proposition, which may be true for some values and false for some other values. Equality is a binary relation, or, in other words, a two-arguments predicate, which may produce a truth value (false or true) from its arguments. In computer programming, its computation from two expressions is known as comparison.
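For illustration (this example, and the choice of Python, are additions and not part of the original text), equality used as a two-argument predicate is exactly what a comparison does in a program:

```python
def equal(a, b) -> bool:
    """Equality as a binary predicate: returns a truth value for a given pair of arguments."""
    return a == b

# The same expression may be true for some values of the variables and false for others.
x = 3
print(equal(x * x, 9))    # True  (the proposition holds for x = 3)
print(equal(x * x, 10))   # False (the proposition fails for x = 3)

# In computer programming, evaluating this predicate on two expressions is called a comparison.
print(equal((1, 2, 3), (1, 2, 3)))  # True: tuples compare element by element
```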
Equality in set theory
Equality of sets is axiomatized in set theory in two different ways, depending on whether the axioms are based on a first-order language with or without equality.
Set equality based on first-order logic with equality
In FOL with equality, the axiom of extensionality states that two sets which contain the same elements are the same set.
- Logic axiom: x = y ⇒ ∀z, (z ∈ x ⇔ z ∈ y)
- Logic axiom: x = y ⇒ ∀z, (x ∈ z ⇔ y ∈ z)
- Set theory axiom: (∀z, (z ∈ x ⇔ z ∈ y)) ⇒ x = y
Incorporating half of the work into the first-order logic may be regarded as a mere matter of convenience, as noted by Lévy.
- "The reason why we take up first-order predicate calculus with equality is a matter of convenience; by this we save the labor of defining equality and proving all its properties; this burden is now assumed by the logic."
Set equality based on first-order logic without equality
In FOL without equality, two sets are defined to be equal if they contain the same elements. Then the axiom of extensionality states that two equal sets are contained in the same sets.
- Set theory definition: "x = y" means ∀z, (z ∈ x ⇔ z ∈ y)
- Set theory axiom: x = y ⇒ ∀z, (x ∈ z ⇔ y ∈ z)
Equality in algebra and analysis
When A and B may be viewed as functions of some variables, then A = B means that A and B define the same function. Such an equality of functions is sometimes called an identity. An example is (x + 1)² = x² + 2x + 1.
An equation is the problem of finding values of some variables, called unknowns, for which the specified equality is true. Equation may also refer to an equality relation that is satisfied only for the values of the variables that one is interested in. For example, x² + y² = 1 is the equation of the unit circle.
There is no standard notation that distinguishes an equation from an identity or other use of the equality relation: a reader has to guess an appropriate interpretation from the semantics of expressions and the context. An identity is asserted to be true for all values of variables in a given domain. An "equation" may sometimes mean an identity, but more often it specifies a subset of the variable space to be the subset where the equation is true.
In some cases, one may consider as equal two mathematical objects that are only equivalent for the properties that are considered. This is, in particular, the case in geometry, where two geometric shapes are said to be equal when one may be moved to coincide with the other. The word congruence is also used for this kind of equality.
There are some logic systems that do not have any notion of equality. This reflects the undecidability of the equality of two real numbers defined by formulas involving the integers, the basic arithmetic operations, the logarithm and the exponential function. In other words, there cannot exist any algorithm for deciding such an equality.
The binary relation "is approximately equal" between real numbers or other things, even if more precisely defined, is not transitive (it may seem so at first sight, but many small differences can add up to something big). However, equality almost everywhere is transitive.
Relation with equivalence and isomorphism
Viewed as a relation, equality is the archetype of the more general concept of an equivalence relation on a set: those binary relations that are reflexive, symmetric, and transitive. The identity relation is an equivalence relation. Conversely, let R be an equivalence relation, and let us denote by [x]R the equivalence class of x, consisting of all elements z such that x R z. Then the relation x R y is equivalent to the equality [x]R = [y]R. It follows that equality is the finest equivalence relation on any set S, in the sense that it is the relation that has the smallest equivalence classes (every class is reduced to a single element).
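As an added illustration (a Python sketch of our own, with names such as `equivalence_classes` chosen for this example), the following snippet partitions a set under an arbitrary equivalence relation; with equality as the relation, every class collapses to a single element, which is the sense in which equality is the finest equivalence relation:

```python
def equivalence_classes(elements, related):
    """Partition `elements` into classes of the equivalence relation `related(x, y)`."""
    classes = []
    for x in elements:
        for cls in classes:
            if related(x, cls[0]):   # comparing with one representative suffices, by transitivity
                cls.append(x)
                break
        else:
            classes.append([x])      # x starts a new class
    return classes

numbers = [0, 1, 2, 3, 4, 5, 6]

# Congruence modulo 3 is an equivalence relation with three classes on this set.
print(equivalence_classes(numbers, lambda a, b: a % 3 == b % 3))
# [[0, 3, 6], [1, 4], [2, 5]]

# Plain equality is the finest equivalence relation: every class is a singleton.
print(equivalence_classes(numbers, lambda a, b: a == b))
# [[0], [1], [2], [3], [4], [5], [6]]
```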
In some contexts, equality is sharply distinguished from equivalence or isomorphism. For example, one may distinguish fractions from rational numbers, the latter being equivalence classes of fractions: the fractions 1/2 and 2/4 are distinct as fractions, as different strings of symbols, but they "represent" the same rational number, the same point on a number line. This distinction gives rise to the notion of a quotient set.
Similarly, the sets {A, B, C} and {1, 2, 3}
are not equal sets – the first consists of letters, while the second consists of numbers – but they are both sets of three elements, and thus isomorphic, meaning that there is a bijection between them, for example A ↦ 1, B ↦ 2, C ↦ 3.
However, there are other choices of isomorphism, such as A ↦ 3, B ↦ 2, C ↦ 1,
and these sets cannot be identified without making such a choice – any statement that identifies them "depends on choice of identification". This distinction, between equality and isomorphism, is of fundamental importance in category theory, and is one motivation for the development of category theory.
- Kleene, Stephen Cole (2002). Mathematical Logic. Mineola, New York: Dover Publications. ISBN 978-0-486-42533-7.
- Lévy, Azriel (2002). Basic Set Theory. Mineola, New York: Dover Publications. ISBN 978-0-486-42079-0.
- Mac Lane, Saunders; Birkhoff, Garrett (1999). Algebra (3rd ed.). Providence, Rhode Island: American Mathematical Society.
- Mazur, Barry (12 June 2007). When Is One Thing Equal to Some Other Thing? (PDF).
- Mendelson, Elliott (1964). Introduction to Mathematical Logic. New York: Van Nostrand Reinhold.
- Rosser, John Barkley (2008). Logic for Mathematicians. Mineola, New York: Dover Publications. ISBN 978-0-486-46898-3.
- Shoenfield, Joseph Robert (2001). Mathematical Logic (2nd ed.). A K Peters. ISBN 978-1-56881-135-2.
Fossil remains of two newly described palaeotheriid mammals that inhabited the subtropical landscape of the Basque region were found to be relatives of horses that trotted the Earth 37 million years ago.
The previously unknown mammals are called paleotheres or 'pseudo-horses' and lived in what was then the European archipelago, when the climate was much warmer, according to the UPV/EHU's Vertebrate Palaeontology research group of the Basque Country. These mammals thrived during the Eocene epoch, after the dinosaurs went extinct; while the dinosaurs were still alive, they may have hindered mammal diversity.
The study published in the Journal of Vertebrate Paleontology suggests that some of that diversification led to the beginning of the ‘pseudo-horses’.
Odd Features of Paleotheres
“Imagine animals similar to horses with three toes, the size of a fox terrier, a Great Dane and a donkey living in a subtropical landscape,” said co-author of the study and paleontologist Ainara Badiola from the Universidad del País Vasco in a press release.
This group of 'odd-toed ungulates' is related to today's zebras, rhinos, donkeys, and horses. Adding to the cast, two of them are now scientifically known as Leptolophus cuestai and Leptolophus franzeni. According to Badiola, many pseudo-horse fossils have been described at the Zambrana site, a town in the province of Álava, in the Basque region of northern Spain.
The Zambrana fossil site has also yielded other mammals from the Eocene epoch, including rodents, marsupials, and even primates.
Like other paleotheres, pseudo-horses were smaller than modern horses and had 'short legs and weird teeth'. Another paleontologist and the study's lead author, Leire Perales-Gogenola from the Universidad del País Vasco, described their molars as having a very high crown covered with a thick layer of cementum.
“This type of dentition, also present in other endemic Iberian palaeotheriidae, could be indicative of a difference in environmental conditions between the Iberian and Central European areas, with more arid conditions or less dense or closed forests and the presence of more open areas in Iberia,” she explained.
As the crowns of the species' teeth were said to share similarities with those of modern horses, the researchers said they also fed on grass. The researchers have yet to further analyze the paleothere remains found at the site.
Biodiversity Of Palaeotheriidae Fauna
After the Eocene came to an end in Europe, modern horses, or equids, later appeared there during the Miocene. The once intertropical forests gradually disappeared and gave way to more temperate plant communities with more open areas. The diverse fossil assemblages of mammals from that time shed light on the climatic and environmental changes that have occurred in Europe over geological time.
The unusual dental features of the newly found palaeotheriid material could guide the description of new species and shed light on how different foraging conditions, such as tougher vegetation or more open and drier habitats with a higher consumption of grit in the diet, contributed to their earlier development.
A digital signature is like a real signature that is used to validate the authenticity and integrity of a message, software, or any digital document or financial transaction. It shows that the element originated solely from the signer and also protects the element from forgery or tampering. This post is part of our Free and Complete Computer Notes, important for most competitive exams such as UPSC CSE, State PCS, SSC CGL, CHSL, MTS, FSSAI, ASRB, Railways, DMRC, CDS, NDA, and others. You can also check out our courses.
A Digital Signature is an electronic, encrypted, stamp of authentication on digital information such as email messages, macros, or electronic documents. A signature confirms that the information originated from the signer and has not been altered.
The following terms and definitions show what assurances are provided by digital signatures:
Authenticity: The signer is confirmed as the signer. It verifies the identity of a user who wants to access the system.
Integrity: The content has not been changed or tampered with since it was digitally signed. It ensures that the message is real and accurate and safeguards it from unauthorized modification during transmission.
Non-repudiation: Proves to all parties the origin of the signed content. Repudiation refers to the act of a signer denying any association with the signed content.
Types of Digital signature
There are three types of Digital signatures.
- The first type of digital signature provides a basic level of security and is used in areas where there is a low risk of data compromise. This type of digital signature is not used for legal business. Example – social media log-in credentials (user id and password), email user id and password, etc.
- The second type of digital signature is used where the risk of data compromise is moderate. Electronic filing of tax documents, income tax return files and GST return files are a few examples.
- The third type of digital signature is used where data risks are very high. It is used for e-auctions, e-tendering, e-ticketing, court filings, etc. It requires a person or organisation to appear before a certifying authority to prove their identity before signing. Example – Aadhar, banking, patent & trademark, etc.
Benefits of Digital Signatures
- Digital security
- Legal authentication
- Global Acceptance
- Time & Cost saving for hand copies of signature of documents
- Traceability
To create a digital signature, you need a signing certificate, which proves identity. When you send a digitally signed macro or document, you also send your certificate and public key.
Certificates are issued by a certification authority. A certificate is usually valid for a year, after which the signer must renew it or get a new signing certificate to establish identity.
Validation of Digital Signature
To make a digital signature valid –
- The associated signing certificate must be current.
- The signing person or organization or the publisher must be trusted.
How to Create a digital signature in MS Word?
Methods of Digital Signature
Personal identification number (PIN), Password & Codes
It is required for the authentication and verification of the signer's identity. Email credentials and user id/password combinations are examples.
Asymmetric (public-key) cryptography
This method is based on a public-key algorithm and includes private- and public-key encryption and authentication. Software product keys and licence product keys are examples.
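As a rough sketch of how public-key signing and verification work in practice, the following Python example uses the third-party `cryptography` package with an RSA key pair; the message text and key size are arbitrary choices for illustration, and real deployments manage keys through certificates issued by a CA (see below).

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the private key signs, the public key verifies.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Electronic document to be signed"  # illustrative content only

# Sign with the private key (RSA-PSS with SHA-256).
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verify with the public key; this raises InvalidSignature if either the
# document or the signature has been altered in any way.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified: document is authentic and unaltered.")
```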
Checksum
These are tools with which we can detect differences between an original file and a forged one; they are basically a type of error-detection method. A checksum is a long string of letters and numbers that represents the sum of the correct digits in a piece of digital data, against which comparisons can be made to detect errors or changes. A checksum acts as a data fingerprint.
CRC (Cyclic Redundancy Check)
It is also an error detecting method used in digital networks and storage devices to detect changes to raw data.
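A small Python illustration of both ideas, using only the standard library (the sample document text is made up):

```python
import hashlib
import zlib

document = b"Invoice #1042: pay 500 to ACME Ltd."

# Checksum / cryptographic hash: a fixed-length "fingerprint" of the data.
print(hashlib.sha256(document).hexdigest())

# CRC-32: a lightweight error-detecting code used in networks and storage devices.
print(f"{zlib.crc32(document):08x}")

# Any change to the data, however small, produces a different fingerprint.
tampered = b"Invoice #1042: pay 900 to ACME Ltd."
print(hashlib.sha256(tampered).hexdigest() == hashlib.sha256(document).hexdigest())  # False
print(zlib.crc32(tampered) == zlib.crc32(document))                                  # False
```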
Certificate Authority (CA) Validation
Certificate authorities issue digital signatures and act as trusted third parties by accepting, authenticating, issuing, and maintaining digital certificates. The use of CAs helps avoid the creation of fake digital certificates.
Trust Service Provider (TSP) Validation
A TSP is a person or legal entity that performs validation of a digital signature on a company’s behalf and offers signature validation reports.
Frequently Asked Questions
A digital signature is a mathematical technique that validates?
A digital signature is a mathematical technique that validates the authenticity, non-repudiation, and integrity of a message, software, or digital documents.
___ is a process that verifies the identity of a user who wants to access the system.
Authentication is a process that verifies the identity of a user who wants to access the system.
How many algorithms digital signature consists of?
A digital signature consists of three algorithms: Key generation algorithm, Signing algorithm, and Signature verifying algorithm.
A ___ produces a signature for the document.
A signing algorithm produces a signature for the document. |
Pre and Post Confederation
Aboriginal-State relations in the pre-Confederation period
Canadian Aboriginals are numerous and diverse, with each nation possessing their own languages, social and political organization, and way of being. The Europeans' arrival in Canada was the beginning of the intricate political relations which they would share with the Aboriginals; these relations were at times friendly and at times hostile. The resources and knowledge which the Aboriginals provided to the Hudson's Bay Company were essential for the company's success as well as for building and maintaining the European economic system at the time and following the colonization of North America. The battles between French and English colonies often included Aboriginals as allies; however, the intense conflict led to the creation of The Great Peace of Montreal of 1701, where Aboriginals took a stand against the violence. The colonization process and increase in European settlers altered the traditional Aboriginal way of life; natural resources were becoming scarce in areas that had been primary hunting grounds, and the diseases which the Europeans were already immune to had deadly effects on the Natives. For example, an entire Aboriginal nation, the Beothuk of Newfoundland, was annihilated.
There were numerous land, peace and friendship treaties in Canada prior to the Confederation. The first formal legislation placing responsibility of Aboriginal-settler relation in the jurisdiction of various colonial Governors was passed by the British Parliament in 1670. It included the following main elements:
- Protection of Aboriginal peoples from 'unscrupulous' settlers and traders
- Introduction of Christianity
- Designation of the Crown as protector of "Indians"
In 1755 the Indian Department was established as an operational arm of the military. The Royal Proclamation of 1763 acted as the first Bill of Rights for Aboriginals and acknowledged the shared jurisdiction between Europeans and Aboriginals by setting out the rules of territorial management, with the Crown as the only party in a position to acquire Aboriginal lands. It is considered the first model of a tripartite relationship with sovereign rights to land: a nation-to-nation relationship based on shared jurisdiction. The Royal Proclamation aimed to protect the hunting grounds of the Indian tribes, reserving them for the use of the Indians, though power over most Indian affairs was exercised by the Superintendent on behalf of the Indians. Nearing the end of this period, there was a clear change in motivation and intention vis-à-vis the Indians and their territories, leading towards the formation of a formal Indian Department and the creation of procedures for acquiring lands. Breaches of this Act led to resistance on behalf of the Aboriginals, requiring the creation of more treaties to firmly establish their rights and land claims. For example, between 1837 and 1838, rebellions in Upper and Lower Canada led to the signing of the Act of the Union in 1840. These disputes provided the Indian Department with a new mandate: the assimilation of Aboriginals into European society. The Bagot Commission of 1842 provided a justification for leasing, licensing or permitting settlers to cut timber on the Indian territories, for discontinuing the tradition of 'gifting' the Indians, for adopting measures to confirm Christianity by introducing it early to children through formal schooling, and for the formation of Industrial schools.
Most interestingly, the creation of the Upper Canada Indian Protection Act in 1850 provided the definition of an Indian (presumably the first definition, which would be the basis for future government definitions), while the definition in Lower Canada was different because it included all women married to an Indian and their children, but not non-Indian men married to Indian women, and thus created the notions of Status and non-Status Indians. The Gradual Civilization Act passed in 1857 introduced enfranchisement as a way to extinguish the status of an Indian, by setting conditions that, if met, would grant an Indian the same status as a white. Over time, it became apparent that the measures being adopted were failing, but the philosophies informing the policies did not.
Federal legislation after the Confederation
Nova Scotia and New Brunswick joined the Province of Canada in 1867 thus creating the new nation of the Dominion of Canada. Aboriginals and their lands were transferred under the jurisdiction of the federal Parliament as per section 91(24) of the Constitution Act. In 1868 the Canadian parliament passed the Act to provide for the organization of the Department of the Secretary of State of Canada and for the Administration of the Affairs of the Indians and of the Ordinance that provided the federal government with control over the management of Aboriginal land, property and funds. In 1869, parliament passed the Act for the gradual enfranchisement of Indians, which was intended to help the Aboriginals integrate into 'white' society. The Act was incredibly gender biased, and although Aboriginal men could marry non-native women and keep their status, Aboriginal women could not. The principles of ‘guardianship’ over Indians found in previous legislation and practices were consolidated in the 1876 Indian Act, the first draft and subsequent amendments. Under this Act, three areas of responsibility under Band councils were defined: allocation of reserve land, law-making powers and local government, or band elections. The Act took elements from the Civilization and Enfranchisement Act, the Lower Canada Indian Protection Act provisions of blood quantum, and loss of status for female ‘out-marriage’. Later, in 1876, further revisions defined Indian status in terms of band membership and Indian blood, and sought to further consolidate laws stemming from the Upper and Lower Canada Acts, making it their official policy that the Indians be treated as minors. Illegitimate children, foreign residency and marrying out held the consequence of loss of status.
The Indian Advancement Act of 1884 was passed to allow Aboriginal bands to self-govern and have full responsibility for their own affairs. The Franchise Act soon followed in 1885, and was passed to allow all male Aboriginals to vote in Canada. After some disagreements among the non-native community, the act was repealed. It was believed that Aboriginals were not responsible or serious-minded people, because they did not own property or pay taxes, and therefore should not be allowed to vote. By the late 1800s, the policy toward civilization of Western Indians and western settler expansion was failing by government standards. A transitional Indian policy sought to make Indians more independent, but also paradoxically gave greater authority over band affairs to the Superintendent-General.
- Leslie, John. 1982. "The Bagot Commission: Developing a Corporate Memory for the Indian Department", Historical Papers / Communications historiques, 17(1): 31-52
- An Act to encourage the gradual Civilization of the Indian Tribes in this Province, and to amend the Laws respecting Indians
- An Act for the gradual enfranchisement of Indians, the better management of Indian affairs, and to extend the provisions of the Act 31st Victoria, Chapter 42, S.C. 1869, c. 6
- An Act further to amend the "Act to make further provision for the government of the North West Territories," S.C. 1873, c. 34
- An Act to amend "An Act to make further provision as to Duties of Customs in Manitoba and the North West Territories," and further to restrain the importation or manufacture of Intoxicating Liquors into or in the North West Territories, S.C. 1874, c. 7
- An Act to amend and consolidate the several Acts relating to the North-West Territories, S.C. 1880, c. 25 |
History of Special Education & What It Means for Us Today
Have you ever wondered when education began for children with special needs? Was it in America? Europe?
Did it begin with children who had Down syndrome? Blindness? Who decided to develop teaching materials for children who needed modifications? How did classical education influence special education?
You may appreciate the quick timeline beginning in the 1500s and embedded in this new article rich with applications for us today. I wrote the piece at the request of Classical Thistle, classical education advocates. With gratitude to Margret A. Winzer for her informative book, From Integration to Inclusion: A History of Special Education, here is an excerpt from my article:
When Pedro Ponce de León opened his school for non-speaking children with profound deafness, he had no delusions of teaching without the aid of adaptations. His early work remains instructive to us today. “Ponce’s work was … an astute application of the sign language he and his brother Benedictine monks used daily. Ponce’s great achievements may not have been teaching speech and language to the deaf boys but more his recognition that disability did not hinder learning and his use of alternative stimuli…. Most importantly, perhaps, Ponce de León was the first successful special educator, and 1578 the year in which special education truly began.”
Read the full article with applications for the children we love and teach:
Teach Them to Climb
Simply Classical Curriculum for the riches of a classical education with researched teaching techniques for children with learning challenges. |
A Framework for Anti-bias Education
The Social Justice Standards are a road map for anti-bias education at every stage of K–12 instruction. Comprised of anchor standards and age-appropriate learning outcomes, the Standards provide a common language and organizational structure educators can use to guide curriculum development and make schools more just and equitable.
Divided into four domains—identity, diversity, justice and action (IDJA)—the Standards recognize that, in today's diverse classrooms, students need knowledge and skills related to both prejudice reduction and collective action. Together, these domains represent a continuum of engagement in anti-bias, multicultural and social justice education. The IDJA domains are based on Louise Derman-Sparks’ four goals for anti-bias education in early childhood.
Each of the IDJA domains has learning outcomes and school-based scenarios organized by grades K–2, 3–5, 6–8 and 9–12.
Identity Anchor Standards
1. Students will develop positive social identities based on their membership in multiple groups in society.
2. Students will develop language and historical and cultural knowledge that affirm and accurately describe their membership in multiple identity groups.
3. Students will recognize that people’s multiple identities interact and create unique and complex individuals.
4. Students will express pride, confidence and healthy self-esteem without denying the value and dignity of other people.
5. Students will recognize traits of the dominant culture, their home culture and other cultures and understand how they negotiate their own identity in multiple spaces.
Diversity Anchor Standards
6. Students will express comfort with people who are both similar to and different from them and engage respectfully with all people.
7. Students will develop language and knowledge to accurately and respectfully describe how people (including themselves) are both similar to and different from each other and others in their identity groups.
8. Students will respectfully express curiosity about the history and lived experiences of others and will exchange ideas and beliefs in an open-minded way.
9. Students will respond to diversity by building empathy, respect, understanding and connection.
10. Students will examine diversity in social, cultural, political and historical contexts rather than in ways that are superficial or oversimplified.
Justice Anchor Standards
11. Students will recognize stereotypes and relate to people as individuals rather than representatives of groups.
12. Students will recognize unfairness on the individual level (e.g., biased speech) and injustice at the institutional or systemic level (e.g., discrimination).
13. Students will analyze the harmful impact of bias and injustice on the world, historically and today.
14. Students will recognize that power and privilege influence relationships on interpersonal, intergroup and institutional levels and consider how they have been affected by those dynamics.
15. Students will identify figures, groups, events and a variety of strategies and philosophies relevant to the history of social justice around the world.
Action Anchor Standards
16. Students will express empathy when people are excluded or mistreated because of their identities and concern when they themselves experience bias.
17. Students will recognize their own responsibility to stand up to exclusion, prejudice and injustice.
18. Students will speak up with courage and respect when they or someone else has been hurt or wronged by bias.
19. Students will make principled decisions about when and how to take a stand against bias and injustice in their everyday lives and will do so despite negative peer or group pressure.
20. Students will plan and carry out collective action against bias and injustice in the world and will evaluate what strategies are most effective. |
Adding and Subtracting Rational Functions
When adding or subtracting rational functions, you must find a common denominator as you might do with regular fractions. For example, to add 1/2 and 1/3, you rewrite both fractions over the common denominator 6: 1/2 + 1/3 = 3/6 + 2/6 = 5/6.
Now, let's apply this same strategy to the addition and subtraction of rational functions:
Step 1) Find a common denominator by multiplying the denominators. So, (x + 3)(x - 2) becomes our common denominator in this case. Then, multiply each fraction by something equivalent to one, such as (x+3)/(x+3), to get each fraction in terms of that common denominator:
Here's what we have so far. Just multiply out the top and we will be ready to add the two fractions:
Now add the numerators just like you would with two simple fractions:
Finally we want to expand the denominator as well to give us the resulting rational function:
And that's our answer!
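Since the worked images from the original page are not reproduced here, the following LaTeX lines sketch the whole computation for one concrete pair of fractions with the denominators used above; the numerators 1 and 2 are our own choice for illustration.

```latex
\frac{1}{x+3} + \frac{2}{x-2}
  = \frac{1}{x+3}\cdot\frac{x-2}{x-2} + \frac{2}{x-2}\cdot\frac{x+3}{x+3}  % put both over the common denominator (x+3)(x-2)
  = \frac{x-2}{(x+3)(x-2)} + \frac{2x+6}{(x+3)(x-2)}                       % multiply out the tops
  = \frac{(x-2) + (2x+6)}{(x+3)(x-2)}                                      % add the numerators
  = \frac{3x+4}{(x+3)(x-2)}
  = \frac{3x+4}{x^2+x-6}                                                   % expand the denominator
```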
NOTE: To subtract rational functions, follow the same steps that you used to add rational functions, but just subtract the numerators instead of adding them! |
The salivary glands are part of the digestive system.
Function of salivary glands
The salivary glands produce a fluid called saliva which bathes the mouth, keeping the mucosal membranes moist and also performing the first stage of digestion and facilitating the passage of food into the stomach.
The salivary glands are exocrine glands that secrete saliva. Saliva is composed of water (99%), inorganic substances (ions), organic substances (glucose, urea, hormones), and digestive enzymes (amylase, lipase, lysozyme, etc.). In addition to its role in digestion, saliva also has an antiseptic action that specifically protects the teeth from caries.
More than a litre of saliva is secreted daily, only half of which is during meals. Salivation is a reflex triggered by the presence of food in the mouth or by smells or emotions. The salivation reflex is innate but can be acquired (as was shown in the famous Pavlov's dogs experiments).
Structure of the salivary glands
There are six salivary glands, three on each side of the mouth composed of different cell types:
- the parotid glands, located behind the mouth beneath the ears, which secrete the largest amount of saliva. Saliva is poured into the cheek through the Stenon duct. These are almost exclusively composed of serous cells;
- The sub-maxillary glands are located beneath the angle of the mandible and pour saliva through the Wharton duct beneath the front of the tongue. The secretory cells are mixed (mucosal but above all serous).
- The sub-lingual glands are located beneath the tongue and secrete saliva through many ducts. They have mixed secretory cells (serous and, above all, mucous).
The salivary glands are contained in vascularised, innervated connective tissue capsules. The secretory cells (mucosal or serous) are assembled in clusters (acini) and secrete saliva into the central cavity of each cluster. The acini are surrounded by small muscles which help expel the saliva.
The three types of salivary glands: the parotid glands (1), the sub-maxillary glands (2), and the sub-lingual glands (3). © DR |
Fine motor skills refer to the ability of making movements using the small muscles in the hands, fingers, and wrists. We rely on these skills to make small movements and thus, to carry out our everyday tasks with ease and effectiveness. Since these skills come naturally to people, usually not much attention is paid to them regardless of the fact that they are crucial to the growth and development of every child.
These skills are not specific like reading or solving math equations but they have a direct impact on the ability of the children to learn and showcase what they know. Even the smallest of tasks like circling an answer in a question paper, writing an essay, holding a pencil, drawing pictures, and so on need the aid of the fine motor skills to be completed successfully. Now, the good news is that the fine motor skills of children can be improved to a great extent by simple activities that do not need any special tools and can be, in fact, carried out at home. We, the JP International School, recognized as the Best CBSE School in Greater Noida, suggest the following activities if you find your child weak in fine motor skills. These activities will surely help to improve upon the fine motor skills of your children.
Use bumpy papers for writing
We have seen that many children face the problem of staying within the lines while writing. Using bumpy and dark ruled papers to make them write can be a good solution to that. The bold lines on the top and bottom with a dotted line in between helps the children to see the barriers and that way, their letters do not drift while writing. You can help your child further by tracing the top and bottom lines with glue so that when the glue dries, the child hits the bumps while writing and thus gets the practice of writing within the lines.
Let them play with putty and play-dough
Playing with putty and play-dough helps a lot in the development of a child's sensory organs. Encourage your child to stretch, squeeze, roll, and pinch the play dough to make shapes like worms and snakes. You can even get play scissors so your child can cut the dough into different shapes. We highly suggest this activity as it does the twofold job of ensuring that his/her fine motor skills and imaginative prowess both show significant improvement with time.
Bring out the crayons and let them paint
Bring out all of the favorite crayons and sketch pens of your child and let him/her paint to his/her heart’s content. Different kinds of painting help in strengthening the hand-eye coordination and manual dexterity of a child. It is better to let children use crayons and brushes while painting because this way, the kids learn to hold the brush and get greater control by using it like a tool. You can make different shapes and borders for your child and then ask him/her to try and keep the color within the lines while painting.
Get competitive with him/her using tweezers
Get a bowl and mix objects of two separate colors in it. Now, get a pair of tweezers to separate out the colors with your child. You can mix two colors of pebbles or rice and lentils to get the game started. Start the race by transferring the objects into two separate bowls using the tweezers. This will help strengthen the muscles of your child’s arms and forearms and also aid in the development of coordination skills.
Engage in gardening and planting activities
We vouch for engaging kids in planting and gardening activities from a young age because it helps in the development of both gross and fine motor skills. Activities like transferring seedlings into a garden need hand-eye coordination to safely carry the small plants to the holes dug for them. Help your child grasp trowels to dig and use pincer grasps to pick up seeds for planting. All of these gardening activities require smaller muscle control, which is crucial to the improvement of fine motor skills.
We, at the JP International School, have seen that children who have their fine motor skills developed from a young age also fare well in both academics and sports. On our part, we always attempt to engage our kids in varied fun games and activities in between the daily lessons, which aid in improving gross as well as fine motor skills. In the long run, the effects of such motor skills will be evidently seen in their all-encompassing growth, which is something that we all hope to see for them. |
The North Atlantic right whale is one of the rarest of marine mammals. It is a big, mostly black animal with some whitish patches on its head and belly, it lacks a dorsal fin, and it has a graceful, deeply notched tail or "fluke". It has long fringes of baleen rather than teeth, which are used for straining tiny animals out of the water for food. A pair of blowholes on its head cause the right whale's spout to have a distinctive V-shape. It is amongst the most endangered of the world's marine mammals.
North Atlantic right whales inhabit the western North Atlantic, from Nova Scotia to Florida. They migrate from a calving ground near Florida and Georgia on North America's eastern seaboard, to summering grounds in the Bay of Fundy, the Gulf of Maine, and the Scotian Shelf, with some animals going as far as the Gulf of St Lawrence, the Denmark and Davis Straits and sometimes Iceland and Norway. The species migrates between two essential habitats: calving grounds and feeding grounds, the latter in the north of the range and the former in the warmer waters of the south of the range, in bays or shallow coastal waters.
These whales generally travel solo or in a small group. The usual group size ranges from two to 12 but is most often two. The composition of the group varies: mother and calf, males only, or mixed. Group size is difficult to determine because of how dispersed the animals are. Larger groups may exist over long distances by staying in contact through vocal calls. These whales are quite social and swim alongside other species of cetaceans. Social groups can moan and bellow to each other at night around breeding areas. Females will sometimes swim on their backs, cradling a newborn calf on their bellies in their huge flippers. The North Atlantic right whale will make a series of brief shallow dives before diving underwater for as long as 20 minutes.
These whales are polyandrous, with females mating with many males. No aggression is seen between competing males, a rare behavior for mammals. A North Atlantic right whale mates in the winter. Gestation lasts 12-13 months, with females giving birth to a single calf. Every three to four years they give birth to one calf. The mother and her calf remain close together until around the age of one when the calf is weaned. During the first year the calf learns from its mother where the critical feeding grounds are, and it will visit them for the rest of its life. North Atlantic right whales are sexually mature between 8 – 11 years old.
This species is threatened by shipping traffic, which can separate whales from their calving areas, by ship collisions, and by entanglement in fishing nets; entanglement sometimes causes serious injury or death because fishing gear can wrap around the whale's mouth and stop it from feeding, or cause it to drown because it cannot surface for air. Warming oceans can also affect the whales' food sources. The extensive patches of minute animals and plants that they eat will likely change in abundance or move elsewhere as seawater temperature, ocean currents and winds alter due to climate change. This shift in the availability of food has already damaged the reproductive rates of this endangered whale.
According to the IUCN Red List, the total population size of the North Atlantic right whale is estimated to be around 300-350 individuals. Currently this species is classified as Endangered (EN). |
Today is a historic day in the annals of black history related to agitation and civil disturbance against injustice in the United States. In Raleigh, North Carolina, at Shaw University, on April 7, 1960, the Student Non-Violent Coordinating Committee (SNCC) was founded. Out of this organizational founding came some of the most memorable people that participated in the struggle for black civil rights. The establishment of SNCC afforded young African Americans students an independent voice of leadership as well as the direction of the black freedom movement.
The initial leading groups of the black civil rights movement were the alliance of black southern ministers (SCLC), the NAACP, CORE, and the Urban League. These four organizations drew their strength from the middle-class sectors of the black community, while SNCC was going to pull its momentum and power from black youth, especially young black college students. Most of these black students were enrolled in historically black colleges and universities. Why? Because of societal segregation, which forbade black students from attending this country's white universities and colleges.
So these black brothers and sisters became the young leaders who traversed the United States' southern states, fighting and directly confronting racial bigotry, racist oppression, and racist violence. Stokely Carmichael, Courtland Cox, Bob Moses, Julian Bond, Marion Barry, John Lewis, James Bevel, Diane Nash, Angeline Butler, Ruby Doris Smith-Robinson, Oretha Haley, James Forman, Charles McDrew, Jean Thompson, Charles Jones, H Rap Brown and so many others joined the Student Non-Violent Coordinating Committee with the intent to erase racially motivated bigotry and segregation in the United States. Where would this country be without the energies the members of SNCC provided to uplift our black communities?
"The Greensboro And Nashville Lunch Counters Sit-Ins, Freedom Summers, Southern Voting Registration Drives, The Lowndes County Freedom (originator: black panther symbol),
Supporting The Development Of Mississippi Freedom Democratic Party, Driving The Initial Opposition Against The Vietnam War, And So Many Other Causes Essential To Black Community Uplift."
SNCC member Sam Younge made the ultimate sacrifice when he was murdered in cold blood; he was the first black college student killed in the black freedom movement. Of course, SNCC became a direct enemy of the United States government. The COINTELPRO plan to erase any organization that wanted to move this nation towards true democratic principles eventually led to the demise of the Student Non-Violent Coordinating Committee. However, the impact and energy of SNCC are still prevalent today. So today, on the 69th birthday of this historic organization, I deliver the words of founder Ella Baker's speech given during the founding events of SNCC, "It's Bigger Than A Hamburger".
This instructable is intended to detail the steps used in creating a contact angle instrument. The design, construction, and testing of this particular contact angle instrument served as my college senior project. The instrument was built for the university for research in chemistry and materials science.
Step 1: What is it......?
Before I go into depth on the design and construction, I'd like to give a little background info on what exactly a contact angle instrument (aka contact angle goniometer) is. References cited in this section can be found in the last "step" of this instructable.
A contact angle instrument is a piece of equipment used to determine certain specific properties of liquids and solid materials as well as interactions between the two. These properties include "cohesive forces, adhesive behavior, wetting properties, and morphological properties". The "contact angle", measured with this type of instrument, refers to the angle created by the surface of a drop of liquid and a flat surface of a known material at the point of contact between the two. The angle is determined by the shape that the drop takes when placed onto the surface. This shape is produced by the interaction between the properties of the liquid and the solid surface, which are determined by the relative surface tensions of the two materials. To be more specific, the cohesive behavior of the liquid serves to increase the contact angle by attempting to keep the liquid together in the drop shape, while the adhesion interaction between the two materials attempts to decrease the angle by trying to spread the liquid across the surface of the solid material. It is important to note that the angle is always measured through the liquid.
The instrument is used to create the liquid drop, and placing it onto the solid, and then taking a picture of it. Then there is generally a piece of software used to help measure the actual contact angle from the image. So, any contact angle instrument consists of at least four components: a dispensing system, the stage, the viewing system, and the measurement system.
The dispensing system generally consists of a micro syringe to hold and dispense the liquid, and can be operated manually or by a motorized system.
The stage is that part of the instrument that hold the solid surface. It must be flat and level - usually built with the ability to adjust the tilt to a level position - so that the contact angle can be accurately measured (a tilted surface will change the observed contact angle; increased on the "down hill" side and decreased on the "up hill" side). The stage is generally able to move up and down a few inches to allow the liquid to be transferred from the syringe needle to the solid surface, and sometimes able to move laterally as well.
The viewing system generally consists of a microscope and/or camera to magnify and capture the image of the drop sitting on the solid, and a light source to illuminate the samples and enhance visibility of the outline of the drop.
The measurement system is the part that actually measures the contact angle from the image created with the viewing system, and is not a physical part of the instrument. This is usually done using a software program setup to trace the drop profile and calculate the angle at the contact point. With the correct calibrations these programs can provide additional information like drop volume, and contact area. |
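As a simplified illustration of what such measurement software does (this is a sketch of one common approximation, not the actual program used with this instrument): for a small drop that can be treated as a spherical cap, the contact angle follows directly from the drop height and the contact radius read off the image.

```python
import math

def contact_angle_spherical_cap(height_mm: float, base_radius_mm: float) -> float:
    """
    Estimate the contact angle (in degrees) of a sessile drop, assuming the drop
    profile is a spherical cap, which is a reasonable approximation for small
    drops where gravity flattening is negligible.

    Uses theta = 2 * atan(h / r), with h the drop height and r the contact radius.
    """
    return math.degrees(2.0 * math.atan2(height_mm, base_radius_mm))

# Example (made-up measurements): a drop 0.8 mm tall on a 1.5 mm contact radius.
print(f"{contact_angle_spherical_cap(0.8, 1.5):.1f} degrees")  # roughly 56 degrees
```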
Understanding someone else’s feelings
What is empathy?
Empathy is the ability to understand and feel what others feel. It is putting yourself in another’s shoes.
When my youngest daughter was little she watched the movie, The Fox and The Hound and cried and cried at the end when Todd was attacked by the bear. That’s empathy!
How do you teach children empathy?
Some children are just naturally more empathetic. It may be due to their personality style or perhaps a life experience that has made them able to identify with how someone else is feeling. However, as parents and educators we can play a significant role in developing empathy in children.
Here are some tips:
- Create an environment where their own emotional needs are being met. The first order of business is for the adults in a child’s life to show empathy toward them. Children need to feel safe both physically and emotionally. When they experience disappointing and frustrating events, is there someone who understands and supports them?
- Encourage children to explore how beliefs, emotions and desires impact relationships. Rather than brushing off children’s experiences and feelings, parents can explore all aspects of a situation by asking questions; How did that make you feel? How do you think the other person felt? How did that influence their behavior? What are your choices in responding to the situation? Teach children to expand their perspective by taking on the perspective of others.
- Be a role model for empathy. Child learn more by what we do that what we say. Make sure that you are showing empathy in your relationships. If you collect food for a food bank or toys for a holiday service project make the experience more real by discussing where these things go and what life must be like for the people who benefit. Better still, involve them in a way that helps them interact with real people not just collecting a box of food items or toys.
- Look for teachable moments. Use real life situations, books, movies and cartoons to point out ways that others show empathy as well as to help children identify with the feelings of others. Ask: how would you feel if that happened to you? What would you want someone else to do to show that they cared? To go beyond talking and imagining what it must be like, act out situations with your child. This not only helps them empathize with others but helps them experience the choices they have in how they respond.
- Teach that even though we are all very different, we still have much in common. Part of understanding and appreciating diversity is recognizing that we all have the same feelings. Everyone has a need for compassion and support.
- Have children use facial expressions to imagine how someone else is feeling. Research shows that there is a connection between our brain and our physical expression that results in an ability to feel and understand others' feelings. Just the act of making an angry face or a sad face helps us tune in to others' emotions.
- Help your child develop an internal sense of right and wrong. Providing reasonable explanations and moral consequences that are not based on rewards and punishments will help your child internalize a moral code that will serve them well throughout life. |
Alternative Energy Sources
The students learn how electricity is created, used, measured, and conserved. Learners explore energy conservation and energy efficiency using a lab activity about lightbulbs and a research assignment about alternative energy resources. Students plan and carry out a project to advocate for conserving energy and using green technology.
This lesson explores how electricity is created, used, measured, and conserved. Through the use of data collection tables, students measure and analyze their families' electrical energy consumption. Students learn about the various renewable and non-renewable resources available to produce electricity. The concept of stewardship of resources is introduced.
Through a scientific investigation, the students compare features and costs of two types of lightbulbs. This lesson helps the learners understand how energy efficiency choices can impact their family energy costs and reduce the amount of energy consumed.
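To make the lightbulb comparison concrete, the kind of arithmetic the investigation calls for can be sketched in a few lines of Python. The wattages, purchase prices, usage hours, and electricity rate below are illustrative assumptions, not figures from the lesson.

```python
# Illustrative lightbulb cost comparison (all figures are assumed example values).

HOURS_PER_DAY = 3      # assumed daily use
DAYS_PER_YEAR = 365
RATE_PER_KWH = 0.13    # assumed electricity price in dollars per kilowatt-hour

bulbs = {
    # name: (watts, purchase price in dollars)
    "incandescent": (60, 1.00),
    "LED":          (9, 3.00),
}

for name, (watts, price) in bulbs.items():
    kwh_per_year = watts / 1000 * HOURS_PER_DAY * DAYS_PER_YEAR
    energy_cost = kwh_per_year * RATE_PER_KWH
    total_first_year = energy_cost + price
    print(f"{name}: {kwh_per_year:.1f} kWh/year, "
          f"${energy_cost:.2f} in electricity, "
          f"${total_first_year:.2f} total in the first year")
```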
Students research and present information on the pros and cons of various types of renewable energy. Student groups make a plan to advocate for energy stewardship in the community. |
Your child is now a toddler; they like to do things in their own way and in their own time, so it’s important they have a safe environment to learn, play and explore in. Their concentration is better, so they are better listeners and will happily play for longer. Day by day they’re open to new challenges and can seem more independent, but don’t be fooled – you can’t take your eyes off them for a second!
The following information has been sourced from the Early Years Learning Framework Developmental Milestones booklet, developed by Community Child Care Co-operative Ltd NSW (CCCC) for the Department of Education.
- Encourage your toddler to ask questions and face new challenges e.g. what’s the right way to go down the stairs – walk through each problem with them
- Help your toddler to experiment with everyday things e.g. show and explain why some things float in the bath and others sink
- Do simple experiments together like making play dough, blowing bubbles and looking at insects
- Talk with them about the technology and objects we use each day and how it helps us to live e.g. cups, pencils, TVs and computers
- Explore the outdoors together and talk about how things change during the day or over the year e.g. the weather or the seasons
- Pull things apart and put them back together again (e.g. a toy) and discuss what each part does
Please seek advice from your local community health worker or doctor if your toddler is: |
For centuries, people have been attempting to predict the weather. Sailors, in particular, focus heavily on conditions at sea. Throughout the years, man’s observations took the form of some interesting proverbs. Let’s take a look at the science behind these observations.
Red sky at morning,
Sailors take warning;
Red sky at night,
Sailors’ delight.
The explanation for this is quite simple. A red sunset occurs when you view it through dust particles, which are the main ingredient for rain. Each rain drop has a tiny dust particle inside it. Weather, for the most part, flows from west to east. So when you see the red sky at night, you are seeing dry weather soon to come. The dust particles have not developed into rain. The red sky in the morning is caused by the sun lighting up the cirrus clouds before it has actually risen. These clouds are generally followed by cirrostratus clouds and lowering frontal clouds which produce foul weather.
Mackerel skies and mare’s tails,
Make all tall ships carry low sails.
If there are just a few high-flying cirrus clouds that resemble mare’s tails, then good weather is on its way. However, when the sky becomes overwhelmed by cirrocumulus or mackerel clouds (which resemble rippled sand on a beach), you can expect a storm. Cirrocumulus clouds frequently appear before a warm front and veering winds, which eventually bring precipitation.
A sailor is truly salty once he can manage the helm and sail trim while keeping a weather eye on the horizon for approaching ships and storms. The clouds that are almost always present in the Caribbean sky give a detailed look into the next 12 hours of weather for the sailor who knows what to look for. The best way to read the clouds is to know their different types and what each one means:
Rainbow in the morning,
Sailors take warning;
Rainbow at night,
Sailors’ delight.
The explanation is even simpler than for the previous proverb. As you remember, storms usually travel from west to east. If you were to see a rainbow in the morning, you are looking at it in the west as the sun shining over your back rises in the east. If the rainbow is in the west, the storm has not yet passed. The reverse holds true in the evening. You will see the rainbow in the east as the sun sets in the west, and the storm has already passed you by.
Rainbow to windward, foul fall the day;
Rainbow to leeward, rain run away.
If the rainbow is in the direction of the prevailing wind, then the bad weather has not passed through yet and you should prepare to get wet. Conversely, if you see the rainbow to the leeward then the storm has already passed and you can enjoy the sights.
Winds that swing against the sun
and winds that bring rain are one.
Winds that swing around the sun
Keep the rain storm on the run.
Again, we rely on the fact that most weather moves from west to east. Therefore, winds that swing with the sun (east to west) bring good weather, while winds that blow against the direction of the sun (west to east) bring bad weather.
When a halo rings the moon or the sun
The rain will come upon the run.
Halos are very good indicators for upcoming weather. When you look at the sun or moon through a halo, you are looking at it through ice crystals formed in high cirriform clouds. When the whole sky is covered with these clouds, a warm front is approaching and it will begin to rain soon.
After understanding the scientific reasoning behind them, some of these proverbs are quite plausible, but not always accurate. Have fun seeing how often they come true for you, but use with caution! |
If you want to build a rocket with a bold new design, you have to have a way to test its structural integrity without installing an engine. You don’t have a wind tunnel, but you’re not ready to concede. You think to yourself, “What is flight without propulsion?” Then you answer your own question: “Falling.” Simply put, the easiest way to fly without launching is to plummet. Take a prototype up very high, drop it, and you’ll get a sense of its performance at speed.
The world’s foremost practitioner of the art of precision dropping is the Japan Aerospace Exploration Agency, or JAXA, which is basically Japan’s version of NASA. The agency is trying to build a practical supersonic plane, which is no easy thing. Similar efforts in the past created mediocre products, most famously the Concorde.
The Concorde was plagued with problems that prevented other airlines from adopting the same kind of design for their own craft. One of the biggest issues was excess noise. The term “sonic boom” isn’t a misnomer — breaking the sound barrier is an insanely loud phenomenon. Manufacturers had to design the plane to keep passengers’ heads from exploding, and airlines couldn’t fly the plane over land, since no human being on the ground wants to be subjected to such destructively loud sounds. JAXA’s goal is to create a quieter supersonic passenger plane. And it’s testing it out through drop tests with an experimental model in Sweden.
How the hell does that work? Basically, a balloon lifts the unmanned model plane (JAXA’s Silent SuperSonic Concept Model) about 18.6 miles up in the air, and simply drops it. Sensors attached to the plane measure the shockwaves as the plane nears speeds of up to Mach 1.39 in free fall.
The physics of a supersonic free fall are not all that different from how an object moving faster than sound on a horizontal plane operates. Air becomes powerfully compressed in front of the plane, which floods out a wave of high pressure in all directions. This shockwave starts to propagate through the air but gets weaker as it moves further out, becoming a sound wave in the process. This is the loud explosion we hear and call a sonic boom.
To understand what’s special about a supersonic free fall, we should take a closer look at what exactly Mach numbers refer to: the ratio of the speed of an object to the speed of sound at a particular place. And the speed of sound is subject to changes in temperature and pressure — at higher altitudes, the speed of sound decreases, so an object doesn’t need to travel as fast to reach Mach 1 a dozen miles in the air as it does at sea level. (The speed of sound at sea level is about 760 miles per hour.)
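A rough sketch of that relationship, assuming the ideal-gas formula for the speed of sound and standard-atmosphere temperatures (both are textbook approximations, not values given in the article):

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for dry air, J/(kg*K)

def speed_of_sound(temp_kelvin):
    """Ideal-gas speed of sound in m/s."""
    return math.sqrt(GAMMA * R_AIR * temp_kelvin)

MPH_PER_MS = 2.23694

# Assumed standard-atmosphere temperatures.
sea_level_temp = 288.15     # about 15 C
stratosphere_temp = 216.65  # about -56.5 C, roughly a dozen miles up

for label, temp in [("sea level", sea_level_temp), ("~12 miles up", stratosphere_temp)]:
    print(f"{label}: speed of sound ~{speed_of_sound(temp) * MPH_PER_MS:.0f} mph")

# The same airspeed corresponds to a higher Mach number at altitude.
plane_speed_ms = 340.0  # assumed airspeed in m/s
print(f"Mach at sea level:    {plane_speed_ms / speed_of_sound(sea_level_temp):.2f}")
print(f"Mach in stratosphere: {plane_speed_ms / speed_of_sound(stratosphere_temp):.2f}")
```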
Furthermore, Mach 1 is a highly unstable environment due to the shockwave created by breaking the sound barrier. Even small movements can have very forceful physical effects on the object. The worst place to be is basically between Mach 0.9 and 1.2.
So when an object is moving at supersonic speeds in free fall, it’s in the unusual position of accelerating faster while its Mach number increases at a slower rate. More time is spent in the unstable Mach zone than if it were moving on a horizontal plane. Most planes are designed to move past Mach 1 and enter a safe zone as quickly as possible. You can’t test something like that in a free fall experiment.
The speed also tops off because of drag. This is what happened in probably the most famous instance of an object moving faster than sound by way of gravity: Felix Baumgartner’s jump in 2012 from about 23 miles up in the air, which made him the first skydiver to break the sound barrier without the use of an aircraft. As Baumgartner fell toward Earth, he eventually stopped accelerating because of collisions with air molecules, which created a drag force that built up as air resistance until it became equal and opposite to the force of gravity. At this point, Baumgartner had reached his maximum speed.
In fact, while most objects that reach terminal velocity would simply stay at a constant speed, Baumgartner actually started slowing down, since the surrounding atmosphere starts to get thicker and thicker as an object in free fall moves down. So the terminal velocity starts to decrease — meaning Baumgartner started to slow down as well. The same thing would presumably happen to one of the Silent SuperSonic Concept Model planes JAXA is testing.
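The balance described in the last two paragraphs can be sketched with the standard quadratic drag law. The mass, drag coefficient, frontal area, and the simple exponential density model below are illustrative assumptions (roughly person-sized, not JAXA's model plane):

```python
import math

G = 9.81               # gravitational acceleration, m/s^2
RHO_SEA = 1.225        # sea-level air density, kg/m^3
SCALE_HEIGHT = 8500.0  # assumed exponential-atmosphere scale height, m

# Illustrative free-fall object (assumed values).
MASS = 100.0      # kg
DRAG_COEFF = 1.0  # assumed drag coefficient
AREA = 0.7        # assumed frontal area, m^2

def air_density(altitude_m):
    """Simple exponential model of air density."""
    return RHO_SEA * math.exp(-altitude_m / SCALE_HEIGHT)

def terminal_velocity(altitude_m):
    """Speed at which quadratic drag balances gravity: m*g = 0.5*rho*Cd*A*v^2."""
    rho = air_density(altitude_m)
    return math.sqrt(2 * MASS * G / (rho * DRAG_COEFF * AREA))

# Terminal velocity shrinks as the falling object reaches thicker air.
for altitude in (30000, 20000, 10000, 1000):
    print(f"{altitude:>6} m: terminal velocity ~{terminal_velocity(altitude):.0f} m/s")
```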
Science, like most other things in life, is cooler when it’s faster.
Electrolytes are substances that produce an electrically conducting solution when dissolved in water or other polar solvents. Higher organisms such as humans need to maintain a sensitive and complex electrolyte balance between their intracellular and extracellular environments. In the human body, the primary electrolyte ions are sodium, potassium, calcium, magnesium, chloride, and hydrogen phosphate.
The Three Major Roles of Electrolytes in the Human Body
1. Fluid Balance for the Maintenance of Homeostasis
Fluid balance is important in the maintenance of homeostasis in the human body. Note that homeostasis is the state of steady internal conditions needed to maintain proper body functioning. In the case of fluid balance, the core principle is that the amount of water lost from the body must equal the amount of water taken in.
Electrolytes play a role in maintaining fluid balance both at the intracellular and extracellular levels. To be specific, intracellular fluids are dominated by potassium and phosphate ions, while extracellular fluids are dominated by sodium and chloride. These electrolytes maintain fluid balance inside and outside the cells through osmotic pressure.
An increased concentration of substances in either the intracellular or extracellular environment draws water from the other. Consider the case of water intoxication. Consumption of large amounts of water leads to the dilution of sodium in the extracellular fluids. To compensate, water naturally enters the cells, increasing the volume of intracellular fluid and leading to cellular swelling.
Nevertheless, an adequate level of electrolytes in both intracellular and extracellular fluids is a determinant of fluid balance. In other words, an appropriate level of electrolytes is one of the variables of homeostasis.
2. Maintenance of Acid-Base Balance in the Blood
Another function of electrolytes in the human body is the maintenance of acid-base balance in the blood. The balance is determined by blood pH level. Blood acidity increases when the level of acidic compounds in the body rises or when the level of basic or alkaline compounds falls. Blood alkalinity increases when the level of alkaline compounds increases or when the level of acidic compounds decreases.
The acid-base balance of the blood is precisely controlled by different mechanisms. The lungs play a primary role in releasing carbon dioxide. Note that carbon dioxide is a mildly acidic waste product of metabolism. The kidneys help in excreting acidic or alkaline compounds in the body although their effect on blood pH level takes several days.
Another mechanism for maintaining acid-base balance is the combination of fluid balance and electrolyte balance. All three are interlinked: an electrolyte imbalance leads to a fluid imbalance, which in turn leads to an acid-base imbalance.
Essentially, both electrolyte balance and fluid balance are needed to maintain proper hydration levels in the body. It is worth mentioning that dehydration results in a decrease in pH, or metabolic acidosis, while overhydration results in an increase in pH, or metabolic alkalosis.
3. Role in the Activities of Muscles and Nerves
Electric current is needed for the proper functioning of muscle cells and nerve cells and, thereby, muscle tissues and nerves. Remember that electrolytes produce an electrically conducting solution when dissolved in water or other polar solvents. Muscles and neurons are activated by electrolyte activity between the extracellular and intracellular fluid.
For example, calcium, sodium, and potassium are required in muscle contraction. This contraction occurs through an electrical stimulus. To be specific, in skeletal muscles, the brain sends electrochemical signals through the nervous system to the motor neuron that innervates several muscle fibers. In smooth muscles, contraction is partly influenced by spontaneous electrical activity. Nonetheless, low levels of these electrolytes result in either muscle weakness or severe muscle contraction.
Electrolytes also have specific roles in proper brain functioning and neural activities. Remember that the entire nervous system depends on electrical signaling. Sodium gives the inside of the nerve cell an electrical charge while potassium neutralizes the charged cell to reestablish resting state. Magnesium prompts the activation of enzymes that control the flow of sodium and potassium into and out of the nerve cells. The interplay between these electrolytes is vital to the electrical signaling between neurons. |
Active verbs form efficient, powerful sentences.
This document will teach you why and how to prefer active verbs over passive verbs.
- The subject of an active voice sentence performs the action of the verb: “I throw the ball.”
- The subject of a passive voice sentence is still the main character of the sentence, but something else performs the action: “The ball is thrown by me.”
- How to Recognize Active and Passive Sentences
- Basic Examples
- Difference between Passive Voice and Past Tense
- Imperatives: Active Commands
- Sloppy Passive Constructions
- Linking Verbs: Neither Active nor Passive
- The Passive Voice Is not Wrong
- Tricky Examples
- Links to Active & Passive Verb Resources
- Works Cited
1. How to Recognize Active and Passive Sentences ^
- Find the subject (the main character of the sentence).
- Find the main verb (the action that the sentence identifies).
- Examine the relationship between the subject and main verb.
- Does the subject perform the action of the main verb? (If so, the sentence is active.)
- Does the subject sit there while something else — named or unnamed — performs an action on it? (If so, the sentence is passive.)
- Can’t tell? If the main verb is a linking verb (“is,” “was,” “are,” “seems (to be),” “becomes” etc.), then the verb functions like an equals sign; there is no action involved — it merely describes a state of being.
2. Basic Examples ^
| Active Voice | Passive Voice |
| --- | --- |
| I love you. | You are loved by me. |
3. Difference between Passive Voice and Past Tense ^
Many people confuse the passive voice with the past tense. The most common passive constructions also happen to be past tense (e.g. “I’ve been framed”), but “voice” has to do with who, while “tense” has to do with when.
| | Active Voice | Passive Voice |
| --- | --- | --- |
| Past Tense | I taught; I learned. | I was (have been) taught [by someone]; It was (has been) learned [by someone]. |
| Present Tense | I teach; I learn. | I am [being] taught [by someone]; It is [being] learned [by someone]. |
| Future Tense | I will teach; I will learn. | I will be taught [by someone]; It will be learned [by someone]. |
4. Imperatives: Active Commands ^
A command (or “imperative”) is a kind of active sentence, in which “you” (the one being addressed) are being ordered to perform the action. (If you refuse to obey, the sentence is still active.)
- Get to work on time.
- Insert tab A into slot B.
- Take me to your leader.
- Ladies and gentlemen, let us consider, for a moment, the effect of the rafting sequences on our understanding of the rest of the novel.
5. Sloppy Passive Constructions ^
Because passive sentences do not need to identify the performer of an action, they can lead to sloppy or misleading statements (especially in technical writing). Compare how clear and direct these passive sentences become when they are rephrased as imperative sentences.
Ambiguous Passive Verbs
To drain the tank, the grill should be removed, or the storage compartment can be flooded.
Because they do not specify the actors, the passive constructions (“should be removed” and “can be flooded”) contribute to the ambiguity of this sentence. Does the author intend to
- offer two different ways to drain the tank (“you may either remove the grill or flood the compartment”)?
- warn of an undesirable causal result (“if you drain the tank without removing the grill, the result will be that the storage compartment is flooded”)?
The readers would have to know something about how the tank works in order to make sense of the instructions, but the thing about instructions is that people are reading them because they don’t already know what to do. Here are two ways you could fix the ambiguity.
Drain the tank in one of the following ways:
- remove the grill
- flood the storage compartment
1) Remove the grill.
2) Drain the tank.
If you fail to remove the grill first, you may flood the storage compartment (which is where you are standing right now).
6. Linking Verbs: Neither Active nor Passive ^
When the verb performs the function of an equals sign, the verb is said to be a linking verb. Linking verbs describe no action — they merely state an existing condition or relationship; hence, they are neither passive nor active.
| Subject | Linking Verb | Complement |
| --- | --- | --- |
| This | could be | the first day of the rest of my life. |
| She | might have been | very nice. |
7. The Passive Voice Is not Wrong ^
- When you wish to downplay the action:
Mistakes will be made, and lives will be lost; the sad truth is learned anew by each generation.
- When you wish to downplay the actor:
Three grams of reagent ‘A’ were added to a beaker of 10% saline solution.
(In the scientific world, the actions of a researcher are ideally not supposed to affect the outcome of an experiment; the experiment is supposed to be the same no matter who carries it out. I will leave it to you and your chemistry professor to figure out whether that’s actually true, but in the meantime, don’t use excessive passive verbs simply to avoid using “I” in a science paper.)
- When the actor is unknown: The victim was approached from behind and hit over the head with a salami.
8. Tricky Examples ^
| Example | Why it is tricky |
| --- | --- |
| Punctuality seems important. | “Seems” is a linking verb, so the sentence is neither active nor passive. |
| Remember to brush your teeth. | An imperative: the implied subject “you” performs the action, so the sentence is active. |
9. Links to Active & Passive Verb Resources ^
10. Works Cited ^
Strunk, William. The Elements of Style. Ithaca, N.Y.: Priv. print., 1918. <http://www.bartleby.com/141/> 03 Jul 2004.
Dennis G. Jerz
25 Sep 2000 — first posted
21 May 2002 — minor maintenance
05 Nov 2002 — minor reformatting
04 Jul 2004 — rearrangement and tweaking
03 Oct 2007 — fixed broken link
15 Jun 2008 — minor edits
This lesson has been designed to get students to think about key factors that led to medical breakthroughs during the Industrial period. It is designed for the Pearson Edexcel Spec for Medicine in Britain 1250-Present. It also uses pages from the Hodder textbook, but is easily adaptable if using other textbooks or even other specs. This lesson is designed to go at the end of the unit on industrial medicine, before students complete a test.
Suggested use of lesson
Give students a minute to think and discuss how pizza can help us to explain our thinking in history. They usually give some interesting and funny ideas. I often won’t reveal how till later.
**Main Activities**
After introducing the lesson, students get an opportunity to do a little bit of recap. I usually get them in pairs and the first students to name all the individuals and their contributions win a prize.
After the recap, I will hand each student an A5 copy of the baskets in preparation for the tasks on slide 5. Please see the notes on slide 5 regarding page numbers in the Hodder textbook and a suggested extra-challenge.
Hand out a blank pizza slice to each student and have them label and briefly explain their pizza. This is a possible opportunity for students to share and compare their thinking.
This anatomical interpretation is the conclusion of Rutgers Professor John W.K. Harris and an international team of colleagues. Harris is a professor of anthropology, a member of the Center for Human Evolutionary Studies and director of the Koobi Fora Field Project.
Harris is also director of the field school which Rutgers University operates in collaboration with the National Museums of Kenya. From 2006 to 2008, the field school group of mostly American undergraduates, including Rutgers students, excavated the site yielding the footprints.
The footprints were discovered in two 1.5 million-year-old sedimentary layers near Ileret in northern Kenya. These rarest of impressions yielded information about soft tissue form and structure not normally accessible in fossilized bones. The Ileret footprints constitute the oldest evidence of an essentially modern human-like foot anatomy.
In the foreground, Christine Galvagna, Rutgers undergraduate at the time, cleans a trail of hominid footprints as Professor Harris (dark blue shirt) looks on. Credit: Rutgers
To ensure that comparisons made with modern human and other fossil hominid footprints were objective, the Ileret footprints were scanned and digitized by the lead author, Professor Matthew Bennett of Bournemouth University in the United Kingdom.
The authors of the Science paper reported that the upper sediment layer contained three footprint trails: two trails of two prints each, one of seven prints and a number of isolated prints. Five meters deeper, the other sediment surface preserved one trail of two prints and a single isolated smaller print, probably from a juvenile.
In these specimens, the big toe is parallel to the other toes, unlike that of apes where it is separated in a grasping configuration useful in the trees. The footprints show a pronounced human-like arch and short toes, typically associated with an upright bipedal stance. The size, spacing and depth of the impressions were the basis of estimates of weight, stride and gait, all found to be within the range of modern humans.
Based on size of the footprints and their modern anatomical characteristics, the authors attribute the prints to the hominid Homo ergaster, or early Homo erectus as it is more generally known. This was the first hominid to have had the same body proportions (longer legs and shorter arms) as modern Homo sapiens. Various H. ergaster or H. erectus remains have been found in Tanzania, Ethiopia, Kenya and South Africa, at dates consistent with the Ileret footprints.
Other hominid fossil footprints dating to 3.6 million years ago had been discovered in 1978 by Mary Leakey at Laetoli, Tanzania. These are attributed to the less advanced Australopithecus afarensis, a possible ancestral hominid. The smaller, older Laetoli prints show indications of upright bipedal posture but possess a shallower arch and a more ape-like, divergent big toe.
Main Verb or Helping Verb? Using Have and Has
About the Main Verb or Helping Verb? Using Have and Has Lesson
Learning the uses of the words have and has. Students will learn the two jobs these words do as main verbs or helping verbs and their tenses.
• Introduce the verbs have and has.
• Give a brief lesson on using these verbs as main verbs or helping verbs.
• Show students how have and has work either as present tense verbs or past tense helping verbs.
• Use fill in the blank questions to see if students understand when to use each word.
Some sentences have more than one verb. They have a main verb, the verb that shows the main action or state of being. They also have a helping verb.
A helping verb helps us know when the action of the verb happened. It tells the tense of the verb. Remember the verb tenses are present, past and future. |
Malaria, an infectious disease spread by mosquitoes, affects hundreds of millions of people and is also responsible for a million deaths every year across the globe. It is caused when a parasite, Plasmodium falciparum, invades one red blood cell (RBC) after another and blocks the capillaries that take blood to the brain and other organs. The process of invasion and infection of the RBCs is so swift that researchers hardly get to study the infection stages. To facilitate the development of more effective drugs and vaccines for the disease, Cambridge University researchers are now using a tool called laser optical tweezers for analyzing the critical process of interaction between the parasite and red blood cells.
The most crucial stage for the parasite's survival is erythrocyte invasion by the Plasmodium merozoite, which leads to pathogenesis, the development of malaria. Researchers have intensively studied the invasion process, but the disease-causing parasite invades a new RBC in less than a minute after infecting one, and within 2-3 minutes of release it loses its potential to infect cells. The researchers used laser optical tweezers because they can control the movements of cells; precise and accurate control is achieved by applying very small forces with an extremely focused laser beam. The optical tweezers were used to select parasites that had just emerged from a RBC and deliver them to another blood cell. This method is useful for scrutinizing the invasion process.
Optical Tweezers delivering merozoite to a healthy erythrocyte
The researchers also tried to find out how strongly the parasites stick to the blood cells. They found that the adhesion between the parasite and the RBCs is weak and can be blocked by using antibodies or drugs. Moreover, using this technique they showed how three different invasion-inhibiting drugs affect the interactions between the erythrocyte and the merozoite.
Forcible Detachment of merozoite from erythrocyte surface via Optical Tweezers
The study titled Quantitation of Malaria Parasite-Erythrocyte Cell-Cell Interactions Using Optical Tweezers was published in Biophysical Journal. |
Chordates derive their name from one of their synapomorphies, or derived features indicating their common ancestry. This is the notochord, a semi-flexible rod running along the length of the animal. In those chordates which lack bone, muscles work against the notochord to move the animal. All chordates have a notochord at some stage in their lives, but in some (such as tunicates) the notochord is lost in the adult, whereas in others (such as the vertebrates) the notochord is present in the embryo, but in later stages is largely replaced and surrounded by the vertebrae, or backbones.
The notochord runs beneath the dorsal nerve cord, which is another chordate feature. This is in contrast to organisms such as annelids and arthropods, in which the main nerve cord is ventral. The chordate nerve cord is hollow, with pairs of nerves branching from it at intervals and running to the muscles. The anterior (forward) end of the nerve cord is often enlarged into a brain.
Pharyngeal slits are a third chordate feature; these are openings between the pharynx, or throat, and the outside. They have been modified extensively in the course of evolution. In primitive chordates, these slits are used to filter food particles from the water. In fishes and some amphibians, the slits bear gills and are used for gas exchange. In most land- living chordates, the "gill slits" are present only in embryonic stages; you had pharyngeal slits at one time. The slits are supported by gill arches, which have also been highly modified in various groups of vertebrates.
Lastly, all chordates have a post-anal tail, or extension of the notochord and nerve cord past the anus. This feature is also lost in the adult stages of many chordates, such as frogs and people.
Chordates also have a closed circulatory system, and most, but not all, chordates have a heart. The blood of most chordates contains the oxygen-carrying molecule hemoglobin. The muscles of the body are segmented into blocks called myotomes. Like their relatives the echinoderms, chordates are deuterostomes: in early embryonic development, the anus forms before the mouth.
The recent development of the concept of organs on a chip opens the possibility of realistically studying human organs without the use of patients or animal testing. Professor Jaap den Toonder, who gave his inaugural lecture at Eindhoven University of Technology (TU/e) on 20 June, even goes one step further: he intends to make microsystems in which multiple 'organs' are connected through 'blood vessels'. That will for example allow precise investigation of how cancer spreads. This could eventually make the development of medical drugs much cheaper and faster. TU/e is starting a special microfabrication lab to develop the required technology.
Breast cancer usually spreads to the bone marrow, the brain or the lungs. But it is hard to follow exactly how this process works -- it can't be observed directly in the human body. This is exactly the question that Jaap den Toonder, professor of Microsystems, wants to help answer, together with other Dutch institutes. Den Toonder has been involved right from the start in the development of organs on a chip, together with other researchers including Donald Ingber of the Wyss Institute at Harvard.
The TU/e professor is working to develop a microsystem in which different organs are represented as an 'organ on a chip', linked by a system of 'blood vessels'. The sample of breast tissue contains the primary tumor. Because the microsystem is fully transparent, researchers can see with high accuracy how and when the cancer cells spread, or metastasize, to the other organs. For an impression of how this will work, please see this video: https://www.youtube.com/watch?v=DOvDMut0Vx4
Individual organs on a chip are tiny pieces of cultured live tissue with an artificial blood supply. The aim is to allow the tissue to be studied, for example to investigate how a disease develops or how tissue responds to medicines. However, both diseases and medicines often involve interactions between multiple organs. A typical example is the interaction between different medicines in the liver, through which substances are produced that could be toxic for other organs. This is the reason to move from one organ on a chip to microsystems with multiple organs. A microsystem typically measures several centimeters and contains a network of channels and microchambers with sizes varying from 1 to 100 micrometers.
No animal testing
Systems of this kind can help to achieve a big reduction in the cost of developing medical drugs. Testing is now often carried out on human cells in Petri dishes, but these do not provide a realistic natural environment. In addition, animal tests are carried out, but animals often react differently from humans, and in animal tests it is not possible to observe in real time exactly what is happening. The fact that a medicine does not work as expected is often not discovered until it is actually tested on humans, by which time a lot of costly work may already have been done. By using a microsystem with organs on a chip, researchers will in the near future be able to perform tests much more quickly and realistically, without the need to use animals or human test subjects. Den Toonder believes that the first applications will be ready for use within four to eight years.
The microsystems need to provide an environment like the one present in the human body to ensure the validity of the test results, Den Toonder explains. The cell environment must, for example, produce the right bioactive signals, so cells display true (patho-)physiological behavior. Also, the deformation and rigidity of the environment are very important. "There are strong indications that increased rigidity of the environment can trigger cancer cells to become invasive, which is the first phase of metastasis."
No costly cleanroom
To make the microsystems, Den Toonder uses a technique derived from semiconductor chip production: lithography. He refers to this as 'everyday lithography', because the smallest dimensions are much larger than those in the production of microchips. "Our smallest dimensions are 1 to 10 micrometers. At that scale you don't need a costly cleanroom, and we don't need to use smaller dimensions than that. The smallest scale at which we work is that of red blood cells and micro size blood vessels, and these are of the order of several micrometers." In addition, the fluid flow in such narrow vessels is by definition laminar, so it can easily be monitored.
TU/e will in the near future build a 'microfab lab' specially for the development of microsystems and research with these systems. The 700 square meter lab will be the best equipped facility of its kind in the Netherlands, and represents an investment of more than a million euros.
Atlas of Tonespace
Theories of tuning are generally based on the harmonic series, which is the sequence of component sine waves present in complex musical sounds such as the vibration of a violin string. The harmonic series of a fundamental frequency "f" is simply its multiples. The musical intervals between these sinewaves can be expressed as fractions of one sinewave over another. So the interval between the fourth and fifth sinewave (a major third) can be written as 5f over 4f, which, cancelling out the f's, gives 5/4. As a general rule, in most sounds, the higher the harmonic the less perceptible it is. So it follows that fractional relationships between frequencies also become harder to recognize, the higher the numbers involved.
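As a quick illustration of how those fractions translate into interval sizes, here is a short Python sketch (the language is my choice; the page itself contains no code) that lists the intervals between successive harmonics in cents:

```python
import math
from fractions import Fraction

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(float(ratio))

# Intervals between successive harmonics n and n+1 of a fundamental f.
for n in range(1, 8):
    ratio = Fraction(n + 1, n)
    print(f"harmonics {n + 1}/{n}: ratio {ratio}, {cents(ratio):.1f} cents")
```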
It is useful to distinguish between the idea of a Tone Factory, which can generate a limited or unlimited number of frequencies, and the Tone Set, which is the ensemble of those frequencies considered useful in a given culture.
The Chinese One-Dimensional Tone Factory
The Chinese and Pythagoras both knew about the fifth, which is 3/2 times the frequency of the root note. They both discovered that if you stack 12 fifths you arrive at a note pretty close to seven octaves higher than the note you started on. The error was called the Pythagorean comma, and the Chinese may have had a name for it as well. It can be expressed as a fraction: 531441/524288; as a percentage: 1.36%; or in cents (hundredths of a semitone): 23.46 cents.
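Those three figures can be checked directly by comparing twelve stacked pure fifths with seven octaves; a minimal sketch:

```python
import math
from fractions import Fraction

twelve_fifths = Fraction(3, 2) ** 12   # a stack of 12 pure fifths
seven_octaves = Fraction(2, 1) ** 7

comma = twelve_fifths / seven_octaves
print(comma)                                         # 531441/524288
print(f"{float(comma) - 1:.2%} too high")            # about 1.36%
print(f"{1200 * math.log2(float(comma)):.2f} cents") # about 23.46 cents
```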
Equal Temperament wasn't invented by Bach but he brought it to fruition. It works by dividing the Pythagorean comma into twelve equal parts called schismas (meaning "a division", one schisma = 1.955 cents) and narrowing each fifth by one schisma so that the stack of twelve meets up neatly with the seventh octave. The same effect can be obtained by defining each semitone rise as an increase in frequency by the twelfth root of two.
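The division can also be verified in a few lines: a twelfth of the comma is about 1.955 cents, and a pure fifth narrowed by that amount lands exactly on the 700-cent equal-tempered fifth, the same place seven ET semitones reach.

```python
import math

pure_fifth = 1200 * math.log2(3 / 2)   # about 701.955 cents
comma = 12 * pure_fifth - 7 * 1200     # Pythagorean comma, about 23.46 cents
schisma = comma / 12                   # about 1.955 cents (the author's "schisma")

tempered_fifth = pure_fifth - schisma
print(f"schisma: {schisma:.3f} cents")
print(f"tempered fifth: {tempered_fifth:.3f} cents")  # exactly 700.000

# Equivalent definition: each semitone is a frequency ratio of 2**(1/12).
semitone = 2 ** (1 / 12)
print(f"seven ET semitones: {1200 * math.log2(semitone ** 7):.3f} cents")  # also 700.000
```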
the 31-notename system
Such a system of tuning could have brought about a wonderful simplification of notation, not to mention the possibility of simplified keyboards, but we were not to be so lucky. Thanks to rearguard action from the inevitable diehards, antique distinctions of tuning were preserved in the notation, resulting in a system of 31 names for 12 tones. If the sole system of natural tuning were the Chinese/Pythagorean system of stacked fifths, a case could be made for keeping these 31 (or even more) names, on the grounds that they refer unequivocally to specific products of the Chinese Tone Factory.
But, as you are about to see, it is not the only system of tuning, nor is it the only system for deciding on notenames. The "thirds" spelling rule, which states that triads, including diminished and augmented, must be spelled with alternating letters, comes into direct conflict with the 31 note ladder system and renders the whole sharp/flat system so ambiguous as to be useless.
The Indian Two-Dimensional Tone Factory
So let's have a look at the so-called just system of tuning, originally from India and named after a Greek called Aristogenes or Aristoxenos. Instead of relying solely on the fifth (3/2), it introduces thirds, both major (5/4) and minor (6/5) into the picture, forming a web of frequency relationships which I have organised as a honeycomb. Note that however far we extend this honeycomb in any direction, we will never find two notes of exactly the same frequency!
The Indians and Aristoxenos had a comma too, called the syntonic comma, which as a fraction is 81/80; as a percentage: 1.25%; and in cents: 21.51. This comma was considered the smallest nuance distinguishable by the human ear. In Indian music it is the distance between two neighbouring shrutis. Here, each plus or minus represents a difference of one comma.
Aristoxenos versus E.T.
We should take this opportunity to zoom out and see how all the honeycomb frequencies compare to Equal Temperament - the MIDI standard. The E.T. semitone or halftone is divided into 100 cents, which are the same as those displayed on your electronic tuner.
A note sitting on the bold line going down the middle is in tune to E.T. Notes on the left are sharp and those on the right are flat. The pale lines represent cents. If you click on one of the lower rows you will see a dotted line joining two notes only 1.954 cents apart, almost the same value as the Pythagorean schisma (see above). It fulfills a similar function, because being imperceptible to all but the gods, both the Turkish and the Indians (and maybe others) have used it to simplify and limit their tone factory output. You can liken it to a warp or wormhole in tonespace forming a shortcut between two distant points. So I don't know whether to call it the Oriental wormhole, or Turkish Tonewarp.
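One way to arrive at that 1.954-cent figure is to compare the two commas themselves: the Pythagorean and syntonic commas differ by the tiny interval 32805/32768, which appears to be the gap the dotted line marks. A quick check:

```python
import math
from fractions import Fraction

def cents(ratio):
    return 1200 * math.log2(float(ratio))

syntonic = Fraction(81, 80)                            # the 5-limit syntonic comma
pythagorean = Fraction(3, 2) ** 12 / Fraction(2) ** 7  # 531441/524288

print(f"syntonic comma:    {cents(syntonic):.3f} cents")    # ~21.506
print(f"Pythagorean comma: {cents(pythagorean):.3f} cents")  # ~23.460

gap = pythagorean / syntonic
print(f"difference: {gap} = {cents(gap):.3f} cents")         # 32805/32768, ~1.954
```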
In this chart I have given the notes names according to the "thirds" rule. You can see by running your mouse over a few names that the same name may represent several widely differing frequencies, while two notes joined by the dotted line will have different spelling. Enharmonic spelling, a legacy of the Pythagorean system, breaks down completely with Just Tuning.
The Indian Shruti Set
By taking 5 complete sets of 12 notes one syntonic comma apart, we get the Indian tone factory arranged in five interlocking tiles.
We can identify each tile/chromatic set by +'s and -'s relative to the middle set. Each plus or minus represents a difference of one comma. (If you are puzzled by the positions of the red and blue tiles, check out that wormhole again.) Indian musicians recognise these differences by listening for the emotion they carry. Without going into details here I have marked the emotional categories by colouring the tiles blue, green, white, yellow and red.
However since Indian music is played against a continuous drone (which we will call C) some substitutions have been made to create the Master Shruti Set in order to avoid having tones that are simply too far away from C. Note that some tones from the red series have crossed over the quartertone divide to the blue series and vice versa.
Debussy and Three-Dimensional Tonespace
We have seen how the Chinese/Pythagorean system uses one prime number - 3 - to build a one-dimensional (linear) space: the stack of fifths; and how the Indian/Aristoxenian system uses two prime numbers - 3 & 5 - to build a two-dimensional space: the honeycomb. (I realise that 2 is also a prime, but as I said at the beginning, all powers of 2 including 1 - zero power - are interchangeable here. We are just using them as a lubricant to help us compare frequencies in the same octave.) What we find is that each additional prime we introduce is accompanied by a new dimension.
Let's look at what happens when we add the prime number 7. We can create another honeycomb based on the numbers 5 and 7 on a plane which intersects our original honeycomb (as in the image above) along any 5_4 axis (i.e. any axis containing augmented chords, e.g. c - e - ab). As you will see in the next chart, we get one whole tone set. It contains axes based on tritones, minor sevenths, and the augmented chords that were on the intersecting axis. If we follow two tritones to see the kind of octave they make, we observe quite heavy error: the octave comes out as 49/25 instead of 2/1, a shortfall of 50/49, roughly 2%, or a third of a halftone. Here the grey lines represent 10 cents error vis-a-vis ET. Plus is to the left, minus to the right.
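That error is easy to verify, taking the tritone of this plane to be 7/5 (an assumption consistent with the 5_7 honeycomb, though the text does not spell it out):

```python
import math
from fractions import Fraction

tritone = Fraction(7, 5)      # the 7-limit tritone of the 5_7 plane
two_tritones = tritone ** 2   # 49/25, a slightly flat "octave"

error = Fraction(2, 1) / two_tritones
print(two_tritones)                                    # 49/25
print(f"shortfall: {error} = {float(error) - 1:.2%}")  # 50/49, about 2%
print(f"{1200 * math.log2(float(error)):.1f} cents")   # ~35 cents, about a third of a halftone
```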
It was Debussy who introduced this extra plane - based on 7f and 5f - into Western music. In order to understand how such a feat can be achieved with a piano tuned to ET intervals, we will need to grasp tone token theory.
Tone Token Theory
This theory enables us to discover strategies for creating music out of unmusical intervals. Let us take the view that ET intervals are standing in for honeycomb intervals, in the same way that words like love, happiness, anger, sadness, etc. have to do duty for a wide range of subtle shades of those emotions. We can call this hypothesis "HH1":
HH1: The 12 notes of the ET set are tokens used to substitute for and denote natural tones drawn from an unbounded set situated on a honeycomb of several dimensions.
Can such a crude tool, when set in the right hands, recreate those subtleties? And, using the honeycomb, can we learn how it is done?
Let us look at some of the applications of all this in harmony and composing. I keep a stock of photocopies of the honeycomb on which I like to turn chords and scales into pretty patterns. You know: draw lines round the notes in a chord to make triangles, lozenges, parallelograms. In doing this I am testing hypothesis "HH2":
HH2: A harmonic chord is one whose tones are adjacent on the honeycomb (and therefore are related by simple fractions.) An ET chord written for (a good) choir or string orchestra will by default cause a harmonic chord to be played, and can therefore be said to be a token for that chord even when performed on an ET synthesizer.
Mainly, the honeycomb helps me see which notes are family rather than just neighbours. I can see (for example) that stationary, placid chords tend to fit into a maximum of two adjacent vertical columns. More exciting chords tend to spread out sideways. The tonic of any group is usually the bottom note in the shape. Chords with two bottom notes often sound curious.
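For readers who prefer to experiment away from paper, the honeycomb can be modelled as a lattice of fifths and major thirds. The coordinate convention below is my own assumption, not the author's; it simply shows that a just major triad occupies three mutually adjacent cells with simple ratios, in the spirit of HH2:

```python
from fractions import Fraction

def honeycomb_ratio(fifths, thirds):
    """Frequency ratio of a lattice point (steps of 3/2 and steps of 5/4),
    reduced into a single octave above the root."""
    r = Fraction(3, 2) ** fifths * Fraction(5, 4) ** thirds
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# A just C major triad: three mutually adjacent cells on the 3f_5f honeycomb.
triad = {"C": (0, 0), "E": (0, 1), "G": (1, 0)}
for name, coords in triad.items():
    print(f"{name}: lattice {coords} -> ratio {honeycomb_ratio(*coords)}")
```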
HH3: A descending order of harmonicity exists which removes most of ET's ambiguities. A chord which can be traced on the 3f_5f honeycomb counts as more harmonic than a chord derived from the higher multiples of f.
And further, by making other honeycombs, I can get a picture of how many dimensions a chord is in. Some chords (a 4 2 4 2 for example) refuse to fit inside a single bubble on the 3f_5f honeycomb yet are elemental in the 5f_7f honeycomb.
HH4: Tones which are not adjacent to the nucleus of the chord on the 3f_5f honeycomb do not belong to that honeycomb at all, but belong to other dimensions (7f, 11f, 13f etc) and their intersections; the criterion being the lowest dimension wherein they can be found adjacent to the chord nucleus. (See the next section)
If I get an unusual scale I can draw it out on the honeycomb and see at a glance all the types of chords that could go with it. But I am careful about using it to generate composing ideas. For the moment I prefer to let the ideas come naturally and then trace them out on the honeycomb to see if there is some hidden symmetry. I expect to spend years seeing what sort of theorems it throws up, before putting any of them into practice.
The Eleventh Sinewave and Quarter Tone Composing
Despite the wide errors in the 7_5 honeycomb plane, a range of ET chords will serve as tokens to evoke that plane unambiguously. The main requirement is that they contain only notes belonging to one whole tone scale, such as seventh or ninth flatted five chords. If we wish to introduce the eleventh harmonic however, the 12 tone ET system has simply not enough resolution to do the job. The problem is that the 11th harmonic is 48 cents flat to ET, nearly a full quartertone. The trick usually used to indicate the 11f harmonic of, say, C is to have an A (as 6th or 13th) somewhere in the chord. This links the C to the blue F#-- in the shruti table, which would cause the first violins to play the note slightly flat, and get a bit closer to the desired frequency. The limitation of this system is obvious: we are obliged to have a four or more part root position chord to simulate the fourth dimension.
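The 48-cent figure comes straight out of the arithmetic, assuming the eleventh harmonic is reduced into the octave as 11/8 and compared with the nearest ET note:

```python
import math

def cents(ratio):
    return 1200 * math.log2(ratio)

eleventh = cents(11 / 8)   # 11th harmonic reduced into the octave
print(f"11/8 = {eleventh:.1f} cents")           # ~551.3 cents

nearest_et = round(eleventh / 100) * 100        # the F# above the root, 600 cents
print(f"nearest ET note: {nearest_et} cents")
print(f"deviation: {eleventh - nearest_et:.1f} cents")  # about -48.7, nearly a quartertone
```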
The alternative, which is gaining ever wider acceptance among composers, is to attack the resolution problem by composing in quartertones.
Why can't we just use natural tuning?
Because of drift. Watch what happens if we use Just Tuning on a chord sequence like Giant Steps (by John Coltrane). Assuming that at least one common note is held from each chord to the next one, you will see that the tuning drifts downwards by nearly six commas (well over a halftone) per cycle. This kind of chord sequence is just not safe with unaccompanied choir, unless the singers have perfect pitch.
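The drift is easy to reproduce in miniature. The sketch below walks a much simpler progression than Coltrane's (C major, A minor, D minor, G major, back to C), holding one common tone in just intonation at each step; both the progression and the common-tone tuning choices are illustrative assumptions, not the article's. The returning C comes out a full syntonic comma flat:

```python
from fractions import Fraction

# Hold one common tone from each just-intonation chord to the next,
# tuning each new root a pure fifth (3/2) below the held tone.
c = Fraction(1)             # starting C
e = c * Fraction(5, 4)      # C major: C E G   (E a pure major third above C)
a = e / Fraction(3, 2)      # A minor: A C E   (hold E; A a pure fifth below it)
d = a / Fraction(3, 2)      # D minor: D F A   (hold A; D a pure fifth below it)
g = d / Fraction(3, 2)      # G major: G B D   (hold D; G a pure fifth below it)
new_c = g / Fraction(3, 2)  # C major again    (hold G; C a pure fifth below it)

# new_c sits two octaves below the start; compare it with the original C two octaves down.
drift = new_c / (c / 4)
print(f"returning C is {drift} of where it began")  # 80/81: one syntonic comma flat
```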
So are we doomed to suffer the crude ET system if we want harmony? Not necessarily. It is no longer beyond the bounds of human ingenuity to write realtime Just Tuning programs with microtone portamento to correct drift imperceptibly and give us the best of both worlds on a synthesiser. I have used the honeycomb to create chord analysis routines which I will send to anyone who wants to write the code. |
Creating a literacy environment in your home
- Books Galore! Fill a child's world with books of all shapes, sizes, and kinds. Include stories, books filled with facts, poetry books, song books, nursery rhyme books, and books about people and places. Don't forget other texts like magazines, restaurant menus, recipes, etc.
- Read It Again! Read children's favorite books again and again.
- Comfort Zone! Create comfortable places to read to and with children. Set up a cozy corner with pillows and a basket of favorite books. A parent’s lap is also a favorite!
- Shhhh! Set aside a daily quiet time for children to read independently.
- Check It Out! Visit the school and/or public library regularly.
- Write On! Provide writing materials such as crayons, pencils, paint, and paper to encourage children to draw and/or write down their thoughts and ideas.
- Display it! Show children you value their work by hanging it up for all to see.
- Do It Yourself! Let children see you reading and writing on a regular basis.
Read aloud to your child every day
Why read aloud?
Reading aloud builds a foundation for future success in school and in life. Read alouds help children learn new words, experience new places, enjoy models of fluency, and appreciate the value of books and reading. Reading aloud shows young children how print works. They see that books are read from front to back and left to right. They notice that the letters and words on the page carry meaning.
Jordan was created out of the greater mandated Palestine following World War I. The British saw an opportunity to appease Abdullah, a son of Hussein ibn Ali, who was ruler of the Hejaz in Arabia. In 1923, the Emirate of Transjordan was carved out of the area of mandated Palestine and the Hashemite dynasty began. Though Transjordan supported the Allies during World War II, in 1948 the country joined the Arab League, changed its name to Jordan, and participated in the 1948 war with Israel. As a consequence of this conflict, Jordan gained territory: the West Bank and the Old City of Jerusalem, which were annexed in 1950. Many of the Arab refugees from that war were placed in camps in the West Bank, and the population of Jordan today is overwhelmingly Palestinian.

The late King Hussein came to the throne in 1952 following the assassination of his grandfather, King Abdullah, and the abdication of his father due to mental illness. Hussein went to war against Israel once again in 1967. The Six Day War left Israel in control of the West Bank and east Jerusalem, along with other territories seized from Egypt and Syria. Though Jordan had once hoped to remain the negotiator for the Palestinians in the matters of territory, it was forced to cede that power to the PLO under Yassir Arafat.

Jordan has made some questionable decisions over the years, including its opposition to the Camp David accords and the Egypt-Israel peace treaty and its support of Saddam Hussein during the Gulf War in 1991. But its charismatic king and policies that appeared to Western eyes as moderate, particularly when compared with those of its more radical neighbors, helped Jordan to be viewed favorably by the US. Jordan's decision to make a formal peace with Israel in 1994 also earned the country renewed respect. Hussein's death in 1999 resulted in the ascension to the throne of his son, Abdullah, who has pledged to continue his father's efforts on behalf of peace.
November 5, 2013
Earth May Not Be So Unique After All
The now-retired Kepler spacecraft has revealed that 1 in every 5 sun-like stars has an Earth-sized planet orbiting within the habitable zone. So breaking it down—when you look up at the sky, you can see the closest sun-like star with an Earth-size exoplanet with the naked eye because it’s only 12 light years away. NASA says the fact that 1/5 of all stars like our sun have a potentially habitable planet is an important find because the distance of these planets from us will influence the size and kind of telescope astronomers build to replace the now-defunct Kepler telescope. But remember these planets are only “potentially” habitable; we still have to consider whether they have a thick enough atmosphere and liquid water to support life.
[ Read the Article: Habitable, Earth-Sized Planets Believed To Orbit One-Fifth Of All Sun-Like Stars ] |
Battery Matching Worksheet
In this science worksheet, high schoolers look for the definition of each word by matching the terms related to batteries and electricity. The answers are found by clicking the button found at the bottom of the page.
Clipper and Clamper Circuits
In this electrical worksheet, learners design and build a circuit board to grasp the understanding of circuit design, including clipper and clamper circuits, before answering a series of 21 open-ended questions that include analyzing...
10th - Higher Ed Science |
How does the word sound?
History of this Word
"sesqui" is from "semis" (half) + "que" (and) spoken by ancient people in central Italy around 700 B.C.
A prefix added to the start of a word. Indicates that "one and a half" modifies the word. Created to expand meanings. Can be used with many words to form new words.
Examples of how the word is used
- Join us as our Sesquicentennial Celebration continues with the dedication of the Memorial.
- This is a form of the "Red Herring" fallacy which I call "Sesquipedalianism" or babbling on and on about nothing.
New research identifies genetic mutations that cause an inherited form of cataracts in humans. The study, published online June 2 in The American Journal of Human Genetics, provides new insight into the understanding of lens transparency and the development of cataracts in humans.
A cataract is a clouding of the crystalline lens in the eye. Opacity of the normally transparent lens obstructs the passage of light into the eye and can lead to blindness. Congenital cataracts (CCs) are a significant cause of vision loss worldwide and underlie about one-third of the cases of blindness in infants. "Autosomal-recessive CCs form a clinically diverse and genetically heterogeneous group of lens disorders," explains senior study author Dr. J. Fielding Hejtmancik from the National Eye Institute in Bethesda, Maryland. "Although several genes and genetic regions have been implicated in the rare nonsyndromic form of autosomal-recessive CCs, in many cases the mutated gene remains unknown or uncharacterized."
One candidate gene that has been identified as playing a role in lens biology and in the pathogenesis of autosomal-recessive CCs is FYCO1. As part of an ongoing collaboration between the National Eye Institute in Bethesda MD and the National Center for Excellence in Molecular Biology and the Allama Iqbal Medical College in Lahore, Pakistan, Dr. Hejtmancik and colleagues performed a sophisticated genome-wide analysis of unrelated consanguineous families (in which both parents are descended from the same ancestor) of Pakistani origin and identified mutations in FYCO1 in 12 Pakistani families and one Arab Israeli family with autosomal-recessive CCs. The researchers went on to show that FYCO1 is expressed in the embryonic and adult mouse lens.
Both the high frequency of FYCO1 mutations and the recessive inheritance pattern seen in the families support the idea that autosomal-recessive CCs might result from a loss of FYCO1 function. The FYCO1 protein has been shown to play a role in "autophagy," a process that is necessary for degrading unwanted proteins. To become transparent, lens cells must get rid of some of their protein components, and the researchers suggest that as lens cells lose their organelles during development, abnormal accumulation of protein aggregates might play a role in the loss of lens transparency.
Taken together, the results implicate FYCO1 in lens development and transparency in humans and FYCO1 mutations as a cause of autosomal-recessive CCs in the Pakistani population. "Our study provides a new cellular and molecular entry point to understanding lens transparency and human cataract," concludes Dr. Hejtmancik. "In addition, because of the frequency of FYCO1 mutation in the Pakistani population, it might be useful in genetic diagnosis and possibly even in improved future cataract treatment and prevention."
An umbilical hernia is a hernia that happens when part of the intestines bulges through the abdominal wall next to the belly button.
More to Know
A hernia is an opening or weakness in the wall of a muscle, tissue, or membrane that normally holds an organ in place. If the opening or weakness is large enough, a portion of the organ may be able to poke through the hole. With an umbilical hernia, the opening is found near the belly button, at a part of the abdominal wall called the umbilical ring.
The umbilical ring is a muscle that surrounds the belly button. During pregnancy, the umbilical cord passes through the umbilical ring to deliver blood and nutrients to the developing baby. The umbilical ring normally closes shortly after birth. If the muscle doesn't close correctly, the intestines can poke through. This can cause a bulge near the belly button, especially when someone cries, coughs, or strains.
Umbilical hernias are most common in newborns and infants under 6 months, but they can also affect older kids and adults. They usually heal on their own by the time a baby is 1 year old. Surgery is only necessary if the hernia is very large; grows in size after age 1 or 2; fails to heal by age 4 or 5; or if blood flow to the part of the intestine sticking out gets cut off.
Keep in Mind
In most instances, an umbilical hernia causes no pain or problems and usually closes up on its own by age 2. Surgery is rarely necessary and long-term complications are rare, but any suspected hernia should be examined by a doctor.
Scientists at the University of Hawaiʻi at Mānoa’s Kewalo Marine Laboratory have identified two distinct groups of cells in a marine invertebrate that are like the ciliary photoreceptors responsible for light detection in the human eye.
In the swimming larvae of brachiopods, an ancient group of invertebrate animals, these neural cells are part of a simple two-cell eye that can detect the direction of light and help control the behavior of the animal. One eye cell contains a lens to collect light; the other, pigments to block light coming from behind the eye.
Surprisingly, genes responsible for ciliary photoreception are expressed at very early stages of embryonic development, before neurons are even formed. Despite the simplicity of the embryos at these stages, they are able to move toward light.
This discovery can serve as a model for the earliest stages in the evolution of the complex human eye, Postdoctoral Researcher Yale Passamaneck and Kewalo Director Mark Martindale write in the online journal EvoDevo on March 1, 2011.
Being exposed to a number of different languages at a young age is beneficial to your child. Signing adds another dimension to this exposure because it is non-verbal, thus offering your child a multitude of tools to express himself even before verbal skills are acquired. This can greatly improve communication between parent and child.
Signing Before They Can Speak
A great deal of research has clearly demonstrated that the early years – ages two to five – are the best time to educate children in different modes of communication and language. This goes beyond the spoken word (though it is an optimal time for children to learn a second language); many young children have an aptitude for signing as well.
This is not as odd as you may think. As you know, many indigenous peoples around the world, including American Indian nations, have used sign language for centuries to facilitate communication with other tribes with whom they do not share a language. Some paleontologists and anthropologists theorize that Neanderthals – who apparently lacked the vocal mechanism to produce many spoken words – depended a great deal upon hand gestures to communicate.
In fact, recent research suggests that sign language is innate. An article published in the Boulder Daily Camera in 2003 presented strong evidence that babies as young as six months old communicate with their hands:
"...by 6 to 7 months, babies can remember a sign. At eight months, children
can begin to imitate gestures and sign single words. By 24 months, children
can sign compound words and full sentences. They say sign language reduces
frustration in young children by giving them a means to express themselves
before they know how to talk." (Glarion, 2003)
The author also cites a study funded by the National Institute of Child Health and Human Development demonstrating that young children who are taught sign language at an early age actually develop better verbal skills as they get older. The ability to sign has also helped parents in communicating with autistic children; one parent reports that "using sign language allowed her to communicate with her [autistic] son and minimized his frustration...[he now] has an advanced vocabulary and excels in math, spelling and music" (Glarion, 2003).
The Best Time To Start
Not only does early childhood education in signing give pre-verbal youngsters a way to communicate, it can also strengthen the parent-child bond – in addition to giving children a solid foundation for learning a skill that will serve them well in the future. The evidence suggests that the best time to start learning ASL is before a child can even walk – and the implications for facilitating the parent-child relationship are amazing.
Co-written by Emily Patterson and Kathleen Thomas
Emily and Kathleen are Communications Coordinators for the network of Texas child care facilities belonging to the AdvancED® accredited family of Primrose child care schools. Primrose Schools are located in 16 states throughout the U.S. and are dedicated to delivering progressive, early childhood, Balanced Learning® curriculum throughout their preschools. |
Ancient German History
A Gothic fibula, or brooch
The Germans, and other people who lived in what is now Germany and Eastern Europe, were Indo-Europeans, originally from the area between the Black Sea and the Caspian Sea. Sometime between 3000 BC and 2000 BC, they had migrated gradually, in many different waves, out of that area and all across Europe.
Some ended up in northern Europe, in Scandinavia (modern Norway, Sweden, and Denmark). These are the ancestors of modern Swedes, Norwegians, and Danes. Some went to Poland, where they developed into the Visigoths and the Ostrogoths.
Some ended up in Germany, where they are the ancestors of the modern Germans, but also of the Franks, Vandals, and Sueves.
After the fall of the Roman Empire, the Visigoths moved into Spain, and the Ostrogoths moved into Italy. The Franks moved into France, but soon conquered Germany as well, so that by 800 AD Charlemagne was able to establish a German Holy Roman Empire that extended over France, Germany, and much of central Italy. After Charlemagne died, his grandsons split his empire into three parts so they could each have a share, but it was the branch of the family that got Germany that continued to call themselves the Holy Roman Emperors.
The Holy Roman Emperors continued to rule Germany, and to some extent Italy, all through the Middle Ages. At first they were very powerful, but later they lost power to the smaller German and Italian lords in each region. |
Participants are asked to do a presentation on a German-speaking city (covered in course materials)
Participants are asked to reflect upon post-war events in 20th-century Germany
Participants are asked to express their opinions on the German re-unification process, using standard phrases and expressions
Participants describe what they know about the unified Germany in the 21st century
This activity uses the same introductory whiteboard slides as L203Unit1Act1a (to familiarize students with Elluminate) but additionally includes some PowerPoints to trigger conversations about German-speaking countries. It is aimed at groups who have already met face-to-face and don't need to spend another tutorial introducing themselves.
This map can be used for an ice-breaker activity in which students are asked to name and talk briefly about places they know in German-speaking countries. Describing the landscape of their chosen place leads into Th. 1, Teil 1.