The United Nations Environment Frontiers Report has described a phenomenon known as the ‘poison chalice’: an accumulation of toxins in food plants caused by climate change.
The basic premise is that, due to climate change, crops cannot carry out their chemical processes efficiently. Corn is one example: changing weather conditions alter its nitrate metabolism, causing the harmful chemical to accumulate in the plant.
The effects of climate change on crops
One of the objectives of the 2030 global agenda is a Sustainable Development Goal to eliminate world hunger by ensuring food supplies and viable methods of agriculture.
According to the Environment Frontiers Report, the period between 2011 and 2015 was the hottest since global temperature records began in the 19th century.
A primary concern is agriculture, as it is an activity that relies on weather conditions. The report finds that Africa is the continent whose agriculture is most threatened by global warming; in 2011 the eastern part of the continent experienced a catastrophic drought that killed much of its crops.
How ‘poison chalice’ occurs
Climate variations that the plant perceives as harmful trigger a protective response, sacrificing clean produce in exchange for preserving the plant's life. This scenario can lead to the accumulation of microbes and chemicals that are harmful to whoever eats the plant or its produce.
Under normal conditions, plants carry out chemical processes that transform nitrates into amino acids and proteins so the plant can grow and reproduce. Extreme weather, specifically drought, hinders this nitrate conversion, allowing nitrates to build up to toxic levels. The digestive systems of ruminants, such as cows and sheep, cannot thoroughly handle nitrates, and poisoning can ultimately cause asphyxia or even death.
But nitrate is not the only harmful chemical that can accumulate under stress. Nitrate accumulates during drought, but water stress can also lead a plant to produce hydrogen cyanide. Crops that produce this harmful compound when water-stressed include corn, peach, apple, flax and cassava.
The most common fungal toxins found in food are mycotoxins, which are harmful to both humans and animals even in very small amounts. Sufficient exposure to mycotoxins can cause immune suppression and even cancer.
One of the most notable mycotoxins is aflatoxin, which can contaminate spices, oil seeds, nuts, and cereals. Produce can become contaminated if stored improperly. If an infected crop is consumed by animals, their dairy and meat will also be contaminated, which harms overall livestock production by increasing mortality and reducing the income generated by the farm. The same applies to direct consumption by humans, as aflatoxin is directly linked to liver cancer.
The world’s response to stress-related plant toxins
Although programs aim to reduce the toxin intake from stressed plants, researchers expect these occurrences to multiply as warm climate zones expand beyond the tropics. Research is underway to develop hardier crops that can withstand extreme weather conditions, or that do not produce harmful toxins when stressed.
There are already records showing significant losses due to climate adversities closely related to climate change. Even if the plants survive, their produce may be tainted by the plant's survival response. Stress can easily overwhelm a plant's capacity to cope, increasing its susceptibility to disease and the risk of passing contamination on to animals or humans.
Over 80 types of crops are susceptible to the accumulation of the chemicals described, including corn, rice, wheat, and soybean, some of the most harvested crops in the history of humanity.
These biological reactions cannot be avoided unless genome modifications take place. Researchers are working hard to develop better types of plants, but it is not easy to modify the defense mechanism of an organism that evolved to live under a particular set of weather conditions. Human needs appear to call for faster evolution, at least in the case of our most vital crops and plants.
Food production is a crucial topic in the conservation of the environment. World leaders need to take a direct approach to protecting the areas most likely to suffer damage from climate change. Although tests are already taking place, this is a long-term effort, and it is the duty of those in power now to ensure suitable living conditions for those in the future.
Paleontology and geology
Subduction of the Farallon Plate continued beneath the western margin of the North American Plate. The ancestral Sierra Nevada rose and was eroded, and the Coast Ranges began to rise. The eroded sediments from the ancestral Sierra Nevada (sand, gravel, and volcanic material) were deposited east of the rising Coast Ranges. These sediments became the rock layers of the Central Valley (i.e., the Great Valley Sequence) and record the position of the Cretaceous shoreline in California. Exposed throughout the Central Valley, the marine rocks have yielded abundant fossil remains of ammonites, marine reptiles, bivalves, and even plants.
Physics help from experts.
Centrifugal force usually refers to a fictitious or pseudo-force that seems to give an outward, "center-fleeing" acceleration to an object traveling in a circle. Imagine a book on a slippery dashboard in a moving car. To a passenger in the car the book appears stationary until the driver makes a sharp turn to the left. At this point the book appears to the passenger to accelerate to the right. The passenger may be inclined to conclude that some force is accelerating the book away from the center of the car's curved path. Actually the book is not accelerating: someone outside the car would observe the book moving in a straight line, with the car turning out from underneath it. This is an example of Newton's First Law of Motion: since there is no force acting on the book, it moves in a straight line. At the same time, the passenger feels as though something is pushing him into the right side of the car. What the passenger is actually experiencing is the right side of the car pushing against him, causing him to travel in a circular path with the car. The passenger may conclude from his frame of reference that there is a centrifugal force causing him and the book to slide toward the outside of the curve. The outside observer would conclude from his frame of reference that there is no force acting on the book, and that the only force acting on the passenger is the normal force provided by the surfaces of the car. To the outside, impartial observer, the motion of the book is due to the book's inertia, and the centrifugal force is not a real force.
Centrifugal forces come into play when an object travels continuously in a circle, such as a ball attached by a spring to a nail in a board. The centripetal force is the force applied to the ball, pulling it inward and keeping it on its circular path. The centrifugal force is the reaction applied to the spring, stretching it outward. If the centrifugal force is too large, the spring connecting the ball to the nail will break and the ball will fly off in a straight line tangent to the circular path. The faster the ball spins (its rotation rate, measured in revolutions per minute, or RPM), the farther the ball will move outward from the nail, which brings us to a real-life application: centrifugal force is harnessed in the clutch used in go-karts.
A centrifugal clutch is used on most racing go-karts. Centrifugal clutch is the common name given to a clutch which uses centrifugal forces to engage. It is what connects the engine's drive shaft to the drive axle which spins the wheels. The centrifugal clutch and the drive axle have sprockets which spin a chain, which will spin the wheels.
Observe the picture of the Centrifugal Clutch. Inside of the centrifugal clutch are pads (also known as shoes) which move outward as the drive shaft of the engine spins faster, just like with the ball and nail explanation stated before. Once the shaft is spinning fast enough, the pads start to push on the inside of the hub on the clutch. Once the hub of the clutch has started spinning, the sprocket which is attached to the hub starts spinning, which in turn, spins the drive chain, and the go-kart moves forwards.
During this time the pads slip somewhat inside the hub, which makes it possible to run the vehicle at different speeds. Once the engine reaches its maximum rotation speed (RPM), there is no longer any slipping between the clutch shoes and the hub, and the vehicle has reached its maximum speed. Centripetal and centrifugal forces are at their limit here; if excessive centrifugal forces are reached, the clutch will blow apart, as in the ball-and-nail example in which the spring broke and the ball flew away in a straight line.
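To put rough numbers on this engagement behaviour, here is a minimal Python sketch. The shoe mass, radius and spring force are invented for illustration, since the article gives no values.

```python
import math

# Rough numeric sketch of centrifugal clutch engagement (illustrative values only).
# A shoe of mass m at radius r needs a centripetal force F = m * omega^2 * r
# to stay on its circular path. While the retaining spring can supply that force,
# the shoe stays retracted; above that speed it presses on the hub and engages.

m = 0.05           # shoe mass in kg (assumed)
r = 0.04           # shoe radius from the shaft axis in metres (assumed)
spring_force = 40  # spring retaining force in newtons (assumed)

def centripetal_force(rpm: float) -> float:
    """Force in newtons required to keep the shoe moving in its circle."""
    omega = rpm * 2 * math.pi / 60  # convert rev/min to rad/s
    return m * omega ** 2 * r

for rpm in (1000, 2000, 3000, 4000):
    f = centripetal_force(rpm)
    state = "engaged (shoe presses on hub)" if f > spring_force else "disengaged"
    print(f"{rpm:5d} RPM: F = {f:6.1f} N -> {state}")
```

With these made-up values the clutch engages somewhere between 1000 and 2000 RPM, and the force grows with the square of the speed, which is why excessive RPM can blow the clutch apart.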
Why do some racing organizations require a safety shield to be secured around the centrifugal clutch of the racing vehicle?
Answer: The safety shield is in place to protect the driver and spectators from flying debris if the centrifugal forces inside a centrifugal clutch cause the clutch to blow apart.
For help with physics problems, try Physics Homework Help
- Gregory Shepertycky
A constitution is the legal document in which the various governing principles are established, and in which the functions and procedural aspects of government under which its different organs work are specified. The constitution is the supreme law of the land, characterized by Kelsen as the "Grundnorm" in his Pure Theory of Law.
The American Constitution is the pioneer of all federal constitutions, followed by the Canadian and Australian constitutions respectively. The federal principle was adopted in the Government of India Act 1935 and was reinserted into the draft constitution by the Constituent Assembly. Dr. B. R. Ambedkar found it convenient to describe the Indian Constitution as both federal and unitary: he opined that it works as a federal constitution under normal conditions and as a unitary one during war or crisis.
The principle may be understood as "the method of dividing powers so that the general and regional governments are each, within a sphere, co-ordinate and independent", and not subordinate to each other (Professor Wheare). The existence of co-ordinate authorities independent of each other is the gift of the federal principle, whereas in a unitary form of government the supreme sovereign power is vested in a single central organ which ultimately controls the state. Federalism is not a static but a dynamic concept; it is always in a process of evolution and constant adjustment. It was also recognized in Kesavananda Bharati's case that federalism is one of the basic features of the Constitution.
- There must be a written and rigid Constitution. Constitution being the supreme law of the land, it must be rigid so as to uphold its supremacy.
- A written constitution is essential if a federal government is to work well.
- Distribution of powers between the central government and the state governments is the most essential and ordained feature of a federal constitution. The distribution must be such that both governments are co-ordinate and independent in their own spheres.
- An independent and impartial judiciary is needed to uphold the supremacy of the constitution by interpreting its various provisions and settling disputes between the laws made by the governments and the Constitution.
This Court in paras 71 to 73 of the judgment in Kuldip Nayar & Ors. v. Union of India & Ors., (2006) 7 SCC 1 held as under:
"71 But then, India is not a federal State in the traditional sense of the term. There can be no doubt as to the fact, and this is of utmost significance for purposes at hand, that in the context of India, the principle of federalism is not territory related. This is evident from the fact that India is not a true federation formed by agreement between various States and territorially it is open to the Central Government under Article 3 of the Constitution, not only to change the boundaries, but even to extinguish a State (State of West Bengal v. Union of India 1 SCR 371) . Further, when it comes to exercising powers, they are weighed heavily in favour of the center, so much so that various descriptions have been used to describe India such as a pseudo-federation or quasi- federation in an amphibian form, etc."
"72 The Constitution provides for the bicameral legislature at the center. The House of the People is elected directly by the people. The Council of States is elected by the Members of the Legislative assemblies of the States. It is the electorate in every State who are in the best position to decide who will represent the interests of the State, whether as members of the lower house or the upper house."
"73 It is no part of Federal principle that the representatives of the States must belong to that State. There is no such principle discernible as an essential attribute of Federalism, even in the various examples of upper chamber in other countries."
34) In State of Karnataka v. Union of India and Anr. (1977) 4 SCC 608, in para 220 of the judgment, Untwalia, J. (for Singhal J., Jaswant Singh J. and himself) observed as under:
"Strictly speaking, our Constitution is not of a federal character where separate, independent and sovereign State could be said to have joined to form a nation as in the United States of America or as may be the position in some other countries of the world. It is because of that reason that sometimes it has been characterized as quasi-federal in nature.............."
This quasi-federal nature of the Constitution is also brought out by other decisions of this court. [See State of West Bengal v. Union of India 1 SCR 371; State of Rajasthan and Ors. v. Union of India 1 SCR 1; ITC Ltd. v. Agricultural Produce Market Committee 1 SCR 441; State of West Bengal v. Kesoram Industries Ltd. 266 ITR 721 (SC)]
In order to be called federal, it is not necessary that a constitution adopt the federal principle completely. It is enough if the federal principle is the predominant principle in the constitution. The mere presence of unitary features, which may make the Constitution "quasi-federal" in law, does not prevent the Constitution from being predominantly federal in practice (H. M. Seervai). Professor Wheare described India as neither federal nor unitary but "quasi-federal".
The Indian Constitution came into existence on 26 January 1950 with the federal principle predominant. The doctrine of predominance as stated by H. M. Seervai does not hold good, as the degree of predominance is negligible compared to that of other federal constitutions. According to M. C. Setalvad, "the constitution of India having been drawn in the mid-20th century presents a modified form of federation suitable to the special requirements of the Indian society."
Article 1 of the Constitution describes India as a Union of States. Dr. B. R. Ambedkar justified it as advantageous to describe India as a union of states even though it is federal in nature; accordingly, during a crisis it becomes unitary in nature.
Prof. Alexandrowicz says that India is considered a quasi-federation mainly because of Articles 3, 249, 352 to 360 and 371. It may aptly be stated that he supports Dr. Ambedkar's view.
Power to alter the boundaries:
Article 3 empowers Parliament to alter the boundaries of states even without the consent of the states, which dilutes the federal principle. The State of West Bengal, in its memorandum submitted to the President of India, compared Article 3 to a sword of Damocles hanging over the heads of the states. H. M. Seervai defends Parliament's power to alter state boundaries on the ground that "by extra-constitutional agitations the states have forced Parliament to alter the boundaries of states; in practice, therefore, the federal principle has not been violated." But Seervai agrees that the power vested in Parliament was a serious departure from the federal principle. History reveals no answer or rational basis for such a serious departure.
Distribution of powers:
Distribution of powers is one of the prerequisites of a federation of states. "The object for which a federal state is formed involves a division of authority between the national government and the separate states" - Prof. A. V. Dicey.
Parliament can legislate with respect to a matter under the State List:
a) in the national interest (Art. 249), or
b) if a proclamation of emergency is in force (Art. 250).
The provisions resolving inconsistency between central and state laws are also weighted in favour of the Centre (Arts. 251 and 254) - A. G. Noorani.
Gwyer C.J. observed that the conferment of residuary power upon the Centre follows the Canadian constitution. The U.S. and Australian constitutions, which are indisputably federal, confer the residuary power on the states. The non-Congress opposition parties' conferences (held in 1986-87) resolved to demand the conferment of residuary power on the states as a measure to strengthen the federal principle.
- Under the present provisions of the Indian Constitution, the states are entitled to a share of the Centre's revenues derived from only a few taxes, principally income tax and excise duties (approximately 45%).
- The Finance Commission, constituted under Article 280, acts as the balance wheel of the Indian federal financial relationship.
- Article 365 dilutes the federal principle by enabling the imposition of President's Rule in a state which fails to comply with a direction of the Centre. Seervai defends the power because it is open to judicial review. But it may be noted that the imposition of President's Rule affects the independence of the states. Practically speaking, once a democratically constituted government is unseated through such imposition of President's Rule, it is not only undemocratic but also burdens the state's exchequer with the cost of conducting re-elections. Judicial review is a time-consuming process, and sometimes, by the time a decision is given, the government's term of office may have expired. Therefore, conferment of such blanket power on the Centre is undesirable, as it affects the democratic process and dilutes the federal principle.
- The President is competent to proclaim an Emergency in any part or the whole of the country under Article 352 if satisfied that a grave emergency exists. The 44th Amendment to the Constitution replaced the words "internal disturbance" with "armed rebellion". The proclamation of Emergency in 1975 by the unilateral decision of the then Prime Minister of India, Mrs. Indira Gandhi, led to the amendment of the Constitution, the power having been much misused during the Emergency.
- In Rajasthan v. Union of India the Supreme Court reiterated its dictum in West Bengal v. Union of India that the extent of federalism is largely watered down by the needs of the progress and development of the country.
- The State of West Bengal submitted a memorandum suggesting certain changes to the Constitution to strengthen the federal principle: Parliament's power to alter the boundaries of a state under Article 3 should be subject to the state's approval; residuary power under Article 248 should be conferred upon the states; and deletion of Article 249 and Articles 356 to 360 would be likely to strengthen the federal principle.
- It is unfortunate to note that there has not been proper utilization of Article 263 of the Constitution.
It is high time to reconstitute the Inter-State Council as an autonomous, independent and high-powered body. It must be entrusted with the responsibility of dealing with all issues between the Centre and the states. The Finance Commission and Planning Commission should be made independent, autonomous authorities, with appointments made in consultation with the states. Adequate autonomy must be given to the states through the conferment of power on them and by suitably amending Articles 3, 249 and 346. Conferment of residuary power on the states is also desirable. Governors should be appointed by the Inter-State Council. Any disputes between the Centre and the states should be decided expeditiously through the constitution of special constitutional benches.
- Mohan Rao B., former Principal, Rajiv Gandhi Institute of Law, Kakinada
source: http://www.lawyersclubindia.com/articles/Federal-Principle-in-the-Indian-Constitution-a-perspective--3278.asp
Trauma is one of those factors. Trauma means that we are causing discomfort to an anatomical feature. This either weakens the feature or otherwise causes it to reinforce itself.
Weakening can be represented by tears, breaks, inflammation, and similar damage.
Reinforcement is the body's natural protection process. During exercise you actually breakdown and tear the muscles. The body then creates more muscle fibers than were destroyed to increase the strength of the muscles. Other anatomical features can do the same thing.
The body can reinforce an anatomical feature to strengthen it, like the muscle, or protect it, like a callus. It can also reinforce a joint to increase stability. This usually occurs by building up bone or tightening ligaments to reduce the range of motion.
Reinforcement can be helpful or harmful. When the body does it without your knowledge or direction it can lead to injury and body deformation.
Some examples of reinforcement injuries are:
- Trigger Finger
- Bone Spurs
Repetition is a common factor: repeated trauma from an activity. If the activity strains the body, even minutely, the buildup of trauma over time can lead to a breakdown of that anatomical feature, causing an injury.
The trauma level does not have to be constant. The harder the strain, the more it accelerates the buildup. However, even slight trauma, sustained long enough, can cause a collapse.
Think about the Grand Canyon. A little stream ran across rock for a long time and cut a humongous gorge through stone.
Posture and Body Mechanics
Another factor is poor posture or bad body mechanics. The body is designed to work a certain way.
- Joints have a range of motion.
- Bones provide a framework.
- Nerves send the instructions and provide feedback.
- Muscles contract to move the skeleton.
- Bursae lubricate the joints.
- Tendons and ligaments hold it all together.
- Other parts do other things.
Everything has a function and moves a certain way. Performing an action with a bad posture or body mechanics forces the body to move in a more difficult way.
Poor posture and bad body mechanics can also cause poor circulation. Circulation is vital to a healthy body. It delivers the fuel. Without fuel the body is forced to function less efficiently and that places more strain on it.
Using the wrong muscles is another example of bad body mechanics. Some actions do not require a great deal of strength at the final body link in the chain. You can generate the needed momentum in other body parts and let the rest whip around into final position.
Think about how you throw a baseball. Momentum is started in the hips and transferred to the arm. The arm then generates the bulk of the strength needed. The hand is used for fine control as it whips into position.
You can generate a lot more speed this way, but since muscles are meant to control motion, it can overextend the joints and strain them. Sprains and pulls are common. Major league pitchers have even broken their arms simply by throwing the ball.
Body in Motion
A lot of products and setups are designed to support you in a good posture. This is beneficial but it misses an important point. The body is meant to move.
Muscles are designed to move. They are not designed to stay in the same position for hours on end. Static strain is very fatiguing.
Movement also helps circulation.
Your health is a contributing factor to developing a repetitive stress injury. If you are unhealthy or overweight you are already straining your body. Your circulation is less than it should be and your body has to carry more weight. That makes you more susceptible to repetitive or acute trauma.
Not to be confused with the physical stress in repetitive stress injury, mental stress can also contribute to the injury. In some respects, stress is stress no matter what causes it. More to the point, mental stress can materialize in physical symptoms.
It will usually show up in the weakest part of your body. If you are already developing a repetitive stress injury there is a good chance a high mental stress level will exacerbate it.
The Problem Task
Most repetitive stress injuries can be attributed to a specific task. Yet once the injury has occurred, other actions that would not have caused a problem on their own further traumatize the injured area because of its weakened state.
Discovering the problem tasks can be difficult, but there are ways you can do it. By understanding what your repetitive stress injury is, tracking your pain on a visual pain scale, and analyzing your tasks, you should be able to find the culprit and correct the situation.
The surface of the planet is made up of land and water. Some of the water on land is fresh water, while much of the planet's water is in oceans or seas. If you've ever been knocked over by a big oceanic wave and swallowed a gulp of seawater, you are keenly aware that seawater is salty. It is the saltiness of the water that signifies the presence of ions in seawater.
What is seawater?
Seawater is defined simply as water from the ocean, but it is much more than that; it is often an underappreciated substance. In reality, seawater is a complex mixture of mostly pure water (two hydrogen atoms covalently bonded to one oxygen atom) and other components, chiefly ions. The ions in seawater carry a positive or negative charge. They come mainly from components of rocks known as salts.
What are salts?
Salts are ionic compounds that, when dissolved in water, release ions. Salt compounds are electrically neutral, but, according to the National Oceanic and Atmospheric Administration, the released ions are electrically charged atomic particles. Both positive particles, or cations, and negative particles, or anions, are released as the salts break down from the erosion of falling rain.
How do salts get in the ocean?
Rain is slightly acidic, due to carbon dioxide from the air, and water acts as a solvent because of its high polarity, or unequal distribution of shared electrons resulting in a negative pole at the oxygen atom and two positive poles at the hydrogen atoms. When rain falls over land it erodes the rock, and the acids and water further break down the rocks, releasing salts and the ions therein. The salts and their ions wash through streams and rivers, eventually dumping into the ocean.
What kind of salt ions are found in the ocean?
There are several types of salts and therefore several types of ions. The most common salt ions found in seawater are chloride and sodium. In fact, 90 percent of all the salt ions found in seawater are chloride and sodium. The rest of the ions are mainly sulfate ions, magnesium ions, calcium ions, bicarbonate and potassium ions. Interestingly enough, seawater can contain a lot more than ions. Every major naturally occurring element has been found in seawater, from gold to lead.
Why doesn't seawater get saltier every time it rains?
Seawater remains in a relatively constant state of salinity. By measuring the amount of chlorine in seawater, scientists have been able to track the ratio of the major salts over time: it has remained constant for about a billion years. This is called a state of equilibrium. While there is a constant input of ions into the sea, there is an equal amount of ion output. The output occurs mainly through sediment formation, salt deposits left after seawater evaporates, and hydrothermal vents.
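To make the equilibrium idea concrete, here is a tiny illustrative sketch with made-up flux numbers (the article gives none): when the yearly ion input equals the yearly output, the ocean's total salt stock stays constant.

```python
# Minimal steady-state sketch with assumed numbers: equal ion input (rivers)
# and output (sediments, evaporite deposits, hydrothermal vents) keep the
# ocean's total salt content constant year after year.

salt = 1000.0      # arbitrary starting salt stock (illustrative units)
river_input = 5.0  # salt added per year (assumed)
removal = 5.0      # salt removed per year via sediments, deposits, vents (assumed)

for year in range(3):
    salt += river_input - removal
    print(f"year {year + 1}: salt stock = {salt:.1f}")  # stays at 1000.0
```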
You have done your experiment or study and have your data. What next? How do you make sense of the results? In fact, one of the best ways to design a study is to imagine this situation before you start!
This part will address a number of questions to think about during analysis (or design), including: whether your work is to test an existing hypothesis (validation) or to find out what you should be looking for (exploration); whether it is a one-off study or part of a process (e.g. '5 users' for iterative development); how to make sure your results and data can be used by others (e.g. repeatability, meta-analysis); looking at the data and asking whether it makes sense given your assumptions (e.g. Fitts' Law experiments that assume index of difficulty is all that matters); and thinking about the conditions: what have you really shown, some general result, or simply that one system or group of users is better than another?
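As an aside on the Fitts' Law point, here is a minimal sketch using the common Shannon formulation of the index of difficulty; the formula choice and the numbers are illustrative, not taken from the text above.

```python
import math

# Many Fitts' Law experiments assume movement time depends only on the index
# of difficulty (ID). In the common Shannon formulation, ID = log2(D/W + 1)
# for target distance D and width W; different D/W pairs with the same ratio
# collapse to the same ID, which is exactly the assumption worth checking
# against your own data.

def index_of_difficulty(distance: float, width: float) -> float:
    return math.log2(distance / width + 1)

print(index_of_difficulty(512, 16))  # ~5.04 bits
print(index_of_difficulty(256, 8))   # ~5.04 bits -- same ratio, same ID
```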
- why are you doing it?
- look at the data
- visualise carefully
- what have you really shown?
- diversity: individual and task
- building for the future
Bryozoans (moss animals) are a group of aquatic invertebrates that are found in great variety throughout the world, with well over 100 species in Sweden alone. Yet little is known about them. Researchers at the University of Gothenburg have now studied Swedish bryozoan species using DNA techniques.
"There are currently over 6000 known species of Bryozoa. Earlier studies were based on visible characteristics of these animals, which is not sufficient to decide how the species are related to each other. To understand the evolution of bryozoans and how they are related to other animals, it is necessary to use molecular data, that's to say DNA," says Judith Fuchs of the Department of Zoology at the University of Gothenburg.
When Bryozoa were discovered in the 16th century, they were regarded as plants. Later they were found to have a nervous system, muscles and an intestinal system, and were classified as animals. Individual bryozoans are barely visible to the naked eye, but like coral animals they build colonies that can reach several centimetres in size, and some species build colonies of over 30 cm.
In her thesis, Fuchs has studied the evolution and relationships of Bryozoa using molecular data (DNA) from more than 30 bryozoan species, most collected in Sweden. The results show that this animal group developed from a common ancestor that probably lived in the sea. Two groups of Bryozoa evolved from this common ancestor: a group that stayed in the marine environment and another that evolved in freshwater. The DNA studies of the larval stage of Bryozoa can also contribute to a better understanding of the evolution of life cycles and larval stages of other multicellular animals.
Together with her supervisor, Matthias Obst, over a period of four years she has also taken part in the marine inventory of the Swedish Species Project along the west coast of Sweden. The collection of all marine bottom-living animals is based on more than 500 samples from 400 locations.
"We found as many as 120 marine bryozoan species in our waters, and many of them had not been previously known in Sweden. We also found a completely new species of Bryozoa. This is a very small bryozoan with characteristic spikes on its surface, which I have described in my thesis."
To date, 45 per cent of the bryozoans collected in the inventory have been determined.
"Sweden has a very rich bryozoan fauna. On your next trip to the beach you might perhaps take a closer look at seaweed or pebbles. If you see a white covering with small holes in it, you have found a bryozoan colony for yourself."
The pharynx is a tube belonging to what is called the upper aerodigestive tract. It is located deep behind the mouth, above the larynx and the oesophageal opening. The pharynx has three primary functions: transmitting food from the mouth to the oesophagus during swallowing, which makes it part of the digestive system; allowing air to pass from the mouth and nose towards the larynx in the neck and then the trachea, which makes it part of the respiratory system; and playing a role in speech, shaping the sounds emitted by the vocal cords in the larynx. The pharynx is made up of three parts, from top to bottom: the nasopharynx, which communicates with the nasal cavities; the oropharynx, connected to the mouth; and the laryngopharynx (or hypopharynx).
The annual date of ice out for some lakes is fodder for prognostication and even wagers, but for aquatic plants and animals, that date has deeper ecological significance. Light and temperature are key cues in the aquatic environment, and ice cover keeps lakes cold and dark in late winter. As the air temperature warms, the ice melts, usually leaving open water around the edge and then falling apart over deeper water over a short time period. If that date is earlier, algae and rooted plants can get a head start on spring growth. If that date is later, growth is delayed. Temperature also affects when hibernating aquatic animals, like turtles and frogs, become active. Fish are active even under the ice, as any ice fisherman will tell you, but are more aggressive after ice-out and turn to spawning activities based on temperature cues.
While lakes do not actively manage time, their situation is much like ours: if you get up early, you can get a lot more done in a day, and if you sleep in, you may not finish your to-do list. As the water warms and light penetrates further without ice, many biological processes in lakes speed up. Bacteria decompose organic bottom sediments, using oxygen and releasing various substances into the water column. Algae take up nutrients and use sunlight to photosynthesize and make more biomass. Zooplankton eat algae and reproduce more frequently, but small fish also eat zooplankton and limit that trophic level by early summer in most lakes. Fish spawn and make small fish that eat those zooplankton. In the meantime, rooted plants are growing, either from seeds, various winter buds, or root stocks, anywhere that light penetrates to a hospitable bottom substrate. Benthic invertebrates, often dependent on those plants, grow, reproduce and are eaten by fish or each other. A lake waking up from what seems like a winter sleep is indeed a busy place!
With variation in ice out date from year to year, and weather variation once the ice does go out, the sequence and intensity of cues will vary considerably from year to year, making every year unique to some extent. General patterns of plant growth, algae succession, fish spawning and other biological processes are known, but small changes can make quite a difference. A cold snap or windy period in May can retard stratification or cause a downturn in fish spawning that is not recoverable in that year. A very mild winter like we had going into 2016 can let perennial plants like invasive species of watermilfoil get a very early start (some plants may not even have died back to roots and stems) that outcompetes native species and makes it hard for harvesting programs to keep up. Weather plays a big role, and is influenced by climate change.
Climate change is a popular topic and the subject of spirited debates, but the data clearly show that lakes have been experiencing earlier ice-out dates over the last century. We seem to be losing about a day of ice every decade, such that, based on a period of record going back about 150 years, ice-out is now occurring two weeks earlier on average. Just keep in mind that aquatic organisms do not live in the "average": lakes have experienced both very late and very early ice-out dates in just the last few years.
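As a quick sanity check on the quoted trend, the arithmetic works out as follows (a trivial sketch that simply restates the numbers above):

```python
# One day of ice cover lost per decade over a ~150-year record
# amounts to roughly a two-week shift in the average ice-out date.

days_lost_per_decade = 1
record_years = 150
total_shift_days = days_lost_per_decade * record_years / 10
print(f"ice-out shift ≈ {total_shift_days:.0f} days (about two weeks)")
```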
Capitalization is the writing of a word with its first letter in uppercase and the remaining letters in lowercase.
Capitalization custom varies with language.
In Latin and ancient Greek, only proper nouns are capitalized.
In most modern languages, the first word in a sentence is capitalized as well.
In the English language the word I is always capitalized, and many authors capitalize all words in a title except conjunctions and articles.
Many other languages, such as German, capitalize all nouns.
Some other miscellaneous rules:
- In English, in addition to proper nouns, proper adjectives (those derived from a name, such as Canadian, Shakespearian) are written with initial majuscules, as are the names of days of the week, months, languages, and the pronoun I.
- In German, all nouns are written with an initial majuscule.
- In Dutch, if a proper noun starts with the diphthong ij both i and j are capitalized. Example: IJsland (Iceland).
- In Romance languages, days of the week, months, and adjectives are not written with initial majuscules.
- In Spanish, the abbreviation of the pronoun usted, Ud. or Vd., is usually written with a capital. The same goes for the Italian pronoun Lei and the German Sie when these are used as a respectful second-person pronoun (see T-V distinction).
- Some Romance languages capitalize specific nouns; for example, French often capitalizes such nouns as l'État (the state) and l'Église (the church) when not referring to specific ones.
- Many European languages capitalize pronouns used to refer to God or a god.
The full rules of capitalization for English are complicated and have changed over time, generally to capitalize fewer terms; to the modern reader, an 18th century document seems to use initial capitals excessively. It is an important function of English style guides to describe the complete current rules.
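To illustrate the English title-casing convention mentioned earlier, here is a minimal sketch; the word lists are illustrative only, and real style guides (which also lowercase short prepositions) differ on the details.

```python
# A naive sketch of English title-casing: capitalize every word except
# articles, short conjunctions, and (as many style guides also require)
# short prepositions. The word set below is illustrative, not exhaustive.

LOWERCASE_WORDS = {"a", "an", "the", "and", "but", "or", "nor", "for",
                   "of", "in", "on", "to", "at", "by"}

def title_case(title: str) -> str:
    words = title.lower().split()
    out = []
    for i, word in enumerate(words):
        # The first word is capitalized regardless of its part of speech.
        if i == 0 or word not in LOWERCASE_WORDS:
            out.append(word.capitalize())
        else:
            out.append(word)
    return " ".join(out)

print(title_case("the taming of the shrew"))  # The Taming of the Shrew
```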
See also: Bicapitalization
Water is without doubt one of the most undervalued resources on earth.
Without water, life would not exist on the planet: all living things rely on water, and without it we die quite quickly. Humans can survive without food for up to a month, but without drinking water survival is limited to a matter of days.
It’s not surprising that throughout history, people have settled near to water sources for drinking and to grow crops. From the Seine River in Paris to Lake Texcoco in Mexico City, population growth and distribution have been intimately linked to the availability of fresh water for centuries.
This striking image, taken by the astronaut Doug Wheelock, shows how important a water supply is for human settlement:
Q. Can you identify the river shown above, and name the city it provides water for?
Q. How many major cities can you think of that developed alongside significant waterbodies?
(You may find some clues in the famous rivers of the world link below.)
Water is not only essential for human survival. There is also an ecological water requirement, below which our natural world cannot function. Water has a number of competing uses:
- as an element of ecosystems
- a foundation of livelihoods
- a resource of value
- an anchor of cultural meaning
Globally, there are increasing pressures on water supply. In particular, population growth and economic development are putting pressure on available freshwater resources. Water quality is inextricably linked to human health in many ways and poor water quality can lead to disease, reduced food availability and malnutrition. Improved access to fresh water has a direct positive impact on people and communities leading to significant social, economic and environmental benefits.
Campylobacter jejuni is a species of curved, helical-shaped, non-spore-forming, Gram-negative, microaerophilic bacteria commonly found in animal feces. It is one of the most common causes of human gastroenteritis in the world. The food poisoning it causes can be severely debilitating but is rarely life-threatening. It has also been linked to Guillain-Barré syndrome, which generally develops two to three weeks after the initial illness.
It is commonly associated with poultry, and it naturally colonizes the digestive tract of many birds. It has also been found in wombat and kangaroo feces. Contaminated drinking water and unpasteurized milk provide an efficient means of distribution.
Infection often results in enteritis, with abdominal pain, diarrhea, fever, and malaise. Symptoms persist for between 24 hours and a week but can last longer. The disease responds to antibiotics, typically ciprofloxacin. Fluid and electrolyte replacement is necessary in serious cases.
Introduction: This simple circuit provides an easy way to build a fading LED circuit (i.e., to make an LED fade on and off) without any coding or programming. It requires a handful of simple passive components and produces a continuous fade-up and fade-down of the LED. As with an LED fade-out dimmer, multiple LEDs can be used in parallel, with one resistor per node, taking into account the transistor's maximum ratings before connecting a number of LEDs.
CIRCUIT DIAGRAM:

| Component | Value |
|---|---|
| Resistor | 10 kΩ & 3.3 kΩ |
| Light-emitting diode (LED) | Any bright color |
WORKING: The main working components of this circuit are the transistor and the capacitor. The charging and discharging of the capacitor make the LED fade up and down. At power-up, the capacitor starts to charge through R1. Because the capacitor and resistor feed the base of the transistor, the base voltage rises as the voltage across the capacitor rises, and the transistor begins to conduct, delivering roughly 100 times the base current to the LED through its collector-emitter path to ground. As the capacitor voltage keeps increasing, the LED current rises with it and the LED fades in slowly. Conversely, as the capacitor discharges, the LED fades out slowly until it turns off. The period of this fade-up and fade-down cycle can be adjusted by changing the resistor values.
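The fade timing follows the familiar RC charging curve. The sketch below is illustrative only: R1 matches the parts list, but the capacitor and supply values are assumptions, since the article does not specify them.

```python
import math

# Rough sketch of the RC charging behaviour that drives the fade.
# R1 = 10 kΩ comes from the parts list; the capacitor and supply
# voltage are not given in the article, so these are guesses.

R1 = 10_000      # ohms (from the parts list)
C = 100e-6       # farads (assumed)
V_SUPPLY = 9.0   # volts (assumed)

def capacitor_voltage(t: float) -> float:
    """Voltage across the capacitor t seconds after charging begins."""
    return V_SUPPLY * (1 - math.exp(-t / (R1 * C)))

tau = R1 * C  # time constant: ~63% charged after one tau
print(f"time constant = {tau:.2f} s")
for t in (0.5, 1.0, 2.0, 5.0):
    print(f"t = {t:4.1f} s -> Vc = {capacitor_voltage(t):.2f} V")
```

With these assumed values the time constant is about one second, so the visible fade-in lasts a few seconds; larger resistor or capacitor values slow the fade, which matches the article's note that the period is set by the resistor values.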
APPLICATIONS: The fading LED circuit could be implemented in the following areas of work.
If you have the Year 1 CGP Grammar and Punctuation book, please complete the next 2 pages in your book.
If you have the Year 3 or 4 CGP Grammar and Punctuation book, please follow the lesson below.
Apostrophes for contraction
Today we are (we're) going to look at how to use apostrophes for contraction. Contraction means to make something shorter, so using an apostrophe for contraction means making 2 words into 1 word using an apostrophe. Watch this video:
So the apostrophe replaces any letters that have been missed out when shortening the words and the two words squish into one.
what has = what's
does not = doesn't
you will = you'll
Sometimes the apostrophe replaces just one letter; at other times it can replace two or more.
Have a go at pages 50 and 51 of your CGP Grammar and Punctuation book to practise.
Wide-Screen Motion Picture
a type of motion picture in which the standard projection screen is replaced with a wider screen having an aspect ratio from 1:1.66 to 1:2.35. The larger screen size, together with stereophonic sound reproduction, considerably expands the graphic possibilities of the art of motion pictures and increases the impact of the motion picture on the viewer (especially in color feature films during outdoor, crowd, and battle scenes). There are several wide-screen systems presently in use, including those with anamorphic compression, those using a cropped frame (the most common), and Techniscope.
Systems using anamorphic optics (first proposed by E. Abbe in 1897) compress the frame during filming and expand it during projection. In 1927 the French scientist H. Chrétien designed the Hypergonar anamorphic lens, which approximately doubled the horizontal field of view. The CinemaScope system, developed by the US motion-picture company Twentieth Century-Fox Film Corporation under the Chrétien patent, has enjoyed wide use in the USA and other countries. The first motion picture filmed in CinemaScope was The Robe, released in the USA in 1953. In the USSR the first wide-screen feature film, Il'ia Muromets, was released by the Mosfil'm motion-picture studio in 1956. The anamorphic optical system used for modern wide-screen motion pictures compresses the image horizontally by a factor of 2:1, which places it within the confines of a somewhat enlarged, four-perforation frame on standard 35-mm motion-picture film. The usable frame area (Figure 1) is thus increased correspondingly.
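A small worked example of the anamorphic arithmetic follows; the frame dimensions below are approximate figures for a 35-mm anamorphic camera aperture and are assumptions for illustration, not taken from the article.

```python
# A 2:1 horizontal squeeze stored on the film frame is expanded 2x during
# projection, doubling the effective width-to-height ratio on screen.

def projected_aspect(frame_width: float, frame_height: float, squeeze: float) -> float:
    """Effective width:height ratio on screen after anamorphic expansion."""
    return (frame_width * squeeze) / frame_height

# Assumed frame dimensions of roughly 21.95 mm x 18.6 mm; exact figures
# vary by standard and era.
ratio = projected_aspect(21.95, 18.6, squeeze=2.0)
print(f"projected aspect ratio ≈ {ratio:.2f}:1")  # ≈ 2.36:1, close to 1:2.35
```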
The use of a cropped frame in the wide-screen systems that gained wide use abroad in the mid-1950s reduces the height of the standard 35-mm frame to dimensions whose aspect ratio corresponds to the required aspect ratio of the screen (Figure 2). The frames may be masked in the camera or in the projector. In the latter case the frame is composed during filming so that the most important objects will not be cut off, that is, not masked, during projection. To accomplish this, two lines are marked (as shown by the broken lines in Figure 3) on the ground glass of the camera viewfinder; they permit the cameraman to restrict the height of the frame section containing the most important visual material. This method of viewfinder cropping is most commonly used when filming wide-screen motion pictures for which frame masking will be done during projection. It makes it possible to use a film copy with an aspect ratio of 1:1.37 for television programs, eliminating the need to crop the image above and below with a mask. The principal advantage is that the photographic and projection processes are practically unchanged. However, the technique considerably reduces the useful frame area, thereby requiring additional magnification for projection on a wide screen through the use of short-focus optics. Image sharpness is also somewhat degraded, and graininess is increased. Cropping gives satisfactory results for aspect ratios in the range from 1:1.66 to 1:1.85 when high-quality film, high-quality optical projection systems, and high-power light sources are used.
Techniscope was developed in 1964 by the photographic film company Technicolor Corporation mainly to reduce the costs of producing wide-screen color films. Filming is accomplished with a standard camera equipped with a spherical optical system and a modernized pull-down mechanism that produces a negative frame with a height corresponding to the dimension of two perforations (instead of four as in standard systems). A wide-screen positive copy with anamorphic compression is produced on special equipment by means of optical printing (Figure 4). The system substantially reduces the cost of negative motion-picture film and can provide the film copies required for television programs. However, the resolution is inferior to other wide-screen systems because of the 2:1 reduction in the area of a negative frame compared with the positive obtained by enlargement during optical printing and simultaneous anamorphic compression.
In the USSR the Motion Picture and Photography Institute (NIKFI) and the Mosfil’m motion-picture studio in 1974 jointly developed a wide-screen motion-picture system known as the Universal Frame Format. The system photographs on 35-mm film with a standard optical system and uses the entire area of the frame between perforations. In the printing process, copies can be obtained for practically all formats used in modern theaters—standard 35-mm, 35-mm wide-screen with anamorphic compression, masked 35-mm, 16-mm, and 70-mm (wide-film with stereophonic sound).
REFERENCES
Vysotskii, M. Z. Sistemy kino i stereozvuk. Moscow, 1972.
Goldovskii, E. M. Vvedenie v kinotekhniku. Moscow, 1974.
Konoplev, B. N. Osnovy fil’moproizvodstva, 2nd ed. Moscow, 1975.
M. Z. VYSOTSKII
The purpose of a DC motor is to produce movement from a battery. If a current is passed through a coil positioned in a magnetic field, the coil experiences a force according to Fleming's left-hand rule. The left-hand side of the coil, AB, experiences an upward force and the right-hand side, CD, a downward force. When the coil reaches the vertical position, the commutator changes polarity. This means the current reverses direction in the coil; but since the coil has undergone a half turn, CD is now on the left and AB on the right. Now CD experiences an upward force, AB experiences a downward force, and the coil continues to turn in the same sense (clockwise, as above).
The rate of rotation can be increased by:
- Increasing the number of turns on the coil. Each wire experiences an upward force, so doubling the wire doubles the force.
- Increasing the strength of the magnetic field by using stronger magnets.
- Putting more cells in the battery.
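A rough sketch of the underlying force relationship, F = nBIL, with invented values, shows why each of these changes increases the force on the coil; none of the numbers come from the text above.

```python
# Force on one side of the coil: F = n * B * I * L
# (n turns, flux density B in tesla, current I in amps, side length L in metres).
# Fleming's left-hand rule gives the direction; this computes the magnitude.

def coil_side_force(n_turns: int, b_tesla: float, current_a: float, length_m: float) -> float:
    return n_turns * b_tesla * current_a * length_m

base = coil_side_force(n_turns=10, b_tesla=0.2, current_a=1.5, length_m=0.05)
doubled = coil_side_force(n_turns=20, b_tesla=0.2, current_a=1.5, length_m=0.05)
print(f"10 turns: {base:.3f} N")    # 0.150 N
print(f"20 turns: {doubled:.3f} N") # 0.300 N: doubling the turns doubles the force
```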
The gap between the two halves of the split-ring commutator is so wide that a carbon brush can only touch one half of the split-ring at any time. This protects the circuit. It also means that sometimes the motor will not start when switched on, if the carbon brushes are not in contact with the commutator.
Soil Health Spotlight: Soil structure and aggregation
Soil structure is a way to describe how mineral and organic particles are arranged in the soil. Structure is an important characteristic because it influences how stable and resilient the soil is to disturbance and erosion. It also plays an important role in water infiltration, aeration, root penetration, carbon storage, and nutrient cycling. Measuring soil aggregates is one way to determine the structural status of the soil. Aggregates are the small-scale clumps of bound soil particles and organic matter ranging from a half inch in diameter to microscopic in size. In general, the bigger and more numerous the aggregates, the better structure your soil has. Stable soil aggregates can hold soil particles together against wind, rain, and traffic more effectively than unbound free agents because they are literally stuck together.
Aggregates are often divided into two groups: macroaggregates (pea gravel size) and microaggregates (smaller than a grain of sand). Each group serves a different role in carbon cycling. Macroaggregates can be an important source of easily available carbon for soil microbial activity but break down relatively quickly in response to disturbance in the soil. Microaggregates act as more long-term carbon storage where they protect small amounts of organic matter from microbial breakdown. These aggregates are far more stable and resistant to destructive forces. Microaggregates can be formed inside of macroaggregates, which highlights the importance of macroaggregate formation. If macroaggregates are broken before microaggregates can be formed, the long-term storage of organic matter in soil can be compromised.
Aggregates are formed by attractive forces between particles, held together by sticky “organic glues” exuded by microorganisms, and physically molded together by plant roots and soil animals (Figure 1). Preventing aggregate breakdown and enhancing the activity of plants and microbes are the primary ways agricultural management can improve soil aggregation.
Figure 1. Earthworms exude sticky glues which help to build soil structure. Photo from K-State Research and Extension.
The following management practices have been shown to improve soil aggregation:
- Reduced tillage or no-till – Reduces disturbance and increases plant residue return to soil
- Crop rotation – Enhances plant-microbe-soil interactions
- Cover crops – Enhances plant-microbe-soil interactions, increases plant residue return to the soil
- Adjusting soil pH or cation availability with amendments such as lime or gypsum (in case of sodic soils) – Can increase soil particle binding from calcium input and/or improve soil pH to facilitate plant and microorganism growth
Laura Starr, Ph.D. graduate student
This video says about itself:
The world’s oldest fossil: 3.7 billion year old bumps found on ancient sea bed
31 August 2016
They were formed by prehistoric colonies of bacteria living in a shallow sea.
It suggests life may have emerged on Earth far faster than first thought.
The finding raises hopes life may have existed on Mars.
Rapid emergence of life shown by discovery of 3,700-million-year-old microbial structures
Published online 31 August 2016
Biological activity is a major factor in Earth’s chemical cycles, including facilitating CO2 sequestration and providing climate feedbacks. Thus a key question in Earth’s evolution is when did life arise and impact hydrosphere–atmosphere–lithosphere chemical cycles? Until now, evidence for the oldest life on Earth focused on debated stable isotopic signatures of 3,800–3,700 million year (Myr)-old metamorphosed sedimentary rocks and minerals [1, 2] from the Isua supracrustal belt (ISB), southwest Greenland [3].
Here we report evidence for ancient life from a newly exposed outcrop of 3,700-Myr-old metacarbonate rocks in the ISB that contain 1–4-cm-high stromatolites—macroscopically layered structures produced by microbial communities. The ISB stromatolites grew in a shallow marine environment, as indicated by seawater-like rare-earth element plus yttrium trace element signatures of the metacarbonates, and by interlayered detrital sedimentary rocks with cross-lamination and storm-wave generated breccias. The ISB stromatolites predate by 220 Myr the previous most convincing and generally accepted multidisciplinary evidence for oldest life remains in the 3,480-Myr-old Dresser Formation of the Pilbara Craton, Australia [4, 5]. The presence of the ISB stromatolites demonstrates the establishment of shallow marine carbonate production with biotic CO2 sequestration by 3,700 million years ago (Ma), near the start of Earth’s sedimentary record. A sophistication of life by 3,700 Ma is in accord with genetic molecular clock studies placing life’s origin in the Hadean eon (>4,000 Ma) [6].
See also here.
Newly discovered bacterial fossils may push back the date of the earliest direct evidence of life on Earth to 3.7 billion years ago, 220 million years older than the previous record. This is roughly four-fifths of the way back to the original formation of the planet, 4.6 billion years ago. If confirmed, this discovery would have tremendous significance for our understanding of the evolution of life in the universe: here.
Coastal waters were an oxygen oasis 2.3 billion years ago. Despite being ripe for complex life, it took another 1.5 billion years for oxygen-hungry animals to evolve: here.
The breath of oxygen that enabled the emergence of complex life kicked off around 100 million years earlier than previously thought, new dating suggests. Previous studies pegged the first appearance of relatively abundant oxygen in Earth’s atmosphere, known as the Great Oxidation Event, or GOE, at a little over 2.3 billion years ago. New dating of ancient volcanic outpourings, however, suggests that oxygen levels began a wobbly upsurge between 2.460 billion and 2.426 billion years ago, researchers report the week of February 6 in Proceedings of the National Academy of Sciences: here.
Life on Earth could be nearly four billion years old, suggests new fossil discovery. The Earth was an extremely hostile place at the time as it was still being bombarded by asteroids: here.
Tiny mounds touted as the earliest fossilized evidence of life on Earth may just be twisted rock. Found in 3.7-billion-year-old rocks in Greenland, the mounds strongly resemble cone-shaped microbial mats called stromatolites, researchers reported in 2016. But a new analysis of the shape, internal layers and chemistry of the structures suggests that the mounds weren’t shaped by microbes but by tectonic activity. The new work, led by astrobiologist Abigail Allwood of NASA’s Jet Propulsion Laboratory in Pasadena, Calif., was published online October 17 in Nature: here.
Most species of trout do have teeth, although depending on the species the teeth may be located in different places. For example, rainbow trout have a mouth that doesn't extend past the back of their eyes and teeth located along the roof of the mouth, whereas other species of trout have teeth located along the base of the tongue (known as hyoid teeth).
When some trout are feeding, they often engulf their prey. Sometimes they follow their moving prey very slowly, or the prey may be carried by the current in front of a trout that happens to open its mouth. When this happens, the trout's gills flare and the prey is sucked into the fish's mouth. There, the prey is trapped by the tongue against the roof of the mouth as a means to "test" the food, and it is then either swallowed or expelled. This process happens very quickly.
Engulfing some prey would pose a deadly threat to these fish. For instance, the freshwater crayfish is equipped with spikes and a spine that would catch in the throat of a trout and choke the fish. It is for this reason that the crayfish must be grasped, turned tail first toward the throat and then swallowed. This grasping is done by the trout using its small, sharp teeth. |
Moving towards a zero-carbon future
The global and national climate change goals and how they will affect us
Whatever your political leanings, there can be little doubt now that climate change is one of the biggest challenges our world will face over the next few decades. Most climate scientists agree that “anthropogenic” (man-made) climate change is in motion. The only disagreement is over exactly how severe its near-to-long-term effects may be, as such scenarios are difficult to model and dependent on many assumptions.
The UN's stance on climate change
The UN’s Intergovernmental Panel on Climate Change (IPCC) has warned that in order to limit global warming to 1.5°C above pre-industrial levels – and thus avoid the more drastic consequences of climate change – global carbon emissions must hit net-zero by 2050.
“Human activities are estimated to have caused approximately 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C,” according to the IPCC’s 2018 report. “Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate.”
The IPCC notes that impacts to land and ocean ecosystems from global warming have already been observed. “Warming greater than the global annual average is being experienced in many land regions and seasons, including two to three times higher in the Arctic,” its 2018 report reads, with warming being “generally higher over land than over the ocean.”
The potential risks from climate change include the irreversible loss of ice sheets in Greenland and a subsequent rise in sea levels; widespread flooding, with certain areas of the world including Florida and parts of Asia at risk of submersion; extreme heat in regions such as the Middle East; drought, cyclones, dramatic changes to ecosystems, species loss, disease and mass migration.
So far, 195 nations, including the UK, have signed up to the United Nations Framework Convention on Climate Change’s Paris Agreement, adopted in 2015. This aims to limit the increase in global average temperature to less than 2°C above pre-industrial levels and to pursue measures that would limit the increase to 1.5°C to mitigate some of the most devastating impacts of climate change.
Given that governments across the globe (besides the US which, under President Donald Trump, will officially withdraw from the Paris Agreement in 2020) are taking the matter seriously, and that consumers are increasingly concerned about climate issues when it comes to choosing goods and services, businesses will need to plan ahead for major changes as we transition towards a zero-carbon future. So what will this look like?
The UK government's current climate change goals
In June 2019, one of outgoing prime minister Theresa May’s final acts was to make the UK the first member of the G7 group of nations to legislate (via an amendment to the Climate Change Act) for net-zero carbon dioxide emissions by 2050. Britain is one of the first major world economies to commit to this goal – France has proposed similar legislation and Finland and Norway have committed to make the transition earlier, in 2035 and 2030 respectively.
In part, these rapid shifts are expected to be achieved through the use of carbon credits, even though the UK government’s Committee on Climate Change (CCC), chaired by John Gummer, advised May against relying on them. Carbon credits allow polluting nations to offset carbon emissions by purchasing credits accumulated by less-polluting countries. However, critics argue that this does not deter rich nations from emitting excess greenhouse gases or encourage them to develop more sustainable practices – instead, it merely passes the responsibility onto poorer developing nations, who are arguably less well-placed to make the radical changes needed.
A blueprint for radical change
Yet even if carbon credits are used, the CCC’s recent report into achieving net-zero emissions in the UK reveals the steep mountain that must be climbed in a short space of time – just 30 years – in order to deliver a zero-carbon economy. Almost every area of our lives – our lifestyles, diets, homes, buildings, businesses and public infrastructure – will have to be transformed, leaving life in Britain in 2050 looking quite different to today.
Few areas will be impacted as directly or as obviously as power generation – we need to move from a reliance on fossil fuels towards much greater use of clean, renewable energy. The good news on that front is that considerable progress has already been made by the UK in reducing carbon emissions linked to power generation. In October, energy regulator Ofgem noted that emissions in Britain have fallen by 42% since 1990. That’s more than in any other major developed economy, and it’s mostly due to the near-eradication of coal usage for electricity generation. Government policies, such as the carbon price – which penalises coal-fired power plants – and the growth in renewable technologies, such as wind and solar power, have driven this shift.
Today, transport represents the biggest single source of carbon emissions, although even these fell in 2018 due to a rise in electric vehicle use. And according to Ofgem, the energy sector has beaten all others in cutting emissions, reducing them by 50% between 2010 and 2018, with the transport sector managing just a 2% reduction over the same period. However, Ofgem also warns that progress has slowed in recent years. In 2018, the UK’s greenhouse gas emissions fell by just 2.5%, down from a 3% drop in 2017 – the smallest reduction seen since 2012.
The CCC notes that while net-zero carbon progress thus far has been significant, domestic emissions will need to fall a lot faster than they currently are to meet the new targets introduced by May. “Meeting future carbon budgets and the UK’s 2050 target to reduce emissions by at least 100% of 1990 levels, will require reducing domestic emissions by at least 3% of 2018 emissions, that is 50% higher than under the UK’s previous 2050 target and 30% higher than achieved on average since 1990,” says the committee in its annual assessment of the UK’s progress in cutting emissions. “This is an indication of how substantial the step-up in action must be to cut emissions in every sector.”
What are the challenges?
Public engagement is one major hurdle. The CCC’s most recent progress report expresses concern over whether the government can convince the public to accept the dramatic lifestyle changes that will be required to prevent the more serious consequences that could result from extreme man-made climate change. Another obstacle is the UK’s capacity for rapid uptake of new technologies.
The CCC recommends, for example, the widespread installation of heat pumps and other green technologies to heat homes – but it also admits that as yet there are not enough qualified installation engineers to facilitate the required ramp-up in scale. Currently, there are around 20,000 heat pump installations a year in the UK versus more than one million gas boiler installations.
A further issue is that the most carbon-intensive businesses are also those that will find it hardest to make the transition. Of course, few industries are likely to be left untouched by the move to a zero-carbon economy. However, certain sectors use far more energy than others and are designated by the government as “energy intensive” industries.
Traditional manufacturing – businesses involved in the production of aluminium, cement, steel, fertiliser, chemicals, industrial gas production and paper – tends to be energy intensive, with electricity costs that range between 13% and 55% of gross value added.
These businesses are often owned by international companies whose investment needs are spread across the globe, and who therefore may have limited budgets for capital spending in the UK. Plus, their plants also tend to be situated in economically-deprived areas with high unemployment and low standards of living, where the jobs will be difficult to replace if the industry dies.
Transport is another key area. Currently, there are only 210,000 electric cars in the UK. Just 1% of the population owns an all-electric car, while only 2% own hybrids. The purchase price of these vehicles remains a barrier, while government subsidies for electric cars have been cut, and there is still a shortage of charging points. It will take time for the market to mature and for the price of electric vehicles to fall, and to become more accessible to the masses.
However, electric cars have an advantage in that new diesel and petrol vehicles are set to be outlawed in the UK by 2040. Indeed, under new proposals by the Conservative government, the ban could be accelerated to 2035 to bring Britain into line with European neighbours, such as Sweden, Denmark, the Netherlands and Ireland, which plan to outlaw the vehicles from 2030. Similarly, Scotland plans to ban new petrol and diesel vehicles from 2032.
To find out more about funding the zero-carbon future, download our in-depth report |
Photo Credit: J. Kopecky
Bald Eagle (Haliaeetus leucocephalus)
Status: special concern
Bald eagles are found all across North America, from Alaska to Florida and as far south as central Mexico. Mostly solitary animals, they have been known to hunt cooperatively and will often snatch prey from other hunters. Their primary diet is fish, and they can most often be seen perching in coniferous or deciduous trees in areas adjacent to large bodies of water. Bald eagles have strong, powerful wings and can soar and glide for hundreds of kilometers a day while exploring their vast territories. Since conservation efforts began in the late 1960s their numbers have recovered, and in 2007 they were removed from the Endangered Species List. But continued conservation is needed to keep the populations of these majestic birds healthy and strong. |
Training in Sports
11.1 Meaning and concept of sports training
11.2 Principles of sports training
11.3 Warming up and limbering down
11.4 Load, Adaptation, and recovery
11.5 Skill, technique and style
11.6 Symptoms of overload and how to overcome it
11.7 Concept of free play
11.1 Meaning and concept of sports training
Training has been referred to as a systematic exercise of effort, carried on over a considerable time, to develop the ability to bear greater loads, especially for competitions. Sports training provides the athlete with the basic means to adapt to a particular stressor through controlled exercise. This adaptation on the part of an athlete's body means that his body is prepared for a greater load. This process is called training.
Concept of training: Training for achievement or for competition is not a new idea. With the passage of time, more time and effort are being devoted to training and preparation for competitions. With the invention of new techniques every now and then in the field of athletics, weight training methods have shown very encouraging results.
Training for any game or event has become very technical and a scientific approach is needed to get the desired results.
11.2 Principles of Sports Training
The principles, or laws, of sports training are as follows:
1. The principle of continuity
2. The principle of overload
3. The principle of individual differences
4. The principle of general and specific preparation
5. The principle of progression
6. The principle of specificity
7. The principle of variety
8. The principle of warm-up and cool-down
9. The principle of rest and recovery
11.3 Warming up: It is a short-term activity carried out prior to any strenuous or skilled activity. Warming up is the initial preparation for a competition. Through such a workout we try to ready the groups of muscles expected to take part in the activity to follow. It is primarily a preparatory activity in which the physiological and psychological systems of an athlete are readied for the main activity.
Types of warm-up: (1) General warm-up
(2) Specific warm-up
Limbering down or cooling down:-
At the end of the training session or competition, athletes are normally advised to warm down. This is normally done in the shape of light but continuous activity, such as jogging or walking for some time at the end of the event. Such an activity after the completion of an event is called limbering down or cooling down.
11.4 Load [LOAD, ADAPTATION, AND RECOVERY]
1. Load: Load is defined as a weight or source of pressure; it is generally called an external stressor. It can also be explained as the amount of work to be done by a person or a machine. In the training of an athlete or player, it refers to the total amount of work expected from him on a daily, weekly or monthly basis. Loading in training can be done in various ways; load, therefore, is the work or exercise that a sportsperson performs in a training session.
2. Adaptation: Continued exposure to an extra load in the training of an athlete leads to an adjustment in his body function that enables him to bear the extra load without feeling uneasiness. To obtain a higher degree of adaptation, the load or stressor should be increased gradually; overloading, no doubt, leads to adaptation.
3. Recovery: Recovery means the restoration of the body to a normal state after a period of intense training or competition. This period is also referred to as a period of regeneration, during which stress-related effects are gradually eliminated. The increase in heart rate and respiratory rate depends on the intensity of the workout.
The time taken by an individual's pulse rate to come down to 80-85 beats per minute is the recovery period. During the recovery period, the body's resting state is restored.
11.5 Skill [SKILL, TECHNIQUE, AND STYLE]
1. Skill: Skill is an element of performance that enables the performer to do a large amount of work with little effort. The apparent visible ease of muscular work indicates a skillful movement or performance. In other words, it can be said that skill is the ability to do something well. Skills that are unnatural and complex can be learned more easily if the different elements of the movement can be separated and learned in parts.
2. Technique: It means the way of doing a particular task scientifically. This way of doing a thing should be based on scientific principles and be effective in achieving an aim. It is a basic movement of any sport or event. We can say that a technique is a way of performing the skill.
3. Style: It is the manner of doing something that is characteristic of a particular person or pattern. It may or may not be based on sound principles. A style of doing a movement, if perfect, looks graceful and appealing. It is an individual's expression of technique in motor action; therefore, each sportsperson, owing to his specific physical and biological capacities, realizes the technique in a different way.
11.6 What is overload?
Overload is not something that needs to be applied only on a daily basis; it must be applied over a lifetime of training. The final principle deals with the importance of applying overload logically over time. A training load that is beyond an athlete's capacity is known as overload.
Causes of overload
There can be various causes of overload, arising from a range of factors. How to overcome overload:
a) Through observation
b) Plan the training
c) Proper Nutrition
d) Psychological strategies
e) Social Interaction
11.7 Free Play
— "Play is fun!" This is how children usually respond when asked about play. But play is more than just fun. Play is engaging, voluntary and spontaneous. Free play is a way for children to learn more about who they are and what they can do.
— All children have a right to play. It is a process by which children learn. Good-quality play opportunities have a significant impact on child development.
— Play allows children to use their creativity while developing their imagination, dexterity, and physical, cognitive and emotional strength. Play is important to healthy brain development.
— It is through play that children at a very early age engage and interact with the world around them. |
Upopoy: A Symbolic Space for Ethnic Harmony
In July 2020, the National Ainu Museum and Park, nicknamed Upopoy, opened in the town of Shiraoi, Hokkaido as a center from which to “revitalize and expand the Ainu culture.”
The Ainu are an indigenous people from the northern region of the Japanese archipelago, predominantly Hokkaido, and have developed a distinctive, rich culture that includes the unique Ainu language, a spirituality that holds that spirits dwell in every part of the natural world, traditional dances that are performed at a variety of events, uniquely patterned embroidery and carved wooden art.
According to the National Ainu Museum website, the history of the Ainu (“Ainu” means humans in the Ainu language) stretches back 30,000 years, to when humans first came to Hokkaido. The Ainu were hunter-gatherer-fishers, but in around the seventh century they began to grow cereals. The Ainu actively traded with people overseas and created a unique culture.
From the seventeenth century, however, Ainu society gradually became absorbed by the Japanese economy and society. Until the nineteenth century, the Ainu lived in Hokkaido, the islands around Hokkaido and the northern parts of the Tohoku region. Today, many Ainu people still live in Hokkaido, while some Ainu live in other parts of Japan, particularly in the Kanto region, and overseas.
Efforts to promote and raise public awareness of Ainu culture face several challenges. The Ainu language and traditional crafts are at risk of being lost as the number of people able to pass on these traditions declines, while levels of understanding of Ainu history and culture remain low.
To overcome these challenges, the government’s Council for Ainu Policy Promotion stated in July 2009 that the establishment of “a symbolic space for ethnic harmony” would be key to a policy based on the recognition of the Ainu as an indigenous people. Upopoy, which means “singing together in a large group” in the Ainu language, has been established as a base for the revitalization and expansion of Ainu culture, an invaluable culture that is at risk of extinction, and as a “symbol of the building of a forward-looking, vibrant society with a rich, diverse culture in which indigenous people are treated with respect and dignity without discrimination.”
In July 2020, Upopoy opened in Shiraoi in southwest Hokkaido, facing the Pacific Ocean. By limited express train, Shiraoi is about 40 minutes from New Chitose Airport, and it is about 1 hour from Sapporo. The foundations of Shiraoi were laid by the Ainu, and the town has many facilities to pass down Ainu traditions and culture amid beautiful natural surroundings.
The National Ainu Museum in Upopoy has a collection of about 10,000 items, about 800 of which are permanently exhibited exploring six themes. In the Our Language area, the Ainu language, stories, place names and current initiatives to promote its use are explained. Visitors can hear narration in the Ainu language, and there are games enabling visitors to study the pronunciation and grammar of the Ainu language. Videos explaining place names and conversational Ainu are also displayed. In the Our Universe area, spirituality, a central aspect of Ainu culture, is described. The Ainu belief that ramat (spirit) exists all around us is explained using graphics.
In the open-air center, the National Ainu Park, visitors are able to experience Ainu culture through dance, cooking and traditional crafts. For example, the Cultural Exchange Hall (uekari cise) features traditional Ainu mouth harp, or mukkuri, performances and traditional Ainu dances that are inscribed on the UNESCO Representative List of the Intangible Cultural Heritage of Humanity. At the Workshop (yayhanokkar cise), visitors are able to make and taste Ainu cuisine and play traditional Ainu instruments such as the mukkuri.
“Visitors love the carved wooden handicrafts, embroidery and the experience of preparing food. We would like to provide opportunities to as many people as possible to experience Ainu culture, while taking measures to protect them from COVID-19,” says a staff member.
NOTE: With the permission of Upopoy, this article draws on English-language materials published by the museum. |
A genetic disorder is a disease caused by a different form of a gene called a variation, or an alteration of a gene called a mutation. Many diseases have a genetic aspect. Some, including many cancers, are caused by a mutation in a gene or group of genes in a person's cells. These mutations can occur randomly or because of an environmental exposure such as cigarette smoke. Other genetic disorders are inherited. A mutated gene is passed down through a family and each generation of children can inherit the gene that causes the disease. Still other genetic disorders are due to problems with the number of packages of genes called chromosomes. In Down syndrome, for example, there is an extra copy of chromosome 21. If you know that you have a genetic problem in your family, you can have genetic testing to see if your baby could be affected. |
Bordetella bronchiseptica: A species of BORDETELLA that is parasitic and pathogenic. It is found in the respiratory tract of domestic and wild mammalian animals and can be transmitted from animals to man. It is a common cause of bronchopneumonia in lower animals.
Bordetella: A genus of gram-negative, aerobic bacteria whose cells are minute coccobacilli. It consists of both parasitic and pathogenic species.
Bordetella Infections: Infections with bacteria of the genus BORDETELLA.
Bordetella pertussis: A species of gram-negative, aerobic bacteria that is the causative agent of WHOOPING COUGH. Its cells are minute coccobacilli that are surrounded by a slime sheath.
Rhinitis, Atrophic: A chronic inflammation in which the NASAL MUCOSA gradually changes from a functional to a non-functional lining without mucociliary clearance. It is often accompanied by degradation of the bony TURBINATES, and the foul-smelling mucus which forms a greenish crust (ozena).
Bordetella parapertussis: A species of BORDETELLA with similar morphology to BORDETELLA PERTUSSIS, but growth is more rapid. It is found only in the RESPIRATORY TRACT of humans.
Virulence Factors, Bordetella: A set of BACTERIAL ADHESINS and TOXINS, BIOLOGICAL produced by BORDETELLA organisms that determine the pathogenesis of BORDETELLA INFECTIONS, such as WHOOPING COUGH. They include filamentous hemagglutinin; FIMBRIAE PROTEINS; pertactin; PERTUSSIS TOXIN; ADENYLATE CYCLASE TOXIN; dermonecrotic toxin; tracheal cytotoxin; Bordetella LIPOPOLYSACCHARIDES; and tracheal colonization factor.
Turbinates: The scroll-like bony plates with curved margins on the lateral wall of the NASAL CAVITY. Turbinates, also called nasal concha, increase the surface area of nasal cavity thus providing a mechanism for rapid warming and humidification of air as it passes to the lung.
Pertussis Vaccine: A suspension of killed Bordetella pertussis organisms, used for immunization against pertussis (WHOOPING COUGH). It is generally used in a mixture with diphtheria and tetanus toxoids (DTP). There is an acellular pertussis vaccine prepared from the purified antigenic components of Bordetella pertussis, which causes fewer adverse reactions than whole-cell vaccine and, like the whole-cell vaccine, is generally used in a mixture with diphtheria and tetanus toxoids. (From Dorland, 28th ed)
Swine Diseases: Diseases of domestic swine and of the wild boar of the genus Sus.
Pasteurella multocida: A species of gram-negative, facultatively anaerobic, rod-shaped bacteria normally found in the flora of the mouth and respiratory tract of animals and birds. It causes shipping fever (see PASTEURELLOSIS, PNEUMONIC); HEMORRHAGIC BACTEREMIA; and intestinal disease in animals. In humans, disease usually arises from a wound infection following a bite or scratch from domesticated animals.
Siderophores: Low-molecular-weight compounds produced by microorganisms that aid in the transport and sequestration of ferric iron. (The Encyclopedia of Molecular Biology, 1994)
Hemagglutinins: Agents that cause agglutination of red blood cells. They include antibodies, blood group antigens, lectins, autoimmune factors, bacterial, viral, or parasitic blood agglutinins, etc.
Nasal Cavity: The proximal portion of the respiratory passages on either side of the NASAL SEPTUM. Nasal cavities, extending from the nares to the NASOPHARYNX, are lined with ciliated NASAL MUCOSA.
Pasteurella Infections: Infections with bacteria of the genus PASTEURELLA.
Dermotoxins: Specific substances elaborated by plants, microorganisms or animals that cause damage to the skin; they may be proteins or other specific factors or substances; constituents of spider, jellyfish or other venoms cause dermonecrosis and certain bacteria synthesize dermolytic agents.
Swine: Any of various animals that constitute the family Suidae and comprise stout-bodied, short-legged omnivorous mammals with thick skin, usually covered with coarse bristles, a rather long mobile snout, and small tail. Included are the genera Babyrousa, Phacochoerus (wart hogs), and Sus, the latter containing the domestic pig (see SUS SCROFA).
Bacterial Outer Membrane Proteins: Proteins isolated from the outer membrane of Gram-negative bacteria.
Bacterial Proteins: Proteins found in any species of bacterium.
Bordetella avium: A species of BORDETELLA isolated from the respiratory tracts of TURKEYS and other BIRDS. It causes a highly contagious bordetellosis.
Gene Expression Regulation, Bacterial: Any of the processes by which cytoplasmic or intercellular factors influence the differential control of gene action in bacteria.
Adenylate Cyclase Toxin: One of the virulence factors produced by virulent BORDETELLA organisms. It is a bifunctional protein with both ADENYLYL CYCLASES and hemolysin components.
Bacterial Adhesion: Physicochemical property of fimbriated (FIMBRIAE, BACTERIAL) and non-fimbriated bacteria of attaching to cells, tissue, and nonbiological surfaces. It is a factor in bacterial colonization and pathogenicity.
Pasteurella: The oldest recognized genus of the family PASTEURELLACEAE. It consists of several species. Its organisms occur most frequently as coccobacillus or rod-shaped and are gram-negative, nonmotile, facultative anaerobes. Species of this genus are found in both animals and humans.
Antibodies, Bacterial: Immunoglobulins produced in a response to BACTERIAL ANTIGENS.
Respiratory Tract Infections: Invasion of the host RESPIRATORY SYSTEM by microorganisms, usually leading to pathological processes or diseases.
Adhesins, Bacterial: Cell-surface components or appendages of bacteria that facilitate adhesion (BACTERIAL ADHESION) to other cells or to inanimate surfaces. Most fimbriae (FIMBRIAE, BACTERIAL) of gram-negative bacteria function as adhesins, but in many cases it is a minor subunit protein at the tip of the fimbriae that is the actual adhesin. In gram-positive bacteria, a protein or polysaccharide surface layer serves as the specific adhesin. What is sometimes called polymeric adhesin (BIOFILMS) is distinct from protein adhesin.
Pertussis Toxin: One of the virulence factors produced by BORDETELLA PERTUSSIS. It is a multimeric protein composed of five subunits S1 - S5. S1 contains mono ADPribose transferase activity.
Trachea: The cartilaginous and membranous tube descending from the larynx and branching into the right and left main bronchi.
Virulence: The degree of pathogenicity within a group or species of microorganisms or viruses as indicated by case fatality rates and/or the ability of the organism to invade the tissues of the host. The pathogenic capacity of an organism is determined by its VIRULENCE FACTORS.
Hemagglutination: The aggregation of ERYTHROCYTES by AGGLUTININS, including antibodies, lectins, and viral proteins (HEMAGGLUTINATION, VIRAL).
Genes, Bacterial: The functional hereditary units of BACTERIA.
Whooping Cough: A respiratory infection caused by BORDETELLA PERTUSSIS and characterized by paroxysmal coughing ending in a prolonged crowing intake of breath.
Bacterial Vaccines: Suspensions of attenuated or killed bacteria administered for the prevention or treatment of infectious bacterial disease.
Molecular Sequence Data: Descriptions of specific amino acid, carbohydrate, or nucleotide sequences which have appeared in the published literature and/or are deposited in and maintained by databanks such as GENBANK, European Molecular Biology Laboratory (EMBL), National Biomedical Research Foundation (NBRF), or other sequence repositories.
Respiratory System: The tubular and cavernous organs and structures, by means of which pulmonary ventilation and gas exchange between ambient air and the blood are brought about.
Fimbriae, Bacterial: Thin, hairlike appendages, 1 to 20 microns in length and often occurring in large numbers, present on the cells of gram-negative bacteria, particularly Enterobacteriaceae and Neisseria. Unlike flagella, they do not possess motility, but being protein (pilin) in nature, they possess antigenic and hemagglutinating properties. They are of medical importance because some fimbriae mediate the attachment of bacteria to cells via adhesins (ADHESINS, BACTERIAL). Bacterial fimbriae refer to common pili, to be distinguished from the preferred use of "pili", which is confined to sex pili (PILI, SEX).
Species Specificity: The restriction of a characteristic behavior, anatomical structure or physical system, such as immune response; metabolic response, or gene or gene variant to the members of one species. It refers to that property which differentiates one species from another but it is also used for phylogenetic levels higher or lower than the species.
Transglutaminases: Transglutaminases catalyze cross-linking of proteins at a GLUTAMINE in one chain with LYSINE in another chain. They include keratinocyte transglutaminase (TGM1 or TGK), tissue transglutaminase (TGM2 or TGC), plasma transglutaminase involved with coagulation (FACTOR XIII and FACTOR XIIIa), hair follicle transglutaminase, and prostate transglutaminase. Although structures differ, they share an active site (YGQCW) and strict CALCIUM dependence.
DNA, Bacterial: Deoxyribonucleic acid that makes up the genetic material of bacteria.
Hydroxamic Acids: A class of weak acids with the general formula R-CONHOH. |
In the case of a communication system, errors happen while information is being transmitted, while it is being delivered to its destination, or while it is being received. However the errors occur, is there anything we can do to protect our information? The answer is yes. In fact, there are many things we can do, with each hardware or software technique falling into one of two categories: error detection or error correction. Error detection is easier to do than error correction, as we will see.
The first hardware technique involves the use of a parity bit. This bit is stored with a group of data bits and is used to indicate the even or odd parity of the data. Even parity means the number of 1s in the data (including the parity bit) is even. Odd parity means the opposite. Table 1 shows a few sample data items and their associated even and odd parity bits.
8-Bit Data      Even Parity Bit      Odd Parity Bit
10101100        0                    1
10110000        1                    0
00000000        0                    1
Table 1: Even and odd parity examples.
The even and odd parity bits are always complements of each other. Figure 1 shows how a single parity bit can be generated using exclusive-OR (XOR) gates.
Figure 1. Generating an even parity bit.
The XOR gate outputs a 0 when its inputs are the same (both low or both high) and a 1 when its inputs are different. The input data 10101100 is broken into four groups of two bits, with each pair driving an XOR gate. The eight input bits are reduced to four intermediate bits, then two intermediate bits, then to a single output bit that represents the even parity for the data. Using an exclusive-NOR (XNOR) gate as the last gate will generate an odd parity bit instead.
So, with only a handful of gates, we are able to generate odd or even parity bits. Now, after the parity bit is generated, it is stored with the data or transmitted with it to a receiver. When the data is read back or received, its parity is checked. If the parity does not match, you have detected an error.
Unfortunately, a single parity bit has limitations. It can only detect odd-numbered bit errors. If one bit — or three, five, or seven bits — change, the parity will also change and the error will be detected, but, if an even number of bits change, the parity will remain the same and the error will go undetected.
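The same computation (and its blind spot) is easy to try out in software. Here is a minimal Python sketch; parity_bits is my own helper name, not a library routine:

```python
def parity_bits(data, width=8):
    """Return (even_parity, odd_parity) for the low 'width' bits of data."""
    ones = bin(data & ((1 << width) - 1)).count("1")
    even = ones & 1          # 1 when the count of 1s is odd, making the total even
    return even, even ^ 1    # the odd parity bit is always the complement

print(parity_bits(0b10101100))   # (0, 1) -- four 1s, so the even parity bit is 0

# The single bit's weakness: flip an even number of bits and parity is unchanged.
corrupted = 0b10101100 ^ 0b00000011   # two bits flipped
print(parity_bits(corrupted)[0])      # still 0, so the error goes undetected
```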
The limitations of a single parity bit can be overcome by using multiple parity bits. In fact, by using just four parity bits, we are able to detect and correct single bit errors in our eight bits of data. Figure 2(A) shows how four parity bits (three odd and one even) are generated using different groups of bits from the input data. These four bits are called check bits and are transmitted or stored with the original eight data bits.
Figure 2. Using multiple parity bits to detect and correct a single bit error. (A) Generating the check bits.
In Figure 2(B), the 12 received bits are again used to generate four parity bits, with these bits representing the error code. An error code of 0000 indicates that no errors have occurred. Any other error code will indicate the specific bit or even groups of bits in error. This technique was developed by Richard Hamming in the 1950s.
Figure 2. (B) Determining the four-bit error code.
Table 2 shows the four-bit error codes for the Hamming code used in Figure 2.
Error Code      Error Bit(s)
0001            Check bit 0 (bit 8)
0010            Check bit 1 (bit 9)
0011            Data bit 0
0100            Check bit 2 (bit 10)*
0101            Data bit 1
0110            Data bit 3
0111            Data bit 6
1000            Check bit 3 (bit 11)
1001            Data bit 2
1010            Data bit 4
1011            All received bits are 0
1100            Data bit 5
1110            Data bit 7
* This code also results when all received bits are 1.
Table 2: Hamming code error patterns.
A deliberate error was introduced into data bit 4. The resulting error code of 1010 correctly identified this single bit error. Once a single bit error has been identified, it is easy to fix it: simply invert the bit that is incorrect.
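You can reproduce the detect-and-correct behavior in a few lines of Python. A caveat on assumptions: this sketch uses the conventional even-parity Hamming(12,8) layout, with check bits at positions 1, 2, 4 and 8, rather than the mixed odd/even groupings of Figure 2, so its error codes will not match Table 2 exactly; the function names are mine. The mechanics, however, are the same: a zero syndrome means no error, and a non-zero syndrome points at the bit to invert.

```python
def hamming_encode(data):             # data: 8 bits, e.g. [1,0,1,0,1,1,0,0]
    code = [0] * 13                   # positions 1..12; index 0 unused
    for bit, pos in zip(data, (3, 5, 6, 7, 9, 10, 11, 12)):
        code[pos] = bit
    for p in (1, 2, 4, 8):            # check bit p covers positions containing p
        for i in range(1, 13):
            if i != p and i & p:
                code[p] ^= code[i]
    return code[1:]                   # the 12-bit codeword

def hamming_syndrome(word):           # word: 12 received bits
    s = 0
    for pos, bit in enumerate(word, start=1):
        if bit:
            s ^= pos
    return s                          # 0 = no error, else the 1-based error position

word = hamming_encode([1, 0, 1, 0, 1, 1, 0, 0])
word[6] ^= 1                          # deliberately flip one bit (position 7)
pos = hamming_syndrome(word)          # syndrome identifies position 7
word[pos - 1] ^= 1                    # correct it by inverting the flagged bit
assert hamming_syndrome(word) == 0    # clean again
```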
The ability to detect and correct a single-bit error is important and useful. The price that we pay for this ability is the cost of the four check bits attached to each eight-bit data item. Thus, we have a 50% memory overhead (or bandwidth overhead, during transmission) that must be an acceptable trade-off in order to get the benefit of single bit error correction.
A third hardware technique that can be used with a serial stream of data uses multiple bits to create a check sequence. This technique is called a Cyclic Redundancy Check (CRC) and can be used with bit streams of varying lengths.
The basic method is to treat the serial data stream as a large, binary number. By dividing this number by a predefined binary polynomial, we end up with a remainder pattern (the check sequence) that gets tacked onto the end of the original data.
The check sequence essentially turns the serial stream into a number that is evenly divisible by the polynomial. So, on the receiving end, when the received data (which includes the check sequence) is passed through the CRC circuit, the resulting check sequence will be all 0s if there are no errors. Any 1s that show up indicate one or more errors in the bit stream, but we will not know where they are.
Suppose the data to transmit is 10101100 and the four-bit polynomial is 1011. Generating the check sequence is accomplished through the use of Modulo-2 arithmetic (once again using exclusive-OR). This process is illustrated in Figure 3. A three-bit pattern of 000 is tacked onto the end of the original data. This is done to reserve room for the actual three bits of the check sequence once they are determined.
Figure 3. Generating a CRC check sequence.
At each step, four bits of data are XORed with the four-bit polynomial 1011. This process repeats until there are no more bits left in the data. The final three bits remaining are the CRC check sequence (011). This sequence is now tacked onto the end of the original data (giving us 10101100011) and transmitted.
On the receiving end, the same process is used again, with the 011 sequence replacing the original three 0s. If there are no errors, the remainder will be 0. Change one of the bits yourself and verify that the remainder is non-zero.
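The long division is short work in software, too. This Python sketch of the mod-2 division (the function name is my own) reproduces the 011 check sequence for the data 10101100 and the polynomial 1011, and returns 0 when the received bits are error-free:

```python
def crc_remainder(bits, poly=0b1011, nbits=4, pad=True):
    """Mod-2 long division, returning the (nbits-1)-bit remainder."""
    rem = 0
    stream = bits + [0] * (nbits - 1) if pad else bits   # room for the check sequence
    for b in stream:
        rem = (rem << 1) | b
        if rem & (1 << (nbits - 1)):   # leading bit set: XOR in the polynomial
            rem ^= poly
    return rem

data = [1, 0, 1, 0, 1, 1, 0, 0]
print(format(crc_remainder(data), "03b"))      # 011, as in Figure 3

received = data + [0, 1, 1]                    # data with check sequence appended
print(crc_remainder(received, pad=False))      # 0 means no errors detected
```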
The CRC generator is easily constructed using a few XOR gates and a shift register. Figure 4 shows the schematic of the CRC generator for the 1011 polynomial.
Figure 4. The three-bit shift register used to create a CRC check sequence.
A four-bit divider polynomial requires a three-bit shift register to hold the three remainder bits that make up the check sequence. After all bits have been clocked into the circuit, the shift register will contain the three remainder bits (LSB on the right and MSB on the left).
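The register behavior can be simulated as well. This Python sketch models the three flip-flops with XOR feedback for the 1011 polynomial (x^3 + x + 1); the variable names are mine, and the taps follow the standard shift-register construction for that polynomial rather than a literal trace of Figure 4. Because the feedback is taken at the input, the three appended 0s are absorbed implicitly, and clocking in the eight data bits leaves 011 in the register:

```python
def crc_shift_register(bits):
    """Simulate a 3-stage shift register with XOR feedback for poly 1011."""
    r2 = r1 = r0 = 0                  # the three flip-flop stages
    for b in bits:
        fb = r2 ^ b                   # feedback: register MSB XORed with input bit
        r2, r1, r0 = r1, r0 ^ fb, fb  # shift; fb taps the x^1 and x^0 stages
    return (r2 << 2) | (r1 << 1) | r0

print(format(crc_shift_register([1, 0, 1, 0, 1, 1, 0, 0]), "03b"))  # 011
```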
An eight-bit polynomial would require a seven-bit shift register (and seven 0s tacked onto the original data to begin the process). In general, the shift register has one less stage than the number of bits in the polynomial. Table 3 shows some typical CRC polynomials and their uses.
Name         Polynomial                                                                                     Use
CRC-16       x^16 + x^15 + x^2 + 1                                                                          Used with ATM
CRC-CCITT    x^16 + x^12 + x^5 + 1                                                                          Used with HDLC
CRC-32       x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1    Used with Ethernet
Table 3: Common CRC generator polynomials.
Software techniques for performing error detection and correction are especially useful in the world of networking and the Internet. When we download a web page or send an Email, we want to know that these operations are successful. This guarantees that images and other web page content — as well as Email text and binary attachments — are received without error. Perhaps a better expression would be transferred without error.
If we receive some information and it has been corrupted, we simply ask for it to be retransmitted. This is the beauty of the TCP (Transmission Control Protocol) transport protocol within the TCP/IP suite of network protocols. TCP is a connection-oriented protocol where a session is set up between the transmitter and receiver (a client computer and a server computer).
Reliable exchanges of information are made possible through the use of acknowledgement messages sent back and forth between the transmitter and receiver. Figure 5 shows the various fields of the TCP protocol header.
Figure 5. TCP header details.
The header is a block of information contained in a network message that provides important information to the application processing the message.
One of the fields in the TCP header is the checksum field. This field stores a 16-bit number that is generated by adding all of the values represented by the TCP data together, ignoring any carries out of the 16th bit position. The 1s complement of the final sum is saved as the checksum. For example, if the sum were 3C85 hexadecimal, the 1s complement checksum would be C37A hex.
When a TCP message is received, its checksum is recomputed by adding all of the received data plus the checksum. Typically, the result must equal the 0000 hexadecimal (2s complement checksum) or the FFFF hex (1s complement checksum). If the checksum does not match, a message is sent back to the transmitter indicating that the data must be resent.
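Here is a minimal Python sketch of that arithmetic. The two word values are made up for illustration (chosen so their sum is 3C85 hex), and checksum16 is my own name, not a library routine:

```python
def checksum16(words):
    """Sum 16-bit words, discard carries past bit 16, then take the 1s complement."""
    total = sum(words) & 0xFFFF
    return ~total & 0xFFFF

words = [0x1234, 0x2A51]                 # hypothetical data; the sum is 0x3C85
csum = checksum16(words)
print(hex(csum))                         # 0xc37a, matching the example above

# On receipt, the data plus the checksum should sum to 0xFFFF (1s complement check).
print(hex((sum(words) + csum) & 0xFFFF))  # 0xffff
```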
The checksum — together with acknowledgement messages — allows us to exchange data reliably. Checksums are also used to verify the contents of a file or EPROM or the contents of a line of text in a file used for downloading. For example, here is a text file encoded using Intel’s Hex record format:
The last byte on each line (9A on the first line, FF on the last) is the 2s complement checksum byte. If you add all the bytes on each line together, you should always end up with 00.
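Verifying that per-line check takes only a couple of lines of Python. The record below is the sample from Intel's published Hex format documentation, not from the file discussed above, and the function name is mine:

```python
def hex_record_ok(line):
    """True if the byte sum of an Intel Hex record (after ':') is 0 mod 256."""
    data = bytes.fromhex(line.lstrip(":"))
    return sum(data) & 0xFF == 0

# The final byte, 0x40, is the 2s complement of the sum of the preceding bytes.
print(hex_record_ok(":10010000214601360121470136007EFE09D2190140"))  # True
```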
Whether we use hardware or software, protecting our data is becoming more and more important. It is worth the time spent investigating these, and other, techniques for error detection and correction. NV |
As a large, muscular pump, the heart relies on a constant supply of oxygen-rich blood to function. Without a blood supply, heart muscle begins to die. This is exactly what happens during a heart attack.
Myocardial infarction, the medical term for heart attack, literally means "heart tissue damage or death." Heart attacks most commonly occur when one or more of the coronary arteries — a network of blood vessels that supply blood to the heart — become blocked. Heart muscle becomes starved for oxygen and nutrients.
More than 1.2 million Americans suffer a heart attack each year. Approximately one-third of those who experience heart attack will die from it.
Fortunately, you can take several steps to prevent heart attack — starting with healthy lifestyle choices and seeking preventive medical care.
The leading cause of heart attack is coronary artery disease — narrowing or blockage of the coronary arteries. This narrowing process is the result of buildup of fatty substances, called plaques, on artery walls. The medical term for this process is atherosclerosis, which originates from the Greek words athero (gruel, or paste) and sclerosis (hardness).
How do plaques form, and how do arteries become clogged? Throughout your life, fats build up in streaks on artery walls. Our body's natural healing response is to release chemicals that trap and seal these fatty deposits into place.
Unfortunately, these chemicals also attract other substances — inflammatory cells, cellular waste products, proteins and calcium. This is plaque. A hard covering forms around plaque deposits; on the inside, they can be soft.
In time, plaque can rupture, exposing a deposit's fatty interior. In response, blood-clotting particles called platelets will try to re-seal the rupture. As a blood clot forms within a blood vessel, there's a chance it can block blood flow to the heart, or break away from the blood vessel and travel to a smaller artery around the heart. The result is a heart attack.
A less common cause of heart attack is a spasm of a coronary artery, when a coronary artery closes off (constricts) intermittently, greatly diminishing blood supply to the heart muscle. If coronary artery spasm occurs for a long period of time, a heart attack can occur. It may occur at rest and can even occur in people without significant coronary artery disease.
Several treatments are available for heart attack patients. Among the most common are:
Surviving and recovering from a heart attack depends on two factors:
HonorHealth's emergency cardiac care specialists live by the slogan "time is muscle." The sooner we can provide emergency care that restores blood flow to your heart, the more likely you’ll survive without lasting heart damage.
One critical measure of emergency heart care is "door-to-balloon time" — the time that elapses between your arrival in an emergency department and the moment a coronary artery is re-opened with a balloon catheter, if appropriate. HonorHealth has refined its processes to consistently perform far better than the national standard of 90 minutes.
Likewise, four HonorHealth medical centers are certified Cardiac Arrest Centers, meaning that we provide specialized cardiac care that increases survival rates. One example is reducing patients' core temperature immediately following cardiac arrest, aiding chances of survival and full neurological recovery. |
Welcome to ComputerPedia™ -- The Computer Encyclopedia
A computer is a machine that manipulates data according to a list of instructions.
The first devices that resemble modern computers date to the mid-20th century, although the computer concept and various machines similar to computers existed earlier. Early electronic computers were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers are based on tiny integrated circuits and are millions to billions of times more capable while occupying a fraction of the space. Today, simple computers may be made small enough to fit into a wristwatch and be powered from a watch battery. Personal computers, in various forms, are icons of the Information Age and are what most people think of as "a computer"; however, the most common form of computer in use today is the embedded computer. Embedded computers are small, simple devices that are used to control other devices; for example, they may be found in machines ranging from fighter aircraft to industrial robots, digital cameras, and children's toys.
The ability to store and execute lists of instructions called programs makes computers extremely versatile and distinguishes them from calculators. The Church-Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers with capability and complexity ranging from that of a personal digital assistant to a supercomputer are all able to perform the same computational tasks given enough time and storage capacity.
History of Computer Hardware:
The history of computer hardware encompasses the hardware, its architecture, and its impact on software. The elements of computing hardware have undergone significant improvement over their history. This improvement has triggered worldwide use of the technology; performance has improved and the price has declined. Computers are accessible to ever-increasing sectors of the world's population. Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.
The von Neumann architecture unifies our current computing hardware implementations. Since digital computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The major elements of computing hardware implement abstractions: input, output, memory, and processor. A processor is composed of control and datapath. In the von Neumann architecture, control of the datapath is stored in memory. This allowed control to become an automatic process; the datapath could be under software control, perhaps in response to events. Beginning with mechanical datapaths such as the abacus and astrolabe, the hardware first started using analogs for a computation, including water and even air as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations. Eventually the voltages or currents were standardized, and then digitized. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which are currently implementing the von Neumann architecture.
It is difficult to identify any one device as the earliest computer, partly because the term "computer" has been subject to varying interpretations over time. Originally, the term "computer" referred to a person who performed numerical calculations (a human computer), often with the aid of a mechanical calculating device.
The history of the modern computer begins with two separate technologies - that of automated calculation and that of programmability.
Examples of early mechanical calculating devices included the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150-100 BC). Hero of Alexandria built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.
The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliest programmable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shaped pointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who play music when struck by levers operated by a camshaft attached to a water wheel. The length of day and night could be re-programmed every day in order to account for the changing lengths of day and night throughout the year.
The end of the Middle Ages saw a re-invigoration of European mathematics and engineering, and Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers. However, none of those devices fit the modern definition of a computer because they could not be programmed.
In 1801, Joseph Marie Jacquard made an improvement to the textile loom that used a series of punched paper cards as a template to allow his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.
It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer that he called "The Analytical Engine". Due to limited finances, and an inability to resist tinkering with the design, Babbage never actually built his Analytical Engine.
Large-scale automated data processing of punched cards was performed for the U.S. Census in 1890 by tabulating machines designed by Herman Hollerith and manufactured by the Computing Tabulating Recording Corporation, which later became IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.
During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world's first operational computer.
The non-programmable Atanasoff-Berry Computer (1941) which used vacuum tube based computation, binary numbers, and regenerative capacitor memory.
The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. It was used for breaking German wartime codes.
The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.
The U.S. Army's Ballistics Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of these being completed in Great Britain. The first to be demonstrated working was the Manchester Small-Scale Experimental Machine (SSEM or "Baby"), while the EDSAC, completed a year after SSEM, was the first practical implementation of the stored program design. Shortly thereafter, the machine originally described by von Neumann's paper was completed but did not see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined. While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.
Computers that used vacuum tubes as their electronic elements were in use throughout the 1950s. Vacuum tube electronics were largely replaced in the 1960s by transistor-based electronics, which are smaller, faster, cheaper to produce, require less power, and are more reliable. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the 1980s, computers became sufficiently small and cheap to replace simple mechanical controls in domestic appliances such as washing machines. The 1980s also witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household.
Stored Program Architecture:
Computer programs (also software programs, or just programs) are instructions for a computer. A computer requires programs to function. Moreover, a computer program does not run unless its instructions are executed by a central processor; however, a program may communicate an algorithm to people without running. Computer programs are usually executable programs or the source code from which executable programs are derived.
Computer source code is often written by professional computer programmers. Source code is written in a programming language that usually follows one of two main paradigms: imperative or declarative programming. Source code may be converted into an executable file (sometimes called an executable program or a binary) by a compiler. Alternatively, computer programs may be executed by a central processing unit with the aid of an interpreter, or may be embedded directly into hardware.
Computer programs may be categorized along functional lines: system software and application software. Many computer programs may run simultaneously on a single computer, a process known as multitasking.
Computer programming (often shortened to programming or coding) is the process of writing, testing, debugging/troubleshooting, and maintaining the source code of computer programs. This source code is written in a programming language. The code may be a modification of an existing source or something completely new. The purpose of programming is to create a program that exhibits a certain desired behavior (customization). The process of writing source code requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms and formal logic.
The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that a list of instructions (the program) can be given to the computer and it will store them and carry them out at some time in the future.
In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
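To make the contrast concrete, here is a minimal Python sketch of those "few simple instructions"; the variable names are ours, and a real machine would express the same loop as a handful of machine instructions:

```python
# A few simple instructions replace thousands of button presses: the loop
# "jumps back" and repeats until its condition fails.
total = 0
n = 1
while n <= 1000:   # conditional jump: keep looping while the condition holds
    total += n     # add the current number to the running total
    n += 1         # move on to the next number
print(total)       # 500500 -- and the machine won't slip on keystroke 734
```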
In practical terms, a computer program may run from just a few instructions to many millions of instructions, as in a program for a word processor or a web browser. A typical modern computer can execute billions of instructions per second (gigahertz or GHz) and rarely makes a mistake over many years of operation. Large computer programs comprising several million instructions may take teams of programmers years to write, so it is highly unlikely that the entire program has been written without error.
Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness of the program, or have only subtle effects. But in some cases they may cause the program to "hang" (become unresponsive to input such as mouse clicks or keystrokes) or to completely fail or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an "exploit" - code designed to take advantage of a bug and disrupt a program's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.
In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode, the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer just as if they were numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.
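The idea that a program is just a list of numbers in memory can be illustrated with a toy simulator. The opcode numbering below (1 = LOAD, 2 = ADD, 3 = STORE, 0 = HALT) is invented for this sketch and does not correspond to any real instruction set:

```python
# Sketch of the stored-program idea: instructions and data live side by side
# in the same memory, as plain numbers.
memory = [1, 8, 2, 9, 3, 10, 0, 0,   # the program, stored as numbers
          20, 22, 0]                 # the data it operates on (cells 8-10)
acc = 0   # accumulator register
pc = 0    # program counter
while memory[pc] != 0:               # 0 is our HALT opcode
    opcode, operand = memory[pc], memory[pc + 1]
    if opcode == 1:   acc = memory[operand]      # LOAD a cell into acc
    elif opcode == 2: acc += memory[operand]     # ADD a cell to acc
    elif opcode == 3: memory[operand] = acc      # STORE acc into a cell
    pc += 2                                      # step to the next instruction
print(memory[10])  # 42 -- the number list computed 20 + 22
```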
While it is possible to write computer programs as long lists of numbers (machine language), and this technique was used with many early computers, it is extremely tedious to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame) cannot understand the machine language of an Intel Pentium or AMD Athlon 64 computer that might be in a PC.
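An assembler for the toy machine above can likewise be sketched in a few lines; the mnemonics and opcode table are the invented ones from the previous example, not a real assembly language:

```python
# A minimal assembler sketch: mnemonics are just human-friendly names
# for numeric opcodes (this mapping is invented for illustration).
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Translate lines like 'ADD 9' into machine code (a flat list of numbers)."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code += [OPCODES[mnemonic], operand]
    return machine_code

print(assemble("LOAD 8\nADD 9\nSTORE 10\nHALT"))
# [1, 8, 2, 9, 3, 10, 0, 0] -- the same number list as before
```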
Though considerably easier than in machine language, writing long programs in assembly language is often difficult and error prone. Therefore, most complicated programs are written in more abstract high-level programming languages that are able to express the needs of the computer programmer more conveniently (and thereby help reduce programmer error). High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler. Since high level languages are more abstract than assembly language, it is possible to use different compilers to translate the same high level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.
The task of developing large software systems is an immense intellectual effort. Producing software with an acceptably high reliability on a predictable schedule and budget has proved historically to be a great challenge; the academic and professional discipline of software engineering concentrates specifically on this problem.
How Computers Work:
A general purpose computer has four main sections: the arithmetic and logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with these) are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been constructed on a single integrated circuit called a microprocessor.
The control unit (often called a control system or central controller) directs the various components of a computer. It reads and interprets (decodes) instructions in the program one by one. The control system decodes each instruction and turns it into a series of control signals that operate the other parts of the computer. Control systems in advanced computers may change the order of some instructions so as to improve performance.
A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.
The control system's function is as follows (note that this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU):
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an input device). The location of this required data is typically stored within the instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location, to a register, or perhaps to an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow).
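The cycle described above, including a jump that simply overwrites the program counter, can be sketched as follows (again with invented opcodes, as a simplified illustration rather than a model of any real CPU):

```python
# Fetch-decode-execute sketch. Invented opcodes:
# 1 = LOAD #value, 2 = ADD #value, 4 = JUMP-IF-LESS addr limit, 0 = HALT.
program = [1, 0,        # acc = 0
           2, 5,        # acc += 5
           4, 2, 20,    # if acc < 20, set pc back to address 2
           0, 0]        # halt
acc, pc = 0, 0
while program[pc] != 0:
    opcode = program[pc]                      # 1. fetch the next instruction
    if opcode == 1:                           # 2. decode and execute it
        acc = program[pc + 1]; pc += 2        # 3. advance the program counter
    elif opcode == 2:
        acc += program[pc + 1]; pc += 2
    elif opcode == 4:                         # a jump just overwrites the pc
        pc = program[pc + 1] if acc < program[pc + 2] else pc + 3
print(acc)  # 20 -- the jump looped back four times before its condition failed
```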
It is noticeable that the sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program - and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer that runs a microcode program that causes all of these events to happen.
Arithmetic/Logic Unit (ALU):
The ALU is capable of performing two classes of operations: arithmetic and logic.
The set of arithmetic operations that a particular ALU supports may be limited to adding and subtracting, or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.), and square roots. Some can only operate on whole numbers (integers) whilst others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return boolean truth values (true or false) depending on whether one is equal to, greater than, or less than the other ("is 64 greater than 65?").
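A hedged sketch of that claim: a machine restricted to addition and comparison can still multiply, by breaking the operation into simple steps (the function below is illustrative Python, not a description of any real ALU):

```python
# Multiplication built from nothing but addition and comparison.
def multiply(a, b):
    """Multiply two non-negative integers using only add and compare."""
    result = 0
    count = 0
    while count < b:      # comparison: "is count less than b?"
        result += a       # repeated addition stands in for multiplication
        count += 1
    return result

print(multiply(7, 9))     # 63 -- slower than a hardware multiply, but correct
print(64 > 65)            # False -- the boolean comparison from the text
```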
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for creating complicated conditional statements and processing boolean logic.
Superscalar computers contain multiple ALUs so that they can process several instructions at the same time. Graphics processors and computers with SIMD and MIMD features often provide ALUs that can perform arithmetic on vectors and matrices.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is up to the software to give significance to what the memory sees as nothing but a series of numbers.
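The two example instructions above translate almost literally into code that treats memory as a list of numbered cells (the memory size and the value 877 are arbitrary choices for the example):

```python
# Memory as a list of numbered cells, following the examples in the text.
memory = [0] * 4096
memory[1357] = 123                          # "put the number 123 into cell 1357"
memory[2468] = 877                          # an arbitrary second value
memory[1595] = memory[1357] + memory[2468]  # "add cell 1357 to cell 2468,
print(memory[1595])                         #  put the answer into cell 1595"
```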
In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers; either from 0 to 255 or -128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory as long as it can be somehow represented in numerical form. Modern computers have billions or even trillions of bytes of memory.
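A short sketch of the byte arithmetic described above, using Python's built-in formatting (the specific values are arbitrary examples):

```python
# 8 bits give 256 patterns; two's complement reuses them for -128..127.
print(2 ** 8)                             # 256 distinct values per byte
value = -5
stored = value % 256                      # two's complement bit pattern
print(stored, format(stored, "08b"))      # 251 '11111011'
restored = stored - 256 if stored >= 128 else stored
print(restored)                           # -5 again
# Larger numbers span several consecutive bytes, e.g. four bytes here:
print((1_000_000).to_bytes(4, "little"))  # b'@B\x0f\x00'
```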
The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. Since data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random access memory or RAM and read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but ROM is pre-loaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, while ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the software required to perform the task may be stored in ROM. Software that is stored in ROM is often called firmware because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM by retaining data when turned off but being rewritable like RAM. However, flash memory is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speeds are not required.
In more sophisticated computers there may be one or more RAM cache memories which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
I/O is the means by which a computer receives information from the outside world and sends results back. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O.
While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
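Time-sharing can be sketched with cooperating generators, where each `next()` call plays the role of one time slice between interrupts; this is a simplified illustration of the scheduling idea, not how a real operating system is implemented:

```python
# Toy round-robin time-sharing: the "interrupt" is simply the end of a
# slice, after which the scheduler switches to the next program in turn.
def count_to(name, limit):
    n = 0
    while n < limit:
        n += 1
        yield f"{name}: {n}"   # yield marks where the slice ends

programs = [count_to("A", 3), count_to("B", 3)]
while programs:
    program = programs.pop(0)          # give the next program a time slice
    try:
        print(next(program))           # run it for one "slice"
        programs.append(program)       # then put it back in the queue
    except StopIteration:
        pass                           # the program finished; drop it
# Output interleaves A and B -- each appears to run "at the same time".
```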
Before the era of cheap computers, the principal use for multitasking was to allow many people to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly - in direct proportion to the number of programs it is running. However, most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run at the same time without unacceptable speed loss.
Some computers may divide their work among two or more separate CPUs, creating a multiprocessing configuration. Traditionally, this technique was utilized only in large and powerful computers such as supercomputers, mainframe computers and servers. However, multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers have become widely available and are beginning to see increased usage in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Networking and the Internet:
Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems like Sabre.
In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet.

The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
A typical personal computer consists of a case or chassis in a tower shape (desktop) and the following parts:
* Motherboard - the main circuit board, the "body" of the computer, through which all other components interface.
* Central Processing Unit (CPU) - performs most of the calculations that enable a computer to function; sometimes referred to as the "brain" of the computer.
* Computer Fan - Used to lower the temperature of the computer; a fan is almost always attached to the CPU, and the computer case will generally have several fans to maintain a constant airflow. Liquid cooling can also be used to cool a computer, though it focuses more on individual parts rather than the overall temperature inside the chassis.
* Random Access Memory (RAM) - also known as the physical memory of the computer; fast-access memory that is cleared when the computer is powered down. RAM attaches directly to the motherboard and is used to store programs that are currently running.
* Firmware - software loaded from read-only memory (ROM), such as the Basic Input-Output System (BIOS) or, in newer systems, an Extensible Firmware Interface (EFI)-compliant equivalent.
+ Internal Buses - Connections to various internal components.
+ PCI (being phased out for graphics cards but still used for other purposes)
+ ISA (obsolete in PCs, but still used in industrial computers)
+ CSI (expected in 2008)
+ AGP (being phased out)
+ VLB (outdated)
* Power Supply - a case that holds voltage regulation and (usually) a cooling fan, and supplies power to run the rest of the computer. The most common older types of power supplies are AT and Baby AT, but the current standards for PCs are ATX and Micro ATX.
Controllers for the hard disk, CD-ROM, and other drives (such as internal Zip and Jaz drives) are conventionally IDE/ATA on a PC; the controllers sit directly on the motherboard (on-board) or on expansion cards, such as a disk array controller. IDE is usually integrated, unlike SCSI (Small Computer System Interface), which can be found in some servers. The floppy drive interface is a legacy MFM interface which is now slowly disappearing. All these interfaces are gradually being phased out in favor of SATA and SAS.
Video Display Controller:
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a Graphics Card.
Removable Media Devices:
* CD (Compact Disc) - the most common type of removable media, inexpensive but has a short life-span.
* CD-ROM Drive - a device used for reading data from a CD.
* CD Writer - a device used for both reading and writing data to and from a CD.
* DVD (Digital Versatile Disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 6 times as much information. It is the most common way of transferring digital video.
* DVD-ROM Drive - a device used for reading data from a DVD.
* DVD Writer - a device used for both reading and writing data to and from a DVD.
* DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
* Blu-Ray - a high-density optical disc format for the storage of digital information, including high-definition video.
* BD-ROM Drive - a device used for reading data from a Blu-Ray disc.
* BD Writer - a device used for both reading and writing data to and from a Blu-Ray disc.
* HD DVD - a high-density optical disc format and successor to the standard DVD. It was a discontinued competitor to the Blu-Ray format.
* Floppy Disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium.
* Zip Drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
* USB Flash Drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable, and rewritable.
* Tape Drive - a device that reads and writes data on a magnetic tape, used for long-term storage.
Internal Storage:
Hardware that keeps data inside the computer for later use; the data remains persistent even when the computer has no power.
* Hard Disk - for medium-term storage of data.
* Solid-State Drive - a device similar to a hard disk, but containing no moving parts.
* Disk Array Controller - a device to manage several hard disks, to achieve performance or reliability improvement.
Sound Card:
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built into the motherboard, though it is common for a user to install a separate sound card as an upgrade.
Networking:
Connects the computer to the Internet and/or other computers.
* Modem - for dial-up connections.
* Network Card - for DSL/Cable internet, and/or connecting to other computers.
* Direct Cable Connection - use of a null modem cable connecting two computers together through their serial ports, or a Laplink cable connecting them through their parallel ports.
* Dial-Up Connections.
* Broadband Connections.
Peripherals:
A peripheral is a device attached to a host computer behind the chipset, whose primary functionality is dependent upon the host; it can therefore be considered as expanding the host's capabilities, while not forming part of the system's core architecture.
Some of the more common peripheral devices are printers, scanners, disk drives, tape drives, microphones, speakers, and cameras. Peripheral devices can also include other computers on a network system. A device can also refer to a non-physical item, such as a pseudo terminal, a RAM drive, or a virtual network adapter.
Some people do not consider internal devices such as video capture cards to be peripherals because they are added inside the computer case; for them, the term peripheral is reserved exclusively for devices that are hooked up externally to the computer. It is debatable, however, whether PCMCIA cards qualify as peripherals under this restrictive definition, because some of them go fully inside the laptop, while some, like WiFi cards, have external appendages.
The term is distinct from computer accessories: "computer peripheral" has a narrow meaning that refers only to the input/output devices of a computer, whereas "computer accessories" has a broader meaning that refers to all the parts that support a computer, including motherboards, sensors, and chips, as well as all the input and output devices.
In addition, hardware devices can include external components of a computer system. The following are either standard or very common.
Includes various input and output devices, usually external to the computer system.
* Text Input Devices
* Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
* Pointing Devices
* Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.
* Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
* Gaming Devices
* Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
* Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
* Game Controller - a specific type of controller specialized for certain gaming purposes.
* Image, Video Input Devices
* Image Scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
* Webcam - a low resolution video camera used to provide visual input that can be easily transferred over the internet.
* Audio Input Devices
* Microphone - an acoustic sensor that provides input by converting sound into electrical signals.
* Image, Video Output Devices
* Audio Output Devices
The announcement was short. It lasted only a fraction of a second, a blink of an eye. But a spacecraft in Earth's orbit, keeping an eye on such events, captured it on June 3 this year. The announcement may have been brief, but it told us that two exotic dead stars, called neutron stars, have collided with each other. This is a relatively rare event, but it bears good news for the merchants in the Sona bazaar. This collision has created gold — lots of it.
But before you head over to Sona bazaar, you should know that this particular collision happened in a galaxy so far away that it has taken light — traveling at a stupendous speed of 186,000 miles every second — four billion years to reach us! In astronomical terms, this collision happened in a galaxy four billion light-years away. In comparison, light from our Sun gets to us in 8 minutes, and is therefore only 8 light-minutes away. The distance of billions of light-years doesn’t intimidate astronomers, as they routinely study events and objects that are even farther away than this particular galaxy. The significance of this event, however, resides in the fact that for the first time, astronomers have been able to study light from collisions that may help us understand the way elements like gold are created in the universe.
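The figures in that paragraph follow from distance = speed × time, and can be checked with a few lines of arithmetic; a quick sketch in Python (the inputs are round numbers, so the results are approximate):

```python
# Quick check of the article's figures (all values approximate).
miles_per_second = 186_000
sun_distance_miles = 93_000_000                    # average Earth-Sun distance
print(sun_distance_miles / miles_per_second / 60)  # ~8.3 light-minutes
seconds_per_year = 365.25 * 24 * 3600
light_year_miles = miles_per_second * seconds_per_year
print(f"{4e9 * light_year_miles:.2e} miles")       # ~2.35e+22 miles to that galaxy
```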
Before we get too caught up in the cosmic glamour, we should remember that almost all of the elements that make up our bodies were cooked up inside stars: the carbon in our DNA, the oxygen in our lungs, and the iron in our blood. Hydrogen in the water molecule, on the other hand, is a leftover from processes in the early history of the universe. The classic quote from the late astronomer Carl Sagan is indeed true: "We are made up of star stuff".
But for years, astronomers had been seeking an explanation for elements like gold, lead, and platinum. It was thought that most of them formed when large stars, those ten times the size of our Sun, die in large explosions called supernovae. However, calculations showed that supernovae in the universe could only account for a fraction of these elements. There must be another way to make gold in the universe.
Now we know how.
Here is the recipe: You take two stars that are orbiting each other. This is not as hard as it seems. Nearly half of all stars in our own Galaxy have at least one other star in their system. But make sure that both of these stars are at least 10 times bigger than our Sun. Then wait about 10 million years. This is the average lifetime of big stars. They will eventually exhaust all their fuel and explode in their individual supernovae. All that will be left of them will be their cores, called neutron stars. These are some of the strangest objects in the universe. Each of the neutron stars contains mass equal to that of our Sun, but all packed in a size no greater than a city like Karachi. This means that they have a very high density. A teaspoon of neutron star material would weigh as much as a mountain. Now you have two of these neutron stars orbiting each other. But orbits for such exotic objects are unstable. The two stars will eventually collide with each other, and this collision will result in the creation of gold and other rare elements.
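The "teaspoon weighs as much as a mountain" claim can be checked with rough arithmetic; a minimal sketch, assuming order-of-magnitude values for the mass and radius (these inputs are assumptions, not measurements):

```python
# Rough check of the teaspoon claim, using order-of-magnitude inputs.
import math
mass_kg = 2e30                      # roughly one solar mass
radius_m = 12_000                   # a city-sized radius, ~12 km
volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density = mass_kg / volume_m3       # ~2.8e17 kg per cubic metre
teaspoon_m3 = 5e-6                  # one teaspoon is about 5 millilitres
print(f"{density * teaspoon_m3:.1e} kg")  # ~1.4e+12 kg -- mountain-scale mass
```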
However, in an act of ultimate charity, these elements are spread into the surrounding space.
By the time our Solar system was born, many such collisions had enriched our Galaxy with gold (and other elements). The gas cloud that formed the Sun and the Earth already contained these elements. Some of this gold became part of the Earth. Four-and-a-half billion years later, this rare element caught the attention of a bipedal species and became an object of desire and envy.
So the next time you wear a gold ring or necklace, pause for a minute and appreciate how the cosmos gave us bling.
Salman Hameed is associate professor of integrated science and humanities at Hampshire College, Massachusetts, USA. He runs the blog Irtiqa at irtiqa-blog.com
Published in The Express Tribune, Sunday Magazine, September 15th, 2013.
African-American Civil Rights Movement (1954–68)

The African-American Civil Rights Movement or 1960s Civil Rights Movement encompasses social movements in the United States whose goals were to end racial segregation and discrimination against black Americans and to secure legal recognition and federal protection of the citizenship rights enumerated in the Constitution and federal law. This article covers the phase of the movement between 1954 and 1968, particularly in the South. A wave of inner-city riots in black communities from 1964 through 1970 undercut support from the white community. The emergence of the Black Power movement, which lasted from about 1966 to 1975, challenged the established black leadership for its cooperative attitude and its nonviolence, and instead demanded political and economic self-sufficiency. Decades earlier, as African Americans were being disenfranchised, white Democrats had imposed racial segregation by law, and violence against blacks increased, with numerous lynchings through the turn of the century.
Miss K's English lessons: 2 - online games/activities

A few activities to learn or revise hobbies in English! A few activities to properly revise colours in English!! (Click on the images!) On this page, try to say what colour the words are written in! It's the same principle here, in the form of a game! Careful, it's timed! (Activities found at echalk.co.uk.) How many verbs do you know in English? A few website pages to help you learn or revise how to tell the time in English! Click on this link to play an online game! You have to click on the "rockets" to set off the fireworks over London! A large choice of online winter games! For 6ème students first and foremost (but 4ème students are allowed to revise too!!)
Veterans of the Civil Rights Movement -- Literacy Tests & Voter Applications

Today, most citizens register to vote without regard to race or color by signing their name and address on something like a postcard. Prior to passage of the federal Voting Rights Act in 1965, Southern states maintained elaborate voter registration procedures deliberately designed to deny the vote to nonwhites. This process was often referred to as a "literacy test," a term that had two different meanings: one specific and one general. The more general use of "literacy test" referred to the complex, interlocking systems used to deny Afro-Americans (and in some regions, Latinos and Native Americans) the right to vote so as to ensure that political power remained exclusively white-only. While in theory there were standard state-wide registration procedures, in real life the individual county registrars and clerks did things their own way. (© Bruce Hartford)
"Black Power" Era The impressive March on Washington in the summer of 1963 has been remembered as one of the great successes of the Civil Rights Movement, a glorious high point in which a quarter of a million people—black and white—gathered at the nation's capital to demonstrate for "freedom now." But for many African Americans, especially those living in inner-city ghettos who discovered that nonviolent boycotts and sit-ins did little to alter their daily lives, the great march of 1963 marked only the first stage of a new, more radical phase of the Civil Rights Movement. You probably just finished reading the first chapter of the Civil Rights Movement. (Hint, hint.) Isn't it incredible how much had been accomplished by civil rights activists from World War II to the 1963 March on Washington? Isn't it staggering just how much had been sacrificed, how high the stakes had been raised, and how widespread the movement had become? Let's quickly review some highlights. How can this be? Not exactly.
Desegregation

The Civil Rights Movement is sometimes defined as a struggle against racial segregation that began in 1955 when Rosa Parks, the "seamstress with tired feet," refused to give up her seat to a white man on a bus in Alabama. Brown v. Board of Education, the 1954 Supreme Court case that attacked the notion of "separate but equal," has also been identified as the catalyst for this extraordinary period of organized boycotts, student protests, and mass marches. These legendary events, however, did not cause the modern Civil Rights Movement, but were instead important moments in a campaign of direct action that began two decades before the first sit-in demonstration. The story of the American Civil Rights Movement is one of those tales that is told again and again, often with a few protagonists, a couple of key events, and one dramatic conclusion. The real story is more complicated. So, when did that movement emerge, and how?
From NY to Texas, KKK recruits with candies and fliers

Ku Klux Klan recruitment fliers are turning up on driveways across the country. The fliers, usually left with candies, appear to be part of a wider recruitment effort. The Klan may be seizing on a time when race and immigration are dominant issues, some say.

(CNN) -- Carlos Enrique Londoño laughs at the Ku Klux Klan recruitment flier recently left on the driveway of his suburban New York home. It's unlikely the group would accept him. "I'm Colombian and dark-skinned," said Londoño, a painter and construction worker who has lived in Hampton Bays on Long Island for 30 years. The flier was tucked into a plastic bag along with a membership application, the address for the KKK national office in North Carolina, a list of beliefs, and three Jolly Rancher candies.

[Photo captions: Actors in the silent film "The Birth of a Nation," released in 1915, portrayed Ku Klux Klan members dressed in full regalia and riding horses. Klan members march in a parade in Washington in 1927.]
What can Teachers Learn from Nelson Mandela to Make a Difference?

We teach language to help people communicate. Why do people want to communicate? To express the human story through myth, inspiration, and powerful transformation. Let's dig deeper into the story of Nelson Mandela and help our students think, communicate, and become active narrators in the search for peace and what makes us human. What can we teach students about Nelson Mandela through the power of video and multi-media? Let's dig a little deeper to find out ;)

1) The Video: I chose this BBC video as a modern-day look at Mandela's legacy beyond South Africa. Then we ask questions and dig a lot deeper. Beyond politics, what other dark forces in our human nature perpetuate the kinds of violence and prejudice that can seem so innate in humanity as to be chilling to the core? When we stare into the black hole of violence and face the shadow side of life, how do we remain optimistic, inspired, and willing to risk all for the common good? Our better natures. Where are they when we need them?
Strangers

This EFL lesson is designed around a beautiful short film called Strangers, directed by Erez Tadmor and Guy Nattiv, and the theme of racism. Students predict a story, watch a short film, speak about racism, and write a narrative. I would ask all teachers who use Film English to consider buying my book Film in Action, as the royalties I receive from sales help to keep the website completely free.

Language level: Intermediate (B1) – Upper Intermediate (B2.1)
Learner type: Teens and adults
Time: 90 minutes
Activity: Predicting a story, watching a short film, speaking, and writing a narrative
Topic: Racism
Language: Adjectives to describe character and appearance, and narrative tenses
Materials: Short film, discussion questions, and anti-racism posters
Downloadable materials: strangers lesson instructions, anti-racism posters, racism discussion questions
Civil Rights Movement Heroes for Kids (Rosa Parks, Martin Luther King Jr.)
by Borgna Brunner

The civil rights movement of the 1950s and 1960s challenged racism in America and made the country a more just and humane society for all. Below are a few of its many heroes.

Rosa Parks
On December 1, 1955, in Montgomery, Alabama, Rosa Parks, an African-American seamstress, left work and boarded a bus for home. Martin Luther King, Jr., heard about Parks's brave defiance and launched a boycott of Montgomery buses.

Martin Luther King, Jr.
It wasn't just that Martin Luther King became the leader of the civil rights movement that made him so extraordinary; it was the way in which he led the movement. These peaceful forms of protest were often met with vicious threats, arrests, beatings, and worse.

Thurgood Marshall
Thurgood Marshall was a courageous civil rights lawyer during a period when racial segregation was the law of the land. His most important case was Brown v. Board of Education.

The Little Rock Nine
Martin Luther King, Jr.: I Have a Dream
Delivered 28 August 1963, at the Lincoln Memorial, Washington D.C.

I am happy to join with you today in what will go down in history as the greatest demonstration for freedom in the history of our nation. Five score years ago, a great American, in whose symbolic shadow we stand today, signed the Emancipation Proclamation. But one hundred years later, the Negro still is not free. In a sense we've come to our nation's capital to cash a check. But we refuse to believe that the bank of justice is bankrupt. We have also come to this hallowed spot to remind America of the fierce urgency of Now. It would be fatal for the nation to overlook the urgency of the moment. We cannot walk alone. And as we walk, we must make the pledge that we shall always march ahead. We cannot turn back. I have a dream today! Free at last!
Martin Luther King, Jr.

Martin Luther King, Jr., was a great man who worked for racial equality and civil rights in the United States of America. Young Martin was an excellent student in school; he skipped grades in both elementary school and high school. Martin experienced racism early in life. Commemorating the life of a tremendously important leader, we celebrate Martin Luther King Day each year in January, the month in which he was born.
Rosa Parks

Rosa Parks, born Rosa Louise McCauley (February 4, 1913 - October 24, 2005), was a pivotal figure in the fight for civil rights. She was a protester of segregation laws in the US, and her actions led to major reforms (changes), including a Supreme Court ruling against segregation.

Arrested for Not Giving up Her Bus Seat to a White Man
On December 1, 1955, a Montgomery, Alabama, bus driver ordered Mrs. Parks to give up her seat to a white passenger; she refused and was arrested.

Bus Boycott
Mrs. Parks's arrest sparked a boycott of Montgomery's buses. On February 1, 1956, the MIA (the Montgomery Improvement Association, which was formed after Mrs. Parks's arrest) supported a federal lawsuit challenging bus segregation.

Supreme Court Ruling
On November 13, 1956, the US Supreme Court ruled that segregation on city buses is unconstitutional.

Continuing the Civil Rights Movement
In 1957, after receiving many death threats, Mrs. Parks and her family moved to Detroit, Michigan. After her death on October 24, 2005, Mrs. Parks became the first woman in American history to lie in honor at the U.S. Capitol.
Push is the artificial force that must be applied to people to get them to work contrary to their own goals.
Enter here the concept of friction. When you apply an external force to an object to get it to move, friction occurs. The amount of friction is the amount of energy lost in the transfer of momentum from one object to another. Loss of energy = waste.
In a pull system, things operate faster by removing friction or constraints. In a push system, things operate faster by applying more force. |
If you look at the definition of a mineral, you will find that natural ice (snow, lake ice, glaciers) is a mineral. Snow, lake ice, and glaciers also fit the definition of a rock. They are naturally occurring (not man-made), solid (not liquid, gas, plasma, etc.), and they can form large deposits. Snow, lake ice, and glaciers fit the definition, so they are rocks.
Snow is made up of many tiny pieces of ice, deposited by wind and gravity. That makes it a sedimentary rock. Keep that in mind the next time it snows. When you catch snowflakes on your tongue, you are eating a rock. When you hit someone with a snowball, you are hitting them with a rock.
As snow piles up on a glacier, it changes. The pressure of layer after layer of snow recrystallizes it into a very granular type of ice called firn. At that point, it becomes a metamorphic rock, changed by heat and/or pressure.
OK, then what about lake ice? When the surface of a lake freezes, the water changes from a liquid to a solid. Rocks that solidify from melted material are igneous rocks, so lake ice can be classified as igneous. If you get technical, it also means that water could be classified as lava. What?!?! Think about it. Lava is melted rock, right? Since snow, glaciers, and lake ice are rocks, when they melt they form molten rock. Since it is on the surface, it is technically lava. How about a nice, refreshing glass of lava?
Clouds trap heat radiating up from Earth's surface, which helps to warm the atmosphere. A warmer atmosphere can hold more moisture and could build up even more clouds. These clouds would then trap more heat and…well, you get the idea. This is called a positive feedback mechanism. There are many positive feedback mechanisms in climate change. Another is albedo. As temperatures warm, snow and ice melt. This reduces albedo, which causes temperatures to warm and more snow and ice to melt.
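A toy numerical sketch can make the idea concrete (the numbers below are invented for illustration and are not from any climate model): each increment of warming feeds back a fraction of itself as further warming, and the total converges as long as that fraction stays below one.

```python
# Toy positive feedback loop: each warming increment triggers a further,
# smaller increment, and the effects accumulate.
warming = 1.0          # initial warming, in arbitrary degree units
feedback = 0.4         # fraction of each increment fed back as more warming
total = 0.0
increment = warming
for step in range(10):
    total += increment
    increment *= feedback       # melting ice, extra moisture, etc.
print(round(total, 3))          # 1.666 -- approaching the limit 1 / (1 - 0.4)
```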
Clouds also reflect incoming sunlight and shade the land, which would help to reduce global temperatures. So scientists are not sure what the net effect of clouds on global temperatures is. Clouds are the second biggest uncertainty in climate models. The biggest is how human behavior, and with it human impacts on the atmosphere, will change.
Courtesy of Rob Simmon and NASA. earthobservatory.nasa.gov/IOTD/view.php?id=44250. Public Domain.
Scientists have discovered a solar system 30 light-years away from Earth that defies present understanding of planet formation, with a big Jupiter-like planet orbiting a diminutive star known as a red dwarf.
Stars generally are much bigger than even the biggest planets that orbit them. In this case, however, the star and the planet are not very different in size, the researchers said on Thursday.
The star, known as GJ 3512, is about 12% the size of our sun, while the planet that orbits it has a mass at least about half that of Jupiter, our solar system's giant planet.
The planet, which like Jupiter consists mainly of gas, was found using a telescope at the Calar Alto Observatory in Spain. It travels around its star in a highly elliptical orbit lasting 204 days.
Red dwarfs are small, with relatively low surface temperatures. GJ 3512 is not only much smaller than our sun; it is somewhat comparable in size to a very massive planet, being only about 35% bigger than Jupiter.
There is evidence of a second planet currently orbiting the star, and a third planet might have been ejected from the star system in the past, which would explain the elliptical orbit of the Jupiter-like planet, Morales said.
Planets are born from the same disk of interstellar gas and dust that produces the star they orbit. Under the leading model of planetary formation, known as the "core accretion" model, an object initially forms from solid particles in the disk, and the gravitational pull of this embryonic planet allows an atmosphere to arise from the surrounding gas.
A competing model, known as the gravitational instability model, may explain this unusual system.
Mosquito-borne diseases pose a growing risk to public health in urban areas. Asian tiger mosquitoes are a vector of high concern as they thrive in cities, live in close association with people, and can reproduce in very small pools of water.
The International Union for Conservation of Nature (IUCN) is the global authority for determining species’ vulnerability in the face of threats such as habitat loss and climate change. How widely a species can be found – its geographic range – is a key indicator used by the IUCN to assign an appropriate conservation status.
Most of the planet’s freshwater stores are found in the northern hemisphere, a region that is changing rapidly in response to human activity and shifting climate trends. A recent study analyzed 147 northern lakes and found that many rely on nutrients from tree leaves, pine needles, and other land-grown plants to feed aquatic life.
Tiny ticks are a big problem. Anyone taking a walk in the woods is advised to do a tick check. Ticks infect more than 325,000 people with Lyme disease each year, and this number continues to rise.
Filoviruses have devastating effects on people and primates, as evidenced by the 2014 Ebola outbreak in West Africa. For nearly 40 years, preventing spillovers has been hampered by an inability to pinpoint which wildlife species harbor and spread the viruses.
Ebola. Hantavirus. Lyme disease. What do they have in common? Like most emerging infectious diseases, they originated in mammals. So many debilitating pathogens make the jump from wildlife and livestock to humans, yet at the global scale little is known about where people are most at risk of outbreaks.
Scientists are calling for the creation of a global early warning system for infectious diseases. Such a system would use computer models to tap into environmental, epidemiological, and molecular data – gathering the intelligence needed to forecast where disease risk is high and what actions could prevent outbreaks or contain epidemics.
Citizen scientists play a vital role in raising awareness about the health of our nation’s freshwater resources. Their efforts can help document water clarity and track harmful algal blooms and other indicators of poor water quality instrumental to sound management.
Each year, more than 25 million shipping containers enter the U.S. All too often, highly destructive forest pests are lurking among their imported goods. Wood boring insects arrive as stowaways in wood packaging, such as pallets and crates. Other forest pests and pathogens hitchhike in on foreign-reared plants bound for American nurseries.
The discovery of acid rain in North America was made possible by environmental data collected at a biological field station nestled in the White Mountains of New Hampshire. Hubbard Brook Experimental Forest is just one of the many biological field stations located around the globe that are keeping a pulse on the health of our planet.
There are approximately 3,500 mosquito species in the world. Of those, only a few hundred are known to bite humans. And just two have adapted to breed almost exclusively in urban environments where they are in close proximity to people.
De-extinction, or the act of bringing extinct species back from the dead, has been riding a wave of enthusiasm. Nearly 2 million people have watched Stewart Brand's TED talk on the topic, and Beth Shapiro's book How to Clone a Mammoth has received rave reviews.
Bolivia’s second largest lake has nearly disappeared. Lake Poopó, a saltwater lake located in a shallow depression in the Altiplano Mountains, used to cover an area about the size of Los Angeles. While it’s not the first time the lake has dried out, scientists believe its recovery hangs in the balance due to the combined stress of drought, climate change, and water diversion.
Have you ever wondered what happens when a fish encounters a dam or a culvert? Too often, these structures are barriers to breeding and nursery sites, feeding grounds, and vital genetic mixing. In a warming world, barriers also prevent fish from seeking refuge as stream temperatures change.
In New York’s Hudson Valley, it’s hard to go outside without stepping on an acorn. Oaks have ‘boom and bust’ acorn production cycles. In lean years, trees produce a handful of nuts. In boom years, acorns seem to rain down from the sky. We are currently experiencing an acorn bumper-crop, or what ecologists call a ‘mast’ year.
In some forests, there can be more than 100 acorns per square meter. This is welcome news to animals like mice, chipmunks, and squirrels. They can gorge on the bounty and stock their larders. Acorn caches help wildlife avoid predators and survive the lean months of winter. They even give well-fed rodents a jump-start on the breeding season.
For this reason, acorn “mast” years are also harbingers of future Lyme disease risk. In the summer following acorn booms, white-footed mouse numbers explode. In New York’s Hudson Valley, these mice play a major role in infecting blacklegged ticks with the agents that cause Lyme disease, Babesiosis, and Anaplasmosis.
Cary Institute disease ecologist Rick Ostfeld explains.
“The ticks that are emerging as larvae in August – just as the mice and chipmunks are reaching their population peaks – they have tons of excellent hosts to feed from. They survive well and they get infected with tick-borne pathogens. And that means that two years following a good acorn crop we see high abundance of infected ticks, which represents a risk of human exposure to tick borne disease.”
Predictions are based on 20 years of field studies that have confirmed the relationship among acorn mast years, mouse outbreaks, and the prevalence of infected ticks. Mark your calendars – 2017 will likely be a bad year for Lyme disease.
Full interview with Rick Ostfeld, a disease ecologist at the Cary Institute of Ecosystem Studies
Photo, posted August 16, 2012, courtesy of Rabiem22 via Flickr.
Moss is Boss: Using Plants to Determine Direction
Areas of Science: Plant Biology
Time Required: Short (2-5 days)
Material Availability: You must have access to three lone trees that have moss growing on their bases, and that are not shaded by other trees or buildings.
Cost: Low ($20 - $50)
Abstract

Have you ever gone camping, looked up at the stars, and found the Big Dipper? Two stars in the dipper part of this constellation point to Polaris, the north star, which people have used for thousands of years to help them find their way. In this plant biology science fair project, you'll investigate whether plants, like moss, can help you find your way, too.
Objective

To determine whether the location of moss growth on trees is a good way to determine cardinal direction.
Kristin Strong, Science Buddies
Can plants talk? No, but they can still tell you things, like whether spring or fall is coming, if there's a drought, if there's enough nitrogen in the soil, or if a plant has an infection. That's a lot of communication for something that doesn't have a mouth! People have used the signs and signals of nature for thousands of years to make calendars and to figure out the best time to plant crops, to migrate, or to hunt. Following nature was a matter of survival.
Paying attention to natural signs has also been critical to people as they travel. Before the development of the compass, sailors and land voyagers used stars and plants to figure out their cardinal direction (north, south, east, or west) and their latitude (distance from the equator) as they moved from place to place. Perhaps the best-known star for ancient travelers in the northern hemisphere of Earth was Polaris, the north star, which can be found using a star constellation called the Big Dipper (also known as the Drinking Gourd to people who were trying to escape slavery by fleeing north). Stars are fine to use by night, but during the day, people needed another method to figure out their direction. The Sun, of course, can give a general sense of east and west, since it rises in the east and sets in the west, but it's helpful to have more information, especially on cloudy days! The plants that people have turned to most to help them find their way are the lowly mosses.
Figure 1. This photo shows an example of a type of moss. (A. Lin, Wikipedia, 2006.)
Jewel-green, soft, and carpet-like, mosses love to grow where there is low light and moisture. You'll often find them along the edges of streams in the woods, or poking up through cracks in city sidewalks. Moisture is very important to mosses: they are very thin plants, they have no waxy coating to protect them from drying out, and they need water to reproduce.
So how does moss relate to cardinal direction? Earth is a sphere and spins on a tilted axis. The result of this arrangement is that in the northern hemisphere, the southern side of any fixed object gets more sunlight than the northern side. The opposite is true in the southern hemisphere. As you can see in the drawings below, the southern side of the tree in the northern hemisphere gets more sunlight, whether it's summer or winter (the Sun is just higher in the sky and has more direct, or straight-on, rays in the summer).
Figure 2. These drawings show that the southern side of objects, like trees in the northern hemisphere, gets more sunlight than the northern side, whether it's summer or winter.
So, if you have a lone tree in the northern hemisphere (that is not shaded by any other trees), on which side of the tree do you think moss would like to grow? In this science fair project, you're going to find three such trees and photograph their trunks to see if you really can tell the cardinal direction from the growth of moss on trees.
Terms and Concepts
- Cardinal direction
- Why do people pay attention to the signs and signals in nature?
- Before the development of compasses, how did people figure out in which direction they were going?
- Why does moss like to grow in low-light areas with lots of moisture?
- Which side of a tree gets the most Sun exposure in the northern hemisphere? What about in the southern hemisphere?
These sources discuss moss and the location of its growth:
- EarthSky Contributors. (2008). Does moss only grow on the north side of tree trunks? Retrieved April 10, 2009, from http://scienceline.ucsb.edu/getkey.php?key=629
- Indiana University. (2002, December 2). Is it true that moss grows on the north side of a tree? Don and Yael give the answer on this Moment of Science. Retrieved April 10, 2009, from http://amos.indiana.edu/library/scripts/moss.html
- Wikipedia Contributors. (2009, March 23). Moss. Wikipedia: The Free Encyclopedia. Retrieved April 10, 2009, from http://en.wikipedia.org/w/index.php?title=Moss&oldid=279097684
For help creating graphs, try this website:
- National Center for Education Statistics (n.d.). Create a Graph. Retrieved March 19, 2009, from https://nces.ed.gov/nceskids/CreateAGraph/default.aspx
This source describes how a compass works:
- Brain, M. (2009). How Compasses Work. Retrieved April 17, 2009, from http://www.howstuffworks.com/compass.htm
Materials and Equipment
- Trees (3)
- With easily visible trunks
- With enough moss on their bases to be visible by a picture taken from a few feet away
- Not crowded or shaded by other trees or buildings
- In a similar geographic area (within a mile or so of each other)
- Compass, available from Carolina Biological, item #: 758669
- Sticks, straight (4)
- Tape measure
- Digital camera
- Graph paper
- Colored pencil
- Lab notebook
Note: Before starting this science fair project, read about how a compass works in the Bibliography, and play with the compass to familiarize yourself with how to use it. The needle of the compass has an arrow head or a white tip. This is the part to focus on during the experiment. You will line up the arrow head or white tip with N (for north) on the background of the compass.
Testing Your Trees
- Find three trees that meet the requirements listed above in the Materials and Equipment section.
- Place the compass on flat ground near the base of the first tree. Slowly turn the compass until the compass arrow head or white tip is pointing in the same direction as the "N" on the compass.
- Place the first stick at the base of the tree so that it is parallel to the compass needle and pointing north from the base of the tree. This will be your north stick. If desired, use another stick to scratch out an "N" in the dirt beside the north stick to help you remember its direction.
Figure 3. This photo shows how to make the north stick parallel with the needle of the compass.
- From where you are standing looking at the tree, take your compass to the right of the tree in approximately the area that you think west will be. Set your compass on flat ground near the base of the tree. Slowly turn the compass until the needle is again pointing in the same direction as the "N" on the compass.
- Place the second stick at the base of the tree so that it is perpendicular (at a 90-degree angle) to the compass needle (pointing in the direction of west on the compass). This will be your west stick. If desired, use another stick to scratch out a "W" in the dirt beside your west stick to help you remember its direction.
Figure 4. This photo shows how to make your west stick perpendicular to the needle of the compass.
- From where you are standing looking at the tree, take your compass to the right of the tree in approximately the area that you think south will be. Set your compass on flat ground near the base of the tree. Slowly turn the compass until the needle is again pointing in the same direction as the "N" on the compass.
- Place the third stick at the base of the tree so that it is parallel to the compass needle and pointing south from the base of the tree. This will be your south stick. If desired, use another stick to scratch out an "S" in the dirt beside the south stick to help you remember its direction.
Figure 5. This photo shows how to make your south stick parallel to the needle on the compass.
- From where you are standing looking at the tree, take your compass to the right of the tree in approximately the area that you think east will be. Set your compass on flat ground near the base of the tree. Slowly turn the compass until the needle is again pointing in the same direction as the N on the compass.
- Place the fourth stick at the base of the tree so that it is perpendicular to the compass needle and pointing east from the base of the tree. This will be your east stick. If desired, use another stick to scratch out an "E" beside the east stick to help you remember its direction.
Figure 6. This photo shows how to make the east stick perpendicular to the compass needle.
- You will now have four sticks pointing out from the base of the tree in each of the four directions, as shown below.
Figure 7. This photo shows a tree with the four directions marked around its base.
- Go to each stick and take a photograph of the base of the tree, using the following instructions:
- Kneel down so that your camera is parallel to the ground as you take the photograph (but it doesn't need to be on the ground).
- Use the stick as a guide for each photograph. As you look through the viewfinder, make sure the stick will be in the bottom center of each photo, as shown below.
- Try to stay the same distance away from the tree for each photograph. You can use the measuring tape to help you stay approximately the same distance away.
- You may want to take a couple of photos just in case your first photo doesn't work out.
- Write down in your lab notebook which stick you photographed first, second, third, and fourth, so that when you take the photos home and print them out, you will know in which direction you were looking.
Figure 8. This photo shows two example photographs taken of a tree from different directions.
- Repeat steps 2–11 for two other trees.
Analyzing Your Photos
- Print out your photos onto graph paper.
- If desired, shade in the moss with one colored pencil and shade in the bark with a different colored pencil to make it easier to distinguish the two on the graph paper.
- Count up the approximate number of squares that are covered by moss. For partial squares, estimate how much of the square is covered by moss, for example, if it looks like one-fourth of the square is covered, add 0.25; if it looks like half of the square is covered, add 0.5; if it looks like three-fourths of the square is covered, add 0.75. For each tree, enter your counts in a data table, like the one below:
Tree 1 Square Counts Data Table
Columns: Direction | Number of moss-covered squares | Total number of tree base squares (count all the squares in the tree base, whether they are covered by moss or just bark) | Percentage of the tree base covered by moss = (number of moss-covered squares divided by the total number of tree base squares) x 100
- Count up the total number of tree base squares (any square that falls within the tree base, whether it is covered by moss or just bark), and enter your count in the data table.
- Calculate the percentage of the tree trunk base that is covered by moss for each direction by dividing the number of moss-covered squares by the total number of tree base squares and multiplying by 100.
- Combine your three data tables into one by entering the last column of each table into a new data table, like the one below.
Percentage of the Tree Bases Covered by Moss Data Table
- For each direction, calculate the average percentage of the tree bases that were covered by moss and enter your calculations in the data table (the short script after this list sketches these calculations).
- Make a bar chart showing the four directions on the x-axis and the average percentage of the tree bases that were covered by moss on the y-axis. You can make the bar chart by hand or use a website like Create a Graph to make the graph on the computer and print it.
- In which direction did you observe the most moss growth? In which direction did you observe the least? Do you think moss growth is useful as an indicator of cardinal direction?
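If you would like to double-check your arithmetic, here is a minimal Python sketch of the percentage and averaging calculations. Every square count and percentage in it is an invented example, not real data.

```python
# Illustrative square counts for one tree; replace these with your own
# numbers read off the graph-paper printouts.
moss_squares = {"North": 18.25, "West": 7.5, "South": 3.0, "East": 6.75}
total_squares = {"North": 120, "West": 118, "South": 122, "East": 119}

# Percentage of the tree base covered by moss, for each direction.
percent_covered = {
    direction: moss_squares[direction] / total_squares[direction] * 100
    for direction in moss_squares
}
print(percent_covered)

# Average the percentages across all three trees (the second and third
# trees below use invented numbers as well).
trees = [
    percent_covered,
    {"North": 12.8, "West": 8.1, "South": 3.9, "East": 6.2},
    {"North": 17.5, "West": 5.9, "South": 2.1, "East": 4.8},
]
for direction in ["North", "West", "South", "East"]:
    average = sum(tree[direction] for tree in trees) / len(trees)
    print(f"{direction}: average {average:.1f}% moss cover")
```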
Variations
- Try using mushroom growth around trees, instead of moss. Be careful not to touch the mushrooms, as they could be poisonous.
The Maiasaura was a genus of dinosaur that inhabited our planet during the Late Cretaceous period (about 70 million years ago). It is classified within the group of the hadrosaurid dinosaurs.
This genus was able to move using two of its four limbs. It was a relatively heavy dinosaur, weighing about 5 tons and measuring about 10 meters long (it should be noted that roughly half of that length is tail).
The Maiasaura is a relatively popular and well-known dinosaur -- so much so that it has appeared in the famous Jurassic Park movie saga.
Do you want to know more? On our page you will find complete information about the Maiasaura. Read on and find out all about this herbivorous dinosaur from the Cretaceous period!
Taxonomy of the Maiasaura
- The specimen belongs to the Animalia kingdom.
- This dinosaur corresponds to the phylum Chordata.
- Its class is Sauropsida.
- The animal belongs to the super-order called Dinosauria.
- Its order is called Ornithischia.
- The Maiasaura corresponds to the clade designated Neornithischia.
- It is part of the suborder Ornithopoda.
- It is located within the family called Hadrosauridae.
- It belongs to the subfamily Saurolophinae, specifically to the tribe Brachylophosaurini.
- Its genus is the Maiasaura.
As we already mentioned, it lived during the Cretaceous period, specifically during the upper (late) stage of it. The curious thing is that when the first great find of this enormous specimen was made, a huge group of nests was discovered as well.
That is why the site is known today to many great and prestigious scientists as the famous Egg Mountain, located in Montana in the United States.
Since then, the scientific world of paleontology has recognized its earlier error: it had been assumed that large dinosaurs did not provide food for their offspring, but everyone now knows that they certainly did, as verified by the find we have just mentioned.
Characteristics of the Maiasaura
This unique specimen could reach a length of more than 10 meters and a weight of up to about 3 tons. Moreover, its bone structure and posture clearly indicate that the animal had the capacity and ability to move from one point to another using only two of its limbs.
Likewise, one notable part of this enormous specimen was its tail, which fulfilled a truly necessary function: maintaining balance while the animal moved from place to place.
Another detail we know about this creature is that it used its bipedal stance to reach vegetation at considerable heights, so food was always available to it; however, it is also known that it could settle onto all four limbs to feed on vegetation growing at less prominent heights.
Feeding the Maiasaura
As we have already mentioned, this huge animal based its diet mainly on plant matter, including certain fruits and seeds; it is also known that the animal commonly chose which fruit or seed to eat.
In addition, this rare specimen used its solid, resistant teeth, set behind a beak quite similar to that of the animals we know today as ducks -- a beak common among the hadrosaurids. The teeth were used frequently to shred food so it could later be digested much more comfortably.
A curious feature of its structure is that the skull bore a strange prominence of bony tissue, located just above the pair of sockets that housed the animal's eyes.
Another interesting fact about this specimen is that, in order to maintain a full and healthy body, it had to eat almost 100 kilograms of plant food -- an enormous but necessary quantity.
History of the Maiasaura
The main discovery of the Maiasaura was made by Jack Horner, who proved to be one of the best-known scientists in paleontology, and who also actively participated in one of the most famous dinosaur film sagas of all time, the super-famous Jurassic Park.
This great scientist, together with the prestigious Robert Makela, was exploring a formation rich in prehistoric remains, identified as the Two Medicine Formation and located in Montana, in 1979.
It was in that formation that they found the remains of this enormous specimen. They gave it the name Maiasaura -- "good mother lizard" -- because of the animal's reputation as a mother, since a series of nests was found near the site.
These nests were in reality medium-sized, oval depressions in the mud, containing sets of fractured egg shells.
This detail indicates that the Maiasaura took care of its offspring until the young could live independently without any kind of problem, although at first several wrong conclusions were drawn about the situation.
What is really interesting about the discovery of these famous nests is that specimens of the dinosaur at almost every age were found there. This allowed the scientific community to carry out extensive and fruitful research, reaching conclusions that are today exhibited in museums and universities across our planet; the discovery turned out to be one of the most important in the history of paleontology.
From the same evidence, it is also believed that this specimen gathered in herds in order to protect itself from threats, whether from the climate or from predators that wanted to feed on the animals or their offspring.
The herds traveled across long, extensive territories and later returned to the place where they were born to raise their offspring. The cycle repeated tirelessly, time and time again: new, young specimens replaced the old ones, and everything started again from scratch.
As for its defenses, this animal had a number of resources in its repertoire that helped protect it from prowling predators. One was rapid escape, to avoid becoming the meal of a stronger dinosaur; another, remarkably, was an ability to blend into its surroundings, so that predators could hardly detect its presence at all.
It is also known that this animal lived alongside the other members of its group, which may have numbered in the thousands; scientists have suggested a figure as high as 10,000 individuals, a detail that has surprised more than one researcher, since that would make it one of the largest gatherings of animals in all of prehistory.
It has also been shown that these specimens raised their descendants in nests that they built themselves -- small places in which their young would be much safer, well protected from unfavorable weather as well as from the other threats of that era.
These nests were spaced more than 700 centimeters apart, which means these small homes stood fairly close together. They were about 200 centimeters deep, giving each the appearance of a crater -- a depression dug out by these incredible specimens.
Each nest probably held three or four dozen eggs, all in quite orderly positions, arranged in a circle; it has also been determined that the eggs were similar in size to those of ostriches.
At this point many of you may wonder how these eggs were incubated. In fact, they incubated thanks to the weather conditions and environmental circumstances of the time.
The parents placed a layer of plant matter in the nest so that the intense heat of the region would warm the eggs, meaning that no adult had to sit on the nest to provide body heat.
It is also known that when a hatchling was only a few months old, its limbs were not yet fully developed, so walking or moving from place to place was simply not possible for it.
Another scientist, interpreting further data from the remains, reported that these animals used their teeth from a very young age. We can deduce from this that the parents provided the young with food while they were still unable to move about and find it themselves -- something that makes a great deal of sense when we look at the whole picture.
These small, young dinosaurs measured a total of 16 centimeters at hatching, and some reached 58 centimeters within just a few months. At around that size a specimen was probably preparing to leave the nest and begin an independent life; many reached that point, with relatively few deaths or losses in this type of dinosaur.
Since this occurred only in some genera that were warm-blooded, the death rate was relatively low compared to other dinosaurs of the time.
These young Maiasaura also differed physically from fully mature specimens: a juvenile probably had proportionally larger eyes, and above all a muzzle that was less developed than an adult's -- a set of traits typical of creatures that live with their parents for a long time on the way to independence and survival.
Another characteristic of the young was a spine arranged in an arc, and a neck shaped like the letter "U"; this last quality was quite similar to that of adult specimens, unlike what we mentioned about the spine.
Presumably the young animal's build was not yet sturdy and its bones were not very strong during its first months of life; little by little, however, it gained size and strength until it could finally leave the nest.
Another detail we know about this famous specimen is that the parents not only brought plant food to the nest, as already mentioned, but also chewed that food so the young could eat it without any problem.
Accordingly, worn teeth have been found in the parents, demonstrating the point above. This happened especially with vegetation of very rigid consistency, since the softer food they preferred was not always available in the territory where they lived, and circumstances could therefore be very difficult for them.
It has also been suggested -- though not yet proven -- that these specimens took turns keeping watch at night, to guard against predators that prowled the area intent on eating the eggs and the young. We can see, then, that these magnificent dinosaurs, which continue to surprise us more and more, had a very organized social structure.
Another finding is that the young dinosaurs managed to double their initial size in a fairly short period of approximately 28 days. In that time both the upper and lower limbs gained much more strength, and even the hip area began to change -- all so that the animal could later move about independently without any problems.
Recent scientific studies have also shown that these young creatures left the nest only at certain times, when accompanied by one of the parents.
In that way the animal could prepare itself for what it would one day have to face alone: within just a few months this dinosaur would have to search for food entirely on its own, but in the meantime it did so with the help of an adult.
By the time the animal was more than 20 months old, its length was already quite considerable, exceeding 300 centimeters. It is worth mentioning that at the age of 4 years it was considered an adult, in the full stage of maturity.
A common behavior among these Maiasaura dinosaurs was that the female returned every year to the same territory where the nests were built, with the same objective of perpetuating the species -- continuing to lay eggs and raise her young until they could manage alone in those territories.
Finally, when these animals reached maturity, they knew they had to leave for other, often distant places to avoid taking food from the other members of their herd, and so they formed their own herds elsewhere. In this way the cycle repeated itself countless times, and thousands of remains have been found across various geographical areas.
It is common for young children with autism to experience social isolation. This happens because they are missing the link to communication. Yes, some children give hugs, smile, and laugh at appropriate times, which is something, but you need to give your child the opportunity to learn these skills in a structured way early on, to build for their future. Early intervention is extremely important to reduce isolation and increase social awareness. It is not easy for a child to learn this, but they can learn it.
But your child needs to learn the basics before jumping ahead, or it will be harder for him or her to understand the concept. When your child is ready you can introduce visual stories, role play, video modeling, and even adaptive materials as props to make it more real. If your child is capable and ready to interact with another child, a good way to do this is with a trained therapist guiding the play.
Once your child has some understanding of the world around them and becomes more aware, communication can become more frustrating if they have not been taught how to do it. It does not have to be through expressive language; it could be through signs, pictures, or even augmentative communication devices.
Motivation is a very important part of teaching children with autism. It can be as simple as having your child pick a play activity into which you can incorporate taking turns, and/or using highly preferred reinforcers for learning. Music therapy is also becoming popular. Over the years I have found that children with autism tune into music and love it. You can dance, follow instructions, and play games, all of which can incorporate interaction with one another.
Gamma rays drape the Earth and the entire universe in a thin, ever-present veil, and their origin remains one of the greatest puzzles of cosmology. However, the mystique of gamma rays -- particles of light comprising the most energetic and penetrating form of electromagnetic radiation -- may soon diminish thanks to research by Dr. Eli Waxman of the Weizmann Institute's Condensed Matter Physics Department together with Prof. Abraham Loeb of the Harvard-Smithsonian Center for Astrophysics.
Their study, reported in the May 11 issue of Nature, suggests that most of the gamma radiation reaching the earth may actually be leftover energy from massive shock waves induced by gravitational forces. Operating on intergalactic clouds of gas, these forces caused them to collapse into themselves, creating giant galactic clusters. This process produced electrons moving at nearly the speed of light -- roughly 185,000 miles per second. The electrons then collided with low energy photons of the "cosmic microwave background radiation," which is believed to be an "echo" of the Big Bang (the point in time billions of years ago when the universe was created in a cosmic explosion). The collision scattered the photons and increased the energy of a fraction of them to that of gamma rays, thus producing the gamma-ray background radiation seen in today's universe.
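One detail worth adding -- standard physics rather than something stated in the article: in this kind of "inverse Compton" scattering, an electron with Lorentz factor gamma boosts a photon of initial energy epsilon to an average energy of roughly

```latex
\langle E_{\gamma} \rangle \approx \frac{4}{3}\,\gamma^{2}\,\varepsilon
```

A cosmic microwave background photon of about 10^-3 eV scattered by an electron with gamma near 10^6 therefore emerges with an energy near a GeV, squarely in the gamma-ray band. These illustrative numbers are not taken from the Waxman-Loeb paper itself.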
The model proposed by Waxman and Loeb, which is consistent with the theory of particle development following the Big Bang, may shed light on the amount of gaseous material currently captured within intergalactic clouds, thereby unraveling another longstanding astrophysical mystery -- that of "missing matter." According to the Big Bang theory, the amount of ordinary matter (as opposed to "dark matter," which is invisible since it does not emit light) in the universe is much larger than that observed in stars and galaxies. Most of the ordinary matter in the universe may therefore be captured within intergalactic clouds, and the observed gamma-ray photons may be the first signature of its existence.
The model and its findings will be examined in upcoming years via an American research satellite probing gamma radiation throughout the universe, as well as a series of earth-based radio wave sensors.
Dr. Eli Waxman's research is supported by the Joseph H. and Belle R. Braun Center for Submicron Research.
The above story is based on materials provided by Weizmann Institute. Note: Materials may be edited for content and length. |
Chemotherapy is a mainstay of tumour therapy. The current guidelines for prescribing a treatment rest heavily on how a treatment or treatment combination has fared statistically, in terms of rates of cancer recurrence and survival without cancer spread. The protocol does not include testing the efficacy of the drug against the unique nature of a person's tumour.
Each person's cancer is unique. Totally unique. There are similarities, but the way in which a person's cancer responds to therapy is a result of that cancer's individual genetics and how those genetics play out in the function of the cancer cells. Two people with the same type of cancer might respond differently to the same treatment. There is now a way of helping to determine which way a person's cancer might respond. It's called chemosensitivity testing. Keep reading...
A term called progression-free survival is used to describe the successful situation in which a person's cancer goes away and never comes back -- that is, it doesn't spread (metastasise). It is the secondary tumour, called a metastasis, that is most often responsible for a fatal outcome, rather than the primary (first) tumour. To go into a bit more detail: a successful treatment (surgery, chemotherapy, radiation therapy, other treatments) has not just killed the diagnosed tumour, it has also killed some really sneaky cells that escaped the tumour. These cells invade the bloodstream and can stay there until they're triggered to exit the blood and form a metastasis. These cells, responsible for cancer spread, are called Circulating Tumour Cells, abbreviated as CTCs.
The idea behind chemosensitivity testing of these circulating tumour cells is this: consider a patient with, for example, Cancer X. There might be two different treatment options for this patient -- Treatment A has been statistically successful in most cases, while Treatment B seems to work in fewer cases but is perhaps less toxic, or perhaps works in cases where Treatment A has failed. Chemosensitivity testing is a tool that can be used at this point. It is possible to find out which treatment works to kill off a person's renegade Circulating Tumour Cells before the treatment is given to the patient. Using a sample of the patient's blood, scientists can isolate the CTCs and simply expose the cells to both treatments. Each patient's cells will respond differently.
Perhaps with Patient X, Treatment A is the Oncologist's preferred choice. Prior to the start of treatment, the Oncologist and Patient X may decide to test how successful Treatment A and Treatment B are at killing the cells that cause cancer spread. Let's say that Treatment A kills more cells than Treatment B does; these results would confirm that Treatment A is indeed the better choice. However, if Treatment B kills more cells than Treatment A, then Treatment B may be the preferred treatment for this patient. Such information is incredibly valuable to practitioners seeking to understand the unique nature of each patient's cancer. Even though Treatment A has been statistically more successful than Treatment B, in the latter case it would in fact be Treatment B that is more likely to decrease the risk of cancer spread for this particular patient.
In addition to current treatment methods, a clinician and their patient may now choose to test whether the treatment of choice works to kill off these renegade cells. With a simple blood test, a patient's CTCs can be isolated and tested. Chemosensitivity testing tests the capacity a treatment has to kill off these CTCs. If a treatment successfully kills CTCs, the chance of cancer progression and relapse reduces. This entire process takes a lot of the 'wait and see' out of the equation, and helps to increase confidence that the chosen treatment will work to reduce the risk of metastasis.
Patients: For more information about chemosensitivity testing, click here
Practitioners: Pachmann, K., et al. (2013). Chemosensitivity Testing of Circulating Epithelial Tumor Cells (CETCs) in Vitro: Correlation to in Vivo Sensitivity and Clinical Outcome. Journal of Cancer Therapy, 4, 597-605.
The purpose of this blog is to provide information for a general audience. It is not intended as personal or professional medical advice, which should be obtained directly from your Healthcare Practitioner. |
Clades adapted to microgravity often need mechanical assistance when visiting planets or locations with artificial gravity
The human form evolved specifically to thrive in an Earth-like environment. When humanity began to colonise space, it was apparent that environments resembling the land surface of the Earth were in fact vanishingly rare. For this reason artificial environments were built in space, resembling Earth as closely as possible; the absence of gravity in freefall was a particular problem, one which could be remedied by rotating space habitats of various kinds. At the same time elaborate plans were made to terraform Mars and other bodies in the Solar System and elsewhere, with the aim of recreating the environment of the Earth on the surface of alien worlds.
But increasingly sophisticated genetic engineering technology at this time promised a different strategy: instead of adapting the environment to suit the colonists, the colonists could themselves be modified to suit the environment. This process already had a name: pantropy, coined by Atomic Age writer James Blish, meaning a process which changes everything.
Pantropy can be achieved by a number of different methods:
Cyborgs, biological sophonts augmented with artificial components
Tweaks, biological clades with minor or major adjustments to their genome
Neogens, with entirely new genomes designed from scratch
Xenogens, with genomes and biochemistry based on local examples
In the Interplanetary age a number of humans with prosthetic body parts volunteered to have specially designed bodyshells which would allow them to operate in extreme environments. Cyborgs of various sorts were involved in the colonisation of Luna, as asteroid miners, prospectors on Venus and on the moons of the outer planets. Since that time many worlds have been initially explored by a wide range of cyborgised bionts.
Early Genetic Tweaking
The first humans genetically adapted for a non-Earthlike environment were the Homo cosmoi; these new humans could survive indefinitely in freefall, and had some resistance to the increased radiation levels encountered in outer space.
Another very early adapted form of humanity were the Martian tweaks, capable of living for extended periods on the Martian surface in the very low atmospheric pressure there. However the terraformation of Mars had already begun; over time the Martian tweaks were displaced from that world, as the atmospheric pressure increased and made their adaptation unnecessary. The extraordinary expense involved in changing the Martian environment was seen at that time as preferable to the idea of creating a race of humans who could only live comfortably on the surface of a nearly airless planet.
When humanity spread to the stars, a great range of planetary environments was encountered; but none exactly replicated that of the Earth. The somewhat higher gravity of Nova Terra and Penglai made mild pantropic adaptation necessary; on other worlds colonists were genetically altered to be tolerant of trace elements or variations in the stellar spectra. Even on Mars the terraformation process was never completed, and the population of that world today are all descended from people who have been altered to live comfortably in relatively low pressure.
An adaptation to an aquatic lifestyle was a very early form of pantropy that has become very widespread. Water is a very common compound in planetary environments, although it is generally locked up as ice. Heating an icy world, either locally or on a global scale, provides an environment that is suitable for water adapted clades like the Merpeople (or water-breathing clades like the Europans and their descendants). Very often the water has dissolved salts or other factors which require the waterdwellers to be modified still further.
Note that not all tweaks are designed to live in environments which are not Earth-like; many have been modified to occupy quite Earth-like environments, but not those which can be inhabited by nearbaseline humans. Examples include the aforementioned aquatic tweaks, flying tweaks, those adapted for tree-dwelling or to underground environments and so on. The Genetekker civilisation, and the later House Genen which emerged from it, have a long history of specialisation in genetic modification or various environments.
A prospective colonist who desires to become adapted to a new world may use a number of different pantropic techniques. The most basic is somatic gene therapy. Here genes are introduced by virus or nanomachine into the cells of an individual. This is adequate for low-level changes such as tolerance of different gravities or waterdwelling adaptations. Somatic therapy can change a baseline biochemistry to a limited extent, enough to produce tolerance of certain alien proteins, or a moderate change in ambient temperature.
But to radically change an individual's physical form, or biochemistry, or to allow tolerance of extreme temperatures, radiation or pressures, that individual's body must be completely rebuilt. Bionano-assembler technology can achieve this in certain cases, although this process is generally far too traumatic to occur while the individual is conscious. To adapt the colonist to a completely novel environment the best way is usually to upload that person's consciousness into virtual form, then download them into a newly constructed Neogen or Xenomorph body. Specialised bio-engenerator equipment is required for this process, which can take several days. Alternately the uploaded consciousness can be downloaded into a robotic body designed to withstand the conditions of the new world or environment.
Once the colonist has a new form, germ-line modifications will allow em to breed true; any offspring e may have will also be adapted to the new environment (allowing for normal genetic drift). In the case of colonists downloaded into artificial bodies, in many cases these bodies can be made self-replicating (effectively becoming von Neumann machines). Sometimes the new forms, either biological or artificial, cannot easily be made to replicate themselves, and must be grown or manufactured industrially.
Extreme environments and Xenobiochemistry
But other environments required much greater changes to be made before the population could thrive. On the planet Trees, for example, a complex biosphere already existed, which differed biochemically from Earth life quite radically. To allow colonists to live on this world they had to be radically changed at the molecular level, with many proteins and metabolic compounds being replaced with native versions. This adaptation allowed the colonists to digest local foodstuffs, but such a radical adaptation took many centuries of research and development, and many tragic failures, to achieve.
Extreme environments such as the hydrogen atmospheres of gas giants, and the cold surfaces of ammonia-ice worlds, required even more radical biochemical adaptations. In some cases, the discovery of alien life in an environment entirely different to that of Earth gave impetus to the development of pantropic adaptations suitable for such conditions. For instance the floating ecology found at 54 Hydrae (Ruach) consisted of life where the cytoplasm was gas-based rather than liquid based. After this discovery a range of very low density clades were created, capable of floating in the atmospheres of similar Jovian worlds. Similar discoveries of very-low temperature ammonia- and methane-based life allowed the development of low-temperature tweaks and neogen clades, using similar biochemistries.
When the high-temperature biosphere on To'ul'h was discovered, human ambassadors and xenologists were radically altered to tolerate the searing temperatures of that world. At first these adapted humans were given humanoid forms, despite their extremophile biochemistry; however the conditions on the surface of To'ul'h are so different to anything that a human form can accommodate, these To'ul'h-adapted humanoids were practically helpless in the thick, hot, flowing winds at the bottom of the atmosphere. It was not until visitors to To'ul'h were given forms resembling the local biota that they could begin to function in those conditions. Similarly those To'ul'hs who came out of their shrouded planet to meet with Terragens in their own society were entirely helpless until they were given suitable forms (generally, although not exclusively, humanoid).
Ethical aspects of Pantropy
The radical changes associated with pantropy can lead to radical changes in the mentality of the colonist. Once given a new biochemistry, or an artificial body and mind, the functions of the colonist's brain and mind may be subtly or radically altered. From the very outset of interstellar colonisation there has been a debate over the ethics of altering human bodies and mentalities in this way; although the inhabitants of extreme worlds may be descended from Terragen ancestors of one kind or another, they are often so changed to be almost entirely alien.
Sometimes the process of extreme modification required for pantropic adaptations produces an entirely novel form of modosophont, entirely different in mind and body from the species from which it was derived. For instance the cold-adapted Methanoid clades use a xenogenetic template as a basis for their biology, and thrive at very low temperatures and low subjective speeds. Even though Methanoids are humanoid, and resemble baseline humans in many respects, they cannot tolerate temperatures above 100K and rarely interact culturally with warm-modosophonts. When the Methanoids were created some questioned the wisdom of creating such an extreme species, likely to become isolated from mainstream civilisation.
In fact many conflicts have developed between species modified by Pantropy and those who attempt to create an environment that more closely resembles that on Old Earth. The expulsion of the Martian Tweaks, similar expulsions of high temperature tweaks from Ribblehead and Venus, and various conflicts between Methanoids and warmer and faster clades culminating in the Epp War are examples of this sort of dispute. However the existence of so many planetary types in the galaxy means that in general there is plenty of room for a wide range of clades, each adapted to a different environment.
To what extent the diverse inhabitants of the Terragen Sphere can claim to be descended from lifeforms originally found on Old Earth is highly debatable, and for many of those inhabitants the question is practically irrelevant. Each citizen of a new colony has a unique perspective on reality that is tempered by their physical form, no matter how exotic, and the days when baseline humans were the only significant members of Terragen Civilisation are practically forgotten by all but a few.
Genemod - Text by M. Alan Kazlev Genetic modification/genehack module. Can be phenotypic (modify phenotype only), genotypic (modify germ-cells only), or both. Note that a phenotypically modded cell, when cloned, will have the same effect as a genotypic mod. Most genemods today are genomically modular, and carefully labelled and wetwared to ensure a compatibility test is run with the user's genome before allowing insertion. Even so, it is advisable to ensure one's genome is backed up, even when using reputable brands.
Homorph - Text by M. Alan Kazlev Any non-human, such as a transhuman, posthuman, vec or alien, using an artificial humanoid body.
Homorph, Alien - Text by Steve Bowers Xenosophonts who deliberately adopt human (or general terragen hominid) form, morphology, or biochemistry. While homomorphism is fairly popular among some alien races and clades, many alien societies are under embargo to avoid cross-cultural pollution, so the homorph is often illegal. See also Toulhuman.
Space People - Text by M. Alan Kazlev (archaic) A common early term for Space Adapted Humans, still used by some descendants of the original clade. (Second Federation Era) A term for vacuum adapted tweaks, now rarely used.
Space Spiders - Text by Steve Bowers Vacuum and zero gee adapted, sentient or semisentient spidersplices, from micro-scale to giant; usually capable of producing buckyfibre silk. Contribute to many megascale building projects, sometimes controlled directly by transcended postspiders.
Terrachauvinism - Text by M. Alan Kazlev Belief that terragen life and intelligence is superior to other lifeforms and intelligences in the galaxy, and that this is the reason why the terragen bubble is the largest civilization at present. A sort of variant Anthropism that includes all terragen mindkind. Terrachauvinism completely ignores extinct alien civilizations of greater than terragen extent, as well as evidence of large empires detected by the Argus Array.
To'ul'human - Text by Steve Bowers Homorph To'ul'h who have been given or have adopted human shape by advanced biotech. They can also take To'ul'h form in earthlike environments or adapted human form in toulovenusian environments.
The decline and extinction of plant and animal species and their habitats isn’t only a loss of the world’s natural riches. Scientists find it is also increasing the spread of infectious diseases and influencing the emergence of new diseases.
Take the case of the opossum and the white-footed mouse and their surprising roles in the spread of Lyme disease among humans. The more white-footed mice there are, the more Lyme disease humans get, while the presence of opossums may actually protect us. Opossum populations are declining as their natural forest habitats are bulldozed for development. White-footed mice, which are less dependent on forests, are flourishing. In several cases, researchers say, the species most likely to decline as biodiversity is lost are the ones most likely to reduce the transmission of pathogens.
The December 2 issue of the journal Nature reports on findings by 13 scientists that robust biodiversity tends to decrease the transmission of infectious diseases, while declining biodiversity increases that danger. They also say that principle seems to hold even if the population of the species hosting the pathogen remains stable: in an environment with a wide variety of species, the spread of disease is less likely. Researchers studied the spread of several contagious human illnesses, including Lyme disease, West Nile virus, schistosomiasis (a parasitic affliction among people in tropical climates), and hantavirus pulmonary syndrome, an often-fatal disease spread by rodents. They note that from 1940 to 2004 more than 300 new "emerging disease events" were found in humans, and that many old diseases such as malaria are reasserting themselves with a vengeance. Researchers also found that increased spread of disease in plants, animals, and the corals of the sea accompanies biodiversity declines.
Conventional thinking has dictated that human diseases are best understood by looking at humans. "Now there is the beginning of a movement to bring epidemiology and ecology together," EPA scientist Montira Pongsiri told ScienceDaily. Pongsiri and Joe Roman, a biologist at the University of Vermont, wrote about biodiversity and global disease ecology in the December issue of BioScience.
Roman explained to ScienceDaily that Lyme disease was probably rare historically, because ticks once fed on a wide range of small mammals in the forests, and some of those hosts were poor carriers for the disease, so only a small number of infected ticks reached human populations. The Nature article points out that ticks that try to bite sharp-clawed opossums are likely to get picked off and killed. White-footed mice thrive in species-poor environments, such as small patches of forest on the edges of neighborhoods, Roman said. They carry Lyme infections without getting sick themselves, and with other small mammals gone, they are a prime host for large numbers of ticks to feed on.
An NPR broadcast on the topic reports there were 30,000 confirmed cases of Lyme disease in the U.S. in 2009, up from 12,000 in 1995. The Nature study reports on land in Virginia where research showed ticks that fed on the mice were highly likely to be infected, while ticks that fed on opossums were not. Low bird diversity increases the numbers of mosquitoes that spread West Nile virus. The article also reported field studies showing the reservoir of rodent hantavirus increased when rodent diversity declined.
Diverse Voices in Joyful Song
A flash mob has been described as a group of friends and strangers who gather on short notice in some public place to do something spontaneous and entertaining. Enjoy The Philadelphia Opera Company's "Hallelujah," a Random Act of Culture, and the exuberant performance of the Christmas Food Court Flash Mob.
By varying the thickness of a lens, we can alter the distance at which it focuses. When the lens in our eye does this, it is called accommodation. The eye does this by contracting the ciliary muscles that are attached to the top and bottom of the lens. When the muscles contract, they pull vertically on the lens, causing its thickness to decrease. In this next part, there are three different lens thicknesses.
The first lens shown here is very thin, and has a focus that is relatively close to the lens. In the image, there are also two shaded/dashed gray circles. These circles represent the two intersecting spheres that were used to form these particular lenses. As you look through the various lenses, notice that the two gray circles move closer and closer together. There is also a light blue line that runs through the center of each of the lenses. This line represents the secondary axis of the lens. It, too, stays the same in each of the lens cases.
One thing that you may notice in the thin lens is that the rays that hit the extremities of the lens do not seem to focus where all the other rays do. Based on this model, it turns out the rays toward the edges really do behave this way. We can also notice this "imperfection" in the design of our own eyes. Even though eyes are supposed to be spherically symmetrical about a central axis, we do not see images that are off-center as clearly. This forms our peripheral vision, which is in general quite blurry. This is partially due to the "imperfect" focusing of our own eyes, and partially due to the fact that the receptors on our retina tend to be concentrated near the center of the retina, at the fovea.
In the medium lens, you will notice that the focus has moved out a bit. Also, the rays from the extremities seem to focus closer to the original focal point, at least compared with the thin-lens case. From each picture to the next, the lens thickness doubles.
In the thick lens, the rays almost focus together at the same point, and the focal point has moved out further still. This "thick" lens best approximates the lens of our eye at rest. When the eye is focusing on distant objects, it relaxes the ciliary muscles controlling the lens, allowing it to take this resting thickness. As we focus on closer objects, the reduced thickness of the lens causes the focal length to shorten, which brings the items into focus.
On the bottom, there is a line with the relative focal distances for each of the three lens thicknesses. When you consider that the human lens is only a fraction of the size of the lens shown at the right, the actual variations in focal length are considerably smaller. Despite this, our eyes allow us to focus on and see objects from 30 cm to kilometers away with only a small adjustment.
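To connect this to standard optics (an addition, not part of the original tutorial), the thin-lens equation relates the focal length f to the object distance d_o and the image distance d_i:

```latex
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
```

In the eye, d_i is essentially fixed at the lens-to-retina distance, roughly 2 cm (a rough textbook value used here purely for illustration). As an object moves closer and d_o shrinks, the only way to satisfy the equation is for f to shrink as well -- which is the accommodation described above. With d_i = 2 cm, focusing at d_o = 30 cm requires f of about 1.9 cm, while focusing on a very distant object requires f of about 2.0 cm, so the eye's whole working range corresponds to a focal-length change of only a few percent.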
It turns out that there is another factor that helps the lens focus properly: since it would be inefficient for our eyes to expand or contract every time we change focus, another mechanism is also at play.
The partial nuclear meltdown in Fukushima has made it very clear that nuclear facilities are NOT 100% safe. Approximately 3 million people within the USA live within 10 miles of a nuclear power plant, and if you're one of them, it would be wise to make preparations for the small chance that you may one day experience a nuclear emergency. Please consider that in the event of a global catastrophe or large nationwide disaster, the resources used to maintain nuclear facilities will be in jeopardy, which could lead to complete meltdowns of facilities. If you're an urban survivalist preparing for a TEOTWAWKI event, you must consider that there are over 400 nuclear power plants on the planet, and therefore protecting you and your family from radiation should be a top priority.
Nuclear power plants use nuclear fission (the splitting of unstable atoms) to generate massive amounts of heat. The heat is used to turn water into steam that turns massive turbines and generators to produce electricity. A nuclear power plant requires a lot of water and dedicated systems to cool down the reactors; the problems occur when the coolant system malfunctions and the nuclear material starts to overheat. If the heat becomes great enough, the barrier protecting the nuclear material from the outside can literally melt away, releasing radioactive particles into the air. These particles can potentially travel a great distance if the conditions are right and cause massive damage.
The government has taken many precautions to warn you in the event that a nuclear emergency occurs, and they have plans in place to immediately deal with nuclear emergencies. But you can’t always rely on the government and so we suggest you take the following actions now so you can be better prepared for if/when it happens:
- Make a plan with family/friends/close neighbors: discuss local and long distance emergency contact information of relatives/trusted people/emergency services (phone numbers/addresses/names/etc.). Also discuss evacuation routes, rally points, and ways to communicate in the event that a disaster occurs. Be sure to have several backup plans for everything you do.
- Create an emergency urban survival kit: a minimum of 3 days’ worth of food and water is essential along with basic urban survival gear such as flashlights, knives, extra clothing will be needed. To protect yourself from nuclear radiation you’ll want Potassium Iodide pills, duct tape, dust masks, disposable coverall suits, plastic sheeting, and scissors. Your urban survival kit should also contain copies of important documents so that you can get your life back on track after the disaster.
- Stay up to date with local emergency plans: if you live within 10 miles of a nuclear facility, your local government will likely have written material containing information about local emergency shelters, evacuation procedures, and basic information to better understand the dangers of a nuclear emergency.
- Purchase a radiation detector: You can't always rely on the government to give you proper information. With your very own Geiger counter, you can see for yourself what the radiation levels are in your area and make your own decision to stay or evacuate. Also keep in mind that in a large enough nuclear disaster, there may be no authorities left standing to give you warning information; you may be on your own. If you're going to purchase a radiation detector, be sure to study what levels of radiation are acceptable and what levels warrant an evacuation.
What to do During a Nuclear Power Plant Emergency
Depending on the severity, you may be told to stay indoors or to evacuate immediately. If you're asked to stay indoors, turn off your air conditioning and spend the majority of your time in the innermost room of your building (whatever area has the most walls between you and the outside). If you're asked to evacuate, quickly gather your BOB (bug out bag) and travel away from the radiation. Know the wind currents in your area and travel crosswind (as opposed to downwind or upwind). If you're traveling in a car, keep your windows and vents closed until you have put enough distance between you and the incident area. During a nuclear incident you want to minimize exposure to radioactive material as much as possible; this means using a dust mask and wearing disposable clothing that covers the majority of your skin. If you have Potassium Iodide, take the recommended dosage until the emergency is over.
What to do After a Nuclear Power Plant Emergency
The severity of the crisis can vary: it could turn out to be a small problem, or it could grow into something worse than Chernobyl. The following are some simple guidelines to help prevent radiation sickness and ensure the health of your family:
- Once in a safe place, remove all clothing and take a thorough shower to remove all radioactive dust/particles from the surface of your body.
- Listen to radio/news networks to get the latest information on the emergency. HAM radio would be another excellent resource if available.
- Seek medical treatment immediately if you exhibit signs of radiation sickness (nausea, fatigue, rashes/burns, hair loss).
- Return home ONLY when you know it’s 100% safe.
- Once home, throw away any food that was not in the fridge or not in a sealed container.
This is a tutorial on how to use file permissions and the chmod command.
What are file permissions?
File permissions are rather important as they specify what you let people do to your files. There are generally three types of file permissions:
- Read Permission
This allows people to 'read' your files, for example to load a text file in a text editor. However, read is all they can do: they can't edit it and save the changes, delete it, or move it, but they can copy it into their own directory.
- Write Permission
This allows people to change your file or even delete it.
- Execute Permission
This means that people can 'run' your file, for example if it were a CGI or shell script. Folder-wise, it means that people can 'cd' into your directory; but remember, they cannot list the files in that directory unless you have also set read permission on it.
Who can I give these permissions to?
These are the different types of person you can set permissions for:
- User
This is yourself, and sometimes you'll have to set permissions for yourself. For example, if you write a script of some form, you have to give yourself execute permission on the file in order to run it.
- Group
These are the people that belong to your group on the system. Some examples of groups on Redbrick would be 'member', 'committee' and 'guest'. Generally, you would be in the 'member' group, unless you are on the committee. Setting permissions for your group means that only your group can do whatever you let them do with your files.
- Others
Basically, this means everyone else, i.e. those outside of your group. Another very important example is webpages, as you need to set permissions for others to allow people to view your HTML files.
- All
Simply, this is everyone: user, group and others together.
How do I set permissions?
To set permissions on your files, you use the 'chmod' command at the command line.
Say, for example, you have a file called 'hello.txt' in your directory, and you want to allow everyone else to read it, but not to change or delete it. You use the following command:
chmod go+r hello.txt
and then hit return. What this does is give group and others read permission on 'hello.txt'. The + means to 'give' those permissions to that file.
Say you wanted to let people change into a directory in your home directory called 'stuff', and let them list and read all the files in it. Use the following commands:
chmod go+rx stuff (hit return)
cd stuff (hit return)
chmod go+r * (hit return)
(Note that on Unix the pattern *.* only matches names containing a dot; a plain * matches all the files.)
Now, say you wrote a shell script called 'moo', and you want to be able to run it yourself, you give yourself execute permissions:
chmod u+x moo (return)
Beginning to see the pattern? Now say you wanted to allow everyone to be able to read a file called 'results.txt', and you needed to set the permissions for everyone including yourself for some reason, instead of using ugo for user, group and others, you can simply use:
chmod a+r results.txt (return)
Ok, finally one last example is where you want to remove the read permissions for that file called 'results.txt' from group and others. This is what you have to type:
chmod go-r results.txt (return)
This time it's - instead of +, which means 'take away' those permissions. If the permissions weren't set in the first place, chmod simply ignores that and carries on removing any other permissions specified. There is one more operator:
chmod a=r results.txt
This time we are using = instead of + or -. This means the permissions are set to exactly what you specify, overwriting the previous permissions. In this case, everyone will have only read permission on that file. The main use of = is to make files read-only.
Isn't there a way of setting permissions using numbers?
Yes, there is: it's called octal notation. Say you did an 'ls -l' of the files in your directory. On the left-hand side you'll notice entries like the following:
-rwxr-xr-x
drwxr--r--
-rwxr--r--
and so on. These are the permissions that you've set. Notice that there are ten characters. The first 'bit', as we call it, tells us whether it's a directory: a 'd' means it is a directory, while a '-' means it's a file.
Now, divide the remaining nine characters into three groups of three. The first group of three represents the permissions for 'user', the next three for 'group' and the last three for 'others'. The first 'bit' of any of these three groups shows whether that group has 'read' permission (a '-' means no). The next bit means 'write' permission, and the third bit means 'execute' permission.
Instead of specifying rwx, you can do it using numbers. Take one group. Now replace the letters with 1's and 0's, as in the following, and replace that binary number with its octal equivalent, eg:
Permission  Binary  Octal
rwx         111     7
rw-         110     6
r-x         101     5
r--         100     4
-wx         011     3
-w-         010     2
--x         001     1
---         000     0
So all you have to do is put the numbers for the three groups together. For example, rwxr-xr-x is rwx r-x r-x, or 111 101 101 in binary, which is 755 in octal. So to set the permissions rwxr-xr-x, you type:
chmod 755 filename
Some more quick examples:
For permission rwx--x--x, use 711. For permission rw-r--r--, use 644.
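If you would rather not do the binary arithmetic by hand, the conversion is easy to script. Here is a small Python sketch, purely for illustration (chmod itself does all of this for you):

def to_octal(perm):
    """Convert a symbolic permission string like 'rwxr-xr-x' to octal like '755'."""
    assert len(perm) == 9, "expects nine permission characters, without the file-type bit"
    digits = []
    for i in (0, 3, 6):                   # user, group, others
        triple = perm[i:i + 3]
        bits = "".join("0" if c == "-" else "1" for c in triple)
        digits.append(str(int(bits, 2)))  # binary triple -> one octal digit
    return "".join(digits)

print(to_octal("rwxr-xr-x"))  # prints 755
print(to_octal("rw-r--r--"))  # prints 644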
What permissions should I use for webpages?
For webpages, you should remember the following:
Your home directory (~) should be go+x or 711, so use the command chmod 711 ~ (return). Your public_html folder must also be go+x or 711, so again, use chmod 711 ~/public_html (return). All subfolders of your public_html to be used by your website must be go+x or 711 as well... See Webspace for more info on webpages on Redbrick.
Finally, all files, e.g. HTML files, must be go+r or 644. Note that the command chmod 644 ~/public_html/* only changes the files directly inside public_html. To change the files in subdirectories as well, use find, which can skip the directories themselves (their execute bits must stay set): find ~/public_html -type f -exec chmod 644 {} +
You can get more detailed information on chmod by typing man chmod at the prompt and of course, contact Helpdesk if you have any problems.
lil_cain and file permissions
One day in #lobby, lil_cain, at this point an elected systems administrator, decided he was having trouble with Linux and asked the kind people of RedBrick for help. His question was something along the lines of "how do I check the permissions for a file in a directory?"
Now, as everyone knows, you can check the permissions of all files in a directory using either ls -l or stat.
ls is one of the most basic commands EVAR!!1!!!11 lil_cain U DESERVE NO R00T!1!!
<--> Actually, what I was looking for is stat. It's a far better tool for individual files.
<---> Orly? I just remember "thanks".
MLA Documentation Overview (2009 MLA Guidelines)
By Kelli McBride
When using material from outside sources, we must document that use or else we are guilty of plagiarism. The Modern Language Association (MLA) has a 2-step process: parenthetical notation and a works cited page. The LB Brief handbook presents more detailed information. You can find a sample research paper that shows not only how to use parenthetical notation but also how to format a works cited page.
Parenthetical notation is the in-text information you give the reader at the time you use outside material. At the end of the sentence or passage in which you have quoted, paraphrased, or summarized a source, you use a parenthesis to indicate author and page number.
· A quotation would look like this: “Anger and bitterness had preyed upon me continually for weeks” (Keller 147).
· A summary of Keller’s words would look like this: Before Helen Keller met Anne Sullivan and learned about language, her life was full of rage and frustration (147).
Notice the difference in the parenthetical information. If I use Keller’s name in my sentence and that clearly identifies her as the author of the information, then I only need to use the page number in parentheses. An exception to this rule is if I have more than one work by Keller that I cite in my essay. Look in the handbook for more details on using parenthetical notation.
The second step to complete documentation is the works cited page, which always appears as the last page in an essay. This page lists the complete publication information for each source you use in an essay. Without this page, we would have to put all the information in our text, which would be distracting. A works cited page has the following rules:
1. Always alphabetize by author’s last name (or if no author given, the title of the source). So Helen Keller’s piece would be listed under K for Keller.
2. Always double space, using NO extra space between sources.
3. Use hanging indent for each entry (first line is flush with the margin and subsequent lines in that entry are indented).
4. Give complete information concerning publication. This differs by the type of source, and those differences are listed in the handbook. Here's how a works cited entry from Helen Keller would look:
Keller, Helen. “The Day Language Came into My Life.” The Power of Language; The Language of Power. Ed. Christian Morgan, et al. 3rd ed. NY: Learning Solutions, 2010. 147-49. Print.
Because individual sources have different information that needs citing, you should refer to your handbook for more details.
To create the works cited page, you can also use http://easybib.com.
NIH Research Matters
March 8, 2010
Dry Air May Spur Flu Outbreaks
Researchers have long puzzled over why flu becomes so much more active in winter. A new study reveals that dry air is one likely culprit.
Scientists have proposed different explanations for why influenza makes more people sick in temperate regions during winter. One idea is that people simply spend more time indoors together because it's colder, giving the virus more opportunity to spread.
Another idea is that environmental factors affect the survival and transmission of the virus. For example, laboratory studies have found that higher temperatures affect the flu virus’s coat. That could potentially explain why flu doesn't spread during summer, but temperatures indoors, where most Americans spend the bulk of their time, are often tightly controlled.
Relative humidity is another suspect, but the data haven't established a strong link between relative humidity and flu outbreaks. Relative humidity, which is what you hear in weather reports, isn't the actual amount of water vapor in the air. Rather, given the current temperature, it tells you how close the air is to the point at which a cloud would start to form.
Dr. Jeffrey Shaman of Oregon State University wondered if absolute humidity—a measure of how much water vapor is in the air—could account for flu outbreaks. Last year, he reexamined laboratory data and found that absolute humidity could account for the airborne survival and transmission of the virus.
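To make the distinction concrete, absolute humidity can be estimated from temperature and relative humidity. The short Python sketch below uses the standard Magnus approximation for saturation vapor pressure; the example numbers are illustrative, not data from the study:

import math

def absolute_humidity(temp_c, rel_humidity_pct):
    """Approximate absolute humidity in grams of water vapor per cubic meter of air."""
    sat_vp = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # Magnus formula, hPa
    vapor_pressure = sat_vp * rel_humidity_pct / 100.0            # actual vapor pressure, hPa
    return 216.7 * vapor_pressure / (temp_c + 273.15)             # ideal gas law for water vapor

print(absolute_humidity(25, 50))  # warm room, 50% RH: about 11.5 g per cubic meter
print(absolute_humidity(5, 50))   # winter air, 50% RH: about 3.4 g per cubic meter

The point of the example: at the same relative humidity, cold air holds far less water, which is why absolute humidity captures the winter dryness that relative humidity misses.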
In the new study, Shaman and collaborators at several institutions, including NIH’s Fogarty International Center (FIC), compared death rates attributed to influenza over 31 years to absolute humidity readings nationwide. The researchers used a mathematical model of the influenza transmission cycle that incorporated Shaman's previous findings of how absolute humidity affects the survival and transmission of the virus. The study was funded by NIH's National Institute of General Medical Sciences (NIGMS), the Bill and Melinda Gates Foundation and others.
In PLoS Biology on February 23, 2010, the researchers reported that there were often significant drops in absolute humidity in the weeks prior to a flu outbreak. "This dry period is not a requirement for triggering an influenza outbreak, but it was present in 55-60% of the outbreaks we analyzed, so it appears to increase the likelihood of an outbreak," Shaman says. "The virus response is almost immediate; transmission and survival rates increase and about 10 days later, the observed influenza mortality rates follow."
This discovery might be used in the future to help predict when outbreaks will occur. It also has implications for treating influenza outbreaks. For example, hospitals may pay more attention to controlling humidity levels.
"Obviously there are tradeoffs because influenza is not the only pathogen out there," Shaman says. "There are pathogenic molds that flourish in higher humidity. But if the immediate concern is an outbreak of influenza, it may be worthwhile to raise humidity levels."
—by Harrison Wein, Ph.D.
A recent high-profile study led by US climatologist James Hansen has warned that sea levels could rise by several metres by the end of this century. How realistic is this scenario?
We can certainly say that sea levels are rising at an accelerating rate, after several millennia of relative stability. The question is how far and how fast they will go, compared with Earth’s previous history of major sea-level changes.
Seas have already risen by more than 20 cm since 1880, affecting coastal environments around the world. Since 1993, sea level has been rising faster still (see chapter 3 here), at about 3 mm per year (30 cm per century).
One key to understanding future sea levels is to look to the past. The prehistoric record clearly shows that sea level was higher in past warmer climates. The best evidence comes from the most recent interglacial period (129,000 to 116,000 years ago), when sea level was 5-10 m higher than today, and high-latitude temperatures were at least 2℃ warmer than at present.
The two largest contributions to the observed rise since 1900 are thermal expansion of the oceans, and the loss of ice from glaciers. Water stored on land (in lakes, reservoirs and aquifers) has also made a small contribution. Satellite observations and models suggest that the amount of sea-level rise due to the Greenland and Antarctic ice sheets has increased since the early 1990s.
Before then, their contributions are not well known but they are unlikely to have contributed more than 20% of the observed rise.
Together, these contributions provide a reasonable explanation of the observed 20th-century sea-level rise.
The Intergovernmental Panel on Climate Change (IPCC) projections (see chapter 13 here) forecast a sea-level rise of 52-98 cm by 2100 if greenhouse emissions continue to grow, or of 28-61 cm if emissions are strongly curbed.
The majority of this rise is likely to come from three sources: increased ocean expansion; glacier melt; and surface melting from the Greenland ice sheet. These factors will probably be offset to an extent by a small increase in snowfall over Antarctica.
With continued emissions growth, it is entirely possible that the overall rate of sea-level rise could reach 1 m per century by 2100 – a rate not seen since the last global ice-sheet melting event, roughly 10,000 years ago.
Beyond 2100, seas will continue to rise for many centuries, perhaps even millennia. With continued growth in emissions, the IPCC has projected a rise of as much as 7 m by 2500, but also warned that the available ice-sheet models may underestimate Antarctica’s future contribution.
The joker in the pack is what could happen to the flow of ice from the Antarctic ice sheet directly into the ocean. The IPCC estimated that this could contribute about 20 cm of sea-level rise this century. But it also recognised the possibility of an additional rise of several tens of centimetres this century if the ice sheet became rapidly destabilised.
This could happen in West Antarctica and in parts of the East Antarctic ice sheets that are resting on ground below sea level, which gets deeper going inland from the coast. If relatively warm ocean water penetrates beneath the ice sheet and melts its base, this would cause the grounding line to move inland and ice to flow more rapidly into the ocean.
Several recently published studies have confirmed that parts of the West Antarctic ice sheet are already in potentially unstoppable retreat. But these studies suggest that the additional rise from marine ice-sheet instability, above the IPCC projection of up to 98 cm by 2100, is more likely to be just one or two tenths of a metre by 2100, rather than the several tenths of a metre allowed for in the IPCC report. This lower estimate is the result of more rigorous ice-sheet modelling than was available at the time of the IPCC's assessment.
How stable are ice sheets?
Ocean temperatures were thought to be the major control in triggering increased flow of the Antarctic ice sheet into the ocean. Now a new study published in Nature by US researchers Robert DeConto and David Pollard has modelled what would happen if you factor in increased surface melting of ice shelves due to warming air temperatures, as well as the marine melting.
Such an ice-shelf collapse has already been seen. In 2002, the Larsen-B Ice Shelf on the Antarctic Peninsula disintegrated into thousands of icebergs in a matter of weeks, allowing glaciers to flow more rapidly into the ocean. The IPCC’s predictions had considered such collapses unlikely to occur much before 2100, whereas the new study suggests that ice-sheet collapse could begin seriously affecting sea level as early as 2050.
With relatively high greenhouse emissions (a scenario referred to in the research literature as RCP8.5), the new study forecasts a rise of about 80 cm by 2100, although it also calculated that this eventuality could be almost totally averted with lower emissions. But when the model parameters were adjusted to simulate past climates, the Antarctic contribution was over 1 m by 2100 and as much as 15 m by 2500.
Greenland’s ice sheet is crucially important too. Above a certain threshold, warming air temperatures would cause surface melting to outstrip snow accumulation, leading to the ice sheet’s eventual collapse. That would add an extra 7 m to sea levels over a millennium or more.
The problem is that we don’t know where this threshold is. It could be as little as 1℃ above pre-industrial average temperatures or as high as 4℃. But given that present-day temperatures are already almost 1℃ above pre-industrial temperatures, it is possible we could cross this threshold this century, regardless of where exactly it is, particularly for high-emission scenarios.
Overall, then, it is clear that the seeds for a multi-metre sea-level rise could well be sown during this century. But in terms of the actual rises we will see in our lifetimes, the available literature suggests it will be much less than the 5 m by 2050 anticipated by Hansen and his colleagues.
The wider question is whether the ice-sheet disintegration modelled by DeConto and Pollard will indeed lead to rises of the order of 15 m over the coming four centuries, as their analysis and another recent paper suggest. Answering that question will require more studies, with a wider range of climate and ice-sheet models.
African lungfish live in freshwater swamps, backwaters and small rivers in West and South Africa. These prehistoric animals have survived unchanged for nearly 400 million years and are sometimes referred to as "living fossils."
Life of an African lungfish
African lungfish have some fascinating adaptations. They have two lungs, and can breathe air. This is a vital feature, since they live in flood plains in waterways that often dry up. To manage this life-threatening situation, the lungfish secretes a thin layer of mucus around itself that dries into a cocoon. It can live out of water in this cocoon for up to a year, breathing through its lungs until rains refill its waterway.
The African lungfish also hibernates in water. It digs 1-9 inches into the soil and debris at the bottom of its waterway, then wiggles in the mud to create a bulb-shaped chamber and rests there with its nose pointing upward. Its metabolic rate slows down, and the nutrients it needs to survive come from breakdown of its muscle tissue. It can remain up to 4 years in this state.
African lungfish can use their thin hind limbs to lift themselves off the bottom surface and propel themselves forward. This is probably possible because they can fill their lungs with air, adding to the buoyancy of their bodies in water. Scientists believe that lungfish may be closely related to the animals that first evolved to come out of the water and onto land.
African lungfish are omnivorous, eating a varied diet that includes frogs, fish and mollusks as well as tree roots and seeds. They grow between 6 ½ and 40 inches long, and can weigh up to nearly 8 pounds.
The female African lungfish lays its eggs in a nest in a weedy area of its habitat. Once the eggs hatch, the males guard the young for up to two months. The larvae have external gills that are reabsorbed during their metamorphosis into fully developed lungfish. As the African lungfish develops from juvenile to adult, its teeth fuse together to form tooth plates, which are used to chew its food.
African lungfish conservation
The African lungfish has a large range, and there are no widespread threats to the species. It is listed as Least Concern by the International Union for Conservation of Nature (IUCN).
How you can help African lungfish
Help expand the Oregon Zoo's local and global conservation efforts by becoming a Wildlife Partner. Your donation will help build on the successes of the Future for Wildlife program, which has provided funding for conservation projects around the world since 1998.
African lungfish at the Oregon Zoo
The zoo's lungfish is a female named "Siti," which means "lady" in Swahili. She lives in the Africa Rainforest exhibit and eats a daily diet of smelt, small mice and earthworms, along with the occasional live crayfish for enrichment.
Wax, any of a class of pliable substances of animal, plant, mineral, or synthetic origin that differ from fats in being less greasy, harder, and more brittle and in containing principally compounds of high molecular weight (e.g., fatty acids, alcohols, and saturated hydrocarbons). Waxes share certain characteristic physical properties. Many of them melt at moderate temperatures (i.e., between about 35° and 100° C, or 95° and 212° F) and form hard films that can be polished to a high gloss, making them ideal for use in a wide array of polishes. They do share some of the same properties as fats. Waxes and fats, for example, are soluble in the same solvents and both leave grease spots on paper.
Notwithstanding such physical similarities, animal and plant waxes differ chemically from petroleum, or hydrocarbon, waxes and synthetic waxes. They are esters that result from a reaction between fatty acids and certain alcohols other than glycerol, either of a group called sterols (e.g., cholesterol) or an alcohol containing 12 or a larger even number of carbon atoms in a straight chain (e.g., cetyl alcohol). The fatty acids found in animal and vegetable waxes are almost always saturated. They vary from lauric to octatriacontanoic acid (C37H75COOH). Saturated alcohols from C12 to C36 have been identified in various waxes. Several dihydric (two hydroxyl groups) alcohols have been separated, but they do not form a large proportion of any wax. Also, several unidentified branched-chain fatty acids and alcohols have been found in minor quantities. Several cyclic sterols (e.g., cholesterol and analogues) make up major portions of wool wax.
Only a few vegetable waxes are produced in commercial quantities. Carnauba wax, which is very hard and is used in some high-gloss polishes, is probably the most important of these. It is obtained from the surface of the fronds of a species of palm tree native to Brazil. A similar wax, candelilla wax, is obtained commercially from the surface of the candelilla plant, which grows wild in Texas and Mexico. Sugarcane wax, which occurs on the surface of sugarcane leaves and stalks, is obtainable from the sludges of cane-juice processing. Its properties and uses are similar to those of carnauba wax, but it is normally dark in colour and contains more impurities. Other cuticle waxes occur in trace quantities in such vegetable oils as linseed, soybean, corn (maize), and sesame. They are undesirable because they may precipitate when the oil stands at room temperature, but they can be removed by cooling and filtering. Cuticle wax accounts for the beautiful gloss of polished apples.
Beeswax, the most widely distributed and important animal wax, is softer than the waxes mentioned and finds little use in gloss polishes. It is used, however, for its gliding and lubricating properties as well as in waterproofing formulations. Wool wax, the main constituent of the fat that covers the wool of sheep, is obtained as a by-product in scouring raw wool. Its purified form, called lanolin, is used as a pharmaceutical or cosmetic base because it is easily assimilated by the human skin. Sperm oil and spermaceti, both obtained from sperm whales, are liquid at ordinary temperatures and are used mainly as lubricants.
About 90 percent of the wax used for commercial purposes is recovered from petroleum by dewaxing lubricating-oil stocks. Petroleum wax is generally classified into three principal types: paraffin (see paraffin wax), microcrystalline, and petrolatum. Paraffin is widely used in candles, crayons, and industrial polishes. It is also employed for insulating components of electrical equipment and for waterproofing wood and certain other materials. Microcrystalline wax is used chiefly for coating paper for packaging, and petrolatum is employed in the manufacture of medicinal ointments and cosmetics. Synthetic wax is derived from ethylene glycol, an organic compound commercially produced from ethylene gas. It is commonly blended with petroleum waxes to manufacture a variety of products.
Horses played a vital role during the Anglo-Boer War (1899-1902). Such was the demand for them that a large number of horses were imported to South Africa by the British from all over the world, including 50 000 from the United States and 35 000 from Australia – most of them landing in Port Elizabeth. A variety of breeds of horses were used during the war, including English Chargers and Hunters from England and Ireland as well as Australian Walers bred, ironically, from an original shipment of Cape Horses in the 1700s. The term ‘Waler’ was first used in India in 1846 in reference to the horses that had come from New South Wales.
At first glance these large animals appeared to be superior to the hardy Boer horses that were no larger than the average pony. These horses were also descendants of the famous Cape Horses of the 18th and 19th centuries. Some Boers used Basuto Ponies that were well adapted to the rocky, mountainous terrain and were known for their endurance despite their small stature. The Boer horses were exceptionally hardy and nimble for they were used for hunting as well as tending cattle in all types of terrain and weather conditions; proving to be reliable and well suited to the environment the war was fought in.
Of course war horses were workhorses, being used as mounted infantry horses, gun horses, and cavalry horses. Not only were horses ridden by soldiers, they were also used to pull gun-carriages – sometimes through muddy battle grounds or over rough, uneven terrain as well as having to ford rivers and streams. Horses and mules were also required to pull heavily laden transport wagons.
A horse’s life expectancy was around six weeks from the time of its arrival in South Africa. Sixty percent of the horses died in combat or as a result of mistreatment. Apart from being killed by bullets or shell fire in battle, other reasons for their demise included:
- The failure to adequately rest and acclimatise horses after the long sea voyages prior to their arrival.
- The rough terrain of South Africa, including boulder-strewn hills, which the imported horses were unused to.
- Exhaustion and dehydration as a result of horses being ridden over hundreds of kilometres in all kinds of weather with little or no respite.
- Many horses sustained injuries to their fetlocks and hooves – there was not always the time or opportunity to treat the animals with the care they had been used to.
- Imported horses – unlike those used by the Boers – were unused to surviving on the veld grass, which is all many were exposed to for food for much of the time. The larger size of the British horses made them more dependent on fodder that had to be imported in great quantities from places such as Mexico. [See WEEDS WITH A HISTORY June 2015].
- Overloading the horses with unnecessary equipment and saddlery.
- African Horse Sickness.
- Horses were occasionally slaughtered for their meat, such as during the sieges of both Ladysmith and Kimberley.
The number of horses killed in the Anglo-Boer War was unprecedented. When one considers that over 300 000 of them died during active service – not counting the horses on the Boer side – one can begin to appreciate how important these animals were in that conflict. The war lasted for 970 days, which amounts to about 309 British horses dying each day. The Boer horses also died in their thousands, many ridden to exhaustion. Not unexpectedly, dead horses were not buried but tended to be left where they fell.
Because their lives depended on their mounts many soldiers formed strong emotional bonds with their horses. That horses were held in high regard by the men who worked with them is evident from the two horse memorials that have been erected in South Africa.
Only three years after the end of the Anglo-Boer War, the first Horse Memorial was unveiled in Port Elizabeth on 11th February 1905 to commemorate the horses which had suffered and died during that war. The inscription on the base reads:
THE GREATNESS OF A NATION
CONSISTS NOT SO MUCH IN THE NUMBER OF ITS PEOPLE
OR THE EXTENT OF ITS TERRITORY
AS IN THE EXTENT AND JUSTICE OF ITS COMPASSION
ERECTED BY PUBLIC SUBSCRIPTION
IN RECOGNITION OF THE SERVICES OF THE GALLANT ANIMALS
WHICH PERISHED IN THE ANGLO BOER WAR 1899-1902
This fact seems to have been missed by members of the EFF, who vandalised the monument on 6th April 2015 by toppling the kneeling soldier in front of the horse, depicted offering it water from a bucket. A picture was published in The Herald on 14th September 2015.
NOTE: On 7th May 2016 I was reliably informed by a resident of Port Elizabeth that the Horse Memorial has been repaired and is back to its former glory.
The other South African Horse Memorial is situated in the grounds of Weston Agricultural College in KwaZulu-Natal. It was unveiled on 31st May 2009 and is dedicated to horses, mules and other animals that perished serving men in war. Weston was the site of the British Army’s Number 7 Remount Depot, in service from 1899-1913. An estimated 30 000 horses and mules are believed to have been buried on the farmlands in the area. The memorial has been designed in a horseshoe shape, mounted by an obelisk-shaped monument created out of old horseshoes found on the farm. The inverted horseshoes of this centrepiece are in keeping with the tradition at a cavalryman’s funeral, where his boots are reversed in the stirrups on his horse.
The structure is topped with a specially crafted bronze statue of a horse.
Three examples from the remembrance plaques clearly demonstrate how the war horses were regarded by the men they served:
- Natal Field Artillery Established 1862: To the horses that served the guns and other animals in the supply chain.
- The Light Dragoons: In memory of gallant horses of the 13th, 18th and 19th Hussars that perished during the South African Campaign 1899-1902.
- The King’s Troop Royal Horse Artillery: In honour of horses that faithfully served during the South African War 1899-1902.
Venus is the brightest of all the "stars" in the sky. It's named after the goddess of love. It's always covered with thick cloud, so we can't see its surface. That could be why more spacecraft (about 20) have visited Venus than any other planet.
Even though Earth and Venus are almost the same size, Venus has much more atmosphere or gas. The pressure at ground level is nearly 100 times the pressure at sea level on Earth. The atmosphere is about 97% carbon dioxide and about 3% nitrogen. In fact there's roughly as much nitrogen on Earth (where it makes up 80% of our atmosphere) as there is on Venus.
The atmosphere is so thick that, according to some scientists, it would bend light in a complete circle. So no matter where you looked, you would always see the back of your head - if the atmosphere were clear enough to see any distance at all.
On Earth, the clouds have just about finished by 15 km above sea level. On average, about 50% of the surface of the Earth is covered by cloud at any given time. But on Venus, the clouds cover 100% of the planet all of the time. The clouds begin at 50 km above the surface, and then continue for another 25 km above that. The clouds are made of droplets of sulphuric acid, which are about 50 times smaller than the thickness of a human hair.
Venus takes 225 Earth days to make a complete loop around the Sun. We don't yet know why, but Venus rotates in the opposite direction to all the other planets in the Solar System (except Uranus). In fact, it rotates very slowly, and a Venus day lasts 243 Earth days. So on Venus, a day is longer than a year.
Every 584 days, Venus and Earth come to their point of closest approach. And every time this happens, Venus shows Earth the same face. Is there some force that makes Venus align itself with the Earth rather than the Sun, or is this just a coincidence?
The temperature on the surface of Venus is about 480°C - hot enough to melt the zinc off your tin roof. But this enormous temperature is not just because Venus is closer to the Sun than we are. About two thirds of the energy of the Sun is absorbed by the clouds, and only one third of the heat energy reaches the ground - less solar energy than reaches the ground on Earth. The reason that Venus is so hot is a massive runaway Greenhouse effect: any heat that does get inside the clouds is trapped there.
Lightning seems to be as common on Venus as it is on Earth. Most of the lightning happens within the thick high cloud deck. Venus rotates very slowly. The clouds on Venus race around the planet some 60 times faster than the surface. They do a complete circle every 4 Earth days so the clouds pass through the planet's afternoon on roughly the same time scale as the clouds do on Earth. And just like on Earth, the thunderstorms and lightning on Venus seem to happen mostly in the afternoon and at dusk.
The most recent spacecraft to visit Venus was Magellan. The Magellan photographs show a very regular channel 4,000 kilometres long and 1 kilometre across. The Magellan photographs also show a mountain taller than Mount Everest.
Thanks to Magellan, Venus is mapped better than our planet Earth. 70% of the surface of our planet is covered by water, and we have very poor maps of the surface under the sea. But 95% of Venus has been mapped by Magellan. In fact Magellan has discovered so many new features (such as craters) on Venus, that the scientists have run out of names for them. But you can submit a person's name for a feature on Venus. By international agreement, features on Venus have to be named after famous and notable women. There are a few rules. The woman must have been notable and in some way worthy of the honour. She cannot have been a military or political figure from the 19th or 20th centuries. She cannot have been a person who was famous in any of the six major religions on our planet. She cannot have been a person who was famous only in one country. She must have been dead for at least three years.
Send your list of names to: Venus Names, Magellan Project Office, Mailstop 230-201, Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109, USA. Be sure to include her dates of birth and death, and a few reasons why she should receive this honour. A photostat from a book mentioning her is even better.
Venus is a land of gigantic lava flows, thousands of craters, and strange features showing that very fierce volcanic activity has happened there. There are long rift valleys, huge craters and uplifted areas, and the crust of the planet is cracked and splintered.
But overall, Venus is incredibly flat. The average radius of Venus is 6,051.4 kilometres, and about 70% of the planet is within 500 metres of this average radius. If you were to take all of the water on Earth and somehow magically pour it onto Venus, only two continents would appear above the surface, together covering about 10% of the planet. In the same way that the continents on our planet sit an average of 3 kilometres above the ocean floor, these lands on Venus rise 3 kilometres above the plains that would lie below 'sea level'.
In the northern hemisphere is a land called Ishtar (after the Babylonian goddess of love). Ishtar is roughly the size of Australia. In the southern hemisphere is the other continent, Aphrodite (the Greek goddess of love), about the size of Africa, or twice as big as Ishtar.
There is a large diamond on Venus. This diamond helped to prove that Murphy's Law works on other planets besides Earth.
The Soviets used the diamond as a front glass to protect the lens of the camera on their spacecraft. Venera 13 and Venera 14 sent back colour photographs of the surface of Venus. On the way down through the atmosphere, the lens cap was left on the camera to protect the lens from the clouds of sulphuric acid. But once the spacecraft had landed, the lens cap was thrown off, exposing the diamond front glass. Diamond is the hardest substance known in the Universe to the human race, and the Soviets thought that it would not be affected by the terrible atmosphere.
Each spacecraft also had an experiment called the "Dynamic Penetrometer". The Penetrometer was a spring-loaded arm with a point on the end of it. The point would penetrate deep into soft ground, but not so deep into hard ground.
The photographs from Venera 13 show the penetrometer point embedded in the soil, and the lens cap off to one side. But the photographs from Venera 14 show that the point of the penetrometer landed exactly on the lens cap. This is proof that Murphy's Law is a universal law. The diamond is still waiting on the surface of Venus. If you want to get it, all you need is the space fare.
Images courtesy of NASA
But how did the snapdragon cousins create accented patterns that appeared to be equally effective in the same environment?
To find out how color differences in these snapdragons arose, scientists compared the genomes of the subspecies in a study published Thursday in Science.
They found that the magenta-yellow plants and their yellow-magenta mirrors shared most of the plant’s 30,000-some genes — but not a handful related to color. Those genes behaved like artists painting billboards with different techniques and palettes.
And nature appeared to respond to the colorful expressions of these genes. Bees favored a couple of patterns and neglected plants with other color schemes, the researchers believe, selecting for certain genetic combinations.
The key difference between the two most prevalent subspecies lie in how the snapdragon developed a yellow accent on a magenta flower.
To do this, it used something genetic researchers call small RNA — small because, in a structure shaped like a hairpin, it only holds a little bit of information that regulates a gene’s ability to express itself. (Generally speaking, RNA may hold a couple thousand units, whereas small RNA might hold no more than two dozen).
These small RNAs act something like a stencil, ensuring that the flower sprays yellow onto only a small portion of the lower lip. Without that genetic stencil, yellow would go everywhere.
The snapdragon with the opposite color scheme used something else for its magenta accent, just as many other plants and animals do for their color variations, said Enrico Coen, a biologist at the John Innes Centre in Britain who led the study.
Dr. Coen and his colleagues were surprised that one subspecies had this small RNA stencil and the other one didn’t. They’d seen small RNA turn off genes in plenty of other organisms, but in this case, “it was being used in nature to create differences that you could see,” he said.
Genetics are intricate, but the lesson here is simple. Whether creating a magenta arrow for a yellow flower, or vice versa — these subspecies both do the job well.
“Rather than always thinking what’s the best, what’s the best — maybe there are cases in which there are two equally good solutions,” said Dr. Coen.
After all, if there were only one way to live, there’d only be one organism out there.
Platecarpus was a mosasaur, a kind of prehistoric lizard that was adapted to a life in the ocean, that lived during the Cretaceous period, between 84 and 81 million years ago. It swam in shallow oceans over what is now North America, parts of Europe, and also northern Africa. At fourteen feet from snout to tail, it was large for a lizard by today's standards (biggest living example is the Komodo Dragon at ten feet) but it was tiny compared to other known species of mosasaur, some of which grew to be gigantic. The genus name, Platecarpus, translates to "Flat Wrist" in reference to the anatomy of its limbs, which were flat paddles. Like all known mosasaurs, Platecarpus was a meat-eater.
Platecarpus life reconstruction by Christopher DiPiazza
Platecarpus is a well-studied mosasaur because its fossils tend to be very common in areas where it lived. In fact, we probably owe a lot of what we know about mosasaurs in general to Platecarpus. There is one specimen in particular that really made the picture of this creature more clear. It was so well preserved that paleontologists could identify patches of skin, the outline of the body, its last meal, and even some of its organs!
The skin of Platecarpus, according to the patches of it that preserved, was scaly, much like that of modern snakes and lizards. The scales were rounded and mosaic-like on the head, and became more like diamond-shaped shingles, overlapping slightly on most of the body. It also had small, thin scales on the throat and underbelly. This is similar to what you would find on most lizards and snakes today on a general level. The scales were also very small, so you would only be able to have really picked them out visually if you were very close to Platecarpus.
This figure from the beautifully preserved Platecarpus specimen which was published in 2010 shows some of the scales, and the trachea.
The organs that scientists think they may have been able to identify in the well-preserved Platecarpus specimen are the heart, liver, and kidney, based on their positions in the body. The most interesting preserved organ, however, was the trachea, the tube-like organ used to breathe air. What makes the preservation of this organ special, is that scientists could tell it was forked, and therefore would have led to two lungs inside the body. This is different from snakes, which are close relatives of mosasaurs, which typically only have one lung. Before this discovery, there were thoughts that perhaps mosasaurs only had one lung, too, to accommodate their streamlined bodies, which we now can confirm was not the case.
Platecarpus' tail was tall and flattened on the sides, which would be expected for a marine animal. It was also sharply downturned towards the tip. This indicates that Platecarpus had a fluke on the top of its tail in life, similar to that of a shark. Other extinct marine reptiles, like ichthyosaurs and some kinds of crocodiles, also exhibited this same feature. Before this discovery it was assumed that mosasaurs swam similar to snakes, moving their whole bodies in S-shaped motions. Now we know their style of swimming was probably closer to that of modern crocodilians and sharks, keeping the body stiff, while powering through the water with their powerful, broad tails.
Platecarpus skull on display at the Naturmuseum Senckenberg in Germany. Note the large eyes (supported by those scleral rings) and blade-like teeth.
Unlike the teeth of some larger mosasaurs, like Tylosaurus, which were pretty broad, for crushing prey, the teeth of Platecarpus were relatively flat and blade-like. They were also serrated, to slice meat even more easily. This indicates that Platecarpus was probably more adapted to hunting soft-bodied prey, like squid and certain kinds of small fish. The remains of some digested fish were also found in the well-preserved Platecarpus specimen. Platecarpus' entire skull was proportionally smaller, with a shorter snout than is typical of other mosasaurs, further supporting the idea that it was better at hunting small prey.
That is all for this week! As always feel free to comment below or on our facebook page!
Lindgren, J.; Caldwell, M.W.; Konishi, T.; Chiappe, L.M. (2010). Farke, Andrew Allen, ed. "Convergent Evolution in Aquatic Tetrapods: Insights from an Exceptional Fossil Mosasaur". PLoS ONE 5 (8): e11998.
Lingham-Soliar T. 1994. The mosasaur "Angolasaurus" bocagei (Reptilia: Mosasauridae) from the Turonian of Angola re-interpreted as the earliest member of the genus Platecarpus. Palaeont. Z. 68 (1/2): 267–282.
- slide 1 of 4
Reasons for the Project
Conducting a school project survey involves the concepts of data analysis, probability and information interpretation. The process of the project assists in learning communication skills and the art of questioning for clear responses.
- slide 2 of 4
Design of the Project
Start with the basic construction. What is the basis or subject of the project? What information is needed to draw the needed conclusion? Once these questions are answered, the survey will begin to take shape.
Pick your target audience. For a school survey project, the audience may be students, teachers or other school employees. For answers from outside sources, be specific about the targets. The survey may be aimed at persons within a certain age range or those who live in particular areas. Registered voters, business owners or other specific groups are options for audiences, according to the subject of the survey.
When constructing questions for the project, make them direct and to the point. Use questions that can be answered with a yes or a no. For more complex subjects, multiple-choice answers can be used. Avoid questions that require essay-type answers, as these do not give definitive data. Include examples in questions that might seem unclear. If needed, break the subject down into parts and ask questions about each part to build up a complete answer.
Keep the survey as brief as possible. People are more willing to answer three to five questions as opposed to a 10-20 question survey. The time involved in answering a long survey is a hindrance to the collection. By keeping the survey brief, the data interpretation will also be a simpler task.
The final piece of the construction is the time frame. The time frame is dependent on the depth of research intended for the data.
- slide 3 of 4
Conducting the Survey
Once the questions are in place and the target audience decided, it is time to conduct the survey. There are multiple methods of execution for conducting a survey.
In-person surveys involve asking people the selected questions directly and recording the responses. For first-time surveyors, this may be the more difficult approach, as it involves keeping records and asking only those who fit the target audience. For surveys involving people outside of the school, this method may work best, as it does not require them to fill out paperwork and return their answers to the students.
Survey sheets, or individually written papers including the questions and answer options, can be a more accurate way of collecting data. There is no opportunity for mistakes in recording responses, since respondents mark their own answers. The downside of these sheets is collecting the completed sheets after distribution.
For the simplest approach, a mass survey may be the best selection. A mass survey is conducted within a small group, such as classmates or the lunch ladies all at once with each individual giving their information on the spot.
- slide 4 of 4
The final stage of the school survey project involves the analysis of the data and creation of a representative display. The information is organized into a graph or chart to answer the original question.
For proper data analysis, separate the answers from each question to form a total for each. For example, make a list of all the responses for question one, then a list of all responses for question number 2.
A bar graph with a column for each answer choice is a fairly easy method for beginners. Other options include pie charts showing the counts or percentages of answers.
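If a computer is available, the tallying and graphing can also be done with a short script. The Python sketch below is just an illustration; the question and the responses are made-up examples:

from collections import Counter
import matplotlib.pyplot as plt

# Hypothetical responses to question 1: "Do you buy lunch at school?"
responses = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]

totals = Counter(responses)                          # tally each answer choice
plt.bar(list(totals.keys()), list(totals.values()))  # one column per answer choice
plt.title("Q1: Do you buy lunch at school?")
plt.ylabel("Number of responses")
plt.show()

Percentages for a pie chart can be computed the same way, by dividing each tally by the total number of responses.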
Make sure your purpose is clearly stated and presented in a logical way. Surveys can be very interesting and revealing. Have fun with your project!
What is schizophrenia? Well, for starters, it’s not the same as multiple personality disorder, as many people think. Watch this video to learn about the characteristics of schizophrenia.
Transcript: People with schizophrenia may hear voices or believe that other people are reading their minds, controlling their thoughts, or plotting against them. These experiences are terrifying and can cause fearfulness, withdrawal, or extreme agitation. The chronic, severe, and disabling psychiatric disorder that we now call schizophrenia can be traced in written documents like the Egyptian Book of the Dead as far back as 2000 B.C. Many schizophrenics do not make sense when they talk, sometimes displaying "word salad" speech. Here is an example: (Psychologist reads Patient Carl transcript, 12-15 seconds.) Eugen Bleuler first coined the term 'schizophrenia' in 1911 and defined the disorder with his four "A's": blunted Affect, or diminished emotional response; loosening of Associations, or reduced understanding of relationships; Ambivalence, an inability to make decisions; and Autism, a preoccupation with one's own thoughts and reduced awareness of external events. The psychotic symptoms associated with schizophrenia, hallucinations and delusions, tend to emerge earlier in men than in women. For men, symptoms appear in their mid to late 20s, while for women, schizophrenia symptoms surface in their mid-20s to early 30s. Symptoms don't typically occur after age 45 and only rarely before puberty. Although schizophrenia is a serious illness, the outlook for those diagnosed with the disorder has improved over the last 30 years. There is still no cure, but effective treatments have been developed, and many people with schizophrenia improve enough to lead independent, satisfying lives. If someone you love has symptoms of schizophrenia, please consult a mental health professional. Want to learn more? Check out other videos and sources on this site for more information.
There is no single laboratory or brain imaging test for schizophrenia. Schizophrenia treatment professionals must rule out multiple factors such as brain tumors and other medical conditions (as well as other psychiatric diagnoses such as bipolar disorder). At the same time, they must identify different kinds of symptoms that manifest in specific ways over certain periods of time. To make matters more complicated, the person in need of help and treatment may be in such distress that they have a hard time communicating. It often takes a decade for people to be properly diagnosed with schizophrenia. A health care provider who evaluates the symptoms and the course of a person's illness over six months or more can help ensure a correct diagnosis.
Relativity passes the pulsar test
Jul 11, 2001
A key prediction of Einstein's general theory of relativity - the 'Shapiro delay' - has been observed following highly accurate radio studies of a nearby pulsar. The theory forecasts that pulses of radiation streaming across the Universe should be impeded by relativistic distortions of space. Willem van Straten of Swinburne University of Technology in Australia and colleagues detected this effect while mapping the motion of the pulsar at the Parkes Observatory (W van Straten et al 2001 Nature 412 158).
Pulsars are rapidly spinning neutron stars that earn their name from the beams of radiation they emit, which appear as pulses to a stationary observer. Binary pulsars - systems in which a pulsar orbits another object - emit very regular pulses of radiation because their orbital and rotation periods are extremely regular. These properties make them excellent tools for probing the effects of general relativity.
At 450 light years from Earth, PSR J0437-4715 is the closest known binary pulsar, in which a pulsar orbits a white dwarf. Its proximity allows astronomers to measure its radio output from different angles, due to its motion relative to the Earth. Using this geometrical method, van Straten and colleagues calculated the orbit of the pulsar in three dimensions. This also revealed the centre-of-mass of the system, from which the team calculated the masses of the pulsar and the white dwarf.
Next, van Straten and colleagues studied variations in the pattern of the radio pulses arriving on Earth. If the pulses emitted throughout the orbital period of the pulsar travelled through equivalent regions of space on their journey to Earth, they should all arrive at equal intervals.
But after eliminating geometrical effects, van Straten's team found that the pulses took longer to arrive when the plane of the binary system was in their line of sight. Conversely, when viewing the plane 'face on', the pulses were not delayed. This phenomenon arises because when the plane is 'edge on', the signal from the pulsar passes through a region of space that is distorted by the gravity of the white dwarf. This distortion means that the signal takes a longer route to Earth, and therefore arrives later. This is the Shapiro delay.
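For readers who want the quantitative picture, the Shapiro delay for a nearly circular binary orbit is commonly written in the parametrized form below, where m_c is the companion (white dwarf) mass, i is the orbital inclination, and φ is the orbital phase. This is the standard textbook expression, given here for context; it is not quoted from the paper.

```latex
\Delta t = -\frac{2 G m_c}{c^{3}} \, \ln\!\left(1 - \sin i \, \sin\phi\right)
```

The delay is greatest when sin i ≈ 1 and the pulsar sits directly behind its companion, which is exactly the edge-on geometry described above.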
General relativity also states that binary systems should gradually lose energy by emitting gravitational waves, causing their orbits to decay. This predicted decrease in the orbital period has been observed in previous experiments, but attempts to detect gravitational waves directly have so far failed. The length of the observed Shapiro delay was consistent with the amount of energy the binary system should be losing through gravitational waves.
"To our knowledge, this verification of the predicted space-time distortion is the first confirmation - outside the solar system - in which the orbital inclination was determined independently of general relativity", say the authors.
The study also means that pulsar PSR J0437-4715 has the most accurately known location of any astronomical object. Now that this system is known to exhibit the Shapiro delay, it is likely to be the subject of many more cosmological studies.
About the author
Katie Pennicott is Editor of PhysicsWeb |
The Merriam-Webster Collegiate Dictionary defines plagiarize as "to steal and pass off (the ideas or words of another) as one's own." Avoid stealing by citing your sources. Give credit to the author/creator of any work you use.
Read the following passage from Endangered South American Monkeys (Amanda Harman, Marshall Cavendish, 1996):
Since squirrel monkeys' most important food is usually fruit, female squirrel monkeys always give birth in the wet season when there is plenty of fruit available. Each female has one baby, which she carries on her back for the first month of its life. Soon the inquisitive youngster is old enough to get off and explore on its own. It learns all the information and skills it will need later in life by playing with other young squirrel monkeys. They chase each other through the trees, scream at other group members, and generally make nuisances of themselves. Squirrel monkeys are still quite widespread in South America, but they are in serious danger in Central America. They are losing their homes and food supplies very quickly as people cut down the rainforests. To make matters worse, in the past many squirrel monkeys have been captured and sold as pets or laboratory animals around the world.
YOUR TASK: TWO STEPS
- Write at least 2 note cards using the information above, IN YOUR OWN WORDS. This is called PARAPHRASING. Remember to write naturally, as yourself, since teachers know the difference between how an adult writes and how a middle school student writes.
- Cite your source using the information that is given in the first paragraph. You may use Easybib under bibliomakers to cite your source.
For a quick tutorial on plagiarism, go to the following link. Yes, it may be geared toward a university audience because it says "university" in some places, but the same rules apply to everyone, no matter what grade they are in or who they are.
Free Bibliography Maker online:
Use www.easybib.com to create a bibliography that will ensure you are not plagiarizing.
To use this correctly, you will need to have all the information about the resource and input it into the correct fields. You are responsible for finding the correct information and putting it into the right places. Feel free to watch the tutorial, but ask for headphones so you can listen.
A new feature in EasyBib is a tutorial for you to follow, along with QUICK CITE. With Quick Cite, you just type the ISBN into one box and it will fill in the rest of the bibliography for you. |
On a quest to locate water and other volatile minerals in the Moon’s soil, the LCROSS experiment—Lunar Crater Observation and Sensing Satellite—hurtled a spent Centaur rocket into a dark crater at the lunar South Pole last year. The crater, known as Cabeus, is one of the permanently shadowed regions of the Moon, and researchers believe it is also one of the coldest.
When the empty shell of the rocket struck the bottom of the crater, a plume of debris, dust, and vapor became visible to the “shepherding” LCROSS spacecraft trailing behind it. Now, data from that shepherding craft has allowed researchers to describe the impact event in detail and provide an estimate of the total concentration of water ice in Cabeus crater.
An image of debris, ejected from Cabeus crater and into the sunlight, about 20 seconds after the LCROSS impact. The inset shows a close-up with the direction of the sun and the Earth. | Image courtesy of Science/AAAS
About 155 kilograms (342 pounds) of water vapor and water ice were blown out of the darkness of the crater and into the LCROSS field of view, according to Anthony Colaprete from the NASA Ames Research Center in Moffett Field, California, and colleagues from across the United States who analyzed data from the near-infrared and ultraviolet/visible spectrometers onboard the shepherding spacecraft. They estimate that approximately 5.6% of the total mass inside Cabeus crater (plus or minus 2.9%) could be attributed to water ice alone.
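As a rough cross-check on those figures, dividing the reported water mass by the total ejecta mass in the spectrometers' field of view gives a concentration in the reported range. A minimal sketch in Python, where the ejecta mass of ~2,800 kg is an assumed illustrative value rather than a number from the article:

```python
# Back-of-envelope check on the reported water ice concentration.
water_mass_kg = 155.0     # water vapor and ice observed in the field of view
ejecta_mass_kg = 2800.0   # ASSUMPTION: total ejecta mass in the field of view

concentration = water_mass_kg / ejecta_mass_kg
print(f"Estimated water ice concentration: {concentration:.1%}")
# -> 5.5%, consistent with the reported 5.6% (plus or minus 2.9%)
```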
These results and more from the LCROSS experiment are reported in six separate papers in the 22 October issue of the journal Science, which is published by AAAS.
“These permanently shadowed regions of the Moon have not received direct sunlight for billions of years,” said David Paige, a researcher from the University of California-Los Angeles (UCLA), who also interpreted the LCROSS data and authored one of the Science reports. “We suspected that they may be cold enough to trap water ice, but the big question prior to LCROSS was: how much?”
Aside from water, this collaboration of researchers also reports the detection of other volatile compounds in the plume of debris during the few seconds it was visible to the spacecraft, including a number of light hydrocarbons, sulfur-bearing species, and carbon dioxide.
“The range of volatile compounds observed during the LCROSS impact is the same we see in icy comets in the outer solar system,” Paige said.
In one Science report, Peter Schultz from Brown University in Providence, Rhode Island, and colleagues in California describe how they monitored the many stages of the impact and its resulting plume of debris. These researchers say the rocket impact created a crater about 25 to 30 meters wide, and that somewhere between 4,000 kilograms (8,818 pounds) and 6,000 kilograms (13,228 pounds) of debris, dust, and vapor was blown out of the dark crater and into the sunlit LCROSS field of view.
And when the empty LCROSS rocket slammed into the pitch-black bottom of Cabeus crater, the Lunar Reconnaissance Orbiter (LRO) was also in orbit around the Moon—and this spacecraft captured many more important details of the impact.
“LRO and LCROSS were launched together on the same rocket, but they took very different paths once they got to the Moon,” said G. Randall Gladstone from the Southwest Research Institute in San Antonio, Texas. “LRO went into a low-altitude, two-hour orbit around the Moon, while LCROSS went into a 37-day orbit around the Earth. They intersected with the Moon’s Cabeus crater on 9 October 2009.”
Gladstone and colleagues utilized an ultraviolet spectrograph onboard the LRO to visualize the debris, dust, and vapor created by the LCROSS impact and identify numerous elements and compounds in the plume, including molecular hydrogen, carbon monoxide, calcium, mercury, and magnesium. These observations support the idea that the dark, freezing-cold, permanently shadowed regions of the Moon can trap volatile compounds—delivered from deep space or other areas of the Moon—and preserve them for eons.
In another Science report, Paul Hayne from UCLA and colleagues describe how they measured the thermal signature of the LCROSS impact with the Diviner Lunar Radiometer onboard the LRO. Their observations provide insight into how energy is dissipated and matter is slowly cooled during such planetary impacts.
Approximately 90 seconds after the LCROSS impact near the moon’s south pole, the Lunar Reconnaissance Orbiter swept past the site, allowing the Diviner instrument to record its thermal brightness. The impact generated temperatures in excess of 1000 K, appearing as a tiny glowing dot near the center of the color swath. Pre-impact surface temperatures obtained by Diviner during October 2009 are shown in gray-scale, draped over topography from the Lunar Orbiter Laser Altimeter. This image relates to a paper by Paul O. Hayne and colleagues titled, “Diviner Lunar Radiometer Observations of the LCROSS Impact,” from the 22 October 2010 issue of Science. | Image courtesy of NASA/UCLA
These researchers suggest that, during the LCROSS impact, a region of 30 to 200 square meters of the Cabeus crater’s floor was heated from approximately 40 kelvin (K) to at least 950 K—and that the residual heat was enough to turn approximately 300 kilograms (661 pounds) of ice directly into vapor in just four minutes after the impact, consistent with the LCROSS findings.
But, how did scientists choose the LCROSS impact site in the first place? Well before the empty rocket crashed into Cabeus crater, researchers were analyzing the Moon’s surface, searching for regions where volatile minerals might become trapped.
At that time, Paige and colleagues from across the United States used instruments onboard the LRO to map surface temperatures near the Moon’s South Pole and identify expansive areas that were theoretically cold enough to trap such volatile minerals. The researchers developed a thermal model of the lunar surface that accurately balanced the Moon’s topography with solar and infrared radiation to calculate the average amount of heat that those regions harbored. Their findings suggested that the floors of large impact craters that receive no direct sunlight are the coldest regions of the Moon—and they identified Cabeus crater as one of the coldest candidates.
In another of the Science reports, Igor Mitrofanov from the Institute for Space Research of the Russian Academy of Science in Moscow, Russia, and colleagues from both Russia and the United States describe how they used the Lunar Exploration Neutron Detector, or LEND, onboard the LRO to analyze the distribution of hydrogen near the southern lunar pole. Their findings confirm that the Cabeus crater contained a high concentration of hydrogen—and that it was indeed an ideal impact site for LCROSS.
“To me, the take-home message is that, yes, as has long been speculated, the Moon’s permanently shadowed regions are great cold traps and hold lots of volatiles—not just water, but many other interesting materials,” Gladstone said. “It seems likely that some of the species found will have important implications for future exploration or resource utilization planning. Scientifically, since these permanently shadowed regions are thought to have existed for a billion years or more, they likely hold an impressive record of solar system history.”
Read the abstracts for the related articles in Science:
- “Detection of Water in the LCROSS Ejecta Plume”
- “The LCROSS Cratering Experiment”
- “LRO-LAMP Observations of the LCROSS Impact Plume”
- “Diviner Lunar Radiometer Observations of the LCROSS Impact”
- “Diviner Lunar Radiometer Observations of Cold Traps in the Moon’s South Polar Region”
- “Hydrogen Mapping of the Lunar South Pole Using the LRO Neutron Detector Experiment LEND”
Listen to Robert Frederick’s Science Podcast interview with Anthony Colaprete.
|
Today’s Perfect Picture Book Friday selection is a beautiful story of Harriet Tubman. You’ve probably heard of her in connection with the Underground Railroad. But did you know she played numerous roles in history?
Title: Before She Was Harriet
Written by: Lesa Cline-Ransome
Illustrated by: James E. Ransome
Holiday House, 2017, biography
Suitable for ages: 4-7
Themes/topics: African Americans, US History, leadership
Here she sits
an old woman
tired and worn
her legs stiff
her back achy
Overview: (flap copy)
Moses, General Tubman, Minty, Araminta—the woman we know today as Harriet Tubman went by many names. Each represented one of her many roles as a spy, as a liberator, as a suffragist, and more. A powerful poem and exquisite watercolor paintings pay tribute to a true American hero.
Activities and Resources:
• Find out more. Identify character traits.
This book offers a tremendous opportunity for a class to learn more about Harriet Tubman. Use an array of sources and have small groups each take one of her roles and learn more about her to share. Complete a character map with three character traits and textual evidence. Then compare the traits each group found. Did groups identify the same traits? Different traits? Take a second look at the different traits and see if evidence for those might also be found in the other roles she took.
• Writing: connect to self.
What roles would you like to play in history? What traits would help you?
Why I like this book:
This biography stands out from the rest with its lyrical language and creative structure. The story begins at the end of Harriet’s life, inviting the reader to wonder about the causes of her weariness, her worries, her walks under a clear night sky. The author takes us back in time, step by step, to see what Harriet did in each stage of her life. I think history’s great heroes are often hard for young readers to relate to, but by following Harriet back to where her journey began, children see a young girl with a dream learning something from her father that was key to her extraordinary future. The illustrations are rich with context and emotion. I especially love the cover. A gorgeous book all around!
Visit author Susanna Hill’s Perfect Picture Books for a plethora of picture books listed by title and topic/theme, each with teacher/parent activities and resources. |
- Over-reactive: The sensory input is experienced and felt more intensely than it is by other individuals. It is felt to be painful, threatening, or uncomfortable. This has to do with an inability to modulate or habituate to sensory stimulation; the individual often appears anxious and distracted and has difficulty with social interaction and engagement. This can lead to defensive, manipulative, and controlling behaviors, or to self-isolation and aggression.
- Under-reactive: This is seen when a person is slower to respond to sensory input or requires significantly more input in order to recognize that something is happening and to respond to the stimulus. The individual may not feel pain and may tend to be quiet, seem disinterested, and have difficulty engaging with other individuals.
Tactile System
- Over-reactive: The individual cannot tolerate a haircut or trimming of the nails, having the face or hair washed, or being touched unexpectedly. They also dislike gritty, sticky, slimy, sweaty, greasy, or rough textures. They are agitated by crowds, such as groups in elevators, malls, and shops. They will isolate themselves, react overemotionally, and follow their own rituals in personal hygiene.
- Under-reactive: The individual feels pain and temperature less than others, touches everything in sight to the point of irritating others, and does not notice a messy face or hands. They also crave touch and rough play and push into others frequently.
- Treatment: Expose the individual to different textures in order to desensitize their body, or introduce new textures for them to experience. Have the individual play with chalk, theraputty, play dough, water or rice play, feeling different clothing materials, etc.
Proprioceptive System
- Over-reactive: An over-reactive proprioceptive system is rare, as proprioception is an intrinsically inhibitory system.
- Under-reactive: The individual seeks and craves jumping, crashing, and falling. They are unable to modulate their force and speed: they push and bump into others, press too hard while writing, and break toys and objects without meaning to.
- Treatment: Encourage safe play, use full-body and gross motor play, use massage or brushing techniques to organize the body, bounce on a hippity hop, jump on a trampoline, swing on swings, apply deep pressure input to the body, do wheelbarrow walking, etc.
Vestibular System
- Over-reactive: The individual has a fear of falling and heights, and can be anxious or distressed when their feet leave the ground or when walking down steps or riding an escalator. They dislike being upside down, and they avoid playground equipment, moving toys, and amusement park rides. They easily become disoriented when bending over and get dizzy easily (carsick and seasick).
- Under-reactive: The individual seeks all types of movement on the playground or amusement park rides. They spin and twirl throughout the day, take excessive risks with movement, are thrill seekers, enjoy being upside down, and tend to stare at spinning objects.
- Treatment: Allow the individual to receive vestibular input in an organized way, such as swinging both rotationally and linearly on a swing; when you swing one way, you must also swing the opposite way.
Auditory System
- Over-reactive: The individual has a negative response to unexpected or loud sounds such as a dog barking, a truck beeping, a siren, a school intercom, a ceiling fan, or a hair dryer. They cannot work with background noise, they run from loud environments, and they startle at sounds others are not aware of.
- Under-reactive: The individual enjoys strange noises, craves loud upbeat music, has a difficult time learning phonics, does not respond when their name is called, and frequently asks for information or directions to be repeated.
- Treatment: Desensitize the individual to different types of sounds in order to accustom their brain and auditory system to varied sounds.
Visual System
- Over-reactive: The individual wants the lights dimmed inside, may wear sunglasses inside or outside after dusk, and becomes excited or disorganized when there is a variety of visual objects, such as visual clutter. They are annoyed by flickering lights on a television or computer.
- Under-reactive: The individual gets lost easily, is overwhelmed by eye contact, and has difficulty copying from a board and with puzzles or discriminating shapes and positions in space. They have difficulty reading and cannot find objects or toys in a busy visual field.
- Treatment: Focus on enhancing visual memory skills and visual-motor integration skills, and desensitize the eyes to different visual stimuli.
Gustatory and Olfactory System
- Over-reactive: The individual has a negative reaction to smells, is aware of and easily repulsed by smells and tastes, and is a picky eater. They feel lightheaded around smells or chemicals in the environment and sometimes may dislike eating.
- Under-reactive: The individual may routinely smell foods and other people, put food or objects in their mouth, and may not notice strong odors.
- Treatment: Desensitize the individual to different types of food textures in order to accustom their brain and taste buds to new foods. |
You might have wondered, looking up at the night sky, how many other beings are out there looking back at us. Help is at hand. Using data from NASA’s Kepler Space Telescope, New Scientist has made an interactive map illustrating the stars that we might expect to host roughly Earth-sized, potentially habitable planets.
The grid of squares to the right represents the patch of sky that Kepler stared at for nearly four years. So far, the space telescope – nicknamed the Planet Hunter – has confirmed the existence of 151 exoplanets and identified more than 3500 strong candidates.
Now, using what we know from Kepler, and simulations from its data by Courtney Dressing and David Charbonneau of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, New Scientist has estimated and mapped the density of habitable worlds across the whole sky. Given that the Milky Way is thought to contain between 100 and 200 billion stars, our best estimate of the total number of such planets in our galaxy is 15 to 30 billion.
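The arithmetic behind that galaxy-wide estimate is simple scaling: an occurrence rate of roughly 0.15 potentially habitable, Earth-sized planets per star, applied to the Milky Way's star count. The 0.15 rate is inferred here from the article's own numbers rather than stated in it explicitly; a minimal sketch in Python:

```python
# Scale a per-star occurrence rate up to the whole Milky Way.
occurrence_rate = 0.15                # implied habitable planets per star (assumption)
stars_low, stars_high = 100e9, 200e9  # Milky Way star count range

low = occurrence_rate * stars_low
high = occurrence_rate * stars_high
print(f"{low / 1e9:.0f} to {high / 1e9:.0f} billion potentially habitable planets")
```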
“This illustrates the wow factor emerging from the Kepler mission,” says Jon Jenkins of the SETI Institute in Mountain View, California, who wrote the software that analyses the Kepler data. “The galaxy is just full of potentially habitable planets.”
How many of these worlds harbour life? We don’t know, but if we are alone in our galaxy, it’s not for a lack of accommodation.
|
Literal and Non-Literal Meanings of Words and Phrases
3rd Grade – ELA Common Core Aligned
CCSS.ELA-LITERACY.L.3.4.A: Use sentence-level context as a clue to the meaning of a word or phrase.
CCSS.ELA-LITERACY.L.3.5.A: Distinguish the literal and nonliteral meanings of words and phrases in context (e.g., take steps).
CCSS.ELA-LITERACY.RL.3.4: Determine the meaning of words and phrases as they are used in a text, distinguishing literal from nonliteral language.
Literal and Non-Literal Meanings of Words and Phrases
Literal
What is literal language?
- Literal language means the exact, or dictionary, meaning of a word or phrase.
- It is the meaning of a word you would find if you looked it up in a dictionary.
Non-literal
What is non-literal language?
- Non-literal expressions are sayings that have meanings beyond what can be understood from their common words.
- Writers use non-literal language to help readers better picture or understand something.
- Writers and speakers also use common expressions, called idioms, to indicate something beyond what the words actually mean.
Idiom
What is an idiom?
- An idiom is a word or phrase that means something different from its literal meaning.
- An idiom indicates something beyond what the words actually mean. This is called non-literal. It does not mean what it actually says.
- You can figure out what an idiom means by using context clues, or familiar words around the idiom.
- The word idiom is pronounced i-dee-um.
Let’s look at some examples:
Julian would not hurt a fly. Literal Meaning If a fly was bothering Julian, he would not swat at it because he wouldn’t want to hurt it. Non-literal Meaning Julian is kind and is never mean to anyone.
My dad is a couch potato. Literal Meaning My dad is an actual potato that grew underground and now sits on a couch. Non-literal Meaning My dad spends a lot of time doing very little and watches a whole lot of TV.
Ms. Lopez wanted to tell Martin to shape up or ship out. Literal Meaning Martin needs to start exercising to get in shape or else Ms. Lopez will put him on a ship. Non-literal Meaning Martin needs to start behaving and being kind to Julian.
When Martin spoke to Ms. Lopez, there was a frog in his throat. Literal Meaning While Martin was talking, somehow a frog was in his throat making noise. Non-literal Meaning Martin’s voice was hoarse or he couldn’t speak due to fear.
Now try some on your own:
Ms. Lopez told Martin to stop beating around the bush and tell her about teasing Julian. Which picture shows the literal meaning and which one shows the non-literal meaning? Write the answer on the line below each picture. Then click to check. Literal Meaning Non-literal Meaning
While reading the book, Mr. Higgins told the class he wants all eyes on him. Which picture shows the literal meaning and which one shows the non-literal meaning? Write the answer on the line below each picture. Then click to check. Non-literal Meaning Literal Meaning
Ms. Lopez told Martin that someone spilled the beans about him teasing Julian. Which picture shows the literal meaning and which one shows the non-literal meaning? Write the answer on the line below each picture. Then click to check. Literal Meaning Non-literal Meaning |
Scientists are developing a new genetic approach that will accelerate the study of interactions between bacteria and viruses. The interactions between phages and microbes can have important consequences for health, agriculture and the climate of the planet. New ways of using phages against bacteria include both eliminating dangerous strains and modifying beneficial ones. But even the smartest designs can be undermined by the unpredictable ways in which viruses infect bacteria.
Phages, like other parasites, are constantly changing the way they exploit their bacterial host strains. This constant struggle for survival has produced diverse molecular arsenals, and researchers are eager to study them, but doing so is a tedious and time-consuming process.
To study these protective strategies, specialists from the Berkeley laboratory developed an efficient method. It combines three techniques to reveal which receptors phages use to infect a bacterial cell and which cellular mechanisms bacteria use to respond to a phage infection. Researcher Vivek Mutalik notes that phages are the most common biological objects on the planet.
They are recognized as a key force in the cycling of nutrients in the environment and in agriculture, and they are important for human and animal health. Understanding their interactions will help us know the planet's microbiome better and develop new drugs, vaccines or phage cocktails that will help to overcome antibiotic resistance.
The scientists used technology for creating gene deletions and increasing gene expression, and were thus able to determine which genes bacteria use to evade phages. This allowed them to figure out which receptors the phages target without analyzing the phage genomes.
The scientists tested the new method with 14 genetically diverse phages targeting two bacterial strains. The results confirmed that the new method works efficiently and quickly: it rapidly identified a set of phage receptors that had previously taken decades of research to pin down.
The scientists believe that the new method can be scaled up, simplifying the study of the biology of our planet. |
How to celebrate Indigenous Peoples' Day
Over the last few years, the number of states, cities, and local communities opting to celebrate Native American history and culture in lieu of Columbus Day has been growing. There is an Indigenous Peoples' Day celebration being observed on the campus of Marshall University, and West Virginia University will be holding its 25th Annual Peace Tree ceremony on October 10th. If you are not able to make it to Huntington or Morgantown but would still like to learn more, here are three simple ways you can expand your knowledge.
1. Learn and share with your family and friends the historically accurate story of Columbus and his trip to the Americas.
This piece of history and the outcomes of this initial European contact with the existing population of the Americas are an important part of our history that is often misunderstood and perpetuates stereotypes. Check out "10 Things You May Not Know About Christopher Columbus." Two great books for 3rd to 5th grade elementary students are “Morning Girl” by Michael Dorris and “Encounter” by Jane Yolen. These two books tell the story of Columbus’ arrival from the point of view of Taíno children.
Also, take the time to learn about the Doctrine of Discovery and its effects on the Native population in the Americas. Below is a short video testimonial by the Haudenosaunee (Iroquois) on the Doctrine of Discovery and how it has affected them for hundreds of years.
2. Actively fight against Native American stereotypes.
Firstly, many ideas, images, and phrases that are derogatory, negative, or flat-out wrong have entered popular American culture. Understanding Prejudice has created a great list of some of these cultural items that we can all stop using or allowing to be used.
Secondly, we can pay attention to the way that Hollywood has shaped how the Native American is viewed in popular culture via film.
It is also important to understand that Native American culture is not homogeneous, nor should it be spoken about as a past event relegated to our history books. Native American culture is richly interwoven in thriving communities throughout the nation today.
3. Use today to celebrate Native American culture and to shine light on issues facing Native American citizens of the United States.
Native Americans face a plethora of issues, from joblessness and violence to poverty and other challenges, that are not widely covered by the media. Take some time to research these issues and then speak up.
Use your voice to bring light to these issues on Indigenous Peoples' Day! |
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
Developed at MIT’s Computer Science and Artificial Intelligence Laboratory, a team of robots can self-assemble to form different structures with applications in inspection, disaster response, and manufacturing
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
One reason we don’t yet have robot personal assistants buzzing around doing our chores is because making them is hard. Assembling robots by hand is time-consuming, while automation — robots building other robots — is not yet fine-tuned enough to make robots that can do complex tasks.But if humans and robots can’t do the trick, what about 3-D printers?In a new paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) present the first-ever technique for 3-D printing robots that involves printing solid and liquid materials at the same time.The new method allows the team to automatically 3-D print dynamic robots in a single step, with no assembly required, using a commercially-available 3-D printer. |
How can we help students understand better, and remember more from a lesson? Can something that is effective be fun and creative at the same time?
What are the strategies that will help our students save precious time while making memories more lasting? The answer is definitely not underlining, highlighting, rereading, simple repetition, or cramming before end-of-term tests. According to the latest scientific research, many commonly used study habits are unproductive, so we need to replace them with practices that actually work and result in durable learning.
Brain-Based Learning refers to techniques that take into account evidence-based information about how our brains work. This course questions and challenges conventional teaching methods which were previously believed to be unquestionable.
The goal of the course is to help educators improve and accelerate the learning process by designing their teaching activities according to what the human brain needs to perform well.
During the course, participants will be actively involved in trying out and reflecting on many different techniques. They will experience first-hand how brain based techniques work and how easy it is to apply them creatively once they understand the underlying principles about attention, context, memory formation, timing, testing, pre-testing and motivation. They will design activities, lessons, and courses that will result in student engagement, better understanding and better retention.
Participants will return home filled with new ideas, new approaches, tips and tricks and a fresh outlook on their teaching process. They will also be able to give students informed advice on how to become better learners by studying smarter, not harder.
And, hopefully, along the way participants will become better learners themselves.
The course will help the participants to:
- Recognize the practical implications of the latest research in how the brain learns;
- Understand when and how learning takes place;
- Design activities and lessons based on the principles derived from science;
- Make informed decisions on which strategies to use, and why, to maximize learning outcomes;
- Apply memory techniques;
- Create motivating and engaging student experiences;
- Provide students with the experience of success and mastery;
- Support students with effective, creative, and fun learning techniques;
- Use better learning techniques themselves.
The schedule describes likely activities but may differ significantly based on the requests of the participants, and the trainer delivering the specific session. Course modifications are subject to the trainer’s discretion. If you would like to discuss a specific topic, please indicate it at least 4 weeks in advance.
Day 1 – How attention works
- Introduction to the course, the school, and the external week activities.
- Icebreaker activities.
- Presentations of the participants’ schools.
- Discovering participants’ beliefs about learning.
- How attention works, or doesn’t.
Day 2 – Attention magnets
- Attention magnets.
- Visual memory.
- The role of emotions.
- Elaboration strategies.
- Memory techniques.
Day 3 – Memory techniques
- Priming, pre-testing.
- Retrieval practice.
- Timing matters.
Day 4 – Create meaning
- The power of context, being mindful.
- Stories, Meaning.
- Feedback and feedforward.
- Building students’ self-confidence.
Day 5 – Find the right motivation
- Staying in Flow.
- Sharing ideas.
Day 6 – Excursion & course closure
- Course evaluation: round-up of acquired competencies, feedback, and discussion;
- Awarding of the course Certificate of Attendance;
- Excursion and other external cultural activities. |
Seaweed and algae aren’t the only determining factor in how humans have developed, but there could be reasons to believe they played a significant role.
One of the things that makes humans different from any other species on earth is that our brains are large in proportion to our body mass. Some neurochemists believe this is due to a diet with sufficient sources of essential omega-3 fatty acids like DHA and EPA. These fatty acids play an important role in brain development and brain function, but are only found in fish, shellfish, seaweed and algae. In fact, fish and shellfish get their omega-3 fatty acids from consuming seaweed and algae. Because of this, many people also believe that humans did not evolve on dry grasslands but rather in damp regions, likely where the land meets the sea.
It is also true that the development of the human brain is dependent on several key micronutrients that include iodine, iron, copper, zinc, and selenium. Iodine is known for being essential in early brain development and proper growth, and many pregnant women try to increase their iodine while carrying. It’s been said that those living far from coastal regions several hundred years ago would have a hard time getting nutrients like iodine in ample quantities. Luckily for us, brown seaweeds like sugar kelp have plenty of iodine and are a great source of all of these nutrients. |
The word physician comes from the Ancient Greek word φύσις (physis) and its derived adjective physikos, meaning "nature" and "natural". From this, amongst other derivatives came the Vulgar Latin physicus, which meant a medical practitioner. After the Norman Conquest, the word entered Middle English, via Old French fisicien, as early as 1100. Originally, physician meant a practitioner of physic (pronounced with a hard C). This archaic noun had entered Middle English by 1300 (via Old French fisique). Physic meant the art or science of treatment with drugs or medications (as opposed to surgery), and was later used both as a verb and also to describe the medications themselves.
In English, there have been many synonyms for physician, both old and new, with some semantic variation. The noun phrase medical practitioner is perhaps the most widely understood and neutral synonym. Medical practitioner is lengthy but inclusive: it covers both medical specialists and general practitioners (family physicians, family practitioners), and historically would include physicians (in the narrow sense), surgeons and apothecaries. In England, apothecaries historically included those who now would be called general practitioners and pharmacists.
The term doctor (medical doctor) is older and shorter, but can be confused with holders of other academic doctorates (see doctor of medicine). Doctor (gen.: doctoris) means teacher in Latin and is an agent noun derived from the verb docere ('teach'). A cognate expression occurs in French as docteur médecin, a direct equivalent of medical doctor or doctor of medicine, and commonly found as its contraction, médecin (doctor, physician).
The Greek word ἰατρός (iatrós, doctor or healer) is often translated as physician. Ἰατρός is not preserved directly in English, but occurs in such formations as psychiatrist (translates from Greek as healer of the soul), podiatrist (foot healer), and iatrogenic disease (a disease caused by medical treatment). In Latin, the word medicus meant much what physician or doctor does now. Compare these translations of a well-known proverb (the nouns are in vocative case):
- Ἰατρέ, θεράπευσον σεαυτόν (Greek New Testament: Luke, 4:23)
- Medice, cura teipsum (from the Vulgate, early 5th century)
- Physician, heal thyself (from the Authorized King James Version, 1611)
The ancient Romans also had the word archiater, for court physician. Archiater derives from the ancient Greek ἀρχιατρός (from ἄρχω + ἰατρός, chief healer). By contraction, this title has given modern German its word for physician: Arzt.
Leech and leechcraft are archaic English words respectively for doctor and medicine. The Old English word for "physician", læċe, which is related to Old High German lāhhi and Old Irish liaig, lives on as the modern English word leech, as these particular creatures were formerly much used by the medical profession. Cognate forms for leech exist in Scandinavian languages: in modern Swedish as läkare, in Danish as læge, in modern Norwegian as lege (bokmål) or lækjar (nynorsk), and in Finnish as lääkäri. These Scandinavian words still translate as doctor or physician rather than as a blood-sucking parasite.
- Date: 13th century
- 1 : a person skilled in the art of healing; specifically : one educated, clinically experienced, and licensed to practice medicine as usually distinguished from surgery
- 2 : one exerting a remedial or salutary influence
A physician—also known as doctor of medicine, medical doctor, or simply doctor—practices the ancient profession of medicine, which is concerned with maintaining or restoring human health through the study, diagnosis, and treatment of disease or injury. This properly requires both a detailed knowledge of the academic disciplines (such as anatomy and physiology) underlying diseases and their treatment—the science of medicine—and also a decent competence in its applied practice—the art or craft of medicine.
Both the role of the physician and the meaning of the word itself vary significantly around the world, but as generally understood, the ethics of medicine require that physicians show consideration, compassion and benevolence for their patients.
- Life is short, and Art long;
- the crisis fleeting; experience perilous, and decision difficult.
- The physician must not only be prepared to do what is right himself,
- but also to make the patient, the attendants, and externals cooperate.
- —First aphorism of Hippocrates, c. 400 BCE, from the Hippocratic Corpus online (translated by Francis Adams) |
This article will cover the following subjects:
Discrepancy between reported capacity and actual capacity
Many people are confused when their operating system reports, for example, that their new 1 Terabyte (1 TB, or 1000 GB) hard drive has only about 931 gigabytes (GB) of usable capacity. Several factors may come into play in the reported capacity of a disk drive. Unfortunately, there are two different number systems used to express units of storage capacity: binary, in which a kilobyte is equal to 1024 bytes, and decimal, in which a KB is equal to 1000 bytes. The storage industry standard is to display capacity in decimal. Because a binary gigabyte contains more bytes than a decimal one, the same drive holds fewer binary gigabytes than decimal gigabytes, so the decimal figure looks larger. To accurately understand the true capacity of your disk drive, you need to know which base unit of measure (binary or decimal) is being used to represent capacity. Another factor that can cause misrepresentation of the size of a disk drive is BIOS limitations: many older BIOSes are limited in the number of cylinders they can support.
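A short sketch makes the discrepancy concrete: the same number of bytes, divided by decimal and binary unit sizes, produces the "1 TB" and "931 GB" figures respectively.

```python
# The same drive, counted two ways. Manufacturers use decimal units
# (1 TB = 10**12 bytes); many operating systems divide by binary
# units (2**30 bytes) while still printing the label "GB".
drive_bytes = 10**12                 # a "1 TB" drive as sold

decimal_gb = drive_bytes / 10**9     # decimal gigabytes
binary_gb = drive_bytes / 2**30      # binary gigabytes (gibibytes)

print(f"Decimal: {decimal_gb:.0f} GB")  # 1000 GB
print(f"Binary:  {binary_gb:.0f} GB")   # 931 GB, as the OS reports
```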
Motivation for proposed prefixes for binary multiples
In the past, computer professionals noticed that 1024 or 2^10 (binary) was very nearly equal to 1000 or 10^3 (decimal) and started using the prefix "kilo" to mean 1024. That worked well enough for a decade or two because everybody who talked KB knew that the term implied 1024 bytes. However, almost overnight a much more numerous "everybody" bought computers, and computer professionals needed to talk to physicists and engineers and even to ordinary people, most of whom know that a kilometer is 1000 meters and a kilogram is 1000 grams.
Two different measurement systems
| Name | Abbreviation | Binary Power | Binary Value (in Decimal) | Decimal Power | Decimal Value |
| kilobyte | KB | 2^10 | 1,024 | 10^3 | 1,000 |
| megabyte | MB | 2^20 | 1,048,576 | 10^6 | 1,000,000 |
| gigabyte | GB | 2^30 | 1,073,741,824 | 10^9 | 1,000,000,000 |
| terabyte | TB | 2^40 | 1,099,511,627,776 | 10^12 | 1,000,000,000,000 |
Often when two or more people begin discussing storage capacity, some will refer to binary values and others will refer to decimal values without making distinction between the two. This has caused much confusion in the past. In an effort to dispatch this confusion, all major disk drive manufacturers use decimal values when discussing storage capacity.
How operating systems report drive capacity
In the drive Properties window, right above the pie chart are the two different capacity measures. The first is the decimal value in total bytes; the second is the binary equivalent. Those values are also shown next to the Used Space and Free Space fields just above.
From Windows Explorer, right-click on a drive letter, then click on Properties. This shows capacities in bytes and either MB or GB.
From Windows Explorer, right-click on a drive letter, then click on Properties. This shows bytes, MB, and GB.
DOS Prompt: CHKDSK shows bytes
DOS Prompt: FDISK shows MB
From the top menu bar on the Desktop, click on Go, then Utilities, then open Disk Utility. Click on the hard drive to highlight it.
The "Total Capacity" is shown in GB or TB, then Bytes.
Note: Much of this information is available from the foundation of modern science and technology at http://physics.nist.gov/cuu/Units/binary.html |
Living with a Red Dwarf
Roughly three quarters of the stars in the galaxy are red dwarfs. Planet searches have typically passed over these tiny faint stars because they were thought to be unfriendly to potential life forms. However, this prejudice has softened. Preliminary results from a dedicated research program have shown that planets around red dwarfs could be habitable if they can maintain a magnetic field for a few billion years.
Red dwarfs – also called M dwarfs – are between 7 and 60 percent as massive as our sun. Their lower mass means they don’t burn as hot or as brightly, emitting less than 5 percent as much light as the sun. However, they have strong magnetic activity, which makes them relatively bright in X-rays and UV radiation and causes them to flare frequently.
To understand the environment around these common stars, the "Living with a Red Dwarf" program was started three years ago. It is piecing together observational data to provide a profile of how red dwarfs vary in brightness and magnetic activity as they age.
"This is the information that you would want to know to model the suitability for life on a nearby planet," says Ed Guinan of Villanova University, a scientist working with the program.
The habitable zone (HZ) around a red (dM), orange (dK) and yellow (dG) dwarf star. The dotted pink circle is the orbit at which a planet would have our Earth’s temperature.
Credit: Living with a Red Dwarf program.
As habitability goes, red dwarfs were thought to be the bad roommates of the cosmos.
Because they are so faint, the habitable zone — the distance from a star where liquid water can exist — is in many cases closer than the orbital distance between Mercury and our sun. When a planet orbits a star this closely, the gravitational pull of the star may cause the planet to become tidally locked with the same side always facing the star (similar to the Moon’s fixed gaze on the Earth).
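That distance follows from the inverse-square law: a planet receives Earth-like stellar flux at roughly the square root of the star's luminosity (in solar units) astronomical units from the star. A minimal sketch in Python, with illustrative luminosity values:

```python
import math

# Distance at which a planet receives Earth-like flux: d = sqrt(L / L_sun) AU.
def habitable_zone_au(luminosity_solar: float) -> float:
    return math.sqrt(luminosity_solar)

MERCURY_ORBIT_AU = 0.39
for lum in (0.05, 0.01):  # red dwarfs emit no more than ~5% of the sun's light
    d = habitable_zone_au(lum)
    side = "inside" if d < MERCURY_ORBIT_AU else "outside"
    print(f"L = {lum:4.2f} L_sun -> habitable zone at {d:.2f} AU ({side} Mercury's orbit)")
```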
Previously, scientists speculated that the dark side of a tidally locked planet would become so cold that it would freeze up the entire atmosphere, leaving even the sun-lit side with little air for breathing. But more recent models have shown that winds would distribute the heat sufficiently to avoid this atmospheric collapse.
Close-up of a solar flare. Similar magnetically-driven events are frequent around red dwarfs.
Credit: TRACE spacecraft
Still, life might not be a picnic around a red dwarf. Several times per day flares shoot off the star, causing the UV radiation to jump by 100 to 10,000 times normal. For several minutes, the star appears blue instead of red. This increased radiation could sterilize the surface of a nearby planet.
"You probably want to live on the dark side," Guinan says. "Or at least along the twilight zone where you would have less exposure."
Even between flares, the combination of UV light and stellar winds can strip away the atmosphere if nothing is protecting or replenishing it.
However, all hope is not lost. The high-energy radiation is predominantly emitted by young stars. As they age, red dwarfs become less magnetically active, while continuing to shine steadily at visible wavelengths for 100 billion years or more.
Therefore, if an orbiting planet can just hold onto its atmosphere through the wild early years of its red dwarf roommate, it could end up being a decent place to live.
Turning back the stellar clock
But just how long are red dwarfs dangerous?
Composite image of multiple solar flares on the sun.
To develop a model for how a star’s magnetic activity changes with time, Guinan and his colleague Scott Engle looked at the rotation rates of a large sample of red dwarfs. As expected, faster spinning stars had more X-ray and UV emission, as well as more flares. The rotation causes charged material inside the star to be churned around, and this "dynamo" action generates a magnetic field. Gas around the star becomes trapped in this field and heated to millions of degrees. This hot gas produces the observed high energy radiation.
By estimating the ages of stars in their sample, the researchers were able to build up a typical red dwarf life history.
The data show that a red dwarf is born spinning rapidly, and it exhibits the corresponding magnetic activity. However, the magnetic field also creates strong winds that carry away angular momentum, and thus slow the star down with time.
The conclusion is that a red dwarf will calm down after about 2 or 3 billion years. In comparison, our sun (a typical G star) was magnetically very active (with 2 to 5 big flares per day) for its first half a billion years.
A planet with a substantial magnetic field, like Earth’s, can deflect stellar winds and thereby avoid having its atmosphere stripped away.
"This could protect the planet for the 2 to 3 billion years that a red dwarf is active," Guinan says.
He is not completely optimistic, however. The fact that potentially habitable planets around a red dwarf are tidally locked implies they are rotating slowly around their axis. By the same physics that applies to stars, slow rotation will mean a weak magnetic field that could shut down completely.
Stellar winds can strip the atmosphere off of a nearby planet.
Credit: European Space Agency and Alfred Vidal-Madjar (Institut d’Astrophysique de Paris, CNRS, France)
This is what happened to Mars. It had a magnetic field 3.5 billion years ago, but when its liquid iron core solidified, the field turned off. Without this protective shield, the solar wind stripped away most of the planet’s atmosphere and liquid water.
To avoid this fate around a red dwarf, Guinan speculates that a planet might need to be more massive than Earth. The large liquid iron core inside a super Earth (with a mass between 2 and 10 times Earth’s) could perhaps maintain a magnetic field in spite of the slower rotation rate.
Interestingly, three of the two dozen planets detected so far around red dwarfs are super Earths. More will presumably be found in future searches: The MEarth Project is a planned survey of 2000 M stars using ground-based telescopes, and the Kepler spacecraft that launched in March has added more red dwarfs to its target list.
"M dwarf stars were overlooked in the past, but they have become more popular as people realize that life could potentially arise around them," Guinan says. |
Oak Tree Identification
The oak tree has long been a symbol of endurance and strength. Oak is actually a common name for a number of tree species in the genus Quercus. This genus grows widely across the northern hemisphere, primarily in America and parts of Asia.
At A Glance
The leaves of an oak tree are spirally arranged; depending on the species, they are either serrated or have a smooth margin. It produces flowers during springtime called catkins. The acorn, a nut with a cup-like base, is the fruit of the oak tree. An acorn may contain one to three seeds and may take from six months to a year and a half to fully mature. Some oak species also have evergreen leaves.
Classifications of Oak Trees
- Cork Oaks (Leucobalanus; Lepidobalanus; Sect. Quercus)
This type of oak tree is known to be the white oaks that are usually found in North America, Asia and Europe. Lobes of its leaves are round and they produce hairless acorn shells.
- Turkey Oak (Sect. Cerris)
This type of oak tree produces the bitterest acorns. Its leaves have sharp tips with some bristles on the lobes.
- Hungarian Oaks (Sect. Mesobalanus)
Often seen in Asia and Europe. Its leaves are long, and it produces bitter acorns in hairless shells.
- Canyon Live Oaks (Sect. Protobalanus)
This type of oak tree is commonly found in America and Mexico. They produce woolly acorn shells and have leaves similar to those of the Turkey Oak.
- Red Oaks (Erythrobalanus; Sect. Lobatae)
Red oaks are popularly seen in Central, North and Northern South America. They are very similar to Canyon Live Oaks except for the red hues of their bark.
Relevance And Uses of Oak Trees
Since oak is a hardwood, it is popularly used as lumber for furniture and flooring. Oak trees are also a good source of cork and even barrels for wine; oak barrels are said to add a unique taste, color and aroma to the wine they store. In the medical arena, the bark of white oak trees is used as a key component in some medicines. Acorn coffee is also gaining popularity, and acorns are used in flour making. |
Much of what is known about the stars comes from studying the star closest to us, the Sun. At a distance of almost 150 million kilometers, the Sun is a few hundred thousand times closer to us than the next nearest star. Because of its proximity, astronomers are able to study our star in much, much greater detail than they can the other stars.
The Sun is a G2-type main sequence star that has been shining for almost 5 billion years. It is known from radioactive dating of the Earth, Moon, and meteorites that these objects have been around for about that length of time, and temperatures on the surface of the Earth have been pleasant since it formed. The Sun's energy has made this possible. What could power something as big as the Sun for so long? The process called nuclear fusion is now known to be the source of the Sun's enormous energy, as well as that of other stars. This is a relatively recent discovery. However, using simple physical principles of gas physics, astronomers knew about the density and temperature structure of the interior of the stars long before they unlocked the secret to what could power them for so long. This chapter will cover these topics. I will first give a brief description of the Sun to give you an idea of what a star is like and then go into the basic principles of what the interiors of stars are like and what powers them. The vocabulary terms are in boldface.
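A back-of-envelope calculation shows why fusion is up to the job: converting mass to energy at the Sun's observed luminosity consumes only a tiny fraction of the Sun's mass over 5 billion years. A minimal sketch in Python with rounded constants:

```python
# Fusion converts mass to energy (E = m * c**2). At the Sun's observed
# luminosity, billions of years of shining cost very little mass.
L_SUN = 3.8e26             # solar luminosity, watts
C = 3.0e8                  # speed of light, m/s
M_SUN = 2.0e30             # solar mass, kg
SECONDS_PER_YEAR = 3.15e7

mass_per_second = L_SUN / C**2                              # kg converted each second
converted_5_gyr = mass_per_second * 5e9 * SECONDS_PER_YEAR  # over ~5 billion years

print(f"Mass converted per second: {mass_per_second:.1e} kg")                  # ~4.2e+09 kg
print(f"Fraction of solar mass used in 5 Gyr: {converted_5_gyr / M_SUN:.2%}")  # ~0.03%
```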
|
It's often said that the Arctic is one of the most vulnerable places to climate change. Temperatures are climbing faster there than anywhere else on the planet. Increasing winter temperatures mean increasing amounts of rain instead of snow, and scientists are still working to understand exactly what this means on the ground.
Now, a team of biologists, meteorologists and geophysicists have reviewed years of snowpack and weather data from two locations in the arctic archipelago of Svalbard and found that the increased amount of winter rain has led to more ice on the ground there. Their findings have just been published in Environmental Research Letters.
The shift is so great, the researchers say, that the ground has virtually not been free from ice during winter since the beginning of this century.
In other words: Santa, it's time to buy crampons and a raincoat.
Rain-on-snow challenges ecosystem
Rain in places like Svalbard poses a challenge that's less of a problem in more southerly climes.
Here, at 78 degrees N, the polar night lasts for more than three months and the ground is underlain by permafrost. That means when it rains on snow, the rain can freeze at the bottom of the snowpack, creating a layer of ice.
Researchers call this kind of ice formation "basal ice", because it lies at the base of the snowpack.
It can sometimes be so thick that it can completely encapsulate and kill plants, and starve animals, like reindeer, which normally graze on mosses, dwarf shrubs and other browse that they find by pawing through the snow.
A cascade of events through the wildlife community
In a 2013 publication in Science magazine, Norwegian University of Science and Technology (NTNU) ecologist Brage Bremset Hansen and colleagues described how an extreme icing event one winter prevented voles, ptarmigan and reindeer from finding food, which caused their populations to crash. Hansen is the senior author of the newly published paper.
Populations of arctic foxes, which feed on the carcasses of dead reindeer, boomed as the other animals died.
But in the following year, the fox population struggled, because any reindeer that had survived the big die-off had more summer food, since their overall numbers had plummeted. That meant fewer carcasses for foxes in the year after the big icing event.
Basal ice can also pose problems for reindeer herders in northern communities, while rain-on-snow, even without ice formation, can saturate snow packs to the point where they can avalanche.
How much ice depends on snow levels
The amount of snow that is already on the ground is an important factor in basal ice formation, the researchers say.
"When, where and how much ice is formed is a complicated process," says Bart Peeters, the lead author of the article and a PhD candidate at NTNU's Centre for Biodiversity Dynamics. "But the main patterns are clear: basal ice forms when rain and meltwater freeze on the frozen ground, although this depends on how deep the snowpack is, if there is any snow at all, and when it starts raining."
When there's a lot of snow on the ground and just a little rain, the rain can get soaked up by the snow and can freeze inside the pack. That means no basal ice.
When there's a thick snowpack, however, and a lot of rain, the rain can accelerate snowmelt so that lots of water cascades through the entire snowpack. It then freezes when it lands on the frozen ground at the bottom of the pack.
These are the conditions that lead to the formation of a thick icy layer.
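As a rough sketch of the interplay just described, the toy classifier below maps snow depth and rainfall to a likely outcome. The thresholds, and the function itself, are illustrative assumptions rather than values from the study:

```python
def basal_ice_outcome(snow_depth_cm: float, rain_mm: float) -> str:
    """Toy rain-on-snow classifier following the qualitative patterns
    described above; thresholds are assumptions, not measured values."""
    if snow_depth_cm == 0:
        return "rain freezes on the bare frozen ground: icy crust"
    if rain_mm < 5 and snow_depth_cm > 30:
        # Light rain on deep snow is soaked up and freezes inside the pack.
        return "rain absorbed within snowpack: no basal ice"
    if rain_mm >= 20:
        # Heavy rain accelerates melt; water cascades to the frozen ground.
        return "meltwater refreezes at the base: thick basal ice"
    return "intermediate case: some basal ice possible"

# Example: a heavy winter rain event on a thick snowpack
print(basal_ice_outcome(snow_depth_cm=50, rain_mm=30))
```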
Widespread winter rain and icing
The researchers wanted to know how widespread the formation of basal ice is during the winter.
They examined 2,539 snowpack measurements taken over 16 years from two locations on the main island of Spitsbergen: one on the coast in a town called Ny-Ålesund, and one in Nordenskiöld Land in the centre of the island. This last area is roughly 20 km south of the meteorological station at the Svalbard Airport in the main city of Longyearbyen.
In addition to data from the two meteorological stations at Longyearbyen and Ny-Ålesund, the researchers were able to get daily average air temperatures and amount of rain from five other stations scattered across Svalbard. This allowed them to look for patterns in winter rain across both space and time.
What they found was that the average amount of winter rain was more than twice as high on the coast as it was in the centre of the island. That also meant thicker basal ice on the coast.
However, they also found that when it rained in the winter, it usually rained across distances of several hundred kilometers. Likewise, if the ground was covered by thick ice on the coast, the same scenario was found in the centre of the island.
"Winter warm spells and rain-on-snow events on Svalbard tend to occur when the region is influenced by low-pressure systems from the southwest," said Ketil Isaksen, a senior researcher at the Norwegian Meteorological Institute and a co-author of the study. "These weather systems favour transport of water vapour from southerly Atlantic sources, bringing above-zero temperatures and rain across the entire archipelago."
Moving not an option
The bottom line is that icing tends to occur across large areas, and is not a localized event, the researchers said. As a result, animals that overwinter on the island, such as reindeer and ptarmigan, don't always have the option of moving to a better spot with less ice.
Hansen says that the most important ecological implication of large-scale rain-on-snow events is that they can simultaneously affect all of the species in the island's overwintering animal community.
"This ultimately means that the ups and downs of populations as well as entire wildlife communities could be synchronized across large distances," he said. "In theory, the synchronization of population dynamics across large distances, caused by fluctuations in weather, will increase the long-term risk of extinction."
He says, however, that there is no evidence that the species that overwinter on the Svalbard tundra are currently at risk.
Hansen is head of a project called INSYNC, funded by the Research Council of Norway, which is looking at how climate change affects the population dynamics of arctic wildlife communities.
Svalbard a bellwether of things to come
Extreme weather events, such as rain-on-snow, are becoming more and more common, the researchers said, and they now understand better how basal ice forms and the factors that affect its thickness and timing.
But as the Arctic continues to warm, the complex relationships between rain, snowpack thickness, air temperatures and timing during the winter suggest that patterns may change, they said.
"We know with high confidence that global temperatures will continue to rise for decades. As winters get warmer and wetter in the Arctic, it will be interesting to observe the patterns in basal icing when the snowpack melts completely and the ground surface thaws," Peeters said. "However, because the Arctic is warming most rapidly on Svalbard, it's a bellwether of things to come for ecosystems in other regions of the high Arctic." |
Undergraduates learn that aromaticity means a specific number of electrons in conjugated π orbitals above and below a ring. But like many concepts taught in first-year organic chemistry, the truth is more complex. Chemists are actively debating the concept of aromaticity. Some argue that σ and δ orbitals can form aromatic systems and that multiple types of aromaticity can exist in a single molecule. Masaichi Saito of Saitama University and colleagues have added another log to that fire by confirming double aromaticity in a bench-stable compound (Commun. Chem. 2018, DOI: 10.1038/s42004-018-0057-4).
Chemists have previously predicted a number of doubly aromatic compounds, such as those involving boron. In 1988, Vanderbilt University's James C. Martin reported a hexaiodobenzene dication that had σ aromaticity (J. Am. Chem. Soc. 1988, DOI: 10.1021/ja00225a038), although later research questioned whether the molecule was truly a dication (Chem.—Eur. J. 2012, DOI: 10.1002/chem.201102960). Inspired by Martin's work, Saito synthesized a hexakis(phenylselenyl)benzene dication, which has selenium atoms attached to each of the six carbon atoms in the central benzene ring. Martin's group had investigated that molecule's σ aromaticity as early as 1990 but did not report the research in the scientific literature and did not examine the compound's π aromaticity.
Saito explains that selenium’s large size allows overlapping σ orbitals from the six atoms to create aromaticity. The phenylselenyl groups donate electron density to the benzene to make oxidation easier, and the phenyls' bulk improves the solubility of the dications by keeping them away from each other in solution.
A two-electron oxidation leaves the inner ring of carbon π orbitals with six electrons and the outer ring of selenium σ orbitals with 10, both satisfying the 4n+2 electron rule for aromaticity, according to the researchers. X-ray crystallography data showed that C–C bond lengths were nearly identical, and Se–Se bond lengths fell within a 0.1-Å range—suggesting the symmetry needed for aromaticity. Also, the group found that the C–Se distances were too long to be double bonds.
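The electron counting can be checked against the 4n+2 rule mechanically. Here is a minimal Python sketch (my own illustration, not code from the paper):

```python
def satisfies_huckel(electron_count: int) -> bool:
    """True if electron_count = 4n + 2 for some non-negative integer n."""
    return electron_count >= 2 and (electron_count - 2) % 4 == 0

# After the two-electron oxidation: 6 pi electrons on the carbon ring
# (n = 1) and 10 sigma electrons on the selenium ring (n = 2).
for system, electrons in [("pi (C ring)", 6), ("sigma (Se ring)", 10)]:
    print(system, electrons, satisfies_huckel(electrons))
```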
Saito’s work “appears to be the first σ and π doubly aromatic compound that has been crystallized and structurally characterized using X-ray diffraction,” says Brown University physical chemist Lai-Sheng Wang.
Alexander I. Boldyrev, a theoretical chemist at Utah State University who has predicted boron-containing double aromatic molecules, says Saito has indeed found a real example of π and σ aromaticity. “What is most important [is] this is a bottled compound,” he says. Boldyrev thinks the molecule could help chemists understand bonding, structure, and stability in other molecules, especially inorganic compounds.
CORRECTION: This story was updated on Oct. 17, 2018, to correct the explanation of how the phenylselenyl groups affect the molecule's behavior. |
Dietary fibre is found in cereals, fruits and vegetables. Fibre is made up of the indigestible parts or compounds of plants, which pass relatively unchanged through our stomach and intestines. Fibre is mainly a carbohydrate. The main role of fibre is to keep the digestive system healthy.
Other terms for dietary fibre include ‘bulk’ and ‘roughage’, which can be misleading since some forms of fibre are water-soluble and aren’t bulky or rough at all.
Benefits of fibre
Dietary fibre is mainly needed to keep the digestive system healthy. It also contributes to other processes, such as stabilising glucose and cholesterol levels. In countries with traditionally high-fibre diets, diseases such as bowel cancer, diabetes and coronary heart disease are much less common than in Western countries.
Most Australians do not consume enough fibre. On average, they consume 20–25 g of fibre daily. The Heart Foundation recommends that adults aim to consume approximately 25–30 g daily.
Children aged between four and eight should consume 18 g of fibre each day. Girls aged 9 to 13, and 14 to 18 years, need 20 g and 22 g per day respectively. Boys aged 9 to 13, and 14 to 18 years, need 24 g and 28 g per day respectively.
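Summarised as a quick lookup, the targets quoted above might be sketched like this (the function and its structure are my own illustration of the figures in this section):

```python
def recommended_fibre_g(age: int, sex: str = "any") -> str:
    """Daily dietary fibre targets (grams), as quoted in this section."""
    if 4 <= age <= 8:
        return "18 g"                # children aged four to eight
    if 9 <= age <= 13:
        return "20 g" if sex == "female" else "24 g"
    if 14 <= age <= 18:
        return "22 g" if sex == "female" else "28 g"
    if age > 18:
        return "25-30 g"             # Heart Foundation target for adults
    raise ValueError("no target quoted for this age")

print(recommended_fibre_g(11, "male"))  # 24 g
print(recommended_fibre_g(35))          # 25-30 g
```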
Disorders that can arise from a low-fibre diet include:
- irritable bowel syndrome
- heart disease
- some cancers.
Types of fibre in food
There are two categories of fibre, and we need to eat both in our daily diets:
- soluble fibre – includes pectins, gums and mucilage, which are found mainly in plant cells. One of its major roles is to lower LDL (bad) cholesterol levels. Good sources of soluble fibre include fruits, vegetables, oat bran, barley, seed husks, flaxseed, psyllium, dried beans, lentils, peas, soy milk and soy products. Soluble fibre can also help with constipation.
- insoluble fibre – includes cellulose, hemicelluloses and lignin, which make up the structural parts of plant cell walls. A major role of insoluble fibre is to add bulk to faeces and to prevent constipation and associated problems such as haemorrhoids. Good sources include wheat bran, corn bran, rice bran, the skins of fruits and vegetables, nuts, seeds, dried beans and wholegrain foods.
Both types of fibre are beneficial to the body and most plant foods contain a mixture of both types.
Resistant starch, while not traditionally thought of as fibre, acts in a similar way. Resistant starch is the part of starchy food (approximately 10 per cent) that resists normal digestion in the small intestine. It is found in many unprocessed cereals and grains, unripe bananas, potatoes and lentils, and is added to bread and breakfast cereals as Hi-Maize. It can also be formed by cooking and manufacturing processes such as snap freezing.
Resistant starch is also important in bowel health. Bacteria in the large bowel ferment and change the resistant starch into short-chain fatty acids, which are important to bowel health and may protect against cancer. These fatty acids are also absorbed into the bloodstream and may play a role in lowering blood cholesterol levels.
Fibre keeps the digestive tract healthy
The principal advantage of a diet high in fibre is in improving the health of the digestive system. The digestive system is lined with muscles that massage food along the tract from the moment a mouthful is swallowed until the eventual waste is passed out of the bowel (a process called peristalsis). Since fibre is relatively indigestible, it adds bulk to the faeces.
Soluble fibre soaks up water like a sponge, which helps to bulk out the faeces and allows it to pass through the gut more easily. It acts to slow down the rate of digestion. This slowing-down effect is usually overridden by insoluble fibre, which does not absorb water and shortens the time food takes to pass through the gut.
Drink lots of water
A high-fibre diet may not prevent or cure constipation unless you drink enough water every day. Some very high-fibre breakfast cereals may have around 10 g of fibre per serve, and if this cereal is not accompanied by enough fluid, it may cause abdominal discomfort or constipation.
Fibre and ageing
Fibre is even more important for older people. The digestive system slows down with age, so a high-fibre diet becomes even more important.
Lowering blood cholesterol
There is good evidence that soluble fibre reduces blood cholesterol levels. When blood cholesterol levels are high, fatty streaks and plaques are deposited along the walls of arteries. This can make them dangerously narrow and lead to an increased risk of coronary heart disease. It is thought that soluble fibre lowers blood cholesterol by binding bile acids (which are made from cholesterol to digest dietary fats) and then excreting them.
Fibre and weight control
A high-fibre diet is protective against weight gain. High-fibre foods tend to have a lower energy density, which means they provide fewer kilojoules per gram of food. As a result, a person on a high-fibre diet can consume the same amount of food, but with fewer kilojoules (calories).
Fibrous foods are often bulky and, therefore, filling. Soluble fibre forms a gel that slows down the emptying of the stomach and the transit time of food through the digestive system. This extends the time a person feels satisfied or ‘full’. It also delays the absorption of sugars from the intestines. This helps to maintain lower blood sugar levels and prevent a rapid rise in blood insulin levels, which has been linked with obesity and an increased risk of diabetes.
Fibre and diabetes
For people with diabetes, eating a diet high in fibre slows glucose absorption from the small intestine into the blood. This reduces the possibility of a surge of insulin, the hormone produced by the pancreas to stabilise blood glucose levels.
Conditions linked to low-fibre diets
Eating a diet low in fibre can contribute to many disorders, including:
- constipation – small, hard and dry faecal matter that is difficult to pass
- haemorrhoids – varicose veins of the anus
- diverticulitis – small hernias of the digestive tract caused by long-term constipation
- irritable bowel syndrome – pain, flatulence and bloating of the abdomen
- overweight and obesity – carrying too much body fat
- coronary heart disease – a narrowing of the arteries due to fatty deposits
- diabetes – a condition characterised by too much glucose in the blood
- colon cancer – cancer of the large intestine.
Diet, cancer and heart disease
Increasing dietary fibre and wholegrain intake is likely to reduce the risk of cardiovascular disease, type 2 diabetes, weight gain and obesity, and possibly overall mortality.
It is also very likely that these observed health benefits occur indirectly, through the protective effects of ‘phytochemicals’ (such as antioxidants) that are closely associated with the fibre components of fruits, vegetables and cereal foods.
Studies have shown that dietary fibre, cereal fibre and wholegrains are protective against colorectal cancer. Fibre is thought to decrease the risk of colorectal cancer by increasing stool bulk, diluting possible carcinogens present in the diet and decreasing transit time through the colon.
In addition, bacterial fermentation of fibre results in the production of short-chain fatty acids, which are thought to have protective effects against colorectal cancer. It is recognised that dietary fibre protects against colorectal cancer: each 10 g per day of total dietary fibre equates to a 10 per cent reduction in the risk of colorectal cancer.
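Taken at face value, that figure supports a simple back-of-the-envelope calculation. The sketch below assumes the 10 per cent reductions compound multiplicatively, which is an assumption made for illustration, not a claim from the studies cited:

```python
def relative_risk(extra_fibre_g_per_day: float) -> float:
    """Relative colorectal cancer risk if each 10 g/day of total dietary
    fibre gives a 10% risk reduction (assumed here to compound)."""
    return 0.9 ** (extra_fibre_g_per_day / 10)

print(round(relative_risk(10), 2))  # 0.9  -> 10% lower risk
print(round(relative_risk(30), 2))  # 0.73 -> about 27% lower risk
```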
Ways to increase your fibre intake
Simple suggestions for increasing your daily fibre intake include:
- Eat breakfast cereals that contain barley, wheat or oats.
- Switch to wholemeal or multigrain breads and brown rice.
- Add an extra vegetable to every evening meal.
- Snack on fruit, dried fruit, nuts or wholemeal crackers.
A daily intake of more than 30 g can easily be achieved if you eat wholegrain cereal products, more fruit, vegetables and legumes and, instead of low-fibre cakes and biscuits, have nuts or seeds as a snack or use them in meals.
You do not need to eat many more kilojoules to increase your fibre intake. You can easily double your fibre intake without increasing your kilojoule intake by being more selective. Compare the tables below.
Fibre intake of less than 20 g per day
A sample day's menu on a low-fibre diet:

- 1 cup puffed rice cereal
- 4 slices white bread
- 1 tablespoon peanut butter
- 1 piece of fruit (apple)
- 1/2 cup canned fruit, undrained
- 1/2 cup frozen mixed vegetables
- mashed potato (120 g)
- 1 cup white cooked rice
- 2 plain dry biscuits
- 1 slice plain cake (60 g)
- 1 cup commercial fruit juice

Total: 17.9 g fibre, 5,557 kJ
Fibre intake of more than 30 g per day
A sudden increase in dietary fibre
A sudden switch from a low-fibre diet to a high-fibre diet can cause abdominal pain and increased flatulence (wind). Also, very high-fibre diets (more than 40 g daily) are linked with decreased absorption of some important minerals such as iron, zinc and calcium. This occurs when fibre binds these minerals and forms insoluble salts, which are then excreted.
This could increase the risk of developing deficiencies of these minerals in susceptible people. Adults should aim for a diet that contains 25 g to 30 g of fibre per day, and should introduce fibre into the diet gradually to avoid any negative outcomes.
It is better to add fibre to the diet from food sources rather than from fibre supplements, as these can aggravate constipation, especially if you do not increase the amount of water you drink daily.
Where to get help
- Your doctor
- Dietitians Association of Australia Tel. 1800 812 942
Things to remember
- Dietary fibre is found in the indigestible parts of cereals, fruits and vegetables.
- A diet high in fibre keeps the digestive system healthy.
- Most Australians don’t eat enough fibre. |
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms. |
Numicon: Making a difference to Maths
Numicon is a multi-sensory approach to teaching maths, developed in the classroom by experts and designed to help children understand connections between numbers.
2D were taught, by Mrs Davies, how to play games involving their number and place value skills. They learnt how to set up their Numicon, discern odd numbers from even, ask pertinent questions to find a given number and also to work out which number was removed from the ordered line up of Numicon. 2D had such fun that they wanted to teach class 2ML the same games. This was a fantastic exercise in embedding their own learning by cascading their knowledge to their peers.
Miss Ribeiro, the Maths Coordinator, also came to watch the girls in action and was impressed by the mathematical knowledge that emerged with the handling of these new materials.
Parents also have the option of buying the same materials as part of the school’s Maths Toolkit, which also comes with the school’s Maths calculation policy. |
Since arriving in Mars orbit in 2006, NASA's Mars Reconnaissance Orbiter has seen countless weird and wonderful things on the Red Planet's surface. Many features resemble Earth's geology, while others are uniquely Martian, including the bizarre spider-like features that have been seen in the Martian dune fields.
Though well documented, these veined patterns have mysterious origins, defying attempts by the Mars orbiter's High-Resolution Imaging Science Experiment (HiRISE) camera to identify how they form and spread. HiRISE scientists, who are quite familiar with the phenomenon, call regions with these spidery structures "araneiform" terrain - a nod to their arachnid shape. Now, researchers have finally spotted the "birth" of one of these spiders and watched it form over three Martian years (one Mars year is equivalent to roughly 1.9 Earth years), and they think it could grow into a large structure that may persist for centuries.
"We have seen for the first time these smaller features that survive and extend from year to year, and this is how the larger spiders get started," said Ganna Portyankina of the University of Colorado, Boulder, in a NASA Jet Propulsion Laboratory statement. "These are in sand-dune areas, so we don't know whether they will keep getting bigger or will disappear under moving sand."
Unlike Earth, Mars has deposits of frozen carbon dioxide (commonly known as "dry ice"), and sheets of the stuff can be found in polar regions. As the ground warms during spring, solid carbon dioxide locked in the ground begins to sublimate - it doesn't pass through the liquid phase and instead turns straight to vapor. Sub-surface carbon dioxide gas then builds up pressure, eventually erupting through cracks in the ground to create furrows - basically long, narrow trenches - and this seasonal venting disturbs surface dust. The erosion process, which occurs every season, is responsible for the dark fans that appear on the surface and is likely driving the formation of the Mars spiders.
Now, after spotting the genesis of one of these strange-looking features, planetary scientists have a clue as to what triggers them. This particular example was found during spring in the south of Mars, where there's less sand than in the north. Northern seasonal fans (dust that was also ejected after the formation of long furrows) are typically short-lived features, apparently quickly covered and filled in by wind-blown sand. The south, however, seems to allow longer-lived spidery structures to persist.
"There are dunes where we see these dendritic [or branching] troughs in the south, but in this area, there is less sand than around the north pole," said Portyankina. "I think the sand is what jump starts the process of carving a channel in the ground."
Earth's atmosphere is neither cold nor thin enough to support the formation of carbon dioxide ice on or below the surface, so these spiders are an alien phenomenon that cannot occur naturally on Earth. We are watching Mars' landscape undergo fascinating processes, in real time, that we can only see by sending a long-duration satellite to the Red Planet.
Motivation, Process and Child Development
Experts representing child development, education, health, kinesiology, playground design, and child injury litigation responded to the question: Why do children climb? Children climb for fun, enjoyment, challenge, the sense of danger, and to reach the top for success and observation. They climb to explore, gain new perspectives, access play options, play chase, engage in make-believe play, respond to parent and peer challenges and encouragement, and compete with peers (Frost et al., 2004). They also climb to learn. Children are wired to learn, and learning by climbing carries benefits in skill development, health, fitness, and injury prevention.
All healthy children are born to climb. They climb for the same reasons that fish swim and birds fly. Soon after birth, children employ built-in natural instincts to seek, see, explore, touch, and move objects and build mental and physical capabilities leading to initial climbing skills. Basic principles of child development, supported by decades of research, are at work here. All children are unique. Similar patterns of neurological functioning and consequent behavior allow general conclusions about child development, but novel patterns of cognitive, physical, and social experience form individual differences in children. Climbing behavior is no exception. The primitive tools for climbing and many other skills are present at birth, but growth and elaboration of these early skills depend upon use.
Developmental Progression of Climbing
Babies crawl and explore every object in reach. Toddlers learn to walk and “cruise” haltingly and discover toys, furniture, and climbable objects. They investigate every accessible object and search for new sensations, sources of pleasure, and satisfaction of curiosity. Early climbing behavior follows predictable developmental processes – reaching, touching, rolling, pulling up, balancing, sitting, crawling, holding on, standing with support, and eventually grasping, stepping up and pulling to higher levels, walking, and running. This is natural and developmentally appropriate — with safety caveats. During early stages, the play environment should be carefully assessed for toddler safety and carefully monitored by adults. Unsafe objects are removed or secured from climbing and falling, and reasonably safe play materials are added – both indoors and outdoors. As children gain experience and development, monitoring is relaxed and new challenges are introduced. Risk is valued but minimized and carefully monitored as infants, toddlers, and pre-school children develop basic, rudimentary climbing skills.
In elementary school, children employ climbing and related skills in mastering construction play, symbolic play, organized games, and pleasurable forms of work. During each stage, they are learning new skills for thinking, exploring, and climbing. Neuroscience and related sciences confirm that play, including climbing, builds fitness, brains, and bodies and promotes general health across generations. Climbing playground equipment, trees, fences, and other objects promote strength, confidence, vestibular stimulation, perceptual-motor skills, creativity, and neuromuscular development. Imaginative play and associated activities, such as planning, constructing, accessing, and using dens and tree houses, engage experiential learning and executive functioning (neurological skills for mental control). The instinctive bond (biophilia) existing between humans and other mammals is perhaps best expressed and enriched in opportunities to play outdoors in nature.
With experience and development children attempt to scale any object in range – indoor furniture, steps, boxes, tables, outdoor hills, fences, playground equipment, and low natural features such as logs. From the beginning, they may stand by, observe older children climb, and appear to reflect on trying to climb the equipment that older children are climbing. Eventually, neophytes enter the play fray and set about climbing — one step at a time. In a rather clumsy way, they reach for bars, knobs, limbs, or edges to grab and step up on, and then they begin to climb one step at a time, often falling a few times before reaching the ultimate goal — the top of a deck, hill, or boulder, the handholds for an overhead bar, or the access to a slide or tunnel. Getting down poses yet another challenge and may require adult encouragement or assistance. Individual differences in children's skill levels related to limited play experiences, excessive sedentary cyber play, excessive fear, motor and emotional disabilities, and overprotection by adults should be identified and remedied early.
Outdoor preschool playgrounds for ages two to five are typically designed to be sequentially more complex and challenging to match the continuing development of preschool climbers' physical skills. For example, overhead ladders installed at a lower height (less than 60 inches for preschoolers versus a maximum of 84 inches for older children) over loose protective surfacing with resilient take-off steps result in the early development of skills needed to master the more complex natural and built challenges to follow. If climbers are available and allowed by adults, three-year-old children of normal weight choose to climb and traverse (brachiate) overhead ladders. Typically, only obese children are unable to do this. Failing to develop basic intuitive skills and to master increasingly complex physical play and playground challenges results in negative consequences across the developmental domain. So what should responsible adults do?
At home, in the neighborhood, and at school, climbing objects, natural or manufactured, should be available and time allowed for children to engage daily in free play. In so doing they learn what they can and can’t do, what challenges they can master, how they can avoid injury, and how to select and modify climbing experiences for fun and learning. During climbing and other playground activities (running, throwing, balancing, brachiating, creating, building), nerves and muscles are developed and brain circuits and cells are formed as children develop coordination, agility, strength, confidence, and motor skills such as depth and distance judgment - making climbing an efficient developmental activity. Free play should take place in a context of fun allowing for social exchanges with peers, game development (e.g., chase, follow the leader, make believe), and such expanded activities as building tree houses, competing with peers, play and work in nature, and accessing decks, nets, slides, climbers, trapeze bars, and tunnels. While play can generally be enjoyed in a safe environment, it often involves taking risks.
The Issue of Risk
Healthy development requires that children have many opportunities to take risks on playgrounds. Climbing is risky, and the higher children climb, the riskier it can become. For any particular climbing route or skill, beginners frequently fall a few or even many times before reaching their goal. Injury records of the National Electronic Injury Surveillance System (NEISS) and analysis of playground injury litigation records reveal that 60 to 80 percent of injuries to the 200,000 plus children annually admitted to emergency rooms for playground injuries result from falling onto hard surfaces. This despite the employment of national and state injury guidelines and standards, growing inspections of playgrounds by trained personnel, extensive use of standards by equipment manufacturers, and a decline in outdoor play.
At our research site of 35 years (Redeemer Lutheran School, Austin, Texas), initial climbing challenges are designed to take into account preschool children’s limited experience, and they feature special challenges and surfaces on equipment and in fall areas. Following mastery of early climbing options, children are successively moved to primary and elementary playgrounds featuring increasingly challenging equipment and natural features. The 500 children at this school have 30 minutes of free play on natural/built playgrounds and 30 minutes of physical education every day. The cafeteria serves food from the children’s gardens. In Texas with a 19 percent child obesity level, fewer than five percent of the subject children are obese. There is no record of a serious injury on the preschool playground and only one simple fracture per decade on the two more challenging playgrounds for primary and elementary school children.
Consider the injury rate at an elementary school enrolling about 700 children in the general vicinity of the above research-based school. This is one of many similar reviews received, followed by personal inspections each year. Serious injuries, especially fractures, are commonly seen among primary grades with children lacking extensive opportunities to play on equipment or natural challenges during the preschool years. Following installation of an expansive new playground with “state of the art” playground equipment for 5-12 year-old children, modern manufactured surfacing, and approval by a Certified Playground Safety Inspector, the following injury pattern resulted. Fifteen students sustained significant injuries in about 18 months playing on the new equipment, including ten with broken arms or legs after falling from a climber (a simple modification to reduce the risk factor resulted in no injuries during the following 10 months). The principal of this school remarked, “Something is terribly wrong here.”
Research attributes children's declining play skills to growing sedentary time, loss of free play, high-stakes testing, cyber play, lawsuits, and "helicopter parenting". Extensive experience in mastering climbing challenges from an early age is a more reliable predictor of success in climbing and safety than chronological age or grade level. Learners, even those introduced to climbing as late as elementary school, occasionally fall. Initially, they should be free to fall only from limited heights onto carefully prepared and maintained surfaces. As children gain skill, adults should stand back and release them to test their skills and enhance climbing abilities. However, a missing element in the risk and skill development equation is the readiness of the child to protect himself during risky playground activities such as climbing trees – a current topic of controversy.
Should children climb trees? Climbing trees offers similar benefits as other forms of climbing, but should also be subject to limited assistance by adults and older children during the learning stages. Climbing trees is often prohibited or limited by teachers, school administrators, and parents. Many school, park, and backyard playgrounds, perhaps most, do not include trees suitable for climbing, and owners of private property are reluctant to risk potential legal consequences of children falling from their trees. Reports of present-day elderly people who were skilled tree climbers during childhood typically paint pictures of pleasure, excitement, and reasonably safe, healthy consequences. The forgotten caveat here is that physical labor and extensive free play at home and school built readiness for physical challenges.
Readiness, Risk, and Intuition
A missing element in playground safety is a failure to provide sufficient free play time on challenging playgrounds for children to develop their natural intuition for protecting themselves in increasingly risky play. Over time and with experience, children given many opportunities to engage in risky play build remarkably rapid instinctive, reflective processes that kick in to trigger automatic compensatory actions to prevent falls and to protect themselves during falls. Children can gain the power to intuitively know immediately and without conscious reasoning when an activity transcends from risky but doable to dangerous. For example, during falls experienced climbers twist the body to fall onto the expansive, less vulnerable back, the arms are withdrawn toward the body prior to receiving too much shock, and the neck bends forward to protect the most vulnerable head. The ultimate expression of such automatic, intuitive action is seen in wrestling, gymnastics, ballet, and parkour - even on the most challenging and active playgrounds. Learning how to prevent falls and how to fall safely begins early and depends on early cues from peers, adults, and much experience. Levels of child development and readiness for risky play are fundamental components of safe, healthy play.
Frost, J. L., Brown, P. S., Thornton, C. D., & Sutterby, J. A. (2004). The Developmental Benefits of Playgrounds. Olney, MD: Association for Childhood Education International. |
Land, Flora, and Fauna
While Europeans introduced plants and animals including cattle, horses, and wheat to the Americas, they also encountered new species—ocelots, jaguars, javelinas, xoloitzcuintlis, alpacas, açaí palms, ceibas, jacarandas, and many more. Recognizing the impact of fauna in a society is a deeply relevant way to understand cultural differences, traditions, and belief systems. Lauren Derby notes that “bringing animals into the analysis might move us closer to local understandings of the natural world and syncretism on the ground between European, Indigenous, and creole views and practices, enabling new ways of thinking about environmental change” (2011: 603). This can be extended to local flora as well.
Early cartography of the Americas often included local flora and fauna as a means of exotification. Compare Tabula geographica regni Chile (1646) with the Relación Geográfica de Gueytlalpa (1581). The former includes a fictitious sea monster as well as land animals that are not drawn to scale alongside humans. The animals, coupled with the massive tracts of land, are portrayed as unruly and in need of European order. Here, knowledge was produced to create a narrative that furthered the colonial enterprise. In the Relación Geográfica de Gueytlalpa, Indigenous mapmakers, tasked with depicting their surrounding areas, give us a map with both Indigenous and European components to it. The colors blue, green, and red are all made from local flora and fauna in the Americas (the Indigo plant, green mineral, and cochineal). Glyphs symbolizing hills are present alongside European markers like the exaggerated bulls. The insertion of bulls, a non-native and domestic animal, into the map as markers of European estancias demonstrates the growing privatization of land.
Europeans imported the privatization of land to the Americas as a means to solidify their power, not only over other cultures, but over nature. Another European tactic was to name and rename flora and fauna as a means of imposing order on it rather than living alongside it in cooperation. This is perhaps best seen with the anteater that inhabits parts of Central America and South America, highlighted in Historia natural ediar (1940). Its name in Spanish, oso hormiguero (ant bear), first recorded in approximately 1545, reveals the limits of European knowledge. The anteater is not related to the bear, but because Spaniards used a frame of reference within their cultural understanding at the moment of early encounter, the term oso hormiguero continues to be prevalent today. This misnomer might seem relatively harmless, yet it reinforces the power of the colonizer's language to corrupt or erase knowledge, while revealing Western ignorance.
This blithe mindset toward nature continues to negatively impact the environment and communities to this day, as shown in reports from LLILAS Benson’s post-custodial partners in Brazil and Colombia that demonstrate widespread ecological destruction to the benefit of multinational corporations. In “Carta ao Senhor Secretario do Meio Ambiente do Estado de São Paulo” (1990), MOAB, an Afro-Brazilian community organization in Vale do Ribeira, protests government plans to construct a hydroelectric dam because of the environmental damage it would cause while displacing Afro-Brazilian communities. A similar concern is expressed in Colombia at the arrival of wire fencing.
The Afro-Colombian collective Proceso de Comunidades Negras (PCN) describes this fencing as foreign to the community: “Aparición de cerco que no es nuestra cultura. Esa es una cultura de personas extranjeras que llegaron al Territorio y llegaron con este método. Esta foto resalta los cercos con Alambre de púa” (1998-1999). The letter and the photo reveal worldviews that compete with Western discourses of progress: living with nature, rather than dominating it by fencing off tracts of land as a means to ownership and privatization. Maribel Falcón contests this same notion in "Esta tierra es su tierra" (undated). These different perspectives prompt José Francisco Borges’s O Crime Ecologico (2006), a woodcut print on paper that juxtaposes the need for conservation alongside economic motivations as soy cultivation in Brazil continues to expand.
In addition to being utilized in narratives of environmental control and belief systems, flora and fauna have become appropriated as symbols of resistance and social justice, as seen in René Castro’s Hands off El Salvador (1981) and Sam Coronado’s Vote (undated). The dove, a symbol of peace, and the Aztec eagle, an homage to the United Farm Workers Union’s desire to connect with historic roots in support of Mexican migrant workers, here stand in as conveyors of messages that are tied to anti-imperial and anti-colonial sentiment.
Pablo Antonio Cuadra’s poems “Mitología del jaguar” (undated) and “La ceiba” (undated) express a deep and sacred connection to Central America’s flora and fauna. Cuadra draws on Indigenous reverence for the ceiba, a tree that in Maya culture connects the underworld (Xibalba) through its roots, the terrestrial plane through its trunk, and the sky plane through its high-reaching branches. Among Amazonian communities, the mighty ceiba serves as a home for several deities. It is one of several poems that Cuadra highlights in a poetry collection focused on trees native to Latin America. Likewise, the jaguar is present as a deity in all Mesoamerican Indigenous cultures and one of many animals imbued with cosmological relevance. By drawing on these symbols in his poetry, Cuadra vindicates Indigenous worldviews. |
Fruit and Vegetable Printing
- Various fruit and vegetables
- Paring knife
- Print pad or stamp pad
- Cut fruits and vegetables into halves, quarters, circles, or any other shapes; dip them into tempera paint or press them onto a print or stamp pad; and then press them onto plain or colored paper.
- Apples cut in half will have a star design in the middle (where the seeds are), while green peppers make a great shamrock design.
- Cut a potato in half and use a small paring knife to create a relief design: circles, squares, hearts, and so on.
- If you make letters, don't forget to carve them backwards so they will print correctly.
Copyright © 1998 by Patricia Kuffner. Excerpted from The Preschooler's Busy Book with permission of its publisher, Meadowbrook Press.
Reader's Theater Play: Talk Like a Pilgrim
Learn how pilgrim children talked in this Scholastic News play.
- Grades: 1–2
How did Pilgrims greet one another? What were poppets? What chores were Pilgrim children responsible for? In this Reader's Theater play, your students will hear how Pilgrim boys and girls talked. They'll also learn interesting facts about the daily life of children in colonial America. The Reader's Theater activity also features a glossary of Pilgrim vocabulary, a map showing the layout of a typical Pilgrim village, and bright color photographs of Pilgrim life. |
Due: Following Friday
Each week you should be reading at least five times as part of your home learning. This can be independently or to an adult at home. Remember to record the reading that you do at home in your reading record and ask an adult to sign it for you. Reading records will be checked on Fridays.
Adults - You can support your child with their reading at home and maximise the impact that this has by doing the following things with them:
New Vocabulary: Each time your child comes across a new word that is unfamiliar, you can discuss this with your child. It may be that your child comes across this word whilst they are reading independently, but encourage them to note it down and then to talk to you about it afterwards. You could look up the meaning in the dictionary, ask your child to write a sentence using the word and even keep a list of these words and regularly ask your child the meaning of each one.
Summarising: Ask your child to summarise what they have just read - e.g. Can you tell me what has just happened on this page? What are the most important events that have happened in the chapter you have just read?
Each week, you will be set some tasks on Mathletics to complete which will be based on what we have been covering in maths lessons in class. Log into your account on the following link to complete this:
If you experience any problems with logging onto Mathletics, please let Mr Brundle know.
Every week you will be allocated 20 minutes to complete on Times Table Rock Stars. This session will focus on a specific times table - and the related division facts - and can be completed at any point in the week by logging onto your account at home. You may wish to complete this in one block, or log in and complete a few minutes each day. You can either log onto the website (using the link below) or can download the Times Table Rock Star app to complete this. If you experience any problems in logging in, please let Mr Brundle know.
You will be given a list of spellings on a Friday, which will be tested in school on the following Friday. These spellings will either be words that are based on the rule or pattern that we are learning in class that week, or will be taken from your personal common exception word list. You should practise these words at home each week in preparation for the test. Remember to use a strategy to help you practise these words. Have a look at the document below to help you remember the strategies:
Optional Home Learning
In addition to the tasks above, you may wish to complete some additional work at home to deepen your understanding of our current topic, however the above tasks must take priority.
There are a range of tasks that you may choose to complete throughout the term. Remember to bring them into school once you have completed them to share with the rest of the class. |
Have you ever heard a family member talk about your first step or the first word you spoke? For kids with cerebral palsy, called CP for short, taking a first step or saying a first word may not be as easy. That's because CP is a condition that can affect the things that kids do every day.
Some kids with CP use wheelchairs and others walk with the help of crutches or braces. In some cases, a kid's speech may be affected or the person might not be able to speak at all.
Cerebral palsy (say: seh-REE-brel PAWL-zee) is a condition that affects thousands of babies and children each year. It is not contagious, which means you can't catch it from anyone who has it. The word cerebral means having to do with the brain. The word palsy means a weakness or problem in the way a person moves or positions his or her body.
A kid with CP has trouble controlling the muscles of the body. Normally, the brain tells the rest of the body exactly what to do and when to do it. But because CP affects the brain, depending on what part of the brain is affected, a kid might not be able to walk, talk, eat, or play the way most kids do.
Types of CP
There are three types of cerebral palsy: spastic (say: SPASS-tik), athetoid (say: ATH-uh-toid), and ataxic (say: ay-TAK-sik). The most common type of CP is spastic. A kid with spastic CP can't relax his or her muscles or the muscles may be stiff.
Athetoid CP affects a kid's ability to control the muscles of the body. This means that the arms or legs that are affected by athetoid CP may flutter and move suddenly. A kid with ataxic CP has problems with balance and coordination.
A kid with CP can have a mild case or a more severe case — it really depends on how much of the brain is affected and which parts of the body that section of the brain controls. If both arms and both legs are affected, a kid might need to use a wheelchair. If only the legs are affected, a kid might walk in an unsteady way or have to wear braces or use crutches. If the part of the brain that controls speech is affected, a kid with CP might have trouble talking clearly. Another kid with CP might not be able to speak at all.
For some babies, injuries to the brain during pregnancy or soon after birth may cause CP. Children most at risk of developing CP are small, premature babies (babies who are born many weeks before they were supposed to be born) and babies who need to be on a ventilator (a machine to help with breathing) for several weeks or longer.
But for most kids with CP, the problem in the brain happens before birth. Often, doctors don't know why.
Doctors who specialize in treating kids with problems of the brain, nerves, or muscles are usually involved in diagnosing a kid with cerebral palsy. These specialists could include a pediatric neurologist (say: nyoo-RAL-uh-jist), a doctor who deals with problems of the nervous system and brain in kids.
Three other kinds of doctors who can help kids with CP are:
a pediatric orthopedist (say: or-tho-PEE-dist), who handles problems with bones or joints
a developmental pediatrician, who looks at how a kid is growing or developing compared with other kids the same age
a pediatric physiatrist, who helps treat children with disabilities of many kinds
There is no special test to figure out if a kid has cerebral palsy. Doctors may order X-rays and blood tests to find out if some other disease of the brain and nervous system may be causing the problem. To diagnose CP, doctors usually wait to see how a kid develops to be sure.
A case of cerebral palsy often can be diagnosed by the age of 18 months. For example, if a child does not sit up or walk by the time most kids should be doing these things, the kid might have CP or some other problem that is causing development to go more slowly. Doctors follow infant and child development closely and look for problems with muscle tone and strength, movement, and reflexes.
How Is CP Treated?
For a kid with CP, the problem with the brain will not get any worse as the kid gets older. For example, a kid who has CP that affects only the legs will not develop CP in the arms or problems with speech later on. The effect of CP on the arms or legs can get worse, however, and some kids may develop dislocated hips (when the bones that meet at the hips move out of their normal position) or scoliosis (curvature of the spine).
That is why therapy is so important. Kids with CP usually have physical, occupational, or speech therapy to help them develop skills like walking, sitting, swallowing, and using their hands. There are also medicines to treat the seizures that some kids with CP have. Some medicines can help relax the muscles in kids with spastic CP. And some kids with CP may have special surgeries to keep their arms or legs straighter and more flexible.
Cerebral palsy usually doesn't stop kids from going to school, making friends, or doing things they enjoy. But they may have to do these things a little differently or they may need some help. With computers to help them communicate and wheelchairs to help them get around, kids with CP often can do a lot of stuff that kids without CP can do.
Kids with cerebral palsy are just like other kids, but with some greater challenges that make it harder to do everyday things. More than anything else, they want to fit in and be liked.
Be patient if you know someone or meet someone with CP. If you can't understand what a person with CP is saying or if it takes someone with CP longer to do things, give him or her extra time to speak or move. Being understanding is what being a good friend is all about, and a kid with CP will really appreciate it. |
Islam brought important developments in manufacture and trade to the towns of the Middle East. Two techniques were developed in the 9th century: in one, the white surface was produced by a lead glaze made opaque and white by the addition of small amounts of tin oxide or another opacifier; in the other, a white slip covered the coloured earthenware body under a clear lead glaze.
The distinctive lustre effect which is particularly associated with Islamic pottery from the 9th to the 14th centuries was developed from a technique probably borrowed from the decoration of glass. Metal oxides mixed with ochre were painted onto the glazed surface and the pot was fired again; after firing, the ochre would be rubbed off to reveal the design in a variety of metallic colours from gold to red and brown, depending on the relative proportions of the metal oxides used. The paint was easy to apply to the glazed surface, and fine and intricate designs were achieved.
The ancient technique of applying a thin layer of white clay as a liquid slip to the still wet earthenware body of a pot was later developed further, particularly in Iran. Designs were painted in slips of different colours - usually black, brown or red - on the white foundation and covered with a clear glaze fluxed with lead oxide. Designs were also incised or carved through the white slip to allow the coloured clay of the body to show through.
In ancient Mesopotamia, as in ancient Egypt, simple geometry was used in the measurement of land, in the construction of buildings and in astronomical calculations.
Geometry became highly important in the Islamic world as its figures and constructions were permeated with symbolic, cosmological and philosophical significance.
In architecture strict adherence to geometric principles in plans and elevations was the basis of the harmony and discipline which characterise all Islamic art. In decoration geometrically based designs covered entire surfaces, typically with a geometrical framework leaving spaces to be filled with interlaced and stylised leaf and floral designs.
Geometrical designs are basically very simple: they may be constructed with only a compass and rule and the knowledge of certain procedures which produce triangles, squares, hexagons, stars, and so on. Their designs may be reduced and enlarged with great ease. By repeating these procedures, making further divisions, and adding straight and curved lines, almost limitless elaborate variations may be achieved.
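Numerically, the same compass-and-rule logic is easy to mimic: dividing a circle into equal arcs gives a regular polygon, and joining every second vertex of a hexagon gives the two triangles of a six-pointed star. A minimal sketch, purely as a modern illustration of the procedure:

```python
import math

def regular_polygon(n, radius=1.0):
    """Vertices of a regular n-gon inscribed in a circle (the 'compass')."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

hexagon = regular_polygon(6)
# Joining every second vertex yields the two overlapping triangles
# of a six-pointed star; repeating with more divisions gives richer stars.
triangle_a = hexagon[0::2]
triangle_b = hexagon[1::2]
print(triangle_a)
print(triangle_b)
```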
LGBT or GLBT is an initialism that stands for lesbian, gay, bisexual, and transgender. In use since the 1990s, the term is an adaptation of the initialism LGB, which began to replace the term gay in reference to the broader LGBT community beginning in the mid-to-late 1980s. The initialism, as well as some of its common variants, functions as an umbrella term for sexuality and gender identity.
It may refer to anyone who is non-heterosexual or non-cisgender, instead of exclusively to people who are lesbian, gay, bisexual, or transgender. To recognize this inclusion, a popular variant, LGBTQ, adds the letter Q for those who identify as queer or are questioning their sexual or gender identity. Those who add intersex people to LGBT groups or organizing may use the extended initialism LGBTI. These two initialisms are sometimes combined to form the terms LGBTIQ or LGBT+ to encompass spectrums of sexuality and gender. Other common variants also exist, such as LGBTQIA+, with the A standing for "asexual", "aromantic" or "ally". Longer acronyms, with some being over twice as long as LGBT, have prompted criticism for their length, and the implication that the acronym refers to a single community is also controversial.
The first widely used term, homosexual, now carries negative connotations in the United States. It was replaced by homophile in the 1950s and 1960s, and subsequently by gay in the 1970s; the latter term was adopted first by the homosexual community.
As lesbians forged more public identities, the phrase "gay and lesbian" became more common. As equality was a priority for lesbian feminists, disparity of roles between men and women or butch and femme were viewed as patriarchal. Lesbian feminists eschewed gender role play that had been pervasive in bars, as well as the perceived chauvinism of gay men; many lesbian feminists refused to work with gay men or take up their causes. A dispute as to whether the primary focus of their political aims should be feminism or gay rights led to the dissolution of some lesbian organizations, including the Daughters of Bilitis, which disbanded in 1970 following disputes over which goal should take precedence.
Lesbians who held the essentialist view, that they had been born homosexual and used the descriptor "lesbian" to define sexual attraction, often considered the separatist opinions of lesbian feminists to be detrimental to the cause of gay rights. Bisexual and transgender people also sought recognition as legitimate categories within the larger minority community.
After the elation of change following group action in the 1969 Stonewall riots in New York City, in the late 1970s and the early 1980s, some gays and lesbians became less accepting of bisexual or transgender people. Critics said that transgender people were acting out stereotypes and that bisexuals were simply gay men or lesbian women who were afraid to come out and be honest about their identity. Each community has struggled to develop its own identity, including whether, and how, to align with other gender- and sexuality-based communities, at times excluding other subgroups; these conflicts continue to this day. LGBTQ activists and artists have created posters to raise consciousness about the issue since the movement began.
From about 1988, activists began to use the initialism LGBT in the United States. Not until the 1990s within the movement did gay, lesbian, bisexual, and transgender people gain equal respect. This spurred some organizations to adopt new names, as the GLBT Historical Society did in 1999. Although the LGBT community has seen much controversy regarding universal acceptance of different member groups (bisexual and transgender individuals, in particular, have sometimes been marginalized by the larger LGBT community), the term LGBT has been a positive symbol of inclusion.
Despite the fact that LGBT does not nominally encompass all individuals in smaller communities (see Variants below), the term is generally accepted to include those not specifically identified in the four-letter initialism. Overall, the use of the term LGBT has, over time, largely aided in bringing otherwise marginalized individuals into the general community. Transgender actress Candis Cayne in 2009 described the LGBT community as "the last great minority", noting that "We can still be harassed openly" and be "called out on television".
In 2016, GLAAD's Media Reference Guide stated that LGBTQ is the preferred initialism, being more inclusive of younger members of the communities who embrace queer as a self-descriptor. However, some people, especially older members of the community, consider queer to be a derogatory term originating in hate speech and reject it.
Many variants exist, including variations that change the order of the letters; LGBT or GLBT are the most common terms. Although identical in meaning, LGBT may have a more feminist connotation than GLBT as it places the "L" (for "lesbian") first. LGBT may also include additional Qs for "queer" or "questioning" (sometimes abbreviated with a question mark and sometimes used to mean anybody not literally L, G, B or T), producing the variants LGBTQ and LGBTQQ. In the United Kingdom, it is sometimes stylized as LGB&T, whilst the Green Party of England and Wales uses the term LGBTIQ in its manifesto and official publications.
The order of the letters has not been standardized; in addition to the variations between the positions of the initial "L" or "G", the less common letters mentioned, if used, may appear in almost any order. Longer initialisms based on LGBT are sometimes referred to as "alphabet soup". Variant terms do not typically represent political differences within the community, but arise simply from the preferences of individuals and groups.
The terms pansexual, omnisexual, fluid and queer-identified are regarded as falling under the umbrella term bisexual (and therefore are considered a part of the bisexual community).
Some use LGBT+ to mean "LGBT and related communities". LGBTQIA is sometimes used and adds "queer, intersex, and asexual" to the basic term. Other variants may have a "U" for "unsure"; a "C" for "curious"; another "T" for "transvestite"; a "TS", or "2", for "two-spirit" persons; or an "SA" for "straight allies". However, the inclusion of straight allies in the LGBT acronym has proven controversial, as many straight allies have been accused of using LGBT advocacy to gain popularity and status in recent years, and various LGBT activists have criticised the heteronormative worldview of certain straight allies. Some may also add a "P" for "polyamorous", an "H" for "HIV-affected", or an "O" for "other". Furthermore, the initialism LGBTIH has seen use in India to encompass the hijra third gender identity and the related subculture.
The initialism LGBTTQQIAAP (lesbian, gay, bisexual, transgender, transsexual, queer, questioning, intersex, asexual, ally, pansexual) has also resulted, although such initialisms are sometimes criticized for being confusing, for leaving some people out, and for disputes over the placement of the letters within the new title. Adding the term "allies" to the initialism has sparked particular controversy, with some seeing the inclusion of "ally" in place of "asexual" as a form of asexual erasure. There is also the acronym QUILTBAG (queer and questioning, unsure, intersex, lesbian, transgender and two-spirit, bisexual, asexual and aromantic, and gay and genderqueer).
Similarly, LGBTIQA+ stands for "lesbian, gay, bisexual, transgender, intersex, queer/questioning, asexual and many other terms (such as non-binary and pansexual)". The + after the "A" may denote a second "A" representing "allies".
In Canada, the community is sometimes identified as LGBTQ2 (Lesbian, Gay, Bisexual, Transgender, Queer and Two Spirit). The choice of acronym changes depending on which organization is using it. Businesses and the CBC often simply employ LGBT as a proxy for any longer acronym, private activist groups often employ LGBTQ+, whereas public health providers favour the more inclusive LGBT2Q+ to accommodate two-spirit Indigenous peoples. For a time, the Pride Toronto organization used the much lengthier acronym LGBTTIQQ2SA, but appears to have dropped this in favour of simpler wording.
The term trans* has been adopted by some groups as a more inclusive alternative to "transgender", where trans (without the asterisk) has been used to describe trans men and trans women, while trans* covers all non-cisgender (genderqueer) identities, including transgender, transsexual, transvestite, genderqueer, genderfluid, non-binary, genderfuck, genderless, agender, non-gendered, third gender, two-spirit, bigender, and trans man and trans woman. Likewise, the term transsexual commonly falls under the umbrella term transgender, but some transsexual people object to this.
When not inclusive of transgender people, the shorter term LGB is used instead of LGBT.
The relationship of intersex to lesbian, gay, bisexual and trans, and queer communities is complex, but intersex people are often added to the LGBT category to create an LGBTI community. Some intersex people prefer the initialism LGBTI, while others would rather that they not be included as part of the term. Emi Koyama describes how inclusion of intersex in LGBTI can fail to address intersex-specific human rights issues, including creating false impressions "that intersex people's rights are protected" by laws protecting LGBT people, and failing to acknowledge that many intersex people are not LGBT. Organisation Intersex International Australia states that some intersex individuals are same sex attracted, and some are heterosexual, but "LGBTI activism has fought for the rights of people who fall outside of expected binary sex and gender norms". Julius Kaggwa of SIPD Uganda has written that, while the gay community "offers us a place of relative safety, it is also oblivious to our specific needs".
Numerous studies have shown higher rates of same sex attraction in intersex people, with a recent Australian study of people born with atypical sex characteristics finding that 52% of respondents were non-heterosexual; research on intersex subjects has thus been used to explore means of preventing homosexuality. As an experience of being born with sex characteristics that do not fit social norms, intersex can be distinguished from transgender, while some intersex people are both intersex and transgender.
The initialisms LGBT or GLBT are not agreed to by everyone whom they encompass. For example, some argue that transgender and transsexual causes are not the same as those of lesbian, gay, and bisexual (LGB) people. This argument centers on the idea that being transgender or transsexual has more to do with gender identity, or a person's understanding of being or not being a man or a woman, irrespective of their sexual orientation. LGB issues can be seen as a matter of sexual orientation or attraction. These distinctions have been made in the context of political action in which LGB goals, such as same-sex marriage legislation and human rights work (which may not include transgender and intersex people), may be perceived to differ from transgender and transsexual goals.
A belief in "lesbian & gay separatism" (not to be confused with the related "lesbian separatism") holds that lesbians and gay men form (or should form) a community distinct and separate from other groups normally included in the LGBTQ sphere. While not always appearing of sufficient number or organization to be called a movement, separatists are a significant, vocal, and active element within many parts of the LGBT community. In some cases separatists will deny the existence or right to equality of bisexual orientations and of transsexuality, sometimes leading to public biphobia and transphobia. In contrast to separatists, Peter Tatchell of the LGBT human rights group OutRage! argues that to separate the transgender movement from the LGB would be "political madness", stating that:
Queers are, like transgender people, gender deviant. We don't conform to traditional heterosexist assumptions of male and female behaviour, in that we have sexual and emotional relationships with the same sex. We should celebrate our discordance with mainstream straight norms.[...]
The portrayal of an all-encompassing "LGBT community" or "LGB community" is also disliked by some lesbian, gay, bisexual, and transgender people. Some do not subscribe to or approve of the political and social solidarity, and the visibility and human rights campaigning, that normally goes with it, including gay pride marches and events. Some of them believe that grouping together people with non-heterosexual orientations perpetuates the myth that being gay/lesbian/bi/asexual/pansexual/etc. makes a person deficiently different from other people. These people are often less visible compared to more mainstream gay or LGBT activists. Since this faction is difficult to distinguish from the heterosexual majority, it is common for people to assume all LGBT people support LGBT liberation and the visibility of LGBT people in society, including the right to live one's life in a different way from the majority. In the 1996 book Anti-Gay, a collection of essays edited by Mark Simpson, the concept of a 'one-size-fits-all' identity based on LGBT stereotypes is criticized for suppressing the individuality of LGBT people.
Writing in the BBC News Magazine in 2014, Julie Bindel questions whether the various gender groupings now "bracketed together" ... "share the same issues, values and goals?" Bindel refers to a number of possible new initialisms for differing combinations and concludes that it may be time for the alliances to be reformed or finally to go "our separate ways". In 2015, the slogan "Drop the T" was coined to encourage LGBT organizations to stop support of transgender people; while receiving support from some feminists as well as transgender individuals, the campaign has been widely condemned by many LGBT groups as transphobic.
On December 29, 2020, the Women's Liberation Front, an organisation noted for its opposition to transgender rights legislation, published a media style guide, in part as a response to the Trans Journalists Association's guide having been adopted by the Society of Professional Journalists. Amongst other advice, the style guide recommended avoiding the term "LGBT" unless discussing topics relevant to "trans-identified individuals" as well as "lesbians, gays [and] bisexuals".
Many people have looked for a generic term to replace the numerous existing initialisms. Words such as queer (an umbrella term for sexual and gender minorities that are not heterosexual or gender-binary) and rainbow have been tried, but most have not been widely adopted. Queer has many negative connotations to older people who remember the word as a taunt and insult, and such (negative) usage of the term continues. Many younger people also understand queer to be more politically charged than LGBT.
SGM, or GSM, an abbreviation for sexual and gender minorities, has gained particular currency in government, academia, and medicine. It has been adopted by the National Institutes of Health; the Centers for Medicare & Medicaid Services; and the UCLA Williams Institute, which studies SGM law and policy. Duke University and the University of California San Francisco both have prominent Sexual and Gender Minority health programs. An NIH paper recommends the term SGM because it is inclusive of "those who may not self-identify as LGBT … or those who have a specific medical condition affecting reproductive development"; a publication from the White House Office of Management and Budget explains that "We believe that SGM is more inclusive, because it includes persons not specifically referenced by the identities listed in LGBT"; and a UK government paper favors SGM because initials like LGBTIQ+ stand for terms that, especially outside the Global North, are "not necessarily inclusive of local understandings and terms used to describe sexual and gender minorities." An example of usage outside the Global North is the Constitution of Nepal, which identifies "gender and sexual minorities" as a protected class.
"Rainbow" has connotations that recall hippies, New Age movements, and groups such as the Rainbow Family or Jesse Jackson's Rainbow/PUSH Coalition. SGL ("same gender loving") is sometimes favored among gay male African Americans as a way of distinguishing themselves from what they regard as white-dominated LGBT communities.
Some people advocate the term "minority sexual and gender identities" (MSGI, coined in 2000), so as to explicitly include all people who are not cisgender and heterosexual; or gender, sexual, and romantic minorities (GSRM), which is more explicitly inclusive of minority romantic orientations and polyamory; but those have not been widely adopted either. Other rare umbrella terms are Gender and Sexual Diversities (GSD), MOGII (Marginalized Orientations, Gender Identities, and Intersex) and MOGAI (Marginalized Orientations, Gender Alignments and Intersex).
In public health settings, MSM ("men who have sex with men") is clinically used to describe men who have sex with other men without referring to their sexual orientation, with WSW ("women who have sex with women") also used as an analogous term.
Various flags represent specific identities within the LGBT movement, from sexual or romantic orientations, to gender identities or expressions, to sexual characteristics.
That "A" is not for allies[,] [t]hat "A" is for asexuals. [...] Much like bisexuality, asexuality suffers from erasure.
human studies of the effects of altering the prenatal hormonal milieu by the administration of exogenous hormones lend support to a prenatal hormone theory that implicates both androgens and estrogens in the development of gender preference ... it is likely that prenatal hormone variations may be only one among several factors influencing the development of sexual orientation
To try and separate the LGB from the T, and from women, is political madness. Queers are, like transgender people, gender deviant. We don't conform to traditional heterosexist assumptions of male and female behaviour, in that we have sexual and emotional relationships with the same sex. We should celebrate our discordance with mainstream straight norms. The right to be different is a fundamental human right. The idea that we should conform to straight expectations is demeaning and insulting.
Queer is an umbrella term for sexual and gender minorities who are not heterosexual or are not cisgender. Originally meaning "strange" or "peculiar", queer came to be used pejoratively against those with same-sex desires or relationships in the late 19th century. Beginning in the late 1980s, queer activists, such as the members of Queer Nation, began to reclaim the word as a deliberately provocative and politically radical alternative to the more assimilationist branches of the LGBT community.
A cisgender person is a person whose gender identity matches their sex assigned at birth. For example, someone who identifies as a woman and was identified as female at birth is a cisgender woman. The word cisgender is the antonym of transgender. The prefix cis- is not an acronym or abbreviation of another word; it is derived from Latin and the word cissexual was invented in the 1990s from the German zissexuell.
The LGBT community is a loosely defined grouping of lesbian, gay, bisexual, and transgender individuals, LGBT organizations, and subcultures, united by a common culture and social movements. These communities generally celebrate pride, diversity, individuality, and sexuality. LGBT activists and sociologists see LGBT community-building as a counterweight to heterosexism, homophobia, biphobia, transphobia, sexualism, and conformist pressures that exist in the larger society. The term pride or sometimes gay pride expresses the LGBT community's identity and collective strength; pride parades provide both a prime example of the use and a demonstration of the general meaning of the term. The LGBT community is diverse in political affiliation. Not all people who are lesbian, gay, bisexual, or transgender consider themselves part of the LGBT community.
Queer studies, sexual diversity studies, or LGBT studies is the study of issues relating to sexual orientation and gender identity, usually focusing on lesbian, gay, bisexual, transgender, gender-dysphoric, asexual, queer, questioning, and intersex people and cultures.
Heteronormativity is the belief that heterosexuality is the default, preferred, or normal mode of sexual orientation. It assumes the gender binary and that sexual and marital relations are most fitting between people of opposite sex. A heteronormative view therefore involves alignment of biological sex, sexuality, gender identity and gender roles. Heteronormativity is often linked to heterosexism and homophobia. The effects of societal heteronormativity on lesbian, gay and bisexual individuals can be examined as heterosexual or "straight" privilege.
Non-heterosexual is a word for a sexual orientation or sexual identity that is not heterosexual. The term helps define the "concept of what is the norm and how a particular group is different from that norm". Non-heterosexual is used in feminist and gender studies fields as well as general academic literature to help differentiate between sexual identities chosen, prescribed and simply assumed, with varying understanding of implications of those sexual identities. The term is similar to queer, though less politically charged and more clinical; queer generally refers to being non-normative and non-heterosexual. Some view the term as being contentious and pejorative as it "labels people against the perceived norm of heterosexuality, thus reinforcing heteronormativity". Still others say non-heterosexual is the only term useful to maintaining coherence in research and suggest it "highlights a shortcoming in our language around sexual identity"; for instance, its use can enable bisexual erasure.
LGBT culture is a culture shared by lesbian, gay, bisexual, transgender, and queer individuals. It is sometimes referred to as queer culture, while the term gay culture may be used to mean "LGBT culture" or to refer specifically to homosexual culture.
A sexual minority is a group whose sexual identity, orientation or practices differ from the majority of the surrounding society. Primarily used to refer to LGB or non-heterosexual individuals, it can also refer to transgender, non-binary or intersex individuals.
A pride flag is any flag that represents a segment or part of the LGBT community. Pride in this case refers to the notion of gay pride. The rainbow flag is the most widely used LGBT flag and LGBT symbol in general. There are derivations of the rainbow flag that are used to focus attention on specific similar-interest groups within the community. There are also some pride flags that are not exclusively related to LGBT matters, such as the polyamory flag. The terms LGBT flags and queer flags are often used interchangeably.
The LGBT community has adopted certain symbols for self-identification to demonstrate unity, pride, shared values, and allegiance to one another. LGBT symbols communicate ideas, concepts, and identity both within their communities and to mainstream culture. The two most-recognized international LGBT symbols are the pink triangle and the rainbow flag.
The questioning of one's sexual orientation, sexual identity, gender, or all three is a process of exploration by people who may be unsure, still exploring, or concerned about applying a social label to themselves for various reasons. The letter "Q" is sometimes added to the end of the acronym LGBT; the "Q" can refer to either queer or questioning.
LGBT movements in the United States comprise an interwoven history of lesbian, gay, bisexual, transgender and allied movements in the United States of America, beginning in the early 20th century and influential in achieving social progress for lesbian, gay, bisexual, transgender and transsexual people.
A trans woman is a woman who was assigned male at birth. Trans women may experience gender dysphoria and may transition; this process commonly includes hormone replacement therapy and sometimes sex reassignment surgery, which can bring relief and resolve feelings of gender dysphoria. Trans women may be heterosexual, bisexual, homosexual, asexual, or identify with other terms.
The following outline offers an overview and guide to LGBT topics.
LGBTQ ageing addresses issues and concerns related to the ageing of lesbian, gay, bisexual and transgender (LGBT) people. Older LGBT people are marginalised by: a) younger LGBT people, because of ageism; and b) by older age social networks because of homophobia, biphobia, transphobia, heteronormativity, heterosexism, prejudice and discrimination towards LGBT people.
Intersex people are born with sex characteristics that "do not fit the typical definitions for male or female bodies". They are substantially more likely to identify as lesbian, gay, bisexual, or transgender (LGBT) than the non-intersex population, with an estimated 52% identifying as non-heterosexual and 8.5% to 20% experiencing gender dysphoria. Although many intersex people are heterosexual and cisgender, this overlap and "shared experiences of harm arising from dominant societal sex and gender norms" has led to intersex people often being included under the LGBT umbrella, with the acronym sometimes expanded to LGBTI. However, some intersex activists and organisations have criticised this inclusion as distracting from intersex-specific issues such as involuntary medical interventions.
Gender and sexual diversity (GSD), or simply sexual diversity, refers to all the diversities of sex characteristics, sexual orientations and gender identities, without the need to specify each of the identities, behaviors, or characteristics that form this plurality.
LGBT psychology is a field of psychology surrounding the lives of LGBT individuals, in particular the diverse range of psychological perspectives and experiences of these individuals. It covers different aspects such as identity development including the 'coming out' process, parenting and family practices and supports for LGBT individuals, as well as issues of prejudice and discrimination involving the LGBT community.
This project combines Physics, Biology, and Construction Design to learn about limbs from insects to humans.
The physics lessons begin with studying torque and how it causes rotation, following this with experiments on levers and how to calculate their mechanical advantage. Work and energy in simple levers, and how these concepts can be applied to the human arm, are also introduced. Finally, a model arm is constructed in the last lesson, and experiments are conducted on mechanical advantage, applied forces, and work and energy.
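Since mechanical advantage and torque are the core calculations in these lessons, a minimal sketch of the arithmetic may help make it concrete. This is an illustration only, not part of the curriculum; the arm lengths and effort force below are assumed values.

```python
# A minimal sketch of the lever calculations described in this lesson.
# The arm lengths and force are hypothetical values chosen for illustration.

def mechanical_advantage(effort_arm_m: float, load_arm_m: float) -> float:
    """Ideal mechanical advantage of a lever: effort arm / load arm."""
    return effort_arm_m / load_arm_m

def torque(force_n: float, arm_m: float) -> float:
    """Torque about the fulcrum: force times lever arm, in newton-metres."""
    return force_n * arm_m

effort_arm = 0.40    # metres from fulcrum to the applied effort (assumed)
load_arm = 0.10      # metres from fulcrum to the load (assumed)
effort_force = 50.0  # newtons of applied effort (assumed)

ma = mechanical_advantage(effort_arm, load_arm)
print(f"Ideal mechanical advantage: {ma:.1f}")                        # 4.0
print(f"Effort torque: {torque(effort_force, effort_arm):.1f} N*m")   # 20.0
print(f"Load the lever can balance: {effort_force * ma:.1f} N")       # 200.0
```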
The woodworking lesson has to do with how design evolves - whether it's the limb of an animal or an object being built, good design is EVERYTHING. If it doesn't work in an animal, the animal dies or that particular strain goes extinct. If it doesn't work in an object, the object does not sell; money, time and resources are lost - the capitalist's version of extinction. The elements of good design and how to evaluate design principles are the core of this arc of lessons, and the construction of a well-designed oven push stick to keep from burning your fingers is the end result.
The Biology section covers the anatomy and physiology of limb movement, the evolution of limbs and the different types of limbs in the animal kingdom. The four lessons are part of an integrated project that provides knowledge of limbs that will apply to the culminating project of creating a model limb.
This unit is brought to you by Oscar Dominguez (Science), Howard White (CTE), and John Moorhead (Science) with support from the CTE Online curriculum leadership team and detailed coordination provided by the Course Team Lead Gregg Witkin.
The speed of a snake depends on the species and size. On average, the fastest a snake moves is between 5 and 8 mph. The world's fastest snake is the Black Mamba, which can reach speeds of 10 to 12 mph in short bursts.
Snakes utilize different forms of movement, depending on their environment. The most common form is lateral undulation, which is used on land and water. The snake flexes its body from left to right and creates a wave-like motion that propels it forward, moving it about two body lengths per second.
Terrestrial lateral undulation uses the same motion, but the snake pushes against small objects in its path, such as rocks, trees or clumps of dirt. At each point where the snake's body touches the objects, it thrusts forward, creating more momentum and faster speeds. This is the fastest way snakes move, gaining up to eight body lengths a second.
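To put these body-length rates in familiar units, here is a minimal sketch of the conversion. The article quotes rates but not a snake's length, so the 0.5 m body length below is an assumption for illustration.

```python
# A minimal sketch converting "body lengths per second" into absolute speed.
# The 0.5 m snake length is an assumed value, not taken from the article.

MPH_PER_MPS = 2.23694  # 1 m/s expressed in miles per hour

def speed_mph(body_length_m: float, lengths_per_second: float) -> float:
    return body_length_m * lengths_per_second * MPH_PER_MPS

snake_length = 0.5  # metres (assumed)
print(f"Lateral undulation (2 BL/s): {speed_mph(snake_length, 2):.1f} mph")   # ~2.2
print(f"Pushing off objects (8 BL/s): {speed_mph(snake_length, 8):.1f} mph")  # ~8.9
```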
Snakes use sidewinding when surfaces are smooth and they lack objects to push against, such as on sand dunes. A sidewinding snake uses the left-right motion of lateral undulation, but one half of the body pushes against the ground while the other is lifted into the air. The alternating motion moves the snake forward like a rolling wave.
RELATION OF SOIL DEGRADATION TO NUTRIENT BALANCE AS
AN INDICATOR OF SUSTAINABLE AGRICULTURE*)
Soil degradation and nutrient balance are important for ecological nutrient management in sustainable agriculture. Soil degradation in the form of soil erosion is the single most important environmental concern in developing countries. Eroded sediment also acts as both a physical and a chemical pollutant, and it has become an ecological, social and economic problem. Loss of nutrients through erosion from agricultural areas on sloping land is one of the major causes of land degradation in Indonesia, yet the process by which nutrients are depleted has often not been addressed by appropriate studies. Relating soil degradation to nutrient balance therefore becomes an important tool for monitoring sustainable agriculture.
*) Article in the journal JOGLO Vol. 23 No. 1, July 2011. ISSN: 0215-9546
**) Lecturer, Faculty of Agriculture, UNISRI Surakarta
Full article: artikel joglo_riyo samekto
The total number of species declared officially Extinct is 784 and a further 65 are only found in captivity or cultivation. Of the 40,177 species assessed using the World Conservation Union (IUCN) Red List criteria, 16,119 are now listed as threatened with extinction. This includes one in three amphibians and a quarter of the world’s coniferous trees, on top of the one in eight birds and one in four mammals known to be in jeopardy.
The 2006 IUCN Red List of Threatened Species brings into sharp focus the ongoing decline of the earth’s biodiversity and the impact mankind is having upon life on earth. Widely recognized as the most authoritative assessment of the global status of plants and animals, it provides an accurate measure of progress, or lack of it, in achieving the globally agreed target to significantly reduce the current rate of biodiversity loss by 2010.
“The 2006 IUCN Red List shows a clear trend: biodiversity loss is increasing, not slowing down,” said Achim Steiner, Director General of the World Conservation Union (IUCN). “The implications of this trend for the productivity and resilience of ecosystems and the lives and livelihoods of billions of people who depend on them are far-reaching. Reversing this trend is possible, as numerous conservation success stories have proven. To succeed on a global scale, we need new alliances across all sectors of society. Biodiversity cannot be saved by environmentalists alone – it must become the responsibility of everyone with the power and resources to act,” he added.
Melting icecaps …
Polar bears (Ursus maritimus) are set to become one of the most notable casualties of global warming. The impact of climate change is increasingly felt in polar regions, where summer sea ice is expected to decrease by 50-100% over the next 50-100 years. Dependent upon Arctic ice-floes for hunting seals and highly specialized for life in the Arctic marine environment, polar bears are predicted to suffer more than a 30% population decline in the next 45 years. Previously listed by IUCN as a conservation dependent species, the polar bear moves into the threatened categories and has been classified as Vulnerable. (Clarifications on the IUCN Red List threat categories can be found in the Notes to Editors).
… dying deserts …
Humankind’s global footprint on the planet extends even to regions that would appear to be far removed from human influence. Deserts and drylands may appear relatively untouched, but their specially adapted animals and plants are also some of the rarest and most threatened. Slowly but surely deserts are being emptied of their diverse and specialized wildlife, almost unnoticed.
The main threat to desert wildlife is unregulated hunting, followed by habitat degradation. The dama gazelle (Gazella dama) of the Sahara, already listed as Endangered in 2004, has suffered an 80% crash in numbers over the past 10 years because of uncontrolled hunting parties, and has been upgraded to Critically Endangered. Other Saharan gazelle species are also threatened, and they seem destined to suffer the fate of the scimitar-horned oryx (Oryx dammah) and become Extinct in the Wild.
Asian antelopes face similar pressures. The goitered gazelle (Gazella subgutturosa) is widespread across the deserts and semi-deserts of central Asia and the Middle East and until a few years ago had substantial populations in Kazakhstan and Mongolia. Both countries have seen sharp declines because of habitat loss and illegal hunting for meat. The gazelle has been reclassified from Near Threatened to Vulnerable.
… and empty oceans
A key addition to the 2006 Red List of Threatened Species is the first comprehensive regional assessment of selected marine groups.
Sharks and rays are among the first marine groups to be systematically assessed, and of the 547 species listed, 20% are threatened with extinction. This confirms suspicions that these mainly slow-growing species are exceptionally susceptible to over-fishing and are disappearing at an unprecedented rate across the globe.
The plight of the angel shark (Squatina squatina) and common skate (Dipturus batis), once familiar sights in European fish-markets, illustrates dramatically the recent rapid deterioration of many sharks and rays. They have all but disappeared from sale. The angel shark (upgraded from Vulnerable to Critically Endangered) has been declared extinct in the North Sea and the common skate (upgraded from Endangered to Critically Endangered) is now very scarce in the Irish Sea and southern North Sea.
As fisheries extend into ever deeper waters, the deep bottom-dwelling gulper shark (Centrophorus granulosus) is listed as Vulnerable, with local population declines of up to 95%. The fishing pressure on this species, for its meat and rich liver oil, is well beyond its reproductive capacity and any sustainable level of fishing. Populations are destined to decline in the absence of international catch limits.
“Marine species are proving to be just as much at risk of extinction as their land-based counterparts: the desperate situation of many sharks and rays is just the tip of the iceberg,” said Craig Hilton-Taylor of the IUCN Red List Unit. “It is critical that urgent action to greatly improve management practices and implement conservation measures, such as agreed non-fishing areas, enforced mesh-size regulations and international catch limits, is taken before it is too late.”
Freshwater fish assumes top slot on extinction list
Freshwater species are not faring any better. They have suffered some of the most dramatic declines: 56% of the 252 endemic freshwater Mediterranean fish are threatened with extinction, the highest proportion of any regional freshwater fish assessment so far. Seven species, including the carp relatives Alburnus akili in Turkey and Telestes ukliva from Croatia, are now Extinct. Of the 564 dragonfly and damselfly species so far assessed, nearly one in three (174) are threatened, including nearly 40% of endemic Sri Lankan dragonflies.
“We need fish for food, but human activities in watersheds, through forest clearance, pollution, water abstraction and eutrophication are major factors influencing water quality and quantity. This has a major impact on freshwater species, and in turn on the wellbeing of riparian communities,” said Dr Jean-Christophe Vié, Deputy Coordinator, IUCN Species Programme.
In East Africa, human impacts on the freshwater environment threaten over one in four (28%) freshwater fish. This could have major commercial and dietary consequences for the region. For example, in Malawi, 70% of animal protein consumed comes from freshwater fish. The lake trout or Mpasa (Opsaridium microlepis) from Lake Malawi is fished heavily during its spawning runs upriver but has suffered a 50% decline in the past ten years, due to siltation of its spawning grounds and reduced flows caused by water abstraction. It is now listed as Endangered.
As well as being an important source of food, freshwater ecosystems are essential for clean drinking water and sanitation. Over a billion people worldwide still do not have access to safe water. The continuing decline in wetlands and freshwater ecosystems will make it increasingly difficult to address this need and maintain existing supplies.
With their semi-aquatic habitat, dragonflies are proving to be useful indicators of habitat quality above and below the water surface. In the densely populated Kenyan highlands, where many rivers originate, the Endangered dragonfly Notogomphus maathaiae of mountain forest streams is being promoted as a flagship species to create awareness for their potential as “guardians of the watershed”. Protecting its riverside forests will also help the farmers of the foothills, by guaranteeing soil stability and a steady flow of water. It is very appropriate that this dragonfly has been named in honour of African Nobel Prize winner Wangari Maathai, a tireless campaigner for the protection of the world’s natural resources in the fight against poverty.
95% decline of hippo populations in Democratic Republic of Congo – now listed as Vulnerable
Larger freshwater species, such as the common hippopotamus (Hippopotamus amphibius), are also in difficulty. One of Africa's best known aquatic icons, it has been listed as threatened for the first time and is now classified as Vulnerable, primarily because of a catastrophic decline in the Democratic Republic of the Congo (DRC). In 1994 the DRC had the second largest population in Africa – 30,000, after Zambia's 40,000 – but numbers have plummeted by 95%. The decline is due to unregulated hunting for meat and the ivory of their teeth.
“Regional conflicts and political instability in some African countries have created hardship for many of the region’s inhabitants and the impact on wildlife has been equally devastating,” said Jeffrey McNeely, IUCN Chief Scientist.
Another casualty of political instability and unrest is the much less well known pygmy hippo (Hexaprotodon liberiensis), restricted to only a handful of West African countries. This shy forest animal was already classified as Vulnerable, but illegal logging and the inability to enforce protection in core areas has pushed it into ever decreasing fragments of forest. It is now classified in the higher threat category Endangered.
More comprehensive picture of threatened Mediterranean plants
The 2006 Red List includes additional species from the Mediterranean region, one of the world's 34 biodiversity hotspots with nearly 25,000 species of plants – of which 60% are found nowhere else in the world. In the Mediterranean, the pressures from urbanization, mass tourism and intensive agriculture have pushed more and more native species, like the bugloss Anchusa crispa and the centaury Femeniasia balearica (both Critically Endangered), towards extinction. The bugloss is only known from 20 small sites and fewer than 2,200 mature centaury plants remain.
The IUCN Red List – a wake up call to spearhead biodiversity action
But what can be done to halt and reverse the decline of the Earth’s biodiversity on which so much of our own well-being depends?
The IUCN Red List of Threatened Species acts as a wake up call to the world by focusing attention on the state of our natural environment. It has become an increasingly powerful tool for conservation planning, management, monitoring and decision-making. It is widely cited in the scientific literature as the most suitable system for assessing species extinction risk.
In addition to being the most reputable science-based decision-making tool for species conservation on a global scale, it is being more widely adopted at the national level. At least 57 countries now use national Red Lists, following IUCN criteria to focus their conservation priorities.
Conservation does work
Thanks to conservation action, the status of certain species has improved: proof that conservation does work.
Following large recoveries in many European countries, the numbers of white-tailed eagles (Haliaeetus albicilla) doubled in the 1990s and it has been downlisted from Near Threatened to Least Concern. Enforcement of legislation to protect the species from being killed, and protective measures to address threats from habitat changes and pollution have resulted in increasing populations.
On Australia’s Christmas Island, the seabird Abbott’s booby (Papasula abbotti) was declining due to habitat clearance and an introduced invasive alien species, the yellow crazy ant (Anoplolepis gracilipes), which had a major impact on the island’s ecology. The booby, listed as Critically Endangered in 2004, is recovering thanks to conservation measures and has now moved down a category to Endangered.
Other plants and animals highlighted in previous Red List announcements are now the focus of concerted conservation actions, which should lead to an improvement in their conservation status in the near future.
The 300 kg Mekong Catfish (Pangasianodon gigas) of South-east Asia is one of the largest freshwater fish in the world and was listed as Critically Endangered in 2003. Adopted as one of four flagship species by the Mekong Wetlands Biodiversity and Sustainable Use Programme, it is the focus of regional co-operation on fisheries management issues and conservation activities.
Swift action since the dramatic 97% population crash of the Indian Vulture (Gyps indicus), listed as Critically Endangered in 2002, means that the future for this and related species is more secure. The veterinary drug that unintentionally poisoned them, diclofenac, is now banned in India. A promising substitute has been found and captive breeding assurance colonies will be used for a re-introduction programme.
Many other species, such as the humphead wrasse (Cheilinus undulatus) (listed as Endangered since 2004) and the Saiga antelope (Saiga tatarica) (listed as Critically Endangered since 2002), are also the subject of concerted conservation campaigns.
“These examples show that conservation measures are making a difference,” concluded Achim Steiner. “What we need is more of them. Conservation successes document that we should not be passive by-standers in the unfolding tragedy of biodiversity loss and species extinction. IUCN together with the many actors in the global conservation community will continue to advocate greater investments in biodiversity and to mobilize new coalitions across all sectors of society.”
Back to Basics – Macular Degeneration
This week we are looking at the Retina & some of the conditions which affect the function of the retina. Today the focus is on Macular degeneration.
What is Macular Degeneration?
Macular Degeneration is also referred to as Age Related Macular Degeneration or AMD, as it generally affects people as they age. There is a genetic component to macular degeneration, and the onset can vary from person to person depending on the condition of their macula. The macula (fovea) is a small area in the centre of the retina. It is the part of the eye that we use to read and see fine detail. The macula consists of several layers of tiny cells, and as we get older, some of these can fail to function properly. This can lead to a build-up of deposits in the retina and the growth of new blood vessels. Unfortunately, these new blood vessels are fragile and bleed easily. If this happens, there may be a sudden loss of central vision and objects may appear distorted. Below is an image of a healthy retina, as seen through an ophthalmoscope.

Macular Degeneration presents in one of two forms: Wet or Dry. Dry AMD is the more common of the two. It is an early stage of the disease related to ageing and thinning of macular tissues or the depositing of cellular debris, called drusen, in the macula, creating shadowy or blurry spots in the central vision field. In its later stages, the blind spots can grow and darken.

Wet AMD is a more advanced and damaging form of the disease. As the retina is starved of oxygen, new blood vessels grow behind the retina, underneath the macula, in order to resupply oxygen to the retina. These new blood vessels are weak and fragile, leading them to leak blood and fluid, which causes the macula to swell. Vision loss can happen rapidly at this stage if the blood vessels haemorrhage. The damage caused by the blood vessels can also scar the retina. Symptoms can include distortion in vision; straight lines may appear wavy. Blind spots can also appear in the central vision. [https://www.hollows.org.nz]
What causes Macular Degeneration?
As mentioned, there is a genetic component to AMD, however other factors that increase the risk of AMD have been identified. Our retinal specialist Dr Daniel Polya reports that “smoking, high blood pressure and a family history of the disease can increase the risk of developing macular degeneration”. Dr Polya also recommends regular eye exams to help detect macular degeneration early, which could prevent permanent vision loss.
What are the symptoms of Macular Degeneration?
As the macula is responsible for central vision, symptoms of AMD are often detected soon after the onset of the disease. Some of the possible symptoms include:
- Loss of ability to see objects clearly
- Gradual loss of colour vision
- Distorted or blurry vision
- A dark or empty area appearing in the centre of vision
How is Macular Degeneration diagnosed?
Modern technology has helped in the early and prompt diagnosis of AMD. When looking for AMD, your ophthalmologist will perform a dilated retinal examination & may request some additional tests such as an OCT (Optical Coherence Tomography) scan, a retinal photo or a fundus fluorescein angiography (FFA). Each of these tests is used to examine the tiny blood vessels in the retina. In an FFA procedure, a dye called fluorescein is injected into the patient's vein to examine the blood flow of the retina. There is also a very simple test you can perform at home called the Amsler Grid (see image below). If the grid lines are wavy, broken or distorted, or if there are blurred or missing patches, this may be a symptom of macular degeneration.
To download your own Amsler grid, click on the following link: http://www.mdfoundation.com.au/resources/1/amsler_grid.pdf
How is Macular Degeneration treated?
The Macular Disease Foundation Australia® advises that there are currently no medical treatments available for dry macular degeneration; however, a substantial amount of research is being conducted to find one. A large American study, the Age-Related Eye Disease Study (AREDS), found that certain combinations of vitamins could reduce the chance of AMD getting worse by about one quarter. This was indicated for both Dry and Wet AMD. In addition to vitamin supplements, it is advised to quit smoking & make certain dietary changes. There are several foods which promote macular health, including nuts & fish oils.

For Wet AMD the most common form of treatment is with a class of drugs referred to as anti-VEGF. A protein called Vascular Endothelial Growth Factor (VEGF) is predominantly responsible for the leaking and growth of the new blood vessels in wet macular degeneration. To slow or stop this process, various drugs that block this protein (called anti-VEGFs) may be injected into the eye. Clinical trials have shown that the use of anti-VEGF drugs maintains vision in the vast majority of wet macular degeneration patients. The usual treatment regimen begins with monthly injections for three months; then, to maintain control of the disease, injections must usually be continued on an indefinite basis. The interval between these ongoing injections is determined on an individualised basis by the eye specialist in consultation with the patient. Current anti-VEGF drugs include Lucentis® (ranibizumab), Eylea® (aflibercept) and Avastin® (bevacizumab); the choice of the most appropriate drug should be discussed with the eye specialist.

For any inquiries regarding the treatment of Macular Degeneration, and to make an appointment with our retinal specialist Dr Dan Polya, please phone our rooms on (02) 9241 2913.
English Grammatical Terms
- Verb Phrase – A phrase that functions as a verb, consisting of a main verb and any auxiliaries. Example: "I am explaining the meaning of a verb" – "…explaining the meaning…" is the verb phrase within the full sentence.
- Verb Tense – The form a verb takes to indicate the time of an action or state: past, present, or future.
- Vocabulary – A collection of words within any language that can be organized into specific groups, or a list of words that a person knows or is learning.
- Voice – The way in which a sentence is expressed – either "Active" or "Passive" – so that the focus is either on the doer or receiver of the action ("Active") or on the action itself ("Passive") – Also an example of grammar wherein the terms do not immediately make logical sense, and seem to be backwards.
- Vowel – The letters in the alphabet which are represented by the symbols: Aa, Ee, Ii, Oo, Uu, and sometimes Yy
Yellow Fever and the Limits of Cuban Independence
On April 25, 1898, the United States declared war on Spain and promptly invaded the Spanish colony of Cuba. Why? Drawing on our junior high school history classes, many of us would respond, “To ‘Remember the Maine,’” the U.S. warship that exploded and sank in Havana’s harbor a few months earlier. But, as Mariola Espinosa, a newly-recruited assistant professor in the Section of the History of Medicine, points out, the U.S. government had already decided on war the preceding fall, before the Maine was ever sent to Cuba. At that time, U.S. minister to Spain Stewart Woodford was charged with laying out the grounds for war to Europe’s other powers. First and foremost, Woodford explained, was yellow fever.
Espinosa tells the story of how the devastating power of yellow fever dramatically transformed and defined the relationship between Cuba and the United States in her new book, Epidemic Invasions: Yellow Fever and the Limits of Cuban Independence. Yellow fever was endemic to nineteenth century Havana, and commercial traffic with that port regularly brought the fearsome disease to the U.S. South. When an epidemic again spread across the panic-stricken southern states in 1897, it underscored the fact that Spain was unwilling or unable to control the source of infection in Cuba, and the U.S. government at last decided to take the matter into its own hands.
U.S. military doctors eventually succeeded in eliminating yellow fever from the island, but Espinosa reveals that this was no act of beneficence: most Cubans were immune to the disease from childhood infection and suffered little from it. And concerns in the United States about yellow fever continued to powerfully shape the limits of Cuban independence: the United States demanded that Cuba pledge in its constitution to maintain the public health measures established during the occupation before withdrawing its troops. Only after the U.S. Army again occupied the island from 1906 to 1909 did Cuban government officials and sanitarians come to realize that keeping the island free of yellow fever was critical to its independence, but they refused to accept that the U.S. domination that made these efforts so paramount was legitimate.
By illuminating this history, Espinosa brings to the Americas for the first time the insights garnered by historians of colonial public health in other parts of the world. Scholars of European colonial public health have long recognized that disease-control policies primarily served the interests of the colonizers rather than the colonized: public health was at once a means of protecting imperial interests in trade and an instrument by which social control was exerted and justified. She reveals that these insights hold no less true for the often informal imperialism of the United States than for the overt colonialism of the European powers. The U.S. programs of sanitation and disease eradication in Latin America and beyond that began with the efforts in Cuba were not charitable endeavors, but served to eliminate threats to the United States and the continued expansion of its influence in the world.
This fall, her first at Yale, Espinosa has been sharing her expertise on the relationships between disease, medicine, and empire in the Caribbean with students in a graduate seminar; in the spring, she will teach an undergraduate course on the history of medicine in Latin America and another on historical methods. She is now in the beginning stages of a project that will examine how disease affected the maintenance of empire and competition between imperial powers across the Caribbean, with the twin goals of comparing the Cuban experience with those of people elsewhere in the Caribbean and finding the interconnections that the participants and contemporary observers made among these critical episodes in the history of the region.
“The Caribbean,” Espinosa recently explained, outlining her ongoing research program, “provides me with numerous angles of inquiry into the role of disease in relations between peoples in an age of empires. There you have a diverse and often hostile environment, where peoples from all parts of the globe converge in a limited space and compete with each other. So I can look at, for example, the role of yellow fever in the Spanish loss of empire in Cuba in 1898, and evaluate that alongside the British incursions into the French colonies during the French Revolution. The Spanish Caribbean has traditionally been studied in isolation; my goal is to show how these colonies, while having different metropoles, were not as isolated as we study them.”
“The Caribbean is a particularly challenging disease environment for imperial powers. It became a crossroads between Europe, Africa, and North and South America, and each region contributed its diseases to this environment, making it completely inhospitable to outsiders. At the same time it was very desirable and became the object of struggle between and among all the major imperial powers. I want to know how each of them coped with the extraordinarily lethal combination of diseases found there, and how it affected the relationship between the colonized and their colonizers. Did the locals use this to their advantage? Not only will I do this by comparing historical episodes, but I will also trace the ways in which conceptions of disease and reactions to them in the region change over time and how previous experiences influence later ones.”
Professor Espinosa’s plans at Yale include convening an international conference in New Haven that will bring together scholars exploring medicine, colonialism, and imperialism in Asia and Africa with scholars whose research focuses on the history of medicine and public health in Latin America. |
The Nile Hippopotamus is a good swimmer and diver - even graceful under water. When submerged it closes its slitlike nostrils and ears. It stays submerged for 3 to 5 minutes at a time normally, but can stay under for 15 to 20 minutes if necessary. Hearing, sight and smell are all well developed, and all function with only the top of the skull above the water surface. The incisors and canines are tusk-like and grow continuously. The incisors are rounded, smooth, and widely separated. The barrel-like body is covered by short, fine hairs, but it appears naked. All four toes on each foot support the weight of the body. The skin is glandular and exudes droplets of moisture that contain red pigment. Light reflected through this appears red, giving it the name 'blood sweat.' The weight of males ranges from 3,528 to 7,056 lbs. Female weight is from 1,444 to 5,168 lbs. Birth weight is from 60 to 110 lbs. The young can swim before they can walk, and nurse under water.
The word 'hippopotamus' is Greek for 'river horse,' although this animal is not closely related to the horse at all.
Location: Off Exhibit
The Nile hippopotamus range is Kenya, Somalia, and Tanzania.
The Nile hippopotamus inhabits deep water with adjacent reed beds and grassland.
The Nile hippopotamus has a gestation period of 227 to 240 days.
The Nile hippopotamus has one calf.
Fights between males are vicious and can last for several hours. Many bulls have scars on their backs and bellies from such fights. Deaths are common. Bulls display and threaten by defecating while swishing the tail back and forth, creating a large cloud of fecal matter in the water. Most eating is done on land at night, often several miles from the river or lake. Hippos return to the water by dawn to spend the day digesting and resting. They have a three-chambered, non-ruminating stomach.
Nile hippopotamus females mate in February and August, always at the end of the dry season. The young can swim before they can walk, and nurse under water. Sexual maturity occurs at 6 to 13 years for males and 7 to 15 years for females, but in captivity both sexes mature in only 3 to 4 years.
Mostly grass, aquatic plants, reeds
Hay and grain, vegetables |
The African continent is dotted with weather stations busy collecting data on rainfall, temperature, wind conditions, and so forth. When this meteorological information is widely shared, it helps government policymakers and health professionals predict, prevent or mitigate weather-influenced diseases such as malaria, meningitis and plague.
The problem is, this critical data is not being widely shared in Africa, mainly because of the prohibitive costs of acquiring and sharing it. Now, scientists at Columbia University and elsewhere are hoping to change that at a global conference on climate and health to be held in Addis Ababa, Ethiopia, beginning April 4.
Across Africa, it’s easy to see the direct connection between climate conditions and disease. In humid parts of the continent more rainfall creates conditions where malarial mosquitoes can live longer and infect more people. In Africa’s semi-arid Sahel region, the hot dusty dry season makes people more susceptible to epidemic meningitis.
Finding ways to share information that connects climate and health data is one mission of Columbia University’s International Research Institute for Climate and Society, or IRI. It’s no easy task. IRI climatologist Bradfield Lyon explains that in Kenya, for example, where some 1,500 weather stations collect detailed data, fewer than 50 make their measurements publicly available. This forces scientists to rely on global satellite data, which produce less accurate models for local climates.
“There are different types of data sets we can view as climate scientists, like that which is remotely sensed from satellites," says Lyon. "There are certain data analyses that are put together that sort of interpolate between stations what is going on. But that’s not very useful information at a specific location.”
Indeed, while global or even continental climate data may be of interest in the abstract, people need to know about local climate trends. The best way to do that, says IRI epidemiologist Judy Omumbo, is to have daily records concerning rainfall, humidity, winds and other weather parameters over decades. Those measurements can then be correlated with local health records to detect overall trends.
“But because there are financial constraints, these often don’t get collected and put together into a central data base," says Omumbo. "Now with the passage of time there is very high likelihood that this information will get lost.”
Obtaining climate information can be very expensive in Africa because weather services often rely on the sale of data to generate revenues.
"So if you want to have the information, you’ve got to pay for it," says Omumbo. "And at the moment the going rate is something like $2 per parameter per day. So, if you are looking at picking up daily information or hourly information, it really adds up and becomes very expensive.”
Omumbo has had personal experience with these difficulties. For example, she and her colleagues at IRI learned that a tea plantation in the Kenyan highlands had been taking daily weather readings for over 50 years, but the plantation wanted to charge a high fee for releasing the data.
Then Omumbo learned that Kenya’s national weather service also had the data. Her team asked officials there to provide the data free of charge in the spirit of scientific collaboration for the common good. Eventually, they did, and Omumbo was able to use the data to confirm that temperatures in western Kenya have been rising.
“So a big part of our work is to build the awareness about the power of information and how by sharing information, you don’t actually lose," she says. "It is a gain, a benefit to society.”
At the conference on climate and health in Addis Ababa, Ethiopia, Omumbo is presenting policymakers with case studies and other empirical evidence that show how sharing climate data can help mitigate epidemics.
“And this is actually the real agenda that we are having in Addis. It’s to bring together policymakers in Africa and to look at the problems and the gaps, particularly about data information. We also have policy people in the health sector coming as well, representatives from the regional office of the WHO. We have representatives from the ministries of health at the highest level. If you do not address climate, then we cannot address the whole development issue as well. For me, development is empowering people to be able to do for themselves."
Speed of Light
by Ron Kurtus (revised 6 October 2007)
The speed or velocity of light is approximately 186,000 miles per second or 300,000 kilometers per second in a vacuum.
All electromagnetic waves, including visible light, travel at that speed. With such an enormous speed, it has been difficult to devise experiments to measure it. The speed of the electromagnetic waves slows down when they pass through matter.
According to the Theory of Relativity, the speed of light is the fastest at which anything can travel.
Questions you may have include:
- How is the speed of light measured?
- What is this speed's relationship to matter?
- Can things go faster than light?
This lesson will answer those questions. Useful tool: Units Conversion
Measuring the speed of light
Since the speed of light is so great, it is very difficult to measure.
Note that the terms speed of light and velocity of light are used. Either one is acceptable, but you must remember that speed means how fast something is going, while velocity means how fast it is going in a given direction.
Echo method not practical
It was thought that the velocity of light could be measured the same way as the velocity of sound. A common method for measuring the velocity of sound is to time how long an echo takes to return, then divide the distance the sound travels there and back by that time. Since velocity equals distance divided by time:
c = d/t
- c is the speed of light (light's speed is always denoted as c)
- d is the distance traveled
- t is the time it takes to go that distance
- d/t is d divided by t
But the velocity of light is so large — 186,000 miles/sec (300,000 km/sec) — that in 1/1000 of a second, light would travel from Milwaukee to Chicago and back (or from Los Angeles to San Diego and back). That is over 90 miles (150 km) one way.
If you used a timer or shutter that could measure 1/100,000 of a second, the method would be more practical, since the round trip would then be only a few kilometers.
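To see why, here is a minimal sketch that applies c = d/t to work out how far light travels during one tick of a timer; the two timer resolutions are the ones mentioned above.

```python
# Distance light covers during one timer tick, from d = c * t.
C_KM_PER_S = 300_000  # approximate speed of light, km/s

for tick_s in (1 / 1_000, 1 / 100_000):  # timer resolutions discussed above
    distance_km = C_KM_PER_S * tick_s
    print(f"In {tick_s:.5f} s light travels about {distance_km:,.0f} km")
# In 0.00100 s: about 300 km (Milwaukee to Chicago and back)
# In 0.00001 s: about 3 km (a far more practical laboratory distance)
```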
One clever early method of measuring the speed of light was to shine a light through a pinhole in a spinning disk. The light was then reflected off a mirror that was some distance away.
Since light travels so fast, it would normally be reflected back through the pinhole (provided everything was lined up properly). By adjusting the speed of the spinning disk and/or the distance, the pinhole could be made to move enough that the light would not pass through it. Then, by calculating the size of the pinhole, the speed of the disk, and the distance to the mirror, the speed of light could be calculated.
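As a rough illustration of this idea, the sketch below estimates how fast the disk must spin for the pinhole to move one full pinhole-width during the light's round trip, so that the returning beam is blocked. The mirror distance, pinhole size, and pinhole position are assumptions chosen for the example, not values from any historical experiment.

```python
import math

C = 3.0e8                   # approximate speed of light, m/s
mirror_distance = 1_000.0   # m (assumed)
pinhole_width = 1.0e-3      # m, a 1 mm pinhole (assumed)
radius = 0.10               # m, pinhole's distance from the disk center (assumed)

# Round-trip travel time of the light to the mirror and back.
round_trip = 2 * mirror_distance / C            # about 6.7 microseconds

# The pinhole must move its own width in that time to block the return beam.
rim_speed = pinhole_width / round_trip          # m/s
revs_per_second = rim_speed / (2 * math.pi * radius)
print(f"Required disk speed: about {revs_per_second:,.0f} rev/s")
# -> Required disk speed: about 239 rev/s
```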
With modern electronics, the speed of light can now be measured in a physics lab.
One way to measure the speed of red light is to use equipment that includes an LED (light-emitting diode) that emits a regular series of pulses of red light that are only 20 nanoseconds in duration.
(A nanosecond is one-billionth of a second, or 1/1,000,000,000 second. It is also written as 10⁻⁹ seconds, 10^-9, or 1e-9, where the −9 indicates the number of zeros in the denominator.)
That means the pulse of light blinks on for 20 billionths of a second — 20/1,000,000,000 second — and blinks 40,000 times per second. Having such a short pulse allows the scientist to measure the difference in time it takes light to travel two different paths. If the pulse duration were longer, the paths would have to be longer as well.
By splitting the light beam with a half-silvered mirror, one beam travels to a mirror 10 meters away and then back to a photodiode detector. The other beam is reflected off a mirror only a few centimeters away. The time difference for the two beams is about 67 nanoseconds, which can be displayed on a regular dual beam oscilloscope.
The total distance the light travels is 20 meters, which equals 0.02 kilometer (20/1000).
The speed of light is then:
c = 0.02 km / (67 × 10⁻⁹ s) ≈ 298,500 kilometers per second
This is a fairly accurate reading and pretty close to the actual speed of 299,792 km/s.
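The arithmetic is easy to check; this short sketch simply plugs the path difference and measured time difference quoted above into c = d/t.

```python
# Time-of-flight calculation from the LED-pulse experiment described above.
path_difference_m = 20.0    # long path minus short path, in meters
time_difference_s = 67e-9   # 67 ns, as read off the oscilloscope

c = path_difference_m / time_difference_s   # m/s
print(f"Measured speed of light: {c / 1000:,.0f} km/s")
# -> Measured speed of light: 298,507 km/s (accepted value: 299,792 km/s)
```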
Speed through matter
The speed of electromagnetic waves passing through transparent matter is slower than it is in a vacuum. Glass is transparent to visible light, radio waves will easily pass through non-metals, and x-rays pass through most materials except lead.
Most measurements of the speed of light are made in the atmosphere. Since the effect on the speed when passing through air is so very small, the speed of light in air is almost the same as it is in a vacuum. The difference is negligible.
The reason electromagnetic waves travel slower through transparent materials is the effect that the electrons have on the waves. They act somewhat like a "friction" on the waves.
Light through glass
The fact that light moves slower through matter can be seen when visible light passes through glass. If you shine a light at an angle through a piece of glass, the light beam will be bent or refracted.
(See Refraction of Light for more information.)
The ratio of the speed of light in vacuum divided by the speed of light in the material is called the index of refraction for the material. The index of refraction of glass or other material indicates how much slower the light travels through the material than in a vacuum.
Typically, the index of refraction of glass is around 1.5. That means the speed of light in such glass is about 67% (1/1.5) of its speed in a vacuum.
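Since the index of refraction is just the ratio of the vacuum speed to the speed in the material, the speed in any material is one division away. A minimal sketch, using a few representative (approximate) indices:

```python
C_VACUUM_KM_S = 299_792  # speed of light in a vacuum, km/s

# Approximate indices of refraction, for illustration only.
materials = {"air": 1.0003, "water": 1.33, "glass": 1.5}

for name, n in materials.items():
    v = C_VACUUM_KM_S / n
    print(f"{name:6s}: n = {n:<6} -> {v:,.0f} km/s ({100 / n:.0f}% of vacuum speed)")
# glass : n = 1.5    -> 199,861 km/s (67% of vacuum speed)
```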
Although the speed of an electromagnetic wave through matter can be up to 50% less than its speed through a vacuum, scientists have been able to reduce the speed far more dramatically in special situations. This was first done in 1999.
Danish physicists performed an experiment in which they slowed light down to only 38 miles per hour, or about 61 kilometers per hour. They did this by sending a beam through a cloud of sodium atoms cooled to near absolute zero (−273°C or −460°F). They achieved this low temperature by using lasers to slow down the atoms, producing a state of matter known as a Bose-Einstein condensate. (A full explanation goes well beyond the scope of this course.)
Speed is the maximum
The speed of light is supposed to be the maximum speed at which matter can travel.
In fact, according to Einstein's Theory of Relativity, strange things happen to matter as it approaches the speed of light. Matter contracts along its direction of motion — at 90% of the speed of light, a ruler would appear noticeably shortened. Also, the mass of the matter starts to increase.
Another interesting phenomenon happens when matter approaches the speed of light: time for the matter slows down. In other words, if you were traveling through space near the speed of light, you would age more slowly than a person on Earth. A year-long trip might seem like only 10 minutes to you, while it would seem like a full year to everyone else.
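The size of both effects is governed by the Lorentz factor, γ = 1/√(1 − v²/c²) — a standard result of special relativity that the lesson doesn't state explicitly. A quick sketch evaluating it at a few speeds:

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

for beta in (0.5, 0.9, 0.99):
    g = lorentz_gamma(beta)
    # A moving ruler appears contracted to 1/gamma of its rest length,
    # and a moving clock runs slow by the same factor.
    print(f"v = {beta:.2f}c: gamma = {g:.2f}, "
          f"ruler appears {100 / g:.0f}% of its rest length")
# v = 0.90c: gamma = 2.29, ruler appears 44% of its rest length
```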
Time travel or going at "warp speed" as is seen in such TV shows as Star Trek is not physically possible, as far as we know.
The speed of light is approximately 186,000 miles per second or 300,000 kilometers per second in a vacuum. All electromagnetic waves, including visible light, travel at that speed. Modern electronics allow the measurement of the speed of light in a physics lab. Light and other electromagnetic waves slow down when they pass through matter. According to the Theory of Relativity, the speed of light is the fastest at which anything can travel. The speed of light is the maximum speed for matter.
For adults, BMI is a measure of whether you're a healthy weight for your height.
For children aged two and over, BMI centile is used. This is a measure of whether the child is a healthy weight for their height, age and sex.
If you have a BMI above the healthy range you are at raised risk of the serious health problems linked to being overweight, such as type 2 diabetes, heart disease and certain cancers. In children, BMI centile indicates whether the child is a healthy weight.
Who can use BMI and BMI centile?
BMI is the best assessment of weight in adults, and BMI centile is the best assessment for children aged two and over.
Some adults who have a lot of muscle may have a BMI above the healthy range. For example, professional rugby players can have an "obese" BMI result despite having very little body fat. However, this will not apply to most people.
BMI for adults
BMI takes into account that people come in different shapes and sizes. That's why a range of BMIs is considered healthy for an adult of any given height.
A BMI above the healthy range indicates that you're heavier than is healthy for your height.
The ranges below only apply to adults — BMI results are interpreted differently for children — and a short worked example follows the list.
- BMI below 18.5: a score this low means that you may be underweight. There are a number of possible reasons for this. Your GP can help you find out more, and achieve a healthy weight. You can learn more by reading Nutrition for underweight adults.
- BMI between 18.5 and 24.9: this is a healthy range. It shows that you're a healthy weight for your height. However, it's still important to eat a healthy, balanced diet and include physical activity in your daily life if you want to maintain a healthy weight.
- BMI between 25 and 29.9: your BMI is above the ideal range and this score means you may be overweight. This means that you're heavier than is healthy for someone of your height. Excess weight can put you at increased risk of heart disease, stroke and type 2 diabetes. It’s time to take action. See the section below for the next step, and learn more in our Lose weight section.
- BMI of 30 or more: a BMI above 30 is classified as obese. Being obese puts you at a raised risk of health problems such as heart disease, stroke and type 2 diabetes. Losing weight will bring significant health improvements, and your GP can help. See the section below and learn more in Lose weight.
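BMI itself is body weight in kilograms divided by the square of height in metres — the standard formula, though the page doesn't spell it out. Here is a minimal sketch of the calculation and the adult ranges above (the example figures are made up):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def adult_bmi_category(score: float) -> str:
    """Classify an adult BMI using the ranges listed above.
    Not valid for children, whose results use BMI centiles."""
    if score < 18.5:
        return "underweight"
    if score < 25:
        return "healthy weight"
    if score < 30:
        return "overweight"
    return "obese"

score = bmi(weight_kg=85, height_m=1.75)  # hypothetical example figures
print(f"BMI {score:.1f}: {adult_bmi_category(score)}")
# -> BMI 27.8: overweight
```

Note that the lower thresholds described next would shift these cut-offs for south Asian and Chinese adults.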
Ethnicity, BMI and diabetes risk
New BMI advice was issued in July 2013 by the National Institute for Health and Care Excellence (NICE) for south Asian and Chinese adults, who have a higher risk of developing type 2 diabetes than white populations. These groups are advised to maintain a BMI lower than the standard 25.
The advice is:
- BMI of 23: Asians with a BMI score of 23 or more are at increased risk of developing type 2 diabetes.
- BMI of 27.5: Asians with a BMI of 27.5 or more are at high risk of developing type 2 diabetes.
Although the evidence is less clear-cut, black people and other minority groups are also advised to maintain a BMI below 25 to reduce their risk of type 2 diabetes.
If you're overweight
If your BMI shows that you're overweight or obese it's time to take action. There’s lots of information, advice and support on NHS Choices that can help you.
- Lose weight has information and advice on achieving a healthy weight
- Food and diet contains information and advice on healthy eating
- Health and fitness is full of fun and practical ideas to help you get into shape
Your GP or practice nurse can also offer advice on lifestyle changes, and may refer you to a weight loss group or discuss other treatments. Find out more in How your GP can help.
They may also measure your waist circumference. This can provide further information on your risk of certain health conditions, such as type 2 diabetes and heart disease. You can learn more by reading Why body shape matters.
Why lose weight?
For adults who are overweight or obese, losing even a little excess weight has health benefits. You’ll lower your risk of serious health problems such as heart disease, stroke, high blood pressure and type 2 diabetes.
Weight loss can also improve back and joint pain. Most people feel better when they lose excess weight.
The key is to make small, long-lasting changes to your lifestyle. If you are overweight or obese, changing your lifestyle so that you eat fewer calories can help you to become a healthier weight. Combining these changes with increased physical activity is the best approach.
To start with, you can cut down on excess calories by swapping high-calorie meals and snacks for healthier alternatives. Read Healthy food swaps to learn more.
Physical activity is an important part of losing weight, as long as it is combined with eating fewer calories. The amount of physical activity that is recommended depends on your age. Adults aged between 19 and 64 should get at least 150 minutes of moderate-intensity aerobic physical activity – such as fast walking or cycling – a week. Adults who are overweight are likely to need to do more than this to lose weight. If it's been a while since you've done any activity you should aim to build up to this recommendation gradually. Find out more in Benefits of exercise.
For more ideas on how to get you and your family active, visit Change4Life.
Height and weight chart
You can also use the height and weight chart to check if you're a healthy weight for your height. The chart is only suitable for adult men and women.
BMI for children
BMI results are interpreted differently for children.
When interpreting BMI for a child, health professionals look at a child's weight in relation to their height, age and sex. The result is called the child’s BMI centile. BMI centile is a good way of telling whether a child is a healthy weight, and is used by healthcare professionals.
Using your child’s BMI centile, a healthcare professional can tell whether they're growing as expected. You may have done something similar when your child was a baby, using the growth charts in the Personal Child Health Record ("red book").
Once your child’s BMI centile has been calculated, they will be in one of four categories:
- underweight: below 2nd BMI centile
- healthy weight: between the 2nd and 90th BMI centile
- overweight: between the 91st and 97th BMI centile
- obese: at or above 98th BMI centile. This BMI centile category is called "very overweight" in letters that are sent by the National Child Measurement Programme.
Most children should fall in the healthy weight range. A BMI at or above the 91st centile is likely to indicate your child has an increased risk of obesity-related health problems.
Some medical conditions or treatments may mean that BMI centile is not the best way to measure whether your child is a healthy weight. Your GP or other health professional can discuss this with you.
If your child is overweight
Research shows that children who are overweight or obese have a higher risk of ill health during childhood and in later life. If your child is overweight, it’s time to take action.
A GP or practice nurse can give advice and support on helping your child achieve a healthy weight as they grow. Find out more in When your child is overweight.
An intergalactic cloud has slammed into the Milky Way Galaxy and left a big hole.
Our galaxy is full of gas. It’s a good thing, too, because that’s what stars are made of. But this gas isn’t evenly distributed: sometimes it clumps into gigantic, dense clouds, other times it’s carved out by stellar outbursts or supernovae.
Among the largest cavities are supershells. These structures are big empty bubbles in a galaxy’s gas — so big, they can easily be 2,000 light-years wide. It takes a whole lot of energy to inflate that kind of bubble: the equivalent of at least 30 supernovae, planted like little vicious bombs at the cavity’s center.
But oddly, most of the supershells we know of in the Milky Way (about 20) or in nearby galaxies don’t have aging star clusters at their centers. So what blew them up, if not stars?
One alternative is high-velocity clouds. These clouds shower the Milky Way in a steady drizzle, helping to fuel star formation. We’re not sure if they’re debris from torn-up satellite galaxies, or stuff spewed out by our own supernovae, or gunk from the greater cosmos. Whatever they are, if such a cloud rammed into a galaxy, it could blast out a big bubble.
Geumsook Park (Seoul National University, South Korea) and colleagues have now found the first solid example of a high-velocity cloud doing just that. Astronomers already knew of the cloud, called HVC 040+01-282 (for its coordinates), which lies below the plane of the Milky Way’s disk in the outer galaxy, near the Scutum-Centaurus arm. This high-velocity cloud is of the “compact” kind, as opposed to being large and extended, so it’s nicknamed CHVC040 (the C is tacked on for “compact”).
As part of the I-GALFA survey with the Arecibo radio telescope, the team discovered a supershell surrounding CHVC040. The shell is a whopping 3,000 light-years wide, with neat spoke-looking structures inside it to boot. And smack-dab in its center sits CHVC040.
The team estimates in an upcoming Astrophysical Journal Letters paper that, if the cloud hit at its maximum possible velocity, it would have had more than enough energy to blow out the supershell.
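For a rough sense of the energetics, kinetic energy is just ½mv², and a supernova is conventionally taken to release about 10^51 erg. The sketch below compares an assumed cloud mass and impact velocity against that benchmark; both input numbers are illustrative, not values from the paper.

```python
# Back-of-the-envelope: kinetic energy of an impacting cloud, in supernova units.
M_SUN_G = 1.989e33        # solar mass in grams
SN_ENERGY_ERG = 1e51      # canonical supernova energy

cloud_mass_g = 1e5 * M_SUN_G   # assume a 100,000 solar-mass cloud
velocity_cm_s = 200e5          # assume a 200 km/s impact velocity

kinetic_energy_erg = 0.5 * cloud_mass_g * velocity_cm_s ** 2
print(f"Cloud kinetic energy: {kinetic_energy_erg:.1e} erg "
      f"= about {kinetic_energy_erg / SN_ENERGY_ERG:.0f} supernovae")
# -> Cloud kinetic energy: 4.0e+52 erg = about 40 supernovae
```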
Coauthor Bon-Chul Koo (Seoul National University) explains that supershells play an important role in galaxy ecology. They serve as "chimneys" that spew the heavier elements produced by supernovae in the disk into a galaxy’s halo, changing the gas’s composition. They can also heat and stir up the interstellar medium, or trigger star formation by compressing gas.
CHVC040 is the first high-velocity cloud linked to one of these bubble structures. But it’s probably not unique. Astronomers know of about 300 compact HVCs in the Milky Way, and many are stretched out into comet-like shapes, presumably as they burrow through the hot gas surrounding our galaxy in a big halo. There could be many more such pairings.
G. Park et al. “A High-Velocity Cloud Impact Forming a Supershell in the Milky Way.” Accepted to Astrophysical Journal Letters.