The Great Wave off Kanagawa by Hokusai
The most famous Japanese tsunami painting, by the artist Hokusai, full name Katsushika Hokusai (1760-1849). The painting depicts a tsunami passing in front of Mount Fuji.
Definition: Tsunami. Noun. An extremely large wave caused by movement of the earth under the sea, often caused by an earthquake. Tsunami is also the scientific word for a sea wave or sea surge generated by an undersea landslide¹, volcanic eruption or asteroid impact, not to mention man-made catalysts.
¹ The Dec 26th catastrophe (see below) was caused by an underwater quake. Tsunamis are also caused by terrestrial or submarine landslides.
Several cases of tsunamis caused by landslides have been recorded in history. One such happened ±2 million years ago, when a block of 4,000 cubic kilometres slid off the Hawaiian island of Oahu into the ocean, causing a giant tsunami.
A tsunami is not a tidal wave. It has nothing to do with tidal movements. Sea Wave or Sea Surge would be a more appropriate name if you are not at ease with Tsunami*.
*Tsunami is the Japanese word for "harbour wave".
Defences, natural or man-made
Although very rarely brought to the general public's notice unless they reach such magnitudes as the Dec 26th catastrophe, local or regional tsunamis do in fact occur several times every ten years, at different spots around the Pacific Ocean's "Ring of Fire".
The fact that the populations of tsunami-risk coastal regions have increased sharply in recent years means that death and injury tolls are higher today than they have ever been.
The fact that such regions are now prolific tourist resorts² also means that there is a highly developed media structure; with the acceleration in the transmission of media information, news of such natural hazards is quasi-immediate, bringing them global coverage.
(² The Gross Domestic Product (GDP) of certain countries is reliant on tourism. Tourism in Thailand represents 8% of the country's GDP.)
Man's settlement of these coastal zones has weakened natural defences, leaving the zones open to erosion and destruction that would otherwise have been avoided had the defences, e.g. indigenous mangrove swamps or coral reefs, not been cleared to allow for tourist installations or prawn farming, as is the case in Thailand.
Settlements, towns and even cities have been built in estuaries and deltas whose natural defence mechanisms, which once protected the regions against such severe natural hazards, have been weakened or totally removed. Dams are built upstream from these settlements, hampering the natural depositing of sediment and silt in the estuaries and leaving them open to erosion by the sea.
Japan has vast experience with such calamities and has studied them intensively and learnt lessons from them.
The densely populated coastal regions on Japan's numerous islands now possess
Tsunami protection systems such as Tsunami walls* and evacuation plans.
* These walls are often no higher than a couple of metres and provide only relative protection against the "average" tsunami; they are generally of no real protection against the more violent tsunamis. Fortunately the Japanese, being signatories of the Pacific TWS, are equipped with a highly effective electronic early-warning system. The PTWS (see below) continually informs the Japan Meteorological Agency of all tsunami and seismic activity, be it in the immediate vicinity or more generally around the entire Ring of Fire, such as earthquakes and landslides in Alaska or off the coast of Chile. This information is vital, as such far-away activity has as much an impact on Japan as does the seismic activity immediately off its coast*.
* Due to the proximity and convergence of the Philippine & Pacific tectonic plates just off the coast of Japan, there is particularly high seismic activity there. As such, tsunami warnings may be followed almost immediately by the tsunamis themselves, leaving the population little time to evacuate. Fortunately, however, the Japanese follow a very rigorous evacuation programme, and as a result the mortality rate of these tsunamis is often minimal.
Other causes of Tsunamis
Not all tsunamis are the result of seismic activity, and as such seismic monitoring alone may not be enough to warn against tsunamis generated by other causes. Landslides, asteroids or, alas, human activity are other causes of tsunamis.
- Landslides. As mentioned above in the intro, landslides are another major cause of tsunamis. Landslides can be caused by earthquakes, but also by tides, storm waves or sediment deposition. By definition a landslide is caused by the downward sliding movement of a mass of sediment or rock. The sliding mass pushes the mass of water before it and sucks or pulls the water behind it. The frontal pushing effect elevates the water in front of the sliding land mass and creates a depression above and behind the landslide, which is immediately filled by the frontal elevated mass of water, thus creating the tsunami; see the landslide graphic below.
- Asteroids. Asteroids are another cause of tsunamis, the most famous example probably being the asteroid that crashed into the Yucatan ±65 million years ago. However, tsunamis created by asteroid impacts are not as rare as that; in fact, asteroid-generated tsunamis are recorded every year. For example, in 1998 an asteroid-generated tsunami killed more than 2,000 people in New Guinea.
- Human activity. Alas, human activity is another cause of tsunamis. In an effort to understand severe natural climatic or geological hazards, certain countries now have the technology to re-create calamities such as hurricanes. Some specialists are starting to voice the theory that the Dec 26th earthquake was a man-made simulation gone wrong, and that one of these countries was trying to artificially re-create an earthquake. This theory is given a certain credibility because, although Indonesia is on the western rim of the "Ring of Fire"*, the island of Sumatra has itself never experienced an earthquake of such a magnitude. In fact, one of the reasons why the Indian Ocean does not have a protection system such as exists in the Pacific Ocean is that seismic activity in the Indian Ocean is minimal, verging on the non-existent!
* As Indonesia is part of the Pacific "Ring of Fire", the Japanese PTWS centre duly registered the seismic activity off Sumatra before the tsunami actually struck. The only problem was that they had no pre-established contacts in or around the Indian Ocean basin to warn of the forthcoming calamity.
Other man-made sources of tsunamis can be attributed to undersea drilling, which may accidentally trigger seismic activity, or underwater nuclear testing, such as the French nuclear tests in the South Pacific before they were "officially" stopped in the mid-1990s.
The mechanics of a Tsunami
1- Seismic Tsunami
2- Landslide Tsunami
Most tsunamis originate around the Ring of Fire, a zone of volcanoes and seismic activity 32,500 km long that encircles the Pacific Ocean from the Americas to Asia (see graphic below), although tsunamis may happen anywhere in the world, such as in the Atlantic.
- Fact: Since 1990, 82 tsunamis have been reported, principally within the Pacific Ring of Fire.
- Fact: A major submarine slope failure in the North Atlantic could give rise to a tsunami large enough to flood major cities on the coasts of America or Europe.
- Fact: Mount St Helens (present on the map below): it has been calculated that if the Mount St Helens eruption of 1980 had happened underwater, it would have created a "tsunami/tidal wave" with waves over ±300 metres (900 feet) high!
- Fact: A tsunami can have a wavelength (distance between wave crests) in excess of 100 km, with up to an hour between crests. Tsunamis travel at great speed across an ocean with hardly any energy loss and are barely noticeable out at sea.
- Fact: Over the deep Pacific Ocean, a tsunami travels at about 500 mph. If an earthquake happened in Los Angeles, a tsunami could hit Tokyo in less time than it takes to fly between LA & Tokyo (see the quick calculation after this list).
- Fact: Tsunami waves hitting coastlines have shifted 20-tonne rocks hundreds of metres inland.
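The "about 500 mph" figure can be checked with shallow-water wave theory: because a tsunami's wavelength (100 km or more, as noted above) dwarfs the ocean depth, its speed depends only on gravity and depth, c = √(g·d). A minimal sketch, assuming a representative Pacific depth of 4,000 m:

```python
import math

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water wave speed: c = sqrt(g * d)."""
    g = 9.81  # gravitational acceleration, m/s^2
    return math.sqrt(g * depth_m)

c = tsunami_speed(4000)            # ~198 m/s over the deep Pacific
print(f"{c * 3.6:.0f} km/h")       # ~713 km/h
print(f"{c * 2.23694:.0f} mph")    # ~443 mph, of the order of the quoted 500 mph
```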
Pacific Ring of Fire
Exceptions to the rule
A notable exception to the above-mentioned rule happened approximately 7,200 years ago off the Norwegian coast, when an underwater landslide, called the 'Storegga' slide, created a tsunami reaching west to Iceland and south to the Shetland Isles, totally devastating the inhabitants and fauna of the affected regions*.
*NB. The British Geological Survey, based in Edinburgh, evaluates the risk of submarine landslides and resulting tsunamis in the region as sufficiently serious, even today, to undertake extensive studies to gauge the consequences of such a natural hazard occurring now. While the impact of the 'Storegga' slide, ±7,200 years ago, was significant from a physical point of view, its socio-economic impact was minimal. Today its consequences would be radically different, and the fact that the B.G.S. has recorded various other submarine movements underlines the importance of such a risk.
Another exception is the Marmara Sea, in the eastern Mediterranean off the Turkish coast, leading to the Bosphorus and the Black Sea, which regularly witnesses tsunamis following earthquakes, as happened in 1999.
Yet another exception, forecast this time and published on BBC.com, anticipates that a flank of the Cumbre Vieja volcano on the Canary island of La Palma is unstable and could plunge into the ocean, creating a landslide-generated tsunami (see above). Swiss researchers have modelled the landslide and estimate that half a trillion tonnes of rock falling into the water all at once would create a wave 650 metres (2,130 feet) high that would spread out and travel across the Atlantic. It would have a wavelength of 30 to 40 kilometres (18 to 25 miles), travelling westwards across the Atlantic at speeds up to 720 km/h (450 mph) towards America. The wall of water would weaken as it crossed the ocean, but would still be 40-50 metres (130-160 feet) high by the time it hit land. The surge would create havoc in North America as much as 20 kilometres (12 miles) inland.
The researchers also estimate that the effects on African and European coastlines would be catastrophic: a wall of water 100 metres high would totally submerge extensive regions of the West African coastline and devastate coastal regions of the United Kingdom, France, Belgium and the Netherlands.
A tsunami can race across the ocean at ±500 mph, and at sea its waves are only a few feet high; the waves increase in energy and height as they approach the coast, often topping 100 feet. Often, before a tsunami hits, there is a giant vacuum effect, and water is sucked from harbours and beaches.
When tsunamis approach the coast they give warning by creating this vacuum effect, retracting sea water from the shore to feed the approaching wave. Onlookers can see the bare sea floor, with fish uncovered and boats stranded. That is because waves are made of crests and troughs; when a trough hits land first, the water level drops drastically. Usually another wave blasts ashore about 15 minutes later, repeating the action for a while after the first wave hits the shore. After a tsunami many people return to their homes, not realizing that tsunamis come in groups, and that the first is not always the largest. Some of the most destructive damage caused by a tsunami comes not from the arrival of the wave itself, but from the undertow it creates as it leaves the land and heads back to sea, carrying objects and people with it.
It is worth mentioning that tsunamis killed more than 50,000 people during the course of the 20th century. In order to help save lives, scientists established the Pacific Tsunami Warning System, based in Honolulu, Hawaii*. Its network of earthquake detectors and tide gauges detects quakes that may cause a tsunami, and the monitoring of sea-floor activity helps give warning of a potential tsunami; this warning may give as much as a few hours' notice.
Other organizations, such as the NOAA, study and analyse tsunamis as part of their global surveillance, creating simulations to better understand the impact of tsunamis on the environment.
(* So seriously do the Hawaiian authorities take tsunamis that they have issued instructions on what to do in case of a tsunami and have drawn up plans informing people of flood zones and of the safety zones to which they may retreat.)
Certain of its facts and statistics, the PTWS never issues false or ill-founded warnings about tsunamis, simply because any one given area around the Pacific 'Ring of Fire' can, one day or another, expect a visit from a tsunami; PTWS member states are kept updated on all developments.
26/12/2004 : A date to remember
On December 26th 2004, at ±10.15 a.m. local time, the entire Indian Ocean was struck by a series of tsunamis resulting from an earthquake just off the north-west coast of Sumatra. Initially estimated at 8.9 on the Richter scale, the earthquake was later upgraded to 9.0. The force of the quake was so strong that it was felt in western Africa, with the tsunami reaching the East African coast, hitting Somalia, some 5,000 miles to the west, and causing the loss of over a hundred fishermen.
Strictly speaking, the Dec 26 earthquake and the tsunami that followed were the result of a movement in the Eurasian plate, on the western side of the Ring of Fire but outside of the Tsunami Warning System's (TWS) surveillance zone, set up around the Ring of Fire to monitor such seismic or tsunami activity.
Networks of detection buoys dot the Pacific Ocean and feed back all activity, via GPS satellites, to TWS monitoring stations.
The Indian Ocean currently possesses no such warning system, and the death toll attributed to the tsunami catastrophe, initially estimated at ±30,000, is now reaching 150,000 as Indonesian authorities reach areas where communications and tourism are less developed. The same applies as Indian authorities access territories such as the hitherto little-known Andaman & Nicobar Islands*.
This could have been avoided had countries such as India and Indonesia been members of the international warning system.
* NB. Like a lot of people, it is unfortunately true that the South East Asian earthquake and resulting tsunami put such places as the Andaman & Nicobar Islands onto my personal world map. I had never heard of them before, and it's a shame that the only time such places come to the attention of the general public is when a cataclysm hits them, such as when Cyclone "Zoe" devastated the Solomon Islands on Dec 28th 2002.
Many thanks to the NOAA for the above-mentioned info, images and links. All material appearing here is for purely personal and documentary use.
01/2005 & 11/2007 |
Australian Capital Territory — History and Culture
The Australian Capital Territory (ACT) is the smallest and youngest of Australia's states and territories, but it still boasts an interesting history. Its birth was contrived for political purposes: prior to 1911, the land now claimed as the ACT was nothing more than bushland encompassed by New South Wales. Over the course of a century, this fascinating little piece of Commonwealth territory has developed a unique identity and has spawned and shaped Australia's political growth.
Before Australia’s federation in 1901, the Australian Capital Territory was predominantly farm land, with several European-colonial settlements dotting the region, including Molonglo and Ginninderra. During this time, the area fell under the administration of colonial New South Wales. The transformation from bushland to Australian Capital Territory began following Australia’s federation. Both Victoria and New South Wales were at loggerheads as to which would host the national capital. Of course, Melbourne and Sydney were at the center of these debates too. Eventually, through parliamentary vote, a bill was passed designating New South Wales (NSW) as the state where the capital would be located. The next decision was where the capital city would be. Sydney was overlooked, as towns like Dalgety and Albury were thrown into the mix. However, NSW refused to cede land for these sites.
In 1906, the Yass-Canberra region was elected as the new national capital site, and by 1909 NSW ceded the required land to the Commonwealth government. The Federal Capital Territory was legally established in 1910. Unfortunately, the capital city, Canberra, took much longer to build. The Royal Military College of Duntroon (Staff Cadet Avenue, Campbell, Australian Capital Territory) was among the first federal facilities constructed in the new capital territory. In 1912, an American architect by the name of Walter Burley Griffin had already designed the city of Canberra, and building began the following year. Unfortunately, due to poor oversight, World War influences, and the Great Depression, the construction of Canberra was a slow process.
The original House of Parliament (Capital Hill, Canberra, Australian Capital Territory) was built in 1927, and later replaced by the New Parliament House (Parliament Drive, Capital Hill, Canberra ACT) in 1988. Between 1955 and 1975, ACT’s population almost doubled yearly, primarily due to the growth of Canberra. Outside the capital, other significant civic landmarks were constructed to cope with the population explosion, such as Bendora Dam in 1961 and Googong Dam in 1979. Today, the history of the ACT is on show at the National Capital Exhibition (Barrine Drive, Commonwealth Park, Canberra, ACT).
The Australian Capital Territory has a unique culture in comparison to other Australian states and a sense of local pride resonates strongly. The population is far less multicultural than other parts of Australia, and of the 333,000 residents in the state, more than 99 percent reside within Canberra. The ACT, unfortunately, has a high turnover rate of locals, with most living in the city for several years due to their employment in the government sector, and then moving on. Nevertheless, Canberrians are fiercely loyal to their state.
Canberra was originally built to house the national parliament of Australia, and this purpose has significantly shaped the landscape over the last 100 years. The ACT's main tourist draws are usually government-related, including both Parliament Houses, the Australian War Memorial, and the Royal Australian Mint. Of course, the local council has attempted to add to the tourism industry over the years, with sites like the Reptile Sanctuary and Dinosaur Museum offering a more well-rounded experience. |
How do my eyes work?
Lesson 9 of 13
Objective: SWBAT explain how human eyes are similar to a box model.
Next Generation Science Standards Connection
This lesson connects to 1-PS4-3, and the students engage in an investigation to determine how the human eye works. When light enters our eyes it has to go through different materials in order for us to see an image. I tried to create a model of the human eye using a box, glass of water, tape, and a flashlight. The light enters the box through two holes just like light enters our eyes. Then the light passes through material similar to the water and glass. The image is then created in our brain kind of like the image is on the back of our box.
Now, the real reason I am teaching this is that it was one of the original questions my students wanted answered when we began our light unit. So, I researched models and investigations that might help my first graders understand how the human eye works. I found a very interesting experiment on this website that I think will allow my students to create a model that is similar to the human eye. They follow my plan to carry out an investigation, and then we read about how the human eye works. Students can really understand the way things work by observing models, and this is my attempt to help my students understand how our eyes work.
My students tend to follow through with instructions and develop a better understanding when they can work together with their friends. They also seem to persevere through lengthy complex lessons if they can move around frequently. So, I use collaborative heterogeneous ability group partners and transitions.
My heterogeneous ability group partners are called peanut butter jelly partners. The students are assigned a partner, and they work with this partner throughout every lesson. Students even have assigned seats beside their partners. They help each other read, create, and evaluate their work.
Transitions help the students by giving them frequent brain breaks. The students get to move around often, and my transitions are usually the same in every lesson. This kind of lets the students know what to expect in every lesson.
We begin the lesson in the lounge where I try to excite the class about the lesson, because their excitement can help the students persevere through the lesson. I also assess their prior knowledge, so I know how much support I am going to need to provide in the lesson. Last, I share the plan for the lesson, so my students know what to expect.
First, I project the lesson image on the Smart Board and ask the class to look at it. Then I direct them to their question from the beginning of the unit. I say, "Look at the KWL chart and think about how we see with our eyes. Remember, you had this question at the beginning of the unit. Today I hope to help you explore and investigate how the human eye works."
Then I ask, "Will you please tell your partner what you know about how our eyes work?" I listen, and I am pretty sure my class has no prior knowledge, because they were all wanting to know this at the beginning of the unit. But one of my students may have gone home, asked their parents, and researched the answer. In that case I would allow that student to share their knowledge.
Last, I share the plan for the lesson with the class. I say, "We are going to create and explore a model that is similar to the human eye. Then we are going to read about how our model is like our eye."
At this point we transition to the desks in the center of the room. The students are seated in groups of four. I give the class a model to help fill out their science journal, explore, and record our investigations.
So, I begin by explaining the model on the Smart Board, and I ask the students to copy it down. This enables us to have the information from today's lesson to reflect upon in our culminating activity. As the students copy down the date, topic, and plan, I walk around and monitor their work. This helps make sure everyone finishes at about the same time.
Then I distribute materials and allow the students to explore. My materials include a box, a glass of water, and a flashlight. Then I watch them carry out the experiment. I have already cut the slits in the box to save time, and fingers. Now it is time for my students to really explore the concept of how light enters and exits objects. Here is an example (box without glass) of the students exploring without the glass in the box, and here is an example (box with glass) of the class exploring with the glass in the box. Then I have an example of the students' observations: I called number one the observations without the glass, and number two the observations with the glass.
I am expecting to have to do some extra explaining to help my ELLs. I know this concept of how the human eye works is very complex, and I have tried to simplify it to the best of my ability. This lesson is designed to answer one of my students' questions from the initial lesson in this unit. When helping the ELLs I first rely on their partner, but I often end up engaging in a conversation.
Now my students begin to practice communicating with their peers as they share across the table. I try to engage the class in a whole group conversation, because I am trying to teach my students to share what they learn. As they share what they learn I also hope they begin to build upon the ideas of their peers.
First I say, "Please share what happened when you added the glass of water with the group across the table." Then I listen and make sure everyone is participating. I hope to hear the students saying, "When I added the water the light shined on one spot instead of two." After about one minute of talking I say, "If you and the group across the table found something different, you may redo the experiment or just talk about why you saw different things." If I see a group not participating, I stop and ask, "What did you record for your observations when there was no glass, and when the glass was present?" This prompt engages them in a conversation. Then I listen, hoping to hear comments like, "I saw two rays of light on the box. One was blue and the other was white. The blue one looked like an oval, and the white one looked like little lines." This is really dependent on the color of the plastic over one slit and the way the slits are cut in the box.
Finally, we engage in a whole-class discussion where the students share their experience and add to what their peers share. I say, "Will a volunteer share what they saw?" Then I listen, and ask, "Will somebody add to that?" Again I listen, hoping to hear comments like, "The light is blue, because the plastic over one slit is blue, and the glass with water helps the light bend. I remember this from previous lessons." I find this is an excellent way to help students collaborate and share what they learned.
At this point I focus the class on two questions: "How is this box like my eye?" and "How is the box different from my eye?" I give each child a text I found on this helpful website, and I help the students research to find their own answers. Here is an example of how a student answers the question "How is the box like my eye?": student work.
First I say, "Please write your questions down in your science journal under your observation." Once the students record the questions I say, "Please read the text to yourself and try to find the answers. One partner can read while the other listens, so work together. When you think you have found the answers go record it under each question."
Next, I distribute the text to the class. If I pass it out before I give instruction the class may start reading, because they are so curious. But they will not listen to any of my instructions. When I anticipate their excitement I try to give my instructions before I give out any materials. It just helps the students listen a little better.
Now, I walk around and help the students any way I can. Many students like to ask me if they have the answer correct, and I usually tell them. This allows them time to change their answer if it is wrong, and we can talk about it.
Now, I try to assess the students and let them share what they learned from reading the text. So, the first thing I do is get the students seated in the lounge area or carpet of our room. Then I use a fun chant to get them settled by calling out what I want them to do. We all chant, "Criss cross, apple sauce, pockets on the floor, hands in your laps, talking no more." Then I add, "Your eyes are on the speaker and you are listening to what they say, so you can give them peer feedback."
Then I use my spreadsheet of all my students' names to check off whose turn it is to share their answer to the question. Then I call on them and ask them to share in front of the class. After the student shares I ask, "Will a volunteer add to that or give them feedback?" Then I listen. If nobody can give feedback, then I model verbal feedback.
Last, the students assess their own work using a rubric template I created. Then they trade with their partner and evaluate their partner's work (rubric; proficient student work). Last, I check their work after school to save time. But the rubric is nice, because it brings a real-life scenario of using fractions to the table. My students really get parts and wholes as well, because they know they record how many they got right above how many were possible. |
Standards*: H.4.3: Show how science has contributed to meeting personal needs, including hygiene, nutrition, exercise, safety, and health care.
D.4.6: Observe and describe physical events in objects at rest or in motion.
Objective: Students will develop an understanding of the safety reasons for the wearing of helmets in relation to the prevention of head injuries.
- 2 brain molds
- Plastic safety helmets
- Gelatin
- Plastic bags
Activity: Prepare your favorite gelatin recipe as directed on box.
- Pour mixture into brain molds and let set.
- Place each “brain” into a plastic bag.
- Place one brain into a Safety Helmet and secure.
- From a height of approximately six to eight feet, drop the brain in the plastic bag only to the floor. Observe results.
- From a height of approximately six to eight feet, drop the brain in the plastic bag and helmet to the floor. Observe results.
- Compare and discuss results. Note that the helmet protected the brain more than no helmet at all. Do point out that while a helmet may not save you from injury entirely, it may lessen the injury. (A quick calculation of the impact speed follows this list.)
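If you want to quantify the drop for class discussion, free fall gives the impact speed as v = √(2gh). A minimal sketch of that arithmetic (the metric conversions are ours):

```python
import math

def impact_speed(height_m: float) -> float:
    """Speed at the floor after free fall from rest: v = sqrt(2 * g * h)."""
    return math.sqrt(2 * 9.81 * height_m)

for feet in (6, 8):
    h = feet * 0.3048                  # feet to metres
    v = impact_speed(h)
    print(f"{feet} ft drop -> {v:.1f} m/s ({v * 2.237:.0f} mph)")
# 6 ft -> ~6.0 m/s (~13 mph); 8 ft -> ~6.9 m/s (~15 mph)
```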
Assessment: Students to write/illustrate the results of activity and write a conclusion.
Variations: Raw eggs may be substituted for gelatin brains. Have students secure the egg in various types of plastics and repeat the drop. (When using the raw egg, wrap the egg in a single row of packaging "peanuts".) Compare the levels of protection each type of plastic offered. Remind students to take into account the design and amount of materials used to create the protection.
*Wisconsin Model Academic Standards |
Reducing Sports Injuries to Keep Kids Playing Safely
Now that the weather is warmer, kids are hitting the athletic fields and skate parks for some healthful exercise. Parents need to keep an eye on their children and help them prevent a nasty injury that could take them out of their favorite game.
The first rule in sports safety is proper gear. This especially applies to footwear. Ankle sprains are the most common injury in the United States and often occur during sports or recreational activities. Approximately 1 million ankle injuries occur each year and 85 percent of these are sprains. The right shoes that fit can help reduce your child's chances of twisting an ankle.
Warm-up exercises, such as stretching and light jogging, can help minimize the chance of muscle strain or other soft tissue injury during sports. Warm-up exercises make the body's tissues warmer and more flexible. Cooling down exercises loosen the body's muscles that have tightened during exercise. Encourage "warm-ups" and "cool downs" as part of your child's routine before and after sports participation.
If your child receives a soft tissue injury, commonly known as a sprain or a strain, or a bone injury, the best immediate treatment is easy to remember: R-I-C-E (Rest, Ice, Compression, and Elevation).
Other often-overlooked sports injuries are heat stroke, heat exhaustion and dehydration. Heat injuries are always dangerous and can be fatal. Because children perspire less than adults and require a higher core body temperature to trigger sweating, it's important to know the signs of heat exhaustion: nausea, dizziness, weakness, headache, pale and moist skin, heavy perspiration, normal or low body temperature, weak pulse, dilated pupils, disorientation, and fainting spells. The signs of heat stroke are headache, dizziness, confusion, and hot dry skin, possibly leading to vascular collapse and coma.
Exercise is beneficial for kids. Don't let the fear of injury stop you from letting your child lead an active life and participate in sports. Just make sure they have the proper training in the rules of the sports they're playing, and that they know how to use the equipment safely. It can also help to match the child to the sport: if your child has a hard time running long distances, it may be counterproductive to push him or her toward the track team. Finding a sport he or she truly enjoys will build good lifetime fitness habits and is worth the effort.
For more information on helping your child stay active, or for other questions regarding youth sports injuries, contact a National University of Health Sciences Whole Health Center. |
Thistles thrive in many types of climates and soil conditions. All thistles share a few similar characteristics, including sharp spines and invasive root systems. The tall thistle is one type of biennial thistle. This plant requires two years to mature and reproduce. Suitably named, tall thistles reach a height of 10 feet in many areas. These invasive weeds often pose problems in pastures and landscapes, creating an eyesore and damaging nearby plants and grasses.
Examine your yard or pasture early in the spring to locate small thistles. Look for rosettes on the surface of the soil. This early growth signals the presence of thistles before they mature and reproduce.
Discourage single plants by cutting through the stem beneath the surface of the soil. Use a sharp shovel to separate the top of the plant from its root system. Use this method in areas containing small amounts of tall thistles.
Remove existing tall thistles before they reproduce and scatter seeds. Cut the plants with a mower in late spring to remove maturing flower buds before they go to seed. Cutting at this stage kills many tall thistles. Avoid cutting the thistles too early, as these hardy weeds tend to grow back and continue to mature.
Apply an herbicide to remove tall thistles from your yard. Purchase an herbicide labeled for use on tall thistles. Apply the herbicide to the area with tall thistles late in the fall or early in the spring, before flower buds form on stems. Apply when temperatures are moderate, between 60 and 85 degrees F. Follow the package instructions when applying herbicides. |
An account of the early history of the manufacture, testing, and use of optical components can be found in Twyman's book. Of the four surface metrics worthy of measurement, form, as we have seen, is probably the most important due to its direct influence on system performance. If two easily abradable surfaces are rubbed together in all directions, their area of contact increases to the point where both surfaces assume a spherical form of the same radius. This basic process is used in surface generation, but we still need means for determining the end point depending on acceptance tolerances.
A flat contacting metal template can be made to the desired shape and placed against a ground surface to assess the accuracy of form produced by viewing variations in the width of gap. Skill is needed to achieve a precision of 0.01 mm.
Polished surfaces require a much higher degree of measurement accuracy. This can be obtained by using some form of interferometer. In its simplest form, a carefully made reference plate can be put into close contact with the surface under test. When illuminated from above with a diffuse monochromatic source, as shown for flat surfaces in Fig. 2.1, Newton's fringes can be seen at the interface.
The advantage of this method is that the whole surface can be seen, so that areal cover as opposed to line cover, using the template method, is available. High sensitivity of 50 nm is typically achieved if the fringes are analyzed by eye, due to the small unit of optical wavelength employed, but surface damage can arise if the surfaces are brought into contact. Skill is needed in this case to avoid dust in the interface zone.
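To see where a sensitivity figure like 50 nm comes from: adjacent Newton's fringes correspond to an air-gap change of half a wavelength, and a practised observer can judge a fringe's position to a fraction of its spacing. A minimal sketch, where the wavelength and the fringe fraction are illustrative assumptions rather than values from the text:

```python
wavelength_nm = 546            # assumed source, e.g. the mercury green line
height_per_fringe = wavelength_nm / 2   # ~273 nm of surface height per fringe
fringe_fraction = 1 / 5        # assumed smallest fringe deviation judged by eye

sensitivity = height_per_fringe * fringe_fraction
print(f"~{sensitivity:.0f} nm")  # ~55 nm, of the order of the 50 nm quoted
```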
A third technique for form measurement involves probe gauging. A proximity probe is scanned in a straight line over the surface to be measured and height variations are recorded. The method, which is slow to achieve areal cover, is more usually applied in coordinate measuring machines using a contacting probe. It typically achieves an accuracy of 0.001 mm and requires some skill to apply.
|
Cavities are the result of tooth decay, the destruction of your tooth enamel, the hard outer layer of your teeth. It can be a problem for children, teens and adults. Plaque, a sticky film of bacteria, constantly forms on your teeth. When you eat or drink foods containing sugars, the bacteria in plaque produce acids that attack tooth enamel. The stickiness of the plaque keeps these acids in contact with your teeth, and over time the enamel can break down. This is when cavities can form. A cavity is a little hole in your tooth.
Cavities are more common among children, but changes that occur with aging make cavities an adult problem, too. Recession of the gums away from the teeth, combined with an increased incidence of gum disease, can expose tooth roots to plaque. Tooth roots are covered with cementum, a softer tissue than enamel. They are susceptible to decay and are more sensitive to touch and to hot and cold. It’s common for people over age 50 to have tooth-root decay.
Decay around the edges, or margins, of fillings is also common among older adults. Because many older adults lacked the benefits of fluoride and modern preventive dental care when they were growing up, they often have a number of dental fillings. Over the years, these fillings may weaken and tend to fracture and leak around the edges. Bacteria accumulate in these tiny crevices, causing acid to build up, which leads to decay.
You can help prevent tooth decay by following these tips:
- Brush twice a day with a fluoride toothpaste.
- Clean between your teeth daily with floss or interdental cleaner.
- Eat nutritious and balanced meals and limit snacking.
- Check with your dentist about the use of supplemental fluoride, which strengthens your teeth, and about use of dental sealants (a plastic protective coating) applied to the chewing surfaces of the back teeth (where decay often starts) to protect them from decay.
- Visit your dentist regularly for professional cleanings and oral examination. |
Use this converter to convert units of surface area from American/English units to the metric unit system (SI).
In the calculator below, you can calculate the area of a circle: fill in the radius to get the area, or fill in the area to get the radius. It does not matter what units you use; you only have to know that the unit of surface is derived from the unit of length. If you fill in metres (m), then the surface comes out in m².
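Both directions of that calculation follow from A = πr²; a minimal sketch:

```python
import math

def area_from_radius(r: float) -> float:
    return math.pi * r ** 2                 # A = pi * r^2

def radius_from_area(a: float) -> float:
    return math.sqrt(a / math.pi)           # inverse: r = sqrt(A / pi)

# If the radius is in metres, the area comes out in square metres.
print(area_from_radius(2.0))                    # 12.566...
print(radius_from_area(area_from_radius(2.0)))  # 2.0
```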
The surface is derived from the length, so the unit of surface (A) is derived from the unit of length. Because the unit of length is the metre (m) and a surface is m · m, the unit of surface is m².
England long had its own system of units, consisting of feet, yards, inches, miles, etc. for length. Just as in the SI, these units can be multiplied with each other; for example, yd · yd gives yd². In 1985 the English officially went over to the standard metric system; you can see this in The Weights and Measures Act 1985 (Metrication) (Amendment) Order 1994.
The United States also had its own system of units for a long time. Under pressure from industry, the United States converted to the international standard metric system in 1988. This was decided in the Omnibus Trade and Competitiveness Act of 1988. More information is in The United States and The Metric System.
History of SI:
The SI is the abbreviation of Système International d'Unités, nowadays the standard metric system. The SI originated in France. In 1790 the French Academy of Sciences received an instruction from the National Assembly to design a new standard of units for the whole world. They decided that the system should be based on the following conditions:
- The units in the system should be based on invariable quantities in nature
- All units, except the base units, should be derived from the base units
- Multiplication of the units should go in factors of ten (decimals)
Only in 1875 did the world begin to show some interest in the French development. Because more and more countries were interested in the French system, the Bureau International des Poids et Mesures (BIPM) was founded; nowadays the governing body is the Conférence Générale des Poids et Mesures (CGPM). In 1960, at the 11th CGPM, the system was officially named the Système International d'Unités. You can read more at http://physics.nist.gov/cuu/Units/history.html, or on the official site of the BIPM: http://www.bipm.fr/enus/3_SI/si-history.html.
The institution in The Netherlands that controls the units is Nederlands Meetinstituut (NMi).
The official institution from the world standard of measurements is Conférence Générale des Poids et Mesures (CGPM).
The official institution in the US is The National Institute of Standards & Technology (NIST).
The English institution for measurement standards is the National Physical Laboratory (NPL).
Lenntech BV is not responsible for programming or calculation errors on this sheet. Feel free to contact us for any feedback. |
Yes, but very little loss occurs.
Our planet, like all planets that have an atmosphere, loses gases to outer space.
The escape velocity is the minimum speed needed for an object to escape from the gravitational influence of Earth. It is a function of how far the object is from Earth's centre; whether a given gas molecule can reach it depends on the molecule's mass and temperature.
Different processes drive this escape, and they operate at different time scales. One loss process is through molecular kinetic energy.
Temperature is a measure of the average kinetic energy of a gas. Collisions between molecules in that gas cause individual molecules to gain and lose kinetic energy.
The kinetic energy and mass of a molecule determine its velocity. The more massive the molecule of a gas is, the lower the average velocity of molecules of that gas at a given temperature.
Therefore at the same temperature, it is less likely that heavier gases will reach escape velocity than lighter gases. Hydrogen will escape from an atmosphere more easily than carbon dioxide, which has more mass.
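To put numbers on this, kinetic theory gives a root-mean-square molecular speed of v = √(3kT/m). A minimal sketch comparing hydrogen with carbon dioxide; the 1,000 K upper-atmosphere temperature is an illustrative assumption (and note that even when the average speed is far below escape velocity, molecules in the fast tail of the distribution can still escape, which is why hydrogen leaks away over time):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
AMU = 1.66054e-27      # atomic mass unit, kg
V_ESC = 11_186.0       # escape velocity at Earth's surface, m/s

def rms_speed(mass_amu: float, temp_k: float) -> float:
    """Root-mean-square speed from kinetic theory: v = sqrt(3kT/m)."""
    return math.sqrt(3 * K_B * temp_k / (mass_amu * AMU))

T = 1000.0             # assumed upper-atmosphere temperature, K
for name, mass in [("H2", 2.0), ("CO2", 44.0)]:
    v = rms_speed(mass, T)
    print(f"{name}: {v:,.0f} m/s ({v / V_ESC:.0%} of escape velocity)")
# H2:  ~3,500 m/s (~32% of escape velocity)
# CO2:   ~750 m/s  (~7% of escape velocity)
```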
If the planet has a high mass, like Jupiter, the escape velocity is greater, and fewer particles will escape. Given Earth’s temperature and mass, our atmosphere does not lose a significant proportion of its atmosphere through molecules reaching escape velocities.
The solar wind can also strip an atmosphere of its gases. Earth's magnetic field helps to protect us from large losses by this process.
Steve Ackerman and Jonathan Martin, professors in the UW-Madison department of atmospheric and oceanic sciences, are guests on WHA radio (970 AM) at 11:45 a.m. the last Monday of each month. |
The word “palette” (or “pallet”) has several meanings: it can refer to a tray used to transport items, or to a board used by artists to mix colors (as shown in the fantasy illustration above, which I produced many years ago for a talk on Computer Artwork). In this article, I’ll discuss the principles of Digital Color Palettes. If you’re working with digital graphics files, you’re likely to encounter “palettes” sooner or later. Even though the use of palettes is less necessary and less prevalent in graphics now than it was years ago, it’s still helpful to understand them, and the pros and cons of using them.
I discussed the distinction between bitmap and vector representations in a previous post [The Two Types of Computer Graphics]. Although digital color palettes are more commonly associated with bitmap images, vector images can also use them.
The Basic Concept
A digital color palette is essentially just an indexed table of color values. Using a palette in conjunction with a bitmap image permits a type of compression that reduces the size of the stored bitmap image.
In A Trick of the Light, I explained how the colors you see on the screen of a digital device display, such as a computer or phone, are made up of separate red, green and blue components. The pixels comprising the image that you see on-screen are stored in a bitmap matrix somewhere in the device’s memory.
In most modern bitmap graphic systems, each of the red, green and blue components of each pixel (which I'll also refer to here as an "RGB Triple" for obvious reasons) is represented using 8 bits. This permits each pixel to represent one of 2²⁴ = 16,777,216 possible color values. Experience has shown that this range of values is, in most cases, adequate to allow images to display an apparently continuous spectrum of color, which is important in scenes that require smooth shading (for example, sky scenes). Computers are generally organized to handle data in multiples of bytes (8 bits), so again this definition of an RGB triple is convenient. (About twenty years ago, when memory capacities were much smaller, various smaller types of RGB triple were used, such as the "5-6-5" format, where the red and blue components used 5 bits and the green component 6 bits. This allowed each RGB triple to be stored in a 16-bit word instead of 24 bits. Now, however, such compromises are no longer worthwhile.)
There are, however, many bitmap images that don't require the full gamut of 16,777,216 available colors. For example, a monochrome (grayscale) image requires only shades of gray, and in general 256 shades of gray are adequate to create the illusion of continuous gradation of color. Thus, to store a grayscale image, each pixel only needs 8 bits (since 2⁸ = 256), instead of 24. Storing the image with 8 bits per pixel (instead of 24 bits) reduces the file size by two-thirds, which is a worthwhile size reduction.
Even full-color images may not need the full gamut of 16,777,216 colors, because they have strong predominant colors. In these cases, it’s useful to make a list of only the colors that are actually used in the image, treat the list as an index, and then store the image using the index values instead of the actual RGB triples.
The indexed list of colors is then called a “palette”. Obviously, if the matrix of index values is to be meaningful, you also have to store the palette itself somewhere. The palette can be stored as part of the file itself, or somewhere else.
To restate: whether implemented in hardware or software, an image that uses a palette does not store the color value of each pixel as an actual RGB triple. Instead, each color value is stored as an index to a single entry in the palette. The palette itself stores the RGB triples. You specify the pixels of a palettized* image by creating a matrix of index values, rather than a matrix of the actual RGB triples. Because each index value is significantly smaller than a single triple, the size of the resulting bitmap is much smaller than it would be if each RGB triple were stored.
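As a concrete illustration, here is a minimal sketch (with made-up pixel data) of the same tiny image stored both ways, and of the lookup that reconstructs it:

```python
# A 4x4 image that uses only three colors, stored two ways.
RED, WHITE, BLUE = (255, 0, 0), (255, 255, 255), (0, 0, 255)

# Raw bitmap: one RGB triple per pixel -> 16 pixels * 3 bytes = 48 bytes.
raw = [
    [RED,   RED,   WHITE, BLUE],
    [RED,   WHITE, BLUE,  BLUE],
    [WHITE, BLUE,  BLUE,  BLUE],
    [BLUE,  BLUE,  BLUE,  WHITE],
]

# Palettized: the triples live once in the palette; the bitmap holds
# one small index per pixel -> 3 * 3 + 16 * 1 = 25 bytes.
palette = [RED, WHITE, BLUE]
indexed = [[palette.index(px) for px in row] for row in raw]
print(indexed[0])                     # [0, 0, 1, 2]

# Display is then a simple lookup: index -> RGB triple.
restored = [[palette[i] for i in row] for row in indexed]
assert restored == raw
```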
The table below shows the index values and colors for a real-world (albeit obsolete) color palette: the standard palette for the IBM CGA (Color Graphics Adapter), which was the first color graphics card for the IBM PC. This palette specified only 16 colors, so it's practical to list the entire palette here.

Index  Color          RGB Triple
0      Black          (0, 0, 0)
1      Blue           (0, 0, 170)
2      Green          (0, 170, 0)
3      Cyan           (0, 170, 170)
4      Red            (170, 0, 0)
5      Magenta        (170, 0, 170)
6      Brown          (170, 85, 0)
7      Light Gray     (170, 170, 170)
8      Dark Gray      (85, 85, 85)
9      Light Blue     (85, 85, 255)
10     Light Green    (85, 255, 85)
11     Light Cyan     (85, 255, 255)
12     Light Red      (255, 85, 85)
13     Light Magenta  (255, 85, 255)
14     Yellow         (255, 255, 85)
15     White          (255, 255, 255)
(* For the action associated with digital images, this is the correct spelling. If you’re talking about placing items on a transport pallet, then the correct spelling is “palletize”.)
Aesthetic Palettes*
In this context, a palette is a range of specific colors that can be used by an artist creating a digital image. The usual reason for selecting colors from a palette, instead of just choosing any one of the millions of available colors, is to achieve a specific "look", or to conform to a branding color scheme. Thus, the palette has aesthetic significance, but there is no technical requirement for its existence. The use of aesthetic palettes is always optional.
(* As I explained in Ligatures in English, this section heading could have been spelled “Esthetic Palettes”, but I personally prefer the spelling used here, and it is acceptable in American English.)
Technical Palettes
This type of palette is used to achieve some technological advantage in image display, such as a reduction of the amount of hardware required, or of the image file size. Some older graphical display systems require the use of a color palette, so their use is not optional.
Displaying a Palettized Image
The image below shows how a palettized bitmap image is displayed on a screen. The screen could be any digital bitmap display, such as a computer, tablet or smartphone.
The system works as follows (the step numbers below correspond to the callout numbers in the image):
- As the bitmap image in memory is scanned sequentially, each index value in the bitmap is used to “look up” a corresponding entry in the palette.
- Each index value acts as a lookup to an RGB triple value in the palette. The correct RGB triple value for each pixel is presented to the Display Drivers.
- The Display Drivers (which may be Digital-to-Analog Converters, or some other circuitry, depending on the screen technology) create red, green and blue signals to illuminate the pixels of the device screen.
- The device screen displays the full-color image reconstituted from the index bitmap and the palette.
In the early days of computer graphics, memory was expensive and capacities were small. It made economic sense to maximize the use of digital color palettes where possible, to minimize the amount and size of memory required. This was particularly important in the design of graphics display cards, which required sufficient memory to store at least one full frame of the display. By adding a small special area of memory on the card for use as a palette, it was possible to reduce the size of the main frame memory substantially. This was achieved at the expense of complexity, because now every image that was displayed had to have a palette. To avoid having to create a special palette for every image, Standard color palettes and then Adaptive color palettes were developed; for more details, see Standard vs. Adaptive Palettes below.
One of the most famous graphics card types that (usually) relied on hardware color palettes was the IBM VGA (Video Graphics Array) for PCs (see https://en.wikipedia.org/wiki/Video_Graphics_Array).
As the cost of memory has fallen, and as memory device capacities have increased, the use of hardware palettes has become unnecessary. Few, if any, modern graphics cards implement hardware palettes. However, there are still some good reasons to use software palettes.
Generally, the software palette associated with an image is included in the image file itself; the palette and the image matrix form separate sections within one file. Some image formats, such as GIF, require the use of a software palette, whereas others, such as JPEG, don't support palettes at all.
Standard & Adaptive Palettes
Back when most graphics cards implemented hardware palettes, rendering a photograph realistically on screen was a significant problem. For example, a photograph showing a cloud-filled sky would include a large number of pixels whose values are various shades of blue, and the color transitions across the image would be smooth. If you were to try to use a limited color palette to encode the pixel values in the image, it’s unlikely that the palette would include every blue shade that you’d need. In that case, you were faced with the choice of using a Standard Palette plus a technique called Dithering, or else using an Adaptive Palette, as described below.
Given that early graphics cards could display only palettized images, it simplified matters to use a Standard palette, consisting of only the most commonly-used colors. If you were designing a digital image, you could arrange to use only colors in the standard palette, so that it would be rendered correctly on-screen. However, the standard palette could not, in general, render a photograph realistically—the only way to approximate that was to apply Dithering.
The most commonly-used Standard palette for the VGA graphics card was that provided by BIOS Mode 13H.
One technique that was often applied in connection with palettized bitmap images is dithering. The origin of the term “dithering” seems to go back to World War II. When applied to palettized bitmap images, the dithering process essentially introduces “noise” in the vicinity of color transitions, in order to disguise abrupt color changes. Dithering creates patterns of interpolated color values, using only colors available in the palette, that, to the human eye, appear to merge and create continuous color shades. For a detailed description of this technique, see https://en.wikipedia.org/wiki/Dither.
While dithering can improve the appearance of a palettized image (provided that you don’t look too closely), it achieves its results at the expense of reduced image resolution, because of the fact that the dithering of pixel values introduces “noise” into the image. Therefore, you should never dither an image that you want to keep as a “master”.
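To make the technique concrete, here is a minimal sketch of Floyd-Steinberg error diffusion, the classic dithering algorithm, reducing a grayscale image to a small number of shades (an illustration of dithering in general, not code from this article):

```python
def floyd_steinberg(gray, levels=2):
    """Dither a 2-D list of 0-255 gray values down to `levels` shades,
    diffusing each pixel's quantization error onto its neighbours."""
    h, w = len(gray), len(gray[0])
    img = [list(row) for row in gray]           # work on a copy
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            q = min(levels - 1, max(0, round(old / step)))
            new = q * step                      # nearest palette shade
            img[y][x] = new
            err = old - new
            # Classic error weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img

# A mid-gray region (value 128) dithers to a checkerboard-like mix of
# black and white that reads as gray from a distance.
out = floyd_steinberg([[128] * 8 for _ in range(8)])
```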
Instead of specifying a Standard Palette that includes entries for any image, you can instead specify a palette that is restricted only to colors that are most appropriate for the image that you want to palettize. Such palettes are called Adaptive Palettes. Most modern graphics software can create an Adaptive Palette for any image automatically, so this is no longer a difficult proposition.
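Most modern imaging libraries will build an adaptive palette for you in one call. A minimal sketch using the Pillow library (the file names are placeholders):

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")

# Remap the image to a 256-entry palette drawn from its own colors.
# (On Pillow versions before 9.1, use Image.ADAPTIVE instead.)
palettized = img.convert("P", palette=Image.Palette.ADAPTIVE, colors=256)
palettized.save("photo_adaptive.png")   # stores palette + index matrix
```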
A significant problem with Adaptive Palettes is that a display device that relies on a hardware palette can typically use only one palette at a time. This makes it difficult or impossible to display more than one full-color image on the screen. You can set the device’s palette to be correct for the first photograph and the image will look great. However, as soon as you change the palette to that for the second photograph, the colors in the first image are likely to become completely garbled.
Fortunately, the days when graphical display devices used hardware palettes are over, so you can use Adaptive Palettes where appropriate, without having to worry about rendering conflicts.
Should you Use Digital Color Palettes?
Palettization of an image is usually a lossy process. As I explained in a previous post [How to Avoid Mosquitoes], you should never apply lossy processes to “master” files. Thus, if your master image is full-color (e.g., a photograph), you should always store it in a “raw” state, without a palette.
However, if you want to transmit an image as efficiently as possible, it may reduce the file size if you palettize the image. This also avoids the necessity to share the high-quality unpalettized master image, which could be useful if you’re posting the image to a public web page.
If it’s obvious that your image uses only a limited color range, such as a monochrome photograph, then you can palettize it without any loss of color resolution. In the case of monochrome images, you don’t usually have to create a custom palette, because most graphics programs allow you to store the image “as 8-bit Grayscale”, which achieves the same result.
In summary, then, in general it’s best not to use palettes for full-color images. However, if you know that your image is intended to contain only a limited color range, then you may be able to save file space by using a palette. Experimentation is sometimes necessary in such cases. You may also want to palettize an image so that you don’t have to make the high-quality original available publicly. If you’re an artist who has created an image that deliberately uses a limited palette of colors, and you want to store or communicate those choices, then that would also be a good reason to use a palettized image. |
Common Redshank: Large sandpiper, scaled black and brown upperparts, dark-streaked neck and breast, white eye-ring broken in front, pale belly and sides with dark chevrons. Dark wings with white trailing edges visible in flight. Legs are orange-red. Short bill is red with black tip.
Range and Habitat
Common Redshank: Widespread across Eurasia; accidental in Newfoundland. Preferred habitats include mudflats, marshes, and grassy fields.
They are quickly identified by their red legs, but confusion can occur if their legs are mud-covered. Juveniles may have greenish-yellow legs.
They find their food by sight and only rarely probe into the mud or sand.
Wary and nervous birds, Common Redshanks are often the first to panic and give noisy alarm calls to other nearby waders.
A group of sandpipers has many collective nouns, including a "bind", "contradiction", "fling", "hill", and "time-step" of sandpipers.
The Common Redshank is a wader which breeds throughout Europe and northern Asia. During winter months, these birds migrate to the Mediterranean coastline, southern Asia, and the Atlantic coast of Europe south of Great Britain. This species prefers to nest in wetlands, such as damp meadows and salt marshes. They nest in highly dense colonies throughout European and Asian territories. The preferred diet of the Common Redshank is small invertebrates. This species is replaced by the Spotted Redshank in Arctic regions. Due to maintained and rising population levels, the Common Redshank’s current conservation rating is Least Concern. |
Global Curricula: How to Choose and What to Use
Nelson Mandela is one of my heroes. As he lies in critical condition, one quote stands out for me above all others: "Education is the most powerful weapon which you can use to change the world."
For a man who has changed this world, we should take heed. But in order for the next generation to also make a difference, they must understand this ever-changing world.
It's promising that there are a number of organizations dedicated to creating teacher resources that ultimately help students understand this complex world. But what's the best way to judge their quality?
Here are some questions to consider when reviewing global issues curricula:
- Does the curriculum use primary resources and respectable secondary resources, such as perspectives from academic experts?
- Does it lead students to deeper learning through critical analysis of a wide range of evidence?
- Is the essential question something experts grapple with in the real world, where there is no clear answer?
- Are there case studies from different areas of the world? Does it allow students to model similar investigations of new case studies?
- Does the curriculum allow learners to examine the roots and diverse perspectives that surround the global issue?
- Does the curriculum promote project-based learning?
- Does the curriculum have room for students to connect with experts and peers around the world?
- Can your students apply their learning in a real-world way that makes a difference locally, regionally, or globally?
Probably no curriculum can accomplish all of these things. Educators are masterful at adapting lessons to fit the needs of their own students. What other objectives can you build into global learning opportunities?
Here are some notable curricula broadly categorized by big world issues:
The Natural World
Humans thrive off the natural world, using its resources to survive and to create better societies. But humans must also consider the delicate balance between development and preservation of limited natural resources for future generations. Forces like global warming are often borderless problems that require global cooperation to mitigate.
Exploring Global Issues: Social, Economic, and Environmental Interconnections
Facing the Future created a global studies guide that examines 24 issues, most of them relating to environmental topics, through student-centered, action-oriented activities. Furthermore, the learning activities are focused heavily on deeper learning skills specified in the Common Core State Standards. This is a teacher's guide; Facing the Future also puts out a student guide. For purchase.
Human-Environment Interactions in India
India has within its borders nearly one-fifth of the entire human population. This lesson, from Primary Source, examines how India has attempted to balance development needs, including policies that would lift millions out of severe poverty, with environmental preservation. Free.
How Can Biodiversity be Preserved?
This curriculum, by Stanford University's SPICE, looks at what biodiversity is, how it's valued, and case studies on how different groups hope to preserve complex ecosystems and their species. For purchase.
Buy, Use, Toss? A Closer Look at the Things We Buy
Students learn about the five major steps of the materials economy: extraction, production, distribution, consumption, and disposal. Teachers coach students to analyze the sustainability of these steps, determining how responsible consumption can benefit people, economies, and environments. Free.
Global climate change is a big topic. We turn to National Geographic Education to break it down for learners, from closed lab experiments to hypothesizing what happens on a large scale and over time. This lesson deals with the science of climate change; there are many interesting case studies from around the world that look at the impact of climate change and would pair nicely with this. Free.
The Human World
Patterns of human settlement create a new geography that needs to be understood on its own terms. Humans have always moved for various push and pull reasons, but the forces of globalization mean that the scale of human migration is greater than ever before, and human settlements must be reconceptualized.
Global Patterns of Human Migration
An introductory lesson from National Geographic Society that allows students to examine factors that lead to human migration, and to study different case studies throughout time on why humans moved, and how they settled. A great primer. Free.
Explorers, Traders & Immigrants: Tracking the Cultural and Social Effects of the Global Commodity Trade
This is a terrific tool to help students understand the drivers of the global economy over time. Experts from different fields at the University of Texas at Austin created this resource guide and curriculum that dives deeply into the roots of global trade, and offers a springboard for students to do further explorations. Free.
A World of Ideas
Human ingenuity has changed the world. But in some instances, such as governance and international relations, humans often do not reach agreement on which ideas are the best. Students who examine some of the gray areas will have a clearer understanding of how the world works.
Dilemmas of Foreign Aid: Debating U.S. Policies
Sometimes the best-intentioned ideas backfire. This curriculum, from Brown University's CHOICES program, compels students to take a critical look at foreign aid case studies and examine what worked and what went wrong. For purchase.
Confronting Genocide: Never Again?
The greatest failure of human society is genocide. But even in the century after international peacekeeping made its debut, genocides are still a reality. Students grapple with this real-world problem by understanding what went wrong in the past in an effort to prevent such atrocities in the future. For purchase.
Competing Visions for Human Rights: Debating U.S. Policies
Despite an internationally accepted human rights treaty, dozens of meetings, new institutions, and social movements, human rights abuses persist. Students use case studies and primary sources to examine the evolving role that human rights has played in international politics and explore the current debate on U.S. human rights policy. For purchase.
Security, Civil Liberties, and Terrorism
This curriculum was written nearly a decade ago, but the recent NSA controversy demonstrates that it covers a topic that national security agencies, governments, and citizens are still negotiating. The curriculum starts with an examination of what terrorism is and the forms that it takes, before exploring how governments fight it and the role of people's privacy and voice in the matter. For purchase.
Why is Civil Society Important?
Civil society (religious institutions, unions, community groups, the media, and other non-governmental, non-business organizations) helps citizens shape the culture, politics, and economies of their nation. There is no formal definition or role for civil society worldwide. Students study how civil societies shape different areas of the world, and how lessons learned from one area could be applied to other areas. Free.
Nelson Mandela also observed that there "can be no keener revelation of a society's soul than the way in which it treats its children." To me, this involves how we prepare them for the future, including how to think about the world, which is growing more interconnected every year.
One of the ocean’s most iconic symbols, the killer whale, is a charismatic species that has probably the largest geographic distribution of any species (after humans). Killer whales live in all latitudes, in all oceans, from the Arctic Ocean to Antarctica. With its well-known tall dorsal fin and characteristic black and white color pattern, the killer whale has been known to coastal peoples for thousands of years and is one of the more recognizable species today.
Killer whales get their name from their reputation of being ferocious predators, exhibiting almost hateful behaviors when toying with their prey. Interestingly, however, killer whales are not true whales. They are very large dolphins, reaching lengths of 33 feet (10 m) and weights of at least 10 metric tons (22,000 pounds). Killer whales and other dolphins are thought to be some of the smartest animals on the planet, challenging the great apes (chimps and gorillas) for the top spot. They are also extremely curious and often approach people to investigate. Their intelligence is likely both a result of and a driver of their complex social structures. They generally live in small groups and organize complex group behaviors when mating and hunting. They are intelligent, playful, powerful animals – a worrisome combination if you happen to be their preferred prey. Different killer whale populations specialize on different prey types, including large bony fishes; seals, sea lions, and other large marine mammals; and penguins, among other things.
Individual killer whales are known to reach ages of 100 years old. Like all mammals, killer whales reproduce through internal fertilization, and females give birth to live young. Juveniles are able to swim from the moment they are born, but they are totally dependent on nursing their mothers’ milk for one to two years.
Though all killer whales, worldwide, are considered to be members of the same species, there are several known populations that have slightly different appearances, sizes, and behaviors. These include populations that are somewhat territorial and do not migrate long distances (the so-called resident populations) and those that are more migratory in nature (the so-called transient populations). Furthermore, some transient populations stay near the coast and overlap with resident populations, while others are oceanic. Some killer whale scientists believe that these populations may represent different species, and recent research suggests that there may be as many as 16 different species of killer whale. To date, the new species have yet to be described, and the cosmopolitan species Orcinus orca is considered to cover all individuals around the world, regardless of behavior or appearance.
Though they are powerful hunters and are known to exhibit somewhat torturous behavior towards large sharks and other marine mammals, killer whales have never been known to attack humans in the wild. This is a somewhat puzzling lack of aggressive behavior, as people would be extremely easy prey for this species. In captivity, however, male killer whales have killed several trainers in the last few decades. These large marine predators are not meant to be kept in small tanks in captivity, and they seem to eventually snap and exhibit aggressive behaviors toward their handlers. In addition to their capture for display in public aquaria, low numbers of killer whales have been regularly hunted for food in some regions around the world. In the United States and some other places, this species is given complete legal protection as a result of its being a highly intelligent marine mammal. Its global distribution and the confusing relationships between populations and potential new species described above mean that scientists do not believe they have enough data to determine the conservation status of the killer whale. Further study and continued monitoring are both necessary to understand any potential risks that this species faces.
Friday Afternoon Questions, Kindergarten-1st Grade Classroom
Every Friday afternoon the K1 teachers email "Friday questions" to all the parents in the K1 classroom. This is just one of the ways parents can peek into their child's classroom, and it can help spark conversations at home. Here are a few questions that went home this year:
Friday Questions: September 2015
- What does it mean to "persevere"?
- Who in our class has chickens at home?
- What happened to your ice cube during Investigations?
- Can you name a color in Spanish?
- What was invented when an engineer (in the 1940's) looked at the tiny hooks on a burr?
- What was our burr investigation? Can you remember two things that the burrs stuck to? Why do burrs stick to things? (hint: what's inside the burr?)
- How many Unifix cubes tall are you?
- What did we cook on Wednesday? How did the matter (the apples) change from their solid form?
- What is a delta? (hint: it's the shape of a triangle)
- What kind of animals did our visitor show us during his presentation on Tuesday? Which was your favorite animal?
Friday Questions: October 2015
- What is a legend? What's one legend you listened to during Read Aloud?
- What does "circumference" mean? What did you measure the circumference of?
- What is a cattail and where did you find them?
- What are hot colors? What are cool colors?
- What is your Native American name? Why did you pick that name?
- Who was Squanto? How did he know English?
- What is a Wampanoag word that you realized you already knew and used a lot?
- What song did you sing today in music class?
- What symbol are you sewing on your pouch?
- How do you create a story using symbols?
- How many pumpkin seeds did we count in the big pumpkin?
- Can you give an example of an "I statement"?
Friday Questions: November 2015
- Did your boat float during Investigations? What was it made out of?
- What math materials did you use to do patterning?
- What are some of your ideas for your "Pourquoi" tale? (How the ____ got its _____)
- What's inside an acorn? How do you make acorn flour?
- What does it mean to "Read the Room"?
- What letters did you practice in handwriting this week?
- First graders, what is one pattern you discovered on the hundred chart?
- Can you make an ABB or ABCC pattern using clapping? What others can you make?
- Which legend story did you pick to write? Who are your characters?
- What does "te paso bondat" mean when you say it at Morning Meeting
- How much is 5 bunches of ten and 4 leftover?
- What did you say you were grateful for on your leaf on the tree?
Friday Questions: December 2015
- What Native American tool are you making in Investigations?
- Why is it getting darker earlier in the afternoon?
- What is one difference between a coniferous tree and a deciduous tree?
- What did Amelia Bloomer do to change the clothes women wear today? Why didn't she want to wear a dress?
- What materials did you use to make a lantern? Why are you making lanterns?
- How many hours a day does a koala sleep?
- What unit are we starting in January?
Friday Questions: January 2016
- Who was Sally Ride? What are a few things she needed to do in her training?
- Can you name something that flies in the troposphere? What about the stratosphere?
- What is the name of our classroom rocket ship? What does it look like?
- What is the International Space Station (ISS)? What happens there?
- Where did you travel in Spanish class? What did you make?
- How do astronauts sleep in space? What other things did you learn about the space station this week?
- What are drag and thrust? How did you experiment with them during Investigations?
- What was the lunar module?
- Where did the 3 astronauts land when they returned to Earth from the moon?
- What is symmetry?
- What is the Reading Challenge you started this week?
- How much does a space suit weigh? How did we make our own moon craters? What did you notice?
- What chemical reaction propelled your rockets in class this week?
- Which astronaut stayed in the command module during the Apollo 11 mission?
- What books did you read with your reading buddy this week?
- What does the mirror do to make a telescope work?
Morphology is the study of how words are put together. A lively introduction to the subject, this textbook is intended for undergraduates with relatively little background in linguistics. Providing data from a wide variety of languages, it includes hands-on activities such as 'challenge' boxes, designed to encourage students to gather their own data and analyze it, work with data on websites, perform simple experiments, and discuss topics with each other. There is also an extensive introduction to the terms and concepts necessary for analyzing words. Unlike other textbooks it anticipates the question 'is it a real word?' and tackles it head on by looking at the distinction between dictionaries and the mental lexicon. This second edition has been thoroughly updated, including new examples and exercises as well as a detailed introduction to using linguistic corpora to find and analyze morphological data.
This map shows the native and introduced (adventive) range of this species. Given appropriate habitat and climate, native plants can be grown outside their range.
Growing your own plants from seed is the most economical way to add natives to your home. Before you get started, one of the most important things to know about the seeds of wild plants is that many have built-in dormancy mechanisms that prevent the seed from germinating. In nature, this prevents a population of plants from germinating all at once, before killing frosts, or in times of drought. To propagate native plants, a gardener must break this dormancy before seed will grow.
Each species is different, so be sure to check the GERMINATION CODE listed on the website, in the catalog, or on your seed packet. Then, follow the GERMINATION INSTRUCTIONS prior to planting. Some species don't need any pre-treatment to germinate, but some species have dormancy mechanisms that must be broken before the seed will germinate. Some dormancy can be broken in a few minutes, but some species take months or even years.
Seed dormancy can be broken artificially by prolonged refrigeration of damp seed in the process of cold/moist STRATIFICATION. A less complicated approach is to let nature handle the stratifying through a dormant seeding, sowing seeds on the surface of a weed-free site in late fall or winter. Tucked safely beneath the snow, seeds will be conditioned by weathering to make germination possible in subsequent growing seasons.
To learn more, read our BLOG: How to Germinate Native Seeds
DORMANT BARE ROOT PLANTS:
We dig plants when they are dormant from our outdoor beds and ship them April-May and October. Some species go dormant in the summer and we can ship them July/August. We are among the few still employing this production method, which is labor intensive but plant-friendly. They arrive to you dormant, with little to no top-growth (bare-root), packed in peat moss. They should be planted as soon as possible. Unlike greenhouse-grown plants, bare-root plants can be planted during cold weather or anytime the soil is not frozen. A root photo is included with each species to illustrate the optimal depth and orientation. Planting instructions/care are also included with each order.
Download: Installing Your Bare-Root Plants
3-packs and trays of 32, 38, or 50 plants leave our Midwest greenhouses based on species readiness (being well-rooted for transit) and order date; Spring shipping is typically early May through June, and Fall shipping is mid-August through September. Potted 3-packs and trays of 38 plugs are started from seed in the winter so are typically 3-4 months old when they ship. Trays of 32/50 plugs are usually overwintered so are 1 year old. Plant tray cells are approximately 2” wide x 5” deep in the trays of 38 and 50, and 2.5" wide x 3.5" deep in the 3-packs and trays of 32; ideal for deep-rooted natives. Full-color tags and planting & care instructions are included with each order.
Download: Planting and Care of Potted Plants
Shipping & Handling Charges:SEED $100.00 and under: $5.00
Retail SEED orders over $100.00 ship free! Custom seed mixes or wholesale seed sales over $100, add 5% of the total seed cost
(for orders over $1,000 a package signature may be required)
BARE ROOT and POTTED PLANTS
$50.00 and under: $9.00
over $50.00: 18% of the total plant cost. (For orders over $1,000 a package signature may be required.)
TOOLS and BOOKS have the shipping fee included in the cost of the product (within the contiguous US).
**We are required to collect state sales tax in certain states. Your state's eligibility and % will be calculated at checkout. MN State Sales Tax of 7.375% is applied for orders picked up at our MN location. Shipping & handling charges are also subject to the sales tax.
SEED, TOOLS and BOOKS are sent year-round. Most orders ship within a day or two upon receipt.
BARE ROOT PLANTS are shipped during optimal transplanting time: Spring (April-May) and Fall (Oct). Some ephemeral species are also available for summer shipping. Since our plants are field-grown, Nature sets the schedule each year as to when our season will begin and end. We fill all orders, on a first-come, first-serve basis, to the best of our ability depending on weather conditions beyond our control.
POTTED PLANTS (Trays of 32/38/50 plugs and 3-packs) typically begin shipping early May and go into June; shipping time is heavily dependent on all the species in your order being well-rooted. If winter-spring greenhouse growing conditions are favorable and all species are well-rooted at once, then we ship by order date (first come, first serve). We are a Midwest greenhouse, and due to the challenges of getting all the species in the Mix & Match and Pre-Designed Garden Kits transit-ready at the same time, we typically can't ship before early May. Earlier shipment requests will be considered on a case-by-case basis.
*We are unable to ship PLANTS (bare root or potted) outside the contiguous US or to CALIFORNIA due to regulations.
We ship using USPS, UPS and Spee Dee. UPS and Spee Dee are often used for expediting plant orders; they will not deliver to Post Office Box numbers, so please also include your street address if ordering plants. We send tracking numbers to your email address so please include it when you order.
FOR MORE DETAILED SHIPPING INFORMATION, INCLUDING CANADA SHIPPING RATES (SEED ONLY), PLEASE SEE 'SHIPPING' AT THE FOOTER OF THIS WEBSITE.
- Germination Code
- Life Cycle
- Sun Exposure: Full, Partial, Shade
- Soil Moisture: Wet, Medium-Wet, Medium
- Height: 3 feet
- Bloom Time: June, July
- USDA Zones
- Catalog Number
For decades, especially leading up to the 2000s, educators and researchers have debated how children should be taught to read, and how exactly the process of reading works in our brains. The discussion has been at times intense and impassioned, which is understandable given what is at stake. After all, we live in the age of information where reading is really not optional, meaning those who don’t read well are significantly disadvantaged. Statistics Canada reported in 2012 a 70% higher household income for adults with “level 4-5” reading skills (able to integrate information from multiple dense texts, using reasoning and inferencing) compared to those with “level 1” reading skills (able to locate single pieces of information from short texts with basic vocabulary). We all want the very best for our children, so it’s no wonder that differing opinions about literacy instruction have sparked debates contentious enough to be dubbed The Reading Wars.
Fortunately, there is a way to move beyond the debate, and it involves looking to what scientific research can tell us about literacy development and what kind of instruction leads to the best outcomes. Overwhelmingly, the research supports what Gough and Tunmer proposed 35 years ago. Their simple view of reading explains the process of reading like this:
The Simple View of Reading (Gough & Tunmer, 1986) is indeed a simple framework, but its implications are complex, because it proposes that two separate processes are required to achieve the ultimate goal of reading comprehension. Basically, to read, you have to be good at two things:
- Decoding – figuring out the words on the page
- Language comprehension – understanding what the words mean
Because reading comprehension is the product (not the sum!) of the two, if you have trouble with either one, you will have difficulty understanding the words on the page. For short: D x LC = RC. When someone is experiencing reading difficulties, the very first thing we need to do is figure out whether it's because they need to work on decoding, language comprehension, or both. Over the past three and a half decades, the simple view has held up, having been put to the test countless times (e.g. Catts, Hogan, & Adlof, 2005; Kendeou, Savage, & van den Broek, 2009), including in settings where children learn to read in a second language (e.g. Erdos, Genesee, Savage, & Haigh, 2010).
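The multiplication is the whole point. Here is a minimal sketch in Python; the 0-to-1 skill scores are invented purely for illustration:

```python
def reading_comprehension(decoding: float, language_comprehension: float) -> float:
    """Simple View of Reading: RC = D x LC, with each skill scored 0.0 to 1.0."""
    return decoding * language_comprehension

# Because RC is a product, a weakness in either skill caps comprehension:
print(reading_comprehension(1.0, 1.0))  # 1.0  -> strong reader
print(reading_comprehension(0.0, 1.0))  # 0.0  -> no decoding, no comprehension
print(reading_comprehension(0.5, 0.5))  # 0.25 -> two moderate skills multiply down
```

If reading were the sum of the two skills, perfect language comprehension could compensate for zero decoding; the product makes clear that it cannot.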
Putting the Simple View to work:
If we plot the two processes of reading against each other (D x LC), we get a model that shows four possible scenarios of reading ability. Although it differs somewhat from what Gough and Tunmer originally proposed, the model below is based on current understandings of reading difficulties (e.g. Bishop & Snowling, 2004; Adlof & Hogan, 2018). A clear understanding of this model is crucial; it will help us understand the problem and figure out the right solutions.
Breaking down the quadrants:
- Stronger decoding and stronger language comprehension. These children still need to be taught how to read, but literacy skills should develop typically, with relative ease. They are advantaged when early reading instruction is approached systematically, but even if instruction is not totally systematic, they will still be able to learn to read.
- Weaker decoding and stronger language comprehension. Children with this profile will require structured instruction in order to become proficient readers. Teaching must focus on foundational skills required for word recognition, including phonemic awareness and phonics. If good attempts have been made to teach decoding, a person who continues to struggle significantly in this area may have specific learning disorder (SLD) in reading, also called dyslexia.
- Stronger decoding and weaker language comprehension. This can lead to a subtler type of reading problem where a person may be able to figure out the words on the page, but have trouble getting at their meaning. When difficulties are more or less specific to language skills, it is called developmental language disorder (DLD). Children with this profile may become strong readers if they have some help to further develop their language skills. For instance, we would want to apply a good approach to vocabulary development, and provide help with organizing ideas from more complex texts.
- Weaker decoding and weaker language comprehension. A combination of both factors leads to reading difficulties. Children with this profile will need a careful teaching approach that meets their needs in both areas.
Don’t put me in a box!
What is the point of these quadrants, anyway? Not to label children and put them neatly into a box. The purpose is to understand where a child is at in his or her journey to becoming a reader, using an evidence-based model of reading development. When we figure out what a child needs help with, we can get to work right away, using targeted interventions that will be of the most benefit.
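In code, that kind of triage might look like the following minimal sketch; the 0-to-1 scores and the 0.6 cutoff are invented for illustration, since real assessment relies on normed tests rather than a single threshold:

```python
def quadrant(decoding: float, language: float, cutoff: float = 0.6) -> str:
    """Map two skill scores (0.0 to 1.0) onto the four quadrants above."""
    strong_d, strong_lc = decoding >= cutoff, language >= cutoff
    if strong_d and strong_lc:
        return "typical development: keep building both skills"
    if not strong_d and strong_lc:
        return "decoding weakness (dyslexia profile): phonemic awareness, phonics"
    if strong_d and not strong_lc:
        return "language weakness (DLD profile): vocabulary, text organization"
    return "mixed profile: structured support in both areas"

print(quadrant(0.9, 0.4))  # -> language weakness (DLD profile): ...
```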
It really does make sense.
A while back, I was telling a friend all about the virtues of the simple view of reading (I’m fun at parties). Her first reaction was a blank stare, and then something along the lines of, “That is simple, why on earth is this a big deal?” So if you were thinking that the equation D x LC = RC is a no-brainer, you’re not alone. However, the simple view doesn’t actually claim that reading is simple, just that D x LC is a simple way to explain the big-picture processes, both of which are deeply complex. Furthermore, we need to consider the context from which the simple view arose in 1986. Although the 1980s were awesome for a lot of things (like slouch socks and Cyndi Lauper), it was a tough time to be a reading researcher and likely even tougher to be a teacher of reading, because the “reading wars” were in full swing. There were a lot of people who flat-out rejected the idea that decoding words was a process to be distinguished from other language skills, preferring to view reading as a holistic process, even for novice readers.
The belief in a holistic approach affected the way in which reading was taught to many children in the 1980s and 90s, as an approach called whole language became very popular. Many educators and researchers in the field of education at the time believed that reading came naturally to young learners, who would benefit from being immersed in a world of books, without getting bogged down in the details of word decoding. But it turned out that whole language was based on faulty assumptions; there is no credible evidence that teaching reading more holistically— without a focus on decoding— is the way to go. On the contrary, we have heaps of evidence to indicate that emphasis on phonics and phonemic awareness instruction in the early years of schooling results in the best outcomes (e.g. Ehri et al. 2001). Sadly, children on the left side of the quadrant, for whom decoding is difficult even when it is explicitly taught, were particularly vulnerable to the shortcomings of the whole language approach. Being a child with dyslexia in most 1980s or 90s classrooms would have been extremely difficult.
Given all of this knowledge, you might assume that in 2021 it would be case closed on whole language. But in reality, it's not that straightforward. Although the term whole language is no longer in vogue in most circles, there are still vestiges of it that appear in many popular reading programs, classroom materials, and teacher trainings. But the history of whole language is a whole other topic for a whole other day. Until then, we'll go with what the research confirms about reading: keep it simple with D x LC = RC.
Ages Six-to-Twelve Months
Motor Development Milestones
- Learns to sit up by himself/herself for a short period of time (approximately 5 minutes)
- Uses finger and thumb to pick up objects such as pieces of food (pincer grip)
- Begins to hold own bottle
- Pulls self into a crawling position by raising up on arms and drawing knees up beneath the body
- May accidentally begin scooting backwards when placed on his/her stomach; soon will transition to crawling forward
- Enjoys being held and supported in the standing position; may jump in place
Cognitive Development Milestones
- Uses hand, mouth, and eyes in coordination to explore own body, toys
- Imitates actions such as pat-a-cake, waving bye-bye, and playing peek-a-boo
- Shows some fear or hesitation when placed on a high surface such as a changing table, stairs; depth perception is becoming evident
- Searches for a toy or food that has been completely hidden under cloth; beginning to understand that objects continue to exist even when they cannot be seen
- Explores objects in many ways, turns them, feels surfaces, bangs & shakes them
- Reaches accurately with either hand
- Plays actively with small toys such as a rattle
- Holds small object in one hand while reaching toward another object
- Inspects objects with hands and eyes simultaneously
Ten to Twelve Months
Speech and Language Development Milestones
- Babbles or jabbers deliberately to initiate a social interaction
- Shakes head for no
- Looks for a voice when name is called
- Babbles in sentence-like sequences; later followed by language-like inflection
- Waves “bye-bye”
- Claps hands together when asked
- Imitates sounds that are similar to those already learned
- Imitates motor noises, tongue clicks, etc.
- May say “dada” or “mama”
- Enjoys rhyming and simple songs
- Shows interest in vocalizing while listening to music
- Hands toy to an adult when appropriate gestures accompany the request
Social Development Milestones
- Clings to parent or caregiver while resisting separation, exhibits a fear of strangers
- Wants the caregiver to be in constant sight
- Enjoys being near and included in daily activities of caregiver or family members
- Enjoys novel experiences and opportunities to examine and explore new objects
- Begins to assert self by resisting caregiver’s requests
- Offers toys or objects to others
- Becomes attached to a favorite toy or object
- Looks toward and smiles at person calling his/her name
- Repeats behaviors that get attention
- Jabbers continuously
- Can execute simple directions and requests
- Understands the meaning of the word or sign for “no”
Day Care Resources
7 Domains of Learning
Personal and Social Development
Children learn to develop healthy relationships with adults and their peers while cultivating a positive personal identity.
Scientific Thinking
Children are taught to channel their natural curiosity to articulate questions and draw conclusions about the world around them.
Mathematical Thinking
This discipline focuses on numbers and their relationships, in addition to the geometric concepts that explain spatial awareness.
Language and Literacy
The basis of learning, this domain opens the door of communication through writing and drawing as well as grammar and comprehension.
Social Studies
This sphere examines societal relationships based on social structures and the connections between people and their communities.
The Arts
An exploration into visual arts, music, dance and theater, this domain celebrates distinctive expression of emotions and ideas.
Physical Development and Health
Children are taught regard for health and nutrition in addition to the benefits of exercise to develop a strong mind and body.
Today, you must compete for student attention. You compete with hundreds of channels available through cable and satellite television. Our phones and tablets stream most of them too! This is not to mention computer games, social media, and everything else available through the internet and cyberspace.
Our kids are overstimulated and our teachers overwhelmed as they endeavor to get their students interested and involved in the curricula. Paradoxically, it is our own lessons that teach students when they can be off-task and when they can minimally participate in class! The notion that fairness requires everyone to participate in exactly the same way often creates habits of disengagement and gaps in student attention in the classroom.
The good news is that promoting attention and interest is immediately doable!
Differentiated Instruction Benefits for Students
- Students in Differentiated Instruction classrooms enjoy a number of advantages over those in traditional “one-size-fits-all” settings. Students are able to be active participants in their own learning.
- The curriculum is no longer aimed at the middle of the group but is accessible and interactive for all students. Students who found a traditional classroom lesson too difficult, or who had already learned the material and felt “bored” or “not engaged”, are now excited about their learning.
- Students receive the content and curriculum in ways that ensure they are engaged. This engagement allows students to connect and learn the content at a deep level of understanding.
The Reading Like a Historian curriculum engages students in historical inquiry. Each lesson revolves around a central historical question and features a set of primary documents designed for groups of students with a range of reading skills.
This curriculum teaches students how to investigate historical questions by employing reading strategies such as sourcing, contextualizing, corroborating, and close reading. Instead of memorizing historical facts, students evaluate the trustworthiness of multiple perspectives on historical issues and learn to make historical claims backed by documentary evidence. To learn more about how to use Reading Like a Historian lessons, watch these videos about how teachers use these materials in their classrooms.
The Fraught Relationship between African Americans and Military Service by Mateo Mérida
Historically, patriotism has been intimately tied to acts of military service. Images of Uncle Sam, the raising of the flag on Iwo Jima, or even Captain America are seen not only as symbols of American military excellence, but as defining American patriotism itself. As a consequence, the connection between the military and patriotism has been an imperative one for marginalized people seeking to elevate the social standing of their communities, and this holds for Black Americans as well.
While Black veteranship is visible as early as the American Revolution in both American and British ranks, the Civil War was pivotal in the history of Black soldiers in North America. Frederick Douglass, a firm advocate for the recruitment of Black soldiers in the war, said, “Once let the Black man get upon his person the brass letters U.S.; let him get an eagle on his button, and a musket on his shoulder, and bullets in his pocket, and there is no power on the earth or under the earth which can deny that he has earned the right to citizenship in the United States.”1 When Black soldiers were permitted to join the army, African American men enlisted in droves to actively eliminate slavery across the United States. For Black men like Douglass, it was believed that by joining the military, fighting for American values, and sacrificing their lives in the process, there could be no argument made that Black people born in the U.S. were not proud Americans, and no argument strong enough to completely deny them citizenship.
Shortly after the end of the Civil War, the 13th and 14th Amendments banished slavery (except as punishment for a crime) and granted African American people citizenship in the United States. While the passing of these Amendments was a major victory for Black civil rights in the United States, the country would not see further meaningful civil rights reform until the 1960s. In the period between the Civil War and the civil rights movement, a military career did not uplift Black Americans in all of the ways they might have anticipated.
When Black veterans returned home to the United States, they had hoped that their veteran status would make them heroes in their communities. However, this was not the case. Violence was still a staple of Black oppression. In 1918, after his service at the end of WWI, the Black veteran Charles Lewis returned to his home in Kentucky. Roughly a month later, a strange man burst into Lewis’ home, accusing him of robbery, leading Lewis to run from his home, dressed in his old uniform. When he was apprehended by the white legal authorities, he was thrown in jail, and lynched the very next day.2 Other soldiers, such as the world-renowned Buffalo Soldiers, were so angered by being required to abide by the Jim Crow laws of Houston, Texas—despite their military status—that it led to a riot in August 1917.
Despite their experiences immediately following the First World War, many Black veterans hoped for a substantial change in the domestic battle against racism and inequality after the United States became involved in World War II. The “Double V campaign” emerged during this period in reference to a victory against fascism abroad and a victory against racism in the United States. Still, no sweeping changes arrived for Black Americans domestically, even where radical policies were created that, at first glance, appeared to guarantee a tangible impact on daily life.
Following WWII, the G.I. Bill enabled American veterans to pursue an advanced education with the knowledge that their finances were covered by the U.S. government. In theory, this policy also enabled Black veterans to attain a higher education themselves. However, Black veterans were met with a myriad of caveats that white veterans were not. For the majority of Black veterans, who lived in the South, segregation prevented these potential students from advancing their education, leaving historically Black colleges and universities to turn many students away as a result of a lack of resources and the overwhelming number of applications they received. In Northern states, Black applicants were actively discriminated against in the admission process, while white veterans were admitted en masse. Policies like the G.I. Bill launched much of the white middle class into well-paying positions as a result of their newly earned college degrees, while the families of Black veterans remained in poverty and in segregation, with little else to show for a military career.3
Frustrated by the lack of progress in the fight against racism at home, many Black WWII veterans sought to make the changes themselves. Organizers and activists like Medgar Evers, James E. Campbell, and Isaac Woodard, along with many other WWII veterans, laid the groundwork for the civil rights movement, using the discipline and training they received in the armed services to mobilize thousands in the struggle for Black liberation.
By the 1960s, the view many African Americans previously held about military service as a tool for changing social conditions began to wane considerably. Some saw it as an active tool of oppression. In an essay written in the mid-20th century, activist and educator Septima Clark declared: “The contradiction between squandered wealth and dehumanizing poverty; the contradiction between a congenital racism and feeble efforts at becoming a democracy; the contradiction between a tradition of civilian controlled government, academic and other institutions on the one hand and the institutional power requirements of the military industrial complex on the other, all of these are exacerbated by the escalation of the power of the military on the affairs of the nation today.”4
With the military as the primary driver of American affairs, according to Clark, it is then the chief reinforcer of all related social issues, domestic and international, in relation to the United States. Clark’s overarching view of the role of the military in the United States shows that the view of the military among Black Americans in the 1960s was not a single fixed ideology. In contrast to the soldiers of the Civil War and World War I and II eras, many Black Americans in the Civil Rights Movement no longer saw the military as a legitimate tactic in the struggle for Black liberation.
This view was not restricted to the United States. The Vietnam War became the second-longest war in American history, second only to the War in Afghanistan. To create a disruption within the American forces in Vietnam, the South Vietnam National Liberation Front created pamphlets geared towards Black soldiers fighting in Southeast Asia: “Your real enemies are those who call you ‘N*s’. Your genuine struggle is on your native land. GO HOME NOW AND ALIVE!”5
The legacy of African American soldiers in the military is not easy to define. While the military has been an important mechanism for improving the lives of many Black American families, it has also done little overall to close the gaping disparities Black people face daily in the United States. The place that the military holds in Black American history is variable, and has been subject to change with the times and events of the day.
- Kelley, W. D, Anna E Dickinson, and Frederick Douglass. Address at a Meeting for the Promotion of Colored Enlistments, Philadelphia. July 6, 1863. Manuscript/Mixed Material. https://www.loc.gov/item/mfd.22007/.
- Williams, Chad. “African-American Veterans Hoped Their Service in World War I Would Secure Their Rights at Home. It Didn’t.” Time Magazine, November 12, 2018. Accessed November 4, 2021. https://time.com/5450336/african-american-veterans-wwi/.
- Herbold, Hilary. “Never a Level Playing Field: Blacks and the GI Bill.” The Journal of Blacks in Higher Education. No. 6. Winter, 1994-1995. https://doi.org/10.2307/2962479.
- Clark, Septima Poinsette. Avery Research Center: Septima P. Clark Papers, ca. 1910-ca. 1990. “The New Resistance Movement.” 2016, p. 2. https://lcdl.library.cofc.edu/lcdl/catalog/lcdl:92641.
- Avery Research Center: Walter Pantovic Slavery and African American History Collection. “Vietnam War Propaganda Card.” 2014. https://lcdl.library.cofc.edu/lcdl/catalog/lcdl:80152.
Throughout history humans have had to adapt to changing conditions in order to survive. Food shortages are one of the major pressures that have shaped past populations. Because of this, the human body has many physiological adaptations that allow it to go extended periods of time consuming little to no food. These adaptations also allow the body to recover quickly once food becomes available. They include changes in metabolism that allow different fuel sources to be used for energy, the storing of excess energy absorbed from food in the forms of glycogen and fat to be used in between meals, and a reduction in the basal metabolic rate in response to starvation, as well as physiological changes in the small intestines. Even in places where starvation is not a concern today, these adaptations are still important as they also have an effect on weight gain and dieting in addition to promoting survival when the body is in a starved state.
Disclaimer: The initial goal of this project was to present this information as a podcast episode as a part of a series aimed at teaching the general public about human physiological adaptations. Due to the circumstances with COVID-19 we were unable to meet to make a final recording of the podcast episode. A recording of a practice session recorded earlier in the year has been uploaded instead and is therefore only a rough draft.
Mined to make the first compass needles, the mineral magnetite is also made by migratory birds and other animals to allow them to sense north and south, and thus navigate in cloudy or dark atmospheric conditions or under water. Researchers have compositionally modified magnetite to capture visible sunlight and convert this light energy into electrical current. This current may be useful to drive the decomposition of water into hydrogen and oxygen. The team generated this material by replacing one third of the iron atoms with chromium atoms.
Generating materials that can harness the power of the Sun to make a combustible fuel such as hydrogen, which would have no carbon footprint, represents an extremely attractive pathway to new clean energy sources. By taking advantage of the compositional precision, purity, and low defect densities found in oxide films prepared by molecular beam epitaxy, the team showed that an unusual semiconducting phase — which is ferrimagnetic well above room temperature and absorbs light in the visible portion of the solar electromagnetic spectrum — can be stabilized on magnesium oxide (MgO(001)) substrates.
This phase results when precisely one third of the iron (Fe) in magnetite (Fe3O4) is replaced with chromium (Cr). The investigation revealed that chromium ions (Cr3+) substitute for iron exclusively at octahedral sites in the spinel lattice, occupying half of these sites. As a result, the charge transport mechanism involves electron-hopping between iron cations at octahedral and tetrahedral sites in the lattice.
Having shown that chemically modified magnetite (Fe2CrO4) meets the basic criteria required for an air-stable, visible-light photocatalyst, the investigators plan to carry out experiments in which they will transfer freshly grown Fe2CrO4 surfaces to a photoelectrochemical cell under a dry nitrogen atmosphere to avoid picking up surface carbon contamination. There they will measure the photocatalytic activity for the oxygen evolution and hydrogen evolution reactions that occur when light energy is successfully used to break water down into usable fuel.
Prevalence of obesity among a group of Kirkuk women
Obesity is globally considered a pandemic with potentially serious consequences for human health. It is estimated that more than 20% of adults in the UK, and more than 30% in the USA, are obese (i.e. body mass index, BMI ≥ 30 kg/m²). Obesity prevalence has increased threefold within the last 20 years and continues to rise. In developing countries, average national rates are not nearly so high, but the real figures show alarmingly high rates of obesity in many urban communities.
What is Body Mass Index (BMI)?
BMI is calculated by dividing weight (in kilograms) by the square of height (in meters). It is considered a good indicator of a person's healthy body weight, but it does not measure the exact percentage of body fat. The BMI measurement can sometimes be misleading - a muscular man may have a high BMI but have much less fat than an unfit person whose BMI is lower. However, in general, the BMI measurement is considered to be a useful indicator for the 'average person'.
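As a concrete illustration, here is a minimal sketch of the calculation in Python; the example weight and height are invented, and the categories follow the standard WHO adult cut-offs:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by the square of height (kg/m^2)."""
    return weight_kg / height_m ** 2

def who_category(value: float) -> str:
    # Standard WHO cut-offs for adults.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

example = bmi(95.0, 1.75)     # invented example values
print(round(example, 1))      # 31.0
print(who_category(example))  # obese
```

Note the caveat the paragraph describes: the number says nothing about whether the mass is fat or muscle.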
Sugars and Tooth Decay
Updated: May 9, 2021
Tooth decay, or ‘dental caries’, occurs when acid in the mouth attacks the enamel and dentine of the teeth, causing holes or cavities to form. The acid is produced by bacteria that are found within the plaque – a thin, sticky film that continuously forms over the teeth. When sugar is consumed, it interacts with the bacteria within the plaque to produce acid. This acid is responsible for tooth decay because it slowly dissolves the enamel, creating holes or cavities in the teeth. Tooth decay can lead to abscesses, which may result in the tooth having to be root-canal treated or, worse, removed.
Despite the decreasing levels of tooth decay over the past decades, it still remains one of the most common diseases. According to the Canadian Dental Association, untreated dental caries affects about 35% of the world's population, ranking it first among diseases in prevalence.
Sugars in food and drinks play a major role in the development of dental caries. Bacteria within the plaque use the sugar as energy and release acid as a waste product, which gradually dissolves the enamel in the teeth.
In 2010, the World Health Organization (WHO) commissioned a systematic literature review to answer a series of questions relating to the effects of sugars on dental caries. The systematic review showed consistent evidence of moderate quality supporting a relationship between the amount of sugars consumed and dental caries development. There was also evidence of moderate quality to show that dental caries is lower when free sugars intake is less than 10% of energy intake. Dental caries progresses with age, and the effects of sugars on the teeth are lifelong. Even low levels of caries in childhood are of significance to levels of caries throughout the life-course. Analysis of the data suggests that there may be benefit in limiting sugars to less than 5% of energy intake to minimize the risk of dental caries throughout the life course.
Here are some facts having to do with sugar and teeth, according to the Canadian Dental Association (CDA):
Each year, three Canadians out of every four see a dentist or other dental professional.
Around 84 percent of Canadians think their oral health is good or even excellent.
Worldwide, 60 to 90 percent of school-aged kids and almost 100 percent of adults experience tooth decay.
Students miss around 2.26 million school days each year in Canada because of dental-related conditions.
About one-third of all day surgeries performed on kids between one and five years old are due to tooth decay.
Approximately 96 percent of adults have cavities.
Around 73 percent of Canadians brush their teeth twice daily.
Around 28 percent of Canadians floss a minimum of five times each week.
96 percent of adults have had cavities, despite cavities being extremely preventable.
Around 21 percent of Canadian adults who have teeth have, or have had, moderate to severe gum problems.
Six percent of Canadian adults no longer have any of their natural teeth.
There are things you can do to lower the dangers of sugar for teeth, starting with how much sugar you’re consuming.
The first step is to be more mindful of your diet. The Heart and Stroke Foundation of Canada recommends that sugar make up no more than about 10 percent of your daily calorie intake.
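To put that guideline into numbers, here is a minimal sketch; it assumes the usual 4 calories per gram of sugar, and the 2,000-calorie intake is only an example:

```python
CALORIES_PER_GRAM_OF_SUGAR = 4  # energy density of carbohydrates

def max_sugar_grams(daily_calories: float, limit: float = 0.10) -> float:
    """Grams of free sugars allowed at a given fraction of energy intake."""
    return daily_calories * limit / CALORIES_PER_GRAM_OF_SUGAR

print(max_sugar_grams(2000))        # 50.0 grams at the 10% guideline
print(max_sugar_grams(2000, 0.05))  # 25.0 grams at the WHO's stricter 5% level
```

At 2,000 calories a day, the 10 percent guideline works out to about 50 grams of sugar, roughly twelve teaspoons.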
You don’t have to avoid sugar altogether to have proper dental hygiene. If you follow some basic practices, you can still consume sugar and maintain healthy teeth. Here are some easy tips to help you avoid cavities while still enjoying your sugary treats.
Brush your teeth 2-3 times a day to remove plaque that can cause acid in your mouth.
Use a fluoridated toothpaste to help remineralize enamel and prevent further decay.
Floss once a day to remove plaque from between the teeth.
Use a mouth rinse or rinse with water after consuming a sugary snack.
Eat snacks with less sugar. Fruits are the healthiest option for satisfying your sugar craving.
Try chewing sugar-free gum. Sugar-free gum can help to clean out your teeth. It also can assist you in producing saliva, which helps remove the coating of sugar from your teeth.
The most critical component to good oral care is scheduling regular dental cleanings and checkups. The dentist and dental hygienist can identify any signs of tooth decay early enough to help reduce or reverse the damage. To set up your appointment, contact Clinique Dentaire WM Dorval for a dental cleaning and exam. Call us at 514-631-3811 to book an appointment.
Table of contents:
- What are behavioral examples?
- What are the three behavioral theories?
- What is an example of learning in psychology?
- Is all human behavior learned?
- How can learned behavior be prevented?
- How many types of human behavior are there?
- What are behavioral skills?
- What are the two basic types of human behavior?
- What are the basis of human Behaviour?
- What is normal Behaviour?
- What is an abnormal behavior?
- What are the 3 criteria for abnormal behavior?
What are behavioral examples?
Examples of words to describe task-oriented behavior with a positive connotation include:
- Active: always busy with something.
- Ambitious: strongly wants to succeed.
- Cautious: being very careful.
- Conscientious: taking time to do things right.
- Creative: someone who can make up things easily or think of new things.
What are the three behavioral theories?
Behavioral Theories. Define and contrast the three types of behavioral learning theories (contiguity, classical conditioning, and operant conditioning), giving examples of how each can be used in the classroom.
What is an example of learning in psychology?
Examples of observational learning include: An infant learns to make and understand facial expressions. A child learns to chew. After witnessing an older sibling being punished for taking a cookie without asking, the younger child does not take cookies without permission.
Is all human behavior learned?
Just about all human behaviors are learned. Learned behavior is behavior that occurs only after experience or practice. Learned behavior has an advantage over innate behavior: it is more flexible. Learned behavior can be changed if conditions change.
How can learned behavior be prevented?
The Habit Change Cheatsheet: 29 Ways to Successfully Ingrain a Behavior
- Keep it simple. Habit change is not that complicated. ...
- The Habit Change Cheatsheet. ...
- Do just one habit at a time. ...
- Start small. ...
- Do a 30-day Challenge. ...
- Write it down. ...
- Make a plan. ...
- Know your motivations, and be sure they're strong.
How many types of human behavior are there?
A study on human behavior has revealed that 90 percent of the population can be classified into four basic personality types: optimistic, pessimistic, trusting and envious. However, the latter of the four types, envious, is the most common, with 30 percent compared to 20 percent for each of the other groups.
What are behavioral skills?
Behavioral skills are interpersonal, self-regulatory, and task-related behaviors that connect to successful performance in education and workplace settings. The behavioral skills are designed to help individuals succeed through effective interactions, stress management, and persistent effort.
What are the two basic types of human behavior?
The two types of behaviour are:
- Efficiency investment behaviour. This behaviour is a one-shot action. ...
- Habitual or 'curtailment' behaviour. This type of behaviour usually entails unconscious decisions, routines.
What are the basis of human Behaviour?
Behavior is also driven, in part, by thoughts and feelings, which provide insight into individual psyche, revealing such things as attitudes and values. Human behavior is shaped by psychological traits, as personality types vary from person to person, producing different actions and behavior.
What is normal Behaviour?
Normality is a behavior that can be normal for an individual (intrapersonal normality) when it is consistent with the most common behavior for that person. Normal is also used to describe individual behavior that conforms to the most common behavior in society (known as conformity).
What is an abnormal behavior?
Abnormality (or dysfunctional behavior) is a behavioral characteristic assigned to those with conditions regarded as rare or dysfunctional. Behavior is considered abnormal when it is atypical or out of the ordinary, consists of undesirable behavior, and results in impairment in the individual's functioning.
What are the 4 criteria for abnormal behavior?
There are four general criteria that psychologists use to identify abnormal behavior: violation of social norms, statistical rarity, personal distress, and maladaptive behavior.
Biological viruses infect cells and alter their DNA to make them create copies of themselves. Computer viruses are similar in the sense that they infect programs in order to replicate. Obviously their aim is not to replicate indefinitely, as evolutionary law would have it, but to execute a payload when certain conditions are met:
- After a certain time, when a high number of targets are infected, to maximize the amount of damage,
- When a special condition is encountered, for example if the infected program is run with administrator privileges.
They have to be as light as possible to disguise themselves. That’s why they are written in assembly, or possibly in C. Rust is also a good candidate: a systems programming language with a highly optimized compiler, no garbage collection, and good interoperability with assembly.
In this article, we will give a quick overview of the two most common types of computer viruses, but others exist:
- Executable viruses,
- Macro viruses.
1. Executable viruses
1.1 Overwriting viruses
The simplest type of virus, which differs from its biological counterpart, overwrites other binaries with itself.
This basic virus could be improved in many ways:
- Check that the binary we are going to infect has not already been infected, by looking for a magic marker written at infection time (see the sketch below),
- Infect only a small fraction of the binaries in order to not be discovered instantly.
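As a minimal sketch of the first improvement — the same check an antivirus scanner could run — here it is in Python; the marker bytes and file handling are illustrative assumptions, not taken from any real virus:

```python
MAGIC = b"\x7fINFECTED\x7f"  # hypothetical infection marker

def already_infected(path):
    """Return True if the binary already carries the marker."""
    with open(path, "rb") as f:
        return MAGIC in f.read()

# A scanner (or the virus itself) would skip files where this is True.
```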
1.2 Parasitic viruses
A better way for a virus to hide is to attach itself to an existing program, letting it work as usual in addition to replicating the virus.
The virus is generally attached to the end of the executable program, because not having to relocate the executable section makes it much easier to program.
The virus then has to alter the starting-address field to point to itself so that it gets executed, and it has to use relative addressing because its position depends on the target binary. These are the kinds of operations that call for low-level programming languages:
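The original low-level listing is not reproduced here. As a stand-in, this hedged Python sketch shows the reading half of the job — locating the entry-point field (e_entry) that a parasitic virus would patch. The offsets follow the public ELF specification, and the target path is only an example:

```python
import struct

def elf_entry(path):
    """Read the entry-point address from an ELF binary's header."""
    with open(path, "rb") as f:
        header = f.read(64)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    is_64bit = header[4] == 2          # EI_CLASS: 1 = 32-bit, 2 = 64-bit
    fmt = "<Q" if is_64bit else "<I"   # e_entry is 8 or 4 bytes wide
    (entry,) = struct.unpack_from(fmt, header, 0x18)  # e_entry offset
    return entry

print(hex(elf_entry("/bin/ls")))  # read-only example target
```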
Attaching itself to the front is harder: the virus has to copy the entire program into RAM, write itself first, then copy the program back after itself, and finally relocate the program when it runs:
There is a third possibility, more difficult to program but entirely possible, which offers the advantage of hiding the virus from antivirus software in almost all cases.
Nearly all modern binary formats on Linux and Windows allow programs to have multiple text and data segments so that they can be relocated on the fly. Those segments are of fixed size (512 bytes for the Windows Portable Executable format) and are padded with zeros when not full. The magic happens when viruses hide themselves in these holes — that is why they are called “cavity viruses”:
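The flip side of this trick is that the same padding can be measured defensively. Here is a hedged sketch using the third-party pefile library (an assumption: installed with pip install pefile; the file name is hypothetical) that lists the zero-padded slack a cavity virus could occupy:

```python
import pefile  # third-party: pip install pefile

pe = pefile.PE("app.exe")  # hypothetical target binary
for section in pe.sections:
    name = section.Name.decode(errors="ignore").rstrip("\x00")
    # Raw size on disk minus size actually used in memory = slack space.
    slack = section.SizeOfRawData - section.Misc_VirtualSize
    if slack > 0:
        print(f"{name}: {slack} bytes of padding")
```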
2. Macro viruses
Excel and other programs are able to execute macros. Excel uses VBA, an interpreted but complete language, powerful enough to host a macro virus.
It’s pretty easy to send emails to employees and get them to open a booby-trapped Excel attachment (impersonating the boss, a consultant, a supplier…). Since almost everyone, myself included, clicks through the warning dutifully displayed when the document opens, this has been a very effective way of attacking companies for years.
The marketing person who thought macros were a great feature to add to Excel may regret the decision. As usual, keeping things simple is the best advice for anyone who takes cybersecurity seriously.
3. Other types of viruses
We discussed the two most common virus types but others exist:
- Companion viruses: a program that gets run instead of the one that was supposed to run. They were common in the MS-DOS era: when a user typed prog, MS-DOS looked for prog.com before prog.exe. This is rare but still possible today (a manipulated Java classpath, for example),
- Device driver viruses: viruses that target device drivers. They are started at OS boot and run in kernel mode!
- Source code viruses: programs that try to add their payload to source files rather than binaries (our Python example above belongs to this category),
- And also viruses hidden in the BIOS, and viruses hidden in RAM to intercept traps…
Life in the Ghetto
Related Images: See the photographs related to this lesson
Using the Analyzing Visual Images strategy and the Critical Analysis Process for exploring an artwork, print off the Related Images with the captions on their reverse sides and arrange them into the specified groups. Place each group of images on tables or display them on a wall for students to see. Ensure there is an obvious separation between each set of images.
If your students have not used the Analyzing Visual Images strategy before, model it for the class using another image from the collection. After modelling the strategy, divide students into groups of equal size and assign each group a set of images. Each student should select an image from the group and apply the Analyzing Visual Images strategy. More than one student may select the same image.
After completing this process, students should read the captions from the backs of the photographs and share their observations and analyses with the group. When all of the students have shared their ideas, ask them to discuss the following questions: What do the images in this collection have in common? What differences do you see within this collection of images? What title would you give to this collection of images? After completing this discussion, the groups should rotate to the next set of images and repeat the process. Continue this process until each group has worked with all four image sets.
Once your students have seen all four collections of images, they should return to their seats and participate in a Think, Pair, Share discussion using a large piece of paper with two columns labelled "Collection Similarities" and "Collection Differences." Students should start writing individually in their notebooks, pair to fill in the large piece of paper, and then share their ideas with the whole group.
The last piece of this lesson is an Exit Card. On their exit card, ask your students to do two things. First, they should answer the question: How do the photographs of Henryk Ross represent the complexity of life in the Lodz Ghetto? Second, ask students to pose a question of their own about the images. Students should hand in these cards as they exit the room.
Analyzing Visual Images and Stereotyping
This video shows Nazi footage of the Lodz Ghetto in the winter of 1940.
Testimony of Leo Schneiderman on life in the Lodz Ghetto.
Testimony of Blanka Rothschild on life in the Lodz Ghetto.
The topic of this assignment is “Exchange rates”. “The price of one currency expressed in terms of another currency is called the exchange rate”. It has two parts: the domestic currency and the foreign currency. Exchange rates are often quoted against a foreign currency, most commonly the US Dollar.
An exchange rate involves two currencies: the base currency and the counter (quote) currency. In a direct quotation the foreign currency is the base and the domestic currency is the counter; in an indirect quotation this is reversed. The vast majority of exchange rates use a foreign currency as the base, e.g. the US Dollar.
The use of foreign currencies in trade is called foreign exchange. The exchange rate is determined in the foreign exchange market, which is where different currencies are traded. The rate e can be expressed as the amount of foreign currency that can be bought with one unit of domestic currency:
e = foreign currency/domestic currency
A fall in the price of a currency relative to another currency is called depreciation; an increase is called appreciation. When a country deliberately lowers the price of its currency, it is called devaluation; when it deliberately raises it, revaluation.
There are two types of exchange rates. They are Real exchange rate and Nominal exchange rate.
Nominal exchange rate is the relative price between domestic currencies and foreign currencies. In Foreign exchange (FX) markets there are two types of nominal exchange rates notations.
Price notation, which is used by central banks such as the US Fed or the Bank of India.
Quantity notation, which is used by the European Central Bank or the Bank of England.
- In price notation, the good is the foreign currency, and the price of the good is the exchange rate et. There is an inverse relationship between the movement of the exchange rate et and the value of the Indian rupee.
- When the Indian rupee appreciates, the exchange rate decreases; when the rupee depreciates, the exchange rate increases.
- They are inversely related to each other.
Real exchange rates do not measure the value of currencies; they measure the relative price of domestic and foreign goods. The real exchange rate measures the real purchasing power of a country’s currency over a representative basket of goods and services.
Real exchange rate = Et * Pf / Pd
where Et = nominal exchange rate, Pf = price level of the foreign market, Pd = price level of the domestic market.
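As a quick worked example of this formula — the figures below are illustrative assumptions, not actual market data:

```python
def real_exchange_rate(nominal_rate, p_foreign, p_domestic):
    """Real exchange rate = Et * Pf / Pd."""
    return nominal_rate * p_foreign / p_domestic

# Hypothetical figures: a nominal INR/USD rate and two price indices.
print(real_exchange_rate(70.0, 1.02, 1.05))  # ≈ 68.0
```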
There are mainly three types of exchange rate regimes: the fixed (pegged) exchange rate, the flexible exchange rate, and the managed float.
Under a fixed exchange rate, the rate at which dollars can be converted into other currencies such as yen or pesos is specified by the government. It does not fluctuate, which provides stability for capital movements and foreign trade. To achieve this stability, the government buys foreign currency when the exchange rate is weak and sells foreign currency when the exchange rate becomes stronger.
Under a flexible (or floating) exchange rate, rates are determined by demand and supply in the market where foreign currencies are traded. The government does not intervene to set the exchange rate as it does under a fixed-rate system.
The managed floating exchange rate was adopted by many countries after the end of the Bretton Woods system. Under managed floating, the exchange rate is determined by the forces of demand and supply in the market together with government intervention. It is a combination of the fixed and floating systems and is also called dirty floating.
The graph shows that in 2018 the value of the Indian rupee was falling while the value of the US Dollar was rising; by November the US Dollar had clearly strengthened.
This graph is not stable; it fluctuates. In January the value of the Hong Kong dollar is low but rising. From March to July the rate is roughly stable, with slight rises and falls. In September the value of the US Dollar falls, and by November it rises again.
The value of the Japanese yen falls in January, grows in March, and then declines again at a slow pace from May to November.
In the graph above, the value of the euro is falling relative to the US Dollar. In March the value of the US Dollar declines, and from May to November it increases.
The graph above shows the trend in the rupee–dollar exchange rate. Over time the value of the Indian rupee diminishes while the value of the US Dollar rises. During 1970–1987 the INR was stable; after that it fluctuates.
“Floating exchange rates can cause big trouble” is an article in Bloomberg written by Peter Coy. According to the IMF, a country shows financial maturity when it lets demand and supply determine its exchange rate, and market countries across the globe are advised to adopt floating rates. The floating exchange rate is usually considered the best of all, but this article argues against that opinion: although floating is a symbol of a mature economy, the article holds that most of its claimed benefits do not actually exist and are simply overstated.
“Nonlinear exchange rate models: a selective overview” is an article by Lucio Sarno, published on May 1, 2003. The paper helps us understand exchange rate behaviour. It examines two questions: first, whether nonlinear autoregressive models of the real exchange rate can resolve the purchasing power parity puzzles; second, whether nonlinear models of the nominal exchange rate can beat the random walk model. The paper also evaluates exchange rate forecasts with reference to different forecast-accuracy criteria.
“Stock markets and the real exchange rate: an intertemporal approach” is an article written by Benoit Mercereau, published on May 1, 2003. The paper presents a model of a country with stock markets in which the real exchange rate is determined. It also determines the allocation of risky assets and risky-asset prices, and shows how demographic factors and risky assets influence the real exchange rate, contradicting the Balassa-Samuelson effect.
“The effects of exchange rate fluctuations on output and prices: evidence from developing countries” is written by Magda E. Kandil and was published on October 1, 2003. The paper examines how exchange rate fluctuations affect output and the price level in 33 developing countries. Movements in the exchange rate comprise two components, anticipated and unanticipated. Unanticipated movements affect aggregate supply through the cost of goods sold, and aggregate demand through the demand for domestic currency, imports and exports.
“Expectations and exchange rate dynamics” is a journal article written by Rudiger Dornbusch. Under rational expectations and perfect capital mobility, the paper presents a theory of exchange rate overshooting: following a monetary expansion, the exchange rate initially depreciates beyond its long-run level before converging back along a well-defined path.
“Monetary Policy and Exchange Rate Volatility in a Small Open Economy” is an article by Jordi Galí and Tommaso Monacelli, published on 1 July 2005. Using a small open economy version of the Calvo sticky-price model, the paper shows how the equilibrium dynamics can be reduced to a simple representation in domestic inflation and the output gap. It then analyses the macroeconomic impact of three alternative policy regimes: a domestic-inflation-based Taylor rule, a CPI-based Taylor rule, and an exchange rate peg.
“The price of a currency expressed in terms of another currency is called the exchange rate”. Today countries use either a fixed or a flexible exchange rate system; apart from China, nearly all major economies now have flexible exchange rates. The main weakness of a floating rate is that it fluctuates: the value of the currency is volatile. In earlier periods most economists focused on the choice between fixed and flexible rates; the managed exchange rate has been used only since the end of the Bretton Woods system.
- Economics- Paul A Samuelson , William D Nordhaus
- World Development Indicators
- X- Rates
- Google Scholar
- International Monetary Fund (IMF)
To infinity and beyond: Light goes infinitely fast with new on-chip material
Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that, designing the first on-chip metamaterial with a refractive index of zero, meaning that the phase of light can travel infinitely fast.
This new metamaterial was developed in the lab of Eric Mazur, the Balkanski Professor of Physics and Applied Physics and Area Dean for Applied Physics at SEAS, and is described in the journal Nature Photonics.
“Light doesn’t typically like to be squeezed or manipulated but this metamaterial permits you to manipulate light from one chip to another, to squeeze, bend, twist and reduce diameter of a beam from the macroscale to the nanoscale,” said Mazur. “It’s a remarkable new way to manipulate light.”
Although this infinitely high velocity sounds like it breaks the rule of relativity, it doesn’t. Nothing in the universe travels faster than light carrying information — Einstein is still right about that. But light has another speed, measured by how fast the crests of a wavelength move, known as phase velocity. This speed of light increases or decreases depending on the material it’s moving through.
When light passes through water, for example, its phase velocity is reduced as its wavelengths get squished together. Once it exits the water, its phase velocity increases again as its wavelength elongates. How much the crests of a light wave slow down in a material is expressed as a ratio called the refractive index — the higher the index, the more the material interferes with the propagation of the wave crests of light. Water, for example, has a refractive index of about 1.3.
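In other words, the phase velocity is simply the vacuum speed of light divided by the index. A tiny illustrative Python check using the article's value for water:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def phase_velocity(refractive_index):
    return C / refractive_index

print(phase_velocity(1.3))  # ≈ 2.3e8 m/s in water
# As the index approaches zero, the phase velocity grows without bound.
```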
When the refractive index is reduced to zero, really weird and interesting things start to happen.
In a zero-index material, there is no phase advance, meaning light no longer behaves as a moving wave, traveling through space in a series of crests and troughs. Instead, the zero-index material creates a constant phase — all crests or all troughs — stretching out in infinitely long wavelengths. The crests and troughs oscillate only as a variable of time, not space.
This uniform phase allows the light to be stretched or squished, twisted or turned, without losing energy. A zero-index material that fits on a chip could have exciting applications, especially in the world of quantum computing.
“Integrated photonic circuits are hampered by weak and inefficient optical energy confinement in standard silicon waveguides,” said Yang Li, a postdoctoral fellow in the Mazur Group and first author on the paper. “This zero-index metamaterial offers a solution for the confinement of electromagnetic energy in different waveguide configurations because its high internal phase velocity produces full transmission, regardless of how the material is configured.”
Read more at the technology.org website.
This paper is written from a teacher’s point of view and it is going to deal with techniques and methods that have a great chance to bring literature back to its well-worth place: the classroom – whether it is a controlled learning process or an after school type activity. The paper is also going to answer the question: what should come first the book or the film in the learning / teaching process.
TECHNIQUES and METHODS
According to Edward Anthony, cited by Jack C. Richards and Theodore S. Rodgers in ‘Approaches and Methods in Language Teaching’ (2001), ”(…) technique is the level at which classroom procedures are described”.
Nowadays there are numerous teaching techniques that can bring literature back to the classroom. They can be used successfully if they are chosen carefully, taking into consideration the students’ age, comprehension level and areas of interest. It is also of great importance to bring in authentic materials, for many reasons. Authentic texts – either written or audio – increase motivation, interest and engagement and encourage contextualization of both vocabulary usage and correct grammar. Authentic texts help students develop cognitive skills, provide cultural background and develop a sense of belonging to a certain culture. Literature for children, if properly chosen, can be considered an appropriate means for students with intermediate language knowledge, provided the language is simple, the theme is universal and the patterns are predictable. Contextualization of the information and interesting illustrations (if any) give the reading material extra credit.
Auditory techniques – books on tape, peer reading, and films – can be used for discussions; they can also lead to exciting writing assignments. Compared with the written texts from books, students might be more interested in the pictures and sounds of films, and thus able to concentrate on the story more easily. As films are a good means of providing realistic examples of authentic language and show the action and the communicative aspects visually as well as verbally, they can be used successfully in reading lessons. If the film is a book-based one, both the movie and the written text can give the reader and viewer, almost at the same time, substantial linguistic input and audio-visual satisfaction.
Reading aloud will be considered a good technique to bring literature into the classroom provided some important aspects are taken into account: the choice of a balanced book collection at all grade levels; putting the information into a rhythmic pattern such as a poem, a song, a rap; the choice of texts that facilitate a cross-curricular experience.
Reading skills, enquiry skills, understanding and creativity can be developed through drama techniques (dialogue reading, role-play, acting out). They can also enhance character development and storytelling and can be used both in everyday classroom activities and in after-school type activities. Literary texts and films can be explored to a higher extent through drama techniques, as they both build on the students’ innate ability for fantasy and imaginative play, providing a student-friendly context for exposure to language. They make use of TPR – total physical response – thus developing more than reading and speaking skills: they also increase socio-cultural knowledge, intercultural awareness, concentration and communication skills. Drama techniques provide opportunities for multisensory, kinesthetic responses to stories and stimulate visual reasoning and learning by doing at a number of different levels. Role-plays are fun and highly motivating activities when the real dialogues from the book that has just been read or the film that has just been watched are used.
According to Edward Anthony, cited by Jack C. Richards and Theodore S. Rodgers in ‘Approaches and Methods in Language Teaching’ (2001), ‘(…) method is the level at which theory is put into practice and at which choices are made about the particular skills to be taught, the content to be taught, and the order in which the content will be presented’.
Project-based learning is an extension of the above-mentioned techniques. Cooperative learning – in the format of a project – is very popular among teachers and students, as it is a dynamic approach to teaching and learning. There are endless methods that fit the project-based learning approach. We will mention just a few of the most appropriate for lower secondary level students: chapter/episode presentations; time-period presentations; turning a text into a dialogue; read and make. All these methods have the advantage of leading students into intensive reading; if the piece of reading is carefully chosen, students can be expected to complete the reading activities easily, without feeling that the tasks are a burden.
Students can be assigned these techniques in groups, working on both the given literary work (used here as a language-teaching resource) and the film adaptation. Turning a text into dialogues may be as challenging as watching the dialogue in the film and turning it into a narrative. But the most rewarding method for students, after having tried all of the above, seems to be the one in which they are asked to read a text and then make a film adaptation of it themselves, creating vivid descriptions and discussing the impact of their work. Only after having done so should they watch and compare with the ‘Hollywood’ or “Disney” film adaptations. There is no doubt that such a task should be assigned to those who are skillful enough in using ICT and who master, to some extent, the techniques involved in making and editing short films.
The most popular method, tried and tested on my students (lower secondary level), remains the board game and card game method, which can be applied both to teaching through literature and through films. It is a very flexible method and not very demanding once the templates are made. The method can serve the learners’ needs at any moment of the lesson, as well as in activities designed to take place as after-school assignments. Again, the method is applicable to different kinds of learners: the ones that are into reading can choose the literary text, while the ones more motivated by video materials can work on the visual elements that seem to them more meaningful and alive and that help bring the real world into the classroom.
Using films and literary texts in language teaching does not mean merely playing the film for the students in the classroom or simply reading the text; the board game and card game method is once more rewarding, as it requires the students’ full participation in order to accomplish the tasks successfully. The next group of students to use the games will not be capable of asking the questions until they have read the text and watched the film adaptation.
WHAT SHOULD COME FIRST – IN TEACHING – THE BOOK OR THE FILM?
There are teachers who consider that reading the book before watching the film is essential. Then, there are educators who consider that it is better to watch the film and only then read the book.
Some statistics claim that today’s teenagers daily spend more than an hour and a half listening to music, over an hour using the computer, less than an hour playing video games, about a quarter of an hour reading and 25 minutes watching movies (Rideout, Roberts & Foehr, 2005). For movies, that amounts to 152 hours per year. These statistics thus favour those who say that reading first is important. But everything is adaptable. Some books have to be condensed when adapted for the big screen because of the mass of abstract words that cannot be transferred to film.
In this way, considerable parts of the literary texts are omitted. This need not be a problem, insofar as films convey images and sounds that can, at certain points, highlight other parts of the story in such a way that the message of the text is not altered or modified.
What teachers have to keep in mind when deciding whether to assign the book or the film first is each student’s or group’s abilities. There are students who enjoy reading, debating and using their imagination, but there are also students who are visual and prefer watching films, developing speaking and acting skills by imitating or role-playing what they have seen. As films need not be carbon copies of the book, in this case it is better to read the book first and then watch the film in order to start a debate on what is to be found in the film, what has been modified or even omitted, and why.
Books and films have different roles, and each has strong points: books are better at characterization and highlight the inner conflict more accurately, while films create much better visual and acoustic effects.
Given the statistics above, maybe teachers should ask students to read the book first in order to develop imagination and creativity and to enrich vocabulary, which is more important than being told what to think while watching the film.
Rideout, V., D. F. Roberts, and U. G. Foehr (2005). Generation M: Media in the lives of 8-18 year-olds. Executive summary. Menlo Park, CA: Henry J. Kaiser Family Foundation.
Richards, Jack C., and Theodore S. Rodgers (2001). Approaches and Methods in Language Teaching. Second Edition. Cambridge University Press.
The representation of blacks as subjects in Jamaican art remained almost absent until Albert Huie (1920-2010) entered the art scene. Huie was artistically formed in an era where Ethiopianism, Rastafarianism, Garveyism, and cultural nationalism transformed the island’s social and political landscape. He incorporated the collective ideas of these movements about a black African consciousness and a black Jamaican culture in his works.
By Jorge Cuartas
In early Jamaican art, black inhabitants played a marginal role; they were portrayed as part of the scenery. This image was replaced in the late nineteenth century by the ‘market woman’, a stereotype introduced on postcards, photographs and advertisements as part of the first efforts to promote Jamaica as a tourist winter resort. Although represented in the foreground, the market woman is characterized as primitive, backward, childlike, barefooted, picturesque, tropical, and full of queer superstitions.
Albert Huie’s The Counting Lesson (1938) represents an important turning point in Jamaican art. In it a black young girl is the central point. The girl, looking intently at what is in front of her, is counting. She wears a polka dot dress, her hair is neatly coifed with a red bow, and the finger poised in midair stresses her mental calculations. All elements of the painting point to the girl’s education, respectability, and civility.
On its surface the work is fairly unremarkable. However, in the Jamaican context of the 1930s, the painting changes the focus of black people as subjects in art. No longer are they part of the scenery, or used to emphasize stereotypes, but now they are the central focus of the painting. By fitting the girl into the frame of art, Huie allowed black viewers to attribute to themselves the signs of distinction, prestige, and self-hood formerly reserved for the white colonial elite.
Today, Albert Huie is locally and internationally acclaimed as a key figure in Jamaican art and remembered as ‘The Father of Jamaican painting’, but in many ways it is ‘The Counting Lesson’ that set him apart from others. The painting can be seen at the National Gallery of Jamaica, where it is on permanent display.
English is one of the world's most widely spoken languages, and at any given time millions of people are learning it. Whether you're a native speaker trying to internalize vital conventions or picking English up as a second language, you will need to master the fundamental rules of English grammar. Understanding grammatical structure gives you a foundation that can help you become a better writer and a more confident reader. A test of English grammar will often focus on several specific principles, so you must learn the elemental rules.
Learn the terminology. Make sure that you fully understand terms such as verb, noun, adjective, predicate, subject, adverb and apostrophe. You should know the exact definition of each term that will be used on the test. Write sentences that fully illustrate at least one example of the term in question. Don't use unfamiliar words.
Learn the basics. Memorize the spelling of common words. Study the rules of English punctuation, such as when to use a dash and how to use a question mark and a period. If the rules are different from those of your first language, note exactly how they differ. Write sample sentences in your own language and contrasting sentences in English illustrating the differences between the two.
Read a wide variety of materials. Get a feel for how native speakers use the rules of English grammar. Read widely circulated English-language newspapers such as the New York Times and the Wall Street Journal. Read books published in English. Examine the sentences used by the writers. Note how they fit together and adhere to the conventions of the English language.
Take practice tests. A good practice test should have samples of questions used on the exam in previous years. A test of English grammar should include sections that assess your knowledge of basic grammar as well as additional material that allows you to demonstrate your writing skills. If you don't have a copy, read through previous homework assignments. Use examples from prior assignments to provide you with sample questions. Time yourself as you take the test.
Research the TOEFL exam. The initials TOEFL stand for Test of English as a Foreign Language. The TOEFL essay section has two parts: the independent and the integrated. The independent task is a stand-alone essay in response to a prompt. The integrated section consists of a spoken and a written prompt; writers must indicate how the speaker's points contrast with the written prompt. Practice TOEFL tests are widely available in book form and online.
Isaac Newton could never have imagined that upper elementary and middle school students could explore his mathematically derived laws of motion using straws, compact discs, wooden spools, and balloons.
- Explore the laws of motion
- Build air gliders with simple materials
Students use these materials and a bit of hot glue to build their own CD Air Gliders. They then use their models to strengthen their inquiry skills while they cultivate their understanding of Newton’s Laws of Motion. A glue gun, teacher's guide, student procedure and journal page are also included. Grades 4–8.
The term “periodontal” means “around the tooth.” Periodontal disease (also known as periodontitis and gum disease) is a common inflammatory condition that affects the supporting and surrounding soft tissues of the tooth, and in its most advanced stages the jawbone itself.
Periodontal disease is most often preceded by gingivitis, a bacterial infection of the gum tissue. A bacterial infection affects the gums when the toxins contained in plaque begin to irritate and inflame the gum tissues. Once this bacterial infection colonizes the gum pockets between the teeth, it becomes much more difficult to remove and treat. Periodontal disease is a progressive condition that eventually leads to the destruction of the connective tissue and jawbone. If left untreated, it can lead to shifting teeth, loose teeth and eventually tooth loss.
Periodontal disease is the leading cause of tooth loss among adults in the developed world and should always be promptly treated.
Types of Periodontal Disease
When left untreated, gingivitis (mild gum inflammation) can spread to below the gum line. When the gums become irritated by the toxins contained in plaque, a chronic inflammatory response causes the body to break down and destroy its own bone and soft tissue. There may be little or no symptoms as periodontal disease causes the teeth to separate from the infected gum tissue. Deepening pockets between the gums and teeth generally indicate that soft tissue and bone are being destroyed by periodontal disease.
Here are some of the most common types of periodontal disease:
Chronic periodontitis – Inflammation within the supporting tissues causes deep pockets and gum recession. It may appear that the teeth are lengthening, but in actuality the gums (gingiva) are receding. This is the most common form of periodontal disease, characterized by progressive loss of attachment interspersed with periods of rapid progression.
Aggressive periodontitis – This form of gum disease occurs in an otherwise clinically healthy individual. It is characterized by rapid loss of gum attachment, chronic bone destruction and familial aggregation.
Necrotizing periodontitis – This form of periodontal disease most often occurs in individuals suffering from systemic conditions such as HIV, immunosuppression and malnutrition. Necrosis (tissue death) occurs in the periodontal ligament, alveolar bone and gingival tissues.
Periodontitis caused by systemic disease – This form of gum disease often begins at an early age. Medical conditions such as respiratory disease, diabetes and heart disease are common cofactors.
Treatment for Periodontal Disease
There are many surgical and nonsurgical treatments the periodontist may choose to perform, depending upon the exact condition of the teeth, gums and jawbone. A complete periodontal exam of the mouth will be done before any treatment is performed or recommended.
Here are some of the more common treatments for periodontal disease:
Scaling and root planing – In order to preserve the health of the gum tissue, the bacteria and calculus (tartar) which initially caused the infection, must be removed. The gum pockets will be cleaned and treated with antibiotics as necessary to help alleviate the infection. A prescription mouthwash may be incorporated into daily cleaning routines.
Tissue regeneration – When the bone and gum tissues have been destroyed, regrowth can be actively encouraged using grafting procedures. A membrane may be inserted into the affected areas to assist in the regeneration process.
Pocket elimination surgery – Pocket elimination surgery (also known as flap surgery) is a surgical treatment which can be performed to reduce the pocket size between the teeth and gums. Surgery on the jawbone is another option which serves to eliminate indentations in the bone which foster the colonization of bacteria.
Dental implants – When teeth have been lost due to periodontal disease, the aesthetics and functionality of the mouth can be restored by implanting prosthetic teeth into the jawbone. Tissue regeneration procedures may be required prior to the placement of a dental implant in order to strengthen the bone.
Ask your dentist if you have questions or concerns about periodontal disease, periodontal treatment, or dental implants.
- 12.6 million homes built since 1990 have been built in the wildland-urban interface (WUI), where human development meets undeveloped vegetation and forests.
- As of 2010, the WUI was 10% of the contiguous U.S. land area but held over 43 million homes (33% of U.S. total) and 97.7 million people (32% of U.S. total).
- As homes are built within and next to wildland vegetation, the threat of wildfires infiltrating urban areas increases.
- Green infrastructure, which includes reconstructed wetlands, urban forests, and practices that mimic natural processes, is increasingly used to reduce the impacts of major storms, natural disasters, and wildfires, while filtering water at the same time.
- When green infrastructure projects are evaluated, it is often revealed that they do not supply the ecosystem services that they were designed to provide. Designers and land managers must construct ecosystems containing plant, animal, or soil microbial communities that meet performance criteria and that are resilient to disturbance.
- Mapping the local hazards and existing green infrastructure is a good way for regional collaborators to visualize where green infrastructure might benefit a community, and it also acts as a tool for communication.
Coronaviruses are a large family of viruses which may cause illness in animals or humans. In humans, several coronaviruses are known to cause respiratory infections ranging from the common cold to more severe diseases such as Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS).1, 2 COVID-19 is the infectious disease caused by the most recently discovered coronavirus. This new virus and disease were unknown before the outbreak began in Wuhan, China, in December 2019.3
The most common symptoms of COVID-19 are fever, tiredness, and dry cough.4 Some patients may have aches and pains, nasal congestion, runny nose, sore throat or diarrhea.5 These symptoms are usually mild and begin gradually. Some people become infected with the virus but do not develop any symptoms. Most people (about 80%) recover from the disease without needing special treatment. Around 1 out of every 6 people who gets COVID-19 becomes seriously ill and develops difficulty breathing.6 Older people, and those with underlying medical problems like high blood pressure, heart problems or diabetes, are more likely to develop serious illness.7 People with fever, cough and difficulty breathing should seek medical attention.
People can catch COVID-19 from others who have the virus. The disease can spread from person to person through small droplets from the nose or mouth which are spread when a person with COVID-19 coughs or exhales.8 These droplets land on objects and surfaces around the person. Other people then catch COVID-19 by touching these objects or surfaces, then touching their eyes, nose or mouth. People can also catch COVID-19 if they breathe in droplets from a person with COVID-19 who coughs out or exhales droplets.9 This is why it is important to stay more than 1 meter (3 feet) away from a person who is sick.
The risk of catching COVID-19 from the feces of an infected person appears to be low. While initial investigations suggest the virus may be present in feces in some cases, spread through this route is not a main feature of the outbreak.10, 11
It is not certain how long the virus that causes COVID-19 survives on surfaces, but it seems to behave like other corona viruses.12 Studies suggest that corona viruses (including preliminary information on the COVID-19 virus) may persist on surfaces for a few hours or up to several days. This may vary under different conditions (e.g. type of surface, temperature or humidity of the environment).13
The chances of being infected or spreading COVID19 can be reduced by taking some simple precautions:
However, there is no drug that can be used as a 100% cure for this disease. There are some antibiotics and vitamins which are used to lessen the effects of the coronavirus.16
Though azithromycin is an antibiotic and thus ineffective on its own against viruses, some clinicians have seen limited success in COVID-19 patients when adding it to chloroquine and/or hydroxychloroquine in the sickest patients.17, 18
Antiviral effects of doxycycline
Hydroxychloroquine with doxycycline seems to be a better alternative to azithromycin for the treatment of corona infection, especially in geriatric patients.19 Doxycycline and other tetracycline derivatives such as minocycline exhibit anti-inflammatory effects along with in vitro antiviral activity against several RNA viruses.20 Use of these agents has been associated with clinical improvement, and even reversal of cytokine storm, in some infections caused by RNA viruses, such as dengue fever.21
The mechanism of the antiviral effects of tetracycline derivatives may be secondary to transcriptional up-regulation of the gene encoding the intracellular zinc finger antiviral protein (ZAP) in host cells. ZAP can also bind to specific target viral mRNAs and repress their translation.22, 23
Doxycycline can repress dengue virus infection in Vero cells through the inhibition of dengue serine protease enzymes and of viral entry. Doxycycline inhibited dengue virus replication in Vero cell culture, likely by interacting with the dengue virus E protein that is required for virus entry.24, 25
Similarly, doxycycline controls Chikungunya virus (CHIKV) infection through the inhibition of the CHIKV cysteine protease in Vero cells and showed a significant reduction of CHIKV blood titer in mice.26
In addition, doxycycline is a highly lipophilic antimicrobial that chelates zinc compounds on matrix metalloproteinases (MMPs) of mammalian cells, and an in vitro study showed that murine coronaviruses rely on MMPs for cell fusion and viral replication.22 Several steps of coronavirus assembly and replication use host proteases, so these could be potential targets for doxycycline. As immune-mediated lung injury/ARDS is prominent in severe COVID-19 patients, inhibiting MMPs may help repair the damaged lung tissue and enhance recovery.
Anti-inflammatory effects of doxycycline
In COVID-19, elevated levels of blood interleukin (IL)-6 have been more commonly observed in severe illness and among non-survivors,27 suggesting that mortality might be due to virally driven hyperinflammation and cytokine storm. An intense pro-inflammatory state has a central role in the pathogenesis of COVID-19, leading to cytokine storm.28
Importantly, doxycycline reduced pro-inflammatory cytokines, including IL-6 and tumor necrosis factor (TNF)-α, in patients with dengue hemorrhagic fever, and the mortality rate was 46% lower in the doxycycline-treated group (11.2%) than in the untreated group (20.9%).29
In addition, severe acute respiratory syndrome–related coronavirus (SARS-CoV) encodes a papain-like protease that triggers an early growth response protein 1 (Egr-1)–dependent activation of transforming growth factor beta 1 (TGF-β1), resulting in up-regulation of pro-fibrotic responses in vitro and in vivo in the lungs.30 A recent computational study identified doxycycline among the drugs that could potentially be used to inhibit the SARS-CoV-2 papain-like protease.31
Doxycycline is a broad-spectrum antibiotic with documented antiviral and anti-inflammatory properties. Because it is inexpensive and widely available, with a safe therapeutic index, it is an attractive option for the treatment of COVID-19 and for potentially alleviating the lung sequelae. Although there is no dispute about the multiple uses of doxycycline, extreme caution is recommended when repurposing the drug for COVID-19 treatment. “Fighting a disease with pre-existing antibiotics” and “antimicrobial resistance progression” are like the two arms of a balance that must be carefully equilibrated. Any imbalance caused by inappropriate or indiscriminate application of the repurposed drug would cause an appalling increase in AMR (antimicrobial resistance).
Because of the COVID-19 pandemic and the scarcity of medications, many countries have started their own mass generic drug manufacturing units, which are likely to alter the drug-production landscape and increase the likelihood of environmental contamination and drug misuse. Apart from its therapeutic application, doxycycline is also used in veterinary medicine and agriculture. It is among the most abundantly used antibiotics worldwide, and the public-health emergency posed by the COVID-19 pandemic has affected its use. These hitches must be carefully and diligently monitored to contain the growing AMR, which is arguably an unseen pandemic that will persist beyond the coronavirus pandemic.
the atmosphere causes it to expand (a special application of the gas law, explained below), obtained by substituting the hydrostatic equation (4) into the equation of state (1). This relation provides the basis for explaining many of the things that synoptic meteorologists see on weather maps and charts.
The thickness of a layer between two pressure surfaces is directly related to the mean temperature of the layer.
Also, if we consider the thickness of a layer that is often of importance to synoptic meteorologists, the layer approximately between the ground (1000 mb) and about 6 km (500 mb), the Hypsometric Relation is
∆Z = k Tv, where k = (R/g) ln 2, ∆Z is the thickness of the layer between 1000 mb and 500 mb, and Tv is the virtual temperature, which we will assume here is very close to the actual temperature.
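A quick numerical check of this relation — R and g are standard constants, while the 250 K mean virtual temperature is an assumed illustrative value:

```python
import math

R = 287.0   # J/(kg·K), gas constant for dry air
g = 9.81    # m/s², gravitational acceleration
k = (R / g) * math.log(2)   # ≈ 20.3 m per kelvin

Tv = 250.0  # K, assumed mean virtual temperature of the layer
thickness = k * Tv          # ≈ 5070 m between 1000 mb and 500 mb
print(f"k = {k:.1f} m/K, thickness = {thickness:.0f} m")
```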
This activity is intended to supplement Geometry, Chapter 8, Lesson 1.
Time required: 60 minutes
In this activity, students will use the Cabri Jr. application to construct figures that prove the Pythagorean Theorem in two different ways.
Topic: Right Triangles & Trigonometric Ratios
Construct and measure the side lengths of several right triangles and conjecture a relationship between the areas of the squares drawn on each side.
Prove and apply the Pythagorean Theorem.
This activity is designed to be used in a high school or middle school geometry classroom.
This activity is designed to be student-centered with the teacher acting as a facilitator while students work cooperatively. Use the following pages as a framework as to how the activity will progress. Feel free to print out the following pages for your students.
The Pythagorean Theorem states that in a right triangle, the square of the length of the hypotenuse equals the sum of the squares of the lengths of the legs. This can be expressed as c² = a² + b², where c is the length of the hypotenuse and a and b are the lengths of the legs.
Depending on student skill level, you may wish to download the constructed figures to student calculators. If the files are downloaded, skip the construction steps for each problem and begin each at Step 10.
Note: Measurements can display 0, 1, or 2 decimal digits. If 0 digits are displayed, the value shown will round from the actual value. To change the number of digits displayed:
- Move the cursor over the value so it is highlighted.
- Press + to display additional decimal digits or - to hide digits.
Problem 1 – Squares on Sides Proof
The Pythagorean Theorem states that the square of the length of the hypotenuse of a right triangle is equal to the sum of the squares of the legs. In this activity, you will construct a right triangle and verify the Pythagorean Theorem by constructing squares on each side and comparing the sum of the areas of the two smaller squares to the area of the square on the third side.
Step 1: Open a new Cabri Jr. file.
Construct a segment using the Segment tool.
Select the Alph-Num tool to label the endpoints and as shown.
Step 2: Construct a line through that is perpendicular to using the Perp. tool.
Step 3: Construct a point on the perpendicular line and label it .
Hide the perpendicular line with the Hide/Show tool and construct line segments and .
For the time being, keep the sides of the triangle fairly small so that squares can be constructed on the sides.
Step 4: In the lower left corner, use the Alph-Num tool to place the number 90 on the screen. This will be the angle of rotation.
Note: Press ALPHA to access numerical characters. A small “1” will appear in the tool icon in the upper left corner of the screen.
Step 5: Use the Rotation tool to rotate point about point through an angle of .
- Press ENTER on point as the center of rotation.
- Press ENTER on the angle of rotation (the number 90).
- Press ENTER on point , the object to be rotated.
Notice that the number now has a degree symbol associated with it and that the point has been rotated in the counter-clockwise direction.
Step 6: What we want to do next is to rotate point about point through an angle of in the clockwise direction. To do this we will need an angle of -90. Place this number on the screen.
Using the value of -90, rotate point about point through an angle of .
Step 7: You should now have two points below the line segment . Use the Quad. tool to construct a quadrilateral using points , and the two points constructed in Steps 5 and 6.
Answer Question 1 on the worksheet.
Step 8: In a similar fashion, rotate point about point through an angle of and rotate point about point through an angle of . This will allow us to construct a second square.
Use the Quad. tool again to construct the square on side .
Step 9: Finally, rotate point about point through an angle of and rotate point about point through an angle of .
Then construct a third square on hypotenuse .
Step 10: Start with this step if you are using the pre-constructed file “PYTHAG1”.
Select the Measure > Area tool and measure the area of the three squares.
Step 11: Using the Calculate tool, press ENTER on the measurements of the two smaller squares and then press the + key. Place the sum off to the side of the screen.
How does this sum compare to the square of the hypotenuse? Record your observations in the table for Question 2 on the worksheet.
Step 12: To test your construction, drag points , and/or to a new location on the screen.
Answer Question 3 on the worksheet.
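As a quick off-calculator check of the comparison made in Steps 10–12, here is a minimal Python sketch; the coordinates are hypothetical values chosen for illustration, with the right angle at the second vertex:

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (4.0, 3.0)  # right angle at B
leg1 = math.dist(A, B)   # 4.0
leg2 = math.dist(B, C)   # 3.0
hyp = math.dist(C, A)    # 5.0
print(leg1**2 + leg2**2, hyp**2)  # 25.0 25.0 — the sums match
```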
Problem 2 – Inside a Square Proof
In this problem, we are going to look at a proof of the Pythagorean Theorem. We hope to prove the statement that, if c is the length of the hypotenuse of a right triangle and a and b are the lengths of the legs of the right triangle, then a² + b² = c².
Step 1: Construct a line segment .
Use the Alph-Num tool to place the value 90 on your screen.
Step 2: Access the Rotation tool and press ENTER on point as the center of the rotation, then on 90 as the angle of rotation and finally on line segment as the object to be rotated.
Label the new point .
Step 3: Continue by rotating line segment about point through an angle of .
Label the new point .
Step 4: Complete the square by constructing line segment .
Step 5: Using the Point on tool, add point on as shown and overlay a line segment .
Step 6: Select the Compass tool to construct circles with radius equal to the length of .
- Press ENTER on . A dashed circle will appear and follow the pointer.
- Press ENTER on point . The compass circle is anchored at center .
Create a point of intersection of this circle with . Label this point .
Hide the compass circle.
Step 7: Use the Compass tool again to construct circles with centers at and and radius .
Create points of intersection of these circles with and . Label these points and .
Hide the compass circles.
Drag point to confirm that , , and all move as moves.
Step 8: Construct the quadrilateral . Can you prove that this quadrilateral is a square?
Step 9: Use the Alph-Num tool to place the labels a, b, and c on the figure as shown.
The two segments into which the marked point divides each side are labeled a and b.
Since the outer quadrilateral is a square, each of its corner angles is 90°, so we have four congruent right triangles at the corners.
Step 10: Start with this step if you are using the pre-constructed file “PYTHAG2”.
Let’s examine the algebra in this situation.
The outer quadrilateral is a square with sides of length a + b.
The area of the square is (a + b)².
Each of the four corner triangles is a right triangle with height a and base b. So, the area of each triangle is ab/2. The sum of the areas of the four triangles is 4(ab/2) = 2ab.
The inner quadrilateral is a square with sides of length c. So its area is c².
Looking at the areas in the diagram, we can conclude that the area of the outer square equals the sum of the areas of the four triangles plus the area of the inner square:
On the worksheet, substitute the area expressions (with variables a, b, and c) into the equation above and simplify.
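For reference, the substitution asked for above works out as follows:

(a + b)² = 4(ab/2) + c²
a² + 2ab + b² = 2ab + c²
a² + b² = c²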
Step 11: Let’s look at this numerically as well, to confirm what we just proved algebraically. Measure the lengths of the two legs and the hypotenuse of the right triangle.
Note: Measure the legs by pressing ENTER on each endpoint, since these do not have separate segments constructed.
Use the Calculate tool to find the squares of these lengths.
Record your observations in the table for Question 6 on the worksheet.
Step 12: Find the sum of the squares of the lengths of the two legs.
In the right triangle, does the square of the hypotenuse equal the sum of the squares of the legs?
Drag the point to ensure that the relationship holds for other locations of the points.
What would happen if you dragged one of the other points? Would the relationship still hold?
Answer Questions 7 and 8 on the worksheet.
ANEMIA & ANAEMIA
What is anemia?
Anemia is a condition marked by a deficiency or decrease in red blood cells or in the hemoglobin (Hb) content of the blood, resulting in pallor and weariness. It is caused by a limited number of mechanisms that can operate independently or synergistically.
Hemoglobin is a protein that transports oxygen into the body and carbon dioxide out of the body. A lack of hemoglobin leads to hypoxia and acidosis.
At the center of the hemoglobin molecule is the mineral iron, which is vital in the synthesis of hemoglobin and myoglobin (muscle cells). Hemoglobin in blood carries the oxygen you breathe into your lungs to all tissues throughout the body, and myoglobin in muscle holds and stores oxygen for use during exercise. Myoglobin is particularly important for aerobic muscle fibers that are also called slow-twitch red (or type I) fibers. In fact, it is the myoglobin that makes endurance muscle reddish in color.
The iron in hemoglobin and myoglobin is essential because it has special biochemical properties that allow it to carry oxygen, and then release it to the tissues when necessary. Human cells, particularly working muscle cells, need a regular supply of oxygen to generate energy. Iron-containing hemoglobin is also instrumental in assisting the elimination of the carbon and hydrogen atoms that are released during the use of carbohydrate and fat fuels for energy, forming carbon dioxide and water. Therefore, having adequate iron stores is particularly vital during exercise, when the hemoglobin-rich red blood cells are shuttled between the lungs and the exercising muscle, supplying fresh oxygen while eliminating carbon dioxide. In addition to its role in oxygen and carbon dioxide shuttling, iron assists many enzymes in the energy-generating pathways. Iron is also needed to produce new cells, hormones, neurotransmitters, and amino acids. A deficiency in iron is, therefore, a main cause of anemia.
There are two ways to get iron from foods:
1. Heme iron (from animal products) - which is the easiest way to absorb iron according to western medicine
2. Nonheme iron (from plants) - claimed to be the less optimal way to obtain iron
Iron is so imperative to the body that it has been referred to as the body’s gold: a precious mineral to be hoarded. Following absorption in the intestines, an important protein called transferrin escorts it to various tissues in the body. Iron is stored primarily in the liver and bone marrow as part of two other proteins called ferritin and hemosiderin. Some storage also occurs in the spleen and in muscle. A minute amount of the storage protein ferritin also circulates in the blood. Only a very small amount of unescorted iron circulates in the blood.
The liver assigns iron, sent from its own stores, into new red blood cells, which are made in the bone marrow and released into the blood. Red blood cells typically live for three to four months. When the red blood cells die, the spleen and liver salvage the iron from the dead cells, and it is rerouted back to the bone marrow and stored for reuse. In this way, iron is truly hoarded. Trace amounts of iron are lost daily through the shedding of cells in the skin, scalp, and gastrointestinal (GI) tract and through perspiration. The greatest loss of iron, however, occurs through bleeding. Normal average daily iron loss is approximately 1 milligram for men and non-menstruating women, and approximately 1.4 to 1.5 milligrams for “normal” menstruating women. Monthly menstrual losses account for the higher average iron loss in women.
The Top Causes of Anemia in Women:
• Estrogen Dominance related diseases, e.g. uterine fibroids, endometriosis, ovarian cysts, and polycystic ovary syndrome (PCOS)
• Blood loss and depletion, e.g. menstruation, childbirth, pregnancy and lactation
• Nutrient deficiencies, e.g. iron, vitamin B12, folate (vitamin B9, required for the synthesis of red blood cells), vitamin C (enhances the absorption of nonheme iron by reducing dietary iron to an absorbable iron-ascorbic acid complex; other organic acids like citric, malic, tartaric, and lactic acids also enhance iron absorption), copper (necessary for normal iron metabolism and red blood cell formation)
• Omega 6 to Omega 3 imbalance (caused by seed oils and animal product consumption)
• Anti-nutrient factors, e.g. calcium (when consumed at the same time, calcium decreases the absorption of both heme and nonheme iron), phytates (phytic acid inhibits nonheme iron absorption, in some reports reducing it by as much as 98%), polyphenols (found in some fruits, vegetables, coffee, tea, wines and spices; can inhibit the absorption of nonheme iron), oxalates, soy protein (has an inhibitory effect on iron absorption independent of its phytate content)
• Intestinal inflammation, e.g. lectins, food sensitivities, celiac/bowel disease, dysbiosis
• Vital/blood deficiency
• Pelvic congestion/stasis - blood pools in the lower half of the body causing difficulty for the liver to pull blood up against gravity for rejuvenation/replenishing with oxygen
Other causes: hypothyroidism, autoimmune hemolytic anemia
Anti-nutrient Factors (ANFs):
Calcium (like iron) is an essential mineral, which means the body gets this nutrient exclusively from diet. Calcium is found in typical standard American diet (SAD) foods such as milk, yogurt, cheese, sardines, canned salmon, tofu, broccoli, and also in almonds, figs, turnip greens and rhubarb. It is the only known substance to inhibit absorption of both nonheme and heme iron. In amounts of 50 milligrams or less, calcium has little, if any, effect on iron absorption. Calcium in amounts of 300-600 milligrams profoundly inhibits the absorption of heme and nonheme iron. One cup of skimmed cow’s milk contains about 300 milligrams of calcium.
Eggs contain a compound that drastically impairs the absorption of iron. Phosphoprotein, also called phosvitin, is a protein with an iron binding capacity that may be responsible for the low bioavailability of iron from eggs. This iron inhibiting characteristic of eggs is known as the “egg factor”. The egg factor has been observed and documented in several separate case studies. One boiled egg can reduce absorption of iron within one meal by as much as 28%.
Oxalates impair the absorption of nonheme iron. Oxalates are compounds derived from oxalic acid, and are found in foods such as spinach, kale, beets, nuts, chocolate, black and green tea, wheat bran, rhubarb, strawberries, and in herbs such as oregano, basil, and parsley. The high presence of oxalates in spinach explains why the iron in spinach is scarcely ever absorbed. In fact, it is reported that the little iron from spinach that does get absorbed is probably only incidental, i.e. from the minute particles of sand or dirt clinging to the plant, rather than from the iron contained within the plant.
Polyphenols are major inhibitors of iron absorption. Polyphenols, or phenolic compounds, include chlorogenic acid (found in cocoa, coffee and some common herbs), phenolic acid (found in apples, peppermint and some herbal teas), and tannins (found in black teas, coffee, cocoa, spices, walnuts, and fruits such as apples, blackberries, raspberries and blueberries), all of which can inhibit iron absorption. Of the polyphenols, Swedish cocoa and certain teas exhibit the most powerful iron absorption inhibiting capabilities, in some cases up to 90%. Coffee is high in tannin and chlorogenic acid; one cup of certain types of coffee can inhibit iron absorption by as much as 60%. These foods or substances should not be consumed within two hours prior to and following your main iron-rich meal.
Phytate is a compound found in soy protein and fiber. Even low levels of phytate (about 5 percent of the amounts in whole cereal flours) have a powerful inhibitory effect on iron bioavailability. Phytate is also found in walnuts, almonds, sesame, dried beans, lentils, peas, and in cereals and whole grains. Phytate compounds are known to reduce iron absorption by 50 to 65 percent.
Common Signs & Symptoms of Anemia:
• Rapid heart rate or heart murmur
• Rapid or difficulty breathing
• Pale or cold skin
• Dizziness, vertigo and fainting
• Weight loss
• Poor immunity
• Conjunctiva, buccal mucosa, and nail bed may be pale in color
Severe cases may display:
• Pica (a craving for dirt, burnt toast, paint, chalk, glue, hair or ice)
• Glossitis (inflammation of the tongue)
• Cheilosis (sores about the lips and mouth)
• Koilonychia (thinning, concave nails)
Electric Health's Remedy
Foods that MUST be Incorporated into the Diet to Reverse Iron Deficient Anemia:
- Black Mission Figs
- Black Mulberries
- Black Grapes
- Black Elderberries
- Black Cherry
- Wild Yam
- Irish Moss
- Green Bananas
- Blood Orange
- Camu Camu |
The difference between a vertex and a corner is not in their definitions. Rather, it is in the way these terms are used. One is used when discussing mathematical instances, while the other is used to describe objects and areas within the physical world.
The definition of a vertex and a corner are the same. In both cases, it is the convergence of two lines or surfaces to a point. The meeting of these lines results in the creation of an angle. Multiple convergences are called corners or vertices.
Usage of the Term Vertex
The term vertex is used within the world of mathematics to describe the points where the sides or edges of a shape meet. For example, a triangle contains three vertices while a pentagon has five. These vertices are connected by line segments known as edges.
Usage of the Term Corner
The term corner is used within the physical world to describe the meeting of two lines of an object or an area. Examples of this are the point where two roads meet or where two sides of a building converge into one another. According to a 20th century court ruling, the variation in a particular line does not constitute a corner.
Other Uses of Vertex and Corner
In mathematics, a vertex can also be defined as a boundary of a certain shape. For instance, the circumference of a circle can be defined as its vertex. A corner can also be defined as an object that is used to protect or adorn an area where two surfaces meet.
Use of Vertices in Formulas
The use of vertices plays a role in the mathematical theorem known as Euler's formula. In the theorem, the number of faces in an object, like a cube, plus the number of vertices, minus the number of edges, will always equal 2. This formula applies to all convex, three-dimensional polyhedrons.
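For example, checking Euler's formula on a cube, which has 6 faces, 8 vertices, and 12 edges:

$$F + V - E = 6 + 8 - 12 = 2$$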
|
The theory of graph-cuts is used often in the field of computer vision. Graph-cuts are employed to efficiently solve a wide variety of computer vision problems, such as image smoothing, stereo correspondence, and many other problems that can be formulated in terms of energy minimization. Hold on, energy minimization? It basically refers to finding the equilibrium state. We will talk more about it soon. Many computer vision algorithms involve transforming a given problem into a graph and cutting that graph in the best way. When we say “graph-cuts”, we are specifically referring to the models which use a max-flow/min-cut optimization. Too much jargon? Let’s just dissect it and see what’s inside, shall we?
What is energy minimization?
The reason behind using the term “energy” is that typical object detection/segmentation tasks are posed as energy minimization problems. We define “energy” to be a quantity that captures the solution we desire, and we minimize it (for example, by gradient descent) to arrive at a solution for the given problem. Energy is like the information present in the image. For example, compressing an image causes energy loss (i.e. if we use lossy compression like JPEG). Sometimes the energy is a negative measure to be minimized, and sometimes it is a positive measure to be maximized.
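For label-assignment problems such as segmentation, this energy is commonly written (a standard textbook form, not tied to any particular paper) as a sum of per-pixel data terms and pairwise smoothness terms over neighboring pixels:

$$E(L) = \sum_{p \in \mathcal{P}} D_p(L_p) + \lambda \sum_{(p,q) \in \mathcal{N}} V_{p,q}(L_p, L_q)$$

Here L assigns a label to every pixel, the data term D penalizes labels that fit the observed pixel poorly, the smoothness term V penalizes neighboring pixels that receive different labels, and λ trades the two off. Minimizing this E is exactly the job we will hand to the min-cut.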
What is a cut?
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. The cut-set of the cut is the set of edges whose endpoints are in different subsets of the partition. For example, in the graph here, the dotted line represents a cut, and the weight of that cut is 4+5+9+7+11+7+3=46. Edges are said to be crossing the cut if they are in its cut-set. In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut. In a weighted graph, it is the sum of the weights of the edges crossing the cut.
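In symbols, if S and T are the two sides of the partition and w(u, v) is the weight of edge (u, v), the weight of the cut is (a standard definition):

$$w(S, T) = \sum_{\substack{u \in S,\ v \in T \\ (u,v) \in E}} w(u, v)$$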
When we are dealing with, say, image segmentation, we treat every pixel as a node on a graph. As mentioned earlier, we first transform the given problem into a graph. Now we need to cut the image into multiple parts; we have to “cut” the graph in the best possible way. The reason we do this is that graph theory is a very well-developed branch of mathematics with robust formulations, and it gives really optimized results.
What is minimum-cut?
A cut is minimum if the size of the cut is not larger than the size of any other cut. In the above image, the minimum cut of the weighted graph G is the cut of the graph with minimum weight. The minimum cut between two vertices v and w in G is the minimum weight cut of G that puts v and w in different partitions. In the graph given here, the dotted line represents the minimum cut that separates the source from the sink. The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. Graph-cuts formulate a given problem as a graph problem and use minimum cut to get the optimized result. It is widely used for image segmentation.
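To make the max-flow/min-cut machinery concrete, here is a toy binary segmentation over four "pixels" (a minimal sketch using the networkx library; the pixel names and capacities are invented for illustration, not taken from any real image):

```python
import networkx as nx

# Terminal links (s = "object" source, t = "background" sink) encode how
# strongly each pixel prefers a label; neighbor links encode the smoothness
# penalty paid when adjacent pixels end up with different labels.
G = nx.DiGraph()
edges = [
    ("s", "p1", 9), ("s", "p2", 7), ("s", "p3", 2), ("s", "p4", 1),
    ("p1", "t", 1), ("p2", "t", 2), ("p3", "t", 8), ("p4", "t", 9),
    ("p1", "p2", 3), ("p2", "p1", 3),  # neighbor links in both directions
    ("p2", "p3", 3), ("p3", "p2", 3),
    ("p3", "p4", 3), ("p4", "p3", 3),
]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

# The minimum s-t cut yields the optimal labeling: pixels left on the
# source side are "object", those on the sink side are "background".
cut_value, (object_side, background_side) = nx.minimum_cut(G, "s", "t")
print(cut_value)             # total weight of the cut
print(object_side - {"s"})   # pixels labeled object
print(background_side - {"t"})  # pixels labeled background
```

A real segmenter builds the same kind of graph with one node per pixel, sets the terminal capacities from the data terms and the neighbor capacities from the smoothness terms, and reads the labeling straight off the cut.
|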
Throughout the day, parents can incorporate problem solving practice into the daily routine. For example, in the morning at breakfast, prompt the child to figure out what he will eat, what he will wear and where you will go. During daily activities or outings, try to encourage your child to use problem solving skills effectively. A variety of methods of problem solving might be used. He might list the options among the choices he has. For example, even a simple thing like making breakfast might be broken down into steps: list the choices of foods and the choices of how to make the foods. If there is an argument in the morning with a sibling over who can sit in a certain chair, you might discuss the options. For example, maybe they can take turns each day sitting in that chair, maybe you can buy another chair like that one, they might ask for your help in working out the problem, or one child could bargain by offering another activity to his sibling in exchange for using the chair.
Parents should use reinforcement for getting along. The research shows that co-operative play can be increased through the use of reinforcers. Parents can offer additional reinforcing activities if behavior is co-operative and appropriate in the morning. Social praise or special treats for getting along can be effective, depending on what is reinforcing for your child. Remember to be specific when using praise. For example, "You are such a gentleman, the way you work together on making breakfast!" Another example might be "You are such a good brother, so kind, helping each other to set the table!" Parents should reinforce only when play is co-operative, not when only one child has behaved well during an activity.
Finally, practicing problem solving at specific times during the day is helpful. Often parents have time while driving, waiting at a doctor's office or at bedtime to practice problem solving with their child. For example, at bedtime the parent can review a situation, list some optional solutions, reinforce appropriate solutions and try to come up with other examples of this type of situation. Remember, it is better to use hypothetical situations about other people, from a movie or from your own experience, rather than a situation your child currently has. If you use his current specific problem, you may inadvertently reinforce him for having frequent problems with others. The more you practice solving imaginary problems, the more ready he will be to solve real problems quickly when they arise! |
Table of Contents How to Use This Resource Common Core Connections
Tables in this resource detail how the lessons connect to the Common Core State Standards. Using the tables alongside your own curriculum, standards, or pacing guides will help you determine which lessons meet the concepts and skills you need to address with your students.
"It Makes Sense! Using the Hundreds Chart to Build Number Sense provides teachers with engaging activities based on important mathematical ideas; clear and concise lessons structured with the "introduce, explore, summarize" format; questions and tips to guide, assess, and extend student thinking; and on-the-spot professional development in the form of Teacher Reflections and A Child's Mind. This resource is an important companion text to any math program!"
Karen Economopoulos, Codirector
Investigations in Number, Data and Space
"It Makes Sense! Using the Hundreds Chart to Build Number Sense is a superb resource for teachers! The content is rich, focused on the big ideas of mathematics, appropriate for young students, and conceptually based. The teaching methods are carefully detailed; it is obvious that these are lessons that have been taught to real children in real settings. Melissa and Stephanie have done a masterful job of describing what teaching for understanding really means."
Juanita V. Copley
Professor Emerita, University of Houston
"I've used the activities in It Makes Sense! numerous times in my class. Having the visual of the hundreds chart is very helpful for my students. My students now have a grasp of sequencing, patterns, addition, subtraction, and the relationship between the numbers. Th rough repeated practice, they are able to internalize these skills, all while having fun playing a game."
First Grade Team Leader/ESL Coordinator
Eickenroht Elementary, Houston, Texas
"It Makes Sense! Using the Hundreds Chart to Build Number Sense . . . and I couldn't agree more! In a primary math classroom, is there any other tool that is more important and appropriate than the hundreds chart? Melissa and Stephanie have teamed together again to provide teachers with engaging lessons accessible to all! Your students build, navigate, investigate, and communicate their way to a stronger sense of numbers. I am not sure if you can find many first graders who wouldn't love to build a wacky hundreds chart and then talk about it with their peers. What a fabulous resource for educators!"
Math Specialist, Alexandria City Public Schools
Virginia, 2004 PAEMST Awardee—Colorado
"This rich resource provides teachers with easy-to-implement lessons that help children use the hundreds chart to develop key base-ten number concepts—an important thread that runs through the K–5 Common Core Standards. Not only does this book offer engaging activities that can be used with any textbook series, but it also comes with ideas for homework, technology connections, and tips for supporting language learners. It Makes Sense! is at the top of my list when recommending math resources that support number sense development."
Lecturer and Supervisor of Teacher Education at the University of California, San Diego and Co-author of Developing Number Sense, Grades 3–6
Melissa Conklin is the author of Math Solutions' highly acclaimed resource It Makes Sense! Using Ten-Frames to Build Number Sense, Grades K–2. Melissa works with Math Solutions as a consultant and was previously a full-time Math Solutions education specialist. She designs and provides professional development, site-based coaching, and e-coaching with classroom teachers, math coaches, and administrators. Melissa lives in Irving, Texas.
Stephanie Sheffield is the author or coauthor of four Math Solutions publications: Math and Literature, Grades 2–3; Math and Literature, Grades K–1; Math and Nonfiction, Grades 3–5; and Teaching Arithmetic: Lessons for First Grade. She is a long-time Math Solutions consultant and has taught in the Houston area for more than twenty-five years. |
After facing complaints for the paddling of a female student by a male staffer, a Texas high school has changed its policy to allow opposite-sex paddlings. Who invented paddling?
Sailors. Corporal punishment is as old as the Hebrew Bible, and bare-handed spanking was used to discipline children by the 18th century. Other devices older than the paddle, such as the birch rod, have also been used to flog the buttocks. But people didn’t commonly paddle each other on land until the 19th century. In the 18th century, paddling was used at sea to discipline naughty seafarers, such as those who slacked off and went to sleep during night watch. Instead of paddling, though, it was called cobbing. William Falconer’s 1769 Universal Dictionary of the Marine explained that cobbing “is performed by striking the offender a certain number of times on the breech [meaning buttocks] with a flat piece of wood called the cobbing-board.” Traditionally the cobbing board was made from a plank taken from the front of a barrel, and the punisher would use the bunghole end to strike the sailor’s butt. In one seaman’s account in Peter Parley’s Magazine, a sailor is told to “Get out, you lubber” and “Bear a hand, Mr. Dogfish” by a superior officer, but when he complains about the abuse, he is “ordered to have a ‘cobbing’ ” for giving “insulting looks.”
Paddling came to American shores as a way to punish slaves without scarring them. Slave owners and slave traders began paddling because they didn’t want to damage the people they saw as their valuable property. As James Glass Bertram wrote in his 1869 History of the Rod, “In order not to mark the backs of the slaves, and thus deteriorate their value, in Virginia they substituted the pliant strap and the scientific paddle.” A more recent historian further explains that “a scarred slave was a troublesome one, and no one wanted to purchase trouble.” This didn’t make the paddle any less cruel. A Mrs. Mann of Missouri was known for her fearsome “six pound paddle,” which she wielded with both hands. Some slaves were given hundreds of strokes from the paddle, and were left near dead. And it wasn’t just slaves who were paddled. At least one report suggests that some portion of American soldiers used paddling as early as the Revolutionary War, to punish “crimes characterized by meanness and low cunning,” but it wasn’t as well-known a practice then as it became in the following century.
Soon paddling spread to Europe and Brazil—where they used a wooden paddle called the palmatoria—and it wasn’t used only to discipline slaves and unruly sailors. Some of the first schoolchildren to be paddled were Irish students, who were paddled for failing to take off their hats. The French were wielding the paddle by the 1920s, except they called it the bâton de justice. In the 20th century, paddles were used for military initiations, discipline in the home, and erotic play. Paddling seems to have been used to haze freshmen since as early as the 1890s. Vanderbilt’s sophomore honorary society was dissolved by Chancellor James H. Kirkland in 1937, according to Life magazine, after he saw pictures of them paddling younger students. Over 20 states banned corporal punishment in schools over the course of the 1980s and 1990s, but it is still allowed in 19 states. |
We love learning about animals. In fact, I think most kids enjoy learning about animals, especially when you start bringing in hands-on activities and great animal books. This Science for Kids Animal Sort Activity and Printable pack is fantastic. Teach your children what different types of animals eat and what it is called with this Montessori inspired activity.
Sort and Classify Animals: Omnivore, Herbivore, Carnivore
Science for Kids Animal Sort Activity Printables
This Science idea is the perfect animal activity to add to your themed learning.
It is always better to include hands-on activities when you are teaching or sharing Science with kids.
Use the vocabulary cards to explain what each word means. Add a bowl or basket of animals (we use the Safari Toob animals for activities) and sort and classify each animal into omnivore, herbivore or carnivore.
If you want to expand on this activity, even more, you can have samples of the types of food for the children to touch, taste and see.
We used real pictures for the printables, making them easier to work with, and we believe it is good for children to see real animals while learning.
Questions to expand on the activity:
Ask the children what they ate today.
Ask which group they would be in.
Ask what category their favorite animal would be in.
Sorting and Classifying Animals by Omnivore, Herbivore, and Carnivore can be a lot of fun.
This animal sort activity is perfect for young learners and can encourage imaginative play as well. Enjoy!
A few books that we love for expanding these activities and learning more about animals are:
Dragons and Marshmallows Plus all of the Zoey and Sassafras Series
Click below for your free Science for Kids Animal Sort printables
If you are interested in more Montessori Activities Click Here. You’ll find over 150 Montessori Ideas, activities, and printables.
|
Just two weeks after the confirmation of a planet that's within the habitable zone of a distant star, the Kepler team is back with the discovery of two Earth-sized planets orbiting in what is now a five-planet system (three other planets orbiting the star, Kepler-20, had been spotted earlier). Although these planets are much too hot to support liquid water, one of them (Kepler-20e) is the smallest exoplanet yet detected.
Kepler-20 was already a busy star system, with three small planets orbiting close in to the star: Kepler-20b is about twice the size of Earth and orbits once every 3.7 days; Kepler-20c is three times Earth's radius and orbits every 11 days; and Kepler-20d is 2.75 Earth radii with an orbit of 77.6 days. If that seems somewhat tightly packed, the new finds actually jam a couple more planets within the orbit of Kepler-20d. Kepler-20e has an orbit of six days, while Kepler-20f takes 19.6 days to orbit its host star.
From the transit light curves, it's possible to estimate the planets' radii based on the amount of light they block while transiting in front of their host star. And neither of these blocks very much at all. Kepler-20e results in a drop of only 82 parts-per-million in the light from its host star, which corresponds to a radius of 0.87 Re. Kepler-20f is a bit larger, with a signal of 101 ppm, placing its radius at roughly that of Earth's.
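To see where numbers like that come from: for a central transit, the fractional dip in starlight is roughly the square of the planet-to-star radius ratio, so the planet's radius is the star's radius times the square root of the transit depth. A minimal sketch (the stellar radius of 0.94 solar radii used here is an assumed illustrative value, not a figure from the article, which is why the results land slightly above the quoted radii):

```python
import math

R_SUN_IN_EARTH_RADII = 109.2  # approximate solar radius in Earth radii

def planet_radius_earths(depth_ppm: float, stellar_radius_suns: float) -> float:
    """Estimate a transiting planet's radius (in Earth radii) from transit depth.

    Uses depth ~ (R_planet / R_star)**2, valid for a dark, central transit.
    """
    depth = depth_ppm * 1e-6
    return math.sqrt(depth) * stellar_radius_suns * R_SUN_IN_EARTH_RADII

print(planet_radius_earths(82, 0.94))   # Kepler-20e: ~0.93 Earth radii
print(planet_radius_earths(101, 0.94))  # Kepler-20f: ~1.03 Earth radii
```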
The fast orbits indicate that these planets are very close to their host star, which has a surface temperature of a toasty 5,500 K, a few hundred kelvin shy of our Sun's. That makes the planets correspondingly toasty. Kepler-20e is predicted to have a surface temperature of over 1,000 K, and is close enough that any hydrogen atmosphere would have been heated off. Any water on its surface would have been boiled into vapor, broken down by UV exposure, and the resulting gases also driven off.
Kepler-20f might have been able to hold onto its water if it had formed further out and then migrated inward to its current orbit. Its surface temperature is "only" 700K, and it's far enough from the host star that it could retain a water vapor atmosphere for several billion years.
Theoretical considerations based on what we know about planet formation suggest that both of these planets are rocky, and may even have a composition similar to Earth's. To confirm this, however, would require measuring the mass of the planets. Unfortunately, the best way to do this is to measure how much they pull their host star around as their orbits take them to opposite sides of Kepler-20. Their small size leads to correspondingly small Doppler shifts, however, and those are currently below our ability to detect. Improvements expected in our telescope hardware should allow us to do so within a few years, but we're stuck with theory for now.
The one caveat from all of this is that these signals haven't been confirmed with another piece of hardware, something the Kepler team normally requires before shifting something from the "candidate" to "confirmed" category. In this case, they have been able to call these candidates confirmed by showing that all the other possible sources of a signal are extremely unlikely. The host star shows no sign of having a companion brown dwarf or other dim star, based on the lack of any wobbles in its orbit. The chances that a second star in the same line of sight has a large planet that's producing these signals are very, very small.
The previous find of planets in this system also boosts the odds that the new signal comes from a planet, since we already know that things are orbiting in the right plane to pass between Earth and Kepler-20. As a result, the authors conclude that these signals come from small planets with greater than three-sigma certainty (something that the Higgs hunters wouldn't find satisfying, but good enough for an astronomer).
NASA will be holding a press conference on the results shortly; we'll update this story if any additional information is provided during the event. |
When boosting the immune system or providing protection against viruses, not many people think of vitamin A. It is well known that vitamin A levels plummet with infection and with chemical exposure. There are some studies that show it to be of value in protecting against viral infections.
There are a number of animal studies showing vitamin A as antiviral. One study (Int J Vitam Nutr Res. 2010 Apr;80(2):117-30) showed that it alleviated inflammatory responses in the reproductive tracts of male mice infected with pseudorabies virus. Another animal study (Vaccine. 2014 May 7;32(22):2521-4) showed that it improves IgA production in the mucosa. Vitamin A deficient mice have increased viral antigens and enhanced cytokine/chemokine production in nasal tissues following respiratory virus infection, according to another study (Int Immunol. 2016 Mar;28(3):139-52).
Human studies exist as well. One study (Kansenshogaku Zasshi. 1999 Feb;73(2):104-9) showed supplementation to be helpful for both measles and RSV. There are several studies involving HIV infected children and vitamin A. One study (Nutrition. 2005 Jan;21(1):25-31) demonstrated a reduction in mortality in HIV infected children. Another study (J Nutr. 2019 Oct 1;149(10):1757-1765) showed that vitamin A supplementation reduced mortality in patients infected with Ebola virus. Supplementation with vitamin A may also potentiate vaccines. In one study (Viruses. 2019 Sep 30;11(10)), researchers concluded, “Overall, our study demonstrates that vitamin A&D supplementation can improve immune responses to vaccines when children are vitamin A and D-insufficient at baseline. Results provide guidance for the appropriate use of vitamins A and D in future clinical vaccine studies.”
Vitamin A is important for membrane health. One study (J Nutr Biochem. 2010 Mar;21(3):227-36) looked at alveolar membranes in rats. Researchers concluded, “Vitamin A deficiency results in alterations of the structure and composition of the alveolar BM which are probably mediated by TGF-beta1 and reverted by retinoic acid.”
In another animal study (PLoS One. 2015 Sep 30;10(9):e0139131), researchers stated, “In conclusion, vitamin A deficiency suppressed the immunity of the airway by decreasing the IgA and mucin concentrations in neonatal chicks. This study suggested that a suitable level of vitamin A is essential for the secretion of IgA and mucin in the respiratory tract by regulating the gene expression of cytokines and epithelial growth factors.”
Vitamin A and vitamin D are important for the health of the gut membrane. So much so, that deficiency can affect the microbiota, and thus the immune system. Researchers in one study (Crit Rev Biochem Mol Biol. 2019 Apr;54(2):184-192) state, “There are some unique functions of vitamin A and D; for example, vitamin A induces gut homing receptors on T cells, while vitamin D suppresses gut homing receptors on T cells. Together, vitamin A- and vitamin D-mediated regulation of the intestinal epithelium and mucosal immune system shape the microbial communities in the gut to maintain homeostasis.” |
Flashcards in B New Testament chapter 2 Deck (16):
Chester Beatty Papyri
Contain the earliest copies of many books of the New Testament. Twelve volumes date from around the year 200.
Codex
A manuscript in the form of a modern book (pages, not a continuous roll).
Codex Vaticanus
Fourth-century codex of the Bible housed in the Vatican. One of the most reliable manuscripts for books of the New Testament. (Does not contain books at the beginning or the end.)
Dynamic equivalence
A translation approach in which the translator tries to convey the original meaning of the source text. (It doesn't need to use the same wording.)
Formal equivalence
A type of translation in which the translator tries to stay as close to a word-for-word rendering as possible.
King James version
Translation of the bible into English in 1611. It becomes the standard English translation through the first half of the twentieth century
Masoretic Text
Standard text of the Hebrew Bible that comes from the Masoretes. They added vowels and accents to make the text easier to read.
Papyrus
Much of the New Testament was written on this paper-like writing material.
Textual criticism
A field of biblical studies that tries to establish the earliest possible wording of biblical texts.
Textus Receptus
A Greek text produced in 1516 by the scholar Erasmus. Became the basis for the 1611 King James Version.
Uncial
A manuscript written in all capital letters. Gives us the Codex Vaticanus and Codex Sinaiticus.
Vellum
A much finer writing surface than papyrus, made from animal skin.
Early Manuscript Corruption
Mark: later scribes made up the last 12 verses (Mark 16:9-20) to avoid the abrupt ending.
Characteristics of NT Scribes
somewhat limited clarity and grammar
Transmission of the Hebrew Bible
Texts were written, edited, and re-edited over a course of centuries. At some point these texts came to be preserved as the word of God. It was important to keep the texts in their exact form |
In this investigation students explore the connection between competition for mates and the evolution of elaborate traits in birds. Using the online database Birds of North America, students develop and test a set of hypotheses to explain the variation in sexual dimorphism among bird species.
Cornell Lab of Ornithology
An excellent lab for focusing on the process of science while learning about evolutionary concepts.
Students should be familiar with social mating systems (at least monogamy and polygyny) as well as the concept of extra-pair paternity (EPP).
Computers with multimedia software (Flash or QuickTime) required. Access to Birds of North America recommended.
Correspondence to the Next Generation Science Standards is indicated in parentheses after each relevant concept. See our conceptual framework for details.
- Sexual selection occurs when selection acts on characteristics that affect the ability of individuals to obtain mates.
- Sexual selection can lead to physical and behavioral differences between the sexes.
- Scientific knowledge is open to question and revision as we come up with new ideas and discover new evidence.
- A hallmark of science is exposing ideas to testing.
- Scientists test their ideas using multiple lines of evidence. |
Cross pollination is ordinarily not a problem for vegetables grown from their leaves, such as spinach, cabbage and greens of one sort or another. And there are few pollination problems with carrots, beets, radishes and vegetables grown from roots. However, some vegetables experience cross-pollination problems that can often be quite severe.
Self-Pollinating Vegetables
These vegetables produce flowers that are fertilized by their own pollen. Their flowers usually contain both the male and female parts. These plants don't need insects or wind to be pollinated properly. Tomatoes, peas, lima beans, bush and pole beans, and lettuce are examples of self-pollinators.
Cross-Pollinating Vegetables
Cross-pollinating vegetables need pollen from another plant in order to produce seed. The pollen is usually carried by the wind or by bees and other insects. The wind pollinates vegetables like chard, corn, spinach and beets. Bees, butterflies and other insects pollinate asparagus, broccoli, Brussels sprouts, cabbage, carrots, cauliflower, celery, collards, cucumbers, eggplant, gourds, kale, kohlrabi, muskmelons, mustard, okra, onions, parsley, parsnip, hot pepper, pumpkin, radish, rutabaga, spinach, squash, turnips and watermelon.
Cross Pollination
This is when pollen from a different strain or variety of vegetable is introduced. This is how hybrid vegetables are developed. Seeds from hybrids usually revert to the traits of their ancestors.
Varieties of winter squash, jumbo pumpkins and ornamental gourds that are closely related will cross-pollinate if you plant them close together. Don't worry about cucumbers or melons. Although tomatoes are self-pollinating, they can cross pollinate.
Preventing Cross Pollination
You do not have to worry about cross pollination if you buy seed for each growing season. Likewise, cross pollination between different kinds of vegetables is not a concern. If you save your seed, you have to worry about cross pollination between two varieties of the same vegetable. Self-pollinating vegetables are less susceptible to cross-pollination, but it can still happen.
If your vegetables are pollinated by the wind, you can isolate them and hope that space will act as a barrier. Corn is an example of a vegetable that you can separate by space.
Plants that are pollinated by bees, butterflies and other insects are more difficult to prevent from cross pollinating. Prevent insects from cross-pollinating your plants by putting them under a plastic tent and pollinating them by hand.
Remember that you need bees to pollinate many if not most of your vegetables. Do not use an insecticide to kill bees to prevent cross-pollination. |
ESLN 3400 Beginning High 4
Completion of ESLN 3300
(Beginning High 3)
Beginning High 4 students further develop and expand their knowledge of beginning high language skills. Students learn to comprehend spoken English in familiar contexts, communicate about basic needs and common activities, and participate in basic conversations in routine social situations. They learn to organize sentences into short, loosely organized paragraphs.
After successful completion of this course, students will be able to:
- Outcome 1: Identify spoken English from common topics in familiar contexts to participate in simple conversations
- Outcome 2: Produce simple conversations in a variety of common social situations
- Outcome 3: Determine the meaning of new words by applying basic word analysis and vocabulary development skills
- Outcome 4: Identify words in longer reading passages in familiar contexts.
- Outcome 5: Compose sentences organized into short, loosely organized paragraphs
- Outcome 6: Use Beginning High Level 4 language structures and forms. |
What emotions does everybody share? [All Ages]
Are there any emotions that everyone experiences, no matter where they are from?
Present the audience with each of the following pictures in turn and ask them to shout out what each of the emotions is displayed on the person’s face.
There are thought to be six basic types of emotion – anger, fear, surprise, happiness, sadness and disgust. Point out to the audience that they had no difficulty recognising them even though the people are all from different places in the world. These emotions are triggered without your conscious control (although as you get older you can get better at hiding them). Most basic emotions are related to our primitive survival instincts.
Q: What are emotions and why do we have them?
A: Emotions are signals to our bodies that act like the fuel to motivate us to take some form of action. They can affect our breathing, our pulse rate, even how we digest our food. They are states of the body that prepare us to do something. This explains why emotions are often associated with all the really important things we have to do to survive as a species. If you think about it, we need to find food, drink and shelter. We need to avoid danger. We need to breed and we need to look after our children. Each one of these activities involves emotions and feelings.
Q1. What is the difference between an emotion and a mood?
A1. An emotion is usually an uncontrollable reaction to something, whereas a mood is a longer-term state that arises from that initial emotion. For instance, you might feel frightened (an emotion) and then afterwards feel nervous (a mood) for a long time.
Q2: Where in the brain do emotions come from?
A2: Emotions are generated in the limbic system which operates mostly unconsciously.
As far as we are aware, humans are the only animals that cry with sadness. Nobody is entirely sure why we cry but we do know that crying can wash away some of the natural chemicals that make you unhappy. This might be why people often feel a lot better after crying even though the situation hasn’t changed. Crying might also be an honest plea for help because it is very difficult to pretend to cry believably.
What emotions do we learn from our parents? [All Ages]
Are emotions learned or evolved?
Play the following video and ask the audience what they felt when Bruce drank out of the toilet. Ask them to mimic the sort of face they pulled.
Most people experience a sense of disgust when they see Bruce drinking out of the toilet even though he explains that it is a completely new toilet that has never been used. This is because we have learned to associate toilets with going to the toilet so that emotion overwhelms the logic of us knowing that there is no danger.
Disgust is an interesting emotion because, unlike anger, fear, surprise, happiness and sadness, disgust is the only one of the six basic emotions that doesn't seem to be experienced by babies. It's not until around their second year that children begin to find things disgusting.
Q: If we don’t develop disgust until 2 years of age, does that mean that we are learning what to be disgusted about?
A: Yes, one reason that disgust may have evolved is that it is a useful way of signaling to others what is safe to eat. This is because humans are omnivores and can eat many different things. One theory is that it is important to learn which foods are safe and which may be potentially harmful. Children use adults as their food tasters! For instance, in the West we don't normally eat insects and find the idea disgusting, but in other countries insects are highly prized because they are rich in vitamins and protein.
How do emotions spread from one person to another? [All Ages]
Can you catch other people’s emotion?
Play the following video to the audience and point out that when we watch videos of painful things happening to other people we often feel a twinge of concern or pain on behalf of the person in the video even though we don’t know them and are not experiencing what they are.
Emotions such as laughter, crying and pain can become contagious as we empathize and share mental states with others. We literally experience some of the unpleasantness of other people’s suffering by putting ourselves in their shoes. This is why moments of extreme negative emotion, such as when someone is in pain, can be almost unbearable to watch.
Q: What is happening in the brain when we empathise with someone else’s emotion?
A: One reason that emotions and behaviours seem to spread between people is that we have a circuit of neurons, only discovered fairly recently, called the “mirror neuron” system. Within our brains are areas that control movements called the ‘motor areas’. Around 1 in 10 neurons in these regions seem to respond to watching other people’s movements, as if we are mirroring their behaviours. What is remarkable is that it is not the actual movements that we mirror but the intended goal. For example, if I watch you activate a switch, the same mirror neurons would fire whether you used your left or right hand. It is the goal and not the actual movement that is registered in the brain. |
A study in which full-sized stone fruit trees are grown in sand tanks is yielding valuable data that will help scientists revise nutritional guidelines and threshold levels. Considered a definitive study on peach, plum, and nectarine nutrition by industry officials, the study allows researchers to determine for the first time the detrimental effects and benefits of specific nutrients on the health of stone fruit trees and fruit quality.
In 2000, Dr. Scott Johnson, University of California research scientist, installed 60 tanks measuring 6 feet by 12 feet and 4 feet deep, and filled each one with 19,000 pounds of sand. He planted Zee Lady peaches, Grand Pearl white-fleshed nectarines, and Fortune plum trees in each tank. White-fleshed peaches and nectarines make up about 20 to 25 percent of California peach and nectarine production.
Nutritional studies on white-fleshed stone fruit haven’t been done before. The sand holds the tree up but supplies no nutrients, allowing researchers to spoon-feed all the nutrients for tree development. One by one, macro- and micronutrients were withheld from the mature trees, including nitrogen, phosphorus, potassium, boron, and zinc. “We’ve been able to control the nutrients reasonably well,” Johnson said, adding that they are learning new things, particularly about phosphorus levels.
“We’re starting to see the effects of deficiencies in the tree before it shows up in the leaves as damage or symptoms,” he said. Kevin Day, UC Cooperative Extension tree fruit advisor, shared preliminary data from the sand tank project with Washington stone fruit growers during a soft fruit meeting held in Buena, Washington. Day said that the research team observed a distinct phosphorus deficiency in mature fruit trees. Only one other case in the South has documented phosphorus deficiency in soft fruit trees.
“We knew that you can have a phosphorus deficiency in young trees, but not in older trees.” The phosphorus-deficient trees were weaker, with smaller leaves. Cracking and russetting were found on the fruit, especially in nectarines, which seem to be more susceptible to cracking. Johnson said that phosphorus deficiencies may have fooled growers and consultants in the past because deficiencies weren’t expected on mature trees.
“Most growers would probably throw nitrogen at the tree when they see such symptoms,” he said. “But in reality, it may be something else, like phosphorus.” After gathering sand tank data for several years, the research team is expanding the nutritional study to commercial orchards, surveying 60 orchard sites that are in sandy soil conditions.
They will use data collected from the field to determine average nutritional levels found in commercial orchards and fine-tune nutritional recommendations and guidelines. Nutritional levels for stone fruit trees were first developed about 50 years ago, according to Johnson. “Nutritional studies have been ongoing throughout the years, and the recommended levels for nutrients like nitrogen, boron, zinc, and phosphorus are constantly being tweaked.” Armed with this new data, Johnson plans to tweak the recommended nutritional levels once again when the field work is completed.
New sampling method
An outgrowth of the sand tank nutritional research is the development of a new sampling method using dormant shoots to determine the nutrient status of fruit trees. By collecting dormant shoot samples in January and analyzing the nutritional status, growers are armed with nutritional information going into the growing season.
They can make nutritional adjustments in the first half of the season. Growers typically collect leaf samples in midsummer to determine the nutritional status of stone fruit trees. Fertilizer applications are then made in midsummer or fall when it is often too late to help the current crop. Some nutrients applied in the front end of the growing season can benefit the current year’s fruit growth.
“With the dormant shoot technique, you can get an idea of what is stored in the tree and what it will take up in the spring,” Johnson added. The dormant shoot method can also be used as a tool to help growers gauge the effectiveness of previous fertilizer applications. He noted that the shoot sampling method might work better for some nutrients than others. The amount of nitrogen in dormant shoots was not that much different than the nitrogen amounts found in shoots from trees that the scientists knew were nitrogen deficient.
“It may be that for nitrogen, we have to measure it in a different way, such as measuring arginine or other amino acid levels,” he said. After samples from the commercial orchards in the study are analyzed, Johnson plans to publish revised nutritional threshold levels for several nutrients. Preliminary data already suggests that deficiency thresholds can be lowered for zinc, while the threshold levels for boron may be higher than what was previously published. |
Another Difficulty with Darwinian Accounts of How Human Bipedalism Developed
A Darwinian evolutionary bedtime story tells of how proto-man achieved his upright walking status when the forests of his native East Africa turned to savannas. That was 4 to 6 million years ago, and the theory was that our ancestors stood up in order to be able to look around themselves over the sea of grasslands, which would have been irrelevant in the forests of old.
A team of researchers led by USC's Sarah J. Feakins, writing in the journal Geology, detonate that tidy explanation with their finding that the savannas, going back 12 million years, had already been there more than 6 million years when the wonderful transition to bipedalism took place ("Northeast African vegetation change over 12 m.y.").
Science Daily summarizes:
The research combines sediment core studies of the waxy molecules from plant leaves with pollen analysis, yielding data of unprecedented scope and detail on what types of vegetation dominated the landscape surrounding the African Rift Valley (including present-day Kenya, Somalia and Ethiopia), where early hominin fossils trace the history of human evolution.

The Economist enjoys this revelation, observing how "A cherished theory about why people walk upright has just bitten the dust":
Dr Feakins has shown that early humanity's east African homeland was never heavily forested, so the idea that people were constrained to walk upright by the disappearance of the forests is wrong.

Of course there's much more to the enigma of upright walking than just the question of whether it was a response to grassland encroaching on the forest (that, we now see, wasn't there in any event). In Science and Human Origins, see Ann Gauger's discussion of the engineering difficulties in making a transition to the nearly modern anatomy of Homo erectus.
Perhaps it was more pull than push -- a pre-existing, but empty ecological niche crying out to be filled by an enterprising species that could make the transition. But perhaps those who seek an ecological explanation of this sort are, as it were, barking up the wrong tree. |
THE LIVING WORLD
Unit two. The Living Cell
7. How Cells Harvest Energy from Food
7.3. Harvesting Electrons from Chemical Bonds
The first step of oxidative respiration in the mitochondrion is the oxidation of the three-carbon molecule called pyruvate, which is the end product of glycolysis. The cell harvests pyruvate’s considerable energy in two steps: first, by oxidizing pyruvate to form acetyl-CoA, and then by oxidizing acetyl-CoA in the Krebs cycle.
Step One: Producing Acetvl-CoA
Pyruvate is oxidized in a single reaction that cleaves off one of pyruvate’s three carbons. This carbon then departs as part of a CO2 molecule, shown in figure 7.5 coming off the pathway with the green arrow. Pyruvate dehydrogenase, the complex of enzymes that removes CO2 from pyruvate, is one of the largest enzymes known. It contains 60 subunits! In the course of the reaction, a hydrogen and electrons are removed from pyruvate and donated to NAD+ to form NADH. The Key Biological Process illustration below shows how an enzyme catalyzes this reaction, bringing the substrate (pyruvate) into proximity with NAD+. Cells use NAD+ to carry hydrogen atoms and energetic electrons from one molecule to another. NAD+ oxidizes energy-rich molecules by acquiring their hydrogens (this proceeds 1 → 2 → 3 in the figure) and then reduces other molecules by giving the hydrogens to them (this proceeds 3 → 2 → 1). Now focus again on figure 7.5. The two-carbon fragment (called an acetyl group) that remains after removing CO2 from pyruvate is joined to a cofactor called coenzyme A (CoA) by pyruvate dehydrogenase, forming a compound known as acetyl-CoA. If the cell has plentiful supplies of ATP, acetyl-CoA is funneled into fat synthesis, with its energetic electrons preserved for later needs. If the cell needs ATP now, the fragment is directed instead into ATP production through the Krebs cycle.
Figure 7.5. Producing acetyl-CoA.
Pyruvate, the three-carbon product of glycolysis, is oxidized to the two-carbon molecule acetyl-CoA, in the process losing one carbon atom as CO2 and an electron (donated to NAD+ to form NADH). Almost all the molecules you use as foodstuffs are converted to acetyl-CoA; the acetyl-CoA is then channeled into fat synthesis or into ATP production, depending on your body's needs.
A Closer Look
Metabolic Efficiency and the Length of Food Chains
In the earth's ecosystems, the organisms that carry out photosynthesis are often consumed as food by other organisms. We call these "organism-eaters” heterotrophs. Humans are heterotrophs, as no human photosynthesizes.
It is thought that the first heterotrophs were ancient bacteria living in a world where photosynthesis had not yet introduced much oxygen into the oceans or atmosphere. The only mechanism they possessed to harvest chemical energy from their food was glycolysis. Neither oxygen-generating photosynthesis nor the oxidative stage of cellular respiration had evolved yet. It has been estimated that a heterotroph limited to glycolysis, as these ancient bacteria were, captures only 3.5% of the energy in the food it consumes.
Hence, if such a heterotroph preserves 3.5% of the energy in the photosynthesizers it consumes, then any other heterotrophs that consume the first heterotroph will capture through glycolysis 3.5% of the energy in it, or 0.12% of the energy available in the original photosynthetic organisms. A very large base of photosynthesizers would thus be needed to support a small number of heterotrophs.
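In other words, two trophic transfers at 3.5% efficiency each leave

$$0.035 \times 0.035 = 0.001225 \approx 0.12\%$$

of the original photosynthetic energy.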
When organisms became able to extract energy from organic molecules by oxidative cellular respiration, which we discuss on the next page, this constraint became far less severe, because the efficiency of oxidative respiration is estimated to be about 32%. This increased efficiency results in the transmission of much more energy from one trophic level to another than does glycolysis. (A trophic level is a step in the movement of energy through an ecosystem.) The efficiency of oxidative cellular respiration has made possible the evolution of food chains, in which photosynthesizers are consumed by heterotrophs, which are consumed by other heterotrophs, and so on. You will read more about food chains in chapter 36.
Even with this very efficient oxidative metabolism, approximately two-thirds of the available energy is lost at each trophic level, and that puts a limit on how long a food chain can be. Most food chains, like the East African grassland ecosystem illustrated here, involve only three or rarely four trophic levels. Too much energy is lost at each transfer to allow chains to be much longer than that.
For example, it would be impossible for a large human population to subsist by eating lions captured from the grasslands of East Africa; the amount of grass available there would not support enough zebras and other herbivores to maintain the number of lions needed to feed the human population. Thus, the ecological complexity of our world is fixed in a fundamental way by the chemistry of oxidative cellular respiration.
Photosynthesizers. The grass under this yellow fever tree grows actively during the hot, rainy season, capturing the energy of the sun and storing it in molecules of glucose, which are then converted into starch and stored in the grass.
Herbivores. These zebras consume the grass and transfer some of its stored energy into their own bodies.
Carnivores. The lion feeds on zebras and other animals, capturing part of their stored energy and storing it in its own body.
Scavengers. This hyena and the vultures occupy the same stage in the food chain as the lion. They also consume the body of the dead zebra, after it has been abandoned by the lion.
Refuse utilizers. These butterflies, mostly Precis octavia, are feeding on the material left in the hyena's dung after the food the hyena consumed had passed through its digestive tract.
A food chain in the savannas, or open grasslands, of East Africa.
At each of these levels in the food chain, only about a third or less of the energy present is used by the recipient.
Step Two: The Krebs Cycle
The next stage in oxidative respiration is called the Krebs cycle, named after Hans Krebs, the biochemist who discovered it. The Krebs cycle (not to be confused with the Calvin cycle in photosynthesis) takes place within the mitochondrion. While a complex process, its nine reactions can be broken down into three stages, as indicated by the overview presented in the Key Biological Process illustration below:
Stage 1. Acetyl-CoA joins the cycle, binding to a four-carbon molecule and producing a six-carbon molecule.
Stage 2. Two carbons are removed as CO2, their electrons donated to NAD+, and a four-carbon molecule is left. A molecule of ATP is also produced.
Stage 3. More electrons are extracted, forming NADH and FADH2; the four-carbon starting material is regenerated.
To examine the Krebs cycle in more detail, follow along the series of individual reactions illustrated in figure 7.6. The cycle starts when the two-carbon acetyl-CoA fragment produced from pyruvate is joined to a four-carbon molecule called oxaloacetate. Then, in rapid-fire order, a series of eight additional reactions occur (steps 2 through 9). When it is all over, two carbon atoms have been expelled as CO2, one ATP molecule has been made in a coupled reaction, eight more energetic electrons have been harvested and taken away as NADH or on other carriers, such as FADH2, which serves the same function as NADH, and we are left with the same four-carbon molecule we started with. The process of reactions is a cycle—that is, a circle of reactions. In each turn of the cycle, a new acetyl group replaces the two CO2 molecules lost, and more electrons are extracted. Note that a single glucose molecule produces two turns of the cycle, one for each of the two pyruvate molecules generated by glycolysis.
Figure 7.6. The Krebs cycle.
This series of nine enzyme-catalyzed reactions takes place within the mitochondrion.
In the process of cellular respiration, glucose is entirely consumed. The six-carbon glucose molecule is first cleaved into a pair of three-carbon pyruvate molecules during glycolysis. One of the carbons of each pyruvate is then lost as CO2 in the conversion of pyruvate to acetyl-CoA, and the other two carbons are lost as CO2 during the oxidations of the Krebs cycle. All that is left to mark the passing of the glucose molecule into six CO2 molecules is its energy, preserved in four ATP molecules and electrons carried by 10 NADH and two FADH2 carriers.
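As a quick bookkeeping check, the per-stage yields described above can be tallied; this short sketch (stage labels ours) reproduces the totals in the preceding paragraph:

```python
# Per-glucose yields for the stages of cellular respiration covered so far:
# glycolysis, pyruvate oxidation, and two turns of the Krebs cycle.
stages = {
    "glycolysis":         {"ATP": 2, "NADH": 2, "FADH2": 0, "CO2": 0},
    "pyruvate oxidation": {"ATP": 0, "NADH": 2, "FADH2": 0, "CO2": 2},
    "Krebs cycle (x2)":   {"ATP": 2, "NADH": 6, "FADH2": 2, "CO2": 4},
}
totals = {m: sum(s[m] for s in stages.values()) for m in ("ATP", "NADH", "FADH2", "CO2")}
print(totals)  # {'ATP': 4, 'NADH': 10, 'FADH2': 2, 'CO2': 6}
```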
Key Learning Outcome 7.3. The end product of glycolysis, pyruvate, is oxidized to the two-carbon acetyl-CoA, yielding a pair of electrons plus CO2. Acetyl-CoA then enters the Krebs cycle, yielding ATP, many energized electrons, and two CO2 molecules.
X-ray: NASA / CXC / NCSU / M.Burkey et al; Optical: DSS
This is the remnant of Kepler's supernova, the famous explosion discovered by Johannes Kepler in 1604. The red, green and blue colors show low, intermediate and high energy X-rays observed with NASA's Chandra X-ray Observatory, and the star field is from the Digitized Sky Survey. Image released on March 18.
By Megan Gannon
Scientists have conducted a postmortem exam on the last gigantic star explosion ever observed by the naked eye in our galaxy, revealing that the supernova was triggered by a compact white dwarf containing more heavy elements than the sun.
The supernova suddenly appeared in the night sky in 1604. Brighter than all other stars and planets at its peak, it was observed by German astronomer Johannes Kepler, who thought he was looking at a new star. Centuries later, scientists determined that what Kepler saw was actually an exploding star, and they named it Kepler's supernova.
The recent cosmic autopsy — made possible by X-ray observations from the Japan-led Suzaku satellite — could help scientists better understand phenomena known as Type Ia supernovae. [Supernova Photos: Great Images of Star Explosions]
"Kepler's supernova is one of the most recent Type Ia explosions known in our galaxy, so it represents an essential link to improving our knowledge of these events," Carles Badenes, an assistant professor of physics and astronomy at the University of Pittsburgh, said in a statement from NASA.
Type Ia supernovae are thought to originate from binary systems in which at least one star is a white dwarf — a tiny, superdense core of a star that has ceased undergoing nuclear fusion reactions.
Gas transferred from a "normal" star in the pair may accumulate on the white dwarf, or if both stars in the system are white dwarfs, their orbits around each other may shrink until they fuse together. In either case, when the white dwarf or white dwarf conglomerate puts on too much weight (around 1.4 times the sun's mass), a runaway nuclear reaction begins inside, eventually leading to a brilliant supernova.
To get a better picture of the star's makeup before it blew up, Badenes and colleagues probed the chemical signatures in the shell of hot, rapidly expanding gas left by Kepler's supernova using 2009 and 2011 observations from the Suzaku satellite's X-ray Imaging Spectrometer.
The X-ray spectrum revealed faint emissions from highly ionized chromium, manganese and nickel, as well as a bright emission line from iron. The ratios of these trace elements in the supernova remnant show that the original white dwarf likely had about three times the amount of metals found in the sun, the researchers said.
Kepler's supernova remnant is thought to be 23,000 light-years away. Compared with our solar system, it is much closer to the Milky Way's crowded central region, where star formation was probably more rapid and efficient, leaving interstellar gas enriched with greater proportions of metals. This would explain why Kepler's supernova seems to have formed out of material that already had a higher fraction of metals.
The study didn't solve which type of binary system triggered the supernova, but the researchers say the white dwarf was relatively young when it exploded — no more than a billion years old, or less than a quarter of the sun's current age.
"Theories indicate that the star's age and metal content affect the peak luminosity of Type Ia supernovae," Sangwook Park, an assistant professor of physics at the University of Texas at Arlington, explained in a statement. "Younger stars likely produce brighter explosions than older ones, which is why understanding the spread of ages among Type Ia supernovae is so important."
By better understanding Type Ia supernovae, Park added, "we can fine-tune our knowledge of the universe beyond our galaxy and improve cosmological models that depend on those measurements."
Astrophysicists from the United States and Australia won the Nobel Prize in physics in 2011 for their discovery that the universe's expansion is accelerating — a revelation based on measurements of Type Ia supernovae that led to the concept of dark energy.
The new findings were detailed in the April 10 issue of The Astrophysical Journal Letters.
Copyright 2013 Space.com, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
The earthquake of March 11, 2011, off the coast of Sendai, Japan, brought enormous suffering to a nation that is likely more accustomed and better prepared than any other for the threat of earthquakes. Even more unfortunate and destructive than the quake itself, however, was the massive tsunami that followed it. As has been established, the tsunami knocked out the emergency cooling system in the Fukushima Daiichi nuclear power plant. The incident culminated in three explosions, on March 12, 14, and 15, that caused substantial damage to the containment buildings of reactor units 1, 3, and 2, respectively. The reactor vessels, though, are not considered catastrophically compromised (i.e., no exposed reactor cores).
Under the current worst-case scenario, it is likely that some radioactive particles from the reactor cores, or from the spent fuel storage pools, or both, were released into the atmosphere. In such a case, rain and snow would be major components contributing to the sedimentation of these particles back to the surface, into the soil, and potentially into agricultural produce.
Figure 1. Precipitation retrieval from AIRS (left), and from the GEOS-5 assimilation (right), over Japan.
However, NASA's Earth Observing System precipitation and model wind data leave hope that the worst of this scenario has not occurred over land, for at least two reasons: the prevailing westerly winds and the light precipitation over Japan during the period following the quake and tsunami. Figure 1 shows images of the precipitation data retrieval from the Atmospheric Infrared Sounder (AIRS, left column) on the Aqua satellite, and from the Goddard Earth Observing System Data Assimilation System (GEOS-5, right column). Wind vectors at 850 mb (approximately 1.5 km altitude) from GEOS-5 are also overlaid in the right column.
For these images, we used the AIRS Level 2 support product (AIRSX2SUP) gridded to a 0.5° x 0.5° map, and the GEOS-5 G5.2.0 products tavg2d_met_x (precipitation) and inst3d_met_p (850-mb winds). Each image represents a 5-day average for the period shown above the image. Despite expected differences between the satellite observations and the assimilated data, both agree that only light precipitation prevailed over Japan, with the bulk of the intense precipitation fortunately occurring downwind and offshore.
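For readers who want to reproduce this kind of composite, the averaging step is simple once the retrievals are binned to a regular grid. The sketch below uses random numbers as a stand-in for the AIRS data; all array names and shapes are our assumptions, not the actual product layout:

```python
import numpy as np

days, nlat, nlon = 5, 360, 720            # a global 0.5-degree grid
rain = np.random.rand(days, nlat, nlon)   # stand-in for daily gridded AIRS retrievals

five_day_mean = rain.mean(axis=0)         # the 5-day average shown in each image

# Subset to a window around Japan (roughly 24-46 N, 122-150 E)
lats = np.linspace(-89.75, 89.75, nlat)
lons = np.linspace(-179.75, 179.75, nlon)
japan = five_day_mean[np.ix_((lats > 24) & (lats < 46), (lons > 122) & (lons < 150))]
print(japan.shape)  # (44, 56)
```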
Perhaps the worst period was around March 18-22, when AIRS and GEOS-5 show most of Japan under light precipitation. GEOS-5 indicates that most of the accumulation likely occurred during this period, peaking west of Tokyo (see KMZ). Also, on March 18, smoke remained visible over unit 4, which may have helped inject radioactive materials higher into the atmosphere (above the boundary layer, into the entrainment zone).
Indeed, the International Atomic Energy Agency first reported agricultural food contamination around Fukushima on March 19. The IAEA also took radiation readings around Tokyo, reporting dose rates that, although elevated, were well below levels dangerous to human health.
The prevailing winds over the entire period were consistently from the west (westerlies), which must have helped a great deal to transport the bulk of any material from the power plant offshore, into the Pacific Ocean. Meanwhile, AIRS and GEOS-5 unambiguously show large areas of intense precipitation downwind from Japan. Thus, there is a good likelihood that a substantial portion of this material was captured by precipitation processes (e.g., condensation nuclei and downdrafts) and deposited over the ocean.
Mischievous Bird Grasshopper (Schistocerca damnifica)
Detailing the physical features, habits, territorial reach and other identifying qualities of the Mischievous Bird Grasshopper.
Updated: 10/27/2017; Authored By Staff Writer; Content ©www.InsectIdentification.org
Mischievous Bird Grasshoppers are strong fliers with large bodies, and may also fly in swarms like their avian namesake.
One of the smallest of its genus, the Mischievous Bird Grasshopper is still larger than most other Orthopterans. Populations in its northern and southern ranges show minor physical differences. The southern populations can overwinter as adults and are active longer thanks to the warmer temperatures.
They are brown, though some individuals have hints of orange or red. Small dots or speckles cover the body. Nymphs (juveniles) have shorter wings than adults, whose wings extend the length of the abdomen; the wings become longer after each instar (molting phase). They are part of a group of grasshoppers called bird grasshoppers because they fly so well. Some species in this genus are known pests of crops and/or ornamental plants like hibiscus.
Number of repeaters in a given grade in a given school year, expressed as a percentage of enrolment in that grade the previous school year.
Divide the number of repeaters in a given grade in school year t+1 by the number of pupils from the same cohort enrolled in the same grade in the previous school year t.
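In code, the calculation is a one-liner; here is a minimal sketch (names ours):

```python
def repetition_rate(repeaters_t_plus_1: int, enrolment_t: int) -> float:
    """Repeaters in a grade in year t+1, as a percentage of that grade's enrolment in year t."""
    return 100.0 * repeaters_t_plus_1 / enrolment_t

# Example: 45 pupils repeat grade 3 in year t+1; 900 were enrolled in grade 3 in year t.
print(repetition_rate(45, 900))  # 5.0 (percent)
```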
Enrolment by grade for school year t and number of repeaters from the same cohort by grade for year t+1.
School register, school survey or census for data on enrolment and repeaters by grade.
The repetition rate ideally should approach zero percent. A high repetition rate reveals problems in the internal efficiency of the educational system and possibly reflects a poor level of instruction. When compared across grades, the patterns can indicate the specific grades with higher repetition, hence requiring more in-depth study of causes and possible remedies.
In some cases, low repetition rates merely reflect policies or practices of automatic promotion. The level and maximum number of grade repetitions allowed can in some cases be determined by the educational authorities with the aim of coping with limited grade capacity and increasing the internal efficiency and flow of pupils (or students). Care should be taken in interpreting this indicator, especially in comparisons between education systems.
To measure the rate at which pupils from a cohort repeat a grade, and its effect on the internal efficiency of educational systems. In addition, it is one of the key indicators for analysing and projecting pupil flows from grade to grade within the educational cycle.
Like other pupil-flow rates (promotion and dropout rates), repetition rate is derived by analysing data on enrolment and repeaters by grade for two consecutive years. One should therefore ensure that such data are consistent in terms of coverage over time and across grades. Special attention should also be paid to minimizing some common errors which may bias these flow rates, such as: Over-reporting enrolment/repeaters (particularly in grade one); incorrect distinction between new entrants and repeaters; transfers of pupils between grades and schools.
By grade and by sex.
Global warming is the heating of the Earth's surface that occurs when greenhouse gases in the atmosphere trap radiation re-emitted by the surface. The effects of global warming are the ecological and social changes caused by the rise in global temperatures. Evidence of climate change includes the instrumental temperature record, rising sea levels, and decreased snow cover in the Northern Hemisphere. According to the Intergovernmental Panel on Climate Change (IPCC), most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in human greenhouse gas concentrations. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency of some extreme weather events. Parties to the United Nations Framework Convention on Climate Change (UNFCCC) have agreed to implement policies designed to reduce their emissions of greenhouse gases to avoid dangerous climate change.
Everyone can help conserve sandy beach ecosystems. Here are a few tips:
Coexist with kelp
Kelp wrack supports a variety of critters, promoting ecosystem health and maintaining the diversity of life on the beach. Support local beaches that avoid destructive grooming practices.
Avoid trampling vegetation and dunes
Stay on designated paths. Trampling vegetation on dunes can change the structure of dunes, alter erosion patterns, and affect the critters that use the dunes as habitat.
Leash your dog
If bringing a canine companion, check beforehand to see if dogs are allowed on that beach. Many beaches do not allow dogs, or allow them only during certain hours when leashed. Unless the beach specifically allows dogs to run free, all dogs should be leashed. Always clean up after your pet.
Share the beach with the birds
Follow these guidelines:
- Be alert to the presence of birds like snowy plovers and avoid disturbing them.
- Walk around flocks of birds to avoid disturbance.
- Respect signs designating restrictions to sensitive habitat areas.
- Stay out of fenced areas – the fences are there for a reason.
- Clean up after yourself – pick up and remove trash.
- Keep children/dogs from chasing the birds and do NOT feed wildlife.
- Some people find that if they walk slowly and avoid eye contact with shorebirds, the birds won't fly away!
Don’t drive on the beach
Studies have shown that normal beach recreation does not significantly disturb beach life but vehicles do. Conserve beach ecosystems by avoiding driving or off-roading on dunes and other beach habitats.
Climate change may impact different sandy beaches in different ways. Find out if there are any efforts to study the impacts of climate change on your local beach and consider supporting that research.
Contact your local government
State and local government planners play a major role in determining where various land uses are allowed. Support and encourage their efforts to conserve sandy beaches. Explore the section below to learn more about the steps planners may take to conserve sandy beaches in the face of climate change.
Pyrolytic carbon is man-made and is not thought to be found in nature. Generally it is produced by heating a hydrocarbon nearly to its decomposition temperature and permitting the graphite to crystallise (pyrolysis). One method is to heat synthetic fibers in a vacuum. Another method is to place seeds on a plate in the very hot gas to collect the graphite coating. It is used in high temperature applications such as missile nose cones, rocket motors, heat shields, laboratory furnaces, in graphite-reinforced plastic, and in biomedical prostheses.
Pyrolytic carbon samples usually have a single cleavage plane, similar to mica, because the graphene sheets crystallize in a planar order, as opposed to graphite, which forms microscopic randomly oriented zones. Because of this, pyrolytic carbon exhibits several unusual anisotropic properties. It is more thermally conductive along the cleavage plane than graphite, making it one of the best planar thermal conductors available.
Pyrolytic graphite forms mosaic crystals with controlled mosaicities up to a few degrees.
It is also more diamagnetic (χ = −4×10⁻⁴) against the cleavage plane, exhibiting the greatest diamagnetism (by weight) of any room-temperature diamagnet. By way of comparison, pyrolytic graphite has a relative permeability of 0.9996, whereas bismuth has a relative permeability of 0.9998.
Few materials can be made to magnetically levitate stably above the magnetic field from a permanent magnet. Although magnetic repulsion is obviously and easily achieved between any two magnets, the shape of the field causes the upper magnet to push off sideways, rather than remaining supported, rendering stable levitation impossible for magnetic objects (see Earnshaw's Theorem). Strongly diamagnetic materials, however, can levitate above powerful magnets.
With the easy availability of rare earth permanent magnets in recent years, the strong diamagnetism of pyrolytic carbon makes it a convenient demonstration material for this effect.
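The requirement for levitation can be estimated from the balance between the magnetic force per unit volume and gravity. The sketch below assumes χ ≈ −4×10⁻⁴ (the volume susceptibility against the cleavage plane quoted above) and a density of about 2200 kg/m³; both the use of this simple force balance and the density figure are our assumptions for illustration:

```python
import math

# Levitation needs (|chi| / mu0) * B * dB/dz >= rho * g,
# i.e. the field-gradient product B * dB/dz must exceed mu0 * rho * g / |chi|.
mu0 = 4e-7 * math.pi   # vacuum permeability, T*m/A
chi = 4e-4             # |volume susceptibility| against the cleavage plane
rho = 2200.0           # assumed density of pyrolytic graphite, kg/m^3
g = 9.81               # m/s^2

required = mu0 * rho * g / chi
print(f"B * dB/dz >= {required:.0f} T^2/m")  # ~68 T^2/m, within reach of NdFeB magnets
```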
In 2012, a research group in Japan demonstrated that pyrolytic carbon can respond to laser light or sufficiently powerful natural sunlight by spinning or moving in the direction of the field gradient. The carbon's magnetic susceptibility changes upon illumination, leading to an unbalanced magnetization of the material and thus a sideways force.
- It is used nonreinforced for missile nose cones, and ablative (boiloff-cooled) rocket motors.
- In fiber form, it is used to reinforce plastics and metals (see Carbon fiber and Graphite-reinforced plastic).
- Pebble bed nuclear reactors use a coating of pyrolytic carbon as a neutron moderator for the individual pebbles.
- Used to coat graphite cuvettes (tubes) in graphite furnace atomic absorption spectrometers to decrease heat stress, thus increasing cuvette lifetimes.
- Pyrolytic carbon is used for several applications in electronic thermal management: thermal interface material, heat spreaders (sheets) and heat sinks (fins)
- It is occasionally used to make tobacco pipes.
- It is used to fabricate grid structures in some high power vacuum tubes.
- It is used as a monochromator for neutron and X-ray scattering studies.
- Prosthetic heart valves
- Radial head prosthesis
- It is also used in automotive industries where a desired amount of friction is required between two components
- Highly oriented pyrolytic graphite (HOPG) is used as the dispersive element in HOPG spectrometers which are used for X-ray spectrometry.
Because blood clots do not easily form on it, it is often advisable to line a blood-contacting prosthesis with this material in order to reduce the risk of thrombosis. For example, it finds use in artificial hearts and artificial heart valves. Blood vessel stents, by contrast, are often lined with a polymer that has heparin as a pendant group, relying on drug action to prevent clotting. This is at least partly because of pyrolytic carbon's brittleness and the large amount of permanent deformation which a stent undergoes during expansion.
Pyrolytic carbon is also in medical use to coat anatomically correct orthopaedic implants, a.k.a. replacement joints. In this application it is currently marketed under the name "PyroCarbon". These implants have been approved by the U.S. Food and Drug Administration for use in the hand for metacarpophalangeal (knuckle) replacements. They are produced by two companies: Tornier (BioProfile) and Ascension Orthopedics. (On September 23, 2011, Integra LifeSciences acquired Ascension Orthopedics.) The FDA has also approved PyroCarbon interphalangeal joint replacements under the Humanitarian Device Exemption.
- Ratner, Buddy D. (2004). Pyrolytic carbon. In Biomaterials science: an introduction to materials in medicine. Academic Press. pp. 171–180. ISBN 0-12-582463-7. Google Book Search. Retrieved 7 July 2011.
- Phillip Broadwith (4 January 2013). "Laser guided maglev graphite air hockey". Chemistry World. RSC.
- Cook, Stephen D.; Beckenbaugh, Robert D.; Redondo, Jacqueline; Popich, Laura S.; Klawitter, Jerome J.; Linscheid, Ronald L. (1999). "Long-Term Follow-up of Pyrolytic Carbon Metacarpophalangeal Implants". The Journal of Bone and Joint Surgery. 81 (5): 635–48. PMID 10360692.
- "Ascension PIP: Summary of Safety and Probable Benefit HDE # H010005" (PDF). Food and Drug Administration. 22 March 2002. Retrieved 7 July 2011. |
Understanding the CPU
How Does the CPU Work?
Video - What Does Memory (RAM) Do?
A computer needs all of its components to work properly. For example, the CPU is responsible for executing a sequence of stored instructions called a program. The program takes input from an input device, processes it in some way, and outputs the results to an output device. Without the CPU, the computer would not be able to work.
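As an illustration of "executing a sequence of stored instructions", here is a toy fetch-decode-execute loop; the three-instruction set is invented for the example and is not how any real CPU is programmed:

```python
program = [
    ("LOAD", 5),      # put 5 in the accumulator
    ("ADD", 3),       # add 3 to it
    ("PRINT", None),  # send the result to the output device
]

accumulator = 0
pc = 0                        # program counter
while pc < len(program):
    op, arg = program[pc]     # fetch and decode the next instruction
    if op == "LOAD":
        accumulator = arg
    elif op == "ADD":
        accumulator += arg
    elif op == "PRINT":
        print(accumulator)    # prints 8
    pc += 1                   # move to the next instruction
```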
What is the motherboard?
A motherboard (sometimes alternatively known as the mainboard, system board, planar board or logic board, or colloquially, a mobo) is the main printed circuit board (PCB) found in computers and other expandable systems. It holds many of the crucial electronic components of the system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard contains significant sub-systems such as the processor and other components.
Molecules are made when two or more atoms join together chemically (via covalent, ionic, or metallic bonds). Examples of a molecule would be oxygen gas (O2), hydrogen gas (H2), and methane (CH4). A compound is a molecule that contains at least two different elements. Examples of compounds would be glucose (C6H12O6), hydrochloric acid (HCl), and water (H2O). Oxygen gas would not be an example because it contains only one type of atom, oxygen. Therefore, all compounds are molecules but not all molecules are compounds.
Thus, to answer your question:
Molecules can be made of atoms that are different or atoms that are alike.
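The distinction can even be checked mechanically by counting the distinct elements in a formula. A minimal sketch (our own helper, valid only for simple formulas without parentheses):

```python
import re

def classify(formula: str) -> str:
    """Return 'compound' if the formula contains two or more different elements."""
    elements = set(re.findall(r"[A-Z][a-z]?", formula))
    return "compound" if len(elements) > 1 else "molecule of one element"

for f in ("O2", "H2O", "CH4", "HCl"):
    print(f, "->", classify(f))
# O2 -> molecule of one element; the rest are compounds (and also molecules)
```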
The following link is to a YouTube video that gives an explanation of the difference between compounds and molecules. The video also provides practice in the identification of each.
This interactive visualization depicts sea surface temperatures (SST) and SST anomalies from 1885 to 2007. Learn all about SST and why SST data are highly valuable to ocean and atmospheric scientists. Understand the difference between what actual SST readings can reveal about local weather conditions and how variations from normal, called anomalies, can help scientists identify warming and cooling trends and make predictions about the effects of global climate change. Discover the relationships between SST and marine life, sea ice formation, local and global weather events, and sea level.
An interactive that illustrates the relationships between the axial tilt of the Earth, latitude, and temperature. Several data sets (including temperature, Sun-Earth distance, daylight hours) can be collected using this interactive.
The Climate Momentum Simulation allows users to quickly compare the resulting sea level rise, temperature change, atmospheric CO2, and global CO2 emissions from six different policy options projected out to 2100.
This well-designed experiment compares CO2 impacts on salt water and fresh water. In a short demonstration, students examine how distilled water (i.e., pure water without any dissolved ions or compounds) and seawater are affected differently by increasing carbon dioxide in the air.
This interactive animation focuses on the carbon cycle and includes embedded videos and captioned images to provide greater clarification and detail of the cycle than would be available by a single static visual alone.
These slide sets (one for the Eastern US and one for the Western US) describe how citizen observations can document the impact of climate change on plants and animals. They introduce the topic of phenology and data collection, the impact of climate change on phenology, and how individuals can become citizen scientists.
(Internet Protocol address) The address of a device attached to an IP network (TCP/IP network). Every client, server and network device is assigned an IP address, and every IP packet traversing an IP network contains a source IP address and a destination IP address.
Every IP address that is exposed to the public Internet is unique. In contrast, IP addresses within a local network use the same private addresses; thus, a user's computer in company A can have the same address as a user in company B and thousands of other companies. However, private IP addresses are not reachable from the outside world (see private IP address).
Logical Vs. Physical
An IP address is a logical address that is assigned by software residing in a server or router (see DHCP). In order to locate a device in the network, the logical IP address is converted to a physical address by a function within the TCP/IP protocol software (see ARP). The physical address is actually built into the hardware (see MAC address).
Static and Dynamic IP
Network infrastructure devices such as servers, routers and firewalls are typically assigned permanent "static" IP addresses. The client machines can also be assigned static IPs by a network administrator, but most often are automatically assigned temporary "dynamic" IP addresses via software that uses the "dynamic host configuration protocol" (see DHCP). Cable and DSL modems typically use dynamic IP with a new IP address assigned to the modem each time it is rebooted.
The Dotted Decimal Address: x.x.x.x
IP addresses are written in "dotted decimal" notation, which is four sets of numbers separated by decimal points; for example, 18.104.22.168. Instead of the domain name of a Web site, the actual IP address can be entered into the browser. However, the Domain Name System (DNS) exists so users can enter computerlanguage.com instead of an IP address, and the domain (the URL) computerlanguage.com is converted to the numeric IP address (see DNS).
Although the next version of the IP protocol offers essentially an unlimited number of unique IP addresses (see IPv6), the traditional IP addressing system (IPv4) uses a smaller 32-bit number that is split between the network and host (client, server, etc.). The host part can be further divided into subnetworks (see subnet mask).
Class A, B and C
Based on the split of the 32 bits, an IP address is either Class A, B or C, the most common of which is Class C. More than two million Class C addresses are assigned, quite often in large blocks to network access providers for use by their customers. The fewest are Class A networks, which are reserved for government agencies and huge companies.
Although people identify the class by the first number in the IP address (see table below), a computer identifies class by the first three bits of the IP address (A=0; B=10; C=110). This class system has also been greatly expanded, eliminating the huge disparity in the number of hosts that each class can accommodate (see CIDR). See private IP address and IP.
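The bit test is easy to express in code. This sketch (helper name ours) applies the classful rules described above to the first octet:

```python
def ipv4_class(address: str) -> str:
    """Classify a dotted-decimal IPv4 address by the leading bits of its first octet."""
    first = int(address.split(".")[0])
    if first == 127:
        return "loopback (reserved)"
    if first < 128:          # leading bit 0
        return "A"
    if first < 192:          # leading bits 10
        return "B"
    if first < 224:          # leading bits 110
        return "C"
    return "D/E (multicast or experimental)"

print(ipv4_class("18.104.22.168"))  # A
```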
NETWORKS VERSUS HOSTS IN IPV4 IP ADDRESSES

Class   Class Number   Maximum Number   Maximum Hosts   Bits Used in
        Range          of Networks      per Network     Network ID/Host ID
A       1-126*         127              16,777,214      7/24
B       128-191        16,383           65,534          14/16
C       192-223        2,097,151        254             21/8

* 127 is reserved for the loopback test.
An IP address is first divided between networks and hosts. The host bits are further divided between subnets and hosts.
Mars' valleys and volcanoes
Pictures sent back by spacecraft such as ESA's Mars Express show that Mars has many large impact craters. Most of these are to the south of the equator. They seem to have been made by meteorites crashing onto the surface billions of years ago. The largest crater is about 1800 km across – large enough to swallow half of Europe.
However, the surface of Mars has changed during its lifetime. One obvious feature is the huge system of valleys – the Valles Marineris - near the planet's equator. About 5000 km long, they would stretch all the way from Paris to New York.
The valleys seem to have been formed by cracking in the planet's surface, when the rocky crust stretched and pulled apart. The valleys are now so wide that a person standing on one edge would not be able to see the other side.
Not far to the west are five enormous volcanoes. Most impressive of all is Olympus Mons, the largest volcano in the Solar System. It is wider than England and three times higher than Mount Everest, the highest mountain on Earth. None of these volcanoes are active at the present time.
A significant implication is that it is impossible to ever directly observe a black hole from outside the event horizon.* That's right, you will never see a black hole. So, how does one observe a black hole? Thankfully, due to the nature of black holes, scientists have developed several effective methods of observing black holes indirectly, essentially deducing their presence and properties by the black hole's effect on other, observable phenomena. Some of these techniques include:
- Gravitational lensing—Strong gravitational fields will cause light (including all electromagnetic radiation) to curve as it passes by a massive object. In a way, the light trapped within a black hole's event horizon is simply light whose path has been curved by gravity to such an extreme that it bends completely back around on itself inside the event horizon. Light passing a great distance from the black hole will be unaffected. But light passing near the black hole will indeed take a curved path, and this curving can be detected by observing objects (such as stars or galaxies) that are out of their expected positions or by noticing multiple images of one object (known as a gravitational mirage).
- Accretion discs—Black holes will attract gas particles, which then form a disc that spirals into the black hole and can be observed directly. Although many cosmic bodies can create an accretion disc, the presence of an accretion disc with no observable central object is indicative of a black hole.
- X-rays—As gas is drawn into the black hole from the accretion disc, it will superheat, releasing energy, in particular X-rays, which can be detected. In fact, this is one of the most energy-efficient processes ever observed, transforming up to 40% of the matter to energy. This process occurs just outside the event horizon, enabling the X-rays to escape the black hole's gravitational field. In many cases, the X-rays will be released from electromagnetic poles perpendicular to the accretion disc via relativistic jets (often shown on diagrams of black holes, such as those below).
- Black hole binary star systems—In some binary star systems (where two stars orbit around a common point), one of the stars is a black hole which cannot be seen, but its gravitational effect on the companion star can be detected. A famous example is Cygnus X-1.
- Gamma ray bursts—These are short bursts of high energy radiation which occur when a large star collapses into a black hole after a supernova, or when two black holes or a black hole and a star collide to form a bigger black hole.
- Quasars—Quasars are supermassive black holes at the centers of young galaxies that emit a high volume of X-rays from a large accretion disc.
- Gravitational waves—Fluctuations or distortions in space-time are caused by the movements of certain massive objects, including black holes, and those distortions then ripple out from the object. Although the techniques are still experimental, scientists hope to eventually detect black holes by detecting gravitational waves.
So, what do black holes have to do with poker? Well, other than the obvious analogy that poker seems to suck all of the interpersonal skills and human decency out of the souls of some players ...
Black holes came to mind during my last live poker session because of the concept of indirect detection. I sat down at a 2/5 NLHE table, and there were a few regulars as well as a few players I did not know (which is rather unusual for the Meadows ATM). Two of the regulars are players whose games I respect. One is rather loose preflop, the other is rather tight, but both players are solid postflop players, aggressive when possible, cautious when necessary, but rarely putting chips into the pot without a good reason. Early on, I was fairly card dead, so I wasn't playing a lot of hands. However, by watching the two players I did know, I got a feel for the table. Two of the newbies could be bullied. One was a calling station. Most importantly, there was one newbie with a bigger stack who was given respect by the players I knew. When this newbie played a hand, the regulars showed respect to his bets and raises, and never made moves on him. Clearly, then, two players who I respected felt that this newbie was a solid player and stayed out of his way.
So, about an hour into the session, I found AQ on the button. Limped to me, I made a standard raise, solid new guy was the only caller in early position. Flop was A-K-Q rainbow. Yahtzee! New guy checked, I bet 1/2 pot, new guy check-raised for 3x my bet, his standard raise. Now, there are some players who check-raise that flop with any two cards, hoping I have a pocket pair under the board. But given that the regulars respected him, I was worried about the check-raise. A hand that limp-called preflop out of position, then check-raised the flop could easily be something like AK or QQ, possibly JTs. There weren't many hands a tight player would play that way that I could beat. I finally laid it down, deciding there were softer spots at the table. Though I rarely do it, I mucked face up, and the newbie smiled and obligingly showed AK.
So, even if you don't have personal history with a player, you can still get a read on him from observing how he interacts with other players, and how other players react to him. Also, a cool way to deal with table d-bags is to throw them into a black hole.
* It's also pragmatically impossible to observe a black hole from inside the event horizon, even setting aside the impossibility of ever sending a signal from inside the event horizon to anyone outside it to describe any observations that might be made. As an observer crossed the event horizon, he would be destroyed as gravitational tidal forces caused spaghettification—essentially, if falling feet first, his feet would accelerate faster than his head due to stronger gravitational forces, causing his body to be stretched and eventually ripped apart. However, all the observer's matter would eventually join the singularity, which would be a rather cool way to go, if you ever get to pick the way you go.
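For the curious, the footnote's claim can be roughed out with the Newtonian tidal formula Δa ≈ 2GMh/r³ for a body of height h at distance r. This back-of-the-envelope sketch (Newtonian only; general relativity is ignored) compares a stellar-mass black hole with a supermassive one at their event horizons:

```python
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def tidal_at_horizon(mass_in_suns: float, height: float = 2.0) -> float:
    """Head-to-feet acceleration difference (m/s^2) at the Schwarzschild radius."""
    M = mass_in_suns * M_SUN
    r_s = 2 * G * M / c**2           # Schwarzschild radius
    return 2 * G * M * height / r_s**3

print(f"{tidal_at_horizon(10):.1e}")    # ~2e8 m/s^2: spaghettified before the horizon
print(f"{tidal_at_horizon(4e6):.1e}")   # ~1e-3 m/s^2: you'd cross the horizon intact
```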
A behavioral trait is an action commonly observed in individuals throughout a species, such as human beings laughing and smiling or cats grooming themselves. In animals, such traits are generally ascribed to instinct, though they can often be modified. In humans, behavioral traits are often learned rather than instinctive.
Behavioral traits are at the heart of the nature versus nurture controversy debating which human behaviors are inborn and which are learned, according to Scitable. It was thought for a long time, for example, that addicts were weak-willed, but more recent science has shown that addicts are often genetically predisposed to addictive behaviors. Today scientists generally agree that human behaviors are made up of complex interactions between socially learned behaviors and inherited behavioral traits.
The human ability to modify behavioral traits through learning has proven to be evolutionarily advantageous. Humans were able to leave the warm climate of Africa and move into Europe, for instance, because they figured out how to clothe themselves and hunt new animals. Tool-making appears to be an instinctive behavioral trait among primates, but teaching the young how to make and use tools is a social behavior. Other instinctive human behavioral traits, such as the fight-or-flight response, can be modified in some people but appear to be relatively hard-wired in most.
In linear control, for mathematical simplicity, it is typically assumed that noise is additive. Typical dynamics look like

$$x_{t+1} = A x_t + B u_t + w_t,$$

where $x_t$, $u_t$, and $w_t$ are the state, input, and noise, respectively. In particular, note that the noise does not multiply the state or the input.
For human and animal movements, however, the noise is not additive. The inputs to muscles are carried by motor neurons. As a motor neuron's firing rate increases, it becomes more variable. In other words, larger inputs lead to more noise.
Linear dynamics with signal-dependent noise, as just described, can be modeled (in one common formulation) as

$$x_{t+1} = A x_t + B\,(1 + \varepsilon_t)\,u_t + w_t,$$

where $\varepsilon_t$ is zero-mean noise, so the noise entering through the input grows in proportion to the size of the input $u_t$.
While the math gets more complicated when the noise multiplies the input, such models can predict numerous qualitative features of natural movements. For example, according to Fitts’s law, higher precision movements require more time. Under additive noise, fast movements will be more precise than slow movements because less noise is injected into the system. Under the signal-dependent noise model, the large inputs associated with fast movements will lead to large noises, and hence imprecise movements. Thus, the two models lead to opposite predictions, with the signal-dependent noise model being more consistent with the data.
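This difference in predictions is easy to demonstrate in simulation. The sketch below (all parameter values are arbitrary choices of ours) drives a point to a target of 1.0 in a given number of steps and measures the endpoint spread under each noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

def endpoint_sd(steps: int, signal_dependent: bool, trials: int = 5000) -> float:
    """Std. dev. of the endpoint after `steps` equal control inputs summing to 1.0."""
    u = 1.0 / steps                               # faster movement -> larger input per step
    sd = 0.2 * u if signal_dependent else 0.02    # noise scales with u, or is fixed
    noise = sd * rng.standard_normal((trials, steps))
    return (u + noise).sum(axis=1).std()

for steps in (5, 50):   # fast vs. slow movement
    print(steps, endpoint_sd(steps, True), endpoint_sd(steps, False))
# Signal-dependent noise: the slow movement is more precise (Fitts-like).
# Additive noise: the fast movement is more precise -- the opposite prediction.
```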
Harris CM & Wolpert DM (1998) Signal-dependent noise determines motor planning. Nature 394: 780–784.
From Statistics Explained
Traditionally, typologies of territory were determined by population size and density of local administrative units at level 2 (LAU level 2), such as communes, municipalities or local authorities. The new typologies that are described here use a population grid, which is a more accurate basis to characterise areas and regions. This article provides a short overview of the typologies, including definitions, terminology and some basic statistical data.
These typologies start by classifying grid cells of 1 km² to a typology of clusters according to their similarities in terms of population size and density: each grid cell is classified to one type of cluster only. Areas (LAU level 2) or regions (NUTS level 3) can then be classified to area or regional typologies based on the population share in different types of clusters: again, each LAU level 2 area or NUTS level 3 region is classified to one type only. In each of these various typologies (of clusters, areas or regions) the whole geographical territory of the European Union (EU) is covered without any overlaps or omission.
The area typology applied to LAU level 2 is primarily used in surveys such as the labour force survey (LFS) and the survey on income and living conditions (SILC); the regional typology applied to the NUTS level 3 regions is mainly used to monitor rural development.
Typologies
The typology of clusters classifies 1 km² grid cells (and clusters thereof), splitting them into three types. The criteria used are the population density in the individual grid cells and the combined population level of clusters, where clusters are made up of contiguous cells (in other words, neighbouring or adjoining cells); see later for a more detailed explanation of contiguous cells and the so-called gap-filling technique used for high-density clusters. The three types of grid cells or clusters in the typology are the following.
- High-density clusters/city centres/urban centres: clusters of contiguous grid cells of 1 km² with a density of at least 1 500 inhabitants per km² and a minimum population of 50 000 after gap-filling.
- Urban clusters: clusters of contiguous grid cells of 1 km² with a density of at least 300 inhabitants per km² and a minimum population of 5 000.
- Rural grid cells: grid cells outside high-density clusters and urban clusters.
Contiguous cells and filling gaps in the cluster typology
To determine population size, the grid cells need to be grouped in clusters. The methods presented here use three different rules for contiguity to create clusters. These three rules are explained below.
- Contiguous including diagonals — used for urban clusters. If the central square (grid cell) in Figure 1 is above the density threshold, it will be grouped with each of the eight surrounding grid cells that exceed the density threshold.
- Contiguous excluding diagonals — used for high-density clusters. If the central square in Figure 1 is above the density threshold, it will be grouped with each of the four cells directly above, below or next to it that also exceed the density threshold. This means that cells numbered 2, 4, 5 and 7 can be included in the same cluster; cells numbered 1, 3, 6 and 8 cannot, as they have only a diagonal connection.
- The majority rule or gap-filling — used for high-density clusters
The goal for the high-density clusters is to identify urban centres without any gaps. Therefore, enclaves need to be filled. If the central square in Figure 1 is not, in its own right, part of a high-density cluster, it will be added to a high-density cluster if five or more of the eight surrounding cells (therefore including diagonals) belong to a single high-density cluster. This rule is applied iteratively until no more cells can be added (see the sketch below).
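A minimal, hypothetical implementation of this gap-filling iteration on a boolean grid (a single cluster only; the real procedure also tracks cluster identities and population totals):

```python
import numpy as np

def fill_gaps(in_cluster: np.ndarray) -> np.ndarray:
    """Add cells to a high-density cluster when >= 5 of their 8 neighbours belong.
    `in_cluster` is a 2-D boolean array; iterate until nothing more can be added."""
    grid = in_cluster.copy()
    while True:
        padded = np.pad(grid, 1, constant_values=False)
        h, w = grid.shape
        # Count, for every cell, how many of its eight neighbours are in the cluster.
        neighbours = sum(
            padded[1 + dr : 1 + dr + h, 1 + dc : 1 + dc + w]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
        )
        to_add = ~grid & (neighbours >= 5)
        if not to_add.any():
            return grid
        grid |= to_add
```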
Degree of urbanisation typology for LAU level 2 areas — an area typology
Depending on the share of the population living in the different types of cluster, LAU level 2 areas are classified into three degrees of urbanisation; a classification sketch follows the list below.
- Densely-populated areas/cities/large urban areas: at least 50 % of the population lives in high-density clusters.
- Intermediate density areas/towns and suburbs/small urban areas: less than 50 % of the population lives in rural grid cells and less than 50 % lives in high-density clusters.
- Thinly-populated areas/rural areas: more than 50 % of the population lives in rural grid cells.
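A sketch of this classification rule as a function (names and the handling of exact 50 % boundaries are our own choices):

```python
def degree_of_urbanisation(pct_high_density: float, pct_rural: float) -> str:
    """Classify a LAU level 2 area from the % of population in high-density
    clusters and in rural grid cells."""
    if pct_high_density >= 50:
        return "densely-populated area (city)"
    if pct_rural > 50:
        return "thinly-populated area (rural)"
    return "intermediate density area (town/suburb)"

print(degree_of_urbanisation(62, 8))   # densely-populated area (city)
print(degree_of_urbanisation(10, 70))  # thinly-populated area (rural)
```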
Urban-rural typology for NUTS level 3 regions — a regional typology
Depending on the share of the rural population (in other words, the share of the population living in rural grid cells), the NUTS level 3 regions are classified into the following three groups.
- Predominantly urban regions/urban regions: the rural population is less than 20 % of the total population.
- Intermediate regions: the rural population is between 20 % and 50 % of the total population.
- Predominantly rural regions/rural regions: the rural population is 50 % or more of the total population.
In a last step, the size of the cities in the region is considered; a combined classification sketch follows the list below.
- A region classified as predominantly rural by the criteria above becomes intermediate if it contains a city of more than 200 000 inhabitants representing at least 25 % of the regional population.
- A region classified as intermediate by the criteria above becomes predominantly urban if it contains a city of more than 500 000 inhabitants representing at least 25 % of the regional population.
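Combining the population-share thresholds with the city-size adjustment, a sketch of the regional rule (function and argument names ours; the adjustment is applied as a single reclassification step):

```python
def urban_rural_region(pct_rural: float, city_pop: int, city_pct: float) -> str:
    """NUTS level 3 typology from the % rural population, the largest city's
    population, and that city's share (%) of the regional population."""
    if pct_rural < 20:
        kind = "predominantly urban"
    elif pct_rural < 50:
        kind = "intermediate"
    else:
        kind = "predominantly rural"
    # City-size adjustment
    if kind == "predominantly rural" and city_pop > 200_000 and city_pct >= 25:
        kind = "intermediate"
    elif kind == "intermediate" and city_pop > 500_000 and city_pct >= 25:
        kind = "predominantly urban"
    return kind

print(urban_rural_region(55, 250_000, 30))  # intermediate (reclassified from rural)
```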
Summary table: names and alternative names
The names of typologies and items may differ according to context, users or means of dissemination. Table 1 gives a summary of the vocabulary used as well as the geographical scale.
Main statistical findings
Although these typologies show similar patterns, the use of different typologies may produce rather different figures. Thus, as Table 2 shows, around 33 % of the EU-27 population lived in rural grid cells, 28 % in thinly populated areas and 23 % in predominantly rural regions.
Moreover, the variability between the figures is more pronounced at the national level than for the EU as a whole. As Table 2 illustrates, 35 % of the Bulgarian population lived in high-density clusters, 43 % in densely populated areas and 16 % in predominantly urban regions.
The data produced using these different typologies present a broader range in terms of surface area than in terms of the population. As Table 3 shows, 3 % of the EU-27’s land area was covered by urban clusters, 13 % by intermediate density areas and 39 % by intermediate regions. Again, there is greater variability at the national level than for the EU as a whole, as Table 3 clearly shows.
Data sources and availability
These typologies classify different territories, defined at different geographical scales, namely grid cells, LAU 2 areas or NUTS level 3 regions. However, the analysis of the statistical data using these typologies may be disseminated at a higher geographical level. Hence, the proportion of EU-27 land area classified as composed of intermediate regions is an indicator for the EU based on a regional typology. A similar indicator could also be disseminated at national, NUTS level 1, NUTS level 2 and NUTS level 3 levels. However, in some cases statistical data using these typologies can only be calculated and disseminated for the EU as a whole or at the national level. This is mainly to do with representativeness, confidentiality and reliability of the indicator. Some surveys, for example SILC, can provide reliable statistics by degree of urbanisation for thinly populated areas at the national level, but not at NUTS level 3.
The European Commission has introduced typologies based on population size and density to monitor situations and trends in urban and rural areas and regions. The Treaty on European Union (also called the Treaty of Maastricht) specifically mentions that particular attention should be paid to rural areas and rural regions.
The Lisbon Treaty included territorial cohesion alongside economic and social cohesion as an objective for the EU. This new concept was presented in a ‘Green Paper on territorial cohesion — Turning territorial diversity into strength’ (COM(2008) 616) and the debate has been summarised in the ‘Sixth progress report on economic and social cohesion’ in 2009. The report ‘Investing in Europe's future — Fifth cohesion report on economic, social and territorial cohesion’ explains the main issues related to territorial cohesion and how these could be transposed into policy proposals. One of the main issues related to territorial cohesion is the need for data on different territorial levels, particularly for lower geographical levels. The classification of the degree of urbanisation provides a unique insight into trends at the local level, and highlights the differences between urban and rural areas.
- All articles on regions and cities
- Territorial typologies for European cities and metropolitan regions (background article)
- Urban-rural typology (background article)
- Urban-rural typology update (background article)
Further Eurostat information
- City statistics - Urban Audit
Methodology / Metadata
- A file with all the classifications can be found here.
Source data for tables and graphs on this page (MS Excel)
- A system of urban-rural typologies - overview poster published by DG Regional policy
- OECD regional typology (pdf file, 1.93 Mb)
- In addition, each high-density cluster should have at least 75 % of its population in densely-populated LAU level 2 areas. This also ensures that all high-density clusters are represented by at least one densely-populated LAU level 2 area, even when this high-density cluster represents less than 50 % of the population of that LAU level 2 area.
Histomoniasis is caused by a protozoan that infects the ceca, and later the liver, of turkeys, chickens, and occasionally other galliform birds. In turkeys, most infections are fatal, whereas in other galliforms susceptibility varies between species and breeds.
The causative agent of histomoniasis is the anaerobic, single cell protozoan parasite Histomonas meleagridis that can exist in flagellated (8–15 μm in diameter) and amoeboid (8–30 μm in diameter) forms. Histomonas is most often transmitted in embryonated eggs of the cecal nematode Heterakis gallinarum. A large percentage of chickens and other gallinaceous birds harbor this worm, which serves as a reservoir. Three species of earthworms can act as vectors for H gallinarum larvae containing H meleagridis, which are infective to both chickens and turkeys. H meleagridis survives for long periods within Heterakis eggs, which are resistant and may remain viable in the soil for years. Histomonads are released from Heterakis larvae in the ceca a few days after entry of the nematode and replicate rapidly in the ceca. The parasites migrate into the submucosa and muscularis mucosae and cause extensive and severe necrosis. Histomonads reach the liver either by the vascular system or via the peritoneal cavity, and rounded necrotic lesions quickly appear on the liver surface. Histomonads interact with other gut organisms, such as bacteria and coccidia, and depend on these for full virulence. In turkeys, transmission is by direct cloacal contact with infected birds or via fresh droppings, resulting in histomoniasis quickly spreading throughout the flock. Infection has not been shown to spread in this manner in chickens.
Traditionally, histomoniasis has been thought of as affecting turkeys, while doing little damage to chickens. However, outbreaks in chickens may cause high morbidity, moderate mortality, and extensive culling. Liver lesions tend to be less severe in chickens but often involve secondary bacterial infections. Morbidity can be especially high in young layer or breeder pullets. Layer flocks recover but lack uniformity. Experimental infections with Histomonas of 16-wk-old layers have demonstrated reduced egg production during infection. Tissue responses to infection may resolve in 4 wk, but birds may be carriers for another 6 wk.
Signs of histomoniasis are apparent in turkeys 7–12 days after infection and include listlessness, reduced appetite, drooping wings, unkempt feathers, and yellow droppings in the later stages of the disease. The origin of the name “blackhead” is obscure and misleading, with only a few birds displaying a cyanotic head. Young birds have a more acute disease and die within a few days after signs appear. Older birds may be sick for some time and become emaciated before death.
The primary lesions of histomoniasis are in the ceca, which exhibit marked inflammatory changes and ulcerations, causing a thickening of the cecal wall. Occasionally, these ulcers erode the cecal wall, leading to peritonitis and involvement of other organs. The ceca contain a yellowish green, caseous exudate or, in later stages, a dry, cheesy core. Liver lesions are highly variable in appearance; in turkeys, they may be up to 4 cm in diameter and involve the entire organ. In some cases, the liver will appear green or tan. The liver and cecal lesions together are pathognomonic. However, the liver lesions must be differentiated from those of tuberculosis, leukosis, avian trichomonosis, and mycosis. Lesions are also seen in other organs, such as the kidneys, bursa of Fabricius, spleen, and pancreas. Studies by PCR show that Histomonas DNA can be found in the blood and in the tissues of most organs, whether lesions are present or not. Histopathologic examination is helpful for differentiation of diseases.
Histomonads are intercellular, although they may be so closely packed as to appear intracellular. The nuclei are much smaller than those of the host cells, and the cytoplasm less vacuolated. Scrapings from the liver lesions or ceca may be placed in isotonic saline solution for direct microscopic examination; Histomonas spp must be differentiated from other cecal flagellates. Molecular diagnosis is possible with published PCR primers.
Prevention and Treatment
Because healthy chickens and gamebirds often carry the cecal worm vector, any contact between turkeys and other galliforms should be avoided and care should be taken to reduce the worm population. Worm eggs, from contaminated soil, can be tracked inside by workers, causing infection. Arthropods such as flies may also serve as mechanical vectors. Because H gallinarum ova can survive in soil for many months or years, turkeys should not be put on ground contaminated by chickens. Once established in a turkey flock, infection spreads rapidly without a vector through direct contact. Dividing a facility into subunits using barriers can contain the outbreaks to specific units. Histomonads that are shed directly into the environment die quickly. Thus, in a turkey facility, where Heterakis is unable to complete its life cycle, decontamination is not required.
Immunization has only been partially successful in controlling histomoniasis, and reports differ on its effectiveness. The immune response of turkeys to live attenuated Histomonas requires 4 wk to develop. Vaccination of 18-wk-old pullets 5 wk before experimental infection has been shown to prevent a drop in egg production. Most workers have concluded that immunization of birds against this disease using live cultures is not practical. Killed organisms stimulate some immunity when given SC or IP but do not offer protection.
No drugs are currently approved for use as treatments for histomoniasis. Nitarsone is available for prophylaxis by feed medication. Nitarsone is mixed with the feed at 0.01875% and fed continuously. A 5-day withdrawal period is required for animals slaughtered for human consumption. Under most conditions, nitarsone is effective, although some outbreaks in turkeys on medication have been reported. Historically, nitroimidazoles such as ronidazole, ipronidazole, and dimetridazole were used for prevention and treatment and were highly effective. Some of these products can be used by veterinary prescription in non-food-producing birds. Frequent worming of chickens with benzimidazole anthelmintics helps reduce exposure to heterakid worms that carry the infection.
Last full review/revision July 2014 by Robert B. Beckstead, PhD
Food allergies are immune system reactions that develop quickly after consuming a specific food. Even a trace amount of an allergy-causing food can trigger symptoms such as stomach issues, rashes, or swollen airways. In some people, a food allergy can induce severe symptoms or even a life-threatening response known as anaphylaxis.
According to the Centers for Disease Control and Prevention, food allergies affect 4% to 6% of children and 4% of adults. Over 50 million Americans are allergic to something. You’ve undoubtedly met or are one of those individuals.
The immune system of the body keeps you healthy by battling infections and other challenges to your health. A food allergy reaction happens when your immune system overreacts to a food or an ingredient in a food, perceiving it as a threat and activating a protective response.
While allergies seem to run in families, it is difficult to know whether a kid will inherit a parent’s food allergy or whether siblings will be affected in the same way. A food allergy is easily confused with a much more frequent reaction known as food intolerance. Food intolerance, while bothersome, is a less dangerous condition that does not affect the immune system.
Symptoms Of Food Allergy
According to Johns Hopkins Medicine, symptoms can range from moderate to severe, and each person is affected differently. Not everyone will experience all of the possible symptoms, and each reaction will be unique. However, common signs and symptoms include:
- Tingling in the mouth
- A burning feeling in the lips and mouth
- Nausea, vomiting, or diarrhea
- Facial swelling
- A hive-like skin rash
- A runny nose
- Watery eyes
Symptoms Of Anaphylaxis
Anaphylaxis is the most severe allergic reaction, a potentially fatal whole-body allergic response that can compromise your breathing, produce a sudden drop in your blood pressure, and change your heart rate. Anaphylaxis can occur within minutes of being exposed to the trigger food. It is potentially deadly and must be treated immediately with an epinephrine injection (adrenaline).
The signs and symptoms usually appear fast and worsen. They may include:
- A rapid drop in blood pressure
- Fear or apprehension
- An itchy, tickly throat
- A fast heartbeat, known as tachycardia
- Loss of consciousness
- Severe swelling of the throat, lips, face, and mouth
- Respiratory issues such as wheezing or shortness of breath, which often get worse over time
- Itchy skin or a rash that spreads quickly and covers much of the body
Common Food Allergies
According to the FDA, while any food can induce an allergic reaction, eight foods account for almost 90% of all reactions:
- Eggs:
One of the most common causes of food allergy in children is an egg allergy. It is common to be allergic to egg whites but not yolks, or vice versa. However, because the majority of the proteins that cause an allergy are present in egg whites, an egg white allergy is more common.
- Cow’s Milk:
It is one of the most prevalent childhood allergies, affecting approximately 2-3% of infants and toddlers. However, 90% of children outgrow the allergy by the age of three, making it far less common in adulthood.
Cow’s milk allergies can be IgE or non-IgE, although IgE cow milk allergies are the most frequent and potentially the most severe. Children and adults with IgE allergies usually experience a reaction within 5-30 minutes of consuming cow’s milk.
- Shellfish:
A shellfish allergy results from your body targeting proteins present in crustaceans and molluscs. Examples of shellfish include shrimp and prawns.
- Fish Allergies:
Fish allergies are frequent, affecting up to 7% of individuals. A fish allergy, like a shellfish allergy, can induce a serious and potentially fatal allergic reaction. The most common symptoms are vomiting and diarrhea; however, anaphylaxis can develop in rare cases.
This means that people who are allergic to fish are frequently given an epinephrine auto-injector to carry with them in case they eat fish by accident.
- Soy:
Proteins in soybeans and soybean products cause a soy allergy. If you have a soy allergy, the only treatment is to avoid eating soy.
- Tree Nuts:
A tree nut allergy is a reaction to some of the nuts and seeds found on trees. People who are sensitive to tree nuts will also be allergic to food products containing these nuts, such as nut butter and oils. They should avoid all sorts of tree nuts, even if they are only allergic to one or two.
- Peanuts:
Peanut allergies, like tree nut allergies, are extremely common and can result in severe and potentially deadly allergic reactions. However, because the peanut is a legume, the two conditions are regarded as separate.
- Wheat:
Wheat allergies are caused by an allergic reaction to one of the proteins present in wheat. They primarily affect children, who usually outgrow the allergy by the age of ten.
Wheat allergies, like other allergies, can cause digestive discomfort, hives, nausea, rashes, swelling, and, in severe cases, anaphylaxis. A wheat allergy is frequently mistaken for celiac disease and non-celiac gluten sensitivity, both of which can cause similar stomach symptoms.
What You Can Do About Food Allergies?
Once a food allergy has developed, the best method to avoid an allergic reaction is to get familiar with and avoid foods that trigger signs and symptoms. For some, this is just an irritation, while for others, it is a major problem. Moreover, some foods may be well disguised when used as ingredients in particular cuisines. This is especially true in restaurants and other public places.
If You Know You Have A Food Allergy, take the following steps:
- Understand what you’re drinking and eating. Always read food labels thoroughly.
- If you’ve already had a severe attack, wear a medical alert bracelet or necklace that alerts others to your food allergy in case you have an attack and are unable to communicate.
- Consult your doctor about getting emergency epinephrine. Carrying an epinephrine autoinjector (Adrenaclick, EpiPen) may be necessary if you are at risk of having a severe allergic reaction.
- Be cautious at restaurants. Ensure that your waiter or chef is aware that you cannot eat the food to which you are allergic and that the dish you order does not contain it. Also, make certain that food is not prepared on surfaces or in pans that have previously included any of the foods to which you are allergic.
- Don’t be afraid to express your needs. When restaurant workers properly understand your request, they are usually willing to help.
Avoiding the foods that trigger your signs and symptoms is the only way to prevent an allergic reaction. Despite your best efforts, however, you may still be exposed to a food that triggers one.
For A Mild Allergic Reaction:
Antihistamines, either prescribed or over-the-counter, may help alleviate symptoms. These medications can be taken after being exposed to an allergen-causing food to help reduce itching and hives.
For A Severe Allergic Reaction:
You may require an emergency epinephrine injection and a visit to the emergency department. Many allergy sufferers keep an epinephrine autoinjector (Adrenaclick, EpiPen) on hand. When pressed against your thigh, this device injects a single dose of medication.
Testing For Food Allergy
A food allergy usually results in some form of reaction every time the trigger food is consumed. Symptoms differ from person to person, and you may not always have the same symptoms after each reaction.
While food allergies can occur at any age, they are most common in childhood. If you suspect you have a food allergy, consult an allergist, who will examine your family and medical history, determine which tests to run (if any), and use this information to determine whether or not you have a food allergy.
Allergists ask comprehensive questions about your medical history and symptoms to make a diagnosis. Be prepared to answer questions about your diet, including how much of the suspect food you ate, as well as:
- How soon did the symptoms start to show up?
- What symptoms did you have, and how long did they last?
After reviewing your medical history, an allergist may recommend a blood test or a skin prick test.
A Blood Test:
A blood test can assess your immune system's response to specific foods by measuring immunoglobulin E (IgE), an allergy-related antibody.
For this test, a blood sample taken in your doctor's office is sent to a medical laboratory, where it can be tested against different foods.
A Skin Prick Test:
A skin prick test can evaluate how you react to a specific food. A small amount of the suspect food is applied to the skin of your forearm or back. A doctor or other health care professional then pricks your skin with a needle to allow a tiny amount of the substance to penetrate below your skin's surface. If you are allergic to the substance being tested, you develop a raised bump or other reaction.
The findings of these tests will be used by your allergist to make a diagnosis.
Walk-In Lab provides food allergy testing services for various foods. If you suspect that you or any of your family members have an allergy to some food, you can consider booking an appointment with Walk-In Lab. An early diagnosis can help you and your doctor make a better management plan for you. So, book your appointment with Walk-In Lab today.
- Food Allergy. Retrieved from aaaai.org: https://www.aaaai.org/Conditions-Treatments/Allergies/Food-Allergy
- Food Allergies. Retrieved from cdc.gov: https://www.cdc.gov/healthyschools/foodallergies
- Food Allergies. Retrieved from fda.gov: https://www.fda.gov/food/food-labeling-nutrition/food-allergies
- What Is Food Allergy? Retrieved from hopkinsmedicine.org: https://www.hopkinsmedicine.org/health/conditions-and-diseases/food-allergies
- Food Allergy Tests. Retrieved from walkinlab.com: https://www.walkinlab.com/categories/view/allergy-tests/food |
7 ways Einstein changed the world
There are many ways Einstein changed the world, and his ideas have shaped the way we see and interact with the universe.
Albert Einstein (1879-1955) is one of the most famous scientists of all time, and his name has become almost synonymous with the word "genius." While his reputation owes something to his eccentric appearance and occasional pronouncements on philosophy, world politics and other non-scientific topics, his real claim to fame comes from his contributions to modern physics, which have changed our entire perception of the universe and helped shape the world we live in today.
Here's a look at some of the world-changing concepts we owe to Einstein.
1. Special relativity

One of Einstein's earliest achievements, at the age of 26, was his theory of special relativity — so-called because it deals with relative motion in the special case where gravitational forces are neglected. This may sound innocuous, but it was one of the greatest scientific revolutions in history, completely changing the way physicists think about space and time. In effect, Einstein merged these into a single space-time continuum. One reason we think of space and time as being completely separate is because we measure them in different units, such as miles and seconds, respectively. But Einstein showed how they are actually interchangeable, linked to each other through the speed of light — approximately 186,000 miles per second (300,000 kilometers per second).
Perhaps the most famous consequence of special relativity is that nothing can travel faster than light. But it also means that things start to behave very oddly as the speed of light is approached. If you could see a spaceship that was traveling at 80% the speed of light, it would look 40% shorter than it does at rest. And if you could see inside, everything would appear to move in slow motion, with a clock taking 100 seconds to tick through a minute, according to Georgia State University's HyperPhysics website. This means the spaceship's crew would actually age more slowly the faster they are traveling.
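Those two percentages follow directly from the Lorentz factor; here is the arithmetic behind the example above (standard special-relativity formulas applied to the 80%-of-light-speed figure):

$$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.8^2}} = \frac{1}{0.6} \approx 1.67$$

The ship's length contracts to $L = L_0/\gamma = 0.6\,L_0$, which is 40% shorter, and one minute of onboard time stretches to $\Delta t = \gamma\,\Delta t_0 = 1.67 \times 60\ \text{s} \approx 100\ \text{s}$ for an outside observer.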
2. Einstein's equation: E = mc^2
An unexpected offshoot of special relativity was Einstein's celebrated equation E = mc^2, which is likely the only mathematical formula to have reached the status of a cultural icon. The equation expresses the equivalence of mass (m) and energy (E), two physical parameters previously believed to be completely separate. In traditional physics, mass measures the amount of matter contained in an object, whereas energy is a property the object has by virtue of its motion and the forces acting on it. Additionally, energy can exist in the complete absence of matter, for example in light or radio waves. However, Einstein's equation says that mass and energy are essentially the same thing, as long as you multiply the mass by c^2 — the square of the speed of light, which is a very big number — to ensure it ends up in the same units as energy.
This means that an object gains mass as it moves faster, simply because it's gaining energy. It also means that even an inert, stationary object has a huge amount of energy locked up inside it. Besides being a mind-blowing idea, the concept has practical applications in the world of high-energy particle physics. According to the European Organization for Nuclear Research (CERN), if sufficiently energetic particles are smashed together, the energy of the collision can create new matter in the form of additional particles.
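To get a feel for the sizes involved (a textbook illustration, not a figure from the article), note that because $c^2$ is so large, even a modest mass corresponds to an enormous energy:

$$E = mc^2 = 1\ \text{kg} \times \left(3 \times 10^8\ \text{m/s}\right)^2 = 9 \times 10^{16}\ \text{J}$$

That is roughly the energy released by burning about two million tonnes of oil.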
3. Lasers

Lasers are an essential component of modern technology and are used in everything from barcode readers and laser pointers to holograms and fiber-optic communication. Although lasers are not commonly associated with Einstein, it was ultimately his work that made them possible. The word laser, coined in 1959, stands for "light amplification by stimulated emission of radiation" — and stimulated emission is a concept Einstein developed more than 40 years earlier, according to the American Physical Society. In 1917, Einstein wrote a paper on the quantum theory of radiation that described, among other things, how a photon of light passing through a substance could stimulate the emission of further photons.
Einstein realized that the new photons travel in the same direction, and with the same frequency and phase, as the original photon. This results in a cascade effect as more and more virtually identical photons are produced. As a theoretician, Einstein didn't take the idea any further, while other scientists were slow to recognize the enormous practical potential of stimulated emission. But the world got there in the end, and people are still finding new applications for lasers today, from anti-drone weapons to super-fast computers.
4. Black holes and wormholes
Einstein's theory of special relativity showed that space-time can do some pretty weird things even in the absence of gravitational fields. But that's only the tip of the iceberg, as Einstein discovered when he finally succeeded in adding gravity into the mix, in his theory of general relativity. He found that massive objects like planets and stars actually distort the fabric of space-time, and it's this distortion that produces the effects we perceive as gravity.
Einstein explained general relativity through a complex set of equations, which have an enormous range of applications. Perhaps the most famous solution to Einstein's equations is the one Karl Schwarzschild worked out in 1916: a black hole. Even weirder is a solution that Einstein himself developed in 1935 in collaboration with Nathan Rosen, describing the possibility of shortcuts from one point in space-time to another. Originally dubbed Einstein-Rosen bridges, these are now known to all fans of science fiction by the more familiar name of wormholes.
5. The expanding universe
One of the first things Einstein did with his equations of general relativity, back in 1915, was to apply them to the universe as a whole. But the answer that came out looked wrong to him. It implied that the fabric of space itself was in a state of continuous expansion, pulling galaxies along with it so the distances between them were constantly growing. Common sense told Einstein that this couldn't be true, so he added something called the cosmological constant to his equations to produce a well-behaved, static universe.
But in 1929, Edwin Hubble's observations of other galaxies showed that the universe really is expanding, apparently in just the way that Einstein's original equations predicted. It looked like the end of the line for the cosmological constant, which Einstein later described as his biggest blunder. That wasn't the end of the story, however. Based on more refined measurements of the expansion of the universe, we now know that it's speeding up, rather than slowing down as it ought to in the absence of a cosmological constant. So it looks as though Einstein's "blunder" wasn't such an error after all.
6. The atomic bomb
Einstein is occasionally credited with the "invention" of nuclear weapons through his equation E = mc^2, but according to the Max Planck Institute for Gravitational Physics's Einstein Online website, the link between the two is tenuous at best. The key ingredient is the physics of nuclear fission, which Einstein had no direct involvement with. Even so, he played a crucial role in the practical development of the first atomic bombs. In 1939, a number of colleagues alerted him to the possibilities of nuclear fission and the horrors that would ensue if Nazi Germany acquired such weapons. Eventually, according to the Atomic Heritage Foundation, he was persuaded to pass on these concerns in a letter to the president of the United States, Franklin D. Roosevelt. The ultimate outcome of Einstein's letter was the establishment of the Manhattan Project, which created the atomic bombs used against Japan at the end of World War II.
Although many famous physicists worked on the Manhattan Project, Einstein wasn't among them. He was denied the necessary security clearance because of his left-leaning political views, according to the American Museum of Natural History (AMNH). To Einstein, this was no great loss — his only concern had been to deny a monopoly on the technology to the Nazis. In 1947 Einstein told Newsweek magazine, "Had I known that the Germans would not succeed in developing an atomic bomb, I would never have lifted a finger," according to Time magazine.
7. Gravitational waves
Einstein died in 1955, but his huge scientific legacy continues to make headlines even in the 21st century. This happened in a spectacular way in February 2016, with the announcement of the discovery of gravitational waves — yet another consequence of general relativity. Gravitational waves are tiny ripples that propagate through the fabric of space-time, and it's often bluntly stated that Einstein "predicted" their existence. But the reality is less clear-cut than that.
Einstein never quite made up his mind whether gravitational waves were predicted or ruled out by his theory. And it took astronomers decades of searching to decide the matter one way or the other.
Eventually they succeeded, using giant facilities such as the Laser Interferometer Gravitational-Wave Observatories (LIGO) in Hanford, Washington, and Livingston, Louisiana. As well as being another triumph for Einstein's theory of general relativity (albeit one he wasn't too sure about himself), the discovery of gravitational waves has given astronomers a new tool for observing the universe — including rare events like merging black holes.
- Discover 3 everyday inventions Einstein made possible, with aerospace company Thales.
- Read the collected works of Albert Einstein (The complete works PergamonMedia).
- Explore 5 fun facts about Albert Einstein with the American Nuclear Society.
Daisy Dobrijevic joined Space.com in February 2022 as a reference writer, having previously worked for our sister publication All About Space magazine as a staff writer. Before joining us, Daisy completed an editorial internship with the BBC Sky at Night Magazine and worked at the National Space Centre in Leicester, U.K., where she enjoyed communicating space science to the public. In 2021, Daisy completed a PhD in plant physiology; she also holds a Master's in Environmental Science and is currently based in Nottingham, U.K. |
We recently held a competition for parents and caregivers, inviting them to write a blog on essential parenting topics. Our winner, Julian Latimer, provides valuable insights on the benefits of encouraging children to ask questions. Julian emphasises that this practice not only fosters their development but also lays the foundation for future success. Research reveals that children who ask more questions tend to exhibit higher IQs, excel academically, display creativity, possess effective problem-solving skills, and are more likely to thrive in the professional world.
In today’s complex and ever-changing society, nurturing critical thinking and problem-solving skills in children is crucial. Encouraging them to approach problems from different angles and explore diverse perspectives enables them to navigate and adapt to new situations.
By creating a safe and open environment where children feel comfortable asking any question, regardless of how trivial it may seem, they are more inclined to tackle challenges with curiosity and an open mind. This approach often leads to the development of innovative and creative solutions. Moreover, studies indicate that children who feel supported and secure in their learning environment demonstrate higher academic achievement and engagement.
Promoting curiosity is another essential strategy for cultivating critical thinking skills. Through exploration and inquiry about their surroundings, children gain a deeper understanding of the world. This process fosters empathy, enhances their appreciation for diverse perspectives and cultures, and enables them to broaden their knowledge.
Utilising books and media as tools to stimulate curiosity is also effective in encouraging children to ask questions and think critically. Pleasure reading has been associated with higher academic achievement and cognitive development, while educational media can pique children’s curiosity and inspire them to inquire about their surroundings.
Engaging children in games that promote questioning and critical thinking is yet another valuable approach to nurturing their curiosity and problem-solving abilities. Research confirms that playing games contributes to cognitive development and academic success. Additionally, games offer an enjoyable and interactive way to teach children about various subjects and encourage their inquisitiveness.
Ultimately, by nurturing our children’s natural curiosity and fostering a love for learning, we are investing in their future success. Equipping them with critical thinking skills positions them to shape the world and tackle its challenges with confidence. As parents, let us celebrate the importance of encouraging our children to ask questions and approach problems with curiosity and an open mind. In doing so, we are empowering them to thrive in the future. |
Lessons for Educators on Teaching Climate Change
Researchers have identified a teaching strategy that could help students understand climate change and the role that humans play in global warming, but they also found that what high school students learn is influenced by the beliefs they bring into the classroom.
In a randomized controlled trial of 357 high school students from three regions in the United States, researchers studied whether teaching students how humans cause climate change would increase their acceptance of climate change overall. The work was done by three researchers from the University of Utah, North Carolina State University and the nonprofit BSCS Science Learning.
Researchers gave students a lesson on climate change that included an activity where they analyzed data on temperature and precipitation. In addition, about half of the students received information on how humans cause climate change by burning fossil fuels to accelerate the greenhouse effect. After completing the lessons, students were asked to make an argument in response to the claim: “I don’t think climate change is real … it’s natural for weather to change.”
Researchers saw that explaining how humans cause climate change was related to increased acceptance. However, they also saw that students’ worldview played a larger role in whether students rejected or accepted the lesson.
“If we don’t think about how worldview will impact how students think about climate change, we’re not going to be successful as science teachers,” said K.C. Busch, assistant professor of STEM Education in the NC State College of Education and a co-author of the study.
We talked (virtually) with Busch; lead author Lynne Zummo, assistant professor of learning sciences at the University of Utah; and Brian Donovan, a research scientist with BSCS Science Learning, to talk about the ramifications of the study on teaching climate change to high schoolers.
TA: What should science teachers take away from this study?
Zummo: Science education around climate change for teenagers requires a lot of thought and requires a lot of careful planning. Our study showed that when you carefully design a climate change science curriculum that focuses on the mechanism behind climate change, you can support moving students toward the scientific community’s consensus.
It also showed that education that focuses on knowledge alone probably isn’t enough because there are these complicating factors where teenagers, like adults, draw on their cultural views of the world to navigate their understanding of climate change.
Busch: As educators, we have to be sensitive to the social aspects of climate change, and how youth are bringing those ideas into the classroom. We don’t offer perhaps a solution here or an intervention that can address that – that would be a next step.
What this does show is you should include mechanistic knowledge in the lesson of how humans cause climate change; that it did have a positive effect on receptivity of climate change overall.
TA: How strongly did political beliefs impact students’ acceptance or rejection?
Busch: For most students, if they got the mechanistic knowledge of how humans cause climate change, it increased their belief in climate change, or their receptivity to climate change.
However, knowledge explained only 2 percent of the variability in students' receptivity, whereas other factors, including worldview, explained 25 percent. Does knowledge matter? Sure it does, but not as much as these other things. That said, the 2 percent of explained variability was associated with only a brief, individual 45-minute intervention.
What our results also show is that students' ability to analyze data was linked to their willingness to accept climate change.
TA: What role exactly did students’ data analysis have on their acceptance?
Zummo: The theory behind this is that the stronger a person’s quantitative reasoning skills are, the better they are at selecting information that advances their cultural viewpoint.
Students with more liberal worldviews who have high quantitative reasoning skills, we saw them really engaging with the data from the study, and using that data to advance the argument about climate change. In contrast, students with more conservative worldviews and who have high quantitative reasoning skills would usually address the data that was available in the lesson that showed how the climate was changing, and then pull in their resources from outside the lesson about natural variability. So, they would find other ways to reason around the reality of climate change.
TA: What was missing from this study that you think could improve teaching of climate change?
Zummo: In science education, we’re very focused on knowledge, historically. We want students to understand scientific concepts. It seems that with climate change, the topic allows students to bring in these cultural and political resources, and we need to rethink the focus on knowledge. One, the lesson can’t just be 45 minutes. It has to be something longer, and in some way, it has to address student world views, and how world views influence data interpretation and ideas about climate change. That’s the next big question of research – how are we going to do that, and what kinds of lessons will avoid this ideologically motivated reasoning.
TA: What motivated you to study this?
Busch: I was a teacher for 12 years, and I remembered teaching about climate change in one of my first years of teaching. I remember talking about the greenhouse gases and how it was going to make everything hotter and the ramifications of that. I was drawing the energy budget on the chalk board. I remember turning around and expecting the class to be like, “rah – we need to do something,” and it was crickets.
Donovan: This isn’t just a story about climate change education, or a story about politically motivated reasoning or a story about science education. It is a story about how biased thinking about human identity is triggered by science education.
Science educators need to be aware of such triggers and why they occur if they want to help students understand how scientific knowledge is misused, misconstrued and used to abuse the Earth and the people who live on it.
The study, “Complex influences of mechanistic knowledge, worldview, and quantitative reasoning on climate change discourse: Evidence for ideologically motivated reasoning among youth,” was published July 28 in the Journal of Research in Science Teaching. The research was supported by the National Science Foundation through Grant Award No. 1660985. |
Thermohaline circulation, also called the Global Ocean Conveyor, moves water between the deep and surface ocean worldwide.
Image courtesy Argonne National Laboratory
Melting Arctic Sea Ice and the Global Ocean Conveyor
Seawater moves through the Atlantic as part of the Global Ocean Conveyor, the way that seawater travels the world’s oceans. The water in the Global Ocean Conveyor moves around because of differences in water density. In the North Atlantic, the differences in water density are mainly caused by differences in temperature. Colder water is denser than warmer water. Water is heated in the warm tropics. The warm water travels at the surface of the ocean north into the cold polar region, where the chilly temperatures cool it. As it cools, it becomes denser and sinks to the deep ocean. More warm surface water flows in to take its place. Then that water cools, sinks, and the pattern continues.
Melting Arctic sea ice could change this pattern, or stop it altogether.
Arctic sea ice is melting fast as the climate warms. When the sea ice melts, it adds freshwater to the ocean, making the seawater less dense. The less dense water will not be able to sink and circulate through the deep ocean as it does currently. This would disrupt or stop the Global Ocean Conveyor and could cause the climate to cool in Europe and North America.
This would not be the first time that the Global Ocean Conveyor was stopped. There is evidence from sedimentary rocks and ice cores that it has shut down several times in the past. Those shut downs have caused changes in climate that are preserved in the rocks and fossils from the time.
Human immunodeficiency virus (HIV) is a virus that attacks the body’s immune system, which is our body’s defense against infection. HIV is most commonly transmitted through unprotected sexual contact, sharing needles, or coming into contact with infected blood. Early symptoms of HIV can include fever, fatigue, headache, and rash. However, these symptoms can also be caused by other viruses or illnesses, so it’s important to get tested for HIV if you think you may have been exposed. There is no cure for HIV, but there are treatments available that can prolong your life and keep you healthy. Researchers are working to develop a vaccine to prevent HIV infection, but until one is available, the best way to prevent HIV is to practice safe sex and avoid sharing needles.
There is currently no vaccine to prevent HIV infection. However, there are many ways to reduce your risk of becoming infected with HIV.
The best way to prevent HIV is to avoid exposure to the virus. This means not having sex or sharing needles with someone who is infected. If you are sexually active, you can reduce your risk by using condoms during sex and limiting your number of sexual partners.
If you are a drug user, you can reduce your risk of HIV infection by not sharing needles with others. If you must share needles, be sure to clean them with bleach before each use. You can also get a prescription for clean needles from a doctor or a syringe exchange program.
If you are pregnant or thinking about getting pregnant, you can take steps to prevent your child from becoming infected with HIV. If you are infected with HIV, you can receive treatment during pregnancy that can greatly reduce the risk of your child becoming infected.
What is caused by the human immunodeficiency virus?
HIV is a serious virus that can attack the body’s immune system. If left untreated, HIV can lead to AIDS. There is no effective cure for HIV at this time, so once someone has it, they have it for life. It is important to get tested for HIV if you think you may have been exposed to the virus, and to get treatment as soon as possible if you test positive.
The symptoms of HIV can vary from person to person, and can range from mild to severe. Some people may experience only a few symptoms, while others may experience more.
The most common symptoms of HIV include fever, fatigue, headache, rash, and swollen lymph nodes.
If you are experiencing any of these symptoms, it is important to see a doctor as soon as possible for a test. Early diagnosis and treatment of HIV is important for maintaining good health and preventing the progression of the disease.
How can you prevent human immunodeficiency virus?
There are several ways to reduce your risk of contracting HIV, including abstaining from sex, using condoms correctly every time you have sex, and never sharing needles. You may also be able to take advantage of HIV prevention medicines such as PrEP and PEP.
HIV is a serious infection that can lead to a number of health problems. It is caused by infection with the human immunodeficiency virus (HIV), which leads to loss of immune cells and leaves individuals susceptible to other infections and the development of certain types of cancers. HIV can be transmitted through sexual contact, sharing needles, or exposure to infected blood or body fluids. There is no cure for HIV, but there are treatments that can prolong life and improve quality of life.
What is the most common disease of immunodeficiency?
If you have common variable immunodeficiency (CVID), you may experience repeated infections in your ears, sinuses, and respiratory system. CVID is an immune system disorder that causes you to have low levels of the proteins that help fight infections. Treatment for CVID may include antibiotics, immunoglobulin therapy, and/or vaccines.
B-cell immunodeficiencies can be broadly classified into two types: those that are antibody-deficient and those that are not. Antibody-deficient B-cell immunodeficiencies are the most common type, accounting for approximately 50% of all primary immunodeficiency (PID) diagnoses. The most common antibody-deficient B-cell immunodeficiency is common variable immunodeficiency (CVID), which is characterized by deficient production of all immunoglobulin isotypes (IgA, IgG, IgM, and IgE). Patients with CVID typically present with recurrent bacterial infections, particularly of the respiratory and gastrointestinal tracts. Other, less common antibody-deficient disorders include X-linked agammaglobulinemia, Wiskott-Aldrich syndrome, and severe combined immunodeficiency (SCID).
Patients with B-cell immunodeficiencies that are not antibody-deficient typically present with a more limited range of infections, often involving only particular types of bacteria. These disorders include leukocyte adhesion deficiency, agranulocytosis, and chronic granulomatous disease.
What are the three stages of the human immunodeficiency virus infection?
The three stages of HIV infection are acute HIV infection, chronic HIV infection, and AIDS. There is no cure for HIV, but treatment with HIV medicines can slow or prevent HIV from advancing from one stage to the next.
The human immunodeficiency virus (HIV) is a retrovirus that infects cells of the human immune system and causes acquired immunodeficiency syndrome (AIDS). HIV is transmitted through contact with the blood, semen, or other body fluids of an infected person.
There are two types of HIV, HIV-1 and HIV-2. HIV-1 is the most common and is responsible for the majority of HIV infections worldwide. HIV-2 is less common and is predominantly found in West Africa.
HIV attacks the body’s immune system, making the person infected more susceptible to other infections and illnesses, which can lead to AIDS. AIDS is the most advanced stage of HIV infection and can lead to death.
There is no cure for HIV, but there are treatments available that can prolong a person’s life.
What is the diagnosis of human immunodeficiency virus
Blood tests are the most common way to diagnose the human immunodeficiency virus (HIV), the virus that causes acquired immunodeficiency syndrome (AIDS). These tests look for antibodies to the virus that are present in the blood of infected individuals. People exposed to the virus should get tested immediately.
There is no known cure for HIV/AIDS, however treatments are available to manage the virus and its effects. People with HIV/AIDS often experience a wide range of symptoms that can make everyday activities difficult. These symptoms can also lead to other health problems.
What are the 10 most common diseases that can cause a secondary immunodeficiency?
There are a number of conditions that can lead to a secondary immunodeficiency disorder. These include severe burns, chemotherapy, radiation, diabetes mellitus, and malnutrition. All of these can cause the body to be unable to produce enough of the immune cells needed to fight off infection.
Immunodeficiency disorders can result from the use of certain drugs or from having a long-term serious disorder, such as cancer. In some cases, immunodeficiency disorders are inherited. People with these disorders often have frequent, severe, or prolonged infections and may also develop autoimmune disorders or cancer. Early diagnosis and treatment are important in order to prevent serious health complications.
How can I check my immune system at home
Your immune system is your body’s defense against infection and disease. It is made up of a network of cells, tissues, and organs that work together to protect you from foreign invaders. These include bacteria, viruses, and fungi.
The Immune Health blood test looks at a variety of different immune system markers to give you a comprehensive overview of your immune health. This includes looking at your white blood cell count, subtypes of white blood cells, and other markers of inflammation.
The standard screening test for antibody deficiency is a blood test to measure levels of immunoglobulin (Ig). The test measures IgG, IgA, and IgM levels. Lower than normal levels of Ig may indicate an antibody deficiency. Other tests for specific antibody production may also be done.
What can weaken your immune system?
It’s important to take care of your immune system to avoid getting sick. Infections like the flu virus, mono (mononucleosis), and measles can weaken the immune system for a brief time, so it’s important to get the proper rest and nutrition to recover quickly. Also, smoking, alcohol, and poor nutrition can weaken the immune system, so it’s important to avoid those activities and eat a healthy diet.
The good news is that life expectancy for those who have CVID has significantly improved in the last 30 years. Thanks to the pioneering of immunoglobulin replacement therapy as a CVID treatment, people with CVID can now expect to live for over 50 years after diagnosis. This is a huge improvement from the 12 year life expectancy that was once the norm.
What happens if your body doesn’t make antibodies
IgG deficiencies occur when your body does not produce enough Immunoglobulin G (IgG). This can leave you susceptible to infections. IgG is the most common type of antibody in the body and helps to protect against bacterial and viral infections.
HIV is most often transmitted through sexual contact with an infected person, sharing needles or other injecting equipment, or via blood transfusions. Mothers can also pass HIV to their children during pregnancy, birth, or breastfeeding.
HIV can be present in blood, semen, pre-seminal fluid, rectal fluids, vaginal fluids, and breast milk. The virus enters the body through mucous membranes or damaged tissue, or it may be injected directly into the bloodstream. From there, it begins to attack the immune system.
What does human immunodeficiency virus target
HIV is a sexually transmitted infection that can result in acquired immunodeficiency syndrome (AIDS). It is caused by the human immunodeficiency virus (HIV). HIV attacks the body’s immune system, making the person infected susceptible to other infections and illnesses, which can lead to AIDS. AIDS is the most advanced stage of HIV infection, and can dramatically reduce the body’s ability to fight infections and illnesses.
It is very important to protect yourself and your family from preventable infectious diseases. Some of the most common diseases include chickenpox, dengue, diphtheria, Ebola, flu, hepatitis, and Hib disease. These diseases can be prevented by vaccinations, good hygiene, and avoiding contact with sick individuals.
What are 5 common diseases of the immune system
Diseases of the immune system can be caused by a number of factors, including genetic predisposition, infection, and environmental factors. Some diseases of the immune system, such as allergies, autoimmune diseases, and cancer, are more common in developed countries, where exposure to environmental factors, such as pollutants and chemicals, is more likely.
Infections, such as HIV and mononucleosis, can weaken the immune system and lead to serious illness. Certain types of cancer, like leukemia, lymphoma, and myeloma, can also affect the immune system directly. These cancers occur when immune cells grow uncontrollably.
There is no one easy answer to this question as it is a complex topic with many different facets. However, in general, HIV is caused by infection with the human immunodeficiency virus. This virus attacks the body’s immune system, making the person infected susceptible to other infections and illnesses, which can lead to AIDS. There is no cure for HIV, but there are treatments available that can prolong a person’s life. Prevention of HIV is primarily through education and awareness, as well as through practices like using condoms during sex and avoiding sharing needles.
There is no cure for HIV, but it can be treated. With proper medical care, people with HIV can live long, healthy lives. There are many steps that people can take to prevent HIV, such as using condoms during sex, getting tested for HIV regularly, and not sharing needles. Although HIV can be a serious illness, it is important to remember that people with HIV can lead happy, fulfilling lives. |
Study With Me: Introduction to Blood Transfusion 28
SECTION 1 : Haematology
Questions and Answers
What are the characteristics of red cells (erythrocytes)?
The answer: Red cells are non-nucleated, biconcave discs with a characteristic red color due to the pigment haemoglobin.
What gives the characteristic red color to oxygenated blood?
The answer: Oxygenated blood appears bright red due to the pigment haemoglobin in red cells.
What is the color of deoxygenated blood?
The answer: Deoxygenated blood loses its bright red color and becomes a dull, dark red.
What is the composition of haemoglobin, the pigment in red cells?
The answer: Haemoglobin consists of four closely linked polypeptide chains (globin) attached to an iron-containing complex (haem).
What is the molecular mass of haemoglobin?
The answer: The molecular mass of haemoglobin is about 68,000 Da. |
Understanding the concept of Annual Percentage Rate (APR) is essential for both borrowers and investors alike. APR represents the yearly interest generated by a sum that is charged to borrowers or paid to investors. Expressed as a percentage, it reveals the actual yearly cost of obtaining credit or investing, making it a crucial aspect when comparing financial products and investments.
- APR represents the yearly cost of borrowing or investing, making it essential for comparing financial products
- Various financial products, such as loans, mortgages, and credit cards, use APR
- Federal consumer law requires disclosure of APRs, and a good credit score often results in a lower APR
What Does APR Stand For?
APR stands for Annual Percentage Rate. It represents the yearly interest generated by a sum that’s charged to borrowers or paid to investors. Generally, it is expressed as a percentage that represents the actual yearly cost of funds over the term of a loan. It is the comprehensive measure of the real cost of borrowing, including interest rates, fees, and other charges associated with the credit provided.
APR can be found in various financial products, including loans, mortgages, and credit cards. Federal consumer law even requires lenders to disclose APRs, creating transparency to help consumers comprehend the potential costs associated with their borrowing decisions. It’s important to keep in mind that a good credit score often translates to a lower APR for borrowers.
Origin and Context of APR
The concept of APR was introduced by the Truth in Lending Act in the United States in 1968. The intention was to provide a standard measure to allow consumers to compare the costs of various credit offers. As a result, APR has become a critical piece of information for federal consumer law, mandating lenders to disclose APRs to borrowers. This allows borrowers to make more informed decisions about their loans and credit before entering a borrowing relationship.
Related Terms to APR
- Interest Rate: The base rate at which a lender charges borrowers for lending money.
- Fixed APR: An APR that remains constant throughout the life of the loan.
- Variable APR: An APR that fluctuates based on an underlying index, such as the prime rate.
- Balance Transfer: The process of transferring debt between credit accounts, typically to take advantage of lower APRs.
- Purchase APR: The APR that applies to purchases made with a credit card.
- Cash Advance APR: The APR that applies to cash advances obtained through a credit card. This rate is typically higher than the purchase APR.
- Penalty APR: A higher APR imposed on borrowers who have missed payments or violated the terms of their credit agreement.
- Introductory APR: A temporary, typically lower, APR offered for a limited period to attract new customers.
- Grace Period: The period during which no interest is charged on new purchases if the balance is paid in full by the due date.
- Annual Percentage Yield (APY): A similar concept to APR that applies to investment accounts, representing the effective annual interest rate earned on an investment, taking into account the effect of compounding (see the formula sketched after this list).
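To make the compounding relationship explicit, here is the standard conversion between a nominal APR compounded $n$ times per year and the resulting APY (a textbook formula rather than anything specific to this article):

$$\mathrm{APY} = \left(1 + \frac{\mathrm{APR}}{n}\right)^{n} - 1$$

For example, a 12% APR compounded monthly ($n = 12$) gives $\mathrm{APY} = (1.01)^{12} - 1 \approx 12.68\%$, which is why APY is always at least as high as the quoted APR whenever compounding happens more than once a year.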
Understanding and comparing APRs are essential for borrowers seeking loans, credit cards, and other financial products. By examining the APR, borrowers can make more informed decisions about the cost of borrowing money and the impact of the various terms and conditions associated with different loans.
More about APR Terminology
The term Annual Percentage Rate (APR) is often used interchangeably with other phrases in the lending world. Some synonyms for APR include:
- Yearly interest rate
- Annual interest rate
- Yearly percentage rate
These terms essentially convey the same concept: the yearly cost of borrowing money expressed as a percentage.
Other Meanings of APR
While APR predominantly stands for “Annual Percentage Rate” in the context of lending and borrowing, it may have different meanings in other contexts. For example:
- In the corporate world, APR can refer to the “Accounts Payable Reconciliation” process, which involves matching an organization’s unpaid invoices with its purchase orders and receiving reports.
- In the medical field, APR can represent “Anterior Pelvic Rotation,” referring to the forward rotation of the pelvis.
- In the horticulture industry, APR might stand for “Apple Production Research,” focusing on improvements and innovations in the cultivation of apple trees.
It’s important to note that when discussing loans, credit cards, or financing, APR almost always refers to the Annual Percentage Rate. The context in which the term is used will help determine the intended meaning of APR.
Frequently Asked Questions
What is considered a good APR?
A good APR depends on the type of loan or credit card you are considering. In general, a lower APR is better as it represents the cost of borrowing money. For credit cards, an APR below 15% is generally considered good. For personal loans, an APR under 10% is usually considered favorable. However, a “good” APR also depends on your creditworthiness, financial situation, and the current market conditions.
APR vs interest rate: what’s the difference?
While both APR and interest rate refer to the cost of borrowing money, they serve different purposes in measuring this cost. The interest rate represents the cost of borrowing the principal amount; it does not include any fees or other charges. On the other hand, APR is a broader measure that includes not only the interest rate but also any fees and other costs associated with the loan. In essence, APR provides a more comprehensive idea of the total cost of borrowing over the life of the loan or credit line.
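As a rough illustration (a simplified one-year example, not a regulatory APR calculation, which spreads fees over the full loan term): suppose you borrow $1,000 for one year at a 10% interest rate and also pay a $20 origination fee. The interest cost is $100, but the total cost of credit is $120, so

$$\mathrm{APR} \approx \frac{100 + 20}{1{,}000} = 12\%$$

which is why the APR exceeds the bare interest rate whenever fees are charged.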
How does APR work on credit cards?
APR on credit cards works slightly differently than on loans. Most credit cards use a variable interest rate that is tied to an index, such as the prime rate. This means that the APR can change over time. Furthermore, credit cards generally use compound interest, which means that interest is calculated on both the principal balance and any accrued interest.
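As a concrete sketch of that mechanism, the snippet below estimates the interest on a carried balance, assuming the issuer derives a daily periodic rate as APR/365 and compounds it each day of the billing cycle (a common convention, though individual card agreements vary):

```python
def cycle_interest(balance: float, apr: float, days_in_cycle: int = 30) -> float:
    """Estimate interest accrued on a carried credit card balance.

    Assumes a daily periodic rate of APR/365, compounded daily
    (a common issuer convention; check your card agreement).
    """
    daily_rate = apr / 365
    # Compound the balance once per day across the billing cycle.
    final_balance = balance * (1 + daily_rate) ** days_in_cycle
    return final_balance - balance

# Example: a $2,000 balance carried through a 30-day cycle at 20% APR
print(round(cycle_interest(2000, 0.20), 2))  # ~33.14
```

Run month after month, that daily compounding is exactly why carrying a balance becomes expensive so quickly.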
Credit card issuers often offer a grace period, allowing cardholders to avoid paying interest if they pay their statement balance in full by the due date. However, if they carry a balance from month to month, interest will be applied to the outstanding balance, using the APR to determine the amount of interest charged. This is why it’s crucial for consumers to understand the APR on their credit cards and aim to pay their balances off each month to avoid costly interest charges. |
Racism is an emotional, elusive, and historically pervasive fact of society in the United States. In the societal context, racism appears as a historical legacy encompassing involuntary slavery, legal support of second-class citizenship, constitutional denial of equal rights, and ubiquitous emotional, social, economic, and psychological exploitation and oppression of black Americans of African descent (Jones, 1988). Such oppression, discrimination, and exploitation have been normalized as problems that impose unequal opportunities on minorities marked by immigrant status, age, chronic poverty, ethnicity, and color. Racism has developed and been refined against black Americans because of their black skin and biological roots, and it further masks more fundamental problems of in-group preferences, cultural ethos, and individual comparisons. The big question is: what effects does racism have on black men? Racism produces dangerous effects through institutional and interpersonal channels, which have adverse physiological and psychological impacts on black men.
Interpersonal racism refers to the discriminatory and extreme behaviors directed at individuals because of their race or ethnicity. Institutional racism, on the other hand, encompasses the formal and informal policies and practices that deny individuals their values and force them to internalize the conceptions of racism held by their oppressors (Clark, 2001). Both interpersonal and institutional racism can act as stressors, especially for black men, producing increased psychological reactivity that, when sustained for an extended period, may lead to cardiovascular diseases and disorders (Clark, 2001). There is therefore an imperative need to eliminate racism and its effects, coupled with ways of shielding black men from such menacing stressors.
The unique psycho-social and contextual factors emanating from pervasive exposure to racism and discrimination create additional daily stressors for African-American men. These effects can be manifested as trauma, especially around racial violence (American Psychology Association, 2020). Racism also adversely affects black men's self-consciousness, their conceptions of masculinity, and their psychological functioning. It further complicates black men's gender role socialization, especially against the backdrop of historical colonization and slavery, which shapes their worldview, their experiences in the United States, and their adaptations to Eurocentric standards of masculinity (Pierre et al., 2001). It is also worth noting that racism affects black men's well-being, hence the need for an Afrocentric counseling approach that couples indigenous healing with traditional counseling models.
Similarly, racism affects black men through high-profile police shootings and the deaths of many black men in custody, and some even while jogging, provoking outcry across the entire country. Typical examples are the death of George Floyd, a black man, on May 25, 2020, at the hands of a white police officer in Minneapolis, and the shooting death of Ahmaud Arbery on February 23, 2020, in Brunswick, Georgia, by a white father and son (Assari, 2020). Such incidents provoked wide protest and outrage in cities across the US. Racism therefore has adverse effects on the livelihood and health of black men that go far beyond police shootings, and black men pay the highest costs of racism.
Additionally, black men's life expectancy is 71.9 years, far below that of white men (76.4 years), white women (81.2 years), and black women (78.5 years), as a result of racism and its associated injustices and poverty. Black men are more vulnerable to dying from many types of cancer, stroke, HIV, and homicide (Assari, 2020). A large body of research links such deaths and the poor physical and mental outcomes of black men to racism (Assari, 2020). Racism is an experience that harms the health of black men daily, subsequently resulting in chronic disease and poor health. One report shows that approximately 66% of black men experience daily discrimination at high levels (Assari, 2020). Common examples of racism experienced by black men include being turned down for a job and being treated differently at the workplace, both primary risk factors for health problems.
Racism also affects black men in the realm of education, which protects black men less than black women against the psychological distress and symptoms of depression that educational attainment normally buffers. As a result, the diminishing returns on economic and non-economic resources are more pronounced for black men. Black men with high levels of motivation and aspiration thus become discouraged, feel unhealthy, get sick, and die earlier. Moreover, black men are still treated unfairly in health care systems, receiving lower-quality healthcare than white men. That deteriorates their ability to manage diseases, so they develop worse outcomes, get sicker, and die earlier. The recent shootings of black men by police and others also show that black men are targeted by white men and by the groups in charge of law and order (Assari, 2020). This depicts how biases and social structures result in poor health, depression, and death among black men.
Racism also underlies blocked opportunities for black men, among other considerable forms of discrimination, and such discriminatory experiences make lives harder and shorter for black men. Racism confers risk factors on black men encompassing heart disease, suicide, depression, and even death, since black men experience racial discrimination more than other groups, including black women (Assari, 2020). Racism and discrimination therefore affect ethnic and racial minorities, especially black men, in the realms of anxiety, depression, substance use, and suicide, with adverse effects on both their mental and physical health.
In a nutshell, racism has adverse interpersonal and institutional effects on black men's psychological and physiological well-being. Racism against black men comes in the form of prejudicial and discriminatory behaviors directed towards them, coupled with the denial of equitable treatment because of their race or ethnic affiliation. The effects of racism may leave black men with chronic stressors that subsequently result in disease, cardiovascular disorders, psychological disorders, and depression.
American Psychological Association. (2020). Physiological and Psychological Impacts of Racism and Discrimination for African Americans. American Psychological Association. https://www.apa.org/pi/oema/resources/ethnicity-health/racism-stress
Assari, S. (2020). George Floyd and Ahmaud Arbery Deaths: Racism Causes Life-Threatening Conditions for Black Men Every Day. The Conversation. https://theconversation.com/george-floyd-and-ahmaud-arbery-deaths-racism-causes-life-threatening-conditions-for-black-men-every-day-120541
Clark, V. (2001). The Perilous Effects of Racism. PubMed, National Library of Medicine. 11(4): 769–772. https://pubmed.ncbi.nlm.nih.gov/11763300/
Jones, M. (1988). Racism in Black and White. In: Katz, P. A., & Taylor, D. A. (Eds.), Eliminating Racism. Perspectives in Social Psychology (A Series of Texts and Monographs). Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-0818-6_6
Pierre, M., Mahalik, J., & Woodland, M. (2001). The Effects of Racism, African Self-Consciousness, and Psychological Functioning on Black Masculinity: A Historical and Social Adaptation Framework. Journal of African American Men, 6(2): 19–39. https://www.jstor.org/stable/41819424?seq=1
Forests where the sustainable production of wood is carefully balanced with protecting other important resources such as water quality and wildlife habitat are known as "working forests." After trees are harvested from these forests, they are replanted and harvested again in a sustainable process that can span lifetimes.
Working forests are a real solution to reduce the amount of carbon in the environment. Scientists commonly refer to the cycle of growth and harvest in working forests as carbon sequestration. Carbon sequestration is the process of capturing, securing, and storing carbon dioxide from the atmosphere. The idea is to stabilize carbon in solid and dissolved forms so that it doesn’t cause the atmosphere to warm.
Carbon sequestration is directly related to the growth rate of a tree. Newly planted and young trees grow quickly and absorb more carbon dioxide from the atmosphere than older ones. Older trees will have more carbon stored because they have spent more time absorbing it. However, if these older trees are not harvested, they are more susceptible to fire damage, pests, and diseases. Also, their carbon absorption plateaus. Therefore, in a working forest, it is important that older trees ready to become lumber are harvested and replaced with robust, growing young trees. This process maximizes the CO2 absorption of the forest.
The process continues with more carbon being stored during a tree’s high-growth period (when they are younger) and less being stored in older phases of growth. One of the best parts of this process is that after harvest, the wood continues to store the carbon as lumber, wood, and paper products.
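The growth-and-harvest logic above can be made concrete with a toy model. The sketch below is a minimal, hypothetical simulation: the growth curve, the peak age, and every number in it are illustrative assumptions, not figures from the sources this article draws on. It simply shows how, under such assumptions, shorter harvest-and-replant cycles keep a stand in its fast-absorbing phase.

```python
# Toy model of CO2 uptake in a working forest. All numbers are illustrative.

def annual_uptake(age, max_uptake=1.0, peak_age=15):
    """Units of CO2 absorbed per year by a tree of a given age (toy curve:
    uptake rises until peak_age, then tapers off as absorption plateaus)."""
    if age < peak_age:
        return max_uptake * age / peak_age
    return max_uptake * peak_age / age

def total_uptake(harvest_age, horizon=120):
    """Total CO2 absorbed over `horizon` years when the stand is harvested
    and replanted every `harvest_age` years."""
    return sum(annual_uptake(year % harvest_age) for year in range(horizon))

for harvest_age in (20, 40, 80, 120):
    print(f"harvest every {harvest_age:>3} years: "
          f"{total_uptake(harvest_age):.1f} units absorbed")
```

Running it shows the shortest rotation absorbing the most carbon over the 120-year horizon, which is the qualitative point the paragraph above makes.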
Overall, working forests increase CO2 absorption and prevent catastrophic fire, disease and insects that kill trees and emit carbon dioxide. They provide drinking water, a healthy climate, wildlife habitat, and green jobs.
To manage a working forest, you do not have to live in Idaho or Washington. In fact, urban forests account for almost 20% of the carbon sequestered in U.S. forests. Working forests are popular, and a 10% increase in the number of entry-level urban forestry jobs (most related to tree trimming and pruning) is expected from 2018 to 2028.
Although working forests and carbon sequestering are extremely efficient in reducing the human carbon footprint, humans can put in some work, too! Below are helpful ways of reducing your carbon footprint whether you live on a farm or in a metropolitan area:
- Use public transportation, walk, or bike. In NYC, this one is easy!
- Unplug your electronic devices after they are done charging. We are all guilty of falling asleep with our cell phones fully charged and still plugged in.
- Build with wood. As we know, trees absorb carbon from the atmosphere and store carbon in wood products!
- Shop resale instead of buying trendy; there will be a lot less waste in our landfills!
This page is going to take a simple look at the origin of color in complex ions - in particular, why so many transition metal ions are colored.
White light and Colors
If you pass white light through a prism it splits into all the colors of the rainbow. Visible light is simply a small part of an electromagnetic spectrum, most of which we cannot see: gamma rays, X-rays, infra-red, radio waves and so on. Each of these has a particular wavelength, ranging from 10⁻¹⁶ meters for gamma rays to several hundred meters for radio waves. Visible light has wavelengths from about 400 to 750 nm (1 nanometer = 10⁻⁹ meters).

Figure: The diagram shows an approximation to the spectrum of visible light.
If white light (ordinary sunlight, for example) passes through copper(II) sulfate solution, some wavelengths in the light are absorbed by the solution. Copper(II) ions in solution absorb light in the red region of the spectrum. The light which passes through the solution and out the other side will have all the colors in it except for the red. We see this mixture of wavelengths as pale blue (cyan). The diagram gives an impression of what happens if you pass white light through copper(II) sulfate solution.
Working out what color you will see is not easy if you try to do it by imagining "mixing up" the remaining colors. You wouldn't have thought that all the other colors apart from some red would look cyan, for example. Sometimes what you actually see is quite unexpected. Mixing different wavelengths of light doesn't give you the same result as mixing paints or other pigments. You can, however, sometimes get some estimate of the color you would see using the idea of complementary colors.
If you arrange some colors in a circle, you get a "color wheel". The diagram shows one possible version of this.
Colors directly opposite each other on the color wheel are said to be complementary colors. Blue and yellow are complementary colors; red and cyan are complementary; and so are green and magenta. Mixing together two complementary colors of light will give you white light. What this all means is that if a particular color is absorbed from white light, what your eye detects by mixing up all the other wavelengths of light is its complementary color. Copper(II) sulfate solution is pale blue (cyan) because it absorbs light in the red region of the spectrum; cyan is the complementary color of red.
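As a quick illustration of this rule of thumb, here is a minimal sketch of the complementary-color lookup in Python; it covers only the six colors on the simple wheel described above.

```python
# The color you see is (roughly) the complement of the color absorbed.
COMPLEMENTS = {
    "red": "cyan", "cyan": "red",
    "blue": "yellow", "yellow": "blue",
    "green": "magenta", "magenta": "green",
}

def perceived_color(absorbed):
    """Rough prediction of the color seen when `absorbed` is removed from white light."""
    return COMPLEMENTS.get(absorbed, "unknown (not on this simple wheel)")

print(perceived_color("red"))     # cyan, e.g. copper(II) sulfate solution
print(perceived_color("yellow"))  # blue
```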
We often casually talk about the transition metals as being those in the middle of the Periodic Table where d orbitals are being filled, but these should really be called d block elements rather than transition elements (or metals). This shortened version of the Periodic Table shows the first row of the d block, where the 3d orbitals are being filled.
The usual definition of a transition metal is one which forms one or more stable ions which have incompletely filled d orbitals. Zinc, with the electronic structure [Ar] 3d¹⁰4s², does not count as a transition metal whichever definition you use. In the metal, it has a full 3d level. When it forms an ion, the 4s electrons are lost, again leaving a completely full 3d level. At the other end of the row, scandium ([Ar] 3d¹4s²) does not really count as a transition metal either. Although there is a partially filled d level in the metal, when it forms its ion it loses all three outer electrons. Technically, the Sc³⁺ ion does not count as a transition metal ion because its 3d level is empty.
The diagrams show the approximate colors of some typical hexaaqua metal ions, with the formula [M(H₂O)₆]ⁿ⁺. The charge on these ions is typically 2+ or 3+.
Non-transition metal ions
These ions are all colorless.
Transition metal ions
The corresponding transition metal ions are colored. Some, like the hexaaquamanganese(II) ion (not shown) and the hexaaquairon(II) ion, are quite faintly colored - but they are colored.
So, what causes transition metal ions to absorb wavelengths from visible light (causing color) whereas non-transition metal ions do not? And why does the color vary so much from ion to ion?
The Origin of Color in Complex Ions containing transition metals
Complex ions containing transition metals are usually colored, whereas the similar ions from non-transition metals are not. That suggests that the partly filled d orbitals must be involved in generating the color in some way. Remember that transition metals are defined as having partly filled d orbitals.
For simplicity we are going to look at the octahedral complexes which have six simple ligands arranged around the central metal ion. The argument is not really any different if you have multidentate ligands. When the ligands bond with the transition metal ion, there is repulsion between the electrons in the ligands and the electrons in the d orbitals of the metal ion. That raises the energy of the d orbitals. However, because of the way the d orbitals are arranged in space, it doesn't raise all their energies by the same amount. Instead, it splits them into two groups. The diagram shows the arrangement of the d electrons in a Cu2+ ion before and after six water molecules bond with it.
Whenever 6 ligands are arranged around a transition metal ion, the d orbitals are always split into 2 groups in this way - 2 with a higher energy than the other 3. The size of the energy gap between them (shown by the blue arrows on the diagram) varies with the nature of the transition metal ion, its oxidation state (whether it is 3+ or 2+, for example), and the nature of the ligands. When white light is passed through a solution of this ion, some of the energy in the light is used to promote an electron from the lower set of orbitals into a space in the upper set.
Each wavelength of light has a particular energy associated with it. Red light has the lowest energy in the visible region. Violet light has the greatest energy. Suppose that the energy gap in the d orbitals of the complex ion corresponded to the energy of yellow light.
The yellow light would be absorbed because its energy would be used in promoting the electron. That leaves the other colors. Your eye would see the light passing through as a dark blue, because blue is the complementary color of yellow.
Simple tetrahedral complexes have four ligands arranged around the central metal ion. Again the ligands have an effect on the energy of the d electrons in the metal ion. This time, of course, the ligands are arranged differently in space relative to the shapes of the d orbitals. The net effect is that when the d orbitals split into two groups, three of them have a greater energy, and the other two a lesser energy (the opposite of the arrangement in an octahedral complex). Apart from this difference of detail, the explanation for the origin of color in terms of the absorption of particular wavelengths of light is exactly the same as for octahedral complexes.
Non-transition metals do not have partly filled d orbitals. Visible light is only absorbed if some energy from the light is used to promote an electron over exactly the right energy gap. Non-transition metals do not have any electron transitions which can absorb wavelengths from visible light. For example, although scandium is a member of the d block, its ion (Sc3+) hasn't got any d electrons left to move around. This is no different from an ion based on Mg2+ or Al3+. Scandium(III) complexes are colorless because no visible light is absorbed. In the zinc case, the 3d level is completely full - there are not any gaps to promote an electron in to. Zinc complexes are also colorless.
Factors Affecting the Color of Transition Metal complexes
In each case we are going to choose a particular metal ion for the center of the complex and change other factors, since color changes in a fairly haphazard way from metal to metal across a transition series.
The nature of the ligand
Different ligands have different effects on the energies of the d orbitals of the central ion. Some ligands have strong electrical fields which cause a large energy gap when the d orbitals split into two groups. Others have much weaker fields producing much smaller gaps. Remember that the size of the gap determines what wavelength of light is going to get absorbed. The list below shows some common ligands in order of increasing splitting (the standard spectrochemical series). Those at the top produce the smallest splitting; those at the bottom the largest splitting.

- I⁻ (iodide)
- Br⁻ (bromide)
- Cl⁻ (chloride)
- F⁻ (fluoride)
- H₂O (water)
- NH₃ (ammonia)
- CN⁻ (cyanide)
The greater the splitting, the more energy is needed to promote an electron from the lower group of orbitals to the higher ones. In terms of the color of the light absorbed, greater energy corresponds to shorter wavelengths. That means that as the splitting increases, the light absorbed will tend to shift away from the red end of the spectrum towards orange, yellow and so on.
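The energy-wavelength link can be made concrete with the relation E = hc/λ. The sketch below is a minimal Python illustration; the splitting values fed in are illustrative assumptions, not measured values for any particular complex.

```python
# Convert a d-orbital splitting energy (per mole) into the wavelength
# of light absorbed, using E = hc/lambda.

h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s
AVOGADRO = 6.022e23  # ions per mole

def absorbed_wavelength_nm(splitting_kj_per_mol):
    """Wavelength (in nm) whose photon energy matches the given splitting."""
    energy_per_ion = splitting_kj_per_mol * 1000 / AVOGADRO  # J per ion
    return h * c / energy_per_ion * 1e9                      # m -> nm

# A larger splitting means a shorter absorbed wavelength:
for delta in (160, 200, 240):  # kJ/mol, illustrative values only
    print(f"{delta} kJ/mol -> about {absorbed_wavelength_nm(delta):.0f} nm absorbed")
```

The printout moves from the red end of the spectrum (around 750 nm) toward green-blue as the splitting grows, matching the shift described above.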
There is a fairly clear-cut case in copper(II) chemistry. If you add an excess of ammonia solution to hexaaquacopper(II) ions in solution, the pale blue (cyan) color is replaced by a dark inky blue as some of the water molecules in the complex ion are replaced by ammonia.
The first complex must be absorbing red light in order to give the complementary color cyan. The second one must be absorbing in the yellow region in order to give the complementary color dark blue. Yellow light has a higher energy than red light. You need that higher energy because ammonia causes more splitting of the d orbitals than water does.
It is not often as simple to see as this, though! Trying to sort out what is being absorbed when you have murky colors not on the simple color wheel further up the page is much more of a problem. The diagrams show some approximate colors of some ions based on chromium(III).
It is obvious that changing the ligand is changing the color, but trying to explain the colors in terms of our simple theory is not easy.
The oxidation state of the metal
As the oxidation state of the metal increases, so also does the amount of splitting of the d orbitals. Changes of oxidation state therefore change the color of the light absorbed, and so the color of the light you see. Taking another example from chromium chemistry involving only a change of oxidation state (from +2 to +3):
The 2+ ion is almost the same color as the hexaaquacopper(II) ion, and the 3+ ion is the hard-to-describe violet-blue-grey color.
The coordination of the Ion
Splitting is greater if the ion is octahedral than if it is tetrahedral, and therefore the color will change with a change of co-ordination. The problem is that an ion will normally only change co-ordination if you change the ligand, and changing the ligand will change the color as well. Hence, you cannot isolate the effect of the co-ordination change. For example, a commonly quoted case comes from cobalt(II) chemistry, with the ions [Co(H₂O)₆]²⁺ and [CoCl₄]²⁻.
The difference in the colors is going to be a combination of the effect of the change of ligand, and the change of the number of ligands.
Contributors and Attributions
Jim Clark (Chemguide.co.uk) |
People experience and explore the world through their senses. In fact, as children, the human body solely relies on experience and sensory stimuli to learn about the world around them. In most people’s lives, their senses work together to teach them about their surroundings. However, what happens when your senses start contradicting each other?
You’ve probably experienced something similar before. Sometimes, senses clash — and it leaves the mind confused. Funny and curious, this fascinating phenomenon would make for an intriguing science fair entry.
People’s senses can be surprisingly easy to fool: apples can look like apples, but they can taste like bananas. This may sound odd, but science can play tricks on our senses. This phenomenon is explainable through the way our sensory systems work. Noses and tongues are especially easy to manipulate. These body parts can easily be made to receive contrasting stimuli, lighting a fire for the battle of the senses.
One prominent scenario is when you experience a stuffy nose. Stuffy noses are one of the instances where you might notice that the scent of the food contributes to how things taste. Imagine eating some strawberries but, at the same time, smelling heavy and strong banana scents. Which sense do you think contributes more to how the fruit will taste? Does the banana scent overpower the strawberry flavor? Studying this curious occurrence will take you through the interesting phenomenon of the battle of the senses.
Science is everywhere. It’s in the house, the school, and everything people do. But one of the most frequent exhibitions of science people have fun with is their experience with food. This project will require some volunteers who will contribute to the study by sharing their experiences.
This project is an entry under food science, an intermediate-level activity that you will surely enjoy executing for the science fair. Make sure you familiarize yourself with the guidelines and rules your science fair has about human volunteers.
This science project will take one to two weeks to finish and will likely cost under a hundred dollars to execute. The concept of this science fair project is inspired by Dr. Svenja Lohner‘s work online, which inquires into the effect of scents on taste, comparing gustatory and olfactory stimuli and their influence on food and flavor.
To understand the concepts you’re looking into, first dive into what elements influence flavor. This science fair project explores how people perceive tastes and scents, how these two sensory stimuli interact, and the many curious things these two are responsible for.
What is Flavor?
Contrary to misconception, flavor isn’t something as simple as taste, although they are casually used synonymously. The definition of flavor, in science, culinary technicalities, and gastronomical studies is not something arbitrary or vague. It is a precise and technical concept that describes the combination of gustatory and olfactory stimuli.
Scientifically, professionals define flavor as the sensory impression of food or other substances primarily through the chemical senses of taste and smell. The tongue and the nose, respectively, connect these senses to the brain, sending messages to the organ through external input received by the nerves or sensory receptors.
The tongue is able to determine the many basic tastes. The most prominent tastes, however, are bitter, salty, sweet, sour, and umami. These tastes, in conjunction with the aromas and scents detected by the nose through its olfactory receptors, determine flavor.
Flavor is the experience of the intermingling of aroma and taste. However, it is also important to note that flavor also considers the texture of food as the mouth reacts to the sensation food or substances deliver.
The Tongue and Tastes
If you have a favorite food, you are able to enjoy it through the organ called the tongue. The tongue is an organ primarily responsible for detecting gustatory stimuli or taste. It is a part of the digestive system and facilitates the movement of food as you eat. It assists people in chewing, eating, and other important processes like speaking.
You may already know this, but the tongue is a muscle. To be completely accurate, the tongue is a group of muscles. The tongue consists of many different muscles that work together to do its job, contributing to important bodily processes through digestion and communication.
In the context of this project, however, you focus on the tongue’s ability to recognize taste and the factors that influence people’s perception of taste. The tongue, through nerves and sensory receptors, introduces taste to people and their brains.
Tastes are gustatory stimuli created by the elements that come in contact with the nerve endings on tongues. Tastebuds receive these elements as they come in contact with the tongue, testing the things people eat and drink. These tastebuds house receptors that remember the chemical substances that create tastes.
The Tastebud and Sensory Receptors
The taste receptors that recognize tastes are located in the taste buds on the tongue. The buds are arranged like the sections of an orange around a fluid-filled funnel; chemical substances wash into these funnels and provide the sensory input that becomes taste.
Microscopic hairs called the microvilli also help people process taste. These tiny hairs send messages to the brain. The brain then interprets these messages and signals that help identify tastes for the body.
Taste is an important stimulus, as it helps people recognize harmful substances. The sense of taste is vital to people’s survival, as it serves as a warning system of sorts. It helps people recognize good and poisonous food.
Identifying and recognizing tastes is your brain’s way of keeping you safe. It allows you to figure out what you have in your mouth. Have you ever experienced accidentally drinking milk that had turned sour? As the bad milk comes in contact with the taste buds, your brain starts to process what’s in your mouth and instantly recognizes it’s spoilt milk. The brain probably goes, “this is milk – but it kind of tastes funny; you might want to spit it out.”
As your brain gets a clear idea of the messages your tongue is sending to it, it starts to recognize that it’s dangerous, warning you about the problem. However, you may notice that certain things or circumstances dull your sense of taste. Cold food and drinks often mess with people’s ability to taste things. Another more interesting reason, at least in the context of this project, for a dulled sense of taste is a cold or a stuffy nose. This project will help you explore this curious circumstance.
The Nose and Scents
How do we smell things?
This question has been unanswered for most of humanity’s history. Scientists and experts, in fact, had only recently understood the processes at work as people detect scents through their noses.
The nose and the brain work together to process, recognize, and make sense of the many things they constantly detect. Invisible particles, chemicals, and substances float around constantly. The nose detects all these things consistently and remembers each one, allowing people to be familiar with the scents around them and intrigued or surprised by those they don’t remember.
The nose helps people survive, allowing them to recognize and find food. It helps people avoid danger as scents often act as a warning about which substances or chemicals are bad for a person’s health. Noses also help with tasting food. This is why food tastes bland whenever you have a stuffy nose.
The brain processes scents very differently from how it processes visual and auditory stimuli. When you see with your eyes, you are able to isolate the things that you see, separating colors and components. You are also able to isolate things you hear: when you listen to a band, you can hear the drums separately from the guitars, the bass, and the pianos. In contrast, the brain often detects mixtures of scents as a single thing, not isolating the various parts of the mixture. Coffee smells like coffee, not sugar, cream, and coffee.
The Olfactory Sensors
Inside your nose, up your nostrils, there are tiny sensory receptors called neurons. These neurons constantly communicate with each other, relaying messages to the brain as chemicals in the air come in contact with them.
These receptors are especially good at detecting scents. This is why they’re called olfactory neurons, as they facilitate olfaction, or the act of smelling. These neurons act like cables, relaying the messages they receive from outside to a region at the front of the brain called the olfactory bulb. The olfactory bulb then sends the message across the other parts of the brain, giving people an idea of what they’re smelling or detecting.
Olfactory sensors also play a massive part in highlighting flavor. As you put food into your mouth, chemicals called odorants – scents in food and other things – enter your nostrils and contribute to flavor. However, nostrils aren’t the only ways for the nose to detect scents.
The mouth connects with the nasal cavity around people’s throats and allows the olfactory receptors to receive olfactory stimuli from inside the mouth. This is another way how smelling the things you chew — and the food in your mouth — contributes to flavor.
The part of the brain that recognizes scents is also in charge of storing memories and provoking emotions. This is why people often associate memories and emotions with the things they smell.
The Battle of the Senses
As mentioned earlier, a stuffy nose is an impediment to flavor detection. For some reason, food and drinks don’t taste as strong or as sharp as they do whenever you have a cold or a nose that feels stuffed up.
This is because the tongue doesn’t bear sole responsibility for flavor. The tongue can’t take full credit for providing your brain with the signals for flavor. The nose contributes massively to flavor detection, arguably deserving just as much credit for the tastes and flavors you now know. This is the project’s objective: to settle the score between the nose and the tongue, this science fair entry seeks to discover which contributes more to flavor recognition.
The nose helps recognize flavor by smelling the food before it enters the mouth. The nose even detects scents as you chew and swallow the food you put in your mouth. In fact, it has been an intriguing phenomenon when the smell of food overpowers the gustatory signals the tongue sends to the brain and alters the flavor in some way. Apples start to taste like bananas, and strawberries can start tasting like lemons.
This activity will explore the many ways scents affect taste and which of the two senses bears a stronger influence over the mind as it recognizes flavor.
This science project takes loose inspiration from the old trick of smelling something stronger before you take a bite out of something else. It’s a classic way of messing with your brain that has been done at school to test the senses.
As an experiment fit for the science fair, this project will provide you with a more systematic way to gather data. The research design will require volunteers, so you may want to call a few friends. Essentially, the experiment will be somewhat similar to the old parlor trick that manipulates the senses but much more organized and systematic.
Surprisingly, professionals and scientists believe that flavor is 80% aroma and 20% taste. This project will put this conclusion to the test. Scientific studies believe that smell is the main determinant of flavor, contrary to what most people believe. This is because taste buds and sensory receptors on the tongue can only detect fundamental tastes, while the nose can detect a wider range of stimuli. These olfactory stimuli serve as a modifier of sorts to the tastes people detect in food, impacting flavor.
Through this science project, you will explore how smell and taste work together to create flavor. You will explore how this connection affects perception. In fact, the connection between these two senses is so strong that people are able to associate certain scents or aromas with flavor. Through this science fair entry, you will be able to test, experiment, and explore how the two senses interact to introduce flavor.
Through this experiment, you will be working to answer key questions and concepts regarding the science behind flavor. Here are some of the questions that will guide you through the experiment:
- Why are there differences in the taste of food?
- At what point in the eating process do we start tasting food?
- Which sense is more important to detecting flavor?
- How are the two senses, smelling and tasting, connected?
- What are the other factors that influence the perception of flavor?
- What are the ways to most effectively manipulate flavor?
These questions will guide you towards a better understanding of the machinations behind your senses and the dynamic at work between body parts and substances. The understanding you gain from this subject is relevant discoveries that you can apply in real-life scenarios.
Materials, Equipment, and Resources You Need
There will be quite a number of materials and resources needed for this experiment. Most of these materials and resources can be procured through a quick run to the grocery store, a pharmacy, and some online shopping. Save for the volunteers, the materials and equipment are readily accessible and available in stores.
Here is the complete list of the things you need for the project:
- Molecule-R Aroma r-Evolution Kit
- Several Spoons
- Several Mini plastic cups with lids (2 oz)
- Medicine dropper
- Cotton balls
- Blindfold
- Ten different foods
- Glasses of water
- Cardboard box
- Permanent marker
- Pens or pencils
- Lab notebook
Now don’t be intimidated by the name Molecule-R Aroma r-Evolution Kit – it’s not as complex as it sounds. It’s actually just a science kit available at Walmart and other online shops. For the spoons, you’ll need a lot of these, as the experiment has to be as sanitary and hygienic as possible. Prepare two spoons per food sample per volunteer. This way, a single volunteer has at least two spoons for each sample, ensuring a clean and hygienic experiment.
For the mini plastic cups with lids, you have to prepare a cup for each food and two for each scent. These mini cups are available both online and in physical stores around your proximity. You also have to prepare a lot of cotton balls as you will need two for each scent.
The blindfold can be anything that impedes vision, such as scarves, handkerchiefs, eye masks, or even swim goggles blacked out by some paper. The ten foods can be just about anything, but it’s always recommended to keep it simple. Perhaps stick to fruits, vegetables, sweets, and other simple and readily available and safe food.
Finally, when it comes to volunteers, make sure you follow and observe all science fair guidelines when it comes to dealing with human test subjects, as most committees are very strict with these activities. The recommended number of volunteers is ten, but you can always check a guide to help you decide on the number of volunteers you can have.
Molecule-R Aroma r-Evolution Kit
Perfect for aroma-based activities, the Molecule-R Aroma r-Evolution Kit is an ideal fit for this experiment. It provides you with four aromaforks, 21 different volatile aromas, four molecule-R droppers, and 50 diffusion pastilles. This innovative product by MoleculeR is an excellent resource for multi-sensory experiments for science fairs.
The experiment proper has three main parts: the food preparation, the flavor testing, and the analysis. This article will walk you through all these parts to ensure that you produce the best results for the science fair.
Since this experiment is all about food, one of the most important factors of the methodology is to keep everything fresh. You have two ways to go about this: the first is to prepare the ingredients on the night before the testing and store them in the refrigerator; the second is to prepare everything on the same day. There are pros and cons to both methods, so choose wisely.
If you prepare the food the night before, it is important to remember that food can easily lose flavor when stored for too long. On the other hand, preparing the ingredient on the same day will likely have you work more before the test.
Another important aspect of the project is preparing the food and the scent that will work together to manipulate your volunteer’s senses. As mentioned before, it’s always recommended to keep things simple, so here are some pairings that would work well for this experiment:
The food stated in the table above is easy to access or purchase, giving you an easier time with procurement and logistics. You can definitely make the changes you wish, but it’s important that you understand the method behind pairing food together.
The most important thing is to remember that when you pair food together, choose two things that have similar textures and temperatures. You will find that these two things help people identify the flavor. To find similarly-textured food, buy the ready-to-eat products in groceries or stores.
Preparing for the Food for the Flavor Test
As you finish deciding on which food and scents to use together, it’s time to organize for the flavor test. Start by putting all the food in separate cups. Make sure that as you fill up each cup, you use a clean spoon to avoid contaminating the taste of the food. Label each cup with a number and keep tabs on which number corresponds to each food through your notebook.
If the foods you prepared for the experiment have different textures, stir the food until they have the same consistency. Additionally, if you intend to do the preparation on the same day as the flavor test, keep all the food at room temperature to avoid loss of taste.
Preparing the Scents for the Flavor Test
Prepare the scents by setting a pair of cotton balls in a mini cup with lids. Put a single drop into the mini cup to capture the aroma within the cotton balls. The cotton should not be too wet, but the smell should still be noticeable. You may adjust the drops needed accordingly. Cover the mini cups with lids to keep the aromas inside, and number the mini cups with a marker. Keep track of the numbers corresponding to each scent by writing them down in your notebook.
Prepare a separate pair of cotton balls and put one drop of tap water on each of them. These cotton balls will serve as control smell samples, which you will use to test the preliminary taste of the food without the influence of a scent. Remember to rinse the droppers you use well to avoid cross-contamination between scents.
Assemble the food in a box, so your volunteers don’t see the food until the flavor test.
Performing the Taste Test
Before you perform the taste test, there are three important things to remember: first, the testing area should be free of any overpowering scents that may distract the volunteers from the experiment; second, the volunteers must not have any colds or any condition that may impede their ability to smell and taste; and third, the food must be at room temperature when executing the experiment, as the cold will reduce the strength of the taste.
You may try the test yourself first so you have a better understanding of the flow of the test and can better explain it to the volunteers. Be sure that when you recruit your volunteers, you disclose the time, the date, and how long the experiment will take.
List out all the flavors and scents on a printout and hand them over to your volunteers; make sure not to label which are scents and which are food, so they don’t know which is which. You may consider doing this alphabetically to avoid grouping the scents and the flavors together.
Prepare a table that will help you record responses and results for each volunteer throughout the test. The table can look like this:
| Sample # | Food Sample | Scent Sample | Volunteer Number |
| --- | --- | --- | --- |
| 1 | Food 1 | Scent 1 | |
| 3 | Food 2 | Scent 2 | |
| 5 | Food 3 | Scent 3 | |
| 7 | Food 4 | Scent 4 | |
| 9 | Food 5 | Scent 5 | |
As you do the test, fill out the areas where it says Food and Scent with an actual Food or Scent in the experiment. Each sample will have two parts: the flavor test with the food and the scent and another with just the food and the control scent or the cotton ball with the water.
Throughout the test, randomize the sequence of the food-scent pairing you use on the flavor test, so the volunteers don’t find a pattern in the experiment. Do the experiment with one volunteer in the room at a time to ensure that the others don’t hear the responses. Assign a number to each volunteer so you can present your findings anonymously.
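If you want a reproducible way to randomize the order, a minimal Python sketch follows; the pairing names are placeholders for whatever food and scent samples you actually chose.

```python
# Shuffle the food/scent pairings separately for each volunteer so that
# no volunteer can pick up a pattern in the presentation order.
import random

pairings = [("Food 1", "Scent 1"), ("Food 2", "Scent 2"),
            ("Food 3", "Scent 3"), ("Food 4", "Scent 4"),
            ("Food 5", "Scent 5")]

for volunteer in range(1, 11):   # e.g. ten volunteers
    order = pairings[:]          # copy, so each volunteer gets a fresh shuffle
    random.shuffle(order)
    print(f"Volunteer {volunteer}: {order}")
```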
Before you start, prepare an introduction and explanation of the experiment for your volunteer. Let them know that all the food and scents are neither unsafe nor disgusting and that you have taken all the means necessary to ensure sanitation and hygiene. Explain that the experiment is flavor testing, where they will be attempting to identify the flavor blindfolded through the taste and scent of the food they’re about to eat. After they taste the food, they will refer to the list you handed over to them to identify what they had just eaten.
Finally, blindfold the volunteer and only then bring out the box of flavors and scents. Right before you do the actual test, explain the process to the volunteer one more time and perform a dry run with empty spoons and cotton balls. The testing may become confusing at times as they will need to taste and inhale at the same time.
After the dry run, start with your first volunteer and repeat the steps below until you finish.
- Pick the food-scent pairing from your list and take the samples out of the box. Make sure you use a fresh spoon for each food and each volunteer.
- Ask the volunteer which hand they prefer to eat the food with and immediately let them know not to taste it yet. Let the volunteer know that as they taste the sample, you will be holding up cotton balls near their noses that may or may not contain a certain scent.
- Before they taste the sample, instruct them to exhale, empty their lungs, and not inhale again until you tell them to; you may need to practice this process before the testing starts.
- Once they’re ready, tell them to empty their lungs and hold the cotton balls as close to their nose as possible until the cotton balls touch their nostrils lightly. They should be holding their breath at this point and only inhaling upon instruction.
- Once the cotton balls are in place, you may tell the volunteer to start eating and inhaling. As soon as they start to do so, immediately ask them what flavor comes to mind first. Do not remove the cotton balls yet, as it is important that the cotton balls stay in place until the volunteer swallows the food. You may need to read the list out to them so they can remember and identify what flavor they’re experiencing.
- Proceed to record the results as needed in the table you created.
Before you move on to the next food/scent pairing, cleanse the volunteer’s taste buds with a glass of water and repeat the process. Repeat the process until you have tested all of the food you chose, both with and without a scent (using the cotton ball with water). Remember to use fresh spoons each time you give the volunteers a sample.
After you finish all the samples with one volunteer, thank them and move on to the next volunteer. Remember to use fresh spoons each time.
Analyzing Your Data
Now that you’ve finished your experimentation, there will be a lot of data to observe and analyze. Tally the responses that match the food’s real identity. Analyze whether the volunteers’ responses match the food’s real identity more frequently during test samples (with scents) or during control samples (without scents).
Record the total number of correct responses for food samples with extra scents and the number of correct responses in samples where there aren’t any scents. Visualize your findings using graphs to present your data.
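A minimal sketch of this tally-and-graph step is shown below. The counts are made-up placeholders for your own recorded responses, and it assumes the matplotlib library is installed.

```python
# Compare correct identifications with a masking scent vs. the water control.
import matplotlib.pyplot as plt

# Correct answers out of ten volunteers, per food sample (placeholder data).
with_scent    = {"Food 1": 3, "Food 2": 5, "Food 3": 2, "Food 4": 6, "Food 5": 4}
control_scent = {"Food 1": 9, "Food 2": 8, "Food 3": 9, "Food 4": 7, "Food 5": 8}

foods = list(with_scent)
x = range(len(foods))
plt.bar([i - 0.2 for i in x], [with_scent[f] for f in foods],
        width=0.4, label="With scent")
plt.bar([i + 0.2 for i in x], [control_scent[f] for f in foods],
        width=0.4, label="Control (water)")
plt.xticks(list(x), foods)
plt.ylabel("Correct identifications")
plt.legend()
plt.show()
```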
You may also count the times when the volunteers identified the scent instead of the food and the times when they both did not match. Extract all possible data from the experiment through an analysis of the responses.
Analyze what your findings tell you about the power of smells over the taste of the food and draw your conclusions. With the results in front of you, who do you think won the battle of the senses? Why do you think this is the case? How could this knowledge help the flavor of food in practice?
What other conclusions can you draw from the data you gathered through the experiment? Was there a particular scent that overpowered all the tastes? Was there a taste that dominated all the other scents? Which scents are more powerful than others? Did sweet scents dominate the other scents?
This experiment may yield other variations, ranging from different food-scent combinations to adding other elements into the equation, such as other factors that may influence flavor. What if you serve your volunteers the same food but with different textures? You may opt for chopped, blended, dried, and many more.
Perhaps explore the concept called phantom aromas. Studies have concluded that ham and beef odors make people think that food is saltier. On the other hand, vanilla scents are automatically associated with sweet things, fooling people into thinking that food has more sugar than there actually is.
This science experiment proves that there is more to what our senses know. Always keep an open mind about the world and approach every inquiry with curiosity. If this experiment interests you, you may want to explore a career in food science, opening doors to becoming a food scientist or technologist, a food science technician, a dietitian or nutritionist, and even a neurologist.
There is much in store for STEM-passionate children. It is their curiosity that pushes humanity forward, opening opportunities for progress and innovation. |
A virus is a tiny disease-causing agent. Viruses are not cells and are also smaller than cells. They carry a programme inside them, similar to a computer. Viruses do not change or multiply outside a living body; they can only do that inside the cell of an animal or a plant. Scientists therefore disagree about whether viruses count as living organisms or not.
Viruses cannot move on their own. Under favourable circumstances, they can float in the air, enclosed in tiny water droplets. Then they belong to the aerosols. Mostly, however, they live in a moist place, for example on our mucous membranes. They are found, for example, in the mouth, nose and intestines.
There they penetrate the cells. Once they are inside, they wake up. They use the machinery that the cell actually needs for itself and multiply. If there are a lot of viruses, the cell usually dies, which releases the viruses again. Now, sooner or later, they stick to new cells and the whole thing starts all over again.
Some viruses cause colds or flu. When you cough or sneeze, some of the viruses are thrown out of your nose or mouth along with fine droplets of mucus. If someone else then breathes in these mucus droplets with the viruses, they also stick to the mucous membrane of that person. You can also get the viruses on your hands and bring them there when you touch your nose. You can also pass them on when you shake hands or leave them on a door handle, for example. From there, the next person takes them with them. The coronavirus is also transmitted in these ways.
Other viruses spread through food and even through contact with other people's blood. The HI virus, known by the abbreviation HIV, is one of these. It can also be transmitted during sex, because there are also many mucous membranes in the sexual organs.
Viruses can trigger very different diseases. These include coughs, colds and diarrhoea. The treatment of the disease is completely different. It depends on which type of virus someone is ill with. However, the medicines known so far mainly combat complaints such as headaches or fever. There is no actual medication against viruses. For bacteria, on the other hand, there are antibiotics. However, they are not effective against viruses. |
What is Autism Spectrum Disorder, and what are its symptoms? Autism Spectrum Disorder (ASD) is a developmental disorder that affects communication, social interaction, and behavior. It is called a “spectrum” disorder because the symptoms and their severity can vary widely among individuals.
Autism Spectrum Disorder Symptoms
The symptoms of ASD typically appear in early childhood, and may include difficulties with:
Social Interaction Difficulties
Individuals with ASD may struggle to make and maintain social relationships. They may have difficulty understanding social cues, such as facial expressions and tone of voice. They may also have trouble with nonverbal communication, such as gestures and body language.
Communication Difficulties

People with ASD may have difficulty with verbal and nonverbal communication. They may not speak at all or may have delayed language development. Autistic kids also have a tendency to repeat phrases or words. In addition, they may struggle to understand jokes, sarcasm, and figurative language.
Repetitive Behaviors

Autistic kids may engage in repetitive behaviors or routines. This could include things like rocking back and forth, hand flapping, or lining up objects in a specific way. They may also have an intense interest in a particular topic or activity.
Sensory Issues

Many individuals with ASD have sensory issues, such as being oversensitive or undersensitive to certain stimuli, like sounds, lights, textures, or smells.
Difficulty with Change
People with ASD may have difficulty with change, especially unexpected changes in routine. They may become upset or anxious if their daily schedule is disrupted.
Mental Health Issues
In addition to emotional sensitivity and regulation, some individuals with ASD may also experience mental health conditions such as anxiety, depression, and attention-deficit/hyperactivity disorder (ADHD). These conditions can further complicate emotional well-being and make it harder for individuals with ASD to manage their emotions and behaviors.
It is important to note that while ASD is a lifelong condition, early intervention, and support can greatly improve outcomes for individuals with this disorder. Also, you should note that the severity of these symptoms can vary widely between individuals. Some people with ASD may have mild symptoms, while others may have more severe symptoms that can greatly impact their daily lives.
Do you know anyone with Autism Spectrum Disorder?
The Society of the South in the Early Republic
THEMATIC FOCUS: Geography and the Environment (GEO)
Geographic and environmental factors, including competition over and debates about natural resources, shape the development of America and foster regional diversity. The development of America impacts the environment and reshapes geography, which leads to debates about environmental and geographic issues.
Learning Objective M
Explain how geographic and environmental factors shaped the development of the South from 1800 to 1848.
In the South, although the majority of Southerners owned no slaves, most leaders argued that slavery was part of the Southern way of life.
Southern business leaders continued to rely on the production and export of traditional agricultural staples, contributing to the growth of a distinctive Southern regional identity.
As overcultivation depleted arable land in the Southeast, slaveholders began relocating their plantations to more fertile lands west of the Appalachians, where the institution of slavery continued to grow. |
Panic disorder, anxiety disorder characterized by repeated panic attacks that lead to persistent worry and avoidance behaviour in an attempt to prevent situations that could precipitate an attack. Panic attacks are characterized by the unexpected, sudden onset of intense apprehension, fear, or terror and occur without apparent cause. Panic attacks often occur in people with breathing disorders such as asthma and in people experiencing bereavement or separation anxiety. While about 10 percent of people experience a single panic attack in their lifetimes, repeated attacks constituting panic disorder are less common; the disorder occurs in about 1–3 percent of people in developed countries. (The incidence in developing countries is unclear due to a lack of diagnostic resources and patient reporting.) Panic disorder typically occurs in adults, though it can affect children. It is more common in women than men, and it tends to run in families.
The underlying cause of panic disorder appears to arise from a combination of genetic and environmental factors. One of the most significant genetic variations that has been identified in association with panic disorder is mutation of a gene designated HTR2A (5-hydroxytryptamine receptor 2A). This gene encodes a receptor protein in the brain that binds serotonin, a neurotransmitter that plays an important role in regulating mood. People who possess this genetic variant may be susceptible to irrational fears or thoughts that have the potential to induce a panic attack. Environmental and genetic factors also form the basis of the suffocation false alarm theory. This theory postulates that signals about potential suffocation arise from physiological and psychological centres involved in sensing factors associated with suffocation, such as increasing carbon dioxide and lactate levels in the brain. People affected by panic disorder appear to have an increased sensitivity to these alarm signals, which produce a heightened sense of anxiety. This increased sensitivity results in the misinterpretation of nonthreatening situations as terrifying events.
Altered activity of neurotransmitters such as serotonin can give rise to depression. Thus, there exists a close association between panic disorder and depression, and a large percentage of persons suffering from panic disorder go on to experience major depression within the next few years. In addition, about 50 percent of people with panic disorder develop agoraphobia, an abnormal fear of open or public places that are associated with anxiety-inducing situations or events. Panic disorder also may coincide with another anxiety disorder, such as obsessive-compulsive disorder, generalized anxiety disorder, or social phobia.
Because persistent worry and avoidance behaviour are major characteristics of panic disorder, many patients benefit from cognitive therapy. This form of therapy typically consists of developing skills and behaviours that enable a patient to cope with and to prevent panic attacks. Exposure therapy, a type of cognitive therapy in which patients repeatedly confront their fears, becoming desensitized to their fears in the process, can be effective in panic disorder patients who are also affected by agoraphobia. Pharmacotherapy can be used to correct for chemical imbalances in the brain. For example, tricyclic antidepressants, such as imipramine and desipramine, are effective treatments for panic disorder because they increase the concentrations of neurotransmitters at nerve terminals, where the chemicals exert their actions. These agents may also provide effective relief of associated depressive symptoms. Other agents, including benzodiazepines, monoamine oxidase inhibitors (MAOIs), and serotonin reuptake inhibitors (SRIs), also can be effective in treating both anxiety- and depression-related symptoms.
A combination is a way of choosing elements from a set in which order does not matter.
Consider the following example: Lisa has 12 different ornaments and she wants to give 5 ornaments to her mom as a birthday gift (the order of the gifts does not matter). How many ways can she do this?
We can think of Lisa giving her mom a first ornament, a second ornament, a third ornament, etc. This can be done in 12 × 11 × 10 × 9 × 8 ways. However, Lisa’s mom is receiving all five ornaments at once, so the order Lisa decides on the ornaments does not matter. There are 5! = 120 reorderings of the chosen ornaments, implying the total number of ways for Lisa to give her mom an unordered set of 5 ornaments is (12 × 11 × 10 × 9 × 8)/5! = 792.
Notice that in the answer, the arguments of the factorials in the denominator sum to the value in the numerator. This is not a coincidence. In general, the number of ways to pick k unordered elements from an n-element set is n!/(k!(n−k)!). This is a binomial coefficient, denoted C(n, k), or "n choose k".
1. How many ways are there to arrange 3 chocolate chip cookies and 10 raspberry cheesecake cookies into a row of 13 cookies?
Solution: There are 13 × 12 × 11 ways to choose 3 distinct positions for the chocolate chip cookies. However, the order of these chocolate chip cookies doesn't matter. Hence, we divide by 3!, to obtain (13 × 12 × 11)/3! = 286 ways.
2. How many ordered non-negative integer solutions are there to the equation x₁ + x₂ + x₃ + x₄ = 10?
Solution: To solve this problem, we use a technique called "Stars and Bars", which was popularized by William Feller. We create a bijection between the solutions to x₁ + x₂ + x₃ + x₄ = 10 and sequences of 13 digits, consisting of ten 1's and three 0's. Given a set of four integers whose sum is 10, we create a sequence that starts with x₁ 1's, then has a 0, then has x₂ 1's, then has a 0, then has x₃ 1's, then has a 0, then has x₄ 1's. Conversely, given such a sequence, we can set x₁ to be equal to the length of the initial string of 1's (before the first 0), set x₂ equal to the length of the next string of 1's (between the first and second 0), set x₃ equal to the length of the third string of 1's (between the second and third 0), and set x₄ equal to the length of the fourth string of 1's (after the third 0). It is clear that such a procedure returns the starting set, hence we have a bijection.
Now, it remains to count the number of such sequences. We pick 3 of the 13 positions for the 0's and the remaining positions are 1's. Hence, there are C(13, 3) = 286 such sequences.
3. Both of the previous answers are 286. Is this a happy coincidence?
In the second question, we gave a bijection between solutions to x₁ + x₂ + x₃ + x₄ = 10 and sequences of length 13 with ten 1's and three 0's. For the first question, we can also create a bijection between the arrangements of cookies and the sequences of length 13 with ten 1's and three 0's, by letting each raspberry cheesecake cookie be a 1, and each chocolate chip cookie be a 0. Therefore, we have a bijection between solutions to the first and second questions, which explains why they have the same answer!
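To see the bijection in action, here is a small brute-force sketch in Python (it assumes the equation x₁ + x₂ + x₃ + x₄ = 10 as reconstructed above). It counts the cookie rows, the integer solutions, and the 0/1 sequences, and confirms all three counts are 286.

```python
# Brute-force confirmation of problems 1-3 and the Stars and Bars bijection.
from itertools import product
from math import comb

# Problem 1: rows of 13 cookies, choosing 3 positions for chocolate chip.
rows = comb(13, 3)

# Problem 2: ordered non-negative integer solutions to x1+x2+x3+x4 = 10.
solutions = [x for x in product(range(11), repeat=4) if sum(x) == 10]

# Problem 3: the bijection -- each solution maps to a distinct string of
# ten 1's separated by three 0's (total length 13).
def to_sequence(x):
    return "0".join("1" * xi for xi in x)

sequences = {to_sequence(x) for x in solutions}
assert all(len(s) == 13 for s in sequences)

print(rows, len(solutions), len(sequences))  # 286 286 286
```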
The cell cycle allows multicellular organisms to grow and divide and single-celled organisms to reproduce.
- Explain the role of the cell cycle in carrying out the cell’s essential functions
- All multicellular organisms use cell division for growth and the maintenance and repair of cells and tissues.
- Single-celled organisms use cell division as their method of reproduction.
- Somatic cells divide regularly; all human cells (except for the cells that produce eggs and sperm) are somatic cells.
- Somatic cells contain two copies of each of their chromosomes (one copy from each parent).
- The cell cycle has two major phases: interphase and the mitotic phase.
- During interphase, the cell grows and DNA is replicated; during the mitotic phase, the replicated DNA and cytoplasmic contents are separated and the cell divides.
- somatic cell: any normal body cell of an organism that is not involved in reproduction; a cell that is not on the germline
- interphase: the stage in the life cycle of a cell where the cell grows and DNA is replicated
- mitotic phase: the stage in the life cycle of a cell during which the replicated DNA and cytoplasmic material are divided into two identical cells
Introduction: Cell Division and Reproduction
A human, as well as every sexually-reproducing organism, begins life as a fertilized egg or zygote. Trillions of cell divisions subsequently occur in a controlled manner to produce a complex, multicellular human. In other words, that original single cell is the ancestor of every other cell in the body. Once a being is fully grown, cell reproduction is still necessary to repair or regenerate tissues. For example, new blood and skin cells are constantly being produced. All multicellular organisms use cell division for growth and the maintenance and repair of cells and tissues. Cell division is tightly regulated because the occasional failure of regulation can have life-threatening consequences. Single-celled organisms use cell division as their method of reproduction.
Cell Division and Growth: A sea urchin begins life as a single cell that (a) divides to form two cells, visible by scanning electron microscopy. After four rounds of cell division, (b) there are 16 cells, as seen in this SEM image. After many rounds of cell division, the individual develops into a complex, multicellular organism, as seen in this (c) mature sea urchin.
While there are a few cells in the body that do not undergo cell division, most somatic cells divide regularly. A somatic cell is a general term for a body cell: all human cells, except for the cells that produce eggs and sperm (which are referred to as germ cells), are somatic cells. Somatic cells contain two copies of each of their chromosomes (one copy received from each parent). Cells in the body replace themselves over the lifetime of a person. For example, the cells lining the gastrointestinal tract must be frequently replaced when constantly “worn off” by the movement of food through the gut. But what triggers a cell to divide and how does it prepare for and complete cell division?
The cell cycle is an ordered series of events involving cell growth and cell division that produces two new daughter cells. Cells on the path to cell division proceed through a series of precisely timed and carefully regulated stages of growth, DNA replication, and division that produces two identical (clone) cells. The cell cycle has two major phases: interphase and the mitotic phase. During interphase, the cell grows and DNA is replicated. During the mitotic phase, the replicated DNA and cytoplasmic contents are separated and the cell divides.
The Cell Cycle: The cell cycle consists of interphase and the mitotic phase. During interphase, the cell grows and the nuclear DNA is duplicated. Interphase is followed by the mitotic phase. During the mitotic phase, the duplicated chromosomes are segregated and distributed into daughter nuclei. The cytoplasm is usually divided as well, resulting in two daughter cells.
Most people know what the outside of a volcano looks like, but not many can describe what they’re like on the inside.
That’s something that has captivated Emilie Hooft, a physicist-turned-geologist who uses seismic imaging techniques to map the deep system of pathways that transport magma to the earth’s surface.
“What exactly does the plumbing system beneath a volcano look like?” Hooft wonders. “How are things organized underground? Is it a big vat? A flat thing? A long column? A bunch of layers of magma sills? To some extent that’s all up for grabs. It used to be that people thought there were big magma chambers under volcanoes, but it turns out they are often quite small.”
In her talk, Hooft will detail her cutting-edge research techniques — which she likens to a CAT scan for volcanoes — and zero in on two recent projects: one involving seismic instruments set on land around Central Oregon’s Newberry Volcano and one requiring a floating research vessel and underwater seismometers on the floor of the Aegean Sea covering Greece’s Santorini volcano.
Hooft has studied all kinds of volcanoes: everything from arc volcanoes like the ones in the Oregon Cascades, to mid-ocean ridges where volcanic melt rises to fill a gap, to deep mantle plumes like the volcanoes of Hawaii or Iceland. She likes to say, simply, that she studies volcanoes and she uses physics to do so.
The seismic techniques Hooft employs require a firm understanding of physical properties. By sending sound waves into the earth and measuring their rate of travel, she is able to turn sound into visuals. If the rocks are hot or broken, sound travels more slowly; through cold, intact rock it travels faster.
She captures that data, interprets its meaning and creates three-dimensional models of the nooks and crannies in the subsurface, sometimes with the aid of digital artists.
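As a toy illustration of the principle (not Hooft's actual processing chain; the geometry and travel times below are invented), the problem can be posed as a least-squares inversion: each ray's travel time is the sum over subsurface cells of path length times slowness (the reciprocal of velocity), and solving for the slowness of each cell reveals which regions are slow, i.e. hot or fractured.

```python
# Toy travel-time tomography: invert ray travel times for cell velocities.
# Real surveys use thousands of rays, curved ray paths and regularization;
# all numbers here are hypothetical.
import numpy as np

# Rows: rays, columns: subsurface cells; entries are path lengths (km).
L = np.array([
    [4.0, 0.0],   # ray 1 samples only cell 1
    [0.0, 4.0],   # ray 2 samples only cell 2
    [2.5, 2.5],   # ray 3 crosses both cells
])
t = np.array([0.80, 1.14, 1.21])  # measured travel times (s), slightly noisy

# Least-squares solve for slowness (s/km), then convert to velocity (km/s).
slowness, *_ = np.linalg.lstsq(L, t, rcond=None)
print(1.0 / slowness)  # cell 2 is slower: hotter or more broken rock
```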
The techniques also differ widely, depending on the volcano. In the case of the Newberry volcano, she relied on almost a hundred instruments set in 300- and 800-meter increments across the land to record one explosive charge. In Santorini, her team submerged 90 instruments on the seafloor and, as a sound source, used massive compressors on the National Science Foundation’s largest and most advanced research vessel.
“The value of the research is that you can better understand, from a societal or hazards point of view, what’s there underground,” Hooft said. “If you have more information about what’s underground you can try to predict what might happen when the volcano becomes restless.”
Additionally, the research helps scientists better understand how the earth functions, how big magma systems assemble themselves, and how they reset and regrow after major volcanic episodes.
“These volcanoes are where most of the continental crust is actually cooked and made,” Hooft said. “But even as scientists are getting a better idea of the chemical steps that form the continental crust, we still don’t really know in what kinds of vessels these reactions are happening.”
Source: University of Oregon |
The Old Testament is also called ‘Ta•nach.’ In Hebrew this is not a name but an acronym formed from the initials of the three sections of the Old Testament: Torah (the Law); ‘Ne•vi•eem’ (Prophets); and Ke•tu•vim (Writings). The Torah comprises the five books of Moses (the Pentateuch), and ‘De•va•rim’ (Deuteronomy) is its fifth and last book, following ‘Be•re•sheet’ (Genesis); ‘Shemot’ (Exodus); ‘Va•ik•ra’ (Leviticus); and Be•mid•bar (Numbers).
Each of the Torah books is named after either the very first word in that book (as in Genesis, where the very first word ‘be•re•sheet’ is the name of the book) or the second word (as in Exodus, where the name is the second word of the verse), with one exception: the book Be•mid•bar (Numbers) gets its name from the fifth word of its first verse.
The Book of ‘De•va•rim,’ ‘Deuteronomy,’ means ‘words.’ It can also mean ‘things’ or ‘essences.’ Note that in the first verse of this book, the translation of the word ‘de•va•rim’ may actually carry any one of these meanings.
“These are the words which Moses spoke to all Israel on this side of the Jordan in the wilderness, in the Arabah opposite the Red Sea, between Paran, and Tophel, and Laban, and Hazeroth, and Dizahab” |
In the United States and in Europe, hydraulic fracturing is one of the most controversial aspects of the shale gas debate. The dialogue of the deaf over its environmental impacts generates many overstatements from both camps. The arguments put forward by the two sides deserve attention, but it would be a mistake to stop there: research is already in motion and new techniques are emerging.
Hydraulic fracturing is used for the production of hydrocarbons but also for deep geothermal exploration. It was implemented for the first time in 1947 in Kansas. Two years later, the first commercial fracturing treatments were conducted in oil wells in Oklahoma; but it is only with the massive exploitation of shale gas during the last decade that the process has become extremely widespread. In 2008, over 50,000 well fractures were carried out around the world, and it is estimated that more than one in two wells drilled today undergoes a fracturing treatment.
Hydraulic fracturing is the high-pressure injection of a fluid into a wellbore at a specified depth. When the pressure applied by the fluid overcomes the lithostatic load (the weight of the rock above the point where the pressure is applied) and the local strength of the rock, a fracture is created that can extend over several hundred meters, provided that enough fluid is injected to maintain sufficient pressure to sustain the load. During the process, a proppant (generally grains of sand or ceramic) is injected to prevent the crack from closing. The injected water contains additives suited to the type of rock encountered, to facilitate the fracturing operation and to keep the created cracks open. These cracks act as drains, granting access to volumes of rock located far from the wellbore but close enough to the created drain.
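As a back-of-the-envelope illustration of that pressure balance (a sketch only; the density, depth and strength values below are assumptions for a generic shale play, not figures from this article):

```python
# Rough estimate of the fluid pressure needed to initiate a fracture:
# it must overcome the lithostatic load (weight of overlying rock) plus
# the tensile strength of the rock. Illustrative values only.
rho_rock = 2500.0        # overburden density, kg/m^3 (assumed)
g = 9.81                 # gravitational acceleration, m/s^2
depth = 2500.0           # target depth, m (assumed)
tensile_strength = 5e6   # rock tensile strength, Pa (assumed)

lithostatic = rho_rock * g * depth            # ~61 MPa
breakdown = lithostatic + tensile_strength    # ~66 MPa

print(f"lithostatic load: {lithostatic / 1e6:.1f} MPa")
print(f"fracture initiation pressure: ~{breakdown / 1e6:.1f} MPa")
```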
Hydraulic fracturing was first applied to conventional geological reservoirs. However, its use in low-permeability fields called Tight Gas Reservoirs (TGRs, a thousand times less permeable than conventional reservoirs) has faced real problems. TGRs contain gas that offers a very limited recovery rate with conventional methods: from 3 to 10% of the hydrocarbons. Hydraulic fracturing can increase this performance. The extracted gas originates from a volume of rock near the surface of the crack, through which the gas migrates due to the difference in pressure. Gas production consists of draining this zone where permeability is low.
This technique has its own limits: performance is increased, but once the drainage is carried out, production undergoes a very rapid decline. The gas trapped between the drained areas remains inaccessible.
Shale formations combine two difficulties: the low permeability of the rock (i.e. its permeability is insufficient to allow significant fluid flow) and a natural heterogeneity of the environment.
Shale is a clastic sedimentary rock that has been consolidated during the geological evolution of the Earth's crust. The organic material that it originally contained transformed into kerogen, a source of hydrocarbons - in our case, of “thermogenic” methane (generated by an increase of temperature and pressure). Accumulated by sedimentation, shales have a layered structure similar to that of slate (they are called anisotropic). The gas is stored between the layers. Thus, shales are both the source rock and the reservoir from which the gas is extracted.
During their geological evolution, these rocks develop what one could call “natural” fractures. Indeed, shale formations are even less homogeneous than TGRs or conventional reservoirs. Their physical properties, e.g. their permeability, depend on the stratigraphy, and they contain cracks at multiple scales: at the scale of a layer (the microscopic scale) but also over several tens of meters (the macroscopic scale). At the macroscopic level, it is quite common to observe crack planes (e.g. perpendicular to the layers) with a fairly regular spacing of a few meters. These faults can be as large as several tens of meters.
The purpose of hydraulic fracturing is not only to create a macro-fracture (a drain); it is also to connect and reactivate the initial cracks at all scales in order to release the gas. Thus, the interactions between the cracks created by hydraulic fracturing and this pre-existing network must also be taken into account. The created crack will be much more permeable than the secondary network, so the pressurized fluid penetrates the existing network only with difficulty, and the pressure will not always be high enough to reactivate it.
The questions raised by hydraulic fracturing concern primarily two aspects. First, the rise of methane to the ground surface or to water tables has fueled public debate, but the extent of the phenomenon is still being discussed. It seems, for example, that the famous movie Gasland confused biogenic methane, found at the surface, with thermogenic methane formed in the depths of the ground. In any case, the question arises especially at the level of wells and drill pipes.
The second aspect requires more attention: the water used during the fracturing process contains chemical additives that could contaminate water tables. Beyond the debate between opponents and defenders of shale gas extraction, who tend to exaggerate or minimize risks, there is a public health issue that cannot be overlooked or denied. Under these circumstances and while waiting for the necessary feedback from experience, the alternatives to hydraulic fracturing must be carefully examined.
A first option is to change the fracturing fluid. The penetration of the fracturing fluid within the network of existing cracks depends directly on its viscosity. One can easily imagine that by reducing this parameter, fluids will penetrate more easily into the existing cracks and apply enough pressure to reactivate them. Granting this principle, the problem comes down to finding the “right” fluid. There are many candidates: propane, nitrogen, carbon dioxide... In its liquid state, CO2 has a viscosity ten times lower than water; in the supercritical state (the normal conditions in a reservoir), its viscosity is even lower. Beyond a decrease in viscosity, the interaction with the rock is also important. If the fluid adsorbs onto the rock preferentially to methane, it will improve the extraction of the gas present at the surface of the rock in the secondary network.
Each solution has benefits, beyond the simple fact that the fracturing fluid is no longer water and that strictly speaking, one can no longer speak of “hydraulic fracturing”. Nitrogen is not harmful for the environment, and using carbon dioxide can help store it at the same time.
There are also disadvantages and difficulties: replacement fluids are more compressible than water, which makes the process less efficient; CO2 can recombine with water and form a corrosive acid that attacks the surrounding carbonate rocks. It can also provoke a swelling of the rock and decrease its permeability. It is not certain that the secondary network will not close once the fracturing fluid is evacuated. Keeping this secondary network open remains an unresolved issue.
The second approach is dynamic loading. Under static loading, the crack surface created in a material is proportional to the energy transferred to the volume of material that breaks. Dynamic loading brings a large amount of energy to a small volume of material; there is so much energy in this volume that a large area of cracks is created. As the loading wave spreads through the material, it causes fragmentation, thereby connecting the pre-existing and newly created networks of cracks.
Dynamic loading can be induced, for example, by explosives placed at the bottom of wells or by electrical impulses, an original technique inspired by tunnel drilling methods. The load applied to the rock in the proximity of the drilling site is a pressure wave generated by an electrical discharge between two electrodes placed in a wellbore filled with water. The amplitude of this pressure wave can reach up to 200 MPa (2,000 times atmospheric pressure) while its duration is around a hundred microseconds. This pressure wave is transmitted to the rock by the fluid inside the wellbore and creates micro-cracks whose density decreases with distance from the well. Models indicate rock permeability increases only up to several meters from the wellbore.
Electric pulse fracturing could facilitate the reactivation of existing cracks by focusing more easily on the rock volumes concerned and avoiding large water requirements. However, the relevance of this process remains to be demonstrated.
The two approaches that we have just mentioned are considered to have a moderate environmental impact: water needs and wastewater reprocessing are much more limited. Regarding the additives contained in the water used during fracturing, it is very likely that in the near future chemistry will offer substitutes, as in the case of guar gum, used as a gelling agent for fracturing fluids and also used in the agri-food industry. Dynamic fracturing can also help confine the volume of rock cracked, thus reducing the risk of accidental connection between the network of cracks and its surroundings.
However, these techniques do not prevent possible gas leaks or contamination of subsurface aquifers related to leaks in the wellbore. These two issues depend on good control of wellbore construction (high-quality casing and sealing) and of the extraction process; this applies even to conventional resources in non-fractured wells. In this area also, research is very active.
Are we going to achieve environmentally acceptable production processes that are economically viable? This prospect is neither science fiction nor a distant possibility. Scientific literature grows month after month with new results from Japanese, American, Chinese and European laboratories.
The use of supercritical CO2 for heavy oil recovery is already a reality; there are nearly a hundred pilot projects in the United States alone. Its extension to shale gas or even coal gas could be a realistic line of research (there are several pilot projects of coal gas extraction based on this principle).
The use of fragmentation by electrical pulses is the subject of several international patents and has whetted the interest of oil companies.
There are also other alternatives: for instance, heating the rock mass (as for shale oils) or exploiting the effects of bacterial flora at the bottom of the well.
To this already extensive literature, one must add all the work aimed at better understanding and optimizing hydraulic fracturing while reducing its environmental impact, specifically its water consumption and water reprocessing.
In every country possessing shale gas, there is a great temptation to reproduce the shale gas revolution experienced by the United States. Europe possesses this unconventional type of resource, especially France and Poland. However, there is a consensus that the production means, as well as the legal and socio-economic conditions, are so drastically different in Europe that the old continent will not experience a comparable boom for another 5 to 10 years. This period should be used to optimize the fracturing processes and develop new alternative techniques. In fact, this is quite timely, because everyone agrees that no alternative technique will be industrially viable for at least five years.
The two approaches discussed in this article require lab studies, but also and above all the implementation of on-site validation procedures: the creation of underground laboratories equivalent to those of the nuclear energy field in France and Switzerland. This would imply drilling in perfectly known rock formations to conduct full-scale experiments with enhanced instrumentation and total transparency in terms of environmental impact. It is indeed important to have testing facilities beyond the laboratory, as well as pre-industrial pilots that define a stage of evaluation and industrial feasibility that numerical simulation alone cannot achieve. At this stage, and given the need to deploy such considerable means, only a national or European initiative involving public and private actors will be able to create a research infrastructure equal to the challenge.
Students of English often complain about the difficulty of learning phrasal verbs. Simply put, a phrasal verb is a combination of a verb (an action word like look, take, set) and a preposition (a short connecting word like up, out, over) in which the preposition gives the verb a new meaning. In this sense, we can say that the meaning is idiomatic – in other words the phrase can’t be translated word by word but only by looking at the phrase as a whole.
Sometimes verb + preposition combinations are not idiomatic, as in the phrase listen to. To is simply the preposition that’s required after the verb listen if you want to say what it is you’re listening to, as in: She’s listening to the radio.
Sometimes a single phrasal verb can have both a literal, non-idiomatic meaning and one or more idiomatic or figurative meanings. For example, if you want to see the moon you have to look up at the sky. The word up here is used as a kind of adverb (adverb particle is the technical term) and it doesn’t really change the meaning of the verb look — it just tells us the direction you’re looking.
However, when you don’t know the meaning of a word and you look up the word in a dictionary, there’s nothing directional about the word up. Look in this phrase still means use your eyes, but the meaning of the phrase as a whole has a very specific focus – searching for information in a reference book or online.
There are some grammatical issues with phrasal verbs – can another word come between the verb and preposition or not? – but learning how to use phrasal verbs is best accomplished the same way that you go about learning any new vocabulary.
How to Learn Phrasal Verbs:
1. Read and listen. When you see or hear a phrasal verb you don’t know, write it down. But don’t just write down the verb and the preposition, copy the whole sentence. Understanding the context – how the phrase is used with the other words in the sentence – is what will make it possible for you to use the phrase yourself in the future.
2. Find out the meaning in that specific context. This is where a teacher or native English speaker can save you time, because there is often more than one meaning for each phrasal verb, but if you’re on your own, look it up in a dictionary and decide which definition fits best in context.
3. Practice it in conversation and/or writing. Get feedback from a teacher or native English speaker about whether or not you’re using it the way native speakers do.
4. Study your list of phrasal verbs and keep adding to the list. If you find a phrasal verb from your list used in a new way, write down the new example.
Why is learning phrasal verbs in context better than learning them from a dictionary or book about phrasal verbs? Four reasons.
1. You can be sure you’re learning the most common uses of the most common phrasal verbs first. You don’t want to waste your time learning the more obscure uses.
2. They will be easier to remember. Dictionary.com has 15 different phrasal verbs based on the verb set (set in, set off, set out, etc.) and 15 different meanings for just the single phrasal verb set up – and the meanings vary widely. If you try to learn them all together, it’ll be too difficult to remember each separate meaning. Take them one at a time, in context.
3. When you’re learning phrasal verbs in context, through reading and listening, you’re learning a lot of other things about English as well, including other vocabulary words and grammatical structures.
4. It’s much more interesting to learn from stories and conversation than from printed lists. And the fact that you’re interested in the context will make it much easier to remember the phrasal verb later.
If you have any questions about the meaning of specific phrasal verbs, or if you have your own tips on how to learn phrasal verbs, leave a comment here! |
When dried, for example in a water-free solvent, the sensor material turns purple. (Image: UAM, Verónica García Vegas)
A new, versatile plastic-composite sensor can detect tiny amounts of water. The 3D-printable material, developed by a Spanish-Israeli team of scientists, is cheap, flexible and non-toxic and changes its colour from purple to blue in wet conditions. The researchers, led by Pilar Amo-Ochoa from the Autonomous University of Madrid (UAM), used DESY’s X-ray light source PETRA III to understand the structural changes within the material that are triggered by water and lead to the observed colour change. The development opens the door to the generation of a family of new 3D-printable functional materials, as the scientists write in the journal Advanced Functional Materials (early online view).
In many fields, from health to food quality control, environmental monitoring and technical applications, there is a growing demand for responsive sensors which show fast and simple changes in the presence of specific molecules. Water is among the most common chemicals to be monitored. “Understanding how much water is present in a certain environment or material is important,” explains DESY scientist Michael Wharmby, co-author of the paper and head of beamline P02.1 where the sensor-material was examined with X-rays. “For example, if there is too much water in oils they may not lubricate machines well, whilst with too much water in fuel, it may not burn properly.”
The functional part of the scientists’ new sensor-material is a so-called copper-based coordination polymer, a compound with a water molecule bound to a central copper atom. “On heating the compound to 60 degrees Celsius, it changes colour from blue to purple”, reports Pilar Amo-Ochoa. “This change can be reversed by leaving it in air, putting it in water, or putting it in a solvent with trace amounts of water in it.” Using high-energy X-rays from DESY's research light source PETRA III at the experimental station P02.1, the scientists were able to see that in the sample heated to 60 degrees Celsius, the water molecule bound to the copper atoms had been removed. This leads to a reversible structural reorganisation of the material, which is the cause of the colour change.
“Having understood this, we were able to model the physics of this change,” explains co-author José Ignacio Martínez from the Institute for Materials Science in Madrid (ICMM-CSIC). The scientists were then able to mix the copper compound into a 3D printing ink and printed sensors in several different shapes which they tested in air and with solvents containing different amounts of water. These tests showed that the printed objects are even more sensitive to the presence of water than the compound by itself, thanks to their porous nature. In solvents, the printed sensors could already detect 0.3 to 4 per cent of water in less than two minutes. In air, they could detect a relative humidity of 7 per cent.
If it is dried, either in a water-free solvent or by heating, the material turns back to purple. A detailed investigation showed that the material is stable even over many heating cycles, and the copper compounds are evenly distributed throughout the printed sensors. The material is also stable in air for at least one year, and at biologically relevant pH values from 5 to 7. “Furthermore, the highly versatile nature of modern 3D printing means that these devices could be used in a huge range of different places,” emphasises co-author Shlomo Magdassi from The Hebrew University of Jerusalem. He adds that the concept could be used to develop other functional materials as well.
“This work shows the first 3D printed composite objects created from a non-porous coordination polymer,” says co-author Félix Zamora from the Autonomous University of Madrid. “It opens the door to the use of this large family of compounds that are easy to synthesize and exhibit interesting magnetic, conductive and optical properties, in the field of functional 3D printing.”
The Autonomous University of Madrid, the Hebrew University of Jerusalem, the Nanyang Technological University in Singapore, the Institute for Materials Science in Madrid and DESY contributed to this research. |
Medicine is the first of the Nobel Prizes awarded each year, and this year’s prize was awarded to a British-American scientist and a Norwegian couple for their discovery of “an inner GPS in the brain.”
“The discoveries of John O'Keefe, May-Britt Moser and Edvard Moser have solved a problem that has occupied philosophers and scientists for centuries – how does the brain create a map of the space surrounding us and how can we navigate our way through a complex environment?” the Nobel Assembly said.
From the Nobel Assembly press release:
How do we know where we are? How can we find the way from one place to another? And how can we store this information in such a way that we can immediately find the way the next time we trace the same path? This year's Nobel Laureates have discovered a positioning system, an “inner GPS” in the brain that makes it possible to orient ourselves in space, demonstrating a cellular basis for higher cognitive function.
Read the full press release at: http://www.nobelprize.org/nobel_prizes/medicine/laureates/2014/press.html
Or find out more information at the New York Times:
Nobel Prize for medicine (Photo: AP)
Research Summary – The Nobel Committee for Physiology or Medicine (Illustration and layout: Mattias Karien) |
The way of life in the colonies before the Revolution was far different from the way of life after the war. The colonies were completely run by Britain and did not have to fend for their own needs. Trading, taxing, and other parts of the economy were managed by the mother country. However, during the Revolutionary War, idealists like Thomas Paine produced concepts that gave rise to the idea of a more republican society. These new beliefs were reflected in the Declaration of Independence, played a huge part in the Articles of Confederation after the war, and were later established in the American Constitution.
In the years before the Revolution, the colonies were still growing. The New World was a melting pot of different European cultures, and social status played a huge part in how people viewed each other. Even though the colonists left Britain to escape social structure, they found themselves once again ranking people by how educated they were or how much money or land they had. The landowners were better off than the widowed, the poor and the indentured servants, but it was possible for citizens to earn their way into a higher class. Much as after the Revolution, the slaves were accorded no worth and were at the bottom of the pyramid. Education was offered only to men, to prepare them for the ministry, and it taught them Latin, a dead language important for interpreting the Bible’s scriptures. The link to religion was prominent in politics as well: a majority of the colonies were run by Parliament-appointed officials with close ties to the established churches of the colonies. The early years of the colonies revolved around religion and were greatly affected by how England ran its government. England’s hierarchical society was all that the colonists knew, and it would be a hundred years before they found new ways to establish the colonies.
Trading in the colonies was heavily intertwined with Britain. No trading with other...
Primordial dwarfism is a rare disease and a form of dwarfism that results in a smaller body size at all stages of life. The small size begins even before birth, in the fetal stage. Primordial dwarfism differs from the other forms of dwarfism in that all of the bones and organs of the patient’s body are proportionally smaller than in an average person. The disease is typically diagnosed even before the patient’s birth, when doctors detect a fetus that is very small for its gestational age. These children are also born with very low birth weights, and their growth continues at a stunted rate.
Types of primordial dwarfism
Primordial dwarfism divides into five different subtypes. These are the most severe forms of dwarfism. Patients usually do not live past the age of 30, and they most often die of cardiovascular problems.
Seckel Syndrome is the first type of primordial dwarfism, characterized by microcephaly. Microcephaly is a neurodevelopmental disorder in which the circumference of the head is more than two standard deviations smaller than the average for the person's age and sex. These patients may also suffer from scoliosis, hip dislocation, delayed bone age, radial head dislocation and seizures.
Osteodysplastic Primordial Dwarfism
This subtype comes in two different forms: type I and type II.
Osteodysplastic Primordial Dwarfism, Type I (ODPDI) is the second type of primordial dwarfism, characterized by an underdeveloped corpus callosum of the brain. The corpus callosum connects the left and right cerebral hemispheres and facilitates their communication. These patients usually suffer from seizures and apnea; they have very thin hair all over the body, short vertebrae, elongated clavicles, bent femora and hip displacement.
Osteodysplastic Primordial Dwarfism, Type II (ODPDII) is the third subtype of primordial dwarfism, characterized by milder symptoms such as a squeaky voice, small and widely spaced teeth, poor sleep patterns, delayed mental development, immune problems, breathing problems, eating problems, hyperactivity, farsightedness and brain aneurysms.
Russell-Silver Syndrome is yet another subtype of primordial dwarfism. These patients are usually a bit taller than the patients with other types of this disease. They usually have webbed toes, undescended testicles, low muscle tone, a thin upper lip, a high voice, a small chin and a broad forehead. Their heads are very large in comparison to their body size and usually have a somewhat triangular shape. These patients often suffer from hypoglycemia.
Meier-Gorlin Syndrome
This is the last subtype of primordial dwarfism. Patients suffering from Meier-Gorlin syndrome often have small ears and no kneecaps. Their clavicles are curved, their ribs are extremely thin, and their elbows are often dislocated. They are usually a bit taller than patients with Seckel Syndrome and osteodysplastic primordial dwarfism.