NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) mission has identified the process that appears to have played a key role in the transition of the Martian climate from an early, warm and wet environment that might have supported surface life to the cold, arid planet that Mars is today.
MAVEN data have enabled researchers to determine the rate at which the Martian atmosphere currently is losing gas to space via stripping by the solar wind. The findings reveal that the erosion of Mars’ atmosphere increases significantly during solar storms. The scientific results from the mission appear in the Nov. 5 issues of the journals Science and Geophysical Research Letters.
“Mars appears to have had a thick atmosphere warm enough to support liquid water which is a key ingredient and medium for life as we currently know it,” said John Grunsfeld, astronaut and associate administrator for the NASA Science Mission Directorate in Washington. “Understanding what happened to the Mars atmosphere will inform our knowledge of the dynamics and evolution of any planetary atmosphere. Learning what can cause changes to a planet’s environment from one that could host microbes at the surface to one that doesn’t is important to know, and is a key question that is being addressed in NASA’s journey to Mars.”
MAVEN measurements indicate that the solar wind strips away gas at a rate of about 100 grams (equivalent to roughly 1/4 pound) every second. “Like the theft of a few coins from a cash register every day, the loss becomes significant over time,” said Bruce Jakosky, MAVEN principal investigator at the University of Colorado, Boulder. “We’ve seen that the atmospheric erosion increases significantly during solar storms, so we think the loss rate was much higher billions of years ago when the sun was young and more active.”
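Jakosky’s “coins from a cash register” point can be checked with back-of-the-envelope arithmetic: integrate the quoted rate over geologic time. This is a deliberate simplification, since the article notes the loss rate was likely much higher billions of years ago, so the real cumulative loss would be larger.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7 seconds

def mass_lost_kg(rate_g_per_s: float, years: float) -> float:
    """Total atmospheric mass stripped over a given span, in kilograms,
    assuming (unrealistically) a constant loss rate."""
    return rate_g_per_s * years * SECONDS_PER_YEAR / 1000.0

# At the present-day rate of ~100 g/s, sustained for one billion years:
total = mass_lost_kg(100.0, 1e9)
print(f"{total:.2e} kg")  # ~3.16e15 kg
```

Even at today’s modest rate, the loss adds up to trillions of tonnes over a billion years, which is why a sustained, historically higher rate is a plausible driver of climate change on Mars.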
In addition, a series of dramatic solar storms hit Mars’ atmosphere in March 2015, and MAVEN found that the loss was accelerated. The combination of greater loss rates and increased solar storms in the past suggests that loss of atmosphere to space was likely a major process in changing the Martian climate.
The solar wind is a stream of particles, mainly protons and electrons, flowing from the sun’s atmosphere at a speed of about one million miles per hour. The magnetic field carried by the solar wind as it flows past Mars can generate an electric field, much as a turbine on Earth can be used to generate electricity. This electric field accelerates electrically charged gas atoms, called ions, in Mars’ upper atmosphere and shoots them into space.
MAVEN has been examining how solar wind and ultraviolet light strip gas from the top of the planet’s atmosphere. New results indicate that the loss occurs in three different regions of the Red Planet: down the “tail,” where the solar wind flows behind Mars, above the Martian poles in a “polar plume,” and from an extended cloud of gas surrounding Mars. The science team determined that almost 75 percent of the escaping ions come from the tail region and nearly 25 percent from the plume region, with just a minor contribution from the extended cloud.
|
Bees have been much in the news of late, and for the saddest of reasons: due to habitat loss, global warming, pesticides, and mono-crop farming, their numbers are in sharp decline throughout the United States. Bumblebee or honeybee, the loss of any bees and other threatened pollinators could harm not just the world’s economy but also endanger its very ecosystem.
And although we can’t get acquainted with all of them, we can take the first step by considering two of the most common varieties: the bumblebee and the honeybee. Let’s get the taxonomy out of the way first. Although the various bumblebee and honeybee species both belong to the family Apidae, bumblebees belong to the genus Bombus and honeybees to Apis.
Bumblebees Vs. Honeybees: What’s The Difference?
Bumblebees are round and fuzzy; honeybees are smaller and thinner; it would be easy, in fact, to mistake them for wasps. And while honeybees have a clear distinction between head and abdomen, bumblebees are “all of one piece.” Honeybees also have two clear sets of wings: a larger set in front and a smaller set in back.
Hyper-social honeybees live in hives with tens of thousands of their brethren; those hives can be either domesticated colonies kept by beekeepers or wild ones found in hollow trees. As their name suggests, they are honey producers, and their long-lived colonies survive the winter intact; the queen, in fact, can live for some three to four years.
Comparing Bumblebees With Honeybees
Where honeybees build hives, bumblebees live in nests with up to a few hundred fellow bees. These nests are found exclusively in the wild (bumblebees are not domesticated), typically in burrows or holes in the ground. In fact, the queen, which is the only member of a bumblebee colony to survive the winter, hibernates in the ground.
Of the two groups, bumblebees are the better pollinators. The reason is eminently practical: because there are more species of bumblebees, there is a wider variety in tongue lengths and, therefore, in the types of flowers they feed on. They are quick workers and, because of their larger bodies, can carry larger loads.
And this greater flexibility makes them adept at cross-pollination, which is particularly important for fruit trees. Additionally, bumblebees are more resistant to weather conditions such as cold, rain, and limited light. The one advantage honeybees have is communication: they actually perform a dance to let their fellow workers know where good supplies of pollen can be found! Although this benefits their colony and honey production, it can actually be a drawback in terms of pollination.
One last difference: honeybees can sting only once before dying. Bumblebees can sting numerous times, but they do not form swarms like honeybees and they only sting when genuinely provoked. Both bee types are safe enough to host in your yard, so take reasonable precautions and don’t let the fear of stings prevent you from planting wildflowers to attract bees and reverse decades of habitat loss.
Is It A Honeybee Or A Bumblebee?
It’s this time of year that honey bees are swarming and becoming more active due to the warmer weather and the brighter, longer days. Bumblebees have made nests in birdboxes, sheds, compost bins, and roof spaces and are out gathering pollen and nectar. Wasps seem to enjoy disturbing our lovely picnics at parks! We get lots of messages about “swarms” in birdboxes when, 99% of the time, these are really little bumblebee nests.
If you see this, please email us and let us know if you’re based in the Hull area and we can hopefully come out and collect the swarm for you. It’s true that many people think there’s just one kind of bee: the fuzzy round bumblebees we see on the flowers in our garden and often the cartoon-like bees we see in children’s books.
In fact, there are about 270 species of bees in total: honeybees, bumblebees, leaf-cutter bees, mason bees, mining bees, carpenter bees, sweat bees, and many, many more. We have created a helpful bee guide which shows the differences between honeybees, bumblebees, and wasps.
If you do have a bumblebee nest, it is important to leave the bees be if they’re not causing a problem. If you have a honey bee nest or swarm, call your local wildlife removal expert.
Contact the experts at Covenant Wildlife now to help you remove your bees and relocate them safely if you need to have them off your property.
|
The German cockroach is a known vector for diseases including:
- Salmonellosis – Salmonella food poisoning causes diarrhea, fever, and abdominal cramps within 12 to 72 hours. Symptoms are generally mild, but can be severe, especially for those with a compromised immune system.
- Staphylococcus infections – This gastrointestinal illness develops soon after contaminated food is eaten and usually lasts about a day. The toxins are heat-resistant, so they are not destroyed by cooking.
- Escherichia coli (E. coli) infections – E. coli bacteria normally live in the intestines of people and animals, but some types can cause illness with diarrhea.
- Typhoid fever – This life-threatening illness is caused by Salmonella Typhi. When a contaminated food is consumed, the bacteria multiply and spread into the bloodstream.
- Gastroenteritis – inflammation of the stomach and small and large intestines, generally leading to vomiting or diarrhea.
- General diarrhea.
People may become infected with any of these by eating or drinking a contaminated food or beverage. Cockroaches can also trigger asthma and other allergies.
|
Atmospheric patterns resembling those that appeared during the latter half of California’s ongoing multiyear drought are becoming much more common, a new study finds.
“The current record-breaking drought in California has arisen from both extremely low precipitation and extremely warm temperature,” says Noah Diffenbaugh, associate professor of earth system science at Stanford University. “In this new study, we find clear evidence that atmospheric patterns that look like what we’ve seen during this extreme drought have in fact become more common in recent decades.”
Diffenbaugh and colleagues investigated whether atmospheric pressure patterns similar to those that occurred during California’s historically driest, wettest, warmest, and coolest years have occurred with different frequency in recent decades compared with earlier in California’s history.
The scientists focused on the northeastern Pacific Ocean and far western North America, encompassing the winter “storm track” region where the vast majority of California precipitation originates.
They used historical climate data from US government archives to investigate changes during California’s October to May “rainy season.” They identified the specific North Pacific atmospheric patterns associated with the most extreme temperature and precipitation seasons between 1949 and 2015. Their analysis shows a significant increase in the occurrence of atmospheric patterns associated with certain precipitation and temperature extremes over the 67-year period.
In particular, they found robust increases in the occurrence of atmospheric patterns resembling what has occurred during the latter half of California’s ongoing multiyear drought.
“California’s driest and warmest years are almost always associated with some sort of persistent high pressure region, which can deflect the Pacific storm track away from California,” says Daniel Swain, first author of the study published in the journal Science Advances and a graduate student in Diffenbaugh’s lab.
“Since California depends on a relatively small number of heavy precipitation events to make up the bulk of its annual total, missing out on even one or two of these can have significant implications for water availability.”
Fewer ‘average’ years
Blocking ridges are regions of high atmospheric pressure that disrupt typical wind patterns in the atmosphere. Scientists concluded that one such persistent ridge pattern—which Swain named the Ridiculously Resilient Ridge (the Triple R)—was diverting winter storms northward and preventing them from reaching California during the state’s drought. In 2014, researchers published findings that showed that the increasing occurrence of extremely high atmospheric pressure over this same part of the Northeastern Pacific is “very likely” linked to climate change.
The group next wanted to investigate whether the particular spatial pattern associated with the Triple-R has become more common—a question not asked in the original 2014 study. The new study provides a more direct answer.
“We found that this specific extreme ridge pattern associated with the ongoing California drought has increased in recent decades,” Swain says.
Despite the fact that the number of very dry atmospheric patterns in California has increased in recent decades, the number of very wet atmospheric patterns hasn’t declined.
“We’re seeing an increase in certain atmospheric patterns that have historically resulted in extremely dry conditions, and yet that’s apparently not occurring at the expense of patterns that have historically been associated with extremely wet patterns,” Swain says. “We’re not necessarily shifting toward perpetually lower precipitation conditions in California—even though the risk of drought is increasing.”
That might sound contradictory, but it’s not, the scientists say. Imagine looking at a 10-year period and finding that two of the years are wet, two are dry, and the rest experienced precipitation close to the long-term average. Now imagine another decade with three very dry years, three very wet years, and only four years with near-average precipitation.
“What seems to be happening is that we’re having fewer ‘average’ years, and instead we’re seeing more extremes on both sides,” Swain says. “This means that California is indeed experiencing more warm and dry periods, punctuated by wet conditions.”
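The decade analogy can be made concrete with a toy calculation. The numbers below are invented purely for illustration (they are not real precipitation data): both decades average out to the long-term normal, but the second has far more extreme years and fewer average ones.

```python
from statistics import mean, stdev

# Hypothetical annual precipitation, in percent of the long-term average.
decade_a = [130, 70, 100, 100, 100, 100, 100, 100, 70, 130]  # 2 wet, 2 dry years
decade_b = [140, 60, 140, 60, 100, 100, 100, 100, 60, 140]   # 3 wet, 3 dry years

# Both decades have the same mean (100), so neither looks "drier" on average...
print(mean(decade_a), mean(decade_b))
# ...but the spread of the second decade is much larger: more extremes, fewer
# near-average years, hence a higher risk of both drought and flood.
print(stdev(decade_a), stdev(decade_b))
```

This is the distinction the study draws: increased drought risk can come from increased variability rather than from a decline in average precipitation.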
The National Science Foundation, the Switzer Foundation, the ARCS Foundation, the US Department of Energy and a G.J. Lieberman Fellowship from Stanford University funded the work.
Source: Stanford University
|
The U.K.’s International Slavery Museum recently released the Contemporary Slavery Teachers’ Resource, which will educate students in England and Wales about modern day slavery and how they may take informed action against it. It is hoped that this monumental educational material will be embraced and disseminated by teachers worldwide.
The International Slavery Museum opened on August 23, 2007, the bicentenary of the abolition of the British slave trade. Located in Liverpool’s Albert Dock, just yards away from where 18th century slave trading ships once stood, the museum highlights the historic and contemporary significance of slavery in an international context.
“Our aim is to address ignorance and misunderstanding by looking at the deep and permanent impact of slavery and the slave trade on Africa, South America, the USA, the Caribbean and Western Europe. Thus we will increase our understanding of the world around us,” said Dr. David Fleming, Director of National Museums Liverpool.
Although officially abolished, slavery has not vanished; rather, it is rampant and affects 27 million people today. The new Teachers’ Resource will educate students aged 10 to 14 in England and Wales on contemporary slavery as part of their education in Citizenship, which informs students on social justice issues and emphasizes the importance of human rights and responsibilities.
This exciting resource includes key terms, descriptions of the various forms of slavery, case studies and testimonies, human rights legislation, worksheets, and a list of the world’s notable campaigns, among which is Free the Slaves. Free the Slaves has contributed photographs, slavery survivor transcripts, and other resources to the material.
If you or someone you know is interested in educating today’s youth on slavery, access the downloadable Teachers’ Resource here!
Slavery can be defeated within the next 25 years, if everyone is engaged and joins this collaborative effort for freedom.
|
Electrical Short Circuit
An electrical short circuit occurs when current bypasses its intended path through an unintended low-resistance connection, often when the wire coating is stripped or when a nail passes through the wire. This generates a spark, which can set fire to nearby combustible material or damage an appliance or other fixture connected to the wires.
The most common short circuits occur between a live wire and a neutral wire, but any two wires can short circuit. For example, if a neutral wire comes into contact with a ground wire, it is possible for that connection to create a short circuit.
If a homeowner is hammering a nail through a wall, and the nail comes into contact with two wires, it can establish a connection. The same thing happens when the coating is stripped from wires that can come into contact.
Short circuits can also occur in batteries. If the positive and negative terminals are connected by a wire, a surge of electrical current can generate sufficient heat to cause an explosion.
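Ohm’s law explains why these faults are so dangerous: current is voltage divided by resistance, so when an accidental connection collapses the resistance, the current surges. A minimal sketch with assumed, illustrative values (not measurements from any real circuit):

```python
def current_amps(voltage: float, resistance_ohms: float) -> float:
    """Ohm's law: I = V / R."""
    return voltage / resistance_ohms

# A normal 120 V household circuit driving a 60-ohm appliance:
print(current_amps(120, 60))   # 2.0 A -- normal operation

# The same 120 V across a hypothetical 0.1-ohm accidental short:
print(current_amps(120, 0.1))  # 1200.0 A -- enough heat to ignite nearby material
```

The hundreds-fold jump in current is what produces the heat and sparking described above, and it is why fuses and circuit breakers are designed to open the circuit the moment current exceeds a safe threshold.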
|
Despite the abundance of wildlife resources in the basin, there are pressures that threaten the existence of this resource. Species that have become extinct in the basin in recent times include the blue wildebeest in Malawi, the tsessebe in Mozambique, and the kob in Tanzania (SADC and SARDC 2008). Others face a high risk of extinction, and the number of threatened species across the basin continues to rise. The White (Grass) rhinoceros, Black (Browse) rhinoceros, and the Black Wildebeest are critically close to disappearing altogether, even though decisive conservation action is allowing some populations to revive (SADC and SARDC 2008). The Wattled Crane is endangered in the basin partly due to controlled flooding in the Kafue Flats, which has reduced its nesting sites. The population of the Kafue lechwe (Kobus leche kafuensis) has also fallen in the Kafue due to alteration of its marshy habitat (SADC and SARDC 2008).
From collection: Zambezi River Basin - Atlas of the changing Environment
|
Giant viruses over the dike
“We cracked the DNA-code of a giant algae virus; this is the first algae virus that belongs to the ‘Giant Viruses’”, concludes prof. Dr Corina Brussaard of the Royal Netherlands Institute for Sea Research and the University of Amsterdam in the 10 June issue of the journal PNAS. Until now the few known giant viruses all had an animal host, but now it appears that giant viruses with a plant host also exist.
Moreover, a host that is very common in sea water. The alga Phaeocystis globosa is well known for the enormous accumulations of foam on the beach formed by dead algae. The newly discovered giant virus ‘PgV-16T’ is also very common and capable of playing an important role in regulating this alga species, which is found all over the world.
Giant viruses were first discovered only a few years ago. Most of them contain much more DNA than ‘ordinary’ viruses, although the newly discovered giant algae virus is quite modest in this respect. A more important conclusion is that the evolution of this type of virus has most likely been very different from that of the ‘ordinary’ smaller viruses.
“This is a very exciting discovery that changes our view of viruses”, says Corina Brussaard. “It is the largest algae virus of which we completely unravelled its DNA-code. Moreover, it’s the first ever giant virus that uses algae as host, which means they are much more common than we thought. But that is not all, within this virus we found a virophage; a virus that infects the giant virus. This implies that virophages are not restricted to the largest viruses, but can also infect smaller giant viruses like the one we discovered. These virophages are responsible for an exchange of genetic material between the different kinds of viruses.”
These discoveries have prompted a discussion about the origin of viruses. The giant viruses have characteristics that until now were unique to living cells, which places them far back in evolution. It appears that the giant viruses were originally independent cells that existed inside other organisms as symbionts or parasites, but later lost genetic material, making them dependent on their host. Probably, the giant viruses represent an extinct form of life that existed at the time of the last universal common ancestor, 3-4 billion years ago, and contributed to the development of our modern cells.
Brussaard: “If this is really the case, it means a potentially new pathway for the formation of viruses and the exchange of genetic material, with all sorts of possible unforeseen implications for our perception of viruses as pathogens.”
The study was conducted together with the research group of prof. dr Jean-Michel Claverie of the University of Marseille, France, also the discoverer of the first giant virus ‘Mimivirus’. The Dutch study was funded by the NIOZ.
Sebastien Santini, Sandra Jeudy, Julia Bartoli, Olivier Poirot, Magali Lescot, Chantal Abergel, Valérie Barbe, K. Eric Wommack, Anna A.M. Noordeloos, Corina P.D. Brussaard, Jean-Michel Claverie. The genome of Phaeocystis globosa virus PgV-16T highlights the common ancestry of the largest known DNA viruses infecting eukaryotes. Proceedings of the National Academy of Sciences of the United States of America (PNAS). doi:10.1073/pnas.1303251110
|
Paintings of Oahu akialoa (top) & Lanai akialoa (bottom). Images public domain
Not long ago, the Hawaiian archipelago supported a plethora of pollinating birds. Today, many are extinct, with others feared lost or experiencing worrisome declines. Since the early nineteenth century, twelve of Hawaii’s specialist avian nectar-eaters have become extinct, leaving at most eight known species behind. Seeing images of all twelve of these birds gathered together in one place is an effective and affecting way of getting a true sense of Hawaii’s losses.
The Hawaiian Islands are thought to have first been settled fewer than eight hundred years ago, at the tail end of Polynesian expansion across the Pacific. The arrival of humans in these isolated ecosystems brought with it our species’ calling card: an extinction pulse. The Polynesians, along with the pigs, dogs, chickens and rats who accompanied them, are thought to have been responsible for the loss of many lifeforms.
European arrival from the late eighteenth-century onwards contributed its own wave of death and destruction. Smallpox and other diseases killed many native Hawaiians, and the fauna of the islands experienced further depletion.
Hawaiian nectarivorous birds are divided into two groups: honeycreepers and honeyeaters (though the latter are no relation of the other bird species which are referred to as honeyeaters). Akialoa were honeycreepers, in a genus containing four living members when Europeans arrived in Hawaii. Today all are gone.
The Lanai akialoa disappeared first. Its decline seems to have predated European arrival, as fossils suggest it once inhabited other islands besides Lanai. Habitat loss and the ongoing damage wrought on Hawaii’s ecosystem by the Polynesians’ pigs likely doomed it.
Oahu’s akialoa fared a little better, with the last report dating from 1940. Forest clearance for American sugarcane ventures deeply damaged these birds. They also suffered a more sinister scourge: avian malaria. Spread by mosquitoes, which may have arrived in the bilge water of whaling ships, this disease continues to ravage Hawaiian avifauna to this day.
Paintings of Hawaii mamo (top) & black mamo (bottom). Images public domain
The Big Island’s Hawaii mamo was imperilled before James Cook’s 1778 landfall on the archipelago. The birds’ six to eight yellow feathers were used to manufacture garments for Hawaiian nobles and royalty. One particular cloak may have cost an obscene sixty thousand mamo their lives. Still, the Hawaii mamo might yet live today without the bitter blows dealt by European-led deforestation for cattle ranching, and avian malaria.
Often seen with a pollen-dusted forehead after feeding on lobelia flowers, the black mamo, whose range had already been reduced by the Polynesians, was scientifically described in 1893. The last recorded bird was shot fourteen years later. The introduction of cattle, deer and mongooses is blamed for the loss of this species.
Taxidermic greater amakihi. Image public domain
Eating both nectar and insects, the greater amakihi does not appear to have been known to the natives of Hawaii’s Big Island. Western collectors discovered this bird perhaps only a decade before Western investors destroyed it. Scientifically described in 1892, the species was last recorded in 1901, just before its tiny home range was cleared to make way for a sugarcane plantation.
Painting of female (left) & male (right) Laysan honeycreepers. Image public domain
The Laysan honeycreeper, which favoured the nectar of its island’s native flowers, was last recorded in 1923. Europeans, not Polynesians, seem to have been the first people to settle Laysan. Just one of them served to seal the birds’ fate: Max Schlemmer, who released rabbits there in the 1890s, hoping to use them for meat. The rabbits bred explosively, eradicating most of the vegetation on which the Laysan honeycreepers fed.
Taxidermic lesser akialoa (top) & Kauai akialoa (bottom). Images public domain
So far as can be gleaned, none of the akialoa species were common by the time Europeans reached the Hawaiian Islands. Both of these pollinators were ultimately undone by the sugarcane industry’s ruination of forests working in tandem with the invisible spread of mosquito-borne avian diseases. The lesser akialoa has not been reported since 1940. In 1969, when the ultimate agent of its demise first walked on the moon, the Kauai akialoa was last reported. With its passing, the entire akialoa genus ended.
Taxidermic kioea. Image public domain
Whilst gravely damaged by human activity, several nectarivorous Hawaiian honeycreepers yet persist. The Hawaiian honeyeater family (Mohoidae) was less fortunate. They are generally thought to be the only avian family extinct in modern times. Even within an order as large as that of the perching birds, losing a whole family is significant. For instance, chameleons, in all their distinctiveness, represent a single family amongst Earth’s lizards and snakes. The kioea, last recorded in 1859, is thought to have been a victim of logging, introduced species and hunting.
Paintings of Oahu Oo (top) & Hawaii Oo (bottom). Images public domain
Black, yellow and beautiful, these two species have been extinct for some time. The Oahu Oo vanished nearly two hundred years ago, last being recorded in 1837. Hunting by native Hawaiians for its yellow feathers may have contributed, though the prime causes of extinction are thought to have been introduced disease and habitat destruction in the wake of European contact. The Hawaii Oo was last recorded in 1934, suffering a similar fate to its relative on Oahu.
Taxidermic Kauai Oo (top) & Bishop’s Oo (bottom). Images public domain
Bishop’s Oo was last definitively recorded in 1904, although reports persisted for decades afterwards on its former island home of Molokai. As with other species in this piece, fossil remains indicate it may have been more widespread before the arrival of Polynesian settlers. In recent times, the range of these birds was much more restricted. The most recent notable sighting was in 1981. Given this species has not been unequivocally seen alive in over a century, it is surely lost now. Cattle ranching and pineapple cultivation have much altered Molokai, and introduced avian diseases are as problematic there as elsewhere.
Kauai’s Oo was the last survivor of the Mohoidae. Once common, it entered a steep decline during the early twentieth century. Again, habitat destruction and disease-bearing mosquitoes were the key culprits. In 1987, the mating song of a male Kauai Oo was recorded. Over untold millennia, his species had evolved a delicate call-and-response duet. But for Earth’s last Kauai Oo, there would be no answer. He died later that year.
|
Supporting your child...
I believe home school links are essential! Below are some resources that may help you understand the curriculum, how we teach, and what the assessments look like. If you have any questions or would like to know more, please ask! Mrs. T
Questions to ask:
Click on title to download: Guided Reading Prompts KS1 by Primary English
This is a wonderful list I found from the Primary English Consultancy. It is filled with questions for you to use with your child before, during, and after reading a book. It is broken up into Year 1 and Year 2 to help you gauge the depth of understanding and ability your child may have within the National Curriculum 2016 study standards. I intend to send a modified version home for you to keep and use!
This is just a guide and each child will respond to each book differently based on their interest and understanding of what they are reading.
Other Year 2 Resources
Mental maths can be tricky for some children. However, developing speed and fluency with basic maths facts helps them tackle tougher word problems. At the same time, I want more than simply memorising numbers. It's important for children to understand the concept when they 'do' the maths! One way to help children at home is by using real examples and maths vocabulary when you are organising at home, shopping and such!
Problem Solving and Reasoning
Below are some helpful websites that may help you understand where I get the resources I use when planning, as well as the thinking behind what I teach and expect children to show as evidence of learning.
Below are resources to help your child as they develop handwriting, spelling, and writing skills. You can also find examples of our marking ladders and fix it symbols.
Writing: Target ladders below
Here is an example of a writing target ladder used in class. More can be found below the Fix it symbols.
We use fix it symbols when marking work to help children understand the feedback provided on their writing. To view the symbols and what they represent, click on the button above.
Some of the writing marking ladders. We review and revise them for each writing piece to help us focus on the key elements we are learning.
|
Seismology is the science of studying earthquakes and related phenomena. In Greek, seismos = shaking and logos = science. A seismologist is a scientist who studies earthquakes and seismic waves.
An earthquake is a sudden movement or vibration of a part of the earth's top layers caused by the sudden release of energy stored as elastic strain in the underlying rocks. This energy reaches us as a series of vibrations travelling through the body of the earth, called seismic waves.
There are two types of seismic waves.
1. Body waves - travel through the earth's interior.
2. Surface waves - can only move along the surface of the planet.
P-WAVE: The first kind of body wave is the P wave, or primary wave. The particle motion of P waves is parallel to the direction of propagation of the wave. This is the fastest of seismic waves. The P wave can move through solid rock and fluids, like water or the liquid layers of the earth. It pushes and pulls the rock as it moves through, just like sound waves push and pull the air. Have you ever heard a big clap of thunder and heard the windows rattle at the same time? The windows rattle because the sound waves were pushing and pulling on the window glass much like P waves push and pull on rock. Sometimes animals can hear the P waves of an earthquake. Usually we only feel the bump and rattle of these waves.
S-WAVE: The second kind of body wave is the S wave, or secondary wave. The particle motion of the wave is perpendicular to the direction of propagation of the wave. An S wave is slower than the P wave and can only travel through solid rock. This wave moves rock up and down, or side-to-side.
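Because P waves outrun S waves, the delay between their arrivals at a seismometer gives a quick estimate of the distance to an earthquake. A minimal sketch, assuming typical crustal velocities (the actual speeds vary with depth and rock type, so these numbers are illustrative):

```python
# Assumed average crustal wave speeds, in km/s (illustrative values).
VP_KM_S = 6.0   # P-wave speed
VS_KM_S = 3.5   # S-wave speed

def distance_from_sp_lag(lag_seconds: float) -> float:
    """Epicentral distance from the S-minus-P arrival delay.

    The lag is d/Vs - d/Vp, so d = lag / (1/Vs - 1/Vp).
    """
    return lag_seconds / (1.0 / VS_KM_S - 1.0 / VP_KM_S)

# A 10-second gap between the first jolt (P) and the stronger shake (S):
print(f"{distance_from_sp_lag(10.0):.0f} km")  # roughly 84 km at these speeds
```

With the same lag measured at three or more stations, circles of these radii can be intersected to locate the epicenter, which is the classic triangulation method.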
Love Waves: Named after A.E.H. Love, a British mathematician who worked out the mathematical model for this kind of wave in 1911. It's the fastest surface wave and moves the ground from side to side.
Rayleigh Waves: The other kind of surface wave is the Rayleigh wave, named for John William Strutt, Lord Rayleigh, who mathematically predicted the existence of this kind of wave in 1885. A Rayleigh wave rolls along the ground just like a wave rolls across a lake or an ocean. Because it rolls, it moves the ground up and down, and side-to-side in the same direction that the wave is moving. Most of the shaking felt from an earthquake is due to the Rayleigh wave, which can be much larger than the other waves.
Earthquakes occur all the time all over the world, both along plate edges and along faults.
Most earthquakes occur along the edges of the oceanic and continental plates. The earth's crust (the outer layer of the planet) is made up of several pieces, called plates. The plates under the oceans are called oceanic plates and the rest are continental plates. The plates are moved around by the motion of a deeper part of the earth (the mantle) that lies underneath the crust. These plates are always bumping into each other, pulling away from each other, or sliding past each other. The plates usually move at about the same speed that your fingernails grow. Earthquakes usually occur where two plates are running into each other or sliding past each other.
Earthquakes can also occur far from the plate boundaries, along faults. Faults are cracks in the earth where sections of a plate (or two plates) are moving in different directions. Faults are caused by all that bumping and sliding the plates do. They are more common near the edges of the plates.
Normal faults are the cracks where one block of rock is sliding downward and away from another block of rock. These faults usually occur in areas where a plate is very slowly splitting apart or where two plates are pulling away from each other.
Strike-slip faults are the cracks between two plates that are sliding past each other. We can find these kinds of faults along the mid-oceanic ridges. The San Andreas fault is a strike-slip fault. It's the most famous California fault and has caused a lot of powerful earthquakes.
Reverse faults are cracks formed where one plate is pushing into another plate. They also occur where a plate is folding up because it's being compressed by another plate pushing against it. At these faults, one block of rock is sliding underneath another block or one block is being pushed up over the other.
When an earthquake fault ruptures, it causes two types of deformation: static and dynamic. Static deformation is the permanent displacement of the ground due to the event. The earthquake cycle progresses from a fault that is not under stress, to a stressed fault as the plate tectonic motions driving the fault slowly proceed, to rupture during an earthquake and a newly relaxed but deformed state. Typically, someone will have built a straight reference line such as a road, railroad, pole line, or fence line across the fault while it was in the pre-rupture stressed state. After the earthquake, the formerly straight line is distorted into a shape having increasing displacement near the fault, a consequence of elastic rebound.
While most of the plate-tectonic energy driving fault ruptures is taken up by static deformation, up to 10% may dissipate immediately in the form of seismic waves. The mechanical properties of the rocks that seismic waves travel through quickly organize the waves into two types. Compressional waves, also known as primary or P waves, travel fastest, at speeds between 1.5 and 8 kilometers per second in the Earth's crust. Shear waves, also known as secondary or S waves, travel more slowly, usually at 60% to 70% of the speed of P waves. P waves shake the ground in the direction they are propagating, while S waves shake perpendicularly or transverse to the direction of propagation. Although wave speeds vary by a factor of ten or more in the Earth, the ratio between the average speeds of a P wave and of its following S wave is quite constant. This fact enables seismologists to simply time the delay between the arrival of the P wave and the arrival of the S wave to get a quick and reasonably accurate estimate of the distance of the earthquake from the observation station. Just multiply the S-minus-P (S-P) time, in seconds, by the factor 8 km/s to get the approximate distance in kilometers. The dynamic, transient seismic waves from any substantial earthquake will propagate all around and entirely through the Earth. Given a sensitive enough detector, it is possible to record the seismic waves from even minor events occurring anywhere in the world at any other location on the globe. Nuclear test-ban treaties in effect today rely on our ability to detect a nuclear explosion anywhere equivalent to an earthquake as small as Richter magnitude 3.5.
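The S-minus-P rule of thumb above is a one-line calculation. A minimal sketch in Python, using the 8 km/s factor from the text (the station reading below is hypothetical):

```python
def epicentral_distance_km(sp_seconds: float, factor_km_per_s: float = 8.0) -> float:
    """Estimate distance to an earthquake from the S-minus-P delay.

    Multiplying the S-P time in seconds by roughly 8 km/s gives a
    quick, reasonably accurate distance in kilometers.
    """
    return sp_seconds * factor_km_per_s

# A station that sees the S wave arrive 12.5 s after the P wave
# is roughly 100 km from the event.
print(epicentral_distance_km(12.5))  # 100.0
```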
Earthquakes are measured by their magnitude which is a number that characterizes the relative size of an earthquake. Magnitude is based on measurement of the maximum motion recorded by a seismograph. Several scales have been defined but the most commonly used are:
1:Local Magnitude Ml
2:Body Wave Magnitude Mb
3:Surface Wave Magnitude Ms
4:Moment Magnitude Mw
Sensitive seismographs are the principal tool of scientists who study earthquakes. Thousands of seismograph stations are in operation throughout the world, and instruments have been transported to the Moon, Mars, and Venus. Fundamentally, a seismograph is a simple pendulum. When the ground shakes, the base and frame of the instrument move with it, but inertia keeps the pendulum bob in place. It will then appear to move, relative to the shaking ground. As it moves it records the pendulum displacements as they change with time, tracing out a record called a seismogram.
Plotted together, such records make it obvious that seismic waves take more time to arrive at stations that are farther away. The average velocity of the wave is just the slope of the line connecting arrivals, or the change in distance divided by the change in time. Variations in such slopes reveal variations in the seismic velocities of rocks. Note that the secondary S-wave arrivals have larger amplitudes than the first P waves, and connect at a smaller slope.
While the actual frequencies of seismic waves are below the range of human hearing, it is possible to speed up a recorded seismogram to hear it. One example is a seismogram from the 1992 Landers earthquake in southern California, recorded near Mammoth Lakes in an active volcanic caldera by the USGS. The original record, 800 seconds long, was sped up 80 times so that it plays back in 10 seconds.
The clicks at the beginning of the recording are the sharp, high-frequency P waves, followed by the rushing sound of the drawn-out, lower-frequency S waves. This recording is also interesting because of the small, local earthquakes within the Mammoth caldera that sound like gunshots. The passage of the S wave from the magnitude 7.2 Landers event through the caldera actually triggered a sequence of small earthquakes there. The triggered earthquakes are similar to a burst of creaks and pops you hear from your house frame after a strong blast of wind. Landers triggered earthquakes up to magnitude 5.5 throughout eastern California and Nevada, and in calderas as far away as Yellowstone.
The principal use of seismograph networks is to locate earthquakes. Although it is possible to infer a general location for an event from the records of a single station, it is most accurate to use three or more stations. Locating the source of any earthquake is important, of course, in assessing the damage that the event may have caused, and in relating the earthquake to its geologic setting.
Given a single seismic station, the seismogram records will yield a measurement of the S-P time, and thus the distance between the station and the event. Multiply the seconds of S-P time by 8 km/s for the kilometers of distance. Drawing a circle on a map around the station's location, with a radius equal to the distance, shows all possible locations for the event. With the S-P time from a second station, the circle around that station will narrow the possible locations down to two points. It is only with a third station's S-P time that you can draw a third circle that should identify which of the two previous possible points is the real one.
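The three-circle procedure described above can be sketched on a flat map. This is only an illustration under simplifying assumptions (planar geometry, the 8 km/s rule, hypothetical stations and S-P times); real locations use spherical geometry and detailed velocity models.

```python
import math

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles (flat-map approximation)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # circles do not cross
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

# Hypothetical station positions (km) and measured S-P delays (s).
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
sp_times = [8.84, 8.84, 8.84]
dists = [t * 8.0 for t in sp_times]  # S-P time x 8 km/s

# Two stations give two candidate points; the third picks the real one.
candidates = circle_intersections(stations[0], dists[0],
                                  stations[1], dists[1])
best = min(candidates, key=lambda p: abs(
    math.hypot(p[0] - stations[2][0], p[1] - stations[2][1]) - dists[2]))
print(best)  # close to (50, 50)
```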
|
Voltage, Current and Resistance | HowStuffWorks
The relationship between voltage and current in ohmic devices is analogous to the Hagen–Poiseuille equation for fluid flow, since both are linear models relating a flux to a driving potential. We've seen the formula for determining the power in an electric circuit: multiplying the voltage in "volts" by the current in "amps" gives an answer in "watts." The same family of Ohm's law equations that relates voltage, current and resistance also relates power dissipation to the current through a resistance. Together with resistance, voltage and current make up Ohm's law, which ties the three variables together and explains the different power output of different appliances.
If you plug in a light on a 120-volt circuit and it draws half an amp, it's a 60-watt light bulb. Let's say that you turn on the space heater and then look at the power meter outside.
The meter's purpose is to measure the amount of electricity flowing into your house so that the power company can bill you for it.
Let's assume -- we know it's unlikely -- that nothing else in the house is on, so the meter is measuring only the electricity used by the space heater.
Your space heater is using 1.2 kilowatts. If you leave the space heater on for one hour, you will use 1.2 kilowatt-hours. If your power company charges you 10 cents per kilowatt-hour, then the power company will charge you 12 cents for every hour that you leave your space heater on. Now let's add one more factor to current and voltage: resistance. We can extend the water analogy to understand resistance, too. The voltage is equivalent to the water pressure, the current is equivalent to the flow rate and the resistance is like the pipe size.
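The billing arithmetic here is just kilowatts times hours times rate. A small sketch (the 1.2 kW heater figure is implied by the 12-cents-per-hour result in the text):

```python
def energy_cost_cents(power_kw: float, hours: float, cents_per_kwh: float) -> float:
    """Cost of running a load: energy used (kWh) times the per-kWh rate."""
    return power_kw * hours * cents_per_kwh

# A 1.2 kW space heater at 10 cents per kilowatt-hour:
print(energy_cost_cents(1.2, 1, 10))            # 12.0 cents per hour
print(round(energy_cost_cents(1.2, 24, 10), 2)) # 288.0 cents for a full day
```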
A basic electrical engineering equation called Ohm's law spells out how the three terms relate: current is equal to the voltage divided by the resistance. When a real source supplies current, its terminal voltage drops slightly. This drop is due to inherent internal resistance within the source. The resistance is not due to an actual resistor, but can be modelled as such; it is composed of the actual resistance of conductors, electronic components, the electrolyte in batteries, and so on.
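Ohm's law and the earlier power formula can be combined in a few lines (the voltage and resistance values below are hypothetical):

```python
def current_amps(voltage_v: float, resistance_ohm: float) -> float:
    """Ohm's law: current equals voltage divided by resistance."""
    return voltage_v / resistance_ohm

def power_watts(voltage_v: float, current_a: float) -> float:
    """Electric power: volts times amps gives watts."""
    return voltage_v * current_a

# A 120 V supply across a 60-ohm element draws 2 A and dissipates 240 W.
i = current_amps(120, 60)
print(i, power_watts(120, i))  # 2.0 240.0
```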
Examples of DC sources are batteries, DC generators known as dynamos, solar cells and thermocouples.
Calculating Electric Power | Ohm's Law | Electronics Textbook
AC: This stands for "alternating current" and means that the current "alternates", or changes direction. Current flows one way, reaches a peak, falls to zero, changes direction, reaches a peak and then falls back to zero again before the whole cycle is repeated. The number of times this cycle happens per second is called the frequency.
In the US the frequency is 60 Hz; in other countries it is 50 Hz. The electricity supply in your home is AC. The advantage of AC is the ease with which it can be transformed from one voltage level to another by a device known as a transformer. AC sources include the electrical supply to your home, generators in power stations, transformers, DC-to-AC inverters (allowing you to power appliances from the cigarette lighter in your car), signal generators, and variable frequency drives for controlling the speed of motors.
The alternator in a vehicle generates electricity as AC before it is rectified and converted to DC. New-generation brushless, cordless drills convert the DC voltage of the battery to AC for driving the motor.
Reducing Costs of Transmitting Electricity Over the Grid
Because AC can so easily be transformed from one voltage to another, it is more advantageous for power transmission over the electricity grid. Generators in power stations output a relatively low voltage, typically around 10,000 volts.
Transformers can then step this up to a much higher voltage, hundreds of thousands of volts, for transmission through the country. A step-up transformer converts the input power to a higher-voltage, lower-current output. This decrease in current is the desired effect for two reasons: power is wasted as heat in transmission cables in proportion to the square of the current, which obviously isn't wanted, and a lower current allows thinner, cheaper conductors to be used.
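The benefit of stepping up the voltage can be checked with a quick I²R sketch: for a fixed power, current falls in proportion to voltage, and the heat wasted in the line falls with the square of the current (all values below are hypothetical):

```python
def line_loss_watts(power_w: float, voltage_v: float,
                    line_resistance_ohm: float) -> float:
    """Heat wasted in a transmission line: I^2 * R, with I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

# Sending 1 MW through a line with 10 ohms of resistance:
print(line_loss_watts(1e6, 10_000, 10))   # 100000.0 W lost at 10 kV
print(line_loss_watts(1e6, 100_000, 10))  # 1000.0 W lost at 100 kV
```

A tenfold step-up in voltage cuts the loss a hundredfold, which is why grids transmit at high voltage.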
The AC waveform of the domestic supply to our homes is sinusoidal. The function of a transformer, such as those found in electrical sub-stations, is to either increase or decrease voltage.
What is Three-Phase Voltage?
Very long distance transmission lines may use DC to reduce losses; however, power is normally distributed nationwide using a 3-phase system.
Each phase is a sinusoidal AC voltage and each of the phases is separated by 120 degrees. So phase 1 is a sine wave, phase 2 lags by 120 degrees and phase 3 lags by 240 degrees (or leads by 120 degrees). Only 3 wires are needed to transmit power because it turns out that no current flows in the neutral for a balanced load. The transformer supplying your home has 3 phase lines as input, and the output is a star source, so it provides 3 phase lines plus neutral.
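The balanced-load claim, that no current flows in the neutral, can be verified numerically: three equal sinusoids 120 degrees apart cancel at every instant (the amplitude and frequency below are arbitrary):

```python
import math

def phase_value(t: float, phase_deg: float,
                amplitude: float = 1.0, freq_hz: float = 50.0) -> float:
    """Instantaneous value of one sinusoidal phase at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t
                                - math.radians(phase_deg))

# With a balanced load, the three phase currents (0, 120 and 240 degrees)
# sum to zero at every instant, so the neutral carries no current.
for t in (0.0, 0.001, 0.0042, 0.01):
    neutral = sum(phase_value(t, d) for d in (0, 120, 240))
    assert abs(neutral) < 1e-9
print("balanced: neutral current is zero")
```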
In countries such as the UK, homes are fed by one of the phases plus a neutral. In the US, one of the phases is split to provide the two 'hot' legs of the supply.
Why Is 3 Phase Used? Three times more power can be transmitted using only 1.5 times the number of conductors. The incoming supply is typically 11 kV, and the output phase voltage is 230 volts in countries which use this voltage.
Each phase is sinusoidal with a phase difference of 120 degrees. A delta-star (wye) transformer can supply either a single-phase or a 3-phase supply. Once we add a bulb to the circuit, resistance is created. There is now a local "blockage", or narrowing of the pipe per our water pipe analogy, where the current experiences some resistance.
Watts, Amps and Volts Explained — Kilowatt Hours (Kwh) and Electrical Appliances
This greatly reduces the current flowing in the circuit, so the energy in the battery is released more slowly. As the battery forces the current through the bulb, the battery's energy is released in the bulb in the form of light and heat.
In other words, the current carries stored energy from the battery to the bulb, where it is turned into light and heat energy. In this simple circuit, the light bulb is the main source of electrical resistance. A watt is the base unit of power in electrical systems.
It can also be used in mechanical systems. It measures how much energy is released per second in a system. In our battery diagram, the size of both the voltage and the current in the bulb determine how much energy is released.
|
Scientists at the Massachusetts Institute of Technology may have created the ultimate green battery technology: They’ve engineered a virus that could potentially form a battery that would outlast and out power those available today.
The batteries could be used to power small electronic devices such as cell phones and MP3 players. In the future, they could also be used to power automobiles. The M13 virus used to create the batteries infects only bacteria, so it is harmless to humans.
The recent discovery builds upon research that was performed three years ago, when an MIT team genetically engineered viruses that could build an anode by coating themselves with cobalt oxide and gold and self-assembling to form a nanowire. Traditional batteries have two electrodes: a positive terminal (often made of cobalt oxide) and a negative terminal (often made of graphite).
Researchers at MIT took this research and focused on building a powerful cathode that could pair up with the anode. This was no easy task, but eventually the scientists were able to engineer the viruses to first coat themselves with iron phosphate and then attach to carbon nanotubes to create a network of highly conductive material. Electrons can travel along the carbon nanotubes to the iron phosphate networks very easily, thereby transferring energy in a very short amount of time.
Using these developments, the researchers created coin-sized batteries as seen in the photo above. According to lab tests, the batteries can be charged and discharged at least 100 times without losing any capacity. Although that’s fewer charge cycles than today’s lithium-ion batteries, materials scientist Angela Belcher said the expectation is that the batteries "will be able to go much longer."
|
A substitution drill is a classroom technique used to practise new language. It involves the teacher first modelling a word or a sentence and the learners repeating it. The teacher then substitutes one or more key words, or changes the prompt, and the learners say the new structure.
The following sequence is an example of a substitution drill:
Teacher: I have a new car
Learners: Have you?
Teacher: I don't like fish
Learners: Don't you?
Teacher: I love coffee
Learners: Do you?
In the classroom
Despite a move away from drilling as a classroom technique, many teachers still use it to provide practice. One way to move a drill away from being teacher-centred is to ask a learner to lead the activity.
|
Bronze Head from Ife
This Bronze Head from Ife is one of eighteen sculptures that were unearthed in 1938 at Ife in Nigeria, the religious and former royal center of the Yoruba people. The Yoruba people regard Ife as the place where their deities created humans. It was made in the thirteenth century, well before any European contact with the local population. The realism and sophisticated craftsmanship of the objects challenged Western conceptions of African art at the time.
The head is made using the lost wax technique, and the artist designed the head in a very naturalistic style. The face is covered with incised striation, and the headdress consists of a crown composed of layers of tube-shaped beads and tassels. A crest tops the king’s crown with a rosette and a plume. The lifelike rendering of sculptures from medieval Ife is exceptional in sub-Saharan African art.
When the Ife heads first appeared in the Western World, experts did not believe that Africa had ever had a civilization capable of creating artifacts of this quality. Later excavations in Nigeria have provided scientific evidence of the existence of a metalworking culture and bronze artifacts that may be dated to the ninth or tenth centuries.
The oldest signs of human settlement at Ife’s current site date back to the 9th century, and its material culture includes terracotta and bronze figures. For centuries, various peoples from this region traded overland with traders from North Africa. Ancient Ife also was famous for its glass beads, which have been found at sites as far away as Mali, Mauritania, and Ghana. In the 16th century, Spanish and Portuguese explorers were the first Europeans to begin significant, direct trade with peoples of modern-day Nigeria. Europeans traded goods with peoples at the coast, and this exchange marked the beginnings of the Atlantic slave trade. The majority of those enslaved were captured in raids and wars.
Nigeria is in West Africa, bordering Niger, Chad, Cameroon, and Benin, and its capital is Abuja. Nigeria has been home to several ancient and indigenous kingdoms and states over the millennia. The modern state originated from British colonial rule beginning in the 19th century.
The port of Calabar became one of the most significant slave trading posts in West Africa in the era of the transatlantic slave trade.
A changing legal imperative, after the transatlantic slave trade was outlawed by Britain in 1807, and a desire for political and social stability led most European powers to support the widespread cultivation of agricultural products, such as the oil palm, for use in European industry.
- When the Ife heads first appeared in the Western World, why did the experts find it so hard to believe that Africa had ever had a civilization capable of creating artifacts of this quality?
- What belief systems today similarly blind us?
- Created before any European contact, but lost after European contact, to be rediscovered less than 100 years ago, what does this tell us about the impact of slavery and colonialism?
- Were the heads buried to protect them from coastal raids?
Bronze Head from Ife
- Title: Bronze Head from Ife
- Date: 1300 C.E.
- Culture: Yoruba people, Nigeria
- Find Spot: Ife, Nigeria
- Materials: Heavily leaded zinc-brass
- Acquisition: 1939
- Dimensions: 35 cm high
- Museum: The British Museum
Nigerian Proverbs & Sayings
“A man being short does not make him a boy.”
“Love will always be better than a whip.”
“A bird does not change its feathers because the weather is bad.”
“The habits that a child forms at their home will determine how they behave in their marriages.”
“A bird that flies from the ground onto an anthill does not know that it is still on the ground.”
“The same sun that melts wax is also capable of hardening clay.”
“If you sleep with itching anus, you will wake up with your hands smelling.”
“A child is what you put into him.”
“If life has beaten you severely and your face is swollen, smile, and act as a fat man.”
“A child who fears a beating would never admit that he played with a missing knife.”
“In the moment of crisis, the wise build bridges, and the foolish build dams.”
– Nigerian proverb
Photo Credit: 1) I, Sailko [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
|
Published in Health Tip of the Week
Your child doesn’t have to be a prodigy or really into the arts to be creative. Creativity comes in all forms, and is an important skill when it comes to learning to think outside the box and problem-solve. There are many ways to help foster creativity in your child, including the following:
- Take your time. Get on the floor and play with your child — and you needn’t have any bells and whistles! Good old-fashioned toys such as blocks, Lincoln Logs® and Tinkertoys® are great ways to get your child thinking.
- Limit electronics. Electronic toys do come in handy when we need an hour to get something done, but they don’t allow the child to do a lot of free thinking. Use what is around your home. Empty boxes, dress-up clothes and household items can all be used as props for imaginative play. Even an empty laundry basket can become a boat, a bus, a stage and anything else your child can come up with.
- Get in the game. If your child wants to pretend he is an airplane, stretch out your arms and fly alongside him to Rio or Paris or anywhere else he wants to go. Let your child be the pilot and direct the activity, as play should be the result of your child’s ideas.
- Don’t overschedule. While language classes, flute lessons, soccer and baseball are all great activities, make sure you leave time for unstructured play.
- Get moving. Put on some music and invite your child to make up dances or the words to her own songs.
- Make an art corner. Designate a space in your home for art projects. A simple table with some crayons and paper plus a small box full of pipe cleaners, googly eyes and cotton balls will be enough to get the creative juices flowing.
- “Tell me about that.” Ask open-ended questions. If your child writes a story or draws a picture, ask him to describe what he created for you.
- Turn off the TV. All the experts agree: Too much TV restricts your child’s thinking. If you’ve ever seen what happens to their eyes and faces as they watch, you know why. Limit television and screen time so they can form their own ideas and keep their minds active.
- Read together. Reading books together not only improves children’s literary skills, but it allows them to see the story in their own minds and encourages open thinking.
- Do your research. Kids have tons of questions, and many you won’t know the answers to. Look up the answers with them and discover something together. If the question has many answers, point out that sometimes there are many ways to think about problems.
Contributed by: Patrick S. Pasquariello, MD
Categories: Weekly Health Tips
|
An adjustment disorder is defined as an emotional or behavioral reaction to an identifiable stressful event or change in a person's life that is considered maladaptive, or somehow not an expected, healthy response to the event or change. The reaction must occur within three months of the identified stressful event or change. For a child or adolescent, the identifiable stressful event or change may be a family move, a parental divorce or separation, the loss of a pet, or the birth of a brother or sister. A sudden illness, or a restriction to a child's life because of chronic illness, may also provoke an adjustment response.
Adjustment disorders are a reaction to stress. There is not a single direct cause between the stressful event and the reaction. Children and adolescents vary in their temperament, past experiences, vulnerability, and coping skills. Their developmental stage and the capacity of their support system to meet their specific needs related to the stress are factors that may contribute to their reaction to a particular stress. Stressors also vary in duration, intensity, and effect. No evidence is available to suggest a specific biological factor that causes adjustment disorders.
Adjustment disorders are quite common in children and adolescents. They occur equally in males and females. While adjustment disorders occur in all cultures, the stressors and the signs may vary based on cultural influences. Adjustment disorders occur at all ages. However, it is believed that characteristics of the disorder are different in children and adolescents than they are in adults. Differences are noted in the symptoms experienced, in the severity and duration of symptoms, and in the outcome. Adolescent symptoms of adjustment disorders are more behavioral, such as acting out, while adults experience more depressive symptoms.
In all adjustment disorders, the reaction to the stressor seems to be in excess of a normal reaction, or the reaction significantly interferes with social, occupational, or educational functioning. There are six subtypes of adjustment disorder that are based on the type of the major symptoms experienced. The following are the most common symptoms of each of the subtypes of adjustment disorder. However, each adolescent may experience symptoms differently:
Adjustment disorder with depressed mood. Symptoms may include:
Feelings of hopelessness
Adjustment disorder with anxiety. Symptoms may include:
Adjustment disorder with anxiety and depressed mood. A combination of symptoms from both of the above subtypes (depressed mood and anxiety) is present.
Adjustment disorder with disturbance of conduct. Symptoms may include:
Violation of the rights of others
Violation of societal norms and rules (truancy, destruction of property, reckless driving, or fighting)
Adjustment disorder with mixed disturbance of emotions and conduct. A combination of symptoms from all of the above subtypes are present (depressed mood, anxiety, and conduct).
Adjustment disorder unspecified. Reactions to stressful events that do not fit in one of the above subtypes are present. Reactions may include behaviors such as social withdrawal or inhibitions to normally expected activities (for example, school or work).
The symptoms of adjustment disorders may resemble other medical problems or psychiatric conditions. Always consult your adolescent's healthcare provider for a diagnosis.
A child and adolescent psychiatrist or a qualified mental health professional usually makes the diagnosis of an adjustment disorder in children and adolescents following a comprehensive psychiatric evaluation and interview with the child or adolescent and the parents. A detailed personal history of development, life events, emotions, behaviors, and the identified stressful event is obtained during the interview.
Specific treatment for adjustment disorders will be determined by your adolescent's healthcare provider based on:
Your adolescent's age, overall health, and medical history
Extent of your adolescent's symptoms
Subtype of the adjustment disorder
Your adolescent's tolerance for specific medications or therapies
Expectations for the course of the stressful event
Your opinion or preference
Treatment may include:
Individual psychotherapy using cognitive-behavioral approaches. Cognitive-behavioral approaches are used to improve age-appropriate problem solving skills, communication skills, impulse control, anger management skills, and stress management skills.
Family therapy. Family therapy is often focused on making needed changes within the family system, such as improving communication skills and family interactions, as well as increasing family support among family members.
Peer group therapy. Peer group therapy is often focused on developing and using social skills and interpersonal skills.
Medication. While medications have very limited value in the treatment of adjustment disorders, medication may be considered on a short-term basis if a specific symptom is severe and known to be responsive to medication.
Preventive measures to reduce the incidence of adjustment disorders in adolescents are not known at this time. However, early detection and intervention can reduce the severity of symptoms, enhance the adolescent's normal growth and development, and improve the quality of life experienced by children or adolescents with adjustment disorders.
|
Learn something new every day
More Info... by email
Four forces are understood to govern the universe: the strong and weak nuclear forces, the electromagnetic (or electrical) force, and gravity. The latter two, the electrical force and gravity, are the only ones of these forces that extend to a macro range and therefore interact with matter on a large scale. Electromagnetism is responsible for chemical reactions, light, vision and virtually all interplay of matter. Almost all technology requires electricity to function, and there are several vital aspects and measurements of the electrical force. The basis of this force is the movement of electrons and the workings of positive and negative electrical charges.
Particles of matter can have positive or negative electrical charges. Protons, which form the nucleus of an atom, have a positive charge, whereas the electrons that orbit the nucleus have a negative charge. Opposite charges attract one another in an effort to neutralize charge, and like charges repel, so putting opposite poles of two magnets together causes the ends of the magnets to pull toward one another. Electricity, at its most basic, is the movement of electrons from one location to another, whether in a static discharge or in an electronic circuit; electricity can only flow where there is an available conductive path.
The electromagnetic force is so named because an electric current and a magnetic field can create each other. Passing a magnet through a coil of wire causes the electrons in the wire to move away from the magnet due to the repulsion of the electrical force. Similarly, running an electric current through a coiled wire produces a magnetic field whose direction is opposite the current due to electrical inertia.
Two main measurements of electrical force govern most of the behavior that electricity exhibits when interacting with objects: voltage and resistance, from which the measurement for current derives. Voltage is the amount of electrical potential that exists from one point to another, similar to the pressure built up inside an activated water hose. The higher the voltage between two points is, the greater the electrical pressure and the more easily current will flow. The concept of resistance describes an object's propensity to resist electrical flow. The electrical current in amperes that flows from one point to another can be expressed as the voltage divided by the resistance in ohms.
Electrical current is either alternating current (AC) or direct current (DC). The difference is the direction of flow: alternating current reverses direction periodically, typically 50 or 60 complete cycles per second in mains power, whereas direct current maintains a fixed polarity and therefore flows in only one direction, as from a battery.
door, barrier of wood, stone, metal, glass, paper, leaves, hides, or a combination of materials, installed to swing, fold, slide, or roll in order to close an opening to a room or building. Early doors, used throughout Mesopotamia and the ancient world, were merely hides or textiles. Doors of rigid, permanent materials appeared simultaneously with monumental architecture. Doors for important chambers were often made of stone or bronze.
Stone doors, usually hung on pivots, top and bottom, were often used on tombs. A marble, paneled example, probably from the time of Augustus, was found at Pompeii; a Greek door (c. AD 200) from a tomb at Langaza, Turkey, has been preserved in the museum at Istanbul.
The use of monumental bronze doors is a tradition that has persisted into the 20th century. The portals of Greek temples were often fitted with cast-bronze grills; the Romans characteristically used solid bronze double doors. They were usually supported by pivots fitted into sockets in the threshold and lintel. The earliest large examples are the 24-foot (7.3-metre) double doors of the Roman Pantheon. The Roman paneled design and mounting technique continued in Byzantine and Romanesque architecture. The art of casting doors was preserved in the Eastern Empire, the most notable example being the double doors (c. 838) of the Hagia Sophia cathedral in Constantinople (now Istanbul). In the 11th century bronze castings from Constantinople were imported into southern Italy. Bronze doors were introduced into northern Europe, notably in Germany, when Charlemagne installed a Byzantine pair (cast c. 804) for the cathedral at Aachen. The first bronze doors to be cast in one piece in northern Europe were made for the Cathedral of Hildesheim (c. 1015). They were designed with a series of panels in relief, establishing a sculptural tradition of historical narrative that distinguishes Romanesque and later bronze doors.
Hollow casting of relief panels was revived in the 12th century in southern Italy, notably by Barisanus of Trani (cathedral doors, 1175), and carried northward by artists such as Bonanno of Pisa. In 14th-century Tuscany the principal examples are the pairs of sculptured, paneled bronze doors on the Florentine Baptistery; the Gothic south doors (1330–36) are by Andrea Pisano, and the north doors (1403–24) by Lorenzo Ghiberti. Ghiberti’s east doors (1425–52) have come to be known as the “Gates of Paradise” (“Porta del Paradiso”). Bronze doors with relief panels by Antonio Filarete were cast for St. Peter’s Basilica, Rome. Bronze doors were not generally used in northwestern Europe until the 18th century. The first monumental bronze doors in the United States were erected in 1863 in the Capitol at Washington, D.C.
The wooden door was doubtless the most common in antiquity. Archaeological and literary evidence indicate its prevalence in Egypt and Mesopotamia. According to Pompeiian murals and surviving fragments, contemporary doors looked much like modern wood-paneled doors; they were constructed of stiles (vertical beams) and rails (horizontal beams) framed together to support panels and occasionally equipped with locks and hinges. This Roman type of door was adopted in Islāmic countries. In China the wooden door usually consisted of two panels, the lower one solid and the upper one a wooden lattice backed with paper. The traditional Japanese shoji was a wood-framed, paper-covered sliding panel.
The typical Western medieval door was of vertical planks backed with horizontals or diagonal bracing. It was strengthened with long iron hinges and studded with nails. In domestic architecture, interior double doors appeared in Italy in the 15th century and then in the rest of Europe and the American colonies. The paneled effect was simplified until, in the 20th century, the single, hollow-core, flush panel door became most common.
There also are several types of specialized modern doors. The louvered (or blind) door and the screen door have been used primarily in the United States. The Dutch door, a door cut in two near the middle, allowing the upper half to open while the lower half remains closed, descends from a traditional Flemish-Dutch type. The half door, being approximately half height and hung near the centre of the doorway, was especially popular in the 19th-century American West.
Glazed doors, dating from the 17th century, first appeared as window casements extended to the floor. French doors (glazed double doors) were incorporated into English and American architecture in the late 17th and 18th centuries. At about this time, the French developed the mirrored door.
Other types of 19th- and 20th-century innovations include the revolving door, the folding door, the sliding door inspired by the Japanese shoji, the canopy door (pivoting at the top of the frame), and the rolling door (of tambourlike construction), also opening to the top.
LENGTH OF LESSON: Two Classroom Periods
AUTHORS: John Versluis and Ralph Gibson
Students will understand the following:
2. The current and most widely accepted theory on the origins of the moon and how
the geological evidence collected on the lunar surface by Apollo astronauts
supports this theory.
3. How successful the Apollo Missions were from a scientific perspective focusing
on the geological evidence that was collected.
4-5 river rocks that are round or oval and very smooth
1-2 samples of sandstone (combine with the above rocks to form one group)
2-3 samples of vesicular basalt
1-2 samples of volcanic glass called Obsidian (combine with the vesicular basalt to form one group)
A tutorial briefly describing the scientific method, the history of the theoretical origins of the moon, and the contributions the Apollo Program made to assist in the overall geological history of the moon is provided below. Other materials needed are:
Up-to-date reference materials about the Apollo missions.
Up-to-date reference materials on geology and planetary geology.
Visual reference materials that students can access and adapt for their reports.
A list of websites and suggested readings that will assist with this exercise is included at the end of this lesson.
Geologists piece together the physical evolution and history of our planet by studying materials such as rocks and minerals, as well as the processes that operate far below ground. But the science of geology is not limited to the earth. Geologists also study our moon, other planets and their moons, asteroids, and meteorites. While geologists can study meteorites without much difficulty, the study of other planets and their moons is done with space probes launched from earth. The information geologists gather from these remote places adds to our understanding of the origin and evolution of the solar system. But there are always mysteries. The origin of our moon is one mystery that has plagued geologists and scientists for many years. The successful Apollo Program, however, provided geologists and scientists with over 840 pounds of moon rocks that led to new discoveries and a new hypothesis on the origin of our moon. This tutorial will explain what the scientific method is, how it is applied by geologists studying the origins of the moon, and how the Apollo Program significantly added to the growing body of knowledge about our planet and its moon.
THE SCIENTIFIC METHOD
Already in this tutorial, I have used the term theory. A theory is a generally accepted explanation of one or more related natural phenomena (such as the origin of the solar system and the moon) that is supported by a large body of evidence. Theories are formulated through a process called the scientific method. The first step in the scientific method is to gather all relevant information on the phenomenon under study. This information is then used to formulate tentative explanations, called hypotheses. Each hypothesis, if true, makes certain predictions, and each is tested to see whether its predictions actually occur. Over time and through many tests, competing hypotheses are eliminated when their predictions fail. If one hypothesis better explains the phenomenon and all of its predictions are found to occur, that hypothesis is proposed as a theory. Even once a hypothesis becomes a theory, its testing and refinement do not end.
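The elimination loop of the scientific method can be sketched as a toy program. The hypothesis names, prediction labels, and observations below are invented for illustration only, loosely mirroring the lunar-origin debate discussed later in this tutorial:

```python
# Schematic sketch: each hypothesis carries a set of predictions; an observation
# that contradicts any prediction eliminates that hypothesis.

def surviving_hypotheses(hypotheses, observations):
    """Keep hypotheses whose predictions are all consistent with observations.
    `hypotheses` maps name -> dict of predicted facts; `observations` is a
    dict of observed facts."""
    survivors = {}
    for name, predictions in hypotheses.items():
        if all(observations.get(k) == v for k, v in predictions.items()):
            survivors[name] = predictions
    return survivors

hypotheses = {
    "fission":      {"moon_density_equals_earth": True},
    "capture":      {"moon_composition_related_to_earth": False},
    "coaccretion":  {"moon_density_equals_earth": True},
    "giant_impact": {"moon_density_equals_earth": False,
                     "moon_composition_related_to_earth": True},
}
observations = {"moon_density_equals_earth": False,
                "moon_composition_related_to_earth": True}

print(sorted(surviving_hypotheses(hypotheses, observations)))  # ['giant_impact']
```

Real hypothesis testing is of course statistical and far subtler than an equality check; the sketch only captures the eliminate-what-fails logic.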
ORIGIN OF THE MOON
Have you ever stared up at the moon and wondered how it got there? If so, you're not alone. For thousands of years, humans have looked up and wondered what the moon was and why it dominated the black sky. The moon was a mystery that people tried to solve. The Iroquois Indians believed that the moon was created by the Sky Woman. The Mayans believed that the moon inhabited the earth before changing into a celestial body. Without science, people were left to speculate about a celestial body they could not fully comprehend from earth. Ancient Greeks thought that the dark portions on the moon were seas (which they called maria) and that the light portions of the moon represented dry land (which they called terrae) (figure 1).
It wasn't until the astronomer Galileo peered through his newly built telescope in 1610 that the true nature of the moon's vast, desolate landscape came into focus. But the world was not yet ready for science to explain natural phenomena. Galileo (figure 2) was tried for heresy for stating that the earth revolved around the sun instead of the other way around. This prompted many scientists working in the 17th century to keep their findings, hypotheses and theories to themselves.
It wasn't until 1878 that the first real hypothesis of the origin of the moon was proposed. George Howard Darwin (Charles Darwin's son) believed that the moon was once part of the earth. He stated that after the formation of the earth it spun so rapidly that it elongated; the sun's gravity then ripped off a chunk of the planet, which settled into orbit around the earth. The deep Pacific Ocean basin was thought to be the area from which this chunk was ripped. This hypothesis was accepted by most scientists well into the 20th century. The next hypothesis to gain scientific support was the "captured planet." This hypothesis, proposed by Thomas Jefferson Jackson See, stated that the moon was a small planet that was captured by the earth's gravity. The third hypothesis is coaccretion. Many astronomers, including Edouard Roche, believed that the moon was created at the same time as the earth: the cloud of matter accreting to form the earth was already orbited by another cloud of matter that was accreting to become the moon. By the time the Apollo Program began, the first hypothesis, proposed by George Howard Darwin, was no longer accepted by most scientists. The second and third hypotheses, however, were the leading candidates to explain the origin of the moon. But the Apollo Program was such a success that the information collected by the Apollo astronauts led to the proposal of a new hypothesis on the origin of the moon: the big whack.
I should note here that some of these hypotheses are referred to as theories by articles and reports for the general public. This is because most people do not understand the difference between a hypothesis and a theory. So whenever you see someone using the word theory, always question whether or not it truly is a theory, or whether the word theory is being used in place of hypothesis.
THE BIG WHACK
The major difference between the Apollo Program and all other explorations of other celestial bodies is that humans explored another world firsthand: they were there; they walked upon the moon's surface; they brought back samples they had collected with their own hands. It should not be surprising that the samples collected by the Apollo astronauts provided scientists with the most valuable source of information regarding another world that has ever been collected. Space probes and robotic rovers that gather information on other celestial bodies do provide scientists with important information, but these devices are limited. Nothing can replace a human's ability to evaluate and re-think a situation or to use intuition. Sending humans to other worlds is expensive, but the information they could gather would be invaluable (figure 3).
Figure 3: James B. Irwin collecting lunar samples (Apollo 15)
The information gathered by the Apollo astronauts was just that: invaluable. Before the Apollo Program, scientists thought they had narrowed down the origin of the moon to two possibilities. But the samples collected by the Apollo astronauts proved otherwise. The key factors geologists examined regarding the moon's relationship to the earth were density, volatile elements, and the size of the core. The samples revealed that the moon is less dense than the earth and is lacking in volatile elements. The moon also has a relatively smaller core than the earth. Yet there were enough similarities between elements found in lunar samples and in rocks here on earth that scientists believed the earth and the moon were related, kind of like a cosmic DNA test. None of the previous hypotheses could explain all these facts effectively. Plus, there was one more factor that had not been considered by scientists investigating the origin of the moon: the tilt of the earth's axis. Most of the other planets in the solar system rotate with their poles perpendicular to their orbital planes around the sun. If a planet's axis is tilted, it usually means that something quite dramatic occurred in its history, like a collision with a large meteorite, asteroid, moon, or even another planet.
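The density difference described above is easy to check from commonly quoted mass and radius figures. The numeric values below are standard published estimates, not data from this tutorial:

```python
import math

def density(mass_kg, radius_m):
    """Mean density of a sphere: mass divided by volume."""
    return mass_kg / ((4.0 / 3.0) * math.pi * radius_m**3)

# Commonly quoted figures (assumed here):
earth = density(5.972e24, 6.371e6)    # about 5500 kg/m^3
moon = density(7.342e22, 1.7374e6)    # about 3340 kg/m^3
```

The moon comes out roughly 40 percent less dense than the earth, consistent with it containing proportionally less iron core.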
Taking into account all these factors, scientists formulated the big whack hypothesis. It states that a large, planet-sized celestial body collided with the earth soon after its formation. The impact was so great that it destroyed the impacting body and nearly the earth itself. A great cloud of matter swirled around the wounded earth. Some of the matter fell back to earth, some drifted off into space, and a small amount began orbiting the earth. This orbiting matter then accreted over time into a spherical planetoid: our moon. This hypothesis seems to explain all the differences noted above. The earth absorbed most of the impacting body's core, which would explain the difference in core size; the earth regained its volatile elements through the later bombardment of comets and meteorites, which would explain why the earth has more than the moon; and the impact would have sent much of the impactor's remains into orbit along with large portions of the earth's mantle, which would explain the difference in density. This hypothesis also explains why the earth's axis is tilted: the collision was so great that the earth was simply knocked off its axis.
With each Apollo mission on the moon, beginning with Apollo 11, the size and scope of the scientific investigations grew. The astronauts stayed longer and ventured further away from their Lunar Modules with each progressive mission. Because these explorers were humans and not rovers or robots, the information they collected provided scientists with the clues they needed to begin to understand the origin and history of the moon, of our earth, and ultimately of ourselves. The Apollo Program was born in the Cold War. Were it not for the Soviet Union's early exploits in space, the Apollo Program might not have ever got off the ground. But the Apollo Program cannot be thought of as purely a battle in the Cold War. Once it gained momentum, it became something else. It transformed us from a species that roamed our planet, to a species that ventured into space and roamed on another world. The Apollo Program marked a new step in the cultural evolution of our species. Aside from this lofty perspective, the Apollo Program also should be thought of in terms of its scientific value. While the program was criticized for its expense, the information gathered during its run has proven to be invaluable. Hopefully, the adventurous and explorative spirit of the Apollo Program will continue as we look to explore other worlds in our solar system. Imagine what we could finally learn about Mars were humans allowed to roam the landscape, evaluating, re-thinking tasks, and using their intuition as they searched for clues. Exploration performed by machines is limited. Exploration performed by humans can be limitless.
Albedo - The percentage of the incoming sunlight that is reflected by a natural surface.
Asteroids - One of the thousands of small planets between Mars and Jupiter.
Basin - A large impact crater, usually with a diameter in excess of 100 kilometers. Most basins have been modified by degradation of the original basin relief through downslope movement of debris and flooding of the basin interior by lavas.
Crater - A typically bowl-shaped or saucer-shaped pit or depression, generally of considerable size and with steep inner slopes, formed on a surface or in the ground by the explosive release of chemical or kinetic energy; e.g., an impact crater or an explosion crater.
Density - The mass of a substance or body per unit volume.
Ejecta - The material thrown out of an impact crater by the shock pressures generated during the impact event. Ejecta generally covers the surface around an impact crater to a distance of at least one crater diameter, with individual streamers of material extending well beyond this distance (see rays). The ejecta blanket of a crater becomes less visible with increasing age of the crater.
Highlands - The densely cratered portions of the Moon that are typically at higher elevations than the mare plains. The highlands contain a significant proportion of anorthosite, an igneous rock made up almost entirely of plagioclase feldspar.
Lava - Molten rock extruded onto the surface by a volcanic eruption, or the volcanic rock formed when that material solidifies.
Mare - The low-albedo plains covering the floors of several large basins and spreading over adjacent areas. The mare material is composed primarily of basaltic lava flows, in contrast to the anorthosites in the highlands.
Massif - A massive topographic and structural feature, especially in an orogenic belt, commonly formed of rocks more rigid than those of its surroundings. These rocks may be protruding bodies of basement rocks, consolidated during earlier orogenies, or younger plutonic bodies. Examples are the crystalline massifs of the Helvetic Alps, whose rocks were deformed mainly during the Hercynian orogeny, long before the Alpine orogeny.
Meteorite - A small body from the solar system (called a meteor while falling through the atmosphere) that reaches the surface of the earth without being completely vaporized.
Mineral - A naturally occurring, inorganic crystalline material with a unique chemical structure.
Phase angle - The angle between the incident sunlight and the viewing direction when looking at an illuminated surface. Low phase angles result in relatively few shadows being cast by the surface relief.
Ray - A streamer of ejecta associated with an impact crater. Rays are most often of higher albedo than their surroundings. The albedo contrast may result from either disruption of the local surface by the ejecta or by emplacement of ejecta on the surroundings, or both.
Rille - One of the several trench-like or crack-like valleys, up to several hundred kilometers long and 1-2 km wide, found on the Moon's surface. Rilles may be extremely irregular with meandering courses ("sinuous rilles"), or they may be relatively straight ("normal rilles"); they have relatively steep walls and usually flat bottoms. Rilles are essentially youthful features and apparently represent fracture systems originating in brittle material.
Rock - A consolidated mixture of minerals.
Scarp - A change in topography along a linear to arcuate cliff. The cliff may be the result of one or more processes including tectonic, volcanic, impact-related, or degradational processes.
Secondary craters - Craters produced by the impact of debris thrown out by a large impact event. Many secondary craters occur in clusters or lines where groups of ejecta blocks impacted almost simultaneously.
Volatile Elements - Elements and compounds, typically gases such as water vapor and carbon dioxide, that vaporize readily and can make volcanoes erupt violently.
Curious Kids Museum:
The Nine Planets:
Title: Book of the Moon: A Lunar Introduction to
Astronomy, Geology, Space, Physics and Space Travel
Author(s): Thomas A. Hockey
Title: Geology on the Moon
Author(s): John E. Guest and Ronald Greeley
Title: Pieces of Another World: The Story of Moon Rocks
Author(s): Franklyn Mansfield Branley
Title: Project Apollo
Author(s): Hal Marcovitz
Data storage needs to keep up with our desire to snap pictures, download clips from the Internet, and create new digital documents. Since the early stages of computer technology, magnetic storage has been the method of choice to handle digital data. It has stayed that way because of our ability to continually shrink the area used to hold a single magnetic bit.
But we're closing in on the limits of this approach: clusters of just three to 12 atoms have already been demonstrated as functional storage systems. Last week, however, scientists went a step further, demonstrating the ability to magnetically store data in a single atom.
The basics of magnetic storage
Magnetic storage requires the magnetization of a ferromagnetic material to record data. These materials rely on the atom’s electrons, which themselves behave like tiny magnets. The electrons carry a magnetic dipole moment that is determined by the direction the electron spins and the shape of the path the electron travels (quantum mechanical spin and orbital angular momentum, to be technical). There are only two directions the electron can spin, either “up” or “down.”
Although this system is dynamic, its two stable equilibrium states, “up” or “down,” provide what's often referred to as magnetic bistability. Having a bit stored indefinitely usually involves a cluster of atoms all set to the same state. This provides a bigger signal, ensuring the bit is maintained even if any given atom doesn't behave stably.
So, while there were many advances in the miniaturization of magnetic bistability, there were some obvious questions about the limits it could reach.
A single-atom approach
In this investigation, scientists worked with holmium atoms (Ho) supported on magnesium oxide (MgO). Although many Ho atoms formed clusters on the surface, the researchers identified single atoms located atop oxygen to use as a magnetic storage material.
Using a scanning tunneling microscope, the researchers applied current pulses to the Ho atoms to switch the direction of the magnetic moments, demonstrating the ability to control the magnetic behavior of individual Ho atoms. They were then able to read the magnetic patterns by placing another magnetic material in close proximity, which enabled electrons to tunnel from one magnet to another (tunnel magnetoresistance).
Magnetic storage requires the ability to read and write information, but it also requires information to be retained over time. To gauge the storage retention time, the scientists observed how long a Ho atom would remain in a single state after being switched using a pulse of current. They found that the bits would last for hours before starting to randomize.
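That kind of retention measurement can be illustrated with a toy model in which each stored bit flips after a random, exponentially distributed waiting time. The lifetimes and delays below are invented for illustration; they are not the values measured in the experiment:

```python
import random

def fraction_retained(lifetime_hours, read_delay_hours, trials=20000, seed=1):
    """Toy model: each bit flips after an exponentially distributed waiting
    time with mean `lifetime_hours`. Returns the fraction of bits that are
    still unflipped when read back after `read_delay_hours`."""
    rng = random.Random(seed)
    kept = sum(rng.expovariate(1.0 / lifetime_hours) > read_delay_hours
               for _ in range(trials))
    return kept / trials

# With a mean lifetime of 3 hours, roughly 72% of bits survive a 1-hour wait:
survived = fraction_retained(lifetime_hours=3.0, read_delay_hours=1.0)
```

The survival fraction decays as exp(-t / lifetime), which is why hour-scale lifetimes still imply eventual randomization, as the article notes.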
The researchers performed a clever trick to test that what they were seeing was truly due to the magnetic moment of the Ho atom. They used the microscope tip to place a single iron atom near the Ho atom. In this experimental set-up, the iron atom functions as a local magnetometer, since it has an external out-of-plane magnetic field that's influenced by nearby magnetic materials. When the scientists applied current pulses to the Ho atom, they saw corresponding shifts in the magnetic field of the iron atom, demonstrating that the individual Ho atoms did in fact possess two distinct magnetic orientations.
Finally, the researchers explored the ability of an array of two Ho atoms to store two bits of information, again using a nearby Fe atom to locally read the magnetic state. They were also able to use other advanced techniques to remotely read the magnetic states.
These experiments demonstrate that high-density magnetic storage at the atomic level is possible, though significant research is still needed to further develop this technology and understand the practical feasibility. After all, something that's only stable for a few hours would require a very different approach than past magnetic media.
Structural Biochemistry/Chemical Bonding/Hydrophobic interaction
The tendency of nonpolar molecules in a polar solvent (usually water) to interact with one another is called the hydrophobic effect. The interactions between the nonpolar molecules are called hydrophobic interactions. The relative hydrophobicity of amino acid residues is defined by a system known as hydrophobicity scales.
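As a sketch of how a hydrophobicity scale is used, the snippet below ranks a few residues by their Kyte-Doolittle hydropathy values (positive means more hydrophobic). Only a partial table is quoted here for illustration; consult the published scale for the full list:

```python
# A few residues from the Kyte-Doolittle hydropathy scale (partial table,
# quoted for illustration).
KD = {
    "Ile": 4.5, "Val": 4.2, "Leu": 3.8, "Ala": 1.8,
    "Gly": -0.4, "Ser": -0.8, "Lys": -3.9, "Arg": -4.5,
}

def rank_by_hydrophobicity(residues):
    """Return residues sorted most hydrophobic first."""
    return sorted(residues, key=lambda r: KD[r], reverse=True)

print(rank_by_hydrophobicity(["Ser", "Ile", "Arg", "Ala"]))
# most hydrophobic (Ile) first, most hydrophilic (Arg) last
```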
The interactions between nonpolar molecules and water molecules are not as favorable as interactions among the water molecules themselves, due to the inability of nonpolar molecules to form hydrogen bonds or electrostatic interactions. When nonpolar molecules are introduced to the water molecules, the water molecules will initially surround the nonpolar molecules, forming "cages" around them. However, the tendency of nonpolar molecules to associate with one another will draw the nonpolar molecules together, forming a nonpolar aggregate.
Based on the second law of thermodynamics, the total entropy of the system plus its surrounding must always be increasing. Therefore, it is favorable for the nonpolar molecules to associate without the interference of water. The water molecules that initially "caged" the nonpolar molecules are released from the nonpolar molecules' surfaces, creating an increase in entropy in the surrounding. The favorable release of water molecules from nonpolar surfaces is responsible for phenomenon of the hydrophobic effect.
Hydrophobic interactions can also be seen in the clustering of amphipathic (amphiphilic) molecules such as phospholipids into bilayers and micelles. The hydrophobic regions of amphipathic molecules cluster together to avoid the ordered "cage" of water molecules that would otherwise surround them, while the hydrophilic ends orient outward as a shield-like outer layer that interacts favorably with the polar water molecules. Micelles form when fatty acids arrange into a sphere with a hydrophobic core and a hydrophilic outer shell. Bilayers are commonly seen in cell membranes, with hydrophilic outer (facing outside the cell) and inner (facing the cytoplasm) surfaces and a hydrophobic center inside the membrane. In nature the lipid bilayer is often the more favored arrangement, because bulky fatty acid tails can sterically hinder micelle formation.
Electric Properties of Plasma Membrane
Most cell membranes are electrically polarized, such that the inside is negative [typically −60 millivolts (mV)]. Membrane potential plays a key role in transport, energy conversion, and excitability. Consider membrane transport: some molecules can pass through cell membranes because they dissolve in the lipid bilayer. Additionally, most animal cells contain a high concentration of K+ and a low concentration of Na+ relative to the external medium. These ionic gradients are generated by a specific transport system, an enzyme called the Na+–K+ pump or the Na+–K+ ATPase. The hydrolysis of ATP by the pump provides the energy needed for the active transport of Na+ out of the cell and K+ into the cell, generating the gradients. The pump is called the Na+–K+ ATPase because the hydrolysis of ATP takes place only when Na+ and K+ are present. This ATPase, like all such enzymes, requires Mg2+.
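The equilibrium potential that such an ionic gradient can generate is given by the Nernst equation. A minimal sketch, assuming typical textbook K+ concentrations (about 5 mM outside and 140 mM inside a mammalian cell):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_mV(conc_out_mM, conc_in_mM, z=1, temp_K=310.0):
    """Nernst equilibrium potential (inside relative to outside), in mV,
    for an ion of charge z at the given temperature (310 K = body temp)."""
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian K+ gradient gives roughly -89 mV:
e_k = nernst_mV(5.0, 140.0)
```

The result is close to the resting potential of many cells, reflecting how strongly the K+ gradient dominates the resting membrane potential.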
When two nonpolar molecules come together, structured water molecules are released, allowing them to interact freely with bulk water. The release of water from such cages is favorable. The result is that nonpolar molecules show an increased tendency to associate with one another in water compared with less polar, less self-associating solvents. This tendency is called the hydrophobic effect, and the associated interactions are called hydrophobic interactions.
The release from the cage-like clathrates is more favorable because it increases the entropy of the system.
In mechanical engineering, backlash, sometimes called lash or play, is a clearance or lost motion in a mechanism caused by gaps between the parts. It can be defined as "the maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in mechanical sequence" and is a mechanical form of deadband. An example, in the context of gears and gear trains, is the amount of clearance between mated gear teeth. It can be seen when the direction of movement is reversed and the slack or lost motion is taken up before the reversal of motion is complete. Another example is in a valve train with mechanical tappets, where a certain range of lash is necessary for the valves to work properly.
Depending on the application, backlash may or may not be desirable. It is unavoidable for nearly all reversing mechanical couplings, although its effects can be negated or compensated for. In many applications, the theoretical ideal would be zero backlash, but in actual practice some backlash must be allowed to prevent jamming. Reasons for the presence of backlash include allowing for lubrication, manufacturing errors, deflection under load, and thermal expansion.
Factors affecting the amount of backlash required in a gear train include errors in profile, pitch, tooth thickness, helix angle and center distance, and run-out. The greater the accuracy, the smaller the backlash needed. Backlash is most commonly created by cutting the teeth deeper into the gears than the ideal depth. Another way of introducing backlash is by increasing the center distance between the gears.
Backlash due to tooth thickness changes is typically measured along the pitch circle and is defined by:

    b_t = t_i - t_a

where:

    b_t = backlash due to tooth thickness modifications
    t_i = tooth thickness on the pitch circle for ideal gearing (no backlash)
    t_a = actual tooth thickness

Backlash, measured on the pitch circle, due to operating center modifications is defined by:

    b_c = 2 (ΔC) tan(φ)

where:

    b_c = backlash due to operating center distance modifications
    ΔC = difference between actual and ideal operating center distances
    φ = pressure angle
Standard practice is to make allowance for half the backlash in the tooth thickness of each gear. However, if the pinion (the smaller of the two gears) is significantly smaller than the gear it is meshing with then it is common practice to account for all of the backlash in the larger gear. This maintains as much strength as possible in the pinion's teeth. The amount of additional material removed when making the gears depends on the pressure angle of the teeth. For a 14.5° pressure angle the extra distance the cutting tool is moved in equals the amount of backlash desired. For a 20° pressure angle the distance equals 0.73 times the amount of backlash desired.
As a rule of thumb the average backlash is defined as 0.04 divided by the diametral pitch; the minimum being 0.03 divided by the diametral pitch and the maximum 0.05 divided by the diametral pitch.
In a gear train, backlash is cumulative. When a gear train is reversed, the driving gear turns a short distance, equal to the total of all the backlashes, before the final driven gear begins to rotate. At low power outputs, backlash results in inaccurate positioning from the small errors introduced at each change of direction; at high power outputs, backlash sends shocks through the whole system and can damage teeth and other components.
In certain applications, backlash is an undesirable characteristic and should be minimized.
Gear trains where positioning is key but power transmission is light
The best example here is an analog radio tuning dial where one may make precise tuning movements both forwards and backwards. Specialized gear designs allow this. One of the more common designs splits the gear into two gears, each half the thickness of the original. One half of the gear is fixed to its shaft while the other half of the gear is allowed to turn on the shaft, but pre-loaded in rotation by small coil springs that rotate the free gear relative to the fixed gear. In this way, the spring tension rotates the free gear until all of the backlash in the system has been taken out; the teeth of the fixed gear press against one side of the teeth of the pinion while the teeth of the free gear press against the other side of the teeth on the pinion. Loads smaller than the force of the springs do not compress the springs and with no gaps between the teeth to be taken up, backlash is eliminated.
Leadscrews where positioning and power are both important
Another area where backlash matters is in leadscrews. Again, as with the gear train example, the culprit is lost motion when reversing a mechanism that is supposed to transmit motion accurately. Instead of gear teeth, the context is screw threads. The linear sliding axes (machine slides) of machine tools are an example application.
Most machine slides for many decades, and many even today, were simple-but-accurate cast iron linear bearing surfaces, such as a dovetail slide or box slide, with an Acme leadscrew drive. With just a simple nut, some backlash is inevitable. On manual (non-CNC) machine tools, the way that machinists compensate for the effect of backlash is to approach all precise positions using the same direction of travel. This means that if they have been dialing left, and now they want to move to a rightward point, they move rightward all the way past it and then dial leftward back to it. The setups, tool approaches, and toolpaths are designed around this constraint.
The next step up from the simple nut is a split nut, whose halves can be adjusted and locked with screws so that one side rides leftward thread faces, and the other side rides rightward faces. Notice the analogy here with the radio dial example using split gears, where the split halves are pushed in opposing directions. Unlike in the radio dial example, the spring tension idea is not useful here, because machine tools taking a cut put too much force against the screw. Any spring light enough to allow slide movement at all would allow cutter chatter at best and slide movement at worst. These screw-adjusted split-nut-on-an-Acme-leadscrew designs cannot eliminate all backlash on a machine slide unless they are adjusted so tight that the travel starts to bind. Therefore, this idea can't totally obviate the always-approach-from-the-same-direction concept; but backlash can be held to a small amount (1 or 2 thousandths of an inch), which is more convenient and in some non-precise work is enough to allow one to ignore the backlash (i.e., act as if there weren't any).
CNCs can be programmed to use the always-approach-from-the-same-direction concept, but that is not the normal way they are used today, because hydraulic anti-backlash split nuts and newer forms of leadscrew other than Acme/trapezoidal, such as recirculating ball screws, effectively eliminate the backlash. The axis can move in either direction without the go-past-and-come-back motion.
The simplest CNCs, such as microlathes or manual-to-CNC conversions, which use nut-and-Acme-screw drives can be programmed to correct for the total backlash on each axis, so that the machine's control system will automatically move the extra distance required to take up the slack when it changes directions. This programmatic "backlash compensation" is a cheap solution, but professional grade CNCs use the more expensive backlash-eliminating drives mentioned above. This allows them to do 3D contouring with a ball-nosed endmill, for example, where the endmill travels around in many directions with constant rigidity and without delays.
Some motion controllers include backlash compensation. Compensation may be achieved by simply adding extra compensating motion (as described earlier) or by sensing the load's position in a closed loop control scheme. The dynamic response of backlash itself, essentially a delay, makes the position loop less stable and prone to oscillation.
Minimum backlash is the minimum transverse backlash at the operating pitch circle allowable when the gear tooth with the greatest allowable functional tooth thickness is in mesh with the pinion tooth having its greatest allowable functional tooth thickness, at the tightest allowable center distance, under static conditions.
Backlash variation is defined as the difference between the maximum and minimum backlash occurring in a whole revolution of the larger of a pair of mating gears.
Non-precision gear couplings use backlash to allow for slight angular misalignment. However, backlash is undesirable in precision positioning applications such as machine tool tables. It can be minimized by tighter design features such as ball screws instead of leadscrews, and by using preloaded bearings. A preloaded bearing uses a spring or other compressive force to maintain bearing surfaces in contact despite reversal of direction.
There can be significant backlash in unsynchronized transmissions because of the intentional gap between dog gears (also known as dog clutches). The gap is necessary so that the driver or electronics can engage the gears easily while synchronizing the engine speed with the driveshaft speed. If there was a smaller clearance, it would be nearly impossible to engage the gears because the teeth would interfere with each other in most configurations. In synchronized transmissions, synchromesh solves this problem.
In general, infinity is the quality or state of endlessness or having no limits in terms of time, space, or other quantity. In mathematics, infinity is the conceptual expression of such a "numberless" number. It is often symbolized by the lemniscate (also known as the lemniscate of Bernoulli), which looks something like the numeral 8 written sideways (∞). This symbol for infinity was first used in the 1600s by the mathematician John Wallis.
Infinity can be defined as the limit of 1/x as x approaches zero from the right. Sometimes people say that 1/0 is equal to infinity, but technically, division by zero is not defined. Another notion is that infinity is a quantity x such that x + 1 = x. The idea is that the quantity is so large (either positive or negative) that increasing its value by 1 does not change it.
A set (see set theory) can be defined as infinite if there exists a one-to-one correspondence between that set and a proper subset of itself. According to this definition, the set of integers is infinite because its elements can be paired off one-to-one with all the even integers: each integer n can be paired with the even integer 2n.
The converse of the above statement is not always true. Some infinite sets have infinite proper subsets with which they cannot be paired off one-to-one. An example is the set of real numbers and its proper subset, the set of integers.
In the 1800s, Georg Cantor defined infinity in terms of the cardinalities of infinite sets. The cardinality of a set is the number of elements in the set. In this sense, the cardinality of the set of integers is smaller than the cardinality of the set of real numbers, even though both sets are infinite. The set of integers is denumerable (its elements can all be accounted for by means of a listing scheme), while the set of real numbers is not denumerable.
In a more down-to-earth sense, the words "approaches infinity" are used in place of the words "increases without limit." Thus, it is said that the limit of 1/x, as x approaches infinity, is equal to zero. In this context, infinity does not represent a defined quantity, but is merely a convenient expression.
Also see Mathematical Symbols.
Types of Compounds

There are 2 types of compounds: ionic and covalent.

Contrasting Ionic and Covalent Compounds

Covalent: result from sharing e-; 2 nonmetals; interparticle forces are weak; liquid or gas at room temp; low melting point; less soluble in water; poor conductors of electricity.
Ionic: result from a transfer of e-; a metal & a nonmetal; strong crystal structure; solid at room temp; high melting point; dissolve in water; electrolytes.

I. Ionic Compounds - compounds composed of ions
   1. Ionic bonds - attractive forces between ions of opposite charge
   2. Result from a transfer of e-: Na -> [Na]+, Cl -> [Cl]-

   A. Binary compounds - compounds that contain only 2 elements
      To name a binary compound:
      1. Write the name of the + charged ion (metal)
      2. Add the name of the - charged ion (nonmetal)
      3. Change the end of the name of the nonmetal to "ide"
      Example: NaCl = sodium + chlorine + "ide" -> sodium chloride
      You try it! Name MgO: magnesium oxide

   B. Formula - in ionic compounds, the smallest ratio of atoms or ions in the compound
      1. Formula unit - simplest ratio of ions in a compound
      Example: How many formula units of each are present? 2NaCl, 3NaCl, NaCl

   C. Predicting charges
      1. Metals lose e-, so they form positive ions.
      2. Nonmetals gain e-, so they form negative ions.
      3. Charge = oxidation #
      4. The oxidation # is used to determine formulas.

   D. Writing formulas
      1. aluminum + oxygen: Aluminum has 3 valence e-, so it gives away 3 e-. Oxygen has 6 valence e-, so it accepts 2 e-.
         Al3+ + O2- -> Al2O3
         The oxidation numbers must add up to zero. If not, you have to multiply to make them.
      Practice problems.

   E. Polyatomic ion - an ion that contains 2 or more different elements
      1. A group of atoms is covalently bonded together.
      2. The individual atoms have no charge, but the group as a whole does.
      3. See page 109 & handout.
      4. Quiz tomorrow on the 1st 9 polyatomic ions on the chart.
      5. Compounds containing polyatomic ions:
         - Positive metal ions bond to negative polyatomic ions. Ex: Na+ + OH- = NaOH
         - Negative nonmetal ions bond to positive polyatomic ions. Ex: NH4+ + I- = NH4I
         - Positive polyatomic ions can bond with negative polyatomic ions. Ex: NH4+ + OH- = NH4OH
      6. If you have to multiply a polyatomic ion, use parentheses.
         Examples: Mg2+ + NO3- = Mg(NO3)2; (NH4)2...; (H3O)3...
      7. To name compounds containing polyatomic ions, name the + ion first, followed by the - ion. Never change the name of a polyatomic ion!
         Example: CaCO3 = calcium carbonate
         Practice problems: do a-d & f on Practice Problem Exercise 4.7 on pg 111; omit e & f.

   F. Transition metals
      1. Transition metals can have more than 1 oxidation #.
         Ex: copper can be Cu+ or Cu2+; iron can be Fe2+ or Fe3+.
      2. Exceptions:
         a. Zn has a +2 charge
         b. Ag has a +1 charge
         c. Cd has a +2 charge
      3. A Roman numeral is put in parentheses after the name of the transition element to show its oxidation #.
      4. Examples: Cu+ + Cl- -> CuCl = copper(I) chloride; Cu2+ + Cl- -> CuCl2 = copper(II) chloride
         Practice problems p 102.

II. Molecular Substances (covalent compounds)
   A. Covalent bonds form when e- are shared.
      1. Covalent compound - a compound held together by covalent bonds
      2. Molecule - 2 or more atoms held together by covalent bonds
   B. Ionic & covalent compounds can be separated.
      1. Distillation - a separation method that uses evaporation & condensation of a liquid
         http://www.ktf-split.hr/glossary/en_o.php?def=distillation
   C. Molecular elements form when atoms of the same element bond.
      Examples: O2, N2, H2, F2, Cl2
   D. Allotropes - combinations of a single element that differ in structure
      Example: O2 is oxygen gas; O3 is ozone.
   E. Formulas & names
      1. Write out the name of the 1st nonmetal, followed by the 2nd nonmetal with "ide" for an ending.
      2. The 1st element is the one farther to the left on the periodic table.
      3. If both are in the same group, the one closest to the bottom goes 1st.
      4. Put a prefix on the name of each element to show the # of atoms present.
         Examples: C2O = dicarbon monoxide; C3O5 = tricarbon pentoxide
      5. Exceptions:
         a. If only 1 atom of the 1st element is present, leave off "mono". Example: NO6 = nitrogen hexoxide
         b. If "o-o" or "a-o" appear, leave off the 1st vowel.
   F. Common names (p 182)
      Some substances are known by their common name more than their proper name.
      1. What is the proper name of H2O?
         Examples: HCl = hydrochloric acid; H2SO4 = sulfuric acid; H3PO4 = phosphoric acid; HC2H3O2 = acetic acid; NH3 = ammonia
      2. Organic compounds
         a. Hydrocarbon - compounds that contain H and C only
            Examples: CH4 = methane; C2H6 = ethane; C3H8 = propane; C4H10 = butane
      Test!!!!!!!
“Woohoo! I did it!”
“Finally an idea worked. Finally, a lesson that helped me successfully stay in the target language for a long amount of time!”
It’s a simple lesson that I came up with before I started staying in the target language. It can be modified to help learners of all ages and proficiency levels.
All you need is crayons (for each student) and a worksheet that looks like this:
For Novice Low or Novice Mid
Walk around the classroom. As you give one worksheet to each student, say sentences like these, “Here’s a paper for you. A paper for you. A paper for you. And one for you. Here’s a paper for you. For you, and you and you.”
Then, pass out crayons in the same way: “Crayons for you. For you. Here are some crayons for you…etc.”
Once the materials are passed out, display a sample worksheet at the front of the classroom. Hold a box of crayons in your own hands. Take out a red crayon and hold it up in the air. Motion for the students to do the same. As students are taking out their red crayons, say things like, “Good! Good Aiden! Good! Yes, red. Red. Red. The red crayon! Good Jessica…etc.”
Once all students are holding up the red crayon, have them repeat the word, “red,” after you. Then, turn your back to the class and start coloring in space #1 on the rainbow with the red crayon. When you finish coloring that section, start walking around the room saying, “Good Aiden! Good. Yes. Red. Good.” Hold up a few papers of students who are coloring in space #1 correctly.
When most students are done, hold up your red crayon and say, “Goodbye red!” and put the crayon back in the box. Keep saying, “Goodbye red,” until all students have put away their red crayon.
Go back to the displayed sample worksheet and say, “Okay. Number 1…red,” or, “Okay. Number 1 was red.” Point to space #2 and say, “Number TWO. TWO. Number TWO is orange. Take out orange.” (Hold up the orange crayon.)
Make a coloring motion with the orange crayon and say, “Class. Color #2 orange.” (You may want to say the sentence a few times.) Turn around and start coloring space #2 with the orange crayon.
Repeat this pattern until the rainbow activity is finished. If you want (and if your students would like it) make up a little tune that you can sing while the students are coloring using ONLY the L2 color and number words. (i.e. “Number 1…red. Number 1…red. Number 2…orange. Number 2…orange…etc.”)
For Novice High or Intermediate Low
Follow the same pattern (as with Novice Low or Novice Mid) except substitute the simple L2 words for L2 phrases and/or questions.
After the materials are passed out, hold up a crayon and say things like, “Aiden. What color is this? Is this color red or is this color orange? Aiden. Point to something else in this class that is the color red.” (Aiden points to something red.) Teacher says, “Good Aiden. Yes. That flag is red.” Teacher turns to address the whole class and says, “Class. Take out the color red.” As students are taking out the red crayon say things like, “Not the blue crayon. NOT the green crayon. Don’t take out the purple crayon. The RED crayon. The RED crayon. Take out the RED crayon. Good! Yes! Yes! Like Jessica. Good Jessica! Yes class. Take out the RED crayon.”
Ideas For Interpersonal Mode
After you’ve done the rainbow lesson as a whole class, pass out blank worksheets and give instructions for students to work in pairs. Tell the class that they will color the rainbows with mismatched colors: “Space #1 WON’T be RED. It will be a different color. It will be the color that your partner tells you.” Pass out a small piece of paper to all the Partner #1s in the class and tell them to keep it hidden. The paper will tell them what mismatched colors to use for all the rainbow spaces.
#1 – Green
#2 – Red
#3 – Purple
Walk around the room and make sure each pair of students is speaking only in L2 and coloring according to Partner #1’s instructions.
Intermediate Mid – Advanced Mid
Pass out the worksheet and the crayons. Instruct students to color space #1 RED, space #2 ORANGE and space #3 YELLOW. Tell them not to color spaces 4-6. Write your instructions on the board and have them start coloring.
While they are coloring, SECRETLY change your written instructions by erasing the word, “yellow” and replacing it with the L2 word for “purple.” On your page, color space #1 RED, space #2 ORANGE and space #3 PURPLE.
When all the students are done, start walking around the room with a confused look on your face. Take one of the students’ rainbows (choose a student who is confident and NOT easily embarrassed) and say things like, “Tyler. You colored #1 RED, #2 ORANGE and #3 YELLOW! Yellow!? Why did you color it YELLOW!?” (Let Tyler answer.) Then say, “No, Tyler. I did NOT say to color it YELLOW. I asked you to color it PURPLE! See! Look at the instructions I wrote on the board!”
Let the students start venting their frustration at you in the target language. Encourage them to say things like, “No, Miss. You did NOT say to color it PURPLE. You must have changed your instructions!” Argue back and say, “Why?! Why would I change something like that!? And we all know that the third color of the rainbow is NOT yellow. It’s obviously PURPLE. All of you don’t know what you’re talking about.”
Continue the argument for as long as you’d like. Repeat the incident with instructions for coloring spaces 4-6.
Ideas For Presentational Mode
Ask the students to write a story about a mom/dad doing this rainbow activity with her/his child. Tell the students that their L2 narrative must include dialogue. Have them model their story after the frustrating experience they just had with following your rainbow-coloring instructions. Give them some sample sentences like, “Son…you shouldn’t have colored #2 YELLOW. I told you a thousand times that it was supposed to be ORANGE. I told you that #1 was supposed to be RED and #2 was supposed to be ORANGE. It would be better if you listen more carefully in the future.”
Share your target language teaching experiences!
Anvils are much less common than they once were, as mechanized production requires more specialized components for forging. They are still used by blacksmiths producing custom work, and by farriers.
The primary work surface of the anvil is known as the face. It is generally made of hardened steel and should be flat and smooth, with rounded edges for most work. Any marks on the face will be transferred to the work. Also, sharp edges tend to cut into the metal being worked and may cause cracks to form in the workpiece. The face is hardened and tempered to resist the blows of the smith's hammer, so the anvil face does not deform under repeated use. A hard anvil face also reduces the amount of force lost in each hammer blow. Hammers should never directly strike the anvil face, as they may damage it.
The horn of the anvil is a conical projection used to form various round shapes, and is generally unhardened steel or iron. The horn is used mostly in bending operations. It also is used by some smiths as an aid in drawing out stock, "making it longer and thinner". Some anvils, mainly European, are made with two horns, one square and one round. Also, some anvils are made with side horns or clips for specialized work.
The step (or pad), commonly referred to as the table, is the area of the anvil used for cutting; conducting such operations there prevents damage to the face. Most professional smiths shun this practice, however, as it can damage the anvil.
The hardy hole is a square hole into which specialized forming and cutting tools are placed. It is also used in punching and bending operations.
The pritchel hole is a small round hole that is present on most modern anvils. Some anvils have more than one. It is used mostly for punching. At times smiths will fit a second tool to this hole to allow the smith more flexibility when using more than one anvil tool.
There are many designs for anvils, which are often tailored for a specific purpose or to meet the needs of a particular smith or which originated in diverse geographic locations.
The common blacksmith's anvil is made of either forged or cast steel, tool steel, or wrought iron (cast iron anvils are generally shunned, as they are too brittle for repeated use, and do not return the energy of a hammer blow like steel). Historically, some anvils have been made with a smooth top working face of hardened steel welded to a cast iron or wrought iron body, though this manufacturing method is no longer in use. It has at one end a projecting conical bick (beak, horn) used for hammering curved work pieces. The other end is typically called the heel. Occasionally the other end is also provided with a bick, partly rectangular in section. Most anvils made since the late 1700s also have a hardy hole and a pritchel hole where various tools, such as the anvil-cutter or hot chisel, can be inserted and held by the anvil. Some anvils have several hardy and pritchel holes, to accommodate a wider variety of hardy tools and pritchels. An anvil may also have a softer pad for chisel work.
An anvil for a power hammer is usually supported on a massive anvil block, sometimes weighing over 800 tons for a 12-ton hammer, and this again rests on a strong foundation of timber and masonry or concrete.
An anvil may have a marking indicating its weight, manufacturer, or place of origin. American-made anvils were often marked in pounds. European anvils are sometimes marked in kilograms. English anvils were often marked in hundredweight, the marking consisting of three numbers indicating hundredweight, quarter-hundredweight, and pounds. For example, a 3-1-5, if such an anvil existed, would be 3 × 112 lb + 1 × 28 lb + 5 lb = 369 lb ≈ 167 kg.
Cheap anvils made from inferior steel or cast iron which are unsuitable for serious use are derisively referred to as "ASOs", or "Anvil Shaped Objects". Some amateur smiths have used a piece of railroad track as a makeshift anvil.
Top-quality modern anvils are made of cast or forged tool steel and are heat treated for optimum hardness and toughness. Some modern anvils are made mostly from concrete. While the face is steel, the horn is not and can be easily damaged. These anvils can be hard to recognize because the gray paint used is the same shade as the steel face. They tend to weigh about half as much as a comparable steel anvil.
A metalworking vise may have a small anvil integrated in its design.
Image provided by Dr. Adam Miller
Identifying genes involved in biological processes is essential for understanding cellular and organismal function. While the genomes of many organisms have been sequenced, the function of a large fraction of genes, even in genomes as well-studied as human and mouse, is unknown. Perhaps the most widely used method to discover gene function is the forward genetic screen, in which chemical mutagens are used to cause random mutations throughout the genome, followed by tedious work to identify mutant animals and the mutated gene. This process can take several years, even using modern high-throughput sequencing technology to facilitate mutation identification. To develop a more efficient way to screen for gene function, research technician Arish Shah and colleagues in the lab of Dr. Cecilia Moens (Basic Sciences Division) turned to the CRISPR (clustered regularly interspaced short palindromic repeats) genome editing system. CRISPR involves a nuclease called Cas9 that is targeted to specific DNA sequences by engineered single guide RNAs (sgRNAs). The authors took advantage of this for genetic screening by expressing Cas9 and sgRNAs that cause small insertions or deletions (indels) in zebrafish. "We show the utility of the system by examining 48 genes for their requirement in electrical synapse formation and find two new genes that are involved in the process. Our screen took ~1 month to complete, showing that this can greatly speed up discovery," said Dr. Adam Miller, a postdoctoral fellow involved in the study.
The authors first assessed the potential of CRISPR to cause indels in single zebrafish genes by targeting a gene involved in retinal pigmentation and two genes involved in neuronal migration. This was done by injecting sgRNAs and mRNA encoding Cas9 into one-cell zebrafish embryos. Each gene disruption produced the expected phenotype, despite the fact that embryos were mosaic for a number of mutations due to a delay in Cas9 protein expression following injection.
Having confirmed that CRISPR could be used to screen for known phenotypes in injected embryos, the authors next wanted to ask whether they could screen for new genes involved in a process of interest. The authors chose to screen for electrical synapse formation because little is known about the genes required for their formation. The authors used the Mauthner circuit (M) in the zebrafish spinal cord due to the accessibility of its electrical synapses. CRISPR targeting of the gjd1a gene, previously found to be required for M electrical synapse formation, with two separate sgRNAs resulted in a >95% decrease in synapse number in 95% and 60% of injected embryos, respectively. This difference in efficiency was subsequently found to be a consequence of the sequence composition of the sgRNA target.
To screen a large number of genes simultaneously, the authors tested the efficiency of pooling sgRNAs for a single injection with Cas9. The most efficient gjd1a sgRNA was pooled with sgRNAs targeting five genes not involved in electrical synapse formation and injected. All pool-injected embryos displayed loss of synapses, suggesting that pooling of sgRNAs is effective. Following up on this, they designed a set of 48 sgRNAs targeting genes potentially involved in synapse formation. These were injected in pools of six and eight, with each sgRNA present in two pools. This led to six pools and a total of nine genes likely to be involved in synaptogenesis. Injection of individual sgRNAs from the pools recapitulated the pooled phenotype in a subset of injections, and allowed the authors to identify two genes not previously implicated in electrical synapse formation. The genes were subsequently confirmed in stable mutant lines. From sgRNA synthesis to phenotypic screening of embryos injected with individual sgRNAs, the screen took 3 weeks. Furthermore, sequencing of the target sites for each sgRNA showed that most targets were successfully disrupted.
"Looking forward this approach can certainly be expanded to many different phenotypes," said Dr. Miller. "Ongoing work in lab has found that the technique can be used to examine genes involved in neuronal migration, cell polarity, and cancer metastasis. Moreover, our screen of 50 genes was a pilot, and we hope to expand the screen to hundreds, perhaps even up to 1,000 genes in future work. CRISPR reverse genetic screening greatly increases the speed of doing genetics in a vertebrate system."
Shah AN, Davey CF, Whitebirch AC, Miller AC, Moens CB. 2015. Rapid reverse genetic screening using CRISPR in zebrafish. Nat Methods 12(6):535-540.
(Return to list of 3D Math articles)
Welcome to the second part of 3D Math Basics articles. Here I will try to explain two very important vector operations - the cross and dot product.
The cross product is very important in 3D graphics. With it, we can calculate the normal of a polygon. For those who don't know what a normal is: it is a directional vector that is perpendicular to the plane in which the polygon lies (there is a right angle between the normal and the plane), so it is also perpendicular to the polygon itself. It is used for thousands of things, for example lighting calculations or back-face culling. And for those who don't know what a plane is: it's just a flat area in some 3D space. It can be defined by 3 points that are not collinear - that is, they don't lie on the same line. Look at the picture:
So this is the plane (with yellow color):
It goes on forever, so actually this is just part of a plane. I hope you got the idea of what a plane is. When we have 3 (or more) points there, they form a triangle (or polygon). But in order to set up back-face culling and achieve nice lighting effects, we need to define a normal vector. How do we do it? We take the cross product. It's always taken from two directional vectors, and the resulting vector will be the normal. This normal is perpendicular to both the first and the second directional vector. Look at this:
We've got two directions - green is P2 - P1 and red is P3 - P2. Now, here is how the cross product is calculated:
CVector3 vCross; // Here we will store the result

vCross.x = (vVector1.y * vVector2.z) - (vVector1.z * vVector2.y); // X value
vCross.y = (vVector1.z * vVector2.x) - (vVector1.x * vVector2.z); // Y value
vCross.z = (vVector1.x * vVector2.y) - (vVector1.y * vVector2.x); // Z value
The green vector is (0, 0, 1) and the red one is (1, 0, 0). If we pass these vectors as parameters, the resulting vector is (0, 1, 0) - a vector pointing straight up! And it is perpendicular to both vectors. Look:
I hope you got it now. But there is one problem. What if the resulting vector was (0, -1, 0)? It would be perpendicular too. How do we know which direction the vector will be pointing? Well, I heard something about the right-hand rule, but I didn't get the idea of it. But I found out that it depends on whether the vectors are in clockwise or anticlockwise order. The best way to explain it is an example:
Edit on 21.01.2012: I was a n00b back then, and I'm not much of a pro right now, but I should be less of a n00b. The right-hand rule is very easy, and I didn't even try to understand it back then. Take your right hand, with the thumb, index finger and middle finger perpendicular to each other: your thumb points as vVector1, your index finger as vVector2, and your middle finger shows the direction of the resulting vector from the cross product. Take a look at Wikipedia for a picture: http://en.wikipedia.org/wiki/Right-hand_rule.
It's a cube. Now we want to calculate its normals, for lighting effects and back-face culling. We will take the red side (front) as an example:
We want its normal to point to the front (so the resulting vector should have direction (0, 0, 1)). Look at this code:
fourNormals[0] = vecCross(P2 - P1, P3 - P2);
fourNormals[1] = vecCross(P3 - P2, P4 - P3);
fourNormals[2] = vecCross(P4 - P3, P1 - P4);
fourNormals[3] = vecCross(P1 - P4, P2 - P1);

for (int i = 0; i < 4; i++) vecNormalize(fourNormals[i]);
Now all four normals are the same and are pointing to the front (towards us). Of course, you don't need to calculate all 4 (or more, if the polygon has more vertices); you just need to calculate one normal.
Don't forget to normalize it, because its length probably won't be 1. A normal's length must always be 1. As you can see, it's pointing where we wanted it to point. I used counter-clockwise
order. Look (the green vector is the normal):
Edit on 21.01.2012: Directions put here correspond with right hand rule, ignore clockwise and counter-clockwise stuff .
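To make this concrete, here is a self-contained sketch of the normal calculation in C++. The CVector3 struct and helper names mirror the snippets above, but this is an assumed reconstruction, not the tutorial's exact code:

```cpp
#include <cmath>

// Minimal vector struct mirroring the tutorial's CVector3
// (the exact layout is an assumption for this sketch).
struct CVector3 { float x, y, z; };

// Cross product of two direction vectors, as described above.
CVector3 vecCross(const CVector3& a, const CVector3& b)
{
    CVector3 c;
    c.x = a.y * b.z - a.z * b.y;
    c.y = a.z * b.x - a.x * b.z;
    c.z = a.x * b.y - a.y * b.x;
    return c;
}

// Scale a vector to unit length; a normal must have length 1.
CVector3 vecNormalize(CVector3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len > 0.0f) { v.x /= len; v.y /= len; v.z /= len; }
    return v;
}

// One normal is enough for a flat polygon: take two edge directions
// from three consecutive vertices and cross them.
CVector3 polygonNormal(const CVector3& p1, const CVector3& p2, const CVector3& p3)
{
    CVector3 e1 = { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };
    CVector3 e2 = { p3.x - p2.x, p3.y - p2.y, p3.z - p2.z };
    return vecNormalize(vecCross(e1, e2));
}
```

Calling polygonNormal on three consecutive vertices of the cube's front face, given in counter-clockwise order, yields (0, 0, 1) - exactly the direction we wanted.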
Another important operation is the dot product. It can be used to find the angle between two direction vectors - for example, when you want to find the angle to rotate some object so it faces another. The dot product doesn't return a vector; it returns only a number (a scalar). The dot product is calculated this way:
return (vVector1.x * vVector2.x) +
(vVector1.y * vVector2.y) +
(vVector1.z * vVector2.z);
If we had only 2 dimensions, we would remove z from this formula (the dot product is calculated the same way for any number of dimensions, but that isn't very important here). So this is how we calculate it.
In any book that deals with math, you will find something like this about the dot product:
A·B = |A| * |B| * cos(theta)
It means that the dot product of vectors A and B equals the length of A times the length of B times the cosine of the angle between them. We want to find the angle. From that:
cos(theta) = (A·B) / (|A| * |B|)
So now we have the cosine of the angle. To get the actual angle, we need to use the arc cosine. The following code does it all:
float fDotProduct = vecDot(vVector1, vVector2);
float fVectorsMagnitude = vecMagnitude(vVector1) * vecMagnitude(vVector2) ;
if(fVectorsMagnitude == 0.0)return 0.0; // Avoid division by zero
double angle = acos(fDotProduct / fVectorsMagnitude);
// The angle is in radians, so convert it to degrees
return angle * 180 / PI;
It returns an angle from 0 to 180 degrees (0 to PI radians before the conversion).
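Putting the pieces together, here is a self-contained sketch of the whole angle routine. The CVector3 layout and helper names are assumptions mirroring the snippets above, and I've added a clamp on the cosine, since floating-point rounding can push it slightly outside [-1, 1] and make acos fail:

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Minimal stand-in for the tutorial's vector type (assumed layout).
struct CVector3 { float x, y, z; };

double vecDot(const CVector3& a, const CVector3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

double vecMagnitude(const CVector3& v)
{
    return std::sqrt(vecDot(v, v));
}

// Angle between two direction vectors, in degrees (0 to 180).
double angleBetweenVectors(const CVector3& a, const CVector3& b)
{
    double fVectorsMagnitude = vecMagnitude(a) * vecMagnitude(b);
    if (fVectorsMagnitude == 0.0) return 0.0;  // avoid division by zero
    double c = vecDot(a, b) / fVectorsMagnitude;
    if (c > 1.0) c = 1.0;                      // clamp tiny rounding errors
    if (c < -1.0) c = -1.0;                    // before calling acos
    return std::acos(c) * 180.0 / PI;
}
```

For example, the angle between (1, 0, 0) and (0, 1, 0) comes out as 90 degrees, and opposite vectors give 180 degrees.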
If you want to see cross and dot products in use, check the code of Ruined city of Verion (OpenGL Misc).
Now that's all I know about this, and I think it's enough - for now. Maybe later it won't be enough and I will have to dive deeper into cross and dot products. I hope you learned something from this article. If you've got any questions about this article, post them on the Message Board or e-mail them to [email protected]. And try to play around with these cross and dot products in order to fully understand them.
|
Zinc is an essential element needed as a co-factor for the enzymes that read our DNA. It serves as a catalytic cofactor for close to 100 enzymes in the body involved in various aspects of cellular metabolism. Recently, zinc was found to play a key role in the development of new brain cells, and research has shown it can boost levels of BDNF (brain-derived neurotrophic factor), which promotes and maintains brain cell connections. Zinc is even used as a neurotransmitter, passing signals between brain cells. It also sits at the center of receptors for molecules like vitamin D, estrogen, and thyroid hormone, and plays an important role in growth and development, the immune response, neurological function, and reproduction. This mineral can even boost the efficacy of anti-depressant medications.
Getting zinc in your diet is important because, unlike other key minerals, the body has no specific storage system for zinc. A diet low in zinc can cause behavior disturbances, dysphoria, and cognitive impairments. Patients with clinical depression are more likely to have low levels of zinc, while normal levels promote better treatment outcomes. A deficiency has been shown to contribute to impaired physical and neuropsychological development and heightened vulnerability to life-threatening infections in young children. Individuals at risk for zinc deficiency include infants, children, and pregnant and lactating women.
Top Farmacy Sources: Oysters, Crab, Grass-fed Beef and Lamb, Pasture Raised Pork, Beans, Cashews, Peanuts, Dark Chocolate.
|
Because renewable sources are intermittent in generating energy, they contribute only about 6 per cent at the global level. Clean energy's share remains small, but it brings the benefit of preventing the construction of more fossil fuel power plants that would emit CO2 for years to come.
In the early days, power outputs for wind turbines were typically in the 100 kW to 300 kW range. Modern aerodynamic wind turbines now output between 1 MW and 5 MW. A larger power output needs a larger area for the turbine and a larger footprint on the ground to locate the utilities and to give access for periodic maintenance.
The major trend in wind turbines has been towards bigger and better machines. As technologies continue to advance, the emergence of micro wind turbines should add to the availability of clean energy sources, especially in the off-grid rural areas of large parts of the developing world.
The rural areas of large parts of Asia, Africa and South America are ill served by grid power. Even where power lines have been laid, there are frequent power interruptions and brownouts. Rural households therefore install and use portable gen-sets burning petroleum fuels to provide back-up power for lighting and ventilation and for their entertainment and communication needs. These portable gen-sets pollute worse than large fossil power plants, and fuel supply in rural areas is as erratic as power supply.
Micro wind turbines are now available at costs of around $6 per watt of output - still expensive, but not beyond reach with some government subsidies. Battery storage can help tide over dips in power output. The installation and maintenance of micro wind turbines are not very complex and can be handled by rural mechanics who service farm equipment and water pumps.
|
Small Streams, Big Impact
A new study finds that healthy streams--even very small ones--play a significant role in keeping pollutants from ending up in lakes and oceans
WHEN IT COMES to the cleaning power of waterways, size doesn't always matter. A new study published in the journal Nature finds that healthy streams--even very small ones--play a significant role in keeping pollutants such as nitrogen from ending up in lakes and oceans. Excess nitrogen, often found in runoff from farms, can wreak havoc downstream by stimulating algae blooms and depleting the water of oxygen, situations that threaten many important fisheries.
Researchers added small amounts of a harmless nitrogen isotope to 72 streams in the United States and Puerto Rico, then traced the isotope's movement through the waterways. They found that in small to moderate amounts, the nitrate was effectively removed by the streams, either by algae and other tiny organisms or by denitrification, which occurs when microbes convert nitrate to nitrogen gas, allowing it to return, inert, to the atmosphere. In much higher amounts, that effectiveness disappeared.
Based on their findings, the researchers recommend that even small stream ecosystems should be protected and restored--and not overused as a means of filtering pollutants. "Our results show streams can help us use natural resources, but this capacity has its limits," says study coauthor Stuart E.G. Finlay of the Cary Institute of Ecosystem Studies in Millbrook, New York.
|
Hey Diddle Diddle
Sing and Learn the Actions!
Hey diddle diddle,
the cat and the fiddle.
The cow jumped over the moon.
The little dog laughed
to see such fun,
and the dish ran away with the spoon.
- Play the song - ask: can you hear the fiddle or violin in the song? What is a violin? Do you like the sound of a violin? Which section of the orchestra does the violin belong to?
- Listen to a song played by an orchestra.
- Think about a nonsense picture - can you come up with your own funny characters?
- Use your body to show and express the actions to the words of the song.
- Sing and read along to the YouTube song to achieve multi-sensory learning "Do it, see it and hear it!"
Print out the song PDF
- Read the song lyrics - ask children a variety of questions.
- Re-read the song lyrics and ask children to join in.
- Recognise and use a variety of punctuation when reading. " " ! ? . ,
- Look at print and conventions (bold, italics).
- Talk about interesting/challenging words and discuss what they mean.
- Word study - phonic knowledge, compound words, rhyming words, contractions etc.
|
Education for females in Pakistan is not easy. Malala Yousafzai, the Pakistani teenager who was shot by the Taliban, showed the world just how difficult it is to receive an education as a female in Pakistan. Other girls similar to Malala are struggling to become educated and earn the right to have a career in Pakistan. Listen to learn more about Malala and other young Pakistani girls like her who are fighting for their rights to receive an education.
Story Length: 5:03
Socrative users can import these questions using the following code: SOC-1234
Fact, Question, Response
Language Identification Organizer
Deeper Meaning Chart
India and Pakistan have been in conflict since the British drew a line across India in 1947 that created two opposing nations. Pakistan’s military focuses on preparing for a conflict with India, and its government teaches its citizens to fear India. India and Pakistan have gone to war twice over the disputed region of Kashmir, which lies between them like a no-man's-land. Listen to learn about the legacy of the 1947 partition.
The rivalry between India and Pakistan dates back to the partition of the former British colony in 1947. Lines were drawn along religious lines. Pakistan was a region for Muslims and India a region for Hindus. More than 60 years later the relationship remains tense. Listen to hear a story about partition from the perspective of India and learn about recent events in India that have intensified the rivalry. This piece, told from the viewpoint of India, is a companion piece to the audio story at the heart of the lesson Trouble between India and Pakistan Dates Back to Partition which focuses on partition and the Pakistani perspective.
In recent decades, Afghanistan has been a country plagued by war. Author Khaled Hosseini’s debut novel, “The Kite Runner,” is set in Afghanistan in the 1960s and 1970s through the 2000s. The book tells the story of two young friends, Amir and Hassan, who are from very different classes and ethnic groups. The story follows them as they navigate life before and after the coup that toppled the Afghan king in 1973, the Russian occupation in the 1980s, and the rule of the Taliban in the 1990s. Listen as the author Afghan-native Hosseini describes how his life experiences are significant to his novel and how he has set out to change the public perception of this Middle Eastern country.
The United States declared war on Afghanistan in response to the terrorist attacks of September 11, 2001. But Afghanistan had already been a troubled and war-torn country for many, many years. In 1996, the Taliban seized control of the country, imposing strict rule over all of its citizens. This story focuses on how the strict rules of society in Afghanistan continue to affect its people--especially children and girls. Listen to this interview with the author of “The Kids of Kabul” and learn more about the challenges faced by Afghan children and women, especially in the area of education.
These levels of listening complexity can help teachers choose stories for their students. The levels do not relate to the content of the story, but to the complexity of the vocabulary, sentence structure and language in the audio story.
NOTE: Listenwise stories are intended for students in grades 5-12 and for English learners with intermediate language skills or higher.
These stories are easier to understand and are a good starting point for everyone.
These stories have an average language challenge for students and can be scaffolded for English learners.
These stories have challenging vocabulary and complex language structure.
|
The basic idea is to first create a set of fairly similar-sized cells over the surface of the model. We do this by placing points randomly over the surface, one point for each cell. This random placement of points is not likely to give good cell placement. To fix this, we have each point push away all of the neighboring points. The result of this repulsion by each point is that the points spread themselves in a fairly even distribution over the surface of the model.
Next, we construct what are known as the Voronoi cells surrounding each point. The Voronoi cell for a given point is just the set of all the positions on the surface that are nearer to the given point than to any other point. The line separating two neighboring Voronoi cells lies exactly midway between the points at the centers of the two cells. These Voronoi cells are now used to simulate a reaction-diffusion system.
To simulate a reaction-diffusion system, each cell initially contains a random amount of two given chemicals. Then over time, the chemicals spread from one cell to its neighbors ("diffusion") by moving from cells that have a lot of the chemical to other cells that have a lower concentration. If diffusion were the only process going on, then eventually all the cells would end up with the same amount of each chemical. We also simulate reaction steps, however. These reaction processes can, for instance, cause high amounts of one chemical to break down the other chemical. Another reaction process might cause one of the chemicals to be produced if the other chemical is present in large enough quantities. Depending on the nature of the reaction processes and the rates of chemical diffusion, a variety of patterns of chemical concentration can be formed.
Once we have a stable variation of chemical concentration, we can use the amount of a given chemical to guide the color of the surface. In the above image, white means a high concentration of a certain chemical and blue means low chemical amounts. If we color each cell according only to the amount of chemical in that one cell, we get the blocky texture seen in the lower left. If we average together nearby colors, though, we get the smoother texture seen on the lower right.
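The diffusion-plus-reaction update described above can be sketched in a few lines of code. This is a 1-D toy version on a ring of cells (the real system runs on Voronoi cells over a surface), and the Gray-Scott-style reaction terms and all constants are illustrative assumptions, not the actual system or parameters used here:

```cpp
#include <vector>

// A minimal 1-D two-chemical reaction-diffusion sketch.
// Each "cell" holds amounts of chemicals a and b; diffusion moves
// chemical toward lower-concentration neighbors, and the reaction
// terms let b consume a while a is fed in and b decays.
struct RDSystem {
    std::vector<double> a, b;
    double da = 0.2, db = 0.1;        // diffusion rates (b spreads slower)
    double feed = 0.055, kill = 0.062;

    explicit RDSystem(int n) : a(n, 1.0), b(n, 0.0) {
        // Seed a small patch of chemical b in the middle.
        for (int i = n / 2 - 2; i < n / 2 + 2; ++i) b[i] = 1.0;
    }

    void step(double dt) {
        int n = (int)a.size();
        std::vector<double> na(n), nb(n);
        for (int i = 0; i < n; ++i) {
            int l = (i + n - 1) % n, r = (i + 1) % n;  // ring neighbors
            double lapA = a[l] + a[r] - 2.0 * a[i];    // discrete diffusion
            double lapB = b[l] + b[r] - 2.0 * b[i];
            double react = a[i] * b[i] * b[i];         // b consumes a
            na[i] = a[i] + dt * (da * lapA - react + feed * (1.0 - a[i]));
            nb[i] = b[i] + dt * (db * lapB + react - (kill + feed) * b[i]);
        }
        a.swap(na);
        b.swap(nb);
    }
};
```

Each call to step() performs one diffusion step and one reaction step; repeated calls let spatial patterns of chemical concentration emerge from the seeded initial state, which is the behavior being exploited for the textures.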
Go to the Reaction-Diffusion Page.
|
First year students, you will answer this Essential Question:
Which Voyage of Discovery (1200-1800) would your class, as a whole, most like to have been part of? Why?
“So many voyages, just one trip!” said Professor Acme…
Your class has the opportunity to go on a fantastic field trip. Using a time machine, just invented by the ACME Time Travel Company for this special journey, all of you will be going back in time to be guests on a voyage with a famous explorer from 1200 to 1800. Working with a partner, you will help your class decide which voyage you all would most like to be part of.
In order for your class to make an informed decision as to which of the many voyages you all could be part of, you will present your findings to the class. Important: your task is not to try to persuade your class that your voyage was the best, but rather to provide great information that will help your whole class decide which voyage you all would most have liked to be part of.
Answer the following questions about your voyage:
1. Who was the leader of the voyage?
2. What country did he sail for?
3. When did he sail?
4. What were some of the leader’s characteristics? (example: kind, cruel)
5. What was the purpose of the voyage?
6. What were some significant events that occurred during the voyage?
7. What was the outcome of the voyage?
8. What route did he take?
Size: 22” x 28”
1. All the above information
2. 2 Pictures
3. A Map
Project Due November 29th
|
What is the cultural literary environment around the time Bret Harte wrote the short story "The Outcasts of Poker Flat"?
When Bret Harte wrote his California gold mine short stories, such as "The Outcasts of Poker Flat," he was following in a tradition established by his literary predecessors and contemporaries. As an illustration, both Nathaniel Hawthorne and Mark Twain addressed their literary works to the injustice of a dogmatic and/or hypocritical society imposing artificial constraints upon its citizens. Hawthorne's most famous work of this nature, set in the colonial era, is The Scarlet Letter. Mark Twain famously addressed the same theme, set in a different time period, in The Adventures of Huckleberry Finn. Tyranny from within the community was a great concern and one which many writers undertook to examine, including Herman Melville and Henry James. In "The Outcasts of Poker Flat," Bret Harte exposes the hypocritical tyranny of the gold rush town of Poker Flat as its citizens vote to exile a gambler--an act of revenge for winning their money--and prostitutes in the winter storm season, while the behavior of these undesirables proves that they have hearts of compassion capable of the ultimate in altruism, kindness and sacrifice.
|
"Let's get ready to solve our math word problem using the tape diagram," said Tari Geisler, Bush Elementary School fourth-grade teacher, pointing to a diagram on the classroom screen, which shows a long rectangle resembling a piece of tape. The rectangle is divided into three parts shaded black, gray and white.
Geisler asks the class, "How long is the whole piece of tape?"
"7,104 inches," said the students.
The pictured word problem was recently solved by Bush Elementary School fourth graders during their math lesson with teacher Tari Geisler. It is a tape diagram that helps students to visualize and represent quantities, in order to better understand the relationships between them.
Bush Elementary School fourth graders Nevaeh Erickson and Ethan Brown work on a math word problem using tape diagrams in Geisler’s class.
"How much is the whole shaded part? Both black and grey?"
"You need to add the black area, 4,295, and the grey area, 982, together," said a student.
"Correct, so the whole shaded area is 5,277. But, we've only partially solved the problem. How do we solve for A, the part that is white?"
"Take the entire tape amount, 7,104 and subtract 5,277, the amount of the shaded areas," said a student.
Geisler was teaching a lesson from the NYS fourth grade curriculum modules. Bush Elementary School's team of fourth grade teachers, Tiffany MacCallum, Amy Vezina and Geisler, are all using the NYS math modules in their classrooms.
The teachers implemented the new math modules to align their curriculum to the Common Core Learning Standards. As all students are instructed in the modules, the teachers are able to differentiate their lessons by offering more instruction for those students who encounter difficulty, and enrichment opportunities for students who master concepts quickly.
Fourth-grade students are working toward mastery as they add and subtract large numbers beginning with values such as 387 and 2,438 and progressing to millions (6,588,395).
Later this year, students will delve into multi-digit multiplication and division (ex: 23 x 124 = 2,852), and will round out their year with the study of fractions.
As students work to develop their skills in computation, they will also use visual models, such as tape diagrams, to represent and solve word problems. Tape diagrams are one of many models used in mathematics to help students visualize and represent quantities, in order to better understand the relationships between them. These models help students access what may otherwise be abstract concepts that are difficult to understand. By understanding these concepts more deeply, students can become more efficient and accurate problem solvers.
If parents have more questions on the Common Core Learning Standards, they can find more information at EngageNY.org under "Parent and Family Resources." (www.engageny.org/parent-and-familyresources) Parents should also always feel comfortable talking to their child's teacher and principal to learn more about the curriculum and how to help their child at home.
|
A new approach to robotics and artificial intelligence (AI) could lead to a revolution in the field by shifting the focus from what a thing is to how it can be used.
Identifying what a robot is looking at is a key approach in AI and machine cognition. So far, ambitious researchers have managed to teach a computer vision system to recognise up to 100 objects. Granted, this is a huge achievement, yet far short of an "I, Robot" scenario.
But there is another radically different approach available that European researchers have applied to the study of robotics and AI. The MACS project does not attempt to get robots to perceive what something is, but how it can be used.
This is an application of the cognitive theory of ‘affordances’, developed by the American psychologist James J. Gibson between 1950 and 1979. He rejected behaviourism and proposed a theory of ‘affordances’, a term signifying the range of possible interactions between an individual and a particular object or environment. The theory focuses on what a thing or environment enables a user to do.
Computer vision might identify the object as a chair, but a system of affordances will instruct the robot that it can be used for sitting. This system is key to the new approach. The system means that once an affordance-perceiving robot ‘sees’ a flat object of a certain height and rigidity, it knows that the object can be used for sitting.
But it also means that an affordance-based robot will be able to determine that the flat object of a certain height and rigidity is too heavy to lift, and must be pushed, and that it can be used to hold a door open.
Ultimately, the aim of goal-oriented, affordance-based machine cognition is to enable a robot to use whatever it finds in its environment to complete a particular task.
“Affordance based perception would look at whether something is graspable, or if there is an opening, rather than worrying about what an object is called,” explains Dr Erich Rome, coordinator of the MACS project.
Five ambitious goals
‘MACS’ stands for multi-sensory autonomous cognitive systems interacting with dynamic environments for perceiving and learning affordances. Started in September 2004, the project began with five scientific and technological goals.
First the researchers sought to create new software architecture to support affordance-based robot control. Second, they wanted to use affordances to direct a robot to complete a goal-directed task. Third, they wanted to establish methods for perceiving, learning and reasoning about affordances.
Next, they wanted to create a system so the robot could acquire knowledge of new affordances through experimentation or observation. Finally the MACS team planned to demonstrate the entire system on a robotic platform called the Kurt3D.
The EU-funded project successfully created an integrated affordance-inspired robot control system. This included the implementation of a perception module, a behaviour system, an execution control module, planner, learning module and affordance representation repository.
The proof-of-concept has been shown in various experiments with the simulator MACSim and in the real robot Kurt3D.
“We performed a physics-based simulation using a model of the robot,” says Rome. “We tested single components like perception and learning, and also the entire architecture in simulation. And then we tested the whole system in the robot.”
In that test, Kurt3D used affordance-based perception to identify what could be grasped, where there was free space, and what was traversable. The robot found an object, picked it up, and put it on a pressure-activated switch that controlled a door. Then, once the robot detected the passage, it opened and moved through the door.
The robot improvises
The tests were a remarkable achievement. The robot essentially figured out how to manipulate its environment to achieve a real-world goal. It showed a capacity for improvisation.
“This is the very early stages of this approach,” warns Rome. “So we are a long way from commercialisation. There are others working on it. But what is unique about the MACS project is that we introduced direct support for the affordances concept in our architecture.”
And MACS has also made affordances a more mainstream concept in robotics, perception and cognition. Some of the partners are involved in other projects, like ROSSI, which tracks the relation of language to actions.
“The project helped generate a lot of interest in the concept and it is also now a very visible topic,” says Rome.
In all, MACS and its work have moved robotics into a new paradigm, teaching robots to identify what they can do.
|
Introduction to Rabbits
Rabbits are small mammals in the family Leporidae. The European or Old World rabbit (Oryctolagus cuniculus) is the only species from which domestic rabbits descend. Wild rabbits and hares include cottontail rabbits (Sylvilagus) and the “true” hares or jackrabbits (Lepus). In Western nations, rabbits have been kept as pets since the 1800s. As pets, they need a considerable amount of care and attention. Many different breeds of rabbits are available; common differences between breeds include size, color, and length of fur.
A male rabbit is called a buck, a female is called a doe, and a baby is called a kit. Rabbits are born blind and hairless. In the wild, they are usually born and live in underground burrows.
|
Aqua regia, a mixture of nitric acid and hydrochloric acid, is one of the few materials that will dissolve gold. This material, whose name translates to "royal water" in English, was so named because it could dissolve the royal metal, gold. First noted in the fourteenth century, aqua regia could be used to help ascertain whether a particular material was actually gold or some trickery of the alchemist. Nitric acid by itself will not dissolve gold, but it will in combination with hydrochloric acid. The chemistry of the process is rather complex, with both acids reacting with the metal to form soluble gold compounds. The gold can be recovered from the solution, making the process useful for purification purposes.
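A commonly quoted overall equation for the dissolution (a simplified summary; the real process involves several coupled equilibria, and the exact products depend on conditions) is:

```latex
\mathrm{Au} + \mathrm{HNO_3} + 4\,\mathrm{HCl} \longrightarrow \mathrm{HAuCl_4} + \mathrm{NO}\uparrow + 2\,\mathrm{H_2O}
```

Here the nitric acid acts as the oxidizer while the hydrochloric acid supplies chloride ions, which stabilize the oxidized gold as the soluble chloroaurate complex.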
Acids and bases have had many uses throughout history. In this chapter we want to explore the properties of acids and bases and the reactions in which they take part.
|
Measuring Body Fat
In this body fat measurement worksheet, students will conduct an experiment to determine how the "under-water weighing" technique of measuring body fat works. Once the experiment is complete, students will complete 1 short answer question.
8 Views 30 Downloads
Middle School Sampler: Science
Focus on inquiry-based learning in your science class with a series of activities designed for middle schoolers. A helpful packet samples four different texts, which include activities about predator-prey relationships, Earth's axis and...
6th - 8th Science CCSS: Designed
All Fats Are Not Created Equal
Apply robotics to connect physical properties to chemical properties. Future engineers use robots to determine the melting points of various fats and oils. The robots can do this by measuring the translucency of the fats as they heat up.
5th - 7th Science CCSS: Designed
|
Ernest Everett Just 1883-1941
He Changed the Way We Thought About Cells
-Going against conventional wisdom, he shattered long-held beliefs about the structure and functions of cells by proving that ectoplasm--which had been largely ignored--was vital to cell and egg development.
-His findings changed the way scientists thought about evolution, the difference between plants and animals, the difference between non-living and living things, ways to determine sex in advance, the functions of the liver, kidneys, pancreas, and other vital organs. His findings also affected cancer research.
-The importance he placed on ectoplasm was too extreme, just as the scientists of his day placed too much importance on the nucleus. Nevertheless his findings were groundbreaking and critical.
|
Kwanzaa is an African American and Pan-African holiday which celebrates family, community, and culture. Celebrated annually from December 26th thru January 1st, Kwanzaa is the world's fastest growing holiday with over 20 million celebrants worldwide.
Kwanzaa seeks to enforce a connectedness to African cultural identity, provide a focal point for the gathering of African peoples, and to reflect upon the Nguzo Saba, or the seven principles. People of all religious faiths and backgrounds practice Kwanzaa. As Maya Angelou explains in The Black Candle, "It is a time when we gather in the spirit of family and community, to celebrate life, love, unity, and hope."
The first Kwanzaa celebration was held on Dec. 26, 1966, in Los Angeles, California. Kwanzaa began here in the United States, but its roots reach back to African harvest festivities called First Fruits Celebrations. The word Kwanzaa comes from the phrase matunda ya kwanza, which means "first fruits" in the pan-African language Swahili. First Fruits Celebrations date as far back as ancient Egypt and Nubia.
Rooted in this ancient history and culture, Kwanzaa develops as a flourishing branch of African American life and struggle, a recreated and expanded ancient tradition. Thus, it bears the special characteristics of not only an African American holiday but also a Pan-African one, for it draws from the cultures of various African peoples and is celebrated by millions of Africans throughout the world African community. Moreover, these various African peoples celebrate Kwanzaa because it speaks not only to African Americans in a special way, but also to Africans as a whole, in its stress on history, values, family, community and culture.
Kwanzaa was established in 1966 by Dr. Maulana Karenga in the midst of the Black Freedom Movement and thus reflects its concern for cultural groundedness in thought and practice, and the unity and self-determination associated with this. It was conceived and established to reaffirm and restore our rootedness in African culture, serve as a regular communal celebration to reaffirm and reinforce the bonds between us as a people, and to introduce and reinforce the Nguzo Saba (the Seven Principles).
Dr. Maulana Karenga, founder of Kwanzaa, explains: "The central message and meaning of Kwanzaa is rooted in its raising up and bringing forth the ancient African model and practice of producing, harvesting and sharing good in the world. Kwanzaa stresses the importance of our sowing the seeds of goodness everywhere, of cultivating them with care and loving kindness, of harvesting the products of our efforts with joy and of sharing the good of it all throughout the community and the world. Thus, of all the rich and expansive ways we can express the meaning and message of Kwanzaa, none is more important than seeing it and embracing it as a season and celebration of bringing good into the world."
|
Distance measures (cosmology)
Distance measures are used in physical cosmology to give a natural notion of the distance between two objects or events in the universe. They are often used to tie some observable quantity (such as the luminosity of a distant quasar, the redshift of a distant galaxy, or the angular size of the acoustic peaks in the CMB power spectrum) to another quantity that is not directly observable, but is more convenient for calculations (such as the comoving coordinates of the quasar, galaxy, etc.). The distance measures discussed here all reduce to the common notion of Euclidean distance at low redshift.
In accord with our present understanding of cosmology, these measures are calculated within the context of general relativity, where the Friedmann–Lemaître–Robertson–Walker solution is used to describe the Universe.
There are a few different definitions of "distance" in cosmology which all coincide for sufficiently small redshifts. The expressions for these distances are most practical when written as functions of redshift z, since redshift is always the observable. They can easily be written as functions of the scale factor a, or of cosmic or conformal time, by performing a simple transformation of variables. By defining the dimensionless Hubble parameter

E(z) = sqrt( Ω_M (1+z)^3 + Ω_k (1+z)^2 + Ω_Λ )

and the Hubble distance d_H = c / H_0, the relation between the different distances becomes apparent. Here, Ω_M is the total matter density, Ω_Λ is the dark energy density, Ω_k represents the curvature, H_0 is the Hubble parameter today and c is the speed of light. The following measures for distances from the observer to an object at redshift z along the line of sight are commonly used in cosmology:
Transverse comoving distance:
D_M = (D_H/√Ω_k) sinh( √Ω_k D_C/D_H ) for Ω_k > 0; D_M = D_C for Ω_k = 0; D_M = (D_H/√|Ω_k|) sin( √|Ω_k| D_C/D_H ) for Ω_k < 0
Angular diameter distance:
D_A = D_M / (1 + z)

Luminosity distance:
D_L = (1 + z) D_M

Light-travel distance:
D_T = D_H ∫_0^z dz′ / [ (1 + z′) E(z′) ]
Note that the comoving distance is recovered from the transverse comoving distance by taking the limit Ω_k → 0, such that the two distance measures are equivalent in a flat Universe.
Peebles (1993) calls the transverse comoving distance the "angular size distance", which is not to be mistaken for the angular diameter distance. As a matter of nomenclature, the transverse comoving distance is equivalent to the proper motion distance, which is defined as the ratio of an object's transverse velocity to its proper motion in radians per unit time. Occasionally, the symbols χ or r are used to denote both the comoving and the angular diameter distance. Sometimes, the light-travel distance is also called the "lookback distance".
Comoving distance
The comoving distance between fundamental observers, i.e. observers that are comoving with the Hubble flow, does not change with time, as it accounts for the expansion of the Universe. It is obtained by integrating up the proper distances of nearby fundamental observers along the line of sight (LOS), where the proper distance is what a measurement at constant cosmic time would yield.
Transverse comoving distance
Two comoving objects at constant redshift z that are separated by an angle δθ on the sky are said to have the distance δθ D_M, where D_M is the transverse comoving distance.
Angular diameter distance
An object of size x at redshift z that appears to have angular size δθ has the angular diameter distance D_A = x/δθ. This is commonly used to observe so-called standard rulers, for example in the context of baryon acoustic oscillations.
Luminosity distance
If the intrinsic luminosity L of a distant object is known, we can calculate its luminosity distance by measuring the flux S and determining D_L = sqrt( L / (4π S) ), which turns out to be equal to (1 + z) D_M. This quantity is important for measurements of standard candles like type Ia supernovae, which were first used to discover the acceleration of the expansion of the Universe.
Light-travel distance
This distance is the time (in years) that it took light to reach the observer from the object multiplied by the speed of light. For instance, the radius of the observable Universe in this distance measure becomes the age of the Universe multiplied by the speed of light (1 light year/year), i.e. 13.8 billion light years. Also see misconceptions about the size of the visible universe.
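These definitions translate directly into a short numerical sketch. The following is a minimal illustration, not from the original article, assuming a flat ΛCDM model with H0 = 70 km/s/Mpc, Ωm = 0.3 and ΩΛ = 0.7 (illustrative values); the comoving-distance integral is evaluated with a simple trapezoidal rule:

```python
import math

# Assumed fiducial parameters (illustrative, not from the text)
C = 299792.458                       # speed of light, km/s
H0 = 70.0                            # Hubble parameter today, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7          # matter and dark energy densities
OMEGA_K = 1.0 - OMEGA_M - OMEGA_L    # curvature (0 here: flat Universe)
D_H = C / H0                         # Hubble distance, Mpc

def E(z):
    """Dimensionless Hubble parameter E(z)."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3
                     + OMEGA_K * (1 + z) ** 2
                     + OMEGA_L)

def comoving_distance(z, steps=10000):
    """D_C = D_H * integral_0^z dz'/E(z'), trapezoidal rule, in Mpc."""
    dz = z / steps
    total = sum(0.5 * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * dz
                for i in range(steps))
    return D_H * total

def transverse_comoving_distance(z):
    """D_M: reduces to D_C when the Universe is flat (Omega_k = 0)."""
    d_c = comoving_distance(z)
    if OMEGA_K > 0:
        sk = math.sqrt(OMEGA_K)
        return D_H / sk * math.sinh(sk * d_c / D_H)
    if OMEGA_K < 0:
        sk = math.sqrt(-OMEGA_K)
        return D_H / sk * math.sin(sk * d_c / D_H)
    return d_c

def angular_diameter_distance(z):
    return transverse_comoving_distance(z) / (1 + z)

def luminosity_distance(z):
    return transverse_comoving_distance(z) * (1 + z)
```

In a flat Universe the transverse comoving distance equals the comoving distance, and at low redshift every measure reduces to the Euclidean cz/H0.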
- Big Bang
- Comoving distance
- Friedmann equations
- Physical cosmology
- Cosmic distance ladder
- Friedmann-Lemaître-Robertson-Walker metric
- Scott Dodelson, Modern Cosmology. Academic Press (2003).
- 'The Distance Scale of the Universe' compares different cosmological distance measures.
- 'Distance measures in cosmology' explains in detail how to calculate the different distance measures as a function of world model and redshift.
- iCosmos: Cosmology Calculator (With Graph Generation ) calculates the different distance measures as a function of cosmological model and redshift, and generates plots for the model from redshift 0 to 20.
|
Every society has to prepare its young people for a place in adult life and teach them societal values through a process called education.
Education is an important agent of socialization and encourages social integration, especially in countries with diverse populations, such as the United States. Through their schools, students from a variety of cultural backgrounds come into contact with mainstream culture.
The vast majority of the children in the United States attend public schools, but these schools are far from equal. Public schools located in affluent, predominantly white, suburban areas tend to have more modern facilities and smaller class sizes than schools in urban, less affluent areas, which means that economic status often determines the quality of education a student receives. Children whose parents are wealthy enough to send them to private school enjoy an even greater advantage. Studies show that graduates of private schools are more likely to finish college and get high-salary jobs than are graduates of public schools.
|
The exhaust system moves the burnt air and fuel mixture out of the engine. However, there is more to this system than that: it must also clean the emissions and reduce the amount of noise produced, and it has an impact on the performance of your car. How does the exhaust system in your vehicle work? What are its key components? Find out in this guide.
What are Vehicle Exhaust Systems?
Exhaust systems are designed to remove gases produced inside the engine’s combustion chamber. It will have many components that work together to remove gases including carbon monoxide and hydrocarbons. If your vehicle’s exhaust system stops working properly, it will result in the loss of power and fuel efficiency. You may also fail to pass an emissions control check.
Components of an Exhaust System
The main components of an exhaust system are as follows:
- Exhaust Valve: This part is in the cylinder head; it opens after the piston's power stroke to let the burnt gases out.
- Piston: The piston is responsible for pushing the gases created during combustion into the exhaust manifold.
- Exhaust manifold: It is the exhaust manifold that further transmits the emissions to the catalytic converter.
- Catalytic Converter: It will reduce the percentage of toxins in the emissions to clean the gases.
- Exhaust Pipe: The emissions are then moved from the catalytic converter to the muffler via the exhaust pipe.
- Muffler: The function of the muffler is to reduce the noise and remove the exhaust gases.
How Does the System Work?
Exhaust gases are produced in the engine's combustion chambers and expelled as the fourth (exhaust) stroke of the cycle completes. All the cylinders are connected to the exhaust manifold through respective pipes. The manifold has a single output and simultaneously collects the exhaust gases from the different chambers, pushing them through a single pipe. A poppet valve controls the opening and closing of each cylinder's exhaust port into the manifold.
Once the exhaust gases are collected, they pass through pipes to the catalytic converter. Oxygen sensors check the oxygen concentration in the exhaust gas. Excess oxygen is a sign that the engine is running lean (not using enough fuel); too little oxygen is a sign that excess fuel is being used. The sensors send this data to the ECU, which adjusts the fuel delivery.
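The sensor-to-ECU feedback loop described here can be sketched as a toy proportional controller. This is purely illustrative; the voltages, setpoint and gain below are assumptions, not values from any real ECU:

```python
def fuel_trim_adjustment(o2_voltage: float, setpoint: float = 0.45,
                         gain: float = 10.0) -> float:
    """Toy proportional fuel-trim correction (%) from a narrowband
    O2 sensor voltage. Low voltage (~0.1 V) indicates a lean mixture,
    so fuel is added; high voltage (~0.9 V) indicates a rich mixture,
    so fuel is cut. All numbers here are illustrative assumptions."""
    return gain * (setpoint - o2_voltage)

# Lean exhaust: positive trim (add fuel)
lean_trim = fuel_trim_adjustment(0.1)
# Rich exhaust: negative trim (cut fuel)
rich_trim = fuel_trim_adjustment(0.9)
```

A real ECU uses a more elaborate strategy (short-term and long-term trims, sensor warm-up handling), but the direction of the correction follows this same logic.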
What is a Catalytic Converter?
The catalytic converter is an important part of the exhaust system and has a crucial role to play in the process. It works to reduce the emission of harmful carbon monoxide and nitrogen oxides, converting them into harmless gases.
- The converter works by breaking down the gases using two processes: reduction and oxidation.
- Reduction takes place when the converter breaks down nitrogen oxide molecules into nitrogen and oxygen, which are harmless.
- Oxidation takes place when carbon monoxide molecules are transformed into carbon dioxide, which is a harmless gas.
Thus, the catalytic converter plays a very important role in cleaning the exhaust gases before emission. These cleaned gases are further tested by another oxygen sensor to ensure that the emissions comprise only oxygen and carbon dioxide. Otherwise, this sensor will send a message to the ECU that the catalytic converter is not working properly.
The exhaust system also includes a resonator, which works to reduce noise while driving. The noise produced by the engine can be irritating without a system to reduce it. The resonator cancels noises such as the droning sound: it produces sound at the same frequency but in the opposite phase, which cancels it out.
So, these are the main components and parts of the exhaust system and how the system works overall. If your vehicle has an issue with the exhaust system, it is recommended to choose a specialized exhaust system repair service.
|
PHYSICALLY BASED RENDERING TEXTURES
PBR textures, or physically based rendering textures, use life-like lighting and shading models along with accurately measured surface values to depict real-world materials. PBR can also be defined as the combination of physically accurate shading, lighting, and properly measured art content.
Below, we discuss the fundamental principles behind how physically based rendering (PBR) computes shading and lighting.
Diffuse and reflected
Diffuse and reflected light are the terms that describe the interaction between light and the material.
The reflected light is the light that strikes the surface and bounces off. On a smooth surface, the light will be reflected in the same direction and create a mirror-like appearance.
Diffuse light is the light that penetrates the inside of the object. There it gets absorbed or scattered in the material and re-emerges.
Unlike reflected light, diffuse light is uniform in direction. The light that is not absorbed gives the material its color.
Diffuse color is also known as albedo or base color.
The total light hitting the material is equal to the sum of the reflected light and the diffuse light. If the material is highly reflective, it will show less diffuse color. In contrast, if the material has a rich diffuse color, it cannot be highly reflective.
Metals & Non-metals
It is essential to know the nature of the material, that is, whether it is a conductor (metal) or an insulator (non-metal), because this determines how the material behaves with light.
Metals are usually reflective whereas non-metals are much less so. Metals also tint their reflection with their own color and have little to no diffuse component, whereas the reflection from non-metals appears white (uncolored).
Due to these differences, a PBR workflow has a metalness property which makes things easier by defining whether the material is metal or non-metal.
Fresnel is the term for the way reflectivity varies with viewing angle: light that hits a surface near its edges (at a grazing angle) is reflected more strongly than light that falls at a 0-degree angle (head-on).
The detail of the microsurface is a very significant characteristic for any material, because it describes how smooth or rough a surface is. Some PBR systems use glossiness and some use roughness; they describe the same property, since glossiness is simply the inverse of roughness and vice versa.
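The angle-dependent reflectivity described above is commonly modelled with Schlick's approximation of the Fresnel equations. The sketch below is illustrative (the 0.04 base reflectance is a typical assumed value for dielectrics, not something taken from this article):

```python
def schlick_fresnel(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of the Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
               surface normal (1.0 = head-on, 0.0 = grazing incidence).
    f0:        reflectance at normal incidence (~0.04 is a commonly
               assumed value for non-metals; metals are far higher,
               and tinted by the metal's own color).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def diffuse_fraction(cos_theta: float, f0: float) -> float:
    """Energy conservation: light that is not reflected is what
    remains available for the diffuse (albedo) response."""
    return 1.0 - schlick_fresnel(cos_theta, f0)

# Head-on, a typical dielectric reflects only ~4% of the light,
# while at grazing angles the reflectance climbs towards 100%.
head_on = schlick_fresnel(1.0, 0.04)
grazing = schlick_fresnel(0.0, 0.04)
```

This single formula captures both principles from the text: the Fresnel edge brightening, and the trade-off between reflected and diffuse light.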
|
This guide presents a variety of artworks, from the 17th century to the present, that highlight the presence and experiences of Black communities across the Atlantic world. Use the collections in the virtual gallery below to engage your students in conversation about the many narratives of everyday life, enslavement, and resistance that have been told through art. Lesson plans are provided to extend these conversations and help students consider the many and continuing legacies of the transatlantic slave trade.
This Teacher’s Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the experiences, achievements, and perspectives of Asian Americans and Pacific Islanders across U.S. history.
Archival visits, whether in person or online, are great additions to any curriculum in the humanities. Primary sources can be the cornerstone of lessons or activities involving any aspect of history, ancient or modern. This Teachers Guide is designed to help educators plan, execute, and follow up on an encounter with sources housed in a variety of institutions, from libraries and museums to historical societies and state archives to make learning come to life and teach students the value of preservation and conservation in the humanities.
This Teacher's Guide compiles EDSITEment resources that support the NEH's "A More Perfect Union" initiative, which celebrates the 250th anniversary of the founding of the United States. Topics include literature, history, civics, art, and culture.
Our Teacher's Guide offers a collection of lessons and resources for K-12 social studies, literature, and arts classrooms that center around the achievements, perspectives, and experiences of African Americans across U.S. history.
This Teacher's Guide will introduce you to the cultures and explore the histories of some groups within the over 5 million people who identify as American Indian in the United States, with resources designed for integration across humanities curricula and classrooms throughout the school year.
Since 1988, the U.S. Government has set aside the period from September 15 to October 15 as National Hispanic Heritage Month to honor the many contributions Hispanic Americans have made and continue to make to the United States of America. Our Teacher's Guide brings together resources created during NEH Summer Seminars and Institutes, lesson plans for K-12 classrooms, and think pieces on events and experiences across Hispanic history and heritage.
|
Sonoma sunshine is a California endangered plant species, which means that killing or possessing this plant is prohibited by the California Endangered Species Act (CESA). Sonoma sunshine occurs naturally in Sonoma County, and is found in vernal pools and wet grasslands in the Sonoma Valley and the Santa Rosa Plain. Populations of vernal pool plants such as Sonoma sunshine are typically discontinuous and fragmented due to differences in climate, substrate, and topography, and are often restricted to very specific habitats and locations. These factors coupled with the urbanization and conversion of land for agriculture endangers many California vernal pool species with extinction. Sonoma sunshine is a small annual plant that blooms February through April and is sometimes associated with Burke’s goldfields (Lasthenia burkei) and Sebastopol meadowfoam (Limnanthes vinculans) which are also listed as endangered species under CESA. Sonoma sunshine is also listed as an endangered species under the federal Endangered Species Act, and at the time of this page’s posting, the California Natural Diversity Database reported 22 occurrences of this species that are presumed to still exist.
The biggest threat to Sonoma sunshine continues to be urban development and conversion of land to viticulture or other intensive land uses, and the resulting habitat fragmentation. Sonoma sunshine is also sensitive to land use changes that cause variations in hydrology and the duration of vernal pool inundation. Sonoma sunshine is threatened by increased runoff, frequent disking of land, breaking of the vernal pool hardpan, and activities that allow competing plant species to become established. Other threats include manipulation of normal gene flow resulting from restoration work, buildup of thatch in previously grazed areas, and the effects of climate change.
Although work has already begun to conserve this species, further action is necessary to aid the recovery of Sonoma sunshine. Remaining natural populations of Sonoma sunshine should be protected and new populations should be established that do not negatively affect the natural populations. Non-native plant species that compete with Sonoma sunshine should be managed and effective weed eradication measures that do not harm Sonoma sunshine should be researched. Populations of Sonoma sunshine should be monitored using standardized protocols and research into the habitat requirements, reproductive ecology, gene flow, seed bank dynamics, and the long-term viability of restoration sites should be conducted.
CDFW has participated in the following Sonoma sunshine studies and papers through the Cooperative Endangered Species Conservation Fund or other mechanisms:
CDFW may issue permits for Sonoma sunshine pursuant to CESA, and you can learn more about the California laws protecting Sonoma sunshine and other California native plants. Populations of Sonoma sunshine occur in CDFW’s Bay Delta Region. More information is also available from the United States Fish and Wildlife Service Species Profile for Sonoma Sunshine.
|
What is quality control? Definition and examples
Quality control is a system for maintaining standards in manufacturing. It involves testing a sample of the output; the quality controller or inspector tests the samples against their specification. We also call it QC.
QC is a specialized kind of system control designed to check that a product meets design specifications and quality.
ISO 9000 says that quality control is "A part of quality management focused on fulfilling quality requirements." ISO stands for the International Organization for Standardization. ISO 9000 is a family of quality management systems standards.
QC is one of four fields that make up quality management. The other three are quality assurance, quality improvement, and quality planning.
‘Quality’ refers to how good something is. It contrasts with ‘quantity.’
When somebody says ‘how much‘ or ‘how many,’ we think about quantity. If, on the other hand, they say ‘how good,’ we think about quality.
Quality control – before production
Before making something, its designer determines what quality checks are necessary and when to carry them out.
A top-quality product should:
- function correctly,
- be defect-free,
- be safe to use or consume,
- satisfy the requirements of the customer, and
- meet the specifications.
Quality control – inspection
Inspection is an important component of QC. The inspector examines the product visually. If it is a service, he or she will examine the service’s end results.
The product inspector will have descriptions and lists of defects that are not acceptable. In most cases, cracks and blemishes, for example, are unacceptable defects.
Quality control vs. quality assurance
These two terms refer to two aspects of quality management. Even though quality assurance (QA) and QC are closely related, they are different.
Put simply: quality assurance focuses on defect prevention; it monitors the process. Quality control, on the other hand, focuses on identifying defects; it monitors the product.
Let’s imagine that Tom is making a birthday cake. First, he checks that his procedure is correct and that he has all the right equipment and ingredients. This is quality assurance.
When he has made the cake, he tastes it to make sure it is good. He also looks at it to make sure it looks nice. That is quality control.
Diffen.com explains the difference between the goals of QA and QC as follows:
“The goal of QA is to improve development and test processes so that defects do not arise when the product is being developed.”
“The goal of QC is to identify defects after a product is developed and before it’s released.”
A corrective tool and a managerial tool
Quality control is a corrective tool. Quality assurance, on the other hand, is a managerial tool.
Where does QC reside in a company?
If you run your company properly, quality assurance resides independently of manufacturing and operations.
Quality control, on the other hand, resides within manufacturing and operations, Kimberlee A. Washburn writes in a MasterControl article.
|
The set of North American painted hides held in French collections is the largest and oldest of its kind anywhere in the world. Fifteen of them, in the musée du quai Branly - Jacques Chirac, date back to the eighteenth century. The hides come from the ethnographic collection of the Cabinet des Médailles, in the French National Library, and from the Versailles Public Library's former "Cabinet of curiosities and of decorative objects" [Cabinet de curiosités et d’objets d’art], which was assembled in 1806 and moved to Paris in 1934.
Yet after more than two centuries in public collections, they still hold many secrets for us: what of their origins, what was their function and what is their history? Attempts to identify and interpret these objects have produced inconclusive results, based on fragmentary, often secondary sources.
There are good reasons for this. First, absolutely no research has been done on painted hide production in the 18th century. Secondly, since the items in Paris are the oldest in the world, comparing them with other collections is of limited use. So, in order to shed new light on this exceptional corpus, we have set up a multi-disciplinary research team to work solely from primary sources (original inventory entries, contemporary written documents, etc.) and from material analyses of the painted hides under consideration. This long-term program aims at identifying the hides' origins, and their intrinsic characteristics.
The first research session ran from 22 June to 3 July 2020, and brought together a team of conservators, anthropologists, restorers, and materials and natural science specialists. They started by comparing all the hides, in order to circumscribe lines of inquiry and research, to define what conservation and restoration work was required, and to carry out a preliminary series of non-destructive analyses in order to identify the pigments and materials used in decorating the hides.
The painted hides are designated as "painted hides" or "coats" depending on their probable use (as carpets, or robes). All are status items, some of which were worn as garments, and others, showing no signs of wear, were probably not used before they changed hands; they were perhaps produced for trade.
The team is at present determining the provenance of the hides, and their stylistic features. Already we can say that most of them come from the Great Plains region, a vast cultural zone stretching from the present State of Texas and Louisiana in the United States, to the south of the Provinces of Manitoba, Saskatchewan and Alberta in Canada. Until the end of the nineteenth century, this territory was inhabited by a variety of nomadic, and semi-sedentary tribes, which had in common their use of the horse, with bison for their subsistence. Hides were the raw material for tipis (conical tents), clothing, and all sorts of equipment. Deer hides were also produced. Further north, in sub-Arctic regions, hides came from other species such as caribou and moose.
The hides generally keep the shape of the animal, including its feet and neck. Some of them, called "split-robe", are carefully made with a central seam running down the middle (the animal's backbone) of the single hide. Documented specimens of this type generally come from the Plains. Other hides have cultural characteristics that link them to the Central and Southern Plains regions.
Biometric data, that is, the specific measurements of the hide, tell us what animal species it comes from, as do other morphological features such as the presence of hair follicles and the appearance and color of the hair (in most cases it is possible to observe these by microscope). This data can also tell us about changes in texture and color as a result of the methods used in tanning and treating the skins. We were able to confirm that each painted hide came from a single skin, and that half were bison hides, the other half deer hides, with maybe one caribou.
We were able to identify visually the technologies used, like the tool marks left by the scrapers used for scraping the hide's surface in preparation for tanning. Signs of wear show how the skins had been used. Certain changes to the hide's structure (marks of nails, holes) point to old methods for hanging hides in the various conservation institutions they passed through.
Lastly, the composition of the materials used in making and decorating the hides could be identified by physico-chemical analyses using X-ray fluorescence spectroscopy and Fourier-transform infrared spectroscopy. We were thus able to distinguish the pigments that were sourced locally, from those provided by trade with the Europeans.
The painted hides of this exceptional corpus are unique. Not only are they the earliest documented specimens of their kind in the world, but they also have the most varied geographical origins. They constitute a major research area, and are a focal point of the CRoyAN project.
|
A χ2 test is used to measure the discrepancy between the observed and expected values of count data.
- The dependent data must – by definition – be count data.
- If there are independent variables, they must be categorical.
The test statistic derived from the two data sets is called χ2, and it is defined as the sum, over all cells, of the squared discrepancy between the observed and expected value of a count variable divided by the expected value: χ2 = Σ (O − E)2 / E.
The reference distribution for the χ2 test is Pearson’s χ2. This reference distribution has a single parameter: the number of degrees of freedom remaining in the data set.
A χ2 test compares the χ2 statistic from your empirical data with the Pearson’s χ2 value you’d expect under the null hypothesis given the degrees of freedom in the data set. The p value of the test is the probability of obtaining a test χ2 statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis (“there is no discrepancy between the observed and expected values”) is true. i.e. The p value is the probability of observing your data (or something more extreme), if the data do not truly differ from your expectation.
The comparison is only valid if the data are:
- Representative of the larger population, i.e. the counts are sampled in an unbiased way.
- Of sufficiently large sample size. In general, observed counts (and expected counts) less than 5 may make the test unreliable, and cause you to accept the null hypothesis when it is false (i.e. ‘false negative’). R will automatically apply Yates’ correction to values less than 5, but will warn you if it thinks you’re sailing too close to the wind.
Do not use a χ2 test unless these assumptions are met. Fisher's exact test, fisher.test(), may be more suitable if the data set is small.
In R, a χ2-test is performed using chisq.test(). This acts on a contingency table, so the first thing you need to do is construct one from your raw data. The file tit_distribution.csv contains counts of the total number of birds (the great tit, Parus major, and the blue tit, Cyanistes caeruleus) at different layers of a canopy over a period of one day.
tit.distribution<-read.csv( "H:/R/tit_distribution.csv" ) print( tit.distribution )
This will spit out all 706 observations: remember that the raw data you import into R should have a row for each ‘individual’, here each individual is a “This bird in that layer” observation. You can see just the start of the data using
head( tit.distribution )
Bird Layer 1 Bluetit Ground 2 Bluetit Ground 3 Bluetit Ground 4 Bluetit Ground 5 Bluetit Ground 6 Bluetit Ground
and look at a summary of the data frame object with
str( tit.distribution )
'data.frame': 706 obs. of 2 variables: $ Bird : Factor w/ 2 levels "Bluetit","Greattit": 1 1 1 1 1 1 1 1 1 1 ... $ Layer: Factor w/ 3 levels "Ground","Shrub",..: 1 1 1 1 1 1 1 1 1 1 ...
To create a contingency table, use
tit.table<-table( tit.distribution$Bird, tit.distribution$Layer ) tit.table
Ground Shrub Tree Bluetit 52 72 178 Greattit 93 247 64
If you already had a table of the count data, and didn’t fancy making the raw data CSV file from it, just to have to turn it back into a contingency table anyway, you could construct the table manually using
tit.table<-matrix( c( 52, 72, 178, 93, 247, 64 ), nrow=2, byrow=TRUE ) # nrow=2 means cut the vector into two rows # byrow=TRUE means fill the data in horizontally (row-wise) # rather than vertically (column-wise) tit.table
[,1] [,2] [,3] [1,] 52 72 178 [2,] 93 247 64
The matrix can be prettified with labels (if you wish) using dimnames(), which expects a list() of two vectors, the first of which are the row names, the second of which are the column names:
dimnames( tit.table )<-list( c("Bluetit","Greattit" ), c("Ground","Shrub","Tree" ) ) tit.table
Ground Shrub Tree Bluetit 52 72 178 Greattit 93 247 64
To see whether the observed values (above) differ from the expected values, you need to know what those expected values are. For a simple homogeneity χ2–test, the expected values are simply calculated from the corresponding column (C), row (R) and grand (N) totals:
          Ground                   Shrub                      Tree
Bluetit
  O       52                       72                         178
  E       302×145/706 = 62.0       302×319/706 = 136.5        302×242/706 = 103.5
  χ2      (52−62)2/62 = 1.6        (72−136.5)2/136.5 = 30.5   (178−103.5)2/103.5 = 53.6
Greattit
  O       93                       247                        64
  E       404×145/706 = 83.0       404×319/706 = 182.5        404×242/706 = 138.5
  χ2      (93−83)2/83 = 1.2        (247−182.5)2/182.5 = 22.7  (64−138.5)2/138.5 = 40.1
The individual χ2 values show the discrepancies for each of the six individual cells of the table. Their sum is the overall χ2 for the data, which is 149.7. R does all this leg-work for you, with the same result:
chisq.test( tit.table )
Pearson's Chi-squared test data: tit.table X-squared = 149.6866, df = 2, p-value < 2.2e-16
The individual tits’ distributions are significantly different from homogeneous, i.e. there are a lot more blue tits in the trees and great tits in the shrub layer than you would expect just from the overall distribution of birds.
Sometimes, the expected values are known, or can be calculated from a model. For example, if you have 164 observations of progeny from a dihybrid selfing genetic cross, where you expect a 9:3:3:1 ratio, you’d perform a χ2 manually like this:
       A- B-                       A- bb                      aa B-                      aa bb
  O    94                          33                         28                         9
  E    164×9/16 = 92.25            164×3/16 = 30.75           164×3/16 = 30.75           164×1/16 = 10.25
  χ2   (94−92.25)2/92.25 = 0.033   (33−30.75)2/30.75 = 0.165  (28−30.75)2/30.75 = 0.246  (9−10.25)2/10.25 = 0.152
For a total χ2 of 0.596. To do the equivalent in R, you should supply chisq.test() with a second, named parameter called p, which is a vector of expected probabilities:
dihybrid.table<-matrix( c( 94, 33, 28, 9 ), nrow=1, byrow=TRUE ) dimnames( dihybrid.table )<-list( c( "Counts" ), c( "A-B-","A-bb","aaB-","aabb" ) ) dihybrid.table
A-B- A-bb aaB- aabb Counts 94 33 28 9
null.probs<-c( 9/16, 3/16, 3/16, 1/16 ) chisq.test( dihybrid.table, p=null.probs )
Chi-squared test for given probabilities data: dihybrid.table X-squared = 0.5962, df = 3, p-value = 0.8973
The data are not significantly different from a 9:3:3:1 ratio, so the A and B loci appear to be unlinked and non-interacting, i.e. they are inherited in a Mendelian fashion.
The most natural way to plot count data is using a barplot:
barplot( dihybrid.table, xlab="Genotype", ylab="N", main="Dihybrid cross" )
Use the χ2 test to investigate the following data sets.
- Clover plants can produce cyanide in their leaves if they possess a particular gene. This is thought to deter herbivores. Clover seedlings of the CN+ (cyanide producing) and CN− (cyanide free) phenotypes were planted out and the amount of rabbit nibbling to leaves was measured after 48 hr. Leaves with >10% nibbling were scored as ‘nibbled’, those with less were scored as ‘un-nibbled’. Do the data support the idea that cyanide reduces herbivore damage?
- In a dihybrid selfing cross between maize plants heterozygous for the A/a (A is responsible for anthocyanin production) and Pr/pr (Pr is responsible for modification of anthocyanins from red to purple) loci, we expect an F2 ratio of 9 A− Pr−: 3 A− pr pr: 3 a a Pr− : 1 a a pr pr. The interaction between the loci results in the a a Pr− and a a pr pr individuals being indistinguishable in the colour of their kernels. The file maize_kernels.csv contains a tally of kernel colours. Do the data support the gene-interaction model?
- As the data is already in a table, it is easier to construct it directly as a matrix. The data do not support the hypothesis that the two phenotypes differ in their damage from rabbit nibbling:
clover.table <- matrix( c( 26, 74, 34, 93 ), nrow=2, byrow=TRUE )
dimnames( clover.table ) <- list( c( "CN.plus", "CN.minus" ), c( "Nibbled", "Un.nibbled" ) )
clover.table

         Nibbled Un.nibbled
CN.plus       26         74
CN.minus      34         93

chisq.test( clover.table )

	Pearson's Chi-squared test with Yates' continuity correction

data:  clover.table
X-squared = 0, df = 1, p-value = 1
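The statistic of exactly zero is a consequence of Yates' continuity correction: every |observed − expected| in this table happens to be below 0.5, so the corrected differences vanish. A Python sketch (illustrative only; it mirrors how R's `chisq.test` clamps the correction) makes this explicit:

```python
observed = [[26, 74], [34, 93]]
row_totals = [sum(r) for r in observed]                           # [100, 127]
col_totals = [observed[0][j] + observed[1][j] for j in range(2)]  # [60, 167]
n = sum(row_totals)                                               # 227

chi2 = 0.0
for i in range(2):
    for j in range(2):
        e = row_totals[i] * col_totals[j] / n  # expected count for this cell
        d = abs(observed[i][j] - e)
        d -= min(0.5, d)   # Yates correction, clamped as R applies it
        chi2 += d * d / e

# every |O - E| here is about 0.43, below 0.5, so chi2 is exactly 0
```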
- The data are in a file, so we construct a simple contingency table from that and then test against the expected frequencies of 9 purple : 3 red : 4 colourless. Make sure you get them in the right order! The data support the model, as the χ2 value has a p value greater than 0.05, i.e. we can accept that the data are consistent with a 9:3:4 ratio.
maize.kernels <- read.csv( "H:/R/maize_kernels.csv" )
head( maize.kernels )

      Kernel
1        Red
2 Colourless
3 Colourless
4 Colourless
5     Purple
6 Colourless

maize.table <- table( maize.kernels$Kernel )
maize.table

Colourless     Purple        Red
       229        485        160

chisq.test( maize.table, p=c( 4/16, 9/16, 3/16 ) )

	Chi-squared test for given probabilities

data:  maize.table
X-squared = 0.6855, df = 2, p-value = 0.7098
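The same tally-then-test workflow translates directly to other languages. As an illustrative sketch (not part of the tutorial), Python's `collections.Counter` plays the role of R's `table()`, and the X² value can be recomputed from the published tallies:

```python
from collections import Counter

# A short stand-in for the raw kernel column: these six values are the
# head() rows shown above; the full CSV is not reproduced here
sample = ["Red", "Colourless", "Colourless", "Colourless", "Purple", "Colourless"]
tally = Counter(sample)          # the Python analogue of R's table()

# Recompute X^2 from the full tallies against the 9 purple : 3 red : 4 colourless model
counts = {"Colourless": 229, "Purple": 485, "Red": 160}
probs = {"Colourless": 4/16, "Purple": 9/16, "Red": 3/16}
n = sum(counts.values())         # 874 kernels
chi2 = sum((counts[k] - probs[k] * n) ** 2 / (probs[k] * n) for k in counts)
# chi2 is about 0.6855, matching the R output (df = 2)
```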
Next up… One-way ANOVA.
|
Tips for training, encouraging and playing with your visually impaired child
There are many activities that you can do with a child who is visually impaired. You want to encourage your child to gain experience, confidence and independence at home as well as outside of the home. You want to encourage your child to use all of their senses and make sure that they are getting as much stimulation as possible.
Activities for hearing:
- Playing music and singing is a great way to start your child’s day. While listening to music you can clap your hands and add movement as your child develops. Reading books out loud and having your child listen to different sounds such as car alarms and animal sounds provides a variety of stimuli. Teach your child how to use an iPod or CD player so they can play their own audiobooks and music.
Activities for touching:
- Have your child touch different textures and help her or him to differentiate between soft and rough. This will help your child with motor skills and introduce using his or her hands to describe something.
- Encourage your child to feel a fire hydrant, touch different leaves and flowers, explore a stop sign and learn the parts of a computer (mouse/track ball and )
- Buy or borrow different animals and encourage your child to identify them by shape and touch knowing the different parts of different
- Ask your local fire department or police station to allow your child to climb onto a fire truck or into a police car, feel the fire hose and the coat or uniform that a fireman or policeman wears.
- Create a binder of on-the-road activities that she or he can use while away from home. Choose a variety of items to place in the binder such as cotton balls, dry pasta, sandpaper, rocks, and tissue. Attach each item to a sheet of paper and have your child use touch to describe them to you.
Activities for scents:
- Have your child smell a variety of scented items such as perfume, flowers, and
- Outside of the home, ask your child to identify where they are using scent. Begin by choosing places with very distinctive scents such as a bakery or pizzeria.
- See if your child can identify smoke (such as from a fireplace or burning leaves).
Activities for taste:
- Make mealtime a time for taste exploration. Ask your child if something is sweet or salty, hot or cold, hard or mushy. Ask them to describe the texture of the foods and use the texture and taste to guess what the food is.
- Have your child do a taste-test: offer him or her bites of different fruits (or vegetables) to see if they can identify each one. Children are more likely to choose a varied diet when they are familiar with a variety of foods.
- Include your child in daily tasks and routines. For example, while on a family stroll ask your child what they hear or smell while walking.
- Remember to include examples of each of the four senses: hearing, touching, smelling and tasting. Ask questions such as: Do you hear the fire truck or ambulance? Do you smell the food that is being cooked on the grill or the flowers? What does the grass or tree feel like?
- Describe everything to your child as it is happening. Explain your surroundings: the tree leaves are green, there are purple flowers growing in the dirt, or listen to the children running around the playground.
- Make sure to give your child with vision loss chores such as washing the dog, taking out the garbage or making their bed. They can learn to set the table, wash the table or put chairs back where they belong.
Make sure they put away their toys so they can find them, hang up their coat, put away their shoes and always know where their low vision glasses or prescribed mobility cane is kept (if they have one).
|
Minimum Order Quantity: 500 Square Meter
|Max Withstanding Temperature||120 C|
|Country of Origin||Made in India|
A radiant barrier is a type of building material that reflects thermal radiation and reduces heat transfer. Because thermal energy is also transferred by conduction and convection, in addition to radiation, radiant barriers are often supplemented with thermal insulation that slows heat transfer by conduction or convection.
A radiant barrier reflects heat radiation (radiant heat), preventing transfer from one side of the barrier to another due to a reflective, low emittance surface. In building applications, this surface is typically a very thin, mirror-like aluminum foil. The foil may be coated for resistance to the elements or for abrasion resistance. The radiant barrier may be one or two sided. One sided radiant barrier may be attached to insulating materials, such as polyisocyanurate, rigid foam, bubble insulation, or oriented strand board (OSB). Reflective tape can be adhered to strips of radiant barrier to make it a contiguous vapor barrier or, alternatively, radiant barrier can be perforated for vapor transmittance.
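The benefit of a low-emittance surface can be quantified with the standard formula for radiative exchange between two large parallel surfaces. The sketch below is illustrative only: the temperatures and emissivity values are assumptions chosen for the example, not manufacturer data for this product.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(t_hot, t_cold, e_hot, e_cold):
    """Net radiative exchange between two large parallel surfaces (W/m^2)."""
    return SIGMA * (t_hot**4 - t_cold**4) / (1/e_hot + 1/e_cold - 1)

# Assumed conditions: a 320 K roof deck facing a 300 K attic surface
q_plain = radiant_flux(320.0, 300.0, 0.9, 0.9)   # ordinary building materials
q_foil = radiant_flux(320.0, 300.0, 0.9, 0.05)   # one face is low-e aluminum foil

# With these assumed emissivities, the foil cuts radiative transfer
# by more than an order of magnitude
```

This is why a mirror-like foil face is effective even though it is very thin: the reduction comes from the low emittance, not from the material's thickness.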
|
NASA’s Cassini spacecraft captured these images of a propeller in Saturn’s A ring on Feb. 21, 2017. These are the sharpest images ever taken of a propeller and reveal an unprecedented level of detail. This propeller is nicknamed “Santos-Dumont” after the Brazilian-French aviator who is hailed as the father of aviation in Brazil.
The February 2017 imaging was Cassini’s first targeted observation of a propeller. The two views show the same object from opposite sides of the rings. The top image looks toward the rings’ sunlit side, while the bottom image shows the unilluminated side, where sunlight filters through the backlit ring.
“Propeller” is the term given to small disturbances in Saturn’s rings caused by the gravitational influence of embedded moonlets. Propellers are thematically nicknamed in honor of famous world aviators. The particularly large propeller Santos-Dumont is caused by an object a little over half a mile (1 km) across.
More than just being ring decorations, propellers are important to researchers because they mimic the behavior of objects in an orbiting debris field; they are sort of like miniature protoplanets inside a circumstellar disk. They were first spotted by Cassini in July 2004.
“Observing the motions of these disk-embedded objects provides a rare opportunity to gauge how the planets grew from, and interacted with, the disk of material surrounding the early sun,” said Cassini imaging team leader Carolyn Porco in 2010. “It allows us a glimpse into how the solar system ended up looking the way it does.”
Read the rest of this story here: Cassini Targets a Propeller in Saturn’s A Ring
|
You may have heard the term “holistic development” thrown around a lot lately. It’s become a buzzword in the education world, and for good reason! Holistic development is a way of approaching education that considers the whole child rather than just their academic achievements. Keep reading to learn more about holistic development and why it’s an essential part of 21st-century education.
What is holistic development?
Holistic development is an approach to education that focuses on the whole child. This means that educators look not just at a child’s academic progress but also at their social, emotional, and physical development. To provide a well-rounded education that meets all of these needs, educators need to work together with parents, guardians, and other caretakers.
Why is holistic development important?
In today’s world, it’s more important than ever for children to be able to thrive in all aspects of their lives. With the ever-increasing demands of the workforce and a constantly changing landscape, children must be adaptable and resilient. A holistic approach to education ensures that children receive the support they need in all areas of their lives, setting them up for success both now and in the future.
Why is it becoming the buzzword in the education world?
There are a few reasons why holistic development is becoming the buzzword in education. First, educators are beginning to see the importance of educating the whole person. They understand that cognitive skills are not enough; students need opportunities to grow emotionally and physically. Second, society is increasingly complex; therefore, students must be prepared to navigate this complexity with thoughtful reflection and critical thinking skills. Finally, we see a shift from standardized tests and rote learning towards a more student-centered approach to education. This shift has created more opportunities for educators to incorporate holistic development into the classrooms.
A holistic approach to education is an integral part of 21st-century learning. By taking into account the whole child—their academic progress, social and emotional development, and physical well-being—educators can better meet the needs of each student. This approach sets students up for success in all areas of their lives and prepares them for the challenges of the modern world.
|
The latest science news about how dogs see
It is known that dogs do not see far: their visual acuity, their ability to resolve detail at a distance, is about four times lower than a human’s. What a dog sees clearly only at 25 meters, we can make out at 100 meters with the same accuracy. On the other hand, dogs find it very difficult to see close objects. Their minimum focal distance is about 30 cm, which means it is hard for them to focus on an object located any closer. We could say they are markedly myopic.
Although science has advanced, some unknowns have not been fully resolved, and one of them is how dogs see. There is no complete certainty about how dogs visually perceive their external environment, and many studies have tried to come close to a definitive answer about this popular animal’s ability to distinguish different colors.
Humans have three types of cone photoreceptor cells: long-wave (red), medium-wave (green), and short-wave (blue), which are the recognizable primary colors. Meanwhile, dogs have only two, which correspond to short-wave and long-wave sensitivities (blue and yellow).
From this data we can conclude that dogs see colors in shades of blue and yellow, and that they are unable to distinguish well between colors such as red and green. In other words, they have dichromatic vision, with two types of color receptor cells that let them see color within two spectra of light: blue and yellow.
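Dichromatic vision can be illustrated with a toy simulation. The function below is a rough, hypothetical approximation (not a physiological model): it collapses the red and green channels of an RGB color into a single “yellow” signal, which is the practical consequence of having only the two cone types described above.

```python
def dog_view(r, g, b):
    """Toy dichromat approximation: merge red and green into one channel."""
    yellow = (r + g) // 2
    return (yellow, yellow, b)

# Pure red and pure green collapse to the same color, so a dichromat
# cannot tell them apart; blue is preserved unchanged.
red_seen = dog_view(255, 0, 0)
green_seen = dog_view(0, 255, 0)
blue_seen = dog_view(0, 0, 255)
```

Under this sketch, a red ball and a green ball on a lawn look essentially the same to the dog, while a blue toy stands out.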
Stirring up some controversy, some studies provide evidence suggesting that dogs can perceive these colors even without the corresponding cone photoreceptor cells. These studies note that during testing the dogs were able to distinguish between the three primary colors and gray.
It is not known if this is due to the differential density of gray or actual perceived color differences. Other recent studies claim that although they lack a specific UV visual pigment, dogs may have the ability to perceive ultraviolet light.
The truth is that more serious and comparable research is needed to understand how well dogs perceive color and how similar a dog’s perception is to that of humans.
What is proven is that a dog can distinguish between different shades of blue, yellow and gray, but not green, pink, red or purple. This explains why, for example, a TV channel for dogs called DogTV prioritizes these colors in its programming.
In addition to being able to identify certain colors, dogs are able to see in the dark. Among the variety of cells in a dog’s retina, the light-sensitive membrane of the eye (which works like the film roll of an old camera), are the rods, the receptors that allow vision in very low light.
At the back of the eyeball is a membrane that acts like a mirror, refocusing light rays and increasing the effectiveness of the receptors. This membrane, called the tapetum lucidum, is what makes their eyes shine when we photograph our dogs in the dark. Because they have more rods and a tapetum, dogs can discern and see up to five times more in the dark than we do.
* Prof. Dr. Juan Enrique Romero @drromerook is a veterinarian. Specialized in university education. Master’s degree in Psychological Immuno-Endocrinology. Former Director of the Hospital Escuela de Animales Pequeños (UNLPam). University professor at various Argentine universities. international speaker.
|
What Is The Purpose Of The Malleus Incus And Stapes
According to research, the ear bones in mammalian embryos are connected to the lower jaw. Meckel’s cartilage is the jaw-related cartilage involved: during embryonic growth the cartilage ossifies, hardening into bone, and later in development the structure detaches from the jaw and migrates to the middle ear. The middle ear (the tympanic cavity) is made up of the stapes, incus, malleus, and tympanic membrane.
The ossicles – malleus, incus, and stapes – stretch like a chain from the membrane that connects the vestibular (oval) window to the tympanic membrane. Endochondral ossification results in the formation of healthy bone in the ossicles. Between them, synovial joints develop.
Some head traumas can result in damage to the outer, middle, and inner ears. This is determined by whether or not the petrous bone is shattered. Audiometry is required in cases of bloody otorrhea because it can suggest both conductive hearing loss and labyrinthine concussion-related sensorineural loss.
A way to remember the hammer, anvil, and stirrup: envision the stirrup pushing on the oval window in the same manner that a rider’s stirrup presses against a horse. That gives both the image and the etymological link (hammer-anvil-stirrup).
The Malleus Incus And Stapes Are Called The Quizlet
The middle ear fills with fluid and the TM takes on a blue tinge, which is a clear sign of otitis media with effusion (OME). In individuals with bilateral OME, eosinophilic obstruction of the Eustachian tube should be explored. One of the most common nicknames for OME among youngsters is “glue ear.” Hemotympanum in diving can be induced by barotrauma or a concussion with a broken temporal bone.
Vibrations of the tympanic membrane are picked up by the manubrium of the malleus, passed along to the incus and stapes, and delivered to the inner ear through the oval window. Errors in this mechanism cause conduction hearing loss.
Operating on the middle ear necessitates a complete grasp of how the eardrums, ossicles, and inner ear are linked. The little anatomical space and anatomical defects hinder surgical therapies even more. Furthermore, the nerves and blood vessels around the ossicles require special attention. The chorda tympani nerve, which is intimately related to the malleus, is commonly damaged during surgery. The facial nerve begins in the rear wall of the middle ear and goes through the temporal bone. Several surgical techniques must take these considerations into account.
The stapedius muscle, which connects to the stapes, is responsible for noise reduction. When the facial nerve is injured, the stapedius muscle stops working. As a result, the stapes’ response to sound widens, resulting in hyperacusis.
The incus is the name given to the middle ossicle (anvil). It articulates with the malleus and the stapes through synovial joints, suspended between those two ossicles. The structure includes a lenticular process and a long limb/process.
In surgery, two facial nerve branches that travel through the middle ear are crucial. The tympanic chorda and a horizontal section of the facial nerve are two examples. If the horizontal branch of the brain is injured during ear surgery, facial paralysis might result. The tympanic chorda is a facial nerve branch that conveys taste impulses from the ipsilateral half of the tongue.
Stapes Footplate Develops From
Otosclerosis is a bone remodeling condition of the inner ear that can be congenital or develop spontaneously. By fixing the stapes in the oval window and reducing their capacity to conduct sound, it can cause conductive hearing loss. Clinical otosclerosis affects around 1% of the population, with variants that do not cause obvious hearing loss being the most common. Females and young individuals are at a higher risk of developing otosclerosis. Stapedectomy is the surgical removal of the stapes and replacement with a prosthesis, whereas stapedotomy is the creation of a tiny hole at the stapes base and insertion of a prosthesis. A persisting stapedial artery, fibrosis-induced damage to the bone base, or obliterative otosclerosis leading to base obliteration can all complicate the surgery.
Stapes Bone Meaning
Specific illnesses, such as incus necrosis and otosclerosis, can impact or disrupt the ossicles of the middle ear. Ossicle reconstruction is usually necessary as part of the therapeutic process.
Professor Giovanni Filippo Ingrassia is said to have discovered the stapes in 1546, although the claim is challenged because his anatomical commentary was published posthumously, in 1603. Because stirrups did not exist in the early Latin-speaking world, the bone’s name derives from its resemblance to one (Latin: stapes), an example of a late Latin term that may have come from “stand” (Latin: stapia) in the Middle Ages.
Ear components include the meatus, incus, malleus, and stapes.
When airborne sound waves strike a liquid, most of their energy is reflected rather than transmitted, because a pressure wave propagates through a liquid very differently than through air. The fluids and membranes of the middle ear provide impedance matching between airborne sound and the acoustic waves of the inner ear.
Anatomy Of The Ear
The manubrium (also known as the malleus) is a medial-surface-implanted downward extension of the tympanic membrane. As it falls, the pedicle becomes smaller. Ligaments link the malleus to the tympanic membrane’s pars tensae at the peduncle’s end. The tympanic membrane is pulled medially from the center by this connection, resulting in an indentation known as the tympanic node. The transverse process elongates in a cone-like form toward the peduncle’s base. The anterior and posterior malleolar folds connect to the tympanic membrane at the top. The anterior process is significantly longer than the lateral one. Above the transverse process and below the neck, a spindle-shaped protrusion extends forward from the transverse process and connects to the front wall of the middle ear. The anterior process is another name for the Folian or Rau’s process.
In the early 1500s, Alessandro Achillini discovered the anvil (incus). Incus is Latin for “anvil,” the shape from which the bone takes its name. A lateral and a medial ligament connect it to the malleus and stapes.
The eardrum, also known as the tympanic membrane, is located at the base of the bone external auditory canal and serves as the boundary between the outer and middle ear. A fibrocartilage ring connects it to the tympanic membrane portion of the temporal bone.
Middle Ear Ossicles
This tube, also known as the Eustachian tube, connects the middle ear to the nose and mouth (the nasopharynx). It keeps the air pressure in the middle ear the same as that in the throat and outside the body: the pressure on each side of the eardrum is balanced by this connection between the middle ear and the nasopharynx.
The tympanic membrane, often known as the eardrum, links the malleus to the outer ear, while the incus connects to the malleus on the opposite side. The incus is also joined to the stapes, which connects to the inner ear’s oval window and transmits sound waves onward.
Stapes Surgery Success Rate
Two muscles are linked to the middle ear’s bone ossicles. The tensor tympani’s role is to reduce the vibrations of the tympanic membrane by being linked to the malleus. When the tensor tympani tightens, the malleus moves medially, causing the eardrum to constrict. As a consequence, the ear is protected from potentially harmful noises.
Human papillomavirus has been identified to infect the middle ear mucosa in recent research. DNA from two oncogenic HPVs, HPV16 and HPV18, has been found in standard middle ear samples, indicating that the usual middle ear mucosa might be a site of HPV infection.
Giovanni Ingrassia discovered the stapes in 1546 at the University of Naples in Italy. The bone’s Latin name means stirrup, possibly derived from stapia (“to stand”). The stapes is the smallest and lightest bone in the human body, measuring about 3 mm × 2.5 mm. It is joined laterally to the incus and, via the ligamentum annulare, medially to the oval window of the inner ear.
The oval window in the middle ear absorbs vibrations from the tympanic membrane via the ossicles. Movement of the oval window produces a wave in the inner-ear fluid, which stimulates receptor cells and converts mechanical vibrations into electrical impulses. Because the oval window (to which the stapes is attached) is smaller than the eardrum, vibrational forces at the stapes’ base are roughly ten times stronger than at the eardrum. As vibrations move through the ossicles they gain force but lose amplitude: large, low-force vibrations become small, high-force vibrations.
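The amplification described here follows from the area ratio between the tympanic membrane and the stapes footplate: roughly the same force acting over a smaller area produces a higher pressure. The areas below are assumed, textbook-style values used only for illustration; the exact gain depends on the figures chosen (the text above quotes roughly tenfold).

```python
# Illustrative (assumed) areas in mm^2, not taken from this article
area_tympanic_membrane = 55.0
area_oval_window = 3.2

# Same force concentrated on a smaller area -> higher pressure
# at the oval window than at the eardrum
pressure_gain = area_tympanic_membrane / area_oval_window
```

With these assumed areas the ratio works out to roughly 17, the same order of magnitude as the tenfold figure quoted above.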
Stapes Function Hearing
The incudostapedial joint joins the stapes head (capitulum) to the incus long limb through the lenticular process. The fore and hind limbs are linked to the oval-shaped base by the head. The stapes (footplate) lie in the oval window of the tympanic cavity’s labyrinthine (medial) wall.
|
Interpretation – topical affirmatives increase exploration or development within the Hollow Earth
Earth’s mesosphere is the lower mantle, beneath the crust
Egger, Undergraduate Program Coordinator in the School of Earth Sciences at Stanford University 3 (Anne E., “Earth Structure: A Virtual Journey to the Center of the Earth”, Visionlearning Vol. EAS (1), http://www.visionlearning.com/library/module_viewer.php?mid=69)sbl
The compositional divisions of the earth were understood decades before the development of the theory of plate tectonics - the idea that the earth’s surface consists of large plates that move (see our Plate Tectonics I module). By the 1970s, however, geologists began to realize that the plates had to be thicker than just the crust, or they would break apart as they moved. In fact, plates consist of the crust acting together with the uppermost part of the mantle; this rigid layer is called the lithosphere and it ranges in thickness from about 10 to 200 km. Rigid lithospheric plates "float" on a partially molten layer called the aesthenosphere that flows like a very viscous fluid, like Silly Putty®. It is important to note that although the aesthenosphere can flow, it is not a liquid, and thus both S- and P-waves can travel through it. At a depth of 660 km, pressure becomes so great that the mantle can no longer flow, and this solid part of the mantle is called the mesosphere. The lithospheric mantle, aesthenosphere, and mesosphere all share the same composition (that of peridotite), but their mechanical properties are significantly different. Geologists often refer to the aesthenosphere as the jelly in between two pieces of bread: the lithosphere and mesosphere.
( “The Earth’s Crust,” http://www.kidsgeo.com/geology-for-kids/0022-earths-mantle.php , 6-27-11 , GJV)
Traveling beyond the Earth’s crust, we next encounter the mantle. The mantle extends to a depth of approximately 1,800 miles, and is made of a thick solid rocky substance that represents about 85% of the total weight and mass of the Earth. The first 50 miles of the mantle are believed to consist of very hard rigid rock. The next 150 miles or so is believed to be super-heated solid rock, that due to the heat energy is very weak. Below that for the next several hundred miles, the Earth mantle is believed to once again be made up of very solid and sturdy rock materials.
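The depth boundaries quoted in these cards can be summarized as a simple lookup. The sketch below is illustrative only: it uses the figures from the first card (lithosphere thickness varies from 10 to 200 km, so a representative 100 km is assumed here; 660 km marks the top of the mesosphere), keeping the card's own spelling of "aesthenosphere".

```python
def layer_at(depth_km):
    """Mechanical layer at a given depth, per the boundaries quoted above."""
    if depth_km < 100:       # representative lithosphere thickness (10-200 km)
        return "lithosphere"
    if depth_km < 660:       # flows like a very viscous fluid, but is not liquid
        return "aesthenosphere"
    return "mesosphere"      # pressure too great for the mantle to flow
```

On this reading, "development within the Earth's mesosphere" means activity below the 660 km boundary.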
NASA was created to make interstellar travel believable. The Apollo Space Program foisted the idea that man could travel to, and walk upon, the moon. Every Apollo mission was carefully rehearsed and then filmed in large sound stages at the Atomic Energy Commissions Top Secret test site in the Nevada Desert and in a secured and guarded sound stage at the Walt Disney Studios within which was a huge scale mock-up of the moon. All of the names, missions, landing sites, and events in the Apollo Space Program echoed the occult metaphors, rituals, and symbology of the Illuminati's secret religion: The most transparent was the faked explosion on the spacecraft Apollo 13, named "Aquarius" (new age) at 1:13 (1313 military time) on April 13, 1970 which was the metaphor for the initiation ceremony involving the death (explosion), placement in the coffin (period of uncertainty of their survival), communion with the spiritual world and the imparting of esoteric knowledge to the candidate (orbit and observation of the moon without physical contact), rebirth of the initiate (solution of problem and repairs), and the raising up (of the Phoenix, the new age of Aquarius) by the grip of the lions paw (reentry and recovery of Apollo 13). 13 is the number of death and rebirth, death and reincarnation, sacrifice, the Phoenix, the Christ (perfected soul imprisoned in matter), and the transition from the old to the new. Another revelation to those who understand the symbolic language of the Illuminati is the hidden meaning of the names of the Space Shuttles, "A Colombian Enterprise to Endeavor for the Discovery of Atlantis... and all Challengers shall be destroyed." Exploration of the moon stopped because it was impossible to continue the hoax without being ultimately discovered: And of course they ran out of pre-filmed episodes. No man has ever ascended higher than 300 miles, if that high, above the Earth's surface. 
No man has ever orbited, landed on, or walked upon the moon in any publicly known space program. If man has ever truly been to the moon it has been done in secret and with a far different technology. The tremendous radiation encountered in the Van Allen Belt, solar radiation, cosmic radiation, temperature control, and many other problems connected with space travel prevent living organisms leaving our atmosphere with our known level of technology. Any intelligent high school student with a basic physics book can prove NASA faked the Apollo moon landings.
|
Twilight is the name given to the interval before sunrise or after sunset during which the sky is still somewhat illuminated. Twilight occurs because sunlight illuminates the upper layers of the atmosphere; the light is scattered in all directions by the molecules of the air, reaches the observer, and still lights the surroundings.
The map shows which parts of the world are in daylight and which are in night. If you want to know the exact time of dawn or dusk at a specific place, that information is available in the meteorological data.
Why do we use UTC?
Coordinated Universal Time (UTC) is the main time standard by which the world regulates clocks and time. It is one of several closely related successors to Greenwich Mean Time (GMT). For most common purposes UTC is synonymous with GMT, but GMT is no longer the precisely defined standard used by the scientific community.
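Because UTC is the common reference, converting it to a local clock is just a matter of applying an offset. A small Python sketch (the UTC−3 offset is an arbitrary example, not tied to this site):

```python
from datetime import datetime, timezone, timedelta

# A moment expressed in UTC
utc_time = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)

# Convert to a fixed-offset local zone, here UTC-3
local_time = utc_time.astimezone(timezone(timedelta(hours=-3)))
# Same instant, different wall-clock reading: 12:00 UTC is 09:00 at UTC-3
```

This is why sunrise and sunset times published against UTC can be translated to any locality without ambiguity.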
|
What is the fragment sentence? How do you write a fragment sentence? What is a sentence fragment with examples? How do you tell if it is a sentence fragment? What does a sentence fragment not contain? How do you create a fragment?
How Do You Write A Fragment Sentence?
A sentence fragment is defined as a group of words that looks like a sentence but isn’t: it is an incomplete sentence lacking an element such as a verb or subject.
Correcting sentence fragments can make your writing clearer. Beginning with a capital letter and ending with a period does not by itself make a group of words a complete, grammatically correct sentence. To be complete and accurate, every sentence must have a subject and a verb.
A fragment is a group of words that looks like a complete sentence but, in reality, isn’t a complete sentence at all. Fragments are often pieces separated from the main clause of a neighboring sentence, so you can often correct them simply by removing the period between the main clause and the fragment.
In short, a sentence contains three things: a subject, a verb, and a complete thought; together these make a perfect sentence. A sentence fragment is a group of words that lacks one or more of these three things.
There are a few ways to end up with a fragment: the group of words is missing a subject, is missing a verb, or fails to complete the thought it starts. To avoid this, look for these patterns in your writing and fix them.
A useful check is to read each sentence aloud: hearing it will often reveal a missing subject or a dependence on the preceding sentence.
Sentence fragment Examples
Here are some examples of a sentence fragment:
Some sentence fragments lack a subject, and here are some examples of sentence fragments along with a correction adding subject:
1. Shows no interest in his game.
Correction: Hari shows no interest in this game
2. Barks and run into the house
Correction: Dog barks and runs into the house
3. Discovered the cure for Corona Virus
Correction: The doctors and researchers discovered the cure for Corona Virus.
4. gave us homework, but he is absent
Correction: Our Teacher gave us homework, but he is absent.
Also, here are sentence fragment examples along with a correction adding verb.
1. A time of happiness and blessings.
Correction: There was a time of happiness and blessings.
2. Cars and bikes all over the road.
Correction: Cars and bikes were all over the road.
3. The elected mayor for our city.
Correction: The elected mayor for our city was Joseph Rodriguez
4. Showing his clothes and shoes.
Correction: Denzel was showing his clothes and shoes.
You should read the sentence aloud to know whether it stands on its own; an incomplete sentence often sounds wrong when spoken. Fragments must be turned into complete thoughts or connected to complete sentences.
Likewise, sentence fragments begin with a capital letter and end with a period, just as full sentences do, but they lack an independent clause. Read them more closely and you will find that they don’t form a complete thought: a fragment gives a definition or a picture in shorthand, without a verb and subject.
In other words, a fragment does not make complete sense on its own; it is an incomplete statement that only seems complete because we fill in the rest while speaking, thinking, or feeling through the sentence.
Moreover, a fragment is like a half-finished puzzle: the reader is left to supply the missing pieces. Just as you cannot complete a picture puzzle without all of the pieces, you cannot complete a sentence fragment without its missing parts.
A sentence cannot be complete if it is missing its essential elements: a subject and a verb. You can find sentence fragments before or after independent clauses.
For example: "When we got inside the house, the rain started." Here, "when we got inside the house" would be a fragment on its own, but attached to the independent clause it forms a complete sentence.
In short, a group of words that looks like a sentence but does not actually form one is called a sentence fragment.
A group of words that contains both a subject and a verb, and that expresses a complete thought, is an independent clause. Subordinators such as 'after,' 'when,' 'where,' 'if,' and 'since' introduce subordinate (dependent) clauses, which often appear at the beginning of a sentence and are followed by a comma. Certain other words also make it easy to slip into a fragment: 'also,' 'for example,' 'and,' 'but,' 'for instance,' and 'or.'
Generally, there is a big difference between speaking and writing. We write in fragments because we picture sentences and words the way we would speak them, and speech tolerates incompleteness that writing does not. Likewise, a phrase needs a verb acting as the verb of the sentence in order to become a clause.
For example: “I went home yesterday” is an independent clause, whereas “Because I went home yesterday” is a dependent clause. Keep in mind that command sentences (imperatives) are not fragments even though they lack a stated subject. Such commands omit the subject, but the subject “you” is implied, which makes them grammatically complete sentences. Likewise, if a phrase has no verb, adding words such as “they were” at the beginning can turn the fragment into an independent clause.
Normally, grammatically incomplete sentences are called sentence fragments; they are usually phrases or dependent clauses. To make a fragment work in your writing, connect it to an independent clause. Fixing a fragment may require changing the punctuation and adding or removing words.
|
The speed of movement v is equal to the ratio of the distance s to the time of movement t. A) How do you find the distance traveled by the body, knowing its speed and time of movement? B) How do you find the time of movement, knowing the speed and the distance traveled by the body?
1. You need to be able to rearrange any formula relating physical quantities in order to express the required value.
2. Let us write down the formula for the speed of uniform movement:
V = S / t;
V is the speed of movement, in km/h or m/s;
S is the distance traveled, in km or m;
t is the travel time, in hours or seconds.
Let’s define the path from the formula:
S = V * t;
substitute numerical values and calculate.
Determine the travel time:
t = S / V.
Keep track of the units. If the speed is in km/h, then the distance is in km and the time is in hours.
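The two rearrangements can be sketched in a few lines of Python (the function names are my own, purely illustrative):

```python
def distance(v_km_h: float, t_h: float) -> float:
    """S = V * t: distance in km, given speed in km/h and time in hours."""
    return v_km_h * t_h

def travel_time(s_km: float, v_km_h: float) -> float:
    """t = S / V: time in hours, given distance in km and speed in km/h."""
    return s_km / v_km_h

# A body moving at 60 km/h for 2 hours covers 120 km;
# covering 90 km at 45 km/h takes 2 hours.
print(distance(60, 2))      # 120
print(travel_time(90, 45))  # 2.0
```

Both helpers assume consistent units, exactly as the note above warns.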
|
Here is a worksheet in which students have to put the regular and irregular verbs in the past simple, positive or negative, and match them to the pictures to talk about winter holidays. Then you can have them talk or write about their own holidays.
Other pedagogical goals
The above lesson is a great teaching resource for: Elementary (A1), Pre-intermediate (A2)
Special needs students
Solutions not included
|
All cultures, including Deaf culture, have four components: language, behavioral norms, values, and traditions. For Deaf culture, vision plays a significant role in each of the four components. People who are Deaf rely strongly on their vision to communicate and gather information.
American Sign Language (ASL) is:
- The preferred language of the Deaf community
- A visual gestural language
- A language with its own syntax and grammatical structure
American Sign Language is not:
- Signs in English word order
- An auditory or written language
- A universal language
Historically, ASL has been passed from one generation to the next in schools. Even when ASL was not allowed in the classroom, deaf staff and peers discreetly used their cherished language to communicate. ASL has also been preserved through church and other social gatherings.
Making eye contact
- Essential for effective communication
- Important because people who are Deaf read the nuances of facial expressions and body language for additional information
Meeting others within the Deaf community:
- Hand waving is most common
- Tapping the shoulder or arm is acceptable
- Flickering lights on and off is also common
- Tapping on a table or stomping foot on a floor is done occasionally
- A third person is sometimes used to relay attention in a crowded room
- Greetings often include hugs instead of handshakes
- Conversations tend to include elaboration about lives and daily occurrences
- Conversations tend to be open and direct
- There is an interest in other people's connection with the Deaf community
The following are highly valued and vital aspects of everyday living by the Deaf community. Notice the value comparisons between people who are deaf and people who can hear.
People who are Deaf value
- Eyes (rely on vision)
- Videophone (VP); Relay Service; TTY
- Visual/vibrating alerting systems
- Video mail
- Deaf clubs, deaf civic and social organizations
People who can hear value
- Spoken language
- Ears (rely on sound)
- Sound alerting system
- Voice mail
- Civic and social organizations
Traditions of the Deaf community reflect their cultural values. Many of their traditions are based on face-to-face gatherings of people who are Deaf, because communication, the lifeblood of any culture, only happens visually in this community.
Traditions materialize in the strong family-like ties and lifelong camaraderie that develops between individuals. Some examples include their strong devotion to community Deaf club/events, Deaf alumni events, senior citizen gatherings, religious activities, conferences and sporting events at the local, regional and national level. These provide a social gathering opportunity, a way to participate in political and economic decisions affecting Deaf citizens and a means of grooming new leaders to carry on Deaf community traditions. Events are frequently filled with entertainment such as Deaf folklore, arts, history, ASL poetry, songs and joke-telling.
|
Wood for Ottoman Egypt
These efforts were handicapped, however, by the lack of an important strategic resource in the Ottomans' Indian Ocean territories: wood. In order to construct fleets, the empire had to requisition lumber from Anatolia and the Black Sea littoral, ship it by sea to Egypt, then send it by caravan to Red Sea shipyards. The expense of such projects was no small part of the reason why Ottoman efforts in the Indian Ocean waxed and waned with the turning of political factions, and it perhaps explains the failure of the entire venture.
In a more recent book, Nature and Empire in Ottoman Egypt, Alan Mikhail dedicates a chapter to the continuing significance of Egypt's reliance on outside sources of wood. Egypt was the Ottoman Empire's granary, but the technology to extend cultivated areas and keep crops irrigated relied on wood, which still came primarily from Anatolia. In an interesting passage, he writes:
Wood was everywhere in rural Egypt, and without it, peasants could not have functioned as they had for centuries. Wood came to be so "natural" a part of the countryside's environment that it would be hard to imagine Egyptian villages without dams, canals, waterwheels, embankments, and other wooden structures and equally as hard to imagine Egypt without ships to move grain across the Empire...Indeed, without those objects, Egyptian peasants could not irrigate otherwise uncultivable land, and they could not protect themselves against the ravages of the flood. In short, given their millennia of interactions with wood and their dependence on it, peasants could not live without the material and what they made of it...
With the story of wood in Ottoman Egypt, we see that it was the demand for and use of lumber by both Egyptian peasants and the imperial bureaucracy of the province that led to the removal of large portions of forest in Anatolia. Put differently, Egyptian peasants - who had never seen Anatolia and likely never heard of the place - affected its history in massively important ways. As forests were cut, ecosystems were altered or destroyed, soil fertility was depleted, and animal habitats changed.

Egypt's need for wood played a critical role in its 19th-century expansion. Mehmet Ali (or Muhammad Ali) wanted a self-sufficient industrialized polity to call his own. Early on, he rashly depleted the country's meager existing wood supplies, which he later tried to rebuild through a mandatory tree-planting campaign. As time passed, however, his answer to Egypt's wood security problem was expansion. After he conquered the Sudan, he explored ways to ship wood up the Nile from the south. Wood also figured prominently in Egyptian and Ottoman internal memos related to the negotiations that ended his 1830s campaign into Syria and Anatolia, and one of Mehmet Ali's core objectives was access to the wood-rich province of Adana.
Today, trees are mostly about love of nature and preservation of the ecosystem. Through much of human history, however, they have been a resource worth controlling and fighting over.
|
Why Is Iron Important to Prevent Anemia?
Iron is a nutritional mineral that helps the body produce red blood cells, the cells that carry oxygen through the bloodstream, which is key to feeling strong and healthy. When the body doesn't get enough iron, it cannot produce enough hemoglobin, the oxygen-carrying protein in red blood cells. This can lead to a variety of health problems, including low energy, fatigue, dizziness, poor circulation, and more. This condition is called iron deficiency and can eventually lead to iron deficiency anemia (IDA).
IDA is the most common type of anemia, and it affects millions of people in the U.S. today – children, men, and especially women. Women of all ages can suffer from IDA: athletic young women, premenopausal and menopausal women, and women who are pregnant or considering becoming pregnant.
What Causes Iron Deficiency or Loss?
IDA has a range of causes including poor diet and nutrition, gastrointestinal disorders, heavy bleeding from menstruation, and pregnancy. According to the National Institutes of Health (NIH), at least 10% of women of menstruating age (12-49 years) have IDA due to monthly bleeding. The NIH recommends iron supplements to prevent anemia in pregnant women, so as to prevent health problems in both the mother and baby.
Women who are premenopausal or menopausal can require extra iron, as years of menstruation deplete the body's iron supply. However, iron deficiency does not immediately result in IDA. IDA develops over time: as iron levels in the blood decrease, ferritin, the blood protein that stores iron, is reduced. This limits the formation of hemoglobin and healthy red blood cells, resulting in IDA. Anemia causes fatigue, a lack of focus, irritability, and other symptoms because the hemoglobin is less able to carry oxygen to the cells of the body.
Dietary Iron and Iron Supplements for Anemia
Though iron does occur naturally in foods such as beef, turkey, fish (especially shellfish), beans, legumes, and green, leafy vegetables like spinach and kale, often diet is not enough to get all the iron that the body needs. That’s why doctors and health care professionals will recommend iron supplements for anemia. Iron supplements come in different forms including pills or tablets; they also come in several different strengths depending on an individual’s needs.
Multivitamins or multimineral tablets containing iron, especially those designed for women, typically provide 18mg of iron which meets The U.S. Food and Drug Administration (FDA) recommended Daily Value (DV). While a large portion of the US obtains adequate amounts of iron from their diets, many people, including infants, young children, teenaged girls, pregnant women, and premenopausal women, are at risk of obtaining insufficient amounts from diet alone. For example, the average daily iron intake from foods for premenopausal women between 19 and 50 years old is between 12.5-13.5 mg/day. Women who are prescribed an iron supplement, typically increase their intake amount to 17-19 mg/day – much closer to the target Daily Value amount of 18 mg/day recommended by The Dietary Guidelines for Americans. For children (2-11 years old) and teens (12-19 years old), iron from food alone is usually between 11.5-15 mg/day, while those taking iron supplements increase their intake to around 13.5-16+ mg/day.
The Recommended Dietary Allowance (RDA) of iron for pregnant women is 27 mg/day. It is difficult for pregnant women to get that much iron from diet alone and so they are at significant risk for IDA. That’s why pregnant women are frequently prescribed iron supplements by their obstetricians. Iron supplements may also be recommended for lactating women and women who have heavy periods.
Of course, how much and how often an iron supplement is needed to treat anemia should be evaluated by a doctor or other health care professional. And always consult a doctor before taking any medicines—over the counter or otherwise—when pregnant or when planning a pregnancy.
|
Digital signatures are becoming increasingly important in our digital world. They provide a secure way to verify the authenticity and integrity of digital documents, ensuring that they have not been tampered with or forged. In this beginner’s guide, we will demystify digital signatures, explaining what they are, how they work, and why they are crucial in today’s digital age.
What is a Digital Signature?
A digital signature is a mathematical technique used to verify the authenticity and integrity of digital documents or messages. It is the digital equivalent of a handwritten signature or a stamped seal on a paper document. A digital signature provides assurance that the message or document has not been altered in transit and that it was indeed sent by the claimed sender.
How do Digital Signatures Work?
Digital signatures rely on public-key cryptography to ensure the integrity and authenticity of digital documents. The process can be broken down into three distinct steps: key generation, signing, and verification.
The first step in generating a digital signature is the creation of a pair of cryptographic keys: a private key and a public key. The private key is kept secret and is used by the signer to create the digital signature, while the public key is made available to anyone who wants to verify the signature.
To create a digital signature, the signer uses their private key to perform a mathematical operation on the document or message. This operation, known as a hash function, converts the original content into a fixed-length string of characters. The resulting hash value, unique to the document, is then encrypted using the signer’s private key, creating the digital signature.
When someone receives a digitally signed document, they can use the signer’s public key to decrypt the digital signature and obtain the hash value. They then perform the same hash function on the received document, generating a new hash value. If the two hash values match, it means that the document has not been altered since it was signed and that the signature is valid.
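To make the three steps concrete, here is a deliberately tiny, textbook-style RSA sketch in Python. It is an illustration only: the key values are classic toy numbers far too small to be secure, reducing the hash modulo n is a simplification, and real systems use vetted cryptographic libraries rather than hand-rolled math.

```python
import hashlib

# Toy RSA key pair (textbook values; never use keys this small in practice).
n, e, d = 3233, 17, 2753  # public modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    """Hash the message, then apply the private key to the (reduced) hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Undo the signature with the public key and compare to a fresh hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

doc = b"Pay Alice 100 dollars"
sig = sign(doc)
print(verify(doc, sig))            # True: document and signature match
print(verify(doc, (sig + 1) % n))  # False: an altered signature is rejected
```

Because only the matching private key could have produced a value that the public key maps back to the document's hash, a successful check gives both the integrity and the authentication properties described above.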
Why are Digital Signatures Important?
Digital signatures play a crucial role in ensuring the authenticity and integrity of digital documents. They provide several benefits, including:
1. Authentication: Digital signatures allow the recipient to verify the identity of the signer. By using the signer’s public key, the recipient can ensure that the document was sent by the claimed sender and has not been tampered with.
2. Integrity: Digital signatures guarantee that the content of the document has not been altered since it was signed. If any modifications are made to the document, the resulting hash value will be different, rendering the signature invalid.
3. Non-repudiation: Digital signatures provide evidence that the signer cannot deny their involvement in the signing process. Since the private key is unique to the signer, it ensures that the signature was created by them and cannot be repudiated later.
4. Efficiency: Digital signatures eliminate the need for paper-based signatures and physical document storage. They streamline workflows, reduce costs, and improve efficiency in document management processes.
Where are Digital Signatures Used?
Digital signatures find applications in a wide range of industries and sectors. Some common use cases include:
1. Government and Legal Documents: Governments use digital signatures to sign electronic documents, such as contracts, permits, and tax returns. Legal professionals also rely on digital signatures to authenticate legal documents, ensuring their validity.
2. Financial Transactions: Digital signatures play a vital role in ensuring the security and integrity of financial transactions, including online banking, e-commerce transactions, and online payment gateways.
3. Healthcare: In the healthcare industry, digital signatures help safeguard patient data and ensure the integrity of medical records and prescriptions.
4. Software Distribution: Digital signatures are often used to verify the authenticity and integrity of software during distribution. Users can verify that the software was not tampered with and that it is from a trusted source.
5. Supply Chain Management: Digital signatures are utilized to validate the authenticity of documents involved in supply chain processes, such as purchase orders, invoices, and shipping documents.
Common Challenges and Limitations
While digital signatures offer numerous benefits, there are some challenges and limitations to be aware of:
1. Key Management: Ensuring the security and integrity of private keys is vital. If a private key is compromised, the digital signature loses its validity. Effective key management practices must be followed to mitigate this risk.
2. Dependency on Technology: Digital signatures rely heavily on technology, and any weaknesses or vulnerabilities in the underlying cryptographic algorithms or software implementations can undermine the security and trustworthiness of the signature.
3. Legal Recognition: Although digital signatures have gained widespread acceptance, legal recognition may vary across different countries and jurisdictions. It is essential to ensure that the applicable laws are followed when using digital signatures for legal documents.
4. User Awareness: Many individuals are still unaware of the benefits and proper usage of digital signatures. Increasing user awareness and educating users about digital signatures’ importance is crucial for their widespread adoption.
Digital signatures are an essential tool in today’s digital age. They offer a secure and reliable way to verify the authenticity and integrity of digital documents, ensuring their trustworthiness in various domains. Understanding how digital signatures work and their significance is crucial for individuals and organizations looking to enhance the security and efficiency of their digital transactions and document management processes. By demystifying digital signatures, we can embrace their benefits and further advance in our digital world.
|
Complete Standard 1 of the “Teacher Work Sample” template.
Knowing your school and community with its unique demographics will support your understanding and effectiveness as a teacher.
Research demographic and logistical information about your school, community, and students. Organize this information into the corresponding sections of the TWS. The inserted data will include the following:
Community, District, and School Factors:
- Geographic location, population, stability of community, and community support for education.
- District name and grades served, number of schools, number of students, and percentage of students receiving free or reduced-price lunch.
- Your school information, including the school name, grades, number of students, percentage of students receiving free or reduced-price lunch, and the academic achievement ranking/level.
- The academic achievement ranking/level could include the Adequate Yearly Progress (AYP) and school improvement status.
- The demographic data of the students in your class, i.e., enrollment, ethnicities, gender, and socioeconomic information.
- Knowing classroom demographic information is helpful in developing instruction and classroom management/engagement plans.
- Environmental factors such as physical arrangement of the room, classroom resources available, parental involvement, and available/accessible technology.
- Student factors, including the number of students who receive educational resources outside the class, the number of students whose primary language is not English, the number of students on IEPs, the number of non-labeled students (such as those on 504 plans), and any other factors that influence the delivery of content in the classroom.
- Use a peer-reviewed or professional journal to expound on a topic within this standard.
- Write a 3-5 bullet summary of the literature source.
After you have inserted this information into the appropriate sections of the TWS, write the narrative portion of the report (Instructional Implications) and ensure it is no more than 500 words. Summarize how the information compiled above will influence your success as a teacher candidate by addressing the following:
- What points of information are imperative for new teachers to know? How will this information affect your teaching and interactions with students, staff, and community? How does this data inform your effectiveness as a teacher?
- How will the differences and similarities of the students in your classroom affect your daily interactions, instruction, planning, and classroom engagement/management plans?
|
Will future spacecraft fit in our pockets? - Dhonam Pemba
- TEDEd Animation
Due to Earth’s gravity and atmosphere, big, powerful chemical rockets are needed to launch from the Earth’s surface. However, chemical rocket engines run out of fuel before they can reach their maximum attainable speed. Ion thrusters have very high specific impulses but low thrust, which means they are not powerful but can provide propulsion for a very long time. They are good for deep space travel because their fuel lasts a very long time, allowing them to propel spacecraft to very high speeds while using little propellant. Although ion and electric propulsion has today become accepted technology powering space missions like the Dawn spacecraft’s journey to the asteroid belt and the protoplanets Vesta and Ceres, this technology wasn’t popular 15 years ago. Check out “Frequently Asked Questions About Ion Propulsion” to get some answers to your potential questions.
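The thrust-versus-efficiency trade-off can be made concrete with the Tsiolkovsky rocket equation, delta-v = Isp x g0 x ln(m0/mf): a higher specific impulse (Isp) yields proportionally more velocity change from the same mass of propellant. Here is a minimal Python sketch; the Isp figures are typical textbook values, not data for any particular engine.

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def delta_v(isp_s: float, m0_kg: float, mf_kg: float) -> float:
    """Tsiolkovsky rocket equation: ideal velocity change in m/s."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# The same 1000 kg spacecraft burning 200 kg of propellant:
chemical = delta_v(450, 1000, 800)   # chemical engine, Isp ~450 s
ion = delta_v(3000, 1000, 800)       # ion thruster, Isp ~3000 s
print(round(chemical), round(ion))   # 985 6567 -- about 6.7x the delta-v
```

This is why ion propulsion, despite its feeble thrust, can eventually push a spacecraft to far higher speeds on the same propellant budget.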
Deep Space One was launched October 24, 1998 to test out high risk, but high reward technologies. Its success pioneered ion-propelled spacecraft missions. Want to learn about Future Spaceship Power and Propulsion? Watch this documentary.
Today, not only are researchers looking for new ways to develop ion and electric thrusters to propel large spaceships into deep space, but they are also developing small thrusters the size of sugar cubes to drive cubesats, provide precise maneuvers, altitude and repositioning control. NASA’s Game Changing Development Program advances technologies that may lead to new ideas and techniques for space missions and other initiatives. Visit the site! Then, visit the Microdevices Laboratory at NASA’s Jet Propulsion Lab and see the latest development ideas for microspacecraft.
|
Electric vehicles use electricity as a primary or secondary power source instead of conventional motor fuels like gasoline or diesel. Using electricity stored in a battery to power an electric motor has natural advantages over the internal combustion engine, including quieter operation, zero tailpipe emissions, instant acceleration, and significantly cheaper operating and maintenance costs. Electricity can be used differently in vehicle applications – those applications are classified below:
Hybrid Electric Vehicles (HEVs)
HEVs are powered primarily by an internal combustion engine, but they also store electricity in a battery that assists in propelling the vehicle to improve fuel efficiency. Unlike a plug-in electric vehicle, the electricity used in HEVs doesn’t come from the electrical grid by plugging into a socket or charging station; instead, electricity is created through a regenerative braking process and through the internal combustion engine. HEVs cannot operate solely on electricity at higher speeds and rely heavily on the internal combustion engine at all times.
Plug-in Hybrid Electric Vehicles (PHEVs)
PHEVs use electric motors in conjunction with an internal combustion engine to power the vehicle, but PHEVs are different from HEVs because they plug-in to an electricity source and can store enough electricity on board to operate independently from the internal combustion engine. Depending on the model, PHEVs can travel 10-50 miles on electricity without using conventional fuels, but when the electric range is depleted the vehicle switches to the internal combustion engine and operates like an HEV for an extended range. PHEVs work great for commuters who can use their electric range to get to and from work during the week but occasionally need additional range for longer trips.
Battery Electric Vehicles (BEVs)
BEVs have no internal combustion engine and operate completely on electricity. They get their electricity by plugging into a power source, whether that is an electric vehicle charging station or an electrical outlet at home. Depending on the power source they use, BEVs can charge in as little as 30 minutes or take longer than 8 hours. The maximum range of a BEV depends greatly on the specific model: the Nissan Leaf has a range of around 80 miles, while a Tesla Model S can travel 265 miles on a single charge.
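Those charge times follow from simple arithmetic: hours is roughly battery capacity (kWh) divided by charger power (kW), ignoring charging losses and taper. The battery and charger figures below are illustrative assumptions, not manufacturer specifications.

```python
def charge_hours(battery_kwh: float, charger_kw: float) -> float:
    """Approximate hours to charge from empty (ignores losses and taper)."""
    return battery_kwh / charger_kw

# An assumed ~24 kWh pack (early Nissan Leaf class):
print(round(charge_hours(24, 3.3), 1))   # 7.3 hours on a 3.3 kW Level 2 charger
print(round(charge_hours(24, 50) * 60))  # 29 minutes on a 50 kW DC fast charger
```

The same pack thus spans the "30 minutes to more than 8 hours" range quoted above, depending almost entirely on charger power.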
We worked with the Colorado Energy Office to develop a great website, Refuel Colorado, that covers the details of each alternative fuel, including:
Workplace Charging: Get Your Work Plugged-In
HP’s Browyn Pierce, Global Real Estate Sustainability Program Manager, gives a rundown of the benefits and considerations of workplace charging at a global company.
Deciding to install electric vehicle (EV) workplace charging has a number of positive outcomes:
- It helps employers attract high-quality applicants and retain important employees, acting as a competitive employee benefit
- It improves public image, leading to more publicity and potentially increasing client and customer-bases
- It reduces the perception of range anxiety, a major barrier to EV adoption
- Community members will be grateful for the improved air quality and health benefits as employees ditch their gas-guzzlers and switch to zero-tailpipe-emission EVs
- Businesses that switch their fleets to EVs will improve their bottom line as fuel expenses become a thing of the past
To learn more about workplace charging and how it can benefit your business, check out the resources below! If you have any questions regarding workplace charging policies or installation, please give our coalition a call!
ALA’s Driving Change Program – http://lungwalk.org/CleanCitiesWebsite/wordpress/programs/rocky-mountain-clean-diesel-collaborative/
D.O.E. Workplace Charging Challenge – http://energy.gov/eere/vehicles/ev-everywhere-workplace-charging-challenge
Colorado’s EV Wired Workplaces – https://www.colorado.gov/pacific/energyoffice/wired-workplace-ev-charging
|
Thanks to TV shows like Sesame Street, many children enter preschool chanting or singing the number names from 1 to 20. Learning to count meaningfully requires both memorizing arbitrary terms or number names (rote counting) and rule-governed counting (rational counting). Rote recitation of the number words is not the same as having a good number sense for what 20, 25, or 100 means.
© Erikson Institute’s Early Math Collaborative. Reprinted from Big Ideas of Early Mathematics: What Teachers of Young Children Need to Know (2014), Pearson Education.
|
EMBARGOED: NOT FOR PUBLIC RELEASE BEFORE 5 P.M. EST MONDAY, FEBRUARY
Frugal Computer Uses DNA as Input, Fuel
Fifty years after the discovery of the structure of DNA, a team of scientists presents a tiny computing machine composed solely of DNA and enzymes. In terms of speed and size, DNA computers may eventually surpass traditional computers that use silicon microchips. While many groups have proposed designs for DNA computers, previous attempts have relied on an energetic molecule called ATP for fuel. In article #5624, Ehud Shapiro and colleagues describe a DNA computer that uses DNA as the fuel supply and is recognized by Guinness World Records as the world's smallest biological computing device. In each computational step, two complementary DNA molecules--an input molecule and a software molecule--spontaneously bond together. The software molecule then directs a DNA-cleaving enzyme to cut a piece of the input molecule. The enzyme, FokI, breaks two bonds in the DNA double helix, releasing the energy stored in these bonds as heat. This process generates sufficient power to carry out computations to completion without an external energy source. The authors report that a microliter of solution could hold up to three trillion of the DNA computers, performing 66 billion operations per second.
"DNA molecule provides a computing machine with both data and fuel" by Yaakov Benenson, Rivka Adar, Tamar Paz-Elizur, Zvi Livneh, and Ehud Shapiro
MEDIA CONTACT: Ehud Shapiro, Weizmann Institute of Science; tel. 972-8-9344506, fax 972-8-9471746, or e-mail <[email protected]>; and David Hawksett, Guinness World Records; tel 44-207-891-4588, fax 44-207-891-4501, or e-mail <[email protected]>
A VISUAL ACCOMPANYING THIS ARTICLE IS AVAILABLE
Additional PNAS embargoed supplementary material:
|
The Equality Act and your child’s education
Knowing what your legal rights are can help make sure that your child gets the support they need in their education.
Sometimes just showing that you're aware of your legal rights can mean that the education setting will take your concerns more seriously.
It’s usually easier to try and prevent problems rather than having to fix something once it’s gone wrong. For more information on working successfully with your child’s school or college see:
The Equality Act 2010 is an important law that protects deaf children and young people from being discriminated against.
If your child lives in England, Scotland or Wales, they will be protected by the Equality Act. If you live in Northern Ireland your child is protected by the Disability Discrimination Act 2005.
To find out more information on disability legislation in Northern Ireland, visit the Equality Commission for Northern Ireland website.
The Equality Act is particularly helpful in education settings in situations such as when:
- a nursery or school has refused to admit your child for a reason related to their deafness
- your child’s school has refused to include them in an outing for a reason related to their deafness
- your child has been excluded from school, or punished in some way, for a reason that you believe is connected to their deafness
- you believe that your child’s nursery, school or college isn’t deaf aware
- your child’s deafness makes it harder for them to do well and they need more support
- your child needs radio aids or other equipment to help take part in lessons or lectures, but they don’t have an Education, Health and Care (EHC) plan (England), statement of special educational needs (Wales) or a coordinated support plan (Scotland)
- your child needs special arrangements so that they can sit exams or tests
- your child is about to start or transfer to another school and you want to work with the new school to make sure they’ve planned ahead so that your child has a smooth start
- your child is about to start studying at a further education college or university and you want to make sure that they get the right support.
All public bodies and services must follow the Equality Act, including local authorities and all education providers such as nurseries, schools (including private and independent schools), colleges, training providers and universities.
To be protected by the Equality Act, your child needs to meet the legal definition of disability.
This is when a person has “a physical or mental impairment which has a substantial and long term adverse effect on that person’s ability to carry out normal day-to-day activities.”
In this instance, ‘long term’ is defined as lasting, or likely to last, for at least 12 months.
All permanently deaf children will meet the definition of disability.
However, children who experience temporary deafness (for example, due to glue ear), will only be considered disabled under the Equality Act if the temporary deafness has lasted or is likely to last more than 12 months.
Under the Equality Act, the following behaviours are unlawful.
Direct discrimination
This is when a provider treats a child or young person less favourably because of their deafness (or other disability).
Example: a school refusing to admit a child because they’re deaf.
It’s also direct discrimination when a disabled person is placed at a substantial disadvantage because reasonable adjustments haven’t been made.
Example: a teacher asking students to make notes while they speak. A deaf child would be at a substantial disadvantage because it’s impossible to lip-read the teacher while looking down to make notes.
Indirect discrimination
This is when a provider does something which applies to all children and young people, but is more likely to have an impact on those with disabilities.
Example: a school rewarding those who have a 100% attendance record with a trip to a theme park, without taking into account instances where pupils have had to miss lessons because of their disability (for example, to attend audiology appointments).
Discrimination arising from disability
This is when a provider treats a child or young person less favourably because of something connected to their disability rather than the disability itself.
Example: a school deciding that children with short attention spans (including deaf children who can sometimes find it harder to pay attention for long periods of time) won’t be allowed to watch a performance by a visiting theatre company.
Harassment
This is when a provider does something which makes a disabled child or young person feel ‘picked on’, intimidated or humiliated because of their deafness.
Example: a teacher ridiculing a deaf child in class because the child didn’t hear their name being called.
Victimisation
This is when a person is treated less favourably because they, their parent or sibling have done a ‘protected act’. Protected acts include making a claim or complaint of discrimination.
Example: a parent writes a letter of complaint, saying that the school isn’t fulfilling its duties towards their deaf child because no deaf awareness training has been given. As a result, the deaf child’s non-disabled younger sister is refused a place at the school.
The Equality Act protects your child in two main ways:
- It entitles your child to reasonable adjustments.
- It means education providers need to plan ahead and think about how they can remove any barriers that might disadvantage deaf or disabled children and young people.
What is a reasonable adjustment?
A reasonable adjustment is a change a provider makes so that a deaf child can do something which they wouldn’t otherwise be able to do.
If an education provider refuses or fails to make reasonable adjustments, then this can be seen as discrimination.
More information on reasonable adjustments can be found in the Equality and Human Rights Commission’s publication Reasonable Adjustments for Disabled Pupils.
The law doesn’t say exactly what a reasonable adjustment would be. This is because what’s ‘reasonable’ may depend on the situation.
Factors that should be taken into account include:
- how much it would cost to make the adjustment
- how practical it would be to make the adjustment
- the difference it would make to the disabled child or young person
- how much funding the provider has to make the adjustment.
For example, it may be reasonable to expect a large secondary school to introduce soundfield systems, but for a village primary school with only 50 pupils and a small budget, the cost might be considered unreasonably high and impractical.
If something can be done easily, quickly or inexpensively, then it should be seen as a reasonable adjustment.
Examples of reasonable adjustments include:
- teachers being asked to make sure that they face your child when speaking so that your child can lip-read
- teachers agreeing to support their lessons with visual aids
- basic deaf awareness training being organised for all staff at the secondary school your child will attend
- homework tasks being printed out in advance of the lesson, so that your child doesn’t have to listen and write at the same time
- a school agreeing to install a soundfield system in your child’s classroom
- your child being given extra time to complete an exam, because they take longer to process what they read
- a college or university providing a note-taker to help your child.
No charge should be made to parents or a young person for any reasonable adjustment.
Auxiliary aids and services
Under the Equality Act, education settings and local authorities must also provide ‘auxiliary aids’ as a reasonable adjustment to help disabled people overcome any disadvantage they may experience because of their disability.
The term auxiliary aid covers both aids and services and could include:
- radio aids
- soundfield systems
- assistance to make sure that hearing aids are working correctly
- a note-taker to provide written notes
- a communication support worker, sign language interpreter or lip-speaker
- a British Sign Language (BSL) interpreter for deaf parents to attend parents’ evening.
What if a reasonable adjustment can’t be made?
In some cases, it’s not possible for an education setting to make a reasonable adjustment because it would be too expensive or difficult to do so.
In these cases, the law says that the local authority may need to provide additional support beyond what the education setting can provide.
For example, if it’s too expensive for a nursery, school or college to provide your child with a radio aid, the local authority should consider if they could provide this themselves as a reasonable adjustment.
Local authorities will have much larger budgets so the cost of a radio aid may be more reasonable at this level. For this reason, in many areas, radio aids are normally provided by the local authority to be used in different education settings and within the home.
In other cases, the fact that an education setting can’t make reasonable adjustments may mean that your child meets the threshold for an Education, Health and Care (EHC) plan (England), a statement of special educational needs (Wales) or a coordinated support plan (Scotland).
These are legal documents that set out the support that your child needs. They are usually given to a child where they need more support than a nursery, school or college can reasonably provide. They are not available to young people in higher education.
The Public Sector Equality Duty
Under the Equality Act, all public bodies must follow the Public Sector Equality Duty (PSED).
This requires all public bodies, including state-funded schools, colleges and universities, and local authorities to have due regard for the need to:
- eliminate discrimination and other conduct that is prohibited by the Act
- advance equality of opportunity between people who share a ‘protected characteristic’ and people who don’t share it. Protected characteristics include sex, race, disability, religion or belief and sexual orientation
- foster good relations between people who share a protected characteristic and people who don’t, for example, between disabled people and non-disabled people.
The PSED is an ‘anticipatory duty’. This means that all public bodies must plan ahead and think about how they can remove any barriers that might disadvantage deaf or other disabled children and young people.
This also means that whenever significant decisions are being made or policies developed, schools must consider carefully the impact they will have on equality. For example, when planning an extra-curricular activity, schools must consider whether the activities are accessible for disabled children and young people.
Schools and colleges are also required to publish equality objectives to show how they are meeting the PSED. These objectives should be updated at least once a year and new objectives should be published at least every four years.
An example of an equality objective for a school might be to reduce hostile attitudes and behaviour towards, and between, disabled and non-disabled pupils.
Many schools and colleges publish their objectives on their website.
Accessibility planning for disabled pupils
Schools are required to carry out accessibility planning for disabled pupils. They must write, implement and review accessibility plans which are aimed at:
- increasing the extent to which disabled pupils can participate in the curriculum
- improving the physical environment of schools so that pupils can take better advantage of education, benefits, facilities and services provided
- improving the availability of accessible information to disabled pupils.
The accessibility plan may be a separate document or may be published as part of another document such as the school development plan. It should be available on the school website, or you can ask the school for a copy.
Accessibility plans should be reviewed and republished every three years.
Whoever is responsible for delivering the educational part of an apprenticeship (this could be a further education college, private training provider or the employer themselves) has the same duty as other education providers to make reasonable adjustments and avoid/prevent discrimination.
Funding is available through Disabled Students' Allowances (DSAs) for support which can’t be met as reasonable adjustments by the university or other higher education provider.
Students should apply for DSAs at the same time as filling in their student finance applications – they should do this as early as possible to guarantee support will be in place when they start their course.
In England, DSAs aren't available for support that the Government expects universities to fund as reasonable adjustments. This includes manual note-takers, proofreaders and alert systems for accommodation.
Access arrangements for exams
Education providers and general qualifications bodies both have a duty to make reasonable adjustments to make sure deaf children and young people aren’t unfairly disadvantaged when sitting exams or assessments which lead to a qualification.
For more information see Access arrangements for your child's examinations.
If you think that an education setting is refusing or failing to make reasonable adjustments, then this may mean your child is being discriminated against.
Before deciding what action to take, check that the problem really is discrimination. You can do this by answering the following questions:
- Did the education provider know that your child was deaf?
For some types of discrimination to apply, the provider must know about the disability. This doesn’t mean that you need to formally tell the provider that your child is deaf, particularly if it’s obvious because, for example, your child wears a hearing aid.
However, if your child’s hearing loss is less obvious or if you aren’t sure if the provider is aware, it’s better to let them know and make sure that a formal record is kept so that there’s no doubt that the provider is aware.
- Has your child been put at a substantial disadvantage or been treated less favourably?
- Is this disadvantage or less favourable treatment related to your child’s deafness (or other disability)?
- Could the provider have made reasonable adjustments to avoid the disadvantage?
- Could the provider justify the less favourable treatment, for example, on the grounds of health and safety?
Providers might use a ‘justification defence’. This means that they may acknowledge that your child has been discriminated against but that there was nothing they could do about it. The provider will only be able to rely on this defence if it can demonstrate that it has considered all reasonable adjustments.
Example 1: A school decides not to take a pupil on a trip to the local swimming pool as the school’s risk assessment policy requires children to be able to hear verbal commands whilst in the pool in case of an emergency.
This is an example of indirect discrimination as it’s a policy which is preventing the deaf child from swimming. The school may argue that it’s simply trying to keep the children safe. However, if a reasonable adjustment could be made (such as a communication support worker entering the pool with the child) then the school is unlikely to be able to rely on this defence.
Example 2: A school decides not to allow a child who is deaf and has complex needs to join in a school trip to London. This is because he often acts impulsively and has run away from the school and staff on previous school trips.
This is an example of discrimination arising from a disability, as the school is not saying that he can’t go on the trip due to his disability but because of something connected to his disability. The school may again argue that it wants to keep the child safe. If the school can demonstrate that it has undertaken a full risk assessment and has exhausted the reasonable adjustments which can be made, this decision may be justified.
Please note that the justification defence is not available for claims of direct discrimination, failure to make reasonable adjustments, harassment or victimisation.
Making a claim against a school
In most cases, you should be able to work with your child’s school to resolve problems. Find out more about making a complaint about your child's school.
However, if you need to take matters further, you have the right to appeal or make a claim to a specialist independent Tribunal.
There are different Tribunals for England, Scotland and Wales.
Claims must be made within six months of the date of the incident.
In England, claims against schools should be made to the First-tier Tribunal Special Educational Needs and Disability.
The Tribunal publishes a guide to bringing a disability discrimination case which gives more details about what issues the Tribunal will and won’t deal with (for example it doesn’t deal with school admissions or some exclusions). Our factsheet, How to appeal to the Tribunal against a decision about your child’s special educational needs (England), gives more information.
In Scotland, you should approach your local authority education department. If you can’t resolve the problem with the education department you can raise a claim at the First-tier Tribunal for Scotland (Education and Health Chamber).
In Wales, you can appeal to the Special Educational Needs Tribunal for Wales.
Our factsheet, Appealing to the Special Educational Needs tribunal in Wales, gives more information.
What can I expect at a Tribunal?
There’s no charge for lodging a disability discrimination appeal or claim with the Tribunal service and you don’t have to have legal representation for a Tribunal appeal, although you can if you wish to.
In England and Wales, although the Tribunal hearing is still a legal hearing, it’s conducted in a less formal way than some other court proceedings.
For example you and your representative and witnesses (if applicable), and the provider and their representative and witnesses (if applicable) will all sit at one side of a table in the same room throughout the hearing, with the members of the Tribunal sitting opposite.
Tribunals usually take place in a room inside a court building where people will meet around a table. They may occasionally, although not usually, take place in a typical courtroom.
The chair of the Tribunal is legally qualified, while the lay members (non-lawyers) will have significant knowledge and experience of special educational needs and disability. The Tribunal members will aim to run proceedings in a friendly way and make things as unintimidating as possible.
In Scotland, a Tribunal hearing may seem more formal (for example, witnesses are called one by one and don’t stay in the room to hear other witnesses’ evidence) but the Tribunal members still aim to conduct hearings in a way that is accessible to families.
The Tribunals can’t order the payment of compensation, but they may make any order that they think appropriate, often with the intention of trying to remedy the damage done and to reduce any further disadvantage. For example, tribunals could order:
- a letter of apology
- staff training
- changes to policies and procedures
- additional education for a pupil who has missed education
- an additional school trip for a pupil who has missed a trip.
If you need extra support, you can also contact our Freephone Helpline. We have a team of education appeals advisers who may be able to support you in making a claim against a school or local authority.
Making a claim against other providers
The Tribunal can only hear disability discrimination claims in relation to schools. All other discrimination (for example in nurseries, colleges, universities and training providers) must be challenged in the County Court (England and Wales) or Sheriff Court (Scotland).
Unlike the Tribunal, the County Court or Sheriff Court can order the payment of compensation. This is known as an award for injury to feelings. It can also order other remedies including:
- making a declaration that your child has been unlawfully discriminated against, harassed or victimised, or declare that no unlawful discrimination, harassment or victimisation has taken place
- imposing an injunction (known in Scotland as an interdict) requiring the local authority to do something to prevent them from repeating any discriminatory act in the future.
If you’re lodging a claim in the County Court in England or Wales, or Sheriff Court in Scotland, you’ll have to pay a fee for lodging your claim form. The fee will depend on the amount of money you’re claiming for damages.
There may also be other costs connected to lodging a claim in the County Court or Sheriff Court. These can include:
- the cost of your legal representatives. This may not apply if you’re eligible for legal aid, or if you win your case, the opposing party may be ordered to pay your legal costs.
- the legal costs of the opposing party if you lose your case.
We have created some template letters to help you use the Equality Act to:
- challenge a permanent exclusion (England and Wales)
- request reasonable adjustments
- challenge a failure to make reasonable adjustments
- complain about failure to make reasonable adjustments
- complain about less favourable treatment (disability discrimination)
- notify of your intention to make a claim to the Tribunal about less favourable treatment (disability discrimination)
Please copy and paste the content of these template letters and then edit them as appropriate to your situation.
We have created some scenarios to show how the Equality Act might be relevant in a range of different circumstances.
Although the Equality Act only applies in England, Scotland and Wales, these scenarios may still be helpful to parents in Northern Ireland as there are some similarities between the Equality Act and Northern Ireland’s Disability Discrimination Act 2005.
|
Lesson Plans for Secondary School Educators
Unit Five: "The Tides of Fate Are Flowing"
Content Focus: The Lord of the Rings, Book Two
Thematic Focus: Free Will and Fellowship
In Book Two of The Lord of the Rings, one of Tolkien's major concerns, the tension between personal freedom and providential design, emerges full blown. The Unit Five resources are intended to help students wrestle with the ancient problem of free will, both as a literary theme and as the sine qua non of their choices in life.
By the end of Unit Five, the student should be able to:
Discuss why "free will" is a crucial idea in the history of Western thought.
Indicate how the ideal of fellowship not only suffuses Tolkien's epic but also figured in his personal life.
Recapitulate the four proposed solutions to the Ring crisis considered by the Council of Elrond, and explain why only one proved acceptable.
Compare and contrast two pivotal moments in Book Two: the test of Galadriel and the fall of Boromir.
Paraphrase Galadriel's warning concerning prophecy.
Unit Five Content
Comments for Teachers
These lesson plans were written by James Morrow and Kathryn Morrow in consultation with Amy Allison, Gregory Miller, Sarah Rito, and Jason Zanitsch.
Lesson Plans Homepage
|
This article incorporates, in modified form, material from Illustrated Guide to Home Chemistry Experiments: All Lab, No Lecture.
A colloid, also called a colloidal dispersion, is a two-phase heterogeneous mixture made up of a dispersed phase of tiny particles that are distributed evenly within a continuous phase. For example, homogenized milk is a colloid made up of tiny particles of liquid butterfat (the dispersed phase) suspended in water (the continuous phase). In comparison to true solutions, the continuous phase can be thought of as the solvent-like substance and the dispersed phase as the solute-like substance.
Each type of colloid has a name. A solid sol is one solid dispersed in another solid, such as colloidal gold particles dispersed in glass to form ruby glass. A solid emulsion is a liquid dispersed in a solid, such as butter. A solid foam is a gas dispersed in a solid, such as Styrofoam or pumice. A sol is a solid dispersed in a liquid, such as asphalt, blood, pigmented inks, and some paints and glues. An emulsion, sometimes called a liquid emulsion, is a liquid dispersed in another liquid, such as mayonnaise or cold cream. A foam is a gas dispersed in a liquid, such as whipped cream or sea foam. A solid aerosol is a solid dispersed in a gas, such as smoke and airborne particulates. An aerosol, sometimes called a liquid aerosol, is a liquid dispersed in a gas, such as fog, which is tiny water droplets suspended in air. All gases are inherently miscible (completely soluble in each other), so by definition there is no such thing as a gas-gas colloid. Some colloidal substances are a mixture of colloid types. For example, smog is a combination of liquid and solid particles dispersed in a gas (air), and latex paint is a combination of liquid latex particles and solid pigment particles dispersed in another liquid. Table 18-1 summarizes the types of colloids and their names.
What About Gels?
Many reference sources incorrectly list gel as a type of colloid, describing a gel as a liquid dispersed phase in a solid continuous phase, which is properly called a solid emulsion. In fact, a gel is a type of sol in an intermediate physical phase. The density of a gel is similar to the density of the dispersing liquid phase, but a gel is physically closer to solid form than liquid form. Prepared gelatin is a good example of a typical gel. Mary Chervenak adds, “I think toothpastes are defined as colloidal gels with viscoelastic properties.”
Table 18-1. Types of colloids
| Phase of colloid | Continuous phase | Dispersed phase | Colloid type |
|---|---|---|---|
| Solid | Solid | Solid | Solid sol |
| Solid | Solid | Liquid | Solid emulsion |
| Solid | Solid | Gas | Solid foam |
| Liquid | Liquid | Solid | Sol |
| Liquid | Liquid | Liquid | Emulsion |
| Liquid | Liquid | Gas | Foam |
| Gas | Gas | Solid | Solid aerosol |
| Gas | Gas | Liquid | Aerosol |
What differentiates a colloid from a solution or a suspension is the size of the dispersed particles. In a solution, the dispersed particles are individual molecules, if the solute is molecular, or ions, if the solute is ionic. Particles in solution are no larger than one nanometer (nm), and usually much smaller. In a colloid, the dispersed particles are much larger, with at least one dimension on the close order of 1 nm to 200 nm (=0.2 micrometer, μm). In some colloids, the dispersed particles are individual molecules of extremely large size, such as some proteins, or tightly-bound aggregates of smaller molecules. In a suspension, the dispersed particles are larger than 100 nm.
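The size boundaries just described can be sketched as a simple decision rule. This is a minimal Python sketch, not anything from the book itself; the function name and the "gray area" label are my own, and note that (as the text stresses) real classification in the 100–200 nm overlap also depends on the nature of the two phases:

```python
def classify_mixture(particle_size_nm: float) -> str:
    """Rough classification of a mixture by dispersed-particle size alone.

    Boundaries follow the text: solutions < 1 nm, colloids ~1 nm to
    ~200 nm, suspensions > 100 nm. The 100-200 nm overlap is real --
    there the answer also depends on the nature of the continuous and
    dispersed phases, so this sketch simply flags it.
    """
    if particle_size_nm < 1:
        return "solution"
    if particle_size_nm <= 100:
        return "colloid"
    if particle_size_nm <= 200:
        return "colloid or suspension (gray area)"
    return "suspension"

print(classify_mixture(0.3))   # solution (e.g., dissolved ions)
print(classify_mixture(50))    # colloid (e.g., milk butterfat droplets)
print(classify_mixture(150))   # colloid or suspension (gray area)
print(classify_mixture(5000))  # suspension (settles out eventually)
```

The gray-area branch makes the book's point explicit: particle size alone does not always decide the category.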
These differing particle sizes affect the physical characteristics of solutions, colloids, and suspensions, as follows:
- Solutions, and (usually) colloids, do not separate under the influence of gravity, while suspensions eventually settle out. In a colloid, the interactions among the tiny particles of the dispersed phase with each other and/or with the continuous phase are sufficient to overcome the force exerted by gravity on the tiny particles of the dispersed phase. In a suspension, the force of gravity on the more massive particles of the dispersed phase is sufficient to cause them to settle out eventually, although it may take a long time for that to occur. (If the particles of the dispersed phase are less dense than those of the continuous phase, as for example in a mixture of oil dispersed in water, the dispersed phase “settles” out on top of the continuous phase, but the concept is the same.)
- Solutions do not separate when centrifuged, nor do colloids except those that contain the largest (and most massive) dispersed particles, which may sometimes be separated in an ultracentrifuge.
- The particles in solutions and colloids cannot be separated with filter paper, but suspensions can be separated by filtering.
- Solutions pass unchanged through semipermeable membranes–which are, in effect, filters with extremely tiny pores–while suspensions and all colloids except those with the very smallest particle sizes can be separated by membrane filtration.
- Flocculants are chemicals that encourage particulate aggregation by physical means. Adding a flocculant to a solution has no effect on the dispersed particles (unless the flocculant reacts chemically with the solute), but adding a flocculant to a colloid or suspension encourages the dispersed particles to aggregate into larger groups and precipitate out.
- The particles in a solution affect the colligative properties of the solution, while the particles in a colloid or suspension have no effect on colligative properties.
- Solutions do not exhibit the Tyndall Effect, while colloids and suspensions do. The Tyndall Effect describes the scattering effect of dispersed particles on a beam of light. Particles in solution are too small relative to the wavelength of the light to cause scattering, but the particles in colloids and suspensions are large enough to cause the light beam to scatter, making it visible as it passes through the colloid or suspension.
Mary Chervenak comments
Synthetic latexes, some of which have small enough particles to be considered aqueous colloidal suspensions, appear blue for this reason.
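The bluish appearance follows from the wavelength dependence of scattering by small particles: in the Rayleigh limit (particles much smaller than the wavelength of light), scattered intensity scales as 1/λ⁴, so blue light is scattered much more strongly than red. A quick estimate, assuming representative wavelengths of 450 nm (blue) and 650 nm (red):

```python
# Rayleigh-limit estimate: scattered intensity scales as 1/lambda^4,
# so very small dispersed particles scatter blue light much more
# strongly than red -- which is why some fine colloids look bluish.
blue_nm = 450.0
red_nm = 650.0

relative_scattering = (red_nm / blue_nm) ** 4
print(round(relative_scattering, 2))  # 4.35 -> blue scattered ~4.35x more than red
```

Larger colloidal particles scatter all wavelengths more evenly (Mie regime), which is why milk looks white rather than blue.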
Figure 18-1 shows the Tyndall Effect in a beaker of water to which a few drops of milk had been added. I used a green laser pointer for this image because the much dimmer red laser pointer I used when I actually did the lab session proved impossible to photograph well, even though it was clearly visible to the eye. The bright green line that crosses the beaker is the actual laser beam, reflected by the colloidal dispersion. The green laser pointer is bright enough that the scattered light illuminates the rest of the contents of the beaker as well.
Figure 18-1. The Tyndall Effect
Table 18-2 summarizes the physical characteristics of solutions, colloids, and suspensions. It’s important to understand that there are no hard-and-fast boundaries between solutions, colloids, and suspensions. Whether a particular mixture is a colloid or a suspension, for example, depends not just on the particle size, but the nature of the continuous phase and the dispersed phase. For example, note that the particle size of colloids may range from about 1 nm to about 200 nm, while the particle size of suspensions may be anything greater than 100 nm. Furthermore, particle sizes are seldom uniform, and may cover a wide range in any particular mixture.
So, is a particular mixture with a mean particle size of 100 nm a colloid or a suspension? It depends on the nature of the particles and the continuous phase. Solutions, colloids, and suspensions are each separated by a large gray area. Near the boundaries between types, it’s reasonable to argue that a substance is both a solution and a colloid, or both a colloid and a suspension. As George S. Kaufman said, “One man’s Mede is another man’s Persian.”
Mary Chervenak comments
“Quod cibus est aliis, aliis est venenum.” (What to some is food, to others is poison.)
Table 18-2. Physical characteristics of solutions, colloids and suspensions.
| Characteristic | Solution | Colloid | Suspension |
|---|---|---|---|
| Type of particle | individual molecules or ions | very large individual molecules or aggregates of tens to thousands of smaller molecules | very large aggregates of molecules |
| Particle size | < 1 nm | ~1 nm to ~200 nm | > 100 nm |
| Separation by gravity? | no | no (usually; otherwise, very slowly) | yes |
| Separation by centrifugation? | no | yes, for more massive dispersed particles | yes |
| Captured by filter paper? | no | no | yes |
| Captured by membrane? | no | yes (usually) | yes |
| Precipitable by flocculation? | no | yes | yes |
| Exhibits Tyndall Effect? | no | yes | yes |
| Affects colligative properties? | yes | no | no |
In this chapter, we’ll prepare various colloids and suspensions and examine their properties.
Everyday Colloids and Suspensions
- The protoplasm that makes up our cells is a complex colloid that comprises a dispersed phase of proteins, fats, and other complex molecules in a continuous aqueous phase.
- Detergents are surfactants (surface-active agents) that produce a colloid or suspension of tiny dirt particles in an aqueous continuous phase.
- Photographic film consists of an emulsion of gelatin that serves as a substrate for a suspension of microscopic grains of silver bromide and other light-sensitive silver halide salts.
- Many common foods, including nearly all dairy products, are colloids or suspensions.
- Toothpaste, shaving gel, cosmetic creams and lotions, and similar personal-care products are colloids.
- Water treatment plants use flocculants (chemicals that cause finely suspended or colloidal dirt to clump into larger aggregates and settle out) as the first step in treating drinking water.
|
As the evidence that human activity and pollution are quickly warming our planet continues to pile up, researchers are now noticing a particularly shocking side effect. We normally think of migration as the movement of animals (including humans) between two locations, but ecologists from Purdue University have discovered that the effects of climate change — called global warming before the spin artists gave it a new, less-dire moniker — are actually forcing trees to move, too, and not in the way that anyone could have predicted.
As the planet gradually gets hotter, forest researchers have been looking for signs of various tree species moving to new areas that match the climate they are used to. In most cases, that would mean moving closer to the poles in search of cooler conditions, but now scientists have shown that many species in the United States are actually heading west instead of north. Of the 86 tree species the team tracked, the majority have taken a westward path.
This discovery was confusing at first, but the researchers ultimately determined that it's likely due to shifting weather patterns. The long-term changes in precipitation levels — which, again, are a documented symptom of manmade global warming — have caused a rather abrupt response from the trees, though ultimately temperature will likely play a role in their migration as well.
Additionally, forest fires — sometimes a natural phenomenon, but over 90 percent of which are actually caused by humans — are making the process of tracking migration of tree species even more difficult, so we may not fully understand the impact humans are having, because one of our horrible mistakes is helping to obscure the other horrible mistake. If trees could talk, you have to imagine they’d be screaming right now.
|
"The definition of good parenting and good teachingOppositional Defiant Disorder (ODD)
is being responsive to the hand you've been dealt,"
Dr. Ross Greene says.
Teaching a child with Oppositional Defiant Disorder can be frustrating, challenging and exhausting. However, it is important to remember that the student is suffering, too. These students have mental or emotional deficits that may be a result of stress at home, economic disadvantages, conflicting parenting styles, or even negligence or neurochemical imbalances. They are not acting this way just to make everyone else miserable – even though it may sometimes seem that way! Though these students can be disruptive or upsetting, there are useful strategies for helping them act appropriately. Another point to remember is that these students need structure: rules, laws, rewards, punishment, love, guidance, and a sense of safety.
|
By JoAnne Skelly
Extension Educator, Carson City/Storey County
University of Nevada Cooperative Extension
The Caughlin Fire has raised many questions from homeowners on how to save their landscapes. People want to know how to tell if their trees are alive or whether they can be saved. They want to know what to do first and whether they should prune now.
Fire damages trees or shrubs in a number of ways:
- Trunk or branch damage
- Inner tissue injury
- Leaf or needle scorch
- Bud death
- Root damage
Since the Caughlin fire occurred when trees were entering dormancy, trees may be more likely to survive. Survival will depend on fire intensity and the length of the tree's exposure. Thickness of bark also influences survival, as does chemical content: evergreen trees have a high oil and wax content and a greater burn potential, while leafless, deciduous trees with an open, loose branching pattern are more likely to survive. Trees stressed by drought, injury, disease, or insects are weak to begin with and unlikely to survive.
To determine if a tree will survive, look to see if the bark is completely burned off, exposing the tender tissue underneath. When the bark is gone, the tree probably won't survive. If there is bark, cut away a quarter-sized piece to see if there is a green or white layer immediately below. If so, the tree has a good chance of recuperating. If the trunk is severely burned around more than 50 percent of its circumference, the tree will probably die, although some thick-barked trees may survive. To check if burned branches are alive, peel back a bit of bark on twigs. If there is a thin layer underneath that is green or white and moist, the twigs may be alive; wait to see if they produce spring growth before pruning these branches. Where the fire burned deeply into the trunk, the tree will be unstable and survival is unlikely. These are hazard trees and should be removed. Evergreen trees may survive if more than 10 percent of their foliage is still green. Whether evergreen or deciduous, check the buds: they should be moist, not brittle.
See if the roots are burned around the base of the tree. Gently brush away soil 6 to 8 inches deep in a few locations and see if roots appear supple rather than dry and brittle. If 50 percent of the roots have been burned, the tree is unstable, may be toppled by wind and is likely to die.
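For illustration only, the rough survival criteria above can be collected into a single triage sketch. The function name and thresholds are a paraphrase of this article's guidelines (50 percent of trunk circumference, 50 percent of roots, 10 percent green foliage for evergreens) and are no substitute for an arborist's inspection.

```python
def tree_triage(bark_gone, burned_circumference_pct, green_layer_under_bark,
                roots_burned_pct, green_foliage_pct=None):
    """Collect the article's rough survival criteria into one sketch.

    Illustrative only: thresholds paraphrase the text and ignore many
    real-world factors (species, bark thickness, fire intensity).
    """
    if bark_gone or burned_circumference_pct > 50:
        return "probably will not survive; thick-barked trees may be an exception"
    if roots_burned_pct >= 50:
        return "unstable and likely to die; may be toppled by wind"
    if green_foliage_pct is not None and green_foliage_pct <= 10:
        return "evergreen unlikely to survive"
    if green_layer_under_bark:
        return "good chance of recovery; water it and wait for spring growth"
    return "uncertain; re-check buds and twigs before pruning"

print(tree_triage(False, 20, True, 0))
# good chance of recovery; water it and wait for spring growth
```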
To care for fire-damaged trees, water them as soon as possible. Plants will need water because soils were dried out by the fire, and some soils may repel water. Fire-damaged and water-stressed trees are more susceptible to bark beetle attack. Prune off dead, broken, or severely damaged limbs. Trees that must be cut down should be removed from the property to avoid beetle infestations.
After a fire, when evaluating what steps to take, think about safety first. Check for unstable trees or tree limbs that may fall. Then, take care of remaining trees and be patient. Many trees can survive a fire.
For more information on landscape care after fire see the University of Nevada Cooperative Extension publication Taking Care of Residential Trees after Wildfire or contact JoAnne Skelly at 775-887-2252 or [email protected].
|
At one time most people referred to the Original Peoples of the western hemisphere as Indians, as though they were all a common ethnic group. In the Arctic they were called Eskimos. The Original Peoples, however, knew, and still know, themselves as Inuit, Innu, Haudenosaunee, Beothuk, Nuu-chah-nulth, Inka, Mapuche, etc. They all became known as Indians because Christopher Columbus landed somewhere in the Caribbean, likely the Bahamas, but reckoned that he was in India. As a consequence, he called the Lucayan people Indians, and everyone else in the western hemisphere was called Indian by the Europeans, even after they realized it was not India. Over time the Original Peoples began to refer to themselves as Indians, and the Indigenous names became Christianized in the ways of the White people. Such is the force of colonization and the matrix of assimilation.
Nowadays, informed people realize the foolishness of considering the multiplicity of nations as a unitary people.
In Kent Nerburn’s Neither Wolf nor Dog the Lakota Elder says:
Some Indians decided they would rather be called Native Americans. But some say that’s no more real than Indians, because, to some of us, this isn’t even America. Someone was lost and thought they landed somewhere else. It’s like if someone took over this country, now, and called it, say, Greenland, and then they said that those of us who were already here are going to be called Native Green landers. And they said they were doing this out of respect. Would you feel respected? That’s what we put up with every day-people calling us a bunch of names that aren’t even real and aren’t even in our language. We had our identities taken from us the minute Columbus arrived in our land.1
Leroy Little Bear, Menno Boldt, and J. Anthony Long wrote, “Indians [sic] resent and object to what they perceive as academic paternalism and an assimilationist bias in much of literature on Indian issues and policies.”2
They decried the 1876 Indian Act in Canada as having subverted “traditional political institutions” of Original Peoples “by provisions in the act that deliberately encouraged individual property rights and landholding of reserve lands.”3
The Indian Act imposed a top-down electoral system that militated against the traditional Indigenous system of consensus. The outcome was “band councils [that] functioned as agents of the federal government in a model of colonial indirect rule rather than as representatives responsible for their own people.”4
It has been an attempt to undermine sovereignty and nationhood (something never surrendered or conquered) by fobbing First Nations off on provincial governments while trying to municipalize reserves. The authors argue that First Nation rights pre-date Canadian confederation; therefore, Canadian government rule over natives is seen as illegitimate.
“[S]elf-government is seen by Indians as necessary to preserve their philosophical uniqueness” … “They do not want merely a European-Western model of government that is run by Indians; rather, they want a government that will restore their relationship and natural environment rather than try to assimilate them into the dominant society.”5
Recently the head of the Assembly of First Nations, Shawn Atleo, was forced out, and this brought again to the fore the dynamic between resistance to assimilation and assimilation and collaboration. For an Indigenous perspective on what this means, I interviewed Gord Hill, author of The 500 Years of Resistance Comic Book (see review) and The Anti-Capitalist Resistance Comic Book. Gord Hill is a member of the Kwakwaka’wakw nation in the Pacific Northwest.
Kim Petersen: I asked to interview you because of a news release by Idle No More entitled “Shawn Atleo Forced Out as National Chief by Indignation of First Nations Peoples, as Opposition Builds Against Rushed, Assimilationist First Nations Education Act.” Basically the news release communicated that Atleo, who was supposed to be a representative for First Nation peoples was a collaborationist with the Harper regime. How do you respond to the news release?
Gord Hill: The INM statement is not a new revelation, as grassroots Indigenous people have been saying the same thing for over thirty years in regards to the AFN and the Indian Act band councils in general. The band councils were imposed by the federal government, rely on the government for funding and legitimacy, and were established in order to control and further assimilate Indigenous peoples.
KP: Many in the Indigenous resistance, for example, warrior Dacajeweiah (Splitting the Sky) and Elder Kahn-Tineta Horn, consider the Assembly of First Nations a collaborationist institution. If so, then, is it any wonder that Atleo would allegedly act as a collaborator with a colonial institution?
GH: The colonial institutions, such as the AFN and its provincial counterparts, as well as the band council system overall, are designed to be collaborator organizations that carry out federal government policies.
KP: I have read that you also are skeptical to Idle No More. Could you elaborate on how you currently view Idle No More?
GH: Overall I think INM was a positive experience as it mobilized thousands of Natives across the country, even if it was brief. Many Natives were talking about colonialism and decolonization, so in that way it raised people’s consciousness. On the other hand, it was manipulated and to some extent integrated with the Indian Act chiefs and councils, who have their own agenda that conflicts with that of genuine grassroots peoples. So in that way it helped to further legitimize the band councils. In addition, because of the controlling and authoritarian approach of the “official” founders of INM, the movement was blunted and unable to expand beyond simply opposing Bill C-45. The “official” founders, coming from middle-class professional backgrounds, were reformist and opposed to any radical actions such as the blockades that began occurring. Flowing from their reformist strategy was an emphasis on “peaceful” protests, pacifism, and the “flash mob” round dances in malls. So while we took one step forward with the INM mobilizations, we also took two steps back in that pacifism and “peaceful rallies” was widely promoted on a national level. This is in contrast to decades of grassroots Indigenous resistance that has used militant actions such as blockades and even armed resistance. I’m glad that the INM mobilization occurred, but I’m also glad that it had a relatively short life and hopefully those that were mobilized will learn and grow from this experience.
KP: There is a circumstance that confronts many non-Indigenous supporters of Indigenous rights: namely that Indigenous peoples appear sometimes to be in conflict. For example, and this is of course well known to yourself, in “BC” the salmon has been central to the lifeways of many First Nations. Yet scientific studies warn that wild salmon appear imperilled by the presence of salmon-farming operations in open water. The Kwicksutaineuk/Ah-Kwa-Mish First Nation have taken to the colonial Supreme Court of Canada to protect the wild salmon. Salmon-farming advocates, however, point to other First Nations, for example, the Ahousaht First Nation carrying out salmon farming. Another current example is the proposed Enbridge Northern Gateway pipeline that would transport Tar Sands oil across northern BC to the coast. Enbridge is confident more than half of the First Nations will come onside. If so, that would see a split among First Nations. When First Nations appear at odds with each other – where one group appears to have embraced the White man’s ways over traditional ways – can you explain what underlies such apparent First Nation disunity?
GH: There are many factors that affect Indigenous unity. Some communities or nations have a strongly entrenched collaborator regime, some have been affected by colonization more than others (for example, some areas have been effectively Christianized due to the work of individual priests), while others have a strong culture of resistance that persists to this day. In some cases, communities may have been so devastated by colonialism and industrial development that they can no longer sustain themselves traditionally and therefore turn to industry.
In any case, Indigenous people were never a homogenous unified group. This can be seen in the initial reception given to European traders and explorers: some communities welcomed them, fed them and helped guide them, while others sought to destroy them.
- See “What We’re Called – Indian? or….”
- Leroy Little Bear, Menno Boldt, and J. Anthony Long (Eds.), Pathways to Self-Determination: Canadian Indians and the Canadian State (Toronto: University of Toronto Press, 1985): ix.
- Little Bear et al., xii.
- Little Bear et al., xiii.
- Little Bear et al., xvi.
|
Parents, Talk to Your Kids About Math Before It's Too Late
Researchers and policymakers have urged parents for years to read to their young children, even infants, to help them develop better vocabulary and reading readiness. Now, a new study by the University of Chicago suggests parents should be talking to their toddlers about numbers, too.
The study, "What Counts in the Development of Young Children's Number Knowledge?," in the current issue of Developmental Psychology, suggests there are big differences in the amount of number-related words parents use in regular conversation with their children, and this can have a big effect on a child's numeracy, even before formal number instruction in preschool.
Susan C. Levine, a psychologist at the University of Chicago, and a team of researchers found toddlers whose parents talked with them frequently about numbers were better able to understand one of the foundation principles of early math: the cardinal number principle, i.e., the understanding that the number "six" represents a set of six items. According to researchers, children learn the abstract meaning of a given number separately from simply learning to count to that number.
The team studied 44 preschool children interacting with their parents during everyday activities in five 90-minute taped home visits conducted every four months from the time the children were 14 to 30 months old. The researchers then coded the number of times a parent used a number-related word, such as pointing to a series of toys on the floor and saying, "There are four trucks."
Researchers found that parents varied wildly in the number of number-related words they used around their children, from as few as 4 to as many as 257, which would translate to a range of 28 to 1,799 number-related words used per week between the most and least vocal parents. Moreover, Ms. Levine's team found that children whose parents used more number words when the child was 14 to 30 months old were more likely at 46 months old, or just at preschool age, to answer accurately when shown two sets of four and five blocks and asked to point out the set of five.
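The weekly range quoted above is consistent with a simple factor-of-7 scale-up of the observed totals. The factor is an inference from the numbers themselves, not a method stated in the study, as this quick check shows:

```python
# Totals of number-related words the most and least "numeric" parents
# used across the recorded sessions (4 and 257, per the article)
low, high = 4, 257

# The quoted weekly range (28 to 1,799) matches a factor-of-7 scale-up;
# that factor is our inference from the numbers, not stated by the study.
weekly_low, weekly_high = low * 7, high * 7
print(weekly_low, weekly_high)  # 28 1799
```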
"By the time children enter preschool, there are marked individual differences in their mathematical knowledge, as shown by their performance on standardized tests," Ms. Levine said in a statement on the study. "These findings suggest that encouraging parents to talk about numbers with their children, and providing them with effective ways to do so, may positively impact children's school achievement."
|
Jason Renshaw has posted an interesting three minute screencast sharing why he thinks it’s best to teach vocabulary words after English Language Learners have read a text. It’s definitely worth a visit (in fact, all his posts are worth reading!).
When I’m teaching Beginning ELL’s, I tend to teach vocabulary prior to reading. With any class above that level, including native English speakers, I use a technique I learned from Kelly Young of Pebble Creek Labs, who has designed the extraordinary curriculum we use in our mainstream classes.
It’s called a Word Splash.
Prior to beginning a unit, I'll write about twenty words on a large sheet of paper that's in front of the class. I'll put it there a few days before starting that unit so students have been exposed to the words for a while. Then I have students copy the words down and write what they think each one means — guesses are fine. Students then go into small groups and share their definitions. Next, we have a class discussion.
In that discussion, I don’t tell students if they’re correct or not.
The point is to help students become aware of the key words they’ll need to know to understand important parts of the unit. During subsequent lessons, I’ll ask students to highlight words from the Word Splash that they see in various texts. At some point I might ask them to revisit their definitions, or have each student take a word and draw and define it in a poster.
This process certainly helps students see how much they have learned between the beginning of a unit and a later point.
Please share your thoughts — either here or at Jason's blog — about how and when you think vocabulary is best taught.
|
Samurai Video Clips: The Three Unifiers
Oda Nobunaga and Toyotomi Hideyoshi
1. Who was Oda Nobunaga?
2. What role did Oda Nobunaga play in reunifying Japan?
3. What weapon changed samurai warfare? How did this weapon reach Japan?
4. Who was Toyotomi Hideyoshi? How did he come to power?
5. What changes did Hideyoshi make to protect his position of power?
Rules regarding weapons:
Rules regarding social status:
Imagine you are a samurai. Describe how you feel about Toyotomi Hideyoshi’s new rules imposed upon Japan.
1. How did Tokugawa Ieyasu gain power?
2. What city became the capital of Japan under Tokugawa Ieyasu?
3. How did Tokugawa Ieyasu make sure that the daimyo didn’t rebel?
4. What foreign policy did Tokugawa Ieyasu establish? Was this a good policy? Why or why not?
5. How did the role of the samurai change during the Tokugawa shogunate?
Imagine you work for the Tokugawa Shogun. The shogun has created a set of new rules in an effort to preserve a traditional Japan. Create three signs listing the new rules for life under Tokugawa rule. Sign 1 should list rules for the daimyo. Sign 2 should list rules for the samurai. Sign 3 should list rules for all people including rules about religion and foreigners.
These signs will serve as an official edict and will be posted throughout Japan. Be sure to include text and/or images and be creative!
The Three Unifiers: Final Processing
A famous poem still taught today reads,
What if the bird will not sing? Oda Nobunaga answers, “Kill it!” Toyotomi Hideyoshi answers, “Make it want to sing!” Tokugawa Ieyasu answers, “Wait for it to sing!”
Explain this poem. Then, explain why you think Tokugawa Ieyasu was ultimately the most successful of the three unifiers. What was his legacy?
Oda Nobunaga and Toyotomi Hideyoshi
STOP! Before you watch the video about Tokugawa Ieyasu, read the handout about him.
|
Do the tropics have an internal thermostat?
Science Daily March 6, 2017
New research findings show that as the world warmed millions of years ago, conditions in the tropics may have become so hot that some organisms couldn't survive. Longstanding theories dating to the 1980s suggest that as the rest of Earth warms, tropical temperatures would be strictly limited, or regulated, by an internal 'thermostat.' These theories are controversial, but the debate is of great importance because the tropics and subtropics comprise half of Earth's surface area, hold more than half of Earth's biodiversity, and are home to more than half of Earth's human population. New geological and climate-based research, however, indicates the tropics may have reached a temperature 56 million years ago that was indeed too hot for living organisms to survive in some regions.
The Paleocene-Eocene Thermal Maximum (PETM) occurred 56 million years ago and is considered the warmest period of the past 100 million years. Global temperatures rapidly warmed by about 5 degrees Celsius (9 F) from an already steamy baseline, and this study provides the first convincing evidence that the tropics also warmed, by about 3 degrees Celsius (5 F), during that time. These results are notable because geological records from the PETM are typically hard to find, especially in tropical regions. To overcome this limitation, researchers can analyze the carbon and oxygen isotopic composition of shells, which tell a story about the carbon cycle and temperatures of the past. Two research methods were used to estimate temperature during the PETM: one utilized isotopes in shells, while the other examined organic residues in deep-sea sediments. The biotic records left behind by living organisms indicate they were dying at the same time conditions were warming.
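The Fahrenheit equivalents quoted above follow from the rule that a temperature *change* scales by 9/5, with no +32 offset (the offset applies only to absolute temperatures). A quick check:

```python
def delta_c_to_f(delta_celsius):
    """Convert a temperature *change* from Celsius to Fahrenheit.

    A change scales by 9/5 only; the +32 offset applies to absolute
    temperatures, not to differences.
    """
    return delta_celsius * 9 / 5

print(delta_c_to_f(5))  # 9.0  (the ~5 C global PETM warming)
print(delta_c_to_f(3))  # 5.4  (tropical warming; the article rounds to 5 F)
```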
This research has important implications in the context of climate change. If temperature buffering in the tropical regions did not occur in the past, the future of these regions is uncertain when exposed to rapidly increasing temperatures.
1. Joost Frieling, Holger Gebhardt, Matthew Huber, Olabisi A. Adekeye, Samuel O. Akande, Gert-Jan Reichart, Jack J. Middelburg, Stefan Schouten, Appy Sluijs. Extreme warmth and heat-stressed plankton in the tropics during the Paleocene-Eocene Thermal Maximum. Science Advances, 2017; 3 (3): e1600891 DOI: 10.1126/sciadv.1600891
|
As researchers explore the nature of the intelligence of animals, the corvid family presents some arresting examples of brainy birds. The most common corvids are crows, ravens, and jays; other relatives are the rooks, magpies, choughs, nutcrackers, and jackdaws. The familiar corvids are large, noisy, and social, and they are not shy in the presence of people. They play pranks, tease other animals, and engage in aerial acrobatics for fun. Crows live happily in human settlements and have found many ways to exploit the curious human trait of discarding food.
The strong social structure of corvids has been widely studied, as have their complex vocalizations and cooperative actions. Pioneering animal behaviorist Konrad Lorenz studied jackdaws in his native Austria; his King Solomon's Ring reports his interactions with them and observations of their behavior.
Corvids are known to mimic human voices and other sounds and to enjoy the confusion that results. Zookeeper Gerald Durrell recounted the antics of his pet magpies, who learned to imitate the family maid's call to the chickens to come and be fed. When the magpies got bored, they called the chickens, who came running in anticipation of a treat. When the disappointed chickens went back to roost, the magpies called them again, and again, and the chickens, no match for the clever magpies, fell for the ruse every time.
In the 19th century crows and ravens were considered to be the cleverest of birds — inquisitive, playful, and able mimics — and though today parrots are giving them a run for the money, there are some areas in which crows truly shine. Zoologists and behaviorial researchers have documented numerous examples of the crow’s sharp mind, adding to the vast body of anecdote and folklore surrounding these birds.
Tools and tasks
One outstanding example is the crow's ability to use tools and, what's more, to make tools. In 1960 Jane Goodall created a sensation when she reported seeing chimpanzees make tools; her observations forced a reevaluation of humans' status as the sole practitioners of tool-making and its related abilities to solve problems, manipulate objects, and plan toward a desired result.
The video below shows an astounding feat by a New Caledonian crow. In an experiment conducted by behaviorists from the University of Oxford, a small bucket of food was placed inside a tube; the crow was unable to reach the bucket because of the length of the tube. She then picked up a short length of wire, and, after a few futile attempts to snag the bucket with it, bent the wire into a hook and lifted the bucket from the tube. What's more, the crow repeated the behavior in nine out of 10 subsequent trials.
New Caledonian crows are believed to be especially adept at using tools, being known to use naturally occurring hooks. But although this crow had seen hooks, she had never seen wire being bent into a hook.
The researchers, clearly impressed, mused: “Our finding, in a species so distantly related to humans and lacking symbolic language, raises numerous questions about the kinds of understanding of “folk physics” and causality available to nonhumans, the conditions for these abilities to evolve, and their associated neural adaptations.”
Another experiment with New Caledonian crows again involved an out-of-reach bit of food. The crows quickly solved the problem by using a long stick to reach the food. And when the long stick was placed inside a cage, the crows (six out of seven in the experiment) used a shorter stick to push the long stick into a position where it could be picked up. Thus the crows used a tool to manipulate another tool, and it was not just a single individual with this skill. The use of a "metatool" is a behavior difficult even for primates.
Much of the corvids’ problem-solving is directed toward obtaining food or water. And why eat bread when you could have fish? This hooded crow in Tel Aviv scattered bits of bread into a pond and then caught the fish that came to eat them. With no prior experience of the situation, a raven quickly figured out how to reel in a piece of food that a researcher had attached to a long string.
And the winner is…
The top prize for clever problem-solving goes to these Japanese crows, who first solved the problem of how to get the nutmeats out of hard-shelled nuts (drop them in the road and let cars run over them, then swoop down and eat the contents) and then devised a plan for avoiding getting run over themselves (drop the nuts in the crosswalk, let them get crushed by cars, then wait for the light to turn red and stop traffic)!
To Learn More
- Read the Encyclopaedia Britannica article on crows
- Check out For the Love of Crows, a Web site devoted to these fascinating birds
- Read an article from New Scientist on crows’ tool-making
- Listen to recordings of crows from the Patuxent Wildlife Research Center of the U.S. Geological Survey in Laurel, Maryland
- Consult Avibase, a database of worldwide information on bird species
This post originally ran on Britannica’s Advocacy for Animals site.
|
What is a Primary Source?
A primary source is a document or physical object which was written or created during the time under study. These sources were present during an experience or time period and offer an inside view of a particular event. Some types of primary sources include:
- ORIGINAL DOCUMENTS (excerpts or translations acceptable): Diaries, speeches, manuscripts, letters, interviews, news film footage, autobiographies, official records
- CREATIVE WORKS: Poetry, drama, novels, music, art
- RELICS OR ARTIFACTS: Pottery, furniture, clothing, buildings
- Examples of primary sources include:
- Diary of Anne Frank - Experiences of a Jewish family during WWII
- The Constitution of Canada - Canadian History
- A journal article reporting NEW research or findings
- Weavings and pottery - Native American history
- Plato's Republic - Women in Ancient Greece
What is a secondary source?
A secondary source interprets and analyzes primary sources. These sources are one or more steps removed from the event. Secondary sources may include pictures, quotes, or graphics from primary sources. Some types of secondary sources include:
- PUBLICATIONS: Textbooks, magazine articles, histories, criticisms, commentaries, encyclopedias
- Examples of secondary sources include:
- A journal/magazine article which interprets or reviews previous findings
- A history textbook
- A book about the effects of WWI
Always Available Primary Source ebooks
These primary sources are ALWAYS AVAILABLE to download from Gale Virtual Reference Library. Click a book cover to view the record on Gale and download from there.
|
This article is recommended by the editorial team.
Coastal systems may self-organize at various length and time scales. Sand banks, sand waves both on the shelf and at the coastline, sand bars, tidal inlets, cusps, cuspate forelands, and spits (among others) are morphological features that are frequently dominated by self-organized processes. Stability models are the genuine tool to understand these processes and make predictions about the dynamics of those features.
- 1 Stability: concepts.
- 2 Stability methods: use in coastal sciences.
- 3 Stability methods: use in long term morphological modelling.
- 4 Linear stability models.
- 5 Nonlinear stability models.
- 6 Cellular models.
- 7 References.
The concepts of equilibrium and stability come from Classical Mechanics (see, for example, Arrowsmith and Place, 1992). A state where a system is in balance with the external forcing so that it does not change in time is called an equilibrium position. However, any equilibrium position may be either stable or unstable. If released near a stable equilibrium position, the system will evolve towards such a position. On the contrary, if released near an unstable equilibrium position, it will go far away from this position. For instance, a pendulum has two equilibrium positions, one up (A), another down (B). If released at rest at any position (except at A) the pendulum will start to oscillate (if it is not already in B) and due to friction it will end up at rest at B. Thus, the pendulum will move spontaneously towards the stable equilibrium and far away from the unstable equilibrium.
Similarly, a beach under constant wave forcing is commonly assumed to reach a certain equilibrium profile after some time. However, two main assumptions are involved here: i) an equilibrium state exists and ii) the equilibrium is stable. The existence of an equilibrium profile is taken for granted in books on coastal sciences and the stability of such an equilibrium is implicitly assumed. However, even if an equilibrium profile exists, it is not necessarily stable. This means that the system would ignore such an equilibrium: it would never tend spontaneously to it. Furthermore, several equilibria may exist, some of them stable, some others unstable.
Let us assume a system which is described by only one variable as a function of time, x(t), and two constant parameters, a and b, which are representative of both the characteristics of the system and the external forcing. Assume that this variable is governed by the ordinary differential equation:

dx/dt = a x (b - x) .     (1)

For instance, in a coastal system, a and b could represent sediment grain size or wave height, and x(t) the shoreline displacement at an alongshore location. Given an initial position x(0) = x0, the subsequent evolution of the system is described by the solution of the differential equation:

x(t) = b x0 / [ x0 + (b - x0) e^(-a b t) ] .     (2)

It becomes clear that the system has two equilibrium positions, A: x = 0, and B: x = b. Moreover, for a < 0, A is stable while B is unstable. In contrast, A is unstable and B is stable for a > 0. This is illustrated by Fig. 1, where typical solutions are plotted for various initial conditions.
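This behaviour can be checked with a few lines of code; the sketch below assumes, for illustration, the logistic-type equation dx/dt = a x (b - x) with equilibria A (x = 0) and B (x = b):

```python
# Forward-Euler integration of the toy equation dx/dt = a*x*(b - x)
# (an illustrative choice matching the two-equilibria discussion:
# A at x = 0 and B at x = b). For a, b > 0, A is unstable and B stable.

def evolve(x0, a=1.0, b=1.0, dt=0.01, t_end=20.0):
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * a * x * (b - x)
    return x

print(evolve(0.05))   # released near A: moves away and tends to b = 1
print(evolve(1.50))   # released above B: relaxes back to b = 1
```

Whatever the (nonzero) initial condition, the trajectory ends up at the stable equilibrium B, never at the unstable one.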
Stability methods: use in coastal sciences.
Equilibrium situations are fundamental in any coastal system as they are the possible steady states where the system can stay under a steady forcing (steady at the time scale which is relevant according to the definition of the system). It is then crucial to know whether an equilibrium state is stable or not, since only stable equilibria can be observed. Furthermore, knowing the conditions for stability of a certain equilibrium may be vital when this equilibrium means preserving a beach against erosion, or keeping water depth in a navigation channel. For instance, the entrance of a tidal inlet may be in stable equilibrium under the action of tides and waves. However, if this equilibrium becomes unstable (e.g., because of climate change) the entrance may close up (see section 'Tidal inlets'). Very often, even if the system is out of equilibrium, its dynamics can be understood as its path from an unstable equilibrium to a stable one. Therefore, the stability concepts and the mathematical techniques involved are of very general use in coastal sciences. However, stability models are nowadays commonly associated with models for pattern formation. Thus, we will focus here on this broad class of applications, which are very often related to that transition from an unstable equilibrium to a stable one.
Coastal and geomorphological systems exhibit patterns both in space and time. Some of these patterns directly obey similar patterns in the external forcing. For example, a beach profile may erode and subsequently recover directly in response to the storm/calm-weather cycle in the external wave forcing, with the same time scale. This is known as forced behaviour. Other patterns, even if they are driven by some external forcing, do not resemble similar patterns in the forcing. For instance, although bed ripples may be generated by a unidirectional current over a sandy bed, there is nothing in the current itself which dictates the shape, the lengthscale or the characteristic growth time of the ripples. The ripples constitute a new pattern which is not present in the forcing. This is called free behaviour or self-organized behaviour. The forced behaviour is much simpler to predict once the forcing is known. In contrast, predicting the free behaviour is typically much more complicated as it involves the complex internal dynamics of the system itself (see, for instance, Dronkers, 2005).
Stability methods are the genuine tool to describe, understand and model pattern formation by self-organization (Dodd et al., 2003). The typical procedure is to start by considering an equilibrium of the system where the pattern is absent (for instance, flat bed, in case of ripples). The key point is that small fluctuations or irregularities are always present (a perfect flat bed or an exact unidirectional, uniform and steady current do not exist). Then, if the equilibrium is stable, any initial small perturbation of the equilibrium will die away in time. Thus, those small fluctuations will not succeed in driving the system far from the equilibrium (the bed will keep approximately flat). However, if the equilibrium is unstable, there will exist initial perturbations that will tend to grow. Among all of them, some will grow faster than others and their characteristics will prevail in the state of the system. In other words, the patterns corresponding to these initially dominant perturbations of the equilibrium will emerge and will explain the occurrence of the observed patterns (the ripples). However, different patterns may emerge during the instability process and the finally dominant one (the one which is observed) may not correspond to the initially dominant one. When applied to the formation of coastal morphological patterns, the instability which leads to the growth typically originates from a positive feedback between the evolving morphology and the hydrodynamics. An example is the formation of rhythmic surf zone bars. Transverse bars are roughly shore-normal bars attached to the coastline (sometimes to megacusps) without any relation with crescentic bars. If their orientation is not exactly shore-normal they are called oblique bars. The equilibrium state to start with is a rectilinear coastline with a bathymetry which is alongshore uniform, either unbarred or with one or more shore-parallel bars. The wave field is assumed to be constant in time. Since the cross-shore profile is assumed to be an equilibrium profile there is no cross-shore net sediment transport.
Even if there is longshore transport due to oblique wave incidence, there are no gradients in such transport, so that the morphology is constant in time. Now, this equilibrium may be stable or unstable. Given a small perturbation of the bathymetry, the wave field will be altered (changes in wave energy distribution, wave breaking, shoaling, refraction, diffraction, etc.), hence the mean hydrodynamics will be altered too (changes in the currents, in set-up/set-down). Therefore, there will be changes in sediment transport, so that convergences/divergences of sediment flux and hence morphological changes appear. These morphological changes may either reinforce or damp the initial perturbation. If the latter happens for any perturbation one may consider, the equilibrium is stable and the bathymetry will keep alongshore uniform. If the former happens for at least one possible perturbation, the equilibrium is unstable and the beach will 'spontaneously' (i.e., from the small fluctuations) develop coupled patterns in the morphology, the wave field and the mean hydrodynamics other than the featureless equilibrium. These patterns may eventually result in the observed rhythmic bars with the corresponding circulation patterns. This has been shown for the case of crescentic bars (Calvete et al., 2005; Dronen and Deigaard, 2007) and transverse/oblique bars (Garnier et al., 2006).
Stability methods can be used not only to understand and model naturally occurring features but also to analyze the efficiency and impact of human interventions. The sand which is dumped in a shoreface nourishment interacts with the natural nearshore bars and may trigger some of the morphodynamic instability modes of the system. Following this idea, Van Leeuwen et al., 2007 have applied a morphodynamical stability analysis to assess the efficiency of different shoreface nourishment strategies.
Stability methods: use in long term morphological modelling.
Continental shelf morphological features
The sea bed of the continental shelf is rarely flat. Rather, it is usually covered by a number of different types of morphological features ranging from megaripples to sand waves and sand banks. The latter two may be considered as long term features since the characteristic time for their formation and evolution is of decades or centuries. Their horizontal lengthscale (size and spacing) is of the order of hundreds of metres for sand waves and of a few kilometres for sand banks. Their origin has been explained as a morphodynamical instability of the coupling between the sandy bed and the tidal currents (Besio et al., 2006). The equilibrium situation is the flat bed where the tides do not create any gradient in sediment flux. The instability mechanism involves only depth averaged flow in case of sand banks whereas it is related to net vertical circulation cells in case of sand waves. Sand banks may also appear near the coast, in water depths of 5-20 m. In this case they are known as shoreface-connected sand ridges. Their origin has also been explained from an instability, but one where the tidal currents have little influence. In this case, the instability mechanism is caused by the storm driven coastal currents in combination with a transversely sloping sea bed (Calvete et al., 2001, see Sec: 'Example: MORFO25 model').
Tidal inlets
Stability analysis has been applied to tidal inlets at different levels. First, the dynamics of the cross-sectional area of the entrance, with its equilibria, their stability and the possibility of closure, has been considered. This is done with very simple parametric descriptions of the gross sand transport by tidal currents and waves that allow one to derive simple governing ordinary differential equations (see, for instance, van de Kreeke, 2006). Typical time scales for such dynamics are (e.g., in case of the Frisian inlet, on the Dutch coast) about 30 years. At a second level, the possible equilibrium bathymetries inside the inlet and their stability can be analyzed. This allows one to understand the origin and dynamics of the channels and shoals inside the inlet. It turns out that this sometimes complicated (even fractal) structure of channels and shoals originates from an instability of the flat topography in interaction with the tidal currents due to frictional torques. The time scale for such an instability is of the order of 1 year (see, e.g., Schuttelaars and de Swart, 1999 and Schramkowski et al., 2004). These channels and shoals scale with the length of the embayment, but the stability analysis of the flat bottom topography in interaction with tidal currents also gives instability modes at a smaller scale which correspond to the tidal bars that form at the inlet entrance. The growing perturbations associated with this instability are trapped near the entrance and scale with the width of the inlet (Seminara and Tubino, 2001 and van Leeuwen and de Swart, 2004).
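The first level of analysis can be illustrated with an Escoffier-type sketch of the cross-sectional area dynamics. The closure curve U(A), the constants and the time units below are purely illustrative assumptions, not taken from van de Kreeke (2006):

```python
import math

# Escoffier-type toy model for the evolution of the inlet
# cross-sectional area A:  dA/dt = k * (U(A) - U_eq),
# with an illustrative non-monotonic closure curve U(A) (peak at A = 1,
# arbitrary units) and equilibrium velocity U_eq. The two crossings
# U(A) = U_eq are equilibria: the one on the rising branch of U is
# unstable, the one on the falling branch is stable.

def U(A):
    return A * math.exp(1.0 - A)

U_eq = 0.5

def evolve(A0, k=1.0, dt=0.01, t_end=50.0):
    A = A0
    for _ in range(int(t_end / dt)):
        A = max(A + dt * k * (U(A) - U_eq), 0.0)
    return A

print(evolve(0.30))   # above the unstable root: scours to the stable one
print(evolve(0.05))   # below the unstable root: the inlet closes (A -> 0)
```

Released above the unstable equilibrium the entrance scours towards the stable cross-section; released below it, the inlet closes up, which is the closure scenario mentioned above.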
Large scale shoreline instabilities
Shorelines characterized by a wave climate with high incidence angle with respect to the shore-normal commonly show a wavy shape, cuspate landforms and spits (Classification of coastlines). This can be interpreted as a result of a coastline instability. The littoral drift or total alongshore sediment transport driven by the breaking waves, Q, is a function of the wave incidence angle with respect to the shore-normal in deep water, θ. It is zero for θ = 0, increases up to a maximum for θ of about 45° and decreases down to zero for θ = 90°. The equilibrium situation is a rectilinear coastline with alongshore uniform nearshore bathymetry and alongshore uniform wave forcing at an angle θ. Assume now a small undulation of the otherwise rectilinear coastline consisting of a cuspate foreland. The wave obliquity with respect to the local shoreline is larger at the downdrift side than at the updrift side. Then, if θ < 45°, higher obliquity means higher transport, so that there will be a larger sediment flux at the downdrift side than at the updrift side. This will erode the cuspate shape and the shoreline will come back to the rectilinear equilibrium shape. The shoreline is stable. The contrary will happen if θ > 45°, so that the shoreline will be unstable in this case. The instability tends to create undulations of the coastline (shoreline sand waves) with an initial wavelength of about 1-10 km and a characteristic growth time of the order of a few years (Falqués and Calvete, 2005). Once the initial undulations have grown, the shoreline may evolve towards larger wavelengths and very complex shapes including hooked spits (Ashton et al., 2001, see Sec: 'Cellular models').
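The angle dependence of the littoral drift can be sketched with the simplest schematic shape Q(θ) ∝ sin 2θ, which vanishes at 0° and 90° and peaks at 45°; detailed transport formulas shift the maximum somewhat, but the stability argument is the same:

```python
import math

# Schematic littoral-drift curve Q(theta) = sin(2*theta): zero at
# theta = 0 and theta = 90 deg, maximum at 45 deg. (Illustrative shape
# only; detailed transport formulas place the maximum at a slightly
# different angle.)

def drift(theta_deg):
    return math.sin(2.0 * math.radians(theta_deg))

best = max(range(0, 91), key=drift)
print(best)                      # 45: the threshold angle for instability
print(drift(0.0), drift(45.0))   # zero transport at 0, maximum at 45
```

Below this threshold angle the drift increases with obliquity and smooths the coastline; above it the drift decreases with obliquity and undulations grow.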
Linear stability models.
The equations governing coastal systems are typically nonlinear and it is difficult to solve them or to extract useful information from them. However, the small departures from an equilibrium situation approximately obey linear equations that are very useful to determine whether the equilibrium is stable or unstable and, in the latter case, which are the emerging patterns and how fast they grow at the initial stage. For instance, for the equilibrium B (x = b) of equation (1), one may define the departure from equilibrium, x' = x - b, with governing equation:

dx'/dt = -a x' (b + x') ≈ -a b x' ,     (3)

where the last approximation is valid for |x'| ≪ b and is called linearization. The approximate equation is linear in x' and it is immediately solved to give:

x'(t) = x'(0) e^(σ t) ,     (4)

where σ = -a b is called the growthrate and determines whether the perturbation of equilibrium will grow or decay. We recover that B is stable if a b > 0 and unstable if a b < 0, without solving the nonlinear equation (1).
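The linear prediction can be verified numerically; the sketch below again assumes the toy equation dx/dt = a x (b - x) and measures the decay rate of a small departure from B:

```python
import math

# Release the system very close to B (x = b) and measure the exponential
# decay rate of the departure; for dx/dt = a*x*(b - x) the linearized
# growthrate at B is sigma = -a*b.

a, b = 2.0, 1.5
sigma = -a * b                       # predicted growthrate: -3.0

def rhs(x):
    return a * x * (b - x)

dt, nsteps = 1e-4, 10000             # integrate up to t = 1
x0p = 1e-6                           # tiny initial departure from B
xp = x0p
for _ in range(nsteps):
    xp += dt * rhs(b + xp)           # exact nonlinear right-hand side

measured = math.log(xp / x0p) / (dt * nsteps)
print(measured)                      # close to sigma = -3.0
```

Because the departure is tiny, the nonlinear term is negligible and the measured rate matches the linear growthrate.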
Steps in developing and using a linear stability model.
- Governing equations.
The first step is to define the variables describing the state of the system. These may be the level of the sea bed as a function of two horizontal coordinates and time, z_b(x, y, t), or the wave energy density field, E(x, y, t), or the position of the coastline as a function of a longitudinal coordinate and time, x_s(y, t), etc. Then, equations expressing the time derivatives of those variables must be derived. For coastal morphodynamic problems these typically constitute a system of partial differential equations in x, y, t (this was not the case in our extremely simple example, where there is a single governing equation which is ordinary in t, nor for the model of tidal inlets mentioned above).
- Equilibrium state.
An equilibrium solution of the governing equations where all the variables of the system are constant in time must be selected. (In our simple example, there are two possible equilibrium solutions.)
- Linearization of the governing equations.
The perturbations with respect to the selected equilibrium solution must be defined, z_b' = z_b - z_be, E' = E - E_e, etc. Then, the governing equations must be linearized by neglecting powers higher than one of those perturbations. If the perturbations in all the variables are represented by a vector Φ, the linearized equations can be represented by: dΦ/dt = L Φ, where L is a linear operator typically involving partial derivatives with respect to the horizontal coordinates, x, y. (In our simple example, the linearized equation is eq. (3) and operator L is algebraic and one-dimensional, simply multiplication by -a b.)
- Solving the linearized equations: eigenvalue problem.
Since the coefficients of the linearized equations do not depend on time, solutions can be found as Φ = e^(σt) φ(x, y), where σ and φ are eigenvalues and associated eigenfunctions of operator L (note that φ is n-dimensional, where n is the number of variables describing our system). These eigenvalues and eigenfunctions may be complex and only the real part of the latter expression has physical meaning. The equations expressing the eigenproblem are partial differential equations in x, y. In case the equilibrium solution is uniform in both directions (e.g. stability of a horizontal flat bed in an open ocean), the coefficients do not depend on these coordinates and wave-like solutions may be found as Φ = e^(σt + i(k_x x + k_y y)) φ_0, where φ_0 is a constant vector. In this case, the eigenvalue problem can be solved algebraically, leading to a complex dispersion relation, σ = σ(k_x, k_y). Very often, there is uniformity in one direction, say y, but not in the other. This is typically the case in coastal stability problems where the equilibrium solution depends on the cross-shore coordinate, x, but not on the alongshore one, y. Thus, the eigenfunctions are wave-like only in the y direction and are: Φ = e^(σt + i k y) φ(x). In this case, solving the corresponding eigenproblem requires solving a boundary value problem for ordinary differential equations in x, which is commonly done by numerical methods. In case the equilibrium state has gradients in any horizontal direction, the eigenproblem leads to a boundary value problem for partial differential equations in x, y. (All this is really trivial in our simple example, because the governing equation does not involve partial derivatives and vector Φ is one-dimensional. There is only one eigenvalue, σ = -a b.)
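As a concrete illustration of a dispersion relation (for a generic pattern-forming toy equation ∂h/∂t = -∂²h/∂x² - ∂⁴h/∂x⁴, not for any specific coastal model), wave-like perturbations h ∝ e^(σt + ikx) give σ(k) = k² - k⁴, with a finite band of unstable wavenumbers:

```python
# Dispersion relation sigma(k) = k**2 - k**4 of the toy equation
# h_t = -h_xx - h_xxxx: unstable band 0 < k < 1, fastest-growing
# wavenumber at k = 1/sqrt(2).

def sigma(k):
    return k**2 - k**4

ks = [i / 1000 for i in range(1, 2001)]   # scan k in (0, 2]
k_max = max(ks, key=sigma)
print(k_max)                              # ~ 0.707 = 1/sqrt(2)
print(sigma(0.5) > 0, sigma(1.5) < 0)     # inside vs outside the band
```

The wavenumber maximizing σ(k) sets the lengthscale of the initially dominant pattern, exactly as described above.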
- Analysis of the eigenvalue spectrum: extracting conclusions.
Once the corresponding eigenproblem has been solved, one has a spectrum of eigenvalues σ_m with the associated eigenfunctions φ_m. The symbol m represents an 'index' to number the eigenvalues, but it is not necessarily discrete. It may be continuous in response to unboundedness of our system in some direction. If the eigenvalue problem has been solved numerically, the numerical eigenvalues are just approximations to the exact eigenvalues. Some of them may even be numerical artifacts that do not have any relation with the exact eigenvalues. These purely numerical eigenvalues are called spurious eigenvalues. Distinguishing between physical and spurious eigenvalues is commonly achieved from physical meaning and from convergence under mesh refinement, but may sometimes be quite difficult. The real part of each eigenvalue determines the growth or decay of the perturbation with shape defined by the associated eigenfunction and is called growthrate. If all the eigenvalues have negative real part, the equilibrium is stable. If there exists at least one positive growthrate, the equilibrium is unstable. If there are a number of eigenvalues with positive growthrate, all the corresponding perturbations can grow. The one with the largest growthrate is called the dominant mode and the associated eigenfunction is expected to correspond to the observed emerging pattern in the system. The imaginary part of the eigenvalues is related to a propagation of the patterns. Each eigenvalue with its associated eigenfunction is called a normal mode, linear mode or instability mode (the latter in case the growthrate is positive).
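These steps can be mimicked numerically on a toy operator (here -∂²/∂x² - ∂⁴/∂x⁴ discretized on a periodic grid, an illustrative choice, not a coastal model): the eigenvalues of the discretized matrix approximate the exact growthrates and the largest one identifies the dominant mode:

```python
import numpy as np

# Finite-difference eigenproblem for the toy operator
# L = -d2/dx2 - d4/dx4 on a periodic domain. The numerical eigenvalues
# approximate the exact growthrates sigma(k) = k**2 - k**4; the largest
# one identifies the dominant mode.

n, length = 128, 8.0 * np.pi
dx = length / n
I = np.eye(n)
shift = np.roll(I, 1, axis=1)             # periodic index shift
D2 = (shift + shift.T - 2.0 * I) / dx**2  # second-derivative matrix
A = -D2 - D2 @ D2                         # the discretized operator

growthrates = np.linalg.eigvalsh(A)       # A is symmetric here
dominant = growthrates.max()
print(dominant)   # near the exact maximum growthrate 0.25 at k = 1/sqrt(2)
```

On a finite grid the admissible wavenumbers are discrete, so the numerical dominant growthrate falls slightly below the exact continuum maximum and converges to it under mesh refinement.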
Example: MORFO25 model. This is a linear stability model to identify and explore the physical mechanism which is responsible for the formation of shoreface-connected sand ridges on the continental shelf (Calvete et al., 2001). The model domain is a semi-infinite ocean bounded by the coastline. The governing equations are partial differential equations in time and in both horizontal coordinates which are derived from: i) water mass conservation, ii) momentum conservation and iii) bed changes caused by sediment conservation. The unknowns are the mean sea level, the bottom level and the depth averaged mean current. The sediment transport is directly driven by the current. The equilibrium situation is an alongshore uniform bathymetry consisting of a plane sloping bottom next to the coastline (inner shelf) and a plane horizontal bottom further offshore (outer shelf) together with an alongshore coastal current.
The model results (see Fig. 4) are consistent with observations (see Sec: 'Continental shelf morphological features').
Nonlinear stability models.
The linear stability models indicate just the 'initial tendency' to pattern formation starting from small fluctuations of a certain equilibrium. This initial tendency involves the shape and horizontal lengthscales of the pattern but not its amplitude. Also, the shape and lengthscale at the initial stages may be quite different from the shape at later stages, which is what should be compared with observations. Actually, a reliable prediction of pattern formation needs to consider the nonlinear terms which have been neglected in the linearization. This is the aim of nonlinear stability models. Since solving the governing equations can hardly be done analytically, the numerical or approximation method used is essential to these models and may be the basis for their classification.
Standard discretized models.
The governing equations can be discretized by standard numerical methods such as finite differences, finite elements or spectral methods. This provides algorithms to find an approximation to the time evolution of the system starting from initial conditions. If these initial conditions are chosen as small random perturbations of an equilibrium solution and no linearization has been introduced in the equations, the corresponding code implements a nonlinear stability model, since it describes the behaviour of the system when released close to equilibrium.
A possible procedure is the use of existing commercial numerical models (e.g., MIKE21, DELFT-3D, TELEMAC, etc.). This has been done in a number of stability studies (for instance, in case of tidal inlets, see van Leeuwen et al., 2004 or Roelvink, 2006). Their advantage is that they commonly describe the relevant processes as accurately as possible according to present knowledge. However, this typically results in highly complex models with several drawbacks for their use in this context. The most important is that they are typically based on a fixed set of equations and discretization methods that the user cannot easily change. Furthermore, due to their complexity it is sometimes difficult for the user to know exactly the set of governing equations and parameterizations. In particular, one cannot freely turn on/off some of the constituent processes. Another problem may be that the diffusivity which is necessary to keep control of unresolved small scale processes may damp the instabilities to be studied. Moreover, because of their complexity, they are highly time-consuming. Therefore, the use of specifically designed nonlinear stability models may prove to be more efficient to describe a particular pattern dynamics in a particular environment. The most important advantage is that the governing equations, the parameterizations and the discretization methods are more transparent and can be more easily changed. In particular, these models may consider idealized conditions which are suited to exploring certain processes in isolation. Alternatively, one may include more and more processes with the desired degree of complexity and get close to the commercial models. An example of such specific models is MORFO55, which is suited to describe the dynamics of rhythmic bars in the surf zone (Garnier et al., 2006; see Sec: 'Stability methods: use in coastal sciences').
Weakly nonlinear models.
Although direct numerical simulation discussed in the previous section is very powerful, a systematic exploration of the nonlinear stability properties of a given system needs many runs and may thus be prohibitive. An alternative approach then consists in deriving approximate governing equations based on multiple-scale developments which are called amplitude equations. Apart from their simplicity, the big advantage is that they are generic, i.e., they have the same structure for many different physical systems. Thus, they allow for obtaining general properties in a much cheaper way than by direct numerical simulation. They have however an essential limitation which is stated as follows. The stability or instability of equilibrium depends on the parameters describing the external forcing and the properties of the system. Typically, a single parameter, r, can be defined such that the equilibrium is stable below some threshold or critical value, r_c, whereas it is unstable above it, r > r_c. Then, if ε = (r - r_c)/r_c is defined, amplitude equations are restricted to slightly unstable conditions, i.e., 0 < ε ≪ 1, the so-called weakly nonlinear regime.
If the eigenvalue spectrum is discrete, it can be assumed for slightly unstable conditions that there is only one eigenvalue of the linear stability problem which has positive real part, i.e., only one instability mode. This means that, starting from arbitrary small perturbations of equilibrium, the time evolution of the system will be dominated by this mode, i.e., Φ ≈ A e^(σt) φ(x, y), which means an exponential growth of the pattern defined by φ according to the real part of the eigenvalue. However, the latter expression gives just an indication but cannot be the solution because of the nonlinear terms. A procedure to find an approximate solution begins by realizing that for r → r_c the real part of the eigenvalue tends to 0, so that it can read: σ = ε² σ_0 + i ω, where σ_0 = O(1) and the power two can always be introduced by redefining parameter ε if necessary. This leads to defining a slow time T = ε² t and to looking for an approximate solution of the form Φ ≈ ε A(T) e^(iωt) φ(x, y), where A(T) is a complex amplitude and the factors to the right of it correspond to the linear eigensolution at r = r_c. By considering that this is just the first term of a power expansion in ε, the so-called Landau equation is obtained for the amplitude:

dA/dT = σ_0 A - β |A|² A ,

where β is a coefficient which depends on the system under investigation. When the real part of β is positive, the solutions tend to a new equilibrium characterized by a finite amplitude, |A| = (Re σ_0 / Re β)^(1/2); then, if ω ≠ 0, this solution represents a travelling finite amplitude wave. If the real part of β is negative, explosive behaviour occurs, i.e., the amplitude becomes infinite in a finite time.
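The saturation behaviour can be seen by integrating a Landau equation with real coefficients (the values below are illustrative):

```python
# Landau amplitude equation dA/dT = sigma*A - beta*|A|**2*A with real,
# positive illustrative coefficients: an initially exponential growth
# saturates at the finite amplitude sqrt(sigma/beta).

sigma, beta = 1.0, 4.0

def evolve(A0, dT=1e-3, T_end=30.0):
    A = A0
    for _ in range(int(T_end / dT)):
        A += dT * (sigma * A - beta * abs(A)**2 * A)
    return A

print(evolve(1e-3))   # saturates at sqrt(1/4) = 0.5
print(evolve(2.0))    # same saturated amplitude, approached from above
```

Whatever the initial amplitude, the solution relaxes to the same finite-amplitude equilibrium, which is what distinguishes the nonlinear prediction from the unbounded exponential growth of the linear theory.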
Very often, the spectrum is continuous. In this case, for slightly unstable conditions there is a narrow band of eigenvalues with positive real part even for very small ε. A similar development can be carried out in this case but now slow spatial coordinates X = εx and/or Y = εy must be defined and the complex amplitude A depends on it/them. The generic governing equation is the so-called Ginzburg-Landau equation, which is a partial differential equation in T and the slow spatial coordinates.
In practice, the big difficulty of using such methods is the computation of the coefficients (e.g., β) from the original governing equations, which can be a tremendous task for complex coastal systems (see Komarova and Newell, 2000, for an example). An alternative approach is to assume that the governing equations already are of Ginzburg-Landau type and to derive the coefficients from field observations. Then, the resulting equations may be used to make predictions.
A possible method for direct numerical simulation is the use of spectral methods which are based on truncated expansions in basis functions of the spatial coordinates. On the other hand, weakly nonlinear methods show that for slightly unstable conditions the spatial patterns are close to those predicted by linear stability analysis. Thus, it is plausible that the spatial patterns of the system may be expressed as a combination of eigenfunctions even beyond the weakly nonlinear regime, if all the linear instability modes are incorporated:

Φ(x, y, t) = Σ_{m=1}^{N} A_m(t) φ_m(x, y) .
By inserting this ansatz into the governing equations and by doing a Galerkin-type projection, a set of N nonlinear ordinary differential equations for the unknown amplitudes A_m(t) is obtained. This system is then solved numerically. In other words, these methods essentially are spectral Galerkin methods but, instead of using expansions in a mathematically defined basis (trigonometric functions, Chebyshev polynomials, etc.), the eigenmodes are used. The set of eigenfunctions to use in the expansion (here symbolically indicated by N) must always contain at least those with positive growthrate, but the choice of the additional ones is by no means trivial.
Although there are no a priori restrictions on the applicability of this method (e.g., the weakly nonlinear regime), in practice the state of the system must be relatively close to the starting equilibrium in order that its behaviour can be adequately described in terms of the eigenfunction set. Thus, this type of models could be classified as 'moderately nonlinear'. Examples of their application are shoreface-connected sand ridges (Calvete and de Swart, 2003) and tidal inlets (Schramkowski et al., 2004).
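The projection procedure can be illustrated on a hypothetical two-variable system with cubic damping; since all (here, both) eigenmodes are retained, the Galerkin amplitudes reproduce the full dynamics exactly:

```python
import numpy as np

# Toy nonlinear system dPhi/dt = L Phi - |Phi|^2 Phi (cubic damping).
# Expanding Phi in the eigenvectors of L and projecting turns it into
# ODEs for the modal amplitudes A: dA/dt = lam*A + Vinv @ N(V @ A).

L = np.array([[0.5, 0.3],
              [0.1, -1.0]])        # one unstable, one stable eigenvalue
lam, V = np.linalg.eig(L)
Vinv = np.linalg.inv(V)

def N(phi):
    return -np.dot(phi, phi) * phi   # saturating cubic nonlinearity

def step_full(phi, dt):
    return phi + dt * (L @ phi + N(phi))

def step_modal(A, dt):
    return A + dt * (lam * A + Vinv @ N(V @ A))

dt, nsteps = 1e-3, 20000
phi = np.array([0.01, 0.01])         # small initial perturbation
A = Vinv @ phi                       # its modal amplitudes
for _ in range(nsteps):
    phi = step_full(phi, dt)
    A = step_modal(A, dt)

print(phi)      # saturated state of the full system
print(V @ A)    # same state reconstructed from the modal amplitudes
```

In realistic applications the expansion is truncated to N modes, and the quality of the truncation (not an issue in this two-mode toy) is what limits the method.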
Cellular models.
In the linear/nonlinear models considered so far the coastal system is considered as a continuum and the governing equations are set as partial differential equations from the fundamental physical laws such as conservation of mass, momentum, energy, wave phase, etc. Either the full equations or the linearized version are subsequently discretized to be solved by numerical methods. The numerical approximations give rise to algorithms to obtain information on the time evolution of the system and these algorithms are finally codified.
An alternative option is to consider the discrete structure of the system from the very beginning. The system is assumed to be governed partially by some of the fundamental physical laws and partially by some abstract rules which define its behaviour. These rules and laws are directly set in a numerical or algorithmic manner rather than being expressed as partial differential equations. The algorithms giving the time evolution of the system are finally codified. This type of models is known as cellular models because the discretization is inherent to the model itself. They are also sometimes known as self-organization models. The latter name is however misleading, since self-organization is a type of behaviour of the system and is independent of the model used to describe it.
Cellular models have been applied to explore the self-organized formation of beach cusps (see Coco et al., 2003 and references therein). A very relevant example for long-term morphodynamic modelling is the cellular model of Ashton et al., 2001, to study shoreline instabilities due to very oblique wave incidence. The model domain represents a plan view of the nearshore which is discretized into cells or 'bins'. Each cell is assigned a value F, 0 ≤ F ≤ 1, representing the fraction of the cell's plan view area that is occupied by land: F = 1 represents dry land, F = 0 represents ocean cells and 0 < F < 1 corresponds to shoreline cells. At each time step, the model updates the shoreline position according to alongshore gradients in littoral drift and sediment conservation, similarly to one-line shoreline models. The model allows however for arbitrarily sinuous shorelines, even doubling back on themselves and with 'wave-shadow' regions. So, starting from small fluctuations of the rectilinear coastline equilibrium, the model can go far from that equilibrium and can therefore be considered as strongly nonlinear (see Sec: 'Large scale shoreline instabilities').
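The transport rule at the heart of such shoreline models can be sketched in one-line form (a schematic drift Q = sin 2(θ₀ − φ) on a periodic grid; this is an illustrative toy, not Ashton et al.'s actual scheme, which tracks the land fractions F in a two-dimensional grid):

```python
import math

# One-line-style shoreline update on a periodic grid: the shoreline
# position y(x) evolves by alongshore gradients of a schematic drift
# Q = sin(2*(theta0 - phi)), phi being the local shoreline angle.
# For theta0 < 45 deg the effective diffusivity 2*cos(2*theta0) is
# positive and undulations decay; for theta0 > 45 deg it is negative
# and they grow (the high-angle instability).

n, dx, dt = 20, 1.0, 0.01
theta0 = math.radians(30.0)                  # low-angle waves: stable case

y = [0.001 * math.sin(2.0 * math.pi * i / n) for i in range(n)]
before = sum(y)                              # total sediment

def step(y):
    q = []
    for i in range(n):
        phi = math.atan((y[(i + 1) % n] - y[i]) / dx)
        q.append(math.sin(2.0 * (theta0 - phi)))
    return [y[i] - dt * (q[i] - q[i - 1]) / dx for i in range(n)]

for _ in range(10000):
    y = step(y)

print(abs(sum(y) - before) < 1e-9)           # sediment is conserved
print(max(y) - min(y))                       # undulation has decayed
```

Because the update is written in flux form, sediment is conserved exactly up to round-off, whatever the wave angle; only the stability of the undulation changes with θ₀.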
- D. K. Arrowsmith and C. M. Place, 1992. "Dynamical Systems". Chapman and Hall/CRC.
- J. Dronkers, 2005. "Dynamics of Coastal Systems". World Scientific.
- N. Dodd, P. Blondeaux, D. Calvete, H. E. de Swart, A. Falqués, S. J. M. H. Hulscher, G. Rózynski and G. Vittori, 2003. "The use of stability methods in understanding the morphodynamical behavior of coastal systems". J. Coastal Res., 19, 4, 849-865.
- D. Calvete, N. Dodd, A. Falqués and S. M. van Leeuwen, 2005. "Morphological Development of Rip Channel Systems: Normal and Near Normal Wave Incidence". J. Geophys. Res., 110, C10006, doi:10.1029/2004JC002803.
- N. Dronen and R. Deigaard, 2007. "Quasi-three-dimensional modelling of the morphology of longshore bars". Coast. Engineering, 54, 197-215.
- R. Garnier, D. Calvete, A. Falqués and M. Caballeria, 2006. "Generation and nonlinear evolution of shore-oblique/transverse sand bars". J. Fluid Mech., 567, 327-360.
- S. van Leeuwen, N. Dodd, D. Calvete and A. Falqués, 2007. "Linear evolution of a shoreface nourishment". Coast. Engineering, in press, doi:10.1016/j.coastaleng.2006.11.006.
- G. Besio, P. Blondeaux and G. Vittori, 2006. "On the formation of sand waves and sand banks". J. Fluid Mech., 557, 1-27.
- D. Calvete, A. Falqués, H. E. de Swart and M. Walgreen, 2001. "Modelling the formation of shoreface-connected sand ridges on storm-dominated inner shelves". J. Fluid Mech., 441, 169-193.
- J. van de Kreeke, 2006. "An aggregate model for the adaptation of the morphology and sand bypassing after basin reduction of the Frisian Inlet". Coast. Engineering, 53, 255-263.
- H. M. Schuttelaars and H. E. de Swart, 1999. "Initial formation of channels and shoals in a short tidal embayment". J. Fluid Mech., 386, 15-42.
- G. P. Schramkowski, H. M. Schuttelaars and H. E. de Swart, 2004. "Non-linear channel-shoal dynamics in long tidal embayments". Ocean Dynamics, 54, 399-407.
- G. Seminara and M.Tubino, 2001. "Sand bars in tidal channels. Part 1. Free bars". J.Fluid Mech., 440, 49-74.
- S. M. van Leeuwen and H. E. de Swart, 2004. "Effect of advective and diffusive sediment transport on the formation of local and global bottom patterns in tidal embayments". Ocean Dynamics, 54, 441-451.
- A. Falqués and D. Calvete, 2005. "Large scale dynamics of sandy coastlines. Diffusivity and instability". J. Geophys. Res., 110, C03007, doi:10.1029/2004JC002587.
- A. Ashton, A. B. Murray and O. Arnault, 2001. "Formation of coastline features by large-scale instabilities induced by high-angle waves". Nature, 414, 296-300.
- J. A. Roelvink, 2006. "Coastal morphodynamic evolution techniques". Coast. Engineering, 53, 277-287.
- N.L . Komarova and A. C. Newell, 2000."Nonlinear dynamics of sand banks and sand waves". J. Fluid Mech., 415, 285-321.
- D. Calvete and H. E. de Swart, 2003. "A nonlinear model study on the long-term behaviour of shoreface-connected sand ridges". J.Geophys.Res., 108 (C5), 3169, doi:10.1029/2001JC001091.
- G. Coco, T. K. Burnet, B. T. Werner and S.Elgar, 2003. "Test of self-organization in beach cusp formation". J. Geophys. Res., 108, C33101, doi:10.1029/2002JC001496.
|
Prokaryotes vs eukaryotes: these are two distinct types of cellular organisms. Despite their differences, prokaryotes and eukaryotes also share a number of similarities. The differences between them extend to size, reproduction methods, genetic organization, and evolutionary history, highlighting the diversity of life on Earth.
- Prokaryotes are single-celled organisms that lack a true nucleus and membrane-bound organelles, including bacteria and archaea.
- Eukaryotes are organisms, many of them multicellular, that have a true nucleus and membrane-bound organelles, including plants, animals, fungi, and protists.
Definition of Prokaryotes:
Prokaryotes are a category of cellular organisms that lack a true nucleus and membrane-bound organelles. They are characterized by their simple cell structure, with their genetic material, typically a circular DNA molecule, located in the cytoplasm. Prokaryotes include bacteria and archaea, and they are considered the earliest forms of life on Earth. Despite their simplicity, prokaryotes exhibit remarkable adaptability and can be found in diverse environments, playing essential roles in various ecological processes.
Definition of Eukaryotes:
Eukaryotes are a category of cellular organisms that have a true nucleus and membrane-bound organelles. They are characterized by their complex cell structure, with their genetic material organized into linear chromosomes within the nucleus. Eukaryotes encompass a wide range of organisms, including plants, animals, fungi, and protists. Their cells contain various membrane-bound organelles, such as mitochondria and endoplasmic reticulum, which perform specialized functions.
Here is a list of examples for both prokaryotes and eukaryotes:
- Bacteria: Examples include Escherichia coli (E. coli), Bacillus subtilis, Streptococcus pyogenes, and Mycobacterium tuberculosis.
- Archaea: Examples include Methanogens, Halophiles, and Thermophiles.
- Plants: Examples include Oak trees (Quercus), Sunflowers (Helianthus), Wheat (Triticum aestivum), and Roses (Rosa).
- Animals: Examples include Humans (Homo sapiens), Dogs (Canis lupus familiaris), Cats (Felis catus), and Birds (Aves).
- Fungi: Examples include Mushrooms (Agaricus bisporus), Yeasts (Saccharomyces cerevisiae), Molds (Penicillium), and Truffles (Tuber spp.).
- Protists: Examples include Amoeba (Amoeba proteus), Paramecium (Paramecium caudatum), Euglena (Euglena gracilis), and Diatoms (Diatomeae).
Prokaryotic Cell Structure:
Prokaryotic cells are relatively simple in structure. They lack a true nucleus and membrane-bound organelles. The genetic material, typically a single circular DNA molecule, is present in the cytoplasm. The cell is enclosed by a cell membrane and often has a rigid cell wall outside the membrane, providing structural support. Some prokaryotes have additional structures like pili for attachment or flagella for movement.
Eukaryotic Cell Structure:
Eukaryotic cells are more complex in structure. They have a distinct nucleus that houses linear DNA molecules. The nucleus is separated from the cytoplasm by a nuclear envelope. Eukaryotic cells contain various membrane-bound organelles, each with specific functions. These organelles include mitochondria for energy production, endoplasmic reticulum for protein synthesis, Golgi apparatus for protein modification and transport, lysosomes for intracellular digestion, and vacuoles for storage. Eukaryotic cells also have a cytoskeleton, a network of protein filaments, providing structural support and enabling cellular movement.
Similarities Between Prokaryotes and Eukaryotes:
Here are 15 similarities between prokaryotes and eukaryotes:
- Genetic Material: Both prokaryotes and eukaryotes store their genetic information in the form of DNA.
- DNA Replication: Both prokaryotes and eukaryotes replicate their DNA using similar enzymatic processes.
- Transcription: Both prokaryotes and eukaryotes transcribe DNA into RNA molecules.
- Translation: Both prokaryotes and eukaryotes translate RNA into proteins using the same genetic code.
- ATP as Energy Currency: Both prokaryotes and eukaryotes utilize adenosine triphosphate (ATP) as the primary energy currency within cells.
- Metabolism: Both prokaryotes and eukaryotes carry out fundamental metabolic processes, such as glycolysis and the citric acid cycle, to generate energy.
- Cell Membrane: Both prokaryotes and eukaryotes have a cell membrane that acts as a barrier, regulating the movement of substances into and out of the cell.
- Cytoplasm: Both prokaryotes and eukaryotes have a cytoplasm where various cellular processes take place.
- Ribosomes: Both prokaryotes and eukaryotes possess ribosomes, which are responsible for protein synthesis.
- Cellular Respiration: Both prokaryotes and eukaryotes can perform cellular respiration to convert organic molecules into usable energy in the form of ATP.
- Signal Transduction: Both prokaryotes and eukaryotes have mechanisms for sensing and responding to external signals or changes in their environment.
- Homeostasis: Both prokaryotes and eukaryotes maintain internal stability and balance through various regulatory mechanisms.
- Cytoskeleton: Both prokaryotes and eukaryotes have a cytoskeleton that provides structural support and aids in cell movement.
- Membrane Transport: Both prokaryotes and eukaryotes have transport mechanisms for moving molecules across the cell membrane.
- Cell Division: Both prokaryotes and eukaryotes replicate and divide their cells to reproduce and grow, although the specific mechanisms differ.
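As a toy illustration of one shared process from the list above, here is transcription reduced to base pairing. The sequence is invented, and real transcription additionally involves polymerases, promoters, and (in eukaryotes) RNA processing:

```python
# Base pairing used when transcribing a DNA template strand into mRNA.
PAIRING = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_dna: str) -> str:
    """Return the mRNA complementary to a DNA template strand."""
    return "".join(PAIRING[base] for base in template_dna)

print(transcribe("TACGGT"))  # -> AUGCCA
```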
Table of Similarities:
| Feature | Prokaryotes | Eukaryotes |
| --- | --- | --- |
| Genetic Material | Store genetic information in DNA | Store genetic information in DNA |
| DNA Replication | Replicate DNA using similar enzymatic processes | Replicate DNA using similar enzymatic processes |
| Transcription | Transcribe DNA into RNA molecules | Transcribe DNA into RNA molecules |
| Translation | Translate RNA into proteins using the same genetic code | Translate RNA into proteins using the same genetic code |
| ATP as Energy Currency | Utilize ATP as primary energy currency | Utilize ATP as primary energy currency |
| Metabolism | Carry out fundamental metabolic processes | Carry out fundamental metabolic processes |
| Cell Membrane | Have a cell membrane as a barrier | Have a cell membrane as a barrier |
| Cytoplasm | Contain cytoplasm where cellular processes occur | Contain cytoplasm where cellular processes occur |
| Ribosomes | Possess ribosomes for protein synthesis | Possess ribosomes for protein synthesis |
| Cellular Respiration | Perform cellular respiration to generate energy | Perform cellular respiration to generate energy |
| Signal Transduction | Have mechanisms for sensing and responding to external signals | Have mechanisms for sensing and responding to external signals |
| Homeostasis | Maintain internal stability and balance | Maintain internal stability and balance |
| Cytoskeleton | Have a cytoskeleton for structural support and cell movement | Have a cytoskeleton for structural support and cell movement |
| Membrane Transport | Possess transport mechanisms for moving molecules across the cell membrane | Possess transport mechanisms for moving molecules across the cell membrane |
| Cell Division | Replicate and divide cells for reproduction and growth | Replicate and divide cells for reproduction and growth |
Do prokaryotes and eukaryotes have genetic material?
Yes, both prokaryotes and eukaryotes possess genetic material in the form of DNA.
How is protein synthesis carried out in prokaryotes and eukaryotes?
Both prokaryotes and eukaryotes utilize ribosomes for protein synthesis.
Do prokaryotes and eukaryotes have cell membranes?
Yes, both prokaryotes and eukaryotes have cell membranes that surround and protect their cells.
Can prokaryotes and eukaryotes reproduce?
Yes, both prokaryotes and eukaryotes have mechanisms for reproduction, although the processes differ between them.
Are prokaryotes and eukaryotes capable of metabolism?
Yes, both prokaryotes and eukaryotes have metabolic processes that allow them to obtain and utilize energy.
Do prokaryotes and eukaryotes exhibit cell division?
Yes, both prokaryotes and eukaryotes undergo cell division as part of their life cycles.
Can prokaryotes and eukaryotes respond to their environment?
Yes, both prokaryotes and eukaryotes possess mechanisms to respond to stimuli in their environment.
Can prokaryotes and eukaryotes have flagella?
Yes, both prokaryotes and eukaryotes can have flagella, which are whip-like structures used for movement, although their composition and mechanism differ between the two groups.
Are prokaryotes and eukaryotes considered living organisms?
Yes, both prokaryotes and eukaryotes are classified as living organisms, as they display the essential characteristics of life.
|
McMurdo Dry Valleys LTER - Antarctica
McMurdo Dry Valleys LTER
The McMurdo Dry Valleys (MDVs) (78°S, 162°E) represent the largest (4500 km^2) ice-free area on the Antarctic continent. The MDV landscape is a mosaic of glaciers, soil and exposed bedrock, and stream channels that connect glaciers to closed-basin, permanently ice-covered lakes on the valley floors. Mean annual air temperatures are cold (ranging from -15 to -30°C on the valley floors), and precipitation is low (~50 mm annual water equivalent as snow). Summer air temperatures typically hover around freezing and winter air temperatures are commonly < -40°C. While the water columns of the lakes are liquid and biologically active year round, glacial meltwater streams flow and soils thaw only during the austral summer. There are no vascular plants, but microbial mats are abundant in lakes and streams. Mat organisms are transported by wind onto glacier and lake ice surfaces where they actively metabolize in liquid water pockets (cryoconites) that form during the summer months. In the streams, which desiccate for ~10 months each year, cyanobacterial mats host extensive diatom and soil invertebrate communities. Lakes provide a habitat for diverse phototrophic and heterotrophic plankton communities that are adapted to annual light-dark cycles and temperatures near 0°C. Soils are inhabited by nematodes, rotifers, and tardigrades, all of which are metabolically active during summer. The McMurdo Dry Valleys LTER (MCM) began studying this cold desert ecosystem in 1993 and showed that its biocomplexity is inextricably linked to past and present climate drivers. In the fifth iteration of the MCM LTER program, we are working to determine how the MDVs respond to amplified landscape connectivity resulting from contemporary climate variation.
General Characteristics and Status
Affiliation and Network Specific Information
|
Flags Before Federation
Prior to Federation on 1 January 1901, the official flag of the Australian Colonies was the flag of Great Britain the 'Union Jack'. However, the British colonial Naval Defence Act 1865 authorised the establishment of naval defence forces by the colonies and specified that such naval vessels should fly a Blue ensign with 'the seal or badge of the colony in the fly thereof'. Such flags were designed and adopted by the colonies.
The flags of the Australian colonies prior to Federation used the blue ensign with the Jack in the top left corner and the colony's badge in the fly. South Australia did not adopt a flag until 1904. Over time, use of these flags was extended beyond display on naval vessels.
Here is Western Australia's nineteenth century example:
It was changed in 1953 after vexillologists pointed out that the swan was facing outwards rather than inwards toward the flagpole (or bearer of the flag), and thus was heraldically incorrect. Western Australia's state flag these days is:
The badges of the former colonies (now States) are as follows. To gain an impression of their flags as they would have been flown before Federation, just mentally change the symbol in the fly of the WA flag above:
The mainland Territories do not have badges but have flags:
The Flag of the Australian Capital Territory
The Flag of the Northern Territory
|
After the end of the last Ice Age, by 9000 BC, weather conditions had become similar to today and plants and animals had returned to the landscape of Britain. At Star Carr in Yorkshire, archaeologists found the remains of a settlement which had been preserved well in waterlogged soil. The people hunted red deer, among other animals, and made headdresses from their antlers. Star Carr was very important in helping archaeologists understand the Mesolithic period better and provides a vivid insight into several aspects of life at that time.
Star Carr, Vale of Pickering, north Yorkshire, England
part of skull and antler of red deer
length: 42.9 cm
width: 39 cm
(Please always check with the museum that the object is on display before travelling)
|
Protein Monomers: Exploring the Building Blocks of Proteins
As one of the most important macromolecules, proteins play a crucial role in many biological processes. They are responsible for performing essential functions such as catalyzing chemical reactions, providing structural support, and facilitating communication between cells. But, have you ever wondered what makes up these complex biomolecules? In this article, we will uncover the secrets of protein monomers, the building blocks of proteins.
An Introduction to Protein Monomers
Proteins are polymers composed of monomers called amino acids. Amino acids are small molecules that contain a central carbon atom, called the alpha carbon, which is attached to four chemical groups: an amino group, a carboxyl group, a hydrogen atom, and a variable R-group. There are 20 different types of amino acids that can be arranged in any order to create a specific protein. Each amino acid has its unique chemical properties, which determine its function in the protein.
Proteins play a crucial role in many biological processes, including catalyzing chemical reactions, transporting molecules, and providing structural support. The sequence of amino acids in a protein determines its three-dimensional structure, which is essential for its function. Changes in the amino acid sequence can lead to alterations in the protein's structure and function, which can result in diseases such as sickle cell anemia and Alzheimer's disease. Understanding the properties and functions of protein monomers is essential for understanding the complex biological processes that occur in living organisms.
The Importance of Protein Structure
The structure of a protein is critical to its function. The specific sequence of amino acids determines the primary structure of the protein, which then folds into a three-dimensional shape. This folding is essential because it creates pockets, clefts, and channels that allow the protein to interact with other molecules selectively. The shape of a protein is determined by the type, sequence, and arrangement of amino acids. Any significant variation in protein structure can impact its function and lead to diseases such as Alzheimer's, Huntington's, or cystic fibrosis.
Furthermore, the study of protein structure is crucial in the development of new drugs and therapies. By understanding the three-dimensional structure of a protein, scientists can design drugs that specifically target and bind to certain regions of the protein, either inhibiting or enhancing its function. This approach has been successful in treating diseases such as cancer, HIV, and autoimmune disorders. Therefore, the importance of protein structure extends beyond basic research and has practical applications in the field of medicine.
The Different Types of Protein Monomers
There are 20 different types of amino acids, each with its own unique side chain or R-group. These amino acids can be classified into four groups based on the chemical properties of their R-group: polar, nonpolar, acidic, and basic. Polar amino acids contain an R-group with a charge separation, while nonpolar amino acids have an R-group that is hydrophobic. Acidic amino acids have an R-group that can donate a proton, while basic amino acids have an R-group that can accept a proton. The various combinations of these amino acids make up the distinct protein monomers.
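The four R-group classes can be encoded as a simple lookup. The assignments below follow common textbook groupings (histidine is sometimes classed differently), and only a subset of the 20 amino acids is shown:

```python
# Illustrative R-group classification of some amino acids (3-letter codes).
R_GROUP_CLASS = {
    "Gly": "nonpolar", "Ala": "nonpolar", "Leu": "nonpolar", "Phe": "nonpolar",
    "Ser": "polar",    "Thr": "polar",    "Asn": "polar",    "Gln": "polar",
    "Asp": "acidic",   "Glu": "acidic",
    "Lys": "basic",    "Arg": "basic",    "His": "basic",
}

def classify(sequence):
    """Map each residue in a sequence to its R-group class."""
    return [R_GROUP_CLASS[res] for res in sequence]

print(classify(["Ala", "Asp", "Lys", "Ser"]))
# -> ['nonpolar', 'acidic', 'basic', 'polar']
```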
Protein monomers can also be classified based on their shape or structure. There are four levels of protein structure: primary, secondary, tertiary, and quaternary. The primary structure is the linear sequence of amino acids in a polypeptide chain. The secondary structure refers to the folding of the polypeptide chain into alpha helices or beta sheets. The tertiary structure is the overall 3D shape of a single polypeptide chain, while the quaternary structure refers to the arrangement of multiple polypeptide chains in a protein complex. The specific combination of amino acids and their resulting structure determine the function of the protein.
Amino Acids: The Essential Building Blocks of Proteins
Amino acids are vital building blocks of proteins due to their unique chemical functionality. Each amino acid has a distinct chemical structure, which determines its interaction with other amino acids and molecules within the protein. The chain of amino acids determines the protein's function and is responsible for the protein's unique physical and chemical properties. The sequence of amino acids also influences the protein's stability and three-dimensional structure, which directly impacts its function.
There are 20 different types of amino acids that can be found in proteins. These amino acids can be classified into two categories: essential and non-essential. Essential amino acids cannot be produced by the body and must be obtained through the diet, while non-essential amino acids can be produced by the body. It is important to consume a balanced diet that includes all essential amino acids to ensure proper protein synthesis and overall health.
In addition to their role in protein synthesis, amino acids also play a crucial role in various metabolic pathways. For example, some amino acids can be converted into glucose, which is used as a source of energy by the body. Other amino acids can be used to synthesize neurotransmitters, which are important for proper brain function. Amino acids also play a role in the immune system, as they are involved in the production of antibodies that help fight off infections.
Understanding the Role of Peptide Bonds in Protein Formation
Peptide bonds are the covalent bonds that link amino acids to form protein monomers. During protein formation, the carboxyl group of one amino acid reacts with the amino group of another to create an amide bond. This reaction results in the formation of a peptide bond, creating a long chain of amino acids. The sequence of amino acids within this chain determines the primary structure of the protein.
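Because each peptide bond forms by condensation and releases one water molecule, a peptide's mass can be estimated from its residues. The average masses below are standard values for the free amino acids; the helper itself is just an illustration:

```python
# Average masses [Da] of a few free amino acids (illustrative subset).
AA_MASS = {"Gly": 75.07, "Ala": 89.09, "Ser": 105.09, "Leu": 131.17}
WATER = 18.02  # one H2O is released per peptide bond formed

def peptide_mass(residues):
    """Mass of a linear peptide: sum of free amino acids minus water per bond."""
    return sum(AA_MASS[r] for r in residues) - WATER * (len(residues) - 1)

# Gly-Ala dipeptide: 75.07 + 89.09 - 18.02 = 146.14 Da
print(round(peptide_mass(["Gly", "Ala"]), 2))  # -> 146.14
```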
Peptide bonds also play a crucial role in protein folding and stability. The formation of peptide bonds creates a rigid, planar structure that limits the rotation of the atoms around the bond. This rigidity affects the overall shape of the protein, as certain regions of the chain will be more likely to form alpha helices or beta sheets. Additionally, the presence of peptide bonds creates hydrogen bonding opportunities between the carbonyl oxygen and the amide hydrogen, which can further stabilize the protein structure. Understanding the role of peptide bonds in protein folding and stability is essential for developing new drugs and therapies that target specific proteins.
How Protein Monomers Fold to Form Unique Structures
After the formation of a protein chain, the molecule starts to fold into a unique three-dimensional shape. This folding is essential to the protein's physical and chemical properties, as it determines the protein's ability to interact with other biomolecules. Protein folding is a highly complex process, which involves several stages of folding, each requiring the formation of specific structures, such as alpha-helices and beta-sheets. The final protein structure is determined by the sequence of the amino acids in the chain.
One of the key factors that influence protein folding is the environment in which the protein is located. For example, changes in temperature, pH, or the presence of certain chemicals can cause a protein to unfold or misfold, leading to a loss of function or even disease. Understanding the factors that affect protein folding is crucial for developing new drugs and therapies for a range of diseases, including Alzheimer's and Parkinson's.
Recent advances in technology, such as cryo-electron microscopy and X-ray crystallography, have allowed scientists to study protein structures in unprecedented detail. This has led to new insights into the mechanisms of protein folding and the development of new computational tools for predicting protein structures. These advances have the potential to revolutionize drug discovery and lead to the development of more effective treatments for a range of diseases.
Exploring the Primary, Secondary, Tertiary and Quaternary Structure of Proteins
The structure of a protein comprises four levels: primary, secondary, tertiary, and quaternary. The primary structure is the sequence of amino acids in the protein chain. The secondary structure involves the formation of alpha-helices and beta-sheets. The tertiary structure is the folding of the protein into a three-dimensional shape, while the quaternary structure is the arrangement of multiple protein chains to form a larger protein structure. Alterations in any one of these structures can lead to changes in protein function.
Proteins are essential for many biological processes, including catalyzing chemical reactions, transporting molecules, and providing structural support. Understanding the structure of proteins is crucial for understanding their function. Researchers use various techniques, such as X-ray crystallography and nuclear magnetic resonance spectroscopy, to determine the structure of proteins. These techniques allow scientists to visualize the arrangement of atoms in the protein and provide insights into how the protein functions. By studying the structure of proteins, researchers can develop new drugs and therapies to treat diseases.
How Protein Function is Determined by its Monomer Structure
Protein function is influenced by the protein's monomer structure through the chemical and physical properties of each amino acid. For example, an enzyme's active site may contain a particular combination of amino acids that allow for the specific reaction to occur. Similarly, proteins that interact with DNA rely on specific amino acid combinations to bind to DNA molecules. The overall function of a protein is a result of its tertiary and quaternary structure.
Examining the Relationship between Protein Monomers and Enzymes
Enzymes are proteins that catalyze chemical reactions in the body. Their catalytic function relies on their specific three-dimensional structures, which are related to their amino acid sequence. Amino acids that make up the enzyme's active site are arranged to allow specific substrates to bind, creating a unique environment that facilitates chemical reactions. In this way, enzymes rely on their monomer structure for proper function.
The Role of Protein Monomers in Biological Processes
Protein monomers are essential components of biological processes. They make up enzymes, hormones, structural proteins, and transport proteins, among others. Enzymes are responsible for catalyzing chemical reactions, while hormones regulate and control various biological processes. Structural proteins provide support and shape to cells, while transport proteins move molecules across cell membranes. These are just a few examples of the critical roles that protein monomers play in maintaining biological processes.
How Genetic Information Determines the Formation of Specific Protein Monomers
Each cell in the body contains genetic information that encodes for specific proteins. This genetic information is contained in DNA molecules, which are transcribed into messenger RNA molecules that code for specific protein sequences. These sequences dictate the order and type of amino acids that will be used to create the protein. Thus, the genetic information contained within an organism's DNA determines the type of protein monomers that are present and the resulting protein structure and function.
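The flow from codons to amino acids described above can be sketched with a tiny codon table. Only four codons of the standard genetic code are included here, so this is an illustration rather than a complete translator:

```python
# Minimal subset of the standard genetic code (mRNA codons -> amino acids).
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna: str) -> list:
    """Read an mRNA in triplets and translate until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))  # -> ['Met', 'Phe', 'Gly']
```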
Protein monomers are the building blocks of proteins and are essential components of many important biological processes. Their unique chemical properties give rise to the specific physical and chemical properties of proteins, allowing them to perform diverse functions in the body. Understanding protein monomers and their interactions can lead to new insights into biological processes and the design of more effective therapies for diseases caused by protein dysfunction.
|
It’s All About Consent!
People with disabilities are 3 times more likely to be sexually assaulted than people without disabilities (disabilityjustice.org). The statistics are devastating. There are things we can do to change these numbers and help create safer communities for all people. Teaching about consent is one powerful thing. Here are some of the basics to share with people in your life with disabilities.
Consent is everything! But what does it mean?
Consent means to agree to something. We use this word especially when we are talking about our bodies. Consent is about relationships between bodies. Consent is giving your permission or not giving permission.
Consent means you are in charge of your body. Your body is yours! You get to decide what happens to it. You get to set your own boundaries.
Boundaries are limits about what you are comfortable or not comfortable with. For example, you might be comfortable hugging a close friend, but you might not be comfortable hugging people you don't know. That's a boundary. You either consent to a hug or you don't consent to a hug: it's your choice, based on your boundaries. If you are unsure, pay attention to your body's responses, which could include an increased heart rate, sweating, or an upset stomach. That could be your body's way of saying something is not right for you right now, and it's better to say no, to not give consent. Better to be on the safe side.
It’s ok to say ‘No’ to unwanted touch. It’s your right to say ‘No’ when it comes to your body. Saying ‘No’ to touch or attention means you do not consent. If another person tries to force you, or if you are afraid, that is not consent. And you cannot consent if you are under the influence of drugs or alcohol. If someone does try to touch you without your consent, you can say ‘NO’ loudly, or you can run away. In both cases, it’s important to tell a trusted adult. It’s also important to get consent from others, because everyone is in charge of their own body. For instance, if you want to hug somebody, ask first. Consent goes both ways.
To be clear: Yes means Yes, and No means No. Anything other than an enthusiastic Yes means No!
It’s ok to change your mind when it comes to consent! You can give consent or agree to hugging one day and then decide the next day that you don’t want to hug. You have a right to say ‘no’ to something and then decide you actually would like to do it, so change your mind and say ‘yes’. We have to keep checking in about consent because people can change their minds about consent.
Lastly, the good news is -we can practice consent and get better at it. Practice saying ‘No’ loudly and clearly. Practice saying ‘Yes’ loudly and clearly. Practice asking others before touching them in anyway. Practice asking for consent even with the small things like: Would you like to play this game? Can I sit next to you? Do you want to hold my hand?
You are in charge of your body! No one has a right to do anything to your body that you don’t consent to. And everyone is in charge of their own bodies!
1. Consent (For Kids), Boundaries, Respect, and Being in Charge of You. Rachel Brian
|
A joint is defined as a point where 2 bones articulate or make contact. Joints can be classified in 2 ways: (1) histologically, based on the type of connective tissue which is dominant in the joint, or (2) functionally, based on the amount of movement permitted between the bones forming the joint. Based on the histological classification, the 3 types of joints in the human body are fibrous, cartilaginous, and synovial. Based on the functional classification, the 3 types of joints are synarthrosis (immovable), amphiarthrosis (slightly moveable), and diarthrosis (freely moveable). The 2 classification schemes correlate: synarthroses are fibrous, amphiarthroses are cartilaginous, and diarthroses are synovial.
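The one-to-one correlation between the two classification schemes can be written as a small lookup table, mirroring the text above:

```python
# Histological class -> functional class, per the correlation in the text.
HISTOLOGY_TO_FUNCTION = {
    "fibrous": "synarthrosis (immovable)",
    "cartilaginous": "amphiarthrosis (slightly moveable)",
    "synovial": "diarthrosis (freely moveable)",
}

for tissue, mobility in HISTOLOGY_TO_FUNCTION.items():
    print(f"{tissue:>13} joints -> {mobility}")
```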
A fibrous joint is often called a fixed joint and is where fibrous tissue comprised mainly of collagen connects bones. Fibrous joints are usually immoveable (synarthroses) and they have no joint cavity. They are subdivided further into sutures, gomphoses, and syndesmoses.
In cartilaginous joints, the bones are attached by hyaline cartilage or fibrocartilage. Based on the type of cartilage involved, the joints are further classified as primary and secondary cartilaginous joints.
Synovial joints are freely mobile (diarthroses) and are often considered the main functional joints of the body. The synovial joint is characterized by its joint cavity. The cavity is surrounded by the articular capsule, fibrous connective tissue that is attached to each participating bone just beyond its articulating surface. The joint cavity is filled with synovial fluid, secreted by the synovial membrane (synovium), which lines the inside of the articular capsule. Hyaline cartilage forms the articular cartilage, covering the entire articulating surface of each bone. The articular cartilage and the synovial membrane are continuous. Some synovial joints also have associated fibrocartilage, for example the menisci, between the articulating bones.
Synovial joints are often further classified by the type of movements they permit. There are 6 such classifications: hinge (elbow), saddle (carpometacarpal joint), planar (acromioclavicular joint), pivot (atlantoaxial joint), condyloid (metacarpophalangeal joint), and ball and socket (hip joint).
The histological and functional classification schemes offer a broad understanding of joints. Within these categories, each specific joint type (suture, gomphosis, syndesmosis, synchondrosis, symphysis, hinge, saddle, planar, pivot, condyloid, ball, and socket) has a specific function in the body.
Of the fibrous joints, sutures and gomphoses are found only in the skull and the teeth, respectively, and were discussed in the introduction.
A syndesmosis, an amphiarthrosis joint and the third type of fibrous joint, maintains the integrity between long bones and resists forces that attempt to separate the two bones. All syndesmoses are amphiarthroses, but each specific syndesmosis joint permits a different amount of movement. For example, the tibiofibular syndesmosis primarily provides strength and stability to the leg and ankle during weight bearing; however, the antebrachial interosseous membrane of the radioulnar syndesmosis permits rotation of the radius bone during forearm movements. The interosseous membranes of the leg and forearm are also areas of muscle attachment.
A synchondrosis, or primary cartilaginous joint, only involves hyaline cartilage and can be temporary or permanent.
A temporary synchondrosis is an epiphyseal plate (growth plate), and it functions to permit bone lengthening during development. The epiphyseal plate connects the diaphysis (shaft of the bone) with the epiphysis (end of the bone) in children. Over time, the cartilaginous plate expands and is replaced by bone, adding to the diaphysis. Eventually, when all the hyaline cartilage has been ossified, the bone stops lengthening, and the diaphysis and epiphysis fuse in synostosis. Other temporary synchondroses join the ilium, ischium, and pubic bones of the hip; over time, these also fuse into a single hip bone.
A permanent synchondrosis does not ossify with age; it retains its hyaline cartilage. Permanent synchondroses function to connect bones without movement as a synarthrosis joint. Examples are found in the thoracic cage, such as the first sternocostal joint: the first rib is joined to the manubrium by its costal cartilage. Other examples include the joints between the anterior ends of the other 11 ribs and their costal cartilages.
A symphysis, or secondary cartilaginous joint, involves fibrocartilage. Fibrocartilage is thick and strong, so symphyses have great ability to resist pulling and bending forces. While the fibrocartilage strongly unites adjacent bones, the joint is still an amphiarthrosis joint and permits limited movement.
Symphyses can be narrow or wide. Narrow symphyses include the pubic symphysis and the manubriosternal joint. In females, the slight mobility of the pubic symphysis between the left and right pubic bones is critical for childbirth. A wider symphysis is the intervertebral symphysis, or intervertebral disc. The thick pad of fibrocartilage fills the gap between adjacent vertebrae and provides cushioning during high-impact activity.
The primary purpose of the synovial joint is to prevent friction between the articulating bones of the joint cavity. While all synovial joints are diarthroses, the extent of movement varies among different subtypes and is often limited by the ligaments that connect the bones.
A hinge joint is defined as an articulation between the convex end of one bone and the concave end of another. This type of joint is uniaxial because it only permits movement in one axis. In the body, this axis of movement is usually bending and straightening, or flexion and extension. Examples include the elbow, knee, ankle, and interphalangeal joints.
A condyloid joint, or an ellipsoid joint, is defined as an articulation between the shallow depression of one bone and the rounded structure of another bone or bones. This type of joint is biaxial because it permits two axes of movement: flexion/extension and medial/lateral (abduction/adduction). An example is the metacarpophalangeal joints of the hand between the distal metacarpal and proximal phalanx, commonly known as the knuckle.
A saddle joint is defined as an articulation between two bones that are saddle-shaped, or concave in one direction and convex in another. This type of joint is biaxial, and one example is the first carpometacarpal joint between the trapezium (carpal) and the first metacarpal bone of the thumb. This permits the thumb to flex and extend (within the plane of the palm) as well as abduct and adduct (perpendicular to the palm). This dexterity gives humans the characteristic trait of “opposable” thumbs.
A planar joint, or gliding joint, is defined as an articulation between bones that are both flat and of similar size. This type of joint is multiaxial because it permits many movements; however, surrounding ligaments usually restrict this joint to a small and tight motion. Examples include intercarpal joints, intertarsal joints, and the acromioclavicular joint.
A pivot joint is defined as an articulation within a ligamentous ring between the rounded end of one bone and another bone. This type of joint is uniaxial because, although the bone rotates within this ring, it does so around a single axis. An example would be the atlantoaxial joint between C1 (atlas) and C2 (axis) of the vertebrae. This permits side-to-side head motion. Another example is the proximal radioulnar joint. The radius sits in the annular radial ligament, which holds it in place as it articulates with the radial notch of the ulna. This permits pronation and supination.
Synovial: Ball and Socket
A ball and socket joint is an articulation between the rounded head of one bone (ball) and the concavity of another (socket). This type of joint is multiaxial: it permits flexion/extension, abduction/adduction, and rotation. The only two ball and socket joints of the body are the hip and the shoulder (glenohumeral). The shallow socket of the glenoid cavity permits a more extensive range of motion in the shoulder; the deeper socket of the acetabulum and the supporting ligaments of the hip constrain the motion of the femur.
Joints, comprising bones and connective tissue, embryologically derive from mesenchyme. The bones either develop directly through intramembranous ossification or indirectly through endochondral ossification. During direct development (intramembranous ossification), the mesenchymal cells differentiate into bone-producing cells. During indirect development (endochondral ossification), the mesenchymal cells first differentiate into a hyaline cartilage model, which is then gradually displaced by bone. The connective tissue of the joint arises from the mesenchymal cells between the developing bones.
For synovial joints of the limbs, the space between the developing long bones is termed the joint interzone. The interzone becomes apparent in the sixth week of embryonic development when a cellular condensation of mesoderm on either side, termed the paraxial blastema, chondrifies into hyaline cartilage models for the long bones. In the eighth week of embryonic development, mesenchymal cells at the margin of the interzone become the articular capsule; cell death in the center forms the joint cavity, which is filled with synovial fluid produced by mesenchymal cells. The articular cartilage is a remnant of the hyaline cartilage that, between gestational weeks 6 and 8, became the long bones via endochondral ossification.
Every joint in the body has a different blood supply; however, there are patterns based on the histological classification of joints.
Perforating branches of the proximal vessels usually supply fibrous joints. For example, the tibiofibular joint is supplied by branches from the anterior tibial artery as well as the peroneal artery.
Cartilaginous joints only receive vascular supply at the periphery because cartilage itself is an avascular tissue. Intervertebral discs, for example, are supplied at the margins by capillaries from the vertebral bodies.
Synovial joints are supplied by a rich anastomosis of arteries extending from either side of the joint, termed the periarticular plexus. Some vessels penetrate the fibrous capsule to form a rich plexus deeper in the synovial membrane. This deeper plexus, termed circulus vasculosus, forms a loop around the articular margins that supplies the articular capsule, synovial membrane, and terminal bone. The articular cartilage, which is avascular hyaline cartilage, is nourished by the synovial fluid.
Lymphatic vessels for every joint follow the lymph drainage of the surrounding tissue. Some joints house lymph nodes, like the popliteal lymph nodes in the popliteal fossa of the knee.
Every joint in the body has different innervation; however, innervation of synovial joints is most extensively understood, perhaps because of the functional implications.
Synovial joints are highly innervated by sensory and autonomic fibers. The autonomic nerves are vasomotor in function, controlling the dilation or constriction of blood vessels. The sensory nerves of the articular capsule and ligaments (articular nerves) provide proprioceptive feedback from Ruffini endings and Pacinian corpuscles. Proprioception of the joint permits reflex control of posture, locomotion, and movement. Free nerve endings convey pain sensation that is diffuse and poorly localized. The articular cartilage has no nerve supply.
There are two general principles that can be applied to synovial joint innervation: Hilton’s law and Gardner’s observation. Hilton’s law states that the articular nerves supplying a joint are branches of the nerves that supply the muscles responsible for moving that joint. Therefore, irritation of articular nerves causes a reflex spasm of the muscles which position the joint for greatest comfort. These nerves also supply the overlying skin, providing a mechanism for referred pain from joint to skin. Gardner’s observation states that the part of the articular capsule that is tightened by contraction of a group of muscles is supplied by the same nerves that innervate the antagonist muscles. This relationship provides local reflex arcs that stabilize the joint.
Muscles are most critical in providing additional support for synovial joints. The muscles and their tendons which cross a joint resist the forces acting on that joint, behaving as a dynamic "ligament." Muscle strength is therefore essential to the stability of synovial joints, especially during high-stress activity, as well as for joints with weaker ligaments, for example, the glenohumeral joint.
Joints can be surgically replaced during an operation called an arthroplasty to treat chronic pain and limited mobility associated with osteoarthritis. Arthroplasty is a highly invasive procedure, so it is often the last line of treatment. The operation removes the damaged bone and replaces the articular surfaces with an artificial metal, plastic, or ceramic device (a prosthesis) built to mimic the natural structure of the joint. Hips and knees are the most commonly replaced joints.
Different pathology is associated with different joint types. Below is a review of the most common injuries that plague each histological class.
Sutures, the immobile fibrous joints that bind the bony plates of the cranium, can fuse too early in development, a condition termed craniosynostosis. The plates of a newborn’s skull are not fused to permit space for the brain to grow in all planes; therefore, early fusion (synostosis) alters the shape of the head. For example, if the sagittal suture synostoses, the head will not develop width and will instead grow long and narrow (scaphocephaly). In addition to altered head shape, some children may experience symptoms that are secondary to high pressure on the brain due to more confined skull space. These include headaches, developmental delays, or problems with eyesight.
A syndesmosis joint, the slightly mobile fibrous joint that connects long bones with an interosseous membrane, can be sprained. For example, in the leg, excessive external rotation can push the fibula away from the tibia causing injury to the distal tibiofibular syndesmosis; this is termed a “high ankle sprain.”
Epiphyseal plates, an example of temporary synchondroses, are vulnerable to damage when there is an injury to the associated growing long bone. Such damage to the cartilage would stop bone lengthening and stunt bone growth.
Arthritis is inflammation of the synovial joint. There are many types of arthritis, distinguished by different mechanisms of injury. The most common type of arthritis is osteoarthritis, which is defined as gradual damage to and subsequent thinning of the articular cartilage. This is considered a “wear and tear” injury and is seen in older patients; it is associated with previous injury to the joint and longstanding high-impact stress on the joint (due to sports or excessive body weight). Because the articular cartilage has no innervation, the degradation itself does not cause pain. Instead, as the articular cartilage becomes thinner, more pressure is placed on the bones. The joint responds by overproducing synovial fluid. This leads to swelling and inflammation that stretch the highly innervated articular capsule, causing pain and stiffness of the joint. The underlying bone also has a rich nerve supply that perceives pain.
Gout is another form of arthritis caused by deposition of uric acid crystals within a joint. Uric acid causes gout when there is an excessive amount in the body; this is either due to over-production or improper excretion by the kidneys. The most commonly affected joint is the metatarsophalangeal (MTP) joint of the big toe. Patients often present with excruciating pain and swelling.
Synovitis is inflammation of the synovial membrane that lines the articular capsule of synovial joints. The most common cause is overuse of a synovial joint in an active, healthy person. Persistent synovitis in multiple joints can indicate rheumatoid arthritis, where the synovium is the target of autoimmune attack. Patients with synovitis often present with pain out of proportion to examination; in fact, sometimes the patient has pain without swelling or tenderness, which is termed arthralgias.
|
The purpose of this activity is to learn about sounds.
1. Watch the video about sounds.
2. Find a piece of paper and a pencil.
3. Close your eyes and get very quiet. Listen to all of the sounds you can hear in one minute.
4. Open your eyes. Make a list of the sounds you hear.
5. Ask your parents for permission to go outside.
6. Go outside and close your eyes. Listen to all of the sounds you can hear in one minute.
7. Open your eyes and make a list of the sounds you hear.
8. Use Seesaw to share the sounds you heard inside and outside.
|
Beech Class Spring Term
This term our topic is ‘Colours and Animals’. The linguistic focus is gender, articles (definite & indefinite), plurals and adjectives (position & basic agreement).
The key verbs are ‘il/elle est’ (he/she/it is), ‘ils sont’ (they are) and ‘il y a’ (there is/are). The negative is revisited and there is also a subtle introduction to ‘aussi’ (also/too/as well) and ‘mais’ (but).
Pupils are encouraged at all times to strive to work things out for themselves, work in pairs and small groups sharing knowledge, and to speak aloud when possible – thereby building confidence.
Pronunciation, memory, pattern finding, sentence building, autonomy, performance and creativity are the concepts at the heart of this term’s learning. Have a brilliant Christmas!
If your child needs to self-isolate, individually or as a whole class, the school office will email home their home learning on a weekly basis. If you have any questions or queries, please contact the office and your class teacher will be in contact with you.
How to access Google classroom
Each child should read for 30-40 minutes a day and complete regular reading journal entries. They should complete at least one Read Theory quiz a week.
English and Maths home learning is set each week to practise skills taught in class, along with weekly spellings that teach spelling rules and patterns.
|
About the New York Police Department (NYPD):
The first law-enforcement officer began to patrol the trails and paths of New York City when it was known as New Amsterdam, a Dutch settlement and fort, in the year 1625. This lawman was known as a "Schout-fiscal" (sheriff-attorney) and was charged with keeping the peace, settling minor disputes, and warning colonists if fires broke out at night. The first Schout was a man named Johann Lampo.

The Rattle Watch was a group of colonists during the Dutch era (1609-1664) who patrolled from sunset until dawn. They carried weapons, lanterns and wooden rattles (similar to the ratchet noisemakers used during New Year celebrations). The rattles made a very loud, distinctive sound and were used to warn farmers and colonists of threatening situations. Upon hearing this sound, the colonists would rally to defend themselves or form bucket-brigades to put out fires. The rattles were used because whistles had not yet been invented. The Rattle Watchmen are also believed to have carried lanterns that had green glass inserts. This was to help identify them while they were on patrol at night (as there were no streetlights at that time). When they returned to their Watch House from patrol, they hung their lantern on a hook by the front door to show that the Watchman was present in the Watch House. Today, green lights are still hung outside the entrances of Police Precincts as a symbol that the "Watch" is present and vigilant.

When the High Constable of New York City, Jacob Hays, retired from service in 1844, permission was granted by the Governor of the state to the Mayor of the City to create a Police Department. A force of approximately 800 men under the first Chief of Police, George W. Matsell, began to patrol the City in July of 1845. They wore badges that had an eight-pointed star (representing the first 8 paid members of the old Watch during Dutch times). The badges had the seal of the City in their center and were made of stamped copper.
|
What is epilepsy?
Epilepsy occurs as a result of abnormal electrical activity originating in the brain. Brain cells communicate by sending electrical signals in an orderly pattern. Sometimes the brain cells communicate in an abnormal or uncontrolled manner leading to a seizure or epileptic episode.
This is where the work of a neurologist comes in. If you suffer an epileptic seizure, you may start behaving differently, or you may feel a brief loss of awareness. Some patients experience convulsions or even unconsciousness.
Any one of us can have a seizure in certain circumstances - about one person in 20 has a seizure at some point in their life. However, we would only call it epilepsy if you are likely to have recurring seizures. About one person in 200 has epilepsy, making it quite a common condition.
So how can a neurologist like myself help you with the treatment of epilepsy? Generally, epilepsy is successfully treated with antiepileptic medications. About 60-70% of people with epilepsy will be able to control their seizures by simply taking medication. The remaining 30-40% may continue to have seizures but they would occur less frequently.
There are many different antiepileptic medications. The choice of antiepileptic medication depends on a number of factors including:
- your type of seizure or epilepsy
- your age
- your gender
- your other medical conditions or general health
- your informed opinion
This is where, in my work as a neurologist in Perth, I take the time to go over your specific situation and your medical history. It’s essential to look at the bigger picture and to listen to your own story and your experiences, before making decisions about medication.
If you are taking medication to control your epilepsy symptoms and you still have seizures, we may consider other treatment options. These can include vagus nerve stimulation or surgery.
Types of epilepsy seizures
Part of my role as your neurologist is to help you understand the type of seizures that you may be experiencing. They fall into two main groups. One group of seizures begins in a particular spot in the brain (focal seizures), while the other group starts in large areas of the brain at the same time (generalised seizures).
FOCAL SEIZURES - there are two forms:
- Focal seizures without loss of consciousness:
  - Alterations to sense of taste, smell, sight, hearing or touch
  - Tingling and twitching of limbs
- Focal seizures with loss of consciousness:
  - Staring blankly into space
  - Performing repetitive movements
  - Confusion once the seizure is over
GENERALISED SEIZURES - there are six types:
- Tonic-clonic seizures - used to be called “grand mal seizures” (French for ‘big pain’):
  - Loss of consciousness
  - Stiffening of body
  - Shaking of limbs
  - Loss of bladder control
  - Tongue biting
- Absence seizures - used to be called “petit mal seizures” (French for ‘small pain’):
  - Blank stare and a brief loss of awareness. There may also be repetitive movements like lip smacking or blinking.
- Myoclonic seizures
- Tonic seizures
- Atonic seizures
- Clonic seizures
When to see an epilepsy specialist or neurologist?
If you suspect you have had a seizure, I recommend that you see your doctor as soon as possible. A seizure can be a symptom of a serious medical issue. In order to diagnose epilepsy, your GP will most likely refer you to a neurologist. And the role of the neurologist is then to confirm the diagnosis of epilepsy, and to discuss which treatment is most appropriate for your situation.
You may wonder if there is a cure for epilepsy. The straightforward answer is no, at this stage there is no known cure for the condition. But research and the advancements in modern medicine allow us to work out the best possible treatment. And that, in most patients, leads to dramatic improvements, and to a better quality of life.
On the other hand, not seeking help or not having access to a qualified neurologist may mean that your epilepsy is left undiagnosed and untreated, and causes complications. During a seizure, an epilepsy patient can sustain serious injuries. In other cases, we have seen that uncontrolled or prolonged seizures lead to brain damage. As we have established, epilepsy is all about abnormal electrical activity in the brain, and if this abnormal situation is not treated, it causes abnormal functioning of the brain. Experts also agree that epilepsy may increase the risk of sudden unexplained death.
What are the causes of epilepsy?
At my practice in Perth I often get questions about the exact cause of epilepsy as a condition. Through scientific research in this area, we have learned that there are quite a few main causes:
- Genetic predisposition: something in your genes that makes certain people more prone to developing the symptoms.
- Developmental disorders of the brain: a disorder in the early stages of life, causing this abnormal electrical activity.
- Perinatal brain injury: a lack of oxygen at birth can cause this type of brain damage.
- Febrile convulsions in infancy
- Infectious diseases of the brain: infections such as meningitis or encephalitis
- Traumatic brain injury: brain damage caused by an accident, for example
- Scarring on the brain after a brain injury (post-traumatic epilepsy)
- Stroke or other vascular diseases
- Brain tumour
- Neurodegenerative disorders such as Alzheimer’s disease
Part of my role is to create clarity around the exact cause of your epilepsy. Another role your neurologist plays is to create clarity around the triggers. They are not the direct cause, but they are specific situations that initiate a seizure.
For example: you may be tired, or suffer from some form of sleep deprivation. You may be stressed. Another trigger could be the use of alcohol. Or, simply the fact that you have been given epilepsy medication and forgot to take it.
What to expect from an epilepsy diagnosis?
To assess if you suffer from epilepsy, at my practice in Mt Lawley Perth, I may recommend one or several of the common neurological tests:
- Neurological examination
- Blood tests
- Video-EEG monitoring
- CT Scan
- Cranial MRI
The tests I recommend will be based on your personal situation. And the next important step is to make sure that you get a clear explanation of what the test has told us, so you are clear about the results and about your diagnosis.
What to expect from epilepsy treatment?
Medication for epilepsy is often the recommended first line of treatment. As a Perth based neurologist, I can help you with finding the best antiepileptic medication for your type of epilepsy. The treatment of seizures depends on an accurate diagnosis. I will discuss with you the various medication options, their side-effects and what is relevant in your particular circumstances. “No seizures and no side-effects” is my goal for any epilepsy treatment. While this is not always possible, we will keep striving to get the best seizure control possible.
Epilepsy surgery is considered when focal onset seizures are particularly dangerous or debilitating, occurring many times a day and not responsive to anti-epileptic medications. Surgery is also performed if the specific cause of the epilepsy requires surgery, for example in case of a brain tumour. Epilepsy surgery requires extensive pre-surgical work-up. I will perform the initial investigations and direct you to a centre with the best expertise in epilepsy surgery.
I build on 20 years of experience as a neurologist, with a particular interest in epilepsy diagnosis and treatment. My commitment is to give you clarity about the tests that you may need, and clear communication to make sure that you fully understand the test results. Working in the complex area that is the human brain, I believe that you deserve to understand what is going on in that part of your body, and you deserve to have access to the most suitable and most personalised type of epilepsy treatment.
|
Origin of Rastafari
The origin of the Rastafari movement is generally accepted as the Coronation of His Imperial Majesty Emperor Haile Selassie I of Ethiopia on 2nd November 1930. On this date Ras Tafari, Heir Apparent to the Throne of Ethiopia, took the new name of Haile Selassie I, meaning Power of the Trinity.
The Emperor traced an unbroken lineage as the 225th descendant from the throne of King David.
Others contend that Rastafari was born out of a spirit of resistance from the first boatload of enslaved Africans that left the Guinea Coast for serfdom in the west. That spirit was imbued in a succession of freedom fighters such as Nanny of the Maroons, Sam Sharpe, Paul Bogle and Alexander Bedward. They all refused to accept the status quo of enslavement and colonisation. Through centuries of oppression that spirit endured and finally surfaced in the context of Ethiopianism, a revived racial memory of African ancestry, spirituality and divinity.
In the late 19th and early 20th century, Ethiopianism emerged in Southern Africa, the Americas and the Caribbean. Several Black religio-political organisations and Churches sprang up, espousing Ethiopianism. They looked to the last bastion of independence on the African continent as a source of spiritual strength. In Jamaica this trend had occurred a century earlier with the introduction of Black Jamaicans to the Bible. African-Americans George Lisle (or Liele), Moses Baker and George Lewis, emancipated slaves who had fled to Jamaica following the outbreak of the American War of Independence, pioneered this Afrocentric religious trend on the island. (Chevannes 1994: 18). They were inspired by Biblical references to Ethiopia and tried to proselytize enslaved Jamaicans, inducting them by way of the Bible. Through their speeches and activities they paid tribute to Africa, and emphasized the greatness of Africans. George Lisle (1750-1820) founded the Ethiopian Baptist Church in 1784 and contributed greatly to uplifting the black masses by associating Africa and its people with the Promised Land and the Elect, the ‘true Jews’ descended from King Solomon and Makeda, Queen of Sheba.
These strains of consciousness inspired individual forbears of the Rastafari movement in Jamaica, who, independently of each other, identified Ras Tafari as the fulfilment of prophecy, the promised Messiah who would break the chains of captivity that had held African descendants in thraldom for centuries. Lay preachers such as Howell, Hinds, Hibbert and Dunkley openly proclaimed Ras Tafari as the Redeemer, quoting Psalm 87; Revelation 5:5; Revelation 17:14; Revelation 19:16; Isaiah 9:6 and other scriptural passages that gave unshakeable proof of His identity, divinity and redemptive power.
The congruent strands of history and prophecy were fused to reveal the Almighty – not through ecclesiastical divination or theosophic scholarship, but through grassroots research of ‘unlettered’ men moved by the zeitgeist of the age and the timeless spirit of Truth that inhabits the human condition. To quote Marcus Garvey: “Many a man was educated outside the schoolroom. It is something you let out, not completely take in. You are part of it, for it is natural; it is dormant simply because you will not develop it, but God creates every man with it knowingly or unknowingly to him who possesses it, that’s the difference. Develop yours and you become as great and full of knowledge as the other fellow without even entering the classroom.”
Rastafari embodied the recovered and reconstituted ‘decency’ of humanity from the dawn of creation. Livity came into being as an archetypal reconstruction of moral life, fashioned by outcasts of humanity, the cut-offs of nations in the bottomless pit of the Caribbean Basin. A precious stone, fired in the ultimate crucible of oppression, became the saving grace of nations.
The Forerunners of Rastafari
Reverend Gordon (1836-1885), whose words might have been taken from a speech of Marcus Garvey, except that they were delivered in 1875, twelve years before Garvey’s birth:
“Some people…are ashamed to own their connection with Africa, but this should not be, since it must be admitted, that she once held the most prominent and influential position in the world, and that from her, through Greece and Rome, the British Nation received the first elements of civilization.” (Stewart 1983: 280).
Doctor, pastor, journalist, politician and orator Joseph Robert Love (1839-1914), born in the Bahamas, was a key figure in Pan-Africanism between 1890 and 1914, upholding “Africa for the Africans”. (Chevannes 1994: 38). Love was very proud of his blackness and his African roots, and founded two of the main vehicles for Pan-African and anticolonial ideas of his time: the journal Jamaica Advocate (1894-1905) and the Pan-African Association launched in 1901 in association with another Pan-Africanist, Trinidad-born Henry Sylvester-Williams. Unusual for his time, Love also advocated education for women, stating that a people cannot rise above the standards of its womanhood. The young Marcus Garvey was tutored by Dr Love, whom he revered and considered one of his earliest influences. In his ethnographic work on the roots of Rastafari, Professor Barry Chevannes also cites Isaac Uriah Brown, Prince Shrevington and ‘Warrior Higgins’, three religious street-preachers who kept alive the consciousness of Africa among the urban and rural poor in the early twentieth century. (Chevannes 1994: 38).
Alexander Bedward (about 1859-1930), a great healer with followers all over Jamaica as well as in Cuba and Panama, was the most famous preacher of the time. He sternly denounced the oppression of Blacks in Western society and urged his followers towards a Black revolution. He cited two of Jamaica’s National Heroes, Sam Sharpe and Paul Bogle, who rebelled against the white establishment defending their right to liberty at the cost of their lives. Bedward was arrested many times for ‘subversive activities’. In April 1921 he and eight hundred of his followers marched on Kingston, assaulting some people including a census officer along the way. Bedward was arrested and sent to a psychiatric hospital where he died in November 1930. According to Professor Chevannes, “he led his followers directly into Garveyism by finding the appropriate charismatic metaphor: Bedward and Garvey were as Aaron and Moses, one the high priest, the other prophet, both leading the children of Israel out of exile.” (Chevannes 1994: 39)
These forerunners paved the way for Anguillan Robert Athlyi Rogers and others who followed in their wake. In the 1920s Rogers founded an Afrocentric religion, the Afro Athlican Constructive Church, which preached self-reliance and self-determination for Africans. Rogers saw Ethiopians/Africans as the chosen people of God and proclaimed Marcus Garvey an apostle. In 1927 Garvey is reputed to have told a Church audience: “Look to Africa. When you shall see a Black King crowned, know that the day of deliverance is at hand.” Emperor Haile Selassie I’s ascension to the Imperial throne in 1930 was taken as confirmation of Garvey’s prophetic utterance. For the nascent Rastafari movement Garvey was the reincarnation of John the Baptist, pointing towards the returned Messiah.
Between 1913 and 1917 Athlyi Rogers wrote The Holy Piby, also known as ‘The Blackman’s Bible’, first published in 1924. The Holy Piby includes rules of conduct, religious doctrine, references to Ethiopia and Egypt, as well as apostles and saints of God, who are all depicted as Black. In 1926 his work was followed by the publication of The Royal Parchment Scroll of Black Supremacy, by Reverend Fitz Balintine Pettersburgh, who described it as ‘Ethiopia’s Bible-Text’. These two books, which were banned in Jamaica and other Caribbean islands, were templates for Leonard Howell’s The Promised Key, written a decade later, around 1935. This trinity of works constitutes the formative texts that propelled Rastafari into an ideological knowledge-system based on the Divinity of His Imperial Majesty Emperor Haile Selassie I of Ethiopia.
In the genealogy of the Rastafari movement we find these antecedents unravelling from a broad cumulative awareness of Ethiopia as the cradle of civilisation, slowly tapering into a pinpoint of identification: firstly, Ethiopia as a generic name for the continent of Africa; secondly, as the ancestral spiritual home of Africans; thirdly as the precise geographical land mass in north-east Africa, the fabled land of the mythical Priest-King Prester John; and finally, the birthplace of the returned Messiah, Christ in His Kingly Character, Emperor Haile Selassie I, King of Kings, Lord over all Lords, Conquering Lion of the Tribe of Judah, Elect of Himself and Light of this world.
Leonard Percival Howell is often acknowledged as ‘the first Rasta’. Howell’s commune, Pinnacle, founded on 400 acres of land purchased in Sligoville, St Catherine in 1940, was an attempt to create heaven on earth – no more, no less – the ‘pinnacle’ of man’s aspirations for a world of justice, independence, peace, security, and love. Against all the odds of time and circumstance, for a fleeting moment Pinnacle gave us a glimpse of an Ethiopian utopia where mankind’s cherished hopes could be realised in the here and now. Howell’s community appropriated the best that could be gleaned from the ‘remnants of nations’ woven into a quilted tapestry of righteous living. As a seaman, his influences spanned a world in turmoil – from the Russian revolution to the Harlem Renaissance and Hindu spirituality in the ‘New World’. All these strains are also to be found in African culture. The Caribbean (‘Carry-beyond’) was already a melting-pot for social and racial outcasts. What Howell attempted was something new. In a fragmented world he dared to postulate a new holistic (yet ancient) reality – one that effectively coalesced the noblest traditions of world history that cascaded into the present. Howell’s astute interpretation of the times brought him into a calculated vision of Majesty and Divinity. The transforming power of man’s spiritual evolution dictated that we are in a time of fulfilment, hence God must manifest in Man – the Alpha and Omega – His Imperial Majesty, (or Their Imperial Majesties – as he put it in The Promised Key, where he describes the ‘cosmic trigger’ on which the foundation of life is set. He identifies the Emperor and Empress as the Paymaster and Pay-mistress, through Whom the healing balm of regeneration gives new life to the universe).
Pinnacle was an agrarian community where every nutritional food-crop was grown and shared by its members, known as ‘Howellites’. Ganja/marijuana was the main cash crop. Pinnacle’s popularity and prosperity were largely established through the trade in herbs, vegetables and ‘ground provisions’. The community also developed ritualised forms of worship involving drums, prayers and chanting. Pinnacle operated as an alternative otherworldly society (conformable to the Essenes) with engrained principles of communal love, fellowship and fraternity. It was conceived and constructed on the ideal of collective security and independence, totally at odds with the western mode of Babylonian lifestyle. Pinnacle was an oasis of African life in a desert of colonial oppression. The world was not ready for Pinnacle and its message of Peace and Love, the salutation of the early ‘locksmen’ and women who populated it. Howell swore fealty to the African Emperor of Ethiopia. He urged his followers not to pay taxes to the imposter, King George of England. His defiant Rastafari stronghold represented a final outpost of African culture, identity and resistance in the West.
It was inevitable that Pinnacle would be vilified, constantly raided and ransacked by agents of the colonial state. Eventually the community (numbering thousands) was crushed by police action in 1954. Yet, out of Pinnacle came a spreading gospel, an empowering re-enactment of righteous life, a recovered way of decent living that shaped the Rastafari movement. Today Pinnacle is up for grabs by rich landowners. An Occupy Pinnacle Movement is in place as a militant new generation of Rastafari seek to preserve their sacred heritage from the encroachment of Babylonian forces. In revisiting the roots of this essentially Pan-African phenomenon we can ‘over-stand’ the universal attraction of this way of life (livity) for a world desperately seeking a way out of the fallout, rubble and detritus of global conflicts that threatened life on this planet.
Here in Pinnacle the origins of livity, (a word coined to describe the Rastafari way of life), began to take shape. Significantly, language was one of the first barriers to be dismantled. ‘I-and-I’ replaced ‘you and me’. The dictum of ‘love thy neighbour as thyself’ was codified and reflected in this simple reformation of language and human relationship. From this pivotal axiom a new idiosyncratic tongue was developed (or ‘i-veloped’). In one fell swoop the prefix ‘I’ released the new movement from the bondage of the oppressor’s tongue and mindset, emphasizing the centrality of love in the conduct of life’s relationships.
Food, vital for life, became ‘i-tal’. Control became ‘man-trol’ or ‘i-trol’. Understanding became ‘over-standing’ or ‘i-verstanding’. Freedom became ‘freeman’, since ‘free’ was incompatible with the sound of ‘dom’ (dumb). A new vocabulary was initiated and popularised to fit a new creation (or ‘i-ration’), based on word-sound and power. The Word of Rastafari was made flesh, invested with vibrational power, purpose and intent. Grammar was rendered fallacious, seductive and irrelevant. The ‘I-talk’ of Rastafari (Iyaric) became a distinguishing feature of livity, marking a total rejection of the constraining word-sound of Babylon. The world was redefined in relation to the ‘I-man’. Its tenor was given new value through positivity, confidence and self-knowledge. Conversation was replaced by reasoning. The Rastafari voice dispelled all lukewarm ‘Christian’ expression. Its strength of utterance emanated from the heart, rather than the head.
Alignment with Nature was another crucial facet of livity. At a time when Black populations worldwide trekked to the cities for social and economic uplift Rastafari bucked the trend by celebrating the joys of nature and ‘country living’. Babylon’s gross materialism and one-upmanship was based on city-life and the pursuit of wealth rather than peace and contentment. Capitalism had created a toxic, hostile, polluted environment where human decency and natural affection were sacrificed on altars to Mammon. A return to Nature’s way sustained mankind physically and morally, away from the crass waste, hate and lust of Babylonian society. Livity was diametrically opposed to the values of upward mobility projected by Western civilisation. The human body was a temple to be cleansed for the in-dwelling of the Most High.
The abhorrence of meat in the ital diet became another tenet of livity. The blood sacrifice of animals to sustain man’s greed for nourishment was seen as unnatural and unnecessary. As in Genesis 1:29, God had given man all manner of herbs for meat. The movement generally adhered to this guidance, though many Rastafari earned their living as fishermen; hence the eating of fish was deemed acceptable to some.
In the Nyahbinghi ritual, a Rastafari celebration of divine worship to the Creator, a trinity of drums is used: bass, funde and repeater (akete). The huge booming bass single-note drum is the ‘Pope-Smasher’, the two-note funde represents the heartbeat, saying “Do good! Do good!” repeatedly. The repeater (peetah) is polyrhythmic, speaking in intricate patterns that enliven the free-stepping dances of the congregation. The chants are often reworked versions of Church hymns, such as Bob Marley’s Rastaman Chant:
One bright mornin’
When my work is over
I will fly away home
Ras Tafari is the lily of the valley
He’s the bright and morning star
Ras Tafari is the fairest of ten thousand
Everybody should a know
Others are original lyrics composed by members of the movement and are charged with political comment:
Black liberation day (repeat)
What a great day dat must be
When Africa is free
I-demption yodding, Hail Fari! (Redemption trodding)
What a wonderful iwa (iwa = hour, time)
Glory to the King
Jah-Jah take I outa bondage into Jah freeman
I-demption yodding, Hail Fari!
The Psalms of David and other Biblical passages are essential readings during the ‘Binghi. The Sacred Herb (ganja) is imbibed liberally by the congregation.
Indeed, reggae music with its hypnotic, soothing rhythms and fervent lyrics evolved from the ‘harps’ of the ‘binghi. Reggae became the popular purveyor of the Rastafari message and ethos – often to the letter – as in Marley’s musical reprise of His Majesty’s utterance: “Until the philosophy that holds one man superior and another inferior…” Today reggae is perhaps the most widely known and imitated musical genre globally.
The smoking of ganja (marijuana) has long been accepted as an essential feature of livity. Decriminalization in Jamaica and other parts of the world poses new challenges for the movement. Is sacramental use of the herb being diluted to accommodate global popularity? Or will Rastafari maintain priestly observance of the herb – despite its widespread use as a recreational high?
Livity effectively combines spirituality and political awareness in a seamless expression of everyday life. In this process the roles of priest and warrior are evenly weighted and entwined. “Peace and love” is balanced by the strident call for Equal Rights and Justice as in:
Get up, Stand up!
Stand up for your rights!
The Nyahbinghi warrior’s credo states, “Death to Black and White oppressors!” It also prescribes, “InI war not against flesh and blood, but against principalities and powers and the workers of iniquity in high and low places.” The Rastafari Creed invokes “that the hungry be fed, the naked clothed, the sick nourished, the aged protected and the infants cared for.”
The words, works and speeches of His Imperial Majesty are studiously regarded as a ‘Third Testament’, an updated addendum to the Old and New Testaments in this Dispensation in which the final battle is enjoined, and in which InI confidence is in the victory of Good over evil. His Majesty’s historic appeal to the League of Nations in Geneva in 1935 was a watershed moment for international morality. His warning to the Great Powers had gone unheeded: “You have lit a match in Ethiopia, but it shall burn throughout Europe.” His subsequent defeat of Fascism gave hope to smaller nations and to the ‘wretched of the earth’ who had supported Ethiopia’s cause against the overwhelmingly aggressive military might of Italy. As He Himself declared after returning to Ethiopia in triumph: “People who see this throughout the world will realise that even in the 20th century with faith, courage and a just cause David will still beat Goliath.”
Livity repudiates the notion of freedom (freeman) as licence. In fact, livity defines a lived holistic discipline where mind, body and spirit are enjoined in a singular purpose and commitment to the liberation of the self and of the human condition – primarily, but not solely, for Africa and Africans. In livity the Word is liberated, realized and fulfilled in action. The injunction to do good invests InI with an aura of divinity as sons and dawtas of the Almighty, created (i-rated) in His image and likeness to perform His work on earth, i.e. to promote and secure the victory of Good over evil. The writ of Christianity having fallen into disrepute, a new dispensation was ushered in by the Rastafari movement, firstly by His Imperial Majesty; secondly by His progeny, the Rastafari nation, with a solemn promise and obligation for peace and prosperity in the global community.
Ultimately, Rastafari livity is moulded on the indomitable spirit of Majesty and Divinity, in which Justice and Mercy abide for the healing of nations. Livity overcomes all obstacles through strategy, patience, endurance, determination and consummate confidence in the power of Right over Might, enshrined in the philosophy of One Love. The outlandish sect that was hounded out of polite society, ostracised, brutalized, scorned, defamed, threatened with genocide and counted at nought in the 1930s, has now become a flowering tree spreading its knotty (natty) branches in every nook and cranny of today’s world. This too, is the triumph of livity.
Ras Shango Baku
Self-contained prototype brings artificial photosynthesis a step closer to commercial reality. While solar cells and wind turbines are the devices many people will think of for off-grid electricity production, the development of practical artificial photosynthesis for the creation of hydrogen via solar-powered water splitting could radically alter the way we produce energy locally.
As part of the on-going pursuit of this goal, researchers from Forschungszentrum Jülich claim to have created a working, compact, self-contained artificial photosynthesis system that could form the basis for practical commercial devices. Photosynthesis in plants and certain types of algae is the process where light energy is transformed into chemical energy to synthesize simple carbohydrates from carbon dioxide and water.
Latest bionic leaf now 10 times more efficient than natural photosynthesis. Over the last few years, great strides have been made in creating artificial leaves that mimic the ability of their natural counterparts to produce energy from water and sunlight.
In 2011, the first cost-effective, stable artificial leaves were created, and in 2013, the devices were improved to self-heal and work with impure water. Now, scientists at Harvard have developed the "bionic leaf 2.0," which increases the efficiency of the system well beyond nature's own capabilities, and used it to produce liquid fuels for the first time. The project is the work of Harvard University's Daniel Nocera, who led the research teams on the previous versions of the artificial leaf, and Pamela Silver, Professor of Biochemistry and Systems Biology at Harvard Medical School. Like the previous versions, the bionic leaf 2.0 is placed in water and, as it absorbs solar energy, it's able to split the water molecules into their component gases, hydrogen and oxygen.
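The water-splitting step described above follows the textbook overall reaction 2 H2O → 2 H2 + O2. As a quick illustration of the mole balance (standard molar masses, not figures from the Harvard work), a short sketch:

```python
# Mole balance for the overall water-splitting reaction: 2 H2O -> 2 H2 + O2.
# Illustrative only; molar mass is the standard value, not from the article.
M_H2O = 18.015  # g/mol

def gases_from_water(grams_water):
    """Return (mol H2, mol O2) from fully splitting `grams_water` of water."""
    mol_h2o = grams_water / M_H2O
    # Stoichiometry: 1 mol H2 per mol H2O, 0.5 mol O2 per mol H2O.
    return mol_h2o, mol_h2o / 2

h2, o2 = gases_from_water(18.015)  # exactly one mole of water
print(h2, o2)  # -> 1.0 0.5
```

Splitting one mole of water thus yields one mole of hydrogen and half a mole of oxygen, which is why these devices always evolve twice as much H2 as O2 by volume.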
Liquid hydrocarbon fuel created from CO2 and water in breakthrough one-step process. As scientists look for ways to help remove excess carbon dioxide from the atmosphere, a number of experiments have focused on employing this gas to create usable fuels.
Both hydrogen and methanol have resulted from such experiments, but the processes often involve a range of intricate steps and a variety of methods. Now researchers have demonstrated a one-step conversion of carbon dioxide and water directly into a simple and inexpensive liquid hydrocarbon fuel using a combination of high-intensity light, concentrated heat, and high pressure. According to the researchers from the University of Texas at Arlington (UTA), this breakthrough sustainable fuels technology uses carbon dioxide from the atmosphere, with the added benefit of also producing oxygen as a byproduct, which should create a clear positive environmental impact.
Audi just created diesel fuel from air and water. Audi is making a new fuel for internal combustion engines that has the potential to make a big dent when it comes to climate change – that's because the synthetic diesel is made from just water and carbon dioxide.
The company's pilot plant, which is operated by German startup Sunfire in Dresden, produced its first batches of the "e-diesel" this month. German Federal Minister of Education and Research Johanna Wanka put a few liters of the fuel in her work car, an Audi A8, to commemorate the accomplishment. Using Algae to Treat Water and Create Fuel (Jan 31, 2013). A team at Cal Poly is working on a project that would use algae to create biofuels from human waste.
The algae biofuel project, which was recently awarded a $1.3 million grant, will take place at the "raceway" style algae ponds that cover over half an acre of the San Luis Obispo Water Reclamation Facility. Since algae release oxygen while absorbing nutrients and CO2, they would be able to treat the wastewater to current standards using only energy from the sun, and the nutrients recycled from algae biomass could be processed into sustainable algal biofuels. Hybrid Solar System Helps Green Natural Gas Plants (Apr 21, 2013). A hybrid solar system could help increase the efficiency and eco-friendliness of natural gas power plants while also creating a synthetic fuel for vehicles.
The system features a parabolic mirror that focuses sunlight on a small chemical reactor lined with narrow channels containing natural gas. A catalyst breaks down the molecules of the sunlight-warmed gas to create a mixture of carbon monoxide and hydrogen called syngas (synthesis gas). The waste heat from this reaction is captured by a heat exchanger that sends it back to the reactor, boosting the process until 60 percent of the sunlight is being converted into energy. Syngas can also be used to make synthetic crude oil, which can then be refined for use in vehicles. New Process Creates Crude Oil from Algae in Minutes (Dec 26, 2013). Researchers have developed a method of creating crude oil from freshly harvested algae in only minutes, saving both time and energy.
Algae has long been recognized for its use as a biofuel source, but the process involved in drying out the wet, aquatic plant is expensive, time consuming and involves a series of steps. To expedite the process, the team from PNNL developed a continuous process that involves subjecting the wet algae to high temperatures and pressure (662ºF and 3,000 psi), which team leader Douglas Elliott described as “a bit like using a pressure cooker.”
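The process conditions quoted above are given in imperial units. As a quick sanity check, a short sketch converts them to SI terms (the conversion factors are standard; only the 662 ºF and 3,000 psi figures come from the article):

```python
# Convert the reported hydrothermal-liquefaction conditions to SI units.
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def psi_to_bar(psi):
    """Convert pounds per square inch to bar (1 psi = 0.0689476 bar)."""
    return psi * 0.0689476

temp_c = fahrenheit_to_celsius(662)  # -> 350.0 (deg C)
pressure_bar = psi_to_bar(3000)      # -> ~206.8 (bar)

print(f"{temp_c:.0f} degC, {pressure_bar:.0f} bar")
```

In SI terms the "pressure cooker" runs at roughly 350 °C and 207 bar — far above household pressure-cooker conditions, but typical of hydrothermal processing.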
In tests, the process resulted in between 50 and 70 percent conversion of the algae's carbon into fuel. Other products included clean water, fuel gas and nutrients that can be used to grow more algae. Harvesting Fuel from Plastic Shopping Bags (Feb 16, 2014). Researchers have developed an energy-efficient way to convert plastic shopping bags into a variety of petroleum products—giving shoppers another reason to keep the bags from landfills, roadsides and oceans.
The bags are first converted to crude oil by the pyrolysis process, part of which involves heating the bags in an oxygen-free chamber. This method has been in use by other research teams, but the team from University of Illinois expanded on the technique by fractioning the crude oil into different petroleum products, which allowed them to then produce products such as natural gas, gasoline, waxes and lubricating oils.
Making Ethanol without Corn (Apr 12, 2014). Researchers have developed a way to make ethanol fuel that eliminates the need for corn or sugarcane, drastically reducing the amount of energy needed to manufacture the alternative fuel.
Manufacturing ethanol usually requires gathering large amounts of biomass and then subjecting the material to fermentation. As an alternative to the labor-intensive process, Stanford University researchers have proven that it is possible to use an electric current to produce the ethanol directly from water and waste gases. The process described by the research team involves converting carbon dioxide to carbon monoxide—using either an existing technology or one of the more efficient methods currently being developed—and then using an electrochemical process to convert the carbon monoxide to ethanol.
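The second step above (CO → ethanol) can be sanity-checked with textbook electrochemistry. Assuming the standard eight-electron half-reaction 2 CO + 8 H⁺ + 8 e⁻ → C2H5OH + H2O (a generic stoichiometry, not a figure from the Stanford work), the charge required per mole of ethanol follows from the Faraday constant:

```python
# Charge needed per mole of ethanol, assuming the eight-electron
# half-reaction: 2 CO + 8 H+ + 8 e- -> C2H5OH + H2O (textbook stoichiometry).
FARADAY = 96485.0  # coulombs per mole of electrons (CODATA value)

electrons_per_ethanol = 8
charge_per_mole = electrons_per_ethanol * FARADAY  # C per mol of ethanol

# At a steady 1 A (1 C/s), producing one mole of ethanol would take:
hours = charge_per_mole / 3600

print(f"{charge_per_mole:.0f} C per mole of ethanol (~{hours:.0f} h at 1 A)")
```

Roughly 772,000 coulombs per mole — which is why the efficiency of each electron-transfer step matters so much for the overall energy cost of electro-fuels.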
Modular biobattery plant turns a wide range of biomass into energy - Images. Researchers at the Fraunhofer Institute for Environmental, Energy and Safety Technology have developed a "biobattery" in the form of a highly efficient biogas plant that can turn raw materials like straw, scrap wood and sludge into a variety of useful energy sources including electricity, purified gas and engine oil.
The new plant design, currently being put to the test in a prototype plant in Germany, is said to be highly modular and economically viable even at the small scale. The production of biogas – gas created by the breakdown of organic matter, by fermentation or through the action of anaerobic bacteria – is an interesting complement to other sources of renewable energy since it can not only generate electricity at little cost to the environment, but also create biofuel, fertilizer and engine oil.
One issue, however, is that these plants only accept a few organic substances as raw materials. Source: Fraunhofer Institute. Researchers develop system for on-farm biofuel and animal feed production. Building on methods used by farmers to produce silage for feeding livestock, Japanese researchers have developed a technology for simultaneous biofuel and animal feed production which doesn't require off-site processing. Presentation slides – Interessengemeinschaft Miscanthus Sachsen.
Homepage. Energy that regrows – renewable raw materials, Miscanthus, pellet and wood-chip heating systems. About Endina: Endina supports regional agriculture in the cultivation and harvesting of fuels from renewable energy crops, which Endina then processes and markets. Endina is the link between producer and consumer, forming a closed loop from the cultivation of energy crops through processing and distribution to the customer. Long-term contracts secure the livelihood of our local farmers, while hauliers, employees and workers in processing, and ultimately the end consumer, also benefit from this regional value-creation cycle.
Miscanthus – heating fuel from the field » Klimaschutz-Blog, 12 April 2012, by hempstar. Getting started online » Miscanthus. Miscanthus – a renewable raw material with a future! New Energy Farms. New Energy Farms (NEF) provides a complete solution to sourcing biomass feedstock for your project. We have extensive experience in commercial production of high-yielding energy grasses. NEF operates in the US, Canada and the EU with vertically integrated operations from plant breeding through to commercial supply of feedstock. Your partner for cultivation, care, harvesting and use. GREEN ENERGY. Miscanthus. Miscanthus cultivation. Fonamis GmbH Co KG – Miscanthus, a renewable raw material with a future. Erich Kuhn GmbH – pellets, wood-chip and pellet heating systems. Energy price trends in Germany: prices for energy commodities develop quite unevenly over time.
In the following chart we show you the trend over the past five years at a glance. On average, pellet prices lie well below the level of natural gas and heating oil. The clever solution additionally uses the energy contained in the flue gases: the water vapour in the exhaust air is cooled until it condenses and releases the energy it holds. Once again HARGASSNER confirms its pioneering role in environmentally friendly, energy-saving heating. Miscanthus. Miscanthus, commonly known as Chinese silver grass, is a perennial grass that, like sugar cane and millet, belongs to the grass family (Poaceae).
Miscanthus originates from East Asia and was first introduced to Europe in 1935 as an ornamental plant. Miscanthus OppStock GbR – your supplier of renewable raw materials. On the website of Miscanthus OppStock GbR.