In the last few sections, we learned how to multiply polynomials. We did that by using the Distributive Property. All the terms in one polynomial must be multiplied by all terms in the other polynomial. In this section, you will start learning how to do this process in reverse. The reverse of distribution is called factoring.
Let’s look at the areas of the rectangles again: Area = length × width. The total area of the figure on the right can be found in two ways.
Method 1 Find the areas of all the small rectangles and add them
Method 2 Find the area of the big rectangle all at once
Since the area of the rectangle is the same no matter which method you use, the two answers must be equal:
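The original figure and expressions are missing from this copy; as an illustrative stand-in, take a rectangle of width x whose length is split into pieces of length x and 3:

```latex
\underbrace{x \cdot x + x \cdot 3}_{\text{Method 1: small rectangles}}
\;=\;
\underbrace{x(x + 3)}_{\text{Method 2: big rectangle}}
\;=\; x^2 + 3x
```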
Factoring means that you take out the factors that are common to all the terms in a polynomial, then multiply them by a parenthesis containing all the terms that are left over when you divide out the common factors.
Polynomials can be written in expanded form or in factored form. Expanded form means that you have sums and differences of different terms:
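The polynomial itself did not survive in this copy; an illustrative degree-four example in expanded form (a stand-in, not the author's original) is:

```latex
x^4 - 5x^2 + 4
```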
Notice that the degree of the polynomial is four. It is written in standard form because the terms are written in order of decreasing power.
Factored form means that the polynomial is written as a product of different factors. The factors are also polynomials, usually of lower degree. Here is the same polynomial in factored form.
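Continuing with the illustrative stand-in from above, the same polynomial in factored form is:

```latex
x^4 - 5x^2 + 4 = (x - 1)(x + 1)(x - 2)(x + 2)
```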
Notice that each factor in this polynomial is a binomial. Writing polynomials in factored form is very useful because it helps us solve polynomial equations. Before we talk about how we can solve polynomial equations of degree 2 or higher, let’s review how to solve a linear equation (degree 1).
Solve the following equations
Remember that to solve an equation you are trying to find the value of the variable that makes the equation true:
Now we are ready to think about solving equations like . Notice that we can't isolate the variable using any method you have already learned. But we can subtract 42 from both sides to get . Now the left-hand side of this equation can be factored!
Factoring a polynomial allows us to break up the problem into easier chunks. For example, . So now we want to solve:
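The specific equation did not survive extraction; a stand-in consistent with the "subtract 42" step described above is:

```latex
x^2 + x = 42
\;\Rightarrow\;
x^2 + x - 42 = 0
\;\Rightarrow\;
(x + 7)(x - 6) = 0
```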
How would we solve this? If we multiply two numbers together and their product is zero, what can we say about these numbers? The only way a product can be zero is if one or both of the factors are zero. This property is called the Zero-product Property.
How does that help us solve the polynomial equation? Since the product equals zero, at least one of the factors in the product must equal zero. We set each factor equal to zero and solve.
We can now solve each part individually and we obtain:
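Continuing the illustrative stand-in example:

```latex
x + 7 = 0 \;\Rightarrow\; x = -7
\qquad\text{or}\qquad
x - 6 = 0 \;\Rightarrow\; x = 6
```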
Notice that the solution is written with an OR. The OR says that either of these values of the variable makes the product of the two factors equal to zero. Let’s plug the solutions back into the equation and check that this is correct.
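Checking, again with the illustrative stand-in:

```latex
x = -7: \quad (-7 + 7)(-7 - 6) = (0)(-13) = 0 \;\checkmark
\qquad
x = 6: \quad (6 + 7)(6 - 6) = (13)(0) = 0 \;\checkmark
```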
Both solutions check out. You should notice that the product equals zero because each solution makes one of the factors simplify to zero. Factoring a polynomial is very useful because the Zero-product Property allows us to break the problem into simpler, separate steps.
If we are not able to factor a polynomial, the problem becomes harder, and we must use other methods that you will learn later.
As a last note in this section, keep in mind that the Zero-product Property only works when a product equals zero. For example, if you multiplied two numbers and the answer was nine, you could not say that either of the numbers had to be nine. In order to use the property, you must have the factored polynomial equal to zero.
Solve each of the polynomial equations.
Since all the polynomials are in factored form, we set each factor equal to zero and solve the simpler equations separately.
a) can be split up into two linear equations
b) can be split up into two linear equations
c) can be split up into three linear equations.
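The original equations in parts (a)–(c) were lost from this copy; as a hedged illustration, an equation of type (c), with three factors, splits like this:

```latex
x(x - 5)(2x + 3) = 0
\;\Rightarrow\;
x = 0, \quad x = 5, \quad x = -\tfrac{3}{2}
```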
Once we get a polynomial in factored form, it is easier to solve the polynomial equation. But first, we need to learn how to factor. There are several factoring methods you will be learning in the next few sections. In most cases, factoring takes several steps to complete because we want to factor completely. That means that we factor until we cannot factor anymore.
Let’s start with the simplest case, finding the greatest monomial factor. When we want to factor, we always look for common monomials first. Consider the following polynomial, written in expanded form.
A common factor can be a number, a variable, or a combination of numbers and variables that appears in every term of the polynomial. We are looking for expressions that divide out evenly from each term in the polynomial. Notice that in our example, the factor appears in all terms, so it is a common factor.
Since it is a common factor, we factor it out by writing it in front of a parenthesis:
Inside the parenthesis, we write what is left over when we divide the common factor out of each term.
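Since the worked polynomial is missing from this copy, here is an illustrative example of the full process (our own, not the author's):

```latex
5x^3 + 10x^2 - 15x
= 5x \cdot x^2 + 5x \cdot 2x - 5x \cdot 3
= 5x(x^2 + 2x - 3)
```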
Let’s look at more examples.
a) We see that the factor 2 divides evenly from both terms.
We factor the 2 by writing it in front of a parenthesis.
Inside the parenthesis, we write what is left from each term when we divide by 2.
This is the factored form.
b) We see that the factor of 5 divides evenly from all terms.
Factor 5 to get
c) We see that the factor of 3 divides evenly from all terms.
Factor 3 to get
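The expressions for (a)–(c) are also missing here; an illustrative example in the style of (a), factoring out a common number, is:

```latex
4x + 6 = 2 \cdot 2x + 2 \cdot 3 = 2(2x + 3)
```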
Here are examples where different powers of the common factor appear in the polynomial
Find the greatest common factor
a) Notice that the factor appears in all terms of but each term has a different power of . The common factor is the lowest power that appears in the expression. In this case the factor is .
Factor to get
b) The factor a appears in all the terms, and the lowest power is .
We rewrite the expression as
Factor to get
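As an illustrative stand-in for the missing expression, suppose the lowest power present is a^3:

```latex
a^5 + 7a^4 - 2a^3
= a^3 \cdot a^2 + a^3 \cdot 7a - a^3 \cdot 2
= a^3(a^2 + 7a - 2)
```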
Let’s look at some examples where there is more than one common factor.
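For instance (an illustrative example, since the originals were cut off here), a polynomial can share a number and two variables across its terms:

```latex
12x^3y^2 - 8x^2y^3
= 4x^2y^2 \cdot 3x - 4x^2y^2 \cdot 2y
= 4x^2y^2(3x - 2y)
```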
When you smoke a cigarette indoors, you expose yourself and everyone around you to secondhand smoke, thirdhand smoke and an increased risk of fire. Secondhand smoke includes smoke from the end of the burning cigarette, called sidestream smoke, and smoke exhaled by the smoker, called mainstream smoke. Thirdhand smoke is the name given to the toxic particles from cigarette smoke that settle onto surfaces in your home and remain long after smoking has ceased. Passive exposure to smoke poses health risks to you and everyone around you, while smoking-related fires kill and injure hundreds of people each year.
While all secondhand cigarette smoke contains toxic chemicals, the American Cancer Society says sidestream smoke contains smaller particles than mainstream smoke. Because of their smaller size, these particles can more easily enter the lungs and cells of anyone who breathes in the smoke from your cigarette. Children and non-smoking adults exposed to secondhand smoke have an increased risk of lung cancer, and possibly cancers of the breast, lymphatic system, blood, larynx, throat, sinuses, brain, bladder, rectum and stomach. Dust samples taken from the homes of smokers contain tobacco-specific carcinogens, making thirdhand smoke a possible risk factor for cancer as well.
Exposure to tobacco smoke is a major risk factor for cardiovascular disease. The Merck Manual for Health Care Professionals says that, while the risk is less for people exposed to secondhand smoke as compared to active smokers, the increased risk still exists. For example, non-smoking spouses have a 20- to 30-percent increased risk of coronary artery disease. The American Cancer Society says as many as 42,000 non-smokers die annually from cardiovascular disease due to exposure to secondhand smoke.
Lung cancer isn’t the only way the lungs are affected by exposure to tobacco smoke. Exposure to secondhand smoke is a risk factor for chronic obstructive pulmonary diseases, such as emphysema and chronic bronchitis. Pneumonia is also more common, and tobacco smoke can trigger attacks in children and adults who have asthma. The Merck Manual for Health Care Professionals says children exposed to cigarette smoke get sick easier and miss more school than children who are not exposed.
Sudden Infant Death Syndrome
Sudden infant death syndrome, or SIDS, kills approximately 2,200 infants annually in the U.S. Among the risk factors are exposure to tobacco smoke while in the womb, whether from a mother who smoked or a mother who was exposed to secondhand smoke, and exposure to secondhand smoke as an infant. The Family Practice Notebook says 61 percent of SIDS deaths are associated with parental smoking. Secondhand smoke also increases the risk of problems with pregnancy such as stillborn births, low birth weights and difficulties during delivery.
The U.S. Fire Administration, a division of the Federal Emergency Management Agency, says home fires caused by smoking materials kill almost 1,000 smokers and non-smokers annually in the U.S. One in 4 killed was not the smoker, and more than one-third of those were children of the smoker. For the health and safety of yourself and your loved ones, stop smoking, but if you must smoke, take it outdoors.
Fire … New ISO standard will take its breath away
Prevention is always better than cure, and there are few better examples than with fires. Since fires can only survive when there is oxygen to fuel them, removing it from the air is an effective way to ensure that the environment remains fire-free. Oxygen reduction systems (ORS) do just that, creating atmospheres in which there is not enough oxygen for a fire to break out, but still enough for humans to breathe easily. Reducing the level of oxygen in the air is considered one of the most effective ways of preventing fires in buildings.
Now the world’s first International Standard for oxygen reduction systems has just been published: ISO 20338:2019, Oxygen reduction systems for fire prevention — Design, installation, planning and maintenance.
Oxygen reduction systems are designed to prevent fires from starting or spreading by introducing oxygen-reduced air, creating an atmosphere in an area with a permanently lower oxygen concentration than ambient conditions. Oxygen reduction systems are not designed to extinguish fires. The design and installation are based on detailed knowledge of the protected area, its occupancy and the materials in question. It is important to suit the fire protection measures to the hazard as a whole.
However, installing such systems can be a complex business, and requires in-depth knowledge of the space being protected, how it is used and by whom.
Currently, there are various national standards and technical guidelines in place, mainly in Europe, but what has been missing is an internationally agreed set of requirements for quality, safety and performance that everyone can use. Until now … ISO 20338, Oxygen reduction systems for fire prevention – Design, installation, planning and maintenance, specifies minimum requirements and defines the specifications for the design, installation and maintenance of fixed oxygen reduction systems. It applies to those systems that use nitrogen-enriched air used for fire prevention in buildings and industrial production plants, and can be used for new systems as well as for the extension and modification of existing systems.
This document does not apply to:
- oxygen reduction systems that use water mist or combustion gases;
- explosion suppression systems;
- explosion prevention systems, in case of chemicals or materials containing their own supply of oxygen, such as cellulose nitrate;
- fire extinguishing systems using gaseous extinguishing agents;
- inertisation of portable containers;
- systems in which oxygen levels are reduced for reasons other than fire prevention (e.g. steel processing in the presence of inert gas to avoid the formation of oxide film);
- inerting required during repair work on systems or equipment (e.g. welding) in order to eliminate the risk of fire or explosion.
In addition to the conditions for the actual oxygen reduction system and its individual components, it also covers certain structural specifications for the protected area.
The space protected by an oxygen reduction system is a controlled and continuously monitored indoor climate for extended occupation and this does not cover unventilated confined spaces that can contain hazardous gases.
The elements covered by this latest ISO 20338:2019 are predictably comprehensive, addressing system requirements; design, including qualification of the designer; pipework; monitoring, alarms and notifications; control equipment; and installation, operation and maintenance.
Alan Elder, chair of the ISO technical subcommittee that developed the standard, said it will be useful to users of ORS, such as facilities owners, as well as for meeting regulatory requirements.
It can be purchased from iso.org/standard/67742.html
Safety signage standards updated
From no-go areas on construction sites to emergency exits, ISO 7010, Graphical symbols – Safety colours and safety signs – Registered safety signs, prescribes safety signs for the purposes of accident prevention, fire protection, health hazard information and emergency evacuation. Introduced in 2011 and updated in 2018, it has now been revised again as ISO 7010:2019.
It features the shape and colour of the sign as referenced in ISO 3864-1, Graphical symbols – Safety colours and safety signs – Part 1: Design principles for safety signs and safety markings, and the design of the symbol is according to ISO 3864-3, Graphical symbols – Safety colours and safety signs – Part 3: Design principles for graphical symbols for use in safety signs.
Mr Jan-Bernd Stell, Chair of the ISO technical committee that developed the standard, said lack of harmonisation and standardisation in this area could lead to confusion and accidents.
“International standardisation of safety signs means everyone speaks the same language when it comes to safety. This provides a simple solution for everyone, both in workplaces and public areas like airports where many nationalities converge.”
Examples of safety signs documented in the standard include everything from warnings around deep water, electricity or barbed wire to instructions such as ‘do not walk or stand here’, or to not use lifts in the event of a fire.
It is available for purchase here: iso.org/obp/ui/#iso:std:iso:7010:ed-3:v1:en
Sensory Integration and Processing Dysfunction
One of the greatest challenges for parents today is detecting their child’s special needs. Sensory Integration Disorder, also called Sensory Processing Dysfunction, affects both parents and children.
Every mother and father wants to have healthy normal children. Kids who have this disorder need special attention in order to develop and learn. The disorder impedes daily functions, social relationships, psychological health development, as well as learning.
Children with such dysfunctions exhibit unusual characteristics such as:
- hyperactive behavior
- hypersensitivity to touch
- reaction to loud sound
- poor coordination
- choosy behavior with food and clothing
- fear of getting dirty
Symptoms of Sensory Integration Disorder
Early detection of the condition is important, and every parent needs to be enlightened about this condition. By looking at the signs and symptoms of the dysfunction, it is possible to tell whether a child has Sensory Processing Dysfunction. Parents should watch their children’s reactions to different things and situations.
Simple things seem to disturb a child who has the disorder; these could be:
- friendly touch
- rough bed sheets
- water from the shower
- new clothes and
- brushing teeth
A caretaker who is unaware of this condition might think the child is overreacting. The child could also face discrimination from other kids due to their “choosy” character. This affects the way they relate, and their overall growth in society.
For older children, school performance suffers because the condition affects their coordination abilities. The condition affects an important part of their nervous system.
What Causes the Sensory Processing Disorder?
This is due to poor reception of messages by the nervous system. The negative behavioral reactions that the child displays are caused by an unusual sensory processing system. Different psychologists researching the condition describe the extent to which a child could show these signs. Some situations are extreme and require special attention. Some causes of the condition are beyond anyone's control.
Reasons for the SPD include:
- prenatal attention
- birth complications
- environmental effects
Children with this condition require occupational therapy. Proper diagnosis will suggest the right treatment for the child. The degree of effects on their system determines their level of treatment. The family and society members need to understand the condition so that they can offer the best care.
For more information about the condition, careful inquiry and research using reliable sources will provide adequate answers. More about Sensory Integration and Processing Dysfunction is available through dedicated email lists.
The best way to prepare for the disorder is by getting to know about it in advance. Whether a parent has a child suffering from Sensory Integration Disorder or not, it is their responsibility to learn about it.
Methyl chemical groups dot lengths of DNA, helping to control when certain genes are accessible by a cell. In new research, UCLA scientists have shown that at the connections between brain cells — which often are located far from the central control centers of the cells — methyl groups also dot chains of RNA. This methyl markup of RNA molecules is likely key to brain cells’ ability to quickly send signals to other cells and react to changing stimuli in a fraction of a second.
To dictate the biology of any cell, DNA in the cell’s nucleus must be transcribed into corresponding strands of RNA. Next, the messenger RNA, or mRNA — an intermediate genetic molecule between DNA and proteins — is translated into proteins. If a cell suddenly needs more of a protein — to adapt to an incoming signal, for instance — it must transcribe more DNA into mRNA. Then it must make more proteins and shuttle them through the cell to where they are needed. This process means that getting new proteins to a distant part of a cell, like the synapses of neurons where signals are passed, can take time.
Research has recently suggested that methyl chemical groups, which can control when DNA is transcribed into mRNA, are also found on strands of mRNA. The methylation of mRNA, researchers hypothesize, adds a level of control over when the mRNA can be translated into proteins, and these marks have been documented in a handful of organs throughout the bodies of mammals. The pattern of methyls on mRNA in any given cell is dubbed the “epitranscriptome.”
UCLA and Kyoto University researchers mapped out the location of methyls on mRNA found at the synapses, or junctions, of mouse brain cells. They isolated brain cells from adult mice and compared the epitranscriptome found at the synapses to the epitranscriptomes of mRNA elsewhere in the cells. At more than 4,000 spots on the genome, the mRNA at the synapse was methylated more often. More than half of these spots, the researchers went on to show, are in genes that encode proteins found mostly at the synapse. The researchers found that when they disrupted the methylation of mRNA at the synapse, the brain cells didn’t function normally.
The methylation of mRNA at the synapse is likely one of many ways that neurons speed up their ability to send messages, by allowing the mRNA to be poised and ready to translate into proteins when needed.
The levels of key proteins at synapses have been linked to a number of psychiatric disorders, including autism. Understanding how the epitranscriptome is regulated, and what role it plays in brain biology, may eventually provide researchers with a new way to control the proteins found at synapses and, in turn, treat disorders characterized by synaptic dysfunction.
The authors of the study are Daria Merkurjev, a UCLA postdoctoral research fellow; Matteo Pellegrini, a UCLA professor of molecular, cell and developmental biology; Dr. Kelsey Martin, a professor of biological chemistry and of psychiatry and biobehavioral sciences at UCLA; and Wan-Ting Hong, Kei Iida, Ikumi Oomoto, Belinda Goldie, Hitoshi Yamaguti, Takayuki Ohara, Shin-ya Kawaguchi, Tomoo Hirano and Dan Ohtan Wang, all of Kyoto University.
The study was published in the journal Nature Neuroscience.
The study was funded by Japan’s Grants-in-Aid for Scientific Research Program; Hirose Foundation; Astellas Foundation; and Japan Society for the Promotion of Science.
A mosquito net offers protection against mosquitoes, flies, and other insects, and thus against the diseases they may carry; examples include malaria, dengue fever, yellow fever, Zika virus and various forms of encephalitis, including West Nile virus. To be effective, the mesh of a mosquito net must be fine enough to exclude such insects. The most effective means of preventing malaria is sleeping under a mosquito net, specifically a long-lasting insecticide-treated net (LLIN). Malaria is transmitted by certain mosquitoes when they bite. These mosquitoes bite people to get a blood meal; the malaria parasite then passes from the infected mosquito to the person. The CDC Foundation's Bed Nets for Children program helps Centers for Disease Control and Prevention (CDC) teams purchase and distribute insecticide-treated bed nets to help protect children and families from malaria, a leading cause of death and disease worldwide. Insecticide-treated bed nets (ITNs) are a form of personal protection that has been shown to reduce malaria illness, severe disease, and death due to malaria in endemic regions. In community-wide trials in several African settings, ITNs have been shown to reduce the deaths of children under 5 years from all causes.
An insecticide-treated net (ITN) is a net (usually a bed net), designed to block mosquitoes physically, that has been treated with a safe, residual insecticide for the purpose of killing and repelling mosquitoes, which carry malaria. By Philip Bejon, Bob Snow and Charles Mbogo: bed nets have probably saved the lives of tens of thousands of Kenyans over the last decade, and likely hundreds of thousands of people worldwide. KEMRI has played a key role in studies leading to the introduction of bed nets treated with insecticides. Bukenya F, Echodu D, Adoke Y (2018) Long lasting insecticidal bed nets ownership, access and use in a high malaria transmission setting before and after a mass distribution campaign in Uganda. PLoS ONE 13(1): e0191191. https://doi.org/10.1371/journal.pone.0191191
Insecticide-treated nets are currently a major tool to reduce malaria transmission. Their level of repellency affects contact of the mosquito with the net, but may also influence the mosquito's entry into the house; the response of host-seeking malaria mosquitoes approaching the eave of an experimental house was recorded. Malaria may be the worst killer in history — some estimates suggest it has killed half of the people who have ever lived — but this map, using the latest data, shows a surprising and encouraging result. Free distribution of insecticide-treated bed nets is preferred to partial subsidization in malaria-endemic areas of Kenya. Sleeping under an insecticide-treated net (ITN) is the most widely adopted preventive measure against malaria. ITNs are effective because, in the majority of malaria-endemic regions of the world, the female mosquito that transmits malaria only bites at night — ITNs, and in particular long-lasting insecticidal nets (LLINs)…
Malaria, a disease carried by mosquitoes, accounts for 800 child deaths a day in Africa - primarily in the sub-Saharan region. An insecticide-treated mosquito net is a simple and effective way to protect and save precious young lives. Buy bed nets to protect children and families around the world from malaria-infected mosquitoes. Insecticide-treated nets (ITNs) for malaria control are widespread, but coverage remains inadequate. We developed a Bayesian model using data from 102 national surveys, triangulated against delivery data and distribution reports, to generate year-by-year estimates of four ITN coverage indicators, and we explored the impact.
A bite from one malaria-infected mosquito can be lethal. In developing countries the disease kills nearly 800,000 people every year, more than 90 percent of them children. A $10 bed net treated with a natural insecticide is one of the best forms of prevention. We also train and equip church workers to help their communities. Have bed nets lost their power to protect people from malaria-carrying mosquitoes? That's the subject of debate among researchers looking for ways to cut down on malaria cases and deaths. Over the past two decades, the insecticide-treated bed net has been one of the most powerful tools against malaria.
Bed nets have cut the spread of malaria, but mosquitoes are evolving resistance to them by changing their behavior. Insecticide-treated bed nets and curtains for preventing malaria (archived), 2004.
With that always weighing on his mind, Mwewa Ndefi gets up at dawn, just as the first orange rays of sun are beginning to spear through the papyrus reeds, and starts to unclump a mosquito net. Nets like his are widely considered a magic bullet against malaria — one of the cheapest and most effective… Assessing the effective use of mosquito nets in the prevention of malaria in some parts of Mezam division, Northwest region, Cameroon. Ngum Helen Ntonifor and Serophine Veyufambom. Malaria Journal 2016, 15:390. https://doi.org/10.1186/s12936-016-1419-y © The Author(s) 2016. In Africa, some malaria-carrying mosquitoes have found ways to survive exposure to insecticides. This means that bed nets treated with these chemicals may become less effective at preventing malaria. A new study we've published in PNAS shows that although these resistant mosquitoes don't die… Describes a shift in guidance on malaria prevention through the use of insecticide-treated nets (ITNs). The WHO/GMP calls upon national malaria control programmes and their partners involved in insecticide-treated net interventions to purchase only long-lasting insecticidal nets (LLINs). LLINs are designed to maintain…
Monday, July 18, 2016
IDM Explores Potential Cost-Effective Vaccine Campaign
Cholera is a disease that affects millions in the developing world each year. Untreated, severe dehydration from cholera can kill a person in less than a day. In the 1960s, it was found that inexpensive oral rehydration solution could usually save the lives of those with severe diarrhea. However, when cholera outbreaks strike unprepared and vulnerable populations, as happened in Haiti in 2010, death tolls can be high. The outbreak in Haiti, which continues to this day, renewed interest in using vaccines against cholera.
Bangladesh has a high burden of diarrheal disease, and cholera is responsible for a substantial number of the cases with severe dehydration. Many parts of Bangladesh have predictable seasonal outbreaks. The use of oral cholera vaccine (OCV) offers a possible solution to help reduce and potentially prevent these outbreaks until there is a sustainable way for these populations to have access to clean water and high-quality sanitation. IDM, in collaboration with colleagues from other institutions, is in the process of preparing a report that will estimate the costs and benefits (reduced cholera cases and deaths) of mass vaccination in Dhaka. Mathematical modeling will be used to estimate how effective mass vaccination can be in reducing the burden of cholera.
Cholera vaccine has only moderate efficacy, around 65%. However, if you can vaccinate a large proportion of a vulnerable population, even unvaccinated people benefit from herd protection. If enough people are vaccinated, about 50-70%, cholera transmission could slow to a trickle. Estimating the amount of herd protection a vulnerable population receives from mass vaccination can be difficult, and mathematical modeling may be required. However, these kinds of calculations are needed to estimate the full benefit of large-scale public health interventions against infectious diseases. IDM’s efforts in Bangladesh and elsewhere will help countries decide how to allocate scarce resources to public health problems.
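As a rough illustration of why coverage in that range can be enough, the standard textbook critical-vaccination threshold (not IDM's actual model) relates coverage to the basic reproduction number R0 and vaccine efficacy e; assuming, for illustration only, R0 ≈ 1.8 and the quoted efficacy e = 0.65:

```latex
p_c = \frac{1 - 1/R_0}{e}
    = \frac{1 - 1/1.8}{0.65}
    \approx \frac{0.44}{0.65}
    \approx 68\%
```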
What do the Rights of Nature have to do with Indigeneity?
The idea that a feature of nature, like a river, is a living being might seem like a strange concept to some, but it is nothing new to Indigenous and other traditional peoples around the world. While the Western philosophical system is underpinned by the idea that man is separate from nature and in dominion over it, Indigenous philosophical systems tend to conceive of humans as a part of nature, often in a stewardship role to help maintain its balance.
The “Rights of Nature,” which codifies this Indigenous philosophy, has been in the news lately. On March 15, the New Zealand Parliament passed the Te Awa Tupua Bill, which granted the Whanganui River the rights of legal personhood. Less than a week later, on March 20, the Ganges and Yamuna Rivers in India were also granted legal personhood status. Should these rights be threatened by human activity, legal cases can be brought before a court on behalf of these rivers to uphold their rights.
“Recognition of personhood rights are an important step forward toward the recognition of the full rights of the rivers to be healthy, natural ecosystems. Such rights would include the rights of the rivers to pure water, to flow, to provide habitat for river species, and other rights essential to the health and well-being of these ecosystems.”
For more, read CELDF’s press statement on the Uttarakhand High Court decision.
Though the political and historical contexts underpinning each policy decision are different, all three rivers share these basic features in common:
- The local peoples have a deep spiritual connection with the rivers, and consider them as living entities.
- Since colonization, these rivers have been highly polluted by toxic chemicals released into them from farming and industry.
- Local people’s ability to steward the rivers has been violated through commercial interference (pollution, diverting water, over-fishing, etc.).
Because the Whanganui decision influenced the Ganges and Yamuna ruling, it’s important to understand the role that New Zealand’s Indigenous Maori people played in catalyzing the legal personhood status of rivers (the Whanganui directly, and the Ganges and Yamuna indirectly). In a brilliant turn of events, Maori claimants turned the Western legal system, essentially formed to protect the property of the “haves,” on its head, wielding it to recognize the Rights of Nature.
By granting the Whanganui River the rights of legal personhood, the Te Awa Tupua Bill affirmed the Maori relationship with the river as a life force of its own, a spiritual place of cleansing and renewal that must be protected for the sake of its own existence. If this seems like a radical idea to pass through a national governing body, it is. The bill came on the heels of the 2016 Whanganui River Settlement between the Crown and the Whanganui Iwi [Maori tribe]. Marking the longest running legal case in New Zealand history, the case closed 148 years after Maori made a settlement claim for land surreptitiously alienated from them. As part of the settlement, the Whanganui River was granted full legal rights, $30 million was provided to restore its health, and $80 million in redress was granted to the Iwi.
The testimonies of countless Maori helped the court to understand that the river and the people are inexorably intertwined, and that the process of colonization disrupted both the river and the people’s health, well-being and ability to survive. Because the river and nearby lands were wrested from Maori stewards in a less-than-legal manner (and we can’t go back in time), the settlement became a symbolic means to remediate the irreparable damage done – the lives lost, systems disrupted, and cycles broken involving the river and all that depends upon it. To learn more, check out the trailer for the moving documentary film, Te Awa Tupua – Voices from the River (2014).
We are witnessing a shift in global consciousness
Indigenous peoples’ understanding of the Rights of Nature is a part of our worldviews, reflected in our languages, songs, art, traditional economies, and customary law. But you don’t have to belong to an Indigenous tribe or practice Hinduism to understand that nature has a right to exist.
Rights of Nature is a growing movement of like-minded allies from every culture, background and walk of life, as our colleagues at CELDF have proven through their work with communities around the world. Nearly ten years before the Whanganui, Ganges and Yamuna rivers gained the rights of personhood, Ecuador and Bolivia adopted Rights of Nature provisions into their constitutions. In the US, more than three dozen communities have now enacted Rights of Nature laws, with communities now joining together in several states to drive such rights through state constitutional amendments.
Until we are able to shift mainstream perceptions of nature from something to be exploited to something to be protected for the benefit of generations to come, this strategy could at least buy us time to protect parts of the planet from immediate and specific threats. Inspired by the Ho-Chunk Nation, the first US tribe to vote to amend their tribal constitution to include the Rights of Nature, the Bioneers Indigeneity Program is partnering with CELDF to share this groundbreaking legal strategy with tribal partners. We feel that, with our deep knowledge of natural systems combined with sovereign legal status, Native American peoples are in a unique position to advance the Rights of Nature. Stay tuned for more updates from the Indigeneity team!
Cara Romero (Chemhuevi) and Alexis Bunten (Aleut/Yup’ik)
Bioneers Indigeneity Program
WALT: make and talk about 2D shapes by using symmetry
- identify 2D shapes by name
- name 2-dimensional shapes: triangle, square, oblong (non-square rectangle), circle, oval, pentagon, hexagon and diamond
- describe shape attributes in their own language
- explore and describe faces, edges, and corners of 2D and 3D objects
- make, name and describe polygons and other plane shapes
- describe the process of making shapes with line symmetry.
- name common two-dimensional mathematical shapes
Chemistry - the Complete Course Series teaches students quantitative reasoning to solve chemistry problems. The 30-volume series covers all aspects of chemistry, from density to the mole, to molarity, stoichiometry and equilibrium. It demonstrates how to ask meaningful questions and conduct careful investigations to build knowledge of chemical and physical concepts and develop the skills to solve mathematical and scientific problems. The metric system of measurement has a long historical background, and grew out of a need for standard and reproducible measurements for both science and commerce. The metric system has at least two distinct advantages over the English system of measurement. In 1960, the modern SI system of measurement was established. This volume teaches that seven base units are required, that all other scientific units can be derived from combinations of these seven base units, and that conversions between metric units can be easily accomplished by either moving a decimal point or multiplying by some power of ten.
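For example (an illustration of the decimal-shift idea, not an excerpt from the series itself):

```latex
2.5\ \text{km} = 2.5 \times 10^{3}\ \text{m} = 2500\ \text{m},
\qquad
47\ \text{mg} = 47 \times 10^{-3}\ \text{g} = 0.047\ \text{g}
```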
The Practical Fruits of Early Music Exposure
In last week’s post we discussed the Magic of Music (if you missed it, go here), and in this post, we wanted to list the fascinating benefits that music has on various areas of your child’s development.
- Music changes the way neurons fire in your brain; after approximately 10 minutes of listening to a piano sonata, your brain displays more orderly and efficient electrical impulses.
- Music allows the brain to prepare itself to perform complex algebra problems.
- The patterns found in music are similar to the patterns found in mathematics.
- Both music and mathematics follow logical order and sequence.
- Music increases spatial reasoning skills, which are needed for understanding mathematics.
- The rhythm recognition within music helps with the ability to recognise syllables.
- Pitch awareness in music assists with phonetic awareness in reading.
- Auditory conceptualisation (the ability to ‘hear’ a song in your mind) assists when children start to read (by ‘hearing’ words in their minds).
- The repetitive nature of the words in songs increases vocabulary which aids with understanding what you’re reading.
- Music aids with building the ability to remember.
- The patterns and predictability of music builds intellectual development.
- Music helps to encourage goal orientation.
Social and Emotional Development:
- Predictability of music and repetition increases feelings of security – especially with younger babies.
- Enjoying music develops ‘joint attention’ and allows parent and child to share a moment together.
- Music teaches imitation – especially when it comes to songs that have movements or specific sounds.
- Music sets an ’emotional tone’, allowing your baby to experience various emotions such as excitement, calmness, sleepiness, sadness, happiness etc.
- Music is a form of self-expression and fosters joy.
While actively participating in music (such as dancing, clapping and singing) builds more neural pathways than listening alone, going a step further and teaching music to your child is even more beneficial. Many studies have confirmed that children who formally learn music do better at school and receive better grades.
Music really is an amazing and magical gift to give your children – the love for which is outshone only by its amazing brain-building qualities.
Words: Loren Stow
when we know better… we do better
If you would like to be notified of all new posts via email, please send an email to [email protected]
Get ready for more “extreme” El Niños
Batten down the worldwide hatches. Scientists say baby Jesus’ meteorological namesake will become a thundering hulk more often as the climate changes.
The latest scientific projections for how global warming will influence El Niño events suggest that wild weather is ahead. El Niño starts with the arrival of warm water in the eastern Pacific Ocean, and it can culminate with destructive weather around the world. It was named by Peruvian fishermen after the infant Jesus because the warm waters reached them around Christmas.
We’ve previously told you that El Niños appear to be occurring more frequently as the climate has been changing. The authors of the latest paper on this subject, published Sunday in the journal Nature Climate Change, don’t project that El Niños will become more common in the future. What they do project, though, is that twice as many El Niños will be of the “extreme” variety.
Extreme El Niños happened in the early 1980s and again in the late 1990s when surface water temperatures in the eastern Pacific Ocean shot up, triggering global weather pandemonium. Here’s a reminder of what that was like, taken from the new paper:
Catastrophic floods occurred in the eastern equatorial region of Ecuador and northern Peru, and neighbouring regions to the south and north experienced severe droughts. The anomalous conditions caused widespread environmental disruptions, including the disappearance of marine life and decimation of the native bird population in the Galapagos Islands, and severe bleaching of corals in the Pacific and beyond. The impacts extended to every continent, and the 1997/98 event alone caused US$35–45 billion in damage and claimed an estimated 23,000 human lives worldwide.
Jeez, that was a pretty horrible reminder. What’s worse than being reminded of past such disasters, though, is imagining more of them in the future — and that’s just what authors of this paper say we should be doing.
After aggregating the findings of different climate simulations, the scientists found that “the total number of El Niño events decreases slightly but the total number of extreme El Niño events increases.”
The slight decrease in the frequency of El Niños detected by the models wasn’t statistically significant, meaning there’s considerable uncertainty over whether such a decrease would actually occur. But the increase in extreme such events was statistically significant. That means that if the researchers’ models produced accurate simulations, we could start to expect extreme El Niños once every decade by the end of the century.
“Potential future changes in such extreme El Niño occurrences could have profound socio-economic consequences,” the scientists warn in their paper.
Increasing frequency of extreme El Niño events due to greenhouse warming, Nature Climate Change.
Would it be possible to make a microscope on the island using just a basic array of wire, glassware, kitchen utensils and a car battery, with not a clean glass lens in sight?
How does a microscope work?
Essentially, a microscope is just a lens. You can see the principle quite easily by looking through a tiny drop of water. Try balancing a water droplet on a 2mm hole made in cardboard and looking through it. You'll have to get your eye very close - and the object you're looking at very close too - but it magnifies surprisingly well.
Why does glass work better than water at magnifying things?
Glass has a larger refractive index than water, so light travelling through a sphere of glass is bent much more than when light travels through a sphere of water of the same size. So it can magnify it better.
Why use a sphere?
A sphere has a highly curved surface, which makes for a very powerful lens. The diameter of the sphere determines the magnification - the smaller the diameter, the greater the magnification.
Who invented the microscope?
Strong lenses have been used since antiquity to examine tiny objects. One of the earliest uses of a simple microscope was by Antony van Leeuwenhoek (1632-1723), in around 1680.
Leeuwenhoek was a Dutch fabric merchant who used little "glass pearls" to examine the textiles in detail. Leeuwenhoek began to observe everything around him from saliva to pond water to beer. He discovered many micro-organisms and was the first person to describe bacteria, blood cells and sperm cells.
To obtain ever-increasing magnifications, Leeuwenhoek worked on smaller and smaller lenses, finally reaching 1-2 mm diameter lenses. Such small and powerful lenses are difficult to handle and focus: you have to keep the instrument very close to your eye and look directly through the tiny lens.
Would it be possible to make a simple microscope on the island?
Despite the fact that there were no clean, unscratched lenses on the island, there is a very simple microscope you can make using a tiny, ~2mm diameter ball of glass. To get lenses of high quality, the spherical balls need to stay small.
The reason why the glass beads are spherical instead of tear-drop shaped is due to surface tension. With small enough blobs of melted glass, the force of surface tension keeps the balls round. Once the balls get too big, gravity starts to deform the spheres into drop shapes.
WARNING: It is essential that you wear eye protection when using a flame and working with glass.
- We heated the central part of the glass rod until it looked like it had softened.
- We removed it from the flame and pulled it firmly apart until it was very thin.
- We broke the thread using tweezers.
- We held one of the threads horizontally in the flame until it began to melt, forming a little ball.
- We rotated the thread in the flame until the ball reached 1.5 to 2mm in diameter.
- We then removed the thread from the flame and let the ball cool. When cool, we broke off the thread about 10mm from the little ball using the tweezers. We used the tail to glue the lens in its seat.
- It took us a few tries to get a glass bulb without air bubbles or other imperfections.
How can we determine the magnifying power?
The smaller the size of the sphere, the greater its magnifying power.
This equation is used: I = 333/d, where I is the magnifying power and d is the diameter of the sphere in mm.
Therefore, we can work out the magnification of these spheres:
A sphere of 1.66mm in diameter = 200X
A sphere of 3.33mm in diameter = 100X
A sphere of 1.11mm in diameter = 300X
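The rule is easy to check with a few lines of code; this minimal sketch (the function name is ours) simply evaluates I = 333/d for the diameters listed above:

```python
def magnifying_power(diameter_mm):
    """Approximate magnifying power of a glass-sphere lens: I = 333 / d."""
    return 333 / diameter_mm

# Reproduce the figures quoted above, rounded to the nearest ten.
for d in (3.33, 1.66, 1.11):
    print(f"A sphere of {d} mm in diameter = {round(magnifying_power(d), -1):.0f}X")
```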
How do we view objects through the glass bead?
As the bead is so small it needs to be put inside housing apparatus: on the island, we used a saucepan and a wooden base. A mirror was used to shine light onto the sample to illuminate it.
Since the sphere can only focus at a short distance the objects we examined had to be thin. We put the samples - such as onion cells or human hair - onto a piece of glass and held them in position with a couple of drops of water.
We used water and a cover slip and put the sample under the lens so that it was almost touching.
We incorporated an adjustment mechanism to focus the microscope.
We proved that it's possible to create a microscope with 200X magnification from just a glass rod, a saucepan and a spirit burner!
Kathy and her microscope
Water Microscopes - by Mike Dingley, Australia on the Microscopy UK website
Microscopy UK - the home of Popular Microscopy on the Web
A Glass-Sphere Microscope - the Fun Science Gallery site
Anthony van Leeuwenhoek - from the University of California Museum of Paleontology site
Cell Diversity - from the Biology 150 Laboratory Review, University of North Dakota site
Use of Microscopes and Creation of Slides - from the Michigan Tech Mathematical Sciences site
Advanced Physics by Tom Duncan, John Murray
As educators become increasingly aware of the prevalence and harm of bullying, there have been major conferences, school-wide programs, and legislation in 47 states intended to curtail it. But a recent study suggests how simple exercises in the classroom, involving just small groups of students at a time, may also have a positive impact.
In the study, researchers gave surveys to 217 students in grades three through five, measuring how much the students liked to cooperate or compete with their peers, and how often they acted with aggression or kindness toward them. The students also reported how often their teachers put them in small groups to complete assignments together, a classroom strategy known as “cooperative learning” because the students have to collaborate with one another in order to get their work done.
The results, published in the Journal of Applied Social Psychology, suggest that cooperation begets cooperation: Students who participated in more cooperative learning exercises were more likely than their peers to say they liked cooperating with other students, leading the researchers to conclude that “cooperative experiences promote the development of the personality trait of cooperativeness.”
What’s more, students who engaged in more frequent cooperative learning were also more likely to report performing kind, helpful—or “pro-social”—behavior toward their classmates.
On the other hand, students who said they liked competing were significantly more likely to act aggressively toward their peers and try to do them harm.
Students who cooperate with each other are not just more likely to do well on their shared projects, say the researchers. Prior studies suggest that participating in cooperative projects leads to positive relationships and greater psychological health. On the other hand, they report, being competitive is associated with bullying, and bullies tend to be more sad, lonely, and anxious.
Based on their results, the researchers advocate more cooperative learning in classrooms as a way to promote positive behaviors and combat bullying (which they dub “harm-intended aggression”).
“Cooperative learning experiences may be used to increase students’ cooperative predispositions,” they write. “Doing so will increase student engagement in pro-social behaviors and will reduce the incidence of harm-intended aggression among students.”
Small steam plume rises from a cinder cone within the summit caldera of Mount Veniaminof, Alaska. The large pit in the ice formed when lava (dark area) flowed beneath the ice and melted it. (Photograph by M.E. Yount.)
Mount Veniaminof. Mount Veniaminof is a massive composite volcano with a summit caldera about 8 kilometers in diameter. Since its formation about 3,700 years ago, the caldera has filled with ice to a depth of at least 60 meters. Between June 1983 and January 1984, a series of small explosions, lava fountains, and lava flows erupted from a small cinder cone within the caldera. The explosions hurled molten lava from the cinder cone, and lava flows melted a pit about 1.5 kilometers in diameter in the ice near the base of the volcano. Water from the melting ice formed a temporary lake.
Mount Spurr. The summit cone of Mount Spurr consists of a large lava dome built in the center of a horseshoe-shaped crater formed earlier by a large landslide. At the southern edge of this ancient crater is a younger, more active cone known as Crater Peak. Scientists have determined that Crater Peak is the source for at least 35 ash layers found in the Cook Inlet area, all of which were erupted in the past 6,000 years. Until recently, a warm turquoise-colored lake partially filled its crater.
See MIT News Item for the full story.
Excerpt: CAMBRIDGE, MA -- …Researchers at MIT, the University of California at Santa Cruz and other institutions have detected the first exoplanetary system, 10,000 light years away, with regularly aligned orbits similar to those in our solar system. At the center of this faraway system is Kepler-30, a star as bright and massive as the Sun. After analyzing data from NASA’s Kepler space telescope, the MIT scientists and their colleagues discovered that the star — much like the sun — rotates around a vertical axis and its three planets have orbits that are all in the same plane.
“In our solar system, the trajectory of the planets is parallel to the rotation of the sun, which shows they probably formed from a spinning disc,” says Roberto Sanchis-Ojeda, a physics graduate student at MIT who led the research effort. “In this system, we show that the same thing happens.”
Their findings, published today in the journal Nature, may help explain the origins of certain far-flung systems while shedding light on our own planetary neighborhood.
“It’s telling me that the solar system isn’t some fluke,” says Josh Winn, an associate professor of physics at MIT and a co-author on the paper. “The fact that the sun’s rotation is lined up with the planets’ orbits, that’s probably not some freak coincidence.”
…Hot Jupiters’ orbits are typically off-kilter, and scientists have thought that such misalignments might be a clue to their origins: Their orbits may have been knocked askew in the very early, volatile period of a planetary system’s formation, … But to really prove this “planetary scattering” theory, Winn says researchers have to identify a non-hot Jupiter system, one with planets circling farther from their star. If the system were aligned like our solar system, with no orbital tilt, it would provide evidence that only hot Jupiter systems are misaligned, formed as a result of planetary scattering.
…In order to resolve the puzzle, Sanchis-Ojeda looked through data from the Kepler space telescope, … on Kepler-30, a non-hot Jupiter system with three planets, all with much longer orbits than a typical hot Jupiter. To measure the alignment of the star, Sanchis-Ojeda tracked its sunspots, dark splotches on the surface of bright stars like the sun.
“These little black blotches march across the star as it rotates,” Winn says. … If a planet crosses a dark sunspot, the amount of light blocked decreases, creating a blip in the data dip. …From the data blips, Sanchis-Ojeda concluded that Kepler-30 rotates along an axis perpendicular to the orbital plane of its largest planet. The researchers then determined the alignment of the planets’ orbits by studying the gravitational effects of one planet on another. By measuring the timing variations of planets as they transit the star, the team derived their respective orbital configurations, and found that all three planets are aligned along the same plane. The overall planetary structure, Sanchis-Ojeda found, looks much like our solar system.
The findings from this first study of the alignment of a non-hot Jupiter system suggest that hot Jupiter systems may indeed form via planetary scattering.
It's common knowledge that humans and other animals couldn't survive without oxygen. But scientists are now learning a good deal more about the extent of our evolutionary debt to a substance that was once a deadly poison.
New research at Lawrence Livermore National Laboratory (LLNL) and Boston University shows that many of the complex biochemical networks that humans and other advanced organisms depend on for their existence could not have evolved without oxygen.
"You could call it the 'oxygen imperative,' " said LLNL postdoctoral researcher Jason Raymond. "It's clear that you need molecular oxygen to evolve complex life as we know it."
"Researchers have spent decades putting together maps of how the building blocks of life connect to each other," added Daniel Segrè of Boston University, who holds a joint appointment in LLNL's Biosciences Directorate. "It turns out that whole regions in this map may not have existed without oxygen."
Raymond and Segrè used computer simulations to study the effect of oxygen on metabolic networks – the biochemical systems that enable organisms to convert food and nutrients into life-sustaining energy. Their analysis shows that the largest and most complex networks – those found in humans and other advanced organisms – require the presence of molecular oxygen. The research is reported in the March 24 issue of the journal Science.
"We wanted to look at how the availability of oxygen changed the types of chemical reactions," Raymond said, "both with respect to metabolites (metabolism byproducts) and to the enzymes needed to carry out metabolism."
Raymond and Segrè calculated the number of possible combinations of the thousands of enzymes and chemicals involved in all known metabolic reactions across the tree of life, and came up with a "virtually limitless" number – ten to the 16,536th power. Simulating that many networks would be an impossible task even for LLNL, which houses the world's most powerful supercomputers.
To make the project manageable, the researchers used a statistical technique called Monte Carlo to randomly sample and simulate about 100,000 networks. "We found that all the different types of networks fell into four different clusters of increasing size and connectivity, and in networks within the largest clusters, molecular oxygen was always present," Raymond said.
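To make the approach concrete, here is a minimal, self-contained sketch of Monte Carlo sampling on a toy version of the problem. The reaction set, metabolite names and sample sizes are all invented for illustration; this is not the LLNL data or code.

```python
import random
from collections import defaultdict

def largest_component(edges):
    """Return the size of the largest connected component of an undirected graph."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, best = set(), 0
    for start in graph:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            size += 1
            stack.extend(graph[node] - seen)
        best = max(best, size)
    return best

# A toy "universe" of 200 reactions linking 50 metabolites (purely invented).
random.seed(1)
universe = [(f"m{random.randrange(50)}", f"m{random.randrange(50)}")
            for _ in range(200)]

# Enumerating all 2**200 possible subnetworks is hopeless, so Monte Carlo
# sampling draws random subnetworks and measures their connectivity instead.
samples = [largest_component(random.sample(universe, 60)) for _ in range(1000)]
print("mean size of largest cluster:", sum(samples) / len(samples))
```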
The smaller, simpler networks encompass anoxic, or oxygen-free, pathways common to all life, from single-celled bacteria to the largest mammals. "All this information can be gathered by analyzing the many genomes already sequenced and publicly available," Segrè said.
"Certain processes were essential to the development of the earliest cellular life," added Raymond, "and the most basic, intrinsic reactions organisms need to survive persist today. For example, how we break down glucose (sugar) has been remarkably well conserved for billions of years."
For higher life forms to evolve, however, additional processes were needed, such as the ability to synthesize, or break down, steroids and alkaloids – and those require oxygen. But until about two billion years ago, Earth's atmosphere was mostly carbon dioxide, sulfur dioxide and nitrogen; only about one-tenth of 1 percent was oxygen. The first microorganisms derived their sustenance from amino acids, hydrogen sulfide, organic carbon and similar hard-to-get substances. Not only was oxygen unnecessary, it was toxic to early organic life.
But about 2.2 billion years ago, a remarkable transformation took place. Cyanobacteria, also known as blue-green algae, learned how to do oxygenic photosynthesis – using sunlight, carbon dioxide and water to produce sugar and other carbohydrates, and giving off oxygen as a byproduct. Thanks to the abundance of all three substances, the cyanobacteria thrived, and the oxygen they produced began to fill the ocean and the atmosphere. Today, due to the photosynthetic action of both bacteria and plants, oxygen makes up about 20 percent of the Earth's atmosphere.
"Things starkly changed when cyanobacteria evolved," Raymond said. "The atmosphere became deadly to all the microorganisms that were around at the time. It would have been cataclysmic for life – the existing bacteria either had to retreat (into the deep ocean) or adapt to use oxygen."
Fortunately for us, many organisms did adapt, either on their own or through horizontal gene transfer – when one species, in effect, "steals" a gene and its molecular function from another. An important example of this was the ability of early life to derive energy from oxygen by "capturing" oxygen-using bacteria inside their cells, which ultimately became mitochondria (the cell's energy-generating "power plants"). That adaptation may have marked the beginning of complex life on Earth. What's more, recent research suggests that a sharp rise in atmospheric oxygen about 50 million years ago was the evolutionary boost that enabled mammals to grow in size and ultimately dominate the planet.
"Oxygen is the high-energy reactant that we need to grow into big, complex, multi-cellular organisms," Raymond said, "so life as we know it was kick-started a few billion years ago by the oxygen-producing microbes."
The new findings also may imply that oxygen would be a good proxy for the search for intelligent life elsewhere in the universe. "If you can detect oxygen or ozone in the atmosphere," Raymond said, "that would be a great marker, along with water, for finding a habitable planet."
Raymond and Segrè's findings suggest that additional evolutionary secrets might be uncovered through the study of metabolic networks.
"We will go back and look at the evolutionary history of the development of enzymes and metabolites to see how the process evolved over time," Raymond said. "There's lots more information available to be mined, not just with respect to oxygen but also other contingencies in the evolution of metabolism – for example, how the metabolic networks have changed over time in response to things like vitamins."
"Looking at networks that integrate information from many different organisms," added Segrè, "also may prove to be crucial for understanding the dynamics and evolution of complex ecosystems, such as the microbial communities revealed by metagenomic sequencing."
Source: Lawrence Livermore National Laboratory
Put a fish in a tank with unlimited food and it will gobble until it grows many times its size. This is possible because the fish has a much larger digestive system than it actually needs. But why spend so much energy maintaining all of these guts when fish in the wild don't eat nearly as much? A new analysis of 600 fish populations (including the bluegill), reported online today in Nature, suggests that large guts help fish deal with feast-or-famine conditions in the wild. A digestive system that's two or three times bigger than needed helps these fish gorge on food when they find it and store the calories for times when food is scarce. And, in the long run, that makes hauling around a bunch of guts worthwhile.
During the last ice age a substantial amount of organic carbon was captured in permafrost soils as a result of the decay of plants and animals. As the Earth warms, permafrost soils melt and this old carbon is released into the atmosphere as methane and CO2. If a significant amount of this carbon were to reenter the atmosphere, it would accelerate the rise in atmospheric greenhouse gases. In a recent article, radiocarbon dating of methane bubbles and soil organic carbon from lakes formed by melting permafrost in Alaska, Canada, Sweden and Siberia is combined with remote-sensing measurements of lake growth to estimate the amount of old carbon released from permafrost soils in the Arctic over the past 60 years. Based on these results, it is estimated that during the past 60 years 0.2 to 2.5 gigatonnes (Gt) of permafrost carbon were released as methane and carbon dioxide in the Arctic region. This is much less than the roughly 10 Gt of carbon contributed annually from anthropogenic and other sources.
During the beginning of the last deglaciation, between 17,500 and 14,500 years ago, atmospheric CO2 concentrations began rising from about 190 ppm in glacial times to approximately 270 ppm by the beginning of the present warm period. There is evidence that the rise in CO2 came from an old carbon pool of biological origin. Two possible sources are the Southern Ocean and permafrost soils. A large amount of carbon, of the same order of magnitude as is contained in the atmosphere, was deposited in permafrost soils during the last ice age. In addition, the oceans contain about ten times as much CO2 as the atmosphere. It has been suggested that if melting permafrost did initially contribute to rising CO2 levels during the last deglaciation, the ventilation of CO2 from the oceans must have amplified this effect.
When permafrost melts, methane and other greenhouse gases are emitted. The conversion of old soil carbon deposited during the last ice age to greenhouse gases is difficult to measure in most terrestrial environments because greenhouse gases are also generated by the decomposition of modern plants and animals.
However, the lakes formed by melting permafrost provide a way of measuring the emissions of greenhouse gases from old soils without this complication. When permafrost soil melts, the ground surface collapses and lakes called thermokarst lakes form in the depressions. Permafrost soils thawing beneath thermokarst lakes emit methane, which forms bubbles in the lakes. Ultimately the methane is emitted into the atmosphere. These emissions can be detected and measured. Because of this process, whereby soil containing carbon captured during the last ice age decomposes and emits methane, thermokarst lakes can be used to quantify the relationship between permafrost melting and greenhouse gas release.
In this study, radiocarbon dating was applied to methane in lake bubbles and to soil organic carbon for lakes in Alaska, Canada, Sweden and Siberia. Methane emissions and radiocarbon ages were measured in 60-year thermokarst expansion zones and stable open-water zones of a variety of lake types in Alaska and Siberia spanning different latitudes (63°–71° N), ecosystems, and permafrost types.
Methane forms bubbles in lake sediments. Newly formed bubbles follow escape pathways through sediments. This results in point sources of continuous methane seepage into the water. During the ice-free season, bubbles rise through the water and escape to the atmosphere. In winter, lake ice traps the seep bubbles. The majority of winter bubble methane escapes from the lakes in spring when the ice melts.
Seep density, seep location (determined by GPS), measured flows and samples collected using submerged bubble traps were used to quantify methane emissions. Over 9,000 individual seeps were surveyed in different lakes. This included removing snow from early winter lake ice to expose bubble clusters trapped in ice for seep classification. The radiocarbon age of methane was measured in lake bubbles collected from seeps, background bubble traps and stirred sediments.
Soil profiles adjacent to lakes were sampled in late summer 2008 and 2010 to obtain permafrost soil samples, and their organic carbon density and radiocarbon age were measured. The methane age from lakes was found to be nearly identical to the age of the permafrost soil carbon thawing around them, confirming that the methane comes from an old carbon source.
The researchers then used remote sensing to measure the increase in extent of thermokarst lakes across the Arctic. Aerial photos from the 1950s were overlaid with shorelines identified in modern high-resolution satellite imagery to quantify the increase in thermokarst zones over the past 60 years. Black-and-white aerial photos from circa 1950 were acquired from the USGS EROS Data Center, and high-resolution panchromatic satellite imagery from 2010 from DigitalGlobe satellites.
Geospatial analysis was combined with physical modelling of permafrost thaw to relate rates of methane emission from lakes to the soil carbon inputs in zones that changed from land to water via thermokarst expansion during the past 60 years. Based on an existing model of permafrost thawing beneath these lakes, the volume of permafrost soil that thawed and eroded into lakes over the 60-year observation period was estimated from lake depth measurements taken in the field and the shoreline positions of lakes 60 years ago determined from remote-sensing imagery. Average lake expansion rates were calculated by combining the lake expansion over 60 years, the present water depth at the point where the shoreline stood 60 years ago (measured in the field), and the volumetric ice content of the permafrost surrounding the lakes (taken from past regional studies).
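This volume-to-carbon bookkeeping reduces to a few lines of arithmetic. In the minimal Python sketch below, every number is a placeholder assumption chosen only to show the chain of units, not a value measured in the study:

```python
# Placeholder inputs (assumptions for illustration only)
expansion_area_km2 = 10_000     # pan-Arctic thermokarst expansion over 60 years
mean_thaw_depth_m = 10          # permafrost column thawed beneath new lake area
ice_content = 0.5               # volumetric ice fraction of the permafrost
carbon_density_kg_m3 = 30       # organic carbon per cubic metre of thawed soil
fraction_mineralized = 0.1      # share converted to CH4 and CO2 over 60 years

# Soil volume excludes the melted ground ice, which contributes no carbon
soil_volume_m3 = expansion_area_km2 * 1e6 * mean_thaw_depth_m * (1 - ice_content)
carbon_released_gt = (soil_volume_m3 * carbon_density_kg_m3
                      * fraction_mineralized / 1e12)   # kg -> Gt

print(f"thawed soil volume: {soil_volume_m3:.2e} m^3")
print(f"old carbon released: {carbon_released_gt:.2f} Gt")   # ~0.15 Gt here
```

With these invented inputs the result lands near the low end of the study's 0.2 to 2.5 Gt range, which illustrates how sensitive the estimate is to the assumed depth, ice content and mineralization fraction.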
Based on this analysis, it was estimated that 0.2 to 2.5 gigatonnes (Gt) of old permafrost carbon were released as methane and carbon dioxide in thermokarst expansion zones of pan-Arctic lakes during the past 60 years. For comparison, global carbon emissions from fossil fuel use were 9.795 Gt in 2014. The comparison indicates that over the past 60 years permafrost thawing has not contributed significantly to increases in atmospheric greenhouse gases.
However, it is estimated that 1,400 Gt of organic carbon were captured in permafrost soils through the decay of plants and animals during the last ice age, while the atmosphere currently holds roughly 800 Gt of carbon. If a significant fraction of the carbon captured in permafrost soils were released, it could significantly accelerate the increase in atmospheric greenhouse gas concentrations.
Methane emissions proportional to permafrost carbon thawed in Arctic lakes since the 1950s, Katey Walter Anthony, Ronald Daanen, Peter Anthony, Thomas Schneider von Deimling, Chien-Lu Ping, Jeffrey P. Chanton & Guido Grosse, Nature Geoscience 9, 679–682 (2016) doi:10.1038/ngeo2795. |
Dear Mom and Dad,
We have learned a lot in math class! From graphs to order of operations, here's what we've learned so far this year.
We have learned about bar and line graphs. We used bar graphs for things such as surveys, data, and frequency charts. We used line graphs for things like data over a period of time. For example, stocks would be on a line graph, but favorite type of dog would be on a bar graph. To learn more about these, we had an assignment to find a graph and tell as much as we could about it. It was really fun!
Another thing we learned about was the order of operations, or PEMDAS. PEMDAS stands for Parentheses, Exponents, Multiplication, Division, Addition, and Subtraction. That's the order we do operations in. We even sang a song about it! A problem in parentheses would come before a multiplication problem, but an addition problem would come after division. Brackets [ ] extend parentheses when an expression in parentheses has another set of parentheses inside it. Here's an example of brackets.
[3 + (5 + 5)] + 2
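To evaluate it, you do the inside parentheses first, then the brackets, and then the last addition: [3 + (5 + 5)] + 2 = [3 + 10] + 2 = 13 + 2 = 15.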
Another subject we learned about was probability. Probability is the chance of something happening. For example, if I said that there were ten candies in a bag and five of them were blue and five of them were red, the probability of picking red would be 5/10, or ½. We also learned about two other types of probability, experimental probability and theoretical probability. Theoretical probability is what should happen. If I asked what the chances were of picking a red candy out of a box that had 3 red, 5 blue, and 2 yellow, the theoretical probability is 3/10. Experimental probability is what actually happens. If you picked a random candy from the bag ten times without looking, you might get a different answer. For example, you may have gotten a blue seven times, a red twice and a yellow once.
Math class has been fun! I don't think I've disliked anything. I hope you found this letter useful. I can't wait to see what we do next! |
How is glaucoma detected?
Regular optic nerve checks are the best way to detect glaucoma early. Visit your eye health professional (Optometrist, Ophthalmologist) for an eye examination.
Glaucoma is a complex disease, and no single test can provide enough information to make a diagnosis. A regular eye check-up usually involves screening for glaucoma, and may indicate that further examination is required. It is important to note that glaucoma blindness is irreversible. Early detection is therefore crucial, as treatment can save remaining vision but cannot restore eyesight already lost.
On referral to an eye specialist (ophthalmologist) for a glaucoma assessment, five tests are usually performed. The results of these tests, along with the examination and patient history, inform the diagnosis and help build a management plan.
The following is a brief overview of the five most common glaucoma tests.
Tonometry measures the pressure within the eye. One of the main risk factors for glaucoma is high eye pressure, and the best-known treatment for glaucoma is lowering the eye pressure. Therefore, accurate measurement of eye pressure (tonometry) is essential. The Goldmann applanation tonometer is the most accurate way of measuring eye pressure. The eye is first numbed with drops, and the instrument then gently contacts the front of the eye to take the measurement.
Ophthalmoscopy (with or without Optic Nerve imaging)
Ophthalmoscopy is the visual examination of the optic nerve. Since glaucoma is a disease of the optic nerve, this is a key test. Dilating drops are usually given to enlarge the pupil so that the optic nerve can be seen more clearly. The nerve is examined for signs of glaucoma-related nerve cell loss. This can be done using a tabletop slit lamp or a handheld ophthalmoscope. The appearance of the optic nerve can be documented with a drawing, a photo or an imaging device, so that any worsening of the nerve's appearance can be detected in the future.
Glaucoma initially causes peripheral vision loss that the patient does not notice. This vision loss can be detected with a perimetry test (also known as a visual field test). It involves testing each eye separately with an automated machine that flashes a series of small lights in the periphery to which the patient should react by pressing a button. The perimetry test takes around 3-6 minutes per eye and is repeated 1-2 times a year so that every new test can be compared to the previous ones to look for any worsening.
Gonioscopy is the examination of the intraocular fluid outflow drainage angle. Fluid is constantly being made in the eye and it flows out of the eye at the drainage angle. This test can determine if the high eye pressure is caused by a closed/blocked angle (angle closure glaucoma) or if the angle is open but just not working well (open angle glaucoma). This is important because the management of each sub-type is slightly different. The test involves putting a mirrored lens on the surface of the eye after using a numbing drop, almost like wearing a contact lens.
Pachymetry is a test that measures the thickness of the front window of the eye (the cornea). The eye is numbed with drops, then a contact probe measures the thickness in a few seconds. A very thick or very thin cornea can affect the pressure (tonometry) readings, and a very thin cornea also increases the risk of developing glaucoma.
A diagnosis of glaucoma leads to treatment with medication, and possibly surgical intervention, to preserve vision and quality of life.
Great introductory lesson for geometry!
Practice geometry terms such as lines, line segments, parallel lines and angles. This lesson uses a graphic organizer to help students write the symbol for each term, state a short definition, and create a sketch representing the term. Discussion questions sum up the lesson.
Licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. |
Polish Armed Forces in the East
- This article is about the World War II period. For World War I, see Polish Armed Forces in the East (WWI).
Polish Armed Forces in the East (Polish: Polskie Siły Zbrojne na Wschodzie) (or Polish Army in USSR) refers to military units composed of Poles created in the Soviet Union at the time when the territory of Poland was occupied by both Nazi Germany and the Soviet Union in the Second World War.
Broadly speaking, there were two such formations. The first was the Polish government-in-exile-loyal Anders Army, created in the second half of 1941 after the German invasion of the USSR led to the 30 July 1941 Polish-Soviet Sikorski-Mayski Agreement declaring an amnesty for Polish citizens held captive in the USSR. In 1942 this formation was evacuated to Persia and transferred to the Western Allies, whereupon it became known as the Polish II Corps and went on to fight Nazi German forces in Italy, including at the Battle of Monte Cassino.
Following this, the remaining Polish forces in USSR were reorganised into a Soviet-controlled Polish I Corps in the Soviet Union, which in turn was reorganised in 1944 into the Polish First Army (Berling Army) and Polish Second Army, both part of the Polish People's Army (Ludowe Wojsko Polskie, LWP).
Anders Army: 1941-1942
After the Soviet Union occupied the eastern part of interwar Poland, which by that time had been effectively defeated by the German invasion, the Soviets broke off diplomatic relations with the evacuated Polish government. Diplomatic relations were re-established in 1941 after the German invasion of the Soviet Union forced Soviet premier Joseph Stalin to look for allies. Thus the Sikorski-Mayski Agreement of July 30, 1941, and the subsequent military agreement of August 14 resulted in Stalin agreeing to declare the Molotov-Ribbentrop Pact in relation to Poland null and void, and to release tens of thousands of Polish prisoners-of-war held in Soviet camps. Pursuant to the agreement between the Polish government-in-exile and Stalin, the Soviets granted "amnesty" to many Polish citizens, from whom a military force was formed.
General Władysław Sikorski, the leader of the London-based exiled government of Poland, nominated General Władysław Anders - one of the Polish officers held captive in the Soviet Union - as commander of this new formation. The formation began to organise in the Buzuluk area, and recruitment began in the NKVD camps for Polish POWs. By the end of 1941, 25,000 soldiers (including 1,000 officers) had been recruited, forming three infantry divisions: the 5th, 6th and 7th. In the spring of 1942 the organising formation was moved to the area of Tashkent; the 8th and 9th divisions were also formed that year.
In the second part of 1942, during the German Caucasus offensive (the most notable part of which was the Battle of Stalingrad), Stalin agreed to use the Polish formation on the Middle Eastern front, and the unit was transferred via the Persian Corridor to Pahlevi, Iran. As such, the unit passed from Soviet control to that of the British government and, as the Polish Second Corps, joined the Polish Armed Forces in the West. About 77,000 combatants and 41,000 civilians - former Polish citizens - left the USSR with the Anders Army.
Berling Army: 1943-1945
After the Anders Army left Soviet-controlled territory, and as it became increasingly apparent that the Soviet forces could hold the front against the German invaders without reliance on Western aid (the Lend-Lease Act) or temporary allies (like the Polish government-in-exile), the Soviets decided to assume much greater control over the remaining Polish military potential in the USSR, ignoring the agreements signed with the Polish government-in-exile. Increasing numbers of volunteers were denied the opportunity to enlist in the Polish formations; instead, they were declared Soviet citizens and assigned to the Red Army. Activities of organisations and people loyal to the Polish government-in-exile, particularly the Polish embassy in Moscow, were curtailed and their assets confiscated. Finally, diplomatic relations between the Soviets and the Polish government-in-exile were severed again as news of the Katyn massacre emerged in 1943.
In 1943 the Soviet Union created in Moscow the Union of Polish Patriots (ZPP) as a communist puppet government designed to counter the legitimacy of the Polish government in exile; the ZPP was led by the pro-Soviet Polish communist Wanda Wasilewska.
At the same time a new army was created - the Ludowe Wojsko Polskie (Polish People's Army, LWP). Its first unit, the 1st Polish Infantry Division (1 Dywizja Piechoty im. Tadeusza Kościuszki), was created in the summer of 1943, reaching operational readiness by June/July. In August, the division was enlarged to a corps, becoming the Polish I Corps, commanded by General Zygmunt Berling; other notable commanders included General Karol Świerczewski and Col. Włodzimierz Sokorski. The division with its supporting elements was sent to the Eastern Front in September 1943; the most notable battle of that period was the Battle of Lenino, the first major engagement of the Berling Army. By March 1944 the corps had been strengthened with increasing armoured and mechanised support and numbered over 30,000 soldiers. In mid-March 1944 the corps was reorganised into the Polish First Army. Later Soviet-created Polish army units on the Eastern Front included the Second (1945) and Third Polish Armies (the latter was quickly merged with the Second due to recruitment problems); the smaller formations comprised 10 infantry divisions (numbered 1st to 10th) and 5 armoured brigades. Plans for a Polish Front were considered but dropped, and the Polish First Army was integrated into the 1st Belorussian Front.
These units were led by Soviet commanders, appointed by the Soviets, and fought under Soviet general command (the Second Army, for example, was led by the Soviet general Stanislav Poplavsky). In the air force of these formations 90% of officers and engineers were Soviet, and the situation was similar in armoured formations. In the Polish Second Army Soviets comprised 60% of officers and engineers, and in the First, 40%. Among command staff and trainers the percentage of Soviets was about 70 to 85%. Special political officers, almost exclusively Soviets, oversaw the Polish soldiers. The Soviets also created a political military police, based on thousands of secret informants, called in Polish the Główny Zarząd Informacji Wojska Polskiego.
The First Army entered Poland from Soviet territory in 1944. Under Soviet orders it halted on the Vistula rather than advancing into Warsaw, where the German defence was substantial, with SS units among the defenders. General Zygmunt Berling nevertheless ordered elements of the First Army across the Vistula to support the Warsaw Uprising; the 1st division sustained heavy losses in the crossing. In January 1945, after the Germans had suppressed the uprising, the 1st Army participated in the Soviet Warsaw offensive that finally ended the Nazi occupation of the ruined city. In April–May 1945 the 1st Army fought in the final capture of Berlin.
See also
- Polish Armed Forces in the West
- Polish contribution to World War II
- History of Poland (1939-1945)
- Northern Group of Forces
- Soviet invasion of Poland
- History of Poland (1945–1989)
- Soviet repressions of Polish citizens (1939-1946)
- See telegrams: No. 317 of September 10: Schulenburg, the German ambassador in the Soviet Union, to the German Foreign Office. Moscow, September 10, 1939-9:40 p.m.; No. 371 of September 16; No. 372 of September 17 Source: The Avalon Project at Yale Law School. Last accessed on 14 November 2006; (Polish)1939 wrzesień 17, Moskwa Nota rządu sowieckiego nie przyjęta przez ambasadora Wacława Grzybowskiego (Note of the Soviet government to the Polish government on 17 September 1939 refused by Polish ambassador Wacław Grzybowski). Last accessed on 15 November 2006.
- "In relation to Poland the effects of the pact have been abrogated on the basis of the Sikorski-Maiski agreement".
René Lefeber, Malgosia Fitzmaurice, The Changing Political Structure of Europe: aspects of International law, Martinus Nijhoff Publishers, ISBN 0-7923-1379-8, Google Print, p.101
- Note that as there was no coordination between the Polish Armed Forces in the East and West, both formations shared numbers of some divisions, and divisions numbered 5 to 9 existed both within the Anders Army and Berling's First (1,2,3,4,6) and Second Armies (5,7,8,9,10).
- Soviet Note of April 25, 1943, severing unilaterally Soviet-Polish diplomatic relations online, last accessed on 19 December 2005, English translation of Polish document
- Steven J Zaloga (1982). "The Polish People's Army". Polish Army, 1939-1945. Oxford: Osprey Publishing. ISBN 0-85045-417-4.
- Związek Patriotów Polskich, PWN Encyklopedia, last accessed on 23 March 2006
- Polish historian Paweł Piotrowski on LWP. Institute of National Remembrance, from Internet Archive. Last accessed on 23 March 2006. |
Anatomy and Physiology of Animals/The Skeleton
- 1 Objectives
- 2 The Vertebral Column
- 3 The Skull
- 4 The Rib
- 5 The Forelimb
- 6 The Hind Limb
- 7 The Girdles
- 8 Categories Of Bones
- 9 Bird Skeletons
- 10 The Structure Of Long Bones
- 11 Compact Bone
- 12 Spongy Bone
- 13 Bone Growth
- 14 Broken Bones
- 15 Joints
- 16 Common Names Of Joints
- 17 Locomotion
- 18 Summary
- 19 Worksheet
- 20 Test Yourself
- 21 Websites
- 22 Glossary
After completing this section, you should know:
- the functions of the skeleton
- the basic structure of a vertebra and the regions of the vertebral column
- the general structure of the skull
- the difference between ‘true ribs’ and ‘floating ribs’
- the main bones of the fore and hind limbs, and their girdles and be able to identify them in a live cat, dog, or rabbit
Fish, frogs, reptiles, birds and mammals are called vertebrates, a name that comes from the bony column of vertebrae (the spine) that supports the body and head. The rest of the skeleton of all these animals (except the fish) also has the same basic design, with a skull that houses and protects the brain and sense organs, and ribs that protect the heart and lungs and, in mammals, make breathing possible. Each of the four limbs is made to the same basic pattern. It is joined to the spine by means of a flat, broad bone called a girdle and consists of one long upper bone, two long lower bones, several smaller bones in the wrist or ankle and five digits (see diagrams 6.1, 6.18, 6.19 and 6.20).
Diagram 6.1 - The mammalian skeleton
The Vertebral Column
The vertebral column consists of a series of bones called vertebrae linked together to form a flexible column with the skull at one end and the tail at the other. Each vertebra consists of a ring of bone with spines (spinous process) protruding dorsally from it. The spinal cord passes through the hole in the middle and muscles attach to the spines making movement of the body possible (see diagram 6.2).
Diagram 6.2 - Cross section of a lumbar vertebra
The shape and size of the vertebrae of mammals vary from the neck to the tail. In the neck there are cervical vertebrae with the two top ones, the atlas and axis, being specialised to support the head and allow it to nod “Yes” and shake “No”. Thoracic vertebrae in the chest region have special surfaces against which the ribs move during breathing. Grazing animals like cows and giraffes that have to support weighty heads on long necks have extra large spines on their cervical and thoracic vertebrae for muscles to attach to. Lumbar vertebrae in the loin region are usually large strong vertebrae with prominent spines for the attachment of the large muscles of the lower back. The sacral vertebrae are usually fused into one solid bone called the sacrum that sits within the pelvic girdle. Finally there are a variable number of small bones in the tail called the coccygeal vertebrae (see diagram 6.3).
Diagram 6.3 - The regions of the vertebral column
The Skull
The skull of mammals consists of 30 separate bones that grow together during development to form a solid case protecting the brain and sense organs. The “box” enclosing and protecting the brain is called the cranium (see diagram 6.4). The bony wall of the cranium encloses the middle and inner ears, protects the organs of smell in the nasal cavity, and holds the eyes in sockets known as orbits. The teeth are inserted into the upper and lower jaws (see Chapter 5 for more on teeth). The lower jaw is known as the mandible. It forms a joint with the skull and is moved by strong muscles that allow an animal to chew. At the front of the skull is the nasal cavity, separated from the mouth by a plate of bone called the palate. Behind the nasal cavity and connecting with it are the sinuses. These are air spaces in the bones of the skull which help keep the skull as light as possible. At the base of the cranium is the foramen magnum, translated as “big hole”, through which the spinal cord passes. On either side of this are two small, smooth rounded knobs or condyles that articulate with (move against) the first or atlas vertebra.
Diagram 6.4 - A dog’s skull
The Rib
Paired ribs are attached to each thoracic vertebra, against which they move in breathing. Each rib is attached ventrally by cartilage, either to the sternum or to the rib in front, to form the rib cage that protects the heart and lungs. In dogs one pair of ribs is not attached ventrally at all. These are called floating ribs (see diagram 6.5). Birds have a large expanded sternum called the keel, to which the flight muscles (the “breast” meat of a roast chicken) are attached.
Diagram 6.5 - The rib
The Forelimb
The forelimb consists of: the humerus, radius and ulna, carpals, metacarpals, and digits or phalanges (see diagram 6.6). The top of the humerus moves against (articulates with) the scapula at the shoulder joint. By changing the number, size and shape of the various bones, forelimbs have evolved to fit different ways of life. They have become wings for flying in birds and bats, flippers for swimming in whales, seals and porpoises, fast and efficient limbs for running in horses, and arms and hands for holding and manipulating in primates (see diagram 6.8).
Diagram 6.6 - Forelimb of a dog
Diagram 6.7 - Hindlimb of a dog
The Hind Limb
The hind limbs have a similar basic pattern to the forelimb. They consist of: femur, tibia and fibula, tarsals, metatarsals, digits or phalanges (see diagram 6.7). The top of the femur moves against (articulates with) the pelvis at the hip joint.
Diagram 6.8 - Various vertebrate limbs
Diagram 6.9 - Forelimb of a horse
(Note on diagram 6.9: the long pastern, or proximal phalanx, is P1, and the distal phalanx, or coffin bone, lies within the hoof.) The patella or kneecap is embedded in a large tendon in front of the knee, where it helps smooth the movements of the joint. The legs of the horse are highly adapted to give it great galloping speed over long distances. The bones of the leg, wrist and foot are greatly elongated, and the hooves are actually the tips of the third fingers and toes, the other digits having been lost or reduced (see diagram 6.9).
The Girdles
The girdles pass on the “push” produced by the limbs to the body. The shoulder girdle or scapula is a triangle of bone surrounded by the muscles of the back but not connected directly to the spine (see diagram 6.1). This arrangement helps it to cushion the body when landing after a leap and gives the forelimbs the flexibility to manipulate food or strike at prey. Animals that use their forelimbs for grasping, burrowing or climbing have a well-developed clavicle or collar bone. This connects the shoulder girdle to the sternum. Animals like sheep, horses and cows that use their forelimbs only for supporting the body and locomotion have no clavicle. The pelvic girdle or hipbone joins the hind legs to the sacrum. It transmits the force of the leg-thrust in walking or jumping directly to the spine (see diagram 6.10).
Diagram 6.10 - The pelvic girdle
Categories Of Bones
People who study skeletons place the different bones of the skeleton into groups according to their shape or the way in which they develop. Thus we have long bones like the femur, radius and finger bones, short bones like those of the wrist and ankle, irregular bones like the vertebrae, and flat bones like the shoulder blade and the bones of the skull. Finally there are bones that develop in tissue separated from the main skeleton. These include sesamoid bones, like the patella or kneecap, that develop in tendons, and visceral bones that develop in soft tissue, such as in the penis of the dog and the cow's heart.
Bird Skeletons
Although the skeleton of birds is made up of the same bones as that of mammals, many are highly adapted for flight. The most noticeable difference is that the bones of the forelimbs are elongated to act as wings. The large flight muscles make up as much as 1/5th of the body weight and are attached to an extension of the sternum called the keel. The vertebrae of the lower back are fused to provide the rigidity needed to produce flying movements. There are also many adaptations to reduce the weight of the skeleton. For instance, birds have a beak rather than teeth and many of the bones are hollow (see diagram 6.11).
Diagram 6.11 - A bird’s skeleton
The Structure Of Long Bones
A long bone consists of a central portion or shaft and two ends called epiphyses (see diagram 6.12). Long bones move against or articulate with other bones at joints and their ends have flattened surfaces and rounded protuberances (condyles) to make this possible. If you carefully examine a long bone you may also see raised or rough surfaces. This is where the muscles that move the bones are attached. You will also see holes (a hole is called a foramen) in the bone. Blood vessels and nerves pass into the bone through these. You may also be able to see a fine line at each end of the bone. This is called the growth plate or epiphyseal line and marks the place where increase in length of the bone occurred (see diagram 6.16).
Diagram 6.12 - A femur
Diagram 6.13 - A longitudinal section through a long bone
If you cut a long bone lengthways you will see it consists of a hollow cylinder (see diagram 6.13). The outer shell is covered by a tough fibrous sheath to which the tendons are attached. Under this is a layer of hard, dense compact bone (see below). This gives the bone its strength. The central cavity contains fatty yellow marrow, an important energy store for the body, and the ends are made from honeycomb-like bony material called spongy bone (see box below). Spongy bone contains red marrow where red blood cells are made.
Compact Bone
Compact bone is not the lifeless material it may appear at first glance. It is a living dynamic tissue with blood vessels, nerves and living cells that continually rebuild and reshape the bone structure as a result of the stresses, bends and breaks it experiences. Compact bone is composed of microscopic hollow cylinders that run parallel to each other along the length of the bone. Each of these cylinders is called a Haversian system. Blood vessels and nerves run along the central canal of each Haversian system. Each system consists of concentric rings of bone material (the matrix) with minute spaces in it that hold the bone cells. The hard matrix contains crystals of calcium phosphate, calcium carbonate and magnesium salts with collagen fibres that make the bone stronger and somewhat flexible. Tiny canals connect the cells with each other and their blood supply (see diagram 6.14).
Diagram 6.14 - Haversian systems of compact bone
Spongy Bone
Spongy bone gives bones lightness with strength. It consists of an irregular lattice that looks just like an old-fashioned loofah sponge (see diagram 6.15). It is found on the ends of long bones and makes up most of the bone tissue of the limb girdles, ribs, sternum, vertebrae and skull. The spaces contain red marrow, which is where red blood cells are made and stored.
Diagram 6.15 - Spongy bone
Bone Growth
The skeleton starts off in the foetus as either cartilage or fibrous connective tissue. Before birth and, sometimes for years after it, the cartilage is gradually replaced by bone. The long bones increase in length at the ends, at an area known as the epiphyseal plate, where new cartilage is laid down and then gradually converted to bone. When an animal is mature, bone growth ceases and the epiphyseal plate converts into a fine epiphyseal line (see diagram 6.16).
Diagram 6.16 - A growing bone
Broken Bones
A fracture or break dramatically demonstrates the dynamic nature of bone. Soon after the break occurs, blood pours into the site and cartilage is deposited. This starts to connect the broken ends together. Later spongy bone replaces the cartilage, which is itself replaced by compact bone. Partial healing, to the point where some weight can be put on the bone, can take place in 6 weeks, but complete healing may take 3-4 months.
Joints
Joints are the structures in the skeleton where two or more bones meet. There are several different types of joints. Some are immovable once the animal has reached maturity. Examples are the joints between the bones of the skull and the midline joint of the pelvic girdle. Some are slightly moveable, like the joints between the vertebrae, but most joints allow free movement and have a typical structure with a fluid-filled cavity separating the articulating surfaces (surfaces that move against each other) of the two bones. This kind of joint is called a synovial joint (see diagram 6.17). The joint is held together by bundles of white fibrous tissue called ligaments, and a fibrous capsule encloses the joint. The inner layers of this capsule secrete the synovial fluid that acts as a lubricant. The articulating surfaces of the bones are covered with cartilage that also reduces friction, and some joints, e.g. the knee, have a pad of cartilage between the surfaces that articulate with each other.
The shape of the articulating bones in a joint and the arrangement of ligaments determine the kind of movement made by the joint. Some joints allow only a to-and-fro gliding movement, e.g. between the ankle and wrist bones; the joints at the elbow, knee and fingers are hinge joints that allow movement in one plane; and the axis vertebra pivots on the atlas vertebra. Ball and socket joints, like those at the shoulder and hip, allow the greatest range of movement.
Diagram 6.17 - A synovial joint
Common Names Of Joints
Some joints in animals are given common names that tend to be confusing. For example:
- The joint between the femur and the tibia on the hind leg is our knee but the stifle in animals.
- Our ankle joint (between the tarsals and metatarsals) is the hock in animals
- Our knuckle joint (between the metacarpals or metatarsals and the phalanges) is the fetlock in the horse.
- The “knee” on the horse is equivalent to our wrist (i.e. on the front limb, between the radius and metacarpals); see diagrams 6.6, 6.7, 6.8, 6.17 and 6.18.
Diagram 6.18 - The names of common joints of a horse
Diagram 6.19 - The names of common joints of a dog
Locomotion
Different animals place different parts of the foot or forelimb on the ground when walking or running.
Humans and bears put the whole surface of the foot on the ground when they walk. This is known as plantigrade locomotion. Dogs and cats walk on their toes (digitigrade locomotion) while horses and pigs walk on their “toenails” or hoofs. This is called unguligrade locomotion (see diagram 6.20).
- Plantigrade locomotion (on the “palms” of the hands) as in humans and bears
- Digitigrade locomotion (on the “fingers”) as in cats and dogs
- Unguligrade locomotion (on the “fingernails”) as in horses
Diagram 6.20 - Locomotion
Summary
- The skeleton maintains the shape of the body, protects internal organs and makes locomotion possible.
- The vertebrae support the body and protect the spinal cord. They consist of: cervical vertebrae in the neck, thoracic vertebrae in the chest region which articulate with the ribs, lumbar vertebrae in the loin region, sacral vertebrae fused to the pelvis to form the sacrum and tail or coccygeal vertebrae.
- The skull protects the brain and sense organs. The cranium forms a solid box enclosing the brain. The mandible forms the jaw.
- The forelimb consists of the humerus, radius, ulna, carpals, metacarpals and phalanges. It moves against or articulates with the scapula at the shoulder joint.
- The hindlimb consists of the femur, patella, tibia, fibula, tarsals, metatarsals and digits. It moves against or articulates with the pelvis at the hip joint.
- Bones articulate against each other at joints.
- Compact bone in the shaft of long bones gives them their strength. Spongy bone at the ends reduces weight. Bone growth occurs at the growth plate.
Worksheet
Use the Skeleton Worksheet to learn the main parts of the skeleton.
Test Yourself
1. Name the bones which move against (articulate with)...
- a) the humerus:
- b) the thoracic vertebrae:
- c) the pelvis:
2. Name the bones in the forelimb:
3. Where is the patella found?
4. Where are the following joints located?
- a) The stifle joint:
- b) The elbow joint:
- c) The hock joint:
- d) The hip joint:
5. Attach the following labels to the diagram of the long bone shown below.
- a) compact bone
- b) spongy bone
- c) growth plate
- d) fibrous sheath
- e) red marrow
- f) blood vessel
6. Attach the following labels to the diagram of a joint shown below
- a) bone
- b) articular cartilage
- c) joint cavity
- d) capsule
- e) ligament
- f) synovial fluid.
Websites
- http://www.infovisual.info/02/056_en.html Bird skeleton
A good diagram of the bird skeleton.
A great introduction to the mammalian skeleton. A little above the level required, but it has so much interesting information that it's worth reading.
- http://www.klbschool.org.uk/interactive/science/skeleton.htm The human skeleton
Test yourself on the names of the bones of the (human) skeleton.
Quite a good article on the different kinds of joints with diagrams.
- http://en.wikipedia.org/wiki/Bone Wikipedia
Wikipedia is disappointing where the skeleton is concerned. Most articles stick entirely to the human skeleton or have far too much detail. However this one on compact and spongy bone and the growth of bone is quite good although still much above the level required. |
California Institute of Technology researchers successfully combined up to 12 different DNA logic gates in five cascading levels, although the process takes hours, they report in the December 8 Science.
A group of so-called logic gates performs each operation. In the A AND C operation, for example, one gate would consist of strand B intertwined with strand A', which prefers strand A to strand B. When researchers introduce A into a test tube containing this gate, A' exchanges B for A, leaving B floating free.
To yield D as an output, researchers would add strand C to the test tube along with a second logic gate that contains strand D intertwined with two other sequences. One of these sequences latches onto B, the other onto C, and D then floats free, as intended.
The new system can perform relatively complex sequences of operations because it allows the output strand of one operation, such as D, to serve as the input for another logical operation. "The ability to do sophisticated computations relies on the ability to build [these] networks," Winfree says. "We've opened the door to being able to build quite large and complex systems." Other approaches to DNA computing, such as a system that plays tic-tac-toe, rely on gates made from DNA- or RNA-based enzymes, which have not yet proven as capable of turning their own outputs into inputs.
A crucial part of combining so many gates is purifying noisy input signals, Winfree says. In electronic circuits a whole range of voltages, say 0 to 0.5 volt, would all represent a single input. To accomplish the same effect his group designed gates that act as thresholds, soaking up stray strands until they reach a preset concentration. Other gates amplified correct but weak signals by producing more of a given strand.
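A toy Python sketch shows how cascading and thresholding fit together. Each gate here is an object that releases its output strand once its inputs are consumed, and a threshold gate soaks up stray strands. The strand names follow the article's A, B, C, D example, but the helper strand name "A_gate", the copy counts, and the threshold capacity are invented for illustration, not details from the paper.

```python
from collections import Counter

class AndGate:
    """Releases one output strand each time both input strands are consumed."""
    def __init__(self, in1, in2, out):
        self.in1, self.in2, self.out = in1, in2, out

    def react(self, tube):
        fired = False
        while tube[self.in1] > 0 and tube[self.in2] > 0:
            tube[self.in1] -= 1
            tube[self.in2] -= 1
            tube[self.out] += 1
            fired = True
        return fired

class ThresholdGate:
    """Soaks up stray copies of a strand until a preset capacity is used up."""
    def __init__(self, strand, capacity):
        self.strand, self.capacity = strand, capacity

    def react(self, tube):
        absorbed = min(tube[self.strand], self.capacity)
        tube[self.strand] -= absorbed
        self.capacity -= absorbed
        return absorbed > 0

# A two-level cascade mirroring the article's example: A releases B,
# then B AND C releases D. Counts and the threshold are invented.
gates = [
    AndGate("A", "A_gate", "B"),   # first gate: A displaces B
    ThresholdGate("B", 2),         # absorb up to 2 stray copies of B
    AndGate("B", "C", "D"),        # second gate: B and C release D
]

tube = Counter({"A": 5, "A_gate": 5, "C": 5})
while any(g.react(tube) for g in gates):   # iterate until no gate can fire
    pass
print(tube)
```

Running it leaves three copies of D in the tube: D appears only because B, the output of the first gate, went on to serve as an input to the second, which is exactly the output-as-input wiring the article describes.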
More on Nadrian Seeman DNA arms:
Nadrian Seeman and Baoquan Ding of New York University inserted specially designed DNA cassettes into gaps in a DNA array; each cassette contains a flipper that swivels from a fixed point on the cassette. Each flipper can project from the array's surface in one of two different directions, depending on input strands of DNA that are added to the cassettes.
For now the flippers, about 100 total per array, all swivel identically in unison like windshield wipers, but in principle they could be oriented in other ways and controlled individually by specific input strands, Seeman says.
1 Remove the saddle from (a horse or other ridden animal).
- ‘I unsaddled the horse and tied it near lots of grass.’
- ‘Jesse returned just as they were unsaddling the horses back at the ranch.’
- ‘‘A pretty story,’ he said flippantly as he unsaddled his horse and threw saddle and bridle to the ground.’
- ‘Thankfully, no one had unsaddled their horses because a moment later, the officers came galloping back in haste.’
- ‘Next, the horses are unsaddled and judged at halter in ranch conformation.’
- ‘The drover got to work unhitching the oxen, and the horsemen unsaddled their horses and led them to the trees and hobbled them.’
- ‘Tracey unsaddled his horse, all the while thinking.’
- ‘As she walked her horse into the stable I followed, she walked her horse in and unsaddled it and offered me a brush.’
- ‘She unsaddled the horse and tied it to a tree where it could graze.’
- ‘The blizzard hit while they were unsaddling the horses.’
- ‘A piece of lead weight from a saddle had been found dropped on the ground, so all the horses had to be unsaddled and the jockeys reweighed.’
- ‘Inside stood a single man unsaddling his horse.’
- ‘Dan patted his horse on the flank and unsaddled it, then took off its bridle.’
- ‘They reached the stables and each person either began unpacking or unsaddling their horse, and some both.’
- ‘The boy did not say a word as they unsaddled the horse in the barn.’
- ‘We got back to barn, unsaddled and groomed our horses, and then went upstairs to clean up and change clothes.’
- ‘The horses are unsaddled, the Winchesters taken from their scabbards for protection through the night.’
- ‘Sullivan unsaddled their horses and dropped the saddles in front of the tent.’
- ‘She stayed with Reese as he unsaddled the horse, she was surprised when he just slapped it on the rump and let it gallop off, wouldn't it run away?’
- ‘I trailed my brother as he unsaddled his horse and hung up the saddle and the tack in the stable.’
- 1.1 Dislodge from a saddle.
- ‘His balance was thrown off, but he was not unsaddled until the horse went about twenty feet more.’
- ‘The first knight to charge unsaddled him.’
- ‘Emily became unsaddled as her pony made its way from rock ledge to rock ledge.’
For decades, scientists and the public alike have wondered why some fireflies exhibit synchronous flashing, in which large groups produce rhythmic, repeated flashes in unison – sometimes lighting up a whole forest at once.
Now, UConn’s Andrew Moiseff, a professor in the Department of Physiology and Neurobiology in the College of Liberal Arts and Sciences, has conducted the first experiments on the purpose of this phenomenon. His results, reported today in the journal Science, suggest that synchronous flashing encourages female fireflies’ recognition of suitable mates.
“There have been lots of really good observations and hypotheses about firefly synchrony,” Moiseff says. “But until now, no one has experimentally tested whether synchrony has a function.”
Moiseff has had an interest in fireflies since he was an undergraduate at Stony Brook University. There he met his current collaborator, Jonathan Copeland of Georgia Southern University, who was a graduate student at the time. When the two graduated, Moiseff moved on to pursue other research interests. But in 1992, Copeland received an enlightening phone call.
“He had commented in a paper that firefly synchrony was rare, and mostly seen in southeast Asia,” says Moiseff. “But a naturalist from Tennessee called him to say that each summer the fireflies at her summer cabin all flashed at the same time.”
Moiseff and Copeland flew down to the Great Smoky Mountains National Park to check out the fireflies and, says Moiseff, they’ve been going back every year since.
Fireflies – which are actually a type of beetle – produce bioluminescence as a mating tool, in which males display a species-specific pattern of flashes while “cruising” through the air, looking for females, says Moiseff. These patterns consist of one or more flashes followed by a characteristic pause, during which female fireflies, perched on leaves or branches, will produce a single response flash if they spot a suitable male.
Of the roughly 2,000 species of fireflies around the world, scientists estimate that about 1 percent synchronize their flashes over large areas. Thousands of male fireflies may blink at once, creating a spectacular light show. In their current study, Moiseff and Copeland wondered what evolutionary benefit this species gains from synchronous flashing.
The two hypothesized that males synchronize to facilitate the females’ ability to recognize the particular flashing pattern of their own species. To test this theory, they collected females of the synchronous species Photinus carolinus from the Smoky Mountains National Park and exposed them in the laboratory to groups of small blinking lights meant to mimic male fireflies. Each individual light produced the P. carolinus flashing pattern, but the experimenters varied the degree to which the flashes were in synch with one another.
“We had the technology to design something that we thought would create a virtual world for these females,” says Moiseff.
Their results showed that females responded more than 80 percent of the time to flashes that were in perfect unison or in near-perfect unison. But when the flashes were out of synch, the females’ response rate was 10 percent or less.
Since synchronous species are often observed in high densities, Moiseff and Copeland concluded that their results point to a physiological constraint on the females' information processing. Male fireflies are typically in flight while searching for females, so their flashes appear in different locations over time. Therefore, says Moiseff, females must be able to recognize visual cues over a large area.
But, he points out, this behavior presents a problem in areas crowded with male fireflies. Instead of seeing a single flying male, the female would see a cluttered landscape of flashes that could be individually unrecognizable.
“When males are flashing in high densities, the female’s inability to focus on just one male would make it very difficult for her to detect her species-specific pattern,” Moiseff says. “So if the males synchronize, it can maintain the fidelity of the signal in the presence of many other males.”
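The signal-fidelity argument can be made concrete with a toy simulation: merge the flash trains of many males and check whether the species' characteristic dark pause survives in the aggregate. This is a sketch, not the authors' model; all timing numbers are invented, loosely echoing the burst-and-pause pattern of P. carolinus.

```python
import random

def merged_flash_train(n_males, jitter_s, n_bursts=3, burst_len_s=3.0, pause_s=8.0):
    """Flash times of n_males repeating a burst-then-pause pattern, offset by jitter."""
    flashes = []
    for _ in range(n_males):
        t = random.uniform(0, jitter_s)          # each male's start offset
        for _ in range(n_bursts):
            flashes += [t + k * burst_len_s / 6 for k in range(6)]  # ~6 flashes/burst
            t += burst_len_s + pause_s
    return sorted(flashes)

def pattern_visible(flashes, min_pause_s=6.0):
    """A female 'recognizes' the pattern only if a long dark pause survives."""
    return any(b - a >= min_pause_s for a, b in zip(flashes, flashes[1:]))

random.seed(1)
for jitter in (0.1, 5.0):                        # tight vs. loose synchrony
    hits = sum(pattern_visible(merged_flash_train(20, jitter)) for _ in range(200))
    print(f"start jitter {jitter:3.1f} s -> pattern visible in {hits / 2:.0f}% of trials")
```

With tight synchrony the dark pause survives in essentially every trial; with loose synchrony the merged flashes fill it in, a pattern that loosely mirrors the drop from over 80 percent female response to 10 percent or less.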
Whether the females can’t or simply choose not to discriminate spatial information on small scales is unclear, says Moiseff. His future research will focus on questions that address whether physiological constraints or behavioral decisions are driving the evolution of synchrony.
Overall, says Moiseff, he is interested in the role that animal physiology plays in shaping evolution.
“Animals have evolved to solve unique problems in many different ways, and I’m interested in how they do that,” he says. “Fireflies have these tiny heads and these tiny brains, but they can do some complex and amazing things.” |
With controlled vocabulary and short, simple sentences, this "Explore the Biomes" series is intended to engage the interest of middle readers with below-expectation reading levels and help them access information about varied ecological communities called "biomes." Each book contains five or six short chapters defining the biome, introducing native plants and animals, exploring the role of humans in that ecology, and then offering a "field guide" (quick facts) and a profile of a scientist who works in one of the biomes being studied. Following this formula, the authors have managed to make the brief text as lively as possible; color photos are generally well selected and of much greater interest than is usual in a series from this publisher. Readers will be able to identify major oceans on a world map and learn about ocean plants like kelp and edible seaweed. Other chapters introduce animals of the shore, shallow water, and deep water; for example, blue crabs, sea otters, and viperfish. Stressing conservation, the chapter on human interaction explains the fragility of oceans and dangers from overfishing, pollution, and destruction of habitats. Especially striking are photos of seaweed farmers harvesting plants for sushi and the close-up of a huge-eyed orange shrimp. Readers will meet Dr. Sylvia Earle, an oceanographer who has designed a deep-sea submersible and writes children's books about ocean creatures. These visually attractive books present biome overviews that should be colorful and appealing enough to inspire further research. Each title contains a glossary, a short bibliography, and an index.
monochromatic light [män·ə·krə′mad·ik ′līt]
an electromagnetic wave of one specific and strictly constant frequency in the frequency range directly perceivable by the human eye. The term “monochromatic light” originated because a person perceives a difference in the frequency of light waves as a difference in color. However, the electromagnetic waves of the visible region do not differ in physical nature from those of other regions (such as the infrared, ultraviolet, and X-ray regions). The term “monochromatic” is also applied to the other regions, although such waves do not produce any perception of color.
The term “monochromatic light” (like “monochromatic radiation” in general) is an idealization. Theoretical analysis shows that the emission of a strictly monochromatic wave would have to continue indefinitely. However, real radiation processes are limited in time, and therefore waves of all frequencies belonging to a certain frequency interval are emitted simultaneously. The narrower the interval, the more monochromatic the radiation. Thus, the radiation of the individual lines of the emission spectra of free atoms (such as the atoms of a gas) is very close to monochromatic light. Each line corresponds to a transition of an atom from a state m (with higher energy) to a state n (with lower energy). If the energies of the states had strictly fixed values E_m and E_n, the atom would radiate monochromatic light with a frequency ν_mn = ω_mn/2π = (E_m − E_n)/h. Here h is Planck’s constant, equal to 6.624 × 10^-27 erg·sec. However, an atom can stay in states with a higher energy only for a short time Δt (usually about 10^-8 sec, called the lifetime at the energy level), and according to the uncertainty principle for the energy and lifetime of a quantum state (ΔEΔt ≥ h), the energy of a state m, for example, can have any value between E_m + ΔE and E_m − ΔE. Because of this, the radiation of each spectral line acquires a frequency “spread” Δν_mn = 2ΔE/h = 2/Δt.
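For a sense of scale, the linewidth formula quoted above can be evaluated numerically. A minimal Python sketch in SI units; the 5 × 10^14 Hz reference is an assumed typical visible-light frequency, not a value from the entry:

```python
h = 6.626e-34        # Planck's constant, J*s (the entry quotes erg*s units)
dt = 1e-8            # typical excited-state lifetime, s

dE = h / dt          # energy uncertainty, from dE * dt ~ h
dnu = 2 * dE / h     # frequency spread, equal to 2/dt as in the entry
nu_visible = 5e14    # assumed visible-light frequency, Hz

print(f"frequency spread: {dnu:.1e} Hz")               # 2.0e+08 Hz
print(f"relative linewidth: {dnu / nu_visible:.1e}")   # 4.0e-07
```

A relative width of about 4 × 10^-7 is why an isolated atomic line counts as “very close to monochromatic light.”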
During emission of light (or of electromagnetic radiation in other bands) by real sources, a set of transitions between different energy states may take place. Therefore, waves of many frequencies are present in such radiation. Instruments used to isolate narrow spectral intervals (radiation that is close to monochromatic) are called monochromators. Extraordinarily high monochromaticity is characteristic of the radiation of certain types of lasers (their spectral interval may be much narrower than that of the lines of atomic spectra).
L. N. KAPORSKII
The Vela pulsar is a neutron star about 12 miles in diameter, spinning at a dizzying 11 times per second, and the brightest and most persistent source of gamma rays in the sky. The pulsar and the supernova remnant were created by a massive star that exploded over 10,000 years ago. The pulsar's rapid rotation and intense magnetization produce tremendously powerful electric and magnetic fields, which go on to accelerate particles in the remnant to nearly the speed of light. In effect, the pulsar is producing a vast, natural particle accelerator.
The wide-angle view of the Vela pulsar and its pulsar wind nebula is shown against a background of clouds, or filaments, of multi-million-degree Celsius gas. These clouds are part of a huge sphere of hot expanding gas produced by the supernova explosion associated with the creation of the Vela pulsar about 10,000 years ago.
As the ejecta from the explosion expanded into space and collided with the surrounding interstellar gas, shock waves were formed and heated the gas and ejecta to millions of degrees. The sphere of hot gas is about 100 light years across, 15 times larger than the region shown in this image, and is expanding at a speed of about 400,000 km/hr.
The pulsar is the subject of one of the most fascinating images ever captured by the Chandra X-ray Observatory, which reveals a striking, almost unbelievable structure consisting of bright rings and jets of matter. Such structures indicate that mighty ordering forces must be at work amidst the chaos of the aftermath of a supernova explosion. These forces can harness the energy of thousands of suns and transform that energy into a tornado of high-energy particles that astronomers refer to as a "pulsar wind nebula."
The Vela pulsar is the collapsed stellar core within the Vela supernova remnant; astronomers have established that the massive star that formed this structure blew up between 11,000 and 12,300 years ago.
More massive than the Sun, it has the density of an atomic nucleus. The pulsar's electric and magnetic fields accelerate particles to nearly the speed of light, powering the compact X-ray emission nebula revealed in the Chandra image.
Source: The Daily Galaxy via Fermi Space Telescope
DIABETES – EPIDEMIC OF THE 21ST CENTURY
Diabetes is a condition in which the pancreas can’t make enough insulin or the body can’t use the insulin it produces efficiently. Insulin helps our bodies use glucose, a sugar that comes from most foods we eat and that our bodies burn for energy. With diabetes, glucose builds up in the blood instead of being taken up by the cells for energy production.
Of those with diabetes, 5% are Type 1 while 95% are Type 2. The more serious Type 1 diabetes is an autoimmune disease usually found in children or young adults. The immune system attacks the insulin-producing cells in the pancreas and destroys them. The pancreas then produces little or no insulin, preventing cells from taking up needed sugar from the blood. Someone with Type 1 diabetes needs daily injections of insulin and a strict diet with regular blood sugar monitoring under physician supervision.
Type 2, or adult onset, is the more common form of diabetes and affects primarily adults over age 55, with about 80 percent being overweight. In Type 2 diabetes, the pancreas usually produces insulin, but for various reasons the body can’t use the insulin effectively, primarily in transferring glucose into the cells. Glucose that is not delivered into the cells where it is needed builds up in the blood, producing unwanted, excessive blood sugar levels.
The symptoms of diabetes include feeling tired or ill, frequent urination, unusual thirst, constant hunger, weight loss, blurred vision, frequent infections, and slow healing of sores.
High risk factors for developing diabetes include:
- Being more than 20 percent above your ideal body weight;
- Having immediate family with diabetes;
- Having high blood pressure of 140/90 or higher;
- Belonging to certain ethnic groups;
- Having low HDL (the good cholesterol) or high triglycerides.
A fasting blood sugar of 70 to 100 mg/dL is now considered normal, with 100 to 125 classified as pre-diabetic and 126 or higher as diabetic; the latter roughly doubles your chance of death. A must in monitoring blood sugar status is the hemoglobin A1c (HbA1c) test, which measures the percentage of glucose molecules clinging to your red blood cells over the past 90 days. A 5.0 level is normal for a non-diabetic. If you have had Type 2 diabetes for an extended time, a level of 7.0 or below is the usual target. If your level is higher, you have too much sugar circulating in your blood and are prediabetic or diabetic.
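To relate an HbA1c percentage to everyday glucose readings, a widely used clinical approximation (the ADAG study formula; my addition, not from this article) can be sketched as follows:

```python
# Estimated average glucose (mg/dL) from an HbA1c percentage,
# using the common ADAG approximation eAG = 28.7 * A1c - 46.7.
def estimated_average_glucose(a1c_percent: float) -> float:
    return 28.7 * a1c_percent - 46.7

for a1c in (5.0, 6.5, 7.0):
    print(f"HbA1c {a1c}% ~ {estimated_average_glucose(a1c):.0f} mg/dL average glucose")
# 5.0% ~ 97 mg/dL (normal); 7.0% ~ 154 mg/dL (typical target for long-time diabetics)
```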
Diabetes treatment focuses on keeping blood sugar in a normal range daily. Your doctor can evaluate whether you need diabetes drugs or insulin shots, but measurement over time is necessary, since an instant reading may capture a momentary spike rather than your overall blood sugar level.
For diet, a low-carbohydrate, restricted-sugar regimen combined with exercise and weight control helps control or diminish your diabetes. Most diabetics should restrict carbohydrate intake to less than 45 grams daily. Don’t skip meals; eat several small ones if your schedule allows. Many prepare small servings and keep them in serving-size containers to be used throughout the day. Weight control is essential and often can dramatically reduce diabetes symptoms, sometimes even producing a cure as judged by blood sugar measurements.
DIABETIC DANGERS IN HEALTH
Diabetes is growing at an almost epidemic rate in the United States, having doubled in just 15 years. How serious is diabetes? Did you know diabetes is now the primary cause of blindness (retinopathy), kidney disease (nephropathy), amputations, and nerve damage (peripheral neuropathy) in the U.S.? Negative lifestyle changes in recent years, including obesity and lack of exercise, are primary causes.
Additionally, 65% of diabetics die from a heart attack or stroke, contributing to cardiovascular disease being the primary cause of death in the U.S.
The good news is that if you have diabetes, it can in most cases be controlled, enabling a normal lifestyle. If you are diagnosed, as millions of others are, follow these steps and the advice of your physician:
As stated before, have a hemoglobin A1c blood test quarterly to know your average blood sugar level over the previous 3 months. Most physicians consider a target level of 7.0 or below appropriate for long-time diabetics, with 5.0 for non-diabetics. Your fasting blood sugar should be below 130 mg/dL, with a target of 70 to 100.
Cholesterol levels are especially important because of the relationship of high cholesterol to heart attack and stroke risks. LDL, or bad cholesterol, should be less than 100 for diabetics. If diabetic, blood pressure should be below 140/90, with a prescription drug such as Lisinopril often given to assist in blood pressure management. Realize that Lisinopril has side effects that increase mucus production, which can negatively affect sleep and voice quality.
A baby aspirin of 81 mg is frequently suggested daily for cardio-protective effects in the blood unless you are taking blood thinners such as Warfarin or Heparin. Do not exceed a baby aspirin dosage unless advised by your physician. Excess use of aspirin (acetylsalicylic acid) can raise unwanted liver enzyme levels, so be aware. Kidney disease and renal failure are also a major risk with angiotensin-converting enzyme (ACE) inhibitors, often prescribed for diabetics.
Be sure to have your eyes checked annually for diabetic retinopathy. An ophthalmologist will check your retina at the back of your eye for diabetic damage. Natural ingredients in supplements such as NSC Immunition Eye Care Formula nutritionally contribute beneficial nutrients and vitamins for your eyes to help address diabetic concerns.
Your feet are often negatively affected by diabetes and are another risk area that your doctor will check with a device called a monofilament that tests your nerves and sensation. Exercise positively improves insulin resistance and decreases blood sugar levels. With doctor approval, moderate exercise activities of 30 minutes daily are optimum.
Acute complications of diabetes include hypoglycemia, ketoacidosis, and hyperosmolar hyperglycemia (severe dehydration, with coma in about 20% of cases), while chronic complications are cardiovascular diseases, renal failure involving the kidneys, retinal damage in the eyes, nerve damage, and poor healing. Be aware and make necessary lifestyle changes to avoid so many negative health challenges from diabetes.
DIABETIC RETINOPATHY – EYE RISK
Diabetic retinopathy is the most common diabetic eye disease that occurs when blood vessels in the retina change. Sometimes vessels swell and leak fluid or even close off completely. In other cases, abnormal new blood vessels grow on the surface of the retina.
The retina is a thin layer of light-sensitive tissue that lines the back of the eye. Light rays are focused onto the retina, where they are transmitted to the brain and interpreted as the images you see. The macula is a very small area at the center of the retina responsible for your pinpoint vision, allowing you to read or recognize a face. Diabetic retinopathy usually affects both eyes, with unchecked progression leading to vision loss that in many cases cannot be reversed.
The first form of diabetic retinopathy is non-proliferative diabetic retinopathy, or NPDR. Many people with diabetes have mild NPDR, which doesn’t affect their vision. NPDR is the earliest stage of diabetic retinopathy, in which damaged blood vessels in the retina begin to leak extra fluid and small amounts of blood into the eye. Sometimes, deposits of cholesterol from the blood may leak into the retina. NPDR can cause changes in the eye, including:
- Micro-aneurysms causing small bulges in blood vessels of the retina that often leak fluid.
- Retinal hemorrhages seen as tiny spots of blood that leak into the retina.
- Hard deposits of cholesterol or other fats from the blood.
- Macular edema, seen as swelling or thickening of the macula caused by fluid leaking from the retina’s blood vessels. The macula doesn’t function properly when it is swollen; and
- Macular ischemia, in which small blood vessels called capillaries close. Your vision blurs because the macula no longer receives sufficient blood to enable proper vision.
The second form is proliferative diabetic retinopathy, or PDR, which occurs when many of the blood vessels in the retina close, preventing sufficient blood flow. In an attempt to supply blood to the area where the original vessels closed, the retina responds by growing new blood vessels, a process called neovascularization.
Unfortunately, these new blood vessels are abnormal and don’t supply the retina with proper blood flow. The new vessels often appear with scar tissue that may cause the retina to wrinkle or detach. PDR can affect both central and peripheral vision.
The best treatment for diabetic retinopathy is not to get it. Strict control of your blood sugar and weight will significantly reduce the long-term risk of diabetic vision loss. NSC Eye-care Formula contains multiple eye nutritional aids with Lutein, B Vitamins and Eyebright plus Chromium and Red Raspberry to assist in normalizing blood sugar levels and Quercetin to reduce blood leakage in the retina. N Acetyl L Cysteine and Astaxanthin promote natural Glutathione production, while Zeaxanthin nutritionally enhances retina protection.
RESEARCH - ORAL INTAKE OF BETA 1,3 GLUCAN NUTRITIONALLY HELPS REDUCE DIABETES RISKS
Regarding diabetes and beta glucan, in a peer-reviewed article (PubMed 27396408) on current research published in Molecular Nutrition & Food Research, researchers Y. Cao et al. reported that “B-Glucans have been shown to reduce the risk of obesity and diabetes.” Orally administered pure beta-(1,3) glucan in mice significantly down-regulated blood glucose by suppressing expression of the sodium-glucose cotransporter SGLT-1 in the intestinal mucosa.
What is SGLT-1? SGLT-1 plays a major role in glucose absorption and incretin hormone release in the gastrointestinal tract. So what is incretin? Incretin hormones stimulate insulin secretion in response to meals, potentially creating insulin spikes, while beta glucan nutritionally helps minimize insulin spikes by suppressing or minimizing incretin hormone release after a meal. “Meanwhile, pure B-glucan promoted glycogen synthesis and inhibited fat accumulation in the liver…and depressed pro-inflammatory cytokines,” according to the report. Glycogen is the principal storage form of glucose in animal and human cells.
Excess glucose stored as glycogen can become fat, primarily around the middle (belly fat). Beta glucan increases both the synthesis of glycogen and the breakdown of this complex compound, helping to clear excess glycogen and thus assisting in controlling obesity. Beta glucan usage must be combined with dietary changes and moderate exercise to lose the excess weight so involved in creating diabetes. Weight loss is essential to diminishing Type 2 diabetes levels, and in some cases can even lead to a cure.
In additional peer-reviewed research published in Vascular Health and Risk Management (PMID 19337540), researchers Chen and Raymond state, “The major risk [of diabetes mellitus] is vascular injury leading to heart disease, which is accelerated by increased lipid levels and hypertension. Management of diabetes includes: control of blood glucose level and lipids; and reduction of hypertension. Dietary intake of beta-glucans has been shown to reduce all these risk factors to benefit the treatment of diabetes and associated complications.”
January 27, 2010
How Plants Cope With Variable Light Conditions
As so-called primary producers, plants use solar energy to synthesize the foodstuffs that sustain other forms of life. This process of photosynthesis works in much the same way as the solar panels that supply energy for domestic heating. Like these, plant leaves must cope with variations in the level and quality of ambient light. Professor Dario Leister and his colleagues at LMU Munich have been studying how this is accomplished in the thale cress, Arabidopsis thaliana. "It turns out that, depending on lighting conditions, photosynthesis can rapidly switch between two modes of action, called states 1 and 2", says Leister, "and some years ago we reported that the 1-to-2 transition depends on the enzyme STN7, which attaches phosphate to a key protein." In their latest publication, the researchers, together with collaborators in Italy, have identified the enzyme that reverses this modification, thus flipping the system back to state 1. The discovery adds a critical element to the understanding of photosynthesis but also has practical implications for improving the growth of plants under low-light conditions, which favor state 2. (PLoS Biology, 26 January 2010)
The photosynthetic machinery is embedded in specialized membranes called thylakoids located in the chloroplasts of leaf cells. Thylakoids contain two types of so-called photosystems, PSI and PSII. Each consists of an antenna complex and a reaction center. The antenna complex channels light energy to the reaction center, where it serves to detach electrons from chlorophyll molecules. The energy imparted to the electrons is captured in a controlled manner as they pass along a sequence of carrier molecules, and is used to power all other cellular activities. The two photosystems contain different antenna proteins, called light-harvesting complexes (LHCs), and differ in their sensitivity to light of different colors. PSII is most sensitive to red light, while PSI responds best to far-red light. "However, the two photosystems act in series, with PSII passing excited electrons via carrier molecules to PSI, where they receive a second energy boost", explains Leister. "The distribution of excitation energy between the photosystems must therefore be balanced for optimal performance, and this is done in part by switching between two functional states."
Red light makes PSII run faster than PSI, but within minutes phosphate is added to a fraction of the LHCII molecules attached to PSII, and the transition to state 2, associated with the migration of modified LHCII to PSI, ensues. "We previously identified the enzyme that attaches phosphate to LHCII as STN7", says Leister, "and showed that STN7 is activated when the carriers that relay electrons to PSI are overloaded." When the modified LHCII proteins bind to PSI, they permit it to utilize more light and accept electrons from PSII, relieving carrier overload and balancing the activities of the two photosystems.
The reverse transition (2-to-1) requires the removal of phosphate from LHCII. In their latest publication, the researchers report how they found the phosphatase enzyme that performs this task. "First we individually inactivated the genes for the nine phosphatases known to reside in the chloroplast, but none of the mutations affected state transitions", explains Leister. However, the team then hit upon another phosphatase, At4g27800, among chloroplast proteins that had been identified by mass spectrometry.
It proved to be an inspired choice. "We confirmed that this protein, which we renamed TAP38, is associated with thylakoids, and we identified mutant strains that lacked it. These mutants remain locked in state 2, irrespective of lighting conditions, as one would expect if TAP38 is required for removal of the phosphate." And indeed, addition of purified TAP38 to the modified LHCII was found to lead directly to loss of the phosphate group.
The discovery adds a critical element to the circuitry that regulates state transitions, but it also has practical implications for improving the growth of plants under low-light conditions, which favor state 2. As Professor Leister reports, "plants in which the gene for TAP38 is inactivated grow faster than their normal counterparts in continuous low-level light. This is probably due to the more balanced allocation of light between the two photosystems". So perhaps the elegant energy management system elucidated by Leister and his colleagues will someday help reduce energy bills too. (PH)
Publication: "Role of plastid protein phosphatase TAP38 in LHCII dephosphorylation and thylakoid electron flow". Mathias Pribil, Paolo Pesaresi, Alexander Hertle, Roberto Barbato and Dario Leister. PloS Biology, January 26, 2010
A. Monarchs, Nobles, and the Clergy
1. Thirteenth century European states were ruled by weak monarchs
whose power was limited by their modest treasuries, the regional
nobility, the independent towns, and the church.
2. Two changes in weaponry began to undermine the utility—and
therefore the economic position—of the noble knights. These two
innovations were the armor-piercing crossbow and the development of firearms.
3. King Philip the Fair of France reduced the power of the church
when he arrested the pope and had a new (French) one installed at
Avignon, but monarchs still faced resistance, particularly from their
stronger vassals. In England, the Norman conquest of 1066 had
consolidated and centralized royal power, but the kings continued to
find their power limited by the pope and by the English nobles, who
forced the king to recognize their hereditary rights as defined in the Magna Carta.
4. Monarchs and nobles often entered into marriage alliances. One
effect of these alliances was to produce wars over the inheritance of
far-flung territories. In the long term, these wars strengthened the
authority of monarchs and led to the establishment of territorial states.
B. The Hundred Years War, 1337–1453
1. The Hundred Years War pitted France against England, whose King
Edward III claimed the French throne in 1337. The war was fought with
the new military technology: crossbows, longbows, pikes (for pulling
knights off their horses) and firearms, including an improved cannon.
2. The French, whose superior cannon destroyed the castles of the
English and their allies, finally defeated the English. The war left the
French monarchy in a stronger position than before.
C. New Monarchies in France and England
1. The new monarchies that emerged after the Hundred Years War had
stronger central governments, more stable national boundaries, and
stronger representative institutions. Both the English and the French
monarchs consolidated their control over their nobles.
2. The advent of new military technology—cannon and hand-held
firearms—meant that the castle and the knight were outdated. The new
monarchs depended on professional standing armies of bowmen, pikemen,
musketeers, and artillery units.
3. The new monarchs had to find new sources of revenue to pay for
these standing armies. In order to raise money, the new monarchs taxed
land, merchants, and the church.
4. By the end of the fifteenth century, there had been a shift in
power away from the nobility and the church and toward the monarchs.
This process was not complete, however, and monarchs were still hemmed
in by the nobles, the church, and by new parliamentary institutions: the
Parliament in England and the Estates General in France.
D. Iberian Unification
1. Spain and Portugal emerged as strong centralized states through a
process of marriage alliances, mergers, warfare, and the reconquest of
the Iberian Peninsula from the Muslims. Reconquest offered the nobility
large landed estates upon which they could grow rich without having to challenge the authority of the monarchs.
2. The reconquest took place over a period of several centuries, but
picked up after the Christians put the Muslims on the defensive with a
victory in 1212.
3. The reconquest of Portugal was completed in 1249. In 1415, the
Portuguese captured the Moroccan port of Ceuta, which gave them access
to the trans-Saharan trade.
4. On the Iberian Peninsula, Castile and Aragon were united in 1469
and the Muslims driven out of their last Iberian stronghold (Granada) in
1492. Spain then expelled all Jews and Muslims from its territory;
Portugal also expelled its Jewish population.
The prologue, Greek prologos (meaning: before word), is an opening of a story that establishes the setting and gives background details.
Generally speaking, the main function of a prologue is to tell of some earlier story and connect it to the main story. Similarly, it serves as a means to introduce the characters of a story and throw light on their roles. In its modern sense, a prologue acts as a separate entity and is not considered part of the current story that a writer ventures to tell.
Prologue Examples from Literature
Prologue on Greek Stage
The prologos in Greek dramas incorporated the above-mentioned features, but it had a wider importance than the modern interpretations of the prologue. The Greek prologos was more like a preface, i.e., an introduction to a literary work provided by a dramatist to tell how the idea of a story developed. Therefore, in Greek dramas, the prologue was a complete episode or first act, which was succeeded by the remaining acts of a play.
The invention of the prologue is attributed to Euripides. He prefixed a prologue to his plays as an explanatory first act in order to make the upcoming events in a play comprehensible for his audience. Other dramatists followed in his footsteps, and the prologue became part of the traditional formula for writing plays. Almost all Greek prologues told about events that happened much earlier in time than the events depicted in the play.
Prologue on Latin Stage
Plautus, a Latin playwright, wrote examples of prologues in his plays that were more elaborate than Greek prologues. His prologues were admired for their romantic quality, and they were usually performed by characters that did not make appearances in the play. The prologue to Rudens is a perfect manifestation of his genius in writing prologues. Later, Moliere revived the prologue on the Latin stage by prefixing it to his play Amphitryon. Furthermore, we notice Racine introducing his choral tragedy Esther with a prologue spoken by Piety.
Prologue on Elizabethan Stage
The early English dramatists were influenced by the traditions of prologues in Greek and Latin plays. Even the early forms of drama, the mystery and morality plays, always began with a homily, i.e., a religious commentary on the biblical story that was to be performed in those plays. Elizabethan dramatists took inspiration from the Greek and Latin tradition of the prologue and held it as a compulsory ingredient of their plays. In 1562, Sackville wrote Gorboduc, believed to be the first English tragedy. He prepared a pantomime that acted as a prologue for his play. Later, he wrote the Induction, a prologue to his Miscellany of short romantic epics.
A prologue to Elizabethan plays usually served to quieten and settle down an audience before the commencement of a play. It then introduced the themes of the play and other particulars to the audience, making them mentally prepared for the events they were to witness in the performance. Also, it was considered necessary to beg the audience's leniency for any error that might occur in the writing of a play or in the performances of actors on stage. Usually, the character who uttered the prologue was dressed in black in order to differentiate him from the rest of the actors, who wore colorful costumes during their performances. For instance, read the following lines from the prologue in Shakespeare’s “Romeo and Juliet”:
“Two households, both alike in dignity
(In fair Verona, where we lay our scene),
From ancient grudge break to new mutiny,
Where civil blood makes civil hands unclean.
From forth the fatal loins of these two foes
A pair of star-crossed lovers take their life,
Whose misadventured piteous overthrows
Doth with their death bury their parents’ strife.
The fearful passage of their death-marked love
And the continuance of their parents’ rage,
Which, but their children’s end, naught could remove,
Is now the two hours’ traffic of our stage—
The which, if you with patient ears attend,
What here shall miss, our toil shall strive to mend.”
The Chorus in the extract not only introduces the theme but also requests the audience to be attentive “with patient ears attend.”
An Example of Non-dramatic Prologue
In English literature, the prologue was employed in non-dramatic fiction as well. One of the earliest prologue examples is Chaucer’s General Prologue to The Canterbury Tales. His prologue was built on the conventional pattern. He used it to introduce all his characters, the pilgrims, in dramatic detail before each of them told their story on the way to Canterbury to visit the shrine of Saint Thomas Becket.
Function of Prologue
As previously mentioned, the primary function of a prologue is to make the readers or audience aware of the earlier part of the story and enable them to relate it to the main story. This literary device is also a means to present characters and establish their roles.
Constructed wetlands use plants, soils and associated microorganisms to treat wastewater, improve water quality and create wildlife habitat.
Constructed wetlands fall into two categories: 1) Subsurface Flow systems move wastewater below the surface of a lined basin filled with sand or gravel and planted with vegetation, and 2) Free Water Surface systems move wastewater above the soil in a lined and planted marsh or swamp basin.
Subsurface Flow System
- 1:1 – 1:2 length to width
- 0.5 – 0.6 m (1.6 – 2 ft) depth
Free Water Surface System
- 3:1 to 5:1 length to width
- 0.6 – 0.9 m (2 – 3 ft) emergent plants
- 1.2 – 1.5 m (4 – 5 ft) floating plants
Wetlands average 0.05 to 0.1 m² (½ to 1 ft²) of surface area per gallon of water treated per day, and vary from small on-site applications for septic systems to large municipal facilities. Plant selection is based on aesthetics, hardiness and climate.
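As a rough sizing sketch based on this rule of thumb (the daily flow figure below is an assumed example, not from the source):

```python
# Estimate wetland surface area from daily wastewater flow,
# using 0.05-0.1 m² of surface per gallon treated per day.
def wetland_area_m2(gallons_per_day: float, m2_per_gallon: float) -> float:
    return gallons_per_day * m2_per_gallon

daily_flow = 400  # gallons/day, e.g. an assumed household septic flow
low = wetland_area_m2(daily_flow, 0.05)
high = wetland_area_m2(daily_flow, 0.10)
print(f"{daily_flow} gal/day -> {low:.0f} to {high:.0f} m² of wetland surface")
```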
June 28, 2012
WASHINGTON -- Data from NASA's Cassini spacecraft have revealed Saturn's moon Titan likely harbors a layer of liquid water under its ice shell. Researchers saw a large amount of squeezing and stretching as the moon orbited Saturn. They deduced that if Titan were composed entirely of stiff rock, the gravitational attraction of Saturn would cause bulges, or solid "tides," on the moon only 3 feet (1 meter) in height. Spacecraft data show Saturn creates solid tides approximately 30 feet (10 meters) in height, which suggests Titan is not made entirely of solid rocky material. The finding appears in today's edition of the journal Science.
"Cassini's detection of large tides on Titan leads to the almost inescapable conclusion that there is a hidden ocean at depth," said Luciano Iess, the paper's lead author and a Cassini team member at the Sapienza University of Rome, Italy. "The search for water is an important goal in solar system exploration, and now we've spotted another place where it is abundant." Titan takes only 16 days to orbit Saturn, and scientists were able to study the moon's shape at different parts of its orbit. Because Titan is not spherical but slightly elongated like a football, its long axis grew when it was closer to Saturn. Eight days later, when Titan was farther from Saturn, it became less elongated and more nearly round. Cassini measured the gravitational effect of that squeeze and pull.
Scientists were not sure Cassini would be able to detect the bulges caused by Saturn's pull on Titan. By studying six close flybys of Titan from Feb. 27, 2006, to Feb. 18, 2011, researchers were able to determine the moon's internal structure by measuring variations in the gravitational pull of Titan using data returned to NASA's Deep Space Network (DSN). "We were making ultrasensitive measurements, and thankfully Cassini and the DSN were able to maintain a very stable link," said Sami Asmar, a Cassini team member at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, Calif. "The tides on Titan pulled up by Saturn aren't huge compared to the pull the biggest planet, Jupiter, has on some of its moons. But, short of being able to drill on Titan's surface, the gravity measurements provide the best data we have of Titan's internal structure."
An ocean layer does not have to be huge or deep to create these tides. A liquid layer between the external, deformable shell and a solid mantle would enable Titan to bulge and compress as it orbits Saturn. Because Titan's surface is mostly made of water ice, which is abundant in moons of the outer solar system, scientists infer Titan's ocean is likely mostly liquid water.
On Earth, tides result from the gravitational attraction of the moon and sun pulling on our surface oceans. In the open oceans, those can be as high as two feet (60 centimeters). While water is easier to move, the gravitational pull of the sun and moon also causes Earth's crust to bulge in solid tides of about 20 inches (50 centimeters). The presence of a subsurface layer of liquid water at Titan is not itself an indicator for life. Scientists think life is more likely to arise when liquid water is in contact with rock, and these measurements cannot tell whether the ocean bottom is made up of rock or ice. The results have a bigger implication for the mystery of methane replenishment on Titan.
"The presence of a liquid water layer in Titan is important because we want to understand how methane is stored in Titan's interior and how it may outgas to the surface," said Jonathan Lunine, a Cassini team member at Cornell University. "This is important because everything that is unique about Titan derives from the presence of abundant methane, yet the methane in the atmosphere is unstable and will be destroyed on geologically short timescales." A liquid water ocean, "salted" with ammonia, could produce buoyant ammonia-water liquids that bubble up through the crust and liberate methane from the ice. Such an ocean could serve also as a deep reservoir for storing methane. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The mission is managed by JPL for NASA's Science Mission Directorate in Washington. DSN, also managed by JPL, is an international network of antennas that supports interplanetary spacecraft missions and radio and radar astronomy observations for the exploration of the solar system and the universe. The network also supports selected Earth-orbiting missions. Cassini's radio science team is based at Wellesley College in Massachusetts.
[…and where there’s liquid water – even under a deep layer of ice – there’s the possibility, even the probability, of life in all its multiple forms. It’s interesting that there is now strong circumstantial evidence of water on three moons in the outer Solar System. If each of them has life (quite possible) and it developed on each world independently (also quite possible), it would increase the odds of life elsewhere in the galaxy quite considerably. I think that probes designed to visit some of these moons and able to penetrate to their liquid layers are on the cards and may be launched in the near future. Here’s hoping that they (all?) come back with positive results.]
Polar (arctic) climate zone
The polar (arctic) climate zone occupies the ice caps of the planet. Temperatures are below freezing all year round. In the Southern Hemisphere this climate zone covers the entire territory of Antarctica, and in the Northern Hemisphere it covers the Arctic Ocean. The conditions for life are exceptionally hard, and for this reason these areas are unpopulated. In Antarctica, for example, people can be found only at the research bases, and they don’t live there all year round. Because our planet has a permanent axial tilt of 23.5°, the poles experience the polar day and the polar night, each lasting six months.
During the polar day the sun never sets below the horizon. During the polar night, just the contrary: it never rises.
In winter, temperatures can drop to −40 to −50 °C, and sometimes even to −70 °C. In the summer months temperatures can reach 0 °C. Precipitation is scanty and falls as snow.
Though the conditions are unusually hard and unbearable, the polar climate zones are a paradise for some species, such as the incredibly large colonies of penguins in Antarctica and the polar bears, polar foxes, seals, walruses, and other animals living in the Arctic.
The polar areas are the most affected by global warming: the polar caps are thawing. This creates the danger of rising sea levels, which could destroy the habitat of many rare and endangered species.
In the 1980s it was found that the ozone layer above the Arctic and Antarctica was disappearing. The ozone layer has been destroyed by chlorofluorocarbons (CFCs) released into the atmosphere.
The ozone layer is of great importance for life on the planet. It is the only barrier against the deadly ultraviolet rays reaching the planet Earth.
We have to do everything possible to save the polar caps, because our planet is one perfectly tuned mechanism. If even one link of this mechanism were destroyed, it would change our whole planet beyond recognition.
Other forms of early life had been producing waste products such as iron, which built up in the early ocean. As oxygen began to be produced, a peculiar thing happened. Large amounts of iron which had accumulated in the early ocean were attacked by the accumulating oxygen. When oxygen reacts with iron, iron ores are produced. Today, iron ores are taken out of the ground by miners, and the iron they contain is used by human beings to make lots of things.
For a billion years, the oxygen produced by early plant life attacked leftover iron in the ocean, and huge deposits of iron ores were laid down at the bottom of the sea. This activity took place between 3.5 and 2.5 billion years ago. Iron ores mined today in the United States, Australia, and South Africa are part of the huge deposits laid down at that time. Once the oceans were swept clean of iron, the oxygen could begin to build up in the atmosphere, and more sophisticated life forms could develop. But the advent of more sophisticated life had to wait for another era, the Proterozoic. It took a billion years for the iron ore process to complete. When it was finished, it closed the period in the history of the Earth which we call the Archean.
What is cryptography?
Cryptography is the science of using mathematics to encrypt
and decrypt information. Once the information has been encrypted,
it can be stored on insecure media or transmitted on an insecure
network (like the Internet) so that it cannot be read by anyone
except the intended recipient.
What is encryption?
Encryption is the process in which data (plaintext) is translated
into something that appears to be random and meaningless (ciphertext).
Decryption is the process in which the ciphertext is converted
back to plaintext.
What is a cryptographic algorithm?
A cryptographic algorithm, or cipher, is a mathematical function
used in the encryption and decryption process. A cryptographic
algorithm works in combination with a key (a number, word, or
phrase) to encrypt and decrypt data. To encrypt, the algorithm
mathematically combines the information to be protected with
a supplied key. The result of this combination is the encrypted
data. To decrypt, the algorithm performs a calculation combining
the encrypted data with a supplied key. The result of this combination
is the decrypted data. If either the key or the data is modified,
the algorithm produces a different result. The goal of every
encryption algorithm is to make it as difficult as possible
to decrypt the generated ciphertext without using the key. If
a really good encryption algorithm is used, then there is no
technique significantly better than methodically trying every
possible key. Even for a key size of just 40 bits, this works
out to 2^40 (just over 1 trillion) possible keys.
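As a quick sanity check of the key-space arithmetic (my own illustration, not part of the FAQ):

```python
# Key-space sizes and exhaustive-search times at an assumed rate of
# one billion computers each trying one billion keys per second.
rate = 10**9 * 10**9                 # keys per second
seconds_per_year = 3600 * 24 * 365

for bits in (40, 128, 168):
    keys = 2**bits
    years = keys / rate / seconds_per_year
    print(f"{bits}-bit: {keys:.3e} keys, ~{years:.1e} years to search")
# 40-bit: ~1.1e12 keys (about a microsecond at this rate);
# 168-bit: ~3.7e50 keys, ~1.2e25 years, matching the figure quoted below.
```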
Differences between symmetric and asymmetric algorithms.
Symmetric algorithms encrypt and decrypt with the same key.
The main advantages of symmetric algorithms are their security
and high speed.
keys. Data is encrypted with a public key, and decrypted with
a private key. Asymmetric algorithms (also known as public-key
algorithms) need at least a 3,000-bit key to achieve the same
level of security of a 128-bit symmetric algorithm. Asymmetric
algorithms are incredibly slow and it is impractical to use
them to encrypt large amounts of data. Symmetric algorithms
are about 1,000 times faster than asymmetric ones.
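A generic illustration of the symmetric case, using the widely available Python cryptography package (this is not CryptoForge's own code):

```python
# Symmetric encryption: the SAME key both encrypts and decrypts.
from cryptography.fernet import Fernet  # AES-128-CBC plus an HMAC, per the Fernet spec

key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"attack at dawn")  # ciphertext appears random
plain = cipher.decrypt(token)              # only a holder of the key can reverse it
assert plain == b"attack at dawn"
```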
How secure is CryptoForge?
CryptoForge uses four strong (symmetric) cryptographic algorithms
to protect your information:
Blowfish (448-bit key) is a strong and fast algorithm designed
by Bruce Schneier, one of the most prestigious cryptographers
in the world.
Rijndael (256-bit key) is a high security algorithm created
by Joan Daemen and Vincent Rijmen (Belgium). Rijndael is the
new Advanced Encryption Standard (AES) chosen by the National
Institute of Standards and Technology (NIST).
TripleDES (168-bit key) applies DES, a strong, well-known U.S.
Government algorithm, three times with three different keys.
Gost (256-bit key) is a cryptographic
algorithm from Russia that appears to be the Russian analog
to DES. Gost has undergone intensive peer review and is regarded
as secure.
At present, there is no way to break any of these algorithms
except by trying all possible keys. If one billion computers were
each searching one billion keys per second, it would take over
10^25 years to recover information encrypted with a 168-bit
algorithm (the age of the universe is about 10^10 years).
In addition, CryptoForge implements mechanisms against modifications
in its code. When executed, it verifies the algorithms with
the test vectors provided by their designers.
The four encryption algorithms implemented in CryptoForge are Block Ciphers.
This means that they encrypt data in block units, rather than
a single bit at a time. The algorithms are used in Cipher Block
Chaining mode, where the original data is XORed with the previous
ciphertext before encryption. For the first block, a randomly
generated 128-bit Initialization Vector stands in for the
previous ciphertext. CBC
mode ensures that even if the data contains many identical blocks,
they will each encrypt to a different ciphertext block.
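To illustrate only the chaining idea just described, here is a deliberately toy sketch; the "block cipher" below is a stand-in XOR, not a real or secure cipher:

```python
# Toy CBC chaining: each plaintext block is XORed with the previous
# ciphertext block before "encryption"; a random IV starts the chain.
import os

BLOCK = 16  # bytes, i.e. a 128-bit block

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    return xor(block, key)  # placeholder for a real block cipher such as AES

def cbc_encrypt(data: bytes, key: bytes) -> bytes:
    iv = os.urandom(BLOCK)        # stands in for the "previous ciphertext"
    prev, out = iv, [iv]
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK].ljust(BLOCK, b"\0")  # naive zero padding
        prev = toy_block_encrypt(xor(block, prev), key)
        out.append(prev)
    return b"".join(out)

key = os.urandom(BLOCK)
ct = cbc_encrypt(b"A" * 32, key)  # two identical plaintext blocks...
assert ct[BLOCK:2 * BLOCK] != ct[2 * BLOCK:3 * BLOCK]  # ...encrypt differently
```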
When you enter your passphrase into CryptoForge, it is hashed
with a Hash algorithm to generate a fingerprint, also known
as digest. The one-way Hash function takes variable-length input,
in this case your passphrase, and produces a fixed-length output.
It also ensures that, if the passphrase is changed, even by just
one bit, an entirely different output value is generated. This
value is the key actually used by the cipher. That process is
repeated using a different Hash function for each encryption
algorithm, thus generating four unique keys.
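A generic sketch of that passphrase-to-key step (the FAQ does not name the exact hash functions, so SHA-256 here is an assumption for illustration):

```python
# Hash a passphrase down to a fixed-length key; flipping even one
# bit of the passphrase yields a completely different digest.
import hashlib

key = hashlib.sha256(b"correct horse battery staple").digest()   # 32 bytes = 256 bits
key2 = hashlib.sha256(b"Correct horse battery staple").digest()  # one character differs
assert key != key2
print(len(key) * 8, "bit key:", key.hex()[:16], "...")
```

Modern designs would typically insert a deliberately slow key-derivation function such as PBKDF2 at this step, which makes brute-force guessing of passphrases far more expensive.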
Although CryptoForge allows encryption with more than one algorithm,
for most users this might be considered unnecessary, because
the level of protection provided by any of the employed algorithms
is (at least in the unclassified world) good enough. However,
this ensures that even if in the future one of them is attacked,
your information will remain protected. Actually, it is surprisingly
difficult to determine just how good an encryption algorithm
is. If you wish to use more than one encryption algorithm, changing
the order in which they are used should add another problem
to a hardware-based attack (i.e. an array of special chips trying
trillions of keys a second).
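A sketch of that cascading idea, again with the generic Fernet cipher rather than CryptoForge's internals:

```python
# Cascade two independently keyed ciphers; an attacker must defeat both.
from cryptography.fernet import Fernet

inner = Fernet(Fernet.generate_key())
outer = Fernet(Fernet.generate_key())

layered = outer.encrypt(inner.encrypt(b"secret"))          # encrypt twice
assert inner.decrypt(outer.decrypt(layered)) == b"secret"  # unwrap in reverse order
```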
Secure file deletion is accomplished by writing a pattern of
all ones, zeros, and a stream of pseudo-random data, iterating
the number of times specified by the user. The name of the file
is overwritten as well. The length of the file is then truncated to zero before the file is deleted.
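A minimal sketch of that overwrite-then-truncate procedure (illustrative only; a real secure-delete tool must also contend with journaling filesystems and wear leveling, and the file-name overwrite mentioned above is omitted here):

```python
# Overwrite a file with ones, zeros, and pseudo-random data for the
# requested number of passes, then truncate its length and remove it.
import os

def wipe(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    patterns = [b"\xff", b"\x00", None]  # ones, zeros, then random data
    with open(path, "r+b") as f:
        for i in range(passes):
            pat = patterns[i % len(patterns)]
            f.seek(0)
            f.write(os.urandom(size) if pat is None else pat * size)
            f.flush()
            os.fsync(f.fileno())  # force each pass onto the disk
        f.truncate(0)             # then truncate the file's length
    os.remove(path)               # finally unlink it
```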
Death is the termination of the biological functions that define a living organism. The leading cause of death in developing countries is infectious disease, which is far more common there than in developed countries.
10. Septicemia and Neonatal Infections
Sepsis is a serious medical condition characterized by a whole-body inflammatory state (called systemic inflammatory response syndrome, or SIRS) and the presence of a known or suspected infection. The related condition, when it occurs in neonates, is called neonatal infection. Neonates are prone to infection in the first month of life. Some organisms, such as S. agalactiae (Group B Streptococcus, or GBS), are more prone to cause these occasionally fatal infections. Risk factors for GBS infection include prematurity, a sibling who has had a GBS infection, and prolonged labour or rupture of membranes. Untreated sexually transmitted infections are associated with congenital and perinatal infections in neonates, particularly in areas where rates of infection remain high. The overall perinatal mortality rate associated with untreated syphilis, for example, approached 40%.
9. Tuberculosis
Tuberculosis, or TB, is a common and often deadly infectious disease caused by various strains of mycobacteria, usually Mycobacterium tuberculosis in humans. Tuberculosis usually attacks the lungs but can also affect other parts of the body. It is spread through the air, when people who have the disease cough, sneeze, or spit. Most infections in humans result in an asymptomatic, latent infection, and about one in ten latent infections eventually progresses to active disease, which, if left untreated, kills more than 50% of its victims. A third of the world’s population is thought to be infected with M. tuberculosis, and new infections occur at a rate of about one per second. The proportion of people who become sick with tuberculosis each year is stable or falling worldwide but, because of population growth, the absolute number of new cases is still increasing. Recent statistics show there were an estimated 13.7 million chronic active cases, 9.3 million new cases, and 1.8 million deaths, mostly in developing countries. In addition, more people in the developed world are contracting tuberculosis because their immune systems are compromised by immunosuppressive drugs, substance abuse, or AIDS. The distribution of tuberculosis is not uniform across the globe; about 80% of the population in many Asian and African countries test positive in tuberculin tests, while only 5-10% of the US population test positive.
8. Diarrheal Diseases
Diarrhea is defined by the World Health Organization as having three or more loose or liquid stools per day, or more stools than is normal for that person. It is a common cause of death in developing countries and the second most common cause of infant deaths worldwide. The loss of fluids through diarrhea can cause dehydration and electrolyte imbalances. In 2009 diarrhea was estimated to have caused 1.1 million deaths in people aged 5 and over and 1.5 million deaths in children under the age of 5. Oral rehydration salts and zinc tablets are the treatment of choice and have been estimated to have saved 50 million children in the past 25 years.
7. Alzheimer’s Disease and Other Dementias
Senile Dementia of the Alzheimer Type (SDAT), or simply Alzheimer’s, is the most common form of dementia. This incurable, degenerative, and terminal disease was first described by German psychiatrist and neuropathologist Alois Alzheimer in 1906 and was named after him. Generally, it is diagnosed in people over 65 years of age, although the less-prevalent early-onset Alzheimer’s can occur much earlier. In 2006, there were 26.6 million sufferers worldwide. Alzheimer’s is predicted to affect 1 in 85 people globally by 2050. The cause and progression of Alzheimer’s disease are not well understood. Research indicates that the disease is associated with plaques and tangles in the brain. Currently used treatments offer a small symptomatic benefit; no treatments to delay or halt the progression of the disease are as yet available. As of 2008, more than 500 clinical trials have been conducted to identify a possible treatment for AD, but it is unknown if any of the tested intervention strategies will show promising results.
6. Accident (Unintentional Injuries)
Accidents have been a major cause of death as populations have risen. Most would think first of road accidents, but many other unintentional causes fall within this category, such as accidental poisoning and drowning. An accident is a specific, unidentifiable, unexpected, unusual and unintended external action which occurs in a particular time and place, with no apparent and deliberate cause but with marked effects. It implies a generally negative outcome which may have been avoided or prevented had circumstances leading up to the accident been recognized, and acted upon, prior to its occurrence. Experts in the field of injury prevention avoid use of the term "accident" to describe events that cause injury, in an attempt to highlight the predictable and preventable nature of most injuries. Such incidents are viewed from the perspective of epidemiology as predictable and preventable. Preferred words are more descriptive of the event itself, rather than of its unintended nature (e.g., collision, drowning, fall, etc.). Taken together, however, this umbrella category is the 6th leading cause of death worldwide.
5. Acquired Immunodeficiency Syndrome
Acquired immunodeficiency syndrome (AIDS) is a disease of the human immune system caused by the human immunodeficiency virus (HIV). This condition progressively reduces the effectiveness of the immune system and leaves individuals susceptible to opportunistic infections and tumors. HIV is transmitted through direct contact of a mucous membrane or the bloodstream with a bodily fluid containing HIV, such as blood, semen, vaginal fluid, preseminal fluid, and breast milk. This transmission can involve intercourse, blood transfusion, contaminated hypodermic needles, exchange between mother and baby during pregnancy, childbirth, breastfeeding or other exposure to one of the above bodily fluids. AIDS is now a pandemic. In 2007, it was estimated that 33.2 million people lived with the disease worldwide, and that AIDS killed an estimated 2.1 million people, including 330,000 children. Over three-quarters of these deaths occurred in sub-Saharan Africa. Genetic research indicates that HIV originated in west-central Africa during the late nineteenth or early twentieth century. AIDS was first recognized by the U.S. Centers for Disease Control and Prevention in 1981 and its cause, HIV, identified in the early 1980s. Although treatments for AIDS and HIV can slow the course of the disease, there is currently no known cure or vaccine. Antiretroviral treatment reduces both the mortality and the morbidity of HIV infection, but these drugs are expensive and routine access to antiretroviral medication is not available in all countries.
4. Lower Respiratory Tract Infections
Lower respiratory tract is the part of the respiratory tract below the vocal cords. While often used as a synonym for pneumonia, the rubric of lower respiratory tract infection can also be applied to other types of infection including lung abscess and acute bronchitis. Symptoms include shortness of breath, weakness, high fever, coughing and fatigue. Lower respiratory tract infections place a considerable strain on the health budget and are generally more serious than upper respiratory infections. Since 1993 there has been a slight reduction in the total number of deaths from lower respiratory tract infection. However, they are still the leading cause of deaths among all infectious diseases, and they accounted for 3.9 million deaths worldwide and 6.9% of all deaths. There are a number of acute and chronic infections that can affect the lower respiratory tract. The two most common infections are bronchitis and pneumonia.
3. Stroke
A stroke (sometimes called a cerebrovascular accident, or CVA) is the rapidly developing loss of brain function(s) due to disturbance in the blood supply to the brain. This can be due to ischemia (lack of blood flow) caused by blockage (thrombosis, arterial embolism), or a hemorrhage (leakage of blood). As a result, the affected area of the brain is unable to function, leading to inability to move one or more limbs on one side of the body, inability to understand or formulate speech, or inability to see one side of the visual field. A stroke is a medical emergency and can cause permanent neurological damage, complications, and even death. It is the leading cause of adult disability in the United States and Europe and it is the number three cause of death worldwide. Risk factors for stroke include advanced age, hypertension (high blood pressure), previous stroke, diabetes, high cholesterol, cigarette smoking and atrial fibrillation. High blood pressure is the most important modifiable risk factor of stroke.
2. Cancer
Cancer is a class of diseases in which a group of cells display uncontrolled growth (division beyond the normal limits), invasion (intrusion on and destruction of adjacent tissues), and sometimes metastasis (spread to other locations in the body via lymph or blood). These three malignant properties of cancers differentiate them from benign tumors, which are self-limited, and do not invade or metastasize. Most cancers form a tumor but some, like leukemia, do not. The branch of medicine concerned with the study, diagnosis, treatment, and prevention of cancer is oncology. Cancer affects people at all ages, with the risk for most types increasing with age. Cancer causes about 13% of all human deaths.
1. Heart Disease
Heart disease, or cardiopathy, is an umbrella term for a variety of different diseases affecting the heart. As of the latest statistics, it is the leading cause of death in the US, England, Canada and Wales, accounting for 25.4% of the total deaths in the United States. Among the different heart diseases, over 459,000 Americans die of coronary heart disease every year. In the United Kingdom, 101,000 deaths annually are due to coronary heart disease, which refers to the failure of the coronary circulation to supply adequate circulation to cardiac muscle and surrounding tissue. Beyond that, a number of people die each year of cardiomyopathies, heart failure, and hypertensive heart disease.
Living in a Community
Students explore philanthropy through song. In this philanthropy lesson, students review the meaning of philanthropy and recite lyrics to the song "What Is a Philanthropist?"
See similar resources:
Exploring Community Responsibility: The Web of My Community
Encourage class members to get involved in their community with a lesson that highlights the benefits of helping others. Through grand conversation and a visual aid that sheds light on the interconnectedness within community service,...
K - 6th Social Studies & History CCSS: Adaptable
My Book About Community Helpers: Mini-Book
Teach children about all the people that help to make our communities safe and healthy places to live with this printable book. From firefighters and mail carriers to doctors and trash collectors, students practice reading as they...
K - 3rd English Language Arts CCSS: Adaptable
Using Connecting Themes in First Grade Social Studies
Foster contributing members of society with a social studies unit focused on five aspects of community. First graders discuss themes of culture, groups, location, scarcity, and change with discussion questions and activities about...
1st - 3rd Social Studies & History CCSS: Adaptable
America: The Land We Live In: Landmarks
Students explore the concept of landmarks. In this landmark lesson, students brainstorm different landmarks around their community and nationally. Students then identify the patterns in Georgia O'Keeffe's paintings or landmarks.
1st - 2nd Social Studies & History
Living the Dream: 100 Acts of Kindness
Inspire kindness in and out of school with an instructional activity that challenges scholars to perform 100 acts of kindness during the time between Martin Luther King Jr. Day and Valentine's Day. Leading up to a celebration of...
K - 2nd Social Studies & History CCSS: Designed
Acrylic adhesive is a fast-bonding, highly resistant adhesive made by polymerizing acrylic or methacrylic acids through a reaction with a catalyst. Acrylic adhesives require no mixing and come in a two-component form: the acrylic is applied to one surface and the catalyst, hardener or accelerator is coated onto the opposite surface. When those surfaces are pushed together, the two adhesive components bond and form a watertight seal.
The two-component acrylic adhesive will bond to almost any material, including wood, plastic, most metals, ceramic, rubber, glass and even oily surfaces. The bonding happens fast at room temperature and is highly resistant to chemicals, environmental conditions and moisture. Acrylic is an adhesive often used in the construction industry to bond two surfaces together, much as epoxy is most often employed by engineers. Acrylic adhesives are also used in medical applications to bond implants to bone.
When coated on foam or paper, acrylic adhesives replace fasteners in appliance, automotive, sign and graphics applications. They also provide sound and vibration dampening and adhere decorative film and over-lamination to surfaces. Some food-grade acrylic adhesives are used in food processing to seal packaging. Adhesives made from acrylic have different viscosities, meaning different resistances to flow. Those in liquid form come in spray bottles, while the thicker adhesives are contained in tubes.
The curing process, which is what causes the adhesive to harden, is specified by the adhesive manufacturer and depends on the ambient temperature. Curing refers to the length of time required to fully set a bond between the adhesive and the surface material. The lower the temperature, the longer curing will take. If the temperature is too low, acrylic adhesives tend to become brittle, although they will still last for many years.
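As a rough illustration of that temperature dependence, cure time is often modeled with a rule of thumb that reaction rate roughly doubles for every 10 degC of warming. The sketch below is a minimal example of that assumption; the baseline time and reference temperature are invented, not manufacturer data.

```python
def cure_time_hours(temp_c: float, base_time_h: float = 1.0,
                    base_temp_c: float = 25.0) -> float:
    """Estimate cure time, assuming the cure rate doubles per 10 degC.

    base_time_h and base_temp_c are hypothetical reference values;
    real figures come from the adhesive manufacturer's datasheet.
    """
    return base_time_h * 2 ** ((base_temp_c - temp_c) / 10.0)

for t in (5, 15, 25, 35):
    print(f"{t:>2} degC -> ~{cure_time_hours(t):.1f} h")
# under this assumption, curing at 5 degC takes ~4x as long as at 25 degC
```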
They always exhibit good peel and shear strengths as well as shock and impact resistance. Acrylic also withstands thermal movement and maintains its bond when exposed to water. When used during an industrial manufacturing process, acrylic is a cost effective method of adhesion because it does not require heat in order to cure, which eliminates the need for an expensive heating system.
Unlike easily removed pressure-sensitive adhesives, acrylic adhesives are intended only for permanent adhesion. Once the adhesive has cured, removal is difficult, time-consuming and often results in a damaged surface. However, the bond is permanent and strong, and can be used in applications like cabinet building, where the bond must last for decades.
What is oil sands mining?
Crude oil deposits lie beneath the earth among layers of rock, water, and sand in the form of bitumen, a heavy, viscous type of crude oil that mixes with the geological strata like cold molasses. Mining these oil sands deposits takes place in the Athabasca region of northern Alberta, Canada, and in certain regions of Venezuela.
As a non-traditional type of crude deposit, oil sands long remained too costly to extract and refine. However, oil sands mining now brings big profits to petroleum companies. Standard crude deposits continue to shrink, while barrels of crude and gallons of gas grow more expensive, driving Big Oil’s destructive harvest.
Mining companies extract bitumen through two standard, destructive methods.
Strip Mining – Traditional operations clear-cut trees and brush, and then strip topsoil and clay to access the layers of oil sand beneath. The process uses massive trucks and earth movers to extract deposits that are 1 to 20% bitumen by total volume. Once filtered, the upgraded crude goes directly to refineries where it is manufactured into gas, jet fuel, and other synthetics.
In Situ – Hailed originally as a newer, cheaper alternative to standard strip mining techniques, in situ forces pressurized steam into layers of bituminous sand, buried too deeply below the surface to harvest with trucks and earth movers. Pressurized steam separates the sludgy bitumen from surrounding sand and clay, pumping the thick muck to the surface, to be filtered, upgraded, and sent to refineries. Despite its acceptance as a cost-effective alternative to strip mining, in situ processes do tremendous environmental damage.
Mining’s “quick fix” to environmental restoration amounts to little more than greenwashing.
Phys.org reported in March that David Schindler and his colleagues at the University of Alberta had published disturbing news regarding the long-term environmental impact of oil sands mining in Canada.
While the Canadian mining industry hopes to assuage public concerns by replanting the dry, upland trees and similar plant species destroyed through mining, Schindler calls it “greenwashing.” A simple approach to restoring a sophisticated ecology may sound like corporate responsibility, but its net effect will be to vastly increase carbon emissions indefinitely.
Unfortunately, 65% of Alberta’s proposed oil sands mining region contains peatlands, whose expansive, nonrenewable bogs serve as natural carbon sinks, preventing greenhouse gases from escaping into our atmosphere.
Oil/tar sands mining in the expanded region under consideration will destroy wildlife habitats and permanently lay waste to the landscape’s ability to sequester carbon. An estimated 11 – 47 million metric tons of carbon will be released into Earth’s atmosphere as this region falls prey to oil sands mining.
Researchers who conducted this University of Alberta study report that “the area will lose the ability to sequester carbon in the future; this [the researchers] say will add up to about 5,700-7,200 metric tons of carbon each year, which they say should be looked at as a net gain of emissions each year.”
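To put those figures on a common footing, carbon mass is conventionally converted to carbon dioxide using the molar-mass ratio 44/12. Here is a quick back-of-envelope check of the numbers quoted above, assuming they are reported as tons of carbon:

```python
C_TO_CO2 = 44.0 / 12.0  # molar-mass ratio of CO2 to C

released_low, released_high = 11e6, 47e6        # one-time release, t C
foregone_low, foregone_high = 5_700.0, 7_200.0  # lost sequestration, t C/yr

print(f"One-time release: {released_low * C_TO_CO2 / 1e6:.0f}"
      f"-{released_high * C_TO_CO2 / 1e6:.0f} million t CO2")
print(f"Foregone sequestration: {foregone_low * C_TO_CO2:,.0f}"
      f"-{foregone_high * C_TO_CO2:,.0f} t CO2 per year")
```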
Oil sands mining: In situ mining techniques do little to conserve Canada’s natural habitat.
In situ technology may actually cause less environmental devastation than traditional strip mining techniques for oil sand extraction, but is this too little, too late? Last Friday, Gayathri Vaidyanathan of Energy & Environment Daily shared a review of the following study published by Alberta Innovates: Thermal In Situ Water Conservation Study – A Summary Report
In 2011 alone, in situ operations in Alberta, Canada, consumed about 28 million cubic meters of water. To combat public outcry regarding huge losses to local reservoirs and the Athabasca watershed of Alberta, mining operators have begun to recycle their in situ water supply. Ironically, their recycling process may result in new environmental problems.
Alberta Innovates, an R&D corporation funded by the Alberta provincial government, provincial agencies, and nine oil and gas companies, studied in situ water recycling and its effects on waste generation and greenhouse gas emissions.
In situ extraction methods: A more efficient means of creating a wasteland.
Mining operators “drill two parallel horizontal wells, with one well used to inject pressurized steam. The steam cools to water underground, mixes with the bitumen and rises up the second well. The bitumen is removed and the water, contaminated with chemicals and ions, is recycled.”
The study determined that about 90% of the water used during the in situ extraction process can be recycled with existing technologies, such as blowdown evaporation, which employs large machines to clean water through boiling and evaporation. Blowdown evaporation bears the fewest tradeoffs, building on the benefits of recycling and releasing fewer greenhouse gas emissions and less waste when compared to wholesale strip mining.
Like many other technologies that recycle water, blowdown evaporation generates liquid waste, which mining companies inject into disposal wells. In landscapes that cannot support waste-water injection as a disposal method, operators use zero-liquid discharge, or ZLD, to create solid waste instead of liquid. Regrettably, ZLD generates 2 – 8% more carbon emissions and tremendous amounts of solid waste.
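A simple steady-state water balance shows why that 90% figure matters. The sketch below assumes the 2011 consumption level and that everything not recycled must be drawn fresh from the watershed (a simplification; real operations also lose some water underground):

```python
total_use_m3 = 28e6   # water cycled through in situ operations in 2011
recycle_rate = 0.90   # fraction recoverable, e.g. via blowdown evaporation

makeup_m3 = total_use_m3 * (1 - recycle_rate)
print(f"Fresh makeup water: {makeup_m3 / 1e6:.1f} million m3/yr")
# ~2.8 million m3/yr drawn fresh, instead of the full 28 million
```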
Alberta Innovates concluded that in situ technology could improve wastewater recycling to reduce the environmental consequences of oil sands mining. However, oil sands mining methods continue to extract finite fossil fuels for profit with tragically unsustainable results. In situ processes boost air pollution, contribute to global warming, create irreparable wastelands, and build nothing for our future. Improved technology for oil sands mining will never offer us more than a legacy of destruction and a better way to do the wrong thing.
What kinds of renewable fuel sources would you explore as an alternative to oil sands / oil tar mining for fossil fuels?
Definition - What does Backfill mean?
Backfill refers to products or materials used to fill excavations. The most common form is composed of a mixture of gypsum and calcium bentonite.
Apart from filling holes, one of its primary purposes is to surround sacrificial anodes under the ground. This enhances the ability of the anode to lower electrical resistance, thus reducing corrosion as well.
Corrosionpedia explains Backfill
The kind of backfill to be utilized in ground beds is determined by the type of cathodic protection: impressed current or sacrificial. In general, there are two types:
- Chemical - These are commonly utilized with galvanic anodes, creating an environment conducive to the dissolution of the anode. The usual mixture is 75% gypsum powder and 25% bentonite (see the sketch after this list). This kind of backfill mixture is well suited to soils with high levels of resistivity. The backfill absorbs water and expands, creating suitable contact between the soil and the anode and reducing resistance in the ground bed.
- Carbonaceous - These are intended for impressed-current anodes. The common materials include natural graphite and coke breeze. The chief purposes of this backfill are to lower ground bed resistance and to create an adequate surface where oxidation may take place. It also helps prolong the lifespan of the anode.
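As a small worked example of the chemical mixture above, the helper below splits a target backfill mass into its gypsum and bentonite components. The function name and the 50 kg figure are hypothetical; only the 75/25 ratio comes from the text.

```python
def backfill_components(total_kg: float, gypsum_frac: float = 0.75) -> dict:
    """Split a chemical-backfill mass into gypsum and bentonite parts."""
    return {
        "gypsum_kg": total_kg * gypsum_frac,
        "bentonite_kg": total_kg * (1 - gypsum_frac),
    }

print(backfill_components(50))
# {'gypsum_kg': 37.5, 'bentonite_kg': 12.5}
```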
To ensure effectiveness, the correct type of backfill must be chosen along with the appropriate particle shape and size. |
By Jake Smith
The fever, general weakness, and headache that start off the infection are only harbingers of what is to come. Soon, vomiting and diarrhea set in, followed by unexplained bleeding and, in many cases, death. Ebola virus is the subject of much recent attention. The 2014-2015 outbreak of the virus, occurring largely in the African countries of Guinea, Sierra Leone, and Liberia, has infected over 28,000 people and caused over 11,000 deaths. But could the outbreak have been prevented?
As humans encroach on the territory of lemurs in Madagascar, diseases can be transmitted between humans and lemurs in both directions.
“Milne-Edwards’ sifaka (Propithecus edwardsi) in the Ranomafana National Park, Madagascar” by Brian Gratwicke is licensed under CC 2.0
Although researchers continue to investigate Ebola virus’s natural reservoir – the animal host in which the virus primarily resides – the origin of human cases can likely be attributed to contact with an infected animal, such as a bat or primate. Research by Deanna Bublitz, of Stony Brook University, in the journal American Journal of Primatology suggests that ecological disruption and land use change, specifically the breaking up of forest habitats, could potentially lead to outbreaks of infectious diseases such as several forms of diarrhea, and even Ebola, in humans. Thus, human encroachment on wildlife not only impacts nonhuman species, but humans themselves.
By determining the presence of six bacteria that commonly cause human illness in lemurs living in Madagascar’s Ranomafana National Park, Bublitz found something astonishing: the only lemurs that tested positive for those six bacteria were found in disturbed areas of the park. Lemurs in undisturbed areas showed no signs of harboring these potentially deadly germs. Thomas Gillespie, another author on the study, says, “Any time we alter a pristine natural system there are going to be unintended consequences.” In other words, we may never know when journeying into a new landscape will lead to a germ making its jump from a nonhuman species to our very own, or the other way around.
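For readers curious how such a presence/absence pattern is typically tested, a contingency-table test is the standard tool. The sketch below uses hypothetical counts, not the study's actual sample sizes, just to show the mechanics.

```python
from scipy.stats import fisher_exact

# Rows: [disturbed habitat, undisturbed habitat]
# Cols: [lemurs testing positive, lemurs testing negative]
table = [[9, 12],
         [0, 16]]

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.4f}")  # a small p suggests infection is
                             # associated with habitat disturbance
```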
Perhaps unsurprisingly, nonhuman primates, such as lemurs, provide suitable intermediates between animal hosts and humans for bacterial and viral organisms. The genetic similarity of nonhuman primates and humans makes these primates important in the transmission of several pathogens as human-nonhuman primate interactions increase. The act of cutting down a tree may, unwittingly, cause a human infection.
Furthermore, the extent of human-mediated changes to forest habitats correlates with higher transmission rates of diseases between species. The reason is simple: humans provide a means for viruses and bacteria to spread past previous niches, and pathogens such as Ebola virus, E. coli, and the parasite causing malaria take advantage of this opportunity. We become the breeding ground for malicious microorganisms.
With the world becoming more developed as a whole, the ability of infectious diseases to spread will likely grow. If forested areas are destroyed, species of primates (and other animals) will be lost, potentially resulting in a strong pressure for parasites to infect other organisms – humans – in close proximity. Human occupation of wild areas results in the loss of local flora and fauna and replacement with an environment ripe for disease transmission.
However, several things can be done from a conservation standpoint in order to mitigate the impact of infectious illnesses on society. Protecting nonhuman primates and other species, designing housing developments such that they do not impinge on wild habitats, and restricting access to wildlife may decrease the spread of infectious illnesses, as well as protect habitat and species diversity.
Of course, encroachment on habitats is not the only cause of the introduction of infectious illnesses, and something as simple as restricting access to wildlife cannot fully limit negative human-nonhuman species interactions. Furthermore, disease outbreaks should not be used to mandate the abandonment of cultural practices or sustainable subsistence hunting. However, it is necessary to educate people in areas with particularly high risk of disease transmission about behaviors and lifestyle choices associated with developing infectious illnesses, along with the impacts of human activity on wildlife.
Conserving species diversity and the habitats that maintain these species can be of help to both our species and the various other organisms with whom we share the earth. By keeping natural environments intact – and restoring those we have destroyed – we simultaneously protect ourselves from the havoc that obscure, deadly organisms can wreak.
And we can save the lemurs from the nasty diseases that we harbor, too.
Bublitz DC, Wright PC, Rasambainarivo FT, Arrigo-Nelson SJ, Bodager JR, Gillespie TR. 2015. Pathogenic enterobacteria in lemurs associated with anthropogenic disturbance. American Journal of Primatology 77:330-337. |
Can your young mathematician add and subtract tenths and hundredths less than 1 without regrouping? That's exactly what they do here. Students find the missing number in decimal addition and subtraction problems by using the relationship between addition and subtraction. This worksheet does not require them to regroup numbers, and in each problem the numbers are laid out in horizontal format. Students should try different strategies involving composing and decomposing numbers to solve these problems, which will help them develop flexibility and fluency.
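For instance, a missing-number problem such as 0.3 + __ = 0.75 is solved by subtracting the known part from the total: 0.75 − 0.3 = 0.45. A minimal sketch of that inverse relationship (the example numbers are invented, not taken from the worksheet):

```python
def missing_addend(total: float, known: float) -> float:
    """Return the number that, added to `known`, gives `total`."""
    return round(total - known, 2)  # keep to hundredths

print(missing_addend(0.75, 0.3))   # 0.45, since 0.3 + 0.45 = 0.75
print(missing_addend(0.97, 0.45))  # 0.52, no regrouping needed
```
|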
This is the coastal region’s largest living sea snail, with a shell diameter that can reach 14 cm. The shell is yellowish-white to pale brown in color, almost round and quite large. Its life expectancy is 2 to 3 years.
Moonsnails glide on a very large, mucus-covered foot which, when fully extended, can be up to 30 cm long. When this fleshy foot is extended, it will nearly cover the snail’s shell. The animal can discharge water, which allows it to shrink, slide into its shell and seal the opening by closing its operculum. The snail cannot stay in the shell for long periods because it needs to breathe.
Moonsnails feed on clams, mussels, and other various mollusks, and sometimes they will even prey on their own species. They use their foot to clamp onto the clamshell and then using their tongue, they can drill a hole in the clam’s shell. The foot can form a siphon which they push through the hole and suck up the flesh of the clam.
This animal is quite unique in its reproduction. In late spring and early summer, the egg case of the Lewis Moonsnail can be found: a mixture of sand and mucus that forms a single gelatinous ribbon called a sand collar. In between the layers of this sand collar are thousands of eggs. As the sand disintegrates over a period of weeks, the larvae are released into the water column. The larvae move into deeper water and feed as herbivores on diatoms and sea lettuce for a while, then switch to shellfish as they grow. When wet, the collar remains quite rubbery and pliable, but it becomes brittle when it dries out. It is a fascinating snail to observe in the wild.
Lyme disease, tick-borne bacterial disease that was first conclusively identified in 1975 and is named for the town in Connecticut, U.S., in which it was first observed. The disease has been identified in every region of the United States and in Europe, Asia, Africa, and Australia.
Lyme disease is caused by several closely related spirochetes (corkscrew-shaped bacteria), including Borrelia burgdorferi in the United States, B. mayonii in the upper Midwestern United States, and B. afzelii and B. garinii in Europe and Asia. The spirochetes are transmitted to the human bloodstream by the bite of various species of ticks. In the northeastern United States, the carrier tick is usually Ixodes scapularis (I. dammini); in the West, I. pacificus; and in Europe, I. ricinus. Ticks pick up the spirochete by sucking the blood of deer or other infected animals. I. scapularis mainly feeds on white-tailed deer (Odocoileus virginianus) and white-footed mice (Peromyscus leucopus), especially in areas of tall grass, and is most active in summer. The larval and nymphal stages of this tick are more likely to bite humans than are the adult and are therefore more likely to cause human cases of the disease.
In humans, Lyme disease progresses in three stages, though symptoms and severity of illness vary depending on which type of Borrelia is involved. In B. burgdorferi infections, the first and mildest stage is characterized by a circular rash in a bull’s-eye pattern that appears anywhere from a few days to a month after the tick bite. The rash is often accompanied by flulike symptoms, such as headache, fatigue, chills, loss of appetite, fever, and aching joints or muscles. The majority of persons who contract Lyme disease experience only these first-stage symptoms and never become seriously ill. A minority, however, will go on to the second stage of the disease, which begins two weeks to three months after infection. This stage is indicated by arthritic pain that migrates from joint to joint and by disturbances of memory, vision, or movement or other neurological symptoms. The third stage of Lyme disease, which generally begins within two years of the bite, is marked by crippling arthritis and by neurological symptoms that resemble those of multiple sclerosis. Symptoms vary widely, however, and some persons experience facial paralysis, meningitis, memory loss, mood swings, and an inability to concentrate.
Because Lyme disease often mimics other disorders, its diagnosis is sometimes difficult, especially when there is no record of the distinctive rash. Early treatment of Lyme disease with antibiotics is important in order to prevent progression of the disease to a more serious stage. More powerful antibiotics are used in the latter case, though symptoms may recur periodically thereafter. |
(Dr Kelvin Kemm in SPPI) The latest world environment and climate change conference (COP-18) is taking place in Doha, Qatar. One of the prime issues under discussion is the attempt to force countries all over the world to adopt binding agreements to limit “carbon emissions.”

The term “carbon emissions” really refers to emissions of carbon dioxide gas – but “carbon” and “carbon dioxide” are two totally different things. Carbon is a solid (think coal and charcoal) and the central building block of hydrocarbons, whereas carbon dioxide is the gas that all humans and animals exhale and all plants require to grow. Without carbon dioxide, all life on Earth would cease.

It is thus not just silly to talk of “carbon emissions.” It is also simplistic and grossly inaccurate – except when referring to carbon particulate matter released during the combustion of wood, dung, hydrocarbons and other carbon-based materials. Saying “carbon emissions” also reflects the appalling lack of scientific knowledge so prevalent today. But never mind.

The real issue is that some people insist that increasing atmospheric carbon dioxide concentrations are leading to an increased greenhouse effect, which in turn is leading to dangerous global warming.

However, the graph of increasing atmospheric carbon dioxide over the last century fails to match the graph of measured temperature increases. In fact, average global temperatures have been essentially stable for 16 years, even as the carbon dioxide (CO2) level has continued to rise.

Henrik Svensmark and other scientists have shown that global temperature is much more accurately correlated to observed sunspot activity. Sunspots reflect solar activity, specifically the sun’s magnetic field, which affects the quantity of cosmic rays entering Earth’s atmosphere from outer space. That in turn is linked to the proposition that particles in the cosmic rays cause clouds to form, and varying cloud cover on earth has a great influence on global temperatures.

Fewer cosmic rays mean fewer clouds, more sunlight reaching the Earth, and a warmer planet. More cosmic rays mean more clouds, more reflected sunlight, and a cooler planet.

Indeed, historical sunspot records correlate quite well with warming and cooling trends on Earth, whereas carbon dioxide and climate trends do not correlate well – except in one respect. Warm periods are typically followed several centuries later by rising CO2 levels, as carbon dioxide is released from warming ocean waters, increasing terrestrial plant growth. Cooling periods eventually bring colder oceans, which absorb and retain greater amounts of CO2 – and less plant growth.
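One way to make this kind of "correlates well / fails to match" comparison concrete is to compute a correlation coefficient between two annual series. The sketch below shows only the mechanics; the numbers are invented and say nothing about the real climate record.

```python
import numpy as np

# Two made-up annual series, purely to demonstrate the computation.
temperature = np.array([0.12, 0.18, 0.15, 0.22, 0.20, 0.25, 0.24, 0.28])
driver      = np.array([55,   60,   52,   70,   66,   75,   72,   80])

r = np.corrcoef(temperature, driver)[0, 1]
print(f"Pearson r = {r:.2f}")  # r near +/-1 means the series track closely
```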
Thus the CO2 argument for global warming is very much in doubt – whereas there is a very viable, and more plausible, alternative.

However, CO2 is largely produced by automobiles and electricity generating power stations, which burn the fossil fuels so loathed by Deep Ecology environmentalists. That makes these energy, transportation and economic development sources the target of “carbon emission” reduction schemes.

I was a delegate at COP-17 in Durban, South Africa in 2011. As a scientist and resident of Africa, I walked around the Africa pavilion, discussing these issues and gauging the opinions of many people from African countries. To put it bluntly, the African representatives were not happy.

Their general feeling was that the First World is trying to push Africa around, bully African countries into accepting its opinions and, even worse, adopting its supposed “solutions.”

The “solutions” include moving away from fossil fuels and implementing supposed alternatives like wind, solar and biofuel power. Africans were unhappy about this. They still are. They can intuitively see that large-scale wind or solar power is not practical – and biofuels mean devoting scarce cropland, water and fertilizer to growing energy crops, instead of using the crops for food. What Africa needs now is abundant, reliable, affordable electricity and transportation fuel, which means producing more of the Earth’s still abundant oil, coal and natural gas.

It is all well and good if highly variable, expensive wind power makes up ten percent or less of an already industrialized nation’s enormous electricity supply. If it varies significantly, or fails entirely, even on the hottest and coldest days (as it is prone to do), the loss of ten percent is not a disaster.

But First World countries have been telling poor African countries to base their futures on wind power as major portions of their national supplies.

What this implies is that, if the wind power fails, whole sections of a country can grind to a halt. “Oh, no problem,” say climate campaigners. “Just install a smart grid and longer transmission lines, so that when wind is blowing somewhere in the country the smart grid will do all the fancy switching, to make sure electricity flows to critical functions.” In theory, maybe.

But meanwhile, in the real world, in August 2012, industrialized Germany’s wind power was under-performing to such a degree that the country decided it must open a new 2,200-megawatt coal-fired power station near Cologne – and announced the immediate construction of 23 more!

Moreover, installing a smart grid assumes that the country concerned wants to develop a major complex national grid – and has the money to do so – or has one already. Bad assumption.

Africa is huge. In fact, Africa is larger than China, the United States, Europe and India added together.
So it’s a mistake to assume African countries will want to implement major national grids, following European historical examples – or will be able to, or will have the vast financial and technical resources to do so, or will have the highway or rail capability to transport all the necessary components to construct thousands of miles of transmission lines.

Even in the USA, the electricity system in the state of Texas is not connected to the rest of the country, and the issue of building thousands of miles of new transmission lines and smart grids is generating controversy and serious funding questions.

In South Africa we already run major power lines, for example from Pretoria to Cape Town, which is the same distance as Rome to London. We need to ask: Is it wise to keep doing this, or should smaller independent grids be developed as well? If compulsory carbon emissions limits come into force, will this limit African economic growth and African electricity and transportation expansion?

Should Africans be told to “stay in harmony with the land” – and thus remain impoverished and wracked by disease and premature death – by continuing to live in an underdeveloped state, because a dominant First World bloc believes its climate alarmism is correct, suppresses alternative evidence, and is more than willing to impose its views on the poorest, most politically powerless countries?

The promised billions in climate change “mitigation” and “reparation” dollars have not materialised yet, and are unlikely to appear any time soon. Even worse, the energy, emission and economic growth restrictions embodied in the proposed climate agreements would prevent factories and businesses from blossoming, perpetuate poverty, limit household lighting and refrigeration, and impede human rights progress on our continent.

Africa should resist the psychological and “moral” (actually immoral) pressure being exerted on it to agree to binding limits on carbon dioxide emissions. Any such agreement would place African countries at the mercy of bullying First World countries, put them in a crippling emissions arm lock, and bring no health, environmental or other benefits to Africa.

Dr. Kelvin Kemm is a nuclear physicist and business strategy consultant in Pretoria, South Africa. He is a member of the International Board of Advisors of the Committee For A Constructive Tomorrow (CFACT), based in Washington, DC (www.CFACT.org) and received the prestigious Lifetime Achievers Award of the National Science and Technology Forum of South Africa.
Anyone who dives will probably already know that rays are related to sharks – although they look nothing like them – and that rays fall loosely into two groups: those that sting in some way (electric rays and stingrays) and those that don't, such as mobula and manta rays.
Stingrays have a barbed stinger, and despite the negative publicity at times, these rays tend to be placid creatures that avoid contact with others. They hide under wrecks, inside caverns or on the reef and only ever sting in self-defense. Depending on the species, the stinger can be up to 35 centimetres long and has grooves that contain venom glands. Some stingrays have more than one stinger.
Electric rays are a different shape from stingrays, with a stubby tail and extended, flat pectoral fins containing a pair of kidney-shaped organs. These generate an electric current, which varies from 8 volts up to 220 volts and is used as a defense mechanism and to immobilize prey. The shock is sometimes strong enough to stun humans, and the ancient Greeks and Romans used these fish to treat a variety of ailments.
Young preschoolers are brimming with energy. That's a good thing in terms of physical development, because it's the repeated movement of large and small muscle groups that builds and refines how well these parts of the body work.
Large motor skills (or gross motor skills) develop first. That's why 2-, 3-, and 4-year-olds tend to do more running, jumping, reaching, and wiggling than sitting still when using their hand muscles for, say, drawing or for manipulating small toys. But it's a good idea to spend time at both kinds of activities.
Activities to boost physical development in preschoolers
Here are some ways to boost your young preschooler's physical development:
- Take family walks. Alternate walking, running, jogging, and marching. Play "I Spy" or start a collection of feathers or leaves as a diversion while you walk. Indoors, lead a parade with musical instruments or flags.
- Encourage sandbox time. Fill the box with sand toys that encourage manipulation.
- Water play in the backyard. A paddle pool, sprinkler, or running hose all encourage splashing, running, and touching. (Always supervise your child around water.)
- Make an obstacle course in your living room or backyard, consisting of cushions, cardboard boxes, toys, or other found objects that your child can run around and climb over.
- Play pretend games. Animals are a young child's favorite: "Can you walk like a chicken? Gallop like a horse? What does a puppy do?" Or encourage your child to "fly" through the yard like an airplane or row a boat across the room.
- Introduce different kinds of tag at playdates: Play freeze tag, for example.
- Play ball. Games that involve kicking, throwing, and catching are great practice. Try not to get overly elaborate about rules in the preschool years.
- Dance to the music. Expose your child to different styles of music. Playing musical instruments boosts physical development, too. Or share tunes with physical movements, like "I'm a Little Teapot." Many familiar songs emphasize fine-motor skills through finger play, such as "Patty Cake" and "Itsy Bitsy Spider."
- Place a string on the ground and pretend it's a tightrope or a pirate ship's plank to develop balance.
- Wash the car, bikes, dog – anything involving suds and water is energizing fun. Blow bubbles and let your child try to catch them.
- Introduce games from your childhood. Everything's new to your child: "Ring around the rosy," "Red light, green light," "What time is it, Mr. Fox?"
- Put on a puppet show. Make sock or finger puppets or use toys, crouching behind a table with your child.
- Build fine motor skills in ways that go beyond the art table. Help your child draw a village with sidewalk chalk. Use sticks to trace letters in the dirt outside, or indoors in flour or cornmeal. |
Sanfilippo Syndrome is caused by a defect in a single gene. It is an inherited disease of metabolism, meaning the body cannot properly break down long chains of sugar molecules called mucopolysaccharides or glycosaminoglycans (GAGs). Sanfilippo syndrome belongs to a group of diseases called mucopolysaccharidoses (MPS); specifically, it is known as MPS III. Sanfilippo Syndrome occurs when the enzymes the body needs to break down heparan sulfate (HS) are absent or defective. When HS is not broken down, the body does not release it. Instead, it is stored inside the lysosomes of every cell in the body.
This is why Sanfilippo Syndrome is classified as a Lysosomal Storage Disease (LSD). As the GAGs accumulate, they damage the cells they are stored in. This leads to the progressive degeneration of the central nervous system.
To date, there are four types of Sanfilippo syndrome. They are distinguished by the enzyme that is affected.
- Sanfilippo Type A: heparan N-sulfatase. Estimated incidence rate is 1 in 100,000 live births.
- Sanfilippo Type B: alpha-N-acetylglucosaminidase. Estimated incidence rate is 1 in 200,000 live births.
- Sanfilippo Type C: acetyl-CoA:alpha-glucosaminide acetyltransferase. Estimated incidence rate is 1 in 1,400,000.
- Sanfilippo Type D: N-acetylglucosamine 6-sulfatase. Estimated incidence rate is 1 in 1,100,000.
Combined, the four types of Sanfilippo present in approximately 1 in 70,000 births.
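Treating the four types as independent, the combined incidence is just the sum of the per-type rates. A quick check lands near, though not exactly at, the cited 1 in 70,000 (the per-type figures are themselves rounded estimates):

```python
rates = {
    "A": 1 / 100_000,
    "B": 1 / 200_000,
    "C": 1 / 1_400_000,
    "D": 1 / 1_100_000,
}
combined = sum(rates.values())
print(f"Combined: about 1 in {1 / combined:,.0f} births")
# -> about 1 in 60,000 births
```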
Sanfilippo is an insidious disease that often goes undetected for years. Most children are born with no visible signs that anything is wrong. It’s not until the preschool years that children start to show delays; even then, the disease is often misdiagnosed. Highly specialized and focused testing must be done in order to diagnose Sanfilippo.
Sanfilippo is progressive and can be broken down into stages.
First stage: The affected child will display delayed speech as well as mild facial abnormalities and behavioral issues, and is often misdiagnosed as autistic. Affected children are prone to frequent sinus infections, ear infections, and chronic diarrhea. They may have cavities or chipped teeth from weak enamel, and headaches from accumulated fluid pressure on the brain. Children may seek sensory input, demonstrated by a craving for vestibular stimulation. Minor bone deformities like a raised sternum and flared ribs are quite common. Children have large head circumferences due to a skull deformity known as frontal bossing.
Second stage: The affected child will become overly active and is often diagnosed with ADHD or extreme oppositional defiant disorder. They’re restless, suffer from sleeplessness and exhibit difficult behavior. Irrational fears are common – or they may seem to have no fear at all, yet will run into the street to avoid walking by a dog on the sidewalk. Many children are compelled to chew on things, which doctors may diagnose as a sensory processing disorder. They may experience major temper tantrums accompanied by inconsolable behavior and compulsive behavior: grabbing at people or items, screaming for no apparent reason, laughing fits. Some children have seizures; others have visual and hearing problems. Over time, speech loss occurs and communication skills decline, along with cognitive regression and loss of motor skills.
Third stage: The disease will take its ultimate toll. The child will lose the ability to walk, talk and eat on his own while his body shuts down. Death may occur as early as age three. More common, however, are children who live into their early teens. Children often succumb to pneumonia or other types of infection. A few cases of attenuated forms of Sanfilippo have been reported in which patients live into the third or fourth decade of life, but with a poor quality of life.
The activities you will find in this platform are free, practical, aligned to the core subjects, multidisciplinary, inclusive, easy to implement and beautifully designed. Every detail has been designed and planned with you in mind and with our love and respect for you. Your work of inspiring children, day by day, and empowering our future entrepreneurs, scientists, politicians and decision makers, is one of the most important in society. That is why we want to support you and make you fall in love with environmental education and social and emotional learning.
We are committed to breaking any barrier to access the necessary information to educate new generations to coexist with nature and each other. The well-being and survival of humanity depends on it.
P.S. Our educational platform is originally intended for Spanish speaking teachers. The ESL lessons you can find here are a gift from Guardians of Nature and educators in Costa Rica. They have been developed by public school teachers, based on the national English curriculum from the Ministry of Education. They were planned for students from preschool to adults.
This is a collection of stories narrated by children who live, or have lived, in different parts of the world and who, due to particular circumstances, have had special experiences with access to water.
1. Name some common living and non-living things in familiar environments. 2. Identify the difference between living and non-living things. Essential scenario: Humans, animals, and plants are living things that need each other.
1. Recognize specific information about wild animals and their habitats. 2. Respond in predictable patterns to simple questions about familiar things. ESSENTIAL QUESTION: How does nature help us? Humans, animals, and plants are living things that need each other.
1. Understand the important information in simple, clearly drafted printed materials. 2. Sustain a conversational exchange in the classroom. 3. Express opinions about the fragile world, welcoming questions and others’ opinions. ESSENTIAL QUESTION: Why is it important to focus on sustainable development?
1. Understand the main point of a video. 2. Classify materials as recyclable and nonrecyclable using illustrations. 3. Indicate which items can be recycled, reused, and reduced. Essential Question: Why is it important to focus on sustainable development?
1. Understand simple explanations if given slowly and clearly. 2. Engage in the writing process for simple publications. Essential Question: What are rainforests and what happens if they disappear?
1. Identify the main idea of audio text about our sustainable world. Essential Question: Why is it important to focus on sustainable development?
1. Describe places and things. 2. Extract information from texts to answer questions. 3. Express social responsibility in actions. Essential question: What are rainforests and what happens if they disappear? |
How to Teach Cursive Writing
We often take the things we learned while young for granted. Reading and writing are two things that all of us do on a daily basis. When you've been doing something for a very long time without having to think about it, teaching a beginner how to perform this function can be somewhat challenging. Writing in cursive is something that most of us learned many, many years ago. Showing a young child or an adult how to do so can be frustrating for the student if they are not being taught step-by-step. Here is how to teach cursive writing.
Begin by purchasing or printing “three-lined” paper with examples of cursive writing. These are easy to find at any school supplies store. There is also an excellent Web site that you can use to print out numerous pages and workbooks for your students (see Resources below).
2 Teach the letters
Teach the letters that are similar in print and cursive before moving on to those that are different. “A” and “C” are two letters practically identical in both print and cursive. Demonstrate the slight differences in these letters before moving on to those that are completely different, such as lower case “r” or upper case “G.”
3 Use the three-lined paper
Use the three-lined paper to demonstrate when letters should begin and end. For example, when writing a lower case “n,” the letter should begin on the bottom line and the “tail” of the letter should end before the middle line. Many instructors use dots as a way to teach cursive writing. This way, students can simply connect those dots as a way to learn the letters. You may actually find that some of the three-line paper you purchased or printed has those exact dots.
4 Emphasize to your students
Emphasize to your students that the pen or pencil should never be lifted from the paper until the word has been completed. This is probably the most difficult thing for a person learning how to write in cursive for the first time. Some teachers will use popular letter combinations to demonstrate this, such as “br,” “ng” and “qu.” You could also use smaller words, such as “to,” “ma” and “pa” as examples.
5 Remind your student
Remind your student(s) that there are some letters that do not connect to the rest of the word. Capital “T” is an example of this. You will want to inform your students of this fact before moving on to whole words as this may confuse them when they attempt to write a sentence or paragraph in cursive.
6 Have the person
Have the person you're teaching write his or her name in cursive repeatedly as practice. Our signatures are the thing we will write in cursive the most throughout our lifetime. It's important that we know how to do so properly. This is also a great way to have your students learn different letter combinations, especially if the person has a particularly long name.
- Be patient with your student. Teaching somebody to do something that you've been doing for decades can cause a person to unintentionally rush through a lesson or even be condescending. The reason this takes place is because the task is naturally very “easy” to us. Try to remember that there was a time when you didn't know how to write in cursive. A teacher or parent took the time to teach you. You need to be the same way. |
If you’re looking for a reason to care about tree loss, the nation’s latest heat wave might be it. Trees can lower summer daytime temperatures by as much as 10 degrees Fahrenheit, according to a recent study.
But tree cover in US cities is shrinking. A study published last year by the US Forest Service found that we lost 36 million trees annually from urban and rural communities over a five-year period. That’s a 1% drop from 2009 to 2014.
If we continue on this path, “cities will become warmer, more polluted and generally more unhealthy for inhabitants,” said David Nowak, a senior US Forest Service scientist and co-author of the study.
Nowak says there are many reasons our tree canopy is declining, including hurricanes, tornadoes, fires, insects and disease. But the one reason for tree loss that humans can control is sensible development.
“We see the tree cover being swapped out for impervious cover, which means when we look at the photographs, what was there is now replaced with a parking lot or a building,” Nowak said.
More than 80% of the US population lives in urban areas, and most Americans live in forested regions along the East and West coasts, Nowak says.
“Every time we put a road down, we put a building and we cut a tree or add a tree, it not only affects that site, it affects the region.”
The study placed a value on tree loss based on trees’ role in air pollution removal and energy conservation.
The lost value amounted to $96 million a year.
Nowak lists 10 benefits trees provide to society:
Heat reduction: Trees provide shade for homes, office buildings, parks and roadways, cooling surface temperatures. They also take in and evaporate water, cooling the air around them. “Just walk in the shade of a tree on a hot day. You can’t get that from grass,” Nowak said. To get the full temperature benefit, tree canopy cover should exceed 40% of the area to be cooled, according to a recent study in the Proceedings of the National Academy of Sciences. “A single city block would need to be nearly half-covered by a leafy green network of branches and leaves,” the authors wrote. (See the sketch after this list.)
Air pollution reduction: Trees absorb carbon and remove pollutants from the atmosphere.
Energy emissions reduction: Trees reduce energy costs by $4 billion a year, according to Nowak’s study. “The shading of those trees on buildings reduce your air conditioning costs. Take those trees away; now your buildings are heating up, you’re running your air conditioning more, and you’re burning more fuel from the power plants, so the pollution and emissions go up.”
Water quality improvement: Trees act as water filters, taking in dirty surface water and absorbing nitrogen and phosphorus into the soil.
Flooding reduction: Trees reduce flooding by absorbing water and reducing runoff into streams.
Noise reduction: Trees can deflect sound, one reason you’ll see them lining highways, along fences and between roads and neighborhoods. They can also add sound through birds chirping and wind blowing through leaves, noises that have shown psychological benefits.
Protection from UV radiation: Trees absorb 96% of ultraviolet radiation, Nowak says.
Improved aesthetics: Ask any real estate agent, architect or city planner: Trees and leaf cover improve the looks and value of any property.
Improved human health: Many studies have found connections between exposure to nature and better mental and physical health. Some hospitals have added tree views and plantings for patients as a result of these studies. Doctors are even prescribing walks in nature for children and families due to evidence that nature exposure lowers blood pressure and stress hormones. And studies have associated living near green areas with lower death rates.
Wildlife habitat: Birds rely on trees for shelter, food and nesting. Worldwide, forests provide for a huge diversity of animal life.
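As a back-of-envelope check of the “nearly half a city block” figure from the heat-reduction item above, assume a hypothetical 100 m by 100 m block; only the 40% threshold comes from the cited study.

```python
block_area_m2 = 100 * 100   # hypothetical city block
canopy_fraction = 0.40      # threshold reported in the PNAS study

canopy_needed_m2 = canopy_fraction * block_area_m2
print(f"Canopy needed: {canopy_needed_m2:,.0f} of {block_area_m2:,.0f} m2")
# -> 4,000 of 10,000 m2, i.e. nearly half the block
```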
Planning for trees
Nowak says there’s a downside to trees too, such as pollen allergies or large falling branches in storms, “and people don’t like raking leaves.” But, he says, there are ways cities and counties can manage trees to help communities thrive. “You can’t just say ‘we’re not going to have forests.’ We might as well manage and work with the trees.
“You don’t want a tree in the middle of a baseball field. It’s very difficult to play sports if you have trees in the way. Or trees in the middle of freeways.”
Nowak says we can design and manage tree canopies in our cities to help “affect the air, to affect the water, to affect our well-being.”
How you can help stop tree loss
Protect what you have: Nowak says the first step is caring for the trees on your own property. “We think we pay for our house, and so we must maintain it. But because we don’t pay for nature, we don’t need to. And that’s not necessarily true.”
Prune the dead limbs out of your trees: If they’re small enough, do it yourself or hire a company. The risk of limbs damaging your house is significantly lowered when there’s tree upkeep, Nowak said.
Notice where your trees may be in trouble: Often, you can observe when something’s wrong, such as when branches are losing leaves and breaking or when mushrooms are growing at the base or on the trees. You can also hire an arborist or tree canopy expert to assess the health of your trees on an annual basis. Or you can contact your local agricultural extension office for advice.
Don’t remove old trees if it’s not necessary: Instead, try taking smaller actions like removing branches. “It takes a long time for these big trees to get big: 50 to 100 years. And once they’re established, they can live a long time. But taking a big tree out and saying ‘we’ll replant,’ there’s no guarantee small trees will make it, and it will take a very long time to grow.”
Allow trees to grow on your property: Although everyone’s aesthetic is different, it’s the cheap way to get cooler yards and lower energy bills. It’s also an inexpensive approach to flood and noise control.
Nowak says he laughs when his neighbors wonder why their property doesn’t have more trees, because “I hear people running their lawn mowers.” Fallen seeds need a chance to implant, and constant mowing prevents that. If you don’t like where a seedling is growing, you can dig it up and plant it or a new tree where you like.
Educate yourself about trees and get involved: Many cities have tree ordinances that seek to protect very old, significant trees. You can get involved by attending city council meetings. You can also help your city plant trees by joining local nonprofit groups.
Volunteer or donate to tree planting and research organizations:
- Arbor Day Foundation
- National Forest Foundation
- Trees Atlanta
- ReLeaf Michigan
- Urban ReLeaf
- Sustainable Urban Forests Coalition |
- Scientific Revolution - Generally placed from the sixteenth through the late eighteenth centuries, the scientific revolution was a time of paradigm shifts in how the universe was understood. Rather than continuing the use of ideas from the Middle Ages, new ideas circulated that were based on mathematics. The scientific revolution is often known for reconstructing the idea of the universe, but its basis in mathematics spread even to religion, taking hold in the form of deism.
- Nicolaus Copernicus - (1473-1543) A Polish astronomer who challenged the existing theory of Ptolemy. He suggested the heliocentric theory in his work On the Revolutions of the Heavenly Spheres (1543), which was not a revolutionary idea, but rather revolution-making: it provided a starting point for criticism of the popular view of the earth's place in the universe and another way to attack the difficulties in Ptolemaic astronomy.
On the Revolutions of the Heavenly Spheres
- On the Revolutions of the Heavenly Spheres - (1543) Text published by Nicolaus Copernicus that challenged Ptolemaic astronomy by suggesting the earth moved around the sun (the heliocentric theory). This was not a new idea, but rather a starting point for criticism of the problems associated with the Ptolemaic theory. In fact, the ideas in this text were no more accurate than the Ptolemaic theory.
- Ptolemy - Before the scientific revolution, astronomers believed that the universe ran on a geocentric model; that is, the earth was the center of the universe. This concept was derived from Ptolemy, an ancient mathematician and astronomer of the Roman Empire. His ideas are referred to as the Ptolemaic System, on which astronomers made calculations premised on the earth lying beneath spheres that contained the planets and stars. The Ptolemaic System was eventually challenged by Copernicus' views and soon replaced entirely.
- Tycho Brahe - (1546-1601) After Copernicus, he took the next major step toward the conception of the sun-centered system. However, he didn't advocate Copernicus' view of the universe and spent most of his life supporting the earth-centered system. He suggested that the moon and the sun revolved around the earth and that the other planets revolved around the sun. When he died, his vast body of astronomical data came into the possession of his assistant, Johannes Kepler (below).
- Johannes Kepler - (1571-1630) A German astronomer and the assistant of Tycho Brahe (above), Kepler was a rigorous advocate of the Copernican heliocentric theory of the universe. He was determined to find mathematical harmonies in Brahe's numerical data that supported a sun-centered universe. Kepler discovered that to keep the sun at the center of things, he would have to abandon the circular components of Copernicus's model, particularly the epicycles; Brahe's observations suggested that the motions of the planets were elliptical. He published his findings in The New Astronomy (1609). Using Copernicus' sun-centered universe and Brahe's empirical data, he solved the problem of planetary motion, while also defining new problems: Why were the planetary orbits elliptical, and why was planetary motion orbital rather than simply moving off along a tangent?
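Kepler's resolution is summarized today as his three laws of planetary motion; the third (shown here in modern notation, which the note above does not give) relates a planet's orbital period T to the semi-major axis a of its elliptical orbit:

```latex
T^2 \propto a^3
```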
- Galileo Galilei - (1564-1642) An Italian mathematician and natural philosopher who discovered many new objects in space using the recently invented telescope. In the Starry Messenger (1610) and Letters on Sunspots (1613), he argued that his newly observed physical evidence required a Copernican interpretation of the heavens. His career illustrates that the forging of the new science involved more than just the presentation of arguments and evidence. He not only popularized the Copernican system, but also articulated the concept of a universe subject to mathematical laws.
Dialogue on the Two Chief Systems of the World
- Dialogue on the Two Chief Systems of the World - (1632) A book written by Galileo with the permission of Pope Urban VIII. The book defended the physical truthfulness of Copernicanism. The voices in the dialogue favoring the older system appeared slow-witted, and those voices presented the views of Pope Urban VIII. Feeling both humiliated and betrayed, the pope ordered an investigation of Galileo's book, later requiring him to abjure his views.
Decline and Fall of the Roman Empire
- Decline and Fall of the Roman Empire - Written by Edward Gibbon and published in six volumes between 1776 and 1789, The History of the Decline and Fall of the Roman Empire used primary-source evidence to tell the story of the fall of the Roman Empire. Behind this historical veil, however, the work examined Christianity's rise as a political phenomenon, indirectly contradicting the Church's teaching of the divine establishment of the "proper Church to save humanity." In this work, Gibbon also praises Muhammad's success in establishing the religion of Islam.
- Cesare Beccaria - An Italian philosophe whose Enlightenment ideas influenced criminal law and practical reform. His On Crimes and Punishments, published in 1764, critically analyzed the fine balance between effective and just punishments. He considered punishment's sole role in the legal system to be deterring others from further crime through fear. Beccaria attacked punishments that are overly harsh for the sake of justice, particularly condemning capital punishment and torture. He supported a noble monarchy, but urged that the monarchy be rational and work for the happiness of the people.
- Isaac Newton - Described universal gravitation and the three laws of motion. Newton showed that the motions of objects on Earth and of celestial bodies are governed by the same set of natural laws by linking Kepler's laws of planetary motion and his own theory of gravitation, which ended the last doubts about heliocentrism.
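For reference, the second law of motion and the law of universal gravitation are usually written today as follows (modern notation, not Newton's original geometric presentation), where m1 and m2 are two masses, r is the distance between them, and G is the gravitational constant:

```latex
F = ma, \qquad F = G \, \frac{m_1 m_2}{r^2}
```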
Principia - published in 1687, this book states Newton's three laws of motion and the mathematical methods Newton used to discover them. |
The presidency of Andrew Jackson is typically associated with the American expansionism that furthered our democracy, but often at a high cost to Native American cultures. Could similar outcomes have been achieved differently? Historians debate whether the Civil War could have been avoided, why attempts to avert war failed, and which individuals had the greatest potential ability to divert the nation’s path away from violent conflict. This book examines these historical questions regarding the unfolding of American history through an introduction to carefully edited primary documents relevant to the period, from the inauguration of President Andrew Jackson through that of Abraham Lincoln.
These documents include not only major state papers from the legislative, executive, and judicial branches, but also primary sources that directly communicate the concerns of African Americans, women, and Native Americans of the period. Important themes include the rising controversy over slavery, American expansionism, and attempts to avert crises through compromise. High school and college students and patrons of public libraries seeking to better understand American history will profit from the introductions and annotations that accompany the primary documents in this book—invaluable resources that put the information into context and explain terms and language that have become outdated.
- Provides readers with a clearer understanding of why President Andrew Jackson was such a controversial figure
- Supplies historical context for explaining the causes and effects of American westward expansionism, especially as they related to slavery
- Shows how arguments for women's rights emerged along with those of the rights for African Americans
- Impartially presents the arguments both for and against slavery and states' rights that led up to the American Civil War |
Teaching in the Field
Field trips often form the social backbone of geoscience departments, bringing students and faculty together to learn. A field trip can be the highlight of an elementary student's year. Similarly, field trips are an integral part of the professional geoscientist's ongoing professional development, providing opportunities to see new and familiar field areas through the eyes of our colleagues and to wrangle over their interpretation. Participation in field trips builds new collegial relationships and brings together people from across states, regions, nations and the world who share common interests.
These pages aim to:
- enable NAGT sections to learn from one another in order to elevate the quality of their field offerings around the country,
- promote models for effective educational field trips to geoscientists around the world,
- and provide an archive of field guides furthering the ability of K-12 teachers, faculty, community groups, and others to lead scientifically accurate, pedagogically effective field trips.
Field Trip Collection
This collection features information about field trips organized for a variety of purposes across the country. The collection aims to share information about the design of various kinds of field trips as well as to provide easy access to field guides. Populating the collection is underway, and we will continue to highlight new additions.
You can also browse and search through the entire collection of field trip examples.
If you have a field trip example you would like to share in the collection, check out the Field Trip Submissions page and learn how to upload your activity.
Published NAGT Field Guides
- Pacific Northwest Section Guidebooks
- Far West Section Publications
Other NAGT Resources on Teaching in the Field
- Safety in the Field: Learn about safety and liability issues that educators need to think about before taking students into the field.
- Special Issue of JGE: Teaching in the Field, March 2006
- Strategies for Successful NAGT Field Trips
- 2004 Southwest Section Field Conference
Links to Useful Resources
- Teaching in the Field Topical Site from On the Cutting Edge
- Using Field Labs from Starting Point-Teaching Entry Level Geoscience
Including an activity on Making a Soil Monolith with a video of the process.
- Using Field Observations and Field Experiences to Teach Geoscience: An Illustrated Community Discussion from the NAGT-sponsored "On the Cutting Edge" program for faculty professional development in the geosciences.
- Field Notes: these notes provide instructors with helpful tips for a successful field trip, including research-based information on overcoming students' barriers to learning in the field.
- The Montana-Yellowstone Geologic Field Guide Database from Integrating Research and Education: Moving Research Results into Geoscience Courses
- Geologic Guidebooks of North America Database: This database from AGI and the Geoscience Information Society contains bibliographic references and locations for published field guides.
- A searchable collection of references and resources on field-based learning from the Synthesis of Research on Learning in the Geosciences
- Field Team Leadership: Strategies for Successful Field Work: This web article was written by Erin Pettit of the Department of Geology and Geophysics at the University of Alaska Fairbanks. She offers valuable advice and tips for field team leaders as well as participants, which she sums up in the statement: "Happy, comfortable, safe people make for great scientific results."
Teaching in the Field is supported by the National Science Foundation (GEO 0507394).
Disclaimer: Any opinions, findings, conclusions or recommendations expressed in this website are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. |
By Susan P. Limber, PhD (Professor of Psychology, Clemson University)
In recent years, adults have had a lot to say about bullying. Members of the press have produced thousands of news articles and reports about bullying. Legislators in 49 states have written and rewritten laws requiring school districts to develop policies about bullying. And researchers, like myself, have published hundreds of articles and books each year about the nature of bullying, its prevalence, the effects it has on kids and schools, and ways to best address it.
Talking about bullying is important. For too many years, we adults were strangely silent about this issue. But amidst all of the talk among adults, it’s important not to lose the voices of kids themselves. What do they have to tell us about their experiences with bullying? How do they feel about bullying? How do they react to it? How do they see others responding to bullying?
Recently, in an effort to understand how boys and girls in elementary, middle, and high school grades experience and view bullying, my colleagues, Dan Olweus, Harlan Luxenberg, and I analyzed surveys from 20,000 3rd-12th graders from schools across the U.S. that had not yet implemented the Olweus Bullying Prevention Program [link to the full report]. Here’s some of what we found:
Bullying is serious. This seems obvious, but responses to our survey were sobering regarding the numbers of kids who were involved in bullying, the duration of bullying that many had endured, and the fear it caused.
- One out of every five students said they were involved in bullying (as one who bullies, one who is bullied, or both) 2-3 times a month or more often.
- Among those who said they were bullied, one-quarter had been bullied for several years. Although any amount of bullying is too much and may be extremely painful, one can only imagine that bullying that lasts several years is agonizing.
- Fourteen percent of students said they are often afraid of being bullied at school. This fear undoubtedly makes it hard to focus on lessons and perform to the best of one’s abilities.
Adults hold some misconceptions about bullying.
- The most common forms of bullying that students experienced weren't cyberbullying or even physical bullying–they were verbal bullying, rumor-spreading, and social exclusion. Although some media accounts may lead us to believe that cyberbullying is an epidemic, students in our study confirm that electronic forms of bullying, however troubling, are not as common as other more "traditional" forms of bullying.
- Many adults believe that they are very much aware of bullying that goes on in their schools and communities, but fewer than one in six bullied students in our study had told an adult at school about being bullied, and a disturbing number—30% of bullied high school girls and 41% of bullied high school boys–had not told anyone about being bullied.
Most students feel sorry for bullied peers, but empathy often doesn’t translate to action.
- Nine out of 10 students reported that they felt sorry for kids their age who were bullied at school.
- Did they try to stop bullying? Although 70% of elementary-aged students said they tried to help out if a peer was bullied, these numbers dropped quickly in middle school, where fewer than half reported trying to help. School and community-based efforts to increase witnesses’ comfort level to support bullied students, report bullying to adults, and speak out against bullying are clearly needed.
Our findings, and those of others, show that bullying is still a major issue facing children and youth. We need to take their views seriously as we work to reduce the impact of this public health problem.
Susan P. Limber, PhD
Professor Susan Limber's research and writing have focused on legal and psychological issues related to youth violence, child protection, and children's rights. She was the 1997 recipient of the Saleem Shah Award from the American Psychology-Law Society for early career excellence in law and policy. In 2000, she was named Researcher of the Year by the South Carolina Professional Society on Abuse of Children. Prof. Limber's work on prevention of bullying has been recognized as exemplary by three federal agencies, and it has served as the basis for the federally funded design of a national public information campaign. In further recognition of this work, Prof. Limber received the APA's Award for Distinguished Contributions to Psychology in the Public Interest in 2004. Prof. Limber's consultation on the media campaign also was recognized with a National Telly Award and an Award of Excellence from the National Association of Government Communicators. She is a past chair of the APA Committee on Children, Youth, and Families. |
The lungs make up one of the largest organs in the body and work as part of the respiratory system to bring fresh air into the body and expel the stale air.
Your lungs are in your chest and take up most of the room within the rib cage, but interestingly enough, they are not the same size. The left lung is slightly smaller than the right lung because room is needed for the heart, which also sits on the left side of your chest. The rib cage that protects the lungs is made up of 12 pairs of ribs, which surround the internal organs of the chest. Below the lungs is the diaphragm, the muscle that works the lungs and allows the inhalation and exhalation of air into and out of the body.
At the base of the windpipe there are two large tubes that connect to each lung. Once they reach each lung, these tubes branch into smaller tubes, which in turn branch off again, rather like tree roots, until the final tubes are no thicker than a human hair. These tubes are called bronchi, hence the term bronchitis, a condition affecting the lungs.
Inhaling and Exhaling
You inhale without having to think about it, but inhaling involves many parts of the body working together. When you breathe in, your rib cage expands and the diaphragm contracts, which allows air to be drawn into the lungs. The air passes down your windpipe and is cleaned by small hairs called cilia, which trap dirt and other particles so that they don't enter the lungs. This air then passes through the bronchi, delivering the oxygen-rich air to the alveoli, which remove the oxygen from the air and pass it into the bloodstream.
The oxygen then joins the red blood cells, which carry it to the heart so it can be pumped to the rest of the body. Exhaling is just the reverse. The diaphragm moves up and expels air from the lungs. The alveoli also remove carbon dioxide from the blood and pass it back into the airways, from which it is expelled when we breathe out.
Your lungs also allow you to make sounds: to talk, shout, laugh and all the other sounds you make. Above the windpipe is the voice box, or larynx, which contains two ridges, called vocal cords, that open and close to make sounds. When the vocal cords are closed, they vibrate and a sound is made.
When we exercise, our bodies use up more oxygen than when we are resting, and as the oxygen comes from the lungs, the lungs have to work harder to deliver that oxygen to the heart when we stress our bodies through exercise.
We all know the feeling of being out of breath, which results from the body using more oxygen than the lungs can deliver, and hence we get tired. However, through regular exercise, our lungs become more capable of working harder and can in turn deliver more oxygen to the heart, and hence we get fitter.
To look after your lungs, you should first of all not smoke, as smoke damages the cilia so the air reaching the lungs is no longer clean. The alveoli also get damaged and are less able to extract the oxygen from the air, and the healthy cells that line the inside of the lungs can become cancer cells. Worst of all, damaged lungs cannot be repaired.
Look after your lungs and they will look after you! |
We live in an ever-changing world: the rotation of our planet, the effects of tides and the patterns of winds and rainfall mean that each day tends to be different from the last. Human activity changes daily and over the seasons. This is reflected in variations in demand for energy at different time scales, and the availability of energy also varies, crucially in the case of renewable energy. This leads to intermittency in both the supply of and demand for energy.
The ASLEE project aims to use the production of micro-algae to smooth out the intermittencies of supply and demand by providing demand side management that can be used to match the patterns of intermittency coming from other users and from energy production. Algae use light to provide energy that in turn is used to fix carbon dioxide and turn it into sugars by the process of photosynthesis. Photosynthetic bacteria, algae and higher plants evolved on a planet where natural light levels show considerable intermittencies, caused by the daily patterns of night and day, seasonality and cloud cover. Algae in polar regions can go from months in near-total darkness to periods where they experience light 24 hours a day. Algae are well adjusted to deal with these fluctuations in their primary energy source, so there is good reason to believe that they will adjust to light intermittency when LED lighting is used for demand side management.
Of course, there is a potential cost: if algae do not receive light, respiration will deplete energy reserves and the algae will consume themselves and eventually starve. This is not a rapid process, however, so the question is more one of productivity than survival: just how much can the amount of light given to algae be varied before production becomes economically ineffective? The answer also depends on the value of the use of the algae in demand side management, through allowing renewable projects to be undertaken that could not otherwise happen due to grid constraint, or through income streams that become available through grid balancing. These strictly economic questions are being modelled as part of the ASLEE project by the University of the West of Scotland, but at Xanthella one of the tasks is to better understand the effects on the algae of the intermittency of light in the industrial production of algae.
For photosynthetic organisms like algae, light is energy, and so we might expect the growth of the algae simply to track the availability of light as a function of total energy where the other feedstocks (water, CO2 and nutrients) are not limiting. Thus algae that are given light over twenty-four hours might be expected to grow at twice the rate of algae that are given the same light intensity but only over twelve hours, mimicking a natural day-night cycle. However, the situation is considerably more complex than this due to a process known as photoinhibition.
Photosynthesis occurs in the chloroplast in algae and higher plants. Light is captured at the thylakoid membranes and the energy is used to produce NADPH and ATP, which in turn fuel the Calvin Cycle, where CO2 is converted into sugars. Photons are captured by molecular antennae in the thylakoid membranes, but this process damages the antennae, reducing their ability to capture more photons. At the same time, cellular repair mechanisms are fixing this damage. As light intensities increase, more damage occurs until a point is reached where the repair mechanisms cannot keep up with the rate of damage and the overall rate of photosynthesis drops. This is photoinhibition.
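One standard way to describe this rise-and-fall response is Steele's (1962) photoinhibition curve, P = Pmax (I/Iopt) exp(1 - I/Iopt). The sketch below is illustrative only: the formula is Steele's, but the parameter values are placeholder assumptions, not ASLEE measurements.

```python
import math

def steele(I, P_max=1.0, I_opt=250.0):
    """Steele's photoinhibition curve: production rises with light
    intensity I up to an optimum I_opt, then falls as damage to the
    photosynthetic antennae outpaces repair."""
    return P_max * (I / I_opt) * math.exp(1.0 - I / I_opt)

print(steele(125))   # below the optimum: ~0.82 of maximum
print(steele(250))   # at the optimum: exactly P_max
print(steele(1000))  # strongly photoinhibited: ~0.20 of maximum
```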
Complicating this further is the fact that individual micro-algae in a photobioreactor do not experience identical light levels except at low densities. As the culture increases in density, light penetrates less and less distance into the photobioreactor, giving a gradient where the light levels can be quite different over a few centimetres. The algae are also not in fixed positions, as the water within the photobioreactor is constantly circulating, so an individual micro-alga may move between very different light intensities every few seconds: from zones where it may be subjected to photoinhibition, into areas where there is insufficient light to maintain photosynthesis, and then into a "Goldilocks" zone where the light intensity is optimal for photosynthesis.
Increasing the light intensity in a photobioreactor will increase the zone within which photoinhibition might be expected to occur, but it will also mean that there are fewer areas where light is insufficient for photosynthesis. Changing the light intensity will therefore not necessarily relate directly to the growth of the algae, and we can expect to find plateaus of photosynthetic activity over which adding more light has little effect on the amount of algae produced (a toy illustration follows below). Similarly, if we make the light intermittent so that there are dark periods, this will allow the repair mechanisms to fix damage more quickly than if the algae were exposed to constant light within the photobioreactor.
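To see how such plateaus can emerge, one can combine a simple Beer-Lambert light-attenuation profile with the photoinhibition curve above and average production over the depth of the reactor. This is a toy model with assumed coefficients, not Xanthella's data or model:

```python
import numpy as np

# Beer-Lambert attenuation: light decays exponentially with depth z
# into a dense culture; k lumps together cell density and absorption.
def intensity(I0, k, z):
    return I0 * np.exp(-k * z)

# Steele's photoinhibition curve (see the sketch above).
def steele(I, P_max=1.0, I_opt=250.0):
    return P_max * (I / I_opt) * np.exp(1.0 - I / I_opt)

# Depth-averaged photosynthetic rate across the light path.
def mean_production(I0, k=20.0, depth=0.1, n=1000):
    z = np.linspace(0.0, depth, n)
    return steele(intensity(I0, k, z)).mean()

# Doubling the incident light does not double production: gains at
# the dim back of the reactor are offset by photoinhibition at the
# bright front, producing a plateau.
for I0 in (250, 500, 1000, 2000):
    print(I0, round(float(mean_production(I0)), 3))
```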
Xanthella are looking at the effects of both changing light intensity and changing the periodicity with which light is delivered to the algae. Initial results are very encouraging as to the potential of using light intermittency for demand side management of electricity use. A 15-hour illumination with 9-hour dark cycle was chosen as this matches the proposed availability of "free" electricity from the Ardnamurchan Estate biomass Combined Heat and Power (CHP) plant, which will run 24/7 but from which the electricity is only required during the working day. What we found was that there was no significant difference in either growth rate or biomass production over seven days when the light was given constantly or in a 15-hour light: 9-hour dark cycle. Increasing the amount of light given in the 15:9 cycle also had no significant effect.
The most efficient production was actually where light was given in the 15:9 condition without increasing the maximum light intensity to match the total amount of light given over 24 hours under constant illumination. This suggests that significant photoinhibition is occurring at the chosen light levels, but the important finding is that we can manipulate light levels (and thus the use of electricity) to a considerable extent and still achieve comparable results in terms of algal production.
We are now looking at other species and the effects of different patterns of intermittency, including rapid changes in illumination to mimic grid balancing activities. |
A dental condition in which the nerve or the dental pulp (the inner part of the tooth) becomes infected is termed an abscessed tooth. The bacterial infection in the inner part of the tooth leads to a collection of pus. Following a good oral hygiene regimen may help prevent a dental abscess.
Tooth abscess types
Periodontal: occurs in the supporting bone and tissue structure of the teeth.
Periapical: found in the dental pulp.
Gingival: occurs in gum tissue, does not affect tooth or periodontal ligament.
Abscessed tooth causes
An abscess typically develops when bacteria reach the dental pulp through severe tooth decay, gum disease, or a cracked or chipped tooth.
Abscessed tooth symptoms
Typical symptoms of an abscessed tooth include swelling, toothache, bad breath, inflammation of the gum tissue, tooth sensitivity and swollen neck glands. A dentist will physically examine the tooth and might order an X-ray.
Abscessed tooth treatment
The abscess is drained through a root canal procedure, and a crown is placed on the tooth to protect it. Alternatively, the affected tooth may be extracted. An incision into the swollen gum may also drain the abscess. Prescribed antibiotics keep the infection from spreading, and pain relievers help relieve pain.
Dental Abscess prevention
Brush your teeth twice a day. Eat a balanced diet low in sugar. Use dental floss to clean between teeth. Visit the dentist for regular check-ups. Use fluoridated drinking water. |
Modern thin concrete shells, which began to appear in Europe in the 1920s, are made from steel-reinforced concrete of uniform thickness as thin as 2”-4” depending on the span. In many cases there were no supplementary ribs or additional structure; the thin slab or shell performed the major structural tasks in the building. Modern shells were first introduced by architects and engineers such as Eugène Freyssinet (1879-1962), Bernard Laffaille (1900-1955), Pier Luigi Nervi (1891-1979), Eduardo Torroja (1899-1961), and Félix Candela (1910-1997), among others. The strongest form of shell is the monolithic shell, which is cast as a single unit. The most common monolithic form is the dome, but ellipsoids and cylinders and variations thereof are also possible.
The design and construction of shell structures remained a trend through the 1960s. However, this type of structural and architectural design declined due to the high costs of labour and concrete and the cost of the complex, project-specific formwork. The shells also require a reasonably high level of maintenance to prevent leaks and other construction pathologies, since the exposed concrete also serves as the roof and primary moisture barrier. Since the 1980s, preference has shifted toward polygonal shapes and tensile (stretched) structures. |
CHICXULUB IMPACT - NO MASS EXTINCTION
The Chicxulub impact, which left a crater about 180 km in diameter, is commonly believed to have caused the K-T mass extinction. In previous studies we have shown that this impact predates the K-T boundary by about 300,000 years. Here we evaluate the biotic effects of the Chicxulub impact in NE Mexico, ~600 km from the impact crater on Yucatan, and in Texas along the Brazos River, about 1000 km from the impact crater.
Figure 1. Locations of the studied K-T sequences containing Chicxulub impact ejecta.
In each of these localities we evaluated the planktic foraminiferal assemblages above and below the impact ejecta layer (impact glass spherules) in terms of species diversity and abundance changes in each species population. Samples were analyzed at 10-20 cm intervals based on quantitative analyses of large (>150µ) and small (63-150µ) size fractions. Bulk and clay mineralogy, stable isotopes and platinum group elements were also analyzed. Stable isotope data is not useful because the original signals are obliterated in these diagenetically altered sediments. Platinum group elements (Ir, Pd, Pt) show no changes across the spherule layer.
In NE Mexico, sediments at the El Peñon section were deposited at >500 m depth in an upper slope environment. In this region, a sedimentary influx from continental erosion was high due to the rising mountains of the Sierra Madre Oriental and the sediments were funneled across the continental shelf and down the slope via submarine canyons. At this locality the Chicxulub impact is represented by a nearly 2 m thick spherule layer interbedded in undisturbed marls 4 m below the 8 m thick sandstone complex that infills a submarine canyon below the K-T boundary. Reworked impact spherules are present at the base of this submarine canyon fill.
In earlier studies the sandstone complex that infills the submarine canyons was interpreted as the result of a mega-tsunami generated by the Chicxulub impact. In this scenario sediment deposition occurred within hours to days of the impact. However, burrows are present through much of the canyon deposit, and a limestone layer with burrows in-filled with spherules separates two spherule layers (Fig. 2). Limestone takes thousands of years to accumulate, and invertebrates repeatedly established colonies on the ocean floor during deposition of these sediments. This means deposition of the submarine canyon sandstone complex occurred over a very long time and could not have been due to an impact-tsunami.
Figure 2. Litholog of the El Peñon section showing the sandstone complex with two reworked spherule layers at the base, and the original Chicxulub impact deposit in late Maastrichtian sediments over 4 m below.
Within the nearly 2 m thick spherule deposit more than 4 m below the submarine canyon sandstone deposit, there are four upward-fining spherule layers that suggest wave action and suspension settling. At the base of each unit spherules are densely packed and compressed, or partly welded in a calcite matrix (Fig. 3). No detritus or foraminifera are present. These features suggest rapid settling after the Chicxulub impact. The absence of detritus indicates that these sediments were not reworked and transported from shallow waters, unlike the reworked spherule layers at the base of the sandstone complex. This nearly 2 m thick spherule unit therefore may well represent the time of the Chicxulub impact and the immediate rapid settling and deposition of the ejecta fallout.
Figure 3. A-C: reworked Chicxulub impact spherules from the base of the sandstone complex (see Fig. 2). These spherules are in a matrix of detrital grains, reworked shallow water debris and foraminifera. D-P: Chicxulub impact spherules from the 1.8 m thick spherules unit of the late Maastrichtian more than 4 m below the sandstone complex. These spherules are in a matrix of calcite cement and show no signs of reworked shallow water debris. Abundant rounded (D-F), elongate and compressed spherules (G-K) with concave-convex contacts (L, M) and vesicular glass (N-P) are characteristic of the Chicxulub spherule ejecta layer. The cement matrix and absence of clastic grains indicate that no reworked component is present. The compressed and welded glass indicates that deposition occurred rapidly while the glass was still hot. Spherules range in size from 2-5 mm.
The latest Maastrichtian is identified by the presence of Plummerita hantkeninoides, a species that evolved in magnetochron C29r about 300,000 years before the K-T boundary and became a casualty of the mass extinction (Pardo et al., 1996). The presence or absence of this species in middle and low latitudes is a very reliable indicator for evaluating the continuity and completeness of the sedimentation record. At El Peñon, P. hantkeninoides first appears 8.25 m below the unconformity at the base of the submarine canyon sandstone complex, and 1.25 m below the base of the nearly 2 m thick Chicxulub spherule deposit (Fig. 2). This constrains the age of the Chicxulub impact to the late Maastrichtian, predating the mass extinction by about 300,000 years (Keller et al., 2003). No other species evolved during this interval, and while a number of environmentally sensitive species disappeared or became very rare, none can be reliably shown to be extinct globally. The same age was determined for the Chicxulub impact layer at Brazos, Texas (Keller et al., 2007) and in the crater core Yaxcopoil-1 on Yucatan (Keller et al., 2004a,b).
Figure 4. High-resolution biostratigraphic scheme for the K-T transition.
Species richness, a census of the number of species present at any given time, and the relative abundance of individual species populations are two commonly used proxies to assess environmental changes. Both of these proxies were analyzed at El Peñon. Relative species abundances were analyzed in two size fractions in order to evaluate the response of the small (63-150µ) and large (>150µ) species. Large species comprise a very diverse group of generally complex, ornamented and highly specialized K-strategists that thrived in tropical and subtropical environments, but were intolerant of environmental changes and hence prone to extinction (Abramovich et al., 2003; Keller and Abramovich, in press). The biotic effects of the Chicxulub impact should thus be most apparent in the K-strategists. Small species are less diverse ecologic generalists, or r-strategists, generally tolerant of environmental perturbations, including variations in temperature, salinity, oxygen and nutrients (Keller and Abramovich, in press). Some of these species respond to environmental catastrophes with opportunistic blooms, as observed for Heterohelix and Guembelitria species.
A total of 52 species are present in the >150µ size fraction at El Peñon during the late Maastrichtian. Of these 75% (39 species) are K-strategists and 25% (13 species) are r-strategists (Fig. 5). Across the Chicxulub impact spherule layer species richness remains unchanged - the same species present below the spherule layer are also present above it. Not a single species went extinct.
About 2 m above the spherule layer species richness decreases to 42-44 species, rising only at the unconformity at the base of the sandstone complex, probably due to reworking. The variability in species richness is due to the rare and sporadic occurrences of 9 (K-strategy) species, or 17% of the total assemblage. Their increasingly sporadic occurrences may be the result of environmental changes and/or preservation.
The bulk of the species (83%) are continuously present. These data indicate that the decrease in species richness cannot be assigned to the biotic effects of the Chicxulub impact because (1) it occurs much later, (2) the species that are very rare and sporadically present are already endangered species below the spherule layer, and (3) all of these species are known to have survived to the K-T boundary elsewhere.
Figure 5. Species richness and relative abundances of specialized large species show no significant changes across the Chicxulub impact spherule layer. This means that no species went extinct as a result of the Chicxulub impact and no significant environmental changes are evident on the geological time scale.
Species richness in the smaller (63-150µ) size fraction totals 39 species, of which 64% (25 species) are K-strategists and 36% (14 species) are r-strategists (Fig. 6). Species richness remains unchanged across the impact spherule layer and throughout the section, with a low variability of 34-36 species, in contrast to the slight decrease in the larger size fraction (Fig. 5). The maximum number present (38 species) is observed at the unconformity at the base of the sandstone complex with the reworked spherule layer (similar to the >150µ size fraction) and is likely the result of reworking. Variability is due to five K-strategy species, which are rare and sporadically present.
Figure 6. Species richness and relative species abundances in the smaller non-specialized species show no significant variations and no extinctions. The Chicxulub impact appears to have had no catastrophic effect on the geological time scale.
Relative abundance changes in individual species populations are more sensitive indicators of environmental changes than the presence or absence of species. During the late Maastrichtian, K-strategy species in the >150µ size fraction show normal diversity and abundances. Nearly half of the K-strategists are common, with the assemblages dominated (10-20%) by Pseudoguembelina costulata, Rugoglobigerina rugosa and R. scotti (Fig. 5). Also common are pseudotextularids, other rugoglobigerinids and globotruncanids (e.g., arca, aegyptiaca, rosetta, orientalis, stuarti). Among r-strategists, the larger morphotypes of Heterohelix globulosa are common in this assemblage. Relative species abundance variations above and below the spherule layer are within normal fluctuations of the section, with no significant changes. The only significant abundance change occurs in the upper 2 m of the section, where H. globulosa decreases and Pseudotextularia deformis and Globotruncana stuarti increase. No specific biotic effects in K-strategists can be attributed to the Chicxulub impact.
Species abundances in the small size fraction are dominated by the small biserial r-strategist Heterohelix navarroensis, which varies between 40-50% across the spherule layer and decreases in the upper part to an average of 40% (Fig. 6). Other r-strategists vary between 5 and 15% and consist of small heterohelicids, globigerinellids, and hedbergellids. The disaster opportunist Guembelitria is a minor component (<5%). K-strategists are dominated by Pseudoguembelina costulata and P. costellifera. All other K-species are rare (<1%). The relative species abundance changes show no significant variations across the impact spherule layer, except for two species. Pseudoguembelina costellifera, a surface dweller, decreases 8% above the spherule layer, concurrent with a decrease in H. navarroensis, a low-oxygen-tolerant species. This abundance variation suggests a change in the watermass stratification, though whether this relatively minor biotic change was related to the Chicxulub impact is unclear.
If Chicxulub caused the K-T mass extinction, then the spherule ejecta should be found at the same stratigraphic layer as the mass extinction. This is not the case. The K-T boundary in NE Mexico is well represented and always above the sandstone complex, and thus up to 15 m above the spherule layer in the late Maastrichtian.
The best and most continuous K-T transitions can be found by laterally tracing the sandstone complex 50-150 m beyond the submarine canyons where only the topmost thin (10-25 cm) sandstone is present, such as at La Parida, La Sierrita, and El Mimbral (Keller et al., 1997). This is shown for La Sierrita and El Mimbral (Figs. 7, 8), where a thin clay and the characteristic K-T red layer are present with iridium concentrations of 0.3 and 0.8 ppb, respectively. This clay and red layer mark the basal Danian planktic foraminiferal zone P0 (Keller et al., 1994). Elevated Ir concentrations of 0.2-0.8 ppb at the K-T boundary were also reported from eight sections in NE Mexico (Stueben et al., 2005).
Figures 7 and 8. The K-T boundary red layer at El Mimbral (Fig. 7) and La Sierrita (Fig. 8) is enriched in Iridium and marks the mass extinction in planktic foraminifera. The K-T boundary and red layer can only be found in areas away from the submarine canyon deposits, as for example by tracing the top of the deposit laterally away from the canyons where normal sedimentation occurred.
Figure 9. The mass extinction at La Sierrita coincides with the Ir anomaly and the negative excursion in carbon isotopes.
At La Sierrita, the section was collected where a 5 cm thick calcareous sandy layer is the only representative of the submarine sandstone complex. Above it is a thin clay and a mm-thin red layer, which contains an Ir anomaly. The negative shift in carbon isotopes that defines the K-T boundary and the mass extinction of all tropical and subtropical foraminifera coincide with this clay layer. These characteristics mark the K-T boundary worldwide.
Figure 10. The La Parida K-T boundary transition shows a thin calcareous sandstone remnant of the submarine canyon sandstone complex and a 10 cm thick marl with late Maastrichtian assemblages above it, but below the mass extinction horizon. This 10 cm thick marl layer indicates that the sandstone complex of the submarine canyon was deposited prior to the K-T boundary.
At La Parida, the thin K-T clay layer is missing (Fig. 10). This section is interesting, however, for its 10 cm thick layer of late Maastrichtian marls with zone CF1 assemblages that overlies the remnant calcareous sand of the sandstone complex, but lies below the K-T extinction horizon. A thin late Maastrichtian marl layer overlying the sandstone complex was also observed in several other localities, including La Lajilla and El Mulato (Lopez-Oliva and Keller, 1996). This suggests that the sandstone complex predates the K-T boundary.
Danian grey shale conformably overlies this marl layer. Planktic foraminifera in the basal grey shale contain early Danian Parvularugoglobigerina eugubina zone P1a(1) assemblages (Fig. 6). The K-T boundary is thus marked by a short hiatus.
The mass extinction in planktic foraminifera has been documented in various sequences in Mexico (e.g., Keller et al., 1994, 1997; Lopez-Oliva and Keller, 1996; Stinnesbeck et al., 2002), and all show extinction and evolution patterns similar to La Parida (Fig. 10). From a maximum of about 52 species during the late Maastrichtian at the time of the Chicxulub impact, at least 86% (45 species) survived to the end of the Maastrichtian in Mexico. The 7 species missing at La Parida may be the result of local disappearances or of failure to record them due to their rare and sporadic occurrences. Another 7 species are rare and sporadically present. In the 1 m below the K-T boundary at La Parida, rare species account for 22% (10 species) of the assemblages.
At the K-T catastrophe 69% (31 species) went extinct, all of them specialized tropical and subtropical large, complex K-strategists. Ten of the species (22%) present are known to have survived the catastrophe for at least some time, all of them r-strategists, tolerant of environmental fluctuations (heterohelicids, hedbergellids, globigerinellids). One species, the disaster opportunist Guembelitria cretacea, thrived in the immediate aftermath of the catastrophe globally. The evolution of new species began almost immediately after the mass extinction; all new species were small, unornamented and with simple biserial, triserial or trochospiral chamber arrangements. This mass extinction pattern is characteristic in planktic foraminiferal assemblages throughout the Tethys, though species abundances may vary depending on regional conditions.
Planktic foraminifera, which suffered the most dramatic mass extinction at the K-T boundary with 2/3 of their species extinct, experienced no significant biotic effects as a result of the Chicxulub impact. No species went extinct and no species population decreased or increased significantly as a result of this large impact (Figs. 5, 6). This observation comes as a surprise mainly because we have assumed that the Chicxulub impact caused the K-T mass extinction by associating this impact with the K-T boundary. A survey of the impact crater and mass extinction records over the past 500 m.y. reveals that no impact crater can be associated with any mass extinction (review in Keller, 2005).
The Chicxulub crater, with a diameter between 150-180 km, is the largest impact crater known from the past 500 m.y. Other well-studied impacts that show no significant species extinctions or other biotic effects include the 90-100 km diameter late Eocene Chesapeake Bay and Popigai craters dated at 35.7±0.2 and 35.6±0.2 Ma (Keller et al., 1983; Montanari and Koeberl, 2000; Pusz et al., 2007), the late Triassic Manicouagan crater dated at 214±1 Ma, and the 100-120 km diameter late Devonian Alamo (382.8-385.3 Ma) and Woodleigh (359±4 Ma) impacts (review in Keller, 2005). When none of these large impacts (90-120 km diameter craters) caused significant biotic and environmental effects, it should not be surprising that the same is true for the Chicxulub impact, which was not much larger, with a crater of at most 180 km in diameter.
The Chicxulub impact and K-T mass extinction are thus two separate and unrelated events. What are likely alternative causes for the K-T mass extinction? The global Ir anomaly at the K-T boundary suggests another large impact, if the iridium is of extraterrestrial origin. But volcanism is another source for enhanced iridium. Recent studies suggest that the main phase (80%) of Deccan eruptions may have been very rapid and ended at the K-T mass extinction. These intriguing data call for a re-evaluation of the current K-T impact mass extinction theory.
1. Abramovich, S., Keller, G., Stueben, D., Berner, Z., 2003. Characterization of late Campanian and Maastrichtian planktonic foraminiferal depth habitats and vital activities based on stable isotopes. Palaeogeography, Palaeoclimatology, Palaeoecology 202, 1-29.
2. Keller, G., 2005. Impacts, volcanism and mass extinctions: random coincidence or cause
3. Keller, G., D'Hondt, S., Vallier, T.L., 1983. Multiple microtektite horizons in upper Eocene marine sediments: No evidence for mass extinctions. Science 221, 150-152.
4. Keller, G., Stinnesbeck, W., and Lopez-Oliva, J.G., 1994a. Age, deposition and biotic effects of the Cretaceous/Tertiary boundary event at Mimbral, NE Mexico. Palaios 9, 144-157.
5. Keller, G., Li, L., and MacLeod, N., 1995. The Cretaceous/Tertiary boundary stratotype section at El Kef, Tunisia: how catastrophic was the mass extinction? Palaeogeography, Palaeoclimatology, Palaeoecology 119, 221-254.
6. Keller, G., Lopez-Oliva, J.G., Stinnesbeck, W. and Adatte, T., 1997. Age, stratigraphy and deposition of near-K/T siliciclastic deposits in Mexico: relation to bolide impact? Geological Society of America Bulletin 109, 410-428.
7. Keller, G., Adatte, T., Stinnesbeck, W., Affolter, M., Schilli, L., and Lopez-Oliva, J.G., 2002. Multiple spherule layers in the late Maastrichtian of northeastern Mexico. Geological Society of America Special Paper 356, 145-161.
8. Keller, G. and Abramovich, S., 2008. Lilliput effect in late Maastrichtian planktic foraminifera: response to environmental stress. Palaeogeography, Palaeoclimatology, Palaeoecology, in press.
9. Keller, G., Stinnesbeck, W., Adatte, T. and Stueben, D., 2003. Multiple impacts across the Cretaceous-Tertiary boundary. Earth-Science Reviews 1283, 1-37.
10. Keller, G., Adatte, T., Stinnesbeck, W., Rebolledo-Vieyra, M., Urrutia-Fucugauchi, J., Kramar, G., and Stueben, D., 2004a. Chicxulub predates the K/T boundary mass extinction. Proceedings of the National Academy of Sciences 101, 3753-3758.
11. Keller, G., Adatte, T., Stinnesbeck, W., Stüben, D., Berner, Z., Harting, M., 2004b. More evidence that the Chicxulub impact predates the K/T mass extinction. Meteoritics & Planetary Science 39(7), 1127-1144.
12. Keller, G., Adatte, T., Berner, Z., Harting, M., Baum, G., Prauss, M., Tantawy, A.A. and Stueben, D., 2007. Chicxulub impact predates K-T boundary: new evidence from Brazos, Texas. Earth and Planetary Science Letters 255, 339-356.
13. Lopez-Oliva, J.G., and Keller, G., 1996. Age and stratigraphy of near-K/T boundary clastic deposits in NE Mexico. Geological Society of America Special Paper 307, 227-242.
14. Montanari, A., Koeberl, C., 2000. Impact stratigraphy. Lecture Notes in Earth Sciences 93, Springer, Heidelberg, Germany, 364 pp.
15. Pardo, A., Ortiz, N. and Keller, G., 1996. Latest Maastrichtian and K/T boundary foraminiferal turnover and environmental changes at Agost, Spain. In: MacLeod, N. and Keller, G. (eds.), The Cretaceous-Tertiary Mass Extinction: Biotic and Environmental Effects. Norton Press, New York, 157-191.
16. Pusz, A.E., Miller, K.G., Kent, D.V., Wright, J.D., Wade, B.S., 2007. Global effects of late Eocene impacts. AGU Joint Assembly, Acapulco, Mexico.
17. Stinnesbeck, W., Keller, G., Schulte, P., Stüben, D., Berner, Z., Kramar, U., Lopez-Oliva, J.G., 2002. The Cretaceous-Tertiary (K/T) boundary transition at Coxquihui, state of Veracruz, Mexico: evidence for an early Danian impact event? Am. J. S. Amer. Res. 15, 497-509.
18. Stüben, D., Kramar, U., Harting, M., Stinnesbeck, W., Keller, G., 2005. High-resolution geochemical record of Cretaceous-Tertiary boundary sections in Mexico: new constraints on the K/T and Chicxulub events. Geochimica et Cosmochimica Acta 69(10), 2559-2579. |
Graphene is attracting the attention of innovators around the world. Can this simple, lightweight, potentially inexpensive, renewable material change the world? We think so. And after you learn a bit about graphene, we think you’ll agree.
Graphene is a single layer of pure carbon atoms bonded together with sp2 bonds in a hexagonal lattice pattern. Stacked layers of graphene form graphite. Graphene, measuring one atom thick (0.345 nm), is the thinnest compound known to exist. In fact, it's effectively 2-dimensional. Before graphene was isolated, it was commonly believed that two-dimensional compounds could not exist because they would be too unstable, but the carbon-to-carbon bonds in graphene are short and strong and completely stable. While it's largely transparent, graphene, even at only one atom thick, can be seen with the naked eye.
Graphene has been studied theoretically for many years, but was first isolated in 2004 by physicists Andre Geim, Konstantin Novoselov, and other collaborators at the University of Manchester in the UK. Their initial question was: Can we make a transistor out of graphite? During their research, Geim and Novoselov extracted thin layers of graphite from a graphite crystal using Scotch tape, transferred these layers to a silicon substrate, and then attached electrodes and created a transistor. These researchers won the Nobel Prize for physics in 2010. Since this discovery, research into graphene around the world has exploded.
Everything about graphene is extraordinary.
- Thinnest. At one atom thick, it’s the thinnest material we can see.
- Lightest. One square meter of graphene weighs about 0.77 milligrams. For scale, one square meter of regular paper is 1000 times heavier than graphene, and a single sheet of graphene big enough to cover a football field would weigh only a few grams (see the quick arithmetic check after this list).
- Strongest. Graphene is stronger than steel and Kevlar, with a tensile strength of 150,000,000 psi.
- Stretchiest. Graphene has an amazing ability to retain its initial size after strain. Graphene sheets suspended over silicon dioxide cavities had spring constants in the region of 1-5 N/m and a Young's modulus of 0.5 TPa.
- Best Conductor of Heat. At room temperature, graphene's heat conductivity is (4.84±0.44) × 10³ to (5.30±0.48) × 10³ W·m⁻¹·K⁻¹.
- Best Conductor of Electricity. In graphene, each carbon atom is connected to three other carbon atoms on a two-dimensional plane, which leaves one electron free for electronic conduction. Recent studies have shown electron mobility at values of more than 15,000 cm²·V⁻¹·s⁻¹. Graphene moves electrons 10 times faster than silicon using less energy.
- Best Light Absorber. Graphene can absorb 2.3% of white light, which is remarkable because of its extreme thinness. This means that, once optical intensity reaches saturation fluence, saturable absorption takes place, which makes it possible to achieve full-band mode locking.
- Most Renewable. Statistically speaking, carbon is the fourth most abundant element in the entire universe (by mass). Because of this abundance, graphene could well be a sustainable, ecologically friendly solution for an increasingly complex world.
- Most Exceptional. What most captures the imagination is that graphene is one simple material that by itself possesses all these astonishing qualities. No other material in the world is the thinnest, strongest, lightest, and stretchiest, and can conduct heat and electricity super-fast, all at the same time.
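As a quick arithmetic check on the "Lightest" claim above, here is the football-field comparison worked out. The field dimensions (an American football field including end zones) are our assumption; the 0.77 mg per square meter figure comes from the list:

```python
# Mass of a one-atom-thick graphene sheet covering a football field.
areal_density_mg_per_m2 = 0.77   # figure quoted in the list above
field_area_m2 = 109.7 * 48.8     # ~5,353 m^2 incl. end zones (assumed)

sheet_mass_g = areal_density_mg_per_m2 * field_area_m2 / 1000
print(f"{sheet_mass_g:.1f} g")   # ~4.1 g for the entire field
```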
For graphene to successfully make the leap from the lab to the marketplace, production methods need refining. As mentioned earlier, graphene was initially isolated using Scotch tape. This method, called exfoliation, achieves single layers of graphene through multiple exfoliation steps, each producing a slice with fewer layers, repeated until only one layer, graphene, remains. Exfoliation remains the most effective way to isolate high-quality graphene in small amounts.

Researchers and engineers are developing alternative methods for isolating graphene that can be used to create mass quantities. One of the most promising methods is chemical vapor deposition (CVD), or epitaxy. In very simple terms, the CVD process involves placing an often reusable thin metal substrate into a furnace heated to extremely high temperatures (900 to 1000 °C). Decomposed methane gas that contains the necessary carbon and hydrogen is then introduced to the chamber, resulting in a reaction with the surface of the metal film substrate that leads to the formation of graphene. Copper, nickel, and cobalt substrates are commonly used, with varying results. The chamber is then cooled rapidly to prevent multiple graphene layers from forming.

While CVD graphene is promising, the results still vary widely for a number of reasons. First, the cooling conditions affect the growth behavior and quality of graphene deposits. Second, the quality of the metal substrate impacts the outcome of the graphene. And third, the quantity and quality of the reaction gasses also affect the graphene output. Precisely understanding and controlling each of these variables is critical to the success of CVD as a method for producing marketable quantities of graphene. For now, the question of how to produce large sheets of high-quality graphene efficiently and with consistent quality remains the biggest challenge facing mass-market adoption of graphene.
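To summarize the process variables the passage above identifies as critical, here is an illustrative sketch. The class, field names, and example values are our assumptions for organizing the stated parameters, not a validated recipe:

```python
from dataclasses import dataclass

@dataclass
class CVDRecipe:
    substrate: str         # copper, nickel, or cobalt per the text
    furnace_temp_c: float  # the text cites roughly 900-1000 C
    carbon_source: str     # decomposed methane supplies C and H
    rapid_cool: bool       # fast cooling limits multilayer growth

# One plausible configuration within the ranges given above.
recipe = CVDRecipe(substrate="copper", furnace_temp_c=1000.0,
                   carbon_source="methane", rapid_cool=True)
print(recipe)
```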
Challenges aside, graphene is an incredibly exciting compound with the very real potential to change the world. Initially, graphene will be used to improve the performance of existing applications, but graphene’s potential goes way beyond that. It will be used in conjunction with other emerging 2-D compounds to revolutionize the way we interact with the world.
- Electronics. Graphene conducts electricity faster than any other compound out there and is smaller and thinner as well, making it possible for all our electronics to get even smaller and faster than they are now. Graphene is also a transparent conductor, so it can replace fragile and expensive Indium-Tin-Oxide (ITO) in touch screens, light panels, and solar cells. It’s also flexible, which greatly expands the possibilities. Imagine a foldable television or windows in your home that are also projectors.
- Biological Engineering. Graphene’s large surface area, high conductivity rates, thinness, and great strength all make it perfect for a new class of fast and efficient bioelectric sensory devices for monitoring such things as DNA sequencing, glucose and hemoglobin levels, and cholesterol. Lightweight, flexible graphene-infused “rubber bands” can sense the smallest motions, such as breathing, pulse, and small movements, and make it possible to remotely monitor vulnerable patients such as premature babies. Graphene oxide also promises to revolutionize drug delivery. Studies have already explored the use of graphene oxide to deliver cancer treatments and anti-inflammatory drugs safely and precisely.
- Filtration. Graphene allows water to pass through it but is, at the same time, almost completely impervious to other liquids and gases. Because of its strength and the fineness of its pores, graphene can be used in water filtration systems, desalination systems, and biofuel manufacturing.
- Mixed Materials. Graphene can be used to produce anything that needs to be strong and light. Graphene is useful for airplanes, body armor, military vehicles, and anything else that needs strength with little weight. Its electrical conductivity opens up new possibilities as well. For example, the body of an aircraft made from graphene can resist damage from lightning and also communicate electronically any problems with the structure to the pilots. Concrete and other materials are also being developed that take advantage of the many exceptional properties of graphene.
- Batteries. Batteries that use graphene to store energy rather than traditional lithium-ion chemistry will be stronger, more stable and efficient, and will last longer. Electric cars, laptops, and other devices can be more durable, lightweight, and efficient with graphene-enhanced batteries.
Graphene is a powerful, versatile material. The isolation of this one amazing material has blown the possibilities of what we can achieve wide open. Previous limitations are gone and a whole universe of applications lie in front of us just waiting to be discovered. |
- Persuasion is clearly a sort of demonstration, since we are most fully persuaded when we consider a thing to have been demonstrated.
- Of the modes of persuasion furnished by the spoken word there are three kinds. [...] Persuasion is achieved by the speaker's personal character when the speech is so spoken as to make us think him credible. [...] Secondly, persuasion may come through the hearers, when the speech stirs their emotions. [...] Thirdly, persuasion is effected through the speech itself when we have proved a truth or an apparent truth by means of the persuasive arguments suitable to the case in question.
- Main article: Ethos#Rhetoric
Ethos (plural: ethe) is an appeal to the authority or honesty of the speaker. It is how well the speaker convinces the audience that he or she is qualified to speak on the particular subject. It can be done in many ways:
- By being a notable figure in the field in question, such as a college professor or an executive of a company whose business is that of the subject.
- By having a vested interest in a matter, such as the person being related to the subject in question.
- By using impressive logos to show the audience that the speaker is knowledgeable on the topic.
- By appealing to a person's ethics or character.
- Main article: Pathos
Pathos (plural: patha or pathea) is an appeal to the audience’s emotions. It can be in the form of metaphor, simile, a passionate delivery, or even a simple claim that a matter is unjust. Pathos can be particularly powerful if used well, but most speeches do not solely rely on pathos. Pathos is most effective when the author connects with an underlying value of the reader.
In addition, the speaker may use pathos to appeal to fear, in order to sway the audience.
- Main article: Logos#Use in rhetoric
Logos (plural: logoi) is logical appeal, or the simulation of it, and the term logic is derived from it. It is normally used to describe facts and figures that support the speaker's topic. A logos appeal also enhances ethos (see above) because information makes the speaker look knowledgeable and prepared to his or her audience. However, data can be overwhelming and thus confuse the audience. Logos can also be misleading or inaccurate. |
The calculation for determining the body mass index is: weight (in pounds) divided by height (in inches) divided by height (in inches), multiplied by 703. For example, the body mass index of a person who weighs 164 pounds and who is 5 feet 9 inches tall (69 inches) would be calculated as 164 ÷ 69 ÷ 69 × 703 = 24.2, which may be rounded to 24.
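As a sketch, the same calculation in Python; the function name is ours, while the 703 factor and the worked example come from the text:

```python
def bmi_us(weight_lb: float, height_in: float) -> float:
    """Body mass index from US customary units: weight in pounds
    divided by height in inches squared, scaled by 703."""
    return weight_lb / height_in / height_in * 703

# The worked example from the text: 164 lb, 5 ft 9 in (69 in).
print(round(bmi_us(164, 69), 1))  # 24.2
```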
A body mass index of less than 18.5 is below normal: People with a BMI below this level are considered underweight. A BMI slightly below 18.5 is not necessarily unhealthy unless the reason for it is because of malnutrition, an eating disorder, or other underlying or unknown health issues. A BMI of 16 or less is considered starvation: People with a BMI this low may need medical intervention to help increase their weight to a healthier level.
A body mass index between 18.5 and 24.99 is considered normal; however, it is important for people with a BMI near the upper limit of the normal range to be extra vigilant in following a healthy lifestyle to ensure that their BMI does not increase. Once the BMI rises above 24.99, specifically between 25 and 29.99, a person is considered overweight. For a person with a BMI at the low end of this range, a few minor changes in diet and exercise may lower their BMI into the normal range. People at the high end of this range are at risk of passing over the threshold into obesity unless they make immediate changes in their lifestyle. People with a BMI in the overweight range carry a higher risk of developing health problems than people with a BMI in the normal range.
People with a body mass index between 30 and 39.99 are considered obese. Significant changes in lifestyle may be necessary to reduce the risk of weight-related health problems. A BMI above 40 indicates that a person is morbidly obese. People with a BMI this high live with an extreme risk of developing a number of severe health problems. Some studies have indicated that many morbidly obese people may be predisposed to extreme weight gain: genetics and brain chemistry imbalances may play a part in this. Whatever the cause, genetic or otherwise, life expectancy is greatly reduced in most cases.
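A small classifier summarizing the bands described above may help; the thresholds follow the text (with the obese band read as 30 to 39.99), and the function name is ours:

```python
def bmi_category(bmi: float) -> str:
    """Map a BMI value to the bands described in the text.
    A general guide only, not a diagnostic tool."""
    if bmi < 16:
        return "starvation"
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    if bmi < 40:
        return "obese"
    return "morbidly obese"

print(bmi_category(24.2))  # "normal"
```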
It is widely accepted that as a person's body mass index increases above normal levels, the risk of health problems related to obesity increases as a result; however, a number of variables must be taken into account when determining the possible risk. Age, body shape, the ratio of fat to muscle, pregnancy, and an active or sedentary lifestyle are among the important considerations when making an accurate assessment of BMI and how it relates to overall health. The body mass index should be used only as a general guide for assessing possible health risks associated with obesity: It is by no means a definitive tool. Body mass index is, however, becoming increasingly utilized to gauge progress in weight management rather than conventional "ideal weight" charts. |
Today’s Google Doodle celebrates the birthday of Nicolaus Copernicus, the Polish Renaissance man who first floated the theory that the sun, not the Earth, was the center of the universe. According to the Stanford Encyclopedia of Philosophy, “sometime between 1510 and 1514 [Copernicus] wrote an essay that has come to be known as the ‘Commentariolus,’ [which] introduced his new cosmological idea, the heliocentric universe.” The piece also included seven now-popular axioms such as, “the center of the universe is near the sun” and “the distance from the Earth to the sun is imperceptible compared with the distance to the stars.” The theory was published in 1543, shortly before he died.
The Google Doodle, then, naturally has the “O”s in Google representing planets circling around the sun. When you click anywhere on the soundless Doodle, Google takes you to the search results for “Nicolaus Copernicus.”
On what would have been his 540th birthday, Copernicus beat out other notable Feb. 19 achievements, such as Thomas Edison receiving a patent for his phonograph (1878) and the publication of Betty Friedan’s The Feminine Mystique (1963).
Foreign powers have long been interested in Bolivia because of its enormous wealth of natural resources. Over several hundred years, while the participants have changed, certain patterns of engagement have persisted. Most outsiders have hoped to reap as much benefit as possible, employing strategies that create and/or reward a small group at the top of Bolivia's pyramid society. Sustained contact has resulted in the perpetuation of this skewed internal structure. Outsiders have imposed their culture and values. The unintended consequence of an unchallenged elite has been the persistence of a distinct indigenous people that comprise close to 60% of the population. For most of Bolivia's history, control of the purse has led to control of the government, and often interest in the country extends only as far as it aids the economic ambitions of others.
The Incas spread south and inland from Peru (Figure 15) and (Figure 24) in the fifteenth century. They encountered distinct tribal groups that had lived in the area for centuries. Archaeological remains provide evidence of settled villages in modern-day Bolivia as far back as 1000 BCE. A society with religion, government and cities existed at that time. Copper remnants date back a thousand years earlier. Successive cultures began to extend influence over wider areas. The Chavin civilization began around 800 BCE. It used textiles and gold, displayed advanced pottery techniques, and had major religious centers. No evidence of this culture exists after 100 BCE, when tribes from the area of Lake Titicaca (Figure 15) began to dominate the region. The Tiahuanaco domination of the area began in 600 CE and lasted for 600 years. By 1200 CE, regional states and empires, including the Aymara and Quechua tribes, controlled various portions of the area. Speakers of both languages comprise the dominant indigenous groups in Bolivia today.
The Incas benefited from indigenous tribal practices of collective farming, Ayllu, and an integrated economic society based on extreme geographic differences. These two factors facilitated their administration of the area, to which they added a forced labor system, Mita, under which all able-bodied citizens had to contribute a certain number of days of work each year.
The Incas efficiently administered an area that stretched 2500 miles along the western coast of South America (Figure 24). They had an elaborate system of governance with an efficient central bureaucracy. They employed the Ayllu system throughout their empire, and a chain of command that allowed for local autonomy as long as tribute was paid and labor was provided. The empire was connected via 14,000 miles of road. They established a command economy where central authority determined production amounts and distribution. Two thirds of the agricultural yield was turned over to the empire yet those under Incan rule could expect the state to provide for them. Incan engineers created a water management system that provided sufficient resources for the dry season. Religion was central and integral to maintaining the power of the state, and wisely did not contradict the tenets of local practices. Although the Incas used force when necessary, the years of their control also saw a flowering in the arts.
The primary interest of the Europeans in Bolivia was its ample supply of precious metals. The Spanish, who arrived in the 1520s and ruled for the next 300 years, were not interested in settling the area and only a total of 250,000 Spaniards arrived during their first hundred years of rule. As part of Spain's mercantile economy (Figure 7) and (Figure 8), Bolivia existed only to provide these raw materials. The goal was to gain as much as possible while doing only what was necessary to reap the desired benefits. While this was the Crown's intention, other factors did shape the experience. The arrival of the Catholic Church had a large impact on the development of life in the Western Hemisphere.
Culture did not flow in one direction only and the Spanish gained many things from the experience that they had not intended, including farming techniques and crops. These new lands presented an opportunity for many ambitious young Spaniards to make a fortune. The largely male emigration from Europe resulted in racial mixing. The Spanish also needed the indigenous population to work in the mines. Throughout North and South America, the Europeans hoped that the native born population would provide sufficient labor to meet their needs, but eventually they instituted a transatlantic slave trade in order to develop and sustain their work force (Figure 10).
Spanish success in Bolivia and the rest of South America brought more than material wealth. For two centuries, Spain dominated international trade and helped solidify the centrality of Europe in world affairs. As time passed, the Spanish method of mercantilism grew weak in contrast to the strong market economies developed by England and France. As Spain's power waned throughout the eighteenth century, British influence increased in Bolivia. Although the English would not directly control the area, they benefited enormously from gaining access to these now unprotected markets.
Formal Spanish involvement ended in the beginning of the nineteenth century. The growing power of the United States increasingly influenced Latin America (Figure 6). The motivations of this northern neighbor were complex, a combination of economics, altruism, and global strategy. At first, the United States supported the Latin American countries in their movements for independence, proud to have served as a model of freedom from an oppressive monarchy. President Monroe issued his famous doctrine in 1823, asserting that the Western Hemisphere should be free from colonial power. Powerful support for this sentiment came from the British who welcomed the opportunity to now trade in the markets opened by the end of Spanish rule.
As the nineteenth century wore on, the United States continued its watchful concern. South America proved to be a good place to invest American capital and sell American products. Industrialization brought a demand for raw materials such as tin and petroleum that are plentiful in Bolivia. American foreign policy couched these economic interests in lofty sentiments of uplift and opportunity, rhetoric that persists today.
In the twentieth century, the US became involved in South America in more direct ways. The countries of Latin America were weak militarily and diplomatically, and their northern neighbor increasingly exercised its control. At the beginning of the century, the United States claimed that these areas were part of its Sphere of Influence. In the 1930s, President Franklin Roosevelt promised a Good Neighbor Policy and in the 1960s, President John Kennedy stressed an Alliance for Progress. US actions and attitudes combined virtue and self-interest towards its southern neighbors. There was no competition for influence in the area until after World War Two. American companies had invested a great deal in Latin America and the US government protected those interests.
The United States had strong economic ties with Bolivia. Americans consumed its raw materials and sold manufactured goods in its domestic markets. American companies invested directly in the Bolivian economy. Standard Oil was integral in developing the oil industry there (Figure 21a). The United States also provided direct economic aid. As Bolivia suffered during much of the last 50 years, the United States stepped in to help, whether by providing direct aid or loan supports. Successive American leaders turned a blind eye to Bolivia's governments as long as they did not disturb US economic interests: President Eisenhower accepted claims that the populist governments of the 1950s were not pro-communist despite their strongly socialist policies. Likewise, the United States supported military rulers who ignored human rights in the 1960s and 1970s, until inflation and debt crippled the economy in the 1980s. At that point, the United States insisted on a return to a democratic government and a market economy before it would endorse loans from the World Bank and the International Monetary Fund.
Although Bolivia complied with these mandates, its economy was not fully self-supporting and in recent years has been subjected to mass protests by those who reject both US policies and assistance. The US continues to provide $100 million annually to Bolivia to pay for everything from infrastructure development to social services, yet symbolically America is considered the enemy. Regional allies, such as Venezuela, are preferred as its assistance is not tainted with markers of cultural imperialism (Figure 26).
Bolivia is no longer in the era of the Incas or the Conquistadors or even the mining barons. Its people will not accept another round of exporting raw material that benefits the elite only. At the same time, Bolivia's newly elected government appreciates that foreign investment and expertise are important contributions to the development of the recent discoveries of natural gas in Bolivia. In the 1990s, Brazilian companies took a leading role in developing this industry in Bolivia.
Most of the natural gas is located in the eastern portion of the country (Figure 24), whose residents are increasingly agitating for more autonomy from the central government. Many in western Bolivia note the irony that these same individuals were not calling for separation when the nation's wealth came from tin mines in the west. Newly elected President Morales has taken strong steps to guarantee that all Bolivians gain wealth from the natural gas resource. About six months into his administration, in May 2006, he took steps to nationalize the natural gas industry. These actions did not include wholesale confiscation. In the 1950s, when the Bolivian government assumed complete control over the tin and petroleum industries, those takeovers did not reduce poverty for the majority of Bolivians, nor did the government know how to run the industries. Morales therefore hoped to gain control of the natural gas industry without losing the expertise of the foreign companies. He gave these companies 180 days to propose revenue-sharing plans that would allow them and their experts to stay. The current arrangement is that 50-80% of the profits would be given to Bolivia's state-owned energy company.
This move reflects far more than just economic gain. It speaks to a new and much more comprehensive concept of Bolivian nationalism. While calls for national unity have been invoked in the past, they have been abandoned quickly once mass support was no longer necessary. As the country's first indigenous president, Morales hopes actions such as profit-sharing with foreign companies will promote pride as well as riches to benefit the poor of his country. One component of this new nationalism is a lessening of dependence on former allies, particularly the United States. In general, the US has seen its influence in Latin America wane, in part because of its own policies and in part because of the rise of Hugo Chavez in Venezuela who promotes an aggressive anti-American stance (Figure 26). Chavez provides Bolivia with much more than rhetoric. He uses his country's vast oil wealth to support his allies.
Bolivia's relationship with the other nations of Latin America has a long and complex history. Initially, much of South America was under Spanish control (Figure 4). With independence in the early 19th century came many opportunities for cooperation and conflict for each of these new countries. The first issue that had to be resolved was boundary disputes. Even the creation of Bolivia as a distinct country was subject to debate, and there were moments when it looked as if it might be divided among neighboring countries such as Peru, Chile and Brazil (Figure 11). Even with its sovereignty established, disputes over borders persist to this day. Each nation erected protective tariffs, retarding the economic development necessary for growth and stability. Likewise, common concerns such as transportation suffered in the wake of national divisions. All of these ongoing issues continue to undermine Bolivia's economic growth and political stability.
At various times, Bolivia's leaders made attempts to combat these problems, but their efforts often complicated the issues. In the 1870s, the government hoped to exact taxes from the many Chileans who lived in Bolivia's coastal region (Figure 14). When these revenues were not forthcoming, Bolivia declared war on Chile. The defeat led to Bolivia's loss of its access to the sea, a severe blow to its ability to conduct international trade (Figure 16). Likewise in the 1930s, a border dispute with Paraguay over the Chaco region (Figure 18) led to full-scale war in which Bolivia lost this oil-rich region (Figure 21a). Current President Morales has garnered much support for his efforts to try to reclaim access to the Pacific.
The nations of South America (Figure 26) currently share economic and diplomatic concerns yet also exist in competition with one another. They do support each other's economies when it is for their own benefit. Brazil has invested heavily in Bolivia's natural gas industry. Efforts to see common interests are evident in organizations such as MERCOSUR, established in 1991 as a South American common market. Recent tensions have developed concerning the future direction of this group. Brazil and Argentina promote economic integration, open markets and free trade. Venezuela, with Bolivia's backing, wants to pursue a more anti-imperialist (read anti-American) and socialist agenda. Likewise, this body has also promoted "Democracy", not allowing full voting membership to nations with military rulers. Venezuela's president, Hugo Chavez, has been given a virtual carte blanche by his government, and does not see internal politics as an obstacle to full participation in the regional body.
Venezuela looms large in Bolivian affairs. Despite all of Chavez's anti-American rhetoric, he still recognizes the importance of the United States to his country's economic success. The U.S. continues to be Venezuela's most important trading partner and the largest consumer of its goods. Trade between the two countries has risen 36% in the last year. This reality has not stopped Chavez from very visibly supporting Bolivia in its efforts to minimize the influence of the United States. Venezuela supports Bolivia's burgeoning natural gas industry, and Chavez has promised to provide direct assistance if Morales' threats to nationalize this industry result in foreign companies leaving Bolivia. He has also supported Bolivia's right to set prices for natural gas. Chavez has sent troops into the western portion of Bolivia to help subdue Bolivians who resent some of Morales' more left-wing policies. Venezuela provides only a fraction of Bolivia's foreign aid but receives the public accolades and appreciation denied to the United States, which has invested much more in the country.
- What has Bolivia gained or lost from its connection to other nations?
- How has Bolivia affected these foreign powers as well?
- Why is there resentment towards the United States despite the vast amount of financial assistance it has provided?
- How have changes within Bolivia in the last decade affected its relationship to other nations?
- How did foreign conquest reinforce internal divisions within Bolivia?
- How do centuries of foreign exploitation explain the current attitudes among South American countries?
- Forero, Juan. "Leaders Discuss Bolivian Energy Takeover." The New York Times. May 5, 2006.
- ______. "U.S. Aid Can't Win Bolivia's Love as New Suitors Emerge." The New York Times. May 14, 2006.
- Klein, Herbert S. A Concise History of Bolivia. Cambridge: Cambridge University Press, 2006.
- Library of Congress Website: A Country Study: Bolivia http://lcweb2.loc.gov/frd/cs/botoc.html
- Rohter, Larry. "Venezuela Wants Trade Group to Embrace Anti-Imperialism." The New York Times. January 19, 2007.
- Surowiecki, James. "Synergy with the Devil." The New Yorker. January 8, 2007. |
At a Glance
- For the first time, astronauts onboard the ISS used CRISPR-Cas9 technology to edit DNA in space.
- The student-led experiment, awarded through the Genes in Space competition, used CRISPR-Cas9 gene editing technology to create targeted breaks in the yeast genome that imitate damage to DNA caused by radiation.
- Results from the experiment may inform our understanding of DNA repair mechanisms and may lead to improvements in current methods to protect astronauts against cosmic radiation during space travel.
Genes in Space student investigators made history this week when their experiment used CRISPR-Cas9 technology to edit DNA on the International Space Station (ISS). The experiment, designed to provide meaningful insight into how DNA repairs itself after damage incurred through cosmic radiation, is the first use of this specific gene-editing technique in space.
CRISPR (clustered regularly interspaced short palindromic repeats) holds the potential to combat a variety of global medical and environmental issues. Its precise gene-editing capabilities have been used in animal models to correct the genetic mutations responsible for cystic fibrosis and Duchenne muscular dystrophy. The technology is also used to address global food shortages by genetically modifying crops to stay fresh longer during long-distance transport. While the technology is still most widely used in fundamental biology research, its potential for therapeutic use is broad, and human trials for some uses are underway. CRISPR has even been suggested for use in animals—to help save endangered species or as a method to modify organs for more successful xenotransplantation, which could ameliorate the organ shortage crisis.
Student researchers David Li, Aarthi Vijayakumar, Rebecca Li, and Michelle Sung designed the groundbreaking experiment and co-led the effort, winning the opportunity to conduct research on the ISS National Lab through the Genes in Space competition.
The Genes in Space program, founded by Boeing and miniPCR Bio™ and supported through the ISS National Lab, holds a free annual competition for students in grades 7 through 12 to propose pioneering DNA experiments that use the unique environment of the ISS. The winning proposals are developed into flight projects that are launched to the space station. The Genes in Space program is part of the Space Station Explorers consortium, a growing community of ISS National Lab partner organizations working to leverage the unique platform of the ISS to provide valuable educational experiences.
For the experiment, ISS crew members used CRISPR-Cas9 technology to make targeted breaks in the yeast genome that simulate DNA damage akin to damage caused by radiation exposure in space. The crew members then used a miniPCR™ (polymerase chain reaction) machine to make copies of the DNA and the MinION sequencing technology on the ISS to read the DNA. The results provide information on changes in the molecular structure of the yeast genome due to the damage imposed by CRISPR, as well as any genetic errors introduced as the DNA attempts to repair the damage.
This first experiment using CRISPR technology for gene editing is an exciting step forward in understanding the mechanics of DNA damage and repair in space. In addition, the entire experimental process took place on station—the DNA damage and repair as well as the sequencing to study resulting molecular changes—setting the stage for future DNA experiments that can be conducted on the ISS to expand our understanding of genetics in space.
As the future of space exploration evolves, results from this study may lead to improved radiation protection for astronauts during long-term spaceflight missions. Long-term space travel and colonization of other planets would expose astronauts to the harsh environment of space for extended periods of time. Life support systems and protective gear will be necessary to shield the human body from long-term radiation exposure and subsequent genetic damage. Knowledge gained through this experiment may offer researchers crucial information about how we might safely navigate future space-based activity.
The use of CRISPR in space is just the latest in a trend of advancing genetics tools in space. In 2016, Kate Rubins sequenced DNA in space for the first time, and the inaugural winner of Genes in Space performed PCR in space for the first time. In September, RNA was sequenced for the first time in space, and now with CRISPR, we have hit another milestone that brings us closer to cutting-edge terrestrial genomics.
Print these language worksheets on interesting topics to improve your English. These English worksheets will help children practice and learn more of the basic key skills of English. Below you’ll find a variety of free printable English language worksheets for home and school use. This resource of printable worksheets is great for teaching English in a simpler way.
Language worksheets will assist in mastering English fluency. Proficiency in English is essential for academic success. Kids who learn English as a second language (ESL) can benefit from lessons and activities across a variety of exercises. Here you’ll find downloadable English worksheets for printing, so your kids can keep learning through these exercises. Each English printable is related to a different stage, level, or grade.
Worksheets are a very important part of learning English. Make your lesson planning easier with our range of free English worksheets for kids. These worksheets include printable resources for ESL kids to learn and teach English vocabulary and grammar.
Free printable worksheets and lesson plans for busy teachers. Learn English with our huge collection of worksheets. Kids can now have fun and learn English at the same time. Improve your vocabulary, spelling and reading skills with our printable English worksheets.
Solving the longitude puzzle
On the open seas, sailors use the sky to pinpoint their latitude (how far north or south they are) and longitude (how far east or west they are).
Navigators in the 1700s could work out their latitude by measuring the height of a heavenly body above the horizon — the sun by day and the stars at night. Longitude was much harder to establish. Most sailors guessed their longitude using 'dead reckoning', which involved estimating how fast and how long they had travelled since their last known position. These estimates were often wrong and ships were frequently wrecked or ran out of supplies before reaching port.
In 1714 the British Government offered a large cash prize for a simple and practical way of establishing a ship's exact longitude. British voyages into the Pacific during the 1700s tested many of the methods proposed — including lunar observations and the marine chronometer.
Right: British navigators in the 1700s used the Prime Meridian line at the Royal Observatory in Greenwich, England, as the zero longitude. Courtesy: National Maritime Museum, London.
What are latitude and longitude?
Any position on earth can be pinpointed using a north–south coordinate (latitude), measured in degrees, minutes and seconds from the Equator, and an east–west coordinate (longitude), measured from an agreed starting point at Greenwich, England (Prime Meridian). The National Museum of Australia's location, for example, is at 35°17'35"South and 149°07'15"East.
There is a direct connection between longitude and time. The earth rotates through 360 degrees each day, or 15 degrees an hour. If a navigator knows the time at a fixed reference location and the exact local time where he or she is, the difference between the two times, converted at 15 degrees per hour, equals his or her longitude.
This diagram of the globe shows the lines of longitude and latitude.
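The 15-degrees-per-hour relationship above is easy to sketch in code. The helpers below are a simplified illustration; they ignore the equation of time and the other corrections real navigators apply:

```python
def longitude_from_time(local_hours, greenwich_hours):
    """Longitude in degrees from a local/Greenwich time difference.

    The earth rotates 15 degrees per hour; positive means east of Greenwich.
    """
    return (local_hours - greenwich_hours) * 15.0

def dms_to_decimal(degrees, minutes, seconds):
    """Convert degrees/minutes/seconds to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

# A ship whose local noon arrives when it is 5 pm in Greenwich lies 75 degrees west.
print(longitude_from_time(12, 17))           # -75.0
print(round(dms_to_decimal(149, 7, 15), 4))  # 149.1208 (the Museum's longitude)
```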
Today, there is a shift occurring in navigation and mapping as significant as that seen in Cook's time. The sextant, almanac, chronometer and calculations used by mariners for centuries are being replaced by radio navigation or global navigation satellite systems (such as the Global Positioning Systems [GPS]), which use microwave signals transmitted by satellites to establish exact locations on earth.
A Global Positioning System (GPS) satellite on public display at the San Diego Aerospace Museum. At least 24 of these satellites orbit the earth transmitting signals to GPS receivers. Photo: Scott Ehardt.
Teachers can capitalise on the opportunities to teach handwriting through the writing practices of modelled, shared, interactive, guided and independent writing. Improvement in handwriting speed and accuracy does impact the ability of students to generate written text.
For the Victorian modern cursive handwriting script, this means letters can be taught in the following groups: anticlockwise letters (a, c, d, g, q, e, o, f, s); clockwise letters (m, n, r, x, z, h, k, p); and the i family letters (i …).
Explicit handwriting instruction in your classroom will give your students a jump start on communication success.
Workbook design: a clean, simple and intuitive approach to workbooks invites personalization and creativity and fosters handwriting success.
Ensure that handwriting practice does not take the place of writing itself.
In this lesson, your tutor will help you go over this topic: taking the bus. First, go over the following vocabulary and expressions with your tutor. Read the word/expression and definition out loud, and your tutor will go over anything you do not understand. Practice creating a sentence or two to make sure you know how to use the word/expression properly.
| Word/Expression | Definition | Example |
|---|---|---|
| bus station | (n) a place where buses start and end their routes; people also pay for tickets | The bus station is always full of people. |
| bus stop | (n) a place where buses stop to let people on and off | The closest bus stop is down the street from my house. |
| bus driver | (n) the person who drives and operates the bus | The bus driver drives safely. |
| bus fare | (n) the money you pay to take a bus | The bus fare is $3.00 one-way. |
| bus route | (n) the direction or way the bus goes | We need to take the bus route going north to 5th Avenue. |
| get on | (v. phrase) to enter a bus, airplane, or train | We need to get on the bus now. |
| get off | (v. phrase) to exit a bus, airplane, or train | At the next bus stop, we will get off. |
| seat number | (n) the number given to a seat; sometimes combined with letters | My seat number is 24B. |
| transfer | (v) to go from one bus to another bus | We have to transfer buses at the next stop. |
| coach | (n) a type of bus that goes long distances | The coach is going from London to Paris. |
Use the following questions as a guideline to form an interesting conversation with your tutor. Feel free to diverge from these suggestions if anything interesting comes up.
- Do you take the bus? Why or why not?
- Why do people take the bus?
- Are the bus systems in your country good or bad? Why?
- Do you think more people should take the bus? Why or why not?
- Have you ever taken a long bus trip? If yes, tell me about it. If no, do you want to? Why?
- Would you like to be on a bus that is full of people? Why or why not?
- Do you think bus stations are clean or dirty?
- What are some other examples of public transportation?
- Do you have a metro or subway in your country?
- Do a lot of people take taxis in your country?
A controversial paper presented at the prestigious science conference, the 102nd Indian Science Congress in Mumbai, claims human aviation and advanced space flight was achieved and mastered by the ancient Indians, thousands of years before the Wright brothers in 1903.
The paper, presented by Captain Anand Bodas and Ameya Jadhav within a session titled Ancient Sciences through Sanskrit, details that in Vedic texts from 7,000 years ago, airplanes are described as being able to fly backwards and side-to-side. They could also shuttle between countries, continents and even planets.
Captain Anand J Bodas draws upon the ancient Vedas for evidence of aviation technology
“There is official history and unofficial history,” said Captain Bodas, according to The National. “Official history only noted that the Wright Brothers flew the first plane in 1903,” but the inventor of the airplane was really a sage named Bharadwaja, who lived around 7,000 years ago. “The ancient planes had 40 small engines.”
Painting of Bharadwaja, said to be one of the greatest Hindu sages. Public Domain
The Vedas are a large collection of Sanskrit texts originating in ancient India and constitute the oldest layer of Sanskrit literature and the oldest scriptures of Hinduism. Some of the collection, such as the Samhitas, are known to date back to at least 1,700 B.C., although it is believed that many go back much further.
An illustration of the Shakuna Vimana that is supposed to fly like a bird with hinged wings and tail. Public Domain
The subject of ‘flying machines’ has long been popular among ancient astronaut theorists, who argue that some extracts are evidence of extra-terrestrial visitations:
“The Pushpaka (flowery vimana) chariot that resembles the Sun and belongs to my brother was brought by the powerful Ravana; that aerial and excellent chariot going everywhere at will… that chariot resembling a bright cloud in the sky… and the King (Rama) got in, and the excellent chariot at the command of the Raghira, rose up into the higher atmosphere.” (Ramayana)
However, Captain Bodas said that ancient Indians invented the technology and that it was later forgotten because of the passage of time, foreign rulers and things being stolen from the country.
The Times of India reports that the paper, presented at the conference which included six Nobel laureates and award winning academics and scientists in its roster, has been met with skepticism, claims of “pseudo-science,” and the argument that the theory undermines empirical evidence by citing ancient religious texts.
The Indian Science Congress Association (ISCA) is a premier scientific organization of India, with more than 30,000 scientist members. The ISCA’s mandate is to publish journals, hold conferences and advance and promote the cause of science.
Valedictory Session of the 100th Indian Science Congress in Kolkata (Wikimedia Commons)
NASA scientist Dr. Ram Prasad Gandhiraman started an online petition before the conference was held to oppose certain lectures which were thought to advance a mix of science, mythology and the politics of Hindu nationalists.
However others, such as an Indian scientist from the U.S. who attended the conference, seemed to find the examination of ancient testimony compelling, saying, “Knowledge always grows, its flow never stops. So if all this knowledge was available in the ancient days, I need to know where it stopped. Why did it fail to grow? Why was there no advancement? When did it stop? I am not aware of the chronology of events, but I am definitely willing to learn more and find out.”
Featured Image: A manuscript illustration of the Sky Battle of Kurukshetra, fought between the Kauravas and the Pandavas, recorded in the Mahabharata Epic (Wikimedia Commons)
By Liz Leafloor
TREATY OF UTRECHT, SECTION XV
April 11, 1713
For centuries prior to 1713, wars raged almost constantly between France and England - over the eons many peace treaties were negotiated, signed, and broken at will.
True to form, on July 13, 1713, they ratified another, the Treaty of Utrecht, which, in time, like all previous peace deals between them, would prove to be no more than a respite from war. The war the treaty ended, like most European wars, had been caused by family squabbles among the pampered royal houses of Europe. Religion was also a prime factor; it is prominently mentioned in the preamble and in several sections.
The treaty also included provisions that were extremely bad news for the Mi'kmaq, Maliseet, and Acadians. Section XII transferred to the British Crown the self-presumed French ownership of Acadia. This event marked the beginning of the end of French power in the Americas.
The clear winners of the peace in this instance were the British, they got almost everything they wanted, including the renunciation of all claims by the Crowns of France and Spain to each other's Thrones. Thus, they forestalled the possibility that the two Catholic Crowns would ever be worn by one person. The French and Spanish Crowns also agreed to recognize that thereafter Great Britain's Crown was restricted to Protestant royalty only. The Spanish, like the French, had to give up several of their prized possessions, including the strategic Rock of Gibraltar.
The main victims of this peace were the First Nations Peoples of the Americas and the Black people of Africa. The tragic destinies of both people were decided by the European Crowns, without an iota of thought being given to their interests. Their rights, as free and independent peoples were being abrogated and First Nations and African lands were also being taken. The Treaty of Utrecht also gave European nations license to forcibly remove Black people from Africa and bring them to the Americas as slaves.
Section XIV of the treaty deals with the rights of French subjects to stay within the ceded colonies and to practise their religion freely, subject to the discriminatory religious laws of Great Britain. It also placed the Eastern Amerindian Nations under British dominion. However, another section makes this presumption confusing. Section XV of the Treaty of Utrecht reads:
“The Subjects of France inhabiting Canada, and others, shall hereafter give no Hinderance or Molestation to the five Nations, or Cantons, of Indians, subject to the Dominion of Great Britain, nor to the other Natives of America, who are Friends to the same. In like manner, the Subjects of Great Britain shall behave themselves peaceably towards the Americans, who are Subjects or Friends to France. And on both sides, they shall enjoy full Liberty of going and coming on account of Trade. Also the Natives of those Countrys shall, with the same Liberty, resort as they please to the British and French Colonys, for promoting Trade on one side and the other, without any Molestation or Hinderance, either on the part of the British Subjects, or of the French. But it is to be exactly and distinctly settled by Commissarys, who are, and who ought to be accounted the Subjects and Friends of Britain, or of France."
Interpretations of the section have ranged from saying that it gives dominion over the Eastern First Nations and their lands to Great Britain, to saying that it identifies some of them as French subjects, to saying that it acknowledges them as independent Nations. If this section was meant to place these Nations under British rule, that intention is not clearly stated. In fact, just the opposite may be inferred, given that the British sought a separate treaty with the Eastern First Nations. If they had thought otherwise, they would have demanded that First Nations ratify Section XIV of the Treaty of Utrecht, rather than entering into separate agreements with them.
In view of the White supremacist attitudes prevailing at the time, the fact that First Nations, including the Mi'kmaq, were left out of the treaty negotiations, not even advised about its signing, should come as no surprise. A letter from Governor T. Caulfield to Vaudreuil, dated May 7, 1714, attests to the fact that the Mi'kmaq had been left in the dark:
"Breach of the treaty of peace and commerce committed by Indians under French government upon a British trading vessel at Beaubassin. Enclosed letter from Pere Felix, giving the Indians' excuse, i.e., that they did not know that the treaty was concluded between the two crowns, or that they were included in it. The Indians come from Richibucto. Enclosed John Adams' account of the goods taken from him. Hopes that satisfaction will be given, and promises to prevent similar outrages on his side."
Finally, in 1715, the Mi'kmaq were enlightened. At a meeting with the Nation's Chiefs, two English officers informed them that France had transferred them and the ownership of their land to Great Britain via the Treaty of Utrecht, and that King George I was now their sovereign. The Mi'kmaq responded in no uncertain terms that they did not come under the Treaty of Utrecht, would not recognize a foreign king in their country, and would not recognize him as having dominion over their land.
At the same meeting the English had the audacity to place before the Chiefs the proposal that they permit British settlement in their villages for the purpose of creating one people. The Mi'kmaq, of course, immediately rejected this monstrous request to submit to extinction by assimilation. The Chiefs then clarified for the English that they had never given over ownership of their land to the French King or considered themselves to be his subjects, and therefore, he had had nothing to transfer. With no agreement, open hostilities between the Mi'kmaq and the English resumed. Thus the die was cast for close to fifty more years of conflict, with occasional periods of uneasy truce.
After they had learned that the French had claimed their land and, unbeknownst to them, attempted to transfer their territories to Great Britain by treaty two years earlier, the Mi'kmaq directed protests to St. Ovide de Brouillant, Louisbourg's military commander in 1715, and Governor after September 1717. He responded with what can be described as lies and doubletalk:
"He [the French King] knew full well that the lands on which he tread, you possess them for all time. The King of France, your Father, never had the intention of taking them from you, but had ceded only his own rights to the British Crown.”
To read about the horrible consequences that European treachery visited upon American Indians, see American Indian Genocide.
Perseid Meteor Shower
The Comet Hypothesis
On August 11 & 12 the Perseid meteor shower will have maximum visibility in the northern hemisphere. As a result of studies of these showers, which began in the mid-1800s, astronomers developed a hypothesis that they are the result of the Earth passing through the extended tails of comets. Eventually they ‘found’ comets which they associated with dozens of meteor showers, except the Geminids.
The primary problem with this hypothesis is that the periods of the majority of these comets – the years between their returns close to the orbit of the Earth – are, for example: Ursids 13.6, Leonids 33, Orionids 73, Perseids 133 and Lyrids 415 years. To imagine that the ‘tails’ of these comets, hundreds of millions of miles long, are sufficiently dense and remain so for such long periods that they produce meteor showers every year at the same date, is to stretch credulity.
Meteor Shower Particle Orbits
There is actually considerable scientific evidence which contradicts the comet hypothesis, which I discuss in detail in a 2010 post titled Geminid Meteor Shower Mystifies Scientists. This evidence is based on the photographically determined orbits of the meteor shower particles, which are completely different from the comet orbits. It shows that the orbit of each meteor shower is within the inner solar system. (Babadjanov, P., Orbital Elements of Photographic meteors, p. 287.) What is absolutely amazing is that practically every orbit is tangent to the orbit of the Earth at perihelion and the orbits of the particles of each meteor shower converge at these points.
Each meteor shower is material ejected by one or more convulsions within priori-Mars when the planet was in geostationary orbit of the Earth, and the point of each ejection is marked by the convergence of the remaining particles. Because of the recent activity and the massive amount of material ejected from that planet, there is still plenty of material; indeed, all meteorites still falling to Earth were ejected from priori-Mars in the last 6,000 years.
Today in History – April 9, 1967 – The Boeing 737 makes its maiden flight. Boeing initiated the design in 1958 and received its first order in 1965. Since then the Boeing 737 has become a top selling airplane with approximately 6,000 orders.
John McMasters was an expert engineer at Boeing who had an endless passion for designing aircraft. John was equally passionate about inspiring the next generation of engineering designers. He graduated with BS and MS degrees in engineering from the University of Colorado at Boulder. John started his career with the U.S. Air Force, where he was awarded the Air Force Commendation Medal in 1965 for his conceptual design, deployment and testing of air-to-air guided missiles. He obtained a Ph.D. degree in aeronautical engineering from Purdue University in 1975 and joined Boeing as an aerodynamics engineer soon afterward, where he developed many new concepts that revolutionized the airplane design industry.
John passed away in 2008; prior to his death he prepared large annotated presentations and other documents to preserve his legacy of innovations in airplane design. These documents are accessible on the Engineering Pathway digital library, and the following are John’s own words on airplane design:
“In order to understand where we have been as a guide to where we are now going in our future, let us begin by considering the history of flight – so far.
Starting with the world’s first controlled, powered flight by the Wright Brothers in 1903, we have seen dramatic progress driven by the mantra: “Farther, faster, and higher”
With the introduction of practical rocket propulsion schemes in WWII, the quest for progress continued as we were able to fly beyond the earth’s atmosphere and successfully send a person to the moon.
Today, we dream of travel to other planets and beyond, and air transportation is so commonplace that we often take it for granted (even though it has dramatically changed the world we live in – and will likely continue to do so).”
“The Boeing B-47 and the Douglas DC-3 are two of the ten most historically significant aircraft of the first century of flight. The B-47 was more than just another bomber; it established the “paradigm” for what a good long-range subsonic cruising airplane should look like and led directly to the Boeing 707 (and competitor Douglas DC-8) that transformed the commercial airplane business in the mind of the traveling public. The fact that Douglas and Boeing are now combined under the same tent is an irony of the history of aviation. Likewise, the Boeing-led International Space Station and the Rockwell/North American Space Shuttle are “synergistic” with each other and both are now “Boeing”. It may also be argued that the incredible North American B-70 was a more fantastic technical achievement than the better-known Anglo/French “Concorde” SST. The satellite is the Hughes Anik A – the first synchronous communications satellite.
• As a consequence of our mergers and acquisitions, Boeing now has a strong domestic presence in over half the states in the union, including Hawaii and Alaska (not shown).
• Because we have such a broad presence, there is a huge range of job and career opportunities for new employees. There is also a broader selection of job site opportunities.”
Students will be able to define adaptation and give 10 specific examples of adaptations for food-getting behavior and how they enhance the animals' ability to survive.
Students will watch a five-minute video on bear-feeding behavior and make observations in writing concerning the adaptations involved with their feeding. Teacher will draw specific attention to the prehensile lips.
Students will work in groups or pairs and identify ten specific feeding adaptations in other animals.
Groups will present their lists to the class making a larger class list of adaptations.
End the lesson with a video of other animals and their specific adaptations.
Adaptations allow an organism an "edge" in the game of survival. When one animal can compete for resources more efficiently than another, the more suitably adapted individual will survive in greater numbers than its less well-adapted counterpart. This, in turn, will also allow the more adapted individual to reproduce more.
Soon, the population that is more adapted becomes the more predominant organism in the entire population. This is differential reproduction and is the basis for Darwin's theory of evolution by natural selection.
Class will create a composite list of adaptations for further reference. This list could be used as a basis for a quiz at some future time.
Coronal mass ejections (CMEs) are huge explosions of magnetic field and plasma from the Sun's corona. When CMEs impact the Earth’s magnetosphere, they are responsible for geomagnetic storms and enhanced aurora. CMEs originate from highly twisted magnetic field structures, or “flux ropes”, on the Sun, often visualized by their associated “filaments” or “prominences”, which are relatively cool plasmas trapped in the flux ropes in the corona. When these flux ropes erupt from active regions on the Sun (regions associated with sunspots and very strong magnetic fields), they are often accompanied by large solar flares; eruptions from quiet regions of the Sun, such as the “polar crown” filament eruptions, sometimes do not have accompanying flares.
CMEs travel outward from the Sun typically at speeds of about 300 kilometers per second, but can be as slow as 100 kilometers per second or faster than 3000 kilometers per second. The fastest CMEs erupt from large sunspot active regions, powered by the strongest magnetic field concentrations on the Sun. These fast CMEs can reach Earth in as little as 14 to 17 hours. Slower CMEs, typically the quiet region filament eruptions, take several days to traverse the distance from the sun to Earth. Because CMEs have an embedded magnetic field that is stronger than the background field of the solar wind, they will expand in size as they propagate outward from the Sun. By the time they reach the Earth, they can be so large they will fill half the volume of space between the Sun and the Earth. Because of their immense size, slower CMEs can take as long as 24 to 36 hours to pass over the Earth, once the leading edge has arrived.
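Those travel times follow from simple kinematics. A rough sketch, assuming a constant speed over one astronomical unit (real CMEs accelerate or decelerate as they interact with the solar wind):

```python
AU_KM = 1.496e8  # mean Sun-Earth distance in kilometers

def cme_travel_hours(speed_km_s):
    """Hours for a CME to cover 1 AU at a constant speed."""
    return AU_KM / speed_km_s / 3600.0

for v in (300, 1000, 3000):
    print(f"{v:>4} km/s -> {cme_travel_hours(v):6.1f} hours")
# 300 km/s takes about 138 hours (almost six days);
# 3000 km/s takes about 13.9 hours, matching the 14 to 17 hour figure above.
```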
CMEs that are traveling faster than the solar wind plasma’s fast mode wave speed (the space equivalent of the Earth’s sound speed) will generate a shock wave, just like an airplane traveling faster than the speed of sound generates a sonic boom. These shock waves accelerate charged particles ahead of them to create much of the solar radiation storm affiliated with large-scale solar eruptions. Often, the first sign of a CME hitting the Earth environment is the plasma density jump due to the shock wave’s passage.
The size, speed, direction, and density of a CME are important parameters to determine when trying to predict if and when it will impact Earth. We can estimate these properties of a CME using observations from an instrument known as a coronagraph, which blocks the bright light of the solar disk, just as the moon does in a total solar eclipse, allowing the outer solar atmosphere (chromosphere and corona) to be observed. CMEs show up as bright clouds of plasma moving outward through interplanetary space.
In order to predict the strength of the resulting geomagnetic storm, estimates of the magnetic field strength and direction are important. At the present time, the magnetic field cannot be determined until it is measured as the CME passes over a monitoring satellite. If the magnetic field direction of the CME is opposite to that of the Earth’s dipolar magnetic field, the resulting geomagnetic disturbance or storm will be larger than if the fields are in the same direction. Some CMEs show predominately one direction of magnetic field in their passage past the Earth, but most exhibit changing field directions as the large magnetic cloud passes over our relatively tiny magnetosphere, so most CMEs that impact the Earth’s magnetosphere will at some point have magnetic field conditions that favor the generation of geomagnetic storming with the associated auroral displays and geomagnetically induced currents in the ground.
Images courtesy of NASA/ESA SOHO mission.
Chicago researchers report the development of a new mouse model for food allergy that mimics symptoms generated during a human allergic reaction to peanuts. The animal model provides a new research tool that will be invaluable in furthering the understanding of the causes of peanut and other food allergies and in finding new ways to treat and prevent their occurrence, according to experts at the National Institute of Allergy and Infectious Diseases (NIAID), the component of the National Institutes of Health (NIH) that funded the research. Peanut allergy is of great public health interest because this food allergy is the one most often associated with life-threatening allergic reactions, resulting in up to 100 deaths in the United States each year.
The findings of the research team, led by Paul Bryce, Ph.D., of the Feinberg School of Medicine at Northwestern University, appear in the January issue of the Journal of Allergy and Clinical Immunology. The development of new animal models for food allergy was identified as a critical need by the 2006 NIH Expert Panel on Food Allergy Research.
"Food allergies affect the health and quality of life of many Americans, particularly young children," says NIAID Director Anthony S. Fauci, M.D. "Finding an animal model that mimics a severe human allergic reaction to peanuts will help us better understand peanut allergy and develop new and improved treatment and prevention strategies."
Allergic reactions to food can range from mild hives to vomiting to difficulty breathing to anaphylaxis, the most severe reaction. Anaphylaxis may result from a whole-body allergic reaction to the release of the chemical histamine, causing muscles to contract, blood vessels to dilate and fluid to leak from the bloodstream into the tissues. These effects can result in narrowing of the upper or lower airways, low blood pressure, shock or a combination of these symptoms, and also can lead to a loss of consciousness and even death.
The most significant obstacle to developing an animal model of food allergy is that animals are not normally allergic to food. Scientists must add a strong immune stimulant to foods to elicit a reaction in animals that resembles food allergy in humans. Because of this requirement, useful animal models have been developed only in the last few years, and such animal models have until now used cholera toxin as the immune stimulant.
Dr. Bryce's team took the novel approach of feeding mice a mixture of whole peanut extract (WPE) and a toxin from the bacterium Staphylococcus aureus, called staphylococcal enterotoxin B (SEB), to simulate the human anaphylactic reaction to peanuts in mice.
"Persistent S. aureus colonization is commonly found on the skin of people with eczema and in the nasal cavities of people with sinusitis," says Dr. Bryce. "The history between S. aureus and allergic diseases led us to use staphylococcal toxins to stimulate food allergy in animals."
According to Dr. Bryce, the results using the SEB/WPE mixture were considerably better than those seen with previous animal models, which failed to mimic many features of food allergy. They showed that the SEB/WPE mixture stimulated severe symptoms in mice that closely resemble those found in human anaphylaxis, including swelling around the eyes and mouth, reduced movement and significant problems breathing. Additionally, mice given the SEB/WPE mixture had high blood levels of histamine, which indicates a severe allergic reaction.
The researchers also observed that the blood and tissues of mice in the SEB/WPE group had higher-than-normal numbers of eosinophils, which are white blood cells often associated with allergy-related inflammation. Future studies will be needed to determine if eosinophils play an important role in human food allergy.
These results, say Dr. Bryce, suggest that this animal model of food allergy will be useful for many types of future research studies.
Approximately 4 percent of Americans have food allergies. For reasons that are not well understood, the prevalence in children increased by 18 percent between 1997 and 2007. The most common causes of food allergies are milk, eggs, shellfish, peanuts, tree nuts, wheat and soy.
Each year there are between 15,000 and 30,000 episodes of food-induced anaphylaxis, which are associated with 100 to 200 deaths in the United States.
Reference: K Ganeshan et al. Impairing oral tolerance promotes allergy and anaphylaxis: a new murine food allergy model. Journal of Allergy and Clinical Immunology. DOI: 10.1016/j.jaci.2008.10.011 (2008).
Source: NIH/National Institute of Allergy and Infectious Diseases
Look up at the night sky and you'll see stars, sure. But you're also seeing planets—billions and billions of them. At least.
That's the conclusion of a new study by astronomers at the California Institute of Technology (Caltech) that provides yet more evidence that planetary systems are the cosmic norm. The team made their estimate while analyzing planets orbiting a star called Kepler-32—planets that are representative, they say, of the vast majority in the galaxy and thus serve as a perfect case study for understanding how most planets form.
"There's at least 100 billion planets in the galaxy—just our galaxy," says John Johnson, assistant professor of planetary astronomy at Caltech and coauthor of the study, which was recently accepted for publication in the Astrophysical Journal. "That's mind-boggling."
"It's a staggering number, if you think about it," adds Jonathan Swift, a postdoc at Caltech and lead author of the paper. "Basically there's one of these planets per star."
The planetary system in question, which was detected by the Kepler space telescope, contains five planets. The existence of two of those planets had already been confirmed by other astronomers. The Caltech team confirmed the remaining three, then analyzed the five-planet system and compared it to other systems found by the Kepler mission.
The planets orbit a star that is an M dwarf—a type that accounts for about three-quarters of all stars in the Milky Way. The five planets, which are similar in size to Earth and orbit close to their star, are also typical of the class of planets that the telescope has discovered orbiting other M dwarfs, Swift says. Therefore, the majority of planets in the galaxy probably have characteristics comparable to those of the five planets.
While this particular system may not be unique, what does set it apart is its coincidental orientation: the orbits of the planets lie in a plane that's positioned such that Kepler views the system edge-on. Due to this rare orientation, each planet blocks Kepler-32's starlight as it passes between the star and the Kepler telescope.
By analyzing changes in the star's brightness, the astronomers were able to determine the planets' characteristics, such as their sizes and orbital periods. This orientation therefore provides an opportunity to study the system in great detail—and because the planets represent the vast majority of planets that are thought to populate the galaxy, the team says, the system also can help astronomers better understand planet formation in general.
"I usually try not to call things 'Rosetta stones,' but this is as close to a Rosetta stone as anything I've seen," Johnson says. "It's like unlocking a language that we're trying to understand—the language of planet formation."
One of the fundamental questions regarding the origin of planets is how many of them there are. Like the Caltech group, other teams of astronomers have estimated that there is roughly one planet per star, but this is the first time researchers have made such an estimate by studying M-dwarf systems, the most numerous population of planets known.
To do that calculation, the Caltech team determined the probability that an M-dwarf system would provide Kepler-32's edge-on orientation. Combining that probability with the number of planetary systems Kepler is able to detect, the astronomers calculated that there is, on average, one planet for every one of the approximately 100 billion stars in the galaxy. But their analysis only considers planets that are in close orbits around M dwarfs—not the outer planets of an M-dwarf system, or those orbiting other kinds of stars. As a result, they say, their estimate is conservative. In fact, says Swift, a more accurate estimate that includes data from other analyses could lead to an average of two planets per star.
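The geometry behind that scaling argument can be sketched as follows: for a circular orbit, the chance that a randomly oriented system is aligned well enough to transit is roughly the stellar radius divided by the orbital distance. The numbers below are illustrative stand-ins, not the team's actual inputs:

```python
R_SUN_KM = 6.957e5   # solar radius in kilometers
AU_KM = 1.496e8      # astronomical unit in kilometers

def transit_probability(star_radius_suns, orbit_au):
    """Approximate geometric transit probability ~ R_star / a for a circular orbit."""
    return (star_radius_suns * R_SUN_KM) / (orbit_au * AU_KM)

# Kepler-32-like stand-ins: half a solar radius, an orbit of ~0.05 AU.
p = transit_probability(0.5, 0.05)
print(f"transit probability ~ {p:.1%}")  # ~4.7%

# A survey that detects N such edge-on systems implies roughly N / p systems overall,
# which is how counts of transits are scaled up to an estimate of planets per star.
```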
M-dwarf systems like Kepler-32's are quite different from our own solar system. For one, M dwarfs are cooler and much smaller than the sun. Kepler-32, for example, has half the mass of the sun and half its radius. The radii of its five planets range from 0.8 to 2.7 times that of Earth, and those planets orbit extremely close to their star. The whole system fits within just over a tenth of an astronomical unit (the average distance between Earth and the sun)—a distance that is about a third of the radius of Mercury's orbit around the sun. The fact that M-dwarf systems vastly outnumber other kinds of systems carries a profound implication, according to Johnson, which is that our solar system is extremely rare. "It's just a weirdo," he says.
The fact that the planets in M-dwarf systems are so close to their stars doesn't necessarily mean that they're fiery, hellish worlds unsuitable for life, the astronomers say. Indeed, because M dwarfs are small and cool, their temperate zone—also known as the "habitable zone," the region where liquid water might exist—lies correspondingly farther inward. Even though only the outermost of Kepler-32's five planets lies in its temperate zone, many other M-dwarf systems have more planets that sit right in their temperate zones.
As for how the Kepler-32 system formed, no one knows yet. But the team says its analysis places constraints on possible mechanisms. For example, the results suggest that the planets all formed farther away from the star than they are now, and migrated inward over time.
Like all planets, the ones around Kepler-32 formed from a proto-planetary disk—a disk of dust and gas that clumped up into planets around the star. The astronomers estimated that the mass of the disk within the region of the five planets was about as much as that of three Jupiters. But other studies of proto-planetary disks have shown that three Jupiter masses can't be squeezed into such a tiny area so close to a star, suggesting to the Caltech team that the planets around Kepler-32 initially formed farther out.
Another line of evidence relates to the fact that M dwarfs shine brighter and hotter when they are young, when planets would be forming. Kepler-32 would have been too hot for dust—a key planet-building ingredient—to even exist in such close proximity to the star. Previously, other astronomers had determined that the third and fourth planets from the star are not very dense, meaning that they are likely made of volatile compounds such as carbon dioxide, methane, or other ices and gases, the Caltech team says. However, those volatile compounds could not have existed in the hotter zones close to the star.
Finally, the Caltech astronomers discovered that three of the planets have orbits that are related to one another in a very specific way: one planet's orbital period lasts twice as long as another's, and a third planet's lasts three times as long as that same period—a 1:2:3 relation. Planets don't fall into this kind of arrangement immediately upon forming, Johnson says. Instead, the planets must have started their orbits farther away from the star before moving inward over time and settling into their current configuration.
"You look in detail at the architecture of this very special planetary system, and you're forced into saying these planets formed farther out and moved in," Johnson explains.
The implications of a galaxy chock full of planets are far-reaching, the researchers say. "It's really fundamental from an origins standpoint," says Swift, who notes that because M dwarfs shine mainly in infrared light, the stars are invisible to the naked eye. "Kepler has enabled us to look up at the sky and know that there are more planets out there than stars we can see."
California Institute of Technology: http://www.caltech.edu
Asbestos is the generic name given to the fibrous variety of six naturally occurring minerals that have been used in commercial products. These minerals are made up of fibrous bundles. These fibers are long and thin, and they can be easily separated from one another. Asbestos minerals have physical properties (high tensile strength, flexibility, resistance to heat and chemicals, high electrical resistance, and the capability to be woven like fabric) that make them useful in many commercial products.
Asbestos minerals come from metamorphic rocks. Significant deposits of asbestos are located in the western United States. However, the mountains of North and South Carolina also have extensive deposits of asbestos minerals. Some small deposits are found in the Smoky Mountains of East Tennessee. There is currently no production of asbestos in the United States. Most of the asbestos used here is imported from Canada.
It has been estimated that asbestos was once used in more than 3,000 different products. Asbestos can be found in vinyl flooring, patching compounds and textured paints, sprayed acoustic ceilings, acoustic ceiling tiles, stove insulation, furnace insulation, pipe insulation, wall and ceiling insulation, roofing shingles and siding, home appliances, fire-retardant clothing, vehicle brake pads, and cement pipe.
Yes. Asbestos is still used today. Because of its unique physical properties, it remains an important component of many products. The major manufacturing uses in the United States are: asphaltic roofing compounds used on commercial buildings (61%), gaskets (19%), and friction products, such as brake shoes and clutches (13%). Most of these products are utilized on a commercial basis and are installed under conditions regulated by OSHA. Today, there are no asbestos-containing products manufactured specifically for use by the general public.
It would depend upon the context of an individual’s contact with asbestos.
In the United States, asbestos containing products available to the general public have essentially been eliminated. Its use today is mainly limited to commercial and industrial applications. Federal government regulations require the protection of the health and safety of persons that use materials containing asbestos in their occupations. The primary public health hazard is the inadvertent or uncontrolled contact with old materials or products that contain asbestos materials and are not intact.
You will not be harmed by touching asbestos or being near materials containing it. Asbestos can cause health problems when inhaled into the lungs. If products containing asbestos are disturbed, microscopic, lightweight asbestos fibers are released into the air. Persons breathing the air may breathe in the asbestos fibers. Continued exposure increases the amount of fibers that remain in the lung. Fibers embedded in lung tissue over time may result in lung diseases such as asbestosis, lung cancer, or mesothelioma. Studies have shown that the combination of smoking and asbestos exposure is particularly harmful.
There is no minimum concentration of asbestos fibers in the air that is considered safe for humans to inhale on a continual basis. The risk of developing adverse health effects is dependent upon the exposure (the amount of asbestos inhaled and the duration of time it was inhaled, typically measured in years) a person has had. Symptoms of lung problems do not usually appear until after 20-30 years of exposure to high levels of asbestos fibers (as might be found in an industrial setting). Most people do not develop health problems when exposed to small amounts of asbestos.
Asbestos-containing material (ACM) can be found in many buildings (including public buildings and schools) and homes today. The older the building is, the more likely ACMs are present in the structure. Fortunately, most residential homes constructed within the past 20 years are not likely to contain any ACM.
It is not possible to identify asbestos just by looking at it. Only a person trained in asbestos fiber identification using a special polarized light microscope can identify it. There are environmental consultants who can be hired to identify asbestos in building materials.
If you have ACM in your home, your choices are to remove it, contain it, or live with it. The recommended thing to do, if the ACM is in good condition, is to leave it alone. The only way it can affect your health is when the material is damaged and fibers become airborne. If it is moderately damaged, it is recommended that you manage it in place (repair the damage and contain it, possibly with a coat of paint or sealer). Removing and disposing of any ACM is expensive and also increases the likelihood of releasing the fibers into the air.
If you feel that you cannot live with it, then the services of an asbestos abatement contractor should be considered. There are commercial companies that can be hired to remove ACM from homes and buildings. However, these companies are not regulated by the state. It would be advisable to check out these companies with your local Better Business Bureau. Due to the expense involved with the removal of asbestos from a home, it is also recommended that a homeowner obtain bids from several companies.
Yes. Depending upon the age of the school building, there may be ACM present in the school your child attends. Don’t panic. This has been a recognized and thoroughly studied situation for over 20 years. To address this problem, Congress passed the Asbestos Hazard Emergency Response Act (AHERA), a provision of the Toxic Substances Control Act, in 1986. AHERA requires local educational agencies to inspect their schools for ACM and prepare management plans that make recommendations for the reduction of asbestos hazards. Should you have concerns about your child’s school, contact your local board of education and inquire about their asbestos management policies.
The following summarizes the five major facts that the EPA has presented in congressional testimony:
Asbestos is hazardous and human risk of asbestos disease depends upon exposure.
Since concerns about asbestos have been in the public eye for over 20 years, there is a wealth of information available from many resources. For persons seeking more information concerning asbestos, visit the web links below.
United States Environmental Protection Agency – Indoor Air
United States Environmental Protection Agency
Region 4, Asbestos Informer
United States National Institutes of Health
National Cancer Institute
United States Department of Labor
Occupational Health and Safety Administration
United States Department of Health and Human Services
Agency for Toxic Substances and Disease Registry
Toxicological Profile of Asbestos
Centers for Disease Control and Prevention
National Institute for Occupational Safety and Health
Tennessee Department of Environment and Conservation
Division of Air Pollution Control - Notification of Asbestos Demolition or Renovation |
Question by VS Prasad: Why did ancient Indians designate the Moon as a planet?
Precession of Earth’s axis: The Vedic seers came up with the value of 25,870 years. How these ancient people were able to make these calculations, however, is “as great a mystery as the origin of life itself”.
There are many texts like Khagola-shastra in ancient India. The original findings of ancient India were:
(1) The calculation of occurrences of eclipses
(2) Determination of Earth’s circumference
(3) Theorizing about gravitation
(4) Determining that the sun was a star, and determination of the number of planets
Aryabhata wrote that 1,582,237,500 rotations of the Earth equal 57,753,336 lunar orbits. This is an extremely accurate value of a fundamental astronomical ratio (1,582,237,500 / 57,753,336 = 27.3964693572), and is perhaps the oldest astronomical constant calculated to such accuracy.
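The quoted quotient is easy to verify, and it lands close to the modern sidereal month expressed in sidereal days. The check below is just arithmetic, not a claim about Aryabhata's method; the day-conversion factor is the standard sidereal-to-solar ratio.

```python
# Verify the quotient quoted above and compare it with the modern value.
rotations = 1_582_237_500   # Earth rotations, per Aryabhata
lunar_orbits = 57_753_336   # lunar orbits in the same interval

print(f"{rotations / lunar_orbits:.10f}")   # 27.3964693572

# Modern sidereal month: ~27.321661 mean solar days. Expressed in sidereal
# days (each ~4 minutes shorter), that is ~27.39646 -- a very close match.
sidereal_per_solar = 366.2564 / 365.2564    # sidereal days per solar day
print(f"{27.321661 * sidereal_per_solar:.7f}")
```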
It is known to modern science that the mass of a typical satellite in our solar system is a tiny fraction of its planet’s mass. In the case of our Moon, the mass is about 1/81 that of Earth—unusually large for a satellite. Such a case is sometimes described as a ‘twin-planet system’, and similar systems have been found around some other stars as well. Our Moon does not revolve around the center of the Earth; rather, both bodies revolve around a common center of mass determined by the mass of each.
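A minimal sketch of the barycenter computation being described, using round published values for the masses and the mean distance:

```python
# Locate the Earth-Moon barycenter: the balance point sits at
# d * m_moon / (m_earth + m_moon) from Earth's center.
m_earth = 5.972e24   # kg
m_moon = 7.342e22    # kg
d_km = 384_400       # mean Earth-Moon distance, km

r_bary = d_km * m_moon / (m_earth + m_moon)
print(f"barycenter: {r_bary:.0f} km from Earth's center")   # ~4670 km

EARTH_RADIUS_KM = 6371
print("inside Earth:", r_bary < EARTH_RADIUS_KM)            # True
```

Note that the balance point falls roughly 1,700 km below Earth's surface — so the two bodies do orbit a common point, but that point lies inside the Earth.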
Answer by Saphire4
Because they were ignorant. Their ideas were false and not based on scientific evidence.
The Activity Block is the large chunk of time devoted to a class or subject. Within the Learning Cultures model, most of the Activity Block is devoted to time for students to work independently or in cooperative learning groups. This period of time is known as Work Time. Work Time consists of a long, uninterrupted period that provides students the opportunity to make choices, pursue personal goals, solve problems and learn relevant skills through self-selected activities. During Work Time, children have ample freedom to read or write texts of their choice, solve problems they feel need to be solved, use space and materials freely, and collaborate with peers of their choice. The Work Time Rubric should be used to guide the implementation of the Work Time format.

Work Time gives students a chance to exercise independence, make choices about what to learn, think critically, and practice newly-learned skills while working on projects of interest. All of these factors help motivate students to become engaged in their work. Since students pursue activities of choice, and not those assigned by a teacher, they tend to be highly motivated and approach Work Time with the intensity they might devote to play or games. The emphasis of Work Time is to provide students with opportunities to pursue their own work agenda. Students are able to learn the habits of mind and dispositions of disciplined thinkers through the daily practice of exercising free will responsibly.

Work Time is also the teacher’s working medium. A classroom in which all students are industriously engaged in independent, self-directed projects is an ideal context for teachers to conduct the curriculum-embedded assessments and differentiated instruction that are distinctive features of the Learning Cultures curriculum. This course provides an explanation of the Work Time format. It also provides all of the resources and materials needed in order to implement Work Time in your classroom.
Myosins are a large family of motor proteins found in eukaryotic tissues. They are responsible for actin-based motility.
- The term “myosin” was originally used to describe a group of similar, but nonidentical, ATPases found in striated and smooth muscle cells.
Structure and Function
Most myosin molecules are composed of a head, neck, and tail domain.
- The head domain binds the filamentous actin, and uses ATP hydrolysis to generate force and to "walk" along the filament towards the (+) end (with the exception of one family member, myosin VI, which moves towards the (-) end).
- The neck domain acts as a linker and as a lever arm for transducing force generated by the catalytic motor domain. The neck domain can also serve as a binding site for myosin light chains, which are distinct proteins that form part of a macromolecular complex and generally have regulatory functions.
- The tail domain generally mediates interaction with cargo molecules and/or other myosin subunits. In some cases, the tail domain may play a role in regulating motor activity.
Nomenclature, evolution, and the family tree
The wide variety of myosin genes found throughout the eukaryotic phyla were named according to different schemes as they were discovered. The nomenclature can therefore be somewhat confusing when attempting to compare the functions of myosin proteins within and between organisms.
Skeletal muscle myosin, the most conspicuous of the myosin superfamily due to its abundance in muscle fibers, was the first to be discovered. This protein makes up part of the sarcomere and forms macromolecular filaments composed of multiple myosin subunits. Similar filament-forming myosin proteins were found in cardiac muscle, smooth muscle, and non-muscle cells. However, beginning in the 1970s, researchers began to discover new myosin genes in simple eukaryotes encoding proteins that acted as monomers and were therefore designated Class I myosins. These new myosins were collectively termed "unconventional myosins" and have been found in many tissues other than muscle. These superfamily members have been grouped according to phylogenetic relationships derived from a comparison of the amino acid sequences of their head domains, with each class being assigned a Roman numeral. The unconventional myosins also have divergent tail domains, suggesting unique functions. The now diverse array of myosins likely evolved from an ancestral precursor.
Analysis of the amino acid sequences of different myosins shows great variability among the tail domains but strong conservation of head domain sequences. Presumably this is so the myosins may interact, via their tails, with a large number of different cargoes, while the goal in each case - to move along actin filaments - remains the same and therefore requires the same machinery in the motor. For example, the human genome contains over 40 different myosin genes.
These differences in shape also determine the speed at which myosins can move along actin filaments. The hydrolysis of ATP and the subsequent release of the phosphate group causes the "power stroke," in which the "lever arm" or "neck" region of the heavy chain is dragged forward. Since the power stroke always moves the lever arm by the same angle, the length of the lever arm determines how fast the cargo will move. A longer lever arm will cause the cargo to traverse a greater distance even though the lever arm undergoes the same angular displacement - just as a person with longer legs can move farther with each individual step. Myosin V, for example, has a much longer neck region than myosin II, and therefore moves 30-40 nanometers with each stroke as opposed to only 5-10.
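The "longer lever, longer step" geometry can be made concrete with a little trigonometry. The swing angle and lever lengths below are illustrative order-of-magnitude values, not measured ones:

```python
# Rough geometry of the power stroke: for a fixed angular swing, the tip
# displacement of a rigid lever scales with its length. The angle and
# lever lengths are illustrative, not measured values.
import math

def step_size_nm(lever_nm: float, swing_deg: float) -> float:
    """Chord traced by the lever tip for a rigid rotation of swing_deg."""
    return 2 * lever_nm * math.sin(math.radians(swing_deg) / 2)

SWING = 70.0  # degrees, a commonly quoted order of magnitude
for name, lever in (("short neck (myosin II-like)", 8.0),
                    ("long neck (myosin V-like)", 30.0)):
    print(f"{name}: ~{step_size_nm(lever, SWING):.0f} nm per stroke")
```

The two illustrative lever lengths yield roughly 9 nm and 34 nm per stroke, consistent with the 5-10 nm and 30-40 nm ranges described above.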
Myosin I's function is not fully understood, but it is believed to be responsible for vesicle transport or for the contractile vacuole of cells.
Myosin II is the best-studied example of these properties.
- Myosin II contains two heavy chains, each about 2000 amino acids in length, which constitute the head and tail domains. Each of these heavy chains contains the N-terminal head domain, while the C-terminal tails take on a coiled-coil morphology, holding the two heavy chains together (imagine two snakes wrapped around each other, such as in a caduceus). Thus, myosin II has two heads.
- It also contains 4 light chains (2 per head), which bind the heavy chains in the "neck" region between the head and tail. These light chains are often referred to as the essential light chain and the regulatory light chain.
In muscle cells, it is myosin II that is responsible for producing the contractile force. Here, the long coiled-coil tails of the individual myosin molecules join together, forming the thick filaments of the sarcomere. The force-producing head domains stick out from the side of the thick filament, ready to walk along the adjacent actin-based thin filaments in response to the proper chemical signals.
Genes in humans
Note that not all of these genes are active.
- Class I: MYO1A, MYO1B, MYO1C, MYO1D, MYO1E, MYO1F, MYO1G, MYO1H
- Class II: MYH1, MYH2, MYH3, MYH4, MYH6, MYH7, MYH7B, MYH8, MYH9, MYH10, MYH11, MYH13, MYH14, MYH15, MYH16
- Class III: MYO3A, MYO3B
- Class V: MYO5A, MYO5B, MYO5C
- Class VI: MYO6
- Class VII: MYO7A, MYO7B
- Class IX: MYO9A, MYO9B
- Class X: MYO10
- Class XV: MYO15A
- Class XVIII: MYO18A, MYO18B
Myosin light chains are distinct and have their own properties. They are not considered "myosins" but are components of the macromolecular complexes that make up the functional myosin enzymes.
- Light chain: MYL1, MYL2, MYL3, MYL4, MYL5, MYL6, MYL6B, MYL7, MYL9, MYLIP, MYLK, MYLK2, MYLL1
Paramyosin is the myosin-like protein of clam muscle. It enables prolonged contraction of the muscles that hold the clam shells closed for as long as a month, and it does so with low energy consumption. Paramyosins typically have a molecular weight ranging between 93,000 and 115,000 Da, depending on the species.
For sixth grade, the science curriculum should include space, Earth, life, physical and general science topics. A sixth grade science curriculum usually incorporates inquiry-based investigations, meaning students propose a question and investigate the answer.
A sixth grade science curriculum includes Earth and space topics. This topic includes the Earth's structure as well as the geologic processes, cycles of oceans and water, plate tectonics and the climate. The topics in these lessons should also cover the solar system, stars and galaxies.
With life science, the curriculum includes lessons about the characteristics of living things. Students learn about plant and animal cell structures, classification of organisms, genetics and the structure and function of plants. They also study about cells, including unicellular and multicellular life and human cells.
Physical science incorporates physics and chemistry. Sixth grade students learn about physical and chemical changes as well as atoms and elements, states of matter, and mixtures and solutions. Lessons include experiments on motion, gravity, density and buoyancy, energy and heat. Students also learn about the periodic tables and properties of waves and light.
Quite often teachers design STEM projects, or those that integrate science, technology, engineering and mathematics. The goal of the curriculum should be for students to make connections to and reflect on their learning.
Discuss with students how recent findings concerning the unique gravitational interaction of three dead stars have the potential to poke holes in Einstein’s famous theory.
Grade Level: 9-12
Student Learning Objectives:
Students will gain an understanding of (1) Einstein’s physics principles, (2) basic astronomy (star systems) and (3) how scientific knowledge evolves.
As you review the following information with students, follow the links to articles and YouTube videos, where you’ll find detailed explanations of key facts and concepts from Helen B. Warner Prize winner Scott Ransom.
Three recently discovered dead stars are presenting a possible challenge to Einstein’s equivalence principle and theory of relativity.
(The equivalence principle states that the effect of gravity on a body doesn’t depend on the nature or internal structure of that body. In other words, objects of different sizes and weights fall at the same rate. The principle is famously associated with the story—possibly apocryphal—of Galileo’s experiment at the Leaning Tower of Pisa, but because the principle doesn’t jibe with quantum mechanics [quantum gravity], physicists have suspected that equivalence would not hold true under extreme conditions. Any evidence contrary to the equivalence principle would, in turn, cast doubt on Einstein’s theory of general relativity.)
The discovered three-star system includes a millisecond pulsar, which is located 4,200 light-years from Earth and is closely orbited by a hot white dwarf and a cooler, more distant white dwarf. All three objects are stellar remnants, composed of the remains of stars that reached the ends of their lives.
The extreme conditions produced by the system led to very pure gravitational interactions among the stars' orbits. This means that all sorts of interesting information can be calculated. More specifically, millisecond pulsars spin rapidly and give off radio waves that race through space as the stars rotate. They are used as a measurement device in astronomy because, after the number of times they spin per second is determined, millisecond pulsars can help scientists understand the effects of gravity. Scientists measure the arrival times of the stars' radio waves in order to calculate the gravitational properties of the system and its stars' masses.
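As a cartoon of the timing technique — assuming a circular, edge-on orbit with a hypothetical orbital radius — pulses arrive early or late by the light-travel time across the orbit, and fitting that repeating pattern reveals the orbital geometry:

```python
# Cartoon of pulsar timing: pulses arrive early or late as the orbit
# carries the pulsar toward or away from us (the Roemer delay). The
# orbital radius is hypothetical and the orbit is assumed circular, edge-on.
import math

C_KM_S = 299_792.458  # speed of light, km/s

def roemer_delay_s(a_km: float, phase: float) -> float:
    """Light-travel-time offset (seconds) at orbital phase in [0, 1)."""
    return (a_km / C_KM_S) * math.sin(2 * math.pi * phase)

a_km = 1.2e6  # projected orbital radius of the pulsar (hypothetical)
for phase in (0.0, 0.25, 0.5, 0.75):
    print(f"phase {phase:4.2f}: pulse offset {roemer_delay_s(a_km, phase):+6.2f} s")
```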
The equivalence principle assumes that the gravitational effect of the outer white dwarf would be identical for both the inner white dwarf and the pulsar. If, however, the equivalence principle is invalid under the conditions in this system, the outer star's gravitational effect on the inner white dwarf and the pulsar would be slightly different, and the high-precision pulsar timing observations could easily show that. Finding a deviation from the equivalence principle would indicate a breakdown of general relativity and would point us toward a new, revised theory of gravity.
The stars in this specific system are unusual, to say the least, originating from a rare set of circumstances. Previously, similar systems haven’t survived when tested, because when they form, pulsars typically blow away anything that could orbit them.
Introducing the discussion to students:
According to Einstein’s equivalence principle, the effect of gravity on a body of mass doesn’t depend on the nature or internal structure of that body of mass. But because the principle doesn’t jibe with more modern quantum mechanics (quantum gravity), physicists have suspected that equivalence would not hold true under extreme conditions.
Those extreme conditions haven’t truly presented themselves—until now. Let’s explore how the unique gravitational interaction of three recently discovered dead stars has the potential to poke holes in not only Einstein’s equivalence principle, but also his theory of general relativity.
Options for student discussion questions: |
Dark galaxies are small, gas-rich galaxies in the early Universe that are very inefficient at forming stars. They are predicted by theories of galaxy formation and are thought to be the building blocks of today’s bright, star-filled galaxies. Astronomers think that they may have fed large galaxies with much of the gas that later formed into the stars that exist today.
Because they are essentially devoid of stars, these dark galaxies don’t emit much light, making them very hard to detect. For years astronomers have been trying to develop new techniques that could confirm the existence of these galaxies. Small absorption dips in the spectra of background sources of light have hinted at their existence. However, this new study marks the first time that such objects have been seen directly.
“Our approach to the problem of detecting a dark galaxy was simply to shine a bright light on it.” explains Simon Lilly (ETH Zurich, Switzerland), co-author of the paper. “We searched for the fluorescent glow of the gas in dark galaxies when they are illuminated by the ultraviolet light from a nearby and very bright quasar. The light from the quasar makes the dark galaxies light up in a process similar to how white clothes are illuminated by ultraviolet lamps in a night club.”
The team took advantage of the large collecting area and sensitivity of the Very Large Telescope (VLT), and a series of very long exposures, to detect the extremely faint fluorescent glow of the dark galaxies. They used the FORS2 instrument to map a region of the sky around the bright quasar HE 0109-3518, looking for the ultraviolet light that is emitted by hydrogen gas when it is subjected to intense radiation. Because of the expansion of the Universe, this light is actually observed as a shade of violet by the time it reaches the VLT.
“After several years of attempts to detect fluorescent emission from dark galaxies, our results demonstrate the potential of our method to discover and study these fascinating and previously invisible objects,” says Sebastiano Cantalupo (University of California, Santa Cruz), lead author of the study.
The team detected almost 100 gaseous objects which lie within a few million light-years of the quasar. After a careful analysis designed to exclude objects where the emission might be powered by internal star-formation in the galaxies, rather than the light from the quasar, they finally narrowed down their search to 12 objects. These are the most convincing identifications of dark galaxies in the early Universe to date.
The astronomers were also able to determine some of the properties of the dark galaxies. They estimate that the mass of the gas in them is about 1 billion times that of the Sun, typical for gas-rich, low-mass galaxies in the early Universe. They were also able to estimate that the star formation efficiency is suppressed by a factor of more than 100 relative to typical star-forming galaxies found at a similar stage in cosmic history.
“Our observations with the VLT have provided evidence for the existence of compact and isolated dark clouds. With this study, we’ve made a crucial step towards revealing and understanding the obscure early stages of galaxy formation and how galaxies acquired their gas”, concludes Sebastiano Cantalupo.
The MUSE integral field spectrograph, which will be commissioned on the VLT in 2013, will be an extremely powerful tool for the study of these objects.
Sebastiano Cantalupo, Simon J. Lilly, & Martin G. Haehnelt (2012). "Detection of dark galaxies and circum-galactic filaments fluorescently illuminated by a quasar at z=2.4." Monthly Notices of the Royal Astronomical Society. arxiv.org/abs/1204.5753
Fluorescence is the emission of light by a substance illuminated by a light source. In most cases, the emitted light has longer wavelength than the source light. For instance, fluorescent lamps transform ultraviolet radiation — invisible to us — into optical light. Fluorescence appears naturally in some compounds, such as rocks or minerals but can be also added intentionally as in detergents that contain fluorescent chemicals to make white clothes appear brighter under normal light.
Quasars are very bright, distant galaxies that are believed to be powered by supermassive black holes at their centres. Their brightness makes them powerful beacons that can help to illuminate the surrounding area, probing the era when the first stars and galaxies were forming out of primordial gas.
This emission from hydrogen is known as Lyman-alpha radiation, and is produced when electrons in hydrogen atoms drop from the second-lowest to the lowest energy level. It is a type of ultraviolet light. Because the Universe is expanding, the wavelength of light from objects gets stretched as it passes through space. The further light has to travel, the more its wavelength is stretched. As red is the longest wavelength visible to our eyes, this process is literally a shift in wavelength towards the red end of the spectrum — hence the name ‘redshift’. The quasar HE 0109-3518 is located at a redshift of z = 2.4, and the ultraviolet light from the dark galaxies is shifted into the visible spectrum. A narrow-band filter was specially designed to isolate the specific wavelength of light that the fluorescent emission is redshifted to. The filter was centered at around 414.5 nanometres in order to capture Lyman-alpha emission redshifted by z=2.4 (this corresponds to a shade of violet) and has a bandpass of only 4 nanometres.
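The filter arithmetic in this note is straightforward to reproduce; taking the Lyman-alpha rest wavelength of 121.567 nm:

```python
# Reproduce the filter arithmetic: Lyman-alpha is emitted at 121.567 nm
# and observed at (1 + z) times its rest wavelength.
REST_LYA_NM = 121.567
z = 2.4

observed_nm = REST_LYA_NM * (1 + z)
print(f"observed wavelength: {observed_nm:.1f} nm")  # ~413.3 nm, a violet shade
# This falls within a 4 nm bandpass centered near 414.5 nm.
```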
The star formation efficiency is the mass of newly formed stars over the mass of gas available to form stars. They found these objects would need more than 100 billion years to convert their gas into stars. This result is in accordance with recent theoretical studies that have suggested that gas-rich low-mass haloes at high redshift may have very low star formation efficiency as a consequence of lower metal content.
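The "100 billion years" figure follows from a simple gas-depletion estimate. The reference star-formation rate below is an assumed round number for a normal galaxy, not a value from the paper:

```python
# Gas-depletion timescale implied by the quoted numbers: ~1e9 solar masses
# of gas with star formation suppressed ~100x takes longer than the age of
# the Universe to turn into stars. The reference SFR is an assumption.
m_gas_msun = 1e9        # gas mass, in solar masses (from the study)
reference_sfr = 1.0     # solar masses per year, a rough normal-galaxy rate
suppression = 100       # efficiency factor quoted in the text

depletion_years = m_gas_msun / (reference_sfr / suppression)
print(f"depletion time: {depletion_years:.0e} years")  # 1e+11 = 100 billion
```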
Herbert Hoover was born in 1874 in Iowa. He had croup as a young boy and was at one point thought to have died, but his uncle managed to resuscitate him. Both his parents died before he was 10, which left him and his siblings orphans. He moved to Oregon to live with his uncle and never attended high school, though he made up for that by taking night-school classes.
Herbert Hoover worked as an assistant in his uncle's real estate office and entered Stanford in 1891, where he was the first resident of the student dormitory; he also claimed to be the university's first student. He earned a degree in geology and worked in mining in Australia and China from 1897 to 1908. He didn't do the mining work himself; as a mining executive, he mostly chose where the company would mine and assigned roles. By 1908 he had become an independent mining consultant, and by 1914 he had made a fortune estimated at around 4 million dollars. He then turned to humanitarian and political efforts instead.
Hoover joined the Republicans and ran for president in 1920, which didn't work out, but he was appointed secretary of commerce by President Harding. He later became president and encouraged Native Americans to work toward self-sufficiency; he also had a vice president of Native American descent. Most of his moves were widely approved, but he is really only remembered for the Great Depression, which began during his presidency. In that time, many banks lent out more money than they could afford and did not keep enough cash on hand. The optimism of the Roaring '20s meant that this seemed like no problem for the banks, but it fed a negative chain reaction when that optimism ended.
At that time the stock market rose, and many investors made a lot of money in it, which led to more spending and even higher returns. But in 1929 the market reached a peak, then plummeted, reacting to nervousness among investors. Then came Black Tuesday, October 29, 1929, and the decade of speculative trading and overlending came to a halt. This collapse turned into the doubt and fear of the 1930s. Many people tried to pull what money they had out of banks, and because the banks did not hold that much cash, a series of bank runs followed. Some 3,000 banks across the country had to declare bankruptcy, and many people fell into extreme poverty. This was the only decade in which the US did not experience economic growth, and it had lasting effects on Americans of that generation. Unemployment stood at less than 3 million in 1929 and jumped to 12.5 million by 1932. Pay cuts were widespread, and businesses were no longer interested in taking on additional expenses or risks. Even so, at the worst of the Depression, about 75% of the workforce remained employed.
Vagrancy numbers shot up, and then came what was called the Dust Bowl: three severe droughts, together with massive dust storms, crippled the agricultural heart of the Midwest. Many farmers had to pack up and move, and hunger spread, which also contributed to the length and severity of the Great Depression. Hoover's legacy isn't considered great. The Hoover Dam did begin during his presidency, though it was called the Boulder Dam for many years. He insisted that the economy would recover on its own, but that stance meant Americans had to wait longer for the recovery.
The Great Depression was a horrible time to live in, and it provided a very big change compared to the Roaring 20’s. |
by Rich Coppenbarger
When I first started working at NASA Ames, I was working on accident investigation, specifically accidents due to weather conditions. Whenever there was an accident or incident due to turbulence or other wind-related weather phenomenon we would get the black box data recordings and try to figure out what happened to that airplane.
We often looked at data that involved turbulent conditions occurring at high altitude called Clear Air Turbulence (CAT), which is impossible to see or predict ahead of time. If the turbulence is severe enough the airplane may get thrown around or lose altitude very rapidly. You may have seen or read about this in the news, where airplanes may suddenly lose up to 10,000 feet of altitude. Because typical airliners are flying at 30,000 feet they can afford to lose the altitude, but the planes often experience very strong aerodynamic forces. Passengers who are not strapped in can be thrown about the airplane, and there have even been fatalities due to CAT. That is why some airlines are now insisting that passengers wear their seatbelts throughout the flight, not just during take-off and landing.
My main research interest while doing accident investigation was something called microburst wind shear. Low-altitude wind shear is caused by downward flowing air, usually as a result of thunderstorm activity. This type of weather phenomenon, often called a downburst, is caused by the same type of conditions that cause tornadoes and occur often in the Midwest. Although downbursts rarely cause damage to homes, they are very dangerous to aircraft that are landing.
Downbursts are dangerous because they often mislead pilots into making the wrong decisions. A lot of the intuitive things a pilot knows to do are wrong when they encounter a microburst. When first entering a downburst, the airplane experiences a head wind, which causes the pilot to naturally pull back on the engines. This head wind is followed by a sudden down draft, followed by a tail wind. This means that the pilot must now throttle forward on the engines, but the engines are already at a low power setting due to the pilot's first instinct to throttle back. Occasionally this leaves the aircraft without enough power to climb out of the downburst. A tragic example was in 1985, when an aircraft landing at Dallas-Fort Worth airport during a thunderstorm hit a downburst, causing it to lose altitude very rapidly and crash, killing people on the aircraft and on the ground. Since the 1970's there have been about 20 accidents due to wind shear downbursts.
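The headwind-to-tailwind reversal described above can be sketched with a crude one-dimensional shear model; the profile shape and magnitudes here are illustrative only, not a validated microburst model:

```python
# One-dimensional sketch of the along-track wind an aircraft meets crossing
# a microburst: headwind on entry, downdraft near the core, tailwind on
# exit. The linear shear shape and magnitudes are illustrative only.
def headwind_ms(x_km: float, radius_km: float = 2.0, peak_ms: float = 15.0) -> float:
    """Headwind component (m/s, positive = headwind) at distance x_km from the core."""
    if abs(x_km) >= radius_km:
        return 0.0
    return -peak_ms * (x_km / radius_km)  # positive before the core, negative after

for x in (-1.5, -0.75, 0.0, 0.75, 1.5):
    w = headwind_ms(x)
    label = "headwind" if w > 0 else "tailwind" if w < 0 else "calm core (downdraft)"
    print(f"x = {x:+5.2f} km: {w:+5.1f} m/s  {label}")
```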
What we were trying to do in our program was to understand wind shear phenomena a little more. From blackbox flight data, we were trying to understand what the wind profile of a downburst looked like so that we could develop flight procedures that would help a pilot identify and respond to a wind shear encounter. The goal was to get pilots to recognize this phenomenon and take the appropriate action before the aircraft gets into a dangerous situation. As a result of the enormous amount of research by NASA and other agencies, wind shear related accidents are now much less common than they used to be. |
Bird Feeding Adaptations: How Beaks are Adapted to What Birds Eat - YouTube
Flash Cards - Learning the seasons and weather in Te Reo Maori
A great way to teach Maori: start with Te Ra, the sun. Each child gets a ray with their name and a number. Great for getting them used to Maori numbers
40 Waiata (free) teaching basic sentence structure. Aimed towards preschool/Junior school but could be used further up.
Maori language resources Fruit : Hua rākau
Te Reo display of Maori fruit and veges
One day a Taniwha
Stitchlily: How to draw a Tiki Head!
Kite Paper Stars
Solar System Art--Here's a craft that's out of this world!
Fun in PreK-1: There's No Place Like Space: Outer Space Adventures. painting the sun
Room B6: Matariki...
Tirama Tirama Matariki - YouTube
New Zealand Maori Koru Art Lesson Plan: Multicultural Art and Craft Lessons for Kids: KinderArt ® |
Smallpox Questions and Answers: The Disease and the Vaccine
- "Smallpox Questions and Answers: The Disease and the Vaccine" is also availiable in PDF format.
Smallpox Questions and Answers: The Disease and Vaccine
What should I know about smallpox?
Smallpox is an acute, contagious, and sometimes fatal disease caused by an orthopoxvirus and marked by fever and a distinctive progressive skin rash. In 1980, the disease was declared eradicated following worldwide vaccination programs. However, in the aftermath of the events of September and October 2001, New York State, along with other states, and the U.S. government are taking precautions to be ready to deal with a bioterrorist attack using smallpox as a weapon. As a result of these efforts: 1) There is a detailed nationwide smallpox preparedness program to protect Americans against smallpox as a biological weapon. This program includes the creation of preparedness teams that are ready to respond to a smallpox attack on the United States. Members of these teams - health care and public health workers - are being vaccinated so that they might safely protect others in the event of a smallpox outbreak. 2) There is enough smallpox vaccine to vaccinate everyone who would need it in the event of an emergency.
How serious is the smallpox threat?
The deliberate release of smallpox as an epidemic disease is now regarded as a possibility, and the United States is taking precautions to deal with this possibility.
How dangerous is the smallpox threat?
Smallpox is classified as a Category A agent by the Centers for Disease Control and Prevention. Category A agents are those that pose the greatest potential threat for adverse public health impact and have a moderate to high potential for large-scale dissemination. The public is generally more aware of category A agents, and broad-based public health preparedness efforts are underway. Other Category A agents are anthrax, plague, botulism, tularemia, and viral hemorrhagic fevers.
If I am concerned about a smallpox attack, can I go to my doctor and get the smallpox vaccine?
At the moment, the smallpox vaccine is not available for members of the general public. In the event of a smallpox outbreak, however, there is enough smallpox vaccine to vaccinate everyone who would need it.
What are the symptoms of smallpox?
The symptoms of smallpox begin with high fever, head and body aches, and sometimes vomiting. A rash follows that spreads and progresses to raised bumps and pus-filled blisters that crust, scab, and fall off after about three weeks, leaving a pitted scar.
If someone comes in contact with smallpox, how long does it take to show symptoms?
After exposure, it takes between 7 and 17 days for symptoms of smallpox to appear (average incubation time is 12 to 14 days). During this time, the infected person feels fine and is not contagious.
Is smallpox fatal?
The majority of patients with smallpox recover, but death may occur in up to 30% of cases.
How is smallpox spread?
Smallpox normally spreads from contact with infected persons. Generally, direct and fairly prolonged face-to-face contact is required to spread smallpox from one person to another. Smallpox also can be spread through direct contact with infected bodily fluids or contaminated objects such as bedding or clothing. Indirect contact is not common. Rarely, smallpox has been spread by virus carried in the air in enclosed settings such as buildings, buses, and trains. Smallpox is not known to be transmitted by insects or animals.
How many people would have to get smallpox before it is considered an outbreak?
One suspected case of smallpox is considered a public health emergency.
Is smallpox contagious before a rash appears?
A person with smallpox is sometimes contagious with onset of fever (prodrome phase), but the person becomes most contagious with the onset of rash. Patients remain infectious until the last scab falls off.
Is there any treatment for smallpox?
Smallpox can be prevented through use of the smallpox vaccine, even if the vaccine is given within three days after exposure to smallpox. There is no proven treatment for smallpox, but research to evaluate new antiviral agents is ongoing. Preliminary results with the drug cidofovir suggest it may be useful. (The use of cidofovir to treat smallpox or smallpox vaccine reactions should be evaluated and monitored by experts at NIH and CDC.) Patients with smallpox can benefit from supportive therapy (e.g., intravenous fluids, medicine to control fever or pain) and antibiotics for any secondary bacterial infections that may occur.
What is the smallpox vaccine, and is it still required?
The smallpox vaccine is the only way to prevent smallpox. The vaccine is made from a virus called vaccinia, which is another pox-type virus related to smallpox. The vaccine helps the body develop immunity to smallpox. It was successfully used to eradicate smallpox from the human population.
Routine vaccination of the American public against smallpox stopped in 1972 after the disease was eradicated in the United States. Until recently, the U.S. government provided the smallpox vaccine only to a few hundred scientists and medical professionals who work with smallpox and similar viruses in a research setting. After the events of September and October 2001, however, we have taken extensive actions to improve our level of preparedness against terrorism. For smallpox, this included updating a response plan and ordering enough smallpox vaccine to immunize the American public in the event of a smallpox outbreak. The plans are in place, and there is sufficient vaccine available to immunize everyone who might need it in the event of an emergency.
Should I get vaccinated against smallpox?
The smallpox vaccine is not available to the general public at this time. If vaccination is considered advisable, you will be notified quickly.
How is the vaccine given?
The smallpox vaccine is not given with a hypodermic needle. It is not a shot, like many vaccinations. The vaccine is given using a bifurcated (two-pronged) needle that is dipped into the vaccine solution. When removed, the needle retains a droplet of the vaccine. The needle is then used to prick the skin 15 times in a few seconds. The pricking is not deep, but it will cause a sore spot and one or two drops of blood to form. The vaccine usually is given in the upper arm.
If the vaccination is successful, a red and itchy bump develops at the vaccination site in three or four days. In the first week after vaccination, the bump becomes a large blister, fills with pus, and begins to drain. During week two, the blister begins to dry up and a scab forms. The scab falls off in the third week, leaving a small scar. People who are being vaccinated for the first time have a stronger reaction than those who are being revaccinated.
If someone is exposed to smallpox, is it too late to get a vaccination?
Vaccination within 3 days of exposure will completely prevent or significantly modify smallpox in the vast majority of persons. Vaccination 4 to 7 days after exposure likely offers some protection from disease or may modify the severity of disease.
How long does a smallpox vaccination last?
Past experience indicates that the first dose of the vaccine offers protection from smallpox for 3 to 5 years, with decreasing immunity thereafter. If a person is vaccinated again later, immunity lasts longer.
Are diluted doses of smallpox vaccine as effective?
Recent tests have indicated that diluted smallpox vaccine is just as effective in providing immunity as full-strength vaccine.
What is the smallpox vaccine made of?
The vaccine is made from a virus called vaccinia, another pox-type virus related to smallpox. The smallpox vaccine helps the body develop immunity to smallpox.
Is it possible for people to get smallpox from the vaccination?
No. The smallpox vaccine does not contain smallpox virus and cannot spread or cause smallpox. However, the vaccine does contain another virus called vaccinia which is live in the vaccine. Because the virus is alive, it can spread to other parts of the body or to other people from the vaccine site. For that reason, the vaccine site must be carefully monitored.
Is it possible to get vaccinia, the virus in the vaccine, from someone who has recently been vaccinated?
Yes. Vaccinia is spread by touching a vaccination site before it has healed or by touching bandages or clothing that have become contaminated with live virus from the vaccination site. Vaccinia is not spread through airborne contagion. The vaccinia virus may cause rash, fever, and head and body aches.
What are the symptoms of vaccinia?
The vaccinia virus may cause rash, fever, and head and body aches.
How is vaccinia spread?
Vaccinia is spread by touching a vaccination site before it has healed or by touching bandages or clothing that have become contaminated with live virus from the vaccination site. Vaccinia is not spread through the air.
How safe is the smallpox vaccine?
The smallpox vaccine is the best protection you can get if you are exposed to the smallpox virus. Most people experience normal, usually mild reactions that include a sore arm, fever, and body aches. In recent tests, one in three people felt bad enough to miss work, school, or recreational activity or had trouble sleeping after receiving the vaccine.

However, the vaccine does have some more serious risks. In the past, about 1,000 people for every 1 million people vaccinated experienced reactions that, while not life-threatening, were serious. These reactions include a vigorous (toxic or allergic) reaction at the site of the vaccination and spread of the vaccinia virus (the live virus in the smallpox vaccine) to other parts of the body and to other people. These reactions may require medical attention. Rarely, people have had very bad reactions to the vaccine. In the past, between 14 and 52 people per 1 million vaccinated experienced potentially life-threatening reactions, including eczema vaccinatum, progressive vaccinia (or vaccinia necrosum), or postvaccinal encephalitis. Based on past experience, it is estimated that between 1 and 2 people out of every 1 million people vaccinated will die as a result of life-threatening reactions to the vaccine.

Careful screening of potential vaccine recipients is essential to ensure that those at increased risk do not receive the vaccine. People most likely to have side effects are people who have, or even once had, skin conditions (especially eczema or atopic dermatitis) and people with weakened immune systems, such as those who have received a transplant, are HIV positive, or are receiving treatment for cancer. Anyone who falls within these categories, or lives with someone who falls into one of these categories, should NOT get the smallpox vaccine unless they are exposed to the disease. Pregnant women should not get the vaccine because of the risk it poses to the fetus. Anyone who is allergic to the vaccine or any of its components should not get the vaccine, and anyone under the age of 18 should not get the vaccine unless they are exposed to smallpox.
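The per-million rates quoted above translate directly into expected case counts for a campaign of a given size; the sketch below uses a hypothetical campaign of one million vaccinees:

```python
# Expected adverse-event counts implied by the historical per-million rates
# quoted above. The campaign size is hypothetical.
campaign_size = 1_000_000  # number of vaccinees (assumed)

rates_per_million = {
    "serious, not life-threatening": 1000,
    "potentially life-threatening (low end)": 14,
    "potentially life-threatening (high end)": 52,
    "deaths (estimated upper bound)": 2,
}

for outcome, rate in rates_per_million.items():
    expected = rate * campaign_size / 1_000_000
    print(f"{outcome}: ~{expected:.0f}")
```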
Who should NOT get the vaccine?
People who should not get the vaccine include anyone who is allergic to the vaccine or any of its components (polymyxin B, streptomycin, chlortetracycline, neomycin); pregnant women; women who are breastfeeding; people who have, or have had, skin conditions (especially eczema and atopic dermatitis); and people with weakened immune systems, such as those who have received a transplant, are HIV positive, are receiving treatment for cancer, are taking medications (like steroids) that suppress the immune system, or have heart conditions. Also, individuals younger than 12 months of age should not get the vaccine. Additionally, the Advisory Committee on Immunization Practices (ACIP) advises against non-emergency use of smallpox vaccine in children younger than 18 years of age, and the vaccine manufacturer's package insert states that the vaccine is not recommended for use in geriatric populations in non-emergency situations. The term geriatric generally applies to those people age 65 and above. These people should not receive the vaccine unless they have been exposed to smallpox. Also, people who are using steroid drops in their eyes should wait until they are no longer using the medication to get the vaccine.
Should I get the vaccine if I have heart problems?
Careful monitoring of smallpox vaccinations given over recent months has suggested that the vaccine may have caused side effects on the heart. There have been reports of heart pain (angina), heart inflammation (myocarditis), inflammation of the membrane covering the heart (pericarditis), and/or a combination of these two problems (myopericarditis). Experts are exploring this more in depth. As a precaution, if you have been diagnosed by a doctor as having a heart condition, with or without symptoms, you should NOT get the smallpox vaccine at this time. Such conditions include known coronary disease and/or three or more of the following risk factors:
- You have been told by a doctor that you have high blood pressure.
- You have been told by a doctor that you have high blood cholesterol.
- You have been told by a doctor that you have diabetes or high blood sugar.
- You have a close relative (mother, father, brother, or sister) who had a heart condition before the age of 50.
- You smoke cigarettes now.
Is there any way to treat bad reactions to the vaccine?
Vaccinia Immune Globulin (VIG) can help people who have certain serious reactions to smallpox vaccine. A second drug, cidofovir, may be used in some situations. Neither drug is currently licensed for this purpose (both are administered under investigational new drug (IND) protocols), and they may have side effects of their own.
Is a child under the age of 1 year in the household a contraindication to vaccination?
Vaccinated parents of young children need to be careful not to inadvertently spread the virus to their children. They should follow site care instructions that are essential to minimizing the risk of contact transmission of vaccinia. These precautions include covering the vaccination site, wearing a sleeved shirt, and careful hand washing anytime after touching the vaccination site or anything that might be contaminated with virus from the vaccination site. If these precautions are followed, the risk for children is very low. Individuals who do not believe that they can adhere to such instructions should err on the side of caution and not be vaccinated at this time.
Are there any eye conditions that would preclude vaccination?
The concern is that someone who has received the smallpox vaccine may spread the vaccinia virus to their eyes (inadvertent inoculation of the eye) by touching the vaccination site, or something contaminated with live virus from it, and then touching their eyes before washing their hands. This side effect is a serious one because it can lead to damaged vision, or even blindness. People who wear contact lenses, or who touch their eyes frequently throughout the day, can get the smallpox vaccine, but they must be especially careful to follow instructions for care of the smallpox vaccination site. Frequent and thorough hand washing will minimize the chance of contact spread of the vaccinia virus. As an additional precaution to minimize the risk of this type of transmission in selected groups of people, the Advisory Committee on Immunization Practices (ACIP) decided that anyone with eye diseases or other conditions (e.g., recent LASIK surgery) that require the use of corticosteroid drops in the eye should wait until they no longer require such treatment before getting vaccinated. |
Scratch, a graphical programming language developed at MIT, introduces students to fundamental programming concepts like variables, loops, and conditional statements. In this course young students enter the world of computer science by learning how to create animations, computer games, and interactive projects. As they teach a mischievous cat to dance, explore a maze, or play mini-games, students learn how to use math and computer coding to think creatively. No previous programming skills required! Students should be comfortable using a computer and browser, as well as managing files.
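For a flavor of what beginners build, here is a rough text rendering of a typical first script (a hypothetical example; in Scratch itself these lines are snapped together as graphical blocks rather than typed) that makes the cat dance across the stage:

```
when green flag clicked
forever
    move 10 steps
    if on edge, bounce
    next costume
    wait 0.2 seconds
```

Each line corresponds to a standard Scratch block: the forever loop repeats the dance, "if on edge, bounce" keeps the cat on stage, and "next costume" flips between the cat's poses so it appears to strut. |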
At the doctor’s office yesterday I witnessed my preschooler have a conversation with an older child. They greeted each other, offered their names, and discussed their interests.
At three and four years of age, children begin to participate in collaborative conversations with others. They may not follow all of the rules for having a conversation (e.g., they may interrupt or talk off topic), but these beginning conversations are an important first step in their social-emotional development and will form the structure for building friendships and relationships with peers and adults.
Before transitioning to kindergarten, preschoolers should have lots of opportunities to participate in collaborative conversing with people within and outside of their immediate family. Collaborative conversing is talking that includes:
- Showing interest in others
- Listening to others
- Taking turns when discussing a topic
These collaborative conversing skills are important for budding relationships, as well as for literacy activities within the kindergarten classroom (think discussing a book during small group reading). Since my preschooler has hyperlexia, he doesn’t always understand the basics of conversation without direct and explicit guidance on how people speak. One way we discuss the basics of conversation is by using puppets!
Puppets often pop up in preschool classrooms as a tool for teachers to model appropriate social behavior. For example, a teacher may bring out a well-loved puppet during circle time or when children are arguing over a toy and use the puppet to demonstrate desired behaviors. Puppets are fun, and children are naturally drawn toward them!
Puppets can be used as a means of expression for teachers in a preschool classroom, but puppets are a great tool for children, as well. When children are given free rein of the puppets, and with coaching from parents and teachers, the puppets can say and do things that children may not be able, or feel comfortable, doing themselves. Plus, they lead to some great pretend play scenarios!
You do not need to write a formal script for your child to experience the benefits of puppetry, nor do you need to cover the cost (most libraries have awesome puppets!). You don’t even need a puppet stage; puppets can be used anywhere. Ours like to fly around our house (for some reason our puppets always have magic powers).
To begin coaching your child in collaborative conversing using puppets, have your child begin by simply experimenting with the puppets. What do these puppets like to do and say? What do their voices sound like? Let your child take the lead here. It is important that your child become comfortable (and is having fun) being the puppeteer.
After a few minutes of free play, have your child pick a single puppet for him/herself and one for you (and one for your toddler, too, if you have one tagging along as we do!). Then work through these three conversational elements:
- Greetings. Anytime we play with puppets, we practice properly greeting each other. Hello. Hi there! How are you today? Good, how are you? I’m doing great, thank you! My preschooler often ignores people when they greet him, so we reinforce the importance of simply recognizing other people and saying hello. This also includes sharing your name with people you’ve never met. Thinking up names for puppets is too much fun!
- Showing interest. Showing interest in what others are saying and what others like to do is often tricky for preschoolers. They are just transitioning out of egocentric development and are beginning to notice the thoughts and feelings of others. Showing interest can begin with such phrases as, What are you up to? I’m coloring with crayons. What do you like to do? What do you like to play? I like to play tag. Have your puppet start this back and forth language. Then encourage your child to ask the questions. Whisper some of the phrases in your child’s ear, so they have some guidance when getting started.
- Ask to play. This seems simple enough, but some children will need encouragement on how to ask others to play with them. Have your puppets inquire: Would you like to play with me? Would you like to race with me? (Enter specific things your child likes to do here.) What would you like to play? Again, start by talking with your puppet, and then whisper for your child to try these phrases. Answer with excitement and praise to encourage your child to continue their inquiry.
Collaborative conversing is a pivotal social skill that your preschooler will need to thrive in kindergarten and in their future school years. Puppets make honing this skill fun!
Don’t forget to use fun voices! Can you do an accent? If not, talk really high or really low; the sillier your puppet’s voice, the better. Keep the whole experience silly, and don’t forget to follow your child’s lead. Your preschooler will be a pro at collaborative conversations in no time.
Already a pro? Extend your puppet play to include expressing and sharing emotions and having a discussion on more than one topic. Feel free to comment on how you use puppets with your preschoolers, or email me with specific questions regarding puppet play! |
According to a new report from the California Department of Public Health, there may be increased health risks from exposure to radiofrequency energy from cell phones.
What is RF energy?
- Cell phones work by sending and receiving signals to and from cell phone towers
- These signals are a form of electromagnetic radiation called radiofrequency (RF) energy
- When a phone sends signals to a tower, the RF energy goes from the phone’s antenna out in all directions, including into the head and body of the person using the phone
- Cell phones also emit RF energy when using wifi and Bluetooth, but at lower levels
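As a rough illustration of why distance matters (this is a simplified far-field approximation, not a figure from the report): radiated power spreads out over a sphere, so its intensity falls off with the square of the distance from the antenna,

$$S \approx \frac{P}{4 \pi d^2}$$

where $P$ is the transmitted power and $d$ is the distance. Under this simplification, holding a phone 30 cm from your head instead of 3 cm cuts the intensity by a factor of $(30/3)^2 = 100$. Very close to the antenna the physics is more complicated, but the general rule that more distance means less exposure is why much of the advice below comes down to keeping the phone away from your body.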
Why should parents be concerned about exposure to radiofrequency energy?
Some laboratory experiments and human health studies have suggested the possibility that long-term, high use of cell phones may be linked to certain types of cancer and other health effects, including:
- Brain cancer and tumors of the acoustic nerve (needed for hearing and maintaining balance) and salivary glands
- Lower sperm counts
- Headaches and effects on learning and memory, hearing, behavior, and sleep
Concerns about radiofrequency energy from cell phones in the news
There’s not a lot of research on the effects of cell-phone use on children’s and teens’ health, the [California Department of Public Health] report acknowledges, but some studies have suggested that it may be associated with hearing loss, ringing in the ears, headaches and decreased well-being. –TIME
The risks may be greater for kids because their brains and bodies are smaller and still growing (RF could have a greater effect on cells that are changing and multiplying). Moreover, the earlier someone starts using a cell phone, the longer he or she may be exposed to RF. –Forbes
How can you reduce your exposure to RF energy?
- Keep your phone away from your body
- When you talk on your cell phone, avoid holding it to your head—use the speakerphone or a headset instead
- Consider sending text messages instead of talking on the phone
- If you are streaming or if you are downloading or sending large files, try to keep the phone away from your head and body
- Carry your cell phone in a backpack, briefcase, or purse; NOT in a pocket, bra or belt holster
- Reduce or avoid using your cell phone when it is sending out high levels of RF energy, such as:
- When the cell signal is weak
- When in a fast-moving car, bus, or train
- When streaming audio or video, or downloading or sending large files
- Don’t sleep with your phone in your bed or near your head. Keep your phone at least an arm’s length away
- Take off your headset when you’re not on a call
- Don’t rely on a “radiation shield” or other products claiming to block RF energy, electromagnetic fields, or radiation from cell phones |
Children should be told not to ignore bullying - it won't go away on its own and may get worse. Children should tell someone they trust, such as a teacher, parent or friend. Remember, it is never the victim's fault. No one deserves to be bullied.
Ask children to keep a record of incidents, saving any nasty texts or emails. They should try not to retaliate; they might get hurt.
If your child is having a problem try to increase their circle of friends by regularly inviting others home. Make sure your child joins groups outside school which none of their schoolmates attend to help build confidence and friendships.
Tell the head of year what is happening, ask what strategy they will follow and how it will be monitored. Ask for your complaint to be answered in writing and for a copy to be put onto your child's school file with a note of the action taken. If you are not satisfied, telephone the LEA education welfare officer (sometimes called an education social worker) to ask them to intervene with the school to get the bullying stopped.
Schools should make every effort to pick up a problem early, acting before the issue spreads and becomes entrenched. Ensure all teachers and inspectors are trained so they do not 'collude' with bullying by turning a blind eye - there are risks for children in telling someone and adults should handle this information with care.
Teachers should not rely solely on the victim to identify who is bullying them before intervening. Other ways to encourage people to talk include buddy groups and anonymous bully boxes. |
For undergraduate and graduate courses in Curriculum Development and/or Curriculum Planning. Defining curriculum broadly, as "what is taught in schools," this practical text arranges content around two major themes: 1) curriculum processes involve decision making by people who are guided by their beliefs and values about what students should learn; and, 2) curricular change occurs only after individuals have made internal transitions. Unlike its competition, this text painstakingly bridges curriculum theory to practice, exploring ways to develop curriculum, implement a curriculum plan, and assess a school's curriculum by applying chapter content to sample curriculum projects. Through accessible, jargon-free language and student-friendly pedagogy, the author shows both how practice informs theory and how use of theory helps educators engage in curriculum tasks appropriately.
"synopsis" may belong to another edition of this title.
Sowell combines curriculum theory and practice in a comprehensive, integrative introduction. Coverage spans all major curriculum processes--development, classroom use, evaluation, etc.--emphasizing the importance of a clearly defined purpose of education as a first step in curriculum development or revision, and as a necessity for classroom use and evaluation.
From the Back Cover:
Written with a broad-based approach to curriculum, this book includes processes of curriculum development, use, and evaluation. It offers clear descriptions of curriculum development processes, provides a hands-on approach to needs assessment usable in any district, shows how to implement a curriculum in school classrooms, and provides readable, down-to-earth information about curriculum evaluation. For educators and school administrators, including principals, governing board members, and curriculum specialists. |
"About this title" may belong to another edition of this title.
To make learning chords a little easier, you might want to try using a piano chord chart. This handy reference tool gives you immediate, visual access to some of the most commonly played chords. And although chords can be quite complicated for the beginning pianist, this article will describe their fundamentals.
A chord is a combination of three or more notes played together, and a piano chord chart displays the keys that should be played in order to achieve a particular harmony or chord. The note that begins a chord is called the root, and in order to use a piano chord chart effectively, you’ll need to start with a root key. Common roots are the “C,” “F,” or “G” keys. So selecting a root is a simple matter of deciding which key or note will start the chord. The subsequent keys or notes that follow contribute to the chord and build a harmonious sound.
So how does one use a piano chord chart? Well, if you wanted to start a major chord with the “C” key, you would need to play the “C, E, G” keys simultaneously. That’s how chords are built on a piano. On a chord chart, however, you would select your root key (in our case, “C”) and then select the name of the chord that you want to play (in this case, “major”). The chart would then highlight the “C, E, G” keys of a model piano keyboard to indicate that they should be played together.
Pretty easy, right? Let’s try another one.
If you wanted to start a minor chord with the “C” key, you would need to play the “C, Eb, G” keys simultaneously. On a piano chord chart, you would select your root key (the “C” key) and then select the name of the chord that you want to play (in this case, “minor”). The chart would then highlight the ” C, Eb, G ” keys to indicate that they should be played together.
There are 12 different root keys that you can experiment with and about 600 chords that you can learn to play by using a chord chart. As you experiment and practice, you’ll discover some interesting patterns. For example, the C Major chord skips a single white key between each note. The C Minor chord, however, skips three white keys between the first and last note, but plays the second black key in-between! The F Major and G Major chords, for instance, follow the same pattern as the C Major chord.
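To make those interval patterns concrete, here is a minimal sketch in Python (the helper names are my own invention, not part of any chord-chart tool) that builds major and minor chords by counting semitones up from the root: a major chord is the root plus the notes 4 and 7 semitones above it, and a minor chord is the root plus the notes 3 and 7 semitones above it.

```python
# Minimal sketch: build piano chords by counting semitones from a root.
# Note names use sharps only for simplicity; a real chord chart would
# pick the correct enharmonic spelling (e.g., Eb rather than D#).

CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets from the root for each chord quality.
CHORD_INTERVALS = {
    "major": [0, 4, 7],  # root, major third, perfect fifth
    "minor": [0, 3, 7],  # root, minor third, perfect fifth
}

def build_chord(root, quality):
    """Return the note names of the chord built on `root`."""
    start = CHROMATIC.index(root)
    return [CHROMATIC[(start + step) % 12] for step in CHORD_INTERVALS[quality]]

print(build_chord("C", "major"))  # ['C', 'E', 'G']
print(build_chord("C", "minor"))  # ['C', 'D#', 'G']  (D# is the same key as Eb)
```

Running it for all 12 roots and both qualities reproduces 24 of the roughly 600 chords a full chart covers; other qualities (sevenths, diminished, and so on) are just different interval lists.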
As complicated as chords can get, you’ll really benefit from using this visual reference. Once you get started with one, you’ll be ready to tackle some of the more complicated pieces of music that you’ve always wanted to play. There is simply no reason to frustrate yourself any longer, because even the most basic piano chord chart removes the most prevalent obstacle to playing harmonious music: figuring out where the fingers go! |
What Is MS?
Multiple sclerosis (MS) is an autoimmune disease in which the body's immune system attacks its own central nervous system (the brain and spinal cord). In MS, the immune system attacks and damages or destroys the myelin, a substance that surrounds and insulates the nerves. The myelin destruction causes a distortion or interruption in nerve impulses traveling to and from the brain. This results in a wide variety of symptoms.
Who Can Get Multiple Sclerosis?
Multiple sclerosis is estimated to affect 2.3 million people worldwide. Most people are diagnosed between the ages of 20 and 50, though it can also occur in young children and the elderly.
MS in Women
Multiple sclerosis is three times more common in women than in men. In addition, nearly all women afflicted with MS get the condition before menopause. This could mean that hormones play an important role in the disease’s development.
MS in Men
Usually, MS in men is more severe than it is in women. They typically get MS in their 30s and 40s, just as their testosterone levels begin to decline.
Although MS is more common in women than men overall, one form of the disease contradicts this pattern. People with primary progressive (PP) MS are about as likely to be male as female. (The four main types of MS are described later).
Multiple Sclerosis and Smoking
People who smoke are more likely to develop MS, and to develop it more severely and with a faster progression.
MS is more prevalent among Caucasians than other ethnicities. MS is believed to have a genetic component as people with a first-degree relative with the disease have a higher incidence than the general population.
Multiple Sclerosis Causes
The exact cause of multiple sclerosis is unknown, but it is believed to be some combination of immunologic, environmental, infectious, or genetic factors. Researchers are examining the possible role of viruses in the cause of MS, but this is still unproven.
Finding the MS Cause: Many Approaches
A range of scientific disciplines are being employed to find the cause of MS. Immunologists, epidemiologists and geneticists are all working to narrow in on the cause of multiple sclerosis.
One unusual finding that has emerged is that MS occurs more frequently the farther people live from the equator. This suggests a possible connection between the condition and vitamin D deficiency.
How MS Attacks the Body
Multiple sclerosis (MS) is an autoimmune disorder where the immune system mistakenly perceives its own myelin (the sheath around the nerves) as an intruder and attacks it, as it would a virus or other foreign infectious agent. To understand how this harms the body, it helps to understand how nerves work.
A nerve can be seen with the naked eye, but it is made up of hundreds or even thousands of microscopic nerve fibers wrapped by connective tissue. Nerves conduct messages to and from the brain by way of electrical impulses.
Often the nerve fibers that make up a nerve are all individually wrapped in myelin, a protective sheath that causes electric impulses to conduct down the nerve much faster than fibers that lack myelin. (The same principle is used to improve electric wires by covering them with a plastic outer layer.)
How Does MS Destroy Myelin?
In multiple sclerosis, the immune system’s T cells attack the myelin sheath. By attacking myelin, the immune system in a person with MS causes inflammation and degeneration of the myelin that can lead to demyelination, or stripping of the myelin covering of the nerves. It can also cause scarring (the “sclerosis” in the name “multiple sclerosis”). This causes electrical impulses to travel more slowly along the nerves, resulting in deterioration of function in body processes such as vision, speech, walking, writing, and memory.
Is Multiple Sclerosis Inherited?
While multiple sclerosis is not hereditary, genetics are believed to play a role. In the U.S., the chances of developing MS are one in 750. Having a first-degree relative (parent, sibling) with MS increases the risk to up to 5%. An identical twin of someone with MS has a 25% chance of being diagnosed with the disorder. Genes may make a person more susceptible to developing the disease, but it is believed that an additional outside trigger is needed for the disease to appear, which is why MS is not considered hereditary.
Types of MS
There are four different types of multiple sclerosis that have been identified and each type can have symptoms ranging from mild to severe. The different types of MS can help predict the course of the disease and the patient's response to treatment. The four types of MS are discussed on the next four slides.
Relapsing-Remitting (RR) MS
Relapsing-remitting multiple sclerosis (RR-MS) is the most common type of MS, affecting about 85% of MS sufferers. RR-MS is defined by inflammatory attacks on the myelin and nerve fibers causing a worsening of neurologic function. Symptoms vary from patient to patient, and symptoms can flare up (called relapses or exacerbations) unexpectedly, and then disappear (remission).
Common Symptoms of RR MS
- Vision problems
- Muscle spasms or stiffness
- Bowel and bladder function problems
- Cognitive difficulties
Primary-Progressive (PP) MS
Primary-progressive multiple sclerosis (PP-MS) is characterized by steady worsening of neurologic functioning, without any relapses or remissions. There may be occasional plateaus, but overall the progression of the disability is continuous. This form of MS occurs equally in men and women, and the age of onset is about 10 years later than in relapsing-remitting MS.
Secondary-Progressive (SP) MS
Secondary-progressive multiple sclerosis (SP-MS) is a form of MS that follows relapsing-remitting MS. The majority of people diagnosed with RR-MS will eventually transition to having SP-MS. After a period of relapses (also called attacks, or exacerbations) and remissions the disease will start to progress steadily. People with SP-MS may or may not experience remissions.
Progressive-Relapsing (PR) MS
Progressive-relapsing multiple sclerosis (PR-MS) is the least common form of MS, occurring in about 5% of MS patients. People with PR-MS experience steady disease progression and worsening neurological function as seen in primary-progressive multiple sclerosis (PP-MS), along with occasional relapses like people with relapsing-remitting multiple sclerosis (RR-MS).
Symptoms of multiple sclerosis may be single or multiple and may range from mild to severe in intensity and from short to long in duration.
List of MS Symptoms
- Numbness or tingling
- Dizziness or vertigo
- Sexual dysfunction
- Emotional instability
- Difficulty walking
- Muscle spasms
- Vision problems
- Bladder or bowel problems
- Cognitive changes
Multiple Sclerosis Diagnosis
Multiple sclerosis is often difficult to diagnose as symptoms are so varied and can resemble other diseases. It is often diagnosed by a process of exclusion – that is, by ruling out other neurological diseases – so the diagnosis of MS may take months to years. A physician will do a complete history and neurological exam, along with tests to evaluate mental, emotional and language functions, strength, coordination, balance, reflexes, gait, and vision.
Tests to Confirm a Multiple Sclerosis Diagnosis
- Electrophysiological test
- Cerebrospinal fluid exam (spinal tap, lumbar puncture)
- Evoked potential (EP) tests
- Blood tests
Multiple Sclerosis Diagnosis and MRIs
One of the main ways to diagnose multiple sclerosis is an MRI (magnetic resonance imaging) scan. Characteristic areas of demyelination will show up as lesions on an MRI scan. On the left is a brain MRI scan of a 35-year-old man with relapsing-remitting multiple sclerosis that reveals multiple lesions with high T2 signal intensity and one large white matter lesion. The right image shows the cervical spinal cord of a 27-year-old woman, with multiple sclerosis demyelination and plaque indicated by an arrow.
Multiple Sclerosis Treatment
There are several aspects to treating multiple sclerosis.
- Modifying the disease – there are several drugs that can reduce the severity and frequency of relapses
- Treating exacerbations (or attacks) with high dose corticosteroids
- Managing symptoms
- Rehabilitation both for fitness and to manage energy levels
- Emotional support
Multiple Sclerosis Drug Treatment
Treatment for multiple sclerosis may include drugs to manage attacks, symptoms, or both. Many medications carry the risk of some side effects so patients need to manage their treatment with their doctors.
Corticosteroids for MS
Corticosteroids are drugs that reduce inflammation in the body and affect the function of the immune system. They are often used to manage MS attacks, but can have numerous side effects.
Side Effects of Short-Term Corticosteroid Use
- Fluid retention
- Potassium loss
- Stomach distress
- Weight gain
- Changes in emotions
Side Effects of Long-Term Corticosteroid Use
- Adrenal insufficiency
- Peptic ulcer
- High blood pressure (hypertension)
- Menstrual irregularities
- Skin atrophy
- Elevated blood sugar
- Abnormal appearance of the face (Cushingoid face)
- Increased risk of infection
Multiple Sclerosis Drug Treatment: Medications
There are currently 10 medications approved for disease modification:
Interferons for relapsing MS
- Interferon beta-1b (Betaseron and Extavia)
- Interferon beta-1a (Rebif)
- Interferon beta-1a (Avonex)
Other medications approved for relapsing MS
- Glatiramer acetate (Copaxone)
- Natalizumab (Tysabri)
- Mitoxantrone (Novantrone)
- Fingolimod (Gilenya)
- Teriflunomide (Aubagio)
- Dimethyl fumarate (Tecfidera)
Treating Emotional and Physical MS Symptoms
Many medications are used to treat and manage symptoms associated with multiple sclerosis. Here are some common multiple sclerosis symptoms, followed by the medical treatments often used to treat them.
Difficulty (Slowness) Walking
- Baclofen (Lioresal)
- Tizanidine (Zanaflex)
- Diazepam (Valium)
- Clonazepam (Klonopin)
- Dantrolene (Dantrium)
- Methylprednisolone (Solu-Medrol): Solu-Medrol is given intravenously during the acute attack, sometimes followed up with an oral corticosteroid.
- Amantadine (Symmetrel)
- Modafinil (Provigil)
Treating Physical MS Symptoms (Continued)
Continued from the last slide, here are some common multiple sclerosis symptoms, followed by the medical treatments often used to treat them.
- Anti-convulsants: Anti-convulsants like carbamazepine (Tegretol) or gabapentin (Neurontin) are used for face or limb pain.
- Anti-depressants: Anti-depressants or electrical stimulation are used for pricking pain, intense tingling, and burning.
- Antibiotics: Antibiotics are used to manage infections
- Vitamin C: Vitamin C and cranberry juice are used to prevent infections
- Oxybutynin (Ditropan): Used for bladder dysfunction
- Constipation: This is usually treated by adding fluids and fiber to the diet
- Sildenafil (Viagra)
- Tadalafil (Cialis)
- Vardenafil (Levitra)
- Vaginal gels
- Tremors: Often resistant to treatment. Sometimes drugs or surgery are used if tremors are severe
Current Research Into MS
There has been a lot of progress over the years in managing multiple sclerosis, and research is ongoing into new therapies. There are several new avenues of research, including techniques to allow brain cells to generate new myelin or prevent the death of nerves. Other research involves the use of stem cells that might be implanted into the brain or spinal cord to regrow the cells that have been destroyed by the disease. Some therapies being investigated include methods that would improve nerve impulse signals. In addition, the effects of diet and the environment on multiple sclerosis are being investigated.
Fast Facts About MS
- Multiple sclerosis (MS) is an autoimmune disease that progressively damages the nerves of the brain and spinal cord.
- Any sensory or motor (muscular) function in the body may be affected by the nerves damaged from MS.
- The cause of multiple sclerosis is unknown, but it is believed to be a combination of genetic, immunological, infectious, and/or environmental factors.
- There are four different types of multiple sclerosis and symptoms range from mild to severe. The different types of MS can help predict the course of the disease and, to some degree, the patient's response to treatment. |