An analysis of northern ecosystems shows that the effects on plant growth of rising night-time temperatures are opposite to those of increasing daytime temperatures — a finding that has implications for carbon-cycle models. See Letter p.88
An under-appreciated aspect of climate change is the fact that Earth is warming at a higher rate at night than during the day: over the past 50 years, daily minimum temperatures have increased about 40% faster than daily maximum temperatures1. This asymmetric warming may have important biological consequences, particularly for fundamental ecosystem metabolic processes that are strongly sensitive to temperature variations, such as photosynthesis and respiration. On page 88 of this issue, Peng et al.2 document regionally significant, and in many cases opposing, effects of year-to-year (interannual) variations in daytime and night-time temperatures on plant growth and carbon cycling in land regions of the Northern Hemisphere.

Photosynthesis is driven by light and thus happens only during the day, whereas plant and microbial respiration occurs continuously. Therefore, faster night-time warming presumably affects respiration more than it affects photosynthesis, and this could have far-reaching implications for how ecosystems react to expected increases in warming in coming decades. But remarkably little research has been done on how asymmetric warming influences ecological function, especially at large scales.

To address this issue, Peng and colleagues have analysed satellite-derived data sets of plant greenness, which is a proxy for plant growth. The authors found that ecosystems in cool, wet temperate and boreal regions such as northwestern North America and Japan, and those in cold regions such as Siberia and the Tibetan plateau, seem to have benefited most from daytime temperature increases over the period considered (1982–2009). By contrast, ecosystems in dry temperate regions, such as central Eurasia and western China, showed the opposite effect: increasing daytime temperatures correlated with decreasing plant greenness. These contrasting responses broadly agree with expectations for ecosystems in which plant growth is limited primarily by temperature (cool, wet climates) or moisture (warm, dry climates).

More intriguingly, Peng and colleagues found that ecosystems in many of the boreal and wet temperate regions grew less well in response to increases in night-time minimum temperatures — the opposite effect to their response to increasing daytime maximum temperatures (Fig. 1). Conversely, in many arid and semi-arid regions, such as the grasslands of China and North America, increasing night-time minimum temperatures correlated positively with plant greenness.

Peng et al. used a statistical approach to control for other contributing environmental variables, such as solar radiation and precipitation. This allowed them to isolate the interannual greenness responses to daytime maximum and night-time minimum temperature variations. The authors confirmed the statistical validity of their findings using other techniques, and also analysed the sensitivity of the greenness response to alternative interpolated climate data sets and at individual weather-station locations. Importantly, the different analyses all confirmed the same broad conclusions.
A strength of this study is that the researchers explored ecosystem responses to asymmetric warming using a variety of other large-scale data sets, and found similar patterns. One data set was for the net exchange of carbon between land and the atmosphere — a quantity that integrates photosynthesis and respiration, and which was inferred from a multi-year analysis3. Peng and co-workers found that this quantity correlated positively with daytime temperature variations for cool and wet boreal ecosystems, but negatively with night-time temperatures for these ecosystems. They also observed that the amplitudes of the seasonal cycles of carbon dioxide levels measured at Point Barrow, Alaska, and Mauna Loa, Hawaii, vary in the same way with daytime and night-time temperature variations in boreal regions, but not in temperate areas.

Peng et al. focused only on boreal and temperate ecosystems. The response to asymmetric warming of tropical and subtropical ecosystems, which account for most CO2 exchange between the land and the atmosphere, is not clear and merits further investigation. Previous work4 at a well-studied tropical forest revealed a negative correlation between tree growth and annual mean daily minimum temperatures, a response broadly similar to Peng and colleagues’ findings for boreal forests.
Tropical forests are thought to be vulnerable to warming5, with some evidence6 suggesting that they are already near high-temperature thresholds above which growth could be restricted. Future research could help to fill major gaps in our understanding of thermal tolerance and acclimation in tropical and subtropical plant species, and thus their response to warming5,7.

So what are the physiological mechanisms that drive large-scale correlations between temperature variations and ecosystem metabolism? The commonly discussed mechanisms involve biochemical responses to temperature, but with some interesting twists. For example, the positive correlation found between night-time minimum temperatures and greenness in semi-arid grasslands is puzzling, but might be related to greater night-time plant respiration that stimulates increased daytime photosynthesis8. Increases in night-time respiration have also been invoked in a pioneering study9 of nocturnal warming that documented different plant responses in grassland: the dominant grass species declined in response to increases in night-time temperature during spring, whereas other plant species that use a different photosynthetic pathway increased in number.

A research agenda to investigate these mechanisms further should include manipulative field and mesocosm experiments (in which small parts of a natural ecosystem are enclosed and warmed). Experimental warming studies are lacking for many ecosystems. Even fewer night-time warming experiments have been conducted so far, with most being in shrublands10 or grasslands and croplands8; warming experiments that truly impose asymmetry between day and night warming are rare11. There is a particularly urgent need for warming studies in forests, which dominate the global carbon cycle and climate feedbacks. However, there are substantial technological challenges to conducting such experiments in large-statured ecosystems. Forest mesocosm experiments would require exceedingly complex and expensive facilities. Despite these limitations, Peng and colleagues’ results argue strongly for an increased focus on the differing ecological impacts of night-time and daytime temperatures, to improve our ability to understand and predict how warming will affect Earth’s ecosystems. ■
San José State University & Tornado Alley

The Nucleons of Nuclei are, Where Possible, Organized into Alpha Particles: The Proton Data
An alpha particle, composed of two neutrons and two protons, is an amazing structure. It is relatively compact and has an extraordinary level of binding energy compared with smaller nuclides such as a deuteron or triton. Binding energy is like, and perhaps is identical with, potential energy. Energetically it would be difficult for nucleons in a nucleus not to come together and form alpha particles wherever possible. This suggests that the binding energy of a larger nucleus is composed of that due to the formation of alpha particles and that due to the arrangement of the alpha particles and the extra nucleons. This latter binding energy will be called the excess binding energy. It is computed for a nuclide by subtracting from its binding energy the number of alpha particles it could contain times 28.29567 million electron volts (MeV), the binding energy of an alpha particle. A plot of this excess binding energy for the nuclides which could contain exactly an integral number of alpha particles, shown below, reveals a shell structure.
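The excess binding energy just defined is a simple computation. The sketch below is an illustration of that arithmetic, not part of the original analysis; the function name and the carbon-12 example are mine, using the standard total binding energy of carbon-12 (about 92.16 MeV).

```python
# Excess binding energy: total binding energy minus the binding energy
# accounted for by the alpha particles the nuclide could contain.

ALPHA_BE = 28.29567  # binding energy of one alpha particle, in MeV (from the text)

def excess_binding_energy(total_be_mev, protons, neutrons):
    """Subtract ALPHA_BE times the maximum number of alpha particles
    (each requiring 2 protons and 2 neutrons) from the total binding energy."""
    n_alphas = min(protons // 2, neutrons // 2)
    return total_be_mev - n_alphas * ALPHA_BE

# Carbon-12 (6 protons, 6 neutrons, total BE about 92.16 MeV) could
# contain 3 alpha particles, leaving about 7.27 MeV of excess binding energy.
print(excess_binding_energy(92.16, 6, 6))
```

An alpha particle itself, by this definition, has zero excess binding energy.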
It might appear that the graph above indicates the existence of only three shells: 1 to 2, 3 to 14 and 15 to 25. The upper limits of those ranges correspond to filled shells. Fourteen alpha particles means there are 28 neutrons and 28 protons; twenty-five alpha particles correspond to 50 neutrons and 50 protons. Fifty and 28 are nuclear magic numbers.
The alpha nuclides only go up to 25 alpha particles. The range can be extended by including extra neutrons. The analysis for extra neutrons has been carried out in The Neutron Data. This material covers the case of extra protons. When extra protons are included the range does not even reach 25 alpha particles, but the results are of interest anyway.
The incremental excess binding energy of a nuclide with a alpha particles is the excess binding energy of that nuclide less the excess binding energy of the nuclide with (a-1) alpha particles. An inspection of the graph of the incremental excess binding energies of the alpha nuclides, shown below, reveals that the 3 to 14 shell is composed of subshells. The end points of those subshells occur at numbers of neutrons and protons that correspond to the nuclear magic numbers.
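As a sketch of the bookkeeping (with made-up numbers, not the actual nuclide data), the incremental excess binding energy is just the sequence of first differences over the alpha-particle count, and the second differences discussed later come from differencing once more:

```python
def incremental(values):
    """First differences: values[a] - values[a-1] for successive alpha counts."""
    return [b - a for a, b in zip(values, values[1:])]

# Hypothetical excess binding energies indexed by alpha-particle count.
excess = [0.0, 7.3, 14.4, 21.0, 24.1]

ixsbe = incremental(excess)   # incremental excess binding energies
second = incremental(ixsbe)   # second differences; the theory expects these
                              # to be negative within a shell
print(ixsbe)
print(second)
```

A sharp drop in the first-difference sequence (here, the last entry) is the signature of a shell boundary.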
The numbers of alpha particles where there is a sharp drop (3, 7, 10 and 14) correspond to 6, 14, 20 and 28 neutrons, all magic numbers. At the points of sharp drops in the incremental excess binding energy (IXSBE) the numbers of protons are 7, 15, 21 and 29, none of which are magic numbers. This indicates some dominance of the neutron numbers.
The graphs for the case of the two, four and six extra protons are shown below.
The sharp drops at 3, 7 and 14 alpha particles and corresponding to 6, 14 and 28 neutrons are maintained for the two and four extra protons cases.
According to the theory developed previously the increments in the incremental excess binding energies of alpha particles (the second differences in excess binding energy) should be negative, reflecting the net repulsion of alpha particles for each other, and constant within a shell. The graphs of the data for the cases considered above are shown below.
For this case there is a spike associated with the transition between shells; the other values are generally near zero, with some above and some below.
Except where the spikes occur for transitions from one shell to another the values are generally negative and roughly constant.
As in the previous cases, the alpha-plus-four-protons and alpha-plus-six-protons nuclides give negative interaction energies for the alpha particles in the same shell.
According to the theory, the cross difference of excess binding energy is equal to the interactive binding energy of the last alpha particle with the last extra proton. According to another development, the alpha particle has a nucleonic (strong force) charge that is a fraction of that of the proton. This means that protons should be repelled by alpha particles and thus the interactive binding energy should be negative when the increments are computed from the binding energies of nuclides in the same shell. The following graphs give the results of that computation.
In each case, apart from the spikes associated with a change in shell, the data points are predominantly negative. According to the conventional theory both protons and neutrons should be strongly attracted to alpha particles. The results here and those of a previous study indicate instead that alpha particles and neutrons are attracted to each other but alpha particles and protons are repelled by each other.
The incremental excess binding energy of alpha particles for various numbers of extra protons displays sharp drops at particular numbers of alpha particles. These drops occur where the corresponding number of neutrons reaches a level at which a neutron shell is filled and additional neutrons must go into a higher shell.
The prediction of the previous theoretical analysis, that the increments in the incremental excess binding energies of alpha particles should be negative and roughly constant within a shell, is confirmed.
HOME PAGE OF Thayer Watkins
Stories are the transference of ideas from one person to another. Children’s stories usually have a teaching element to them that benefits the children in a number of ways. One of the biggest educational benefits of children’s stories doubles as an emotional benefit as well. Stories can teach children different healthy coping mechanisms by taking adult concepts such as divorce, abuse, or death and presenting them in a way that neutralizes their fear factor.
In much the same way, stories for children can be used to help children deal with irrational fears, which are often figments of the child’s imagination. A commonly addressed theme of children’s stories that highlights this point is the infamous Monster Under the Bed theme. In these children’s stories the authors usually demonstrate to their readers how to confront fears head on—a life lesson that is easily transferable. Meanwhile, the illustrators present the “scary things” in a way that is more approachable. Once the fear factor of a situation is neutralized to a child, he is able to better process the situation and expedite his recovery.
There is a domino effect that exists when a child is read to from a young age. When an adult reads to a child consistently, that child becomes more interested in reading books and stories on his own, improving his literacy. Not only does reading to children help them to become better readers, it also helps them to become better storytellers. One reason for this phenomenon is the fact that reading helps expand vocabulary.
A child with an enhanced vocabulary can usually express his ideas in a clearer and more concise way than a child with a limited vocabulary. When they are relaying a story, children with extensive vocabularies have the ability to make the story come alive by painting a picture with their words. A second reason why children who read frequently make better storytellers is that reading develops imagination, and imaginative children almost always have the ability to relay the most boring situations in exciting ways.
Another very important theme that children’s stories tend to relay is the idea of morality. Many writers have taken the simple, straightforward concepts presented in Aesop’s Fables, tweaked them slightly, and re-presented them in a form appropriate for children. In this way, the stories are used to showcase principles like integrity and wisdom in a way that’s easy for children to digest.
Adopted as a norm at the United Nations World Summit in 2005, the Responsibility to Protect, known as R2P, refers to the obligation of states towards their own populations, and towards all populations at risk of genocide and other large-scale atrocities.
The R2P commitment is outlined in three pillars:
- Pillar 1: Sovereign states have an obligation and carry the primary responsibility to protect their citizens from mass atrocities
- Pillar 2: The international community has the responsibility to assist states in building the capacity to fulfill this responsibility and to prevent mass atrocities before, during and after conflict
- Pillar 3: If the state in question fails to act appropriately, the responsibility to do so, in a timely and decisive manner, through diplomatic, humanitarian and other peaceful means and, as a last resort, through stronger measures, falls to the larger community of states
R2P works on the premise of three additional elements:
- The Responsibility to Prevent: The obligation to prevent mass atrocities, develop early warning systems and address the root causes of conflict
- The Responsibility to React: A commitment to respond with appropriate measures in the face of mass atrocities
- The Responsibility to Rebuild: The obligation of the international community, post-intervention, to rebuild and to prevent the recurrence of mass violence.
Under Title VII, the legal definition of equal opportunity programs in the United States covers people of different races, colors, national origins, sexes, and religions. (6.1) Employees and customers of the programs should be provided equal treatment.
The U.S. Equal Employment Opportunity Commission (EEOC) provides more information about types of discrimination in the workplace and best practices for preventing it, grouped into twelve types: Age, Disability, Equal Pay/Compensation, Genetic Information, Harassment, National Origin, Pregnancy, Race/Color, Religion, Retaliation, Sex-Based Discrimination, and Sexual Harassment. (6.1) More information and links to each topic are included later in this section. Personal boundaries and what is considered harassment or discrimination in the legal sense will be discussed first.
Sexual orientation and gender identity are protected by Title VII under the category of Sex-Based Discrimination, (6.2), (6.24). This topic will also be discussed in more detail in section 11: What is Gender Discrimination?.
There's a line drawn in the sand of social behavior that isn't supposed to be crossed. But where is that line?
People raised in different cultures or with different types of parents may have very different expectations regarding social interactions. Behavior that is considered normal in one setting might be viewed as very disrespectful in another setting or by a different group of people. Healthy guidelines for behavior help protect what is valued without being overly defensive.
The meaning of words can also vary between individuals and cultures. Dictionary definitions help set boundaries of meaning that can be referred to by any reader familiar with the language.
The U.S. Equal Employment Opportunity Commission (EEOC) provides guidance regarding the legal definitions of harassment and sexual harassment on their website, (6.1), and suggestions regarding what it might look like in the day-to-day business world:
Excerpt on Harassment:
“Harassment is unwelcome conduct that is based on race, color, religion, sex (including pregnancy), national origin, age (40 or older), disability or genetic information. Harassment becomes unlawful where 1) enduring the offensive conduct becomes a condition of continued employment, or 2) the conduct is severe or pervasive enough to create a work environment that a reasonable person would consider intimidating, hostile, or abusive.
Anti-discrimination laws also prohibit harassment against individuals in retaliation for filing a discrimination charge, testifying, or participating in any way in an investigation, proceeding, or lawsuit under these laws; or opposing employment practices that they reasonably believe discriminate against individuals, in violation of these laws.
Petty slights, annoyances, and isolated incidents (unless extremely serious) will not rise to the level of illegality. To be unlawful, the conduct must create a work environment that would be intimidating, hostile, or offensive to reasonable people.
Offensive conduct may include, but is not limited to, offensive jokes, slurs, epithets or name calling, physical assaults or threats, intimidation, ridicule or mockery, insults or put-downs, offensive objects or pictures, and interference with work performance.” EEOC/Harassment (6.5)
The U.S. Equal Employment Opportunity Commission (EEOC) provides guidance regarding the legal definition of sexual harassment on their website, (6.1), and suggestions regarding what it might look like in the day-to-day business world:
Excerpt on Sexual Harassment:
“It is unlawful to harass a person (an applicant or employee) because of that person’s sex.
Harassment can include “sexual harassment” or unwelcome sexual advances, requests for sexual favors, and other verbal or physical harassment of a sexual nature. Harassment does not have to be of a sexual nature, however, and can include offensive remarks about a person’s sex. For example, it is illegal to harass a woman by making offensive comments about women in general.
Both victim and the harasser can be either a woman or a man, and the victim and harasser can be the same sex.
Although the law doesn’t prohibit simple teasing, offhand comments, or isolated incidents that are not very serious, harassment is illegal when it is so frequent or severe that it creates a hostile or offensive work environment or when it results in an adverse employment decision (such as the victim being fired or demoted).
The harasser can be the victim's supervisor, a supervisor in another area, a co-worker, or someone who is not an employee of the employer, such as a client or customer.” EEOC/Sexual Harassment (6.6)
See the U.S. Equal Employment Opportunity Commission (EEOC) website for further information regarding the legal protection against Harassment and Sexual Harassment and for the rest of the twelve categories with legal protection against discrimination: Age, Disability, Equal Pay Compensation, Genetic Information, Harassment, National Origin, Pregnancy, Race/Color, Religion, Retaliation, Sex, and Sexual Harassment. (6.1)
An old childhood rhyme suggests that sticks and stones might hurt but that being called names doesn’t. Common sense supports part of the idea: being hit clearly hurts physically. However, research has found that emotional harm can also cause long-lasting physical changes in the brain.
Emotional stress has been found to cause changes in the brain similar to changes seen after a history of serious physical or sexual abuse. (6.16) Bullying during childhood has been shown to cause permanent changes in the brain that can affect the child into adulthood. Depression and reduced memory abilities into adulthood may result from the physical changes found in children who grew up with ongoing emotional or verbal abuse. (6.17)
Defining jokes as potentially being a form of physical abuse might help encourage more compassionate workers to tone down their jokes so that they stay within the EEOC description of “petty slights” or “simple teasing” rather than reach the level of “frequent and severe” harassment “that creates a hostile or offensive work environment.” (6.5, 6.6)
When to report a bully and how to do so safely are important topics to discuss with employees and managers before problems occur. Developing company policies and educating staff regarding the guidance can also help prevent problems or help events go more smoothly and safely if problems do occur.
Before returning to the topic of spotting bullies, the discussion will travel through time and around the world to consider why people, individually or in groups, have a tendency to harass or discriminate against others. For the convenience of those in a troubling situation right now, a few links first:
"The purpose of fear is to raise your awareness, not to stop your progress." - Steve Maraboli
If you feel afraid in a work or private situation, it is likely time to seek advice from human resources or other experienced individuals.
When is it time to report a bully, especially if it's your boss? And how?
Document first. Get witnesses if possible.
Our early childhood experiences can affect our later trust in others or even in the products we buy. How secure we felt with early caregivers can leave us more or less trusting as adults.
A friendly work environment can help prevent stress and anger in workers, and anger can be a precursor for violence. A hostile environment can also affect the trust of customers for the business.
Staff or customers who become overly hungry and tired may be more prone to anger or violence, especially if alcohol is available. Water and food protect against dehydration and irritability.
Harassment and discrimination are similar but different. Harassment may involve teasing or bullying while discrimination involves unfair employment practices such as hiring or pay inequality based on gender.
We can't change human nature, however the better we understand ourselves as individuals and groups, the better we can develop policies that enhance our strengths and work around our weaknesses.
Links and Reference footnotes for Chapter 6: Equal Opportunity Service.
What is colorblindness, exactly? How does it happen?
Dr. Greene’s Answer:
The dazzling experience of color begins when light strikes a canvas of tightly-packed nerve cells in the back of the eye. These rods and cones, as they are commonly called, fire a storm of nerve impulses in response to the light, which then travel down the optic nerve to the visual centers of the brain. The rods are the “black-and-white” receptors; they photograph the ever-changing patterns of light and darkness that are before our eyes. The cones are responsible for the wonder of color vision. So what is color blindness?
We humans are all born colorblind! The cones don’t begin functioning until a baby is about 4 months old. At that time the baby undergoes a gradual transformation that is as remarkable as the scene in the Wizard of Oz when Dorothy leaves the black-and-white world of Kansas for the brilliant colors of Oz. About one out of 40,000 babies never develops cones, seeing only in black-and-white throughout life. This is called achromatopsia, or rod-monochromatic colorblindness.
There are many other versions of colorblindness, but by far the most common is red-green colorblindness, which affects as many as one out of 25 people. These people either do not have red cones (protanopia) or green cones (deuteranopia). They are unable to distinguish between green and red, but with their remaining two types of cones are able to see all of the other colors. The absence of blue cones is extremely rare.
Colorblindness is usually tested for at children’s four-year physicals. The doctor asks them to identify a red and a green line on the eye chart. If any question remains, more precise visual testing can determine the exact nature of the problem.
Colorblindness is almost always a hereditary condition. Red-green colorblindness is a recessive condition passed on the X chromosome. Only one healthy color vision gene is necessary to provide color vision. Since boys have only one X chromosome, it is much easier for them to be colorblind. If their mothers are carriers (having one normal X chromosome and one colorblind X chromosome), the sons have a 50% chance of having the condition. Red-green colorblindness occurs in about 8 per cent of American males. These men cannot pass the condition on to their sons (since they give their sons a Y, not an X, chromosome), but they will pass the gene to their daughters.
All girls whose fathers are colorblind will at least carry the gene for colorblindness. In order for a girl to actually be red-green colorblind, she must have a mother who is a carrier AND a father who is colorblind. This happens in only about 0.64 percent of American girls (although these numbers vary considerably in other population groups).
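The 0.64 percent figure follows from the X-linked pattern just described. A quick sketch of the arithmetic, assuming the roughly 8 percent male rate quoted above and random mating (the variable names are mine):

```python
# X-linked recessive model: a male is colorblind if his single X chromosome
# carries the allele; a female must carry it on both X chromosomes.

q = 0.08               # allele frequency, taken equal to the male rate above

male_rate = q          # one X chromosome, so the male rate equals q
female_rate = q ** 2   # both X chromosomes must carry the allele

print(f"males: {male_rate:.2%}, females: {female_rate:.2%}")
```

Squaring 8 percent gives 0.64 percent, matching the figure quoted in the text.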
To this point, we have only subdivided each beat in two or four equal parts. However, it is also possible to divide a beat in three equal parts, with the use of triplets. Triplets are notated by writing the number 3 above the group of notes that will form the triplet. Note how, as in the second example, we can join two of the eighth notes that are part of the triplet, forming a quarter note inside the triplet:
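The durations involved can be checked with exact fractions. This snippet is only an illustration of the arithmetic, not part of the lesson's notation:

```python
from fractions import Fraction

beat = Fraction(1)
triplet_eighth = beat / 3      # each note of an eighth-note triplet
joined = 2 * triplet_eighth    # two joined notes form the quarter note
                               # inside the triplet, worth 2/3 of the beat

assert 3 * triplet_eighth == beat   # the three equal parts fill one beat
print(triplet_eighth, joined)
```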
Translated by Dan Román, revised by Sue Talley.
Greenhouse gases are gases that absorb energy emitted from the earth and radiate it back into the atmosphere. If there are too many greenhouse gases, the earth could become too warm. If greenhouse gases dramatically decrease, the earth may be too cool for human activities, such as farming, planting, and harvesting, to occur.
Why do I care? A certain amount of greenhouse gases is essential to life on earth. However, human activities are affecting the levels of these gases in the atmosphere, which are in turn affecting the climate we have adapted to.
Greenhouse gases are the gases that absorb long-wave energy and emit it back into our atmosphere. They are responsible for keeping the earth warm enough to live on. Most of these gases are present in the atmosphere naturally. However, anthropogenic (manmade) processes are dramatically increasing the concentration of these gases. This is one of the main reasons we think our earth is experiencing warming and climate changes.
Concentrations of greenhouse gases are commonly given as percentages as well as mixing ratios of gas molecules to total air molecules, such as ppt, ppb, and ppm. The percentages are the fraction of the atmosphere made up of these gases. Since these percentages are very small, concentrations are usually given in parts per trillion (ppt), parts per billion (ppb) or parts per million (ppm): the number given is how many molecules of the gas are present for every trillion, billion or million air molecules. For example, as of 2009 the atmosphere contained a CO2 concentration of about 385 ppm; for every million molecules of air, about 385 are carbon dioxide.
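Converting a mixing ratio to a plain fraction is simple division. The sketch below uses the 385 ppm figure from the text; the function name is mine:

```python
def ppm_to_fraction(ppm):
    """Convert parts per million to a plain fraction of air molecules."""
    return ppm / 1e6

co2_ppm = 385  # atmospheric CO2 concentration as of 2009, per the text
fraction = ppm_to_fraction(co2_ppm)

print(f"{fraction:.6f}")   # fraction of all air molecules that are CO2
print(f"{fraction:.4%}")   # the same value expressed as a percentage
```

The same division by 1e9 or 1e12 converts ppb and ppt, respectively.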
In the linked pages, you should also keep in mind that a fossil fuel is any hydrogen and carbon rich substance that was created by the decomposition of prehistoric plants and animals that can be burned to produce heat or energy. This includes coal, petroleum, and natural gas products.
Greenhouse gases are contributing to global warming and climate change by causing increased temperatures. Warmer temperatures may lead to increased frequency and severity of heat-related illnesses and reduced water and air quality, which in turn have a variety of adverse effects on human health. These include increased risk for cancer, foodborne and waterborne illnesses, cardiovascular and respiratory diseases, vectorborne and zoonotic diseases, mental health and stress-related illnesses, and human developmental effects.1
Image from EPA
1Portier CJ, et al. 2010. A human health perspective on climate change: a report outlining the research needs on the human health effects of climate change. Research Triangle Park, NC: Environmental Health Perspectives/National Institute of Environmental Health Sciences. doi:10.1289/ehp.1002272 <www.niehs.nih.gov/climatereport> Accessed November 17, 2012.
Description: This activity will assist students in understanding the role of vehicles on greenhouse gas levels. Students will use their vehicle or their parent's vehicle to calculate the emissions levels for nitrogen oxides and hydrocarbons and compare the values to a hybrid vehicle. This activity is focused more toward an AP Environmental Science class.
Description: This activity focuses on the increase in greenhouse gas emissions associated with human activity. Students will observe changes in greenhouse gases through graphs and will also calculate their contribution to greenhouse gas levels. This activity can be used in an AP Environmental Science class or an advanced Earth Science class.
Description: This activity will assist students in understanding the importance of forests to the carbon dioxide level and the amount of carbon that the trees are able to store. Useful mainly for an AP Environmental Science class.
Political and social reformer Lucretia Coffin Mott was born on this day in 1793 in Nantucket, Massachusetts, to a Quaker family. Inspired by a father who encouraged his daughters to be useful and a mother who was active in business affairs, Mott spent a lifetime working as an advocate for the downtrodden while rearing six children and participating in many of the reform movements of the day, including those that called for the abolition of slavery, for alcoholic temperance, and for pacifism.
Mott’s commitment to women’s equality was strengthened by her experience as a student and teacher at a boarding school adjacent to the Nine Partners Quaker meeting house in New York’s Dutchess County. While there, she noted that “the charge for the education of girls was the same as that for boys, and that when they became teachers, women received but half as much as men for their services … ”
The boarding school was also where she met James Mott, her future husband and a fellow teacher. They were married in 1811 in Philadelphia, where her parents had moved two years before. She and her husband became ardent abolitionists. In 1821, her recognized abilities as a speaker resulted in her becoming a Quaker minister. She joined the more radical Hicksite branch of Quakers when the Society of Friends split at the end of the decade.
By the 1830s, Mott was traveling widely, preaching against war, intemperance and slavery. She founded the Philadelphia Female Anti-Slavery Society in 1833. It was during that group’s 1838 convention that anti-abolitionist riots led to the burning down of Pennsylvania Hall. After passage of the 1850 Fugitive Slave Law, the Mott home became a station on the “underground railroad.”
Mott met Elizabeth Cady Stanton at the 1840 World Anti-Slavery Conference in London. Although sent as official delegates to the convention, six American women, including Mott and Stanton, were denied the right to participate because of their gender.
In 1848, Mott, Stanton and three other women launched the woman’s rights movement in the United States by calling for the Seneca Falls Convention, which met over two days in July in upstate New York. The “declaration of sentiments” signed there by Stanton, Mott, and others called for the extension of basic civil rights to women, including the right to vote and the right to hold property.
In 1869, Mott joined the National Woman Suffrage Association. In 1876, the group’s leaders renewed their call for gender equality, asserting that politicians who supported taxing women without granting them representation, and who favored denying women trials by juries of their own peers, should be impeached.
American women did not win the right to vote until 1920, some 40 years after Mott’s death. Still, she lived long enough to see most states grant women the right to hold property independently of their husbands.
SOURCE: U.S. LIBRARY OF CONGRESS
Once your students have tentative research paper topics, they need to
decide where to search for information on their topics. Many turn to
Google and Wikipedia first.
See Expected Learning Outcomes and Methods for help with this problem.
2. Expected Learning Outcomes:
After completing an exercise, watching a video, or working through the Road to
Research tutorial on how information flows from an event through various media,
all students will identify where to start searching for information on a topic.
A. Online information literacy tutorial: Includes pretests, lessons, interactive
exercises, quizzes and videos. This section will help students determine which
types of research tools are useful for which purposes, and identify
useful free and licensed databases for research.
"Road to Research: Starting Points: Research Tools"
B. Questions for learners to ask about databases:
1. What topics do they cover?
2. What sorts of materials do they index or provide?
3. What time period do they cover?
C. Flow of Information exercise*:
- Pose a fairly current, well-known topic to the group--e.g., the election of Barack Obama as 44th U.S. President.
- Instructor: "Where did you first hear that Barack Obama had been elected President?"
- Responses may include the web, radio, TV, friends.
- Write those responses on a board or flip chart, in a column on the left
- Instructor: "Where did you hear more about it later?"
- Write those responses under the first responses, in approximately timeframe order--e.g., newspapers, then magazines.
- Add "journals" under "magazines," and then add "books," followed by "reference books"
- Instructor: "What we're doing is illustrating a timeline from event through media reporting on it."
- Instructor: "Tell me an approximate timeframe for each of these types of media, from occurrence to reporting on it."
- Write those times in a second column, to the right of the first column.
(See "Road to Research: Starting Points: Research Materials" for details.)
- Instructor: "Now, let's say you want to revisit this event at a later time
and do a research paper on it. Where would you search for each of these
types of items?"
- Write responses in a third column on the right. (See "Road to Research: Starting Points: Research Materials"
for suggested research tools.) You may wish to add Google Scholar, but
warn students that Google does not fully identify the scope of this
research tool. In addition, articles they find through Google Scholar
may only be free if the UCLA Library subscribes to the journals in
which those articles appear.
- Instructor: "It is important to keep this flow of information in mind, as
it can save you lots of time. If you want to research a fairly current
topic, you should probably start toward the top of the columns, or
broaden your topic to include context and historical background. For
established topics, sometimes an encyclopedia overview can be a good
starting point, though you should compare the information you find in
any encyclopedia with other reliable sources."
*Adapted from Sharon Hogan's original 1980 "Flow of Information" conceptual approach to library instruction.
D. Dissecting a Database exercise:
"Dissecting a Database" is a hands-on, learner-discovery exercise for identifying useful databases
and distinguishing among their various features and scope (UCLA).
"Road to Research: Starting Points: Research Tools"
Note: While still available, "Road to Research" has not been maintained by the UCLA Library since Summer 2011 and may be out of date. It is used here as an example.
Here are some words that you may hear. Understanding these words can help you understand your condition.
Adjunctive therapy:
A medication taken along with another medication to treat the same problem.
AMPA receptor:
AMPA receptors help receive signals on nerve cells. When they receive too many signals, a seizure can happen.
Antiepileptic drug (AED):
A medication used to treat different types of seizures. Also may be called an anticonvulsant.
Aura:
A warning you may feel before a seizure. It is a strange feeling or sense that lets you know a seizure is about to happen. This is different for each person.
Complex partial seizure:
A seizure that starts in one part of the brain. Your awareness is affected.
Efficacy:
A measure of how well a medication treats a condition or its symptoms.
Epilepsy:
A group of related disorders defined by having seizures.
Focal seizure:
Another term that means “partial-onset seizure.”
Generalized seizure:
A seizure that starts in more than one part of your brain. Also called an idiopathic generalized seizure (IGE).
Grand mal seizure:
Another term that means "primary generalized tonic-clonic seizure (PGTC)."
Idiopathic generalized epilepsy:
A type of epilepsy that can cause many different types of seizures, including primary generalized tonic-clonic seizures.
Neuron:
A nerve cell. The brain has billions of neurons. They send signals to each other.
Partial-onset seizure:
A seizure that starts in one part of the brain. Also known as a focal seizure.
Primary generalized tonic-clonic seizure:
This type of seizure starts in more than one place in the brain at the same time. During the seizure, muscles become stiff and then make jerking movements. Also known as a grand mal seizure.
Seizure:
A change in signals in the brain. It affects how you feel, move, act, or think for a brief period of time.
Side effect:
A negative effect from a medication or therapy.
Simple partial seizure:
A seizure that starts in one part of the brain. Your awareness is not affected.
Tonic-clonic seizure:
A seizure that causes muscles to become stiff and then make jerking movements.
Triggers:
Things that can cause a seizure to happen. Two examples are flashing lights and stress.
Uncontrolled seizures:
When you continue to have seizures despite receiving treatment.
Sugar plantations in Hawaii
Sugarcane was introduced to Hawaii by its first inhabitants and was observed by Captain Cook upon his arrival in the islands in 1778. Sugar quickly turned into a big business and generated rapid population growth in the islands, with 337,000 people immigrating over the span of a century. The sugar grown and processed in Hawaii was shipped primarily to the United States and, in smaller quantities, globally.
Industrial sugar production started slowly in Hawaii. The first sugar mill was created on the island of Lanaʻi in 1802 by an unidentified Chinese man who returned to China in 1803. The first sugarcane plantation, known as the Old Sugar Mill of Koloa, was established in 1835 by Ladd & Co. and in 1836 the first 8,000 pounds (3,600 kg) of sugar and molasses was shipped to the United States.
By the 1840s, sugarcane plantations gained a foothold in Hawaiian agriculture. Steamships provided rapid and reliable transportation to the islands, and demand increased during the California Gold Rush. The land division law of 1848 (known as the Great Mahele) displaced Hawaiian people from their land, forming the basis for the sugarcane plantation economy. In 1850, the law was amended to allow foreign residents to buy and lease land. When California became a state in 1850, a newly imposed import tariff cut profits and the number of plantations fell to five. Market demand increased even further at the onset of the American Civil War, which prevented Southern sugar from being shipped northward. The price of sugar rose 525%, from 4 cents per pound in 1861 to 25 cents in 1864. The Reciprocity Treaty of 1875 allowed Hawaii to sell sugar to the United States without paying duties or taxes, greatly increasing plantation profits, and effectively guaranteed that all available resources, including land, water, labor, capital, and technology, would be devoted to sugarcane cultivation. The 1890 McKinley Tariff Act, an effort by the United States government to undercut the competitive pricing of Hawaiian sugar, paid mainland producers 2 cents per pound. After significant lobbying efforts, this act was repealed in 1894. By 1890, 75% of all privately held land was owned by foreign businessmen. The plantation owners wanted the United States to annex Hawaii so that Hawaiian sugar would never again be subject to tariffs, and so that the United States could maintain a military base on the islands (Pearl Harbor).
Sugar and the Big Five
The industry was tightly controlled by descendants of missionary families and other Caucasian businessmen, concentrated in corporations known in Hawaii as "The Big Five". These included Castle & Cooke, Alexander & Baldwin, C. Brewer & Co., H. Hackfeld & Co. (later renamed American Factors, now Amfac) and Theo H. Davies & Co., which together eventually gained control over other aspects of the Hawaiian economy, including banking, warehousing, shipping, and importing. This control of commodity distribution kept Hawaiians burdened under high prices and a diminished quality of life. These businessmen had perfected the double-edged sword of a wage-earning labor force dependent upon plantation goods and services. Close ties to the Hawaiian monarchy as missionaries, along with capital investments, cheap land, cheap labor, and increased global trade, allowed them to prosper. Alexander & Baldwin acquired additional sugar lands and also operated a sailing fleet between Hawai`i and the mainland; the shipping concern became American-Hawaiian Line, and later Matson. Later, the sons and grandsons of the early missionaries played central roles in the overthrow of the Kingdom of Hawaii in 1893, creating a short-lived republic. In 1898, the Republic of Hawaii was annexed by the United States and became the Territory of Hawaii, aided by the lobbying of the sugar interests.
When Hawaiian plantations began to produce on a large scale, it became obvious that a labor force would need to be imported. The Hawaiian population was one-sixth of its pre-1778 size, ravaged by diseases brought by foreigners. Additionally, Hawaiian people saw little use for working on the plantations when they could easily subsist by farming and fishing. Plantation owners quickly began importing workers, which dramatically changed Hawaii’s demographics and stands as an extreme example of globalization.
In 1850, the first imported workers arrived from China. Between 1852 and 1887, almost 50,000 Chinese arrived to work in Hawaii, 38% of whom eventually returned to China. Although help was needed to work the fields, feeding, housing, and caring for the new employees created new problems for many of the planters, since the Chinese immigrants did not live off the land like Native Hawaiians, who required little support. To maintain a workforce unable to organize effectively against them, plantation managers diversified the ethnicities of their workforce, and in 1868 the first Japanese workers arrived on the plantations. Between 1885 and 1924, 200,000 Japanese people arrived, with 55% returning to Japan. Between 1903 and 1910, 7,300 Koreans arrived, and only 16% returned to Korea. The first Filipino workers arrived in 1906; between 1909 and 1930, 112,800 Filipinos came to Hawaii, with 36% returning to the Philippines.
Plantation owners worked hard to keep in place a hierarchical caste system that prevented worker organization and divided the camps based on ethnic identity. An interesting outcome of this multi-cultural workforce and globalization of plantation workers was the emergence of a common language. Known as Hawaiian Pidgin, this hybrid primarily of Hawaiian, English, Japanese, Chinese, and Portuguese allowed plantation workers to communicate effectively with one another and promoted a transfer of knowledge and traditions among the groups. A comparison of 1959–2005 racial categories shows the ongoing shifts.
A unique operation was the Kohala Sugar Company, known as "The Missionary Plantation" since it was founded by Reverend Elias Bond in 1862 to support his church and schools. He protested the slave-like conditions, and the profits made him one of the largest benefactors to other missions. It operated for 110 years.
Sugar plantations dramatically impacted the environment around them. In an 1821 account, prior to the entrenchment of sugarcane plantations in Aiea, the area is described as belonging to many different people and being filled with taro and banana plantations along with a fish pond. This subsistence farming would not last long.
Plantations were strategically located throughout the Hawaiian Islands for reasons including: fertile soil area, level topography, sufficient water for irrigation, and a mild climate with little annual variation. These plantations transformed the land primarily to suit water needs: construction of tunnels to divert water from the mountains to the plantations, reservoir construction, and well digging.
Water was always a serious concern for plantation managers and owners. In the early 20th century, it took one ton of water to produce one pound of refined sugar. This inefficient use of water, combined with the relative scarcity of fresh water in the island environment, sharply compounded environmental degradation. Sugar processing also places significant demands on other resources, including irrigation, coal, iron, wood, steam, and railroads for transportation.
Early mills were extremely inefficient, producing molasses in four hours using an entire cord of wood to do so. This level of wood use caused dramatic deforestation. At times, ecosystems were entirely destroyed unnecessarily. One plantation drained a riparian area of 600 acres (2.4 km2) to produce cane. Ironically, after draining the land and forever altering the biodiversity levels, they discovered it was an ancient forest, so they harvested the trees for timber, only then to find that the land was completely unsuitable for sugarcane production.
Sugar plantations were not only environmentally destructive in the past, they continue to be so. Major environmental concerns associated with sugarcane plantations include air and water pollution along with the proper disposal of the resulting waste. Modern calculations place the amount of water needed to produce one ton of cane at 3-10 cubic meters.
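The historical and modern water figures quoted above can be put on a common footing with a quick back-of-the-envelope calculation. The cane-to-sugar conversion ratio below is an assumption for illustration (roughly 10 kg of cane per kg of raw sugar), not a figure from the text.

```python
# Historical figure from the text: 1 ton of water per pound of refined sugar.
TON_KG = 907.2      # US short ton in kg; 1 kg of water is about 1 litre
POUND_KG = 0.4536
historical_l_per_kg_sugar = TON_KG / POUND_KG   # litres of water per kg of sugar

# Modern figure from the text: 3-10 cubic metres of water per (metric) ton of cane.
# Assumed conversion, NOT from the text: ~10 kg of cane per kg of raw sugar.
CANE_KG_PER_SUGAR_KG = 10.0
worst_case_l_per_kg_cane = 10.0 * 1000.0 / 1000.0   # 10 m^3 per 1000 kg of cane
modern_l_per_kg_sugar = worst_case_l_per_kg_cane * CANE_KG_PER_SUGAR_KG

improvement = historical_l_per_kg_sugar / modern_l_per_kg_sugar
```

Even taking the worst end of the modern range and the assumed conversion ratio, water use per kilogram of sugar has fallen by roughly a factor of twenty since the early 20th century; water demand remains a serious concern nonetheless.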
Decline of plantations in Hawaii
Sugar plantations suffered from many of the same afflictions that manufacturing sectors in the United States continue to feel. Labor costs increased significantly after Hawaii became a state and workers were no longer effectively indentured servants. The hierarchical caste system plantation managers had worked hard to maintain began to break down, with greater racial integration resulting, ironically, from the sugarcane plantations themselves. Workers began to discover they had rights, and in 1920 staged the first multi-cultural strike. Additionally, global politics played a large role in the downfall of Hawaiian sugar: shifting political alliances between 1902 and 1930 permitted Cuba a larger share of the United States sugar market, holding 45% of the domestic quota while Hawaii, the Philippines, and Puerto Rico shared 25%.
The Big Five slowed the production of sugar as cheaper labor was found in India, South America and the Caribbean and concentrated their efforts on the imposition of a tourism-based society. Former plantation land was used by the conglomerates to build hotels and develop this tourist-based economy which has dominated the past fifty years of Hawaiian economics.
Sugar mills by island
- Maui: Spreckelsville
Planters and Managers
- Hawaiian Sugar Planters' Association
- John Mott-Smith (1824-1895)
- Claus Spreckels (1828-1905) - whilst based mostly in California
- George P. Trousseau (1833-1894)
- Rufus A. Lyman (1842-1910)
- Samuel Parker (1853-1920)
- William H. Purvis (1858-1950)
- David M. Forbes (1863-1937)
Aspects of Camera Lenses: Principal Axis
One feature common to all lenses, including camera lenses, is the principal axis. On a simple lens, say a curved piece of glass, the principal axis is an imaginary line drawn directly through the center of the lens. If the lens were standing upright on a table, the principal axis would be parallel to the table.
Camera lenses are made of multiple simple lenses, or lens elements, but they also have a principal axis. It too is drawn directly through the center of the lens. Taking the lens as a cylinder, the principal axis goes right through the middle, parallel to the sides of the lens.
Importance of the Principal Axis
The focal point of a lens is where the light entering the lens converges. It is located on the principal axis. Ideally, the principal axis should pass through the middle of the lens and strike the film or digital sensor directly at its center.
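The relationship between the focal point and the principal axis can be made concrete with the standard thin-lens equation, 1/f = 1/d_o + 1/d_i. This is a textbook idealization; a real multi-element camera lens only approximates it. The 50 mm focal length and 2 m subject distance below are arbitrary example values.

```python
def image_distance(focal_length_mm, object_distance_mm):
    # Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image
    # distance d_i. All distances are measured along the principal axis.
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

# A 50 mm lens focused on a subject 2 m away forms a sharp image
# slightly behind the focal point, centered on the principal axis.
di = image_distance(50.0, 2000.0)
```

Note that the image forms a little beyond the 50 mm focal point; only light from an infinitely distant subject converges exactly at the focal point, which is why a camera must move its elements to focus.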
Computer Programming is Easy and Fun - For Kids
Reading, writing, and math are essential skills for kids. But in the 21st century, computer technology will only grow more essential to everyday life. Kids need to learn programming skills along with their other core subjects. These programs make learning computer programming easy and fun.
Shoes is a simple graphical interface designed to work on the Ruby platform. Ruby is a relatively new programming language created by Yukihiro "Matz" Matsumoto. The language was designed to be simpler to understand by being more like an actual spoken language. With Shoes, learning Ruby becomes even easier. Shoes is free to download and easy to install. Shoes is designed to make little colorful programs and animations. These could run on your computer but are even better on the web. A manual that explains everything is available on their website as a free .pdf download or a $5.95 mail order.
Alice is a 3D programming environment that teaches object oriented programming. It teaches fundamental programming concepts while the students make fun animations or simple video games. People, animals and vehicles inhabit a virtual world where kids decide which code they want to use to move them around. Alice has a drag and drop interface with tiles. Each tile has a standard programming statement. When the tile is dropped on the character the action is performed. This allows kids to easily and quickly understand what the code does.
Scratch is a 2D programming language that lets kids easily create their own animations, games, music, and art. The free program from the MIT Media Lab is designed to teach kids creative thinking, systematic reasoning, and collaboration. Scratch projects are made up of objects called sprites. Sprites can be decorated and dressed up by importing photos or using the built-in paint feature. Sprites move, play music, or react to other sprites when graphic blocks are placed into stacks (called scripts in Scratch). Clicking on a stack activates it. Kids get to see immediately what their script can do.
JUDO is a Java IDE (Integrated Development Environment) for kids. By using a simplified version of Java, JUDO allows kids to learn faster so they can make interesting and entertaining things quickly. The JUDO site offers step-by-step tutorials that walk kids through the process. They learn how to write and run programs and set program properties.
Trachoma (truh-KOH-muh) is a bacterial infection that affects your eyes. The bacterium that causes trachoma spreads through direct contact with the eyes, eyelids, and nose or throat secretions of infected people.
Trachoma is very contagious and almost always affects both eyes. Signs and symptoms of trachoma begin with mild itching and irritation of your eyes and eyelids and lead to blurred vision and eye pain. Untreated trachoma can lead to blindness.
Trachoma is the leading preventable cause of blindness worldwide. The World Health Organization (WHO) estimates that 8 million people worldwide have been visually impaired by trachoma. WHO estimates more than 84 million people need treatment for trachoma, primarily in poor areas of developing countries. In some of the poorest countries in Africa, prevalence among children can reach 40 percent.
If trachoma is treated early, further complications can often be prevented.
The principal signs and symptoms in the early stages of trachoma include:
- Mild itching and irritation of the eyes and eyelids
- Discharge from the eyes containing mucus or pus
As the disease progresses, later trachoma symptoms include:
- Marked light sensitivity (photophobia)
- Blurred vision
- Eye pain
Young children are particularly susceptible to infection, but the disease progresses slowly, and the more painful symptoms may not emerge until adulthood.
The World Health Organization has identified a grading system with five stages in the development of trachoma, including:
- Inflammation — follicular. The infection is just beginning in this stage. Five or more follicles — small bumps that contain lymphocytes, a type of white blood cell — are visible with magnification on the inner surface of your upper eyelid (conjunctiva).
- Inflammation — intense. In this stage, your eye is now highly infectious and becomes irritated, with a thickening or swelling of the upper eyelid.
- Eyelid scarring. Repeated infections lead to scarring of the inner eyelid. The scars often appear as white lines when examined with magnification. Your eyelid may become distorted and may turn in (entropion).
- Ingrown eyelashes (trichiasis). The scarred inner lining of your eyelid continues to deform, causing your lashes to turn in so that they rub on and scratch the transparent outer surface of your eye (cornea).
- Corneal clouding. The cornea becomes affected by an inflammation that is most commonly seen under your upper lid. Continual inflammation compounded by scratching from the in-turned lashes leads to clouding of the cornea. Secondary infection can lead to development of ulcers on your cornea and eventually partial or complete blindness.
All the signs of trachoma are more severe in your upper lid than in your lower lid. With advanced scarring, your upper lid may show a thick line. In addition, the lubricating glandular tissue in your lids — including the tear-producing glands (lacrimal glands) — can be affected. This can lead to extreme dryness, aggravating the problem even more.
When to see a doctor
Call your doctor if you or your child has itching, irritation or discharge from the eyes, especially if you recently traveled to an area where trachoma is common. Trachoma is a contagious condition, and it should be treated as soon as possible to prevent further infections.
Trachoma is caused by certain subtypes of Chlamydia trachomatis, a bacterium that can also cause the sexually transmitted infection chlamydia.
Trachoma spreads through contact with discharge from the eyes or nose of an infected person. Hands, clothing, towels and insects can all be routes for transmission. In the world's developing countries, flies are a major means of transmission.
Factors that increase your risk of contracting trachoma include:
- Poverty. Trachoma is primarily a disease of extremely poor populations in developing countries.
- Crowded living conditions. People living in close contact are at greater risk of spreading infection.
- Poor sanitation. Poor sanitary conditions and lack of hygiene, such as unclean faces or hands, help spread the disease.
- Age. In areas where the disease is active, it's most common in children ages 4 to 6.
- Sex. Women contract the disease at rates two to six times higher than those for men.
- Poor access to water. Households at greater distances from a water supply are more susceptible to infection.
- Flies. People living in areas with problems controlling the fly population may be more susceptible to infection.
- Lack of latrines. Populations without access to working latrines — a type of communal toilet — have a higher incidence of the disease.
One episode of trachoma caused by Chlamydia trachomatis is easily treated with early detection and use of antibiotics. However, repeated infection can lead to complications, including:
- Scarring of the inner eyelid
- Eyelid deformities
- Inward folding of the eyelid (entropion)
- Ingrown eyelashes
- Corneal scarring or cloudiness
- Partial or complete vision loss
You're likely to start by seeing your family doctor or a general practitioner if you have symptoms of trachoma. However, in some cases when you call to set up an appointment, you may be referred immediately to an eye specialist (ophthalmologist).
Because appointments can be brief, and because there's often a lot to talk about, it's a good idea to be well prepared. Here's some information to help you get ready, and what to expect from your doctor.
What you can do
- Be aware of any pre-appointment restrictions. At the time you make the appointment, be sure to ask if there's anything you need to do in the time leading up to your appointment. For example, if your child has signs or symptoms of an eye condition, ask whether you should keep your child home from school or child care.
- Write down any symptoms you're experiencing, including any details about changes in your vision. Are you sensitive to light? Has your vision become blurred? Do your eyes hurt or just itch?
- Write down key personal information, including any trips you or someone close to you may have taken abroad. Also include information about any recent changes to corrective lenses, such as new contacts or glasses.
- Make a list of all medications and any vitamins or supplements that you're taking.
- Write down questions to ask your doctor.
Your time with your doctor is limited, so preparing a list of questions will help you make the most of your time together. List your questions from most important to least important in case time runs out. For eye irritation, some basic questions to ask your doctor include:
- What is likely causing my symptoms?
- Other than the most likely cause, what are other possible causes for my symptoms?
- What kinds of tests do I need?
- Is my condition likely temporary or chronic?
- What is the best course of action?
- Will I have any long-term complications from this condition?
- Are there any restrictions that I need to follow, such as staying home from work or school?
- Should I see a specialist? What will that cost, and will my insurance cover it?
- Is there a generic alternative to the medicine you're prescribing me?
- Are there any brochures or other printed material that I can take with me? What websites do you recommend visiting?
In addition to the questions that you've prepared to ask your doctor, don't hesitate to ask questions during your appointment at any time that you don't understand something.
What to expect from your doctor
Your doctor is likely to ask you a number of questions. Being ready to answer them may reserve time to go over points you want to spend more time on. Your doctor may ask:
- Have you or someone close to you traveled abroad recently?
- Have you ever had a similar problem?
- Have you made any changes to your corrective lenses, such as wearing new contacts or using new contact lens solution?
- When did you first begin experiencing symptoms?
- How severe are your symptoms? Do they seem to be getting worse?
- What, if anything, seems to improve your symptoms?
- What, if anything, appears to worsen your symptoms?
- Is anyone else in your household having similar symptoms?
- Have you been treating your symptoms with any medications or drops?
What you can do in the meantime
While you are waiting for your appointment, practice good hygiene to reduce the possibility of spreading your condition:
- Don't touch your eyes without first washing your hands.
- Wash your hands thoroughly and frequently.
- Change your towel and washcloth daily, and don't share them with others.
- Change your pillowcase often.
- Discard eye cosmetics, particularly mascara.
- Don't use anyone else's eye cosmetics or personal eye care items.
- Discontinue wearing your contact lenses until your eyes have been evaluated; then follow your eye doctor's instructions on proper contact lens care.
- If your child is infected, have him or her avoid close contact with other children.
Most people with trachoma in its initial stages display no signs or symptoms. In areas where the disease is common, your doctor can diagnose trachoma through a physical examination or through sending a sample of bacteria from your eyes to be cultured and tested in a laboratory.
Trachoma treatment options depend on the stage of the disease.
In the early stages of trachoma, treatment with antibiotics alone may be enough to eliminate the infection. The two drugs currently in use are a tetracycline eye ointment and oral azithromycin (Zithromax). Although azithromycin appears to be more effective than tetracycline, it is also more expensive. In poor communities, the drug used often depends on which one is available and affordable.
The World Health Organization (WHO) guidelines recommend giving antibiotics to an entire community when more than 10 percent of its children have been affected by trachoma, in order to treat anyone who has been exposed and to reduce the disease's spread.
Treatment of later stages of trachoma — including painful eyelid deformities — may require surgery. WHO guidelines recommend surgery for people with the advanced stage of trachoma.
In eyelid rotation surgery (bilamellar tarsal rotation), your doctor makes an incision in your scarred lid and rotates your eyelashes away from your cornea. The procedure limits the progression of corneal scarring and can help prevent further loss of vision. Generally, this procedure can be performed on an outpatient basis and often significantly reduces the chances of trachoma returning.
If your cornea has become clouded enough to seriously impair your vision, corneal transplantation may be an option to improve vision. Frequently, however, this procedure doesn't produce good results in trachoma.
You may have a procedure to remove eyelashes (epilation) in some cases. However, this procedure may need to be done repeatedly. Another temporary measure, if surgery isn't available, is to place an adhesive bandage over your eyelashes to keep them from touching your eye.
If you're traveling to parts of the world where trachoma is common, be sure to practice good hygiene to prevent infection.
If you've been treated for trachoma with antibiotics or surgery, reinfection is always a concern. For your protection and for the safety of others, be sure that family members or others you live with are screened and, if necessary, treated for trachoma.
Proper hygiene practices include:
- Face washing and hand-washing. Keeping faces clean, especially children's, can help break the cycle of reinfection.
- Controlling flies. Reducing fly populations can help eliminate a major source of transmission.
- Proper waste management. Properly disposing of animal and human waste can reduce breeding grounds for flies.
- Improved access to water. Having a fresh water source nearby can help improve hygienic conditions.
Although no vaccine is available, trachoma prevention is possible. The World Health Organization (WHO) has developed a health strategy to prevent trachoma, with the goal of eliminating trachoma in the world by 2020. The strategy is called SAFE, an acronym for:
- Surgery to treat advanced forms of trachoma
- Antibiotics to treat the infection and prevent further spread of infection
- Facial cleanliness
- Environmental improvements, particularly in water, sanitation and fly control, to lower disease transmission
Oct. 03, 2012
|
(Newtonian physics denotes well-known forces like gravity, while quantum mechanics describes laws of physics that apply at very small scales, such as those found in atoms.)
At the nanoscale, for example, the metal germanium glows blue when energy is applied to it. This has a host of applications in electronic and medical imaging technologies, Rorrer said.
The process to incorporate germanium nanoparticles in silica is "doable but difficult with existing technology," the scientist said.
The conventional process involves vaporizing a germanium crystal in a vacuum with a high-energy laser beam and coaxing the vaporized atoms to glom onto a silica surface.
"That has to be done at a high temperature [and] at a high vacuum and [with] all the equipment associated with the control of that," Rorrer said. "We do essentially the same thing by growing living organisms in a vat."
The trick for Rorrer and his Oregon State University colleague, Chih-hung Chang, is to add just enough dissolved metal to the vat to allow the diatoms to absorb it without dying.
To date "the concept for germanium incorporation has been proven," Chang said. "We will work on incorporating other metals very soon."
Another advantage to using diatoms, Rorrer said, is that when the algae divide, they make a perfect copy of themselves, meaning "we can make a gazillion of these, and they are all the same."
In addition to the ability of the diatoms to absorb these metals and create nanostructured materials, each diatom species makes shells with unique designs. And there are tens of thousands of diatom species.
Which means there are "tens of thousands of micro-templates," Rorrer said. "Some have holes, some ribs, some oval, some square, and all the microfabrication has been done by the organisms. We just put additional material on it."
In the future, the researchers hope they can use these diatoms to make intricate designs at the microscale that are currently not possible with existing technology.
To find the appropriate template, all a researcher would need is a searchable database of natural diatom designs. Genetic engineering may also one day make it possible to control diatom design.
|
Omega-3 and Omega-6 Fatty Acids Improve Reading Skills in Healthy Children
Previous research has shown positive effects of essential fatty acids (omega-3/6) in children with attention and reading difficulties. New research shows that these fats could improve reading ability in mainstream schoolchildren.
Foods high in omega-3 include fish, vegetable oils, nuts, flax seeds, and leafy vegetables. Most omega-6 fatty acids in the diet are obtained from vegetable oils. The modern diet is particularly low in omega-3 fatty acids which are important for signal transmission between nerve cells and the regulation of signaling systems in the brain.
The study group included 154 schoolchildren from western Sweden who were in grade 3 (between 9 and 10 years of age). The researchers measured their reading skills using a computer-based test called the Logos test, which assessed reading speed, the ability to read nonsense words, and vocabulary.
The children were randomly assigned supplements with omega-3/omega-6 or a placebo of palm oil which they took for 3 months (3 capsules per day). The study was double-blinded so neither the researchers nor parents knew which treatment the children were taking. After 3 months all the children received the real omega-3/6 capsules for the remainder of the research study.
Researchers saw a significant improvement in reading skills after the first 3 months in children taking the omega-3/6 supplement compared to the placebo. While no children diagnosed with ADD/ADHD were included in the study, those with mild attention problems achieved greater improvements on certain tests, such as faster reading, after taking the real supplements.
Johnson M, Fransson G, Östlund S, Areskoug B, Gillberg C. Omega 3/6 fatty acids for reading in children: a randomized, double-blind, placebo-controlled trial in 9-year-old mainstream schoolchildren in Sweden. J Child Psychol Psychiatry. 2017;58(1):83-93. |
On June 24, 1901, the first major exhibition of Pablo Picasso’s artwork opens at a gallery on Paris’ rue Lafitte, a street known for its prestigious art galleries. The precocious 19-year-old Spaniard was at the time a relative unknown outside Barcelona, but he had already produced hundreds of paintings. The 75 works displayed at Picasso’s first Paris exhibition offered moody, representational paintings by a young artist with obvious talent.
Pablo Picasso, widely acknowledged as the dominant figure in 20th-century art, was born in Malaga, Spain, in 1881. His father was a professor of drawing and bred Picasso for a career in academic art. He had his first exhibit at age 13 and later quit art school so he could experiment full-time with modern art styles. He went to Paris for the first time in 1900, and in 1901 he returned with 100 of his paintings, aiming to win an exhibition. He was introduced to Ambroise Vollard, a dealer who had sponsored Paul Cezanne, and Vollard immediately agreed to a show at his gallery after seeing the paintings. From street scenes to landscapes, prostitutes to society ladies, Picasso’s subjects were diverse, and the young artist received a favorable review from the few Paris art critics who saw the show. He stayed in Paris for the rest of the year and later returned to Paris to settle permanently.
The work of Picasso, which comprises more than 50,000 paintings, drawings, engravings, sculptures, and ceramics produced over 80 years, is described in a series of overlapping periods. His first notable period–the “blue period”–began shortly after his first Paris exhibit. In works such as The Old Guitarist (1903), Picasso painted in blue tones to evoke the melancholy world of the poor. The blue period was followed by the “rose period,” in which he often depicted circus scenes, and then by Picasso’s early work in sculpture. In 1907, Picasso painted the groundbreaking work Les Demoiselles d’Avignon, which, with its fragmented and distorted representation of the human form, broke from previous European art. Les Demoiselles d’Avignon demonstrated the influence on Picasso of both African mask art and Paul Cezanne and is seen as a forerunner of the Cubist movement founded by Picasso and the French painter Georges Braque in 1909.
In Cubism, which is divided in two phases, analytical and synthetic, Picasso and Braque established the modern principle that artwork need not represent reality to have artistic value. Major Cubist works by Picasso included his costumes and sets for Sergey Diaghilev’s Ballets Russes (1917) and The Three Musicians (1921). Picasso and Braque’s Cubist experiments also resulted in the invention of several new artistic techniques, including collage.
After Cubism, Picasso explored classical and Mediterranean themes, and images of violence and anguish increasingly appeared in his work. In 1937, this trend culminated in the masterpiece Guernica, a monumental work that evoked the horror and suffering endured by the Basque town of Guernica when it was destroyed by German war planes during the Spanish Civil War. Picasso remained in Paris during the Nazi occupation but was fervently opposed to fascism and after the war joined the French Communist Party.
Picasso’s work after World War II is less studied than his earlier creations, but he continued to work feverishly and enjoyed commercial and critical success. He produced fantastical works, experimented with ceramics, and painted variations on the works of other masters in the history of art. Known for his intense gaze and domineering personality, he had a series of intense and overlapping love affairs in his lifetime. He continued to produce art with undiminished force until his death in 1973 at the age of 91. |
In electronics and communications, the decibel (abbreviated as dB, and also as db and DB) is a logarithmic expression of the ratio between two signal power, voltage, or current levels. In acoustics, the decibel is used as an absolute indicator of sound power per unit area. A decibel is one-tenth of a Bel, a seldom-used unit named for Alexander Graham Bell, inventor of the telephone.
Suppose a signal has a power of P1 watts, and a second signal has a power of P2 watts. Then the power amplitude difference in decibels, symbolized SdBP, is:
SdBP = 10 log10 (P2 / P1)
Decibels can be calculated in terms of the effective voltage if the load impedance remains constant. Suppose a signal has an rms (root-mean-square) voltage of V1 across a load, and a second signal has an rms voltage of V2 across another load having the same impedance. Then the voltage amplitude difference in decibels, symbolized SdBV, is:
SdBV = 20 log10 (V2 / V1)
Decibels can also be calculated in terms of the effective current (amperage) if the impedance remains constant. Suppose a signal delivers an rms (root-mean-square) amperage of A1 through a load, and a second signal delivers an rms amperage of A2 through another load having the same impedance. Then the current amplitude difference in decibels, symbolized SdBA, is:
SdBA = 20 log10 (A2 / A1)
When a decibel figure is positive, then the second signal is stronger than the first signal. When a decibel figure is negative, then the second signal is weaker than the first signal. In amplifiers, the gain, also called the amplification factor, is often expressed in decibels. A circuit amplifies only if the decibel figure for the output-to-input power ratio (SdBP) is positive.
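The three formulas can be implemented directly. A minimal Python sketch (the function names are invented for illustration):

```python
import math

def db_power(p1, p2):
    """Power difference in decibels: 10 * log10(P2 / P1)."""
    return 10 * math.log10(p2 / p1)

def db_voltage(v1, v2):
    """Voltage difference in decibels (equal impedances): 20 * log10(V2 / V1)."""
    return 20 * math.log10(v2 / v1)

def db_current(a1, a2):
    """Current difference in decibels (equal impedances): 20 * log10(A2 / A1)."""
    return 20 * math.log10(a2 / a1)

# Doubling the power gives about +3 dB; doubling the voltage, about +6 dB.
print(round(db_power(1.0, 2.0), 2))    # 3.01
print(round(db_voltage(1.0, 2.0), 2))  # 6.02
# A weaker second signal yields a negative figure.
print(round(db_power(4.0, 1.0), 2))    # -6.02
```

Note that a voltage ratio of 2 and a power ratio of 2 do not give the same decibel figure; the factor of 20 in the voltage formula reflects that power is proportional to the square of voltage at constant impedance.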
In sound, decibels are defined in terms of power per unit surface area on a scale from the threshold of human hearing, 0 dB, upward toward the threshold of pain, about 120-140 dB. As examples: the sound level in the average residential home is about 40 dB, average conversation is about 60 dB, typical home music listening levels are about 85 dB, a loud rock band is about 110 dB, and a jet engine close up is 150 dB.
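Going the other way, from a decibel figure back to a linear amplitude multiplier (as when applying a gain change to audio), simply inverts the 20 log10 voltage formula. A small sketch, with invented function names:

```python
import math

def db_to_factor(db):
    """Linear amplitude multiplier for a decibel change: 10 ** (dB / 20)."""
    return 10 ** (db / 20)

def factor_to_db(factor):
    """Decibel change for a linear amplitude multiplier: 20 * log10(factor)."""
    return 20 * math.log10(factor)

print(round(db_to_factor(6.02), 2))   # 2.0  -- a +6 dB boost roughly doubles amplitude
print(round(db_to_factor(-18), 3))    # 0.126 -- an 18 dB cut
print(round(factor_to_db(10.0), 1))   # 20.0
```

The two functions are inverses, so `factor_to_db(db_to_factor(x))` returns `x` up to floating-point rounding.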
Decibel units are commonly used in audio equalizers, both the hardware kind and the software kind, as a convenient reference point while editing. Boosting an equalizer band whose center point is 1000 Hz by 3 dB means that you have raised the volume level of that frequency band by 3 dB relative to the other frequencies in the sound. A typical equalizer has a range for boosting or diminishing a sound level of +/-18 dB. |
If you’re worried that you might have been exposed to human immunodeficiency virus (HIV) — the virus that causes AIDS — it’s important to get tested as soon as possible. Although the prospect of being diagnosed with the disease can be scary, today you can live a long and full life with HIV, especially if you start treatment early. Knowing you are infected can also help you take precautions so that you don’t pass the virus to other people.
Several different tests are used to diagnose HIV infection. Other tests are used to select and monitor treatments in people who are living with HIV.
HIV LABORATORY TEST TYPES
There are three main types of HIV tests:
- Antibody tests,
- RNA tests, and
- A combination test that detects both antibodies and a viral protein called p24.
All tests are designed to detect HIV-1, which is overwhelmingly the most common type of HIV in the United States. Some antibody tests and the combination test can also detect HIV-2 infections, which are less common in the U.S. No test is perfect; tests may be falsely positive or falsely negative or impossible to interpret.
Positive test results are reportable to the health department in all 50 states and include the patient’s name. This information is then reported to the CDC (without names) so that the epidemiology and infection spread rates can be monitored. The names sent to the state remain confidential and will not be reported to employers, family members, or other such people.
HIV ANTIBODY TESTS:
HIV possesses many unique proteins on its surface and inside the virus itself. When someone is infected with HIV, their body produces substances designed to neutralize the virus. These substances are called antibodies, and they are directed against the unique proteins of HIV. Unfortunately, these HIV antibodies do not eliminate the virus. However, their presence serves as a marker to show that someone is infected with HIV. HIV antibody tests are the most commonly used tests to determine if someone has HIV.
Antibody testing is usually done on a blood sample, often using an enzyme-linked assay called an ELISA or EIA. In this test, a person’s serum is allowed to react with virus proteins that have been produced in the laboratory. If the person has been infected with HIV, the antibodies in the serum will bind to the HIV proteins, and the extent of this binding can be measured. Negative EIA results are usually available in a day or so.
There are some rapid HIV testing kits on the market that can be used in a doctor’s office or other points of care. Most of these kits still require blood to be drawn, although it can be done using a simple finger stick in some cases. Home-testing is also possible and may be more convenient for some individuals. Home testing is done by adding a drop of blood to a test strip and mailing the sample to a laboratory. The FDA has also approved kits that test for antibodies in saliva/oral fluid instead of blood. Saliva is obtained by swabbing the gums. Some of the newest tests are done on urine, although results may be less accurate than results from blood.
Because there is a small chance that a person’s antibodies will falsely attach to the non-HIV proteins during the test, a second test is done on all initially positive tests. This second test is called the Western blot test. In this test, the HIV proteins are separated by size and electric charge and the person’s serum is layered on the test strip. If the test is positive, a series of bands are detected which indicate specific binding of the person’s antibody to specific HIV virus proteins. This test is only done in combination with the initial screening test.
HIV RNA TESTS:
The HIV RNA is different from all human RNA, and tests have been developed to detect HIV RNA in a person’s blood, using a type of test called a polymerase chain reaction (PCR). These tests are important for screening newborns of HIV-positive mothers, since maternal antibody may cross the placenta and be present in the newborn. They may also be helpful in detecting HIV infection in the first four weeks following exposure, before antibodies have had time to develop. However, they are costly and are not routinely used to screen for infection.
HIV COMBINATION TEST:
The HIV combination test detects antibodies directed against HIV-1 or HIV-2, as well as a protein called p24, which forms part of the core of the virus. This is important because it takes weeks for antibodies to form after the initial infection, even though the virus (and the p24 protein) is present in the blood. Thus, combination testing may allow for earlier detection of HIV infections. Preliminary studies suggest that diagnosis could be made an average of one week earlier using the combination test, compared to antibody testing alone. The test uses a reaction known as “chemiluminescence” to detect antibodies and p24 protein. In other words, if either the antibody or the p24 protein is present, the test reaction emits light that registers on a detector. There is only one currently approved combination test, the Architect HIV Ag/Ab Combo assay. If this test is positive, it is recommended it be repeated. Tests that remain positive are confirmed with Western blot as described above.
TESTS AFTER HIV DIAGNOSIS
If you receive a diagnosis of HIV/AIDS, several types of tests can help your doctor determine what stage of the disease you have. These tests include:
- CD4 count.
CD4 cells are a type of white blood cell that’s specifically targeted and destroyed by HIV. Even if you have no symptoms, HIV infection progresses to AIDS when your CD4 count dips below 200.
- Viral load.
This test measures the amount of virus in your blood. Studies have shown that people with higher viral loads generally fare more poorly than do those with a lower viral load.
- Drug resistance.
This blood test determines whether the strain of HIV you have will be resistant to certain anti-HIV medications.
TESTS FOR COMPLICATIONS IN HIV INFECTED PATIENTS
Your doctor might also order lab tests to check for other infections or complications, including:
- Sexually transmitted infections
- Liver or kidney damage
- Urinary tract infection |
About six years ago, researchers first suggested that a severe drought may have precipitated the mysterious fall of the Mayan Empire. A new study in today's Science finds that this kind of drought may return with a certain regularity because it is a result of a shift in solar intensity.
The sun's intensity actually varies less that one tenth of a percent, but it is enough to create severe droughts in the Yucatn Peninsula, formerly the heart of the Mayan Empire. "It looks like changes in the sun's energy output are having a direct effect on the climate of the Yucatn and causing the recurrence of drought, which is influencing the Maya evolution," says lead author David Hodell of the University of Florida.
In sediment cores taken from Lake Chichancanab in northern Yucatn, the researchers found periodically reoccurring high concentrations of calcium sulfate, which is left behind when greater amounts of water evaporate from the ground during droughts. Based on this pattern, they concluded that the droughts occurred in a 208-year cycle. They further noted that this cycle closely coincided with a known 206-year variation in solar activity.
When the scientists compared the development of Mayan civilization with these cycles, it became evident that the society's development was slowed down every time the droughts occurred. The Maya relied heavily on rainfall and surface water, and both the end of the classical period and the ultimate demise of Mayan civilization coincided with one of these droughts. |
Every four years, citizens of the U.S. vote for President. It seems simple, but it's actually far more complicated. Let us explain!
When it comes to electing the President of the United States, it gets a little complicated. We found this video to help explain the process. This video discusses the basic ideas behind the U.S. electoral process. It follows the chronological steps from voting to election day, focusing on each state’s role, including:
- Comparisons of popular vote vs. state votes
- Impact of state population on the number of electors
- How electors are counted
- What is required for a president to be elected |
Early childhood education includes the child’s learning environments during the ages of birth to five.
(Author: Tracey Schaeffer)
That environment does not have to be in a preschool setting or include any special curriculum; it is most often the home environment, provided by the family. The things that happen during this time can have an effect on the child’s life: on the choices they make and on their ability to ‘bounce back’ from challenges, also called resilience.
Many people are exploring the things that lead people to success in life. These are often called positive protective factors, as they can protect a child, and the person they become, from making choices that lead to challenges in life. Some examples of positive protective factors include: having your needs met in your home as a young child, having both parents at home, not experiencing violence, and living in a safe and connected community with opportunities for meaningful engagement.
In rural Alaska, the possibilities for a rich early childhood experience are abundant! Looking at Inupiaq culture and subsistence, there are many traditions and opportunities that will help your child develop resilience and experience success. Young babies thrive on love and attention. They love to look at your face, and taking time to sing and talk to your baby will help your baby feel safe and connected to you. The Inupiaq tradition of nuniaq is a little rhyme/song that a parent or relative makes up for a new baby; it stays with them as they grow, strengthening that attachment and the special bond between them.
Give your baby opportunities to play on the floor and move around; it will help strengthen their body and develop curiosity about their environment. Being exposed to the Inupiaq language is wonderful during the first few years of a child’s life. Their brain is ready to absorb and learn, so getting your baby together with a relative who can speak Inupiaq with them is a wonderful opportunity for brain development. Toddlers love to explore, so help them safely get outside to play in the grass and the sand, feel the leaves, smell the flowers, and even splash in the water and squish mud. Talking about how things look, feel, and smell will help them develop their language, cognitive, and sensory skills. Spending time with them will make this curious exploration even more special.
As children get older, teaching them chores at home and at camp will help them learn responsibility and the satisfaction of a job well done. Teach them how to do simple chores and give them praise for doing them right. There is always a place for children to get involved in subsistence activities. While it might take more time to have your child involved in the process of picking, skinning, processing, and putting away food, it will strengthen their self-esteem as they develop a strong cultural connection, not to mention the skills to take over as you get ready to retire.
Last but not least is taking time to have fun and be active: take a walk down the beach when you are at camp, teach your child to skip rocks, play a game of Norwegian, shoot some hoops. The time together, and developing a habit of healthy activity, will stay with them for a lifetime. |
Kids who speak Mandarin, the primary language in China, may outperform kids who speak English in at least one aspect of musical ability — perceiving pitch. That’s the finding of a new study.
Pitch refers to how high or low a sound’s frequency is. In tonal languages, such as Mandarin, pitch is very important. These languages use different pitch patterns to give meaning to words. In Mandarin, a word like "ma," for instance, could mean “mother” or “horse.” Knowing which depends on how it is spoken. The English language uses vowels and consonants to change the meaning of a word. Switch the vowel in cat from "a" to "o," and it becomes cot. But changing the pitch of the word doesn’t matter. (Even in English, pitch can play a role — just a different one. For instance, raising the pitch for the last word in a sentence signals that a question has just been asked.)
Sarah Creel led the new study. She works at the University of California, San Diego, where she studies how the brain perceives language and music. People who speak Mandarin may be better at detecting differences in pitch generally. "If you have to focus on pitch patterns a lot to understand what the people around you are saying, that may really hone your attention to pitch,” explains Creel. “And that attention to pitch in language then transfers to another domain.” One such domain: music.
Creel and her colleagues conducted an experiment with roughly 100 kids between the ages of three and five. Half lived in China, the rest in the United States. The children listened to pairs of sounds. Then they reported whether the sounds in a pair had been the same or different. Some of the paired sounds were exactly alike. Others were slightly different. Some, for instance, had differences in how low or high a sound was. Other pairs had the same pitch but were played by different instruments.
Both groups did equally well at identifying pairs of sounds from different instruments. But Chinese kids were much better than the Americans at picking pairs of sounds having different pitches — almost 15 percentage points better. In a second trial, the researchers ran the test with three- to four-year-olds. Again, the Chinese children performed better at pitch perception, although not quite as well as the older children had.
Creel's team published its findings online January 16 in Developmental Science.
Scientists had previously linked speaking Mandarin and musical ability in adults. The new study is the first to do that in children.
"Showing the link in children suggests that it only takes a few years of experience with a tonal language to see effects," says Creel. The finding probably applies to other tonal languages too, she says. Cantonese (another Chinese language), Vietnamese and Thai are examples. Many languages in sub-Saharan Africa and Central America also are tonal.
The right side of the brain plays a crucial role in music. Languages, such as English and Mandarin, are mostly processed on the left side of the brain. But research has shown that Mandarin tends to activate parts of the right side of the brain that English doesn't.
It is not yet clear, however, whether the advantage in perceiving pitch actually makes Chinese kids better musicians, Creel notes.
Fan-Gang Zeng is a scientist at the University of California, Irvine. He studies how hearing works in the brain. Zeng says the study is "credible." But, he adds, the advantage the Chinese children showed "can be easily overcome by motivation, experience and training."
So if you want to spruce up your piano skills, maybe you should practice your piano lessons more, not head out to learn Chinese.
Power Words
Cantonese Also known as Yue, this is one of the five major languages of China, now spoken by some 100 million people. The name comes from Canton. That's the name that English colonists gave to Guangzhou, the capital city of Guangdong Province. This language has existed in some form for roughly 2,000 years.
colleague Someone who works with another; a co-worker or team member.
credible (n. credibility) An adjective meaning believable or convincing
domain An area or territory ruled by a political power; an area of knowledge or influence. (in math) The values that go into a function.
frequency The number of times a specified periodic phenomenon occurs within a specified time interval.
Mandarin (in linguistics) Versions, or dialects, of Chinese, which are spoken in about four fifths of China. In all, an estimated 1.3 billion people inside and outside China speak this language. Also known as pǔtōnghuà, it has been around for roughly 900 years. It takes its name from a Portuguese word and initially referred to an important Chinese official. This has been one of the most common languages in China since the 14th century.
pitch (in acoustics) The word musicians use for sound frequency. It describes how high or low a sound is, which will be determined by the vibrations that created that sound.
tonal language (in linguistics) A language, such as several spoken in China, that uses differences in tone to distinguish the meaning of words that would otherwise sound similar.
tone Changes in a voice that express a particular feeling or mood.
Higher order thinking skills can be assessed using tasks which provide opportunities for students to analyse, evaluate or create. Deductive reasoning in geometry requires these types of thinking skills.
Bloom's Revised Taxonomy classifies thinking skills into six levels.
Assessment tasks need to provide clear information about student achievement whilst allowing students to access problems suited to their level of knowledge and experience.
You can read more about thinking skills in Bloom’s Revised Taxonomy (74 KB PDF) of learning domains. This is a particularly helpful organisational and planning tool when designing assessment tasks in geometry.
Listing properties of a given quadrilateral involves recalling knowledge, a low level of thinking. Reversing the direction of the question assesses the same information at a higher level.
The ability to reason logically develops before the skill of writing formal proofs. To assess reasoning, provide students with opportunities to explain their reasoning in their own words.
Find the error
Questions which require students to analyse or evaluate can be used to assess deep understanding.
Student thinking is often revealed when playing mathematical games. By observing the decisions and strategies students use while playing, teachers can assess their learning.
Today in our period B MFM1P class we started investigating ratios and proportional reasoning.
We started using Dan Meyer's 3-Act Math problem “Sugar Packets”. After watching the video, students were engaged and grossed out at the thought. Then, they were each given a different size of beverage (juice boxes, chocolate milk, small soda cans, large soda bottles, iced coffee, iced tea, gatorade and lower sugar gatorade, powerade, snapple, vitamin water, etc.). Groups had to figure out how many sugar packets were in each beverage container.
After collecting all that data, we talked about whether this was a fair comparison to base our decisions on. Students decided that it was not fair because each container was a different size. Groups then began the difficult work of figuring out how to find the number of sugar packets in a 591 mL bottle of their beverage.
Some groups found a unit rate (number of sugar packets in 1 mL and then multiplied by 591 mL), some groups found out how many “times” larger the 591 mL bottle was and then multiplied the number of sugar packets by the same. Lastly, one group used an additive method to figure out how many of the smaller containers were in the larger one and then did the same thing to the sugar packets.
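The three strategies the groups used can be sketched in a few lines of Python. The container size and packet count below are made-up example numbers, not any group's actual beverage:

```python
# Scaling sugar packets up to a 591 mL bottle, three ways.
# The sample beverage (355 mL, 10 packets) is hypothetical.
container_ml, packets, target_ml = 355.0, 10.0, 591.0

# 1. Unit rate: packets per mL, then scale to the target size.
unit_rate = packets / container_ml
by_unit_rate = unit_rate * target_ml

# 2. Multiplicative: how many "times" larger is the target bottle?
times_larger = target_ml / container_ml
by_scaling = packets * times_larger

# 3. Additive: stack whole containers, then the leftover fraction.
whole = int(target_ml // container_ml)
leftover = (target_ml % container_ml) / container_ml
by_adding = packets * whole + packets * leftover

# All three express the same proportion, so they agree.
print(round(by_unit_rate, 1), round(by_scaling, 1), round(by_adding, 1))  # 16.6 16.6 16.6
```

The agreement is the point of the consolidation: each method is a different route to the same ratio.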
We consolidated by setting up ratios and then comparing a few different algebraic methods for solving it. We ended up with a great discussion on types of beverages, types of sugar (fruit sugar, liquid sugar, corn syrup) and ratios.
Connecting landscapes across the Wet Tropics
Landscape connectivity in ecology is, broadly, “the degree to which the landscape facilitates or impedes movement between resource patches” (Taylor et al., 1993).
- Be structural (where things are and the shape of the land);
- Be functional (how plants and animals move through and use the landscape);
- Occur across a range of scales, from particular sites (such as a road crossing) to the continental scale;
- Describe links between individual species, habitat, and ecosystems;
- Include biological processes such as pollination and seed dispersal, predator/prey relationships and food webs, water and nutrient flows, and types of ecosystem disturbance;
- Refer to a range of organisms and time scales – from daily movement of animals to the evolutionary flow of genetic materials over generations.
Such complexity is difficult to measure so we often fall back on measuring habitat connectivity for a particular species or collection of species.
However, our bigger and more pressing challenge in the Wet Tropics is to promote ecological connectivity across the landscape. For example, there are 14 separate, unconnected sections of the internationally significant World Heritage Area. The Wet Tropics has irreplaceable and endemic species that are threatened by ecological fragmentation. To make positive change for the future, we need to build all aspects of ecosystem function at a landscape scale, across a range of habitats.
Benefits of connectivity
Research has shown that improving connectivity will benefit particular species, including a range of threatened and iconic species such as the Southern Cassowary, Mahogany Gliders and Lumholtz Tree Kangaroos, that have become symbols for the need to reverse forest fragmentation.
Climate change and connectivity
Climate change scientists have also advocated increased habitat and connectivity as part of the solution to addressing the impacts of climate change. New research through the Regional NRM Planning for Climate Change project has provided valuable insight into the areas in the landscape that will be suitable for various species under future climate conditions. This provides us with new insight and direction when planning for where to invest in building landscape connectivity. Read more.
Rainforest Aboriginal People
The Wet Tropics region is home to 20 Rainforest Aboriginal tribal groups. Conservation and connectivity in the Wet Tropics are inextricably linked with that of Aboriginal cultural and spiritual values. The ecosystems of the region have evolved over thousands of years through active Aboriginal interaction with the land, water and sea. This interaction is paramount for the maintenance of Aboriginal culture and intrinsically linked with ecological processes. The participation of Traditional Owners and their cultural knowledge and perspectives of plants, animals and ecology is essential for management of connectivity.
Much of the land outside of the protected World Heritage Area is used for farming activities. Ecological connectivity can co-exist with agriculture and provide environmental and socioeconomic benefits. Conserving existing native vegetation and planting local native species can have many advantages such as controlling erosion, improving flood mitigation, improving water quality, creating shade and windbreaks, increasing pollination and reducing pests and weeds.
Connectivity and the community
Landscapes are a combination of natural and man-made environments and the interactions of nature and people over time. Landscapes help to define identity, a sense of place and a context for people’s lives and livelihoods. To maintain and improve ecological connectivity across the landscape we must also build social and community connectivity. Group activities such as tree-planting can be an excellent way to establish healthy relationships between diverse community interests founded on building a healthier landscape.
We use written symbols to express all kinds of messages: to share stories, note financial transactions, record history, imagine the future, to express love, hatred, humour or melancholy. Writing gives us access to knowledge. We can trace how an idea has changed over thousands of years, or argue against the opinions of those long dead, all because the discoveries of others have been recorded and collected.
According to historians, the earliest form of writing can be dated to around 3000 BC, when Sumerians in ancient Mesopotamia - modern day Iraq - wrote on clay tablets. This writing system is known as cuneiform.
Are there alternatives to writing?
Some of the most basic forms of communication use simple devices, often in moment to moment exchanges. A Yoruba man in Nigeria might send six shells to the woman he is attracted to. The Yoruba word efa means both 'six' and 'attracted'. If this chat-up line works, the girl replies with eight shells - ejo meaning both 'eight' and 'I agree'.
The Iron Age Celts didn't write things down but passed on their knowledge, stories and poems by word of mouth. It took their druids up to twenty years to remember everything. They were a highly sophisticated society, and knew about writing, but preferred to learn everything by heart.
Transmitting a story from person to person is a fluid process - much more so than reading a text fixed on a page. In the process of hearing a story, and retelling it, subtle changes can chip away at the story itself. Our different experiences and interpretations can influence the meaning of the story, and affect how we choose to pass it on.
By contrast, writing is an act of recording. The word written becomes fixed. Depending on what it is written with, its mark can remain preserved for a very long time. Although different readers might interpret a piece of writing in different ways, the text itself does not change.
Of course, since the late 1800s advances in technology have had a profound effect on our society's dependence on writing. Recording equipment has allowed us to record our stories, fixing them in time for as long as the physical record lasts, while telephones have allowed us to speak to people on the other side of the world.
The first record of the process of writing the Old Testament is God writing the 10 Commandments on stone tablets on Mount Sinai in Exodus 20. But only a few chapters later, in Ex 24:7, Moses has something which is described as the “book of the covenant”, which is probably Exodus 20-23, written down by Moses. From then, the Old Testament grew, through a process of editing and compiling various accounts, and people writing down messages given by God to inspired prophets, and so on. There's lots of detail, but it's very dull and the kind of thing boring academics argue about. It's far more interesting and helpful to talk about what the text means than try to come up with novel theories for how it came to be the way it is.
Peter sums up the overall process well:
Above all, you must understand that no prophecy of Scripture came about by the prophet’s own interpretation of things. For prophecy never had its origin in the human will, but prophets, though human, spoke from God as they were carried along by the Holy Spirit. (2 Peter 1:20-21)
The result, over a period of 1000 years or so, was the Tanakh. Tanakh is the Hebrew name for Torah (law) + Naviim (prophets) + Khetuvim (writings), and is pretty much exactly the 39 books of the Old Testament in most modern Protestant Bibles, but in a different order. It's written in Hebrew, with a few bits in Aramaic, which is closely related to Hebrew. It's possible a few bits (Daniel?) might have been written after the Greek conquest, but if so they were written in the old language, for the old culture and set before the conquest.
After the Exile to Babylon, the Jews gained a degree of independence under the Persian Empire, the beginnings of which are seen in Ezra and Nehemiah. But the Persian empire fell to Alexander the Great in 332BC, and over time Greek rule transformed Israel. Tensions occasionally rose as high as violent revolt, especially the one led by the Maccabees in 164BC, which led to an independent Jewish state until it was swallowed up by the Roman Empire.
However, most Jews lived outside Israel, in what is now Egypt, Syria, Turkey and Iraq, they spoke Greek rather than Hebrew as a first language and were heavily influenced by Greek culture in a way that the Palestinian Jews had largely resisted. These Jews translated the Tanakh into Greek, so they could read and study it more easily, with the result being the Septuagint (usually abbreviated to LXX). The LXX isn't quite a straight translation though. Some books (Jeremiah) are a bit shorter in the LXX. Others (Daniel, Esther) are a bit longer, with the addition of new stories to Daniel and explicit references to God and prayer in Esther. Some new books were added too - some stories (Tobit, Judith), some history (Maccabees), and some which fit the Greek/Jewish culture, like Wisdom of Solomon, which says how wonderful Greek philosophy is, then points out it's all there and even better in the Tanakh. The books were also in a different order, with the LXX closer to the order you'd find in most Bibles today.
That meant there were some striking differences between the Hebrew Scriptures, used by Palestinian Jews, and the standard Greek translation of it, used by Grecian Jews.
What about Jesus and the apostles?
Jesus and the first apostles were Palestinian Jews and therefore used the Hebrew Tanakh. Paul was at home in either culture – he was brought up in Turkey, but studied in Jerusalem – and although he quotes from the LXX when writing to Greek-speaking Christians, he only quotes from the bits which were translations of the Hebrew/Aramaic original.
By the end of Acts, however, the majority of Christians didn't speak Hebrew or Aramaic, only Greek, and this was stronger still after the destruction of Jerusalem in AD70. After that, the early church almost exclusively used the LXX for their Old Testament.
And the Jews?
Meanwhile, the Jews met to discuss the problem at the Council of Jamnia, which is often seen as the start of Rabbinic Judaism (i.e. after the destruction of the temple and of Israel). They agreed that the Hebrew Tanakh was indeed Scripture, but the extra bits in the Greek LXX weren't.
During the centuries of persecution, the LXX seems to have been fairly readily available. Judaism wasn't persecuted in the same way that Christianity was, and most churches seem to have owned and used the LXX as Scripture. When St Jerome was commissioned to translate the Bible into Latin in 382, he found these problems and argued against the use of the extra bits in the LXX. Augustine countered, arguing that the LXX itself was inspired by God, even where it got the translation of the underlying Hebrew wrong. Jerome made some compromises and his translation (the Vulgate) became the standard translation in the Latin-speaking world. The Vulgate:
- Translated the Hebrew text of the books in the Tanakh, but noted where the Greek disagreed.
- Where there were extra bits in the LXX, translated them too but mostly tagged them on at the end of each book.
- Kept the LXX book order, including the extra books.
And so it stayed for 1000 years.
In the 1500s, the Reformers rebelled against the established Latin Church. As part of this, they looked again at the question of which books should be in the Bible, and almost all of them concluded that the Old Testament we use should be the Hebrew Tanakh, not the Greek Septuagint. Luther, for example, translated the Old Testament from Hebrew into German, and relegated the books that were only in the LXX to an appendix to the OT entitled “Apocrypha: These Books Are Not Held Equal to the Scriptures, but Are Useful and Good to Read”. Luther's idea was widely copied. In the Church of England, the policy was (and remains) as follows:
And the other Books (as Jerome saith) the Church doth read for example of life and instruction of manners; but yet doth it not apply them to establish any doctrine.
Over time, the Apocrypha was dropped from most Bibles to save on printing costs and to make it clear that they aren't on the same level as Scripture.
Meanwhile, the Roman Catholic Church met at the Council of Trent to decide how to respond to the Reformation. One of the items on the agenda was which books should be in the Bibles, and Trent ruled that all the books in the LXX were Scripture.
The Situation Today
By and large, the situation today is as follows:
- The Protestant Old Testament is the Hebrew Tanakh, but with the Greek order of books.
- The Catholic Old Testament is the slightly weird Jerome compromise between the Hebrew and Greek Old Testaments, but all of it held to be authoritative.
- The Orthodox Old Testament is the LXX, with various slight variations among different groups.
And for those who are interested, the order of books in the Hebrew Tanakh is as follows:
- Genesis – Deuteronomy (the Torah)
- Joshua - 2 Kings, but missing out Ruth (the Former Prophets)
- Isaiah, Jeremiah, Ezekiel (the Major Prophets)
- Hosea – Malachi (the Minor Prophets)
- Song of Songs
- Ezra - Nehemiah
- 1 & 2 Chronicles
(And that was the simplified version!)
The classic four-resistor difference amplifier seems simple, but many circuit implementations perform poorly. Based on actual production designs, this article shows some of the pitfalls encountered with discrete resistors, filtering, ac common-mode rejection, and high noise gain.
College electronics courses illustrate applications for ideal op amps, including inverting and noninverting amplifiers. These are then combined to create a difference amplifier. The classic four-resistor difference amplifier, shown in Figure 1, is quite useful and has been described in textbooks and literature for more than 40 years.
The transfer function of this amplifier is

VOUT = V2 × [R4/(R3 + R4)] × [(R1 + R2)/R1] - V1 × (R2/R1)     (Equation 1)
With R1 = R3 and R2 = R4, Equation 1 simplifies to

VOUT = (R2/R1) × (V2 - V1)     (Equation 2)
This simplification occurs in textbooks, but never in real life, as the resistors are never exactly equal. In addition, other modifications of the basic circuit can yield unexpected behavior. The following examples come from real application questions, although they have been simplified to show the essence of the problem.
An important function of the difference amplifier is to reject signals that are common to both inputs. Referring to Figure 1, if V2 is 5 V and V1 is 3 V, for example, then 4 V is common to both. V2 is 1 V higher than the common voltage, and V1 is 1 V lower. The difference is 2 V, so the "ideal" gain of R2/R1 would be applied to 2 V. If the resistors are not perfect, part of the common-mode voltage will be amplified by the difference amplifier and appear at VOUT as a valid difference between V1 and V2 that cannot be distinguished from a real signal. The ability of the difference amplifier to reject this is called common-mode rejection (CMR). This can be expressed as a ratio (CMRR) or converted to decibels (dB).
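To see how resistor mismatch converts common-mode voltage into a false signal, the standard transfer function can be evaluated directly. This is a sketch using the V2 = 5 V, V1 = 3 V example above, with assumed 10 kΩ resistors:

```python
def diff_amp_out(v1, v2, r1, r2, r3, r4):
    """Output of the four-resistor difference amplifier with an ideal op amp:
    VOUT = V2*(R4/(R3+R4))*((R1+R2)/R1) - V1*(R2/R1)."""
    return v2 * (r4 / (r3 + r4)) * ((r1 + r2) / r1) - v1 * (r2 / r1)

# Perfectly matched unity-gain case: all resistors 10 kOhm.
print(diff_amp_out(3, 5, 10e3, 10e3, 10e3, 10e3))   # 2.0, the true difference

# Worst-case 1% mismatch: some of the 4 V common-mode voltage leaks
# through and is indistinguishable from a real differential signal.
print(round(diff_amp_out(3, 5, 10.1e3, 9.9e3, 9.9e3, 10.1e3), 3))
```

With the 1% mismatch the output lands tens of millivolts away from the ideal 2.0 V even though the differential input has not changed.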
In a 1991 article, Ramón Pallás-Areny and John Webster showed that the common-mode rejection, assuming a perfect op amp, is

CMRR = (Ad + 1)/(4t)
where Ad is the gain of the difference amplifier and t is the resistor tolerance. Thus, with unity gain and 1% resistors, the CMRR is 50 V/V, or about 34 dB; with 0.1% resistors, the CMRR is 500 V/V, or about 54 dB—even given a perfect op amp with infinite common-mode rejection. If the op amp's common-mode rejection is high enough, the overall CMRR is limited by resistor matching. Some low-cost op amps have a minimum CMRR in the 60 dB to 70 dB range, making the calculation more complicated.
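The worked numbers above can be checked with a short script, assuming the (Ad + 1)/(4t) worst-case form quoted from the Pallás-Areny paper and a perfect op amp:

```python
import math

def worst_case_cmrr(ad, t):
    """Worst-case CMRR of the four-resistor difference amplifier with a
    perfect op amp, per the formula above: (Ad + 1) / (4 * t)."""
    return (ad + 1) / (4 * t)

def v_per_v_to_db(ratio):
    return 20 * math.log10(ratio)

# Unity gain with 1% resistors, then 0.1% resistors, as in the text.
for tol in (0.01, 0.001):
    cmrr = worst_case_cmrr(1, tol)
    print(round(cmrr), "V/V =", round(v_per_v_to_db(cmrr)), "dB")   # 50/34, then 500/54
```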
Low Tolerance Resistors
The first suboptimal design, shown in Figure 2, was a low-side current sensing application using an OP291. R1 through R4 were discrete 0.5% resistors. From the Pallás-Areny paper, the best CMR would be 64 dB. Luckily, the common-mode voltage is very close to ground, so CMR is not the major source of error in this application. A current sense resistor with 1% tolerance will cause 1% error, but this initial tolerance can be calibrated or trimmed. The operating range was more than 80°C, however, so the temperature coefficient of the resistors must be taken into account.
For very low value current shunts, use a 4-terminal, Kelvin sense resistor. With a high-accuracy 0.1-Ω resistor, make the connections directly to the resistor, as a few tenths of an inch of PCB trace can easily add 10 mΩ, causing more than 10% error. But the error gets worse: the copper trace on the PCB has a temperature coefficient greater than 3000 ppm/°C.
The value of the sense resistor must be chosen carefully. Higher values develop larger signals. This is good, but power dissipation (I2R) increases, and could reach several watts. With smaller values, in the milliohm range, parasitic resistance from wires or PCB traces can cause significant errors. To reduce these errors, Kelvin sensing is usually employed. A specialized 4-terminal resistor (Ohmite LVK series, for example) can be used, or the PCB layout can be optimized to use standard resistors, as described in "Optimize High-Current Sensing Accuracy by Improving Pad Layout of Low-Value Shunt Resistors." For very small values, a PCB trace can be used, but this is not very accurate, as explained in "The DC Resistance of a PCB Trace."
Commercial 4-terminal resistors, such as those from Ohmite or Vishay, can cost several dollars or more for 0.1% tolerance with very low temperature coefficients. A complete error budget analysis can show where the accuracy can be improved for the least increase in cost.
One complaint regarding a large offset (31 mV) with no current through the sense resistor was caused by a "rail-to-rail" op amp that couldn't swing all the way to the negative rail, which was tied to ground. The term rail-to-rail is misleading: the output will get close to the rail—a lot closer than classical emitter follower output stages—but will never quite reach the rail. Rail-to-rail op amps specify a minimum output voltage, VOL, of either VCE(SAT) or RDS(ON) × ILOAD, as described in "MT-035: Op Amp Inputs, Outputs, Single-Supply, and Rail-to-Rail Issues." With a noise gain of 30, the output will be 1.25 mV × 30 = ±37.5 mV due to offset voltage. But the output can only get down to 35 mV, so the output will be between 35 mV and 37.5 mV for a load current of 0 A. Depending on the polarity of VOS, the output could be as big as 72.5 mV with no load current. With a max VOS of 30 µV and a maximum VOL of 8 mV, a modern zero-drift amplifier, such as the AD8539, would reduce the total error to the point that the error due to the sense resistor would dominate.
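A quick sketch of that error budget, using the numbers from this example and treating VOL and the amplified offset as worst-case terms stacked on top of each other:

```python
def worst_zero_current_output(vos, noise_gain, vol):
    """Worst-case output with no load current: the op amp may sit at its
    minimum swing VOL with the amplified input offset stacked on top."""
    return vol + vos * noise_gain

# Numbers from the text: VOS = 1.25 mV, noise gain 30, VOL = 35 mV.
print(round(worst_zero_current_output(1.25e-3, 30, 35e-3) * 1e3, 1))   # 72.5 (mV)

# Zero-drift numbers from the text (AD8539: VOS max 30 uV, VOL max 8 mV).
print(round(worst_zero_current_output(30e-6, 30, 8e-3) * 1e3, 1))      # 8.9 (mV)
```

The second result shows why the zero-drift part pushes the dominant error back onto the sense resistor itself.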
Another Low-Side Sensing Application
The next example, shown in Figure 3, had a lower noise gain, but it used a low-precision quad op amp, with 3-mV offset, 10-µV/°C offset drift, and 79 dB CMR. An accuracy of ±5 mA over a 0-A to 3.6-A range was required. With a ±0.5% sense resistor, the required ±0.14% accuracy cannot be achieved. With a 100-mΩ resistor, ±5 mA creates a ±500-µV drop. Unfortunately, the op amp's offset voltage over temperature is ten times greater than the measurement. Even with VOS trimmed to zero, a 50°C change would consume the entire error budget. With a noise gain of 13, any change in VOS will be multiplied by 13. To improve performance, use a zero-drift op amp, such as the AD8638, ADA4051, or ADA4528, a thin-film resistor array, and a higher precision sense resistor.
High Noise Gain
The design shown in Figure 4 attempts to measure high-side current. The noise gain is 250. The OP07C op amp specifies 150-µV max VOS. The maximum error is 150 µV × 250 = 37.5 mV. To improve this, use the ADA4638 zero-drift op amp, which specifies 12.5 µV offset from –40°C to +125°C. With high noise gains, however, the common-mode voltage will be very close to the voltage across the sense resistor. The input voltage range (IVR) for the OP07C is 2 V, meaning that the input voltage must be at least 2 V below the positive rail. For the ADA4638, IVR = 3 V.
Single Capacitor Roll-Off
The example shown in Figure 5 is a little more subtle. So far, all of the equations focused on the resistors; but, more correctly, the equations should have referred to impedances. With the addition of capacitors, either deliberate or parasitic, the ac CMRR depends on the ratio of impedances at the frequency of interest. To roll off the frequency response in this example, capacitor C2 was added across the feedback resistor, as is commonly done for inverting op amp configurations.
To match the impedance ratios Z1 = Z3 and Z2 = Z4, capacitor C4 must be added. It's easy to buy 0.1% or better resistors, but even 0.5% capacitors can cost more than $1.00. At very low frequencies the impedance may not matter, but a 0.5-pF difference on the two op amp inputs caused by capacitor tolerance or PCB layout can degrade the ac CMR by 6 dB at 10 kHz. This can be important if a switching regulator is used.
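The impedance-ratio view can be explored numerically. This sketch assumes 10 kΩ resistors and 1 nF roll-off capacitors (values chosen for illustration, not taken from the article) and shows how a small capacitance mismatch produces a finite ac CMRR at 10 kHz:

```python
import math

def z_rc_parallel(r, c, f):
    """Complex impedance of resistor r in parallel with capacitor c at frequency f."""
    zc = 1 / (2j * math.pi * f * c)
    return r * zc / (r + zc)

def cmrr_db(z1, z2, z3, z4):
    """CMRR of the four-impedance difference amplifier with an ideal op amp."""
    a_cm = (z1 + z2) / z1 * z4 / (z3 + z4) - z2 / z1   # common-mode gain
    a_d = z2 / z1                                      # nominal differential gain
    if a_cm == 0:
        return float("inf")
    return 20 * math.log10(abs(a_d) / abs(a_cm))

f, r = 10e3, 10e3
# Perfectly matched resistive network: common-mode gain cancels exactly.
ideal = cmrr_db(complex(r), complex(r), complex(r), complex(r))
# Same network with 1 nF roll-off caps and a 0.5 pF mismatch between sides.
real = cmrr_db(complex(r), z_rc_parallel(r, 1e-9, f),
               complex(r), z_rc_parallel(r, 1e-9 + 0.5e-12, f))
print(ideal, round(real, 1))   # inf, then a finite ac CMRR
```

Even a sub-picofarad mismatch from capacitor tolerance or PCB layout caps the achievable ac CMR, which is why matched monolithic networks do so much better at frequency.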
Monolithic difference amplifiers, such as the AD8271, AD8274, or AD8276, have much better ac CMRR because the two inputs of the op amp are in a controlled environment on the die, and the price is often lower than that of a discrete op amp and four precision resistors.
Capacitor Between the Op Amp Inputs
To roll off the response of the difference amplifier, some designers attempt to form a differential filter by adding capacitor C1 between the two op amp inputs, as shown in Figure 6. This is acceptable for in-amps, but not for op amps. VOUT will move up and down to close the loop through R2. At dc, this isn't a problem, and the circuit behaves as described in Equation 2. As the frequency increases, the reactance of C1 decreases. Less feedback is delivered to the op amp input, so the gain increases. Eventually, the op amp is operating open loop because the inputs are shorted by the capacitor.
On a Bode plot, the open-loop gain of the op amp is decreasing at –20 dB/dec, but the noise gain is increasing at +20 dB/dec, resulting in a –40 dB/dec crossing. As taught in control systems class, this is guaranteed to oscillate. As a general guideline: never use a capacitor between the inputs of an op amp. (There are very few exceptions, but they won't be covered here.)
The four-resistor difference amplifier, whether discrete or monolithic, is widely used. To achieve a solid, production worthy design, carefully consider noise gain, input voltage range, impedance ratios, and offset voltage specifications.
Kitchin, Charles and Counts, Lew. A Designer's Guide to Instrumentation Amplifiers, 3rd edition. 2006. Page 2-1.
O'Sullivan, Marcus. "Optimize High-Current Sensing Accuracy by Improving Pad Layout of Low-Value Shunt Resistors." Analog Dialogue, Volume 46, Number 2, 2012.
Pallás-Areny, Ramón and Webster, John G. Common Mode Rejection Ratio in Differential Amplifiers. IEEE Transactions On Instrumentation and Measurement, Volume 40, Number 4, August 1991. Pages 669–676.
MT-035 Tutorial. Op Amp Inputs, Outputs, Single-Supply, and Rail-to-Rail Issues.
Mean and Variance
The mean, or first moment, of a distribution is a measure of the average. Suppose that a random variable X has three outcomes:

X     P(X)
3     .2
5     .6
12    .2
To calculate the mean of X, we compute E(X). That is,

E(X) = X̄ = .2(3) + .6(5) + .2(12) = 6.0
The variance of X is calculated as E[(X - X̄)²]. We can augment our table as follows:

X     P(X)   X - X̄   (X - X̄)²
3     .2     -3       9
5     .6     -1       1
12    .2     6        36
Now, we take E[(X - X̄)²]:

E[(X - X̄)²] = .2(9) + .6(1) + .2(36) = 9.6
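The table arithmetic can be reproduced in a few lines, using the same three-outcome distribution:

```python
# Mean, variance, and standard deviation of the distribution above.
values = [3, 5, 12]
probs = [0.2, 0.6, 0.2]

mean = sum(p * x for p, x in zip(probs, values))
var = sum(p * (x - mean) ** 2 for p, x in zip(probs, values))
std = var ** 0.5

print(round(mean, 1), round(var, 1), round(std, 2))   # 6.0 9.6 3.1
```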
Suppose that the values of X were raised to 4, 6, and 13. What do you think would happen to the mean of X? What do you think would happen to the variance of X? Verify your guesses by setting up the table and doing the calculation.
One way to understand the relationship between E(X²) and the variance of X is to write out the following identity:

Var(X) = E[(X - X̄)²] = E(X²) - [E(X)]²
The standard deviation of a random variable is the square root of the variance. In the example above, the standard deviation would be the square root of 9.6, or about 3.1.
The mean of X is written as μX. The Greek letter μ is pronounced "mew," although it often is transliterated as "mu." The standard deviation of X is written as σX. The Greek letter σ is called "sigma." Using Greek notation, the variance is written as σ²X.
Often, we will take two random variables, X and Y, and add them to create a new random variable. We could give the new random variable its own name, Z, but often we just call it X+Y.
The properties of the expectation operator imply that:

μX+Y = μX + μY
σ²X+Y = σ²X + σ²Y + 2σXY
The term σXY is called the covariance of X and Y. We will return to it later in the course. For now, we note that in the case where X and Y are independent, the covariance is 0, and the equation reduces to:

σ²X+Y = σ²X + σ²Y
It follows that if we have n independent random variables X that have the same mean μX and variance σ²X, and we call the sum of these random variables V, then

μV = nμX
σ²V = nσ²X
These are called iid equations, because they refer to the sum of independent, identically distributed random variables. Verify that the iid equations are correct.
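One way to do that verification is by simulation. This sketch draws from the distribution used earlier (an arbitrary choice of n = 5) and checks that the mean and variance of the sum scale with n:

```python
import random

# Simulate V = the sum of n independent draws from the distribution
# with outcomes 3, 5, 12 and probabilities 0.2, 0.6, 0.2.
values, probs = [3, 5, 12], [0.2, 0.6, 0.2]
n, trials = 5, 200_000
random.seed(0)

sums = [sum(random.choices(values, weights=probs, k=n)) for _ in range(trials)]
sim_mean = sum(sums) / trials
sim_var = sum((s - sim_mean) ** 2 for s in sums) / trials

print(round(sim_mean, 2))   # close to n * mean of X     = 5 * 6.0 = 30
print(round(sim_var, 2))    # close to n * variance of X = 5 * 9.6 = 48
```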
Welcome to Philosopher Fridays, where I look at what I find most interesting about a given philosophical figure in an introductory way. These installments are not meant to provide a comprehensive overview, but instead show how these figures have informed and influenced my thoughts on reason, writing, and experience.
LOCKE: John Locke (1632 – 1704) was a British philosopher known for his influence on empiricism and the enlightenment movement. He wrote on the state of nature and the social contract, the use and development of monetary systems, the tabula rasa (the idea that we are born as blank slates upon which our personalities are written in experience), and more. For more about Locke’s life and work, check out the Stanford Encyclopedia entry on the subject.
What I find most interesting about Locke is the influence of his views on humanity and nature, found most famously in his Second Treatise of Government.
According to Locke, we start in a State of Nature wherein we have perfect freedom; no one is subjected to another person’s rule, but we are also subject to the laws of nature and of God. These laws come in two forms: what we now know of as the laws of physics, and the hierarchy of reason. The law of reason, for Locke, is that humans were not made for use by other humans, so we ought not harm each other. There are lower creatures that were made for our use, however – those who cannot or do not follow the laws of reason. The aim of reason is preservation of human life, and so the State of Nature can persist in peace so long as all act for the sake of preserving humankind.
It’s a simple enough dictum, but of course, that isn’t how things play out. As soon as someone fails to organize their behavior around the end goal of preserving all of humanity, the natural law of reason is violated, and in this moment the rules become more flexible. While it may seem like nothing can be done within the State of Nature, Locke says there is recourse to keep order; one may break the law of nature to stop others from breaking that law. This is the beginning of a legitimate hierarchy between humans; one man can have power over another, but only in so far as the man being punished has broken the natural law, and forfeited his status of equality. To harm another man on his property – or even merely threaten to do so – is to recuse oneself from the ranks of humanity:
In transgressing the law of nature, the offender declares himself to live by another rule than that of reason and common equity (Section 8).
Besides the crime which consists in violating the law, and varying from the right rule of reason, whereby a man so far becomes degenerate, and declares himself to quit the principles of human nature… (Section 10).
But this system is tenuous. For Locke, all have the right to punish someone who has broken the laws of nature and disturbed the peace of the state of nature, and so once the laws of nature are disobeyed, anyone in the State of Nature might have cause to fear for the safety of his natural rights to freedom, life, liberty, and property.
When this freedom is even just threatened, the offender is essentially violating the laws of nature. S/he is not using reason, and substitutes for the law of reason abject fear and irrational behavior, which are both unpredictable and against the best interests of all. Once the natural order has been thus disrupted, we are no longer in the State of Nature, but have entered the State of War. To avoid this it is then necessary to form an agreement to enter into a common society.
Enter the famous Social Contract. Essentially, for Locke, its goal is to restore natural order and impose the laws of reason upon the community by creating a common authority on earth to govern and adjudicate disputes, so that people may rest easy, knowing that our freedom is protected.
Entering into this new state of society/government on earth changes the laws that we must follow – not natural laws, but man-made ones that serve the preservation of natural rights in the face of this threat. We get from this a new version of freedom: in society, to be free is to be equally subject to a set of rules to which all consent by entering into a protected state of being. An offender of human-made laws has two choices for punishment: death or servitude (for Locke, the proper punishment for nearly any transgression is death).
This is justified, according to Locke, because in violating the laws of nature, the offender showed themselves to be unreasonable – in seeing themselves as above the law, the offending person effectively removed themselves from the ranks of humanity, and is thus to be treated as belonging below the law alongside animals, as a slave, the property of their master.
Speaking of property, in nature “God… has given the earth to the children of men; given it to mankind in common” (Section 25). And so, according to Locke, a person can come to own a particular thing by taking it out of the state of nature. This can be done in two ways: through the application of reason, and through the labor of human hands. If you pick an apple from a tree, your labor transforms that object into your property. If you turn a wild field into a farm, then you own the whole field. Essentially, when your labor makes something work to its best advantage (through reason), it becomes yours, so long as you have the consent of your fellow commoners.
There are some limitations to this, however. The laws of nature stipulate that owners make reasonable use of that which they remove from nature. If we let something go to waste, either by spoilage (which can be attended to by converting the resources into money), or by neglecting to cultivate it to its most efficient use, it is no longer properly ours.
God gave the world to men in common; but since he gave it them for their benefit, and the greatest conveniences of life they were capable to draw from it, it cannot be supposed he meant it should always remain common and uncultivated. He gave it to the use of the industrious and rational, (and labor was to be his title to it;) not to the fancy of the covetous or the quarrelsome and contentious. He that had as good left for his improvement, as was already taken up, needed not complain, ought not to meddle with what was already improved by another’s labour… (Section 34).
And so in the end, Locke’s vision of a law of fairness and rationality depends upon what counts as “industrious” and “reasonable use” and what counts as “waste”. For Locke, to keep the land in common is wasting it – to stop someone from removing resources from the State of Nature is theft, which is tantamount to a declaration of war. To leave the land in its natural state is to deprive humanity of its worth and value:
…he who appropriates land to himself by his labour, does not lessen, but increase the common stock of mankind: for the provisions serving to the support of human life, produced by one acre of inclosed and cultivated land, are (to speak much within compass) ten times more than those which are yielded by an acre of land of an equal richness lying waste in common (Section 37).
He goes on further to ask:
…whether in the wild woods and uncultivated waste of America, left to nature, without any improvement, tillage or husbandry, a thousand acres yield the needy and wretched inhabitants as many conveniencies of life, as ten acres of equally fertile land do in Devonshire, where they are well cultivated (Section 37)?
The conclusion we can draw from this is that for Locke, value lies outside of embodied life. A person may be considered more or less human depending not just on their treatment of others, but on their attitude toward nature. Resources that are converted to money or ideas or systems can last forever as a metaphysical ideal of value, while natural items can easily spoil; people who leave the land in its natural state effectively rob humanity of potential metaphysical value.
While I could easily critique Locke’s logic, his conclusions are, for me, more significant and more important to examine. It’s easy to see Locke’s influence reach beyond our political and economic systems to the elitist tendency to look down upon those who are closer to nature and embodiment, devaluing any repetitive physical and domestic labor – our modern equivalents of unglamorous, life-sustaining work (childcare, farming, hunting, retail work, sanitation services, and more). This isn’t a truly sustainable way to live, as those repetitive tasks provide the necessary foundation for the “higher” uses of reason that Locke prizes. |
A new study by researchers from McGill University and the University of British Columbia shows that mice, like humans, express pain through facial expressions.
McGill Psychology Prof. Jeffrey Mogil, UBC Psychology Prof. Kenneth Craig and their respective teams have discovered that when subjected to moderate pain stimuli, mice showed discomfort through facial expressions in the same way humans do. Their study, published online May 9 in the journal Nature Methods, also details the development of a Mouse Grimace Scale that could inform better treatments for humans and improve conditions for lab animals.
Because pain research relies heavily on rodent models, an accurate measurement of pain is paramount in understanding the most pervasive and important symptom of chronic pain, namely spontaneous pain, says Mogil.
"The Mouse Grimace Scale provides a measurement system that will both accelerate the development of new analgesics for humans, but also eliminate unnecessary suffering of laboratory mice in biomedical research," says Mogil. "There are also serious implications for the improvement of veterinary care more generally."
This is the first time researchers have successfully developed a scale to measure spontaneous responses in animals that resemble human responses to those same painful states.
Mogil, graduate student Dale Langford and colleagues in the Pain Genetics Lab at McGill analyzed images of mice before and during moderate pain stimuli - for example, the injection of dilute inflammatory substances, as are commonly used around the world for testing pain sensitivity in rodents. The level of pain studied could be comparable, researchers said, to a headache or the pain associated with an inflamed and swollen finger easily treated by common analgesics like Aspirin or Tylenol.
Mogil then sent the images to Craig's lab at UBC, where facial pain coding experts used them to develop the scale. Craig's team proposed scoring five facial features according to the severity of the stimulus: orbital tightening (eye closing), nose bulge, cheek bulge, ear position, and whisker position. Craig's laboratory had previously established the study of facial expression as the standard for assessing pain in human infants and others with verbal communication limitations. This work is an example of successful "bedside-to-bench" translation, where a technique known to be relevant in our species is adapted for use in laboratory experiments.
Continuing experiments in the lab will investigate whether the scale works equally well in other species, whether analgesic drugs given to mice after surgical procedures work well at their commonly prescribed doses, and whether mice can respond to the facial pain cues of other mice. |
A shield bearing a person or institution's heraldic bearings. The term derives from the linen surcoat worn by medieval knights over their chain mail. Strictly speaking, only the shield itself can be referred to as the coat of arms, though it is often used incorrectly to describe the whole heraldic ensemble of the shield with its adjuncts including crest, motto, and supporters. A coat of arms consists either of a pattern formed by geometrical divisions or of beasts, birds, or other animate or inanimate objects arranged in a particular manner in certain colours.
Subjects: Art — Medieval and Renaissance History (500 to 1500). |
This description of how science creates new theories illustrates key elements of good scientific practice: precision when defining terms, processes, context, results, and limitations; openness to new ideas, including criticism and refutation; and protections against bias and overstatement (going beyond the facts). Although these elements have been discussed here in the context of creating new methods and knowledge, the same principles hold when applying known processes or knowledge. In day-to-day forensic science work, the process of formulating and testing hypotheses is replaced with the careful preparation and analysis of samples and the interpretation of results. But that applied work, if done well, still exhibits the same hallmarks of basic science: the use of validated methods and care in following their protocols; the development of careful and adequate documentation; the avoidance of biases; and interpretation conducted within the constraints of what the science will allow.
One particular task of science is the validation of new methods to determine their reliability under different conditions and their limitations. Such studies begin with a clear hypothesis (e.g., “new method X can reliably associate biological evidence with its source”). An unbiased experiment is designed to provide useful data about the hypothesis. Those data—measurements collected through methodical prescribed observations under well-specified and controlled conditions—are then analyzed to support or refute the hypothesis. The thresholds for supporting or refuting the hypothesis are clearly articulated before the experiment is run. The most important outcomes from such a validation study are (1) information about whether or not the method can discriminate the hypothesis from an alternative, and (2) assessments of the sources of errors and their consequences on the decisions returned by the method. These two outcomes combine to provide precision and clarity about what is meant by “reliably associate.”
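The logic of such a validation study can be sketched as a toy simulation. Nothing below comes from a real forensic method: the score distributions, threshold, and sample size are invented for illustration. The point is only that the decision threshold is articulated before the data are examined, and that both error rates are then reported.

```python
import random

# Toy illustration only: the score distributions, threshold, and sample
# size are invented, not taken from any real forensic validation study.
random.seed(0)

THRESHOLD = 0.5  # decision threshold articulated BEFORE the experiment

# Simulated comparison scores for a hypothetical method "X": same-source
# pairs tend to score high, different-source pairs tend to score low.
same_source = [random.gauss(0.8, 0.1) for _ in range(1000)]
diff_source = [random.gauss(0.2, 0.1) for _ in range(1000)]

# False negatives: same-source pairs the method fails to associate.
# False positives: different-source pairs it wrongly associates.
false_neg_rate = sum(s < THRESHOLD for s in same_source) / len(same_source)
false_pos_rate = sum(s >= THRESHOLD for s in diff_source) / len(diff_source)

print(f"false negative rate: {false_neg_rate:.3f}")
print(f"false positive rate: {false_pos_rate:.3f}")
```

Reporting the two rates together, rather than a single accuracy figure, is what gives precision to a claim such as "reliably associate."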
For a method that has not been subjected to previous extensive study, a researcher might design a broad experiment to assist in gaining knowledge about its performance under a range of conditions. Those data are then analyzed for any underlying patterns that may be useful in planning or interpreting tests that use the new method. In other situations, a process already has been formulated from existing experimental data, knowledge, and theory (e.g., “biological markers A, B, and C can be used in DNA forensic investigations to pair evidence with suspect”).
To confirm the validity of a method or process for a particular purpose (e.g., for a forensic investigation), validation studies must be performed.
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) developed a joint document, |
The topic Territorial Army is discussed in the following articles:
...itself during the Napoleonic Wars (1800–15). Reforms were carried out to improve its organization and efficiency in the late 1800s. Between 1905 and 1912 the Territorial Force (after 1921, Territorial Army) and Special Reserve were established. The army was greatly increased in size by conscription during World War I but was reduced to a minimum with an end to conscription after 1919....
In Great Britain the Territorial Force, a militia-like reserve organization for home defense, was created in 1908. It became the Territorial Army in 1921, and overseas service was required. During World War II the militia principle was followed in the establishment of the Home Guard. Militia forces—conscripts who undergo periodic military training until retired to an inactive reserve in...
Perhaps you’ve seen sheet metals used on several occasions for roofing, welding, auto manufacturing, the food industry, or in special sheet metal fabrication, and you’re wondering how the thickness of these smooth, beautiful pieces is determined. A system called the gauge system determines this thickness.
This article explains the meaning and importance of sheet metal gauge for sheet metal fabrication. It also explains how to measure gauge and how to choose the right metal thickness for your fabrication.
What is Gauge in Sheet Metal Fabrication?
When we talk of gauge in sheet metal manufacturing, we refer to the standard sheet metal thickness for a specific material. Therefore, if you want to know how thick is sheet metal material, its gauge is what you should look for.
The higher the gauge number, the thinner the material. Hence, metals with a large gauge number will be thin, and vice versa. In many parts of the world, machinists express sheet metal thickness in millimeters. However, ferrous metals follow a different gauge scale from nonferrous metals. For instance, copper is a nonferrous metal, and its thickness is commonly specified in ounces per square foot.
Choosing a metal with the right gauge is essential to a successful design. It is what determines whether an object will last or fail after short use. If the sheet metal gauge is incorrect, there may be minor or catastrophic effects on your design.
How to Measure Sheet Metal Thickness
Now that you know what a sheet metal gauge is, you should learn how to measure sheet metal thickness. You can measure sheet metal using a regular tape or a gauge wheel. With a sheet metal gauge chart (shown in the next section), you can convert the gauge size to mm or inches.
Solution 1: Measuring Sheet Metal Thickness With Tape
There are three simple steps required:
Use the millimeter hash marks on your tape to find the thickness of the sheet metal. Remember that there are two different scales on your tape, cm and mm. Using the former will not provide the accuracy you need.
Convert the number obtained in mm to inches by multiplying it by 0.03937. For instance, if you got a 60 mm measurement, multiplying by that number gives 2.3622 inches.
Compare your result in inches with a sheet metal gauge chart. That way, you’ll find the appropriate gauge of your metal.
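The conversion step above can be sketched in a few lines of Python. The function name is ours; the 0.03937 factor and the 60 mm example come from the steps described.

```python
# Sketch of the tape-measure conversion described above.
MM_TO_INCH = 0.03937  # the mm-to-inch factor used in the text

def mm_to_inches(mm):
    """Convert a millimeter tape reading to inches."""
    return mm * MM_TO_INCH

# The article's own example: a 60 mm reading.
print(round(mm_to_inches(60), 4))  # 2.3622
```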
Solution 2: Measuring Sheet Metal Thickness With Wheel
A gauge wheel is a simple tool with no moving parts. You can use it to measure both wire and sheet. There are three simple steps involved in this process:
If you measure a nonferrous metal (metals without iron) like copper, gold, or silver, you should check the front of the gauge wheel and confirm it reads “nonferrous metal.” On the other hand, if you wish to determine the sheet metal gauge of ferrous metals (iron-containing metals) like cast iron, stainless steel, etc., ensure to choose a gauge wheel that reads “ferrous metal.” Use the right gauge wheel to get the correct measurement.
There are gaps of different sizes around the gauge wheel. Each gap has a number written in front, and the principle is to place your piece in each gap until you find a place where it perfectly fits. There’s a round cut out below the gaps; those are not the right ones to use. Instead, use the gaps at the top.
Once you find the right gap where your metal perfectly fits in, you should check the number in front. For example, if your metal fits in a gap with 16 written in the front, that shows you have a 16 gauge metal.
Standard Sheet Metal Gauge Chart
A gauge number such as “16” is not itself a measurement; it still needs to be converted to millimeters or inches. That conversion is the right way to determine the thickness of sheet metals. For a stress-free conversion, you may refer to a gauge conversion chart.
For example, according to a gauge conversion chart, 16 gauge galvanized steel is 0.0635 inches or 1.6129mm.
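A gauge-chart lookup can be sketched as a small table. Only the 16-gauge galvanized-steel value (0.0635 inches) comes from the text above; in practice the table would be filled in from a published gauge conversion chart, and each material family would need its own table.

```python
# Hypothetical lookup table; only the 16-gauge entry is from the text.
GALVANIZED_STEEL_GAUGE_TO_INCHES = {
    16: 0.0635,  # add further gauges from a published chart
}

def gauge_to_thickness(gauge, chart=GALVANIZED_STEEL_GAUGE_TO_INCHES):
    """Return (inches, millimeters) for a gauge number in the chart."""
    inches = chart[gauge]
    return inches, inches * 25.4  # 1 inch = 25.4 mm exactly

inch, mm = gauge_to_thickness(16)
print(inch, round(mm, 4))  # 0.0635 1.6129
```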
Why is Choosing Gauge Important for Sheet Metal Fabrication?
It is risky to purchase any available metal without considering its thickness, or gauge. The following are reasons why choosing the right gauge is vital for sheet metal fabrication:
- Durability: Whether your design will last or not is mainly dependent on the gauge of the metal used. Some objects require thick metals with exceptional strength, while some don’t. Hence, the durability of such a design depends on the gauge of the metal.
- Economic: Not all designs require thick metals. Hence, too thick or too much metal can add to your production and shipping costs, which is not economical.
- Structural problem and deformation: Using too thin metal for specific designs might result in structural issues or deformation of the item.
How to Decide Gauge for Sheet Metal Fabrication
One of the technical things a fabricator should know is the right sheet metal gauge to use. Many fabricators often fail in this regard, and the end-user of such a design usually pays dearly for such a mistake. Therefore, it is crucial to use the correct thickness of sheet metal to optimize efficiency and functionality. Below are how to decide gauge for sheet metal fabrication:
If you have a construction project that requires high rigidity, you’ll have to opt for a lower gauge sheet material. Remember, as discussed earlier, the lower the gauge, the thicker the object. On the other hand, if your construction requires different curves and high flexibility, you should go for a higher gauge material.
You may also want to consider lower gauge materials if your design will be exposed to harsh weather, high temperature, and pressure. If it will be kept indoors or safe from those factors mentioned, high gauge materials are a good choice too.
The workflow of a project from beginning to end can be affected by the sheet metal thickness used. A design made of sheet metal materials with the right thickness will be more effective in performing the designated task. On the other hand, a design made with the wrong thickness of sheet metal material will be less effective in performing the job for which it was intended. Therefore, considering efficiency and suitability will help you choose the right sheet metal gauge for fabrication.
Cost is another factor that can help you choose a suitable gauge for sheet metal fabrication. Sheet metals made of thick materials are usually more expensive than their light materials counterpart. However, while also considering cost, you should first consider the purpose the metal is meant to serve. This will guide you to choose a suitable gauge for your specific application.
Need Sheet Metal Fabrication? RapidDirect is Here for You
If you are looking for a firm offering cost-effective, on-demand metal fabrication, don’t hesitate to contact RapidDirect. We’ve been doing this for nearly two decades, which is why many of our customers refer our service to others. We genuinely care about your needs, which is why we will also provide appropriate technical suggestions for your projects.
Delay is not one of our attributes. We offer quotes in less than 12 hours, so you can rely on us not to waste your time. One great thing about our quotation process is that you don’t have to visit our office to get it. All you need to do is upload your CAD files on our website and specify your requirements. Your quotation will also be accompanied by a free professional DfM analysis on request.
At RapidDirect, we offer excellent manufacturing processes, including cutting, punching, bending, welding, etc. With us, you’ll get top-notch engineering support. Aside from all this, our service is also affordable. If you work with us, you’ll get a 30% lower price on average. With RapidDirect, not only will you get exceptional metalwork, but you’ll also enjoy the best price rates.
What are sheet metals used for?
They are used in automotive industries for building car bodies, for making airplanes, for building construction, and more.
Why use a sheet metal gauge instead of estimating thickness?
Human judgment is prone to error, and a minor error can be catastrophic for your construction. Using a sheet metal gauge is easier and more accurate; therefore, it is the better choice.
Can one gauge wheel measure both ferrous and nonferrous metals?
No, you can’t. Ferrous and nonferrous metals follow different gauge scales and use different gauge wheels. You’ll get inaccurate results if you use a single gauge wheel for both measurements.
You shouldn’t choose just any metal for your construction without examining whether its thickness suits the purpose. A sheet metal gauge is a simple way to confirm this; it is the technique professionals use to choose suitable metal thicknesses for their constructions. Using the appropriate metal gauge can save you costs and enhance efficiency. |
What is the function of an objective pronoun? Another famous example of the use of an objective pronoun is the article question. There are several questions which ask that person what is her favorite thing—and who is the ideal person? When we define these question as a question that is specific about which objects, and particularly about a particular object, we can explain a few of them more accurately than that. But it is crucial to understand this task better. Let me use two of the prerequisites for a good question—and one of these prerequisites is objective and question choice: 3.5.1. Obscuring attention is about seeking attention. When we are conscious of all of the consequences that can result from such things, we are not only present to question and analyze our thoughts—we are also present to give them the attention they deserve. Why is it important to remain conscious of such things? We’re asking, “Who are you?”—such questions are not always so important. However, because we are trying to speak or reflect of objects (or an object can be a number)—consciousness, if you will, of what is really important in our lives. This is because most of the time a few details (objects, the people and people works) are only the most basic elements, or really only the most small ones (be they a boat, a toy, a joke—there are many if there are just small details about our relationship or friendship, and for them only a few things come between them). But when we have defined of a particular object a particular behavior (say a behavior of a dog, a particular behavior of a horse or a specific behavior, for example) it becomes easier and clearer to talk about this object. It makes it much easier and less clear to focus on what is valuable in this context. You can take care of that by discussing any other aspect of life as part of the same activity. 3.5.2. Object for a person. 
We don’t mean that every person would be a member of our set—this is not so when they are not; each person has a different value. Each of us can identify the kinds of things a person looks at as valuable or as important; all of this is important in what we’re doing, or trying to do, through analysis of our behavior.
We are saying “Good answer: How was that apple?” Or “Good answer: How were we talking about that apple? Here’s what I’m saying; there’s a name for it. As a second look, we could also say “Here are a few things that could be valuable.” If found valuable, the next question looks for these six factors: the first part of time (life course, work project and practice), experience in the past, future thinking in a specific area and situation (action of which is at the level of reasoning and problem solving), interest in future or present thinking. If you have given a lot. What is the function of an objective pronoun? A: It’s the pronoun in questions you are interested in. Sounds great. A great pronoun is just another number on a sentence that is in italics, sometimes italicized, or for a spell that is pronounced to use the same punctuation. What is the question “What is the function of an objective pronoun?” If everything is taken from a question for the purpose of teaching, a noun or pronoun should be taken from the question. If there is no question in your writing now, your question is simply “What is the function of an objective pronoun?” A: Writing always gives us the verb “to learn”, when it is used to say something without starting with something else. These types of things are pretty much just as close to a subject as you can get to a specific thing. Unless there is a grammatical error, this question has no meaning, if any. Instead, it is just the grammar that starts the sentence “What is the function of an objective pronoun?” If you have a question in your question, you ought to say “What is the function of a question?” There are nearly as many answers nowadays as there are people. What is the function of an objective pronoun? 8.1 Understanding an objective pronoun in a number of ways. 8.2 The concept of the word objective. 
8.3 After describing a variety of ways to consider a pronoun, one should turn this way to a previous observation: – How the word objective is used in normal speech. – The sound “nearly objective” may be used as a rhetorical tag to bring about the acceptance of a word. – An object that has a clear bearing for the noun. 7.2 The subject noun must always identify a subject.
7.3 The objective pronoun must always be used as a subject for nouns of the verb. – What is the expression ‘a word’? – What was the word for a word? 8.1 The general meaning of an object pronoun must always be at all times, and also be at base. 8.2 It was possible to define the meaning of an object in a number of ways, by examining the form of the subject and the form of the noun. 8.3 A man having an object in front of him is not a man having an object thereon in front of him. 8.4 The obvious result of this explanation is that English has a general usage of a word, and being of general usage allows for the proper interpretation of the same form. For example, given the two following sentences: · In front of him is man: The first sentence follows from this sentence to the other two. – At the other end of the page, a man has been in front of him for longer than man will reasonably expect him to be. – The second sentence is also from this sentence to the other two. – In front of him a man is carrying something. – At the other end of the page, the man carries |
One of the best times to teach someone a new skill, including a new language, is in the early development stages of childhood education. Rather than teaching a whole new set of rules for grammar, punctuation, structure and speech, young students are often able to pick up a new language as easily as their native tongues.
For those students who move to the United States from other parts of the world, there are ESL programs available in public and private schools. Teachers are trained to work with students at many different levels of development. One of the main goals of ESL education is to help students develop confidence in their verbal and written skills. Rather than feeling isolated by a language barrier, students involved in ESL education programs tend to develop a sense of community in a rather short period of time. In recent times, teachers have gotten rather creative in their ESL lesson plans. Rather than boring students with traditional spelling tests, grammar quizzes and essay writing assignments, teachers are using music, art and even theater to relate lesson plans to students from all over the world. I once heard of an ESL teacher providing her students with refrigerator magnets featuring English words, and letting them create poetry with them.
One of the most popular teaching tools for young ESL students is a list of commonly used words called "Dolch sight words." These words appear in more than 50 percent of the children's books on the market today. Many lesson plans incorporate the use of these words in creative ways. These lesson plans help students learn to recognize popular words by sight, and help to develop a rudimentary English vocabulary. |
Transition to school
Successful Foundations is designed to support children and their families to have a positive transition to school. Transition to school is a process of continuity and change as children move into and through one state of being and belonging to another. The transition to school is one of the most important transitions a child will make. As well as the child, the family undergoes the process of transition. The process of transition occurs over time, beginning before the child starts school and extending to the point where the child and family feel a sense of belonging at school and when this is evident to teachers.
Children’s transition to school has implications for their learning, wellbeing and development – both at the time of transition and into the future. Relationships are at the core of positive transition to school experiences (Sayers et al., 2012) and at the core of Successful Foundations.
- Endorses current research and acknowledges the diverse ways young children learn and engage with their worlds.
- Provides a continuum between prior to school and school.
- Provides children with the opportunity to actively demonstrate their funds of knowledge, build relationships and become familiar with the context of the school.
- Provides teachers with the time and opportunity to develop meaningful relationships as they observe and interact with the competent, creative and capable child.
Learning Through Play
Play is sometimes contrasted with ‘work’ and characterised as a type of activity which is essentially unimportant, trivial and lacking in any serious purpose. As such, it is often viewed as something that children do because they are immature, and as something they will grow out of as they become adults. However, this view is mistaken and ill-informed. Dr David Whitebread (University of Cambridge, 2012) states that play in all its rich variety is one of the highest achievements of the human species, alongside language, culture and technology. The value of play is increasingly recognised, by researchers and within the policy arena, as the evidence mounts of its relationship with intellectual achievement and wellbeing.
Research on brain development supports the understanding that play shapes the structural design of the brain. We know that secure attachments and stimulation are significant aspects of brain development; and that play provides active exploration that assists in building and strengthening brain pathways. Play creates a brain that has increased ‘flexibility and improved potential for learning later in life’ (Lester & Russell, 2008). Play allows the Early Learner to explore, identify, negotiate, take risks and create meaning. The intellectual and cognitive benefits of play are well documented. Children who engage in quality play experiences are more likely to have well-developed memory skills, language development, and are able to regulate their behaviour, leading to enhanced school adjustment and academic learning (Bodrova & Leong, 2005).
Play is a right of the child (United Nations, 1989) and an important part of the child’s learning and experiences at school. Play is typically available to the child during recess and lunch breaks on the school playground. This highlights the significance of the school playground as an engaging outdoor space that provides opportunity and accessibility for different constructs of play (ELP, 2017).
The learning environment of the classroom and the outdoor setting is intentionally and thoughtfully designed to invite your child to play and to provoke deep knowledge and understanding. These intentional spaces are called “provocations.” In particular you will notice in our classrooms provocations such as: Dramatic Play, Blocks and Boxes, Map in My World, Sharing Stories and Being Friends-Outdoors.
Ratification is a principal's legal confirmation of an act of its agent. In international law, ratification is the process by which a state declares its consent to be bound to a treaty. In the case of bilateral treaties, ratification is usually accomplished by exchanging the requisite instruments, and in the case of multilateral treaties, the usual procedure is for the depositary to collect the ratifications of all states, keeping all parties informed of the situation.
The institution of ratification grants states the necessary time-frame to seek the required approval for the treaty on the domestic level and to enact the necessary legislation to give domestic effect to that treaty. The term applies to private contract law, international treaties, and constitutions in federal states such as the United States and Canada. The term is also used in parliamentary procedure in deliberative assemblies.
In contract law, the need for ratification often arises in two ways: if the agent attempts to bind the principal despite lacking the authority to do so; and if the principal authorizes the agent to make an agreement, but reserves the right to approve it. An example of the former situation is an employee not normally responsible for procuring supplies contracting to do so on the employer's behalf. The employer's choice on discovering the contract is to ratify it or to repudiate it.
The latter situation is common in trade union collective bargaining agreements. The union authorizes one or more people to negotiate and sign an agreement with management. A collective bargaining agreement cannot become legally binding until the union members ratify the agreement. If the union members do not approve it, the agreement is void, and negotiations resume.
A deliberative assembly, using parliamentary procedure, could ratify action that otherwise was not validly taken. For example, action taken where there was no quorum at the meeting is not valid until it is later ratified at a meeting where a quorum is present.
Main article: Treaty
See also: List of treaties by number of parties
The ratification of international treaties is always accomplished by filing instruments of ratification as provided for in the treaty. In many democracies, the legislature authorizes the government to ratify treaties through standard legislative procedures by passing a bill.
In Australia, power to enter into treaties is an executive power within Section 61 of the Australian Constitution so the Australian Government may enter into a binding treaty without seeking parliamentary approval. Nevertheless, most treaties are tabled in parliament for between 15 and 20 joint sitting days for scrutiny by the Joint Standing Committee on Treaties, and if implementation of treaties requires legislation by the Australian parliament, this must be passed by both houses prior to ratification.
In India, the President makes a treaty in exercise of executive power, on the aid and advice of the Council of Ministers headed by the Prime Minister, and no court of law in India may question its validity. However, no agreement or treaty entered into by the President that is incompatible with the Indian Constitution or national law is enforceable by the courts, as India follows the dualist theory for the implementation of international law.
If the Parliament wishes to codify the agreement entered into by the executive thereby making it enforceable by the courts of India, it may do so under Article 253 of the constitution.
In Japan, in principle both houses of the parliament (the National Diet) must approve the treaty for ratification. If the House of Councilors rejects a treaty approved by the House of Representatives, and a joint committee of both houses cannot come to agreement on amendments to the original text of the treaty, or the House of Councilors fails to decide on a treaty for more than thirty days, the decision of the House of Representatives will be regarded as the vote of the National Diet approving the ratification. The approved treaty will then be promulgated into law by the act of the Emperor.
In the United Kingdom, treaty ratification is a royal prerogative, exercised by the monarch on the advice of the government. By a convention called the Ponsonby Rule, treaties were usually placed before Parliament for 21 days before ratification, but Parliament has no power to veto or to ratify. The Ponsonby Rule was put on a statutory footing by Part 2 of the Constitutional Reform and Governance Act 2010.
In the United States, the treaty power is a coordinated effort between the executive branch and the Senate. The President may form and negotiate, but the treaty must be advised and consented to by a two-thirds vote in the Senate. Only after the Senate approves the treaty can the President ratify it. Once it is ratified, it becomes binding on all the states under the Supremacy Clause. While the House of Representatives does not vote on it at all, the supermajority requirement for the Senate's advice and consent to ratification makes it considerably more difficult to rally enough political support for international treaties. Also, if implementation of the treaty requires the expenditure of funds, the House of Representatives may be able to block or at least impede such implementation by refusing to vote for the appropriation of the necessary funds.
The President usually submits a treaty to the Senate Foreign Relations Committee (SFRC) along with an accompanying resolution of ratification or accession. If the treaty and resolution receive favorable committee consideration (a committee vote in favor of ratification or accession), the treaty is then forwarded to the floor of the full Senate for such a vote. The treaty or legislation does not apply until it has been ratified. A multilateral agreement may provide that it will take effect upon its ratification by less than all of the signatories. Even though such a treaty takes effect, it does not apply to signatories that have not ratified it. Accession has the same legal effect as ratification for treaties already negotiated and signed by other states. An example of a treaty to which the Senate did not advise and consent to ratification is the Treaty of Versailles, which failed to garner support because of the Covenant of the League of Nations.
The US can also enter into international agreements by way of executive agreements. They are not made under the Treaty Clause and do not require approval by two-thirds of the Senate. Congressional-executive agreements are passed by a majority of both houses of Congress as a regular law. If the agreement is completely within the President's constitutional powers, it can be made by the President alone without Congressional approval, but it will have the force of an executive order and can be unilaterally revoked by a future President. All types of agreements are treated internationally as "treaties". See Foreign policy of the United States#Law.
Federations usually require the support of both the federal government and some given percentage of the constituent governments for amendments to the federal constitution to take effect.
Further information: Amendment of the Constitution of India
Not all constitutional amendments in India require ratification by the states. Only constitutional amendments that seek to make any change in any of the provisions mentioned in the proviso to Article 368 of the Constitution of India, must be ratified by the Legislatures of not less than one-half of the States. These provisions relate to certain matters concerning the federal structure or of common interest to both the Union and the States viz., the election of the President (articles 54 and 55); the extent of the executive power of the Union and the States (Articles 73 and 162); the High Courts for Union territories (Article 241); The Union Judiciary and the High Courts in the States (Chapter IV of Part V and Chapter V of Part VI); the distribution of legislative powers between the Union and the States (Chapter I of Part XI and Seventh Schedule); the representation of States in Parliament; and the provision for amendment of the Constitution laid down in Article 368. Ratification is done by a resolution passed by the State Legislatures. There is no specific time limit for the ratification of an amending Bill by the State Legislatures. However, the resolutions ratifying the proposed amendment must be passed before the amending Bill is presented to the President for his assent.
However, when the treaty terms interfere with powers exclusively applicable to the states (the State List), the prior ratification of all applicable states must be obtained under Article 252 of the Indian Constitution before ratification by the Parliament.
Main article: History of the United States Constitution
Article VII of the Constitution of the United States describes the process by which the entire document was to become effective. It required that conventions of nine of the thirteen original States ratify the Constitution. If fewer than thirteen states ratified the document, it would become effective only among the states ratifying it. New Hampshire was the ninth state to ratify, doing so on June 21, 1788, but, as a practical matter, it was decided to delay implementation of the new government until New York and Virginia could be persuaded to ratify. Congress intended that New York City should be the first capital, and that George Washington, of Mount Vernon, Virginia, should be the first President, and both of those things would have been somewhat awkward if either New York or Virginia were not part of the new government. Ratification by those states was secured—Virginia on June 25 and New York on July 26—and the government under the Constitution began on March 4, 1789.
For subsequent amendments, Article V describes the process of a potential amendment's adoption. Amendments may be proposed either by a two-thirds vote of both houses of Congress or by a national convention called as a result of resolutions adopted by two-thirds (presently at least 34 out of 50) of the states' legislatures. For a proposed amendment to be adopted, three-quarters of the states (presently at least 38 out of 50) must then ratify the amendment, either by a vote of approval in each state's legislature or by state ratifying conventions. Congress may specify which method must be used to ratify the amendment. Congress may also set a deadline by which the threshold for adoption must be met.
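As a quick check of the arithmetic, the proposal and ratification thresholds that Article V implies for 50 states can be computed directly (a minimal sketch; only the two fractions come from the text):

```python
import math

def amendment_thresholds(n_states):
    """Return (proposal, ratification) state counts under Article V:
    two-thirds of state legislatures to call a convention, and
    three-quarters of the states to ratify."""
    propose = math.ceil(2 * n_states / 3)
    ratify = math.ceil(3 * n_states / 4)
    return propose, ratify

print(amendment_thresholds(50))  # (34, 38), matching the figures above
```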
The impact of fishing activities is considered the most important anthropogenic mortality factor for marine turtle populations in the Mediterranean Sea. The Barcelona Convention adopted an Action Plan for the Conservation of Mediterranean Marine Turtles in 1989, acknowledging that catches by fishermen are the most serious threat to the turtles at sea and that their conservation deserved special priority. In the Mediterranean, interactions of sea turtles with fishing gears, including trawl nets, are still insufficiently studied. Surface longlines, driftnets and bottom trawl nets operating in the Mediterranean are the major threats to the survival of these species, although the impact of fixed gears (gillnets and trammel nets) should also be carefully considered.
Several countries (22 Mediterranean and 15 non-Mediterranean) regularly fish in the Mediterranean Sea, and an unknown number of small boats are active in non-EU countries. The fishing effort in this area is therefore a key factor to take into account when considering turtle bycatch.
When a baby is born, the brain has all of the major parts, but the nerve connections that provide the basis for all motor development and cognition develop over time. As these connections are made, the brain grows, and with it the skull, at least until the child is about 5-7 years old. By then the bones that form the skull fuse together and brain growth becomes more limited. When there is a significant injury to the brain at birth or in the newborn period, the brain will not grow, or at least not grow to its full capacity. By age 2, this condition, known as microcephaly (which means small head), can be diagnosed by measuring the child’s head in relation to normal or average sizes for age. When microcephaly is diagnosed by the pediatrician, the reason or cause for this condition must be investigated by a thorough review of the birth history and the medical records.
In the realm of healthcare, where life hangs in the balance, attention to detail reigns supreme. Beyond the bustling hallways and sterile operating rooms lies a crucial, yet often overlooked, facet: biomedical waste disposal. Improper handling of these potentially infectious materials poses a significant threat to human health and the environment.
But fear not! This article is here to demystify biomedical waste disposal, unveil its different categories, and shed light on the potential risks associated with mishandling it. We’ll also emphasize the importance of compliance with regulations, ensuring everyone can play their part in safeguarding our communities.
What is Biomedical Waste? It’s More Than Just Sharps!
Imagine this: discarded needles, bloody bandages, and used laboratory cultures. These are just a few examples of biomedical waste. This term encompasses any waste containing infectious (or potentially infectious) materials generated during the treatment of humans or animals, as well as during research involving biological materials. But it doesn’t stop there! Even seemingly harmless items like unused syringes, packaging from medical supplies, and discarded gloves fall under the biomedical waste umbrella, simply because they come into contact with potentially hazardous materials during healthcare activities.
The Hidden Dangers: Why Proper Disposal Matters
Underestimating the risks of improper biomedical waste disposal can be a grave mistake. These seemingly innocuous materials can harbor a multitude of dangers, including:
- Spread of Infectious Diseases: Improper disposal can create breeding grounds for pathogens like bacteria, viruses, and fungi, leading to the spread of infectious diseases like Hepatitis B and C, HIV, and even antibiotic-resistant strains.
- Environmental Contamination: Untreated waste can seep into soil and water, contaminating our precious resources and posing a threat to wildlife and humans alike.
- Occupational Hazards: Healthcare workers exposed to improperly handled waste face a heightened risk of sharps injuries, infections, and even allergic reactions.
Navigating the Maze: Different Categories of Biomedical Waste
Biomedical waste isn’t a monolithic entity. To ensure proper disposal, it’s crucial to understand its diverse categories:
- Infectious Waste: This category includes items like tissues, blood tubes, and swabs contaminated with bodily fluids, posing a high risk of transmitting infections.
- Pathological Waste: Human anatomical waste, such as organs, tissues, and blood, falls under this category and requires specific disposal protocols.
- Sharps Waste: Needles, syringes, and other sharp objects pose a unique risk of injuries and require secure disposal containers.
- Chemical Waste: Discarded drugs, expired medications, and laboratory chemicals belong to this category and demand special treatment to prevent environmental contamination.
- General Waste: While seemingly harmless, bandages, gowns, and other non-hazardous waste associated with biomedical activities still need proper disposal to prevent littering and potential contamination.
Compliance: The Key to Protecting Our Communities
Ensuring safe and responsible biomedical waste disposal isn’t just a matter of best practices; it’s a legal obligation. Healthcare facilities and research institutions must comply with strict regulations governing the segregation, packaging, transportation, and treatment of these materials. These regulations vary by region, so staying informed and adhering to local guidelines is paramount.
Taking Action: Embracing Safe Disposal Practices
The power to minimize the risks associated with biomedical waste lies in our hands. Here are some steps we can all take:
- Healthcare facilities: Invest in training programs for staff, implement proper segregation protocols, and partner with authorized waste disposal companies.
- Research institutions: Ensure research protocols minimize waste generation, adopt safe disposal practices, and adhere to relevant regulations.
- Individuals: Properly dispose of sharps at designated locations, avoid littering with medical waste, and support initiatives promoting responsible waste management.
By understanding the complexities of biomedical waste, its potential dangers, and the importance of compliance, we can work together to create a safer and healthier world for everyone. Let’s demystify this often-overlooked aspect of healthcare and embrace safe disposal practices, one sharps container at a time.
Moving images presented as a sequence of static images (called "frames") representing snapshots of the scene, taken at regularly spaced time intervals, e.g. 50 frames per second. Apart from the frame rate, other important properties of a video are the resolution and colour depth of the individual images. Digital video data is typically stored and transmitted in a format like MPEG or H.264 that includes synchronised sound. Unlike broadcast television, digital video on a computer or network uses compression. Compression is even more important for video than for static images due to the large amount of data involved in even a short video. Furthermore, compression allows video to be transmitted via a channel whose bandwidth is less than the raw data rate implied by the resolution and frame rate. This allows the recipient to start displaying the video before the transmission is complete, a process known as streaming. Compression can be relatively slow but decompression is done in real-time with the picture quality and frame rate varying with the processing power available and the size and scaling of the picture. There are many types of software for displaying video on computers including Windows Media Player from Microsoft, QuickTime from Apple Computer, DivX, VLC, RealPlayer and Acorn Computers' Replay.
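The scale of compression involved can be illustrated with a rough calculation. This sketch uses assumed figures (1080p resolution, 24-bit colour, the 50 frames per second mentioned above, and a nominal 8 Mbit/s compressed stream), not values from any particular codec:

```python
def raw_video_rate_mbps(width, height, bits_per_pixel, fps):
    """Uncompressed data rate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

# Assumed example: 1920x1080 pixels, 24-bit colour, 50 frames per second
raw = raw_video_rate_mbps(1920, 1080, 24, 50)
print(f"raw rate: {raw:.0f} Mbit/s")          # ~2488 Mbit/s uncompressed

# Against a nominal 8 Mbit/s compressed stream, the ratio is large:
print(f"compression ratio: {raw / 8:.0f}:1")  # roughly 300:1
```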
Last updated: 2011-01-04
Numerous species of termite, whose societies are founded on hierarchies of kings, queens, soldiers, and workers, live in tall nests that are ventilated by an intricate system of tunnels.
The nests, also called mounds, jut from the ground like skyscrapers and can grow to about 7 meters in height. They are also self-ventilating, self-cooling, and self-draining—but thus far the mechanisms behind these climate control features have remained unidentified.
A team of engineers, chemists, biologists, and mathematicians led by Imperial College London, the University of Nottingham, and CNRS-Toulouse in France has now explored how these nests work much more closely than ever before, using 3D X-ray imaging.
They discovered evidence about how the small holes, or pores, in the walls of the mound help termite homes stay ventilated, cool, and dry.
Termite nests are a unique example of architectural perfection by insects. The way they’re designed offers fascinating self-sustaining temperature and ventilation controlling properties throughout the year without using any mechanical or electronic appliances.
Dr Kamaljit Singh, Study Lead Author, Department of Earth Science and Engineering, Imperial College London.
In their new research, published in Science Advances, the scientists sourced termite nests from the African countries Guinea and Senegal and examined them using two kinds of 3D X-ray imaging.
To start, they conducted a low-resolution scan of the nests to measure the nests’ larger structures, such as walls and the corridors, termed channels.
Based on those images they calculated the thickness of the nests’ outer and inner walls, as well as the structural particulars of inner channels which termites use to move around the nest.
The scientists learned that networks of larger and smaller pores in the nest walls help exchange carbon dioxide (CO2) with the outside atmosphere to improve ventilation.
Larger micro-scale pores are found to be completely connected all through the outer wall offering a path across the walls, and by using 3D flow simulations, the authors revealed how CO2 travels through the nests to the outside.
The simulations showed that the large micro-scale pores in nest walls are beneficial for ventilation when the wind outside is faster, as CO2 can exit easily. However, when wind speeds are slower, the larger pores can also help to discharge CO2 through diffusion.
Dr Singh said: “This is a remarkable feature that lets the nest ventilate regardless of the weather outside.”
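The low-wind, diffusion-dominated regime described above can be caricatured with Fick's first law. All numbers here are illustrative assumptions (an effective diffusivity reduced for porosity, a 5 cm wall, a modest CO2 excess inside), not measurements from the study:

```python
def diffusive_flux(D_eff, delta_c, wall_thickness):
    """Fick's first law: flux (mol m^-2 s^-1) through a porous wall,
    given an effective diffusivity D_eff (m^2/s), a concentration
    difference delta_c (mol/m^3), and a wall thickness (m)."""
    return D_eff * delta_c / wall_thickness

# Illustrative values only: CO2 diffusivity in air (~1.6e-5 m^2/s),
# reduced by porosity and tortuosity to an assumed 4e-6 m^2/s.
flux = diffusive_flux(D_eff=4e-6, delta_c=0.5, wall_thickness=0.05)
print(f"{flux:.1e} mol m^-2 s^-1")  # 4.0e-05 mol m^-2 s^-1
```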
Nests are typically found in hotter regions, which means they have to keep cool.
Actually, the authors learned that the larger pores also help control temperatures within the nests. The pores, which can be found in the outer walls of the nest, fill with air which minimizes the heat entering via the walls—in the same way as how the air in double glazed windows helps retain the heat inside.
Bearing in mind the important role the pores play, the researchers also investigated what happens when it rains and the pores become blocked by water.
They discovered that the nests use “capillary action”—where liquid flows through tiny spaces without external help from gravity—which moves rainwater from the larger pores into the smaller pores. This ensures the larger pores stay open to maintain the ventilation of the nest.
Dr Singh said: “Not only do these remarkable structures self-ventilate and regulate their own temperatures—they also have inbuilt drainage systems. Our research provides deeper insight into how they manage this so well.”
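The drainage mechanism follows from the Young-Laplace relation: capillary pressure scales inversely with pore radius, so smaller pores exert stronger suction and pull water out of larger ones. A sketch with assumed values (the radii and contact angle are illustrative, not figures reported by the authors):

```python
import math

def capillary_pressure(surface_tension, contact_angle_deg, pore_radius):
    """Young-Laplace capillary pressure (Pa) in a cylindrical pore:
    P = 2 * gamma * cos(theta) / r."""
    theta = math.radians(contact_angle_deg)
    return 2 * surface_tension * math.cos(theta) / pore_radius

GAMMA = 0.072  # surface tension of water at ~25 C, N/m

# Assumed pore radii: 100 micrometres (large) vs 5 micrometres (small)
print(round(capillary_pressure(GAMMA, 0, 100e-6)))  # 1440 Pa
print(round(capillary_pressure(GAMMA, 0, 5e-6)))    # 28800 Pa

# The small pores' suction is 20x stronger, so water migrates into
# them and the large pores stay open for ventilation.
```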
The researchers say the newly discovered architecture within termite nests could help us improve temperature control, ventilation, and drainage systems in buildings—and hopefully make them more energy efficient.
The findings greatly improve our understanding of how architectural design can help control ventilation, heat regulation, and drainage of structures—maybe even in human dwellings. Our findings could help us understand how to design energy efficient, self-sustaining buildings.
Pierre Degond, Study Co-Author and Professor, Department of Mathematics, Imperial College London.
We know that nature holds the secrets to survival. To unlock them, we need to encourage global, interdisciplinary research. This study shows we have a lot more to learn from Mother Nature when it comes to solving even the most important 21st century problems.
Dr Bagus Muljadi, Study Co-Author, University of Nottingham.
This study was sponsored by the Engineering and Physical Sciences Research Council (EPSRC), the Royal Society, and the Wolfson Foundation.
Termite nests imaged by multi-scale 3D X-ray tomography
(Video credit: Imperial College London)
Utilizing Economic Incentives to Encourage Recycling
Recycling plays a crucial role in promoting sustainability and reducing the strain on natural resources. To incentivize individuals and businesses to engage in recycling practices, many jurisdictions have turned to economic incentives. By offering financial rewards or benefits, these incentives aim to motivate participation in recycling programs, increase recycling rates, and divert waste from landfills. We will explore the concept of economic incentives for recycling and highlight their potential benefits and challenges.
Deposit-Refund Systems
One of the most well-known economic incentives for recycling is the implementation of deposit-refund systems. This approach involves charging a deposit on certain items, such as beverage containers, at the point of purchase. Consumers can then receive a refund when they return the empty containers to designated collection points. These systems create a direct financial incentive for individuals to recycle and reduce litter. Jurisdictions that have implemented deposit-refund systems, like Germany and several U.S. states, have experienced high recycling rates and reduced waste in landfills. However, the effectiveness of such systems depends on factors such as the deposit amount, convenience of collection points, and public awareness.
Pay-As-You-Throw Programs
Pay-As-You-Throw (PAYT) programs are another form of economic incentive for recycling. Under these systems, residents are charged for waste collection based on the amount of trash they generate. Recycling, on the other hand, is typically offered free of charge or at a reduced cost. By tying waste disposal fees to the quantity of trash produced, PAYT programs encourage individuals to reduce waste and increase recycling efforts. This approach has been successful in several municipalities, leading to higher recycling rates and waste reduction. However, challenges may arise in implementing accurate waste measurement systems and addressing concerns about potential illegal dumping.
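The incentive structure of a PAYT scheme is easy to see in a toy bill calculation. The fee levels below are purely hypothetical, chosen only to show how diverting waste from trash to recycling lowers the bill:

```python
def monthly_bill(trash_bags, recycling_bins, base_fee=10.0, bag_fee=2.50):
    """Hypothetical PAYT bill: a flat base fee plus a per-bag trash
    charge; recycling is collected at no extra cost."""
    return base_fee + trash_bags * bag_fee + recycling_bins * 0.0

# Shifting five bags' worth of material into recycling cuts the bill:
print(monthly_bill(trash_bags=8, recycling_bins=2))  # 30.0
print(monthly_bill(trash_bags=3, recycling_bins=7))  # 17.5
```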
Recycling Incentive Programs
Some jurisdictions have implemented recycling incentive programs that reward individuals or households for meeting specific recycling goals. These programs may offer financial incentives, such as rebates or tax credits, based on the amount of recyclables collected or the frequency of recycling. These initiatives provide a tangible benefit to participants, encouraging them to actively engage in recycling practices. Additionally, they promote a sense of achievement and community involvement. However, monitoring participation and ensuring program integrity can be challenging, as well as determining the most effective metrics for rewarding recycling efforts.
Green Rewards Programs
Green rewards programs are initiatives that provide individuals with incentives for engaging in sustainable behaviors, including recycling. These programs often utilize loyalty cards or mobile apps to track recycling activities and provide rewards such as discounts, coupons, or points that can be redeemed at participating businesses. Green rewards programs create a win-win situation by incentivizing recycling while promoting local businesses and supporting the local economy. They can also foster a sense of community and encourage long-term behavioral change. However, these programs rely on effective tracking systems and cooperation from participating businesses to provide attractive and valuable rewards.
Tradable Recycling Credits
In some contexts, tradable recycling credits have been proposed as a market-based approach to incentivize recycling. These credits function similarly to carbon credits in cap-and-trade systems. Recycling facilities or municipalities that exceed their recycling targets can earn credits that can be sold to entities struggling to meet their recycling obligations. This creates a market for recycling and encourages competition and innovation in waste management. Tradable recycling credits have the potential to drive higher recycling rates, especially in commercial and industrial sectors. However, establishing and managing a market for these credits requires careful regulation and oversight to prevent fraud and ensure the credits’ environmental integrity.
Public-Private Partnerships
Collaboration between governments and the private sector can be instrumental in implementing effective economic incentives for recycling. Governments can work with businesses to develop joint initiatives, such as discount programs for customers who bring in recyclable materials or partnerships to promote recycling education and infrastructure. By leveraging the resources and expertise of both sectors, public-private partnerships can enhance the effectiveness and reach of economic incentive programs. These collaborations can also foster innovation and drive the development of new recycling technologies and processes.
Using dumpster rentals for recycling
Using dumpster rentals for recycling can be an effective and convenient solution for managing recyclable materials. Dumpster rentals provide a designated space to collect and store recyclables, making it easier to separate them from general waste. With various sizes available, businesses, organizations, and even individuals can choose the dumpster size that suits their recycling needs.
Renting a dumpster for recycling also offers the advantage of efficient waste management. Instead of relying on small recycling bins or relying solely on curbside collection, a dumpster provides a larger capacity for recyclables. This means fewer trips to recycling centers or collection points, saving time and transportation costs. Additionally, having a dedicated dumpster for recycling helps ensure that recyclables are properly sorted and protected from contamination, leading to higher-quality materials for recycling processes. Overall, utilizing dumpster rentals for recycling can streamline the recycling process, encourage proper waste separation, and contribute to more sustainable waste management practices.
While economic incentives for recycling offer several benefits, there are also challenges to consider. Implementation costs, administrative complexities, and monitoring and enforcement efforts can pose significant hurdles. Additionally, ensuring the fairness and effectiveness of incentives across different socio-economic groups is essential to avoid disproportionate impacts. Public education and awareness campaigns are crucial to inform individuals about the benefits of recycling and the availability of incentives.
In conclusion, economic incentives can play a vital role in encouraging recycling and promoting sustainable waste management practices. Deposit-refund systems, PAYT programs, recycling incentive programs, green rewards programs, tradable recycling credits, and public-private partnerships all offer different approaches to incentivizing recycling. When designed and implemented thoughtfully, economic incentives have the potential to increase recycling rates, reduce waste, and contribute to a more sustainable future. By combining economic incentives with robust education and infrastructure, jurisdictions can create a comprehensive framework for encouraging recycling and fostering a culture of environmental stewardship.
Highly migratory species such as whales are often exposed to anthropogenic threats. Research led by the Okeanos RD center of the University of the Azores focused on the migration pathways of three whale species from the North Atlantic Ocean: fin, blue and sei whales. The researchers described their spatiotemporal distribution to understand their movement patterns and provide sustained data to protect them against ship collisions and noise disturbance.
To create an accurate tracking model, they used several data sets, including Copernicus Marine data. Researchers modelled the whales’ habitat preferences in light of certain environmental and prey-related variables. Potential prey biomass distributions were obtained from SEAPODYM - a mid-trophic level spatial ecosystem and population dynamics model. This biomass distribution was provided by the Copernicus Marine micronekton product, which contained parameters such as zooplankton and micronekton biomass. These are key explanatory variables for understanding the individual behaviour and population dynamics of larger oceanic predators.
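The kind of habitat-preference model described above can be caricatured as a response curve linking prey biomass to habitat suitability. This toy logistic sketch uses invented coefficients and synthetic inputs; it is not the authors' actual SEAPODYM-based workflow:

```python
import math

def suitability(zoo, micro, b0=-4.0, b1=1.5, b2=1.0):
    """Toy habitat-suitability index in [0, 1], a logistic function of
    standardized zooplankton and micronekton biomass. Coefficients are
    invented for illustration, not fitted values from the study."""
    z = b0 + b1 * zoo + b2 * micro
    return 1 / (1 + math.exp(-z))

# Higher prey biomass -> higher predicted suitability for foraging whales
print(round(suitability(zoo=1.0, micro=0.5), 3))  # 0.119 (prey-poor cell)
print(round(suitability(zoo=3.0, micro=2.0), 3))  # 0.924 (prey-rich cell)
```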
Migratory marine animals are increasingly vulnerable to extinction and population declines due to cumulative anthropogenic impacts on their environment. Baleen whales are the largest animals on Earth, yet these three species (blue, fin, and sei) have suffered from overhunting and are now considered "endangered" on the IUCN Red List of Threatened Species. They feed on some of the smallest animals in the ocean, such as zooplankton and micronekton. Climate change could shift their preferred habitats northward, which could make them more vulnerable.
To follow future developments in these studies, see: www.seapodym.eu
The Azores Whale Lab of the Okeanos RD center (University of Azores) has been conducting cetacean research in the most remote archipelago in the North Atlantic for the last 20 years and has applied that knowledge to aid in their conservation. CLS is a subsidiary of CNES, ARDIAN and IFREMER and a worldwide provider of monitoring and surveillance solutions for the planet.
Benefits for users
- Levels of zooplankton and micronekton in the global ocean
- Lower-trophic-level biomass information to understand the migrations of large species |
Scientists Say Nature ‘Is Better at Carbon Farming’
By Paul Brown, Climate News Network
LONDON – Large forests planted with a single species of tough small trees could capture enough carbon from the atmosphere to slow climate change and green the world’s deserts at the same time, researchers say.
A group of German scientists says the tree Jatropha curcas is resistant to arid conditions and can thrive where food crops would not survive.
Unlike other geo-engineering schemes, which are expensive and rely on humans interfering with nature, this project merely encourages natural tree growth.
Under the slogan “Nature Does it Better,” the scientists say the costs are comparable with the estimated cost of developing carbon capture and storage (CCS) technology at power stations. With only a small proportion of the world’s deserts, they say, these trees could take out most of the additional carbon dioxide emitted by humans since the beginning of the industrial revolution.
The study, published in Earth System Dynamics, a journal of the European Geosciences Union, says “carbon farming” addresses the root source of climate change by taking carbon out of the atmosphere as fast as we put it in.
One hectare (0.0039 square miles) of Jatropha trees can take 25 tons of carbon dioxide out of the air annually over 20 years. As it grew, a plantation occupying just 3 percent of the Arabian Desert would remove from the atmosphere the same amount of CO2 as all the motor vehicles in Germany produce over the same period.
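The scale of this claim can be sanity-checked with a few lines of arithmetic. Note that only the 25-ton-per-hectare uptake figure comes from the article; the desert area and the comparison baseline are assumed reference numbers, used purely for illustration:

```python
# Back-of-the-envelope check of the article's claim. The 25 t CO2/ha/yr
# uptake figure is from the text; the Arabian Desert area (~2.33 million
# km^2) is an assumed approximate value for illustration only.

UPTAKE_T_PER_HA_YR = 25            # from the article
ARABIAN_DESERT_KM2 = 2_330_000     # assumed approximate area
HA_PER_KM2 = 100                   # unit conversion

plantation_ha = 0.03 * ARABIAN_DESERT_KM2 * HA_PER_KM2       # 3% of the desert
annual_uptake_mt = plantation_ha * UPTAKE_T_PER_HA_YR / 1e6  # megatonnes CO2/yr

print(round(annual_uptake_mt))  # roughly 175 Mt CO2 per year
```

Under these assumptions the plantation would draw down on the order of 175 megatonnes of CO2 per year, which is the national-road-traffic order of magnitude the article's German comparison implies.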
The German scientists say all they are doing is working with nature. The trees would need a little help, however, in the form of water. The team therefore proposes starting the plantations near the coast where desalination plants would provide enough water to get the saplings established.
“To our knowledge, this is the first time experts in irrigation, desalination, carbon sequestration, economics and atmospheric sciences have come together to analyze the feasibility of a large-scale plantation to capture carbon dioxide in a comprehensive manner.
Next stop: field trials
“We did this by applying a series of computer models and using data from Jatropha curcas plantations in Egypt, India and Madagascar,” says Volker Wulfmeyer of the University of Hohenheim in Stuttgart.
The idea has a price tag of 42 to 63 euros per ton of carbon removed from the atmosphere, roughly the same cost as CCS, which is much favored by the UK and other governments as one of the “solutions” to mitigate climate change.
But there are more advantages. After a few years, the trees would produce bio-energy (in the form of tree trimmings) to support the power production required for the desalination and irrigation systems.
“From our point of view, afforestation as a geo-engineering option for carbon sequestration is the most efficient and environmentally safe approach for climate change mitigation.
“Vegetation has played a key role in the global carbon cycle for millions of years, in contrast to many technical and very expensive geo-engineering techniques,” said lead author Klaus Becker, also from the University of Hohenheim.
A known advantage of planting trees in arid regions is that they increase cloud cover and rainfall, a further greening of the desert. On the minus side, irrigation can lead to a build-up of salt in the soil, damaging the plantation.
Although the researchers have done computer simulations of the effects of these plantations on deserts, there is no substitute for a pilot project. They are hoping their paper will stimulate enough interest and money to begin field trials of the idea.
Paul Brown is a joint editor for Climate News Network. Climate News Network is a news service led by four veteran British environmental reporters and broadcasters. It delivers news and commentary about climate change for free to media outlets worldwide. |
HOW DOES ABACAVIR WORK?
Abacavir works by decreasing the ability of HIV to reproduce, resulting in less HIV in the blood and a decreased chance of HIV complications (e.g., new infections, cancer). It is technically classified as a nucleoside analog reverse transcriptase inhibitor (NRTI) because of the way it prevents HIV from reproducing. When abacavir gets into the body, it is changed by an enzyme into a substance called carbovir triphosphate. An enzyme is a type of protein that helps produce chemical reactions in the body.
Tablet appearance (from the product image): abacavir tablets are yellow, scored, rounded outwards on both sides, covered with a film, and imprinted with “GX 623” on both sides. There are generally 60 tablets per bottle. They are also available in packages of 6 cards, with 10 pills per card.
Abacavir (abacavir sulfate; trade name Ziagen; abbreviated ABC) is a drug used to combat HIV (human immunodeficiency virus) and AIDS (Acquired Immune Deficiency Syndrome) in combination with other HIV medications. HIV is a virus that attacks the body’s immune (defense) system, leading to infections and harmful tumors (tissues that grow more rapidly than normal). AIDS is a decrease in the effectiveness of the body's immune (defense) system that is due to HIV infection.
When abacavir is combined with zidovudine (AZT) and lamivudine (3TC), the combination is known as Trizivir. As noted above, abacavir is not used alone but in combination with other medications. When abacavir is combined with lamivudine, it is known as Kivexa/Epzicom. Some HIV strains that are resistant to the effects of AZT or 3TC are generally susceptible to abacavir, whereas HIV strains resistant to the effects of both AZT and 3TC are not as sensitive to abacavir.
Carbovir triphosphate is very similar to a chemical (known as deoxyguanosine triphosphate or dGTP) that HIV needs to make new DNA (deoxyribonucleic acid), and thus competes with it. DNA is a chain of many connected genes. Genes contain coded instructions for how proteins should be constructed, such as those that make up the HIV virus. HIV needs to make new DNA for each virus, and it normally does so by using a specific enzyme known as reverse transcriptase. However, when abacavir works, carbovir triphosphate competes with the dGTP that is needed by reverse transcriptase and is incorporated into the viral DNA.
As a result, reverse transcriptase is unable to copy its genetic material to generate new virus material
because carbovir triphosphate interferes with it. For this reason, abacavir is known as a reverse
transcriptase inhibitor. Carbovir triphosphate lacks a chemical group known as 3’-OH which prevents the
formation of a chemical bond needed for the HIV viral chain to continue.
When HIV stops reproducing because the viral chain cannot extend its form, this is known as chain termination. Abacavir does not kill existing HIV and thus cannot cure it, but it can help stop it from
reproducing. Strains of HIV that are resistant to a different class of HIV medications known as protease
inhibitors are not usually resistant to abacavir. It is unknown if abacavir decreases the risk of transmitting
HIV to others.
WHAT IS ABACAVIR HYPERSENSITIVITY SYNDROME?
Abacavir is considered a generally well tolerated medication (in 90% of people). The main side effect is
hypersensitivity (known as abacavir hypersensitivity syndrome or AHS), which occurs in about 2 to 9% of patients during the first six weeks of treatment; most people who develop it do so after 9 to 11 days. Common
signs and symptoms of AHS include 2 or more of the following: muscle aches, difficulty
breathing/wheezing, fever, malaise (a feeling of being unwell), gastrointestinal tract symptoms (nausea,
vomiting, diarrhea, stomach pain), respiratory symptoms (shortness of breath, cough, sore throat), skin
rash, swelling, weakness, and fatigue/extreme tiredness.
Other signs and symptoms include itching (face, tongue, and throat), break down of muscle tissue,
swelling, abnormal chest x-ray findings, abnormal sensations, and severe dizziness. There can be swelling
or enlargement of the lymph nodes. Lymph nodes are small egg shaped structures in the body that help
fight against infection. Damage to the mucous membranes can occur. A mucous membrane is one of four major types of thin sheets of tissue that line or cover various parts of the body. Ulcers (open sores) can occur in the conjunctiva, the layer that covers and protects the inside of the eyelids and the front part of the sclera (the white part of the eyes). The skin rash caused by abacavir is often flat and red with small bumps that are close together. The skin rash can also be pale, red, raised, and itchy. However, other types of skin rashes can occur due to abacavir as well.
Early diagnosis is difficult because the symptoms of AHS can be confused with the symptoms of HIV,
other infections, hypersensitivity to other medications, and immune restoration disease. Immune
restoration disease is when the restored immune system after highly active antiretroviral therapy (HAART)
is pathological and causes disease. In some cases, there can be worsening of an old medical condition
such as an old infection. The body can have an inflammatory response to opportunistic or slowly
developing infections. This reaction can occur at any time.
In rare cases, AHS can be deadly, which is why the medication is immediately and permanently
discontinued if AHS develops. If someone who has AHS takes abacavir again, or any other medication
containing abacavir again, it can cause very low blood pressure or death within hours. Elevated liver enzymes can occur due to reactions to abacavir. When liver enzymes are elevated, it is an indication of liver damage. This is why
people with liver disease need to be careful about using abacavir because it can worsen the problem and
lead to liver failure. Kidney failure and breathing failure can occur in some AHS cases. Adult respiratory
distress syndrome can occur, which is a severe, life-threatening reaction in adult humans due to injuries
or acute infections to the lung. Treatment with abacavir is suspended in cases where organ failure occurs
along with lactic acidosis.
A decreased number of white blood cells can also be present in the blood. White blood cells help protect the body against diseases and fight infections. Increased creatinine levels can also occur due to AHS. Creatinine is a waste product of the normal breakdown of muscle during activity. Increased creatine phosphokinase can occur due to AHS. Creatine phosphokinase is an enzyme found mainly in the heart, brain, and skeletal muscle.
AHS is related to how abacavir changes the shape and chemistry of part of a protein product coming from a gene variation known as HLA-B*5701 (also known as B57). These changes affect the body’s immune system and lead to the activation of cytotoxic T-cells (types of white blood cells) that specifically target abacavir and release inflammatory substances (TNF-alpha and IFN-gamma) that result in the delayed hypersensitivity reaction.
Those who are prone to hypersensitivity reactions can be detected with genetic testing, which is readily
available. On 7/24/08, the Food and Drug Administration (FDA) recommended this genetic testing for
people who have never used abacavir in the past or for those re-initiating treatment with the medication. It
is also recommended that people check with their doctor if they miss several doses of abacavir before
restarting it. For those who have a positive genetic test result, there is a 50% chance of developing AHS.
For those with a negative genetic test result, it is very unlikely that the person will develop AHS. One
study, known as the PREDICT-1 study, found that 7.8% (66/847) of patients treated with abacavir without
genetic screening developed clinically suspected AHS compared to 3.4% (27/803) who only used
abacavir if genetic testing showed they did not have B57. The results also led to estimates that 61% of patients who tested positive for the B57 gene would develop AHS, compared to 4.5% who tested negative for B57. Screening for B57 reduced clinically suspected AHS by about 60% compared to no screening
occurred. Another study known as the SHAPE study also supported the use of genetic screening.
Skin patch testing is a detection method for AHS although it is not as accurate as genetic testing,
sometimes failing to detect those who are at risk of developing AHS. For this reason, skin patch testing is
considered a research tool and not useful for clinical diagnosis of AHS.
The prevalence of individuals who possess B57 varies according to ethnicity as follows: the Yoruba from
Nigeria (0%), African-Americans (1%), Chinese Americans (1.2%), Hispanic Americans (3%), the Luhya
from Kenya (13.6%), people of European ancestry (3.4 to 5.8%), the Masai from Kenya (13.6%), and
Indian Americans (17.6%). When this gene form is detected, the FDA recommends using another HIV
medication instead of abacavir. There are rare exceptions to this rule of thumb when the benefits clearly
outweigh the risks.
Stevens-Johnson syndrome and toxic epidermal necrolysis (TEN) have been reported in patients using abacavir in combination with other medications known to be associated with these conditions. Stevens-Johnson syndrome is a rare but serious condition in which the skin and at least two surfaces of the mucous membranes (or the mucous membranes only) are damaged by a severe reaction to infection or medication. A mucous membrane is one of four major types of thin sheets of tissue that line or cover various parts of the body. TEN is a rare and sometimes life threatening disorder of the skin caused by a fault in the immune system. There have also been reports of erythema multiforme with abacavir use.
Erythema multiforme is an acute (sudden), self-limited, and sometimes recurring skin condition that is a
hypersensitivity reaction associated with certain medications, infections, and other triggers.
Patients with AHS need to stop using abacavir immediately. When this is done, AHS is usually reversible.
WHAT ARE OTHER SIDE EFFECTS OF ABACAVIR?
Many patients use abacavir without serious side effects. However, other side effects of abacavir include anxiety, depression (or worsening of pre-existing depression), diarrhea, fatigue, chills, headache, loss of appetite, muscle pain, rash, sleep difficulty (falling/staying asleep, strange dreams), increased levels of triglycerides (a type of fat) in the blood, upper respiratory infection, vomiting, redistribution or accumulation of body fat, vision changes, increased sensitivity to light, and urinating less or not at all. The most common side effects of abacavir in adults are bad dreams, sleep problems,
nausea, headache, tiredness and vomiting. The most common side effects of abacavir in children are
fever, chills, nausea, vomiting, rash, and infections of the ear, nose, and throat.
Redistribution or accumulation of body fat can cause obesity, fat accumulation at the bottom/back of the
neck, upper shoulders, stomach, face, and breasts, and a wasting appearance of the face, arm, leg, and
buttocks. Accumulation of body fat can be reduced with exercise.
One study indicated a nearly 90% increased risk of heart attack in people who use abacavir. On 3/1/11, the Food and Drug Administration (FDA) announced a safety review of abacavir and a possible risk of increased heart attack with the medication. However, the FDA ultimately concluded that, while the data on the topic were conflicting, a review of 26 randomized controlled trials showed no significant increase in heart attacks among those who used abacavir versus those who did not.
Very rarely, abacavir can cause lactic acidosis (especially in women), in which a high level of acid is due to a buildup of a substance called lactic acid. Lactic acid buildup is what causes the burning feeling in
your muscles when lifting weights for many repetitions. Pancreatitis (inflammation of the pancreas) is
another serious side effect of abacavir. The pancreas is a long organ in the back of the belly that makes
insulin (a substance in the body which helps absorb glucose, a type of sugar). Children using abacavir
have been found to have mild elevations in blood glucose levels compared to adults. Increased GGTP
(gamma-glutamyl transpeptidase) levels have also been found in some people who use abacavir. GGTP
is a type of enzyme that plays a role in metabolism. A severe increase in liver size with an accumulation
of fat can occur with abacavir use, especially in women.
If patients taking abacavir experience blisters, chills, difficulty swallowing/breathing, hives, itching, or peeling skin, it is important to call the doctor immediately, as these are considered serious side effects.
Side effect frequency of using abacavir once a day (600 mg) is generally similar to using 300 mg of
abacavir twice a day. Hypersensitivity reactions have been shown to occur in 9% of people who use
abacavir once a day (600 mg) compared to 7% of people who use 300 mg of abacavir twice a day.
However, those who use abacavir once a day (600 mg) have been shown to experience significantly
higher rates of severe hypersensitivity reactions compared to people who use abacavir 300 mg twice a
day (5% vs 2%). Those who use abacavir once a day (600 mg) have been shown to experience
significantly higher rates of severe diarrhea compared to people who use abacavir 300 mg twice a day
(2% vs 0%). Another study showed that those who use abacavir once a day (600 mg) experience
significantly higher rates of hypotension with a severe hypersensitivity reaction compared to people who
use abacavir 300 mg twice a day (11% vs 0%).
WHAT FORMS ARE ABACAVIR AVAILABLE IN?
Abacavir is available in tablet and liquid form.
DOES ABACAVIR INTERACT WITH OTHER MEDICATIONS?
Yes. Your doctor should know if someone using abacavir is using other medications to treat HIV or
methadone (Dolophine; Methadose). Methadone is a pain medication that is sometimes used to treat
people who are addicted to heroin, morphine, or other pain reducing drugs. While methadone does not
have a clinically significant effect on abacavir, a study showed that 11 people using twice the daily
recommended dose of abacavir (600 mg twice a day) had increased clearance of methadone from their
body. Thus, such individuals may need an increased methadone dose although this would be in the
minority of cases. Abacavir should not be taken with other medications that contain abacavir.
HOW ARE MISSED DOSES HANDLED?
Doctors advise patients not to double up on a missed dose and to take a missed dose as soon as it is
remembered unless it is close in time to taking the next dose. If it is almost time to take the next dose,
doctors advise skipping the missed dose and to continue the regular dosing. If several doses of abacavir
are missed, doctors advise notifying them before restarting the medication to prevent a dangerous and
sometimes fatal allergic reaction.
WHAT ARE SIGNS OF AN ABACAVIR OVERDOSE?
Little is known about the effects of abacavir overdose but if an overdose occurs, treatment in a medical
center is needed. It is known in rats and mice that toxic levels of abacavir (7 to 24 times the normal level
for humans) cause heart degeneration over a period of two years. The clinical relevance of this finding is
unknown. If an overdose is suspected, the local poison control center should be contacted or the person should go to the nearest emergency room. There is no known antidote for abacavir overdose.
CAN ABACAVIR BE USED IN INFANTS?
Abacavir is not supposed to be used in children less than 3 months of age.
IS ABACAVIR EXCRETED IN BREAST MILK?
It is not known if abacavir is excreted in human breast milk but women with HIV should not breast feed
because of possibly transmitting HIV to an infant who is not infected. Abacavir is excreted in the breast
milk of lactating rats.
IS ABACAVIR SAFE TO USE DURING PREGNANCY?
It is unknown if abacavir is safe to use during pregnancy or if it harms an unborn child. Doctors
recommend that it only be used in pregnant women when the potential benefits outweigh the risks. HIV
medications are usually given to women with HIV because treatment is known to decrease the risk of HIV
transmission to the baby.
In a rabbit study, when abacavir was administered at 8.5 times the human exposure level, no birth
defects or developmental defects were noted. In rats who were administered abacavir at 35 times the
human exposure level, increased birth defects (e.g., skeletal malformations, generalized swelling) and increased developmental abnormalities (e.g., decreased body weight and body length) occurred. At half of this dose, stillbirths and decreased body weight occurred.
IS THERE A GENERIC VERSION OF ABACAVIR AVAILABLE?
HOW SHOULD ABACAVIR BE STORED?
Abacavir capsules and solution should be stored at room temperature (68 to 77 degrees Fahrenheit) and
away from excessive heat and moisture (e.g., not in the bathroom). The oral solution can be stored at
room temperature or be refrigerated but not frozen.
WHAT DOSE DOES ABACAVIR COME IN?
Abacavir is available in 300 mg tablets known as Ziagen. It is also available in a 20-mg/ml oral solution
(strawberry-banana flavored) also known as Ziagen. The oral solution is a clear to yellow color.
WHAT IS THE RECOMMENDED ABACAVIR DOSE?
The recommended abacavir dose is 300 mg twice a day or 600 mg once a day. This is the maximum
daily recommended dose. Dosage is based on one’s specific medical condition and response to
treatment. Children who are 3 months or older receive the oral solution of 8 mg/kg twice a day. Children
weighing more than 30.8 pounds can be treated with doses of 300 mg, 450 mg, or 600 mg based on
weight. There is a scored tablet available for children. If the child cannot swallow the tablet, then the oral
solution is used (240 ml per bottle). The bottle for the oral solution has a child resistant closure feature.
The doctor will determine the correct dose based on the child’s weight, not to exceed the recommended
adult dose. In general, children weighing between 30.8 and 46.3 pounds use 150 mg of abacavir in the
morning and evening for a total of 300 mg. Children weighing between 46.3 and 66.1 pounds usually use 150 mg of abacavir in the morning and 300 mg at night for a total of 450 mg. Children weighing more than 66.1 pounds usually use 300 mg of abacavir in the morning and 300 mg at night for a total of 600 mg.
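As a purely illustrative sketch (not medical guidance), the weight brackets described above can be written as a simple lookup. The function name and structure are invented; the bracket boundaries and doses are the ones stated in the text:

```python
# Illustrative only -- encodes the pediatric weight brackets stated in the
# text (pounds -> total daily tablet dose in mg). Not medical guidance.

def pediatric_daily_dose_mg(weight_lb):
    """Total daily abacavir tablet dose per the text's weight brackets.

    Returns None below 30.8 lb, where the text says the 8 mg/kg
    twice-daily oral solution is used instead of tablets.
    """
    if weight_lb < 30.8:
        return None      # oral solution territory, dosed by body weight
    if weight_lb <= 46.3:
        return 300       # 150 mg morning + 150 mg evening
    if weight_lb <= 66.1:
        return 450       # 150 mg morning + 300 mg evening
    return 600           # 300 mg morning + 300 mg evening (adult maximum)
```

Note how the brackets cap at 600 mg, matching the rule that a child's dose must not exceed the recommended adult dose.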
Elderly people (age 65 or older) can use abacavir but it is unknown if they respond differently from
younger people. In general, abacavir is dosed cautiously in elderly people due to their greater frequency
of heart, liver, and kidney problems, other diseases, and use of other medications. What the body does
to the drug has not been studied in people age 65 or older. Research has shown no differences
between males and females or between blacks and Caucasians in what the body does to abacavir.
For patients with mild liver damage, doctors recommend that patients use 200 mg of abacavir twice a
day, which is 10 ml of the oral solution twice a day. Abacavir is not supposed to be used in patients with
moderate to severe liver damage because the safety, effectiveness, and how the body uses the
medication have not been established in such patients.
Doctors recommend that abacavir be taken about the same time each day because the medication works
best when it is kept in the body at a near constant level. It should continue to be used until the doctor
says otherwise, even if feeling well.
WHAT OTHER INGREDIENTS ARE IN ABACAVIR?
In addition to abacavir sulfate (active ingredient), the tablet also contains several inactive ingredients.
These ingredients are colloidal silicon dioxide, magnesium stearate, microcrystalline cellulose, and
sodium starch glycolate. Colloidal silicon dioxide is a substance used in tablet making to prevent caking
(lump formation), to create a film covering, to allow tablets to disintegrate, and to allow powder to flow
freely when tablets are processed. Magnesium stearate is a white powder that acts as a bulking agent in
tablet making. Microcrystalline cellulose is a term for refined wood pulp which is used to affect the texture
of tablets and to prevent caking. Sodium starch glycolate is a type of salt that helps tablets rapidly
disintegrate in the body and serves as a bulking agent.
The tablets have a film coating made of hypromellose, polysorbate 80, synthetic yellow iron oxide,
titanium dioxide, and triacetin. Hypromellose is a chemical that helps hold tablet ingredients together and
helps delay the release of medicine in the digestive tract. Polysorbate 80 is a type of bulking agent.
Synthetic yellow iron oxide is an artificial food coloring. Titanium dioxide is a powder made of fine
titanium bits. Triacetin is a type of fat.
The oral solution form of abacavir contains the following inactive ingredients: artificial strawberry and
banana flavors, methylparaben and propylparaben (types of preservatives), citric acid as an acidic
flavoring agent, water, sorbitol (a type of sugar) solution, propylene glycol, saccharin sodium (a type of
sweetener), and sodium citrate. Propylene glycol is a substance that helps maintain water in
medications. Sodium citrate is a substance that helps alter and control acid levels.
DOES ABACAVIR NEED TO BE TAKEN WITH FOOD?
No, abacavir does not need to be taken with food because food does not affect how it is absorbed.
HOW DOES ABACAVIR INTERACT WITH ALCOHOL?
Abacavir does not interfere with the elimination of alcohol, at least in males. However, alcohol interferes
with the elimination of abacavir from the body in males. This can lead to increased levels of abacavir in
the body, which can increase side effects. This knowledge is based on research combining 600 mg of
abacavir with 5 alcoholic drinks in males, showing a 26% increase of abacavir in the body. Interactions
between alcohol and abacavir have not been studied in females.
DOES ABACAVIR IMPAIR FERTILITY?
Abacavir did not impair fertility based on a rat study in which rats were administered the medication at
about 8 times the human exposure level. It is not known if these results generalize to humans.
DOES ABACAVIR CAUSE CANCER?
Abacavir has been associated with increased rates of cancer in rats that were administered the medication over two years at three different dosage levels. However, the doses were 6 to 32 times the human exposure at the recommended dose. Some of the cancer types were malignant (likely to spread and
invade other tissues) whereas other types were not malignant (benign). Malignant cancer was located in
male and female reproductive areas and in the liver of female rats. Benign cancer was located in the
liver and thyroid gland of female rats. The thyroid gland is a butterfly-shaped organ located in front of the
neck that produces a natural chemical known as hormones that affect virtually every cell in the body and
many functions such as disease fighting, heart rate, energy level, and skin condition.
DOES ABACAVIR CAUSE MUTATIONS?
Abacavir has caused mutations in chromosomes in a study of human lymphocytes. Chromosomes are
microscopic structures in cells that transmit genetic information. Abacavir has caused structural changes
in bone marrow for male mice but not female mice.
HOW IS ABACAVIR METABOLIZED?
Abacavir is partly metabolized (chemically transformed) by the alcohol-dehydrogenase enzyme system in
the liver, which is mainly responsible for breaking down alcohol. It is also metabolized by an enzyme
known as glucuronosyltransferase. The substances that result when abacavir is metabolized (which are
called metabolites) are known as 5'-carboxylic acid and 5'-glucuronide. These metabolites do not help in fighting HIV.
WHAT IS THE HALF-LIFE OF ABACAVIR?
The half-life of abacavir is 1.5 hours. The half-life is the time that it takes the drug to fall to half of its
original amount in the body. 1.2% of abacavir is excreted in the urine as unchanged abacavir, 30% is excreted in the urine as the metabolite 5'-carboxylic acid, 36% is excreted in the urine as 5'-glucuronide, 15% is
excreted as unidentified metabolites in the urine, and 16% is eliminated in the feces. The half-life of
abacavir is known to be increased in patients with mild liver impairment. The rate by which abacavir
metabolites were formed and eliminated was decreased in patients with mild liver impairment.
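A 1.5-hour half-life implies a simple exponential decline of drug level over time. As a sketch (assuming the standard first-order decay model that the term "half-life" implies; real pharmacokinetics also involve absorption and distribution):

```python
# Fraction of an abacavir dose remaining after `hours`, assuming ideal
# first-order elimination with the 1.5 h half-life stated in the text.

def fraction_remaining(hours, half_life_h=1.5):
    return 0.5 ** (hours / half_life_h)

# After one half-life (1.5 h), half remains; after four half-lives (6 h),
# only 1/16 of the original amount remains.
```

This is why the document recommends taking the drug at about the same time each day: with such a short half-life, steady levels depend on regular dosing.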
WHAT ARE SOME OTHER CHEMICAL PROPERTIES OF ABACAVIR?
Abacavir has a plasma protein binding of 50%. Plasma protein binding reflects the degree to which a
drug binds to proteins in the blood. The less bound a drug is, the more it can exert its effect on the body.
83% of the drug is available in the body’s circulation when administered by non-intravenous routes.
Overall, abacavir is rapidly and extensively absorbed by the body after oral administration. It distributes
easily into red blood cells. Red blood cells (RBCs) are cells that circulate in the blood that specialize in
delivering oxygen to the body’s tissues.
WHEN WAS ABACAVIR APPROVED AND WHEN DID THE PATENT EXPIRE?
Abacavir was approved by the FDA on 12/18/98 as the fifteenth antiretroviral drug (a type of medication that fights retroviruses such as HIV) in the U.S. The patent on the drug expired on 12/26/09. Some laboratory versions of HIV are resistant to abacavir. |
In English the letter Q has two sounds and is always followed by the letter U.
For most English words the pronunciation of the qu is actually a combination of the K and the W. But there are a few words where the qu sounds like a K without the W.
Note that the examples are in three columns. The first column provides an example of the sound when it is word initial (at the beginning of the word). The second column provides an example of the sound when it is word internal (in the middle of the word). The third column provides examples of the sound when it is word final (at the end of the word).
Click on the sample words to listen to the sound files. Pay attention to the sound of the letters in bold.
*IPA means International Phonetic Alphabet. Learn more about phonetics and the IPA. |
Dear Parents and Guardians:
One or more students at our school have been diagnosed with strep throat. Please take these precautions:
- Watch your child for signs of sore throat and other signs of strep (headache, fever, stomach ache, rash, swollen/tender glands in neck).
- If your child develops a sore throat and any of these signs, please see your healthcare provider.
- Please let the school know if your child has been diagnosed with strep infection.
Information about strep throat:
What is it? Strep throat is an infection caused by streptococcus bacteria. People with strep throat usually have a red, painful throat, often with fever, and sometimes with headache, abdominal pain, and nausea and/or vomiting. Most sore throats, however, are caused by viruses and are not treated with antibiotics.
How do you get strep throat? Strep throat can affect persons of any age but is most common in children. The bacterium is spread through direct contact and respiratory droplets and is easily spread in households. It takes 2-5 days after exposure to become ill. People with strep throat are generally most infectious when they are sick. They continue to be infectious until they have been on an antibiotic for 24 hours.
How is it treated? Strep infections are usually treated with an oral antibiotic.
Why is it important that your child receive treatment?
- Treatment reduces the spread of the strep infection.
- Treatment with antibiotics can usually prevent rheumatic fever or other rare complications.
When can your child come back to school? Children with strep infections may return to school after 24 hours on their antibiotic.
How do you stop the spread of strep throat?
- Thoroughly wash your hands and your child’s hands after wiping noses and before eating or preparing food.
- Wash dishes carefully in hot, soapy water or in a dishwasher.
- Do not allow the sharing of food or allow children to share cups, spoons, or toys that are put in the mouth.
If you have any questions, please contact me at the school office at 364-2531 or by email at [email protected].
Thank you, Angie Smith, RN ~ School Nurse
In the 19th century, advances in medical science and the Industrial Revolution produced a population boom, and with it a much greater demand for food. At the same time, bird feathers were in demand for making expensive women's clothing. New hunting methods were therefore adopted to meet this demand, and one such innovation was the punt gun.
In essence, it was an oversized shotgun with a bore of about 2 inches that could fire over a pound of shot at a time. A single discharge could kill as many as 90 waterfowl, though around 50 birds per shot was more typical on the open water of the lakes and rivers where it was used. The gun was not mass produced but custom made, and as a result it was expensive. Its one big problem was its powerful recoil, which is why it was usually mounted on a punt, the flat-bottomed boat from which the gun takes its name; when the gun was fired, the recoil pushed the boat back several inches. Public outcry over the number of waterfowl killed led many US states to ban the gun by the 1860s. The Lacey Act of 1900 then prohibited the interstate transport of illegally taken game, and in 1918 hunting waterfowl for the food market was banned altogether.
10 Interesting Facts About Crucifixion
Crucifixion is arguably the cruelest form of execution. When we read ancient sources, it is hard to distinguish the practice of crucifixion from other similar punishments like impalement.
The Romans learned it from their neighbors and used it especially in the provinces, mostly to discipline their subjects and discourage rebellions. Little did the Romans imagine that the crucifixion of a humble Jew in a lost corner of their territory would give crucifixion enduring fame.
10 Crucifixion In Persia
Many ancient rulers used crucifixion to send a message to their subjects about the things they should not be doing. During the reign of Persian king Darius I (r. 522–486 BC), the city of Babylon expelled its Persian authorities and revolted around 522–521 BC.
Darius launched a campaign to recapture Babylon and laid siege to the city. The gates and walls of Babylon held for 19 months until the Persians broke the defenses and stormed the city.
Herodotus (Histories 3.159) reports that Darius stripped away the wall of Babylon and tore down all its gates. The city was returned to the Babylonians, but Darius decided to send a message that revolts would not be tolerated by crucifying 3,000 of the highest-ranking Babylonians.
9 Crucifixion In Greece
In 332 BC, Alexander the Great captured the Phoenician city of Tyre, which was being used as a naval base by the Persians. This was accomplished after a long siege that lasted from January until July.
After Alexander’s army broke the defenses, the Tyrian army was defeated and some ancient sources claim that 6,000 men were killed that day. Based on Greek sources, the ancient Roman writers Diodorus and Quintus Curtius reported that Alexander ordered the crucifixion of 2,000 survivors of military age along the beach.
8 Crucifixion In Rome
Crucifixion was not a general form of capital punishment under Roman law. It was only allowed under specific circumstances. Slaves could be crucified only for robbery or rebellion.
Roman citizens were immune to crucifixion unless they were found guilty of high treason. However, during later imperial times, humble citizens could be crucified for specific crimes. In the provinces, the Romans employed crucifixion to punish what they referred to as “unruly” people who were sentenced for robbery and other types of crimes (Metzger and Coogan 1993: 141–142).
7 Spartacus’s Revolt
Spartacus, a Roman slave of Thracian origin, escaped from a gladiator training camp in Capua in 73 BC and took about 78 other slaves with him. Spartacus and his men exploited the pathological concentration of wealth and social injustice of Roman society by recruiting thousands of other slaves and destitute country folks. He eventually built an army that defied Rome’s military machine for two years.
Roman General Crassus ended the revolt, which was the setting for one of the most famous cases of mass crucifixion in Roman history. Spartacus was killed, and his men were defeated. The survivors, more than 6,000 slaves, were crucified along the Via Appia, the road between Rome and Capua.
6 Crucifixion In The Jewish Tradition
Although the practice of crucifixion is not explicitly mentioned in the Hebrew Bible as a Jewish form of punishment, it is suggested in Deuteronomy 21.22–23: “And if a man have committed a sin worthy of death, and he be to be put to death, and thou hang him on a tree: his body shall not remain all night upon the tree, but thou shalt in any wise bury him that day.”
In ancient rabbinic literature (Mishnah Sanhedrin 6.4), this was interpreted as the exposure of the body after the person was killed. But this view contradicts what is written in the ancient Temple Scroll of Qumran (64.8), which says that an Israelite who commits high treason must be hanged so that he dies.
Jewish history records a number of crucifixion victims. Perhaps the most notable is reported by the ancient Jewish writer Josephus (Antiquities 13.14): The king of Judaea Alexander Jannaeus (126–76 BC) crucified 800 Jewish political enemies who were considered to have committed high treason.
5 The Position Of The Nails
The idea that the nails pierce the victim’s palms is the dominant image we get from painters and sculptors who have represented the crucifixion of Jesus. Today, we know that nails through the palms are unable to support the body weight and likely to strip out between the fingers.
Therefore, it is possible that the upper limbs of the victim were tied with ropes to the crossbeam to provide additional support. There is, however, a simpler solution. The nails could be inserted between the ulna and the radius rather than the palms. The bones and tendons of the wrist are strong enough to hold the weight of the body.
The only problem with piercing the wrists is that it contradicts the description of Jesus's injuries in the gospels. For example, in Luke 24:39, the risen Jesus shows the disciples his pierced hands. Many scholars have tried to explain this contradiction with boring and predictable claims about errors in translation.
The reality is that none of the authors of the gospels had been direct witnesses of the events. The earliest of the gospels, the Gospel of Mark, dates to c. AD 60–70, about a generation after Jesus’s crucifixion, so it is not reasonable to expect a high degree of accuracy in such details.
4 Roman Method
There was not a standard way of conducting a crucifixion. The general practice in the Roman world involved a first stage where the condemned was flagellated. Literary sources suggest that the condemned did not carry the whole cross. He only had to carry the crossbeam to the place of crucifixion, where a stake fixed to the ground was used for multiple executions.
This was both practical and cost-effective. According to the ancient Jewish historian Josephus, wood was a scarce commodity in Jerusalem and its vicinity during the first century AD.
The condemned was then stripped and attached to the crossbeam with nails and cords. The beam was drawn by ropes until the feet were off the ground. Sometimes, the feet were also tied or nailed.
If the condemned was able to endure the torture for too long, the executioners could break his legs to accelerate death. The Gospel of John (19.33–34) mentions that a Roman soldier pierced the side of Jesus while He was on the cross, a practice to ensure that the condemned was dead.
3 Causes Of Death
In some cases, the condemned could die during the flagellation stage, especially when bone parts or lead were added to the whips. If the crucifixion occurred on a hot day, the loss of fluid from sweating coupled with the loss of blood from the flagellation and injuries could lead to death from hypovolemic shock. If the execution occurred on a cold day, the condemned could die from hypothermia.
Neither the traumas caused by the nail injuries nor the bleeding were the prime causes of death. The position of the body during the crucifixion produced a gradual and painful process of asphyxiation. The diaphragm and intercostal muscles involved in the breathing process would become weak and exhausted. Given enough time, the victim was simply unable to breathe. Breaking the legs was a way to accelerate this process.
2 Forensic Evidence
Analysis of the bones of a crucifixion victim published in the Israel Exploration Journal has revealed a form of crucifixion that is rarely displayed on paintings or mentioned in literary sources. In this case, the bone injuries showed that the nails penetrated the side of the heel bone.
Rather than the traditional position of the legs that we see in many depictions of crucifixion victims, the study suggests that “the victim’s legs straddled the vertical shaft of the cross, one leg on either side, with the nails penetrating the heel bones.”
This study also explains why the remains of crucifixion victims are sometimes found with the nails. Apparently, the condemned man’s family found it impossible to remove the nails, which were normally bent due to the hammering, without destroying the heel bone. “This reluctance to inflict further damage to the heel led [to his burial with the nail still in his bone, and this, in turn, led] to the eventual discovery of the crucifixion.”
1 Abolition By Emperor Constantine
Under the Romans, Christianity underwent a surprising transformation. It started as an offshoot of the Jewish religion, turned into an outlaw cult, became a tolerated religious expression, developed into a state-sponsored faith, and finally became the hegemonic religion of the late Roman Empire.
The Roman emperor Constantine the Great (AD 272–337) proclaimed the Edict of Milan in AD 313, decreeing the tolerance of the Christian faith and granting Christians full legal rights. This crucial step helped Christianity become the official Roman state religion.
After centuries of practicing crucifixion as a torture and execution method, Emperor Constantine abolished it in AD 337, motivated by his veneration for Jesus Christ.
From the Guidebook:
Here, on January 8, 1863,
Governor Leland Stanford turned the first spade of earth
to begin construction of the Central Pacific Railroad.
After more than six years of labor, crews of the Central
Pacific Railroad from the west and the Union Pacific Railroad
from the east met at Promontory, Utah where, on May 10,
1869, Stanford drove the gold spike signifying completion
of the First Transcontinental Railroad. The Central Pacific
Railroad, forerunner of the Southern Pacific Company,
was planned by Theodore D. Judah and constructed largely
through the efforts of the 'Big Four'-Sacramento businessmen
Leland Stanford, Collis P. Huntington, Charles Crocker,
and Mark Hopkins.
Providing an over-arching framework for global education is critical to the future of young people. Therefore, the Global Studies program was developed using resources such as strategic elements of forecast models focusing on innovation and retooling academic programs, reports and surveys on career and workplace trends, and NAIS' independent school global education survey. Several major themes emerged, including the following:
- Independent schools have been most clearly successful at integrating global education programs when their decisions have been driven by the mission and core values of the school.
- Schools identify a number of characteristics of a “globally educated student” that include: specific content knowledge, a commitment to service of others, fluency in one or more world languages, orientation to a new paradigm of leadership and work style, international experience, and skills to analyze and solve problems collaboratively with people of diverse backgrounds.
- Leading in a global age requires educators to work with students today with an eye on what they will need tomorrow. School leaders are committed to providing all students with a learning environment where they are challenged in age-appropriate ways to use their knowledge in innovative, practical, and flexible contexts in preparation for the future.
- Educators are experiencing a paradigm shift of purpose. The traditional goal of college preparation, historically seen as the quintessential outcome of an excellent high school education, is no longer enough. Increasingly, educators believe that global education provides the baseline for students who will live and work in a world that will make demands very different from those which their parents faced as young professionals.
The Global Studies program is supported by the principles of good practice for global citizenship articulated by NAIS, which include the following practices and outcomes for students:
- Habits of mind that invite and reward curiosity concerning the richness and diversity of all human societies.
- A curriculum that helps students recognize how differing cultures, traditions, histories, and religions may underlie views and values that can sharply contrast with their own.
- Outreach beyond the institution (the school) itself to form partnerships and networks that give students a way to come to know one another through active engagement on projects and through communication.
- Opportunities for students to practice newly-acquired skills by engaging directly with students from other countries through exchanges, international service learning, and other projects.
Mathematics offers a large number of equations and formulas for different kinds of problems, and many of these theorems and equations are also useful in real-world applications. Among the easiest to use, and essential for learning the concept, is the parametric equation. If working through such equations manually proves too complex, you can also use an online tool such as a parametric equation calculator. Although many online calculators are available, this kind of tool serves a specific purpose, with its own methods and equations.
To use a parametric equations calculator, you first need to know exactly what the term means. In mathematics, "parametric" describes techniques that introduce an extra, independent variable, known as a parameter, to make an equation work. A parametric equation defines a collection of quantities (treated as functions) of that parameter, and it is most often used to express the coordinates of the points that make up a geometric object. To get a clear picture of the term, consider the parametric equations of a circle of radius r, defined by two equations:

x = r cos(t)
y = r sin(t)
In these equations, t is the parameter: a variable that is not itself part of the circle, but whose values generate the (x, y) pairs that trace it for a given radius r. Equations like these can be written for any geometric shape, and they can be entered directly into a parametric equations calculator.
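As a quick illustration (a minimal sketch, not part of any particular calculator), the circle equations above can be evaluated in Python by sweeping the parameter t; every generated point then satisfies the circle's implicit equation x² + y² = r²:

```python
import math

def circle_points(r, n=8):
    """Generate (x, y) points on a circle of radius r by sweeping the parameter t."""
    points = []
    for k in range(n):
        t = 2 * math.pi * k / n  # the parameter t is not itself part of the circle
        x = r * math.cos(t)
        y = r * math.sin(t)
        points.append((x, y))
    return points

# Every generated pair lies on the circle: x^2 + y^2 = r^2.
for x, y in circle_points(3.0):
    assert math.isclose(x * x + y * y, 9.0, abs_tol=1e-9)
```

Note that t never appears in the output: it only drives the generation of the coordinate pairs, which is exactly the role of a parameter.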
A few standard steps are required when using a parametric equation calculator.
The graph of the output appears in a separate window of the parametric equation solver.
Because it converts the standard form of an equation into this form, the tool also works as a parametric form calculator, describing, for example, a circle's circumference in terms of the variable t. The conversion process may seem complex at first, but with a parametric equation calculator it becomes a simple, quick procedure.
After converting a function to parametric form, you can also reverse the process by eliminating the parameter. This is sometimes called transformation: to convert the parametric equations back to a normal one, you remove the parameter t that was introduced to generate the coordinate pairs. To do the elimination, first solve x = f(t) for t, then substitute that expression into y = g(t). The output is a normal function in which y depends only on x, and it can be viewed in the separate window of the parametric equation solver.
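The elimination procedure can be sketched numerically for a hypothetical pair of equations (x = 2t + 1, y = t², chosen here purely as an example): solving for t gives t = (x − 1)/2, and substituting into y yields the normal form y(x) = ((x − 1)/2)².

```python
import math

# Hypothetical parametric pair: x = 2t + 1, y = t^2.
def x_of(t):
    return 2 * t + 1

def y_of(t):
    return t * t

# Step 1: solve x = 2t + 1 for the parameter: t = (x - 1) / 2.
# Step 2: substitute into y = t^2 to obtain the eliminated, normal form y(x).
def y_of_x(x):
    t = (x - 1) / 2
    return t * t

# The eliminated form agrees with the parametric pair for any value of t.
for t in [-2.0, -0.5, 0.0, 1.0, 3.7]:
    assert math.isclose(y_of(t), y_of_x(x_of(t)))
```

The same two steps (solve for t, substitute) are what a parametric equation calculator automates symbolically.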
Moreover, the parametric representation calculator displays a graph of the given input along with the calculated output; after the standard form has been converted, the curve appears in graphical form in a separate window. This form of calculator is useful whenever the parametric form of a standard function is needed.
Its primary purpose, however, is to find coordinates: it plots the coordinate points generated from the given parametric input.
The Chilko sockeye is the Michael Phelps of salmon. Once a year, it swims 650 kilometers up British Columbia’s Fraser River, fighting rapids and strong currents, to reach a spot where it can lay and fertilize its eggs. New research reveals that the fish is well-adapted to this journey—it has a bigger, better heart and uses oxygen more efficiently than do other local salmon. And thanks to these attributes, the Chilko sockeye may be more likely than these other fish to survive a warming world.
Not all of the Fraser River’s salmon swim as far as the Chilko sockeye. Some stay relatively close to the coast; others swim a bit farther upstream to spawn. There are so many different migration distances that the fish have split into 100 distinct populations—one of which is the Chilko—with different swimming behaviors and body types.
Hot summers take a toll on these migrations. In 2004, for example, 80% of some salmon populations died of heat stress before reaching their spawning destinations. Water temperatures in the Fraser River have risen 2°C in the past 60 years, and, with global warming, researchers expect even bigger die-offs to come. Erika Eliason, a graduate student at the University of British Columbia, Vancouver, in Canada, wanted to know whether different populations of Fraser River salmon were better than others at handling the heat.
Over three summers, she caught 97 salmon heading upstream and gave them stress tests in a portable “fish treadmill” mounted on a boat trailer. With the fish in this enclosed tank, she could increase the current speed of the water and raise the water temperature from 8°C to 26°C to determine how well the fish swam at different temperatures. At the same time, Eliason monitored the amount of oxygen in the water to learn how well their bodies were utilizing oxygen, a measure of athletic ability. She then dissected some fish to look at their hearts. All together, she studied individuals from eight populations.
The Chilko sockeye was the most versatile. It swam best at 17°C, a moderate river temperature, but it could cope with the hottest water tested, 26°C. The Weaver sockeye, which spawns downstream of the river’s big series of rapids, collapsed in water above 21°C. Its heart was smaller and had a poorer blood supply than did the hearts of Chilko and other salmon populations that had to fight the rapids, Eliason and her colleagues report online today in Science. Moreover, Chilko hearts appear to be more sensitive to adrenaline, making it easier for them to keep going when overheating. “I was surprised at how much variation there was between populations,” Eliason says.
“The message is pretty clear that these sockeye salmon are highly adapted to the energetic demands of their upstream migration,” says Brian Riddell, a fisheries scientist who heads the Pacific Salmon Foundation in Vancouver. “I am continually amazed at how well adapted these animals are to their environment.”
Evolutionary biologist Michael Kinnison of the University of Maine, Orono, says it’s useful to know which populations are most vulnerable to climate change. The Chilko sockeye may do okay, but the Weaver sockeye will likely have a much harder time surviving if the river continues to warm. Still, these vulnerable fish may have other traits—such as disease resistance—that might be valuable under other conditions. So Riddell is calling for protection of all of the Fraser River’s salmon populations. That may mean restricting fishing during unusually hot summers to reduce the stress on these fish, Eliason says.
But ecologist Thomas Quinn of the University of Washington (UW), Seattle, questions whether climate change is threatening salmon as much as some experts think and therefore says corrective measures might not be necessary. Aquatic biologist Daniel Schindler of UW agrees, saying that salmon may change the timing of when they go upriver to spawn to avoid extreme river temperatures.
To determine the value of i raised to a power greater than two, we rewrite the term using exponent rules. Remember that i^2 = -1 and i^4 = 1. Therefore, any exponent of i that is a multiple of four will equal one, and any even exponent not divisible by four will equal negative one. Also, negative exponents indicate a reciprocal of the base; if i is in the denominator, it will need to be rationalized.
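These rules amount to reducing the exponent modulo 4, since the powers of i cycle through four values. A short sketch (using Python's built-in complex literal 1j for i):

```python
# Powers of i cycle with period 4: i^0 = 1, i^1 = i, i^2 = -1, i^3 = -i.
CYCLE = [1, 1j, -1, -1j]

def power_of_i(n):
    """Reduce i**n using the exponent modulo 4. Works for negative n too,
    since Python's % operator always returns a result in 0..3."""
    return CYCLE[n % 4]

assert power_of_i(2) == -1    # i^2 = -1
assert power_of_i(4) == 1     # multiples of four give 1
assert power_of_i(6) == -1    # even but not divisible by four
assert power_of_i(-1) == -1j  # negative exponent: 1/i = -i after rationalizing
assert abs(power_of_i(7) - (1j) ** 7) < 1e-12
```

The `n % 4` reduction is exactly the by-hand procedure: strip off as many factors of i^4 = 1 as possible, then read off the remainder.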
Children with achondroplasia can lead normal lives provided they receive appropriate care and follow-up by knowledgeable providers.
What is Achondroplasia?
Achondroplasia is the most common form of short-limb dwarfism. It is an autosomal dominant disorder caused by a mutation in the FGFR3 gene (fibroblast growth factor receptor 3), which disrupts the process by which cartilage is converted to bone. Because the disorder is dominant, a child who inherits the mutated gene from just one parent will have achondroplasia. However, over 80% of individuals born with this disorder are born to parents who do not have it, through new mutations. It affects mainly the long bones. As a result, individuals who have achondroplasia have short limbs but normal trunk height, and a large head with a prominent forehead.
What are the Symptoms of Achondroplasia?
During pregnancy, a prenatal ultrasound which shows excess amniotic fluid and abnormal bone length measurements may be suspicious for achondroplasia. However, the diagnosis is usually made through physical examination of the infant after birth and through utilization of x-rays and ultrasound. Characteristic features of an infant with achondroplasia include:
- Disproportionately large head-to-body size difference with shortened arms and legs (especially the upper arm and thigh)
- Prominent forehead (frontal bossing) and depressed nasal bridge
- Underdeveloped midface and relative jawbone prominence
- Underdeveloped cheekbone resulting in tooth crowding
- Short-appearing fingers, with a separation between the middle and ring fingers giving the hand a three-pronged (trident) appearance
- Limited elbow extension and rotation as well as limited hip extension
- Decreased muscle tone (hypotonicity)
- Often prominence of the mid-to-lower back with a small hump (gibbus)
Other signs and symptoms of achondroplasia which may develop over time include:
- Short stature (significantly below the average height for a person of the same age and sex). The average height of an adult with achondroplasia is 131 cm (52 inches, or 4 foot 4 inches) in males and 124 cm (49 inches, or 4 foot 1 inch) in females.
- Bowed legs (genu varum)
- Spine curvatures called kyphosis and lordosis
- Delay in reaching developmental milestones
- Delay in walking independently until 2-3 years of age
- Difficulty with speech because of a tongue thrust, but this usually resolves by school age
- Normal intelligence
- Obesity is common
- 10% of affected individuals have respiratory problems
- Narrowing of the spinal cord canal which can cause compression on the spinal nerves (spinal stenosis)
Treatment for Achondroplasia
Although the cause of achondroplasia is known, there is currently no known treatment for the underlying condition itself. Human growth hormone has been used to treat other types of dwarfism but has not proven beneficial for patients with achondroplasia. Overall, most treatment involves prevention and treatment of complications related to achondroplasia.
Babies with achondroplasia need to be monitored for problems with too much fluid on the brain (hydrocephalus) and may require a shunt to drain the fluid. Similarly, some babies may need the base of the skull (foramen magnum) to be surgically enlarged to prevent spinal cord compression. It is important children with achondroplasia receive timely dental care to prevent tooth overcrowding. Treating and preventing ear infections to prevent long-term hearing loss is critical. Limb-lengthening is a controversial treatment to increase the overall height and limb length of patients with achondroplasia. Preventing obesity, to reduce joint and back problems, is also important. Some patients may require a laminectomy for spinal stenosis as young adults.
When to Refer?
- Children with achondroplasia should be referred to the Orthopedic Department within 3 months.
French is an international language, spoken around the world. An international education, such as we provide here, allows children to learn about other countries and cultures. At FISW, we promote global awareness by cultivating a diverse student body, promoting tolerance, and celebrating world holidays.
Students are immersed in the French language at least 75% of the time and receive a rigorous, complete, challenging education conforming to the French national curriculum in French, math, science, history and geography, art, music, and physical education. Immersion in another language gives children the opportunity to become bilingual in a natural and effective way. The children learn while acting on concepts, creating, listening, actively participating, and learning new skills.
The ability to understand and speak French is only one benefit of our bilingual program. Research shows that students who are educated in a second language, particularly those who learn in an immersion setting, demonstrate increased mental flexibility and creative thinking. Bilingual students are also better able to analyze language. Because they learn that there are at least two ways to say the same thing, they have a greater understanding of the relationship between words and meaning. They also have a greater ability to focus, taking into account only relevant pieces of information.
Through curriculum content and exposure to cultural differences, bilingual students also learn to respect differences between people and cultures.
There are benefits to introducing another language early and maintaining it through a lifetime, as compiled by the news editor for Psychology Today in his article "Of Two Minds":
• "Languages learned before the age of 5 are represented differently in the brain than are later languages. For example, they trigger sensory associations more actively." Proverbio, Alice Mado, Adorni, Roberta, and Zani, Alberto, www.sciencedirect.com, "Inferring native language from early bio-electrical activity"
• "Learning a second language can help you out decades down the road. On average, lifelong bilinguals incur dementia four years later than others..." Bialystok, Ellen, Craik, Fergus I. M., and Freedman, Morris. www.sciencedirect.com, "Bilingualism as a protection against the onset of symptoms of dementia"
Targeting tumors with toxins
Arsenic is a naturally occurring toxin found in groundwater that is absorbed into what we eat and drink – including foods such as rice and apple juice – and New Mexico has some of the highest concentrations of the metallic mineral in the U.S.
It’s considered a co-carcinogen, because it promotes the activity of other cancer-causing agents. Despite this, UNM researcher Jim Liu, PhD, thinks arsenic has potential as an anti-cancer treatment.
Liu, associate dean for research and professor of pharmaceutical sciences in the UNM College of Pharmacy, started out wondering about arsenic’s cancer-promoting properties. He teamed up with fellow faculty member Laurie Hudson, PhD, to establish that at environmental doses arsenic is a more potent co-carcinogen than a carcinogen by itself.
It turns out that arsenic helps drive cancer development when ultraviolet radiation is absorbed through the skin via exposure to sunlight. So when someone who has ingested arsenic receives a lot of UV radiation (as is often the case in New Mexico), their cells may suffer DNA damage, triggering cancer.
In effect, Liu says, arsenic has a synergistic effect, amplifying the likelihood of carcinogenesis compared to when UV radiation acts alone.
But there’s a bright spot in this picture.
“We have a protection system,” Liu says. “Not everyone gets cancer.” Cells have mechanisms to repair damaged DNA and block the development of tumors. So Liu and Hudson decided to figure out how arsenic interferes with that DNA repair process.
They found that arsenic replaces zinc in a DNA-repairing protein known as poly[ADP-ribose] polymerase 1, hampering its efficacy. This allows UV-induced DNA damage to accumulate in the tissue, increasing cancer risk.
But this process can be flipped around to attack tumor cells in patients undergoing radiation treatment for their cancers.
Radiation therapy often fails when tumor cells resist the treatment by repairing their own DNA, allowing them to survive. According to researchers at the Department of Physics at Oslo University, 80 percent of cancer cell lines assessed in laboratories are sensitive to radiation therapy, but the remainder resist the treatment.
Liu and Hudson reasoned that if they could deliver arsenic to tumor cells being targeted with radiation their DNA repair mechanisms would fail, leading to increased tumor cell death.
“Radiation combined with arsenic is localized into one area,” Liu explains. “Arsenic is all through the body, but effective in one area.”
Liu’s studies show the combined therapy may shrink tumors more effectively than radiation alone and eventually eliminate cancer. Meanwhile, the remaining arsenic in the body is eliminated through the urine in a few days following injection.
Figuring out how to enlist a potentially harmful substance like arsenic for use as an anti-cancer treatment is part of the scientific process. Liu smiles: “Everything you do leads to something more interesting.”
The properties of a solid depend on the arrangement of its atoms, which form a periodic crystal structure. At the nanoscale, arrangements that break this periodic structure can drastically alter the behavior of the material, but this is difficult to measure. Recent advances by scientists at the U.S. Department of Energy's (DOE) Argonne National Laboratory are starting to unravel this mystery.
Using state-of-the-art neutron and synchrotron X-ray scattering, Argonne scientists and their collaborators are helping to answer long-held questions about a technologically important class of materials called relaxor ferroelectrics, which are often lead-based. These materials have mechanical and electrical properties that are useful in applications such as sonar and ultrasound. The more scientists understand about the internal structure of relaxor ferroelectrics, the better the materials that can be developed for these and other applications.
The dielectric constants of relaxor ferroelectrics, which express their ability to store energy in an electric field, have an unusual dependence on the frequency of the field, and the origin of this dependence has long been a mystery to scientists. Relaxor ferroelectrics can also have exceedingly high piezoelectric properties, which means that when mechanically strained they develop an internal electric field or, conversely, they expand or contract in the presence of an external electric field. These properties make relaxor ferroelectrics useful in technologies where energy must be converted between mechanical and electrical forms.
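The converse piezoelectric effect described above (expansion or contraction in an external field) can be illustrated with a back-of-the-envelope calculation. In one dimension, strain ≈ d33 × E, where d33 is the piezoelectric coefficient. The d33 value below is only an order-of-magnitude placeholder typical of relaxor-based crystals, not a figure from this work:

```python
# Sketch of the converse piezoelectric effect: strain produced by an
# applied electric field, strain = d33 * E (1-D approximation).
# The d33 value is an order-of-magnitude placeholder for a relaxor-based
# crystal (e.g. PMN-PT class); real values vary with composition.

d33 = 2000e-12   # piezoelectric coefficient, m/V (~2000 pm/V)
E = 1e6          # applied electric field, V/m (1 kV/mm)

strain = d33 * E                  # dimensionless strain, ~2e-3
print(f"strain = {strain:.2e}")

# For a 10 mm thick actuator, the resulting displacement:
thickness = 10e-3                 # m
displacement = strain * thickness
print(f"displacement = {displacement * 1e6:.1f} um")
```

A strain of roughly 0.2% from a modest field is what makes these crystals attractive for sonar and ultrasound transducers.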
Because lead is toxic, scientists are trying to develop non-lead-based materials that can perform even better than the lead-based ferroelectrics. To develop these materials, scientists are first trying to uncover what aspects of the relaxor ferroelectric's crystal structure cause its unique properties. Although the structure is orderly and predictable on average, deviations from this order can occur on a local, or nanoscale level. These breaks in the long-range symmetry of the overall structure play a crucial role in determining the material's properties.
"We understand the long-range order very well, but for this experiment we developed novel tools and methods to study the local order," said Argonne senior physicist Stephan Rosenkranz.
Scientists from Argonne and the National Institute of Standards and Technology, along with their collaborators, studied a series of lead-based ferroelectrics with different local orders, and therefore different properties. Using new instrumentation designed by Argonne scientists that is able to provide a much larger and more detailed measurement than previous instruments, the team studied the diffuse scattering of the materials, or how the local deviations in structure affect the otherwise more orderly scattering pattern.
Previous researchers have identified a certain diffuse scattering pattern, which takes the shape of a butterfly, and associated it with the anomalous dielectric properties of relaxor ferroelectrics. When Argonne scientists analyzed their experimental data, however, they found that the butterfly-shaped scattering was strongly correlated with piezoelectric behavior.
"Now we can think about what kind of local order causes this butterfly scattering, and how can we design materials that have the same structural features that give rise to this effect," said Argonne physicist Danny Phelan.
As for the real cause of the anomalous dielectric properties, the scientists propose that it arises from competing interactions that lead to "frustration" in the material.
The new discoveries stemmed from the scientists' use of both neutron scattering and X-ray scattering. "There is invaluable complementarity to using both of these techniques," said Phelan. "Using one or the other doesn't give you the whole picture."
The scientists will use these discoveries to inform models of relaxor ferroelectrics that are used to develop new materials. Future experiments will further illuminate the relationship between local order and material properties.
M. J. Krogstad et al, The relation of local order to material properties in relaxor ferroelectrics, Nature Materials (2018). DOI: 10.1038/s41563-018-0112-7 |
The idea of using thermal mass materials -- materials that have the capacity to store heat -- to store solar energy is applicable to more than just large-scale solar thermal power plants and storage facilities. The idea can work in something as commonplace as a greenhouse.
All greenhouses trap solar energy during the day, usually with the benefit of south-facing placement and a sloping roof to maximize sun exposure. But once the sun goes down, what's a grower to do? Solar thermal greenhouses are able to retain that thermal heat and use it to warm the greenhouse at night.
Stones, cement and water or water-filled barrels can all be used as simple, passive thermal mass materials (heat sinks), capturing the sun's heat during the day and radiating it back at night.
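The heat a passive thermal mass can store follows Q = m·c·ΔT. A minimal sketch comparing water barrels with the same mass of stone; the masses and the 10 °C day/night temperature swing are illustrative assumptions, not figures from the text:

```python
# Estimate heat stored in a passive thermal mass: Q = m * c * dT.
# Masses and the 10 C temperature swing are illustrative assumptions.

SPECIFIC_HEAT = {"water": 4186, "stone": 840}  # J/(kg*K), approximate

def heat_stored(material: str, mass_kg: float, delta_t: float) -> float:
    """Energy (J) absorbed by mass_kg of material warming by delta_t kelvin."""
    return SPECIFIC_HEAT[material] * mass_kg * delta_t

# Four 200 L water barrels (~800 kg) vs. the same mass of stone,
# each warming 10 C during the day:
q_water = heat_stored("water", 800, 10)
q_stone = heat_stored("stone", 800, 10)
print(f"water barrels: {q_water / 3.6e6:.1f} kWh stored")
print(f"stone:         {q_stone / 3.6e6:.1f} kWh stored")
```

Water's high specific heat is why water-filled barrels are such an effective heat sink kilogram for kilogram.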
Bigger aspirations? Apply the same ideas used in solar thermal power plants (although on a much smaller level) and you're on your way to year-round growing. Solar thermal greenhouses, also called active solar greenhouses, require the same basics as any other solar thermal system: a solar collector, a water storage tank, tubing or piping (buried in the floor), a pump to move the heat-transfer medium (air or water) in the solar collector to storage and electricity (or another power source) to power the pump.
In one scenario, air that collects in the peak of the greenhouse roof is drawn down through pipes and under the floor. During the day, this air is hot and warms the ground. At night, cool air is drawn down into the pipes. The warm ground heats the cool air, which in turn heats the greenhouse. Alternatively, water is sometimes used as the heat-transfer medium. Water is collected and solar heated in an external storage tank and then pumped through the pipes to warm the greenhouse. |
Negative reinforcement is a form of reinforcement: an increase in the future frequency of a behavior when the consequence is the removal of an aversive stimulus. A parent turning off (or removing) an annoying song when their child asks is an example of negative reinforcement (if this results in an increase in the child's asking behavior in the future). Another example is a mouse pressing a button to avoid a shock. Do not confuse this concept with punishment. There are two variations of negative reinforcement:
- Avoidance conditioning occurs when a behavior prevents an aversive stimulus from starting or being applied.
- Escape conditioning occurs when behavior removes an aversive stimulus that has already started.
The following table shows the relationships between positive/negative reinforcement and punishment and whether the likelihood of the behavior increases or decreases.
                      decreases likelihood of behavior    increases likelihood of behavior
stimulus presented    positive punishment                 positive reinforcement
stimulus taken away   negative punishment                 negative reinforcement
Distinguishing "positive" from "negative" can be difficult, and the necessity of the distinction is often debated. For example, in a very warm room, a current of external air serves as positive reinforcement because it is pleasantly cool or negative reinforcement because it removes uncomfortably hot air. Some reinforcement can be simultaneously positive and negative, such as a drug addict taking drugs for the added euphoria and eliminating withdrawal symptoms. Many behavioral psychologists simply refer to reinforcement or punishment—without polarity—to cover all consequent environmental changes.
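The two-way classification in the table above can be expressed as a small lookup. This is a sketch of the standard operant-conditioning terminology, not code from any particular source:

```python
# Classify an operant-conditioning consequence from two facts:
# whether a stimulus is presented or taken away, and whether the
# behavior becomes more or less likely afterwards.

def classify(stimulus: str, behavior: str) -> str:
    """stimulus: 'presented' or 'taken away'; behavior: 'increases' or 'decreases'."""
    kind = "reinforcement" if behavior == "increases" else "punishment"
    polarity = "positive" if stimulus == "presented" else "negative"
    return f"{polarity} {kind}"

# Turning off an annoying song (stimulus taken away) with the child's
# asking behavior increasing afterwards:
print(classify("taken away", "increases"))   # negative reinforcement
print(classify("presented", "decreases"))    # positive punishment
```

Note that the definition hinges on the observed change in behavior, which is why the same cool draft of air can be labelled positive or negative reinforcement depending on how it is described.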
- ↑ Michael, J. (1975, 2005). Positive and negative reinforcement, a distinction that is no longer necessary; or a better way to talk about bad things. Journal of Organizational Behavior Management, 24, 207-222.
- ↑ Iwata, B. A. (1987). Negative reinforcement in applied behavior analysis: an emerging technology. Journal of Applied Behavior Analysis, 20, 361-378. |
District Heating – Community Heating – Heat Sharing Networks™
Traditional District heating is a system for generating heat at a central location and distributing heat at high temperatures for space heating and hot water to residential and commercial properties using heat networks. District heating plants can provide higher efficiencies and lower carbon emissions than local boilers. Some District networks also provide a cold distribution circuit to facilitate cooling.
Heat Sharing – Community Heating – Heat Networks – Decarbonising Heat
There is an alternative means of sharing heat using a lower temperature distribution circuit. This communal circuit, which is linked to a communal ThermalBank, can be accessed by each member of the group which uses its own heat pump to raise the temperature to the temperature it requires for its own heating and hot water requirements. Heat Sharing Networks™ are also termed "Cold Water Heat Networks", "Low Temperature Heat Networks", "Ambient Ground Loops", or "Fifth Generation Heat Networks".
A heat sharing circuit is cheaper to install than a high temperature circuit because it does not require the same degree of insulation to prevent heat losses to the ground. In fact, heat exchange with the ground the pipes pass through can be beneficial: the uninsulated pipes are in contact with the ground along their whole length and can draw heat from it.
Those buildings with excess heat can exchange heat with the Heat Sharing Network. This is more efficient for them than heat exchanging with hot external air. It also raises the temperature of the Heat Sharing Network for the benefit of those whose heat pumps need to extract heat.
Interseasonal Heat Transfer™ has many advantages for district heating. IHT can ensure that building groups have a reliable, independent and sustainable source of Renewable Heating and Renewable Cooling.
Interseasonal Heat Transfer™ can also bring On-site Renewable Energy to district heating, instead of relying on external fossil fuel supplies commonly used in Combined Heat and Power (CHP) systems.
Heat Sharing based on Interseasonal Heat Transfer can ...
- provide a reliable and low-cost green energy source for space heating and cooling
- save over 70% on carbon emissions on heating compared to emissions from gas boilers
- save over 80% on carbon emissions on cooling buildings compared to emissions (from the power stations) that power electric air conditioning and electric chillers
- provide a low-cost heat energy (or cooling) source for industrial processes
- provide opportunities for reducing carbon emissions by re-cycling solar energy instead of burning fossil fuels
- provide the opportunity to recover heat from buildings with high occupancy and high passive heat gains and transfer it to buildings needing heat
- attract Renewable Heat Incentive cashback for use of ground source heat pumps
- attract Renewable Heat Incentive cashback for use of solar thermal collectors
- improve urban air quality by avoiding combustion of fossil fuels or biomass in densely populated areas
Groups of Buildings
IHT is very well suited to providing Heat Sharing to groups of houses. The cost of providing an efficient installation can be shared across a number of houses, and the benefits increase if the district heating system includes other buildings such as schools or offices whose heating and cooling requirements may follow a different daily pattern (and different weekly pattern) from the heating demand for houses. Where the district covers offices, or data centres, the heat recovered from cooling these buildings can be transferred to homes requiring heating (or other buildings with a heating need such as a community swimming pool).
Where the cooling demand is separated in time from the heating demand, surplus heat can be stored in ThermalBanks from the time it is available to the time it is needed. This efficient use of heat is at the heart of Interseasonal Heat Transfer and enables ICAX to provide cheaper heating and cheaper cooling than conventional methods, as well as providing heating and cooling with a very low carbon footprint.
Sharing heat between buildings
Many commercial buildings have an overall cooling load over the year: they have a requirement to disperse heat. This often applies to modern office buildings in south east England with extensive glazing and high solar gains. These buildings may be adjacent to older buildings with an overall annual heating load. ICAX has developed systems to allow for the transfer of heat between buildings: this form of heat transfer can save fuel and carbon emissions for both buildings.
Both buildings can benefit from a "heat sharing dividend" when they enjoy "joined up heating".
Other buildings with a need to lose excess heat include underground train tunnels, data centres and supermarkets.
A comparison of the advantages of Heat Sharing Networks over traditional gas powered District Heating is shown in the Heat Sharing Table.
District Energy Management System
ICAX has developed a District Energy Management System ("EMS") to control the transfer of thermal energy from the times and places it is most cheaply available to the times and places where it is most needed. This involves control of Thermal Energy Storage to maximise the benefits, minimise costs and minimise carbon emissions.
Even in a group of similar houses there will be variations in the heating requirements between houses: some houses will be unoccupied during the working day, others with small children, or pensioners, may have higher heating loads during the day. IHT can meet these variations in demand successfully – and meter the use of heating in different buildings.
The combined benefits make Interseasonal Heat Transfer an attractive option for offices, schools and universities, hospitals, community centres, urban and suburban housing developments, industrial developments and private houses aiming for low energy use based on solar power.
Energy Hub – sharing warmth feels good
District Heating using Interseasonal Heat Transfer combines the benefits of on-site renewable energy with sharing heating loads between neighbours.
The Intranet of Heat
This new heat sharing infrastructure is the birth of an "Intranet of Heat". The Intranet of Heat enables the exchange of information about sources and needs of heat – and cooling – and then allows heat exchange from those buildings with surplus heat to those in need of heat.
See also Heat Networks Investment Project |
Acceleration due to gravity
A downloadable resource in which students determine a value for gravitational acceleration.
This downloadable resource is a worksheet for a practical activity where students determine a value for the acceleration due to gravity and then compare it to 9.8 m/s².
Students drop a ball from a height and measure the distance covered and time taken.
The worksheet details the necessary calculations.
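For a ball dropped from rest, d = ½gt² rearranges to g = 2d/t², so each (distance, time) measurement gives an estimate of g. A sketch of the calculation the worksheet asks for; the sample measurements below are invented for illustration:

```python
# Estimate g from drop experiments: for a ball dropped from rest,
# d = 0.5 * g * t**2, so g = 2 * d / t**2.
# The (distance_m, time_s) pairs are invented sample data.

measurements = [(2.0, 0.64), (1.5, 0.55), (1.0, 0.45)]

estimates = [2 * d / t**2 for d, t in measurements]
g_mean = sum(estimates) / len(estimates)

for (d, t), g in zip(measurements, estimates):
    print(f"d = {d} m, t = {t} s  ->  g = {g:.2f} m/s^2")
print(f"mean estimate: {g_mean:.2f} m/s^2 (accepted value: 9.8 m/s^2)")
```

Averaging several drops, as the worksheet's data analysis steps require, reduces the effect of reaction-time error in the stopwatch readings.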
- Year 10 > Science Inquiry Skills > Processing and analysing data and information > Analyse patterns and trends in data, including describing relationships between... > ACSIS203
- Year 10 > Science Inquiry Skills > Processing and analysing data and information > Use knowledge of scientific concepts to draw conclusions that are consistent... > ACSIS204
- Year 10 > Science Understanding > Physical Sciences > The motion of objects can be described and predicted using the laws of physics > ACSSU229
- Year Senior Secondary > Science Understanding > Physical Sciences |
There is a common misconception in many space and science fiction genre pieces: that a ship's engines also produce power. This works with internal combustion engines, henceforth referred to as ICE, since the engine also turns an electric generator and has a battery system. Chemical combustion engines, i.e. rockets and missiles, do not have this electrical generation capability, and the more advanced engines generally consume more power than they create.
In the age of Space Colonization, the most common type of engine is the ion engine, which consumes electricity produced by the power core to create a controlled weak nuclear reaction, producing thrust. While the material component required to power an ion engine is negligible, the power requirements are not. Thus the power core is considered the most important part of the ship, followed by the life containment and support systems, defensive systems, and the onboard computer.
Full Item Description
The Fissionable Engine contains a large core of radioactive material. The most common military power cores are fueled by rods of actinides like Uranium and Plutonium, while the civilian counterparts are fueled with less volatile and cheaper to produce and extract lanthanides such as Cerium and Praseodymium.
The heart of the core is the reaction chamber where unshielded rods heat a liquid shell of mercury to high temperature. The thermodynamics of the liquid then force the hot expanding mercury through a generator coil producing electricity. Cooler mercury is drawn in through a coupling valve, and does double duty as coolant as well. Surrounding this core of fissionable material is a thick shell of layered lead, steel, and boron plating to protect the rest of the ship from the radiation produced by the core.
Most Fissionable cores are very heavy, often comprising roughly 45% of the host ship's mass to provide enough power to run the required systems at safe levels. While these cores are very reliable, often operating for decades without incident, they are very heavy, changing out fissionable materials in the core can be dangerous, and a full core swap is a major undertaking. The ship also runs the danger of a core meltdown which, while it might not end in an atomic explosion, will at the very least irradiate the ship.
Following the end of the Pan-Solar war (2362 - 2364), travel between the planets of the Solar System entered a 175 year lull with only a scattered handful of prewar ships surviving with functional ion engines. The rest of the ships of that time were powered by solar panels, and banks of thick rockets. The push back into space following the Consolidation War on Earth (2531 - 2539) required the Terran Hegemony to reacquire power sources previously lost in the Pan-Solar War. The fissionable core was the first non-chemical core recovered, and within ten years the Terran Hegemon commanded a flotilla of seventy interplanetary capable warships outfitted with conventional, atomic, and chemical rockets as well as primitive beam and particle weapons.
Following the sweep up of the Terran Sol system, the Hegemony recovered many technological secrets from the ruins of advanced engineering labs and facilities abandoned at the Jovian/Galilean Research facilities, as well as the Martian shipyards that had survived the previous wars largely undamaged by conventional weaponry. The labs contained information and mock-ups of theoretical fusion, hybrid-fusion, and other power cores. It would take a number of decades of government-backed and private research and development before this information would evolve into something usable. In that time, the Fissionable Core went through a number of modifications to enhance its power output, reduce its weight, and increase its safety rating.
The End of the Fissionable Age
In 2592 the first Fusion Core was successfully tested at Mars, heralding the end of the 53 year reign of the Fissionable Engine. During that time, the Fissionable engine had decreased to 39% of a ship's total mass and power output had increased by nearly 400%. The final generation of Fissionable cores would stay in production for another sixty years during the slow proliferation of the Fusion Core, and many of the fusion core equipped ships carried a smaller back-up fissionable core in case of fusion core failure.
By 2652, the civilian sector and bulk cargo sectors had reliable and available Fusion Cores, rendering the Fissionable Cores obsolete. Despite this, many ships with said cores are still in service in secondary roles and in the private sector.
The Fate of Fission
The Fissionable Core has a unique position in the history of the Terran Hegemony, as it was on fission powered flames that the Solar System was consolidated and the first 13 colonies were founded. While Fusion and hybrid-fusion cores would eventually replace the Fissionable core, it is this more primitive power source seen in the space frontier sagas and epics, and thusly, a key component in romanticized space frontier dramas and sagas.
Additional Ideas (3)
The first factory produced fissionable core, the Model 41, was assembled at the PanTek SpaceWerks above the Earth's southern pole. The Model 41 was used in ships that massed from 5,000 to 30,000 tons, though the heavier ships were by magnitudes more rare and required more than one Model 41 core. The 4,750 ton Valiant class Destroyer had a single core, while the monstrous 33,500 ton Gigas class battleship/carrier had no fewer than 8 fissionable cores. It is worth noting that the Valiants served in the Hegemon fleet for nearly a century before being replaced, while the bloated Gigas was scrapped after fewer than 20 years in service.
The Model 41 entered service in 2542 and was not replaced in main line ships until 2565 with the release of the behind schedule Model 50i Fissionable Core.
Entering service in 2562, the Gargant was the first heavy Fissionable Core designed, and it was plagued from the beginning by a number of problems. Considered a white elephant, nothing could stop this lumbering behemoth from plowing through billions upon billions of dollars in research and development. Not even the deaths of a number of researchers in a coolant explosion slowed the pace of work on the colossal reactor. It was helped by the success of the Model 41, and scientists and engineers were able to use that data to improve the Gargant core before it finally reached production.
Quite to the surprise of the engineers and the fleet at large, the Gargant MMDLXII ended up being a solidly performing power core capable of powering ships that exceeded 20,000 tons. This was a great improvement over the previous arrangement of using multiple smaller Model 41 cores, as it reduced the number of components to be monitored and serviced. The Gargant series was primarily used on the Hegemon's Aegis heavy cruisers and Guillotine Heavy Transports.
The Gargant MMDLXII was built for six years before being replaced by the Gargant MMDLXVIII cores. Over the next decade, ships with the original MMDLXII cores would be rotated through dry docks to have their cores upgraded during general refit.
Planning for the 50i began in 2545, just three years after the release of the Model 41. The original timetable had the upgraded power core released in 2550, but a number of events conspired to keep the 50i behind schedule. The most pressing matter was the Anthagen Scare of 2547 - 2548, which brought the Martian economy and workforce to a near standstill. The threat of terrorists attacking research centers was serious enough that work at the Martian yards was shut down for nearly two years. It is easy to accuse the officials of overreacting, but at the time it was believed that the terrorist faction held a number of archaic but fully functional thermonuclear weapons as well as a pre-war fusion powered cruiser. In truth, there was no cruiser, and the claimed thermonuclear weapons were a pair of matched crude sub-atomic detonators. |
The term "Rococo" comes from a particular shell-looking ornament which can be considered as the guiding principle of this style.
This new period, often referred to as "late Baroque" (which can lead to confusion), followed the Baroque period and appeared between 1720 and 1770-1780. This movement prevailed particularly in France, Germany and Italy.
Rococo transformed the heavy, sumptuous, dramatic forms of Baroque into lightness, finesse and grace. Recurrent themes in painting included religious subjects as well as festive and pastoral scenes. The prevailing painting technique of Rococo consisted of using light colors and pastels. |
Logistic growth describes a population whose growth slows as it approaches a carrying capacity. Logistic growth of a population is described by the logistic equation dy/dt = ky(M − y), where k is a constant and M is a constant called the carrying capacity; all solutions approach the carrying capacity over time. Solving the logistic equation is a standard application of integration by partial fractions, and the model is a staple of AP Calculus and AP Biology practice material, where students are asked to distinguish between exponential and logistic population growth, find the carrying capacity, and fit the model's constants to data. Typical word problems involve a bacterial colony whose population y is counted in thousands, a population of 500 mealworms exhibiting logistic growth, and a population of butterflies growing according to the logistic equation.
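The logistic equation dy/dt = ky(M − y) has the closed-form solution y(t) = M / (1 + A·e^(−kMt)) with A = (M − y₀)/y₀. A sketch checking the closed form against a simple Euler integration; the constants k, M and y₀ are chosen arbitrarily for illustration:

```python
import math

# Logistic growth dy/dt = k*y*(M - y): closed-form solution vs. Euler steps.
# k, M and y0 are arbitrary illustrative constants.

k, M, y0 = 0.002, 1000.0, 10.0   # rate constant, carrying capacity, initial pop.

def y_exact(t: float) -> float:
    """Closed form: y(t) = M / (1 + A*exp(-k*M*t)), with A = (M - y0)/y0."""
    A = (M - y0) / y0
    return M / (1 + A * math.exp(-k * M * t))

# Numerical cross-check with forward Euler steps.
y, dt = y0, 0.001
for _ in range(int(5 / dt)):      # integrate out to t = 5
    y += k * y * (M - y) * dt

print(f"exact y(5) = {y_exact(5):.1f}")
print(f"euler y(5) = {y:.1f}")
print(f"carrying capacity M = {M}")  # both curves level off toward M
```

The S-shaped curve this produces, fast growth at first, then leveling off near M, is exactly the behavior these practice problems ask students to recognize.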
Other exercises ask for dN/dt in a logistic growth situation, pose problems based on world population estimates, and supply answer keys so students can check their work. |
Shape notes are a music notation designed to facilitate congregational and community singing. The idea behind shape notes is that the parts of a vocal work can be learned more quickly and easily if the music is printed in shapes that match up with the solfege syllables with which the notes of the musical scale are sung. The notation, introduced in 1801, became a popular teaching device in American singing schools. Shapes were added to the note heads in written music to help singers find pitches within major and minor scales without the use of more complex information found in key signatures on the staff.
Shape notes of various kinds have been used for over two centuries in a variety of music traditions, mostly sacred but also secular, originating in New England, practiced primarily in the Southern region of the United States for many years. Although seven-shape books may not be as popular as in the past, there are still a great number of churches in the South, in particular Primitive Baptist, Independent Fundamental Baptist, and Churches of Christ, as well as Conservative Mennonites throughout North America, that regularly use seven-shape songbooks in Sunday worship.1
Sacred Harp singing is a tradition of sacred choral music that took root in the Southern region of the United States. It is part of the larger tradition of shape note music. Sacred Harp music is performed a cappella (voice only, without instruments) and originated as Protestant Christian music. The songs sung are primarily from the book The Sacred Harp.2
Sacred Harp Singing
University of Mississippi: Sacred Harp Singing
Smithsonian Education: A Shape-Note Singing Lesson
Minnesota Public Radio: Shaped Note Singing
Encyclopedia of Southern Culture: Shape-Note Singing Schools |
Definition of learning difficulties in English:
- Many people with learning difficulties interviewed felt that professionals did not listen to them.
- The following tutorials explain options helpful for individuals with learning difficulties and impairments.
- Under the general heading of learning difficulties we have decided to use four separate classifications.
The phrase learning difficulties became prominent in the 1980s. It is broad in scope, covering general conditions such as Down’s syndrome as well as more specific cognitive or neurological conditions such as dyslexia and attention deficit disorder. In emphasizing the difficulty experienced rather than any perceived ‘deficiency’, it is considered less discriminatory and more positive than other terms such as mentally handicapped, and is now the standard accepted term in Britain in official contexts. Learning disability is the standard accepted term in North America.
|
PDF Version of this Fact Sheet
Mycoplasma can cause sore throat, bronchitis, and pneumonia.
Mycoplasma is usually spread from person-to-person through the air and by direct contact
Mycoplasma is found in the throat of infected persons and is spread to other people through the air by sneezing or coughing. It can also be spread by touching tissues or other things recently soiled by secretions from the nose or throat of an infected person.
People of any age can get Mycoplasma
Children under 5 years usually have mild symptoms or no symptoms at all. The illness is recognized more in school-age children and young adults. Occasionally, epidemics can occur, especially in military populations and institutions (colleges, for example) where people live in close quarters. These occur more often in late summer or fall. Symptoms to look for include fever, cough, sore throat, and headache.
Symptoms start from 6 to 32 days after exposure. The illness can last from a few days to a month or more (especially coughing). Complications do not happen often. No one knows how long an infected person remains contagious, but it is probably less than 20 days.
Mycoplasma pneumonia is usually diagnosed by blood tests and x-ray of the chest
Treatment is available
The disease can be treated with antibiotics. While antibiotics help an infected person to feel better faster, they do not remove the bacteria from the throat. Mycoplasma can remain in the throat for as long as 13 weeks.
Steps to take to prevent the spread of Mycoplasma infection
201 W. Preston Street, Baltimore, MD 21201-2399
(410) 767-6500 or 1-877-463-3464 |
The rainy season is the period when communicable diseases are rampant. Diseases spread more readily once the rains commence: food-borne and water-borne infections, cholera, dengue, and colds and flu are all common during the rainy season.
The following simple precautions can help keep your family healthy and save you from paying unnecessary hospital bills during the rainy season. They are:
- Ensure you warm your food before eating it;
- Ensure your environment is clean and hygienic to protect the family from infectious organisms;
- Ensure your hands are properly and thoroughly washed before eating and before feeding the children;
- Ensure your children are prevented from playing in stagnant or contaminated water, as it may lead to skin infection;
- Ensure you dispose of your dustbin waste on time. |
A tumor is an abnormal growth caused by the uncontrolled division of cells. Benign tumors do not have the potential to spread to other parts of the body (a process called metastasis) and are curable by surgical removal. Malignant or cancerous tumors, however, may metastasize to other parts of the body and will ultimately result in death if not successfully treated by surgery and/or other methods.
Surgical removal is one of four main ways that tumors are treated. Chemotherapy, radiation therapy, and biological therapy are other treatment options. There are a number of factors used to determine which methods will best treat a tumor. Because benign tumors do not have the potential to metastasize, they are often treated successfully with surgical removal alone. Malignant tumors, however, are most often treated with a combination of surgery and chemotherapy and/or radiation therapy (in about 55% of cases). In some instances, non-curative surgery may make other treatments more effective. Debulking a cancer—making it smaller by surgical removal of a large part of it—is thought to make radiation and chemotherapy more effective.
Surgery is often used to accurately assess the nature and extent of a cancer. Most cancers cannot be adequately identified without examining a sample of the abnormal tissue under a microscope. Such tissue samples are procured during a surgical procedure. Surgery may also be used to determine exactly how far a tumor has spread.
There are a few standard methods of comparing one cancer to another for the purposes of determining appropriate treatments and estimating outcomes. These methods are referred to as staging. The most commonly used method is the TNM system.
- "T" stands for tumor and reflects the size of the tumor.
- "N" represents the spread of the cancer to lymph nodes, largely determined by those nodes removed at surgery that contain cancer cells. Since cancers spread mostly through the lymphatic system, this is a useful measure of a cancer's ability to disperse.
- "M" refers to metastasis and indicates if metastases are present and how far they are from the original cancer.
Staging is particularly important with such lymphomas as Hodgkin's disease, which may appear in many places in the lymphatic system. Surgery is a useful tool for staging such cancers and can increase the chance of a successful cure, since radiation treatment is often curative if all the cancerous sites are located and irradiated.
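As a rough illustration of how the TNM descriptors combine into a single code, here is a minimal Python sketch. The class name, field choices, and example values are hypothetical; real stage groupings depend on the cancer type and the edition of the staging manual, so this is notation only, not a clinical tool.

```python
# Illustrative TNM notation sketch (hypothetical, not a clinical tool).
from dataclasses import dataclass

@dataclass(frozen=True)
class TNM:
    t: int  # tumor size/extent category
    n: int  # lymph-node involvement category
    m: int  # 0 = no distant metastasis, 1 = metastasis present

    def code(self) -> str:
        # Conventional compact form, e.g. "T2N1M0"
        return f"T{self.t}N{self.n}M{self.m}"

    def has_metastasis(self) -> bool:
        return self.m > 0

finding = TNM(t=2, n=1, m=0)
print(finding.code())            # -> T2N1M0
print(finding.has_metastasis())  # -> False
```

Keeping the three descriptors separate, rather than storing only a combined stage, mirrors how the system is actually reported: the same T and N values can map to different stages for different cancers.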
The American Cancer Society estimates that approximately one million cases of cancer are diagnosed in the United States each year. Seventy-seven percent of cancers are diagnosed in men and women over the age of 55, although cancer may affect individuals of any age. Men develop cancer more often than women; one in two men will be diagnosed with cancer during his lifetime, compared to one in three women. Cancer affects individuals of all races and ethnicities, although incidence may differ among these groups by cancer type.
Surgery may be used to remove tumors for diagnostic or therapeutic purposes.
Diagnostic tumor removal
A biopsy is a medical procedure that obtains a small piece of tissue for diagnostic testing. The sample is examined under a microscope by a doctor who specializes in the effects of disease on body tissues (a pathologist) to detect any abnormalities. A definitive diagnosis of cancer cannot be made unless a sample of the abnormal tissue is examined histologically (under a microscope).
There are four main biopsy techniques used to diagnose cancer. These include:
- Aspiration biopsy. A needle is inserted into the tumor and a sample is withdrawn. This procedure may be performed under local anesthesia or with no anesthesia at all.
- Needle biopsy. A special cutting needle is inserted into the core of the tumor and a core sample is cut out. Local anesthesia is most often administered.
- Incisional biopsy. A portion of a large tumor is removed, usually under local anesthesia in an outpatient setting.
- Excisional biopsy. An entire cancerous lesion is removed along with surrounding normal tissue (called a clear margin). Local or general anesthesia may be used.
Therapeutic tumor removal
Once surgical removal has been decided, a surgical oncologist will remove the entire tumor, taking with it a large section of the surrounding normal tissue. The healthy tissue is removed to minimize the risk that abnormal tissue is left behind.
When surgical removal of a tumor is unacceptable as a sole treatment, a portion of the tumor is removed to debulk the mass; this is called cytoreduction. Cytoreductive surgery aids radiation and chemotherapy treatments by increasing the sensitivity of the tumor and decreasing the number of necessary treatment cycles.
In some instances the purpose of tumor removal is not to cure the cancer, but to relieve the symptoms of a patient who cannot be cured. This approach is called palliative surgery. For example, a patient with advanced cancer may have a tumor causing significant pain or bleeding; in such a case, the tumor may be removed to ease the patient's pain or other symptoms even though a cure is not possible.
The surgical removal of malignant tumors demands special considerations. There is a danger of spreading cancerous cells during the process of removing abnormal tissue (called seeding). Presuming that cancer cells can implant elsewhere in the body, the surgeon must minimize the dissemination of cells throughout the operating field or into the blood stream.
Special techniques called block resection and no-touch are used. Block resection involves taking the entire specimen out as a single piece. The no-touch technique involves removing a specimen by handling only the normal tissue surrounding it; the cancer itself is never touched. These approaches prevent the spread of cancer cells into the general circulation. Pains are taken to clamp off the blood supply first, preventing cells from leaving by that route later in the surgery.
A tumor may first be palpated (felt) by the patient or by a health care professional during a physical examination. A tumor may be visible on the skin or protrude outward from the body. Still other tumors are not evident until their presence begins to cause such symptoms as weight loss, fatigue, or pain. In some instances, tumors are located during routine tests (e.g., a yearly mammogram or Pap test).
Retesting and periodic examinations are necessary to ensure that a tumor has not returned or metastasized after total removal.
Each tumor removal surgery carries certain risks that are inherent to the procedure. There is always a risk of misdiagnosing a cancer if an inadequate sample was procured during biopsy, or if the tumor was not properly located. There is a chance of infection of the surgical site, excessive bleeding, or injury to adjacent tissues. The possibility of metastasis and seeding are risks that have to be considered in consultation with an oncologist.
The results of a tumor removal procedure depend on the type of tumor and the purpose of the treatment. Most benign tumors can be removed successfully with no risk of the abnormal cells spreading to other parts of the body and little risk of the tumor returning. Malignant tumors are considered successfully removed if the entire tumor can be removed, if a clear margin of healthy tissue is removed with the tumor, and if there is no evidence of metastasis. The normal results of palliative tumor removal are a reduction in the patient's symptoms with no impact on survival.
Morbidity and mortality rates
The recurrence rates of benign and malignant tumors after removal depend on the type of tumor and its location. The rate of complications associated with tumor removal surgery differs by procedure, but is generally very low.
If a benign tumor shows no indication of harming nearby tissues and is not causing the patient any symptoms, surgery may not be required to remove it. Chemotherapy, radiation therapy, and biological therapy are treatments that may be used alone or in conjunction with surgery.
Abeloff, Martin D., James O. Armitage, Allen S. Lichter, and John E. Niederhuber. "Cancer Management." In Clinical Oncology, 2nd ed. Philadelphia, PA: Churchill Livingstone, Inc., 2000.
"Principles of Cancer Therapy: Surgery." Section 11, Chapter 144 in The Merck Manual of Diagnosis and Therapy, edited by Mark H. Beers, MD, and Robert Berkow, MD. Whitehouse Station, NJ: Merck Research Laboratories, 1999.
American Cancer Society. 1599 Clifton Rd. NE, Atlanta, GA 30329-4251. (800) 227-2345. http://www.cancer.org.
National Cancer Institute (NCI). NCI Public Inquiries Office, Suite 3036A, 6116 Executive Boulevard, MSC8332, Bethesda, MD 20892-8322. (800) 4-CANCER or (800) 332-8615 (TTY). http://www.nci.nih.gov.
Society of Surgical Oncologists. 85 West Algonquin Rd., Suite 550, Arlington Heights, IL 60005. (847) 427-1400. http://www.surgonc.org.
American Cancer Society. All About Cancer: Detailed Guide, 2003 [cited April 9, 2003]. http://www.cancer.org/docroot/CRI/CRI_2_3.asp.
J. Ricker Polsdorfer, MD; Stephanie Dionne Sherk
WHO PERFORMS THE PROCEDURE AND WHERE IS IT PERFORMED?
Tumors are usually removed by a general surgeon or surgical oncologist. The procedure is frequently done in a hospital setting, but specialized outpatient facilities may sometimes be used.
QUESTIONS TO ASK THE DOCTOR
- What type of tumor do I have and where is it located?
- What procedure will be used to remove the tumor?
- Is there evidence that the tumor has metastasized?
- What diagnostic tests will be performed prior to tumor removal?
- What method of anesthesia/pain relief will be used during the procedure? |
An example of a CNOR exam practice question is: "The Mini-Cog test to assess for dementia includes what?" This question would be asked in a multiple-choice format and pertains to the test taker's medical knowledge.
Another example of a question about medical knowledge might be: "What agent is least likely to be responsible for anaphylaxis during surgery?" These kinds of questions are the primary focus of the test and make up more than 40 percent of the 200-question CNOR exam.
The test also asks questions about the operation of medical facilities. An example of this is the following: "Open shelves used for storage of sterile packages must meet which of the following criteria?" Questions about cleaning, storing, and disposing of instruments typically make up 12 percent of the test. These types of questions might also ask about safety precautions, such as the following: "When administering or handling open containers of chemotherapeutic agents, what precautions should a nurse take to protect her hands?"
The CNOR also asks questions about chemistry and physics. For example, the test may ask: "What is the purpose of diluting ethylene oxide with inert gases and hydrochlorofluorocarbon?" These questions allow the test to gauge the taker's understanding of handling sensitive materials in a medical setting. |
Polymers & Plastics - Classification & Models
Students will use their prior knowledge about changes of matter, including physical and chemical changes, from the Houghton Mifflin science curriculum. Students will be examining and categorizing various types of plastics (polymers) to identify where they are found in everyday life and how their chemical properties (molecules linked together as chains) give them unique physical properties (various plastic types are hard, soft, sticky, or malleable).
This activity is designed for students to:
Classify plastic (polymers) versus non-polymers
Distinguish between types of plastics (polymers)
Make a model for a polymer
Apply the knowledge to their daily life (where & how they use plastic products / recycling - tie in from earlier unit of study - Houghton Mifflin series 2007)
Process skills used in the investigation include: observing, questioning, comparing, and classifying.
Key vocabulary (concepts):
Prior knowledge - atoms (atoms combine to make molecules), solid, liquid, gas (the states of matter that the students will review)
New knowledge - polymer (molecules that are built as long repeating chains), malleable (plastics whose shape can be changed)
Context for Use
Class size 24-30; Rural Public School Facility
Introductory Lesson of a Topic Study - 1-2 lessons (including Lecture/Vocabulary, Classification Activity, Model Building, and Assessment)
Materials: various types of plastic and non-plastic items
This activity is to build upon and enrich their prior knowledge of atoms and molecules as they relate to physical/chemical properties and changes.
Resource Type: Activities:Classroom Activity
Grade Level: Intermediate (3-5)
Description and Teaching Materials
Various plastic objects for sorting activity (fleece items, plastic spoons, plastic plates, remotes, toys)
Non-plastic objects for sorting activity (wooden items, wooden spoons, metal spoons, glass, ceramics)
Plastic examples from students homes (from various locations of the home - kitchen, bathroom, living room, bedroom, etc...)
Introductory Activity - Sorting Activity -
Begin the lesson with a review of the curriculum studied from the textbook (definition of atoms, molecules, physical properties, and chemical properties). Share with the class that they are going to study the chemical properties of plastics (polymers) and how they create unique physical properties for various types.
Hand the students a shopping bag full of items (included should be various types of plastic and non-plastic items - for example, wooden or metal items). Ask small groups to sort the items into two groups based on characteristics they observe. Remind students to use their five senses, with the exception of taste, to classify items. When a few minutes have passed, stop the class to allow for discussion of how and why they sorted the materials as they did. Hopefully students were able to sort the plastic and non-plastic items. Guide their classification by having them focus on what they think each object was made of, and sort the items by what they are made from. They should have a group of plastics versus a group of non-plastics (woods/metals). Ask how they were sure an item was plastic (compare examples of hard plastics to malleable types), and whether they have questions about various plastic types.
Discuss with the students their findings and have them journal in their notebooks about the two groups they found. Have them extend their thinking about plastics by focusing on what they found in that group. Did the plastics they found all have the same qualities, or are they different in texture and flexibility? The plastics should have been selected to have different qualities from each other; this should lead to further questions about the composition of plastic. Have students write down their questions about plastics and what they predict is happening with each different type. Let students know that in a different lesson they will be observing a demonstration and making a specific hypothesis about what happens with polymer structure (plastics).
Polymer Pet -
Remind students of the study of atoms and basic molecules (how we made water molecules by linking one oxygen atom and two hydrogen atoms together - students demonstrated by holding hands to make an example of a molecule). Now we are going to make a polymer using a similar model, but this time we are going to link together molecules to form a chain. Our human chain of linked molecules will create our polymer. (Another example would be to link paper clips or strips of paper to form a chain.) Our polymer would continue linking molecules until there are no more molecules to link together. After the human demonstration the polymer could then be modeled by using the objects from the above sort. The plastics could be linked together by setting them close to one another. At one end a drawing of a face could be attached to the beginning plastic item, and a tail to the ending item. This chain of items could be the new classroom pet - the Polymer Pet. Students could name the pet and add plastic items daily. If the pet is too big to store in the room it could be placed in the hallway. Allowing others to see it in the hallway may help them realize how many types of objects are considered polymers.
Plastics can be grouped into categories (4 examples may include: hard, soft, sticky, and malleable). As an assignment students can bring in plastics found at home to share as examples of these four categories. Their examples can be represented by 4 different polymer pets. When students bring in their objects they should try to place them in the correct polymer area and link it to the other items students or the teacher has brought in. Other variations of the pet could include polymers that can be recycled or ones that we use the most of to show others more about recycling or our everyday use of such items.
Culminating Activity -
To review what students have learned about polymers keep pets up as long as the study lasts (remember this activity is linked to a larger unit of study). One way to expand space for the pet would be to place it in the school hallways and ask other classes to add to the pet. Students could create simple posters to explain the pet at various places in the hallway.
A way to assess knowledge would be to have students make human links of basic molecules you have studied (such as water or oxygen) and then ask them to make a polymer.
To further the study students can be asked to bring in various plastics from certain rooms in their homes to demonstrate how plastics are used most everywhere in our lives.
Students could also research and write about ways these plastics could be reused or recycled. They could draw posters to display in the hallway helpful hints to reuse or recycle certain items used at school or home.
Janice VanCleave's 204 Sticky, Gloppy, Wacky, and Wonderful Experiments. Jossey-Bass, 2002.
Houghton Mifflin Science 2007.
Lilly's Purple Plastic Purse by Kevin Henkes
Teaching Notes and Tips
This activity is meant to be taught as part of a larger study that would incorporate an experiment and website activities. Please look for those activities with polymers taught by other grade 3 teachers at Oak Crest Elementary (Gloria Brandt - Demonstration & Experiment with Polymers / Don Fraser - expansion & assessment activities) on the SERC website.
Students can be assessed in the follow ways:
Classification of plastics (polymers) versus non-plastics (small quiz)
Polymer pet (class participation)
Molecule / Polymer representations (students link together)
Posters about reusing or recycling plastics |
An astonishing discovery could pave the way for "Chickenosaurus."
An amazing new study published in the journal Evolution has detailed a project that has gotten scientists closer to creating the “Dino-Chicken.”
No, that’s not a joke: scientists are actually trying to create a chicken that fully reverted to its dinosaur ancestors, although researchers were focused in this case on a different goal, according to a University of Chile statement.
Essentially, scientists have been trying to figure out how the lower leg bone of modern birds evolved, as dinosaurs lost the lower end of the fibula and stopped having it connect to the ankle as they evolved into birds.
Years ago, scientists determined that bird embryos initially develop a tubular fibula similar to a dinosaur's, but as the embryo grows the bone becomes shorter than the tibia and the leg takes on the drumstick shape. So scientists decided to try to inhibit the gene that causes the bone to do that, creating birds with dinosaur-like legs. In an experiment with chickens, they were indeed able to create birds with the leg bones of a dinosaur.
It’s a fascinating discovery that takes scientists one step closer to creating an actual dino-chicken, and it sheds light on an important change in the history of evolution.
“The experiments are focused on single traits, to test specific hypotheses,” Alexander Vargas of the University of Chile said in the statement. “Not only do we know a great deal about bird development, but also about the dinosaur-bird transition, which is well-documented by the fossil record. This leads naturally to hypotheses on the evolution of development, that can be explored in the lab.”
Don’t be surprised if scientists soon unveil a “Chickenosaurus.” |
Chromosome Mutations or Chromosome Aberrations
Changes in chromosome number or structure, such as a deletion or rearrangement
Variations in chromosome number: when an organism gains or loses one or more chromosomes and has other than an exact multiple of the haploid set
Chromosomal variation arises from nondisjunction, where chromosomes or chromatids fail to disjoin and move to opposite poles during Meiosis I or II
2n+1 chromosomes. Often lethal for autosomes, but not sex chromosomes. Three copies of one chromosome are present, so pairings are irregular.
An unpaired chromosome can be present along with a bivalent instead of a trivalent, when two of the three chromosomes synapse and one is left unpaired.
Amniocentesis or Chorionic Villus Sampling (CVS)
Testings for women who become pregnant late in their reproductive years
The addition of one or more sets of chromosomes identical to the haploid complement of the same species
The combination of chromosome sets from different species as a consequence of interspecific matings
The condition in which only certain cells in an otherwise diploid organism are polyploid
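The count distinctions above (aneuploidy versus polyploidy) reduce to simple arithmetic on the haploid number n. A small sketch, using the human haploid number n = 23 for illustration:

```python
# Chromosome-count arithmetic for a species with haploid number n.
# The human value n = 23 is used here purely for illustration.
n = 23

diploid   = 2 * n      # normal complement: 2n = 46
trisomic  = 2 * n + 1  # aneuploidy, 2n+1 = 47 (three copies of one chromosome)
monosomic = 2 * n - 1  # aneuploidy, 2n-1 = 45 (one chromosome missing)
triploid  = 3 * n      # polyploidy: a whole extra haploid set, 3n = 69

print(diploid, trisomic, monosomic, triploid)  # -> 46 47 45 69
```

The key distinction: aneuploid counts (2n+1, 2n-1) are not exact multiples of n, while polyploid counts (3n, 4n, ...) are.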
Chromosome breaks in one or more places and a portion of it is lost; the missing piece.
Results from a segmental deletion of a small terminal portion of the short arm of chromosome 5
Arise as the result of unequal crossing over during meiosis or through a replication error prior to meiosis
Involves a rearrangement of the linear gene sequence rather than the loss of genetic information. Segment turned 180 degrees in a chromosome, requires two breaks and reinsertion of the inverted segment. May arise from chromosomal looping
Includes the centromere and does change the relative lengths of the two arms of a chromosome
Paracentric inversion crossover
One recombinant chromatid is dicentric (two centromeres) and one is acentric (lacking a centromere)
Involves the exchange of segments between two nonhomologous chromosomes. Unusual synapsis |
The function of loading dye in electrophoresis is to allow the DNA sample to sink into the wells of the gel and to allow scientists to visually track the DNA sample as it runs through the gel. Gel electrophoresis is a method used by scientists to separate DNA into various size strands. The loading dye causes the DNA sample to be denser than the running buffer.
The difference in density forces the DNA sample to sink to the bottom of the well and prevents the sample from diffusing into the buffer. Loading dyes consist of tracking dyes that migrate with the DNA samples. A mixture of glycerol and bromophenol blue is commonly used to create a loading buffer: the glycerol binds to the DNA and makes it denser, while the bromophenol blue stains the DNA. An ethidium bromide solution is generally used when running a gel electrophoresis. Ethidium bromide is a chemical that can be seen under ultraviolet lighting. Ethidium bromide and other fluorescent dyes bind to DNA samples and glow when placed in a transilluminator. The amount of dye used is crucial when running a gel electrophoresis: specific dyes correlate to certain band sizes in DNA samples, and too much or too little dye can obscure the expected DNA sample size. |
Today we’ll be covering one of the prime aspects of creating music. Before one considers song structure, harmony, or melody, one must start with the fundamental element, the beat. This is what lays the foundation, and the layers of the song come from there. To properly understand what makes a beat, it is important to know the correct terminology and concepts. Music is a language, and when one knows the terms, ideas can be communicated to other musicians. We will cover time signature, measures, phrases, beat counts, and also what they look like on the music page. Knowing this will help line up different drums to match the rhythm, for example.
A note is the time measure of a sound in how it relates to the music. To make a rhythm, one puts notes together into beats. A beat is made up of one or more notes. Try counting 1-2-3-4 while tapping your toe along with it. Each one of these numbers is considered a beat. In a four-count like this, when the note and the beat are the same length, the note is called a 1/4 (quarter) note. It is also called this because this note/beat is typically 1/4 of a measure. A measure is made up of beats; in our current example there are 4 beats in a measure.
Now let's look at some other note lengths. If we kept the beat the same, 1-2-3-4 (one measure), but doubled the number of notes (making eight), with two notes per beat, these notes would be called 1/8th (eighth) notes. One note is 1/8th of the measure long.
We could double the notes again, still keeping four beats per measure. This would make us have four notes per beat, 16 in total. These notes are called 16th (sixteenth) notes, also because there are 16 of them per measure.
This is basically how the count works; however, measures are not limited to 4 beats. How does one know what a measure is worth in a song? This is dictated by what is called the time signature. The time signature controls the overall count of a song, saying how many beats are in a measure and what notes make up the beats.
Let’s break down what it means when someone says a song is in 4/4 time. The bottom 4 means each beat is worth a 1/4 note, and the top 4 means there are 4 of these 1/4 notes in a measure. If we change it to 3/4 for example, there are 3 beats in a measure. You get the idea.
It gets tricky when we change the bottom number to, say, an 8. Let's look at the 6/8 time signature. The 8 on the bottom shows that the beat is using 1/8th notes, and the 6 says there are six in a measure. We will delve further into other time signatures in future blogs. 4/4 is the typical time signature in most music.
The final term to know is the phrase. A phrase is a group of measures, usually four, eight, or sixteen measures long. Phrasing is what is used to build a verse, a chorus, and the other parts of a song's construction. When one has the time signature, puts the beats into measures, and groups the measures into phrases, a song is born! Hopefully this was helpful in understanding the fundamental concepts for creating music. With this understanding, you can work with other musicians and better construct your music.
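The relationships above (beats, note values, measures, phrases) boil down to simple arithmetic, which can be sketched in a few lines of Python. This assumes a simple meter where the bottom number of the time signature names the note that gets one beat, as in 4/4 or 3/4; the function names and the 120 BPM tempo are just for illustration.

```python
# Sketch of how note values, beats, measures, and phrases relate at a
# given tempo (assumes simple meter: the beat is the bottom-number note).

def beat_seconds(bpm):
    """Duration of one beat in seconds at the given tempo."""
    return 60.0 / bpm

def measure_seconds(bpm, beats_per_measure):
    """Duration of one measure: beats per measure times the beat length."""
    return beats_per_measure * beat_seconds(bpm)

bpm = 120                            # 120 quarter-note beats per minute
print(beat_seconds(bpm))             # -> 0.5   (one quarter note in 4/4)
print(beat_seconds(bpm) / 2)         # -> 0.25  (an eighth note: two per beat)
print(beat_seconds(bpm) / 4)         # -> 0.125 (a sixteenth note: four per beat)
print(measure_seconds(bpm, 4))       # -> 2.0   (one 4/4 measure)
print(measure_seconds(bpm, 4) * 8)   # -> 16.0  (an eight-measure phrase)
```

Halving a note value doubles how many fit in a measure, which is exactly the quarter/eighth/sixteenth progression described above.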
There is currently a Seoul virus outbreak affecting up to 15 states in the U.S., which is the first of its kind in the nation’s history. Twelve people have been infected thus far: seven from Illinois, three from Wisconsin, one from Indiana, and one from Utah. The remaining 12 states (CO, ND, MN, IA, MO, AR, LA, TN, AL, IN, MI, and SC) have received rats from rat-breeding facilities that are believed to be the source of the infections, and thus have the potential for cases to develop.
The outbreak began in December 2016 when two persons in Wisconsin were hospitalized. The first two cases operated a home-based rat-breeding facility, and had purchased rats from animal suppliers in Wisconsin and Illinois prior to becoming infected.
Seoul virus is a type of hantavirus, which is transmitted when humans come in contact with rodent excrement. Seoul virus is transmitted from infected Norway rats (also commonly known as brown rats) to humans through their urine, droppings, or saliva. It is also possible to become infected through “aerosolization”, which occurs when nesting materials or excrement are stirred up by activities such as vacuuming or sweeping, and tiny particles containing the virus are released into the air for humans to inhale.
Symptoms in humans are usually mild and typically begin within one to two weeks of exposure. Symptoms may include fever, headache, nausea and chills, back and abdominal pain, blurred vision, inflammation or redness of the eyes, and a rash. In rare cases, hemorrhagic fever with renal syndrome (HFRS) may develop, and an estimated one to two percent of people die as a result of a Seoul virus infection. In the current outbreak, two of the 12 cases have been hospitalized and no deaths have so far been reported.
The Centers for Disease Control and Prevention (CDC) currently recommends blood testing for anyone who experiences illness after handling rats from a facility that has been lab-confirmed to have Seoul virus infection, and encourages providers to do blood testing if a patient reports symptoms consistent with a Seoul virus infection and has a history of rat contact. In addition, the CDC recommends that people who may have potentially infected rats not sell, trade, or release their rats. Investigations are currently being conducted by state and local health departments, in partnership with the CDC, to identify the original source of infection, target infected rat populations, and control the outbreak. |
Sensors that detect and count single photons, the smallest quantities of light, with 88 percent efficiency have been demonstrated by physicists at the National Institute of Standards and Technology (NIST). This record efficiency is an important step toward making reliable single photon detectors for use in practical quantum cryptography systems, the most secure method known for ensuring the privacy of a communications channel.
Described in the June issue of Physical Review A, Rapid Communications,* the NIST detectors are composed of a small square of tungsten film, 25 by 25 micrometers and 20 nanometers thick, chilled to about 110 millikelvin, the transition temperature between normal conductivity and superconductivity. When a fiber-optic line delivers a photon to the tungsten film, the temperature rises and results in an increase in electrical resistance. The change in temperature is proportional to the photon energy, allowing the sensor to determine the number of photons in a pulse of monochromatic light.
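The photon-counting principle described here can be illustrated with a little arithmetic: since the energy deposited is proportional to the number of absorbed photons, dividing the absorbed energy by the energy of one photon at the known wavelength gives the photon number. The function names and example values below are illustrative, not taken from the paper.

```python
# Illustrative photon-number arithmetic for a monochromatic pulse.
# E_photon = h * c / wavelength; count = absorbed energy / E_photon.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon at the given wavelength, in joules."""
    return h * c / wavelength_m

def photon_count(absorbed_energy_j, wavelength_m):
    """Nearest whole number of photons for the absorbed energy."""
    return round(absorbed_energy_j / photon_energy(wavelength_m))

lam = 1550e-9                     # a telecom near-infrared wavelength
e1 = photon_energy(lam)           # roughly 1.3e-19 J per photon
print(photon_count(3 * e1, lam))  # -> 3
```

This also shows why the detector needs monochromatic light: the conversion from energy to photon number only works when every photon in the pulse carries the same energy.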
This type of detector typically has limited efficiency because some photons are reflected from the front surface and others are transmitted all the way through the tungsten. NIST scientists more than quadrupled the detection efficiency over the past two years by depositing the tungsten over a metallic mirror and topping it with an anti-reflective coating, to enable absorption of more light in the tungsten layer.
The NIST sensors operate at the wavelength of near-infrared light used for fiber-optic communications and produce negligible false (or dark) counts. Simulations indicate it should be possible to increase the efficiency well above 99 percent at any wavelength in the ultraviolet to near-infrared frequency range, by building an optical structure with more layers and finer control over layer thickness, according to the paper.
Quantum communications and cryptography systems use the quantum properties of photons to represent 1s and 0s. The NIST sensors could be used as receivers for quantum communications systems, calibration tools for single photon sources, and evaluation tools for testing system security. They also could be used to study the performance of ultralow light optical systems and to test the principles of quantum physics. The work is supported by the Director of Central Intelligence postdoctoral program and the Advanced Research and Development Activity.
*D. Rosenberg, A.E. Lita, Aaron J. Miller, and S.W. Nam. 2005. Noise-free, high-efficiency, photon-number-resolving detectors. Physical Review A, Rapid Communications. June.
Chromosome Segregation and the Plane of Cell Division
Chromosome segregation, mitotic spindle, aneuploidy, mitosis, kinetochore, microtubule
During cell division, microtubules of the mitotic spindle impart forces to accomplish two important functions: (i) Spindle microtubules capture chromosomes and pull the DNA apart into two equal sets. (ii) Astral microtubules interact with the cell cortex and rotate the entire spindle towards a predefined axis. These complex microtubule-mediated events of force generation are controlled precisely, during every cell division, to ensure the accurate segregation of the genome and proper plane of cell division. The molecular details of how microtubule interaction with chromosomes and the cell cortex are established and monitored remain unclear. To uncover the biochemical principles that govern microtubule-mediated functions during chromosome segregation and spindle rotation, we use a combination of high-resolution cell biology and high-throughput biochemistry tools.
Every human being experiences tens of trillions of cell divisions, and errors in division accumulate in all tissues throughout the body. Defects in chromosome segregation can lead to chromosomal instability and aneuploidy, hallmarks of aggressive cancers. Defects in spindle orientation can lead to an incorrect plane of cell division and loss of tissue organization, commonly found in age-related disorders. We therefore work with pharmacogenomics experts to apply our knowledge of mitosis and microtubule regulation to the development of therapeutic and diagnostic tools.
Mechanisms of chromosome-microtubule capture: The End-On Conversion Process
Microtubules capture chromosomes at a specialised sub-micron sized multi-protein structure called the ‘kinetochore’. Correct attachment of kinetochores to microtubule-ends is important for translating microtubule growth and shrinkage into pulling and pushing forces that move chromosomes. In a way, the kinetochore acts as a machine-control unit that can regulate microtubule growth and shrinkage phases and thereby controls the powering of chromosome movement (reviewed in Tamura and Draviam, 2012). Thus it is important that the ends of microtubules are tethered properly at the kinetochore; how tethering to microtubule-ends is achieved is not understood and this is our primary focus of study.
By developing a high-resolution imaging methodology, we showed that although kinetochores are capable of attaching to both lateral-walls and ends of the microtubule fibre, the attachment to lateral walls is gradually converted to microtubule-ends through a multi-step process (Shrestha and Draviam, 2013). We termed this the end-on conversion process wherein distinct sets of proteins are required for tethering the kinetochore to microtubule-walls versus microtubule-ends. We study how these two different modes of kinetochore-tethering are achieved, monitored and controlled using a combination of biochemistry and cell biology tools.
Regulation of cortex-microtubule interaction: Biased spindle rotation and orientation maintenance
The mitotic cell’s cortex recruits force-generators (Gαi-LGN-NuMA-dynein/dynactin) that pull the astral microtubules of the mitotic spindle and thus mediate spindle rotation. We developed Spindle3D, a semi-automated software tool, to monitor the temporal evolution of spindle movements during mitosis, and showed that cortical force generators are required for the biased rotation (Corrigan et al, 2013). We find, however, that the cortical force generators are dispensable for the stable maintenance of an already oriented spindle, suggesting the presence of unrecognized cortical tethering mechanisms. How is the spindle stably positioned in the absence of LGN and cortical force generators? We are currently investigating this using microtubule and spindle tracking tools in combination with protein depletion and mutant protein expression studies.
3 key publications
- Lateral to End-on conversion of chromosome-microtubule attachment requires kinesins CENP-E and MCAK. Shrestha R R L and Draviam V M. (2013) Current Biology. 23: 1-13
- Automated tracking of mitotic spindle pole positions shows that LGN is required for spindle rotation but not orientation maintenance. Corrigan A M, Shrestha R L, Zulkipli I, Hiroi N, Liu Y, Tamura N, Yang B, Patel J, Funahashi A, Donald A, Draviam V M. (2013) Cell Cycle. 12: 16
- Microtubule plus-ends within a mitotic cell are ‘moving platforms’ with anchoring, signaling and force-coupling roles. Tamura N and Draviam V M. (2012) Open Biol. 2: 120132
Page updated 9 June 2014
Practical Illustrations of Astronomical Concepts Relating to the Solar System
Eighth graders are introduced to concepts related to the Solar System. In groups, they participate in an experiment in which they must describe a ray of light and how it travels. They draw a diagram of the electromagnetic spectrum and describe the wavelengths associated with each type of light. They end the lesson by showing how light is reflected by mirrors.
Key Stage 1
National Curriculum - Knowing and using number facts
Question 1 of 3
How confident are you that you can help children to:
derive and remember addition and subtraction facts to 20 using an empty number line and the patterns in addition tables?
Addition and subtraction facts to 20 can be developed on an empty number line by bridging through 10.
8 + 7 = 15 (bridging through 10: 8 + 2 = 10, then 10 + 5 = 15)
The symmetry of an addition table can help children to memorise and reason with addition facts.
This can be used to develop an understanding of the commutative principle a + b = b + a. Realising that the number of addition facts to be remembered is halved aids fluency. Subtraction does not have this property.
Children can reason with trios of numbers such as 5, 8 and 13. Addition and subtraction facts can be learned by applying inverse operations and developing fact families; for example,
8 + 5 = 13; 5 + 8 = 13; 13 – 8 = 5; 13 – 5 = 8
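The fact-family and bridging-through-10 ideas above can be sketched programmatically, for instance to generate practice examples for a class (the function names here are illustrative, not part of the curriculum materials):

```python
def fact_family(a, b):
    """The four related facts for the trio (a, b, a+b)."""
    total = a + b
    return [f"{a} + {b} = {total}", f"{b} + {a} = {total}",
            f"{total} - {a} = {b}", f"{total} - {b} = {a}"]

def bridge_through_10(a, b):
    """Split the second addend so the first jump lands on 10."""
    to_ten = 10 - a
    return f"{a} + {to_ten} = 10, then 10 + {b - to_ten} = {a + b}"

print(fact_family(8, 5))
# ['8 + 5 = 13', '5 + 8 = 13', '13 - 8 = 5', '13 - 5 = 8']
print(bridge_through_10(8, 7))
# 8 + 2 = 10, then 10 + 5 = 15
```

Note how `fact_family` makes the halving explicit: commutativity means only two of the four facts need memorising, with the subtraction facts recovered by inversion.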
What this might look like in the classroom
Use the numbers 6, 7 and 13 to make four different number sentences.
6 + 7 = 13; 13 - 7 = 6
7 + 6 = 13; 13 - 6 = 7
Each of these trios uses numbers up to 20. What is the missing number in each problem?
6, 19, ? (13)
11, 18, ? (7)
2, 16, ? (14 or 18)
4, 9, ? (5 or 13)
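A trio's missing number is either the sum or the difference of the two given numbers, and the "numbers up to 20" constraint sometimes narrows it to a single answer. A small sketch of that reasoning (the helper name is made up):

```python
def complete_trio(a, b, limit=20):
    """Possible third numbers for an addition/subtraction trio:
    the sum a+b and the difference |a-b|, kept within the limit."""
    candidates = {a + b, abs(a - b)}
    return sorted(n for n in candidates if n <= limit)

for a, b in [(6, 19), (11, 18), (2, 16), (4, 9)]:
    print(a, b, complete_trio(a, b))
# 6 19 [13]
# 11 18 [7]
# 2 16 [14, 18]
# 4 9 [5, 13]
```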
Taking this mathematics further
Links to fractions and decimals
Once fluent with addition and subtraction of whole numbers on a number line try decimals and fractions e.g.
1.6 + 2.7
3/4 + 1 1/2
Using the number line to demonstrate multiplication as repeated addition and division as repeated subtraction helps children to make connections, e.g.
4 × 3 = 3 + 3 + 3 + 3 = 12
12 ÷ 3 = 12 - 3 - 3 - 3 - 3, four groups of 3
Children can solve problems involving trios of decimals or fractions, e.g.
2.4, 3.5 and 6.9
1 1/2, 2 1/4 and 3 3/4
3, 5, 15
Other types of number line
To reinforce and consolidate work with number lines, it is important to make links with other types of number line such as an analogue clock face, vertical axis on a graph, measuring equipment; e.g. cylinder, scales, thermometer, ruler. This will enable children to reason with them in different contexts and to apply the skills for basic number lines to these others.
Children should begin practically, adding quantities of beads, for example, to make numbers to 20. They should talk about what they are doing and the results obtained. As they make each number they should check by taking away the number added to ensure they have the number they started with.
Before adding and subtracting on a number line, children need to be confident with counting along one, identifying numbers and saying one more or less than the given number by moving their finger accordingly. Children should then be able to identify missing numbers on a partially labelled number line.
In the transition from the practical activity of counting out the beads to using a number line, they should complete the practical, and then record on a number line and with a number sentence. Once they are confident with this, they move on to record on an empty number line placing the appropriate numbers where they think they should go as in the example.
Children develop this method for adding and subtracting 2- and 3-digit numbers
e.g. 67 + 34
Each time they do this they check with the inverse operation
i.e. 67 + 34 = 101, 101 - 34 = 67
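The empty-number-line method for 2- and 3-digit addition (jump in tens, then ones, then confirm with the inverse) can be sketched as follows; this is a minimal illustration, not part of the curriculum materials:

```python
def add_on_number_line(start, addend):
    """Add on an empty number line: jump in tens, then ones,
    recording each landing point."""
    steps, pos = [start], start
    for _ in range(addend // 10):  # the tens jumps
        pos += 10
        steps.append(pos)
    if addend % 10:                # the final ones jump
        pos += addend % 10
        steps.append(pos)
    return steps

jumps = add_on_number_line(67, 34)
print(jumps)                       # [67, 77, 87, 97, 101]
assert jumps[-1] - 34 == 67        # check with the inverse operation
```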
Once an understanding of addition and subtraction has been developed, children progress to using columnar addition and subtraction with increasingly large numbers.
Related information and resources from the NCETM
Related information and resources from other sites
Related courses from the NCETM
Union Carbide’s factory was built to provide insecticide for India’s farmers so they would not lose crops. When demand for insecticide dropped Union Carbide began to cut costs wherever it could and, in the process, created circumstances that led to the disaster.
In the early hours of December 3, 1984, tons of poisonous gas escaped from Union Carbide’s factory at Bhopal, India. Methyl isocyanate, a highly toxic substance, was being processed here to produce insecticide for farmers. The nighttime gas leak caught people still in their beds. Eight thousand were killed and another quarter million injured, some very seriously. The problem began late on the evening of December 2 when water entered one of the big storage tanks containing methyl isocyanate at some stage of conversion. A chemical reaction was triggered and both temperature and pressure rose quickly. Officials at the plant knew what was happening and could also see that pressure was going to build up until something gave way but they were unsure about what to do.
A warning siren was available to warn local residents of any danger but workers were slow in turning it on. Shortly after midnight, the storage tank was breached and gas shot outward. Even then, no siren was sounded for an hour. By that time an area of more than fifteen square miles was contaminated and thousands were dying. Bhopal was a city of 800,000 people, mostly Muslims, which had tripled in size over the previous twelve years, largely due to the arrival of Union Carbide’s pesticide plant in 1969. The first ten years of the plant were highly successful and adequate safety precautions were in place.
Indian chemical engineers were taken to the United States for training and then brought back to their own country to oversee operations and train new staff. By the beginning of the 1980s it was a different story. Huge losses had overtaken the company, partly due to lack of demand for pesticides. The green revolution, the use of new and better grains for seed, was yielding a surplus of food and there was less need to buy expensive pesticides in order to reduce losses from insects.
As profits slumped, cost cutting measures appeared. Instead of sending their chemical engineers to the United States for training, men who had taken some university science were given a four-month crash course locally and then handed major responsibilities within the plant. These people were not qualified chemical engineers so they could be paid less, thus reducing the budget for staff. For people at this level of responsibility it was usually $30 a month. The level of training steadily deteriorated with each group of new workers. Additional workers were frequently needed because the best-trained chemical engineers often left for better pay and greater security elsewhere.
Men were hired to work in the very sensitive and highly toxic Methyl Isocyanate (MIC) Unit with limited training and little practical experience. This was the unit that had been a highly controversial addition to the plant. It was added in 1980 for the same reason that lay behind other decisions of that time: it was cheaper. Bhopal was the only Indian plant to use this chemical, and the company’s U.S. plant in West Virginia was the only other one using it; all the other Indian plants used safer yet more expensive chemicals. In addition to the risks associated with MIC itself, there was the challenge of adding one more building to the Bhopal installation to store it.
Local government leaders knew that Union Carbide’s factory should never have been built where it was. It was too close to areas of concentrated settlement and, since it first opened, more and more people had moved to places close to the plant. The local officials then faced a big addition: Union Carbide decided it would save a lot of money if large quantities of MIC were stored at the site instead of small amounts being delivered from time to time.
The city administrator was insistent. He asked the company to set it up farther out, away from the populated areas in order to avoid tragedies like the one that hit Mexico City only a few weeks earlier and killed large numbers of workers whose homes were close to the plant. In the debate that ensued, the company won out and the city administrator lost his job. He said it was not due to the position he took over the MIC unit but others wondered if that was really true.
Symptoms of the victims who were exposed to the poisonous gas took different forms depending on distance from the factory. They included immediate irritation, chest pain, breathlessness, and if no help was at hand the problem developed into asthma, pneumonia, and finally cardiac arrest. Almost nothing was known by those affected as to what to do in a tragedy of this kind. Had they known, simple protective measures were possible. If, for example, a wet cloth is placed over nose and mouth until help arrives, many lives can be saved.
The accident shocked the world, and Union Carbide, the United States parent company, was particularly concerned because it operated a facility of the same kind in West Virginia. Some months later, in August 1985, that same plant experienced a leak like the Bhopal one, but fortunately safety measures were in place to prevent widespread damage. For the people of Bhopal, similar safety measures were almost nonexistent. The failure to anticipate the developing leak was only the beginning. An analysis conducted in January of 1985 revealed that safety measures were totally inadequate.
A refrigerator designed to prevent dangerous chemical reactions in storage tanks had been shut down, ostensibly as a cost-cutting move. Had this been in place the buildup of pressure and the resultant leak would never have happened. A mechanical vent scrubber to detoxify escaping gas with caustic soda was not working. A network of waterspouts for neutralizing toxic gas was also inoperative, and so was another safety installation, a high-flare tower that would burn off dangerous gases high in the air. These conditions together with evidence of unreliable instruments throughout the plant confirmed the investigators’ findings. Bhopal’s security was totally inadequate.
Bhopal had experienced as many as six smaller accidents in the previous three years, all of them related to gas leaks, most frequently chlorine, a part of the methyl isocyanate manufacturing process. This particular gas is best known because of its use as poison gas in World War I. Chlorine comes from simple salt. Once broken away from its partner sodium, chlorine becomes a heavier-than-air gas, and an unstable chemical. It will recombine easily with carbon, and with material in the bodies of living things. But the chemical combinations formed by chlorine are known to cause cancer and other diseases. A single accident at a chlorine plant has the potential to kill hundreds of thousands of people. The accident at Bhopal killed 8,000 and injured a quarter million more.
Fallout from the accident was felt across the chemical industry. Safety audits and new regulatory standards became a primary focus of government and industry. Nongovernmental agencies increased their public awareness campaigns to ensure there would never again be another Bhopal. Concerns about technology transfers, the relations between economic and environmental issues, and the interests of labor all led to intense debate over public policy. In India, The Disaster Management Institute was formed to provide long-term planning in order to prevent future industrial accidents. The chemical industry responded with the formation of The Center for Chemical Process Safety to develop management strategies for the industry.
Poisonous gas spilled from a Union Carbide plant at Institute, West Virginia, in August of 1985, sending 130 people to hospital. The cause of the accident was almost identical to the one that was caused by the same company on a much bigger scale in India. New equipment had just been installed to make the plant safer but something went wrong. The lessons from Bhopal had not yet been learned. The plant at Institute produced the pesticide Temik from MIC just like Union Carbide’s operation in India.
Before the Bhopal tragedy the company transported MIC to other plants across the United States. After Bhopal, public concern forced the company to convert MIC to a less toxic chemical, aldicarb, before shipping it to other locations. This concern was heightened when the cause of the accident was known. The very same thing that went wrong in India was repeated when a valve failed and aldicarb heated up, bursting the container and escaping outside.
Within twenty minutes of the accident Union Carbide notified local emergency services. Fifteen minutes later the gas reached the town of Institute. People were warned to stay indoors but many were caught outside. These suffered from irritations to eyes, nose, throat, and lungs. It appeared that aldicarb had broken down into more volatile irritants in the course of being heated up before it escaped. The runaway reaction was identical to what happened in India. Fortunately, in West Virginia, action to correct the problem was quick and effective.
Some concern remained after the accident, particularly since its cause had been directly related to the installation of a new warning system designed to prevent the kind of thing that happened at Bhopal. The new system, known as “Safer,” analyzed wind speed and weather conditions on a continuing basis in order to predict the movement of escaping gas in case of a leak. Unfortunately, once again, even at the headquarters of the chemical company’s operation, the new safer system failed to work.
People who have not owned forestry or agricultural land can often find it hard to visualise what an acre looks like. This is an attempt to help you visualise how big an area one acre is.
An acre measures an area of land and is about 70 yards by 70 yards, which means about 4,900 square yards (or roughly 44,000 square feet); the exact figure is 4,840 square yards, or 43,560 square feet.
A typical football pitch is about 110 yards by about 70 yards (the rules allow some flexibility in the size) so that a pitch covers about one and a half acres of field or, including the immediately surrounding land that goes with it, the football pitch takes up about 2 acres.
Another way to visualise one acre is as the area in which you could park about 150 cars. A typical supermarket, excluding the car park, covers about 0.6 of an acre (about 26,000 square feet).
A 9 acre woodland might be only equivalent to about 6 football pitches, but it will usually appear bigger than that for various reasons: you can’t see across it and a wood will have bumps and dips and other features, but the main thing is that a forest is three-dimensional. The trees give the extra dimension which makes a woodland so much more interesting and so much richer in biodiversity, and makes it seem much bigger.
The other measurement often used for surface area is the metric measure – hectares. A hectare is precisely 100 metres by 100 metres and is much larger than an acre. About 2.47 acres make up one hectare, so an acre is only about 40% of the size of a hectare. One reason that acres, rather than hectares, are used in the UK is that, being a smaller measure, you get more of them in a given piece of land and it is easier to remember a round number of acres than a hectare measurement with a decimal point. However, one advantage of using hectares is that more detailed maps use grid lines where the distance between the lines is equivalent to 100 metres. The result of this is that each square covers exactly one hectare or approximately two and a half acres.
If you are trying to measure in approximate terms an acre of woodland you can pace it out as about 80 paces by 80 paces, though in woodlands people often take shorter paces so 70 yards may take more like 90 paces.
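For readers who want the conversions rather than the mental pictures, the figures above reduce to a little arithmetic (the 0.875 yards-per-pace value simply encodes the "80 paces to 70 yards" rule of thumb):

```python
SQ_YD_PER_ACRE = 4840            # exact; 43,560 square feet
ACRES_PER_HECTARE = 2.47105      # one hectare is 10,000 square metres

def acres_to_hectares(acres):
    """Convert acres to hectares."""
    return acres / ACRES_PER_HECTARE

def pace_out_acres(paces_per_side, yards_per_pace=0.875):
    """Rough acreage of a square plot paced out on each side
    (80 paces at 0.875 yd/pace is the 70-yard side of an acre)."""
    side_yards = paces_per_side * yards_per_pace
    return side_yards ** 2 / SQ_YD_PER_ACRE

print(round(acres_to_hectares(9), 2))  # the 9-acre wood is about 3.64 ha
print(round(pace_out_acres(80), 2))    # 80 x 80 paces is about 1.01 acres
```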
Mental illnesses are disorders of brain functioning. They are often misunderstood and ideas about mental illness are only starting to be confirmed by science and research into the brain and how it works.
Even though many people suffer from mental illnesses, you often will only know someone has a mental illness if they tell you directly. Unlike a broken arm or leg, it is often difficult to understand what that person may be experiencing and how you can help. Sometimes their mental illness will make them act in unusual ways, which may make you feel uncomfortable. When we feel uncomfortable we often treat others differently whether we do it consciously or unconsciously.
When we treat someone differently, we may be stigmatizing them based on ideas about mental illness that come from inaccurate news reports or over dramatized movies and television programs. This stigma often deters those with mental illness from accessing health care or maximizing their potential because they’re afraid of being judged.
So how can we fight stigma? By improving mental health literacy, we can challenge our misinformation and negative attitudes.
Please visit our YouTube channel to watch some personal stories.
We often hear temperature changes explained on a global scale, but just how are those changes playing out in your local temperatures? This calculator answers that question for every American state.
The new tool is the work of NOAA's National Climatic Data Center. Using data on average temperatures collected since 1895, you can look at how average, maximum, and minimum temperatures have shifted.
But that data is not just available by year — you can also break it down further into winter, spring, summer, and fall, which is particularly useful as we hear more and more about temperatures busting through seasonal records. So, if you're curious about how to contextualize new information about temperature changes in your area, you can see on a graph, for instance, how an average Texas winter would have felt over 100 years ago:
To illustrate just how pervasive the changes are, the NCDC also put together a mapping tool that you can use to see the cumulative effect of how the averages have changed. For instance, look at this map of how far the average temperatures of 1901-1910 differed from 20th century norms:
And then compare it with the most recent map of how far temperatures in 2011-2013 deviate from those same averages:
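The deviation maps are just each period's average temperature minus a baseline-period average. A minimal sketch of that calculation, using made-up temperatures rather than NCDC data:

```python
def anomalies(temps_by_year, baseline_years):
    """Deviation of each year's mean temperature from the average
    over a chosen baseline period."""
    baseline = sum(temps_by_year[y] for y in baseline_years) / len(baseline_years)
    return {y: round(t - baseline, 2) for y, t in temps_by_year.items()}

# Hypothetical winter means in degrees F, for illustration only.
temps = {1905: 45.1, 1950: 45.9, 2012: 47.6}
print(anomalies(temps, baseline_years=[1905, 1950]))
# {1905: -0.4, 1950: 0.4, 2012: 2.1}
```

A positive anomaly means the period ran warmer than the baseline, which is what the recent maps show almost everywhere.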
You can check out the data for your own state right here.
Maps and graphs made using the NCDC tool. Top image: Gary Whitton / Shutterstock
Frequently Asked Questions
Why is it important to teach study skills?
Although every student must study, few are taught how to perform this important task. Research shows that students who take a study skills class are six times more likely to stay in college a second year. Most students develop their own process for studying, which is very often ineffective. For example, they may think the more they re-read subject material, the better they will understand it. The Bible tells us that zeal without knowledge is not good. Students try to do better without knowing how to do better. This system teaches students how to study so they might learn. It is most helpful if students can begin learning these skills when they first begin school, building on the skills year after year. Just like we begin teaching math to a kindergartner, we also begin teaching study skills at a young age.
How is Victus different from other study skills courses?
The Latin word Victus means “way of life” and this course teaches students skills that will help them succeed in school and in life. It teaches concepts and applications of those concepts. Victus is a system of study with component parts, making it more effective and easier to remember and use. Most study skills courses are a list of skills. Victus consists of component parts working together to accomplish an aim. The component parts are understanding where you are now, knowing where you want to be, and implementing a plan to get there (see table of contents). The aim is the accomplishment of the mission, be it large or small.
Students begin to understand that the Victus Study Skills System will help them accomplish their mission, succeeding in academics and in life.
Research shows that students are seven times more likely to be engaged in instruction when teachers help students “see the curriculum as critical to their current lives, their future and their culture.”
Victus helps students see the instruction as critical.
What is included in the course?
Traditional study skills such as note taking, test taking, and reading for information are taught, as well as the essentials of motivation, time management, goal setting, keeping a calendar, understanding the importance of your mission on earth and the importance of choosing and living out priorities. Students are taught concepts, followed by application in their school work. The course is built on understanding three foundational cornerstones of where we are now (which includes self assessments of current study habits and learning styles), where you want to be (which includes creating a mission statement and identifying goals and priorities) and determining how to achieve your goals (which includes time management, organization, reading for information, test taking, and note taking).
Can I see a copy of the Table of Contents?
Teaching and Materials
What are the teaching options?
Teacher Edition with Student Workbook – If you choose the teacher-led option, you will need one Teacher Edition and one Student Workbook for each student.
Student DIY Workbook – If you choose the student-led option, each student will need a copy of the DIY workbook.
Video and PowerPoint – Helpful but not essential; you can use the video or PowerPoint along with either option as additional reinforcement.
What if I have questions about the lessons?
We are always happy to help — email us anytime!
How do I reinforce Victus once it is taught?
Educators can easily reinforce each skill in each subject taught and it is important to begin using each skill as soon as it is taught. Greatest memory loss is within the first 24 hours of any new skill being introduced.
Ideally, schools introduce the concepts to all teachers, students and parents so that the skills can be reinforced in all arenas of academics and life. Ideally, students begin Victus Study Skills System when they begin school so that these skills become habits. They learn these skills in a developmental way, much like math or reading, in which the basic concepts are taught first. Then students do not have to unlearn bad habits, but rather develop good habits that will help them succeed in academics and in life.
How long does it take to teach Victus?
The course is five total hours of instruction that can be broken down into shorter segments of time.
Schools that want a semester or year long course can easily do that by using the supplemental materials at the back of the workbooks and by teaching only portions of each lesson during each class time and reinforcing those.
Can I teach more than one student at a time?
Absolutely! All you need is one Teacher Edition and a Student Workbook for each student. If you have a fairly large group it’s best to also order the PowerPoint.
If you choose the DIY approach, each student will need a workbook. The video is the best supplement for groups using the DIY.
What do I need to purchase?
First, it’s best to decide if you want to teach the course or if you want the students to work through the material on their own. Then purchase the Teacher Edition and Student Workbook if you wish to teach it or the DIY if you want the student to work on his or her own. Regardless, it will be most effective if you become familiar with the curriculum so you can reinforce it inside and outside the classroom.
What is the DIY (Do It Yourself) book?
DIY is the workbook that students complete on their own. It can be used with or without the video.
What is the Student Workbook?
The Student Workbook is the workbook that you will need for each student when a teacher is leading the course. In addition, you will need a Teacher Edition to help guide you in teaching the lessons.
What is the Teacher Edition?
The Teacher Edition gives you all you need to teach the course, including specific lesson plans and procedures.
What is the video?
The video is a video of an actual class in which teachers are teaching Victus to students. It can be used to accompany the DIY or the Student and Teacher Editions.
What is the Personal Planning Book?
It is a simple way for you to develop your own personal plan. Author Susan Ison has taught this to hundreds of individuals and encourages you to try it! It will help you better teach your children if you understand it yourself.
To what ages and grade level is Victus taught?
The Victus program is for all ages and is adjusted by the teacher according to the student’s understanding. It’s much like developing reading skills that are introduced age appropriately, then built & reinforced year after year.
The young student learns basic terms. The middle school student builds on those basic terms and begins to learn and apply new study and life skills. The high school student is establishing these skills as habits in all areas of life. The college bound and college age student knows the basics and has established good habits, and understands the seriousness of time management and other skills as a way of life. These skills have become a way of life, helping students succeed in all areas.
Below are some general recommendations:
4 yrs through 7 yrs: Use the Teacher Edition and introduce each skill and concept. Students are never too young to learn terms like goals and organization, how to listen, etc. Introduce the actual Student Workbook only when the student seems ready to complete pages shown in the Teacher Edition. Victus now offers a supplement which can be used by the teacher and the younger student. Please email us for more information.
8 years on up: The student should be able to work through the Student Workbook while being led by a teacher who has the Teacher Edition.
10 years on up: Independent learners can learn study skills while using the DIY. However, it is very important that the teacher be certain the student understands the work in the workbook by asking questions.
College bound and college students: Victus now offers a Victus Supplement for the college age student. It is never too late to learn these skills and the supplement is designed to teach the basics as well as those skills so essential to success in college. Please email us for more information.
Students who enjoy group interaction will benefit more from the teacher led approach, which includes the Teacher Edition & the Student Workbook.
The video is a great resource for the students taking the course as well as for teachers teaching the course. Although a few of the details like page numbers may differ, the ten easy-to-follow lessons are the same.
How does this course help students in school?
Most of us develop our own study habits which may or may not be effective. Victus helps students become more confident as they see that the processes they learn are more effective than previous hit-or-miss methods. They see their learning increase and their grades improve. Research says new habits can be formed in 21 days. The hope is these effective methods of study will become habits and a way of life in every academic endeavor. They learn concepts first, followed by learning specific applications they can apply immediately in school for each concept.
How does this course help students in life?
Not only does our course help in school with necessary study skills, but also the life skills taught in this curriculum help students to learn the connection between what they do today and the results they will see in their future. They learn how to identify priorities, make decisions based on what is important, and manage their time according to priorities. They learn concepts first, followed by learning specific applications they can apply immediately in life for each concept.
Who developed Victus and where has it been taught?
Victus continues to be developed by professional educators who see first hand the skills students need. The course had been taught as a non-system approach until about twenty years ago when God showed Susan that the study skills her company had been teaching and the strategic planning consulting she had been doing could be integrated into a more effective systems approach course for students. For many years, it has been taught in schools and in tutoring sessions throughout the United States. Last year the primary developer, Susan Ison, knew God wanted her to make it available to an even wider audience. Now the materials are available online through this website.
What do you mean by the concept that the “results come from the process?”
If you take any job and list the steps it takes to accomplish the job, you will find it is not always easy. If you try to teach someone how to make a cup of coffee and you leave out one step, it will affect the results. W. Edwards Deming was a brilliant statistician and quality expert who introduced this concept to businesses that had difficulty accepting the truth that one-third of each dollar they spent was wasted because of unclear or wrong processes. Students need to learn that the process they use in study will affect the result. They need to learn that it is important how we achieve goals.
What do you mean by the concept that Victus is a system of study, and why is that important?
A system can be defined as a set of parts working together to accomplish an aim. A car has parts that work together to move a person from one place to another. As professional educators, we have learned that students are more likely to remember things that make sense to them. The Victus Study Skills System makes sense to them because it has parts with aim and purpose.
What is a systems approach?
A system is made up of several components. A system has an aim or a purpose. A car is an example of a system which has several components, and its aim is to take us from point A to point B. Having good brakes or enough oil is not enough. Each component must work well together to accomplish the aim of the system. Each component has its own process, and the results come from the process. If any one process is amiss, poor results can be expected.
Effective study is no different. It is a system with component parts, note taking for example. The process of note taking will yield good or bad results depending on the process used. The results of the entire process of study will be affected by the effectiveness of each component process.
Why is this approach helpful?
This approach makes sense to students of all ages. It is not a series of unrelated tasks, but a process of study that will help the student be a more effective lifelong learner. Students are helped when they see any curriculum as critical in their lives, and Victus helps them understand the value of these skills through the three foundational cornerstones.
Why was this course developed?
Since 1977 our organization of professional educators has been teaching students in a primarily one-on-one atmosphere. In such a relationship-based effort, the teacher is quick to see that more often than not the student has few effective study skills and that content itself is rarely the problem. To meet that need, our educators developed the current Study Skills System.
Who wrote it?
This course was primarily written by our study skills teachers and our founder.
Who teaches it?
Specially trained study skills teachers teach the workshops and act as consultants. Anyone who uses the Teacher Edition can teach the lesson plans very effectively.
To whom is it taught?
The Study Skills System can be individualized and taught to any age student.
How many have taken the course?
Thousands of students throughout the United States have taken the course.
What are people saying about it?
See video on homepage. |
Curriculum unit on the historical context of Upton Sinclair's novel The Jungle and how the book helped reform efforts in Congress to pass the Meat Inspection Act and the Pure Food and Drug Act in 1906.
A curriculum unit of three lessons in which students explore Hopi place names, poetry, song, and traditional dance to better understand the ways Hopi people connect with the land and environment through language. The unit is centered on the practice of growing corn. Students make inferences about language, place, and culture and also look closely at their own home environment and landscape to understand the places, language, and songs that give meaning to cultures and communities.
William Golding’s Lord of the Flies is a novel that engages middle school students in thought-provoking discussion, and provides practice in literary analysis skills. The three lessons in this unit all stress textual evidence to support observations and generalizations uncovering the novel’s central character traits, symbols and themes.
As the students learn the history of the alphabet they will be introduced to three important ancient civilizations, and to the idea of cultural inheritance. The concept of chronological order will be reinforced through an emphasis on the fact that each group of people passed on the alphabet. In addition to learning history, the children will practice language arts and art skills.
In this curriculum unit, students will learn about the origins of four major types of British surnames. They will consult lists to discover the meanings of specific names and later demonstrate their knowledge of surnames through various group activities. They will then compare the origins of British surnames to certain types of non-British surnames. In a final activity, the students will research the origins and meanings of their own family names.
In this curriculum unit, students look at the role of President as defined in the Articles of Confederation and consider the precedent-setting accomplishments of John Hanson, the first full-term “President of the United States in Congress Assembled.” |
Why do we need a database for oceanic methane and nitrous oxide?
Methane (CH4) and nitrous oxide (N2O) are trace gases in the atmosphere that contribute significantly to the earth's greenhouse effect. N2O is furthermore becoming the most important substance responsible for ozone depletion in the stratosphere. The ocean contributes moderately to global CH4 (up to 10% of natural emissions) and strongly to N2O emissions (up to 34% of natural emissions).
Oceanic emissions of nitrous oxide and methane are produced during natural microbial processes, and the distribution of methane and nitrous oxide in the ocean varies strongly over space and time. Their production and consumption in the ocean are sensitive to environmental changes, such as changes in temperature, oxygen concentration or primary productivity. Climate change may thus affect the oceanic emissions of nitrous oxide and methane.
A compilation of all available measurements into a global database is a useful tool to identify regions with strong emissions, to assess their variability and to quantify the oceanic CH4 and N2O emissions. It also serves as a powerful resource for validation of biogeochemical models.
The MEMENTO database currently contains about 120,000 surface and depth profile measurements of N2O and more than 20,000 measurements for CH4 all over the oceans.
Water sampling on board RC Littorina.
N2O measurements in the laboratory. |
Value: 60 Points
Due Date: June 2
Essential Question: How is _____ (insert national park) beautiful, powerful and inspiring?
Transcendentalists in the 19th century viewed nature as beautiful, powerful and inspiring. In the 21st century, individuals still view nature in the same way. The National Parks are celebrating their “100th birthday,” and in this assessment students will analyze the essential question and apply it to the National Park that has been assigned.
Insert Groan: This is an individual assessment.
Step 1: In order to analyze the essential question, it is important to research the topic in detail.
Step 2: Students need to type in the following url https://www.nps.gov/index.htm and find their park.
Step 3: Each national park has its own website as well - go to it, explore, and ask yourself: what do you see?
Step 4: Take notes that will help you to answer the essential question. (You will need them.)
This tutorial guides students to:
1. Create an Audioboom account using Google Chrome.
2. Navigate Audioboom.
3. Create a voice thread.
4. Yuck, hated it: delete a voice thread from Audioboom.
5. Publish to Audioboom.
6. Insert an image.
I highly recommend that students create a script.
If you already have an account from a previous class, just log in and create.
Biogas, sometimes called renewable natural gas, could be part of the solution for providing people in rural areas with reliable, clean and cheap energy. In fact, it could provide various benefits beyond clean fuel as well, including improved sanitation, health and environmental sustainability.
What is Biogas?
Biogas is a high-calorific-value gas produced by the anaerobic decomposition of organic wastes. It can come from a variety of sources, including the organic fraction of municipal solid waste (MSW), animal wastes, poultry litter, crop residues, food waste, sewage and organic industrial effluents. Biogas can be used to produce electricity, for heating, for lighting and to power vehicles.
Using manure for energy might seem unappealing, but you don’t burn the organic matter directly. Instead, you burn the methane gas it produces, which is odorless and clean burning.
Biogas Prospects in Rural Areas
Biogas finds wide application in all parts of the world, but it could be especially useful to developing countries, especially in rural areas. People that live in these places likely already use a form of biomass energy — burning wood. Using wood fires for heat, light and cooking releases large amounts of greenhouse gases into the atmosphere.
The smoke they release also has harmful health impacts, particularly when used indoors. You also need to burn a lot of wood when it is your primary energy source. Collecting this wood is a time-consuming and sometimes difficult, even dangerous, task.
Many of the same communities that rely on wood fires, however, also have an abundant supply of another fuel source; they just need the tools to capture and use it. Many have plenty of dung from livestock but lack sanitation equipment, and this lack of sanitation creates health hazards.
Turning that waste into biogas could solve both the energy problem and the sanitation problem. Creating a biogas system for a rural home is much simpler than building other types of systems. It requires an airtight pit lined and covered with concrete and a way to feed waste from animals and latrines into the pit. Because the pit is sealed, the waste will decompose quickly, releasing methane.
This methane flows through a PVC pipe to the home, where you can turn it on and light it when you need to use it. The system also produces manure that is free of pathogens, which farmers can use as fertilizer.
A similar but larger setup can provide comparable benefits for urban areas in developing countries and elsewhere.
Benefits of Biogas for Rural Areas
Anaerobic digestion systems are beneficial to developing countries because they are low-cost compared to other technologies, low-tech, low-maintenance and safe. They provide reliable fuel as well as improved public health and sanitation. Also, they save people the labor of collecting large amounts of firewood, freeing them up to do other activities. Thus, biomass-based energy systems can help in rural development.
Biogas for rural areas also has environmental benefits. It reduces the need to burn wood fires, which helps to slow deforestation and eliminates the emissions those fires would have produced. On average, a single home biogas system can replace approximately 4.5 tons of firewood annually and eliminate the associated four tons of annual greenhouse gas emissions, according to the World Wildlife Fund.
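Taking the WWF per-household figures quoted above at face value (an assumption for illustration, not measured data), the aggregate savings for a village scale linearly with the number of systems installed:

```python
# Rough, illustrative estimate of annual savings from home biogas systems,
# using the per-household figures quoted above from the World Wildlife Fund.
FIREWOOD_TONS_PER_HOME = 4.5  # tons of firewood replaced per household per year
GHG_TONS_PER_HOME = 4.0       # tons of greenhouse gas emissions avoided per year


def village_savings(households: int) -> tuple[float, float]:
    """Return (firewood tons saved, GHG tons avoided) per year for a village."""
    return (households * FIREWOOD_TONS_PER_HOME,
            households * GHG_TONS_PER_HOME)


firewood, ghg = village_savings(100)
print(firewood, ghg)  # → 450.0 400.0
```

So even a hundred household systems would, under these assumptions, avoid hundreds of tons of firewood use and emissions every year.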
Biogas is also a clean, renewable energy source and reduces the need for fossil fuels. Chemically, the methane in biogas is the same as that in natural gas; biogas, however, is a renewable fuel source, while natural gas is a fossil fuel. The methane in organic wastes would be released into the atmosphere through natural processes if left alone, while the greenhouse gases in natural gas would stay trapped underground. Using biogas as a fuel source therefore reduces the amount of methane released by matter decomposing in the open.
What Can We Do?
Although biogas systems cost less than some other technologies, affording them is often still a challenge for low-income families in developing countries, especially in villages. Many of these families need financial and technical assistance to build them. Both governments and non-governmental organizations can step in to help in this area.
Once people do have biogas systems in place though, with minimal maintenance of the system, they can live healthier, more comfortable lives, while also reducing their impacts on the environment. |
Wet Bulb Calculator
With many countries reporting their highest-ever temperatures, such as Canada reaching 49.6 °C, it's important that you understand the different factors affecting heat and the body's ability to regulate it. What many people might fail to take into account is the combined effect of heat and humidity - the wet-bulb effect. This tool will teach you about the wet bulb, and lets you estimate its value.
The wet bulb calculator operates on a simple principle. You can use it to work out the wet bulb temperature from just two numbers: temperature and relative humidity. Keep reading if you want to discover some wet-bulb applications, what the military benefits of the Wet-Bulb Globe Temperature are, and what it has to do with our health - especially within the context of our scorching summers! If you want to find out more about our atmosphere, check out our heat index calculator and our air density calculator.
What is the wet-bulb temperature?
Despite what you might think at first, wet-bulb temperature has nothing to do with light bulbs. It is instead the temperature read by a special thermometer that is wrapped in water-soaked fabric and ventilated. This thermometer is part of a device called a psychrometer. It includes a dry-bulb thermometer, a wet-bulb thermometer and a psychrometric chart - a graph that plots the relationships between the dry and wet-bulb temperature, relative humidity, and dew point at constant pressure.
By definition, wet-bulb temperature is the lowest temperature a portion of air can acquire by evaporative cooling only. When air is at its maximum (100 %) humidity, the wet-bulb temperature is equal to the normal air temperature (dry-bulb temperature). As the humidity decreases, the wet-bulb temperature becomes lower than the normal air temperature.
Data about the wet-bulb temperature is essential when it comes to preventing our body from overheating. Our bodies sweat to cool off, but, because water evaporates slower in more humid conditions, we cool down a lot slower in humid conditions. This causes our internal body temperature to rise. If the wet-bulb temperature exceeds 35 °C (95 °F) for an extended period of time then people in the surrounding area are at risk of hyperthermia.
How to calculate the wet-bulb temperature?
Although many equations have been created over the years, our calculator uses the Stull formula, which is accurate for relative humidities between 5% and 99% and temperatures between -20 °C and 50 °C. It loses its accuracy in situations where both moisture and heat are low in value, but even then the error range is only between -1 °C and +0.65 °C.
The wet-bulb calculator is based on the following formula:

Tw = T × arctan[0.151977 × (RH + 8.313659)^(1/2)] + arctan(T + RH) - arctan(RH - 1.676331) + 0.00391838 × RH^(3/2) × arctan(0.023101 × RH) - 4.686035
It might look intimidating, but don't worry - we do all the calculations for you. Just input two numbers:
Temperature - air temperature or dry-bulb temperature is the temperature given by a thermometer not exposed to direct sunlight.
% Relative humidity - a ratio of how much water vapor is in the air to how much it could contain at a given temperature.
Remember that both temperature and wet-bulb temperature in this formula are expressed in °C! If you would like to use other units, you need to convert them to the Celsius scale before you start calculations.
If you would like to know more about the relative humidity formula, check out our dew point calculator. You can also use our RH calculator to calculate the relative humidity if you know the dew point temperature. Alternatively, you can also find it from the mixing ratio of air. Feel free to check it out!
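The Stull formula above can be computed directly. Here is a minimal Python sketch (the function name is our own; inputs are air temperature in °C and relative humidity in %):

```python
import math


def wet_bulb_stull(t_celsius: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature via the Stull formula.

    Valid roughly for relative humidities between 5% and 99% and
    temperatures between -20 °C and 50 °C; result is in °C.
    """
    t, rh = t_celsius, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)


# At 20 °C and 50% relative humidity the wet-bulb temperature
# comes out to about 13.7 °C.
print(round(wet_bulb_stull(20.0, 50.0), 1))
```

Notice how the wet-bulb value sits well below the dry-bulb temperature at moderate humidity, and approaches it as humidity climbs toward 100%.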
Wet-bulb calculator applications
The wet-bulb temperature might not be a widely known measure, but it has some valuable functions:
Construction - different materials react differently to different humidities, so this temperature is needed when designing a building in different climates.
Snowmaking - snow production needs low wet-bulb temperatures; when the humidity decreases, snow can be made at higher air temperatures.
Meteorology - forecasters use wet-bulb temperature to predict rain, snow, or freezing rain.
Wet-Bulb Globe Temperature
Wet-Bulb Globe Temperature is a kind of an apparent temperature - the temperature perceived by humans - used to estimate the effect of temperature, humidity, wind speed, and sunlight on humans. Athletes, industrial hygienists and the military use it to prevent heat stroke by following guidelines for physical activity and water intake.
Wet-Bulb Globe Temperature is determined by the following equation:

WBGT = 0.7 × Tw + 0.2 × Tg + 0.1 × T

where Tg - the globe thermometer temperature - is measured by a thermometer situated inside a black globe. It allows for the estimation of direct solar radiation.
Measured indoors, without direct solar radiation/sunlight, the Wet-Bulb Globe Temperature uses a shortened formula:

WBGT = 0.7 × Tw + 0.3 × Tg
Wet-Bulb Globe Temperature gives us vital information if we want to be safe on warmer days. |
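The two WBGT weightings above are simple linear combinations, so they translate into a few lines of code. A sketch (function names are ours; all temperatures in °C):

```python
def wbgt_outdoor(t_wet: float, t_globe: float, t_dry: float) -> float:
    """Outdoor WBGT: 0.7 wet-bulb + 0.2 globe + 0.1 dry-bulb."""
    return 0.7 * t_wet + 0.2 * t_globe + 0.1 * t_dry


def wbgt_indoor(t_wet: float, t_globe: float) -> float:
    """Indoor WBGT (no direct solar radiation): 0.7 wet-bulb + 0.3 globe."""
    return 0.7 * t_wet + 0.3 * t_globe


# Example: wet-bulb 25 °C, globe 40 °C, dry-bulb 30 °C.
print(wbgt_outdoor(25.0, 40.0, 30.0))  # → 28.5
print(wbgt_indoor(25.0, 40.0))         # → 29.5
```

The heavy 0.7 weight on the wet-bulb term is why humid days feel so much more dangerous than dry ones at the same air temperature.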
Volume 14, Number 6—June 2008
Persistence of Yersinia pestis in Soil Under Natural Conditions
As part of a fatal human plague case investigation, we showed that the plague bacterium, Yersinia pestis, can survive for at least 24 days in contaminated soil under natural conditions. These results have implications for defining plague foci, persistence, transmission, and bioremediation after a natural or intentional exposure to Y. pestis.
Plague is a rare, but highly virulent, zoonotic disease characterized by quiescent and epizootic periods (1). Although the etiologic agent, Yersinia pestis, can be transmitted through direct contact with an infectious source or inhalation of infectious respiratory droplets, flea-borne transmission is the most common mechanism of exposure (1). Most human cases are believed to occur during epizootic periods when highly susceptible hosts die in large numbers and their fleas are forced to parasitize hosts upon which they would not ordinarily feed, including humans (2). Despite over a century of research, we lack a clear understanding of how Y. pestis is able to rapidly disseminate in host populations during epizootics or how it persists during interepizootic periods (2–6). What limits the geographic distribution of the organism is also unclear. For example, why is the plague bacterium endemic west of the 100th meridian in the United States, but not in eastern states despite several known introductions (7)?
Persistence of Y. pestis in soil has been suggested as a possible mechanism of interepizootic persistence, epizootic spread, and as a factor defining plague foci (2,3,5,7,8). Although Y. pestis recently evolved from an enteric bacterium, Y. pseudotuberculosis, that can survive for long periods in soil and water, studies have shown that selection for vector-borne transmission has resulted in the loss of many of these survival mechanisms. This suggests that long-term persistence outside of the host or vector is unlikely (9–11). Previous studies have demonstrated survival of Y. pestis in soil under artificial conditions (2,3,12–14). However, survival of Y. pestis in soil under natural exposure conditions has not been examined in North America.
As part of an environmental investigation of a fatal human plague case in Grand Canyon National Park, Arizona, in 2007, we tested the viability of Y. pestis in naturally contaminated soil. The case-patient, a wildlife biologist, was infected through direct contact with a mountain lion carcass, which was subsequently confirmed to be positive for Y. pestis based on direct fluorescent antibody (DFA) testing (which targets the Y. pestis–specific F1 antigen), culture isolation, and lysis with a Y. pestis temperature-specific bacteriophage (15). The animal was wearing a radio collar, and we determined the date of its death (October 26, 2007) on the basis of its lack of movement. The case-patient had recorded the location at which he encountered the carcass and had taken photographs of the remains, which showed a large pool of blood in the soil under the animal’s mouth and nose. During our field investigation, ≈3 weeks after the mountain lion’s death, we used global positioning satellite coordinates and photographs to identify the exact location of the blood-contaminated soil. We collected ≈200 mL of soil from this location at depths of up to ≈15 cm from the surface.
After collection, the soil was shipped for analysis to the Bacterial Diseases Branch of the Centers for Disease Control and Prevention in Fort Collins, Colorado. Four soil samples of ≈5 mL each were suspended in a total volume of 20 mL of sterile physiologic saline (0.85% NaCl). Samples were vortexed briefly and allowed to settle for ≈2 min before aliquots of 0.5 mL were drawn into individual syringes and injected subcutaneously into 4 Swiss-Webster strain mice (ACUC Protocol 00–06–018-MUS). Within 12 hours of inoculation, 1 mouse became moribund, and liver and spleen samples were cultured on cefsulodin-Irgasan-novobiocin agar. Colonies consistent with Y. pestis morphology were subcultured on sheep blood agar. A DFA test of this isolate was positive, demonstrating the presence of F1 antigen, which is unique to Y. pestis. The isolate was confirmed as Y. pestis by lysis with a Y. pestis temperature–specific bacteriophage (15). Additionally, the isolate was urease negative. Biotyping (glycerol fermentation and nitrate reduction) of the soil and mountain lion isolates indicated biovar orientalis.
Of the 3 remaining mice, 1 became moribund after 7 days and was euthanized; 2 did not become moribund and were euthanized 21 days postexposure. Culture of the necropsied tissues yielded no additional isolates of Y. pestis. Pulsed-field gel electrophoresis (PFGE) typing with AscI was performed with the soil isolate, the isolate recovered from the mountain lion, and the isolate obtained from the case-patient (16). The PFGE patterns were indistinguishable, showing that the Y. pestis in the soil originated through contamination by this animal (Figure). Although direct plating of the soil followed by quantification of CFU would have been useful for assessing the abundance of Y. pestis in the soil, this was not possible because numerous contaminants were present in the soil.
It is unclear by what mechanism Y. pestis was able to persist in the soil. Perhaps the infected animal’s blood created a nutrient-enriched environment in which the bacteria could survive. Alternatively, adherence to soil invertebrates may have prolonged bacterial viability (17). The contamination occurred within a protected rock outcrop that had limited exposure to UV light and during late October, when ambient temperatures were low. These microclimatic conditions, which are similar to those of burrows used by epizootic hosts such as prairie dogs, could have contributed to survival of the bacteria.
These results are preliminary and do not address 1) the maximum time that plague bacteria can persist in soil under natural conditions, 2) possible mechanisms by which the bacteria are able to persist in the soil, or 3) whether the contaminated soil is infectious to susceptible hosts that might come into contact with the soil. Answers to these questions might shed light on the intriguing, long-standing mysteries of how Y. pestis persists during interepizootic periods and whether soil type could limit its geographic distribution. From a public health or bioterrorism preparedness perspective, answers to these questions are necessary for evidence-based recommendations on bioremediation after natural or intentional contamination of soil by Y. pestis. Previous studies evaluating viability of Y. pestis on manufactured surfaces (e.g., steel, glass) have shown that survival is typically <72 hours (18). Our data emphasize the need to reevaluate the duration of persistence in soil and other natural media.
Dr Eisen is a service fellow in the Division of Vector-Borne Infectious Diseases, Centers for Disease Control and Prevention, Fort Collins. Her primary interest is in the ecology of vector-borne diseases.
We thank L. Chalcraft, A. Janusz, R. Palarino, S. Urich, and J. Young for technical and logistic support.
- Barnes AM. Conference proceedings: surveillance and control of bubonic plague in the United States. Symposium of the Zoological Society of London. 1982;50:237–70.
- Gage KL, Kosoy MY. Natural history of plague: perspectives from more than a century of research. Annu Rev Entomol. 2005;50:505–28.
- Drancourt M, Houhamdi L, Raoult D. Yersinia pestis as a telluric, human ectoparasite-borne organism. Lancet Infect Dis. 2006;6:234–41.
- Eisen RJ, Bearden SW, Wilder AP, Montenieri JA, Antolin MF, Gage KL. Early-phase transmission of Yersinia pestis by unblocked fleas as a mechanism explaining rapidly spreading plague epizootics. Proc Natl Acad Sci U S A. 2006;103:15380–5.
- Webb CT, Brooks CP, Gage KL, Antolin MF. Classic flea-borne transmission does not drive plague epizootics in prairie dogs. Proc Natl Acad Sci U S A. 2006;103:6236–41.
- Cherchenko II, Dyatlov AI. Broader investigation into the external environment of the specific antigen of the infectious agent in epizootiological observation and study of the structure of natural foci of plague. J Hyg Epidemiol Microbiol Immunol. 1976;20:221–8.
- Pollitzer R. Plague. World Health Organization Monograph Series No. 22. Geneva: The Organization; 1954.
- Bazanova LP, Maevskii MP, Khabarov AV. An experimental study of the possibility for the preservation of the causative agent of plague in the nest substrate of the long-tailed suslik. Med Parazitol (Mosk). 1997; (
- Achtman M, Zurth K, Morelli G, Torrea G, Guiyoule A, Carniel E. Yersinia pestis, the cause of plague, is a recently emerged clone of Yersinia pseudotuberculosis. Proc Natl Acad Sci U S A. 1999;96:14043–8.
- Brubaker RR. Factors promoting acute and chronic diseases caused by yersiniae. Clin Microbiol Rev. 1991;4:309–24.
- Perry RD, Fetherston JD. Yersinia pestis—etiologic agent of plague. Clin Microbiol Rev. 1997;10:35–66.
- Baltazard M, Karimi Y, Eftekhari M, Chamsa M, Mollaret HH. La conservation interepizootique de la peste en foyer invetere hypotheses de travail. Bull Soc Pathol Exot. 1963;56:1230–41.
- Mollaret H. Conservation du bacille de la peste durant 28 mois en terrier artificiel: demonstration experimentale de la conservation interepizootique de las peste dans ses foyers inveteres. CR Acad Sci Paris. 1968;267:972–3.
- Mollaret HH. Experimental preservation of plague in soil [in French]. Bull Soc Pathol Exot Filiales. 1963;56:1168–82.
- Chu MC. Laboratory manual of plague diagnostics. Geneva: US Centers for Disease Control and Prevention and World Health Organization; 2000.
- Centers for Disease Control and Prevention. Imported plague—New York City, 2002. MMWR Morb Mortal Wkly Rep. 2003;52:725–8.
- Darby C, Hsu JW, Ghori N, Falkow S. Caenorhabditis elegans: plague bacteria biofilm blocks food intake. Nature. 2002;417:243–4.
- Rose LJ, Donlan R, Banerjee SN, Arduino MJ. Survival of Yersinia pestis on environmental surfaces. Appl Environ Microbiol. 2003;69:2166–71. |
A Multiplication Chart 1-12 lays out all the numbers being multiplied with each other in sequence, from the 1 multiplication table to the 12 multiplication table. These charts are suitable for kids from the 1st standard to the 5th standard, who really need them so that they can easily learn and calculate. Kids should have these tables because they usually take their parents' help in doing their homework, but with a chart like this they don't need any help and can do it on their own.
Multiplication Chart 1-12
Below we provide a multiplication table chart covering 1 to 12, so if you are looking for 1-12 worksheets for your child, you are in the right place: you can easily download them below.
Printable Multiplication Chart 1-12
The printable multiplication table chart is given here with all the numbers in chart format, in a colorful table that attracts kids to look at it and learn from it. It contains all the digits and numbers from 1 to 12 in a printable format. For other table charts, check out our website and download them from there.
Multiplication Table Chart 1-12 Printable
Use our ready-made multiplication table chart covering the 1 to 12 tables. Along with these charts, we have provided details of why we need a multiplication table, its uses, the tables themselves, and much more.
Multiplication Table 1-12 PDF
From here you can download this multiplication table covering the numbers 1 to 12 for free, along with more calculation charts. The table mostly helps with learning math problems, number tables, quick and accurate calculation, and methods of learning the tables. These are some advantages of the charts provided in this article.
Blank Multiplication Table 1-12 Worksheet
For kids, it is helpful to have a well-printed blank table worksheet so that they enjoy practising calculations on it. For this, we have provided some blank multiplication table worksheets, empty but colorful, so that kids can write the numbers and products in themselves.
Why is Hypertension Called the Silent Killer?
Hypertension is a highly prevalent lifestyle disease that affects around 30% of India’s adult population. Although this disease has no apparent symptoms, it has rightly been termed one of the most dangerous lifestyle diseases. Though it was once thought to be a disease of the old, it is now observed in young adults in their late 20s and early 30s.
Untreated or delayed treatment of hypertension can cause cardiovascular and other health complications. Before we understand why high blood pressure is known as the silent killer, here’s what you need to know about this condition.
What is High Blood Pressure or Hypertension?
Blood pressure is the force that flowing blood exerts on the walls of the arteries. In healthy adults, it averages around 120/80 mm Hg. A blood pressure reading is made up of systolic and diastolic pressure.
The systolic pressure is the blood pressure recorded when the heart contracts and is the first number in a reading. The diastolic pressure is a blood pressure reading when the heart relaxes between two beats and is the second number in a blood pressure reading.
It is termed pre-hypertension when your systolic pressure is over 120 mm Hg but below 130 mm Hg. High blood pressure is when your blood pressure levels are 130/80 mm Hg or above.
When your blood pressure levels are :
- 130/80 mm Hg: You have stage 1 hypertension
- 140/90 mm Hg: You have stage 2 hypertension
A blood pressure reading of 180/110 mm Hg or above is termed a ‘hypertensive crisis’ and is a medical emergency requiring immediate treatment.
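The staging thresholds above can be sketched as a simple classifier. This is a minimal illustration of the article's cut-offs only, not a clinical tool, and the function name is our own:

```python
def classify_bp(systolic: int, diastolic: int) -> str:
    """Classify a blood pressure reading using this article's simplified cut-offs.

    Note: these follow the thresholds quoted above, not a full clinical guideline.
    """
    if systolic >= 180 or diastolic >= 110:
        return "hypertensive crisis"   # medical emergency
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"
    if systolic >= 120:
        return "pre-hypertension"
    return "normal"

print(classify_bp(118, 75))  # a reading below 120/80 is classed as normal
```

As with any such sketch, a real diagnosis rests on repeated readings taken by a clinician, not a single number.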
What are the Types of Hypertension?
Depending upon the cause of high blood pressure, there are two main types of hypertension, namely :
Also called essential hypertension, primary hypertension is the most common type of the disease. It is diagnosed when your high blood pressure is not caused by an underlying disease. People usually develop this type of hypertension as they grow older.
When your blood pressure levels increase due to another medical condition or the use of certain medications, it is termed secondary hypertension. This type usually improves or resolves once the underlying condition is treated or you stop taking the medicines responsible for raising blood pressure.
Why Is It Important to Know About Hypertension?
High blood pressure is a silent killer because it does not cause any signs or symptoms in its early stages. When your blood pressure levels are chronically elevated, it forces your heart to pump harder against the resistance in the blood vessels. This can overwork your heart and result in serious health problems such as heart attack, stroke, heart failure, and kidney failure.
Hypertension is a silent killer for numerous reasons. Several death cases in the developing and developed nations of the world are usually due to high blood pressure as a significant or primary contributing factor.
To avoid severe and life-threatening complications due to undiagnosed and untreated hypertension, it is essential to know your blood pressure readings and if you have high blood pressure.
How To Manage Hypertension?
Management of hypertension is relatively straightforward; with suitable methods and timely intervention, people with this lifestyle disease can live a healthy life of good quality.
Hypertension management and treatment usually involve the following:
- Medication
- Diet
- Regular exercise
- Stress management
- Other lifestyle changes
If you have been diagnosed with chronically high blood pressure, your doctor will determine the most suitable medication for you, depending upon the cause. Some commonly used groups of anti-hypertensive medicines include ACE inhibitors, beta-blockers, diuretics, or a combination of any two groups of anti-hypertensives.
Your diet plays an essential role in the development and management of hypertension. Hypertensive people must cut down on the salt in their food and preferably follow a DASH diet loaded with fiber, protein, and other healthy nutrients.
Apart from salt, it also helps to cut down on processed, fried, and packaged foods loaded with saturated and trans fats, which raise your blood pressure and increase the risk of cardiovascular disease.
Hypertensive or otherwise, you should get at least 30 minutes of aerobic exercise on most days, or about 150 minutes of exercise a week. Some activities that effectively keep your blood pressure levels in the normal range include running, jogging, brisk walking, swimming, and cycling. Don’t forget to include strength training at least two days a week.
Stress is a major contributing factor to hypertension in people today. If you wish to keep your blood pressure levels in the optimum range, you must bust the stress. Some activities that can help you are yoga, meditation, mindfulness, a long relaxing walk, getting a massage, or pursuing a hobby.
Other lifestyle changes that can help manage your hypertension are maintaining optimum weight and BMI, limiting alcohol consumption and quitting smoking.
Ignoring early signs and symptoms of hypertension can result in organ damage and lead to severe health complications. Make sure you keep your BP in control under all circumstances. Find expert advice and more articles on hypertension, including how to identify, prevent and manage it, right here on BPinControl!
The ‘Springs’ landscape group contains a single landscape class (‘Springs’; see Section 2.3.3 in companion product 2.3 for the Galilee subregion ()). Within the zone of potential hydrological change, the springs that comprise this landscape group occur mainly in the central and western parts of the zone (Figure 6). There are two main types of springs defined within this landscape group: recharge (also referred to as ‘outcrop’) springs and discharge springs (Figure 7). Companion product 2.3 for the Galilee subregion () provides a detailed conceptualisation of the groundwater flow systems that contribute to both spring types.
In the Galilee subregion, recharge springs are typically associated with topographically elevated areas, such as along the eastern margin of the Eromanga Basin where major aquifers of the Great Artesian Basin (GAB) outcrop. For this spring type, the source aquifer is largely unconfined, and flow occurs away from topographically high areas, discharging near to where the ground surface intersects with a saturated aquifer. As a result, recharge springs may be strongly influenced by rainfall events and can exhibit dynamic flow in response to recent rainfall (). Groundwater flow paths for recharge springs are thought to be relatively short and associated with shallow, local-scale flow systems (Figure 7).
In contrast to recharge springs, the source aquifers for discharge springs tend to be regional-scale systems with longer groundwater flow paths (Figure 7). Discharge springs originate from aquifers that are largely confined and under pressure, and form in areas where the confining bed or aquitard is weakened or thin, or where groundwater flow is disrupted by faults, folds or some other flow barrier such as a change in rock type. In these aquifers groundwater typically has a longer residence time compared to groundwater that occurs in source aquifers for recharge springs. Unlike recharge springs, discharge springs are generally located remote from their recharge zones.
At the surface, discharge springs are commonly mounded with moisture accumulation around the vent. In contrast to recharge springs, the water flow of discharge springs is disconnected from the local rainfall regime. However, the size of the wetland area surrounding a spring is influenced by seasonal conditions and may fluctuate, for example, between the dry and wet seasons. Another feature of some discharge springs is the presence of salt scalds (). These form in areas surrounding spring wetlands due to the precipitation of salts (including carbonates) when the discharged groundwater evaporates. In arid regions, salt scalds are accentuated because of the absence of flushing by overland flow.
The three geographic clusters of springs that occur within the zone of potential hydrological change of the Galilee subregion were all previously recognised and described as part of the work presented in the Lake Eyre Basin Springs Assessment (). This is one of a series of research projects funded by the Department of the Environment and Energy as part of the broader Bioregional Assessment Programme. From north to south these spring clusters are: (i) the Doongmabulla Springs complex, (ii) a series of springs that overlie either the Colinlea Sandstone or the Joe Joe Group, which are geological units of Permian age (hereafter referred to as the ‘Permian springs cluster’), and (iii) a series of springs associated with Triassic geological units (hereafter referred to as the ‘Triassic springs cluster’). The assessment that follows in this section covers each of the three geographic clusters separately as there are important differences between them. The Barcaldine Springs supergroup in the Lake Eyre Basin Springs Assessment Project () includes springs associated with GAB recharge beds around the margin of the Eromanga Basin and the Doongmabulla Springs complex. The Permian and Triassic springs clusters are separate entities and are not recognised as part of the Barcaldine Springs supergroup (Figure 8).
Discharge springs occur in areas where the hydrostatic pressure in a confined aquifer is artesian, and where the overlying aquitard is compromised, for example, by thinning of the aquitard or due to the influence of geological structures, or the presence of a barrier that disrupts regional groundwater flow. Discharge springs are typically remote from the recharge areas for their source aquifers.
Recharge (or outcrop) springs are associated with unconfined aquifers that occur either at or near aquifer outcrop areas. In contrast to discharge springs, their groundwater flow systems tend to be more localised, with recharge springs commonly occurring around outcrop margins or near the break of a valley slope.
Figure 8 Distribution of the three spring groups (Doongmabulla Springs complex, Permian springs cluster and Triassic springs cluster) in relation to the Barcaldine Springs supergroup and the proposed coal resource developments for the Galilee Basin
Spring complexes showing 100% active springs (solid), partially (1% to 99%) active (grey) and 100% inactive (open symbols). Recharge springs (triangles) are distinguished from discharge springs (circles).
The Barcaldine Springs supergroup (springs enclosed by black line) includes two north-trending lines of springs to the north of Blackall, as well as the Doongmabulla Springs complex.
Doongmabulla Springs complex
The Doongmabulla Springs complex is located on Doongmabulla and Labona pastoral stations near the confluence of Dyllingo Creek and Cattle Creek. The springs form an isolated cluster of wetlands associated with the Carmichael River and its tributaries (Figure 9). The springs complex consists of 187 individual spring vents forming 160 separate wetlands ().
Springs situated in areas underlain by Triassic rocks of the Moolayember Formation are classed as discharge springs (Figure 9). This categorisation is based on the relatively flat topography, mounded vents and the absence of source outcrop (). Discharge springs in the Doongmabulla Springs complex include: the House Springs, Joshua Spring, the Mouldy Crumpet Springs, the Stepping Stone Springs, the Moses Springs (comprising 65 separate vents), the Keelback Springs, Geschlichen Spring, Camp Springs (Figure 10), Bush Pig Trap Springs, Camaldulensis Spring, the Wobbly Springs and the Bonanza Springs. One of the largest of these individual spring groups, the Moses Springs, includes spring-fed wetlands with a combined area of approximately 3.25 ha (about 0.03 km2) ().
The more easterly springs in the Doongmabulla Springs complex are interpreted (based on morphology) as being recharge springs, as they occur in areas where either the Clematis Group aquifer or the Dunda beds aquifer (the upper part of the generally low-permeability Rewan Group) subcrops beneath the Carmichael River. These springs have vents on the edge of wetlands at the base of gentle slopes, which suggests lateral groundwater flow (), and include Little Moses Springs and Yukunna Kumoo Springs (Figure 9). Little Moses Springs (Figure 11) supports a wetland of 200 m by 50 m ().
Dusk Springs and Surprise Spring are the most easterly springs in the Doongmabulla Springs complex. The source aquifer for these recharge springs is likely to be the Dunda beds aquifer, as they are both situated in areas dominated by Dunda beds outcrop. For most hydrogeological interpretation purposes of the bioregional assessment (BA) for the Galilee subregion, the Dunda beds were grouped with the thicker and more extensive (underlying) Rewan Group aquitard. This is due to a lack of data specifically defining the lateral and vertical extents of the Dunda beds at an appropriate regional scale (companion product 2.1-2.2 for the Galilee subregion ()).
Data: Queensland Herbarium, Department of Science, Information Technology, Innovation and the Arts (); Bioregional Assessment Programme (, , ); Queensland Department of State Development, Infrastructure and Planning ()
Permian springs cluster
The Permian springs cluster consists of springs that are interpreted as being sourced from aquifers of Permian age within the Galilee Basin, particularly the Colinlea Sandstone or Joe Joe Group (). The Permian springs cluster includes: Lignum Spring, the Mellaluka Springs complex and the Albro Springs. The Mellaluka Springs complex consists of three vents and the Albro Springs group has two vents (). Of these three spring groups, the Albro Springs are considered to be recharge springs, while Lignum Spring and the Mellaluka Springs are considered to be discharge springs. In contrast, work undertaken to support the environmental impact statement for the proposed Carmichael Coal Mine defined the Mellaluka Springs complex to include the Mellaluka Springs, Stories Spring and Lignum Spring ().
Triassic springs cluster
Relatively little is known about the Triassic springs cluster, which encompasses the southernmost springs within the zone of potential hydrological change. Three spring groups are included in the Triassic springs cluster: Hunter, Greentree and Hector. Hunter Springs consists of two vents, whereas Hector Springs has three main vents and several smaller ones (). Greentree Springs is inactive and has not flowed since the 19th century (). All springs in the Triassic springs cluster are interpreted as being recharge springs. As Hunter and Greentree springs are situated on Dunda beds outcrop (i.e. the upper and more permeable part of the Rewan Group), the Dunda beds are considered the likely source aquifer. However, outcrop at Hector Springs is obscured by an extensive cover of Cenozoic sediments. It is possible, though, that the primary source is the Dunda beds, as the Hector Springs group is located several kilometres east of Dunda beds outcrop (), but west of known occurrences of sedimentary rocks that comprise the upper Permian coal measures.
Doongmabulla Springs complex
Both recharge and discharge springs occur within the Doongmabulla Springs complex (refer to companion product 3-4 () for the Galilee subregion). Fensham (as cited in GHD, 2012a) estimated the combined daily flow rate of all the springs in this complex to be about 1.35 ML/day, which equates to some 493 ML/year (companion product 2.5 for the Galilee subregion ()). The daily flow rate of Joshua Spring is estimated to be 432 to 864 kL/day (). No information was provided to indicate how these various spring flow rate estimates were derived.
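The unit conversions behind these quoted figures can be checked with simple arithmetic (values taken directly from the text; this sketch adds no new data):

```python
# Combined flow of all Doongmabulla springs, as quoted above.
daily_ml = 1.35                 # ML/day
annual_ml = daily_ml * 365      # 492.75 ML/year, consistent with the quoted ~493

# Joshua Spring, quoted in kL/day; 1 ML = 1000 kL.
joshua_ml_day = (432 / 1000, 864 / 1000)   # i.e. 0.432 to 0.864 ML/day

print(round(annual_ml), joshua_ml_day)
```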
Discharge from some springs of the Doongmabulla Springs complex contributes flow to tributaries of the Carmichael River (Figure 9). The outflow from Joshua Spring and the House Springs group converges to provide the main discharge feeding the Carmichael River for a distance of up to 20 km downstream (). These springs also provide water to adjacent wetlands. As noted previously, the Doongmabulla Springs complex has 187 vents that feed 160 separate wetlands, of which 149 wetlands are fed by discharge springs (). The largest spring wetland in the complex is about 8.7 hectares (Fensham et al., 2016, p. 189). The surface water in the springs is perennial. The larger wetlands, such as those fed by Moses Springs and Keelback Springs, flow into permanent open pools and channels in the bed of Cattle Creek. In turn, these flow into the Carmichael River. However, during periods of low flow due to lower rainfall or drought conditions, the channels do not discharge into the Carmichael River (). Further information and analysis of the hydrological dynamics and temporal variability of the different spring vents that comprise the Doongmabulla Springs complex is in companion product 3-4 for the Galilee subregion (). This includes preliminary analysis of remotely sensed data sourced from the available 30-year Landsat archive provided by Digital Earth Australia (see Section 3.2 in companion product 3-4 for the Galilee subregion () for further details).
The source of the groundwater that supplies the Doongmabulla Springs complex has been a contentious issue, and the cause of considerable debate. Further detail on the available evidence and the various interpretations that have been made about the springs’ source aquifer is in companion product 3-4 for the Galilee subregion (). In the BA for the Galilee subregion, the primary source aquifer of the Doongmabulla Springs complex is considered to be the Clematis Group aquifer. The multiple lines of evidence and reasoning supporting this interpretation are outlined in companion product 3-4 for the Galilee subregion (), as well as Section 2.3.2 of companion product 2.3 for the Galilee subregion ().
For the purposes of the BA of the Galilee subregion, the main features of the hydrogeological conceptualisation of the Doongmabulla Springs complex includes:
- The discharge springs (mound springs) in the western part of the Doongmabulla Springs complex (Figure 9) are most likely fed by groundwater leakage from the confined Clematis Group aquifer through the Moolayember Formation aquitard. This occurs in areas where the integrity of the aquitard is compromised, which may be due to thinning or weathering of the aquitard near its contact with the Clematis Group aquifer, or the influence of geological structures (or possibly a combination of these factors). At the surface, the discharge springs are formed on alluvium that overlies the Moolayember Formation aquitard. The discharge springs source water from regional-scale groundwater flow that occurs in confined parts of the Clematis Group aquifer. Groundwater flow within this aquifer occurs from the west and south, and focuses towards the discharge springs.
- The recharge (outcrop) springs immediately east of the discharge springs are sourced from the unconfined parts of the Clematis Group aquifer. These include the Little Moses (Figure 11) and Yukunna Kumoo springs (Figure 9), which are located on or near outcrop of the Clematis Group. These springs are fed by more local-scale groundwater flow systems, with recharge to the aquifer occurring in nearby hills to the east and north of the springs.
- The source for the easternmost recharge springs in the Doongmabulla springs complex (Figure 9, Surprise and Dusk) is likely to be the Dunda beds aquifer. This aquifer outcrops in nearby hills, as well as underlying the alluvium where these springs occur in the valley of the Carmichael River.
- Groundwater discharge at the surface across the Doongmabulla Springs complex contributes directly to streamflow in the Carmichael River and helps to maintain permanent pools in nearby drainage channels (discussed in Section 3.5 of companion product 3-4 for the Galilee subregion ()). There is also potential for groundwater from the Clematis Group and Dunda beds aquifers to discharge directly into the alluvium, where these units subcrop beneath alluvium (Figure 9).
Permian springs cluster
The Permian springs cluster occurs to the west of the Belyando River. The likely source aquifers for these springs are either the Colinlea Sandstone (part of the upper Permian coal measures) or the stratigraphically lower Joe Joe Group (the basal sequence of the Galilee Basin’s Carboniferous to Permian succession).
The Permian springs cluster has four wetlands fed by the springs. Groundwater flow is predominantly south to north in this region. Albro Springs has moderate flows (the combined flow of its two vents is about 40 L/min) and Lignum Spring has low flows (about 0.5 L/min). The highest flows are from the three vents of the Mellaluka Springs complex (~1200 L/min, combined).
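For comparison with the Doongmabulla figures quoted earlier in ML/day, the per-minute rates above convert as follows (simple unit arithmetic on the values in the text):

```python
def l_per_min_to_ml_per_day(q_l_min: float) -> float:
    """Convert a flow rate in litres per minute to megalitres per day."""
    return q_l_min * 60 * 24 / 1_000_000

# Rates quoted in the text:
print(l_per_min_to_ml_per_day(1200))  # Mellaluka Springs combined: ~1.73 ML/day
print(l_per_min_to_ml_per_day(40))    # Albro Springs combined: ~0.06 ML/day
```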
Triassic springs cluster
The springs in the Triassic springs cluster are all recharge springs. All three spring groups are likely to source water from the Dunda beds aquifer. The Greentree and Hunter springs are surrounded by outcrop of the Dunda beds. Hector Springs is about 2 km east of the currently mapped extent of the Dunda beds but also appears to have a gravity-fed source.
Doongmabulla Springs complex
Within the Doongmabulla Springs complex some springs and spring groups are substantially disturbed, either by human activity or by the actions of livestock. An example of this is Joshua Spring (Figure 12), which has been heavily modified to provide drinking water for the Doongmabulla Station homestead and for livestock water supplies. It is now enclosed by a turkey’s nest dam ().
Other springs and spring groups are relatively intact. Dominant vegetation surrounding the various springs includes: (i) bare, scalded plains supporting very sparse grass and herb cover; (ii) grassland generally dominated by Sporobolus pamelae; (iii) mixed sedgeland dominated by sedges in the genus Cyperus; (iv) coolibah (Eucalyptus coolabah) or river red gum (E. camaldulensis var. obtusa) woodland and open woodland; (v) weeping paperbark (Melaleuca leucadendra) forest; (vi) peppermint box (E. persistens) low open woodland with a grassy ground layer dominated by spinifex (Triodia); and (vii) Reid River box (E. brownii) woodland.
The first three vegetation assemblages and the weeping paperbark forest are contained within regional ecosystem (RE) 10.3.31 under the Queensland Government’s remnant vegetation mapping (). This RE is described as ‘Artesian springs emerging on alluvial plains’. It has the conservation status ‘Of concern’. Three of the four vegetation assemblages within RE 10.3.31 (the exception is bare, scalded plains) are considered to be obligate groundwater-dependent systems.
The vegetation assemblage containing coolibah and/or river red gum woodland is contained within RE 10.3.14 ‘Eucalyptus camaldulensis and/or E. coolabah woodland to open woodland along channels and on floodplains’. It is considered to be a facultative groundwater-dependent ecosystem, although in some areas around the springs access to groundwater will be permanent. This RE is listed as ‘Least concern’. The Reid River box woodland occurs within RE 10.3.6, whereas the peppermint box low open woodland is within RE 10.7.2 (). These vegetation assemblages are not considered to be groundwater-dependent and both are listed as ‘Least concern’.
Figure 12 (left) aerial view of Joshua Spring highlighting the rectangular 'turkey's nest' dam (foreground) that now encloses the spring; (right) outflow pipe for Joshua Spring constructed through the right-hand dam wall
Permian springs cluster
The wetland vegetation in the Mellaluka Springs group is mostly a tall sedgeland dominated by the sedge Baumea rubiginosa, the fern Cyclosorus interruptus and the grass Phragmites australis. Drier areas adjacent to the springs support grassland of Sporobolus mitchellii with a variety of chenopod shrubs and sub-shrubs. The vegetation in the vicinity of the springs group is mostly ‘non-remnant’; however, the area supports up to 0.04 km2 of RE 11.3.22, which is classified as ‘Of concern’. RE 11.3.22 is described as ‘Springs associated with recent alluvia, but also including those on fine-grained sedimentary rocks, basalt, ancient alluvia and metamorphic rocks’.
The wetlands at Lignum Spring and Stories Spring almost exclusively contain cumbungi (Typha domingensis) (). The springs are surrounded by grassy woodland that is either silver-leaved ironbark (E. melanophloia) woodland (RE 10.3.28) or Reid River box woodland (RE 10.3.6). Both REs are classified as ‘Least concern’.
Triassic springs cluster
Doongmabulla Springs complex
The discharge springs in the Doongmabulla Springs complex are part of a nationally threatened ecological community listed under the Commonwealth’s Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act), ‘The community of native species dependent on natural discharge of groundwater from the Great Artesian Basin’ (). The community occurs in parts of NSW, within the Galilee subregion (and elsewhere in Queensland), and also in parts of SA (). The Doongmabulla Springs complex differs from other GAB spring complexes in being adjacent to an easterly flowing, outward-draining river system. Specifically, it occurs in the vicinity of the Carmichael River which flows into the Burdekin River and then to the sea along the east coast of Queensland between Ayr and Home Hill. By comparison, the other major GAB spring complexes are in the internally draining Lake Eyre Basin, and occur in more arid environments.
The wetlands associated with discharge springs in the Doongmabulla Springs complex support a number of spring-endemic plants. These include two nationally threatened herbs, salt pipewort (Eriocaulon carsonii) and blue devil (Eryngium fontanum) (). Other spring-endemic plants include Hydrocotyle dipleura, Myriophyllum artesium, Sporobolus pamelae and Utricularia fenshamii.
Salt pipewort is a small aquatic herb that grows in shallow water (including water depths as shallow as 10 cm) where it forms dense floating mats. It is listed as ‘Endangered’ nationally under the EPBC Act and as ‘Endangered’ under Queensland’s Nature Conservation Act 1992 (Nature Conservation Act). The species occurs in 20 spring complexes within the GAB in Queensland, NSW and SA. It also occurs at two non-GAB springs in Queensland ().
Blue devil is an erect perennial herb that can reach a height of up to 80 cm. The entire distribution of this species occurs in only two spring complexes, one of which is the Moses Springs group in the Doongmabulla Springs complex (). It is listed as ‘Endangered’ both nationally (EPBC Act) and in Queensland (Nature Conservation Act). The species occupies two spring wetlands at Moses Springs. One wetland has an area of 2.4 ha, and the other, 0.02 ha. The approximate population size of the species at Moses Springs is estimated at 10,000 plants ().
Hydrocotyle dipleura, Myriophyllum artesium and Sporobolus pamelae are listed as threatened in Queensland under the Nature Conservation Act, but not nationally. Hydrocotyle dipleura is a perennial prostrate herb that occurs in saline sands and clay soils beyond the margins of discharge spring wetlands. It has been recorded in low woodland of Melaleuca bracteata. The distribution of this species is confined to seven spring complexes in the GAB of Queensland, including the Moses Springs group at the Doongmabulla Springs complex (). It is listed as ‘Vulnerable’ in Queensland (Nature Conservation Act).
Myriophyllum artesium is an aquatic, mat-forming herb that grows to 15 cm. It has a distribution that is confined to wetland habitat in arid Queensland and is listed as ‘Endangered’ in Queensland (Nature Conservation Act). This species generally grows in shallow pools of spring wetlands and is also found in drains ().
Sporobolus pamelae is a tussock grass that grows to a height of 80 to 120 cm along the margins of springs and spring wetlands. It has a geographic range that is confined to six spring complexes in the GAB of Queensland (). It is listed as ‘Endangered’ in Queensland (Nature Conservation Act). The species is found at 15 spring wetlands within the Doongmabulla Springs complex ().
The salt scalds around spring wetlands at Moses and Mouldy Crumpet springs support endemic plants – so called ‘scald endemics’ (). These species include Sporobolus partimpatens, Sclerolaena “dioceia” and Trianthema sp. (Coorabulka RW Purdie 1404). None of these scald endemics is currently listed as threatened.
In addition to spring wetland and scald endemics, another threatened plant occurs at the Doongmabulla Springs complex. Waxy cabbage palm (Livistona lanuginosa), a species that occurs mainly in the ‘Streams’ landscape groups, has been recorded at Moses Springs (). It is endemic to the Burdekin river basin and is listed as ‘Vulnerable’ both nationally (EPBC Act) and in Queensland (Nature Conservation Act) (). The population at Moses Springs is the only one known to occur at a GAB spring and is estimated to be about 20 individuals ().
The Doongmabulla Springs complex supports a diversity of fish species though, based on current knowledge, none are known to be endemic. Up to 18 fish species are expected to occur in the area (). Eleven fish species were recorded during recent surveys in the vicinity of the Doongmabulla Springs complex (): Agassiz's glassfish (Ambassis agassizii), Midgley's carp gudgeon (Hypseleotris species 1), purple-spotted gudgeon (Mogurnda adspersa), sleepy cod (Oxyeleotris lineolata), eastern rainbowfish (Melanotaenia splendida splendida), Hyrtl's tandan (Neosilurus hyrtlii), spangled perch (Leiopotherapon unicolor), barred grunter (Amniataba percoides), flyspecked hardyhead (Craterocephalus stercusmuscarum), western carp gudgeon (Hypseleotris klunzingeri) and bony bream (Nematalosa erebi). Most of these species are likely to periodically occupy the spring wetlands.
The aquatic invertebrates of the Doongmabulla Springs complex are poorly known. Two spring-endemic invertebrate species have been recorded from the area. These are the mollusc Gabbia rotunda, which is endemic to the Doongmabulla Springs complex, and the water mite Mammersela sp. AMS KS85341, which is endemic to GAB spring wetlands (). It is highly likely that further sampling will detect new (previously unknown) species of molluscs and other aquatic invertebrates.
Permian springs cluster
The Permian springs cluster does not support any endemics (). The plants present are common and widespread species of no conservation significance. The fish fauna is limited, with only the spangled perch and eastern rainbowfish positively identified ().
Triassic springs cluster
Product Finalisation date
- 2.7.1 Methods
- 2.7.2 Overview
- 2.7.2.1 Introduction
- 2.7.2.2 Potentially impacted landscape groups
- 2.7.2.3 'Springs' landscape group
- 2.7.2.4 Streams landscape groups
- 2.7.2.5 'Floodplain, terrestrial GDE' landscape group
- 2.7.2.6 'Non-floodplain, terrestrial GDE' landscape group
- 2.7.2.7 Outline of content in the following landscape group sections
- 2.7.3 'Springs' landscape group
- 2.7.4 Streams landscape groups
- 2.7.5 'Floodplain, terrestrial groundwater-dependent ecosystem' landscape group
- 2.7.6 'Non-floodplain, terrestrial groundwater-dependent ecosystem' landscape group
- 2.7.7 Limitations and gaps
- Contributors to the Technical Programme
- About this technical product |