Occupational therapy (OT) involves the use of assessment and intervention to maintain, recover, or develop the meaningful occupations, or activities, of individuals, groups, or communities. It is an allied health profession practiced by occupational therapists, who typically work with people who have impairments, injuries, disabilities, or health problems.
The American Occupational Therapy Association (AOTA) is the national professional association that represents the interests and concerns of occupational therapists and works to improve the quality of OT services. It defines occupational therapists as professionals who help people across the lifespan take part in the things they need and want to do, through the therapeutic use of daily activities (occupations).
Common occupational therapy interventions include helping children with disabilities participate fully in school and social situations, rehabilitating people after injury, and supporting older adults experiencing physical and cognitive changes.
Occupational therapists are typically university-educated professionals and must pass a licensing exam before they can practice. OTs usually work closely with professionals in medicine, clinical psychology, social work, nursing, audiology, speech therapy, and physical therapy.
Working with Children
The occupations of children are centered around learning and playing. OTs work with children with any condition, impairment, or disability that affects their ability to perform the normal activities of life, such as making friends, going to school, eating, getting dressed, and being part of a group or club. This includes the following:
- Sensory and attention issues
- Developmental delay and disabilities
- Physical disabilities (for instance, spina bifida)
- Acute medical, surgical, and orthopedic conditions
- Neurological conditions (for example, cerebral palsy)
Occupational therapists work in close cooperation with the child, their parents, and other people who are important in the child’s life, such as teachers, doctors, and other health professionals.
OTs can help children achieve developmental milestones such as motor skills and hand-eye coordination, supporting school, play, and independence skills (such as throwing a ball, or holding utensils or a pen). They can also educate and involve carers, parents, and others to facilitate children’s development and learning, and can help children with developmental delays learn everyday tasks.
Additionally, they can help children with behavioral issues maintain positive behaviors in all environments. For instance, instead of acting out or hitting others, children can use positive methods to deal with anger, such as taking part in a physical activity or writing about it.
Hospital and Mental Health Settings
OTs work in inpatient hospital environments, providing specialist interventions for individuals with a range of health conditions, including acute mental health, HIV, burns, falls, and post-surgical recovery.
Acute care occupational therapists assess clients’ function, cognition, and psychosocial needs; monitor their progress; and prescribe interventions and, where necessary, adaptive equipment to facilitate a safe and successful hospital discharge.
OTs in mental health use group and individual programs and activities to enhance participation in the occupations of daily life – working, looking after oneself, and engaging in leisure and social pursuits. For instance, occupational therapists may work in partnership with consumers to develop strategies that enable participation in social activities.
Workplace Injury Management
An occupational therapist uses specialized assessments to determine the functional requirements of different jobs and a person’s capacity to return to work. OTs can help design and coordinate graded return-to-work programs, and can educate employers and clients in safe work practices.
They can also help modify work environments to accommodate people’s needs and to prevent or minimize injury and ill health. Additionally, they can work with individuals who have mental or physical needs in the workplace.
Rehabilitation and Independence
Occupational therapists work with individuals of all ages on rehabilitation after illness or injury. Rehabilitation areas include the following:
- Helping enhance or regain participation in the occupation of daily life after specific events like stroke, spinal injury or a hip replacement, or within a condition like multiple sclerosis or rheumatoid arthritis
- Prescribing adaptive equipment, and educating clients and carers in its use, to assist participation
- Assessing and modifying home and community environments to improve clients’ independence and safety
- Ergonomic assessment as well as modification in the community, workplace, or home
- Providing or manufacturing splints after upper limb or hand injury
- As OTs work with both mental and physical health needs, they are well placed to provide a holistic approach that attends to both physical needs and emotional wellbeing when working with people after illness or injury.
Fertilizers can be an important source of nutrients to support crop growth. Applying fertilizer in excess of crop requirements, though, can actually harm crops and soil as well as damage environmental and human health.
Studies have shown that U.S. growers overspend on fertilizer by as much as $6 billion per year, an economic drain that has surprising impacts both on and off the farm. Applying too much fertilizer can impair farm soil health, reduce crop quality, and even cause crop yields to decline. Excess nutrients can also become pollutants that damage water quality, cause emissions of greenhouse gases, and contribute to smog formation—costing billions more across multiple economic sectors every year.
Sustainable nutrient management strategies, which aim to reduce nutrient loss to the environment and maximize uptake by crops, can help growers improve farm health, save money on fertilizer costs, and contribute to environmental and human health. These strategies can be as simple as applying the appropriate fertilizers for specific crop needs, lending numerous benefits for growers like greater crop yields, better bottom lines, and a healthier planet.
When more fertilizer is applied than plants can take up, the surplus nutrients, particularly nitrogen, can be lost to the environment. Unused nitrogen fertilizer can leach downward into groundwater, enter nearby surface waters through runoff, or be released into the atmosphere as gases.
One easy-to-spot sign of over-fertilization is heightened nitrogen concentrations in waterways adjacent to fields. Long-term over-application of ammonium-containing nitrogen fertilizers can also cause soils to become more acidic, so if soil pH is declining, it may be a sign that not all of the nitrogen fertilizer is being taken up by crops.
Over-fertilization also introduces excess salts to soils, which diminishes crops’ ability to take up water. This can cause drought stress even when water is readily available, damage roots, cause leaves to wilt, and eventually even kill crops. Drought stress in well-watered crops can, therefore, be another indicator of over-fertilization.
In addition to excess salts diminishing crops’ ability to take up water, over-fertilized crops may take up more nitrogen than they need, which disrupts the balance of nutrients in plant tissue. The result is that crops become deficient in other necessary nutrients, such as sulfur and zinc, reducing crop quality.
Above a certain threshold, any nutrient, including nitrogen, ceases to promote plant growth and can actually be toxic. Below the toxicity threshold, deficiencies in other nutrients will prevent crops from responding effectively to the application of any one nutrient. For example, sulfur is necessary for plants to metabolize nitrogen, but isn’t included in typical superphosphate fertilizers.
“You can apply 300 pounds of nitrogen to a field, but, without sulfur, the plants won’t be able to use it,” explains Farm Journal Field Agronomist Ken Ferrie. These interactions between nutrients mean that past a certain rate of application, crop yields will plateau even when more fertilizer is applied.
Fertilizer leads to gaseous emissions via two main pathways:
Microbial denitrification. When soils are saturated with water, oxygen availability plummets because soil pore spaces are filled with water instead of air. Under these conditions, certain soil microbes that can use nitrate instead of oxygen in their metabolism will transform accumulated nitrate into a gas—this process is called denitrification. A typical agricultural soil will lose about 15 percent of its nitrogen this way; the more excess fertilizer nitrogen in the soil, the more gas produced. This rate can be even higher in soils in prolonged waterlogged conditions.
Ammonia volatilization. This can happen when nitrogen is applied as urea or manure on the soil surface, especially in alkaline (pH > 8) soils. Urea is easily converted into ammonia gas, which then escapes to the atmosphere.
The process of microbial denitrification can produce two polluting gases:
Nitrous oxide (N2O). This is a potent greenhouse gas that has a global warming potential almost 300 times that of carbon dioxide and can persist in the atmosphere for a millennium. Today, about half of the man-made N2O in the atmosphere originated from agricultural soils.
NOx. Oxides of nitrogen—like nitric oxide (NO) and nitrogen dioxide (NO2)—are pollutants that contribute to smog and acid rain. They can also cause respiratory illness, cancer, and cardiovascular disease. Recent research in California and the Midwest estimates that 25 to 40 percent of NOx emissions come from agricultural soils.
The process of ammonia volatilization produces ammonia gas. Ammonia is not itself harmful to human health, but it reacts with nitrogen and sulfur oxides in the atmosphere to produce PM 2.5, tiny airborne pollutants that contribute to smog formation and can damage human health by aggravating respiratory and cardiovascular illnesses. Ammonia can also hurt aquatic life if too much reaches lakes and streams.
Applying the correct amount of fertilizer can still harm the environment, if it is not applied in the correct form, at the right time, or in the right location. These four factors—rate, source, time, and location—are known as the 4 Rs of sustainable nutrient management. Following these principles, growers apply fertilizers in a way that maximizes crop nutrition without over-fertilizing:
Right source. Providing fertilizers in crop-available forms that are appropriate for soil conditions, and contain the right balance of nutrients for the particular crop.
Right rate. Applying fertilizer in amounts that correspond to crop nutrient demand, taking into account the nutrients already present in soil, and nutrients in other amendments being applied.
Right time. Supplying fertilizer at times when plants need it.
Right place. Providing fertilizers in a location where plant roots can access the nutrients, and considering spatial variation in soil nutrient availability across fields.
All of these factors are also dependent on the local environment and soil conditions. Because of these interactions, what is “right” will vary by farm and by crop type.
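The “right rate” principle above is essentially a nutrient budget: fertilizer applied should equal crop demand minus the nutrients already available. As an illustration only, here is a minimal sketch in Python; the crop demand, soil supply, and manure credit figures are hypothetical, not agronomic recommendations:

```python
def right_rate(crop_demand, soil_supply, other_amendments=0.0):
    """Estimate the fertilizer nitrogen rate (lb N/acre) still needed
    after crediting nutrients already available to the crop."""
    needed = crop_demand - soil_supply - other_amendments
    return max(needed, 0.0)  # never recommend a negative application

# Hypothetical corn field: 180 lb N/acre crop demand, 40 lb supplied by
# the soil (per soil test), 25 lb credited from a manure application.
print(right_rate(180.0, 40.0, 25.0))  # 115.0
```

If the soil and amendment credits already meet or exceed crop demand, the recommended application is zero, which is exactly the over-fertilization scenario the 4 Rs are designed to avoid.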
One strategy to remediate soils that have been over-fertilized is to plant cover crops, which can mine the soil for excess nutrients. Deeply rooted cover crops such as tillage radish are especially beneficial, as they can recapture nutrients that have leached down to lower depths.
To prevent excess nutrients from polluting waterways, planting riparian buffer zones along streams and sloughs bordering croplands can help take up fertilizer runoff before it reaches the water.
Applying nutrients in a sustainable way helps growers increase crop production and save money.
More fertilizer doesn’t always mean more yield. When fertilizer application rates are above the economic optimum, reducing fertilizer application can maintain (and even improve) yield at a lower cost, increasing farm revenue. For example, a case study by researchers at the University of Nebraska showed that maize growers in the Midwest could cut their fertilizer application by nearly half without any loss of yield, representing significant potential economic savings.
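The idea of an “economic optimum” can be made concrete with a yield response curve. The sketch below assumes a simple quadratic response, y = a + b*N - c*N^2, where the optimal rate is the point at which the last pound of nitrogen just pays for itself; the coefficients and prices are hypothetical, not taken from the Nebraska study:

```python
def eonr(b, c, n_price, crop_price):
    """Economically optimal N rate (lb/acre) for a quadratic yield
    response y = a + b*N - c*N**2: apply N until the value of the
    marginal bushel equals the cost of the marginal pound of N."""
    r = n_price / crop_price       # price ratio, bushels per lb N
    return max((b - r) / (2 * c), 0.0)

# Hypothetical response: 0.9 bu per lb N initially, curvature 0.002,
# nitrogen at $0.45/lb, corn at $4.50/bu.
print(eonr(0.9, 0.002, 0.45, 4.50))
```

Past this rate the curve flattens, so additional fertilizer adds cost without adding enough yield to pay for it, which is why cutting application toward the optimum can raise net revenue.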
By managing nutrients sustainably, growers can apply only what is necessary while still maximizing their yields. Incorporating additional soil conservation strategies will help reduce fertilizer loss while also improving soil quality—so fewer fertilizer inputs are needed in the future. Precision agricultural technologies that help growers control the exact timing and placement of fertilizers can also help with nutrient use efficiency and minimize losses to the environment.
Sustainable nutrient management strategies have been highly successful in the European Union (EU), where over-fertilization used to consistently threaten drinking water safety. By implementing new nutrient management techniques and cutting fertilizer use by more than half since 1987, growers in the EU have reduced over-fertilization without reductions in crop yield, resulting in safer drinking water and a 47 percent reduction in ammonia emissions.
Ultimately, applying nutrients in a sustainable way helps growers increase crop production and save money while contributing to the health of both their farm and the surrounding environment. Dr. Maya Almaraz, a researcher studying NOx emissions from agriculture, notes that by implementing nutrient stewardship strategies, “growers can produce food more efficiently, increasing their bottom line and improving environmental health.”
- Submitted by: getafix
- Date Submitted: 05/06/2010 07:12 AM
Ptlls Assignment 1
Q.4. – Analyse different ways in which you would establish ground rules with your learners, which underpin behaviour and respect for others.
When a tutor, or indeed a learner, first comes into a class, they are looking to set the dynamic of the structure and their surroundings for the remainder of the course. For the learners, it may partly be a question of whom they feel comfortable sitting with, or whether they prefer the front or back of the classroom. For the tutor, it may be getting to know the individual personalities in the class and how best to approach each learning style to achieve the most efficient learning curve possible for that particular member of the class.
A fundamental start to any course or session is the need to set ground rules, which both the tutor and the learners will be bound by for the duration of the course as a whole. This establishes the baseline of respect and behaviour that is acceptable to both me as the tutor and my students as learners.
All students will have different levels of acceptance and tolerance when it comes to behaviour and respect for others. What may be perfectly normal or acceptable behaviour to one individual could have a devastating effect on another. Establishing ground rules is therefore essential for a controlled and effective learning environment, which will then allow the students their right to learn and work through the aims and objectives of each assignment or subject as they arise.
There are plenty of basic ground rules and ways to implement them, which can then be amended or adjusted to suit the age and level of the learners that are being pitched to in a class environment.
“Before you create ground rules, explain their purpose and the reason you have chosen to create them as a group rather than simply stipulating them yourself”
By telling my students why I am giving them the opportunity to come up with rules...
Bullying in schools has become a nationwide concern, with many anti-bullying practices being implemented in every state. Social and emotional learning (SEL) can provide an effective foundation for reducing bullying in schools. Practicing SEL skills will create a school environment that fosters positive interactions. Here are four characteristics of SEL that aim to curb bullying in schools:
1. Open, supportive relationships between students and teachers.
Open communication between students and teachers presents an opportunity for students to learn positive conflict resolution techniques. These techniques allow students to resolve problems before they escalate into fully fledged bullying.
2. Solid communication between schools and families.
Families need to be involved with their child’s school. When a parent is actively engaged in what happens to their child at school on a daily basis, they can help teach positive behaviour and reinforce messages from the teachers. Working as a team with the child’s school ensures that the same positive messages are being taught on a variety of levels and in a variety of environments.
3. Emphasis on respect and tolerance.
SEL requires school policies that highlight respect for peers, acceptance and appreciation of everyone’s differences. A school community in which students understand and embrace differences is a place where positive behaviour will thrive.
4. Teaching skills that allow kids to recognise and handle emotions, and engage in caring peer relationships.
In addition to school policies requiring respect and tolerance, students must be taught how to engage in positive social interactions and develop caring peer relationships with one another. Teaching students how to express and handle emotions positively will support responsible decision-making and avoid negative scenarios that could escalate into bullying.
SEL skills arm students with the ability to handle their emotions in a positive way that results in enhanced social problem solving, supportive attitudes toward others, and overall academic success. Social and emotional learning provides students with many benefits that enhance the school community as a whole, creating a caring and nurturing environment in which bullying has no place.
Quirky Kid has also recently published a comprehensive SEL program called The Best of Friends. Find out more about it online. Equip your child with some of our therapeutic resources such as the Quirky Kid ‘Face It’ cards, which are designed to increase emotional awareness. Most importantly, please feel free to contact us to learn more about the benefits of social and emotional learning.
Bullying within the school context has gained much recognition and response over the last decade. As teachers, parents and students have become more aware of the nature and definition of bullying, namely, repeated aggression that is intended to cause harm, distress and/or fear to another in a position of less power, there has been a call for a greater response from schools and the wider community to address this serious and pervasive issue.
Australian research suggests that one in four children will experience bullying at some time in their schooling, with the transition years between primary and high school seeing the highest incidence of bullying. While we know the pathways to bullying behaviour can be complex and varied, there are a number of factors which, if addressed in the early years of a child’s schooling, can help minimise the incidence of bullying within a school and build children’s resilience in the face of difficult and aggressive peer interactions. Interestingly, longitudinal research shows that behaviours such as aggression and dominance in a child’s early years can develop into serious and persistent bullying behaviour as the child grows, pointing to the necessity of early intervention and skills training for children in their preschool and primary school years.
Sense of connectedness
One of the most significant factors common to both children who bully and children who fall victim to bullying is a reported lack of significant connection and positive feelings towards their school, teachers, and peers. Having meaningful and supportive relationships with others in the school appears to build children’s resilience and ability to cope, even when difficulties occur within their school-based relationships. Interestingly, children at the Quirky Kid Clinic most commonly talk about a significant teacher when asked what they enjoy at school, rather than a favourite subject. It is the relationship, and the positive experiences it provides, that children value most. Schools need to consider how to develop children’s sense of connectedness to their school, whether through fostering child-teacher mentoring relationships, shared child-teacher projects, or peer-led initiatives within the school.
Friendships
Friendships play an integral part in bullying experiences. We know that bullies derive reinforcement from onlookers who do not act to stop their bullying behaviour, and that children who have at least one meaningful, reciprocated friendship are less likely to be bullied. Selecting, making, and maintaining friendships is a skill that needs to be modelled and supported in children, teaching them basic skills such as how to start a conversation through to more complex skills of managing peer conflict and using humour in peer relationships. Children at the Quirky Kid Clinic enjoy role playing friendship skills, giving them room to learn and test out how their friendship skills might play out in a fun and safe environment. Helping children learn how to help their friends if they see they are being bullied is essential to promote bystander intervention; seeking a teacher’s support and telling the bully that they are being mean and need to stop are strategies commonly used at the Clinic.
Whole School environment
The most common answer children give when asked why they bully is that their peer was in some way different, whether in looks, family structure, sexuality, or cultural identity. In Australia, difference in cultural identity remains one of the most significant reasons children choose to bully another. Although the development of attitudes and beliefs is a very complex process, children’s attitudes towards cultural tolerance are very much shaped close to home via parents, peers, and the media. Recent research suggests that one in ten Australians believe some races are naturally superior or inferior and advocate segregation.
Teachers’ attitudes in the classroom are also key. Having limited knowledge of the cultural background of students can result in a stereotypical view of them, which may then negatively influence teachers’ behaviour and expectations of students. Because children’s attitudes develop and flourish from very early experiences, the kindergarten and primary school years are ideal focal points for addressing the cultural attitudes of children and reinforcing the importance of inclusion and acceptance.
Community members have indicated that schools are a top priority in terms of converting ideas into action. Positive outcomes have been found with the utilisation of projects within schools that celebrate and embrace cultural differences. Some suggestions for fostering inclusive, positive, accepting attitudes in schools include:
- Talking positively about people as a whole, and including books and materials which contain pictures and stories of culturally and linguistically diverse people, people from a wide range of family structures, and people with different physical appearances, for example
- Discussing difference and cultural diversity openly
- Embracing opportunities to engage with many diverse cultures and backgrounds, particularly from families within the school environment
- Improving professional development opportunities for teachers and staff, for example through the ‘School Days Project’ by Quirky Kid
- Actively participating in Harmony Day
In addition to promoting and encouraging the acceptance of diversity and difference within the school setting, it is also necessary to promote a safe and predictable environment for children. Children need to understand the rules and expectations in their environment and understand the predictable consequences of their behaviour. Keep expectations visible and accessible through discussion and practice, and ensure consistency among the staff.
Address the individual child
Some children may need more focused and individual support to help them develop prosocial behaviour and positive coping strategies to manage difficult peer relationships. While children who bully and children who become victims to bullying may present with very different individual and familial characteristics, supporting these children with the development of their social skills appears to be a necessary area of intervention. The Best of Friends Program, developed by the Quirky Kid Clinic, addresses social skills in children and can be conducted in a school setting with children from 3-13 years. The Best of Friends Program is designed to support children in developing and integrating social skills important to developing positive and effective peer relationships, such as conversational, empathy building and conflict resolution skills.
What we know from the literature and our experience at the Quirky Kid Clinic is that children who do not have the skills and strategies to develop positive peer relationships are more likely to engage in unhelpful conflict responses such as violence, submission, and emotional dysregulation, which have been demonstrated to maintain conflict and bullying. Directing, modelling, and practising social skills is an important component of fostering positive relationships in the school environment.
Kimberley O’Brien, our principal child psychologist, discussed kids wearing glasses and the implications for bullying with writer Melanie Kell for Mivision magazine. Next time you visit your optometrist, grab a copy of the magazine to find out more about the interview. The key points Kimberley discussed were:
- “Schools are becoming more competitive – intelligence is valued and glasses are linked to intelligence,” said Kimberley O’Brien, the Clinic’s Principal Psychologist.
- That said, some kids just love to stand out. Ms. O’Brien from Quirky Kid Clinic described one patient who chose “very funky green glasses” that leap out rather than blend in. “Gemma tried to look individual and she was quite popular because of that,” she said.
- The bottom line? “Children need to feel comfortable with the glasses they wear,” said Ms. O’Brien. “If possible, give them the freedom to choose their own frames.”
You can find useful, practical and informative advice about parenting and young people by visiting our resources page, or by discussing it on our forum.
If you have a story and would like to discuss it with us, please contact us to schedule a time.
Kimberley O’Brien enjoys sharing the best of her therapeutic moments with the media. View our media appearances to-date.
Kimberley O’Brien, Principal Child Psychologist at the Quirky Kid Clinic was recently asked to review a book on Cyberbullying.
“Teen Cyberbullying Investigated” written by author and judge, Tom Jacobs, presents a powerful collection of landmark court cases involving teens and charges of cyberbullying and cyberharassment. Each chapter features a seminal cyberbullying case and resulting decision, asks readers whether they agree with the decision, and urges them to think about how the decision affects their lives. Chapters also include related cases, tips, important facts and statistics, and suggestions for further reading.
Kimberley’s review was included in the interior of the book and stated that “This book is at the forefront of cyberbullying literature. It has the capacity to inform school policy as parents, teachers, and principals race to find solutions for bullies and support for victims”.
To find out more about cyberbullying and other teen issues, visit Judge Tom Jacobs website, Ask the Judge.
Additional information regarding cyberbullying can also be found on our fact sheets.
Romanization of Japanese
The romanization of Japanese is the application of the Latin script to write the Japanese language. This method of writing is sometimes referred to in English as rōmaji (ローマ字, literally "roman letters"; Japanese pronunciation: [ɽóːmadʑi]), less strictly transcribed romaji, and sometimes incorrectly transliterated as romanji or rōmanji. There are several different romanization systems. The three main ones are Hepburn romanization, Kunrei-shiki Rōmaji (ISO 3602), and Nihon-shiki Rōmaji (ISO 3602 Strict). Variants of the Hepburn system are the most widely used.
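The practical differences between the three systems are easiest to see side by side. The sketch below tabulates a few kana for which the standard romanization charts of Hepburn, Kunrei-shiki, and Nihon-shiki disagree:

```python
# A few kana where the three main romanization systems diverge,
# per the standard charts for each system.
SYSTEMS = ("Hepburn", "Kunrei-shiki", "Nihon-shiki")
DIFFERENCES = {
    "し": ("shi", "si", "si"),
    "ち": ("chi", "ti", "ti"),
    "つ": ("tsu", "tu", "tu"),
    "ふ": ("fu",  "hu", "hu"),
    "ぢ": ("ji",  "zi", "di"),
    "づ": ("zu",  "zu", "du"),
}

for kana, forms in DIFFERENCES.items():
    row = ", ".join(f"{name}: {form}" for name, form in zip(SYSTEMS, forms))
    print(f"{kana} -> {row}")
```

Hepburn spells syllables to match English pronunciation habits (shi, chi, tsu), while Kunrei-shiki and Nihon-shiki keep each consonant row regular (si, ti, tu); Nihon-shiki additionally preserves the historical kana distinctions ぢ/じ and づ/ず.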
Japanese is normally written in logographic characters borrowed from Chinese (kanji) and syllabic scripts (kana) which also ultimately derive from Chinese characters. Rōmaji may be used in any context where Japanese text is targeted at non-Japanese speakers who cannot read kanji or kana, such as for names on street signs and passports, and in dictionaries and textbooks for foreign learners of the language. It is also used to transliterate Japanese terms in text written in English (or other languages that use the Latin script) on topics related to Japan, such as linguistics, literature, history, and culture. Rōmaji is the most common way to input Japanese into word processors and computers, and may also be used to display Japanese on devices that do not support the display of Japanese characters.
All Japanese who have attended elementary school since World War II have been taught to read and write romanized Japanese. Therefore, almost all Japanese are able to read and write Japanese using rōmaji, although it is extremely rare in Japan to use this method to write Japanese, and most Japanese are more comfortable reading kanji/kana.
The word rōmaji literally means "roman letters", and in Japan it is more often used to refer to the Latin alphabet itself (as used in English and other European languages) than to any specific form of romanized Japanese.
The earliest Japanese romanization system was based on the Portuguese orthography. It was developed around 1548 by a Japanese Catholic named Yajiro. Jesuit presses used the system in a series of printed Catholic books so that missionaries could preach and teach their converts without learning to read Japanese orthography. The most useful of these books for the study of early modern Japanese pronunciation and early attempts at romanization was the Nippo jisho, a Japanese-Portuguese dictionary written in 1603. In general, the early Portuguese system was similar to Nihon-shiki in its treatment of vowels. Some consonants were transliterated differently: for instance, the /k/ consonant was rendered, depending on context, as either c or q, and the /ɸ/ consonant (now pronounced /h/) as f, so Nihon no kotoba ("The language of Japan") was spelled Nifon no cotoba. The Jesuits also printed some secular books in romanized Japanese, including the first printed edition of the Japanese classic The Tale of the Heike, romanized as Feiqe no monogatari, and a collection of Aesop's Fables (romanized as Esopo no fabulas). The latter continued to be printed and read after the suppression of Christianity in Japan (Chibbett, 1977).
Following the expulsion of Christians from Japan in the late 1590s and early 17th century, rōmaji fell out of use, and was only used sporadically in foreign texts until the mid-19th century, when Japan opened up again. The systems used today all developed in the latter half of the 19th century.
From the mid-19th century several systems were developed, culminating in the Hepburn system, named after James Curtis Hepburn, who used it in the third edition of his Japanese–English dictionary, published in 1887. The Hepburn system included representation of some sounds that have since changed. For example, Lafcadio Hearn's book Kwaidan shows the older kw- pronunciation; in modern Hepburn romanization, this would be written Kaidan (lit. "ghost tales").
As a replacement for the Japanese writing system
In the Meiji era (1868–1912), some Japanese scholars advocated abolishing the Japanese writing system entirely and using rōmaji instead. The Nihon-shiki romanization was an outgrowth of that movement. Several Japanese texts were published entirely in rōmaji during this period, but it failed to catch on. Later, in the early 20th century, some scholars devised syllabary systems with characters derived from Latin (rather like the Cherokee syllabary); these were even less popular, because they were not based on any historical use of the Latin script. Today, the use of Nihon-shiki for writing Japanese is advocated by the Oomoto sect and some independent organizations.
Hepburn romanization (ヘボン式ローマ字 Hebon-shiki rōmaji, "Hepburn-style romaji") generally follows English phonology with Romance vowels. It is an intuitive method of showing Anglophones the pronunciation of a word in Japanese. It was standardized in the USA as American National Standard System for the Romanization of Japanese (Modified Hepburn), but that status was abolished on October 6, 1994. Hepburn is the most common romanization system in use today, especially in the English-speaking world.
The Revised Hepburn system of romanization uses a macron to indicate some long vowels and an apostrophe to note the separation of easily confused phonemes (usually, syllabic n ん from a following naked vowel or semivowel). For example, the name じゅんいちろう is written with the kana characters ju-n-i-chi-ro-u, and romanized as Jun'ichirō in Revised Hepburn. Without the apostrophe, it would not be possible to distinguish this correct reading from the incorrect ju-ni-chi-ro-u. This system is widely used in Japan and among foreign students and academics.
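The apostrophe rule can be sketched in a few lines. The helper below is hypothetical, not part of any standard, and long-vowel macrons (such as the ō in Jun'ichirō) are deliberately left out for brevity:

```python
# Sketch of Revised Hepburn's apostrophe rule: syllabic n (ん) is followed
# by an apostrophe when the next mora begins with a vowel or y, so that
# ju-n-i-chi-ro-u cannot be misread as ju-ni-chi-ro-u.
# The mora lists in the examples are illustrative inputs, not a kana parser;
# macron handling for long vowels is omitted.

def join_hepburn(morae):
    """Join romanized morae, disambiguating syllabic n with an apostrophe."""
    out = []
    for i, mora in enumerate(morae):
        out.append(mora)
        nxt = morae[i + 1] if i + 1 < len(morae) else ""
        if mora == "n" and nxt[:1] in ("a", "i", "u", "e", "o", "y"):
            out.append("'")
    return "".join(out)

print(join_hepburn(["ju", "n", "i", "chi", "ro", "u"]))  # jun'ichirou
print(join_hepburn(["ho", "n", "da"]))                   # honda (no apostrophe needed)
```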
Nihon-shiki romanization (日本式ローマ字 Nihon-shiki rōmaji, "Japan-style Rōmaji"), which predates the Hepburn system, was originally invented as a method for the Japanese to write their own language in Latin characters. It follows the Japanese syllabary very strictly, with no adjustments for changes in pronunciation. It is therefore the only major system of romanization that allows (almost) lossless mapping to and from kana. It has also been standardized as ISO 3602 Strict. The system is also known as Nippon-shiki; rendered in the Nihon-shiki style itself, the name is either Nihon-siki or Nippon-siki.
Kunrei-shiki romanization (訓令式ローマ字 Kunrei-shiki rōmaji, "Cabinet decree style Rōmaji") is a slightly modified version of Nihon-shiki which eliminates differences between the kana syllabary and modern pronunciation. For example, when the words kana かな and tsukai つかい are combined, the result is written in kana as かなづかい with a dakuten (voicing sign) ゛on the つ (tsu) kana to indicate that the tsu つ is now voiced. The づ kana is pronounced in the same way as a different kana, す (su), with dakuten, ず. Kunrei-shiki and Hepburn ignore the difference in kana and represent the sound in the same way, as kanazukai, using the same letters "zu" as are used to romanize ず. Nihon-shiki retains the difference, and romanizes the word as kanadukai, differentiating the づ kana from the ず kana, which is romanized as zu, even though they are pronounced identically. Similarly for the pair じ and ぢ, which are both zi in Kunrei-shiki and both ji in Hepburn romanization, but are zi and di respectively in Nihon-shiki. See the table below for full details.
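The mergers and distinctions described above can be shown with a small lookup table. The table is illustrative only (it covers just the kana mentioned here), and the system names used as keys are our own labels:

```python
# Per-kana romanizations for the examples discussed in the text.
# Hepburn and Kunrei-shiki merge the identically pronounced pairs
# ず/づ and じ/ぢ; Nihon-shiki keeps them distinct.
ROMAJI = {
    "か": {"hepburn": "ka", "kunrei": "ka", "nihon": "ka"},
    "な": {"hepburn": "na", "kunrei": "na", "nihon": "na"},
    "い": {"hepburn": "i",  "kunrei": "i",  "nihon": "i"},
    "ず": {"hepburn": "zu", "kunrei": "zu", "nihon": "zu"},
    "づ": {"hepburn": "zu", "kunrei": "zu", "nihon": "du"},
    "じ": {"hepburn": "ji", "kunrei": "zi", "nihon": "zi"},
    "ぢ": {"hepburn": "ji", "kunrei": "zi", "nihon": "di"},
}

def romanize(word, system):
    """Per-kana lookup; only the handful of kana above are covered."""
    return "".join(ROMAJI[ch][system] for ch in word)

print(romanize("かなづかい", "hepburn"))  # kanazukai
print(romanize("かなづかい", "nihon"))    # kanadukai
```

Lossless round-tripping is only possible from the Nihon-shiki column, since it is the only one whose mapping is one-to-one.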
Kunrei-shiki has been standardized by the Japanese Government and the International Organization for Standardization as ISO 3602. Kunrei-shiki is taught to Japanese elementary school students in their fourth year of education.
Written in Kunrei-shiki, the name of the system would be rendered Kunreisiki.
It is possible to elaborate these romanizations to enable non-native speakers to pronounce Japanese words more correctly. Typical additions include tone marks to note the Japanese pitch accent and diacritic marks to distinguish phonological changes, such as the assimilation of the moraic nasal /n/ (see Japanese phonology).
JSL is a romanization system based on Japanese phonology, designed on the same linguistic principles that linguists apply when devising writing systems for languages that have none. It is a purely phonemic system, using exactly one symbol for each phoneme and marking pitch accent using diacritics. It was created for Eleanor Harz Jorden's system of Japanese language teaching. Its principle is that such a system enables students to better internalize the phonology of Japanese. Since it has none of the advantages for non-native speakers that the other rōmaji systems have, and the Japanese already have a writing system for their language, JSL is not widely used outside the educational environment.
In addition to the standardized systems above, there are many variations in romanization, used either for simplification, in error or confusion between different systems, or for deliberate stylistic reasons.
Notably, the various mappings that Japanese input methods use to convert keystrokes on a Roman keyboard to kana often combine features of all of the systems; when used as plain text rather than being converted, these are usually known as wāpuro rōmaji. (Wāpuro is a blend of wādo purosessā, "word processor".) Unlike the standard systems, wāpuro rōmaji requires no characters from outside the ASCII character set.
While there may be arguments in favour of some of these variant romanizations in specific contexts, their use, especially if mixed, leads to confusion when romanized Japanese words are indexed. Note that this confusion never occurs when inputting Japanese characters with a word processor, because input Latin letters are transliterated into Japanese kana as soon as the IME processes the characters being input.
- Oh for おお or おう (Hepburn ō).
- Oo for おお or おう. This is valid JSL and modified Hepburn.
- Ou for おう. This is also an example of wāpuro rōmaji.
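The keystroke-to-kana conversion that wāpuro-style input performs can be sketched as a greedy longest-match lookup. The table below is a tiny illustrative fragment of our own, not any real IME's mapping; real input methods accept spellings from several systems at once (si/shi, tu/tsu, and so on) and treat syllabic n specially:

```python
# Toy longest-match romaji-to-kana converter in the spirit of wāpuro input.
# Hypothetical mini-table mixing Hepburn (shi, tsu) and Kunrei (si, tu) spellings.
TABLE = {
    "shi": "し", "si": "し", "tsu": "つ", "tu": "つ",
    "ka": "か", "na": "な", "ni": "に", "ho": "ほ", "n": "ん",
    "a": "あ", "i": "い", "u": "う", "e": "え", "o": "お",
}
MAX_KEY = max(len(k) for k in TABLE)

def to_kana(romaji):
    out, pos = [], 0
    while pos < len(romaji):
        # Try the longest possible chunk first, then shorter ones.
        for length in range(min(MAX_KEY, len(romaji) - pos), 0, -1):
            chunk = romaji[pos:pos + length]
            if chunk in TABLE:
                out.append(TABLE[chunk])
                pos += length
                break
        else:
            # No match: pass the character through unchanged.
            out.append(romaji[pos])
            pos += 1
    return "".join(out)

print(to_kana("nihon"))  # にほん
print(to_kana("shika"))  # しか
```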
Example words written in each romanization system
Differences among romanizations
This chart shows the significant differences among them.
Japanese is written without spaces between words, and in some cases, such as concatenated compounds, it may not be completely clear where "word" boundaries ought to lie, resulting in varying romanization styles. For example, 結婚する, meaning "to marry", and composed of the noun 結婚 (kekkon, "marriage") combined with する (suru, "to do"), is romanized as kekkonsuru by some authors but kekkon suru by others.
Kana without standardised forms of romanization
There is no universally accepted style of romanization for the smaller versions of the y kana (ゃ/ャ, ゅ/ュ and ょ/ョ) when used outside the normal combinations (きゃ, きょ etc.), nor for the sokuon or small tsu kana っ/ッ when it is not directly followed by a consonant. Although these are usually regarded as merely phonetic marks or diacritics, they do sometimes appear on their own, such as at the end of sentences, in exclamations, or in some names. The detached sokuon is sometimes represented as an apostrophe or as t; for example, あっ might be written as a' or at.
- 1603: Vocabvlario da Lingoa de Iapam (1603)
- 1604: Arte da Lingoa de Iapam (1604–1608)
- 1620: Arte Breve da Lingoa Iapoa (1620)
| Year | Attested romanizations |
|------|------------------------|
| 1603 | a; i, j, y; v, u; ye; vo, uo |
| 1603 | ca; qi, qui; cu, qu; qe, que; co; qia; qio, qeo; qua |
| 1620 | ca, ka; ki; cu, ku; ke; kia; kio |
| 1620 | ga, gha; ghi; gu, ghu; ghe; go, gho; ghia; ghiu; ghio |
| 1603 | za; ii, ji; zu; ie, ye; zo; ia, ja; iu, ju; io, jo |
| 1603 | na; ni; nu; ne; no; nha; nhu, niu; nho, neo |
| 1603 | ma; mi; mu; me; mo; mia, mea; mio, meo |
| 1603 | ra; ri; ru; re; ro; ria, rea; riu; rio, reo |
| 1603 | va, ua; vo, uo |
| 1603 | n, m, ~ (tilde) |
| 1603 | -t, -cc-, -cch-, -cq-, -dd-, -pp-, -ss-, -tt-, -xx-, -zz- |
| 1604 | -t, -cc-, -cch-, -pp-, -cq-, -ss-, -tt-, -xx- |
| 1620 | -t, -cc-, -cch-, -pp-, -ck-, -cq-, -ss-, -tt-, -xx- |
Alphabet letter names in Japanese
The list below shows how to spell Latin character words or acronyms in Japanese. For example, NHK is spelled enu-eichi-kei (エヌエイチケイ). The following pronunciations are based on English letter names; otherwise, for example, A would likely be called ā (アー) in Japanese.
- A; Ē or ei (エー or エイ)
- B; Bī (ビー, alternative pronunciation bē, ベー)
- C; Shī (シー, sometimes pronounced sī, スィー)
- D; Dī (ディー, alternative pronunciation dē, デー)
- E; Ī (イー)
- F; Efu (エフ)
- G; Jī (ジー)
- H; Eichi or Etchi (エイチ or エッチ)
- I; Ai (アイ)
- J; Jē or Jei (ジェー or ジェイ)
- K; Kē or Kei (ケー or ケイ)
- L; Eru (エル)
- M; Emu (エム, alternative pronunciation en(em), エン)
- N; Enu (エヌ, sometimes pronounced en, エン)
- O; Ō (オー)
- P; Pī (ピー, alternative pronunciation pē, ペー)
- Q; Kyū (キュー)
- R; Āru (アール)
- S; Esu (エス)
- T; Tī (ティー, though sometimes pronounced chī, チー, and alternatively pronounced tē, テー)
- U; Yū (ユー)
- V; Vi (ヴィ, though often pronounced bui, ブイ)
- W; Daburyū (ダブリュー)
- X; Ekkusu (エックス)
- Y; Wai (ワイ)
- Z; Zetto, zeddo, or zī (ゼット, ゼッド, or ズィー, though sometimes pronounced jī, ジー)
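The letter-name list can be turned into a small spelling helper. One common reading is chosen per letter here (many letters have the alternatives noted above), so the table is an illustrative simplification:

```python
# One common Japanese reading per Latin letter, drawn from the list above.
LETTER_NAMES = {
    "A": "ei", "B": "bī", "C": "shī", "D": "dī", "E": "ī", "F": "efu",
    "G": "jī", "H": "eichi", "I": "ai", "J": "jei", "K": "kei",
    "L": "eru", "M": "emu", "N": "enu", "O": "ō", "P": "pī", "Q": "kyū",
    "R": "āru", "S": "esu", "T": "tī", "U": "yū", "V": "bui",
    "W": "daburyū", "X": "ekkusu", "Y": "wai", "Z": "zetto",
}

def spell_acronym(acronym):
    """Spell out an acronym letter by letter using Japanese letter names."""
    return "-".join(LETTER_NAMES[ch] for ch in acronym.upper())

print(spell_acronym("NHK"))  # enu-eichi-kei
print(spell_acronym("ISO"))  # ai-esu-ō
```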
- Cyrillization of Japanese
- List of ISO romanizations
- Japanese writing system
- Transcription into Japanese
- Chibbett, David (1977). The History of Japanese Printing and Book Illustration. Kodansha International Ltd. ISBN 0-87011-288-0.
- Jun'ichirō Kida (紀田順一郎 Kida Jun'ichirō). Nihongo Daihakubutsukan (日本語大博物館) (in Japanese). Just System (ジャストシステム Jasuto Shisutemu). ISBN 4-88309-046-9.
- Tadao Doi (土井忠生) (1980). Hōyaku Nippo Jisho (邦訳日葡辞書) (in Japanese). Iwanami Shoten (岩波書店).
- Tadao Doi (土井忠生) (1955). Nihon Daibunten (日本大文典) (in Japanese). Sanseido (三省堂).
- Mineo Ikegami (池上岑夫) (1993). Nihongo Shōbunten (日本語小文典) (in Japanese). Iwanami Shoten (岩波書店).
- Hiroshi Hino (日埜博) (1993). Nihon Shōbunten (日本小文典) (in Japanese). Shin-Jinbutsu-Ōrai-Sha (新人物往来社).
- (Japanese) Hishiyama, Takehide (菱山 剛秀 Hishiyama Takehide), Topographic Department (測図部). "Romanization of Geographical Names in Japan" (地名のローマ字表記) (Archive). Geospatial Information Authority of Japan.
- Rōmaji sōdan shitsu contains an extremely extensive and accurate collection of materials relating to rōmaji, including standards documents and HTML versions of Hepburn's original dictionaries. (Japanese)
- The rōmaji conundrum by Andrew Horvat contains a discussion of the problems caused by the variety of confusing romanization systems in use in Japan today. |
A template to use when exploring the personality, appearance, feelings and actions of a character from a narrative.
Provide students with a copy of the character profile template. Allow them to either create their very own character, or explore another character from a familiar narrative.
Students identify the character’s personality, appearance, feelings and actions.
NSW Curriculum alignment
Draws on an increasing range of skills and strategies to fluently read, view and comprehend a range of texts on less familiar topics in different media and technologies
Recognises that there are different kinds of texts when reading and viewing and shows an awareness of purpose, audience and subject matter
Victorian Curriculum alignment
Australian Curriculum alignment
- Discuss features of plot, character and setting in different types of literature and explore some features of characters in different texts. Elaborations: examining different types of literature including traditional tales, humorous stories and poetry (...
- Create events and characters using different media that develop key events and characters from literary texts. Elaborations: creating imaginative reconstructions of stories and poetry using a range of print and digital media (Skills: Literacy, Informatio...
See Figure 1
Spark plugs are used to ignite the air and fuel mixture in the cylinder as the piston reaches the top of the compression stroke. The controlled explosion that results forces the piston down, turning the crankshaft and the rest of the drive train.
A typical spark plug consists of a metal shell surrounding a ceramic insulator. A metal electrode extends downward through the center of the insulator and protrudes a small distance. Located at the end of the plug and attached to the side of the outer metal shell is the side electrode. The side electrode bends in at a 90 degree angle so that its tip is just past and parallel to the tip of the center electrode. The distance between these two electrodes (measured in thousandths of an inch or hundredths of a millimeter) is called the spark plug gap. The spark plug does not produce a spark but instead provides a gap across which the current can arc. The transistorized ignition coil (used on some later model vehicles covered by this information) produces considerably more voltage than the standard (breaker point) type, approximately 20,000 volts, which travels through the wires to the spark plugs. The current passes along the center electrode and jumps the gap to the side electrode, and in doing so, ignites the fuel/air mixture in the combustion chamber. All plugs should have a resistor built into the center electrode to reduce interference to any nearby radio and television receivers. The resistor also cuts down on erosion of plug electrodes caused by excessively long sparking. Resistor spark plug wiring is original equipment on all models.
Spark plug life and efficiency depend upon condition of the engine and the temperatures to which the plug is exposed. Combustion chamber temperatures are affected by many factors such as compression ratio of the engine, fuel/air mixtures, exhaust emission equipment, and the type of driving you do. Spark plugs are designed and classified by number according to the heat range at which they will operate most efficiently. The amount of heat that the plug absorbs is determined by the length of the lower insulator. The longer the insulator (it extends farther into the engine), the hotter the plug will operate; the shorter it is, the cooler it will operate. A plug that has a short path for heat transfer and remains too cool will quickly accumulate deposits of oil and carbon since it is not hot enough to burn them off. This leads to plug fouling and consequently to misfiring. A plug that has a long path of heat transfer will have no deposits but, due to the excessive heat, the electrodes will burn away quickly and, in some instances, pre-ignition may result. Pre-ignition takes place when plug tips get so hot that they glow sufficiently to ignite the fuel/air mixture before the spark does. This early ignition will usually cause a pinging during low speeds and heavy loads. In severe cases, the heat may become enough to start the fuel/air mixture burning throughout the combustion chamber rather than just to the front of the plug as in normal operation. At this time, the piston is rising in the cylinder making its compression stroke. The burning mass is compressed and an explosion results producing tremendous pressure. Something has to give, and it does; pistons are often damaged. Obviously, this detonation (explosion) is a destructive condition that can be avoided by installing a spark plug designed and specified for your particular engine.
A set of spark plugs usually requires replacement after about 12,000 miles (point type ignitions) or about 10,000 miles (electronic ignitions). The electrode on a new spark plug has a sharp edge but, with use, this edge becomes rounded by erosion, causing the plug gap to increase. In normal operation, plug gap increases about 0.001 in. (0.025mm) for every 1,000-2,000 miles. As the gap increases, the plug's voltage requirement also increases. It requires a greater voltage to jump the wider gap, and about 2-7 times as much voltage to fire a plug at high speed and acceleration as at idle.
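As a rough worked example of that wear rate (taking roughly 1,500 miles per 0.001 in. as an assumed midpoint of the 1,000-2,000 mile range):

```python
# Rough spark plug gap-growth estimate based on the wear rate above:
# about 0.001 in. per 1,000-2,000 miles; 1,500 miles is an assumed midpoint.
# Illustrative only; actual wear varies widely with engine condition.

def estimated_gap(initial_gap_in, miles, miles_per_thou=1500):
    """Estimate spark plug gap (inches) after a given mileage."""
    return initial_gap_in + 0.001 * (miles / miles_per_thou)

# A plug gapped at a hypothetical 0.035 in., after a 12,000-mile interval:
print(round(estimated_gap(0.035, 12000), 3))  # 0.043
```

An extra 0.008 in. over a service interval is enough to raise the voltage requirement noticeably, which is why gaps are rechecked rather than assumed.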
The higher voltage produced by the electronic transistorized ignition coil is one of the primary reasons for the prolonged replacement interval for spark plugs in late model cars. A consistently hotter spark prevents the fouling of plugs for much longer than could normally be expected; this spark is also able to jump across a larger gap more efficiently than a spark from a conventional system. However, even plugs used with these systems wear after time in the engine.
When you remove the spark plugs, check their condition. They are a good indicator of the condition of the engine. It is a good idea to remove the spark plugs every 7,000 miles to keep an eye on the mechanical state of the engine. A small deposit of light tan or gray material (or rust red with unleaded fuel) on a spark plug that has been used for any period of time is to be considered normal. Any other color or abnormal amounts of deposit, indicates that there is something amiss in the engine.
When a spark plug is functioning normally or, more accurately, when the plug is installed in an engine that is functioning properly, the plugs can be taken out, cleaned, gapped, and reinstalled without doing the engine any harm. But, if a plug fouls, causing misfire, you will have to investigate, correct the cause of the fouling, and either clean or replace the plug.
Worn or fouled plugs become obvious during acceleration. Voltage requirement is greatest during acceleration and a plug with an enlarged gap (or that is fouled) may require more voltage than the coil is able to produce. As a result, the engine misses and sputters until acceleration is reduced. Reducing acceleration reduces the plug's voltage requirement and the engine runs smoother. Slow, city driving is hard on plugs. The long periods of idle experienced in traffic create an overly rich gas mixture. The engine does not run fast enough to completely burn the gas and, consequently, the plugs become fouled with gas deposits and engine idle becomes rough. In many cases, driving under the right conditions can effectively clean these fouled plugs.
Many shops have a spark plug sandblaster and there are a few inexpensive models that are designed for home use and available from aftermarket sources. After sandblasting, the electrode should be filed to a sharp, square shape and then gapped to specifications. Gapping a plug too close will produce a rough idle while gapping it too wide will increase its voltage requirement and cause missing at high speed and during acceleration.
There are several reasons why a spark plug will foul, and you can usually learn what is at fault by just looking at the plug. Refer to the spark plug diagnosis figure in this section.
In most cases the factory recommended heat range is correct; it is chosen to perform well under a wide range of operating conditions. However, if most of your driving is long distance, high speed travel, you may want to install a spark plug one step colder than standard. If most of your driving is of the short trip variety, when the engine may not always reach operating temperature, a hotter plug may help burn off the deposits normally accumulated under those conditions.
See Figures 2, 3, 4 and 5
When you're removing spark plugs, you should work on one at a time. Don't start by removing the plug wires all at once because unless you number them, they are going to get mixed up. On some models though, it will be more convenient for you to remove all the wires before you start to work on the plugs. If this is necessary, take a minute before you begin and number the wires with tape before you disconnect them. The time you spend doing this will pay off later when it comes time to reconnect the wires to the plugs.
- Disconnect the negative battery cable.
- Tag the spark plug wires to assure proper installation.
- Twist the spark plug boot slightly in either direction to break loose the seal, then remove the boot from the plug. You may also use a plug wire removal tool designed especially for this purpose. Do not pull on the wire itself or you may separate the plug connector from the end of the wire. When the wire has been removed, take a wire brush and clean the area around the plug. An evaporative spray cleaner such as those designed for brake applications will also work well. Make sure that all the foreign material is removed so that none will enter the cylinder after the plug has been removed.
If you have access to a compressor, use the air hose to blow all material away from the spark plug bores before loosening the plug. Always protect your eyes with safety glasses when using compressed air.
- Remove the plug using the proper size socket, extensions, and universals as necessary. Be careful to hold the socket or the extension close to the plug with your free hand as this will help lessen the possibility of applying a shear force which might snap the spark plug in half.
- If removing the plug is difficult, drip some penetrating oil on the plug threads, allow it to work, then remove the plug. Also, be sure that the socket is straight on the plug, especially on those hard to reach plugs. Again, if the socket is cocked to one side, a shear force may be applied to the plug and could snap the plug in half.
See Figures 6 and 7
Check the plugs for deposits and wear. If they are not going to be replaced, clean the plugs thoroughly. Remember that any kind of deposit will decrease the efficiency of the plug. Plugs can be cleaned on a spark plug cleaning machine, which can sometimes be found in service stations, or you can do an acceptable job of cleaning with a stiff brush. If the plugs are cleaned, the electrodes must be filed flat. Use an ignition points file, not an emery board or the like, which will leave deposits. The electrodes must be filed perfectly flat with sharp edges; rounded edges reduce the spark plug voltage by as much as 20%.
Check and adjust the spark plug gap immediately before installation. The ground electrode (the L-shaped one connected to the body of the plug) must be parallel to the center electrode, and the specified size gauge (see Tune-Up Specifications) should pass through the gap with a slight drag. Always check the gap on new plugs, too, since they are not always set correctly at the factory.
Do not use a flat feeler gauge when measuring the gap on used plugs, because the reading may be inaccurate. The ground electrode on a used plug is often rounded on the face closest to the center electrode. A flat gauge will not be able to accurately measure this distance as well as a wire gauge. Most gapping tools usually have a bending tool attached. This tool may be used to adjust the side electrode until the proper distance is obtained. Never attempt to move or bend the center electrode or spark plug damage will likely occur. Also, be careful not to bend the side electrode too far or too often; if it is overstressed it may weaken and break off within the engine, requiring removal of the cylinder head to retrieve it.
- Inspect the spark plugs and clean or replace, as necessary. Inspect the spark plug boot for tears or damage. If a damaged boot is found, the spark plug wire must be replaced.
- Using a feeler gauge, check and adjust the spark plug gap to specification. When using a gauge, the proper size should pass between the electrodes with a slight drag. The next larger size should not be able to pass while the next smaller size should pass freely.
- Lubricate the spark plug threads with a drop of clean engine oil, then carefully start the spark plugs by hand and tighten a few turns until a socket is needed to continue tightening the spark plug. Do not apply the same amount of force you would use for a bolt; just snug them in. If a torque wrench is available, tighten the plugs to 11-12 ft. lbs. (12-20 Nm).
A spark plug threading tool may be made using the end of an old spark plug wire. Cut the wire a few inches from the top of the spark plug boot. The boot may then be used to hold the plug while the wire is turned to thread it. Because the wire is so flexible, it may be turned to bend around difficult angles and, should the plug begin to crossthread, the resistance should be sufficient to bend the wire instead of forcing the plug into the cylinder head, thus preventing serious thread damage.
- Apply a small amount of silicone dielectric compound to the end of the spark plug lead or inside the spark plug boot to prevent sticking, then install the boot to the spark plug and push until it clicks into place. The click may be felt or heard, then gently pull back on the boot to assure proper contact.
- Connect the spark plug wires as tagged during removal.
- Connect the negative battery cable. |
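The gauge check described earlier reads like go/no-go logic: the specified gauge should pass with a slight drag, the next larger size should not pass, and the next smaller should pass freely. A sketch of that logic, assuming gauge sizes in inches and a hypothetical 0.002 in. step between gauge blades:

```python
# Go/no-go gap check: the spec-size gauge passes, the next larger does not,
# and the next smaller passes freely. The 0.002 in. step between gauge
# blades is an assumption for illustration, not a manufacturer figure.

def gap_ok(gap_in, spec_in, step=0.002):
    """True if the measured gap satisfies the go/no-go gauge test."""
    def passes(gauge):
        return gauge <= gap_in          # a gauge fits if it is no wider than the gap
    return (passes(spec_in)             # spec gauge passes (with a slight drag)
            and not passes(spec_in + step)  # next larger size does not pass
            and passes(spec_in - step))     # next smaller size passes freely

print(gap_ok(0.035, 0.035))  # True
print(gap_ok(0.040, 0.035))  # False: gap too wide, larger gauge also passes
print(gap_ok(0.030, 0.035))  # False: gap too narrow, spec gauge will not pass
```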
The men of the Derbyshire village of Pentrich formed themselves into an armed force in 1817 and marched towards Nottingham, expecting to be part of a national uprising to overthrow the government. The main reason for their action was anger and despair at the lack of work, the lack of food, and the apparent indifference of the government and local authorities to their ever more desperate plight.
Why was there an uprising at this time and why did the “Regenerators” of Pentrich think they were taking part in a national revolution? The reasons are complex, but some factors may be summarised as follows:
The leadership and activities of influential radical leaders.
There was a new mood among radical thinkers fired by events in America and France. The ideas of philosophers such as Rousseau ("Discourse on the Origins of Inequality" and "Social Contract"), and the dissemination of their ideas through the work of "pamphleteers" such as Tom Paine, meant that there was a ferment of radical ideas surging among the working population. Some wanted all out revolution, while others believed that a reform of the voting system would be sufficient to bring about major changes. Sir Francis Burdett MP attempted to push for voting reform, but with no success. Of the 558 members of parliament, most represented electorates of under 500 people. Major industrial towns such as Manchester, Birmingham and Leeds had no MPs at all. The government argued that an MP was not "the agent of the place that chose him, but of the whole community."
Political clubs flourished throughout the country, many of them set up as "Hampden" clubs by the radical Major John Cartwright. In January 1817 a major rally was called at the Crown and Anchor public house in The Strand, London. Though half a million signatures had been gathered for a petition on voting reform, the rally was not a success, as the leaders could not agree among themselves. Burdett backed out of taking the petition to government, though Admiral Lord Cochrane nevertheless delivered it. Unfortunately stones were thrown by the mob at the Prince Regent's carriage as he drove from the Houses of Parliament, behaviour that did not encourage the government to view the petition favourably!
Unrest continued to grow, and in some parts of the Midlands every village had a club where meetings ended with revolutionary songs following speeches which were "destructive of the social order, recommending the equalisation of property." (Report of a House of Lords committee of enquiry.) A reform meeting at Clerkenwell ended in a riot in which a man was shot. Secret meetings were called at Methodist meeting rooms, private houses and in isolated country barns. Public shows of discontent, such as the "Blanketeers'" failed march from Manchester to London, were frequent. (The Blanketeers carried blankets to keep them warm on the journey.) Revolution was in the air, but the leaders were far from united. Cartwright was very old, Burdett did not trust his fellow radical "Orator" Henry Hunt, and Cobbett soon sailed to America to avoid arrest.
The Luddite movement in Yorkshire and the Midlands (an attempt to destroy the new factory machines which were decimating cottage industries) had schooled men in organisational and leadership skills so that local leaders were well able to control and manage revolutionary groups. (Jeremiah Brandreth was typical of these.)
Much of the legislation at this time caused great resentment.
Suspension of Habeas Corpus
Alarmed at the growing unrest, the government passed legislation in 1817 suspending Habeas Corpus, thereby allowing it to detain political opponents without trial. It also curtailed the freedom of the press, and decreed that any meeting of fifty or more people held without the consent of the Lord Lieutenant of the region could incur the death penalty for those taking part.
The Corn Laws (1815)
During the war with France British farmers prospered as a result of favourable long-term leasing arrangements and the profitable development of heavy clay lands for the cultivation of grain. With the end of the war, fluctuating prices led to leasing arrangements becoming shorter and less advantageous. There was also a threat of cheap imported grain. Protests to parliament from farmers ("No set of men cry so loud or so soon as farmers", James Loch, 1814) led to the enactment of the Corn Laws, which were designed to keep out foreign grain and keep up prices to the consumer. Imported grain was prohibited if the price of domestic grain fell below £4 per quarter (a measure of eight bushels). The effect was to make the cost of bread prohibitive for the general population.
Abolition of Income Tax.
In the last year of the war with France Prime Minister Liverpool faced an expenditure of 45% over income. The difference was made up by borrowings costing £30m. per year. After the war the “country gentlemen” were no longer willing to pay income tax to finance the government and complaints to parliament from landowners and farmers led to the abolition of income tax in 1816. This led to further expensive government borrowing and a greater burden of “indirect taxes” on the general population who were already struggling with rising prices, unemployment and poor wages.
Other factors leading to unrest.
Bad harvests and rising food prices – In 1816 the weather was cold and wet. Crops were badly affected and insufficient corn was produced to feed the population.
The Industrial Revolution – the demand for long established home-based crafts had dwindled away as the result of the new factories. This caused a drift of population from rural into industrial areas and many of an ever growing population were obliged to accept badly paid semi-skilled factory work.
Post War Problems – The demand for armaments ceased, which put pressure on the massive ironworks businesses, which in turn reduced their need for coal. Some 300,000 soldiers and sailors returning from the wars found that work was almost unobtainable. Many banks failed and great trading companies went bankrupt. Around a third of the working population were thrown out of employment and became paupers, thereby putting strain on the uneven and often inadequate “Parish Relief”.
Post war depression – there was a loss of exports as a result of competition from continental markets.
Effects of the French Revolution – The government feared that revolutionary principles could lead to violence as witnessed in France at the height of the revolution. They reacted with severely repressive legislation.
Unpopular royalty – The Prince Regent’s extravagant lifestyle was highly unpopular with the people.
Why did the “revolution” fail?
Lack of co-ordinated leadership.
Lack of will of the people.
Government control of the situation as a result of a network of spies which ensured that all revolutionary activity was known and action taken before any serious situation could develop. (Note the very high number of treason trials reported in newspapers of the period.)
Why did the Pentrich men march?
The men themselves said that they were misled by the spy William Oliver. Whether he acted as an agent provocateur, or whether the failure of communications between the revolutionary groups caused the Pentrich men to march to their downfall, is open to question. Much has been written on the subject, but proof one way or another remains very elusive.
DO PLEASE READ the play BRANDRETH which gives a narrative account of the uprising using material from many documents, letters and other information sources of the period. The LETTERS FROM AUSTRALIA give a vivid account of what life was like for the Pentrich rebels sent to Australia as convicts.
This section discusses connecting a Linux system to a
network and/or the Internet.
Configuring the network settings the easy way
Configuring your network is very easy with Red Hat Linux and
much of it (if not all of it) is done when you install Linux.
You can change the network configuration by clicking
on the Red Hat icon in the bottom left corner of your screen,
then System Settings-->Network. You will be guided
through the steps to configure your network.
The process is described here and an
example is shown next:
Setting your hostname, IP address, netmask, gateway, DNS
server via files
It is sometimes helpful to know what is going on behind the scenes,
or you may want to modify the network configuration by editing
the configuration files directly.
Networking is set up in four files: /etc/hosts,
/etc/sysconfig/network, /etc/resolv.conf, and the interface
file (on Red Hat, /etc/sysconfig/network-scripts/ifcfg-eth0
for the first Ethernet card).
For example, assume you want to configure the network
with the following settings:
- hostname: summer
- domainname: acme.com
- Static IP address: 192.168.12.21
- Netmask: 255.255.255.0
- Gateway: 192.168.12.254
- Primary DNS server: 192.168.12.21
- Secondary DNS server: 192.168.12.23
First, add your host to the /etc/hosts file:

# The next line "127.0.0.1" is needed. Do not remove it.
127.0.0.1       localhost.localdomain   localhost
192.168.12.21   summer.acme.com         summer

Your /etc/sysconfig/network file holds the hostname and gateway.
DNS servers are set in: /etc/resolv.conf. An example:
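The example contents for these two files appear to be missing here. Based on the settings listed above, typical Red Hat versions of /etc/sysconfig/network and /etc/resolv.conf would look like the following (a sketch; exact keys can vary between releases):

```
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=summer.acme.com
GATEWAY=192.168.12.254

# /etc/resolv.conf
search acme.com
nameserver 192.168.12.21
nameserver 192.168.12.23
```

After editing these files, restarting the network service (service network restart) applies the changes.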
Using ifconfig and route
You normally would not need to use ifconfig or
route unless you
want to change your IP address, change your gateway,
disable the Ethernet interface, and so on.
This is sometimes helpful, so the information is here.
To set an IP address:
ifconfig eth0 192.168.12.56 netmask 255.255.255.0 up
To set a default route or gateway ("gw" = "gateway"):
route add default gw 192.168.12.1 eth0
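As a side note, the netmask in the commands above determines which addresses count as local to the interface; Python's standard ipaddress module can show what a given address/netmask pair implies (an illustrative sketch, using the example address above):

```python
import ipaddress

# Example address and netmask from the ifconfig command above.
iface = ipaddress.ip_interface("192.168.12.56/255.255.255.0")

print(iface.network)                    # -> 192.168.12.0/24
print(iface.network.broadcast_address)  # -> 192.168.12.255
print(iface.ip in iface.network)        # -> True
```

Any address outside 192.168.12.0/24 would be sent to the default gateway set with route.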
If your system is saying there is no Ethernet card found,
make sure the driver for the card is loaded into the kernel.
Check the kernel boot messages with:

dmesg | less

Look for any info about eth0 to help track down the problem.
Also check that /etc/modules.conf has a line like this:

alias eth0 driver-name (for example, 3c503)
Where to get more information
Red Hat Network Configuration
Anammox, an abbreviation for ANaerobic AMMonium OXidation, is a globally important microbial process of the nitrogen cycle. The bacteria mediating this process were identified in 1999, and at the time were a great surprise for the scientific community. It takes place in many natural environments and anammox is also the trademarked name for the anammox-based ammonium removal technology that was developed by the Delft University of Technology.
In this biological process, nitrite and ammonium are converted directly into dinitrogen gas. Globally, this process may be responsible for 30-50% of the dinitrogen gas produced in the oceans. It is thus a major sink for fixed nitrogen and so limits oceanic primary productivity. The overall catabolic reaction is:
- NH4+ + NO2− → N2 + 2H2O.
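As a quick sanity check, the catabolic reaction above balances in both atoms and charge. A small illustrative script (not from the source) can verify this:

```python
# Element and charge tallies for NH4+ + NO2- -> N2 + 2 H2O (illustrative only).
reactants = [({"N": 1, "H": 4}, +1),   # NH4+
             ({"N": 1, "O": 2}, -1)]   # NO2-
products  = [({"N": 2}, 0),            # N2
             ({"H": 2, "O": 1}, 0),    # H2O
             ({"H": 2, "O": 1}, 0)]    # H2O

def totals(side):
    """Sum atoms and net charge over one side of the reaction."""
    atoms, charge = {}, 0
    for formula, q in side:
        charge += q
        for element, n in formula.items():
            atoms[element] = atoms.get(element, 0) + n
    return atoms, charge

print(totals(reactants))                      # -> ({'N': 2, 'H': 4, 'O': 2}, 0)
print(totals(reactants) == totals(products))  # -> True
```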
The bacteria that perform the anammox process belong to the bacterial phylum Planctomycetes. Currently, five anammox genera have been discovered: Brocadia, Kuenenia, Anammoxoglobus, Jettenia (all fresh water species), and Scalindua (marine species). The anammox bacteria are characterized by several striking properties: they all possess one anammoxosome, a membrane bound compartment inside the cytoplasm which is the locus of anammox catabolism. Further, the membranes of these bacteria mainly consist of ladderane lipids so far unique in biology. Of special interest is the conversion to hydrazine (normally used as a high-energy rocket fuel, and poisonous to most living organisms) as an intermediate. A final striking feature of the organism is the extremely slow growth rate. The doubling time is anywhere from 7–22 days. The anammox bacteria are geared towards converting their substrates at very low concentrations; in other words, they have a very high affinity to their substrates ammonium and nitrite (sub-micromolar range). Anammox cells are packed with cytochrome c type proteins (~30% of the protein complement), including the enzymes that perform the key catabolic reactions of the anammox process, making the cells remarkably red. The anammox process was originally found to occur only from 20 °C to 43 °C but more recently, anammox has been observed at temperatures from 36 °C to 52 °C in hot springs and 60 °C to 85 °C at hydrothermal vents located along the Mid-Atlantic Ridge.
In 1932, it was reported that dinitrogen gas was generated via an unknown mechanism during fermentation in the sediments of Lake Mendota, Wisconsin, USA. More than 40 years ago, Richards noticed that most of the ammonium that should be produced during the anaerobic remineralization of organic matter was unaccounted for. As there was no known biological pathway for this transformation, biological anaerobic oxidation of ammonium received little further attention. Thirty years ago, the existence of two chemolithoautotrophic microorganisms capable of oxidizing ammonium to dinitrogen gas was predicted on the basis of thermodynamic calculations. It was thought that anaerobic oxidation of ammonium would not be feasible, assuming that predecessors had tried and failed to establish a biological basis for those reactions. In the early 1990s, Arnold Mulder's observations proved consistent with Richards' suggestion. In their anoxic denitrifying pilot reactor, ammonium disappeared at the expense of nitrite with a clear nitrogen production. The reactor used the effluent from a methanogenic pilot reactor, which contained ammonium, sulphide and other compounds, and nitrate from a nitrifying plant as the influent. The process was named "anammox," and was realized to have great significance in the removal of unwanted ammonium. The discovery of the anammox process was first publicly presented at the 5th European Congress on Biotechnology. By the mid-1990s, the discovery of anammox in the fluidized bed reactor was published. A maximum ammonium removal rate of 0.4 kg N/m3/d was achieved. It was shown that for every mole of ammonium consumed, 0.6 mol of nitrate was required, resulting in the formation of 0.8 mol of N2 gas. In the same year, the biological nature of anammox was identified. Labeling experiments with 15NH4+ in combination with 14NO3- showed that 14-15N2 was the dominant product, making up 98.2% of the total labeled N2.
Later it was realized that nitrite, rather than nitrate, served as the oxidizing agent of ammonium in the anammox reaction. Based on a previous study, Strous et al. calculated the stoichiometry of the anammox process by mass balancing, which is widely accepted by other groups. Later, anammox bacteria were identified as planctomycetes, and the first identified anammox organism was named Candidatus "Brocadia anammoxidans." Before 2002, anammox was assumed to be a minor player in the nitrogen cycle within natural ecosystems. In 2002, however, anammox was found to play an important part in the biological nitrogen cycle, accounting for 24-67% of the total N2 production in the continental shelf sediments that were studied. The discovery of the anammox process modified the concept of the biological nitrogen cycle, as depicted in Figure 2.
Possible reaction mechanisms
According to 15N labeling experiments carried out in 1997, ammonium is biologically oxidized by hydroxylamine, most likely derived from nitrite, as the probable electron acceptor. The conversion of hydrazine to dinitrogen gas is hypothesized to be the reaction that generates the electron equivalents for the reduction of nitrite to hydroxylamine. In general, two possible reaction mechanisms are addressed. One mechanism hypothesizes that a membrane-bound enzyme complex converts ammonium and hydroxylamine to hydrazine first, followed by the oxidation of hydrazine to dinitrogen gas in the periplasm. At the same time, nitrite is reduced to hydroxylamine at the cytoplasmic site of the same enzyme complex responsible for hydrazine oxidation with an internal electron transport (Figure 3a). The other mechanism postulates the following: ammonium and hydroxylamine are converted to hydrazine by a membrane-bound enzyme complex, hydrazine is oxidized in the periplasm to dinitrogen gas, and the generated electrons are transferred via an electron transport chain to nitrite reducing enzyme in the cytoplasm where nitrite is reduced to hydroxylamine (Figure 3b). Whether the reduction of nitrite and the oxidation of hydrazine occur at different sites of the same enzyme or the reactions are catalyzed by different enzyme systems connected via an electron transport chain remains to be investigated. In microbial nitrogen metabolism, the occurrence of hydrazine as an intermediate is rare. Hydrazine has been proposed as an enzyme-bound intermediate in the nitrogenase reaction.
A possible role of nitric oxide (NO) or nitroxyl (HNO) in anammox was proposed by Hooper et al. by way of condensation of NO or HNO and ammonium on an enzyme related to the ammonium monooxygenase family. The formed hydrazine or imine could subsequently be converted by the enzyme hydroxylamine oxidase to dinitrogen gas, and the reducing equivalents produced in the reaction are required to combine NO or HNO and ammonium or to reduce nitrite to NO. Environmental genomics analysis of the species Candidatus Kuenenia stuttgartiensis, through a slightly different and complementary metabolism mechanism, suggested NO to be the intermediate instead of hydroxylamine (Figure 4). However, this hypothesis also agreed that hydrazine was an important intermediate in the process. In this pathway (Figure 4), there are two enzymes unique to anammox bacteria: hydrazine hydrolase (hh) and hydrazine dehydrogenase (hd). The hh produces hydrazine from nitric oxide and ammonium, and hd transfer the electrons from hydrazine to ferredoxin. Few new genes, such as some known fatty acid biosynthesis and S-adenosylmethionine radical enzyme genes, containing domains involved in electron transfer and catalysis have been detected.
To date, ten anammox species have been described, including seven that are available in laboratory enrichment cultures. All have the taxonomical status of Candidatus, as none were obtained as classical pure cultures. Known species are divided over five genera: (1) Kuenenia, represented by Kuenenia stuttgartiensis; (2) Brocadia (three species: B. anammoxidans, B. fulgida, and B. sinica); (3) Anammoxoglobus (one species: A. propionicus); (4) Jettenia (one species: J. asiatica); and (5) Scalindua (four species: S. brodae, S. sorokinii, S. wagneri, and S. profunda). Representatives of the first four genera were enriched from sludge from wastewater treatment plants; K. stuttgartiensis, B. anammoxidans, B. fulgida, and A. propionicus were even obtained from the same inoculum. Scalindua dominates the marine environment, but is also found in some freshwater ecosystems and wastewater treatment plants. Together, these ten species likely represent only a minute fraction of anammox biodiversity. For instance, there are currently over 2000 16S rRNA gene sequences affiliated with anammox bacteria deposited in GenBank (http://www.ncbi.nlm.nih.gov/genbank/), representing an overlooked continuum of species, subspecies, and strains, each apparently having found its specific niche in the wide variety of habitats where anammox bacteria are encountered. Species microdiversity is particularly impressive for the marine representative Scalindua. A question that remains to be investigated is which environmental factors determine species differentiation among anammox bacteria.
The sequence identities of the anammox 16S rRNA genes range from 87 to 99%, and phylogenetic analysis places them all within the phylum Planctomycetes, which form the PVC superphylum together with Verrucomicrobia and Chlamydiae. Within the Planctomycetes, anammox bacteria deeply branch as a monophyletic clade. Their phylogenetic position together with a broad range of specific physiological, cellular, and molecular traits give anammox bacteria their own order Brocadiales.
The application of the anammox process lies in the removal of ammonium in wastewater treatment and consists of two separate processes. The first step is partial nitrification (nitritation) of half of the ammonium to nitrite by ammonia oxidizing bacteria:
- 2NH4+ + 3O2 → 2NO2- + 4H+ + 2H2O
The remaining ammonium and the nitrite produced are converted in the anammox process to dinitrogen gas and circa 15% nitrate (not shown) by anammox bacteria:
- NH4+ + NO2- → N2 + 2 H2O
For the enrichment of anammox organisms, a granular biomass or biofilm system seems especially suited, in which the necessary sludge age of more than 20 days can be ensured. Possible reactors are sequencing batch reactors (SBR), moving bed reactors, or gas-lift loop reactors. The cost reduction compared to conventional nitrogen removal is considerable; the technique is still young but proven in several full-scale installations. The first full-scale reactor intended for the application of anammox bacteria was built in the Netherlands in 2002. In other wastewater treatment plants, such as the one in Hattingen, Germany, anammox activity is coincidentally observed even though they were not built for that purpose. As of 2006, there are three full-scale processes in the Netherlands: one in a municipal wastewater treatment plant (in Rotterdam) and two treating industrial effluent, one from a tannery and the other from a potato processing plant.
Advantages of the anammox process
Conventional nitrogen removal from ammonium-rich wastewater is accomplished in two separate steps: nitrification, which is mediated by aerobic ammonia- and nitrite-oxidizing bacteria, and denitrification, carried out by denitrifiers, which reduce nitrate to N2 with the input of suitable electron donors. Because of the aeration and the input of organic substrates (typically methanol) they require, these two processes are: (1) highly energy consuming, (2) associated with the production of excess sludge, and (3) responsible for significant amounts of greenhouse gases such as CO2 and N2O and ozone-depleting NO. Because anammox bacteria convert ammonium and nitrite directly to N2 anaerobically, this process requires neither aeration nor other electron donors. Nevertheless, oxygen is still required for the production of nitrite by ammonia-oxidizing bacteria. However, in partial nitritation/anammox systems, oxygen demand is greatly reduced because only half of the ammonium needs to be oxidized to nitrite instead of full conversion to nitrate. The autotrophic nature of anammox bacteria and ammonia-oxidizing bacteria guarantees a low yield and thus less sludge production. Additionally, anammox bacteria easily form stable self-aggregated biofilm (granules), allowing reliable operation of compact systems characterized by high biomass concentration and conversion rates up to 5–10 kg N m−3 d−1. Overall, it has been shown that efficient application of the anammox process in wastewater treatment results in a cost reduction of up to 60% as well as lower CO2 emissions.
- Arrigo, K. R. Marine microorganisms and global nutrient cycles. Nature 437, 349–355 (2005)
- Strous, M. et al. Missing lithotroph identified as new planctomycete. Nature 400, 446–449 (1999)
- Jetten Michael Silvester Maria, Van Loosdrecht Marinus Corneli; Technische Universiteit Delft, patent WO9807664
- Kartal, B. et al. How to make a living from anaerobic ammonium oxidation. FEMS Microbiology Reviews 37, 428-461 (2013)
- Devol, A. H. et al. Nitrogen cycle: solution to a marine mystery. Nature 422(6932), 575-576 (2003)
- Jetten, M. S. M. et al. Biochemistry and molecular biology of anammox bacteria. Critical Reviews in Biochemistry and Molecular Biology 44(2-3), 65-84 (2009)
- Boumann H. A. et al. Biophysical properties of membrane lipids of anammox bacteria: I. Ladderane phospholipids form highly organized fluid membranes. Biochim Biophys Acta 1788(7), 1444-1451 (2009)
- "Pee power: Urine-loving bug churns out space fuel". Agence France Press. 2011-10-02. Retrieved 2011-10-03.
- Strous, M., Kuenen, J.G., Jetten, M.S. 1999. Key Physiology of Anaerobic Ammonium Oxidation. App. Environ. Microb. (3248-3250)
- Yan J, Haaijer SCM, Op den Camp HJM, van Niftrik L, Stahl DA, Könneke M, Rush D, Sinninghe Damsté JS, Hu YY, Jetten MSM (2012) Mimicking the oxygen minimum zones: stimulating interaction of aerobic archaeal and anaerobic bacterial ammonia oxidizers in a laboratory-scale model system. Environ Microbiol 14:3146–3158
- Kartal B, Maalcke WJ, de Almeida NM, Cirpus I, Gloerich J, Geerts W, den Camp HJMO, Harhangi HR, Janssen- Megens EM, Francoijs K-J, Stunnenberg HG, Keltjens JT, Jetten MSM, Strous M (2011) Molecular mechanism of anaerobic ammonium oxidation. Nature 479:127–130
- Jaeschke et al. 2009. 16s rRNA gene and lipid biomarker evidence for anaerobic ammonium-oxidizing bacteria (anammox) in California and Nevada hot springs. FEMS Microbiol. Ecol. 343-350
- Byrne, N., Strous, M., Crepeau, V, et al. 2008. Presence and activity of anaerobic ammonium-oxidizing bacteria at deep-sea hydrothermal vents. The ISME Journal.
- Allgeier, R. J. et al. The anaerobic fermentation of lake deposits. International Review of Hydrobiology 26(5-6), 444-461 (1932)
- F. A. Richards. Anoxic basins and fjordsin. Chemical Oceanography, J.P. Ripley and G. Skirrow, Eds., pp 611-645, Academic Press, London, UK, 1965
- Broda, E. Two kinds of lithotrophs missing in nature. Zeitschrift fur Allgemeine Mikrobiologie 17(6), 491-493 (1977)
- Kuenen, J. G. Anammox bacteria: from discovery to application. Nature Reviews Microbiology 6(4), 320-326 (2008)
- A. A. van de Graaf, A. Mulder, H. Slijkhuis, L. A. Robertson, and J. G. Kuenen, “Anoxic ammonium oxidation,” in Proceedings of the 5th European Congress on Biotechnology, C. Christiansen, L. Munck, and J. Villadsen, Eds., pp. 338–391, Copenhagen, Denmark, 1990
- A. Mulder, A. A. Van De Graaf, L. A. Robertson, and J. G. Kuenen, “Anaerobic ammonium oxidation discovered in a denitrifying fluidized bed reactor,” FEMS Microbiology Ecology, vol. 16, no. 3, pp. 177–184, 1995
- A. A. Van de Graaf, A. Mulder, P. De Bruijn, M. S. M. Jetten, L. A. Robertson, and J. G. Kuenen, “Anaerobic oxidation of ammonium is a biologically mediated process,” Applied and Environmental Microbiology, vol. 61, no. 4, pp. 1246–1251, 1995
- M. Strous, J. J. Heijnen, J. G. Kuenen, and M. S. M. Jetten, “The sequencing batch reactor as a powerful tool for the study of slowly growing anaerobic ammonium-oxidizing microorganisms,” Applied Microbiology and Biotechnology, vol. 50, no. 5, pp. 589–596, 1998
- J. G. Kuenen and M. S. M. Jetten, “Extraordinary anaerobic ammonium oxidising bacteria,” ASM News, vol. 67, pp. 456–463, 2001
- C. A. Francis, J. M. Beman, and M. M. M. Kuypers, “New processes and players in the nitrogen cycle: the microbial ecology of anaerobic and archaeal ammonia oxidation,” ISME Journal, vol. 1, no. 1, pp. 19–27, 2007
- B. Thamdrup and T. Dalsgaard, “Production of N2 through anaerobic ammonium oxidation coupled to nitrate reduction in marine sediments,” Applied and Environmental Microbiology, vol. 68, no. 3, pp. 1312–1318, 2002
- Van De Graaf, A. A. et al. Metabolic pathway of anaerobic ammonium oxidation on the basis of 15N studies in a fluidized bed reactor. Microbiology 143(7), 2415-2412 (1997)
- Ni, S-Q. and Zhang, J. Anaerobic Ammonium Oxidation: From Laboratory to Full-Scale Application. BioMed Research International 2013, 1-10 (2013)
- Jetten, M. S. M. et al. The anaerobic oxidation of ammonium. FEMS Microbiology Reviews 22(5), 421-437 (1998)
- Schalk, H. et al. The anaerobic oxidation of hydrazine: a novel reaction in microbial nitrogen metabolism. FEMS Microbiology 158(1), 61-67 (1998)
- Dilworth M. J. and Eady R. R. Hydrazine is a product of dinitrogen reduction by the vanadium-nitrogenase from Azotobacter chroococcum. Biochemical Journal 277(2), 465-468 (1991)
- Hooper, A. B. et al. Enzymology of the oxidation of ammonia to nitrite by bacteria. Antonie van Leeuwenhoek 71(1-2), 59-67 (1997)
- Strous, M. et al. Deciphering the evolution and metabolism of an anammox bacterium from a community genome. Nature 440(7085), 790-794 (2006)
- Kartal, B. et al. Candidatus 'Brocadia fulgida': an autofluorescent anaerobic ammonium oxidizing bacterium. FEMS Microbiol. Ecol. 63, 46-55 (2008)
- Oshiki, M. et al. Physiological characteristics of the anaerobic ammonium-oxidizing bacterium Candidatus 'Brocadia sinica'. Microbiology 157, 1706-1713 (2011)
- Kartal, B. et al. Candidatus "Anammoxoglobus propionicus" a new propionate oxidizing species of anaerobic ammonium oxidizing bacteria. Syst Appl Micrbiol 30, 39-49 (2007)
- Quan, Z. X. et al. Diversity of ammonium-oxidizing bacteria in a granular sludge anaerobic ammonium-oxidizing (anammox) reactor. Environ Microbiol 10, 3130-3139 (2008)
- Hu, B. L. et al. New anaerobic, ammonium-oxidizing community enriched from peat soil. Appl Environ Microbiol 77: 966–971 (2011)
- Schmid, M. et al. Candidatus “Scalindua brodae”, sp. nov., Candidatus “Scalindua wagneri”, sp. nov., two new species of anaerobic ammonium oxidizing bacteria. Syst Appl Microbiol 26: 529–538. (2003)
- Woebken, D. et al. A microdiversity study of anammox bacteria reveals a novel Candidatus Scalindua phylotype in marine oxygen minimum zones. Environ Microbiol 10: 3106–3119 (2008)
- Van de Vossenberg, J. et al. The metagenome of the marine anammox bacterium ‘Candidatus Scalindua profunda’ illustrates the versatility of this globally important nitrogen cycle bacterium. Environ Microbiol. doi:10.1111/j.1462-2920.2012.02774.x [Epub ahead of print] (2012)
- Schubert, C. J. et al. Anaerobic ammonium oxidation in a tropical freshwater system (Lake Tanganyika). Environ Microbiol 8: 1857–1863 (2006)
- Hamersley, M. R. et al. Water column anammox and denitrification in a temperate permanently stratified lake (Lake Rassnitzer, Germany). Syst Appl Microbiol 32: 571–582 (2009)
- Schmid, M. C. et al. Anaerobic ammonium-oxidizing bacteria in marine environments: widespread occurrence but low diversity (2007)
- Dang, H. et al. Environmental factors shape sediment anammox bacterial communities in hypernutrified Jiaozhou Bay, China. Appl Environ Microbiol 76: 7036–7047 (2010)
- Hong, Y. G. et al. Residence of habitat-specific anammox bacteria in the deep-sea subsurface sediments of the South China Sea: analyses of marker gene abundance with physical chemical parameters. Microb Ecol 62: 36–47 (2011a)
- Hong, Y. G. et al. Diversity and abundance of anammox bacterial community in the deep-ocean surface sediment from equatorial Pacific. Appl Microbiol Biotechnol 89: 1233–1241 (2011b)
- Li, M. et al. Spatial distribution and abundances of ammonia-oxidizing archaea (AOA) and ammonia-oxidizing bacteria (AOB) in mangrove sediments. Appl Microbiol Biotechnol 89: 1243–1254 (2011)
- Fuerst J. A. & Sagulenko E. Beyond the bacterium: planctomycetes challenge our concepts of microbial structure and function. Nat Rev Microbiol 9: 403–413 (2011)
- Wagner M & Horn M (2006) The Planctomycetes, Verrucomicrobia, Chlamydiae and sister phyla comprise a superphylum with biotechnological and medical relevance. Curr Opin Biotechnol 17: 241–249
- Jetten MSM, Op den Camp HJM, Kuenen JG & Strous M (2010) Description of the order Brocadiales. Bergey’s Manual of Systematic Bacteriology, Vol 4 (Krieg NR, Ludwig W, Whitman WB, Hedlund BP, Paster BJ, Staley JT, Ward N, Brown D & Parte A, eds), pp. 596–603. Springer, Heidelberg
- B. Kartal, G.J. Kuenen and M.C.M van Loosdrecht Sewage Treatment with Anammox, Science, 2010, vol 328 p 702-3
- Knight, Helen (May 7, 2010). "Bugs will give us free power while cleaning our sewage". New Scientist. Retrieved May 2010.
- van der Star WRL, Abma WR, Blommers D, Mulder J-W, Tokutomi T, Strous M, Picioreanu C, Van Loosdrecht MCM (2007) Startup of reactors for anoxic ammonium oxidation: experiences from the first full-scale anammox reactor in Rotterdam. Water Res 41:4149–4163
- Hu Z, Lotti T, Lotti T, de Kreuk M, Kleerebezem R, van Loosdrecht M, Kruit J, Jetten MSM, Kartal B (2013) Nitrogen removal by a nitritation-anammox bioreactor at low temperature. Appl Environ Microbiol. doi:10.1128/ AEM.03987-12
- van Loosdrecht MCM (2008) Innovative nitrogen removal. In: Henze M, van Loosdrecht MCM, Ekama GA, Brdjanovic D (eds) Biological wastewater treatment: principles, modelling and design. IWA Publishing, London, pp 139–155
- Siegrist H, Salzgeber D, Eugster J, Joss A (2008) Anammox brings WWTP closer to energy autarky due to increased biogas production and reduced aeration energy for N-removal. Water Sci Technol 57:383–388
- van Dongen U, Jetten MSM, van Loosdrecht MCM (2001) The SHARON((R))-Anammox((R)) process for treatment of ammonium rich wastewater. Water Sci Technol 44: 153–160 |
[I'm trying to catch up with all the news that's been released this week while I was off lecturing in Texas. This is Part 2 of a few articles just about exoplanets. Here's Part 1, Part 2, and Part 3.]
In September, astronomers announced the discovery of a planet (Kepler-16b) that orbited not one but two stars. The stars orbit each other (in what's called a binary system) and the planet circles both. This was the first such planet found doing this (out of hundreds of planets orbiting single stars discovered), which opened up the question: how rare is this kind of system? Is Kepler-16b one of a kind?
The answer appears to be no: two more such systems have just been announced! Dubbed Kepler-34b and Kepler-35b, both are gas giants, similar in size to Saturn.
The planet Kepler-34b orbits two Sun-like stars once every 289 days. The two stars (Kepler-34A and Kepler-34B; note the capital letter denoting a star versus the lower case letter denoting a planet -- which technically should be called Kepler-34(AB)b, but at some point I have to draw the line and simplify) orbit each other every 28 days. The planet Kepler-35b orbits a pair of somewhat lower-mass stars every 131 days (the stars orbit each other every 21 days).
Note that in both cases, the planets orbit their stars at distances much larger than the distances between the two stars themselves. That's not surprising to me. From far away, a circumbinary planet (literally, "around two stars") feels the combined gravity of the two stars more than either individual star, much like distant headlights on the highway look like a single light. When you're close, the two lights resolve themselves. Same thing with a planet; if it orbits much closer in, the gravity field is a bit more distorted by the individual stars. Too close, and the orbit becomes unstable and the planet can be ejected from the system entirely! But it looks like both Kepler-34b and 35b have nice, stable orbits.
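Kepler's third law gives a rough feel for those relative distances: for a given central mass, orbital separation scales as the period to the 2/3 power, so the period ratios quoted above imply the planets orbit several times farther out than the stars' mutual separation. A quick back-of-the-envelope estimate (illustrative only; it assumes the planet feels roughly the binary's combined mass):

```python
# Rough orbit-size ratio from Kepler's third law, a ∝ P^(2/3),
# assuming the planet orbits roughly the binary's combined mass.
def size_ratio(p_planet_days, p_binary_days):
    return (p_planet_days / p_binary_days) ** (2 / 3)

print(round(size_ratio(289, 28), 1))  # Kepler-34b vs. its binary -> 4.7
print(round(size_ratio(131, 21), 1))  # Kepler-35b vs. its binary -> 3.4
```

So both planets sit a few times farther out than their host binaries are wide, comfortably in the "combined headlights" regime.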
Binary stars are very common in the Milky Way: roughly half of all stars are binary, and now we know that at least three such systems have circumbinary planets. And we've only just started looking! Mind you, these planets were found using the transit method, so the orbits have to align just right from our viewpoint or else we don't see them transit. For every one transiting system we find there are many more that exist but don't transit, so we don't see them. But they're out there.
I suspect that the fraction of binary stars with planets is probably lower than for single stars, since planets forming (or moving) closer in to the binary center will get ejected. But still, even with a lower fraction we're still talking about a pool of hundreds of billions of stars, so it's likely that there are millions of circumbinary planets out there: millions of Tatooines!
And hmmmm. Kepler 34 and 35 are 4900 and 5400 light years away, respectively, making them among the more far-flung planetary systems seen. You might say that if there's a bright center to the Universe, they're the planets that it's farthest from.
I've always dreamed of standing on a hill and watching twin suns set in the west. Sadly, the wind won't blow through my hair like it did Luke Skywalker's, but that would be a small price to pay. What a view that would be!
[UPDATE: Wait a sec! Right after posting, I realized: the two planets are both gas giants, but far enough from their stars that big, terrestrial moons might be possible. So imagine that: a binary sunset with a gigantic planet looming in the sky as well! That would be incredible.]
Image credit: Lynette Cook and SDSU |
1999). Things, however, turned out differently, and this was the very first factor that led to Germany's eventual defeat. This article identifies and discusses this and other key factors that led to Germany's defeat in the First World War, as well as the reasons for and impact of the United States' entry into the war.
Even though the defeat at the Marne came very early in the war, its importance as a factor that led to Germany's defeat cannot be overstated. This is because it meant the failure of the Schlieffen Plan and a blow to any hopes of a quick victory and therefore a short war. The German force was not prepared for a lengthy war, and by eliminating the possibility of a short war the chances of winning began to decrease. This was one of General Helmuth von Moltke's biggest mistakes.
The initial strategy of the Germans was to take France before Russia could mobilize effectively, and then move by railroad to bring united and overwhelming force against Russia. This would have allowed the German forces to take on their enemies one by one. The Schlieffen Plan had not anticipated later developments such as the three-day resistance by Belgium, the fast move by the British to enter the war, and the fierce resistance by the French army. All these factors slowed Germany's progress significantly, and the effects of a lengthy war eventually wore them down.
The war put a lot of strain on the German economy, which heavily relied on external trade. ... |
COMPARED TO WHAT?
Americans know little of other nations' politics and governments. Recent surveys have revealed that many Americans are unable to identify other nations on a map, much less know something of their governmental systems. This may help to explain why many in the United States think of their governmental system as the world's best. America's electoral system has delivered governments that have presided over a period of liberty, peace, and prosperity, so what's not to like? An answer to that question requires knowledge of how the electoral systems of other constitutional democracies operate. This chapter examines America's electoral system in comparison with those of other nations and evaluates them in terms of the accountability, voter participation, deliberation, and governmental stability they produce.
America's electoral system is unique due to a combination of several unusual characteristics. First, America has the longest election campaigns in the world. It is the only major democracy that nominates party candidates through primaries, which add both months and expense to elections. No nation takes so long to select a chief executive, with American presidential campaigns—requiring primaries in dozens of states before the general election contest—often lasting up to twelve months. The length of American campaigns helps electoral politics encroach upon governing processes, as discussed in chapter i. The federalism of American electoral administration is also relatively rare in the world, and only in America do citizens elect so many statewide executive officials (the attorney general, auditor, secretary of state, and, in some states, even the secretary of agriculture) or state judges (Lijphart 1999, 189). This contributes to another |
With hectic routines, homework, and dizzying after-school schedules, there is almost no time left to unwind! As parents, we’re often overwhelmed around this time of the year, which makes it a great time to shift our focus and to make time for our kids to play.
What is Play?
Play is the common language of childhood. It is an innately driven activity through which children develop their physical, intellectual, emotional, social, and moral capabilities. These competencies are the primary building blocks for the future. Adults tend to associate play with voluntary activities chosen for fun and pleasure, but children – particularly young children – experience the world through play.
Benefits of Play
Young children learn best through play, and the advantages of socialization, imagination, and planning that play affords prepare them well for academic and social accomplishment in later years.
The playground is an environment where children can make strides in cognitive and social growth as children have the ability to team up with their peers, make decisions, and problem solve. Partaking in some form of physical exercise – big or small – before and during school helps reduce in-class inattentiveness and moodiness and improves academic performance.
These abilities that we find exercised during play are needed in the classroom, too. Play assists in the development of critical social skills such as collaboration, cooperation, negotiation, and conflict resolution. Play can aid children in understanding their environment and their role within it and supports kids in developing imaginations and discovery of their unique interests. Play helps children develop relationships and friendships – and most importantly it’s fun!
Tips for Parents
Kids need time to be kids and embrace the open-ended nature of childhood. Here are some tips for making sure your children get the most out of their play time:
- Plan out unstructured time in your schedule, as well as your child’s, so that your children have the freedom to make choices about what they want to do. Providing your child with access to open-ended materials such as blocks, tools, art materials, sand, water, and ingredients for cooking encourages play.
- Children should spend time outdoors exploring the natural world. Shipley benefits from close proximity to Bryn Mawr College and Ashbridge Park, both of which offer great outdoor spaces and opportunities for physical activity.
- Start (and finish) your child’s school day with some exercise. Shipley welcomes families to arrive at school a little early and enjoy the playground together – or do some jumping jacks in the kitchen before the bus arrives.
- Play games with your kids! Cooperative games are a great way to develop social skills. It might even be fun for you too. |
Q2. How many solution(s) of the equation 3x+2=2x-3 are there on the
i) Number Line ii) Cartesian plane
Q3. Draw the graph of the equation represented by the straight line which is parallel to the x-axis and 3 units above it.
Q4. Find the solutions of the linear equation x+2y=8, which represents a point on i) x axis ii) y-axis
Q5. For what value of c does the linear equation 2x+cy=8 have equal values of x and y as its solution?
Q6. Give the geometrical interpretations of 5x+3=3x-7 as an equation
i) in one variable ii) In two variables
Q7. Draw the graph of the equation 3x+4y=6. At what points does the graph cut the x-axis and the y-axis?
Q8. At what point does the graph of the equation 2x+3y=9 meet a line which is parallel to the y-axis, at a distance of 4 units from the origin and on the right side of the y-axis?
Q9. Express the following linear equations in the form ax + by + c = 0 and indicate the values of a, b and c in each case: (i) -2x + 3y = 6 (ii) x = 3y (iii) 2x = -5y
Q10. Find the value of k if x = 2, y = 1 is a solution of the equation 2x + 3y = k.
Q11. If the point (3, 4) lies on the graph of the equation 3y = ax + 7, find the value of a.
Q12. (i) Draw the graph of the linear equation using given Celsius for x-axis and Fahrenheit for y-axis.
F =(9/5)C + 32
(ii) If the temperature is 30°C, what is the temperature in Fahrenheit?
(iii) If the temperature is 95°F, what is the temperature in Celsius?
(iv) If the temperature is 0°C, what is the temperature in Fahrenheit and if the temperature is 0°F, what is the temperature in Celsius?
(v) Is there a temperature which is numerically the same in both Fahrenheit and Celsius? If yes, find it. |
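The conversion in Q12 can be checked with a short script (Python here, purely as an illustration of the arithmetic):

```python
def c_to_f(c):
    """Convert Celsius to Fahrenheit using F = (9/5)C + 32."""
    return (9 / 5) * c + 32

def f_to_c(f):
    """Invert the relation: C = (5/9)(F - 32)."""
    return (5 / 9) * (f - 32)

print(c_to_f(30))           # part (ii): 30 C -> 86.0 F
print(f_to_c(95))           # part (iii): 95 F -> 35.0 C
print(c_to_f(0))            # part (iv): 0 C -> 32.0 F
print(round(f_to_c(0), 2))  # part (iv): 0 F -> -17.78 C
# part (v): solve (9/5)x + 32 = x, i.e. (4/5)x = -32, so x = -40
print(c_to_f(-40))          # -40.0: the temperature that reads the same on both scales
```

The answer to part (v) falls out of the algebra in the comment: the two scales agree only at -40 degrees.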
Citations are what you find in databases and bibliographies. They provide the reader with all of the information needed to identify and find the source of information.
Citing references, or the sources of information used in research, is critical for a number of reasons. Most importantly, writers have an ethical responsibility to indicate when they have used someone else's ideas or words.
Citing references also:
When citing a reference or compiling a bibliography, there are many style choices. Check with your professor to see which style (AMA, MLA, etc.) you should use.
Refer to this page for more specialized information on citation styles.
Image source: Reasonist. Used with permission.
A citation describes a source by providing information about that source (book, article, web page, etc.) in a standard format. It tells:
Article Citation Example
Book Citation Example
Book Chapter Citation Example |
by Brian Thomas, M.S. *
Reinforcing strap structures are found throughout the living world, holding critical biological systems together. Consider human hip structure. The pelvic girdle comprises the strongly interconnected ilium, ischium, and sacrum. These are gathered into a hoop that serves as the ideal structure for anchoring the thoracic trunk to the legs. During walking or running, more than the weight of the whole body is distributed over the breadth of this biological belt. Specific ligaments and tendons attach to points on the pelvic girdle, making bipedal locomotion not just operable, but efficient.
The cardiac muscle fibers (contractile tissues of the human heart) are connected to a tough, fibrous, protein-rich belt that is shaped in a curled figure eight, wrapped around the heart for precise muscle attachments. Coordinated to the timing of the wavelike contraction pulse, the size, strength, shape, and placement of the heart's belt together produce smooth, consistent blood flow. Without its belt, this organ would be a quivering muscular blob, incapable of maintaining blood circulation. If the belt were made of more rigid material like cartilage or bone, the heart would tear itself apart after years of pumping and abrading against such a stiff structure.
Certain cells are anchored tightly to one another with rows of protein "buttons" that form criss-crossing belts called "tight junctions." These serve to isolate specific membrane proteins to critical regions, as well as force outside substances to enter the cell bodies instead of leaking between the cells. Without tight junctions, our brain, testes, kidneys, and intestines would not function.
A new belt has just been discovered, the smallest yet.1 Bacterial cell walls are held together by proteins connected to tough sugar molecules. After these were visualized for the first time by high-resolution cryotomography, the lead author of the recent study commented to the media, "What we saw were long skinny tubes wrapping around the bag [bacterial cell wall] like the ribs of a person or a belt around the waist."2
Biological belts can be observed forming according to biological instructions and machinery, but they have objectively discernible features that demand a more adequate causative explanation than just "natural processes." In many cases, they are part of irreducibly complex systems. Removing just one breaks both its own system and the other systems that depend on it. Gradual, bit-by-bit accretion of parts could not have formed these belts, because there is no known mechanism that preserves, let alone adds to, such partially formed, effectively inoperative systems.
Many naturalists insist that because operational science works by observation and experimentation, origins science must also employ the same techniques. However, a broader suite of techniques is routinely and appropriately used in the scientific study of past events, as is portrayed on popular forensics television shows. Repeatable, empirical tests can answer the question "How does this biological belt work?" but forensic inferences are required to answer the question "How did the first of this kind of biological belt originate?" The operations of biological processes are the effects of preceding natural causes. But this does not also demand natural causes for the origin of irreducibly complex structures. In fact, since natural processes cannot adequately justify their existence, a supernatural cause is by far the most reasonable.
The parts of the first pelvic girdle, heart, tight junctions, and bacterial cell walls must have been formed at the same time, even as Psalm 33:9 states: "For he spake, and it was done; he commanded, and it stood fast." Any substandard versions of these belts would render their whole organism immobile or dead. These and many other biological belts must have been instantly emplaced in the beginning, just as Genesis describes.
- Gan, L., S. Chen, and G. J. Jensen. 2008. Molecular organization of Gram-negative peptidoglycan. Proceedings of the National Academy of Sciences. 105 (48): 18953-18957.
- Caltech Researchers Use Electron Cryotomography to Get First 3-D Glimpse of Bacterial Cell-Wall Architecture. California Institute of Technology press release, November 17, 2008.
* Mr. Thomas is Science Writer.
Cite this article: Thomas, B. 2009. Biological Belts. Acts & Facts. 38 (1): 13. |
Shells of fossilised plankton
Research, led by The Open University and published today in the scientific journal Nature, is helping scientists understand how Earth’s carbon cycle may respond to the current, human-induced, interval of global warming. It has uncovered the likely cause of repeated episodes of natural global warming 50 million years ago in the Eocene epoch, when Earth last experienced the elevated temperatures and atmospheric carbon dioxide levels predicted for the end of this century.
Dr Philip Sexton, Lecturer in The Open University’s Faculty of Science, led the research, working with the Integrated Ocean Drilling Program to obtain sediment cores from beneath the deep-sea floor. Chemical analyses of the microscopic shells of fossilised plankton hosted within these sediments allowed Dr Sexton and his colleagues to establish the connections between global climate change and the carbon cycle during the warm Eocene ‘greenhouse’.
The beginning of the Eocene epoch was marked by the most dramatic natural global warming event ever known: the PETM (Paleocene-Eocene Thermal Maximum). This episode of warming lasted for about 170,000 years, with global average temperature increasing by around 6˚C and the deep sea experiencing a dramatic depletion in oxygen levels. These environmental changes were significant, not least because they led to the extinction of half of all deep-sea species, but also because they caused a major redistribution of many land-dwelling species, including early primates.
It has long been accepted that the intense global warming across the PETM (known to geologists as a ‘hyperthermal’) was the result of massive release of carbon dioxide into the atmosphere. “Scientists believe that extreme PETM global warming was caused by a geologically rapid release of carbon dioxide from the huge carbon reservoirs in rocks deep below the Earth’s surface, in a manner similar to the current transfer of buried ‘fossil fuel’ carbon from rocks into our atmosphere and oceans” said Dr Sexton. “Recovery of Earth’s temperatures was likely achieved by subsequent burial of carbon back into sedimentary rocks over a long period of time, controlled by rather slow rock weathering reactions”.
Recent discoveries of a number of additional, but more modest, hyperthermals during the Eocene have led scientists to assume that they, too, were triggered by the same thing: carbon release from ‘fossil fuel’ reservoirs into Earth’s atmosphere.
Dr Sexton and his team have discovered that the relatively modest global warming events were at least 3 times more frequent than was first thought. However, with average durations of around 40,000 years, these warming episodes were much shorter-lived than the PETM, with their recoveries being particularly rapid. “We believe that the mechanisms driving these more modest, relatively short-lived hyperthermals were different from those responsible for the PETM” said Dr Sexton.
The comparatively rapid recovery of the more modest hyperthermals implicates redistribution of carbon among the readily exchangeable, active reservoirs at Earth’s surface (the ocean, land biosphere and atmosphere). “We think that large amounts of carbon dioxide were repeatedly released into the atmosphere, and subsequently rapidly taken back up again, by the ocean,” said Dr Sexton. Specifically, they implicate a much larger-than-modern, and dynamic, oceanic reservoir of dissolved organic carbon.
This study shows that past climates of high global temperatures and atmospheric carbon dioxide levels, similar to levels we are likely to experience by the end of this century, were more unstable than previously assumed. This climatic instability appears to have arisen from a more erratic shuttling around of carbon between its various reservoirs at Earth's surface.
Notes to editors
The researchers are Philip Sexton (The Open University, Scripps Institution of Oceanography, California and National Oceanography Centre, Southampton (NOC)), Richard Norris (Scripps), Paul Wilson, Heiko Pälike, Clara Bolton and Samantha Gibbs (NOC), and Thomas Westerhold and Ursula Röhl (University of Bremen).
This research used samples and data provided by the International Ocean Drilling Program (IODP). The research was sponsored by the US National Science Foundation and supported by the European Commission, the Leverhulme Trust, the UK’s Natural Environment Research Council (NERC), and the DFG-Leibniz Center for Surface Process and Climate Studies at the University of Potsdam.
Phil Sexton is a member of The Open University’s Centre for Earth, Planetary, Space & Astronomical Research (CEPSAR). The strength and excellence of the research supported by the Centre was acknowledged in the UK’s last Research Assessment Exercise (RAE 2008), with 70% of its research deemed internationally excellent and world-leading, achieving 3*/4* rankings. CEPSAR website: http://cepsar.open.ac.uk. |
If all of the elements of this lesson plan are employed, students will develop the following powers, skills, and understanding:
Students will be able to read the text closely to understand its structure, characters, vocabulary, and subtleties.
Students will be able to understand the significant cultural influence of Dr. Jekyll and Mr. Hyde, both in its time and in subsequent centuries.
Students will be able to consider how the life of Robert Louis Stevenson and the historical era in which he lived influenced his writing.
Students will be able to observe and analyze the way Stevenson builds suspense and recreate those techniques in their own work. |
Describe a situation in which you or someone else might be learning a new task or new information (for example, in school, at work, or in a social setting).
Explain how these strategies will help, making sure to include references to encoding and retrieval as well as to relevant theories related to these concepts.
An example of a situation in which someone might be learning a new task could include many things, but let's say someone is learning to play the game of basketball. To learn this game, one might start by watching others play the game. After having seen how the game is played, the person could concentrate on one task at a time. So, for instance, first they could start with learning how to dribble the ball by bouncing it around on a basketball court. It would of course be helpful to have some kind of coaching, but by imitating what one has seen, one can use this memory method to learn how to dribble a ball. Imitation, then, is one method one could use to learn how to dribble a basketball.
Another way one could improve their memory is to use elaborative rehearsal. This is different from maintenance rehearsal, in which one simply repeats the information over and over. In elaborative rehearsal the person forms associations with other concepts in order to remember something. So let's say that someone is ...
Memory strategies are reinforced. |
Bromine vs Bromide | Br vs Br–
Except for the noble gases, the elements in the periodic table are not stable. Therefore, elements react with other elements to gain a noble gas electron configuration and achieve stability. Likewise, bromine has to gain an electron to achieve the electron configuration of the noble gas krypton. All metals react with bromine, forming bromides. Bromine and bromide have different physical and chemical properties due to the change of one electron.
Bromine is an element in the periodic table denoted by Br. It is a halogen (group 17) in the 4th period of the periodic table. The atomic number of bromine is 35; thus, it has 35 protons and 35 electrons. Its electron configuration is written as [Ar] 4s2 3d10 4p5. Since the p sublevel needs 6 electrons to reach the krypton noble gas electron configuration, bromine has the ability to attract an electron. Bromine has a high electronegativity, about 2.96 on the Pauling scale. The atomic weight of bromine is 79.904 amu. At room temperature, bromine exists as a diatomic molecule (Br2), a red-brown liquid. Bromine has a melting point of 265.8 K and a boiling point of 332.0 K. Among all the bromine isotopes, Br-79 and Br-81 are the most stable; 79Br is present at 50.69% and 81Br at 49.31%. Bromine is only slightly soluble in water but dissolves well in organic solvents like chloroform. Bromine has 7, 5, 4, 3, 1, and -1 oxidation states. The chemical reactivity of bromine lies between that of chlorine and iodine: bromine is less reactive than chlorine but more reactive than iodine. Bromine produces the bromide ion by taking up one electron, so bromine participates in ionic compound formation easily. In nature, bromine actually exists as bromide salts rather than Br2. Bromine can oxidize the anions of the elements located below it in the periodic table, but it cannot oxidize chloride to give chlorine. Bromine can be produced by treating bromide-rich brines with chlorine gas, or bromine gas can be produced by treating HBr with sulfuric acid. Bromine is widely used in industry and chemical laboratories. Bromide compounds are used as gasoline additives and as pesticides. Bromine can also be used as a disinfectant in water purification.
Bromide is the resulting anion when bromine abstracts an electron from another, more electropositive element. Bromide is represented by the symbol Br–. Bromide is a monovalent ion with a -1 charge. Therefore, it has 36 electrons and 35 protons. The electron configuration of bromide is [Ar] 4s2 3d10 4p6. Bromide exists in ionic compounds such as sodium bromide, calcium bromide and HBr. Bromide also exists naturally in water sources.
What is the difference between Bromine and Bromide?
• Bromide is the reduced form of bromine. Bromide has 36 electrons compared to 35 electrons of bromine, and both have 35 protons. Therefore, bromide has a -1 charge whereas bromine is neutral.
• Bromine is more chemically reactive than Bromide.
• Bromide has achieved the krypton electron configuration and is therefore more stable than the bromine atom.
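As a quick check on the quoted atomic weight, the isotope abundances given above can be weighted by the standard isotopic masses (approximately 78.918 u for Br-79 and 80.916 u for Br-81; these mass values are supplied here for illustration, not taken from the article):

```python
# Natural abundances quoted in the article, and approximate isotopic masses (u)
abundances = {79: 0.5069, 81: 0.4931}
masses = {79: 78.9183, 81: 80.9163}

# The abundance-weighted average reproduces bromine's atomic weight of ~79.904 u
atomic_weight = sum(abundances[a] * masses[a] for a in abundances)
print(round(atomic_weight, 3))  # -> 79.904
```

This is exactly how tabulated atomic weights are obtained for any element with more than one stable isotope.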
The discovery of the Mauer 1 mandible shows that ancient humans were present in Germany at least 600,000 years ago. The oldest complete hunting weapons found anywhere in the world are discovered in a coal mine in Schöningen, where three 380,000-year-old wooden javelins about 2 meters long are unearthed. The Neander Valley is the location where the first-ever non-modern human fossil is discovered; the new species of human is named Neanderthal man. The Neanderthal 1 fossils are known to be 40,000 years old. Evidence of similarly dated modern humans is found in caves in the Swabian Jura near Ulm. The finds include 42,000-year-old bird bone and mammoth ivory flutes, which are the oldest musical instruments ever found; the 40,000-year-old Ice Age Lion Man, which is the oldest uncontested figurative art ever discovered; and the 35,000-year-old Venus of Hohle Fels, which is the oldest uncontested human figurative art ever discovered. The Nebra sky disk is a bronze artifact created during the European Bronze Age, attributed to a site near Nebra, Saxony-Anhalt. It is part of the UNESCO Memory of the World Register.
The Deutscher Bund (German Confederation) is a loose association of 39 German states in Central Europe that the Congress of Vienna creates in 1815 to coordinate the economies of separate German-speaking countries and to replace the former Holy Roman Empire. The Confederation is weak and ineffective as well as an obstacle to German nationalist aspirations. It collapses due to the rivalry between Prussia and Austria, warfare, the 1848 revolution, and the inability of the multiple members to compromise. It dissolves with Prussian victory in the Seven Weeks’ War and the establishment of the North German Confederation in 1866. The dispute between the two dominant member states of the confederation, Austria and Prussia, over which has the inherent right to rule German lands ends in favor of Prussia after the Austro-Prussian War in 1866 and the collapse of the confederation. This results in the creation of the North German Confederation with a number of south German states remaining independent although allied first with Austria (until 1867) and subsequently with Prussia (until 1871) after which they became a part of the new German Empire.
The obsolete units of measurement of German-speaking countries consist of a variety of units with varying local standard definitions. Some of these units are still used in everyday speech, and even in stores and on street markets, as shorthand for similar amounts in metric units. Some customers ask for ein Pfund (one pound) of something when they want exactly 500 grams. Some obsolete German units have names similar to units that had been traditionally used in other countries and that are still used in the United Kingdom and the United States. Almost every town had its own unit definitions prior to German metrication. Towns would often post the local definitions on a wall of the city hall. The front wall of the old Rudolstadt city hall (still standing) has two marks which show the Rudolstädter Elle. Supposedly, by 1810 in Baden alone there were 112 different standards for the Elle, which is the distance between elbow and fingertip. The smallest known German Elle is 402 mm and the longest 811 mm. The metric system became compulsory on 1 January 1872 in Germany.
13 Jul 2015 |
AT THE BOTTOM OF THE PAGE THERE ARE 3 ACTION ASSIGNMENTS (ESSAYS) TO BE COMPLETED AND EMAILED TO ME BY THE END OF THE WEEK.
Lesson 2 - CHAPTER 2 – Social Research
Study Assignment: Read Chapter 2 – Sociology by Jon M. Shepard
After careful study of this chapter, you will be able to:
- Identify major nonscientific sources of knowledge; then explain why science is a superior source of knowledge.
- Discuss cause and effect concepts and apply the concept of causation to the logic of science.
- Differentiate the major quantitative research methods used by sociologists.
- Describe the major qualitative research methods used by sociologists
- Explain the steps sociologists use to guide their research.
- Describe the role of ethics in research.
- State the importance of reliability, validity, and replication in social research.
Below are highlights from Chapter 2:
1. Four major nonscientific sources of knowledge are intuition, common sense, authority, and tradition. Nonscientific sources of knowledge often provide false or misleading information. (see page 39)
A. Intuition is quick and ready insight that is not based on rational thought. Example – the decision against dating a particular person because “it feels wrong” is a decision based on intuition.
B. Common sense refers to opinions that are widely held because they seem so obviously correct. The problem with common sense ideas is that they are often wrong. (see page 39)
C. An authority is someone who is supposed to have special knowledge that we do not have. A king believed to be ruling by divine right is an example of an authority. Reliance on authority is often appropriate. (see page 39)
D. Tradition. Despite evidence to the contrary, it is traditional to believe that an only child will be self-centered and socially inept. In fact, most Americans still wish to have two or more children to avoid these alleged personality traits.
2. Science as a source of knowledge:
A. Objectivity – The principle or rule stating that scientists are expected to prevent their personal biases from influencing the interpretation of their results. Can scientists really be objective? Sometimes. (see page 40)
B. Verifiability – a principle or rule of science by which any given piece of research can be duplicated by other scientists.
3. Causation and the Logic of Science:
A. What is causation? The idea that events occur in a predictable, nonrandom way and that one event leads to another.
B. What is a variable? A variable is a characteristic such as age, education, or social class that is subject to change. It occurs in varying degrees. (see pages 41-42)
C. What is correlation? Correlation exists when a change in one variable is associated with a change (either positively or negatively) in the other.
4. Experiment – An experiment takes place in a laboratory in an attempt to eliminate all possible contaminating influences (see page 43 & 44).
5. Quantitative Research Methods:
A. Survey Research – This is when people are asked to answer a series of questions. This research method is the most widely used among sociologists. It is ideal for studying large numbers of people.
5. Qualitative Research Methods
A. Field Research – A research approach for studying aspects of social life that cannot be measured quantitatively and that are best understood within a natural setting.
B. Case Study – The most popular approach to field research is the case study: a thorough investigation of a single group, incident, or community (page 48).
6. Participant observation – a researcher becomes a member of the group being studied. (Page 49)
7. Subjective Approach – studies an aspect of human social behavior by ascertaining the interpretations of the participants themselves. A prominent example of the subjective approach is ethnomethodology, a development in microsociology that attempts to uncover taken-for-granted social routines.
8. Ethnomethodology is the study of the processes people develop and use in understanding the routine behavior expected of themselves and others in everyday life. (page 52)
9. A model for doing research:
a. Identify the problem
b. Reviewing the literature
c. Formulating hypotheses
d. Developing a Research Design
e. Collecting Data
f. Analyzing Data
g. Stating Findings and Conclusions
h. Using the Research Model
Realistically, do sociologists follow these steps? Yes, to some degree.
10. Ethics in Social Research: The formal code of ethics for sociologists covers a variety of important areas beyond research, including relationships with students, employees, and employers. In broad terms, the code of ethics is concerned with maximizing the benefits of sociology to society and minimizing the harm that sociological work might create. Of importance in the present context are the research-related aspects of the code. Do ethical concerns make research harder? Yes, but it is the researcher’s responsibility to decide when a particular action crosses an ethical line – a decision not always easy to make, because moral lines are often blurred (page 56).
11. Reliability – a measurement technique must yield consistent results on repeated applications.
12. Validity – exists when a measurement technique actually measures what it is designed to measure.
13. Replication – The duplication of the same study to ascertain its accuracy is closely linked to both reliability and validity, in that reliability and validity problems unknown to the original researchers are likely to be revealed as subsequent social scientists repeat the research. It is partially through replication that scientific knowledge accumulates and changes over time. (page 58)
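The correlation concept described under "Causation and the Logic of Science" can be made concrete with a small computation (Python here, purely as an illustration; the two variables and their values are invented for the example):

```python
# Pearson correlation between two made-up variables:
# years of education and a hypothetical income score.
education = [10, 12, 12, 14, 16, 16, 18, 20]
income    = [28, 34, 30, 40, 50, 47, 60, 66]

n = len(education)
mean_e = sum(education) / n
mean_i = sum(income) / n

# Covariance and standard deviations, computed by hand
cov = sum((e - mean_e) * (i - mean_i) for e, i in zip(education, income)) / n
sd_e = (sum((e - mean_e) ** 2 for e in education) / n) ** 0.5
sd_i = (sum((i - mean_i) ** 2 for i in income) / n) ** 0.5

r = cov / (sd_e * sd_i)
print(round(r, 2))  # close to +1: a strong positive correlation
```

A value of r near +1 means the two variables rise together, near -1 means one falls as the other rises, and near 0 means no linear association; as the chapter stresses, correlation alone does not establish causation.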
Action Assignments – Answer the following questions. Each response must be at least 100 words.
No. 1: Differentiate the major quantitative research methods used by sociologists.
No. 2: Explain the steps sociologists use to guide their research.
No. 3: State the importance of reliability, validity, and replication in social research.
The first step is to select a research topic. Sociologists have different ways of choosing a topic. Many sociologists choose a topic based on a theoretical interest they may have. It could be because funding is already available, or they follow their own curiosity about a particular topic. Sometimes a growing social problem has been brought to their attention by the media or other sociologists (Henslin, 2014). Step two in the research model is to define the problem, to determine what is to be learned from the research. To develop a good, researchable question, the question should focus on a certain area or problem (Henslin, 2014). The more specific the better. The third step is to review the literature to find out what has already been published on the chosen topic and to identify any gaps in knowledge (Henslin, 2014). Reading literature on the topic may help to stir up ideas about what has not been answered. This step takes time to complete thoroughly. A thorough literature review is necessary to determine the new study’s possible contribution. The fourth step is to formulate a hypothesis. A hypothesis is a statement of what you expect to find based on the theory (Henslin, 2014). You must set operational definitions, which are “precise ways to measure concepts,” in the hypothesis (Henslin, 2014). Sociological research examines the relationship between variables, or whether an independent variable affects a dependent variable. The fifth step is to choose a research method. Sociologists use many different designs and methods to study society and social behavior. Sociological research methods include the case study, survey, observational, correlational, experimental, and cross-cultural methods, as well
NASA’s Curiosity rover has detected both methane in Mars’s atmosphere and carbon-bearing organic compounds in its rocks. But it’s unclear where these molecules come from — or whether there’s any biological connection.
Samples taken from two drill holes on Mars support the idea that Mars lost a whole lot of water fairly early in its history.
Mission planners have devised an unusual strategy for protecting orbiting spacecraft when Comet Siding Spring passes the Red Planet in October 2014.
NASA’s MAVEN mission has discovered a new population of particles in Mars’s upper atmosphere. It’s also found a plume of particles escaping from the planet’s poles, confirming atmospheric loss is happening today.
NASA’s MAVEN spacecraft has detected dust high in Mars’s atmosphere and auroras across the planet’s northern hemisphere.
Six years from now, there will be a new NASA robot heading to the Red Planet: the Mars 2020 rover. On July 31st mission planners unveiled the rover’s seven scientific instruments, which will pave the way for human exploration of Mars.
Scientists have detected glass in Martian craters, created by the fierce heat of impacts that melted the Red Planet’s surface.
On July 5th, the Moon has a remarkably close brush with Mars, followed two nights later by a similar rendezvous with Saturn. |
Climate change is a major factor in reducing the variety of life on earth. With more species of plants and animals under threat than ever before, action is needed to stop the loss of nature and to help eco-systems adapt to climate change.
“I go to nature to be soothed and healed, and to have my senses put in tune once more,” wrote the American author John Burroughs, who was one of the first people to make conscious efforts to protect nature in the early part of the last century. Many people continue to enjoy its relaxing and spiritual benefits, but nature plays a much more important role. It provides the fundamental elements that we need to live – our sources of food, drinkable water, breathable air, fibres and fuel.
Yet across the world, ‘biodiversity’ – the name given to the number and variety of animals and plants in a given area – is being lost faster than ever before. According to the most recent estimates, two-thirds of natural eco-systems are in decline. The World Conservation Union’s (IUCN) annual ‘Red list’ of animals and plants under threat recently showed the highest ever number of species facing extinction.
These changes have grave implications for economies, security and health. In Europe we are seeing the signs of collapsing fish stocks, damage to soil, flood damage, and disappearing wildlife. In other parts of the world, the losses are happening even more quickly, with vast areas of tropical rainforests, for example, disappearing every year.
Climate change is, of course, not the only cause. The expansion of cities, deforestation, industrial farming practices and the over-exploitation of natural resources are all playing a part. However, the gradual warming of our climate that has taken place in the last century is already having an impact on natural systems and this is set to get more dramatic in the future. Temperature rises and changes in rainfall patterns and weather systems are affecting where animals and plants live and grow.
“We are seeing startling changes in growing seasons," says Jacqueline McGlade, Executive Director of the European Environment Agency. "Many species are already on the move, expanding northwards as temperatures rise.”
Europe's most sensitive natural ecosystems are to be found in mountain regions, coastal zones, the Arctic and various parts of the Mediterranean. Animals and plants are likely to have particular difficulty in adapting to climate change in all of these areas.
Conservation action is important as biodiversity loss adds to the effects of climate change. The more degraded eco-systems become, the less able the planet's natural defence systems are to cope with the impact of rising temperatures and more extreme weather. One example of this was Hurricane Katrina, which hit the Gulf Coast of the United States in 2005 and was one of the worst natural disasters ever to strike the country. The loss of coastal marshlands that used to buffer New Orleans from flooding and storm surges worsened the impact of the storm on the city.
Forests, oceans and other natural areas also play an important role in slowing global warming as they absorb carbon dioxide (CO2), the main gas responsible for climate change, from the atmosphere. The loss of large areas of tropical rainforests and the worsening health of many oceans mean that this natural function is being disrupted, and that this in turn is accelerating the rate of global warming.
Many people have realised that preserving natural systems is important, and most governments around the world are parties to the UN Convention on Biological Diversity, under which they committed in 2002 to significantly slowing the loss of biodiversity by 2010. Member States of the European Union went further, agreeing in 2001 to halt biodiversity loss entirely by 2010. However, further efforts will be needed at all levels to achieve these targets in sectors such as agriculture, regional development, energy, transport and trade.
In the EU, the main action to protect biodiversity is the creation of the Natura 2000 network, a joined-up network of conservation areas. The links between these areas must be strengthened, and future actions will need to plan for the effects of climate change on natural habitats. Appropriate management of the wider landscape, in ways that allow for the movement and dispersal of species, will be essential to complement the network.
Species threatened – 42% of Europe's native mammals, 43% of birds, 45% of butterflies, 30% of amphibians, 45% of reptiles and 52% of freshwater fish are facing extinction. Some 800 types of plants in Europe are also at risk.
Land quality dropping – since the 1950s, Europe has lost more than half of its wetlands and most high-nature-value farmland.
Less pristine land – only 1-3% of Western Europe's forests can be classed as "undisturbed by humans".
Barren seas – most major fish stocks are below safe biological limits.
Large-scale destruction – since the late 1970s, an area of tropical rain forest larger than the EU has been destroyed. An area equivalent to the size of France is destroyed every 3-4 years.
Rising losses – extinction rates are now around 100 times greater than those shown in fossil records and are projected to accelerate further. Globally, an estimated 34,000 plant and 5,200 animal species face extinction. Of the species that have been assessed, one in four mammals, one in eight birds, one third of all amphibians and 70% of plants are in danger.
Source: The United Nation’s Millennium Ecosystem Assessment (MEA) and the World Conservation Union’s (IUCN) Red List
For more information:
The European Commission’s biodiversity and climate change pages (links to the latest issue of the Natura 2000 Newsletter dedicated to biodiversity and climate change) |
It is well established that a student’s reading proficiency level in elementary school is a good predictor of high school graduation success. The lower the reading level, the more likely it is that the student will not graduate on time. Against this background, it is sobering that many U.S. students reach high school without the reading and comprehension skills they need. According to NAEP data, in 2011 about a third (33 percent) of 4th-graders were reading at a below-basic level; among 8th- and 12th-grade students, the percentage stuck at the below-basic reading level had dropped, but only to about 25 percent. Many of these students drop out; many go on to earn a diploma, but enter the work world singularly unprepared to earn a living.
What is to be done? Certainly, intensive remediation is part of the answer, but so are practice and motivation and interest. The challenge for struggling readers at the high school level is hard to overstate; by the time they enter high school, they often display a negative and despairing attitude toward school that has been hardened by years of failure. Furthermore, most high school teachers are not trained in literacy instruction, a specialized skill which is theoretically the purview of early elementary school. Indeed, for many urban teachers, motivating kids just to come to school is the major challenge.
How do we motivate these kids, who sometimes exhibit stubborn resistance to reading or to any other kind of schoolwork? One effective strategy is to make the purpose of reading as interesting and obvious as possible. For many youngsters, that means access to high-quality Career and Technical Education (CTE).
It seems commonsensical that kids who are not academically oriented (not a crime, by the way) could be motivated to learn if they see and understand the relationship of that learning to their real-world aspirations. This is one of the reasons that kids in CTE programs tend to complete high school and enjoy post-secondary success in occupations, training, and education at greater rates than their comparable peers.
It is now fairly well established that CTE -- a sort of hybrid of vocational education and rigorous academics -- is a promising strategy offering a genuine pathway to academic and career success for turned-off students. Until the last decade or so, however, the role of CTE in building literacy skills has been less well known. It is a natural fit, really. Indeed, illiteracy is simply not an option for CTE students. Today’s workplace requires the ability to read and absorb technical manuals, understand and program computers, write and respond to memos on technical and professional matters, and interpret tables of instructions. In fact, CTE texts can contain very difficult content, on a par with or more difficult than traditional academic courses.
Accordingly, the CTE community has incorporated literacy strategies into its programs in many areas. Some states have embedded literacy learning approaches reaching back into the elementary grades. Other states have recognized the need for teachers to be trained to, basically, teach reading in CTE courses and have prepared courses and instructional materials for them. Others have made literacy and reading in a CTE context a center of state concern.
Despite all the activity, “literacy-in-CTE” is only just beginning to be the subject of research. A recently published Cornell University study found that combining instructional literacy strategies with activities in a CTE context resulted in significant improvements in students’ reading ability and their grades. The daunting, highly technical terminology of so many CTE texts became “manageable” for struggling students as a result of teacher implementation of these strategies.
This makes a huge amount of sense, really, when one considers that domain knowledge is such an enormous factor in any student’s reading success. That is, the learning of CTE students is "contextualized": students who are interested in a subject are taught about it, and the more they learn, the more complex the text they are able to read about it.
One can only hope that the education reform community will eventually replace the failed mantra of “college for all” with the equally ambitious (but vastly more sensible) mantra of “multiple pathways to success” – and, in so doing, the successful literacy strategies being field tested in CTE will extend to the broader population of struggling students who need them desperately.
- Randall Garton |
The Chauci (German: Chauken, and identical or similar in other regional modern languages) were an ancient Germanic tribe living in the low-lying region between the Rivers Ems and Elbe, on both sides of the Weser and ranging as far inland as the upper Weser. Along the coast they lived on artificial hills called terpen, built high enough to remain dry during the highest tide. A dense population of Chauci lived further inland, and they are presumed to have lived in a manner similar to the lives of the other Germanic peoples of the region.
Their ultimate origins are not well understood. In the Germanic pre-Migration Period (i.e., before c. 300 AD) the Chauci and the related Frisians, Saxons, and Angles inhabited the Continental European coast from the Zuyder Zee to south Jutland. All of these peoples shared a common material culture, and so cannot be defined archaeologically. The Chauci originally centered on the Weser and Elbe, but in c. AD 58 they expanded westward to the River Ems by expelling the neighboring Ampsivarii, whereby they gained a border with the Frisians to the west. The Romans referred to the Chauci living between the Weser and Elbe as the 'Greater Chauci' and those living between the Ems and Weser as the 'Lesser Chauci'.
The Chauci entered the historical record in descriptions of them by classical Roman sources late in the 1st century BC in the context of Roman military campaigns and sea raiding. For the next 200 years the Chauci provided Roman auxiliaries through treaty obligations, but they also appear in their own right in concert with other Germanic tribes, opposing the Romans. Accounts of wars therefore mention the Chauci on both sides of the conflict, though the actions of troops under treaty obligation were separate from the policies of the tribe.
The Chauci lost their separate identity in the 3rd century when they merged with the Saxons, after which time they were considered to be Saxons. The circumstances of the merger are an unsettled issue of scholarly research.
Society and life
The Germans of the region were not strongly hierarchical. This had been noted by Tacitus, for example when he mentioned the names of two kings of the 1st century Frisians and added that they were kings "as far as the Germans are under kings". Haywood (Dark Age Naval Power, 1999) says the Chauci were originally neither highly centralised nor highly stratified, though they became more so after 100 AD. Yorke (The Conversion of Britain c.600–800, 2006), speaking of the 5th century, describes the 'Continental Saxons' (which then included the Chauci) as having powerful local families and a dominant military leader.
Writing in AD 79, Pliny the Elder said that the Germanic tribes were members of separate groups of people, suggesting a distinction among them. He said that the Chauci, Cimbri and Teutoni—the people from the River Ems through Jutland and for some distance inland—were members of a group called Ingaevones (a "Cimbri" people were also given as members of a different group, and this is likely a different people).
Tacitus, writing in AD 98, described the inland, non-coastal Chauci homeland as immense, densely populated, and well-stocked with horses. He was effusive in his praise of their character as a people, saying that they were the noblest of the Germans, preferring justice to violence, being neither aggressive nor predatory, but militarily capable and always prepared for war if the need arose.
Pliny (AD 23–79) had visited the coastal region and described the Chauci who lived there. He said that they were "wretched natives" living on a barren coast in small cottages (or huts) on hilltops, or on mounds of turf built high enough to stay dry during the highest tide (i.e., terpen). They fished for food, and unlike their neighbors (i.e., those living inland, away from the coast) they had no cattle, and had nothing to drink except rainwater caught in ditches. They used a type of dried mud (i.e., "surface peat") as fuel for cooking and heating. He also mentioned their spirit of independence, saying that even though they had nothing of value, they would deeply resent any attempt to conquer them.
Classical Roman history
The record is incomplete. The bulk of historical information about the Chauci is from the Annals of Tacitus, written in 117. Many parts of his works have not survived, including an entire section covering the years AD 38–46, as well as the years after AD 69.
The earliest mention of the Chauci is from 12 BC and suggests that they were assisting other Germanic tribes in a war against the Romans. Drusus campaigned against those Germans along the lower Rhine, and after devastating the lands west and north of the Rhine he won over (or defeated or intimidated) the Frisians. He was in the process of attacking the Chauci when his vessels were trapped by an ebb tide. Drusus gave up the attack and withdrew.
Aftermath of Teutoburg Forest, c. 15
The Germans under Arminius had destroyed 3 Roman legions under Varus at the Battle of the Teutoburg Forest in AD 9. The Romans recoiled at first but then Germanicus initiated destructive campaigns against those Germans whom the Romans blamed for their defeat. The Chauci were not among them, and were said to have promised aid, and were associated with the Romans in "military fellowship". However, in defeating Arminius' own tribe (the Cherusci) the Romans were unable to capture or kill Arminius, who escaped. There were Chauci among the Roman auxiliaries, and they were rumored to have allowed the escape. In one of the campaigns a Roman fleet (probably riverine, not ocean-going) was broken up by a storm, causing many casualties. Germanicus himself managed to survive by reaching the lands of the Chauci, who provided him with a safe haven.
A parenthetical note concerns the Ampsivarii. They had not supported the German cause led by Arminius in 9 AD and had been ostracized as a result. The Chauci had suffered no such disaffection from the other Germanic tribes in the aftermath of Teutoburg Forest, nor had they alienated the Romans. Many years later, c. AD 58, the Chauci seized an opportunity to expel the Ampsivarii and occupy their lands at the mouth of the River Ems, whereby they gained a border with the Frisians to the west.
Roman war against Gannascus, c. 47
In AD 47 (and perhaps for some time earlier), the Chauci along with the Frisians were led by a certain Gannascus of the Canninefates. They raided along the then-wealthy coast of Gallia Belgica (i.e., the land south of the Rhine and north of the Rivers Marne and Seine), and the Chauci made inroads into the region that would later become the neighbouring Roman province of Germania Inferior, in the area of the Rhine delta in what is now the southern Netherlands.
Corbulo was made the local Roman military commander. He successfully engaged the Germans on both land and water, occupied the Rhine with his triremes and sent his smaller vessels up the estuaries and canals. The Germanic flotilla was destroyed in a naval engagement, Gannascus was driven out, and Frisian territory was forcibly occupied.
A negotiation between the Romans and Gannascus was arranged under the auspices of the 'Greater Chauci', which the Romans used as an opportunity to assassinate their opponent. The Chauci were outraged by the act of bad faith, so the emperor Claudius forbade further attacks on the Germans in an effort to ease tensions, and the Romans withdrew to the Rhine.
Batavian Revolt, c. 69
In AD 69 the Batavi and other tribes rose against Roman rule in the Revolt of the Batavi, becoming a general uprising by all the Germans in the region. Led by Civilis, they inflicted huge casualties on the Romans, including the destruction of a Roman fleet by a Germanic one off the North Sea coast. Led by Cerialis, the Romans gave as good as they had gotten, ultimately forcing a humiliating peace on the Batavi and stationing a legion on their territory.
Both the Chauci and the Frisians had auxiliaries serving under the Romans, and in a siege and assault by Civilis at Colonia Claudia Ara Agrippinensis (at modern Cologne), a cohort of Chauci and Frisians had been trapped and burned. The Chauci had supported Civilis in their own name, providing him with reinforcements.
The Chauci were one of the most prominent early Germanic sea raiders. They are probable participants in the Germanic flotilla that was destroyed by Drusus in 12 BC. They were raiding the coasts of Roman Belgica in AD 41, long before they participated in further raids of the same coasts under Gannascus in AD 47. It is likely that their raiding was endemic over the years, as the few surviving accounts probably do not reflect all occurrences. Tacitus describes the Chauci as 'peaceful' in his Germania (AD 98), but this is in a passage describing the non-coastal, inland Chauci, whereas sea raiders are necessarily a coastal people.
By the late 2nd century Chauci raiding was ongoing and more serious than before, continuing in the North Sea and the Channel until their last recorded raids c. 170–175. While there are no historical sources to inform us one way or the other, it is likely that the Chauci continued their raiding and then played a role in the formation of the new Germanic powers, the Franks and Saxons, who were raiders in the 3rd century.
There is archaeological evidence of destruction by raiders between 170–200, ranging along the Continental coast down to the Bay of Biscay, to northwest Belgica (e.g., fire destruction at Amiens, Thérouanne, Vendeuil-Caply, Beauvais, Bavai, Tournai and Arras), to coastal Britain (e.g., fire destruction at the eastern Essex sites of Chelmsford, Billericay, Gestingthorpe, Braintree, Wickford, Kelvedon, Great Chesterford and Harlow). The perpetrators are unknown, but Chauci raiders are among the prime suspects.
The Romans responded with defensive measures. Caistor-by-Norwich, Chelmsford and Forum Hadriani (present day Voorburg) (the civitas of the Canninefates near The Hague) were all fortified c. 200, and the Romans began a defensive system of protection especially along the coasts of Britain and the Continent. This system would be continually maintained and improved upon, which the Romans would not have done unless there was a continuing threat to be addressed. The system would continue to evolve through the disappearance of Chauci raiders and their replacement by the Frankish and Saxon ones, up to the end of the 4th century. By then it would be known as the Saxon Shore, a name given it by the Notitia Dignitatum.
A passage written by Zosimus has been interpreted as one of the last mentions of the Chauci, and one where they are specifically mentioned as a Saxon group; but it depends upon whether we can equate them with the "Kouadoi" in Zosimus's Greek, a name he had apparently used wrongly. Julian the Apostate fought against Saxons and Franks, including the Salians, but then allowed the Salians "descended from the Franks" to settle in Toxandria in 358. According to Zosimus, this happened in response to an attack from the sea by the "Kouadoi" Saxons which affected both Romans and Salians, who had been living in the river delta.
Beowulf is an Old English heroic poem where the hero (Beowulf) engages in battles with antagonists. Set in long-ago Scandinavia, it makes frequent references to the peoples who are a part of the story, and efforts have been made to connect those peoples with peoples mentioned in ancient historical records. The "Hugas" of the poem are said to be a reference to the Chauci.
- Haywood 1999:14, Dark Age Naval Power. Haywood uses the term 'North German' to distinguish them from the 'Rhine Germans' (the Caninnefates, Batavians, and "Frankish" tribes).
- Haywood 1999:17–19, Dark Age Naval Power. Haywood cites Todd's The Northern Barbarians 100 BC–AD 300 (1987) for this conclusion.
- Tacitus 117:253–254, The Annals, Bk XIII, Ch 55. Events of AD 54–58. The Germans under Arminius had wiped out 3 Roman legions under Varus at the Battle of the Teutoburg Forest. The Ampsivarii had not supported the German cause and had been ostracised as a result. Many years later, c. AD 58, the Chauci then took the opportunity to expel them and occupy their land at the mouth of the River Ems.
- Haywood 1999:17–19, Dark Age Naval Power. Haywood cites Tacitus as well as a number of other sources.
- Tacitus 117:355, The Annals, Translator's note on Bk XI, Ch 19.
- Haywood 1999:28, Dark Age Naval Power.
- Tacitus 117:253, The Annals, Bk XIII, Ch 54. Events of AD 54–58.
- Haywood 1999:19–20, Dark Age Naval Power. The referenced footnote notes that the Chauci heartland between the Elbe and Weser contained huge cremation cemeteries with a uniform range of poor quality grave goods. In the 2nd century aristocratic cemeteries with rich grave goods appear.
- Yorke, Barbara (2006), Robbins, Keith, ed., The Conversion of Britain: Religion, Politics and Society in Britain c.600–800, Harlow: Pearson Education Limited, p. 59, ISBN 978-0-582-77292-2
- Pliny the Elder 79_1:346–347, Natural History, Bk IV, Ch 28: Germany. A footnote suggests that the two references to the Cimbri in two different groups were not references to the same people.
- Tacitus 98:61–62, The Germany, XXXV.
- Pliny the Elder 79_3:339, Natural History, Bk XVI, Ch I: Countries that have no trees. Pliny also notes that the Chauci lived between the Rivers Ems and Elbe.
- Cassius Dio 229:365, Roman History, Bk LIV, Ch 32.
- Tacitus 117:30, The Annals, Bk I, Ch 60. Events of AD 15–16.
- Tacitus 117:48, The Annals, Bk II, Ch 17. Events of AD 16–19
- Tacitus 117:50, The Annals, Bk II, Ch 24. Events of AD 16–19.
- Tacitus 117:253–254, The Annals, Bk XIII, Ch 55. Events of AD 54–58.
- Tacitus 117:189, The Annals, Bk XI, Ch 18–19. Events of AD 47–48.
- Tacitus 117:400, The Annals, Bk XVI, Ch 17. Events of 65–66 (Rome and Parthia—Campaigns of Corbulo in the East). Tacitus makes the parenthetical comment that Corbulo had driven the Chauci out of the provinces of Lower Germany which they had invaded in AD 47.
- Tacitus 117:190, The Annals, Bk XI, Ch 18–19. Events of AD 47–48.
- Haywood 1999:22–23, Dark Age Naval Power.
- Tacitus 105:7, The Histories, Translator's Summary of Chief Events.
- Tacitus 105:193, The Histories, Bk IV, Ch 79.
- Tacitus 105:222, The Histories, Bk V, Ch 19. A footnote makes reference to "Cp IV.79".
- Haywood 1999:15, Dark Age Naval Power.
- Haywood 1999:21, Dark Age Naval Power.
- Tacitus 98:61–62, The Germany, XXXV.
- Haywood 1999:28, Dark Age Naval Power.
- Haywood 1999:24, Dark Age Naval Power.
- Haywood 1999:24–25, Dark Age Naval Power.
- Haywood 1999:24–28, Dark Age Naval Power.
- Haywood, John, Dark Age Naval Power: A Re-Assessment of Frankish and Anglo-Saxon Seafaring ..., p. 42
- Zosimus Nova Historia Book III
- Lumsden, H. W. (1881), "The Fire Drake (Part III)", Beowulf, An Old English Poem, translated into modern rhymes, London: C. Kegan Paul & Co., p. 77
- Cassius Dio (229), Cary, Earnest (translator), ed., Dio's Roman History VI, London: William Heinemann (published 1917)
- Haywood, John (1999), Dark Age Naval Power: Frankish & Anglo-Saxon Seafaring Activity (revised ed.), Frithgarth: Anglo-Saxon Books, ISBN 1-898281-43-2
- Pliny the Elder (79), Bostock, John; Riley, H. T., eds., The Natural History of Pliny I, London: Henry G. Bohn (published 1855)
- Pliny the Elder (79), Bostock, John; Riley, H. T., eds., The Natural History of Pliny III, London: George Bell and Sons (published 1892)
- Schmitz, Leonhard (1853), "CHAUCI", in Smith, William, A Dictionary of Greek and Roman Geography I, London: John Murray (published 1872), pp. 605–606
- Tacitus, Publius Cornelius (98), The Germany and the Agricola of Tacitus (revised translation, with notes), Chicago: C. M. Barnes Company (published 1897)
- Tacitus, Publius Cornelius (105), Fyfe, W. Hamilton (translator), ed., The Histories II, Oxford: Clarendon Press (published 1912)
- Tacitus, Publius Cornelius (117), Church, Alfred John; Brodribb, William Jackson, eds., Annals of Tacitus (translated into English), London: MacMillan and Co. (published 1895) |
A solid or surface having plane sections that are hyperbolas, ellipses, or circles.
- ‘Out of this work came another of Wren's important mathematical results, namely that the hyperboloid of revolution is a ruled surface.’
- ‘In the vicinity of point M, we assume that the surface of the hyperboloid is a plane.’
- ‘To calculate atomic volumes, the Voronoi cell procedure using hyperboloid interfaces between atoms was applied.’
- ‘In 1826 he generalised his theorem to a hyperboloid of revolution, rather than a cone.’
- ‘To compare this funny word with something more familiar, a hyperboloid is a two-dimensional pseudosphere.’
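The "plane sections" in the definition can be read directly off the standard quadric equations. As a brief sketch (the axis labels a, b, c follow the usual textbook convention and are not part of the dictionary entry itself):

```latex
% Hyperboloid of one sheet: horizontal slices z = k are ellipses;
% vertical slices x = k or y = k are hyperbolas.
\frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1

% Hyperboloid of two sheets: slices are again ellipses or hyperbolas,
% though horizontal slices exist only for |z| \ge c.
\frac{x^2}{a^2} - \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1
```

When a = b, the surface is a hyperboloid of revolution and the horizontal sections become circles; the one-sheet case is also a ruled surface, which is the result attributed to Wren in the first example above.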
We are pleased to join other amazing educators and educational bloggers this month in the 28 Days of STEM and STEAM project by Left Brain Craft Brain. Opportunities to explore STEM and STEAM (Science, Technology, Engineering, the Arts, and Math) challenges in the classroom or at home do NOT have to be complicated. By offering simple challenges, kids will develop critical thinking skills to take beyond the moments of the challenge and into their everyday world. Come explore, play, and learn this Valentine’s Day with an EASY PEEPS Playdough recipe and a free-standing heart engineering challenge that even preschoolers can enjoy!
Valentine’s Day STEAM: PEEPS Playdough and a Heart Engineering Challenge!
Objective: To encourage the use of critical thinking skills and cross-disciplinary tools to gain new problem solving knowledge that can be applied to the everyday world.
Skills Presented in this Simple Challenge:
Science: Kids will use skills within the scientific method (observing, communicating, comparing, organizing, and relating) while making the PEEPS playdough and will plan, construct, and document learning in the playdough free-standing heart engineering challenge.
Technology: Students will use digital cameras to photograph their own free-standing heart designs and print the photos to share with peers. We also made a class book to share online with family and friends.
Engineering: Kids will construct a free-standing heart from simple supplies that exhibit early attempts at engineering.
Arts: Kids will display individual (or small group) artistic expression when designing their own version of the free-standing heart challenge.
Math: Students will use mathematical skills (estimation, same/different, lines, patterns) to construct the free-standing hearts and gain knowledge of mathematical relationships in the challenge.
Disclosure – this post and others within the blog contain affiliate/distributor and/or sponsored links. Please see our disclosure page for more information.
- 5 Large or 10 Small Valentine’s Day PEEPS hearts – per small group or child (please note: the challenge can be done in ANY season simply by changing the PEEPS and the constructed item for the challenge).
- 3 Tbsp. (plus additional for kneading the dough) cornstarch
- 1 1/2 tsp. oil (any kind – we used melted coconut oil)
- Food coloring (if desired)
- Toothpicks (any kind)
To make the Valentine’s Day PEEPS Heart Playdough:
- Place 5 of the large PEEPS hearts (or 10 small ones) in a microwave safe bowl.
- Microwave for 20-30 seconds until the PEEPS begin to puff up (watch closely). Note for parents and teachers: when marshmallows are heated in a microwave, the water inside them vaporizes and the trapped air bubbles expand, which causes the marshmallows to puff up.
- Add the cornstarch, oil, and food coloring – mix well with a spoon or fork.
- Turn the mixture out onto a pastry mat lined with additional cornstarch. Knead the mixture well to distribute the color. It took quite a bit of additional cornstarch to get the texture needed. Add cornstarch while kneading until the mixture is no longer sticky.
Our kids had a blast making their PEEPS Playdough. We kneaded our dough, pressed and pushed it, until it was a nice red ball of dough.
Valentine’s Day Free-Standing Heart Engineering Challenge for Kids
STEAM INQUIRY: Kids are asked to engineer a basic free-standing PEEPS playdough heart using only the simple supplies below.
- Valentine’s Day PEEPS homemade playdough (recipe above)
The kids will ask HOW to construct their free-standing hearts. Try to answer their questions with open-ended feedback to allow them to use critical thinking tools to solve the problem independently. Invite the kids to draw a picture of how they want their designs to look. It is truly amazing to watch young kids creating, thinking, and learning through play.
Documentation of Learning and Use of Technology: As the kids construct, have them take digital photos of their constructions to print and share with their peers. Kids will often develop “new” ideas from looking at their own photos or ones of peers.
- Set out the homemade PEEPS playdough and toothpicks on a large table (this challenge works great for small groups or kids can do individually, too).
- Kids are challenged to engineer a free-standing heart with ONLY their PEEPS playdough and toothpicks (the heart must stand up on its own without aid from the kids).
- Discuss the STEPS for STE(A)M Success:
Our kids had so much fun with this challenge! They worked hard to plan their designs, test the designs to see if they worked, modify what didn’t, and make our PEEPS playdough hearts stand tall!
A Few of our own Free-Standing PEEPS Playdough Heart Constructions:
We hope your kids enjoy the simple STEAM challenge this Valentine’s Day, too. The free-standing heart STEAM challenge can be modified for any season by changing the PEEPS marshmallows and the item created for the challenge (Ghosts, Christmas Trees, Easter Eggs, Flags, etc.). Your kids might also enjoy the following STEM and STEAM activities:
The Valentine’s Day PEEPS Playdough and Free-Standing Heart Engineering Challenge is part of a series of 28 Days of STEAM and STEM activities from Left Brain Craft Brain. 29 other blogs will share some awesome STEM and STEAM activities all throughout the month. Join us on Facebook, Instagram, or Twitter to share in the fun all month long.
For an entire thematic unit of playful learning this Valentine’s Day, please see the Valentine Love and Friendship Theme here on the blog! |
How many times has your child presented you with a picture that you’ve had to turn sideways, or even upside down to try to figure out what it was? Your child may respond with something like, “It’s a rocket ship in outer space.” Or, “We went to the fire station. This is my fire truck.” We adults may not see those specific shapes in children’s artwork, but in their minds, the shapes are vivid images of what they have seen or experienced.
How many times has your child presented you with a piece of paper with random lines, curves, and squiggles on it? How many times have you smiled and nodded, yet dismissed that work as nothing more than scribbling? Did you know scribbling is a vital part of a child’s development? Just as crawling is important for a child’s ability to walk, scribbling is the stepping stone to reading and writing. It also aids in a child’s cognitive, physical, language, and social developments, as well as self-expression. When a child scribbles, she strengthens her finger, hand, and arm muscles. As a child ages and develops, she goes through stages of scribbling: placement, shape, design, and pictorial.
Young toddlers around eighteen months of age become interested in scribbling. They may see siblings or peers drawing and want to do the same. They grip a crayon, marker, or pencil in a fisted hold and make large movements with their arms. The marks on their papers may appear all over the place with little deliberate placement. Sometimes they may draw off the paper and onto the table. Older toddlers around two and three are able to control their scribbles with smaller marks. They put more thought into where the marks are placed on the page. According to experts, there are seventeen different placements, including all over the page, in the middle of the page, bottom or top, diagonal, right or left, and top or bottom quarter.
Three-year-olds start to add shapes to their marks. Some of these marks may resemble circles, large scribbled Xs, boxes, triangles, and oddly shaped forms. They are able to maintain better control as they grip their writing utensils. Plus, scribbling allows them to recreate the world around them, and it gives them a sense of independence. Close to age four, young children move into the design stage. At this point in their development, children combine two diagrams. For example, they may draw a circle with an X, a square with a triangle on top to represent a house, or a square on top of a rectangle to represent a bed. Preschoolers and young kindergarteners transition into the pictorial stage, where their designs depict pictures adults can recognize.
Children put a lot of thought into what they write and draw. A child may scribble, yet what she sees and what an adult sees may be two different things. A child may make marks and call it a grocery list or the beginning of a story, but a grown-up may see it as a jagged line. As she grows and learns to recognize letters, her scribbles may take on letter-like shapes. Once she forms letters correctly, she may string them together in a random fashion to form words that she recognizes. As her writing matures, she may create invented spellings based on sounds she hears and understands. Letters may be left out or placed in the wrong order. Eventually, the child will be able to read and write according to developmental guidelines for her age.
Parents, caregivers, and educators can greatly influence a child's writing ability by providing her with the appropriate materials. Chunky crayons and fat markers are good for tiny hands that may have trouble gripping smaller utensils. As a child gets older, consider switching to skinnier crayons, markers, pencils, and paint brushes that will reinforce a proper tripod hold on the utensil—the way the thumb, pointer, and middle fingers grip a writing instrument.
Easels with large pads of paper allow for full ranges of motion and body movements. Sidewalk chalk provides a different texture. Dry erase boards allow children to write, erase, write, and erase over and over. Finger painting allows children to exercise their fingers as they create scribbles, shapes, and letters. Paint allows a child to express herself, as well as exercise her early writing abilities. Magnet boards with magnetic letters and numbers provide children with a tangible example of how numbers and letters are correctly formed.
Think of the child’s learning abilities when providing different writing materials, and focus less on the mess they could make. Children learn best through exploration and hands-on experiences.
The next time your child presents you with a picture, compliment her choice of colors or the way she made a shape. Ask her to tell you a story about the picture. On the paper, write what she says. Date the picture and keep it to track her progress. Your interest will enrich her self-esteem and, ultimately, her future writing abilities.
Lisa Jordan is a family child care provider and writer in Warren County. She is pursuing her degree in Early Childhood Education through Clarion University and will graduate in May. |
UC Davis Researchers Believe They Now Know Why Zebras Have Stripes
DAVIS (KCBS) – A mystery of the animal kingdom that has baffled scientists for centuries may finally have been solved, according to UC Davis researchers who examined the pests that share the traditional range of the zebra.
It turns out the zebra may sport black and white stripes as a strategy to ward off flies, a not inconsequential problem considering that some flies bite hard enough to draw blood.
“If you’re ever bitten by a horse fly or a deer fly, you’ll know exactly that that’s the problem,” said Tim Caro, a UC Davis professor of conservation ecology.
“Zebras have shorter hair than other antelopes that live next door to them, like impala or buffalo, and that may make them very susceptible to having their skin penetrated by the mouth parts of these flies.”
Caro and his team arrived at their conclusion by overlaying the traditional ranges of the zebra and its related species with locations where biting flies are found.
“We plotted the geographic ranges of all the different species of horses and asses and zebras on maps of the Old World of Africa and Asia,” he said.
“Every time, we find that we get intense striping in areas where they’re really annoyed by biting flies.”
The dark and light bands may also have changed how air flows around the zebra, helping to keep it cool, Caro said, or the striped coat may have provided camouflage against a woodland background, somehow confusing predators or encouraging certain types of grooming by drawing attention to particular areas of the zebra’s body.
But deterring biting flies may be among the most compelling reasons for the success of stripes in the animal’s evolution, according to the study published in the online journal Nature Communications.
“We’re not sure whether these animals are really worried in a (sic) anthropomorphic sense about being bitten by many, many flies and losing lots and lots of blood,” Caro said, “or whether it’s that these flies in Africa carry fatal diseases for zebras.”
The zebra still has some secrets, however, such as exactly why or how its stripes discourage flies from landing on it in the first place.
“Flies see the world in a different way than we see the world. They have different eyes, and they can see polarized light, whereas we can’t. And it may be that the hairs of the zebras are giving off different sorts of polarized light,” he said, different from the light given off by the fabric of a t-shirt, for example. |
Venus is the second planet from the Sun, orbiting at an average distance of 108.2 million km. Venus takes a total of 224.7 days to orbit the Sun.
The Sun and Venus are vastly different sizes, of course. The diameter of Venus is 12,103 km, while the diameter of the Sun is 1.4 million km. In other words, the Sun is about 115 times larger in diameter than Venus. You could fit about 1.5 million planets the size of Venus inside the Sun.
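The two comparisons above follow directly from the quoted diameters; here is a quick Python sanity check (a sketch using only the article's figures, nothing new):

```python
# Figures quoted in the article.
venus_diameter_km = 12_103
sun_diameter_km = 1_400_000

# The Sun is roughly 115 times wider than Venus.
diameter_ratio = sun_diameter_km / venus_diameter_km
print(int(diameter_ratio))  # 115

# Volume scales with the cube of the diameter, which is why about
# 1.5 million Venus-sized planets would fit inside the Sun.
volume_ratio = diameter_ratio ** 3
print(round(volume_ratio / 1e6, 1))  # 1.5
```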
Venus is a terrestrial planet. It has a metal core surrounded by a mantle of silicate rock. This is surrounded by a thin crust of rock. The Sun, on the other hand, is a massive ball of hydrogen and helium gas. Temperatures at its core are hot enough to ignite nuclear fusion – more than 15 million Kelvin.
The Sun has an enormous impact on Venus. The radiation from the Sun is trapped by the thick atmosphere of Venus, raising average temperatures across the planet to around 460 °C. In fact, this makes Venus the hottest planet in the Solar System.
Both the Sun and Venus formed at the same time, 4.6 billion years ago, with the rest of the Solar System. They formed out of the solar nebula, a cloud of gas and dust that collapsed down to become the Sun and planets.
Because Venus orbits closer to the Sun than the Earth, we always see it close to the Sun in the sky. Venus is either trailing the Sun or leading it across the sky. The best times to see Venus are just before sunrise or just after sunset.
We have written many articles about Venus for Universe Today. Here’s an article about Venus’ wet, volcanic past, and here’s an article about how Venus might have had continents and oceans in the ancient past.
We have recorded a whole episode of Astronomy Cast that’s only about planet Venus. Listen to it here, Episode 50: Venus. |
A River Ran Wild Lesson Plan
- Grades: 3–5
About this book
The students will understand the cause and effect relationships in the story.
the book A River Ran Wild by Lynne Cherry
Set Up and Prepare
On a piece of chart paper, create two columns, one labeled “Cause” and the other labeled “Effect”. In the “Cause” column, write “I didn’t study for the Social Studies unit test.” Then have students think about what might happen if they didn’t study for a unit test. Write the students’ responses in the “Effect” column. Explain to the students that a “cause” is why something happens and an “effect” is what happens. Have the students come up with other cause and effect relationships that might have occurred in their lives. Explain to students that understanding the cause and effect relationships between events in the book helps them better understand the story.
Show the students the cover of the book, and have them make predictions about the story. Explain that the book is based on a true story about the Nashua River in Massachusetts. Show the students the map in the book, and have them locate the river. Read the first 2 pages, and model the first cause and effect relationship that occurs in the story.
Cause: The river had clean water, fish, and other natural resources.
Effect: The Nashua Native Americans settled by the river.
Then have the students work with a partner and finish pair reading the story and completing the attached cause and effect chart. When students are finished, review the different cause and effect relationships on the worksheet.
Supporting All Learners
If you do not have enough copies of this book for students to pair read, you can create the attached worksheet on posterboard and complete it together after reading the story. Since some of the vocabulary is difficult for readers who struggle, I also had the book on tape. You could also create a lesson just based on teaching the vocabulary in the book.
Science: Students can study the ecosystem of a river and discover different aspects that have a negative impact on that ecosystem. Students can then create a before and after diorama or poster of the ecosystem before and after the negative effects had occurred.
Writing: Have the students think about environmental issues that might be occurring in their community. Students can write a letter to a local government official explaining the problem, why it’s important to address the problem, and solutions they have to solve the problem.
Social Studies/Art: The illustrator of this story has created an artistic border around each page that depicts places, objects, and animals that were important in the time period(s) represented in the story. Students can research a Native American tribe, write a picture book about their way of life, and create an artistic border around each page depicting places, objects, homes, and animals important to that tribe.
Other books about the environment by Lynne Cherry include Dragon and the Unicorn, Shaman's Apprentice: A Tale of the Amazon Rain Forest and The Great Kapok Tree. |
K-12 School Computer Networking/Chapter 26/Using YouTube to Teach the Performing Arts
Using YouTube to Teach the Performing Arts
The Basics: YouTube has pretty much become the standard in free Internet video uploading. Perhaps it was solidified by the CNN/YouTube debates during the presidential primaries. In any case, the use of YouTube has now become so pervasive that it is acceptable for use in the classroom. It’s important to understand as a technical support specialist that although the teachers in one’s school may be innovative pedagogues, when it comes to incorporating new technologies into lessons, technical support and even prodding is often needed. The goal of this entry is to provide technical specialists with the knowledge of how to help instructors incorporate the use of YouTube into the classroom. It also provides some guidelines and cautions. Often instructors have trepidations about using anything Internet related in their classrooms for fear of safety. Used properly though, YouTube can be a useful instructional tool in the classroom.
What it is: What is YouTube in the classroom? YouTube is probably useful for all types of information. According to Talab and Butler, you can find all kinds of instructional videos on YouTube, even how to install a hard drive on a Mac. However, it is particularly well suited to the instruction of the performing and visual arts. That is what this section of the Wikibook is dedicated to.
Why: Why should YouTube be used? The ability to have a universal platform for students to access information, review lessons, and more is invaluable and, as far as the world of education is concerned, unprecedented. It would be wasteful not to take advantage of this tool. But first, education technical specialists have to push teachers to be early adopters. That requires knowledge of how the site can be used in instruction, what precautions to take, and how to communicate with the subject area teachers.
Example Subjects YouTube Can Be Used For
Learning Steps: Dance, like anything physical, requires constant practice to acquire muscle memory. In order to build as much muscle memory as possible, rehearsing and practicing is key. However, students need to practice the correct steps with the correct techniques, or they are merely achieving muscle memory for the wrong things. (This then takes extra practice time to un-learn.) YouTube can be used to mitigate this difficulty by providing dance students with an accurate representation of the steps and techniques they need to learn. Instructors at your school can easily upload short instructional videos to the YouTube site so that when students go home to work on the choreography, they are actually practicing the correct steps and techniques. These steps can also be homework for students to learn. Rather than using valuable class time to teach difficult steps, instructors can upload demonstrations and have students come to class prepared with the step. This saves class time for other important things like staging and working on getting the group to dance in unison.
Improving Choreography: A major part of the dance teacher’s job is choreographing pieces for students. But when an instructor is busy working on the minute details of a performance, it’s easy to overlook problems with choreography and miss opportunities for improving the original vision. YouTube provides a platform by which instructors can study their work from any computer with Internet access and improve their work, thus enhancing the students’ later performance. Even if they do not use the site themselves, they can send it to colleagues all over the world for input. Having another professional’s eye can be a great resource in creating choreography. YouTube can be the mechanism for this self-review and review from other professionals.
Aside from uploading one’s own videos, instructors can also search for ideas. From song choices, to choreography, staging, and costuming—teachers can look to other performances on YouTube for inspiration and concrete examples. They can also find what other students are doing who are at the same age and ability level as their own students. Using those examples, they can motivate their students to work harder and try more difficult steps.
A Tool for Practicing: Students can also view an entire routine and practice up to speed with videos uploaded to the site. Instructors are not limited to posting only one step at a time. Entire routines can be loaded for students to study how the pieces of choreography fit together as well as spacing. Furthermore, seeing oneself on a video can help the individual student make adjustments to their techniques, improving themselves before the final performances and pushing their dancing to new levels.
Sharing the Performances: Many hours, often hundreds of hours, are spent on preparing students for a single ninety-second performance. Inevitably, all of their family and friends cannot be there to share every performance for which the students work so hard. YouTube’s pervasiveness can be used to the students’ and instructors’ advantage once again in sharing the performances with family, friends, and other professionals. It’s necessary for students to share their work so that they can have a product of which they can be proud. Linking their YouTube performance videos to social networking sites and sending them to faraway relatives and friends via e-mail is a great way for students to share their performances with others. Since YouTube videos are viewable through a simple web link, their work is highly accessible. The more instructors that post videos of their students’ performances, the more that will be available on YouTube for other instructors to view. This further enhances the entire process, whereby instructors can get inspiration from the work of other schools. YouTube, as popular as it has become, can act as a virtual portfolio for student work. With dance, a portfolio is normally difficult, as the work is usually captured in real time. However, YouTube makes it accessible.
Dancing Concluded: It’s obvious that dance teachers in schools can use YouTube as a tool for learning new steps, sharing performances, enhancing choreography, and giving all students access to practice materials. As an example, the various ways YouTube can be used to teach dance are documented on this video.
Techniques: The archetypal distance learning in art is probably Bob Ross’s public television show. Art teachers can create the same type of experience in K-12 settings. Classroom teachers can provide instructions, techniques, slide shows, and more for their students to view at home or in the classroom. Descriptions of techniques in painting, drawing, sculpture, design and more can already be found on YouTube. Teachers can use what is already there or add their own video.
An example of using YouTube to teach painting can be found here
Displaying Student Work
Just as with dance, YouTube can be used to showcase student work. Creating a slide show of exemplary pieces students made in class is a great way to reward students for their hard work as well as give the instructor a permanent example to show subsequent classes.
A classic difficulty with music in the schools is finding time for one-on-one instruction with students. Working on individual student’s embouchure (for the wind instruments), learning new notes, music theory, and other techniques is a must for music teachers attempting to improve the quality of their overall ensemble. Short instructional clips online for vocal and instrumental techniques can be a great use of multimedia to improve the overall level of the musical group.
As with dance, musical performances are particularly suited to video recording to capture the entire experience. Students can share these with family and friends. Teachers can use performance videos to motivate and critique everything from posture to rhythm and pitch. YouTube videos provide students with their own accessible copy of the performances so they can actually refer to it and watch their improvements over time.
Example Subjects Concluded
In all three of these performing arts subjects, practicing and performing are essential parts of the student experience. YouTube can serve as a multimedia aid to help students practice more effectively and share final products. Just as was mentioned by Jeffrey Gentry, YouTube solves the question of bandwidth. Emailing full videos is normally out of the question because of their large file sizes, but YouTube solves that problem as well as other compatibility issues instructors may have run into.
Recruitment and Retention
YouTube videos can also be used for recruitment and retention purposes. Getting students interested and motivated to participate in the performing arts can often be a struggle. Traditional promotional videos are costly. However, teachers can use their recorded dance and music performances as well as artwork slideshows to advertise their programs to the whole school and improve enrollment. Showing a quick video on the morning announcements can go a long way in attracting new students to a particular program. Students already participating in the programs will again have a reason to be proud of their work and are likely to continue on. In this way, these videos can be used for improving retention rates, which can even lead to the increased security of performing arts programs in K-12 institutions that are too often in jeopardy.
Limitations and Cautions
If you work as a technical coordinator in education, you know that Internet safety is a top priority for teachers, administrators, and parents. Teachers need to be made aware of the possible problems with using YouTube. Of course, they need to follow YouTube’s terms of service, using only royalty-free music and images. But they also need to protect their students’ privacy. Full names should not be used; in fact, no identifying information about the children should be used whatsoever. It would also be prudent to have parents sign a permission release to make sure they are aware of what the teacher intends to do. Most of the time the instructor will be putting up his or her own videos without students. In the case of student performances, though, students will inevitably be in the videos. YouTube has the option to make these videos private so that only the people the instructor allows have access to view the video content. Many of these issues can be overcome with proper awareness and communication with school administrators and parents.
In short, when used properly, YouTube is a valuable tool for K-12 performance arts educators. It is the job of the school technical coordinator to help make teachers aware of this resource and help teachers use them appropriately.
1) What are two ways YouTube can be used to help in teaching the performing arts?
2) What problems of multimedia sharing does the use of YouTube solve?
3) What is one way a teacher can help protect a student’s identity in using YouTube with K-12 students?
4) What performing arts subjects are particularly suited to the use of YouTube for teaching purposes?
1) YouTube can help in learning new techniques, giving examples of entire pieces, sharing final products, and finding inspiration.
2) YouTube solves problems of compatibility of formats and also of bandwidth.
3) Teachers can avoid putting students’ full names on YouTube, post only videos of the teachers themselves, and get parents’ permission before using YouTube in the classroom.
4) Dance, music, and art are particularly suited to using YouTube for teaching.
- Gentry, J. (2008) Using YouTube: Practical applications for 21st century education. Online Cl@ssroom. 1-8.
- Talab, T. S., & Butler, R. P. (2007). Shared electronic spaces in the classroom: Copyright, privacy, and guidelines. TechTrends. 51, 1, 12-15.
- Blog on Classroom 2.0 |
Contact: Science Press Package
American Association for the Advancement of Science
Gears 'invented' by insects, not humans
Photograph of an Issus nymph.
[Image courtesy of Malcolm Burrows]
Sometimes scientists study nature to learn new engineering tricks—like the researchers who modeled the wing beats of flies to create tiny, flying robots. But, other times, scientists are surprised to learn that so-called "human inventions" have already existed in nature for a long time—like the classic screw-and-nut-system, which existed in the legs of beetles long before we humans dreamt it up.
Now, researchers have realized that a particular plant-hopping insect, known as Issus coleoptratus, is such a good jumper because it has interacting gears in its hind legs that rotate just like mechanical gears. The discovery is another example of nature beating researchers to the punch.
Malcolm Burrows and Gregory Sutton took high-speed videos of the insects and found that the planthoppers have curved strips of 10 to 12 gear teeth on segments of their hind legs. However, the trait is only present in young insects (nymphs), and the gears disappear when the insects become adults, according to the researchers.
Before jumping forward, the young insects will hook the gear teeth on one leg into the gear teeth on the other leg and "cock" their legs for leaping. The action couples the insects' legs together, making sure that they both move at the same time—within just microseconds of each other—during a jump.
The findings prove that gears—once thought to be a human invention—actually evolved in nature, and that they play an important role in the behavior of plant-hopping insects. |
Most of the time cavities are due to a diet high in sugary foods and a lack of brushing.
Limiting sugar intake and brushing regularly, of course, can help. The longer it takes your child to chew their food, the longer the residue stays on their teeth and the greater the chances of getting cavities.
Every time someone eats, an acid reaction occurs inside the mouth as the bacteria digest the sugars. This reaction lasts approximately 20 minutes. During this time, the acid environment releases calcium from the saliva matrix. However, when the intake of sugars and carbohydrates is high, the matrix can’t release enough calcium ions, and the tooth structure gets damaged, eventually leading to cavities.
The constituents of a person’s saliva also make a difference, as thinner saliva breaks up and washes away food more quickly. When a person eats a diet high in carbohydrates and sugars, they tend to have thicker saliva, which in turn produces more of the acid-producing bacteria that cause cavities. Exposing the saliva to tooth structure constituents (calcium, phosphate, and fluorides) enriches it with mineralizing components and helps the saliva fight back against the acid. This process happens when the child or adult brushes frequently with a minute amount of toothpaste (preferably a fluoridated one).
Some general tips for cavity prevention:
- Limit frequency of sugary meals and snacks.
- Encourage brushing, flossing, and rinsing, preferably right after the intake.
- Watch what you drink.
- Avoid sticky foods.
- Make treats part of meals.
- Choose nutritious snacks.
- Brush with a minute amount of toothpaste at least three times a day.
- Supervised brushing by parents or caretakers is recommended until the early teens.
Fens are low to moderately fertile wetlands supporting sedge species such as Schoenus and Baumea, as well as manuka (Leptospermum sp.). They are fed by both groundwater and surface runoff and often occur on gently sloping ground such as the toes of hillsides. They are characterised by having the water table close to the surface, which makes fens very wet but with limited water movement. Another characteristic feature is that the water table of a fen does not fluctuate much throughout the year. Because they occur on slightly sloping ground, fens are more fertile than bogs (although they often share similar features), and they often grade into more fertile swamps. Within fens there is generally a build-up of peat from the breakdown of dead plant matter.
Students will be able to use a line plot to represent data.
Students will be able to describe the characteristics of a line plot with grade-level academic vocabulary and represent data using line plots and sentence frames.
Gather students so they are sitting near the whiteboard. Project the Comparing Sets of Data worksheet on the whiteboard. Say, "Can anyone tell me what these are?" Have students turn and talk first with an elbow partner, and then allow a few students to share out their ideas with the rest of the group. Ideas may include graphs, pictures, types of sports, etc.
Ask students if the graphs are the same or different. Have students turn and talk to a partner and then ask a few students to share out their ideas with the rest of the class. Encourage students to explain their thinking by offering prompting questions, such as:
Why do you think the graphs are the same?
Why do you think the graphs are different?
Encourage students to come up to the whiteboard to show similarities and differences between the graphs. Provide sentence stems and frames for students who need extra support during the discussion.
Clarify that each graph represents, or shows, a set of data. Point to the graph on the right and say, "Can anyone tell me what type of graph is here on the right?" Allow a student to share their idea and elaborate that the graph is a bar graph. If the student was able to label the graph accurately, ask the student to explain how they knew the graph was a bar graph. Point to the graph on the left and say, "This is a line plot. We are going to learn about line plots today and how to represent data using a line plot." |
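Teachers who want extra practice sets can even sketch a line plot programmatically; here is a minimal text version in Python (the pet-count data below is hypothetical, not part of the lesson materials):

```python
from collections import Counter

def line_plot(data):
    """Render a line plot: X marks stacked above a number line."""
    counts = Counter(data)
    values = list(range(min(data), max(data) + 1))
    rows = []
    # Draw from the tallest stack of Xs down to the number line.
    for level in range(max(counts.values()), 0, -1):
        rows.append(" ".join("X" if counts[v] >= level else " " for v in values))
    rows.append(" ".join(str(v) for v in values))  # the number line
    return "\n".join(rows)

# Hypothetical data: number of pets each student has.
print(line_plot([0, 1, 1, 2, 2, 2, 4]))
```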
The intestinal lining is an important tissue. Among its other functions, it protects the body from inflammation that can be generated by the actions of gut microbes. This barrier declines with age, and this is thought to be influential in the increased chronic inflammation observed in older people. Ways to spur greater maintenance and repair on the part of cell populations making up intestinal tissue would likely be of great benefit, given the importance of chronic inflammation as a driver of age-related disease.
A strong cellular lining is essential for a healthy gut as it provides a barrier to the billions of microbes and harmful toxins present in our intestinal tract. This barrier is often damaged by infection and inflammation, which causes many painful symptoms. Researchers investigated the environment that surrounds gut stem cells and used "mini gut" organoid methodology where tiny replicas of gut tissue were grown in a dish. The study defined key cells that reside in close proximity to stem cells in the gut that produce the biomolecule Neuregulin-1 (NRG1) that acts directly on stem cells to kick-start the repair process.
"Our really important discovery is that supplementation with additional Neuregulin-1 accelerates repair of the gut lining by activation of key growth pathways. Our findings open new avenues for the development of Neuregulin 1-based therapies for enhancing intestinal repair and supporting rapid restoration of the critical gut function."
Gastrointestinal diseases such as Crohn's disease and ulcerative colitis are a major health issue worldwide and result in severe damage to the epithelial cell layer lining the gut. Under these conditions, the intestine has a limited capacity to repair itself and restore its main absorptive function, which is associated with symptoms including diarrhoea, dehydration, weight loss, and malnutrition. Developing ways to support intestinal tissue repair will dramatically improve patient recovery. |
A programmable logic controller (PLC) or programmable controller is a digital computer used for automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides, or lighting fixtures. PLCs are used in many industries and machines. Unlike general-purpose computers, the PLC is designed for multiple inputs and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real time system since output results must be produced in response to input conditions within a bounded time, otherwise unintended operation will result.
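The scan cycle at the heart of a PLC (read all inputs, evaluate the control logic, write all outputs, repeat within a bounded cycle time) can be sketched in ordinary code. This is only an illustration; the sensor and actuator names below are hypothetical:

```python
# Toy sketch of one PLC scan: read inputs, evaluate logic, write outputs.
# A real PLC repeats this loop continuously within a guaranteed cycle time.

def scan_cycle(inputs):
    """Simple interlock: run the motor only when the start button is
    pressed and the guard door is closed; otherwise raise an alarm."""
    outputs = {}
    outputs["motor"] = inputs["start_button"] and inputs["guard_closed"]
    outputs["alarm"] = inputs["start_button"] and not inputs["guard_closed"]
    return outputs

state = scan_cycle({"start_button": True, "guard_closed": False})
print(state)  # motor stays off and the alarm turns on
```

A real controller would also enforce the bounded response time the paragraph mentions, aborting or alarming if a scan overruns its deadline.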
The Alabama Robotics Technology Park is designed to close the skills gap within the robotics industry, while also ensuring that students obtain training on the latest robotic technology in the industry. |
Audiologists specialize in diagnosing and treating hearing-related issues
Information on Audiograms
So, you have gone through all the tests with your audiologist, and you are ready to see your results. But how do you read them? Initially, it might look like many indecipherable lines and symbols, but once you learn how to read and interpret your audiogram, you will better understand your hearing. What's more, your audiologist will use the results to determine the next steps for you.
What is an Audiogram?
An audiogram is a graph or chart that displays your hearing test results, and it measures your hearing ability. An audiogram is used to help identify what level of hearing you have compared against normal hearing capabilities. For example, it can show the level at which sound becomes uncomfortably loud for you. This is called the uncomfortable loudness level (ULL).
Your audiologist will explain your results to you in terms of volume, pitch and speech sounds.
By interpreting your results, your audiologist can understand the extent and nature of any hearing loss measured and whether any referable conditions exist.
How is an Audiogram Created?
During a hearing test, sounds are presented at different pitches and volumes. A computer-driven audiometer is used to generate the sounds, and the audiologist controls which pitch is presented and at what intensity.
How to Read an Audiogram
The audiogram graph is used to compare the degree of hearing loss against the frequencies, or pitches, at which that loss occurs.
The frequency is on the horizontal axis, displayed in Hertz (Hz); for example, 250 Hz is a low-pitched sound and 8000 Hz is a high-pitched sound. The amount of hearing loss is shown on the vertical axis in decibels (dB), where the higher the number, the greater the degree of hearing loss.
The vertical axis of the audiogram chart shows the loudness, or intensity, of the signal presented. This is measured in decibels. The axis starts at -10 dB and increases in five-decibel steps up to 120 dB.
An O is often used to represent responses for the right ear, and an X is used to represent responses for the left ear. A key on the audiogram, similar to one found on a map, identifies what the different symbols mean.
The pitches shown on the audiogram are those most important for hearing and understanding conversation. When someone speaks, each sound we hear has a different pitch or loudness. For example, the s sound is high pitched and quiet. The o sound is low in pitch and louder.
The decibel scale is a logarithmic scale in which the doubling of the sound pressure level corresponds to a level increase of six decibels. Decibels are not fixed values like volts or meters.
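The six-decibel figure follows from the definition of sound pressure level, L = 20·log10(p/p0); a quick check:

```python
import math

# Sound pressure level is L = 20 * log10(p / p_ref), so doubling the
# pressure p adds 20 * log10(2) decibels.
increase = 20 * math.log10(2)
print(round(increase, 2))  # about 6.02 dB
```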
The audiologist will interpret the results for each ear to determine the severity and nature of any loss that may be present. But it is not an easy feat to understand an audiogram, and if you do have any questions, your audiologist is there to help you get a clearer understanding. |
Newswise — Global food production is incredibly efficient, and the world’s farmers produce enough to feed the global population. Despite this abundance, a quarter of the global population does not have regular access to sufficient and nutritious food. A growing and more affluent population will further increase the global demand for food and create stresses on land, for example, through deforestation.
Additionally, climate change is a major threat to agriculture. Increased temperatures have contributed to land degradation, and unpredictable rainy seasons can lead to crop failure. While climate extremes affect the ability to produce food, food security depends on more than just agricultural productivity. Today’s globalized food system consists of highly interconnected social, technical, financial, economic, and environmental subsystems. It is characterized by increasingly complex trade networks and an efficient supply chain, with market power concentrated in the hands of a few. A shock to the food system can lead to ripple effects in political and social systems. The 2010 droughts in wheat-producing countries such as China, Russia, and Ukraine led to major crop failures, pushing up food prices on the global markets. This in turn was one of the factors behind deep civil unrest in Egypt, the world’s largest wheat importer, as people faced food shortages, which possibly contributed to the 2011 revolution spreading across the country.
Not all shocks to the global food system are directly linked to agricultural productivity or climatic conditions. The vulnerability of the interconnected food system has become painfully evident in recent months following the appearance of a different type of shock: a global pandemic. Although it started as a health crisis, COVID-19 quickly filtered through the political, social, economic, technological, and financial systems. Business interruptions resulted in a chain reaction that is projected to contribute to food crises in many parts of the world.
“Although harvests have been successful and food reserves are available, global food supply chain interruptions led to food shortages in some places because of lockdown measures,” writes the author of the commentary Franziska Gaupp, an IIASA researcher working jointly with the Ecosystems Services and Management (ESM) and Risk and Resilience (RISK) programs. “Products cannot be moved from farms to markets. Food is rotting in the fields as transport disruptions have made it impossible to move food from the farm to the consumer. At the same time, many people have lost their incomes and food has become unaffordable to them.”
The World Food Program has warned that by the end of 2020, an additional 130 million people could face famine. In the fight against the global COVID-19 pandemic, borders have been closed, and a lack of local production has led to soaring prices in some countries. In South Sudan, for example, wheat prices have increased 62% since February 2020. Difficult access to food and the related stress could then lead to food riots and collective violence.
“There will likely be more shocks hitting our global food system in the future. We need global collaboration and transdisciplinary approaches to ensure that the food chains function even in moments of crises to prevent price spikes and to provide all people with safe access to food,” concludes Gaupp.
Gaupp, F et al (2020). Extreme Events in a Globalized Food System. One Earth DOI: 10.1016/j.oneear.2020.06.001
The International Institute for Applied Systems Analysis (IIASA) is an international scientific institute that conducts research into the critical issues of global environmental, economic, technological, and social change that we face in the twenty-first century. Our findings provide valuable options to policymakers to shape the future of our changing world. IIASA is independent and funded by prestigious research funding agencies in Africa, the Americas, Asia, and Europe. www.iiasa.ac.at
|
4 May 2017
San Francesco - Via della Quarquonia 1 (Classroom 2)
Folding is a process in which bending is localized at sharp edges separated by almost undeformed elements. This process is rarely encountered in Nature, although some exceptions can be found in unusual layered rock formations (called 'chevrons') and seashell patterns (for instance Lopha cristagalli). In mechanics, the bending of a three-dimensional elastic solid is common (for example, in bulk wave propagation), but folding is usually not achieved. The route leading to folding is shown for an elastic solid obeying the couple-stress theory with an extreme anisotropy. This result is obtained with a perturbation technique, which involves the derivation of new two-dimensional Green's functions for applied concentrated force and moment. While the former perturbation reveals folding, the latter shows that a material in an extreme anisotropic state is also prone to a faulting instability, in which a displacement step of finite size emerges. Another failure mechanism, namely the formation of dilation/compaction bands, is also highlighted. Finally, a geophysical application to the mechanics of chevron formation shows how the proposed approach may explain the formation of natural structures. The results of the presented study introduce the possibility of exploiting constrained Cosserat solids for propagating waves in materials displaying origami-patterns of deformation. |
What does this mean?
Researchers used a sample of 153 toddlers between the ages of 2.5 and 3.5, who each had individual sessions in which they wore a net of head sensors to record brain activity while tones of different pitches sounded throughout the room. The tones played while the toddlers watched silent cartoons.
Each pitch change during a toddler’s session represented a change in the environment. This “testing” corresponds to most changes within a setting, and is particularly similar to a transitional phase in a social interaction. It is important to note that the brains of aggressive kids are usually unable to successfully detect a change in the tone of a person they are interacting with. For example, while one kid may be playfully teasing another, an aggressive kid might interpret that as bullying and respond with hostility.
When evaluating the head sensors, researchers found that “toddlers who had smaller spikes in the P3 brain wave when confronted with a situational change were more aggressive than children registering larger P3 brain-wave peaks,” as cited in the Science Daily.
Why is this important?
This research finding will allow for earlier interventions to stop aggressive impulses in toddlers, a tendency that usually continues throughout adolescence. When aggressive behaviors are confronted at an older age, it is harder to treat and root out combative traits that have become instilled in a child’s demeanor.
As someone who has worked with kids with special needs of ages 4-10, I have watched kids grow up and continuously get more aggressive as they age. Although it is hard to combat these tendencies at the age of 4, it is easier to help them better react to social interactions they do not understand at 4 years old rather than at 10 years old. Therefore, in helping parents recognize key behavioral issues with their children at the young age of 2, this brain wave scanning of P3 will allow said parents to find new ways to decrease the aggressive behaviors their child will exhibit.
However, I am unsure whether or not this will become a routine thing doctors check for in toddlers, or how much it will cost. Do you think it is necessary for parents to know this information? If you were a parent, would you want your child to undergo this “testing”? |
Kids and Teeth
Fluoride: Start Early
Ask us or your pediatrician to prescribe daily fluoride tablets for your child or daily fluoride drops for your infant. Store the fluoride in a safe place – treat it as medication. The dosages are as follows:
Infants to 2 years …………0.25mg of fluoride daily (drops)
2 to 3 years old ……………0.5mg of fluoride daily (tablets)
3 to 14 years old ………….
Baby Teeth: Very Important
It is very important to keep baby teeth healthy even though they eventually “fall out” to be replaced by permanent teeth.
- Growth: The bones of the jaw grow with the progressive eruption of baby teeth. Without them permanent teeth may be more crowded, crooked, even blocked out from erupting at all.
- Speech: Presence of front teeth is especially important for a young child learning how to speak in order to properly pronounce many sounds (th, s, f, etc.).
- Eating: Baby teeth are important for a child’s ability to chew and properly digest their food. Nutrition is extremely important during the growing years.
- Appearance: A child’s self-image (especially ages 4 to 7) is important in his or her development. Loss of teeth can make a child very self-conscious.
Bottle Mouth Syndrome:
“Bottle Mouth Syndrome” is characterized by decay of the upper front teeth caused by prolonged exposure to sugars and acids. To prevent this condition, avoid giving your child any sugary liquid in his or her bottle especially at nap-time and bed-time. This includes fruit juices, soda and even cow’s milk. Get your child used to water in the bottle, which will not harm their teeth.
- At Home: It is important to remove the plaque from the child’s teeth daily. Start from their first tooth. You can wipe the tooth clean with a wash cloth at bath time and lead into brushing as the child grows.
- Professional Care: It is recommended that your child visit us starting at about 2 to 3 years of age. |
Vitamin B1 for Kids: Understanding the Role of This Nutrient
- Vitamin B1 for kids is an essential nutrient with a variety of health benefits.
- Vitamin B1 deficiency can result in serious health conditions, such as beriberi.
- Vitamin supplements can ensure your child gets enough of the nutrient.
Vitamins play an important role in children’s health, growth, and development. It’s essential to know which ones they need and how to get them.
Vitamin B1 is one of the nutrients your kids can’t afford to miss. But how familiar are you with vitamin B1 for kids?
In this post, we’ll explore the basics of vitamin B1 for kids. Read on to learn what vitamin B1 is, how much vitamin B1 your kids need every day, the food sources of vitamin B1, and how to supplement it if necessary.
Vitamin B1 for Kids: The Basics
Vitamins are organic substances the body requires in small amounts to function well. Vitamin B1, also known as thiamine, is a water-soluble vitamin that belongs to the B complex group of vitamins.
Vitamin B1 is essential, meaning it can’t be produced by the body and must be obtained from the diet. Also, water-soluble vitamins can’t be stored by the body because they’re easily lost in the urine. Therefore, vitamin B1 must constantly be replenished to avoid its deficiency.
Before we look at how you can ensure your kids don’t miss out on this important nutrient, let’s see why they need it in the first place.
Benefits of Vitamin B1 for Kids
Vitamin B1 is important due to several reasons. For your kids to be healthy and full of energy, they need adequate amounts of vitamin B1.
The health benefits of vitamin B1 are:
1. Helps in Energy Production
When we eat, the food needs to be broken down to release the energy it contains. Vitamin B1 plays a role in the breakdown of carbohydrates and amino acids from food to produce energy that the body can utilize.
2. Needed for a Healthy Nervous System
Vitamin B1 is important for healthy nerve function. It helps in the synthesis of acetylcholine, a substance needed for communication between nerves.
3. Supports Cardiovascular Function
Vitamin B1 supports a healthy heart. Severe deficiency can impair cardiovascular function, as seen in the form of beriberi that affects the heart.
4. Supports the Immune System
Vitamin B1 strengthens the immune system by reducing inflammation. Inflammation is a biological process that’s damaging to the cells. Vitamin B1 boosts the body’s ability to withstand stressful conditions, which cause inflammation.
Whole Foods Rich in Vitamin B1
The best way to obtain vitamin B1 is through the diet, but that’s if you have access to these food sources and can get your kids to consume them. Vitamin B1 is usually found in protein-rich foods like legumes, whole grains, and nuts.
The food sources of vitamin B1 include:
- Green peas
- Sunflower seeds
- Flax seeds
- Brown rice
Are Your Kids Getting Enough Vitamin B1?
Since vitamin B1 can’t be stored by the body, regular intake is required to maintain adequate levels. The reference for how much vitamin B1 your kids need every day is provided by the Recommended Daily Allowance.
The Recommended Daily Allowance (RDA) refers to the average daily level of intake required to meet the nutritional requirements of a healthy person. The RDA is an important guide to how much of a nutrient your kids need to avoid deficiencies and remain healthy.
According to the National Institutes of Health, the RDAs for thiamine are:
- Birth to 6 months: 0.2 mg
- 7-12 months: 0.3 mg
- 1-3 years: 0.5 mg
- 4-8 years: 0.6 mg
- 9-13 years: 0.9 mg
- 14-18 years: 1-1.2 mg
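For illustration, the RDA table above can be encoded as a small lookup helper. This is just a sketch, not medical advice; the function name and the choice to return the 14-18 range as a (female, male) pair are our own:

```python
# Thiamine RDAs in mg/day, following the NIH table listed above.
# Hypothetical helper for illustration only.

def thiamine_rda(age_years):
    if age_years < 0.5:
        return 0.2          # birth to 6 months
    if age_years < 1:
        return 0.3          # 7-12 months
    if age_years <= 3:
        return 0.5
    if age_years <= 8:
        return 0.6
    if age_years <= 13:
        return 0.9
    if age_years <= 18:
        return (1.0, 1.2)   # 14-18 years: females, males
    raise ValueError("table covers birth through 18 years")

print(thiamine_rda(5))  # 0.6
```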
What Happens If Your Kids Lack Vitamin B1?
Though vitamin B1 is needed in very small quantities, the ramifications of vitamin B1 deficiency can be huge. One of the most serious health effects of thiamine deficiency is a disease called beriberi.
Beriberi is a condition that causes damage to both the cardiovascular and nervous systems. Though it’s rare in the U.S., if your child suffers from malnutrition or chronic diarrhea, they could be at risk of beriberi. Infants of breastfeeding mothers who lack enough vitamin B1 are also at risk.
The symptoms of vitamin B1 deficiency that you should look out for are:
- Increased heart rate
- Paralysis or numbness of the limbs
- Confusion and memory problems
- Visual impairments
- Muscle weakness
- Weight loss and anorexia
If you notice any of these symptoms, talk to your healthcare provider for medical advice.
Vitamin B1 Supplements for Kids
Vitamin B1 is available in a variety of foods, so you might be wondering why your kids would need vitamin B1 supplements.
Kids are usually at risk of vitamin B1 deficiency for several reasons. First, they have picky eating habits that might exclude important micronutrient sources. Second, their gut isn’t yet fully developed to absorb and process the nutrients from whole foods. Third, frequent illnesses like diarrhea make them lose vital nutrients from the body.
Vitamin B1 supplements can come in handy to help your kids achieve adequate levels of the nutrient. However, you should always consult your healthcare provider before giving your kids any supplements.
And with so many brands on the market, it can be difficult to figure out which one is best for your child. The U.S. Food and Drug Administration (FDA) regulates dietary supplements differently than food and drugs, so make sure you do your research to check that the brand you’re buying from is trustworthy, and read the product label carefully.
Only go for products from reputable brands with an established track record. The three things you should look out for are:
- Quality and safety: Good supplements are made from natural, healthy ingredients with little or no added artificial ingredients.
- Third-party testing: Third-party testing adds an extra layer of quality control.
- Dosage: Supplement overdose can have side effects. Read supplement product labels carefully and choose products formulated for children.
With this in mind, where can you get the best high-quality supplements for your kids? Llama Naturals is here for you. Our whole-fruit gummy vitamins are made from real fruit and vegetables, slow-cooked to retain their goodness.
Llama Naturals Multivitamin Gummies: Your Best Choice
Thanks to Llama Naturals, getting the best high-quality supplements has never been this easy. Llama Naturals’ whole-fruit gummy vitamins are packed with 13 essential plant-based vitamins, including folate, vitamin A, vitamin C, and vitamin B1 for kids, carefully formulated for optimal absorption.
Our gummies are made from whole fruits and vegetables, slow-cooked to retain the rich phytonutrients and fruit flavor. They contain no added sugar, artificial sweeteners, flavors, or synthetic vitamins that are hard to absorb. Yet, they still taste great.
Each serving of Llama Naturals gummies delivers nature’s goodness in delicious, bite-sized, and chewable portions that your kids will absolutely love. To top it all off, the gummies are fully organic and vegan without any unhealthy additives.
Vitamin B1 for Kids: The Healthy Start Your Kids Deserve
Vitamin B1 is one of the nutrients your child needs to be healthy and full of energy. Without it, they’re at risk of serious health conditions.
Luckily, it doesn’t have to reach that point. Beyond encouraging a healthy diet, vitamin supplements are also available to ensure your child gets enough of this nutrient. If you’re looking for somewhere to start, Llama Naturals has you covered. Choose Llama Naturals’ supplements for the healthy start your little one deserves.
Llama Naturals is a plant-based nutrition brand that has created the World's First Whole Fruit Gummy Vitamins that are made with no added sugar and whole-food vitamins. They are USDA Organic, Vegan, Gluten Free, free of common allergens, and are slow-cooked on low heat to retain rich phytonutrients & fruit flavor. It’s a win-win gummy vitamin that the whole family will love. |
Time for a post about geometry, which I tutor in addition to algebra and many other subjects.
I especially enjoy helping students learn how to do proofs, which I find is the hardest area of geometry for most kids.
Recently I came up with an analogy to help students understand the special usefulness of definitions in geometric proofs.
The analogy is: Definitions are like reversible coats.
What? … you say.
Coats. Reversible coats. As in two for the price of one.
Similarly with definitions: you get two IF-THEN statements for the price of one when you work with a definition.
Here’s what I mean.
First consider a “standard theorem” in geometry, viewed in the IF-THEN format.
Theorem: IF two angles are complements of the same angle, THEN they are congruent.
Notice that the converse of this statement doesn’t make much sense:
IF two angles are congruent, THEN they are complements of the same angle. (What other angle? We haven’t even mentioned another angle!)
But when it comes to definitions, you can:
a) First, turn the definition into an IF-THEN statement, and
b) Secondly, you can flip that IF-THEN statement around, and this new statement, called the “converse,” will always be true. You can bank on it!
Example of a definition: A right angle is an angle that measures 90 degrees.
And here’s one IF-THEN statement that flows out of this definition:
1) IF an angle is a right angle, THEN it measures 90 degrees.
But notice that the converse is also true:
2) IF an angle measures 90 degrees, THEN it is a right angle.
Let’s try this again, for the definition of perpendicular lines.
Definition: Two lines are perpendicular if they form four right angles.
First IF-THEN statement:
1) IF two lines are perpendicular, THEN they form four right angles.
Second IF-THEN statement, the converse:
2) IF two lines form four right angles, THEN the lines are perpendicular.
I am wondering if you are wondering why this is true. Why is it that, for definitions, both the statement and its converse are always true? The reason, I believe, has to do with the nature of a definition. With a definition, we are giving a name to some geometrical object, and stating what we consider to be the defining characteristic of that object.
To take a nonsensical example, suppose that you live in a world that has objects called “Snurfs,” which are measured in units called “Goobles.” Now imagine that some of the Snurfs are special because they have a measure of 100 Goobles. This fact makes these Snurfs so special that you wind up talking about them a lot. And because you talk about them a lot, it is helpful to give them a name. So you do give them a name; you decide to call them “Wombats.” What this means is that anytime a Snurf has a measure of 100 Goobles, you will call it a Wombat. And anytime you see the thing you call a Wombat, you can be sure that it will have a measure of 100 Goobles. For that is just what you have decided the word Wombat will mean. Based on this, you put forth the formal definition:
A Wombat is a Snurf with a measure of 100 Goobles.
Given this definition, notice that you can create two IF-THEN statements:
1) IF a Snurf is a Wombat, THEN its measure is 100 Goobles.
And you can also state the converse, and it will be true:
2) IF a Snurf has a measure of 100 Goobles, THEN it is a Wombat.
To me, this is how definitions work. They involve people noticing something they are talking about, and they decide to give it a name so they can talk about it more easily. When they define what the word means, they attach the word to the primary characteristic of this thing, and through this act, the word is born, and along with it, its definition.
Anyhow, in terms of doing geometry, the important thing to keep in mind is that all definitions can be used reversibly. So, going back to the example of the right angle, here’s what this means.
If, in the course of a proof, you establish that a particular angle is a right angle, you can conclude that the measure of this angle is 90 degrees. Reason: Definition of a right angle.
And similarly, if in a proof you establish that a particular angle has a measure of 90 degrees, then you can conclude that this angle is a right angle. Reason: Definition of a right angle.
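In code terms, a definition behaves like a single test used in both directions. Here is a playful sketch (the angle values are just examples):

```python
# A definition attaches a name to a defining property, so the IF-THEN
# statement and its converse are the same check read in two directions.

def is_right_angle(measure_deg):
    # Definition: a right angle is an angle that measures 90 degrees.
    return measure_deg == 90

# Forward: if an angle is a right angle, then it measures 90 degrees.
assert is_right_angle(90)
# Converse: if an angle does not measure 90 degrees, it is not a right angle.
assert not is_right_angle(60)
print("both directions hold")
```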
This reversibility factor is why, when you read through geometric proofs, you will notice that “Definition of … ” is used quite often as a reason for steps. Because they are logically reversible, definitions are TWICE as useful as standard theorems. |
by Danielle Bodicoat
Statistics can tell us a lot about our data, but it’s also important to consider where the underlying data came from when interpreting results, whether they’re our own or somebody else’s.
Not all evidence is created equally, and we should place more trust in some types of evidence than others.
In medical research, there’s a well-known evidence hierarchy that ranks the main types of evidence from strongest to weakest.
The hierarchy basically shows that the best quality evidence we have comes from systematic reviews, followed by trials, then observational studies. Expert opinion is the lowest form of evidence. Whilst this hierarchy, and some of the specific study types, are mostly used for medical research, the concept translates well to other disciplines.
Below, we’ll walk through each level of the hierarchy, what it is and how to analyze it.
But there’s a caveat!
The quality of the evidence will also depend on how well the study is conducted. So, for example, a large, well-conducted trial might be better than a poorly-conducted, biased systematic review.
For this article, we’ll assume everybody has done a great job and we’re talking about well-conducted studies.
Systematic reviews
Systematic reviews are a specialist type of literature review. We’re essentially trying to find all of the available evidence on a particular research question. The evidence might be published or unpublished (grey literature).
We then combine all of that evidence either qualitatively (narrative review) or quantitatively (meta-analysis) to get a definitive answer to our research question.
This type of evidence is top of the hierarchy because systematic reviews are:
- Objective – there should be no opinion or selection bias involved when choosing which evidence to include in a systematic review
- Comprehensive – includes all of the evidence on a topic
- Precise – a review should answer a very specific research question
- Reproducible – if somebody else followed the same methodology then they should get the exact same answer.
Trials
Trials are tests or experiments designed to answer a specific research question. They have an experimental and control group, and units of observation (such as people) are allocated randomly to each group. This random allocation, along with some other good practices, helps to keep trials unbiased, and that’s why they appear second in the hierarchy.
As with any analysis that we do, lots of different things will affect the approach that we take. However, the design of trials means that often we can use fairly simple statistical methods since there may not be any confounders to adjust for.
The main exception to this is where the randomization has been stratified, in which case you will need to adjust for the stratification factors in your analysis.
Cohort studies
In this type of study, we take a group of people (or whatever else we’re interested in) with a characteristic or exposure that we’re interested in, and a group without that characteristic. We then follow them up for a period of time to see whether our outcome of interest develops more often in the exposed group than the unexposed group.
This is the strongest form of non-experimental evidence that we have, because we follow unbiased groups (i.e. when we start the study we have no idea who will develop the outcome of interest).
This design works best where you have a fairly common outcome, otherwise you wouldn’t have any events to analyze. It can also be a great design when you’ve got a rare exposure so that you can make sure you have plenty of exposed people in your study.
People will be followed for different lengths of time. Some will choose to withdraw from the study. You’ll lose touch with others and not be able to find out whether the outcome occurred. Some will develop the outcome of interest at which point you may stop following them up.
We need to account for these differences in follow-up time in our analysis, so we’ll typically use approaches that allow us to include it, such as survival analysis or a comparison of the incidence rates in the exposed and unexposed groups to estimate an incidence rate ratio.
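As a sketch, an incidence rate ratio is simply the ratio of events per person-year in the two groups. The counts below are made up for illustration:

```python
# Hypothetical follow-up data: events and person-years per group.
exposed_events, exposed_years = 30, 1000.0
unexposed_events, unexposed_years = 10, 1000.0

rate_exposed = exposed_events / exposed_years        # 0.03 events per person-year
rate_unexposed = unexposed_events / unexposed_years  # 0.01 events per person-year

irr = rate_exposed / rate_unexposed
print(irr)  # 3.0: the outcome occurs three times as often in the exposed group
```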
Case-control studies are sort of the opposite of cohort studies in that we select a group with our outcome of interest (cases), and a group without it (controls). We then look back to see whether the cases were more likely than controls to have been exposed to the potential causal factor that we’re interested in.
Case-control studies work best for rare outcomes and common exposures. Our outcome here is the binary case/control status so this type of study is typically analyzed using logistic regression.
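As a sketch (with invented numbers): the odds ratio from a 2×2 case-control table equals the exponentiated exposure coefficient that a logistic regression of case/control status on a single binary exposure would return, so the simplest version needs no model at all:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 table; with one binary exposure this equals
    exp(coefficient) from the corresponding logistic regression."""
    return ((exposed_cases / unexposed_cases) /
            (exposed_controls / unexposed_controls))

# Hypothetical data: 40 of 100 cases exposed, 20 of 100 controls exposed
or_ = odds_ratio(40, 60, 20, 80)
print(f"OR = {or_:.2f}")
```

The full regression becomes necessary once you need to adjust for other variables, such as matching factors or confounders.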
In this type of study, we just look at a single point in time to get a ‘snapshot’ of what is happening. This means that everything is measured at the same point in time, although we can ask about the past.
They are particularly useful for measuring prevalence, i.e. how common something is within a population of interest. There is no time element to include in the analyses so again they are typically analyzed using a generalized linear model, though as always your choice of analysis will depend on your research question.
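Prevalence itself is just the proportion with the condition at the time of the snapshot; a minimal sketch with invented survey numbers, using a normal-approximation 95% confidence interval:

```python
import math

def prevalence_with_ci(cases, n):
    """Point prevalence with a 95% CI (normal approximation, adequate
    when cases and non-cases are both reasonably large)."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - 1.96 * se, p + 1.96 * se

# Hypothetical cross-sectional survey: 120 of 1,500 people have the condition
p, lo, hi = prevalence_with_ci(120, 1500)
print(f"Prevalence = {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```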
Case-series and expert opinion
In case-series, everybody with an exposure, or outcome, of interest is included in a study. They are typically used in medical research and are often based on medical notes from one hospital.
Because everybody with the exposure or outcome is included, there is no comparator group, and so it isn’t possible to calculate a relative risk. Case-series are often described using a narrative review, rather than analytical methods.
Similarly, expert opinion papers often don’t include any analysis. Whilst they can be very helpful in terms of providing context, they are subjective in nature and so they don’t provide a strong form of evidence.
Danielle Bodicoat works with health researchers helping them to get confident with using statistics to analyze their data. She’s an escaped academic now working as a medical statistics consultant through her company, Simplified Data. She has spent nearly 15 years designing, conducting and supervising statistical analyses, and has 80+ peer-reviewed publications.
Chemistry - Equilibrium
What is a chemical equilibrium?
In a chemical reaction, chemical equilibrium is the state in which both reactants and products are present in concentrations which have no further tendency to change with time. Usually, this state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
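This can be stated formally. For a generic reversible reaction, equilibrium means the forward and reverse rates are equal, which fixes the ratio of concentrations at the equilibrium constant:

```latex
% Generic reversible reaction
aA + bB \rightleftharpoons cC + dD

% At equilibrium the forward and reverse rates are equal, so the
% concentrations settle at the equilibrium constant
K_c = \frac{[C]^{c}\,[D]^{d}}{[A]^{a}\,[B]^{b}}
```

A large \(K_c\) means products dominate the equilibrium mixture; a small \(K_c\) means reactants do.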
The native Arawak peoples of the Caribbean have provided the world with bountiful products and words—tomatoes, potatoes, and, you guessed it—hammocks.
While it is impossible to know precisely where and when hammocks first originated, it’s clear that indigenous communities stretching all throughout what is today Latin America regularly used them for work and leisure long before Europeans arrived.
A Tree, A Net, A Bed
Sailing up to the New World, Columbus and his crew encountered the indigenous Arawak peoples on the island of Hispaniola (today’s Haiti and Dominican Republic), who called their hanging beds ‘amaca’ in their Taíno dialect. ‘Amaca’ referenced the local hamac trees, the bark of which was used to weave not only the hammocks in that region, but also the fishing nets. The same word, ‘amaca,’ was also used to refer to these nets, hinting that perhaps a tired fisherman strung his up one hot afternoon and decided it looked like a tempting spot for a nap. And, voilà! The hammock we know today was born.
Columbus returned to Europe bearing news not only of the New World he’d encountered, but also never before seen resources and goods, including the hammock.
Hammocks Invade Europe
The travelogues and letters of subsequent European explorers, conquerors, and colonizers of the New World are filled with references to netted, hanging beds. It was clear that hammocks were widely used by the native peoples from the Yucatán of Mexico down to Brazil, who constructed them in various styles and materials—fibers from hamaca, palm, and sisal plants, and even cotton.
The first written usage of the word ‘hammock’ appears in the letters of Bartolomé de Las Casas. Writing about the native peoples of Hispaniola, he observed:
“They lye on a coarse Rug or Matt, and those that have the most plentiful Estate or Fortunes, the better sort, use Net-work, knotted at the four corners in lieu of Beds, which the Inhabitants of the Island of Hispaniola, in their own proper Idiom, term ‘Amacas.’”
Hamacas, Hammacks, and Hängemattes
By the mid-1600s, ‘amaca’ had transformed into ‘hamaca’, officially entering the Spanish language.
As the hamaca spread across Europe, so did various linguistic forms of the word itself. Soon it appeared in an anglicized form, ‘hammock’, which we use today. The word first appeared in the English Dictionary of Sea Terms in 1841, for it was sailors who most appreciated and used these leisurely and functional devices.
In the past, some have argued that ‘hammock’ derives from Northern Europe, where words such as ‘Hängematte’ (German), ‘hängmatta’ (Swedish), and ‘hangmat’ (Dutch) have long been in use. It seems obvious that these words are descriptive—that is, they’re all ‘hanging mats.’ But this has been disproven as folk etymology. Rather, Europeans modified the original ‘hamaca’ in a way that created meaning in their own tongue.
So What’s In A Name?
Today, across Latin America, hammocks no longer double as fishing nets, nor are they made of the itchy bark of a hamaca tree. From Colombia to Brazil to Mexico, the art of hammock-making has been cultivated and refined. And rest assured that nearly anywhere you travel, ask for a hammock, hamaca, or a Hängematte, and your meaning is sure to get across!
Intervention is also how scientists figure out if there is a cause-and-effect relation in the world. Scientists first come up with ideas or theories of how the world works. These theories are often beliefs about a cause-and-effect relation. For example, does room temperature cause a change in a person’s mood, does the shape of an airplane’s wing make it faster or slower, or does stress cause medical problems?
Scientists then test out their theories by acting on a potential cause to see if it has an effect. They perform an intervention on the potential cause to see if the effect will happen. In other words, they do an experiment. This is just like when children push the button on a newly purchased toy to see if that turns it on. Children have a belief that the button causes the toy to turn on. They test this belief by performing an intervention. They push the button to see what happens. If it works, then the child can conclude that the button has a cause-and-effect relation with the toy turning on. In the same way, a scientist can change the shape of a plane’s wing to see if it goes faster. If it works, then the scientist can conclude that the shape of the plane’s wing has a cause-and-effect relation with how fast the plane flies.
According to a recent study prepared by the University of Hawaii and published in the scientific journal Nature on October 10th, unprecedented global changes are about to happen. Scientists predict that the Earth’s climate will shift beyond the variability of the past 150 years as early as 2047 under the business-as-usual scenario, or 2069 if greenhouse gas (GHG) emissions are stabilized. The “Projected Timing of Climate Departure from Recent Variability” study adds new information supporting the global importance of switching to fossil-fuel-free energy as soon as possible.
Historic variability and predictions for the next 100 years
Scientists have estimated the pace of climate change by calculating an index based on the minimum-maximum range of mean annual temperatures from 1860 to 2005 (the “historic variability”). The data comes from 39 Earth System Models developed independently by 21 climate centers in 12 different countries. The models have been effective at reproducing current climate conditions and varied in their projected departure times by no more than five years, which supports the study’s validity.
Starting from this historic variability, the scientists projected the temperatures of the next century, to discover the year when the mean annual temperature will exceed historic boundaries, defined as the year of the climate departure. The calculation method works for “any given location”, and projections were created covering the world’s major cities.
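As a simplified, hypothetical sketch of that logic (not the study's actual code), one can take the historic maximum of the 1860-2005 mean annual temperatures and then scan the projections for the first year after which the series never drops back within historic bounds:

```python
def climate_departure_year(years, temps, hist_start=1860, hist_end=2005):
    """First year after which every subsequent mean annual temperature
    exceeds the historic maximum (a simplified reading of the study's index).
    Returns None if the series keeps returning within historic bounds."""
    hist_max = max(t for y, t in zip(years, temps) if hist_start <= y <= hist_end)
    candidate = None
    for y, t in zip(years, temps):
        if y <= hist_end:
            continue
        if t > hist_max:
            if candidate is None:
                candidate = y  # possible departure year
        else:
            candidate = None   # dipped back inside historic bounds; reset
    return candidate

# Toy series: flat historic record at 15.0 C, then steady warming after 2005
years = list(range(1860, 2101))
temps = [15.0 + (0.02 * (y - 2005) if y > 2005 else 0.0) for y in years]
result = climate_departure_year(years, temps)
print(result)
```

The actual study applies this kind of bound-crossing test per location and per climate variable, across 39 models and both emissions scenarios.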
Mean annual earth’s near-surface air temperature calculations were essential, but the study also integrated additional climate variables such as precipitation and evaporation. The ocean’s surface temperature and pH were also taken into account to determine the biodiversity implications of climate change.
“Business-as-usual” versus stabilized GHG scenarios
There are two major scenarios for the climate’s evolution, depending on the future concentration of GHG in the atmosphere. If the concentration continues to grow, reaching 936 CO2 parts per million (ppm) by 2100 (the “business-as-usual” scenario, RCP 8.5), the climate departure year will occur sooner. If the atmosphere’s CO2 concentration stabilizes at 538 ppm (RCP 4.5, the “optimistic” scenario), the process will obviously be delayed. However, calculations show that the difference between the two climate departure times is usually not longer than two decades.
Main results: tropics will be affected first, within a decade
Depending on the exact location on Earth, climate departure timing varies, but the overall figures are alarming. With the global average year of climate departure predicted just 34 years from now, major changes will occur much sooner for countries such as Indonesia – the city of Manokwari’s projected year is 2020 under the stabilized GHG scenario and 2025 for the worst-case scenario. India’s average projected year is 2045 / 2069 (optimistic / business-as-usual scenario), with major cities such as Mumbai having an earlier timing (2034 / 2051). Countries located along the tropics will be affected first: the Nigerian city of Lagos will experience unprecedented climate warming in 2029 / 2043; Colombia’s capital Bogota’s projected years are 2033 / 2047; for Mexico City, the projected figures are 2031 / 2050.
The explanation why these countries are affected sooner lies in the existing geographic and climatic features of the tropics. This area has almost constant, hot and humid conditions that have registered little variation over the centuries. Therefore, their narrow climate boundaries can easily be exceeded by the fast changes in the earth’s atmosphere. To make matters worse, “small but fast changes in the climate could induce considerable biological responses in the tropics, because species there are probably adapted to narrow climate bounds”. This phenomenon will affect the area’s biodiversity in an unprecedented way, with most species forced to rapidly adapt to the new changes, either by migrating to cooler areas or by going extinct.
Implications of the study’s predictions and other views
This new research predicts that climate change will cause severe “ecological and societal disruptions”, due to the complexity of the worldwide symbiosis between natural resources and man-made systems. “The impacts on the tropics have implications globally as they are home to most of the world’s population, contribute significantly to total food supplies, and house much of the world’s biodiversity”. A food crisis is one of the most direct implications, but considering the whole food supply chain (from the fuel needed to transport food to the supermarket to the energy needed to process and preserve it under steady-state conditions in our refrigerators), prices will probably go up dramatically. This overarching phenomenon is already happening, and nowadays everybody seems to be trying to cope with the ever-increasing cost of daily life. Financial sites such as money.co.uk offer advice on reducing energy bills, by shopping for energy-efficient electrical appliances, for instance, as this is a common problem. Although the correlation between global warming and rising costs may be criticized by skeptics, the Nature study is not the only warning issued recently. The Intergovernmental Panel on Climate Change (IPCC) published its Fifth Report in September 2013, warning that following a predicted 5 – 9 centigrade increase in global mean temperatures, a food crisis is imminent. The report also states a 95% certainty that global warming is the product of anthropogenic GHG emissions.
USA’s projected climate departure timing
According to the University of Hawaii’s study, the USA’s average climate departure year is 2051 / 2078 (stabilized atmosphere / business-as-usual scenarios), with Honolulu having the earliest timing (2043 / 2047) and Anchorage the latest (2071 / 2095). Depending on their location, major US cities will be affected by the unprecedented shift beyond recent climate variability as early as 2043 / 2073 (Phoenix, Arizona) or 2046 / 2075 (San Diego). For Washington DC and New York, the projected year of climate departure has been calculated as 2047 / 2071, while cities like Austin and Dallas will experience the phenomenon later (2058 / 2090 and 2063 / 2093). Across the nation, major changes will occur affecting life at all levels.
The need to develop alternative energy solutions across the world is becoming crucial in the light of this new research. This unprecedented climate shift will affect our lifestyle severely in a few decades, and the phenomenon will continue, affecting the future. As shown by the study, in order to delay the global implications of this phenomenon started in the industrial era, it is critical that we reduce greenhouse gas emissions as soon as possible.
Author: Melissa Whitehall
A study tool for improving English pronunciation for Cantonese-speaking students
The proposed project designs a study tool for improving Cantonese-speaking students’ English pronunciation by making them aware of the differences in the sounds and sound systems of the two languages. Many Cantonese-speaking students, after taking many English lessons, nonetheless pronounce the sounds of English with a heavy Cantonese accent. Heavily accented speech not only affects communication to varying degrees; it may also stigmatize the speaker. The sound systems of English and Cantonese differ in many ways. A significant difference is that while English is a stress-based, polysyllabic language, Cantonese is a tonal, monosyllabic language. The two languages also differ in sound inventory, arrangement of sounds in sequence, and syllable structure. Teachers of English are generally not familiar with the sound system of Cantonese and are unable to diagnose and effectively correct students’ English pronunciation mistakes. The study tool imparts to students the knowledge of the sound systems of English and Cantonese and their differences. Students are made aware that it is the substitution of Cantonese sounds and tone for English sounds and stress that causes the Cantonese accent in their spoken English. The expected result is that students will be able to self-correct their English pronunciation errors. The study tool will contain (1) a concise summary of information on (i) the sounds and sound systems of English and Cantonese and their differences, including the sound inventories, articulatory characteristics of the sounds, sequential constraints of the sounds, and syllable structures of the two languages, and (ii) the nature of the stress system in English and the tone system in Cantonese and the differences between the two systems; (2) all the possible cases of mispronunciation of English sounds and errors in English stress patterns; and (3) a guide for correcting the mispronunciations and errors.
Why is wombat scat shaped like a cube?
The Scoop on Wombat Poop
Members of the animal family Vombatidae, there are three species of wombat: common wombat, southern hairy-nosed wombat and northern hairy-nosed wombat. They're all native to Australia and some of the surrounding islands and have nothing in common with bats that fly around at night. In fact, these rotund, furry mammals resemble overgrown prairie dogs of the Great Plains in the United States.
Along with koalas and kangaroos, wombats are marsupials. Like koalas, female wombats have pouches that face to the rear of their bodies, in which they carry and nurse their young. The orientation of the pouches keeps them free of dirt while they dig. Wombats are sizable animals, weighing up to 88 pounds (40 kilograms) and measuring up to 4 feet (1.2 meters) in length. When necessary, wombats can sprint up to 25 mph (40 kph) [source: Animal Planet].
As nocturnal animals, wombats are active at night, foraging for grasses, roots and bark. Rivaling the reputation of sloths, wombats may sleep up to 16 hours every day [source: Jackson]. Also like the sloth, this sleep pattern is influenced by their herbivorous -- and thus low energy-yielding -- diet, which requires them to conserve energy. In fact, while sleeping, the wombat's metabolism may drop to two-thirds of its normal rate [source: Animal Planet].
Dwarfing petite prairie dogs, the wombats' body size makes them the largest burrowing mammal in the world. Using their front claws to dig burrows, wombats spend most of the day underground, away from the sun. Wombats may dig out a series of up to 30 underground, interconnected burrows, referred to as warrens. Southern hairy-nosed wombats are known to share space within warrens, but generally won't occupy the same burrow -- sort of like how family members have their own bedrooms in one house. That's because wombats are highly territorial, preferring to spend most of their time alone, except during mating season.
As we mentioned earlier, animal droppings are natural ways of marking territory to prevent confrontation and promote mating. In addition to scent markings, or scents produced by the hormones that animals release, wombats leave their cube-shaped scat as territorial signposts on the tops of rocks and logs. That distinct shape is beneficial since the flat sides of the cubes keep the droppings in place on their precarious locations. If wombat poop were rounded, like that of koalas, it would probably roll off its intended drop point. And since wombats can produce between 80 and 100 pellets per day, stray scats could lead to a lot of disgruntled wombats [source: Wombat Protection Society].
While it still ranks among one of the most unpleasant products of our biological functions, poop certainly serves a purpose in the wild. Bringing it into your home on the bottom of your shoe is another matter, however.
The viburnum leaf beetle, Pyrrhalta viburni, is a small brown insect native to Eurasia that was accidentally introduced into North America during the early 1900s. It was first detected in the U.S. in 1994, and has been rapidly expanding its range ever since. It is now (Sept. 2008) found throughout New England, all of New York state and into western Pennsylvania, Ohio and Michigan. A separate invasion is taking place in the Pacific Northwest.
June 5, 2015
Both the adult and larval stages are voracious eaters that can defoliate viburnum shrubs entirely. Plants may die after two or three years of heavy infestation. The most susceptible species of Viburnum happens to be Viburnum dentatum, commonly known as arrowwood.
Viburnum leaf beetles only infest viburnums. They complete just one life cycle each year. Adult females lay up to 500 eggs on viburnum twigs in summer and early fall. The eggs overwinter and hatch in spring. Larvae feed on foliage until early summer, then crawl down the shrub and pupate in the soil. Adults emerge from the soil in midsummer, feed again on viburnum foliage, and mate. Egg hatch to adult takes just 8 to 10 weeks.
Beetles in the family Chrysomelidae are commonly called leaf beetles. It is the largest of the beetle families among the phytophagous (plant-eating) beetles; chrysomelids are second in number of species only to the weevils (family Curculionidae). There are as many as 35,000 described species and perhaps up to 60,000 total species. Presently, the Chrysomelidae are classified in 195 genera and approximately 1,720 valid species and subspecies (plus 149 Bruchinae species) accepted as occurring in North America north of Mexico.
Leaf beetles feed strictly on plant materials. The adults usually consume leaves, stems, flowers, and pollen. Most larvae are subterranean in habit, feeding on roots and rootlets, but others will consume foliage as well. Many chrysomelids are very specific to particular host plants, but most are able to live on a variety of plants, e.g., the so-called dogbane leaf beetle, Chrysochus auratus, which feeds on prairie plants such as milkweed (Asclepias sp.) and plants in the dogbane genus Apocynum.
Order Coleoptera: Beetles are the dominant form of life on earth: one of every five living species is a beetle. Coleoptera is the largest order in the animal kingdom, containing a third of all insect species. There are about 400,000 known species worldwide, ~30,000 of which live in North America. Beetles live in nearly every habitat, and for every kind of food, there's probably a beetle species that eats it.
What is toxic shock syndrome?
Toxic shock syndrome (TSS) describes a cluster of symptoms that involve many systems of the body. The following bacteria commonly cause TSS:
- Staphylococcus aureus
- Streptococcus pyogenes
TSS from Staphylococcus infections was identified in the late 1970s and early 1980s, when highly absorbent tampons were widely used by menstruating women. Due to manufacturing changes in tampons, the incidence of tampon-induced TSS has declined.
TSS from Streptococcus infections is most commonly seen in children and the elderly. Other populations at risk include individuals with diabetes, HIV, chronic lung disease, or heart disease.
What are the possible causes of TSS?
- history of using super-absorbent tampons
- surgical wounds
- a local infection in the skin or deep tissue
- history of using the diaphragm or contraceptive sponge
- history of childbirth or abortion
Treatment for toxic shock syndrome:
Specific treatment will be determined by your physician based on:
- your age, health, and medical history
- extent of the disease
- your tolerance for specific medications, procedures, or therapies
- expectations for the course of the disease
- your opinion or preference
Seagrasses are plants with roots, stems and leaves adapted to living in the marine environment and capable of producing flowers, fruits and seeds. These plants are more evolved and complex than seaweeds, which have a simpler structure, although the two are often confused.
Seagrass beds occur extensively in shallow waters and can reach depths of 40m or more, if the environmental conditions permit photosynthesis. Present in warm and temperate waters, they form "seagrass meadows". There are 60 known species of seagrasses in the world that can create meadows, although the European coasts harbour mainly four species, all of which are present on the Spanish coasts: Zostera marina, Zostera noltii, Cymodocea nodosa and Posidonia oceanica. In addition, there is a fifth species present on the coasts of the Canary Islands, Halophila decipiens, which mainly occurs in African waters.
By Dana Ullman MPH
(Excepted from Discovering Homeopathy: Medicine for the 21st Century, North Atlantic Books)
- The Homeopathic and Ecological View of Infectious Disease
- Are Antibiotics Helpful in Ear and Throat Infections?
- Homeopathic Treatment of Infectious Disease
- Homeopathic Treatment of Viral Conditions
Towards the end of his life, Louis Pasteur confessed that germs may not be the cause of disease after all, but may simply be another symptom of disease. He had come to realize that germs seem to lead to illness primarily when the person’s immune and defense system (what biologists call “host resistance”) is not strong enough to combat them. The “cause” of disease is not simply a bacterium but also the factors that compromise host resistance, including the person’s hereditary endowment, his nutritional state, the stresses in his life, and his psychological state. In describing one of his experiments with silkworms, Pasteur asserted that the microorganisms present in such large numbers in the intestinal tract of the sick worms were “more an effect than a cause of disease.” (1)
With these far-reaching insights Pasteur conceived an ecological understanding of infectious disease. Infectious disease does not simply have a single cause but is the result of a complex web of interactions within and outside the individual.
An analogy to help develop an understanding of the ecological perspective of infectious disease can be developed from the situation of mosquitoes and swamps. It is commonly known that mosquitoes infest swamps because swamps provide the still waters necessary for the mosquitoes to lay their eggs and for them to hatch without disruption. In essence, swamps are a perfect environment for the mosquitoes to reproduce.
A farmer might try to rid his land of mosquitoes by spraying insecticide over the swamps. If lucky, he will kill all the mosquitoes. However, because the swamp is still a swamp, it is still a perfect environment for new mosquitoes to fly in and to lay their eggs. The farmer then sprays his insecticide again, only to find that more mosquitoes infest the swamp. Over time, some mosquitoes do not get sprayed with fatal doses of the insecticide. Instead, they adapt to the insecticide that they have ingested, and with each generation they are able to pass an increased immunity to the insecticide on to their offspring.
Soon, the farmer must use stronger and stronger varieties of insecticide, but as the result of their adaptation, some mosquitoes are able to survive, despite exposure to the insecticide. Similarly, finding streptococcus in a child’s throat does not necessarily mean that the strep “caused” a sore throat, any more than one could say that the swamp “caused” the mosquitoes. Streptococcus often inhabits the throat of healthy people without leading to a sore throat. Symptoms of strep throat only begin if there are favorable conditions for the strep to reproduce rapidly and aggressively invade the throat tissue. Strep, like mosquitoes, will only settle and grow in conditions which are conducive to them.
The child with the strep throat generally gets treated with antibiotics. Although the antibiotics may be effective in getting rid of the bacteria temporarily, they do not change the factors that led to the infection in the first place. When the farmer sprays with insecticide or the physician prescribes antibiotics but doesn’t change the conditions which created the problem, the mosquitoes and the bacteria are able to return to those environments that are favorable for their growth.
To make matters worse, the antibiotics kill the beneficial bacteria along with the harmful bacteria. Since the beneficial bacteria play an important role in digestion, the individual’s ability to assimilate necessary nutrients to his body is temporarily limited, ultimately making him more prone to reinfection or other illness in the meantime.
Marc Lappé, PhD, University of Illinois professor and author of When Antibiotics Fail, notes that, “When these more benevolent counterparts die off, they leave behind a literal wasteland of vacant tissue and organs. These sites, previously occupied with normal bacteria, are now free to be colonized with new ones. Some of these new ones have caused serious and previously unrecognized diseases.” (2)
Some clinicians have found that inappropriate antibiotic usage can transform common vaginal “yeast” infections (candida albicans), which are characterized by simple itching, into a system-wide candida infection which can cause a variety of acute and chronic problems. (3) Although the diagnosis of “systemic candidiasis” is controversial, there is general consensus that frequent antibiotic use can also transform bacteria that normally live in our bodies without creating any problems into irritating and occasionally serious infections in the elderly, the infirm, and the immunodepressed. (4)
And of course, the bacteria learn to adapt to and survive antibiotics. Scientists then must slightly change the antibiotics (there are over 300 varieties of penicillin alone), or make stronger and stronger antibiotics (which generally also have more and more serious side effects). Despite the best efforts of scientists, Dr. Lappé asserts that we are creating many more germs than we are medicines, since each new antibiotic brings to life literally millions of Benedict Arnolds.
Just 15-20 years ago penicillin was virtually always successful in treating gonorrhea. Now there are gonorrhea bacteria which have learned to resist penicillin, and these bacteria have now been found in all fifty states as well as throughout the world. From 1983 to 1984 alone the number of cases in the U.S. with resistant strains of gonorrhea doubled. (5)
Alexander Fleming, the scientist who discovered penicillin, cautioned against the overuse of antibiotics. Unless the scientific community and the general public heed his warning, Harvard professor Walter Gilbert, a Nobel prizewinner in chemistry, asserted, “There may be a time down the road when 80% to 90% of infections will be resistant to all known antibiotics.” (6)
The scientific community and the general public have ignored the insights of the late Pasteur and have ignored the importance of host resistance in preventing illness. Most scientists broadly accepted the germ theory, while only rare individuals have since acknowledged the importance of the ecological balance of microorganisms in the body. But the wisdom of Pasteur remains relevant, and more and more scientists are beginning to acknowledge the importance of alternatives to antibiotics. Even an editorial in the prestigious New England Journal of Medicine affirmed the need for the treatment of infections with “less ecologically disturbing techniques.” (7) Homeopathic medicines will inevitably play a major role as one of these alternatives.
Claude Bernard, the esteemed “father of experimental physiology,” affirmed Pasteur’s contention that bacteria are not the cause of disease. In his most famous book, An Introduction to the Study of Experimental Medicine, Bernard said, “If the exciting cause were the principal factor, for instance, in pneumonia, everyone exposed to cold would come down with this disease, whereas only an occasional case of chill turns into pneumonia. Unless the subject is predisposed, the most powerful causes will have no effect on him. Predisposition is the ‘pivot of all experimental physiology’ and the real cause of most disease.” (8)
At a health conference in 1976 Jonas Salk noted that there are basically two ways to heal sick people. First, one can try to control the individual symptoms the sick person is experiencing, and second, one can try to stimulate the person’s own immune and defense system to enable the body to heal itself. (9) Whereas conventional medicine’s allegiance is to the first approach, homeopathy and a wide variety of natural healing systems attempt the latter.
A good example of the questionable value of antibiotic use is their application in children’s earaches. Ear infection has become one of the most common childhood illnesses. The infection of the middle ear and eardrum is called “otitis media,” a condition for which most physicians prescribe antibiotics. Several researchers, however, have found that antibiotics do not improve the health of children compared to those not given antibiotics. (10) Others have found that antibiotics provide a brief relief of symptoms, but subsequently there was no difference compared to those children given placebo. (11) Still others have found that 70% of children with otitis media still had fluid in the ear after four weeks of treatment and that 50% of children experience another ear infection within three months. (12)
Although some physicians assert that antibiotics are responsible for the presently low incidence of complications from ear infections such as mastoiditis, research has shown that there is no evidence that antibiotics reduce the incidence of mastoiditis. (13) Homeopaths claim a similarly low complication rate without the use of antibiotics. (14)
One of the more significant studies showed that patients with ear infection who were treated with antibiotics had appreciably more recurrences (as many as 2.9 times more) than those who received no treatment at all. (15)
In chronic ear infection it has become standard procedure for physicians to use ear tubes in conjunction with antibiotics or in place of them. These tubes help drain the pus from the ear, but this treatment only deals with the results of the problem; it does nothing to treat the reason the infection was able to spread in the first place. This physiological fact may be the reason ear tubes have been found to be of questionable value. (16)
Antibiotics and ear tubes treat symptoms of a problem. They do not strengthen the organism so that it can fight the infection itself, nor do they make the organism more resistant to future infection.
Another myth which continues to be perpetuated is that of the value of antibiotics in treating sore throats. The primary rationale for using antibiotics to treat a sore throat has been to prevent the person from getting rheumatic fever, a potentially fatal condition. Researchers point out that there is presently an extremely low incidence of rheumatic fever. (17)* This low incidence is not the result of antibiotic use because there was a decrease in rheumatic fever incidence even prior to antibiotic use.
[* In 1986 there were some reports of new outbreaks of rheumatic fever in some parts of the United States. However, Ellen Wald, M.D., medical director of Children’s Hospital of Pittsburgh, noted that too-early treatment with antibiotics may impair the body’s normal immunologic response and open up the possibility of reinfection, and that this problem must be weighed against the benefit of possibly preventing rheumatic fever. One study showed that those children who were treated with antibiotics immediately upon diagnosis had eight times the recurrence rate of strep throat compared with those children who delayed treatment. (18) In the context of other studies cited in this chapter, it may be worthwhile to compare those who received delayed treatment with those who received no antibiotics. It may also be worthwhile to compare these groups with a group of people prescribed a homeopathic medicine.]
Recent research has even determined that today’s strains of streptococcus very rarely cause rheumatic fever (19) and that antibiotics do not even eradicate the strep in 25-40% of the cases, despite demonstrated sensitivity of the organism to the antibiotic. (20)
Also, it is widely recognized that most strep infections are left untreated, and yet the vast majority of these people do not get rheumatic fever. Further, from 33% to 50% of the cases of rheumatic fever occur without sore throat symptoms. (21) A recent outbreak of rheumatic fever was reported in the New England Journal of Medicine. (22) Two-thirds of the children with this disease had no clear-cut history of a sore throat within the three-month period preceding the onset of their condition. Of particular significance, of the 11 children who had throat symptoms and who thus had a throat culture performed, 8 tested positive for strep. These children were prescribed antibiotics, and yet each still developed rheumatic fever.
New evidence shows that antibiotics do help reduce the symptoms of sore throat faster than placebo. However, it is questionable if antibiotics should be used simply to relieve self-limited conditions. It is certainly understandable that antibiotic use be considered when there is a life-threatening condition. However, it is uncertain how effective they are in preventing one rare disease. It is also uncertain if it is worth prescribing these powerful drugs to mass numbers of children in the hope that a very small number might benefit.
Antibiotics should definitely not be given routinely to children with suspected strep throat. Recent research has shown that 60% of children’s sore throats are virally caused, for which antibiotics are useless. (23)
This evidence strongly suggests that alternatives to antibiotic usage should be sought for ear and throat infection. Homeopathy offers a viable alternative.
When people think about the successes of modern medicine, they often assert that we are now living considerably longer than our parents or their parents. They also usually point to modern medicine’s successes in treating the infectious diseases that raged during previous centuries such as the plague, cholera, scarlet fever, yellow fever, and typhoid.
Scientists and historians alike agree that these assumptions are myths, pure myths. Scientists point out that we are now living longer than ever before, but this has not primarily been the result of new medical technologies. Rather, our lengthening life is mostly because of a significant decrease in infant mortality, which is the result of better hygiene during birth (hurray for soap!), better nutrition (the creation of cities has enabled more people to have access to a greater variety of foods, thereby decreasing malnutrition), and improvements in various public health measures such as sanitation, better sewage, cleaner water, and pest control. (24)
Even with all these considerations, the increase in life expectancy for adults has not been very significant. Statistics show that the average white male who reached 40 years of age in 1960 could expect to live to 71.9 years, whereas an average white male who reached 40 years of age in 1920 could expect to live to 69.9. The average white male who reached 50 years of age in 1982 could expect to live to 75.6 years, while the average white male who reached 50 years of age in 1912 survived until 72.2 years. (25)
Nobel Prize-winning microbiologist Rene Dubos noted, “the life expectancy of adults is not very different now from what it was a few generations ago, nor is it greater in areas where medical services are highly developed than in less prosperous countries.” (26)
Historians remind us that conventional medicine was not at all responsible for the disappearance or decrease in the fatal infectious diseases of the 15th to 19th centuries. Antibiotics were not even available until the 1940s and 1950s, and no other conventional drugs were successfully used to treat most of the infectious epidemics of the past. Even mortality (incidence of death) from tuberculosis, pneumonia, bronchitis, influenza, and whooping cough was in sharp decline prior to the introduction of any conventional medical treatment for these diseases. An important exception was the decrease in the death rate from polio after the introduction of the polio vaccine.
A little known fact of history is that homeopathic medicine developed its popularity in the United States as well as in Europe because of its successes in treating the infectious epidemics that raged during the 19th century. Dr. Thomas L. Bradford’s The Logic of Figures, published in 1900, compares in detail the death rate in homeopathic versus allopathic (conventional) medical hospitals and shows that death rates per 100 patients in homeopathic hospitals were often one-half or even one-eighth that of conventional medical hospitals. (27)
In 1849 the homeopaths of Cincinnati claimed that in over a thousand cases of cholera only 3% of the patients died. To substantiate their results they even printed the names and addresses of patients who died or who survived in a newspaper. (28) The death rate of patients with cholera who used conventional medicines generally ranged from 40 to 70%.
The success of treating yellow fever with homeopathy was so impressive that a report from the United States Government’s Board of Experts included several homeopathic medicines, despite the fact that the Board of Experts was primarily composed of conventional physicians who despised homeopathy. (29)
The success of homeopathy in treating modern-day infections is comparable to its successes in treating the infectious diseases of the last century. It is common knowledge that homeopathic practitioners rarely resort to using antibiotics or other drugs commonly given for infectious conditions. Homeopaths, like any good medical professional, will use antibiotics when clearly necessary, but it is worthwhile having alternatives that work.
Homeopath Randall Neustaedter of Palo Alto, California, notes that acute ear infection is “a simple problem to manage with acute (homeopathic) remedies.” (30) Common acute ear infection medicines are Belladonna (deadly nightshade), Chamomilla (chamomile), Pulsatilla (windflower), Ferrum phos (phosphate of iron), and Hepar sulph (Hahnemann’s calcium sulphide).
If the child gets treated with antibiotics and then has recurrent ear infections, homeopathic treatment generally takes more time but is often curative. Such recurrent problems, Neustaedter asserts, require the homeopathic “constitutional approach,” the approach where a homeopathic medicine is prescribed based on the totality of present symptoms as well as on an evaluation of the patient’s past history. While it is common for parents to prescribe successfully for acute ear infections, it is recommended that children receive professional care for recurrent ear infections or for any chronic condition.
Homeopaths have also found great success in treating a wide variety of other bacterial infections. Throat infections are commonly treated with Belladonna (deadly nightshade), Arsenicum (arsenic), Rhus tox (poison ivy), Mercurius (mercury), Hepar sulph, Lachesis (venom of the bushmaster), Apis (bee venom), or Phytolacca (pokeroot). Boils, which result from bacterial infection, are often successfully treated with Belladonna, Hepar sulph, Silica (silica), Arsenicum, or Lachesis. And styes, which usually result from a Staphylococcus infection, are effectively treated with Pulsatilla, Hepar sulph, Apis, Graphites (graphite), and Staphysagria (stavesacre).
Conventional drugs at least relieve the symptoms of bacterial infection; however, conventional medicine has little to offer for most viral conditions. Since homeopathic medicines stimulate the body’s own defenses rather than directly attack specific pathogens, homeopathy again has much to offer in the treatment of viral diseases.
In recent research on viruses that attack chicken embryos, 8 of the 10 homeopathic medicines tested inhibited the growth of the viruses 50 to 100%. (31) This research is of particular significance because conventional science knows only a very select number of drugs that have antiviral action, and none of these drugs are as safe as the homeopathic medicines.
Homeopaths commonly treat people suffering from acute and chronic viral conditions. People with viral respiratory and digestive conditions, viral infection of the nervous system, herpes, and even a few with AIDS have reported significant improvement using homeopathic medicines. Sometimes this improvement is dramatic and immediate, though most of the time there is a slow, progressive improvement in the person’s overall health.
British physician Richard Savage notes, “While the search goes on to find specific antiviral preparations which are free from side effects, homeopathy can be used effectively to treat patients in four ways:
- Prophylaxis to generate resistance to the infection;
- Treatment in the acute illness to reduce the length and severity of the illness;
- Restoration to revitalize the patient during convalescence; and
- Correction of the chronic sequelae to restore the patient to his former state of health.” (32)
Homeopaths have found that their medicines can prevent and treat various infections. There is not much research demonstrating the efficacy of the homeopathic medicines in preventing viral conditions, though there is some evidence that the medicines can be used to prevent other infectious diseases. Homeopathic microdoses can be used as immunizations; for instance, in 1974, 18,000 people in Brazil were immunized with a single dose of Meningococcin 10c (a homeopathic preparation of Neisseria meningitidis). The immunized group had significantly fewer meningitis infections than a control group. (33)
In the 1800s homeopaths commonly used medicines to prevent or cure what later came to be understood as bacterial or viral infections. Aconite and Ferrum phos were frequently given at the early onset of fever and aches as a way to prevent influenza. Belladonna was the most common medicine for preventing or treating scarlet fever, and Camphora (camphor) was the major medicine used to prevent or treat cholera. The dramatic success of the medicines in the prevention and treatment of these dread diseases gained homeopathy a large following.
Homeopaths commonly find that successful treatment of acute or chronic disease with homeopathic medicines often leads to stronger and healthier people who do not get severely or recurrently ill. During the late 1800s many life insurance companies offered lower rates to people who went to homeopathic physicians because actuarial statistics showed that homeopathic patients were healthier and lived longer. (34) There is also a record that these life insurance companies paid out larger sums of money to homeopathic patients since they lived longer than those under conventional medical care. (35)
One of the additional advantages of using homeopathy in treating viral conditions is that homeopathic medicines can be prescribed even before a definitive diagnosis has been made. This is because homeopaths prescribe based on the totality of symptoms, and laboratory work is not always necessary to find the correct medicine. Since some viral conditions are difficult to diagnose even after laboratory tests, one is often able to cure people with homeopathy before a conventional medical diagnosis can be made.
Antibiotics are only helpful in certain bacterial infections, and since viral diseases are particularly common, conventional medicine offers little help. In comparison, homeopaths often successfully treat acute viral conditions such as the common cold, virus-induced coughs, influenza, gastroenteritis (sometimes called the “stomach flu”), and viral hepatitis.
Homeopaths use Allium cepa (onion), Euphrasia (eyebright), Natrum mur (salt), or other individually chosen medicines for the common cold. Aconite (monkshood), Belladonna, Bryonia (wild hops), Phosphorus (phosphorus), or others are helpful in treating common viral respiratory infections.
Influenza is a condition which results from viral infection, and it is also a condition that is easily treated with homeopathy. Although individualization of homeopathic medicines is generally a necessity in order for them to work, there are conditions in which certain medicines are particularly effective. Oscillococcinum (pronounced o-cill-o-cock-i-num) is a medicine that homeopaths have found particularly effective in treating the flu. Its manufacturer, Boiron Laboratories of Lyon, France, has found that it is 80-90% effective in treating the flu when taken within 48 hours of onset of symptoms. Its success is so widely known in France that it is the most widely used treatment for the flu in that country.
Interestingly enough, Oscillococcinum is a microdose of the heart and liver of a duck. One might easily wonder how such a substance might ever be beneficial for the flu, but there actually is some sound logic to it. Perhaps you too have heard about the research at the Mayo Clinic that showed that chicken soup has some antiviral action. Since chicken soup is basically a broth of the organs of chickens, perhaps Oscillococcinum is effective because it is “duck soup.”
Ben Hole, M.D., a practicing homeopath in Spokane, Washington, reports, “Oscillococcinum is impressively successful, but in the rare situations where it doesn’t work or isn’t available, there are several other homeopathic medicines which can be used with excellent results when they are individually prescribed.” Other commonly used homeopathic medicines for the flu include Gelsemium (yellow jasmine), Bryonia, Rhus tox, and Eupatorium perfoliatum (boneset).
Although conventional medicine offers very little relief for recurrent or long-lasting viral infections, homeopaths have observed that microdoses relieve the symptoms of various chronic viral conditions such as herpes simplex, genital herpes, chronic Epstein-Barr virus, and warts. One cannot claim that homeopathic medicines actually “cure” these viral conditions, since the virus is assumed to remain in the body throughout one’s life, though homeopaths find that their patients get significantly less severe bouts of infection or do not get any symptoms for long periods of time.
The homeopathic approach to treating all these disorders includes a thorough analysis of the person’s totality of symptoms. There is thus no one medicine for a specific disease.
After a viral (or even bacterial) infection people sometimes feel they are still not back to their same healthy self. Generally, an individually chosen homeopathic medicine is prescribed. If the individualized medicine is not working, homeopaths will occasionally give a potentized dose of the specific virus which previously infected the person as a way to strengthen their ability to regain health. Varicellinum (the chickenpox virus) is commonly given in a safe microdose for symptoms that linger after the chickenpox, and Parotidinum (the mumps virus) is often given for symptoms that linger after the mumps.
For the post-herpetic neuralgias, the common medicines are Hypericum (St. John’s Wort), Kalmia (mountain laurel), Magnesia phosphorica (phosphate of magnesia), Causticum (Hahnemann’s potassium hydrate), Mezereum (spurge olive), or Arsenicum.
A state of weakness after a bout of influenza is often treated with China (cinchona bark), Gelsemium, Sulphur (sulphur), Phosphoricum acidum (phosphoric acid), Cadmium (cadmium), and Avena sativa (oat).
Respiratory infections occasionally linger, creating chronic nasal discharge, sinusitis, and ear infections. Some of the common medicines given are Kali bichromium (bichromate of potash), Kali iodatum (potassium iodide), Kali carbonicum (potassium carbonate), Kali muriaticum (chloride of potassium), Kali sulphuricum (potassium sulphate), Silica, Mercurius, Pulsatilla, Alumina (aluminum), Nux vomica (poison nut), and Conium (hemlock).
1. Rene Dubos, Mirage of Health, San Francisco: Harper and Row, 1959, 93-94.
2. Marc Lappé, When Antibiotics Fail, Berkeley: North Atlantic, 1986, xii.
3. William Crook, The Yeast Connection, New York: Vintage, 1986.
4. Lappé, xiii.
5. Lappé, xvii.
6. R. Cave, editor. “Those Overworked Miracle Drugs,” Newsweek, August 17, 1981, 63.
7. R.B. Sack, “Prophylactic Antibiotics? The Individual Versus the Community,” New England Journal of Medicine, 300, 1979, 1107-1108.
8. Claude Bernard, An Introduction to the Study of Experimental Medicine, New York: Dover, 1957 (originally written in 1865), 160-163.
9. Jonas Salk, Mandala Holistic Health Conference, San Diego, September, 1976. Proceedings published in Journal of Holistic Health, 1976.
10. F.L. Buchem, “Therapy of Acute Otitis Media: Myringotomy, Antibiotics, or Neither? A Double-Blind Study in Children,” Lancet, 883, October 24, 1981.
11. J. Thomsen, “Penicillin and Acute Otitis Media: Short and Long-term Results,” Annals of Otology, Rhinology, and Laryngology. Supplement. 68:271, 1980.
12. E.M. Mandel, et al., “Efficacy of Amoxicillin with and without Decongestant-Antihistamine for Otitis Media with Effusion in Children,” New England Journal of Medicine, 316:8, February 19, 1987, 432-437.
14. Randall Neustaedter, “Management of Otitis Media with Effusion in Homeopathic Practice,” Journal of the American Institute of Homeopathy, 79(3-4):87-99, 133-140, September-December, 1986.
15. M. Diamant, “Abuse and Timing of Use of Antibiotics in Acute Otitis Media,” Archives of Otolaryngology, 100:226, 1974.
16. D. Kilby, “Grommets and Glue Ears: Two Year Results,” Journal of Laryngology and Otology, 86:105, 1972. M.J.K.M. Brown, “Grommets and Glue Ear: A Five-year Followup of a Controlled Trial,” Journal of Social Medicine, 71:353, 1978. T. Lildholdt, “Ventilation Tubes in Secretory Otitis Media,” Acta Otolaryngology. Supplement. 398:1, 1983.
17. Bisno, 1983. M. Land, “Acute Rheumatic Fever: A Vanishing Disease in Suburbia,” JAMA, 249:895-898, 1983.
18. “Pediatricians Urge Confirmatory Test for Suspected Strep Throat,” Medical World News, January 12, 1987, 42.
19. Alan L. Bisno, “Where Has All the Rheumatic Fever Gone?” Clinical Pediatrics, December, 1983, 804-805.
20. A. Gastanaduy, “Failure of Penicillin to Eradicate Group A Streptococci During an Outbreak of Pharyngitis,” Lancet, 8193:498- 502, 1980. E. Kaplan, “The Role of the Carrier in Treatment Failures After Antibiotic Therapy for Group A Streptococci in the Upper Respiratory Tract,” Journal of Laboratory and Clinical Medicine, 98:326-335, 1981.
21. Alan L. Bisno, “The Concept of Rheumatogenic and Non-rheumatogenic Group A Streptococci,” in Red: Streptococcal Diseases and the Immune Response, New York: Academic Press, 1980, 789-803. Alan L. Bisno, “Streptococcal Infections that Fail to Cause Recurrences of Rheumatic Fever,” Journal of Infectious Disease, 136:278-285, 1977.
22. A. George Veasy, et al., “Resurgence of Acute Rheumatic Fever in the Intermountain Area of the United States,” New England Journal of Medicine, 316:8, February 19, 1987, 421-426.
23. Health Facts, 12, 96, May, 1987, 2.
24. Rene Dubos, Mirage of Health, New York: Harper and Row, 1959. Thomas McKeown, The Role of Medicine, Princeton: Princeton University, 1979.
25. Vital Statistics of the United States, 1982, Life Tables, volume II, section 6, Hyattsville, Md.: National Center for Health Statistics, 13.
26. Rene Dubos, Man Adapting, New Haven: Yale University Press, 1965, 346.
27. Thomas L. Bradford, The Logic of Figures or Comparative Results of Homoeopathic and Others Treatments, Philadelphia: Boericke and Tafel, 1900.
28. Ibid., 68.
29. Harris L. Coulter, Divided Legacy: The Conflict Between Homoeopathy and the American Medical Association, Berkeley: North Atlantic, 1973, 302.
30. Neustaedter, 87.
31. L.M. Singh and Girish Gupta, “Antiviral Efficacy of Homoeopathic Drugs Against Animal Viruses,” British Homoeopathic Journal, 74(3):168-174, July, 1985.
32. Richard Savage, “Homoeopathy: When No Effective Alternative,” British Homoeopathic Journal, 73(2):75-83, April, 1984.
33. “Sesenta mil Brasilenos se Vuelcan en Farmacias Homeopaticas: Cunde la Meningitis,” (front page headline), Excelsior, July 29, 1974.
34. Transactions of the New York State Homoeopathic Medical Society, 1867, 57-59.
35. “Report of Life Insurance Committee,” Transactions of the American Institute of Homoeopathy, 1897, 53-58; 1898, 81-90.
36. Victor Gong, Understanding AIDS: A Comprehensive Guide, New Brunswick, NJ: Rutgers University, 1985, 77-89.
37. Richard Smith (editor), Newsweek, August 12, 1985, 22.
38. Physicians’ Desk Reference, Oradell, N.J.: Medical Economics Co., 1985.
39. Hans H. Neumann, “Use of Steroid Creams as a Possible Cause of Immunosuppression in Homosexuals,” New England Journal of Medicine, 306,15, 935, April 15, 1982.
40. Personal Communication. For additional information, see Mike Strange, “Aid: What Homoeopathy Can Offer,” The Homoeopath: Journal of the Society of Homoeopaths, 6,3, 1987, 117-124.
41. Singh and Gupta.
42. R.G. Gibson, et al., “Homoeopathic Therapy in Rheumatoid Arthritis: Evaluation by Double-Blind Clinical Therapeutic Trial,” British Journal of Clinical Pharmacology, 1980, 9, 453-459.
Convergent patterns of long-distance nocturnal migration in noctuid moths and passerine birds.
Publication/Journal/Series: Proceedings of the Royal Society. Biological Sciences
Publisher: Royal Society Publishing
Vast numbers of insects and passerines achieve long-distance migrations between summer and winter locations by undertaking high-altitude nocturnal flights. Insects such as noctuid moths fly relatively slowly in relation to the surrounding air, with airspeeds approximately one-third of that of passerines. Thus, it has been widely assumed that windborne insect migrants will have comparatively little control over their migration speed and direction compared with migrant birds. We used radar to carry out the first comparative analyses of the flight behaviour and migratory strategies of insects and birds under nearly equivalent natural conditions. Contrary to expectations, noctuid moths attained almost identical ground speeds and travel directions compared with passerines, despite their very different flight powers and sensory capacities. Moths achieved fast travel speeds in seasonally appropriate migration directions by exploiting favourably directed winds and selecting flight altitudes that coincided with the fastest air streams. By contrast, passerines were less selective of wind conditions, relying on self-powered flight in their seasonally preferred direction, often with little or no tailwind assistance. Our results demonstrate that noctuid moths and passerines show contrasting risk-prone and risk-averse migratory strategies in relation to wind. Comparative studies of the flight behaviours of distantly related taxa are critically important for understanding the evolution of animal migration strategies.
- Biological Sciences
- seasonal migration
- Autographa gamma
- flight speed
- ISSN: 0962-8452 (Print)
- ISSN: 1471-2954 (Online) |
A butterfly is an insect of the order Lepidoptera, and belongs to one of the superfamilies Hesperioidea (the skippers) or Papilionoidea (all other butterflies). Some authors would also include members of the superfamily Hedyloidea, the American butterfly moths. Butterflies are notable for their unusual life cycle, proceeding from the larval stage as caterpillars through a pupal metamorphosis into their winged adult form. The patterns formed by their brightly coloured wings and their erratic-yet-graceful flight have made butterfly watching a popular hobby. Butterflies live primarily on nectar from flowers. Some also derive nourishment from pollen, tree sap, rotting fruit, dung, and dissolved minerals in wet sand or dirt. Butterflies play an important ecological role as pollinators.
Antennae shapes in the Lepidoptera, from C. T. Bingham (1905)

As adults, butterflies are able to consume liquids only by means of their proboscis. They regularly feed on nectar and sip water from damp patches. This they do for water, for energy from the sugars in nectar, and for sodium and other minerals which are vital for their reproduction.
Several species of butterflies need more sodium than provided by the nectar they drink from flowers. As such, they are attracted to the sodium in salt (which the males often give to the females to ensure fertility). As human sweat contains significant quantities of salt, they sometimes land on people.
Besides damp patches, some butterflies also visit dung, rotting fruit or carcasses to obtain the essential minerals that they need.
Butterflies sense the air for scents, wind and nectar using their antennae. The antennae come in various shapes and colours. The hesperids have a pointed angle or hook to the antennae.
Some butterflies, such as the Monarch butterfly, are migratory. |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2007 July 12
Explanation: The universe is filled with galaxies. But to see them astronomers must look out beyond the stars of our galaxy, the Milky Way. For example, consider this colorful telescopic view of spiral galaxy NGC 6384, about 80 million light-years away in the direction of the constellation Ophiuchus. At that distance, NGC 6384 spans an estimated 150,000 light-years. The sharp image shows details in the distant galaxy's blue spiral arms and yellowish core. Still, the individual stars seen in the picture are all in the close foreground, well within our own galaxy. The brighter Milky Way stars show noticeable crosses, or diffraction spikes, caused by the telescope itself. This particular field of view is about 1/4 degree wide and is relatively rich in foreground stars because it looks out near the crowded center of the Milky Way.
Authors & editors:
Jerry Bonnell (UMCP)
NASA Official: Phillip Newman Specific rights apply.
A service of: ASD at NASA / GSFC
& Michigan Tech. U. |
A new kind of thin film solar cell developed by researchers at the UCLA Samueli School of Engineering just broke a record for efficiently harvesting energy from sunlight. By adding a new layer to common solar cells, the researchers estimate they can reduce solar energy costs by about a fifth, in a relatively simple process that can easily be incorporated into mainstream solar cell manufacturing.
The new technique involved a compound called perovskite, which is a combination of lead and iodine that’s good at capturing energy from sunlight. The researchers sprayed traditional solar cells with the perovskite, making a thin second layer. These “dual layer” solar cells harvested far more energy. The results were published today in the journal Science.
“We’re drawing energy from two distinct parts of the solar spectrum over the same device area,” Yang Yang, a professor of materials science at UCLA, told UCLA Newsroom. “This increases the amount of energy generated from sunlight compared to the CIGS layer alone.”
Best of all, making perovskite is cheap and simple, meaning that the whole process could theoretically be incorporated easily into the manufacturing process for traditional solar panel makers.
The team started with a traditional cell that was capable of harvesting about 18.7 percent of the energy in sunlight on its own. The base cell was tiny, just two thousandths of a millimeter thick. They then added the even slimmer layer made with perovskite to the base using a new type of nanoscale interface that the researchers developed, which helped boost the voltage.
Instead of 18.7 percent, these new cells were able to capture 22.4 percent of the sunlight’s energy, an increase of around 20 percent. The team hopes to continue refining the cells and getting efficiency up to 30 percent.
Tactics that can increase solar energy efficiency are hugely important to the renewable energy transition: the maximum theoretical efficiency for the most common kind of photovoltaic cell is only 29 percent. As we wrote in a previous story, “sunlight’s bouncing around, getting absorbed as waste heat, and just generally not being converted into electricity.”
Their new innovation also helped the group shatter the previous record for efficiency for a perovskite-CIGS tandem solar cell, which was just 10.9 percent. That record was set back in 2015 by a group of researchers working with IBM. |
Additional Treatment Options
Systemic Radiation Therapy
Certain cancers may be treated by swallowing radioactive pills or receiving radioactive fluids in the vein (intravenous). This type of treatment is called systemic radiation therapy because the medicine goes to the entire body. For example, radioactive iodine (I-131) capsules are given to treat some types of thyroid cancer. Another example is the use of intravenous radioactive material to treat pain due to cancer that has spread to the bone. Radiolabeled antibodies are monoclonal antibodies with radioactive particles attached. These antibodies are designed to attach themselves directly to the cancer cell and damage it with small amounts of radiation.
Novel Targeted Therapies
Cancer doctors now know much more about how cancer cells function. New cancer therapies use this information to target cancer cell functions and stop them. Called targeted therapies, they can be more specific in stopping cancer cells from growing and may make other treatments work better. For example, some medicines work to prevent cancers from growing by preventing the growth of new blood vessels that would nourish the cancer.
Other targeted therapies work more directly on cancer cells by blocking the action of molecules on the surface of cancer cells called growth factors.
Any drug that can make tumor cells more sensitive to radiation is called a radiosensitizer. Combining radiation with radiosensitizers may allow doctors to kill more tumor cells. Some types of chemotherapy and some novel targeted therapies can act as radiosensitizers.
Some medicines called radioprotectors can help protect healthy tissue from the effects of radiation.
Intraoperative Radiation Therapy
Radiation therapy given during surgery is called intraoperative radiation therapy. Intraoperative radiation therapy is helpful when vital normal organs are too close to the tumor. During an operation, a surgeon temporarily moves the normal organs out of the way so radiation can be applied directly to the tumor. This allows your radiation oncologist to avoid exposing those organs to radiation. Intraoperative radiation can be given as external beam therapy or as brachytherapy.
Chemotherapy
Medicines prescribed by a medical oncologist that kill cancer cells directly are called chemotherapy. Some are given in pill form, and some are given by injection. Chemotherapy is also a type of systemic therapy, because the medicines travel through the bloodstream to the entire body.
Immunotherapy
Some treatments, known as immunotherapy, are designed to help your body’s own immune system fight the cancer, similar to how your body fights off infections.
Garibaldi Volcanic Belt
The Garibaldi Volcanic Belt is a northwest-southeast trending volcanic chain in the Pacific Ranges of the Coast Mountains that extends from Watts Point in the south to the Ha-Iltzuk Icefield in the north. This chain of volcanoes is located in southwestern British Columbia, Canada. It forms the northernmost segment of the Cascade Volcanic Arc, which includes Mount St. Helens and Mount Baker. Most volcanoes of the Garibaldi chain are dormant stratovolcanoes and subglacial volcanoes that have been eroded by glacial ice. Less common volcanic landforms include cinder cones, volcanic plugs, lava domes and calderas. These diverse formations were created by different styles of volcanic activity, including Peléan and Plinian eruptions.
Eruptions along the length of the chain have created at least three major volcanic zones. The first began in the Powder Mountain Icefield 4.0 million years ago, when the Mount Cayley massif began its formation. Multiple eruptions from 2.2 million to 2,350 years ago created the Mount Meager massif, and eruptions 1.3 million to 9,300 years ago formed Mount Garibaldi and other volcanoes in the Garibaldi Lake area. These major volcanic zones lie in three en echelon segments, referred to as the northern, central, and southern segments; each segment contains one of the three major volcanic zones. Apart from these large volcanic zones, two large, poorly studied volcanic complexes lie at the northern end of the Pacific Ranges: the Silverthrone Caldera and the Franklin Glacier Complex. They are considered part of the Garibaldi Volcanic Belt, but their tectonic relationships to other volcanoes in the chain remain unclear because they have been studied so little.
Prior to the formation of the Garibaldi Belt, a number of older but related volcanic belts were constructed along the southern coast of British Columbia. These include the east-west trending Alert Bay Volcanic Belt on northern Vancouver Island and the Pemberton Volcanic Belt along the coastal mainland. The Pemberton Belt began to form when the former Farallon Plate was subducting under the British Columbia Coast 29 million years ago, during the Oligocene epoch. At this time, the north-central portion of the Farallon Plate was just starting to subduct under the U.S. state of California, splitting the plate into northern and southern sections. Between 18 and five million years ago, during the Miocene epoch, the northern remnant of the Farallon Plate fractured into two tectonic plates, known as the Gorda and Juan de Fuca plates. After this breakup, the northern edge of the subducting Juan de Fuca Plate may have coincided with the northern end of Vancouver Island eight million years ago, during the late Miocene. This is when the Alert Bay Belt became active. A brief interval of plate motion adjustment about 3.5 million years ago may have triggered the generation of basaltic magma along the descending plate edge. This eruptive period postdates the initiation of the Garibaldi Belt, and no evidence of more recent volcanism in the Alert Bay Belt has been found, indicating that the belt is likely extinct.
Bedrock under the Garibaldi chain consists of granitic and dioritic rocks of the Coast Plutonic Complex, which makes up much of the Coast Mountains. This is a large batholith complex that was formed when the Farallon and Kula plates were subducting along the western margin of the North American Plate during the Jurassic and Tertiary periods. It lies on island arc remnants, oceanic plateaus and clustered continental margins that were added along the western margin of North America between the Triassic and Cretaceous periods.
The Garibaldi Belt has formed in response to ongoing subduction of the Juan de Fuca Plate under the North American Plate at the Cascadia subduction zone along the British Columbia Coast. This is a 1,094 km (680 mi) long fault zone running 80 km (50 mi) off the Pacific Northwest from Northern California to southwestern British Columbia. The plates move at a relative rate of over 10 mm (0.39 in) per year at a somewhat oblique angle to the subduction zone. Because of the very large fault area, the Cascadia subduction zone can produce large earthquakes of magnitude 7.0 or greater. The interface between the Juan de Fuca and North American plates remains locked for periods of roughly 500 years. During these periods, stress builds up on the interface between the plates and causes uplift of the North American margin. When the plate finally slips, the 500 years of stored energy are released in a mega-earthquake.
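As a back-of-the-envelope illustration (not from the source), the slip deficit stored over one locked interval can be estimated from the convergence rate. The 500-year interval is from the text; the 30-40 mm/yr rate matches the 3-4 cm/yr convergence cited elsewhere in this article. The function name is, of course, illustrative:

```python
def accumulated_slip_m(rate_mm_per_yr, locked_years):
    """Slip deficit (in metres) accumulated while the plate interface stays locked."""
    return rate_mm_per_yr * locked_years / 1000.0

# ~500 years locked at 30-40 mm/yr of convergence:
low = accumulated_slip_m(30, 500)    # 15.0 m
high = accumulated_slip_m(40, 500)   # 20.0 m
print(f"{low:.0f}-{high:.0f} m of slip released in a megathrust rupture")
```

On the order of 15-20 m of co-seismic slip is broadly consistent with the displacements inferred for magnitude-9 megathrust earthquakes.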
Unlike most subduction zones worldwide, no deep oceanic trench appears in the bathymetry of the continental margin in Cascadia. This is because the mouth of the Columbia River empties directly into the subduction zone, depositing silt at the bottom of the Pacific Ocean and burying the trench. Massive floods from prehistoric Glacial Lake Missoula during the Late Pleistocene also deposited large amounts of sediment into the trench. In common with other subduction zones, however, the outer margin is slowly being compressed, similar to a giant spring. When the stored energy is suddenly released by slippage across the fault at irregular intervals, the Cascadia subduction zone can create very large earthquakes, such as the magnitude 9.0 Cascadia earthquake of January 26, 1700. Yet earthquakes along the Cascadia subduction zone are fewer than expected, and there is evidence of a decline in volcanic activity over the past few million years. The probable explanation lies in the rate of convergence between the Juan de Fuca and North American plates: the two plates currently converge at 3 cm (1.2 in) to 4 cm (1.6 in) per year, only about half the convergence rate of seven million years ago.
Scientists have estimated that there have been at least 13 significant earthquakes along the Cascadia subduction zone in the past 6,000 years. The most recent, the 1700 Cascadia earthquake, was recorded in the oral traditions of the First Nations people on Vancouver Island. It caused considerable tremors and a massive tsunami that traveled across the Pacific Ocean. The shaking associated with this earthquake demolished houses of the Cowichan Tribes on Vancouver Island and caused several landslides. The shaking was so violent that the Cowichan people could not stand, and so prolonged that it made them sick. The tsunami created by the earthquake ultimately devastated a winter village at Pachena Bay, killing all the people who lived there. The 1700 Cascadia earthquake caused near-shore subsidence, submerging marshes and forests on the coast that were later buried under more recent debris.
Many thousands of years of dormancy are expected between large explosive eruptions of volcanoes in the Garibaldi Belt. A possible explanation for the lower rates of volcanism in the Garibaldi chain is that the associated terrain is being compressed, in contrast to the more southern portions of the Cascade Arc. In continental rift zones, magma is able to push its way up through the Earth's crust rapidly along faults, providing less chance for differentiation. This is likely the case south of Mount Hood to the California border and east-southeast of the massive Newberry shield volcano adjacent to the Cascade Range in central Oregon, because the Brothers Fault Zone lies in this region. This rift zone might explain the massive amounts of basaltic lava in this part of the central Cascade Arc. A low convergence rate in a compressional setting, with massive stationary bodies of magma under the surface, could explain the low volume and differentiated magmas throughout the Garibaldi Volcanic Belt. In 1958, Canadian volcanologist Bill Mathews proposed a connection between regional glaciation of the North American continent during glacial periods and higher rates of volcanic activity during glacial unloading of the continent. This is hard to test because of the sparse geological record in the region, but some evidence, including the temporal clustering of Garibaldi Belt eruptions during or just after glaciation, suggests the link is plausible.
Dominating the Garibaldi chain are volcanoes and other volcanic formations that formed during periods of intense glaciation. These include flow-dominated tuyas, subglacial lava domes and ice-marginal lava flows. Flow-dominated tuyas differ from the typical basaltic tuyas throughout British Columbia in that they are composed of piles of flat-lying lava flows and lack hyaloclastite and pillow lava. They are interpreted to have formed when magma intruded into and melted a vertical hole through adjacent glacial ice, eventually breaching the surface of the glacier. As the magma ascended, it ponded and spread into horizontal layers. Lava domes that formed mainly during subglacial activity have steep flanks made of intensely developed columnar joints and volcanic glass. Ice-marginal lava flows form when lava erupts from a subaerial vent and ponds against glacial ice. The Barrier, a lava dam impounding Garibaldi Lake in the southern segment, is the best example of an ice-marginal lava flow in the Garibaldi Belt.
Flow-dominated tuyas and the absence of subglacial fragmental deposits are two uncommon glaciovolcanic features of the Garibaldi chain. They result from the chain's different lava compositions and from reduced direct lava-water contact during volcanic activity. Lava composition changes the structure of these volcanic edifices because eruption temperatures are lower than those associated with basaltic activity, and silica-rich lava is more viscous, with higher glass transition temperatures. As a result, subglacial volcanoes that erupt silicic lava melt smaller quantities of ice and are less likely to have water close to the volcanic vent. This forms volcanoes whose structures record their relationship with the regional glaciation. The surrounding landscape also changes the flow of meltwater, favouring the ponding of lava within valleys dominated by glacial ice. And if the edifice is eroded, the prominence of fragmental glaciovolcanic deposits can change as well.
On the eastern side of Howe Sound lies the southernmost zone of volcanic activity in the Garibaldi chain. This zone, known as the Watts Point volcanic centre, is a small outcrop of volcanic rock forming a portion of a subglacial volcano. The outcrop covers an area of about 0.2 km2 (0.077 sq mi) and has an eruptive volume of roughly 0.02 km3 (0.0048 cu mi). The location is heavily forested, and the BC Rail mainline passes through the lower portion of the outcrop about 40 m (130 ft) above sea level. It represents a feature of the Squamish volcanic field.
Mount Garibaldi, one of the larger volcanoes in the southern Garibaldi Belt with a volume of 6.5 km3 (1.6 cu mi), is composed of dacite lavas erupted in the past 300,000 years. It was constructed when volcanic material erupted onto a portion of the Cordilleran Ice Sheet during the Pleistocene, giving the mountain its unique asymmetrical shape. Successive landslides occurred on Garibaldi's flanks after the glacial ice of the Cordilleran Ice Sheet retreated. Subsequent volcanism about 9,300 years ago produced a 15 km (9.3 mi) long dacite lava flow from Opal Cone on Garibaldi's southeastern flank. This is unusually long for a dacite flow; such flows commonly travel only short distances from a volcanic vent because of their high viscosity. The Opal Cone lava flow is the most recent volcanic feature at Mount Garibaldi.
On the western shore of Garibaldi Lake, Mount Price is a stratovolcano with an elevation of 2,050 m (6,730 ft). It was constructed during three periods of activity. The first phase, 1.2 million years ago, formed a hornblende andesite stratovolcano on the drift-covered floor of a circular basin. After this stratovolcano was constructed, volcanism moved to the west, where a series of andesite-dacite lava flows and pyroclastic flows were extruded during a period of Peléan activity 300,000 years ago. This created the cone of Mount Price, which was later buried under glacial ice. Before Mount Price was overridden by glacial ice, volcanic activity took place on its northern flank, where a satellite vent is present. Renewed activity took place at Clinker Peak on the western flank of Mount Price 9,000 years ago, producing the Rubble Creek and Clinker Ridge andesite lava flows, which extend 6 km (3.7 mi) to the northwest and southwest. At that distance the flows were dammed against glacial ice, forming an ice-marginal lava flow more than 250 m (820 ft) thick known as The Barrier.
Cinder Cone on the north shore of Garibaldi Lake is a cinder cone partly engulfed by the Helmet Glacier. It consists of volcanic ash, lapilli and scattered fragments of ropy lava and volcanic bombs that give the cone a height of 500 m (1,600 ft). Its minimal degree of erosion indicates that it might have erupted in the past 1,000 years. A series of basaltic andesite flows erupted from Cinder Cone about 11,000 years ago and traveled into a deep, north-trending U-shaped valley on the eastern flank of The Black Tusk. Subsequent volcanism produced another sequence of basaltic lava flows 4,000 years ago that flowed into the same glacial valley.
The Black Tusk, a black pinnacle of volcanic rock on the northwestern shore of Garibaldi Lake, is the glacially eroded remnant of a much larger volcano that formed during two periods of volcanic activity. The first, between 1.1 and 1.3 million years ago, erupted hornblende andesite lava flows and tuffs. These volcanics compose mountain ridges southwest, southeast and northwest of the main volcanic structure. Subsequent erosion demolished the newly formed volcano, ultimately exposing the roots of the cone, which currently form the rugged edifice of The Black Tusk. After the cone was eroded, a series of hypersthene andesite lava flows erupted between 0.17 and 0.21 million years ago. These end at adjacent ice-marginal lava flows that form 100 m (330 ft) cliffs. This eruptive phase also produced a lava dome that comprises the current 2,316 m (7,598 ft) high pinnacle. Subsequently, the regional Late Pleistocene ice sheet carved a deep, north-trending U-shaped valley into the eastern flank of the second-stage cone; later lava flows from Cinder Cone filled this valley.
Immediately southeast of Mount Cayley lies Mount Fee, an extensively eroded volcano containing a north-south trending ridge. It is one of the older volcanic features in the central Garibaldi chain. Its volcanics are undated, but the large amount of dissection and evidence of glacial ice overriding the volcano indicate that it formed more than 75,000 years ago, before the Wisconsinan Glaciation. Accordingly, volcanism at Mount Fee does not display evidence of interaction with glacial ice. The remaining product of Fee's earliest volcanic activity is a minor portion of pyroclastic rock, evidence of explosive volcanism in Fee's eruptive history as well as of its first volcanic event. The second volcanic event produced a sequence of lavas and breccias on the eastern flank of the main ridge. These volcanics were likely emplaced when a sequence of lava flows and broken lava fragments erupted from a volcanic vent and moved down the flanks during the construction of a large volcano. Following extensive dissection, renewed volcanism produced a viscous series of lava flows that form its narrow, flat-topped, steep-sided northern limit and the northern end of the main ridge. The conduit from which these lava flows originated was likely vertical in structure and intruded through older volcanics deposited during Fee's earlier events. This volcanic event was also followed by a period of erosion, and likely one or more glacial periods. Extensive erosion following the last volcanic event has created the rugged north-south trending ridge that currently forms a prominent landmark.
Ember Ridge, a volcanic mountain ridge between Tricouni Peak and Mount Fee, consists of at least eight lava domes composed of andesite. They were likely formed between 25,000 and 10,000 years ago, when lava erupted beneath glacial ice of the Fraser Glaciation. Their current structures are comparable to their original forms due to the minimal degree of erosion. As a result, the domes display the shapes and columnar joints typical of subglacial volcanoes. The irregular shapes of the Ember Ridge domes are the result of erupted lava taking advantage of former ice pockets, eruptions taking place on uneven surfaces, subsidence of the domes during volcanic activity to create rubble, and separation of older columnar units during more recent eruptions. The northern dome, known as Ember Ridge North, covers the summit and eastern flank of a mountain ridge. It comprises at least one lava flow that reaches a thickness of 100 m (330 ft), as well as the thinnest columnar units in the Mount Cayley volcanic field. The small size of these columnar joints, found mainly on the dome's summit, indicates that the erupted lava cooled rapidly. Ember Ridge Northeast, the smallest subglacial dome of Ember Ridge, comprises one lava flow no more than 40 m (130 ft) thick. Ember Ridge Northwest, the most nearly circular subglacial dome, comprises at least one lava flow. Ember Ridge Southeast is the most complex of the Ember Ridge domes, consisting of a series of lava flows with a thickness of 60 m (200 ft); it is also the only Ember Ridge dome that contains large amounts of rubble. Ember Ridge Southwest comprises at least one lava flow that reaches a thickness of 80 m (260 ft) and is the only subglacial dome of Ember Ridge that contains hyaloclastite. Ember Ridge West comprises only one lava flow, which reaches a thickness of 60 m (200 ft).
To the northwest, the Mount Cayley massif constitutes the largest and most persistent volcano in the central Garibaldi Belt. It is a highly eroded stratovolcano composed of dacite and rhyodacite lava deposited during three phases of volcanic activity. The first eruptive phase started about four million years ago with the eruption of dacite lava flows and pyroclastic rock, creating Mount Cayley itself. Subsequent volcanism during this phase constructed a significant lava dome, which acts as a volcanic plug and composes the lava spines that currently form pinnacles on Cayley's rugged summit. After Mount Cayley was constructed, lava flows, tephra and welded dacite rubble were erupted. This second phase of activity, 2.7 ± 0.7 million years ago, created the Vulcan's Thumb, a craggy volcanic ridge on the southern flank of Mount Cayley. An extended period of erosion then demolished much of the original stratovolcano. Volcanic activity after this prolonged erosion produced thick dacite lava flows from parasitic vents 300,000 years ago that extended into the Turbid and Shovelnose Creek valleys near the Squamish River, followed by two minor parasitic lava domes 200,000 years ago. These three volcanic events contrast with several others around Cayley in that they do not show signs of interaction with glacial ice.
Pali Dome, an eroded volcano north of Mount Cayley, consists of two geological units. Pali Dome East is composed of a mass of andesite lava flows and small amounts of pyroclastic material. It lies on the eastern portion of the large glacial icefield that covers much of the Mount Cayley volcanic field. Most of the lava flows form gentle topography at high elevations but terminate in finely jointed vertical cliffs at low elevations. The first volcanic activity likely occurred about 25,000 years ago, but it could also be significantly older. The most recent activity produced a series of lava flows erupted when the vent area was not covered by glacial ice. However, the flows show evidence of interaction with glacial ice in their lower units, indicating that the lavas erupted about 10,000 years ago, during the waning stages of the Fraser Glaciation. These ice-marginal lava flows reach thicknesses of up to 100 m (330 ft). Pali Dome West consists of at least three andesite lava flows and small amounts of pyroclastic material; its vent is presently buried under glacial ice. At least three eruptions have occurred at Pali Dome West. The age of the first volcanic eruption is unknown, but it could have occurred in the past 10,000 years. The second eruption produced a lava flow erupted when the vent area was not buried under glacial ice; the flow does, however, show evidence of interaction with glacial ice at its lower unit, indicating that it erupted during the waning stages of the Fraser Glaciation. The third and most recent eruption produced another lava flow that was largely erupted above glacial ice but was probably constrained on its northern margin by a small glacier. Unlike the flow of the second eruption, this lava flow was not impounded by glacial ice at its lower unit, suggesting that it erupted less than 10,000 years ago, as the regional Fraser Glaciation retreated.
Cauldron Dome, a subglacial volcano north of Mount Cayley, lies west of the massive glacier covering much of the region. Like Pali Dome, it is composed of two geological units. Upper Cauldron Dome is a flat-topped, oval-shaped pile of at least five andesite lava flows that resembles a tuya. The five andesite flows are columnar jointed and were likely extruded through glacial ice. The latest volcanic activity might have occurred between 10,000 and 25,000 years ago, when this area was still influenced by glacial ice of the Fraser Glaciation. Lower Cauldron Dome, the youngest unit of the Cauldron Dome subglacial volcano, consists of a flat-topped, steep-sided pile of andesite lava flows 1,800 m (5,900 ft) long with a maximum thickness of 220 m (720 ft). These volcanics were extruded about 10,000 years ago, during the waning stages of the Fraser Glaciation, from a vent adjacent to upper Cauldron Dome that is currently buried under glacial ice.
Lying at the northern portion of the Mount Cayley volcanic field is a subglacial volcano named Slag Hill. At least two geologic units compose the edifice. Slag Hill proper consists of andesite lava flows and small amounts of pyroclastic rock. On the western portion of Slag Hill is a lava flow that likely erupted less than 10,000 years ago, given its lack of features indicating volcano-ice interaction. The Slag Hill flow-dominated tuya 900 m (3,000 ft) northeast of Slag Hill proper consists of a flat-topped, steep-sided pile of andesite. It protrudes through remnants of volcanic material erupted from Slag Hill proper, but its separate location indicates that it represents a distinct volcanic vent. This small subglacial volcano possibly formed between 25,000 and 10,000 years ago, during the waning stages of the Fraser Glaciation.
Ring Mountain, a flow-dominated tuya at the northern portion of the Mount Cayley volcanic field, consists of a pile of at least five andesite lava flows lying on a mountain ridge. Its steep-sided flanks reach heights of 500 m (1,600 ft) and are composed of volcanic rubble, making it impossible to measure its exact base elevation or the number of lava flows constituting the edifice. With a summit elevation of 2,192 m (7,192 ft), Ring Mountain had its last volcanic activity between 25,000 and 10,000 years ago, when the Fraser Glaciation was close to its maximum. Northwest of Ring Mountain lies a minor andesite lava flow. Its chemistry differs somewhat from that of the other andesite flows comprising Ring Mountain, but it probably erupted from a vent at or adjacent to Ring Mountain. Its higher-elevation portion contains features indicating lava-ice interaction, while its lower-elevation portion does not. This minor lava flow was therefore likely extruded after Ring Mountain formed, at a time when glacial ice covered a broader area than it does today, with the flow extending beyond the limits of that ice.
The Mount Meager massif is the most voluminous composite volcano in the Garibaldi chain and in British Columbia, as well as the most recent to erupt. It has a volume of 20 km3 (4.8 cu mi) and consists of an eroded stratovolcano ranging in composition from andesite to rhyodacite. Several dissected lava domes and volcanic plugs are present on its glaciated summit, as well as a clearly defined volcanic crater with a lava dome emplaced within it. At least eight volcanic vents compose the complex and have been the sources of volcanic activity throughout the massif's 2.2-million-year history. The Mount Meager massif has a well-documented history of volcanism; its most recent eruption, about 2,350 years ago, was similar in character to the 1980 eruption of Mount St. Helens and the ongoing eruption of Soufrière Hills on the island of Montserrat. This is the largest recorded Holocene explosive eruption in Canada, originating from a volcanic vent on the northeastern flank of Plinth Peak. It was Plinian in nature, sending an eruption column at least 20 km (12 mi) into the stratosphere. Prevailing winds carried ash of the column eastwards, depositing it across British Columbia and Alberta. Subsequent pyroclastic flows descended the flanks of Plinth Peak for 7 km (4.3 mi) and were succeeded by the eruption of a lava flow that collapsed repeatedly, creating thick agglutinated rubble that blocked the adjacent Lillooet River to form a lake. The breccia dam later collapsed, producing a catastrophic flood that deposited house-sized boulders more than 1 km (0.62 mi) downstream. After the flood, a small dacite lava flow was erupted that later solidified to form a series of well-preserved columnar joints. This was the final phase of the 2350 BP eruption; subsequent stream erosion has cut through the lava flow to form a waterfall.
A group of small volcanoes on the upper Bridge River, known as the Bridge River Cones, includes stratovolcanoes, volcanic plugs and lava flows. These volcanoes are unlike others throughout the Garibaldi Volcanic Belt in that they are mainly composed of volcanic rocks with mafic compositions, including alkaline basalt and hawaiite. The different magma compositions might be related to a smaller degree of partial melting in the Earth's mantle or a descending plate edge effect. The oldest volcano in the group, Sham Hill, is a 60 m (200 ft) high volcanic plug with a potassium-argon date of one million years. It is about 300 m (980 ft) wide and its exposed glaciated surface is strewn with glacial erratics. Its massive, flat-lying rock columns formed inside the main volcanic vent of a stratovolcano that has since been reduced by erosion. To the southeast, the Salal Glacier volcanic complex was constructed between 970,000 and 590,000 years ago. It consists of subaerial tephra and thin lava flow deposits that are surrounded by 100 m (330 ft) thick ice-ponded lava flows. These ice-marginal lava flows were created when lava ponded against glacial ice in the nearby valleys before the Wisconsinan Glaciation. North of the Salal Glacier complex lies a small basaltic stratovolcano named Tuber Hill. It began to form about 600,000 years ago, when adjacent valleys were filled by glacial ice. Lava flows erupted from Tuber Hill interacted with the valley-filling glaciers on its southern flank and produced a glacial meltwater lake, in which more than 150 m (490 ft) of stacked hyaloclastite, lahars and lacustrine tuff were deposited. A series of pillow lavas were also deposited during this eruptive period. The most recent volcanic activity in the Bridge River volcanic field produced a series of basaltic lava flows in the regional valleys that overlie till of the last glacial period. The age of these valley-filling lava flows is unknown, but the presence of unconsolidated glacial till under the flows suggests that they are less than 1,500 years old.
To the northwest, the Franklin Glacier Complex is an area of volcanic bedrock 20 km (12 mi) long and 6 km (3.7 mi) wide. It has an elevation of over 2,000 m (6,600 ft) and is largely destroyed by erosion. A series of dikes and subvolcanic intrusions compose the complex, a few of which seem to represent vents for the overlying sequence of volcanic deposits. Volcanics include dacite breccia and small remnants of hornblende andesite lava flows associated with tuffs that reach 450 m (1,480 ft) thick. The complex is poorly known due to minimal studies, but potassium-argon dates obtained from some of the subvolcanic intrusions indicate that Franklin formed during two volcanic events separated by about five million years of dormancy. The first occurred between six and eight million years ago, when volcanic activity in the Garibaldi Belt had not yet moved to its current location but was becoming more areally restricted within a large band to the east and west. During this period, volcanic activity in the Garibaldi Belt and other portions of the northern Cascade Arc took place mainly at the Franklin Glacier Complex and in the Intermontane Belt further east. When the Garibaldi Belt moved to its current location five million years ago, another volcanic event occurred at the Franklin complex. This final and most recent event occurred between two and three million years ago, about a million years after Mount Cayley to the south began its formation.
Silverthrone Caldera is the larger and better-preserved of the two caldera complexes in the northern Garibaldi chain, the other being the Franklin Glacier Complex 55 km (34 mi) to the east-southeast. The caldera has a diameter of 20 km (12 mi) and contains breccia, lava flows and lava domes. Like Franklin, the geology of Silverthrone is poorly known due to minimal studies. The region surrounding the Silverthrone complex is extremely rugged owing to the mountainous terrain of the Coast Mountains; near-vertical flanks extend from near sea level to more than 3,000 m (9,800 ft) in elevation. Silverthrone is significantly younger than the Franklin Glacier Complex, and its volcanics likely have ages comparable to other volcanics throughout the Garibaldi chain. The oldest volcanics at the Silverthrone Caldera complex are volcanic breccias, some of which were fused together by intense volcanic heat when the deposits were first erupted. Afterwards, a series of dacite, andesite and rhyolite lava flows were erupted upon volcanic breccia from the first volcanic phase. These eroded lava flows are 900 m (3,000 ft) thick in total. Volcanics in the lower portion of this series give a potassium-argon date of 750,000 years, while volcanics slightly above the lava flows are 400,000 years old. The most recent volcanic activity produced a series of andesite and basaltic andesite lava flows down Pashleth Creek and the Machmell and Kingcome river valleys. The flow extending from near Pashleth Creek down the Machmell River valley is over 25 km (16 mi) long; its small degree of erosion indicates that it could be 1,000 years old or younger.
Geothermal and seismic activity
At least four volcanoes have had seismic activity since 1985, including Mount Garibaldi (three events), Mount Cayley massif (four events), Mount Meager massif (seventeen events) and the Silverthrone Caldera (two events). Seismic data suggest that these volcanoes still contain active magma chambers, indicating that some Garibaldi Belt volcanoes are likely active, with significant potential hazards. The seismic activity corresponds with some of Canada's recently formed volcanoes and with persistent volcanoes that have had major explosive activity throughout their history, such as Mount Garibaldi and the Mount Cayley and Mount Meager massifs.
A series of hot springs adjacent to the Lillooet River valley, such as the Harrison, Sloquet, Clear Creek and Skookumchuck springs, are not known to occur near areas with recent volcanic activity. Instead, many are located close to 16–26 million year old intrusions that are interpreted to be the roots of heavily eroded volcanoes. These volcanoes formed part of the Cascade Volcanic Arc during the Miocene and their intrusive roots extend from the Fraser Valley in the south to Salal Creek in the north. The relationship of these hot springs to the Garibaldi Belt is not clear. However, a few hot springs are known to exist in areas that have experienced relatively recent volcanic activity. About five hot springs exist in valleys near Mount Cayley and two small groups of hot springs are present at the Mount Meager massif. The springs at the Meager massif might be evidence of a shallow magma chamber beneath the surface. No hot springs like those found at the Mount Meager and Mount Cayley massifs are known to exist at Mount Garibaldi, although there is evidence of abnormally high heat flow at the adjacent Table Meadows and other locations. Abnormally warm water adjacent to Britannia Beach could reflect geothermal activity linked to the Watts Point volcanic zone.
People have used resources in and around the Garibaldi Volcanic Belt for centuries. Obsidian was collected by the Squamish Nation in pre-contact times for making knives, chisels, adzes and other sharp tools. This material appears in sites dating from 10,000 years ago up to protohistoric times. The source for this material is found in upper parts of the mountainous terrain that surrounds Mount Garibaldi. At Opal Cone, lava from the Ring Creek flow was commonly heated and used to cook food because its pumice-like texture retains heat well, and it did not break even after long use.
A large pumice outcrop adjacent to the Mount Meager massif has been mined several times in the past; it extends more than 2,000 m (6,600 ft) in length and 1,000 m (3,300 ft) in width, with a thickness of about 300 m (980 ft). The deposit was first leased by J. MacIsaac, who died in the late 1970s. In the mid-1970s the second leaseholder, W.H. Willes, investigated and mined the pumice, which was crushed, removed and then stored close to the village of Pemberton. Later, the bridge used to access the pumice deposit washed out. Mining operations resumed in 1988 when the deposit was staked by L.B. Bustin. In 1990, the pumice outcrop was bought by D.R. Carefoot from the owners B. Chore and M. Beaupre. In a program from 1991 to 1992, workers evaluated the deposit for its properties as a construction material, an oil absorbent and a stonewashing medium. About 7,500 m3 (260,000 cu ft) of pumice was mined in 1998 by Great Pacific Pumice Incorporated.
The hot springs associated with Meager and Cayley have made these two volcanoes targets for geothermal exploration. At Mount Cayley, temperatures of 50 °C (122 °F) to more than 100 °C (212 °F) have been measured in shallow boreholes on its southwestern flank. Further north, geothermal exploration at the Mount Meager massif has been undertaken by BC Hydro since the late 1970s. Bottom-hole temperatures average 220 °C (428 °F) to 240 °C (464 °F), with 275 °C (527 °F) being the highest recorded temperature. This indicates that the area around Meager is a major geothermal site. Such geothermal resources are thought to occur throughout western Canada and probably extend into the western United States.
The belt of volcanoes has been the subject of myths and legends by First Nations. To the Squamish Nation, Mount Garibaldi is called Nch'kay. In their language it means "Dirty Place". This name of the mountain refers to the volcanic rubble in the area. This mountain, like others located in the area, is considered sacred as it plays an important part of their history. In their oral history, they passed down a story of the flood covering the land. During this time, only two mountains peaked over the water, and Garibaldi was one of them. It was here that the remaining survivors of the flood latched their canoes to the peak and waited for the waters to subside. The Black Tusk on the northwestern end of Garibaldi Lake and Mount Cayley northwest of Mount Garibaldi are called tak'takmu'yin tl'a in7in'axa7en in the Squamish language, which means "Landing Place of the Thunderbird". The Thunderbird is a legendary creature in North American indigenous peoples' history and culture. The rocks that make up The Black Tusk and Mount Cayley were said to have been burnt black by the Thunderbird's lightning.
Protection and monitoring
A number of volcanic features in the Garibaldi Belt are protected by provincial parks. Garibaldi Provincial Park at the southern end of the chain was established in 1927 to protect the abundant geological history, glaciated mountains and other natural resources in the region. It was named after the 2,678 m (8,786 ft) stratovolcano Mount Garibaldi, which in turn was named after the Italian military and political leader Giuseppe Garibaldi in 1860. To the northwest, Brandywine Falls Provincial Park protects Brandywine Falls, a 70 m (230 ft) high waterfall composed of at least four basaltic lava flows with columnar joints. Its name origin is unclear, but it may have originated from two surveyors named Jack Nelson and Bob Mollison.
Like other volcanic zones in Canada, the Garibaldi Volcanic Belt is not monitored closely enough by the Geological Survey of Canada to ascertain how active its magma system is. This is partly because several volcanoes in the chain are located in remote regions and no major eruptions have occurred in Canada in the past few hundred years. As a result, volcano monitoring is less important than dealing with other natural processes, including tsunamis, earthquakes and landslides. However, with the existence of earthquakes, further volcanism is expected and would probably have considerable effects, particularly in a region like southwestern British Columbia where the Garibaldi volcanoes are located in a highly populated area.
The volcanoes comprising the Garibaldi chain are adjacent to the highly populated southwest portion of British Columbia. Unlike in the central Cascade Arc, repeated volcanic activity at a single feeder that builds stratovolcanoes is not typical of the Garibaldi Belt; instead, volcanic activity results in the formation of volcanic fields. Of the entire Cascade Arc, the Garibaldi chain has the lowest rate of volcanic activity. In the past two million years, the volume of erupted material in the Garibaldi Belt has been less than 10% of that in the U.S. states of California and Oregon and about 20% of that within the U.S. state of Washington. As a result, the risk of eruptions throughout this part of the Cascade Arc is minor. Individual volcanoes and volcanic fields remain quiet for long periods of time and certain vents may never erupt again. However, considerable volcanic activity has taken place in the geologically recent past, most notably the explosive eruption at the Mount Meager massif 2,350 years ago.
Jack Souther, a leading authority on geothermal resources and volcanism in the Canadian Cordillera has stated, "at present the volcanoes of the Garibaldi Belt are quiet, presumed dead but still not completely cold. But the flare-up of Meager Mountain 2,500 years ago raises the question, 'Could it happen again?' Was the explosive eruption of Meager Mountain the last gasp of the Garibaldi Volcanic Belt or only the most recent event in its on-going life? The short answer is nobody really knows for sure ... So just in case I sometimes do a quick check of the old hot-spots when I get off the Peak Chair ..." Recent seismic imaging from Geological Survey of Canada employees supported lithoprobe studies in the region of Mount Cayley in which scientists found a large reflector interpreted to be a pool of molten rock roughly 15 km (9.3 mi) below the surface. The existence of hot springs at the Mount Meager massif and Mount Cayley indicates that magmatic heat is still present beneath or near these volcanoes. This long history of volcanic activity along a still active plate boundary indicates that volcanic eruptions in the Garibaldi Belt have not ended and risks for future eruptions remain.
The largest threat from volcanoes in the Garibaldi chain would likely come from tephra released during explosive eruptions. The Mount Meager massif in particular poses a major long-distance threat to communities throughout southern British Columbia and Alberta because of its explosive history. It is estimated that over 200 eruptions have occurred throughout the entire Cascade Volcanic Arc in the past 12,000 years, many of them in the United States. Many eruptions in the western United States have sent large amounts of tephra into southern British Columbia. However, all major cities in southwestern British Columbia with populations of more than 100,000 are located west of the Garibaldi Volcanic Belt, and prevailing winds travel eastwards. Therefore, these communities are less likely to receive large amounts of tephra. In the Lower Mainland, a 10 cm (3.9 in) thick layer of volcanic ash can be expected to be deposited about once every 10,000 years, and a 1 cm (0.39 in) layer about once every 1,000 years. Smaller amounts of volcanic ash can be expected more frequently. During Mount St. Helens' eruption in 1980, 1 mm (0.039 in) of tephra was deposited from southeastern British Columbia to Manitoba.
Even though all major cities in southwestern British Columbia are located west of the Garibaldi chain, future eruptions from Mount Garibaldi are expected to have significant impacts on the adjacent townships of Squamish and Whistler. An eruption column released during Peléan activity would discharge large amounts of tephra that would endanger aircraft. Tephra may also melt the large sheets of glacial ice east of Garibaldi and cause floods. This could later endanger water supplies from Pitt Lake and fisheries on the Pitt River. An explosive eruption and the associated tephra may also create temporary or longer-term water supply difficulties for Vancouver and most of southern British Columbia. The water reservoir for the Greater Vancouver drainage area is south of Mount Garibaldi.
Landslides and lahars
Several landslides and lahars have occurred throughout the Garibaldi Belt. At the Mount Meager massif, considerable landslides from Pylon Peak and Devastator Peak in the past 10,000 years have reached more than 10 km (6.2 mi) downstream in the Lillooet River valley. At least two significant landslides from the southern flank of Pylon Peak, 8,700 and 4,400 years ago, dumped volcanic debris into the adjacent valley of Meager Creek. More recently, a large landslide from Devastation Glacier buried and killed a group of four geologists on July 22, 1975. The estimated volume of this landslide is 13,000,000 m3 (460,000,000 cu ft). A landslide as large as Meager's largest Holocene event would likely produce a lahar that would devastate most of the development in the Lillooet River valley. If such an event occurred without being detected by authorities in time to issue a public warning, it could kill hundreds or even thousands of residents. Because of this, automated monitoring systems could be used to detect an approaching lahar and trigger an automatic warning when a large one is identified. A similar lahar-detection system exists at Mount Rainier in the U.S. state of Washington.
Large landslides from the Mount Cayley massif have occurred on its western flank, including a major debris avalanche about 4,800 years ago that spread volcanic material over an area of 8 km2 (3.1 sq mi) of the adjacent valley bottom. This blocked the Squamish River for a long period of time. Although there are no known eruptions from the massif in the past 10,000 years, it is associated with a group of hot springs. Evans (1990) has suggested that a number of landslides and debris flows at the Mount Cayley massif in the past 10,000 years might have been caused by volcanic activity. Since the 4,800 BP landslide, a number of smaller landslides have occurred there. In 1968 and 1983, a series of landslides caused considerable damage to logging roads and forest stands, but did not result in any casualties.
The threat from lava flows in the Garibaldi Belt is minor unless an eruption takes place in winter or under or adjacent to areas of glacial ice, such as ice fields. When lava flows over large areas of snow, it creates meltwater. This can produce lahars that could flow further than the associated lavas. If water were to enter a volcanic vent that is erupting basaltic lava, it may create a massive explosive eruption. These explosions are generally more extreme than those during normal basaltic eruptions. Therefore, the existence of water, snow, or glacial ice at a volcanic vent would increase the risk of an eruption having a large impact on the surrounding region. Subglacial eruptions have also caused catastrophic glacial outburst floods.
- "Tricouni Southwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-04.
- "Columnar Peak". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2006-02-19. Retrieved 2010-03-04.
- "Opal Cone". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2006-02-19. Retrieved 2010-03-04.
- "Mount Price". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2009-06-28. Retrieved 2010-03-04.
- "Slag Hill". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-12. Retrieved 2010-03-04.
- "Sham Hill". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-04.
- "Silverthrone Caldera". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-12. Retrieved 2010-03-04.
- Smellie, J.L.; Chapman, Mary G. (2002). Volcano-Ice Interaction on Earth and Mars. Geological Society of London. pp. 195, 197. ISBN 1-86239-121-1.
- "Garibaldi volcanic belt". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-02. Archived from the original on 2011-05-06. Retrieved 2010-02-20.
- Wood, Charles A.; Kienle, Jürgen (2001). Volcanoes of North America: United States and Canada. Cambridge, England: Cambridge University Press. pp. 112, 113, 140, 141, 142, 143, 144, 145, 136, 137, 138, 148. ISBN 978-0-521-43811-7. OCLC 27910629.
- "Franklin Glacier". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-02-20.
- Lewis, T. J.; Judge, A. S.; Souther, J. G. (1978). "Possible geothermal resources in the Coast Plutonic Complex of southern British Columbia, Canada". Pure and Applied Geophysics. 117 (1–2): 172–179. Bibcode:1978PApGe.117..172L. doi:10.1007/BF00879744.
- Mahoney, J. Brian; Gordee, Sarah, M.; Haggart, James W.; Friedman, Richard M.; Diakow, Larry J.; Woodsworth, Glenn J. (2009). "Magmatic evolution of the eastern Coast Plutonic Complex, Bella Coola region, west-central British Columbia". Geological Society of America. Retrieved March 29, 2010.
- Girardi, James Daniel (2008). "Evolution of magmas and magma sources to the Coast Mountains Batholith, British Columbia, Canada, refelcted [sic] by elemental and isotopic geochemistry" (PDF). University of Arizona: 5. Retrieved 2010-02-22.
- "Tectonic overview of the CPC". University of Arizona. Retrieved 2010-03-04.
- "Cascadia Subduction Zone". Geodynamics. Natural Resources Canada. 2008-01-15. Archived from the original on 2010-01-22. Retrieved 2010-03-06.
- "Pacific Mountain System – Cascades volcanoes". United States Geological Survey. 2000-10-10. Retrieved 2010-03-05.
- Dutch, Steven (2003-04-07). "Cascade Ranges Volcanoes Compared". University of Wisconsin. Archived from the original on 2012-03-18. Retrieved 2010-05-20.
- "The M9 Cascadia Megathrust Earthquake of January 26, 1700". Natural Resources Canada. 2010-03-03. Archived from the original on 2013-01-01. Retrieved 2010-03-06.
- Monger, J.W.H. (1994). "Character of volcanism, volcanic hazards, and risk, northern end of the Cascade magmatic arc, British Columbia and Washington State". Geology and Geological Hazards of the Vancouver Region, Southwestern British Columbia. Natural Resources Canada. pp. 232, 235, 236, 241, 243, 247, 248. ISBN 0-660-15784-5.
- "Types of volcanoes". Volcanoes of Canada. Natural Resources Canada. 2009-04-02. Archived from the original on 2011-05-06. Retrieved 2010-05-27.
- "The Barrier". BC Geographical Names.
- Bye, A.; Edwards, B. R.; Hickson, C. J. (2000). "Preliminary field, petrographic and geochemical analysis of possible subglacial, dacitic volcanism at the Watts Point volcanic centre, southwestern British Columbia" (PDF). Current Research, Part A. Natural Resources Canada. 2000-A20: 1, 2, 3. Archived from the original (PDF) on 2011-07-06. Retrieved 2010-03-04.
- "Watts Point". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-07-22.
- Edwards, Ben (November 2000). "Mt. Garibaldi, SW British Columbia, Canada". VolcanoWorld. Archived from the original on 2010-07-31. Retrieved 2010-03-18.
- "Lava Domes, Volcanic Domes, Composite Domes". Volcanic Lava Domes. United States Geological Survey. 2009-06-25. Retrieved 2010-03-18.
- "Garibaldi volcanic belt: Garibaldi Lake volcanic field". Catalogue of Canadian volcanoes. 2009-04-01. Archived from the original on 2006-02-19. Retrieved 2010-03-12.
- "Cinder Cone". BC Geographical Names.
- "Mount Fee". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2006-02-19. Retrieved 2010-03-03.
- "Ember Ridge North". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-28.
- "Ember Ridge Northeast". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-28.
- "Ember Ridge Northwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-28.
- "Ember Ridge Southeast". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-12. Retrieved 2010-03-28.
- "Ember Ridge Southwest". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-28.
- "Ember Ridge West". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2010-12-11. Retrieved 2010-03-28.
- "Garibaldi Volcanic Belt: Mount Cayley volcanic field". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-07. Archived from the original on 2011-05-06. Retrieved 2010-03-03.
- "Pali Dome East". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Retrieved 2010-03-07.
- "Pali Dome West". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2011-05-06. Retrieved 2010-03-07.
- "Cauldron Dome". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2011-05-06. Retrieved 2010-03-07.
- "Slag Hill tuya". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2011-05-06. Retrieved 2010-03-08.
- "Ring Mountain (Crucible Dome)". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-03-10. Archived from the original on 2007-03-20. Retrieved 2010-03-07.
- Earle, Steven (2005). "3 Volcanism" (PDF). Malaspina University-College: 21, 24. Retrieved 2010-03-19.
- "Meager". Global Volcanism Program. Smithsonian Institution. Retrieved 2010-02-24.
- "Garibaldi volcano belt: Mount Meager volcanic field". Catalogue of Canadian volcanoes. Natural Resources Canada. 2009-04-01. Archived from the original on 2005-12-28. Retrieved 2010-03-04.
- Friele, Pierre; Jakob, Matthias; Clague, John (March 16, 2008). "Hazard and risk from large landslides from Mount Meager volcano, British Columbia, Canada". Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards. Taylor & Francis. p. 61. doi:10.1080/17499510801958711. ISSN 1749-9518.
- "Canada Volcanoes and Volcanics". Canada Volcanoes and Volcanics. United States Geological Survey. 2009-11-06. Retrieved 2010-03-29.
- "Distribution of tephra deposits in Western North America". Volcanoes of Canada. Natural Resources Canada. 2008-02-12. Archived from the original on 2011-05-06. Retrieved 2010-03-29.
- Etkin, David; Haque, C.E.; Brooks, Gregory R. (2003-04-30). An Assessment of Natural Hazards and Disasters in Canada. Springer. pp. 569, 582, 583. ISBN 978-1-4020-1179-5.
- "Volcanology in the Geological Survey of Canada". Volcanoes of Canada. Natural Resources Canada. Archived from the original on 2006-10-08. Retrieved 2008-05-09.
- Woodsworth, Glenn J. (April 2003). "Geology and Geothermal Potential of the AWA Claim Group, Squamish, British Columbia". Vancouver, British Columbia: Gold Commissioner's Office: 9, 10.
- Reimer/Yumks, Rudy. "Squamish Nation Cognitive Landscapes" (PDF). McMaster University: 5, 6. Archived from the original (PDF) on 2010-03-16. Retrieved 2008-05-19.
- "Mount Meager, Lillooet River Pumice, Pum, Great Pacific, Mt. Meager Pumice". MINFILE Mineral Inventory. Government of British Columbia. 1998-12-04. Retrieved 2010-03-16.
- "South Meager Geothermal Project". Western GeoPower Corp. Retrieved 2011-05-09.
- Yumks; Reimer, Rudy (April 2003). "Squamish Traditional Use Study: Squamish Traditional Use of Nch'kay Or the Mount Garibaldi and Brohm Ridge Area" (PDF). Draft. First Heritage Archaeological Consulting: 8, 11, 17. Retrieved 2010-03-30.
- "Garibaldi Provincial Park". BCParks. Retrieved 2010-03-06.
- "Mount Garibaldi". BC Geographical Names.
- Stelling, Peter L.; Tucker, David Samuel (2007). "Floods, Faults, and Fire: Geological Field Trips in Washington State and Southwest British Columbia". Current Research, Part A. Geological Society of America: 2, 14. ISBN 978-0-8137-0009-0. Retrieved 2010-03-04.
- "Brandywine Falls Provincial Park". BCParks. Retrieved 2010-03-06.
- "Monitoring volcanoes". Volcanoes of Canada. Natural Resources Canada. 2009-02-26. Archived from the original on 2011-05-06. Retrieved 2010-03-24.
- "CanGEA Honourary [sic?] Member 2008 Dr. Jack Souther" (PDF). Canadian Geothermal Energy Association. Archived from the original (PDF) on 2010-10-22. Retrieved 2010-03-04.
- Clague, Friele; Clague, John J. (2004). "Large Holocene landslides from Pylon Peak, southwestern British Columbia". Canadian Journal of Earth Sciences. Natural Resources Canada. 41 (2): 165. Bibcode:2004CaJES..41..165F. doi:10.1139/e03-089. Retrieved 2010-03-03.
- "Landslide: Devastator Glacier BC, Jul 22 1975". Natural Resources Canada. 2009-12-01. Archived from the original on 2011-07-21. Retrieved 2010-03-03.
- "Where do landslides occur?". Government of British Columbia. Archived from the original on 2010-08-18. Retrieved 2010-03-03.
- G. Evans, S.; Brooks, G. R. (1992). "Prehistoric debris avalanches from Mount Cayley volcano, British Columbia:1 Reply". Canadian Journal of Earth Sciences. Natural Resources Canada. 29 (6): 1346. Bibcode:1992CaJES..29.1343E. doi:10.1139/e92-109.
- "Photo Collection". Landslides. Natural Resources Canada. 2007-02-05. Archived from the original on 2011-05-06. Retrieved 2010-03-03. |
Beacon Lesson Plan Library
Description
While doing this lesson, students will be able to remember, compare, and contrast two different Cinderella stories of their choice.
Objectives
The student increases comprehension by rereading, retelling, and discussion.
The student retells specific details of information heard, including sequence of events.
The student uses prior knowledge, illustrations, and text to make predictions.
The student identifies similarities and differences between two texts (for example, in topics, characters, and problems).
The student uses simple reference material to obtain information (for example, table of contents, fiction and nonfiction books, picture dictionaries, audio visual software).
The student generates ideas before writing on self-selected topics and assigned tasks (for example, brainstorming, observing surroundings, reading texts, discussion with peer).
The student listens for specific information in stories (including but not limited to sequence, story details).
The student speaks clearly and uses appropriate volume in a variety of settings (for example, large or small groups, learning centers).
The student knows various broad literary forms (for example, nonfiction, fiction, poetry, picture and predictable books).
The student knows main characters, setting, and simple plot in a story.
The student relates characters and simple events in a story or biography to own life.
Materials
- Two different versions of the Cinderella story
- TV Elite
- Access to the Internet
- Inspirations Software
- Digital Camera
- File cards
- Charts, diagrams
- Worksheets (included in file)
- Assorted tea, hot water, cups
Preparations
1. Collect Cinderella stories
2. Search the internet for Cinderella stories
3. Prepare worksheets
4. Buy tea materials
Procedures
1. After reading different versions of Cinderella, use "Fairy Tales Around the World" to compare the different Cinderella stories.
2. Choose two characters (one good and one bad) to compare using the "Fairy Tale Lessons" sheet. (See Associated File)
3. Write original tales with the class using "Tales With a Twist."
4. The Internet "Annotated Cinderella" would be good to use with the gifted children!
5. Present an "Inspiration" version of the Venn Diagram to compare/contrast two different Cinderella stories.
6. During this unit of study, various questioning strategies will be used. Questions can be done orally and in groups. Examples are:
-Knowledge - Can you recall the problem in the tale we just read?
-Comprehension - How would you compare Tale 1 with Tale 2? How would you contrast them?
-Application - What questions would you ask Cinderella in an interview, and why?
-Analysis - Why do you think the sisters were so mean to someone who was so nice?
-Synthesis - Suppose you could have talked to the stepmother, what would you tell her and why?
-Evaluation - Do you think the Prince made a good choice for his best friend and wife, and why or why not?
7. ESE and ESOL strategies include drawing, charts, mapping, peer work, hands-on activities, pictures, and computer activities.
8. To set the environment for learning, the following can be done: play instrumental music (Classical or Cinderella sound track), have pictures of characters and places in the Cinderella stories around the room, and dim the lights while telling and retelling the stories!
Assessments
Self Assessment and Teacher Assessment:
Use the Rubric:
Rubric for comparing and contrasting two different Cinderella stories, using the Venn Diagram
Using the numbers 0-4: 0 being the worst, and 4 being the best, give your work a number for each of the following:
_____1. I wrote each title in the correct place, and
remembered to write my name.
_____2. On the left, I wrote how the first fairy tale was different from the second one.
_____3. On the right, I wrote how the second tale was different from the first.
_____4. In the middle, I wrote how the tales were the same.
_____5. I used good handwriting.
_____6. I wrote at least 4 things in each spot on the Venn Diagram.
_____ Total of points
If your total is 23 or 24, give yourself a 4 for the Venn Diagram.
If your total is 21 or 22, give yourself a 3 for the Venn Diagram.
If your total is 19 or 20, give yourself a 2 for the Venn Diagram.
If your total is 17 or 18, you get a 1.
If your total is 16 or below, you get a 0.
Extensions
For a Science connection, make "Jolly Tea" for Cinderella's Ball! For Math and Art, cut out shapes to make Cinderella's Castle. To connect Social Studies, read the "Jolly Map," and reinforce Lifeskills with the game "Giants, Wizards, and Elves."
Web Links
Web supplement for Cinderella Stories
Web supplement for Cinderella Stories
Web supplement for Cinderella Stories
Attached Files
Worksheets: List of Cinderella Stories, Mapping, Sequence, Character/Tale Comparison, Tales with a Twist, Jolly Tea, Math, Map, Game. File Extension: pdf
Return to the Beacon Lesson Plan Library. |
Before we look at simultaneous equations, let us brush up on some fundamentals. First, we define what is meant by an equation: it is a statement indicating that two algebraic expressions are equal. For instance, let 3x - 4 be one expression and 5x - 10 another. If these two expressions are related to each other by an equality sign in the fashion shown below, we call it an equation.
3x - 4 = 5x - 10 .......... (1)
The side with the expression 3x - 4 is referred to as the Left Hand Side (LHS) and the one with 5x - 10 as the Right Hand Side (RHS). If we substitute x = 3 in the above equation, we find that both sides give us 5. Now we substitute some other value, say x = 2. We find that the LHS gives us 2 whereas the RHS gives us 0. Looking at these two cases, we conclude that the equation holds only when x = 3 and not for other values of x. But consider the equation shown below.
3x + 2 + 2x - 5 = 5x - 3 .......... (2)
The LHS and the RHS of this equation give us the same value for any value of x. In other words, this equation holds for any value of x. Equations like these are called identities, while the kind we saw before is referred to as an equation of condition or, more simply, an equation.
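This distinction can be checked numerically. Below is a small Python sketch (the helper function and the sampling range are our own choices, not part of the text) that tests both equations over a range of x values:

```python
# Distinguish a conditional equation from an identity by sampling x values.

def holds_everywhere(lhs, rhs, samples=range(-10, 11)):
    """Return True if lhs(x) == rhs(x) for every sampled x."""
    return all(lhs(x) == rhs(x) for x in samples)

# Equation (1): 3x - 4 = 5x - 10 holds only at x = 3 (an equation of condition).
eq1 = holds_everywhere(lambda x: 3*x - 4, lambda x: 5*x - 10)

# Equation (2): 3x + 2 + 2x - 5 = 5x - 3 holds for every x (an identity).
eq2 = holds_everywhere(lambda x: 3*x + 2 + 2*x - 5, lambda x: 5*x - 3)

print(eq1, eq2)  # False True
```

Note that sampling a finite range cannot prove an identity in general, but it is enough to expose an equation of condition.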
We saw above that equation (1) holds true only when we substitute x = 3. That is, the value x = 3 is said to satisfy the equation. Since we are expected to find the value of x for which the equation holds true, the quantity x is known as the unknown quantity. The value of x found after solving the equation is called the solution or the root of the equation.
While solving equations, we have to remember these points.
1. Whatever quantity we add to or subtract from one side of the equation, we must also add to or subtract from the other side. Let us look at this through an example.
For instance, we are required to solve the equation
x + 3 = 15
That is, on the LHS we ought to have only x. Can we subtract 3 from the LHS so that +3 and -3 cancel each other leaving behind only x? We can. But as stated above this operation should be done on both sides of the equation. That is, we will have
x + 3 - 3 = 15 - 3
x + 0 = 12
x = 12
If we do not perform this operation on both sides, the balance between the two sides is disturbed; as a result, the equality sign loses its relevance and no longer has any meaning.
We take another example and check the same for addition. We have an equation x - 3 = 12, for which we have to obtain a solution.
x - 3 + 3 = 12 + 3
x = 15
Since we know this, while solving equations we directly transpose the quantity to the other side of the equation with its sign changed. Here we have introduced a new word, "transpose". What is meant by transposing? Bringing any term from one side of the equation to the other side is called transposing.
2. If we are to multiply or divide a particular element or the whole expression on one side of the equation, then we should do the same on the other side of the equation also. Let us take an example and understand this. We have to find the solution for the equation 3x + 5 = 20. We begin by subtracting 5 from both the sides. That will be
3x + 5 - 5 = 20 - 5
3x + 0 = 15
Since only x ought to be there on the LHS (i.e. solving for x), we divide the LHS by 3 and do a similar operation on the RHS also. We have
3x/3 = 15/3
Therefore, x = 5 is the solution of the given equation.
Suppose we are given an equation like
(x - 4)/3 = 6
and asked to solve it, how should we proceed?
We begin by multiplying both the sides of the equation by 3. We have
(x - 4)/3 × 3 = 6 × 3
x - 4 = 18
x - 4 + 4 = 18 + 4
x = 22
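The transposition steps used throughout this section can be collected into one small routine. The sketch below (not part of the original lesson) solves any equation of the form ax + b = c by performing the same operation on both sides, exactly as described above:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x by transposing b and then dividing by a."""
    if a == 0:
        raise ValueError("a must be non-zero for a unique solution")
    # Subtract b from both sides: a*x = c - b
    # Divide both sides by a:     x = (c - b) / a
    return (c - b) / a

print(solve_linear(3, 5, 20))  # 5.0  -> root of 3x + 5 = 20
print(solve_linear(1, 3, 15))  # 12.0 -> root of x + 3 = 15
```

Each line of the function mirrors one "do it to both sides" step from the worked examples, which is why the returned value satisfies the original equation when substituted back.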
Faith In Action: Religious Bullying
Activity time: 0 minutes
Materials for Activity
- Newsprint, markers, and tape
- Handout 3, Religious Prejudice Simmers
- Handout 4, Anti-Bullying Resources
Preparation for Activity
- Copy both handouts for all participants. Handout 3, Religious Prejudice Simmers, is for Part 1 of this activity. Set aside Handout 4, Anti-Bullying Resources, to distribute at a subsequent meeting when the group will do Part 2.
- Plan a specific date and time for participants to return to the group with their completed research on bullying policies and complete Part 2 of this activity-ideally, the next workshop meeting.
- Post blank newsprint.
Description of Activity
In this two-part activity, participants explore issues of religious prejudice in the local community through a story from Eboo Patel, founder of the Interfaith Youth Core.
Distribute Handout 3, Religious Prejudice Simmers. Ask volunteers to read the piece aloud, one paragraph per person. Then, engage participants in a conversation about the story, beginning with these questions:
- Has an incident like this ever happened at your school? What did you, or others, do about it?
- Have you ever felt bullied because of your faith? What was that like? How was it addressed by: you, friends, families, and/or authorities?
- When the article originally appeared in USA Today, readers posted a variety of comments. (Share this sampling with the group.) How might you respond to people with these or similar opinions?
- "The only way to change students' behavior is to change their perceptions of other faiths, and parents, rather than educators, hold the key."
- "Often the world we live in offers difficult and unfair challenges. Learning to deal with them is a defining aspect of maturity. No amount of adult supervision or legislative tinkering will ever level the playing field."
- "Until Muslims start speaking out against terrorism and the militancy of Islam they will get no sympathy here."
- "The fact is that all religions are LIES, and that we would be better to simply get rid of all religions. They are nothing more than the likes and dislikes of people who lived ages ago, foisted upon the people out of the fear of a 'god' that never existed, doesn't exist, and never will exist."
- What kind of policies on bullying does your school have? Do they include specific language about sexual harassment? Harassment based on race or ethnicity? Gender or sexual orientation? Religion? If you do not know, seek a volunteer from each school represented in the group and ask them to research the matter and bring back their findings. (If some participants are home-schooled, have them research local or state anti-discrimination laws.)
- If your school does not have rules against acts of prejudice, what can you do to change that?
Have participants brainstorm some ideas and write them on newsprint.
Gather the group and review policies on bullying they have obtained from their schools. Invite youth to lobby for bullying policies if their school does not have them or to strengthen them, if the policies seem insufficient. Have participants formulate a plan of action. Distribute Handout 4, Anti-Bullying Resources. Brainstorm allies and make plans to solicit their support. How will youth report on their actions and the results? Schedule an additional meeting devoted to anti-bullying action and/or plan to produce and share a written report.
Share, Print, or Explore
For more information contact [email protected]. |
Description of Lesson: One common technique in computer science is to debug a faulty computer program. In distance education, students can be brought together using a synchronous tool and by sharing a screen, work collaboratively to debug a computer program, just as they might in a real world situation.
Appropriate Content Areas: Primarily computer science, but it could be modified to group problem solving using online application sharing.
Goals & Objectives:
Generally, the goal of group debugging is to develop student knowledge of coding and their ability to evaluate and decipher code in order to locate problems and find solutions.
Students need prior knowledge of the code constructs to be studied during the exercise. In general, debugging is a knowledge-in-application study rather than a core learning activity.
Materials and Resources:
Students need access to a synchronous screen-sharing application and to the software used for the actual coding.
Guiding Questions for this Lesson:
Can students identify and fix coding errors?
Lesson Outline and Procedure:
As an example, the instructor could be running VB Studio and sharing the application with students using an application such as HorizonWimba or Elluminate. Using a chat application, students in groups of 3-5 would discover where the error is and then suggest solutions. Solutions could be chosen and then tested to solve the problem.
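As a hypothetical illustration (the lesson itself is language-agnostic, and this snippet is not part of the original plan), a short buggy function like the following could be shared on screen for a group to diagnose. The bug is a classic off-by-one error that is easy to spot collaboratively but easy to miss alone:

```python
# Hypothetical exercise: both functions are meant to return the sum of
# the first n positive integers, but the first one contains a bug.

def sum_to_n_buggy(n):
    total = 0
    for i in range(1, n):  # Bug: range(1, n) stops at n - 1, so n is skipped
        total += i
    return total

def sum_to_n_fixed(n):
    total = 0
    for i in range(1, n + 1):  # Fix: extend the range to include n itself
        total += i
    return total

print(sum_to_n_buggy(10))  # 45 -- wrong
print(sum_to_n_fixed(10))  # 55 -- correct
```

In a session, only the buggy version would be shown; the group's task is to explain why the output is wrong and propose the one-character fix before the instructor reveals it.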
- Discerning the error should challenge the students' knowledge without moving beyond their zone of proximal development. In other words, make it challenging, but don't require them to access knowledge they do not yet have.
- Debugging can also be performed individually; group debugging, however, allows more complex problems to be solved through synergistic activity.
- Adjust groups so that one student is not always leading a given group.
What accommodations may be needed for students with disabilities or other special needs? Time is a primary accommodation. A blind student may need extra time for a Braille reader to process the code. Some individuals may not be able to keep up with the group and may need to be assigned individual problems.
The timeline will vary by the activity. Typically, you want a group of 3-5 students to take less than 15 minutes to find a solution to a given error. This time avoids annoyance and student drop out, but there will obviously be situations where more advanced problems may take an hour to solve.
Ideas for Lesson Evaluation and Teacher Reflection:
How did the students like the lesson? End-of-semester evaluations should ask about the usefulness of, and learning accomplished through, such activities. Also, the conversation that occurs during the activity will help gauge how the students are enjoying various aspects and whether they are learning and/or participating.
How was student learning verified? Participation can be assessed through analysis of a session transcript. A summative assessment may be performed through a final coding exercise. Peer assessment can be used on group performance.
Giant volcanic eruptions could be triggered by the buoyancy of magma alone, report two independent studies published online this week in Nature Geoscience. Although smaller, more frequent eruptions are known to often be triggered by the injection of new magma into a magma chamber, the trigger for larger eruptions has remained enigmatic.
Magma is buoyant because it is hot compared with the Earth’s cool crust that hosts the magma chamber beneath a volcano. Luca Caricchi and colleagues and Wim Malfait and colleagues used computer models and laboratory experiments, respectively, to show that the pressure placed on magma chamber walls within a volcano by buoyant magma can be sufficient to trigger a giant eruption. Specifically, if magma is gradually added to a large magma chamber over a period of hundreds of thousands to millions of years, the build-up of pressure can cause the magma chamber roof to collapse, generating a giant eruption similar to those that have previously occurred at Yellowstone, USA.
In an accompanying News and Views article, Mark Jellinek writes that the results imply that “rare, giant super-eruptions and smaller, more frequent events reflect a transition in the essential driving forces for volcanism.” |
How Many Ladybugs Are There?
In this counting worksheet, students find and circle all the ladybugs they see. Then, students count the total number of ladybugs on the page. In addition, the teacher may choose to discuss the other animals on the page (dog, mouse, shark).
Teaching Place Value in First Grade: A Resource Guide
Here is a guide for teaching place value concepts to first graders. The guide includes collaborative learning station ideas and independent learning station ideas for pupils. The skills focused are skip counting by 2s, 5s, and 10s,...
1st Math CCSS: Designed
How Many Spots are on Each Ladybug?
The ever-popular ladybug serves as an excellent tool for developing the number sense of young mathematicians in this math activity. Presented with a series of colorful pictures, children practice counting and writing the numbers 1-10 as...
Pre-K - K Math CCSS: Adaptable
Activities that Build Number Sense
Have fun while building the number sense of young mathematicians with this list of ten-frame learning games. From developing cardinality and counting skills to learning place value and basic addition strategies, ten-frames are excellent...
K - 2nd Math CCSS: Adaptable
Math Stars: A Problem-Solving Newsletter Grade 1
Keep the skills of your young mathematicians up-to-date with this series of newsletter worksheets. Offering a wide array of basic arithmetic, geometry, and problem solving exercises, this resource is a great way to develop the critical...
1st - 3rd Math CCSS: Adaptable
Let’s Count!: English Language Development Lessons (Theme 5)
Counting is the theme of this compilation of ESL lessons. Through listening, speaking, and moving, your young learners take part in a variety of activities to enhance their English proficiency such as making menus and books,...
K Math CCSS: Adaptable |
What makes plastic so useful for humans is the same thing that makes it so harmful for wildlife – especially birds.
Plastic, in its many different forms, is virtually indestructible. Not only that, but despite all of it being manufactured on land, over 80% of it ends up in the world’s oceans, according to Charles Moore of the Algalita Marine Research Foundation. Plastics are washed down rivers and flood drains, blown into the sea from landfills, or jettisoned from cargo ships as they criss-cross the world’s seas.
When these plastic items make their way into the oceans, they become extremely hazardous for sea life and seabirds. Chris Jordan’s photographic work on Midway Atoll in the Pacific illustrates some of the horrors of seabirds swallowing the plastic debris which has been discarded from countries around the Pacific Rim. Chris Jordan notes:
“The nesting chicks are fed lethal quantities of plastic by their parents, who mistake the floating trash for food as they forage over the vast polluted Pacific Ocean”
The North Pacific Gyre is a huge area of ocean northeast of Hawaii where, due to the prevailing tidal currents, a vast area of calm water the size of Texas has become the final resting place for plastic waste from across the Pacific (see video). Further areas, or gyres, exist in the Atlantic, Mediterranean and at least five other seas across the planet.
Although large plastic items such as discarded fishing nets have been a cause of concern for decades, more recent research has highlighted the worrying fact that plastic never really disintegrates: it is only reduced by salt water into tiny nodules – almost invisible to the naked eye. This means that plastics are being eaten by ever smaller sea creatures. Marine Biologist Richard Thompson of the University of Plymouth in England notes that, “When they get as small as powder, even zooplankton will swallow them.”
In recent years, Greenpeace have begun sampling plastic content in the Arctic Ocean in order to establish a baseline for future investigations. Sampling is done by towing a plankton net and a flow meter alongside a ship.
Inevitably, the near-indestructible plastic compounds are finding their way into the human food chain, and will continue to do so. University of Exeter researcher Clare Miller took part in the Arctic research:
“Plastic affects everything that depends on the ocean for survival….Chemical contaminants such as BPA and heavy metals are present in many plastics, leading to biomagnification throughout the food chain”.
The fact that plastics are manufactured from crude oil, and their widespread use in modern societies, makes them a particularly urgent problem to address.
As Clare Miller notes:
“Plastic cannot be easily removed from the ocean, so the only way to reduce it is to prevent plastic reaching the ocean in the first place.”
What changes have you been able to make so far in reducing your own dependence on plastics? Share your thoughts below. |
The Herschel Space Observatory’s large telescope and state-of-the-art infrared detectors have provided the first confirmed finding of oxygen molecules in space. The molecules were discovered in the Orion star-forming complex.
Individual atoms of oxygen are common in space, particularly around massive stars. But molecular oxygen, which makes up about 20 percent of the air we breathe, has eluded astronomers until now.
“Oxygen gas was discovered in the 1770s, but it’s taken us more than 230 years to finally say with certainty that this very simple molecule exists in space,” said Paul Goldsmith, NASA’s Herschel project scientist at the agency’s Jet Propulsion Laboratory in Pasadena, Calif. Goldsmith is lead author of a recent paper describing the findings in the Astrophysical Journal. Herschel is a European Space Agency-led mission with important NASA contributions.
Astronomers searched for the elusive molecules in space for decades using balloons, as well as ground- and space-based telescopes. The Swedish Odin telescope spotted the molecule in 2007, but the sighting could not be confirmed.
Goldsmith and his colleagues propose that oxygen is locked up in water ice that coats tiny dust grains. They think the oxygen detected by Herschel in the Orion nebula was formed after starlight warmed the icy grains, releasing water, which was converted into oxygen molecules.
“This explains where some of the oxygen might be hiding,” said Goldsmith. “But we didn’t find large amounts of it, and still don’t understand what is so special about the spots where we find it. The universe still holds many secrets.”
The researchers plan to continue their hunt for oxygen molecules in other star-forming regions. |
Post Colonial History of Bolivia: 1800-1900 A.D.

At the beginning of the Post Colonial Era, in 1809, there was an uprising in Chuquisaca (Sucre). Mestizos and indigenous groups together fought for Bolivia's independence from Spain, led and supported by freedom fighters such as Pedro Domingo Murillo (who was captured and murdered), the Venezuelan Simón Bolivar, and José de San Martín. But it wasn't until 16 years later that the territory was actually established as a republic (with the first declaration of independence on August 6, 1825), after a decisive battle was won in Ayacucho on December 9, 1824 by Antonio José de Sucre, whose republican army of 7,000 men defeated José de La Serna's Spanish army of 10,000 by capturing La Serna. The Spanish surrendered the next day. (Read about Bolivian Independence Day.)
Bolivia was the first of the Spanish colonies to win its independence from Spain, beginning the Post Colonial period of Bolivia's history. Simón Bolivar (featured in this painting) drew up a constitution for the new republic in 1826, named the country Bolivia, and renamed Chuquisaca as "Sucre" in honor of Antonio José de Sucre, who was named Bolivia's first president. However, internal turmoil made it impossible for Sucre to organize the new state.
Peru invaded Bolivia one year later on May 28, 1828 under General Agustín Gamarra and in September he forced Sucre to resign. Sucre went into exile in Colombia. Mariscal (Marshall) Andres Santa Cruz was elected president of Bolivia in 1829 to replace him and held the position for the next 10 years.
In 1836, Mariscal Andrés de Santa Cruz and General Gamarra of Peru agreed that Peru and Bolivia should never have been separated and, aiming to reunite the two countries, formed a federation with the legislative branches of both countries in agreement. However, Gamarra believed the federation should be headed by Peru, while Mariscal Santa Cruz believed Bolivia should be given more political power.
Simón Bolivar did not agree with either of them, having formed a project of his own to reunite most of the Spanish colonies, called the Gran Colombia Federation. He appointed the deposed Antonio José de Sucre as Commander of the Colombian Army and on June 3, 1828 declared war against Peru. However, Sucre was soon murdered and only 2 years later Bolivar died as well. At this time Colombian troops withdrew from Peru and the war came to an end.
Meanwhile, in Peru, General Gamarra deposed Peruvian President José de la Mar and declared himself as the new president of Peru. In 1833 a new Peruvian parliament was formed but instead of calling for elections, turned the presidency over to General Luis Orbegoso. Gamarra refused to acknowledge the new government.
In 1835 Orbegoso was overthrown by General Felipe Salaverry. Orbegoso turned to Bolivian President Mariscal de Santa Cruz and the Bolivian army invaded Peru. With Bolivia's aid, Orbegoso was returned to power. In exchange for this aid, Orbegoso agreed to the formation of a new Peru-Bolivia Confederation to attempt to reunite Bolivia and Peru and converting it back into the previous single territory of "Alto Peru". Initially this new country was divided into 3 states: North Peru, South Peru, and Bolivia. Its capital city was Tacna, in South Peru.
Post colonial times continued to be violent and unorganized. The creation of this federation caused alarm among neighboring countries, especially Chile, which perceived it as a threat to its regional power and independence. Simultaneously, both Peru and Chile were competing for control of commercial routes on the Pacific Coast.
Three years later Peru went to war against Chile, and several Peruvian rebel groups that supported Chile. The conflict began initially due to disagreements over tariffs. Because of the federation formed with Peru, Bolivia was obligated to support the Peruvian government, defeating the Chileans and Peruvian rebels in Paucarpata.
On the same day, a treaty called the Paucarpata Treaty was signed. According to this treaty, the Chileans and rebel Peruvians surrendered unconditionally and were allowed to return to Chile WITH all their arms and equipment intact.
The Chilean government later discarded this treaty and the Chilean army, again with the Peruvian rebels, on January 20, 1839 went to war once more against the Peru-Bolivia federation (once again led by Mariscal Santa Cruz) in Yungay. Chile defeated the federation this time using the same weapons and equipment Santa Cruz had allowed them to take home.
This resulted in the failure of the Peru-Bolivia federation in the same year, 1839. Gamarra declared its dissolution and ordered the states of North Peru and South Peru to merge, leaving Bolivia as a separate entity once again.
The post colonial period was rife with struggles for power. Between 1879 and 1884 Bolivia went to war against Chile in the War of the Pacific, this time over saltpeter (potassium nitrate) mining concessions in Bolivia's Atacama region that had been given to Chilean companies. In this war Bolivia lost a large part of its seacoast, its rich nitrate deposits in the Atacama, and the port of Antofagasta to Chile (as shown on the map). This was a major blow to Bolivia, as it lost its access to the sea; its history has been marked by this ever since. (1879 Map copyright undetermined.)
Antofagasta is a port city in what is now northern Chile, about 700 miles North of Santiago in the Atacama Desert. In 1866 it was founded as a seaport for recently discovered silver mines. The city's name comes from the Quechua term "town of the great saltpeter bed" and the Aymara word that means "great salt bed". Because of this, the War of the Pacific between Bolivia and Chile is sometimes called the Great Saltpeter War.
The city's original Spanish name was Peñas Blancas (meaning "white boulders") and it was a part of the province of Litoral, belonging to Bolivia.
Before the Spanish arrived, the first inhabitants of this region were the Changos, who fished, gathered shellfish, and hunted sea lions. This region was once a part of the Incan empire.
During the late post colonial era, the price of silver rose worldwide and Bolivia prospered for a short period of time, although Bolivia's indigenous groups did not benefit from this improved economy. By the early 1900s, the silver mines were depleted after being mined for nearly four centuries, and tin replaced silver as Bolivia's primary source of wealth. The population (and importance) of Potosí, once the world's most populated city, eventually waned, and Oruro, where major tin mines were located, became a major commercial hub. The post colonial era was coming to a close as Bolivia consolidated its independence and began to shape its own history as a self-governed nation, although that hasn't gone too awfully well. See the next chapter on Bolivia's recent history, which covers the period between 1900 and today.
In 1898 the seat of two Bolivian government branches (the executive and legislative) was moved to La Paz (above), reflecting the changes in the Sucre-Potosí region's status and nearing the end of the Post Colonial era. Bolivia's Supreme Court remains in Sucre, Bolivia's only capital city. Now read about our Recent History.
35 slides! This presentation covers the broad rhetorical categories critical for passing the AP Language and Composition rhetorical analysis response. Included in this presentation are the structures students may focus on when investigating a form of communication.
Within the presentation students will take notes, but will also analyze two separate articles (links provided), use the terminology, and practice giving commentary connected to how the structures persuade the audience. There are examples, questions, and activities included.
The presentation ends with students being asked to write a small rhetorical analysis. |
Earth's energy is out of balance: more is absorbed from the sun than emitted back to space.
Courtesy of NASA
“Missing” Heat May Affect Future Climate Change
Scientists know that more of the Sun’s energy gets to our planet than leaves. It hasn’t always been this way. More energy is sticking around as heat because there are more heat-trapping greenhouse gases in the air than there used to be. That’s causing global warming.
Where does the energy go? Scientists would like to know. They have been looking for heat energy in the atmosphere and ocean using satellites, ocean floats, and other instruments. But they can’t seem to find all the heat. In fact, they can only find about half of it. If the energy came to Earth and has not left, then it must be around here somewhere, but where?
Lots of heat might be lurking in places that we can’t watch with satellites or other instruments. The deep ocean is one of those places. Scientists have found warmer ocean water as much as 6,500 feet deep in the ocean (about 2,000 meters). There may be more heat even deeper in the ocean, but we don’t have a way to measure it.
It is important to measure where energy goes on Earth so that we can understand how climate is changing. Scientists are hoping that when we invent new ways to measure heat in the deep ocean and other places, we will be able to solve the mystery of the missing heat.
What are the significant differences between Matthew’s and Luke’s narratives of Jesus’s infancy?
The different purposes with which Matthew and Luke approach their narratives influence the ways that they tell the story of Christ’s birth. Because both authors are primarily interested in establishing the divinity of Christ, they both call Jesus’s birth miraculous, and cite God alone as the creator of Jesus’s life. But Matthew, who is interested in the Jewish lineage of Christ and the relationship between Christ’s teachings and the Judaic tradition, focuses on the social ramifications of Mary’s pregnancy more than Luke does. Matthew lauds Joseph for not abandoning his fiancée, even though Jewish custom dictates that pregnancy outside of wedlock is so shameful as to require a man to abandon his future wife. Luke’s narrative seeks to declare the good news of Christ’s birth to the poor and outcast, including women. As a result, Luke focuses on the humility of Jesus’s origins, pointing out that Jesus’s birth occurs in humble peasant surroundings. Luke also exalts Mary for her courage, making her a prominent female character with whom women in his audience might be able to sympathize.
How does the historical context of the Book of Revelation affect its content?
The Book of Revelation was written between 81 and 96 a.d. by a leader in a small church community on the island of Patmos. This community experienced persecution by the Roman Empire, which forced early Christians to put their allegiance to the empire before their allegiance to religion. When the Book of Revelation was written, the Roman Empire was expanding, and many Christians resisted both this expansion and Roman cults. Much of the Book of Revelation focuses on the contrast between the evils of the Roman Empire, personified as the two beasts in Revelation 13, and the true Christian God, who, according to Revelation, will “wipe away every tear” (21:4). Furthermore, in the first century a.d., apocalyptic literature like the Book of Revelation was very common, and Revelation contains many of the conventions of this literary form. Apocalyptic literature involves revelations that claim to predict future events, whereas previous revelations had only claimed to deliver the word of God. Moreover, apocalyptic literature almost always follows dual narratives of hope and despair, at once describing the current evils of the world and promising a figure who would save the righteous or faithful from the ultimate demise of the sinful world. The Book of Revelation uses the conventions of a popular literary form to address a pressing contemporary event. By describing equally vivid scenes of destruction and salvation, the Book of Revelation attempts to instill a hatred for the Roman Empire and strengthen faith in Christianity.
What is Paul’s relationship to Judaism, and what does he see as the relationship between Judaism and Christianity?
Paul of Tarsus calls himself a “Jew of Jews,” and never would have thought of himself otherwise. Like most of the early followers of Jesus, he came from a Jewish background, and saw Jesus’s teachings as an extension rather than a challenge to Judaism. However, the two religions come into conflict on many points. For Paul, the most significant conflict is between the Jewish idea that people will be judged according to their good or bad deeds on Earth and the Christian idea that faith in Christ is the only way to earn eternal salvation. Paul’s egalitarian approach emphasizes equality rather than inequality between Jews and Gentiles, saying that only with faith in God and Jesus Christ is salvation possible. His writing does not reconcile this conflict, but he does express his belief that the people of Israel are chosen and merit special grace, but that the death and resurrection of Jesus Christ could also assure a promise of grace. Paul’s belief that forgiveness and love are given to all people, Jew and Gentile alike, made him a popular missionary. Rather than preaching religion as an exclusionary institution, his writing suggests that there is room within Christianity for people of different backgrounds. He views his belief as a renewed form of Judaism, not as an abandonment of his tradition.
1. Choose one New Testament parable that is found in more than one Gospel. Provide an analysis of the similarities and the differences between the versions. What is the significance of this comparison for understanding the distinctive theological perspectives of the Gospels?
2. Describe the similarities and differences among two of the Passion narratives (i.e., the trial and crucifixion). What is significant for the authors of these accounts? What is at stake in answering the question of who killed Jesus?
3. Consider the Book of Revelation. How might one be able to use the book in a contemporary learning context, without using it to claim salvation for the few and destruction for the many? Does it have anything to say to contemporary society?
4. The New Testament contains numerous discussions pertaining to the resurrection of Jesus. Compare and contrast a resurrection account in one of the Gospels to Paul’s understanding of the living Christ in one of his letters.
This is a great place to start in the bible!
The Bible does not give a specific number of wise men; it is just assumed that there were three. There could have been two, or there could have been many more.
The starting claim that the two books "Luke" and "Acts" were originally a single volume is not supported by any archaeological source nor by quotations from other ancient Christian writers. The real reason behind claiming they were originally a single work is to justify dating the books after the fall of the temple. The text of Acts ends abruptly with Paul in Rome, and can be dated as AD 62, over two years after Festus became governor of Judea and sent him there.
The dating of the books may be commonly stated to be past AD 80,...
Life in space does strange things to the human body — it stretches the spine, it turns muscles to jelly, and now new research led by NASA shows that spaceflight confuses the immune system, too.
Scientists found that the immune systems of 28 astronauts seemed to be temporarily altered during their six-month missions aboard the International Space Station. It's not yet clear how these changes arise, or whether they pose serious medical risks to today's astronauts. But the findings suggest future spaceflyers on longer missions to Mars or an asteroid could be more susceptible to illness.
"Things like radiation, microbes, stress, microgravity, altered sleep cycles and isolation could all have an effect on crew member immune systems," Brian Crucian, NASA biological studies and immunology expert, who led the study, said in a statement. "If this situation persisted for longer deep space missions, it could possibly increase risk of infection, hypersensitivity, or autoimmune issues for exploration astronauts." [The Human Body in Space: 6 Weird Facts]
Crucian and colleagues looked at blood plasma samples from the astronauts taken before, during and after their spaceflights. The scientists found that the distribution of immune cells in the blood was relatively unchanged throughout the mission.
But some cell activity was quite depressed, and the immune system was not producing appropriate responses to threats, the researchers said. This might explain why some astronauts experience "asymptomatic viral shedding." This phenomenon, described in previous studies, reawakens latent viruses — including common herpes viruses like chickenpox and cytomegalovirus (CMV) — but without any symptoms of illness.
The researchers also found that some cell activity was heightened, and the immune system was having an overly aggressive reaction. This could explain why some astronauts experience increased allergy symptoms and persistent rashes.
"These studies tell us that this is an important issue and that we are measuring the right things," Mark Shelhamer, chief scientist of NASA's Human Research Program, said in a statement. "They also tell us there is no place during spaceflight where we see stabilization of the immune system. This is critical as we pursue longer duration missions and why we are studying this further during the upcoming one-year mission."
Next spring, NASA astronaut Scott Kelly will embark on a yearlong mission aboard the space station alongside Russian cosmonaut Mikhail Kornienko. Kelly's twin brother, retired NASA astronaut Mark Kelly, will serve as an earthly control as NASA scientists probe how spaceflight changes the human body.
Students enrolled in The Inclusive Classroom (SPED) program must complete:
- Four core courses required for the Curriculum & Instruction degree
- Foundation courses for the specific concentration
M.Ed. in Curriculum & Instruction - The Inclusive Classroom (SPED)
EDCI 528 (3)
Foundation Concepts for Inclusive Teaching
This course presents the fundamental concepts related to teaching students with disabilities and students with other special needs in transformative general education classrooms; its major focus is the general education classroom teacher’s role in identifying and teaching a growing population of students with special needs in the general education classroom. Information on the history of special education and the federal policies related to serving students with disabilities in public schools is reviewed in the initial phase of the course. Included in this review is an analysis of the general education teacher’s role in the various phases of serving students with disabilities in the general education classroom, including the implementation of an Individualized Education Program (IEP) or a 504 plan. The second phase of the course will examine the characteristics and needs of students with persistent academic disabilities, students with significant cognitive disabilities, and the categories of disabilities (high incidence and low incidence) as they relate to teaching students with disabilities in general education classrooms and are defined within the Individuals with Disabilities Education Improvement Act of 2004 (IDEA). A similar analysis of the characteristics and needs of students with other special needs who are served in general education classrooms will also be conducted. Accommodations, modifications, and adaptations that support the success of students with disabilities and students with other special needs in transformative general education classrooms will conclude this course of study.
EDCI 548 (3)
The Inclusive Classroom: Instructional Strategies and Interventions
The effective use of transformational instructional strategies and interventions to maximize teaching and learning for all students in an inclusive classroom will be studied. Accommodations and modifications that provide access to the general education curriculum specific to the categories of disabilities defined in the Individuals with Disabilities Education Improvement Act of 2004 (IDEA) will be identified. A focus on differentiated instruction and adaptations appropriate for students with other special learning needs (students with gifts and talents, English language learners and low language native English speakers, and students at risk for school failure) will be included. Emphasis will be placed on practical instructional strategies and interventions that promote learning and can be readily implemented by the general education classroom teacher in an inclusive learning environment.
EDCI 549 (3)
Effective Classroom Management Strategies
The focus of this course is the use of strategies and procedures proven effective in establishing and maintaining a positive and supportive learning environment for all students in an inclusive general education classroom. Practical, preventative strategies rooted in positive teacher-student relationships and well-designed learning activities as well as more formal classroom management strategies will be studied. In addition, effective responses to inappropriate and disruptive behavior will be identified with an emphasis on appropriate academic and social behavior development that results in optimal student motivation and engagement.
EDCI 545 (3)
Principles of Collaboration and Partnerships
The role of the general education inclusive classroom teacher in establishing and working effectively in building partnerships through collaboration with school personnel, parents and community agencies will be defined and explored. Specific partnering and collaboration responsibilities of the general education classroom teacher as part of providing services to students with disabilities will be addressed including the general education pre-referral process, implementing a response to intervention model, co-teaching, and practices and procedures essential to the successful inclusion of students with disabilities.
EDGR 601 (3)
Educational Research
This course provides students with the basic competencies necessary to understand and evaluate the research of others, and to plan their own research with a minimum of assistance. This course includes the basics of both qualitative and quantitative research.
The Master of Education program culminates in one of three courses:
EDGR 698 (3)
Action Research (CAPSTONE)
Action research is one of the capstone projects for the Master of Education program. During this five-week course, candidates will learn more about the action research methodology, complete final edits of the Literature Review, and design a complete Action Research proposal including data collection methods and analysis approaches. (During this course, the proposal will NOT be implemented with students/participants.)
This design provides students with the requisite skills and means to pursue the transformative practice called "Action Research" in their classroom, school, district or other work environment. The design method for the capstone project closely aligns with current classroom realities, with district and school requirements, and the needs of teachers and students.
EDGR 699 (3)
Thesis (CAPSTONE)
The Thesis offers the graduate student the opportunity to investigate, in depth, a topic in the field of education. The student, working with his or her thesis instructor, will explore relevant literature and present a thesis following the procedure established by the College of Education.
EDGR 696 (3)
Practitioner Inquiry (CAPSTONE)
Practitioner Inquiry focuses on the reflective acts of the candidate as an educator seeking to improve teaching practice. Premised in the self-study research methodological traditions (Samaras, 2011), Practitioner Inquiry provides the opportunity to reflect on teaching practice and generate improvements based on classroom observation. Practitioner Inquiry focuses on the educator and her/his own practices, developing skills of inquiry, observation, reflection, and action in teachers. Prerequisite: Successful completion of EDGR 601 Educational Research
Any of the above options provide candidates with an understanding of the role of research in the field of education as a tool to solve problems and as a way to improve student learning.
PA Learning Standards (Reprint from PA Dept. of Education)
The PA Department of Education has developed SAS, the Standards Aligned System. This is designed as a comprehensive approach to supporting student achievement in public school systems throughout the state. This approach was recently extended on a voluntary basis to early education (infants through kindergarten) and mandated as a requirement for Keystone STARS Star 3 and Star 4 centers.
Pennsylvania’s Learning Standards for Early Childhood were constructed as a joint project of the Departments of Education and Public Welfare. The Office of Child Development and Early Learning, established in 2006 to administer both Departments’ early childhood programs, oversees revisions and implementation of these standards.
Each set of Standards has been formulated with help and guidance from practitioners who represent early childhood programs and advocacy groups, higher education, and policy analysts and researchers. Support for the development of the Standards was provided through the national Build Initiative, a multi-state partnership that helps states construct a coordinated system of programs and policies that respond to the needs of all young children.
INTRODUCTION
Children are born with an incredible capacity and desire to learn. Over 30 years of research confirms the foundational importance of early education and care for children’s school and life success. It is essential, then, that students’ first school experiences are robust ones, steeped in expectations that develop critical thinking and problem solving skills, a deep understanding about themselves in a social society, and age appropriate content.
Teachers’ instructional practices must embed the domains of development: cognitive, social-emotional, language, and physical within the foundations or approaches to learning that enable children to explore, understand and reach beyond the “here and now” to challenge themselves and to experiment and transform information into meaningful content and skills.
Teachers of very young children have the awesome task of providing rich information and experiences that build skills and understanding in the context of every day routines and within intentionally-designed play opportunities that capture children’s interests, wonder and curiosity so they want to know more. Pennsylvania’s learning standards join hand-in-hand with the learning environment; the responsive relationships that have been built with children, families and the community; the age, cultural and linguistically-appropriate curriculum; and the practices being used to assess children, classrooms and programs to create the best possible experiences for learning success.
The PA Department of Education and the Office of Child Development and Early Learning utilize a Standards Aligned System (SAS) that links the elements of instruction, materials and resources, curriculum framework, fair assessment and interventions, and learning standards to children’s engagement in learning and their school success.
MATERIALS AND RESOURCES
Every early learning classroom, whether it is in a home atmosphere or center-based setting, must be a comfortable, safe and nurturing environment where children can play with blocks, manipulatives, art materials, and dramatic play items to enhance skill development. Children discover and understand science, social studies, and math information when they actively explore materials and ideas that are guided by teachers who intentionally design activities that engage children in critical thinking and processing. Children also learn about their own abilities and learning styles, how to get along with others, and how to appreciate others’ contributions in classrooms that include a diverse set of materials and experiences.
School environments should be linked to a child’s home environment, incorporating cultural and ethnic materials and children’s home language, and provide experiences that are inclusive for all children, regardless of ability, socioeconomic status, or family background. Well-designed classrooms demonstrate a commitment to the whole child by offering materials and activities that promote social, physical, cognitive and language learning.
Classroom assessment instruments that help providers assess the arrangement of indoor and outdoor space, the provision of materials and activities, and their development of class schedules are useful in assuring best practice implementation and alignment to Pennsylvania’s Learning Standards for early childhood.
INSTRUCTION
Instruction in the early years often looks different than in the older grades. Learning occurs within the context of play and active learning strategies where children are engaged in concrete and hands-on discovery and in experimentation and interaction with materials, their peers and nurturing adults.
Teachers help construct knowledge during these active learning times by designing activities that build on children’s prior knowledge to create new understandings and information. A limited amount of direct teaching combined with child-initiated play produces optimal conditions for young children’s education. Teachers become facilitators or guides of learning who interact with children throughout the school day. They ask open-ended questions that encourage children to think about what comes next or to want to know more, and they support children’s creativity, problem solving, intuition and inventiveness (approaches to learning) by challenging and encouraging them. Teachers design focused instruction that is based on the identified individual needs of every child and assure these experiences encompass their interests, abilities and culture.
CURRICULUM FRAMEWORK
A curriculum framework reminds us what information should be taught to young children within each of the Key Learning Areas. It assures the continuum of learning that begins at birth and continues through graduation. Pennsylvania’s curriculum framework includes big ideas, essential questions, vocabulary, concepts and competencies that further define the learning standards.
FAIR ASSESSMENTS
Teachers must use both informal and formal assessments to understand children’s progress. In early childhood, formative assessments that provide information about how children are progressing in the classroom allow teachers to make adaptations or adjustments in the individualized learning plans for every child.
Early childhood professionals observe and assess children in their classroom setting using the materials that are found in their school environment. Blocks that children count or stack, for example, provide the information teachers need to understand children’s math or fine motor skills. Outdoor play or recess allows the adult to observe children’s gross motor skills or the social interactions with peers.
Teachers must use the information they have documented during observation, along with information from the parent, to identify goals and next steps for children’s learning through play.
CLEAR STANDARDS
Learning Standards provide the framework for learning. They provide the foundational information for what children should know and be able to do. Pennsylvania’s learning standards build on information learned previously, creating a continuum of learning that assures consistent and linked learning that begins in infancy, gradually getting more difficult as it extends through graduation.
Pennsylvania also uses program standards that assure children’s experiences are being offered in high-quality settings. Keystone STARS, PA Pre-K Counts, ABG, HSSAP all use similar sets of standards that provide guidance on program operation that exhibits best practices.
INTERVENTIONS
When teachers are observant and assess children’s abilities, interests and achievement using the standards as a guide, interventions become part of the teachers’ everyday practice. Revising activities, adjusting lesson plans and accommodating children’s individual differences becomes matter-of-fact and the norm. Successful strategies that allow each child to master skills at his or her own pace provide benefits for all children as they interact with others of varying abilities and cultures.
1. Early Childhood Special Education
Early childhood classrooms should be inclusive ones where children with disabilities and developmental delays are enjoying learning experiences alongside their typically developing peers. Teachers may need to adapt or modify the classroom environment, teacher interactions and/or materials and equipment to help children with disabilities fully participate.
Pennsylvania’s Learning Standards for Early Childhood are designed to be used for all children. The content within these standards does not need to be specific to an age, grade or specific functional level, but instead provides the breadth of information from which to create goals and experiences for children that will help them reach their highest potential while capturing their interests and building on what they already know. Teachers must emphasize and celebrate all children’s accomplishments and focus on what all children can do.
2. English Language Learners
Children develop language much the same way they acquire other skills. Children learn native and second languages using an individual style and rate. Differences among English Language Learners, such as mixing languages or a silent period, are natural. Each child’s progress in learning English needs to be respected and viewed as acceptable and part of the ongoing process of learning any new skill. The skills needed for young English language learners to become proficient in English are fully embedded in Pennsylvania’s Standards for Early Childhood.
EARLY CHILDHOOD CONNECTIONS
High quality early learning programs also promote connections that assure children’s school success. Programs that build relationships with children and families and coordinate their work with other early learning programs, school districts and grades within districts create strong partnerships for success.
1. CONNECTIONS TO CHILDREN
Relationships are the key to successful connections between a teacher and the students. Teachers must take time to know every child, to understand the way in which they learn best, to identify the special talents and skills each child possesses and the interests that excite them to learn more. Adults who work with young children must be students themselves as they learn about children’s home experiences and culture so they can design learning environments that support the home-school connection and expand prior learning and experiences into new achievements and acquisition of knowledge.
2. CONNECTIONS TO FAMILIES
Parents of young children have much to offer in the learning process. When a partnership is formed between teacher (or school) and the family, the connection between home and school has been strengthened, assuring that children receive consistent messages about learning and skill development. Parents should be given opportunities to learn about their children’s day at school, to provide input into the information they want children to learn and master, and to understand what they can do at home to enhance the school experience. Frequent informal conversations, invitations to participate in classroom life and voluntary take-home activities that relate to school experiences help to build the partnership.
At-home resources for parents such as Kindergarten, Here I Come, Kindergarten, Here I Am or Learning is Everywhere provide both teachers and families with tools to connect at home and school learning and to share age appropriate expectations and activities that support that connection.
Families’ ethnicity and culture must be interwoven into the life of an early childhood program and classroom. Staff must embrace all children’s heritages and provide activities, materials and experiences that help children become aware of and appreciate their own culture while learning about and appreciating the similarities and differences of others’. Staff in high quality early education programs know and understand their own attitudes and biases and are culturally sensitive and supportive of diversity.
3. CONNECTIONS WITH OTHER EARLY LEARNING PROGRAMS
Children and families often have other needs and priorities in addition to participation in high quality early childhood learning programs. Families may need to coordinate their early learning program services with child care, health services or early intervention services, as well as with their other children’s school experiences. Programs within a community that support families’ single point of contact or help to coordinate services for children demonstrate a strong understanding and respect for families. Providers that reach out to neighborhood schools to facilitate transition into the public school or who have developed a working relationship with their early intervention provider assure linkages that support children’s school readiness and ongoing success.
4. CONNECTIONS FOR LEARNING
Young children make learning connections through play. Providers that allow children time to explore and discover, both inside and outside, have optimized children’s capacity to internalize and generalize content by making their own connections to prior-learned knowledge. All children, regardless of age and ability, need opportunities to engage in practice activities and experiences that are steeped in play.
Adults must also use literature connections in all domains. Literature supports both content and social and cultural learning. It is a foundation for
THE LEARNING STANDARDS CONTINUUM
Within all of Pennsylvania’s Early Childhood Standards, the Key Learning Areas define the domains or areas of children’s learning that assure a holistic approach to instruction. All children, regardless of age and ability, should be exposed to experiences that build their skill development in approaches to learning, social-emotional development, language and literacy development, physical or motor development, creative expression, and the cognitive areas of mathematics, science and social studies. The Standards within each Key Learning Area provide the information that children should be able to know and/or do when they leave the age level or grade. The Standards are also organized by Standard Statements that identify specific skills. New in 2009, strands further define the standards by organizing the information into focus areas. The strands become the connections to the Academic Standards for grades 3–12, which also use these strands to organize the content that all children in Pennsylvania should be able to know and do.
Infant-toddler, Pre-kindergarten and Kindergarten standards are connected through the Continuum of Learning and further linked to the 3rd grade academic standards. Using the strands as the organizer, professionals are able to look across ages and grades to understand how children’s development emerges. Some skills will not emerge in a noticeable way until a child is older. These standards statements will be identified on the continuum as “emerging”. For example, concepts about money are not ones that infant teachers need to develop. They show in the social studies standards for infants as “emerging”. Strands that are missing numerically are skills that do not need attention during the Early Childhood Education years.
Teachers who view children’s skill development across ages and grades will be able to understand the sequential way children learn and become familiar with the way in which teachers at higher grade levels support learning.
1. LEARNING STANDARDS FOR EARLY CHILDHOOD DO:
• Inform teachers and administrators about curriculum and assessment and guide the selection of program materials and the design of instruction
• Inform parents of age-appropriate expectations for children
• Provide a common framework for community-based work on curriculum and transitions
2. THE LEARNING STANDARDS FOR EARLY CHILDHOOD ARE NOT USED:
• As a specific curriculum or to mandate specific teaching practices and materials
• To prohibit children from moving from one grade or age level to another
• To assess the competence of children or teachers
3. INFANT-TODDLER LEARNING STANDARDS
The Infant-Toddler Standards are divided into three age levels: infant (birth through 12 months), young toddler (9 months through 27 months) and older toddler (24 months through 36 months). These age divisions are somewhat arbitrary, serving as a means of organizing the content; very young children’s development is uneven and may span two or all three of the age levels in different Key Areas of Learning. This is reflected by the overlapping 9–27 month range for young toddlers.
The Standards in each Key Area of Learning are displayed on an infant-toddler continuum with the content within one strand presented together on one page. Practitioners can look across each age level to determine the skills that best match their children’s current development, identifying additional standard statements, examples and supportive practices to scaffold children’s learning.
When strands include “Emerging” under infant or young toddler, these concepts are beginning to emerge but are not yet expected to be mastered. For example, infants and young toddlers may be exploring mathematical estimation as they interact with materials, but intentional instruction would not be appropriate for that age. Adults should continue to introduce these concepts whenever appropriate for the individual child, without expectation of mastery.
4. LEARNING STANDARDS FOR PRE-KINDERGARTEN
Teachers will find the skills that pre-kindergarteners (ages three and four) are practicing and mastering within the pre-kindergarten standards. Younger preschoolers will be learning the content, while older children will be mastering the skills and showing proficiency in many of them. Classroom environments, materials and activities that are developed for this age will be appropriate for both three and four year olds; expectations for mastery will be different.
5. LEARNING STANDARDS FOR KINDERGARTEN
Students who complete kindergarten should demonstrate mastery of many of the skills within the Kindergarten Standards. This document is designed for full day kindergarten classrooms. Half day kindergarten teachers will need to modify the amount of content that is introduced to children during the kindergarten year, but the cognitive processing that children must develop and the holistic instruction will remain constant regardless of the length of the kindergarten day.
It is critical that kindergarten instruction occurs through an active learning approach where teachers use differentiated instructional strategies and focus on learning centers and play as key elements of the daily schedule. Child-directed instruction should be predominant with language and literacy and math infused through the day in addition to their special focus learning times.
Kindergarten children should be given opportunities to develop social and emotional skills, physical skills and their creative expression within the course of a kindergarten day.
1. A surface forming a common boundary between adjacent regions, bodies, substances, or phases.
2. A point at which independent systems or diverse groups interact: "the interface between crime and politics where much of our reality is to be found" (Jack Kroll).
3. Computer Science
a. A system of interaction or communication between a computer and another entity such as a printer, another computer, a network, or a human user.
b. A device, such as a cable, network card, monitor, or keyboard, that enables interaction or communication between a computer and another entity.
c. The layout or design of the interactive elements of a computer program, an online service, or an electronic device.
v. (ĭn′tər-fās) in·ter·faced, in·ter·fac·ing, in·ter·fac·es
1. To join by means of an interface.
2. To serve as an interface for.
1. To serve as an interface or become interfaced.
2. Usage Problem To interact or coordinate smoothly: "Theatergoers were lured out of their seats and interfaced with the scenery" (New York Times).
Usage Note: The noun interface, meaning "a surface forming a common boundary, as between bodies or regions," has been around since the 1880s. But the word did not really take off until the 1960s, when it began to be used in the computer industry to designate the point of interaction between a computer and another system, such as a printer. The word was applied to other interactions as well—between departments in an organization, for example, or between fields of study. Shortly thereafter, interface developed a use as a verb, but many people objected to it, considering it an example of bureaucratic jargon. The Usage Panel has been unable to muster much enthusiasm for the verb. In our 2011 survey, 57 percent found it unacceptable in an example designating interaction between people: The managing editor must interface with a variety of freelance editors and proofreaders. This level of disapproval is only slightly lower than the 63 percent recorded in our 1995 survey, suggesting that writers who wish to avoid a jargony tone would do well to avoid the usage. In 2011, a slightly larger percentage disapproved of interface in examples indicating interaction between a corporation and the public (66 percent) or between various communities in a city (65 percent).
The American Heritage® Dictionary of the English Language, Fifth Edition copyright ©2018 by Houghton Mifflin Harcourt Publishing Company. All rights reserved.
A planetary nebula is an astronomical object consisting of a glowing shell of gas and plasma formed by certain types of stars at the end of their lives.
They are in fact unrelated to planets; the name originates from a supposed similarity in appearance to giant planets.
They are a short-lived phenomenon, lasting a few tens of thousands of years, compared to a typical stellar lifetime of several billion years.
About 1,500 are known to exist in the Milky Way Galaxy.
Planetary nebulae are important objects in astronomy because they play a crucial role in the chemical evolution of the galaxy, returning material to the interstellar medium which has been enriched in heavy elements and other products of nucleosynthesis (such as carbon, nitrogen, oxygen and calcium).
In other galaxies, planetary nebulae may be the only observable objects that yield useful information about chemical abundances.
Ken Tapping, July 22nd, 2015
As I write this, the New Horizons spacecraft is within a million kilometres of Pluto, which was until recently regarded as the outermost planet in the Solar System. The images coming back are very intriguing, and in the next few weeks there should be a flow of new discoveries to puzzle over. To better appreciate these, it is worth looking at a planet at the opposite extreme: Mercury, the closest planet to the Sun.
Imagine a rocky, mountainous and cratered surface, rather like our Moon's. The atmosphere is thin, so the sky is black. It is dominated by a Sun looking more than twice as large as it looks in our sky. The heat is intense. On Earth, if we had no atmosphere to reflect some of it back into space, a square metre of ground with the Sun overhead would receive just under 1400 Watts of energy. A similar square metre on Mercury gets around 9,100 Watts on average. For comparison, an equivalent square metre of Pluto is getting less than 1 Watt of solar energy. Pluto is obviously a very cold place, whereas Mercury will be extremely hot.
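These figures follow directly from the inverse-square law: sunlight spreads out over a sphere, so the power landing on a square metre falls off with the square of the distance from the Sun. A quick sketch, using the standard ~1361 W/m² solar constant and mean orbital distances (Mercury's eccentric orbit swings its instantaneous value well above and below the mean):

```python
# Solar irradiance falls off as 1/d^2 with distance from the Sun:
# S = S_earth * (1 AU / d)^2, with d in astronomical units.
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU, measured above Earth's atmosphere

def irradiance(distance_au: float) -> float:
    """Solar power per square metre at the given mean distance from the Sun."""
    return SOLAR_CONSTANT / distance_au ** 2

for name, d in [("Mercury", 0.387), ("Earth", 1.0), ("Pluto", 39.5)]:
    print(f"{name:8s} {irradiance(d):8.1f} W/m^2")
```

Running this gives roughly 9,100 W/m² for Mercury and well under 1 W/m² for Pluto, matching the comparison in the column above.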
Mercury is a rocky ball, rather like the Moon, with a diameter of about 4900 km, compared with our Moon’s diameter of almost 3500. Both bodies are very cratered, with mountains and lava flows. As in the case of the Moon, the extensive cratering means Mercury has been geologically quiet for billions of years. That is not the case on Earth. The weather slowly erodes mountains and craters away, and our planet’s surface is being continuously recycled by plate tectonics.
Mercury rotates on its axis once every 59 or so of our days. If the Sun is in the sky the surface gets very hot. When the Sun is overhead the temperature reaches about 430 Celsius, with an average daily temperature of around 200 degrees. With little atmosphere and long nights, temperatures drop rapidly after sunset, getting down to –150 C or so. The absence of a significant atmosphere in combination with the alternation of freezing and frying is very effective at removing water, so the planet's surface is very dry. It was therefore a surprise to find there is ice on Mercury.
It was nearly as big a surprise a few years ago to discover ice on the Moon’s surface. Although the Moon’s temperatures are not as extreme as those on Mercury, its surface has also become freeze-fry dried. However, a spacecraft surveying the Moon’s Polar Regions found ice at the bottom of deep craters that the sunlight never reached. Spacecraft surveying Mercury have found the same thing. There are deep craters around the planet’s poles with bottoms perpetually in shadow. Even that close to the Sun, with no solar heat ever getting to them, those crater bottoms are cold, around –170 C, which is cold enough for ice to accumulate. It sounds paradoxical that we can find some of the coldest places in the Solar System on the planet closest to the Sun.
Unlike Venus, Jupiter and Saturn, which are currently prominent in our skies, Mercury is in the sky now immediately before sunrise, but it's hard to spot because of the brightness of the sky. Since the planet orbits close to the Sun, it is never very far from the Sun in the sky, so we always see it against the sunset or sunrise sky glow.
In the 19th Century, French astronomer Urbain Le Verrier proposed the existence of another planet, even closer to the Sun. It was even (appropriately) named Vulcan, after the Roman god of fire. Always being buried in the Sun's glare, it was not expected to be easy to find, so it was only after years of failed or mistaken observations and a lot of wishful thinking that astronomers finally concluded that Vulcan does not exist. It would have been an amazing world, extremely hot and a very difficult spacecraft destination.
The ancient Egyptian King Tutankhamun has captured imaginations since the discovery of his entombed remains in 1922 by Howard Carter. The study of the artifacts left behind in Tutankhamun's tomb as well as his mummy provide insight into the life and status of this teen pharaoh and the Egyptian society he briefly ruled.
Tutankhamun performed many significant acts during his brief reign as king. His father Amenhotep IV had previously abandoned the god Amun and other Egyptian deities to solely worship Aten. Amenhotep also changed his name to Akhenaten meaning "he who is beneficial to the Aten." Tutankhamun rejected this departure from tradition by leaving Amarna, the city Akhenaten built as a new religious capital, and returning to the traditional capital of Thebes to rededicate himself and his wife to the cult of Amun. Christened Tutankhaten (the living image of Aten) by his father, Tutankhamun changed his name to further distance himself from Akhenaten's deviation.
The Mummified Remains
King Tutankhamun's father Akhenaten was the full-blooded sibling of his mother, whose name is unknown. However, the royal family did not suffer from outright congenital disease, as had been previously suggested from Amarna period artwork depicting elongated faces. DNA research indicates that Tutankhamun was infected with several strains of malaria, which weakened his immune system and acted as a co-factor, if not the primary cause, of his death. Theories abound as to how the young king died, including complications from a broken or deformed leg, a blood infection and more recent theories of a chariot accident.
The Embalming Process
The mummy of King Tutankhamun provided insight into the methods used in ancient Egyptian royal funerary practices. The embalming process used for pharaohs in the New Kingdom was highly sophisticated, and included the desiccation of the body and the removal of the digestive tract and brain. Tutankhamun's remains were coated in a resin that had hardened, forcing Carter's excavation team to dismember the mummy in order to remove it from the sarcophagus. More recent CT scans revealed that the embalming oils used during King Tutankhamun's interment caused the spontaneous combustion of his remains within the sarcophagus.
Significance of the Tomb
Tutankhamun's tomb in the Valley of Kings is relatively small, and historians theorize that he was interred there because his own royal tomb was not prepared at the time of his early death. The burial chamber was still decorated with images worthy of a king, including scenes from the Book of the Dead, Imydwat and the Opening of the Mouth ritual as well as the king depicted with several deities. Tutankhamun's remains were buried among gold amulets, his famous solid gold death mask, bracelets and other jewelry that the king likely owned in life. The objects within the tomb, the gilded wooden shrines and the sarcophagus itself demonstrate the wealth of Tutankhamun's dynasty, and the importance the ancient Egyptians placed on the afterlife.
Mauna Loa, meaning “Long Mountain” in Hawaiian, is Earth’s largest active volcano, standing 13,678 feet (4,169 meters) above sea level. When measured from its underwater base, it’s nearly twice as tall as Mount Everest. This volcanic marvel has been forming for over 700,000 years, last erupted in 1984 and hosts the Mauna Loa Observatory for climate research.
This shield volcano encompasses twelve long-dormant craters and two active vents, the latter of which have issued flows of lava within living memory. Mauna Loa marks the region as one of constant renewal: its stature commands attention across the archipelago, both for its sheer physical magnitude and for the history written in its repeated eruption cycles.
Beyond such iconic effects, Mauna Loa is an undeniable force behind Hawai’i’s unique ecology, actively shaping the landscape and contributing many resources crucial to everyday life for local people.
Mauna Loa Facts for Kids
- The volcano is located in the Hawaiian Islands
- Mauna Loa is the largest volcano in the world.
- It is a shield volcano
- It has erupted at least 34 times since 1843.
- Its last eruption was in 1984
- It is considered active, and it is still producing lava.
Mauna Loa is set on the Big Island, one of Hawaii’s eight major islands. This volcanic giant dwells in the southeast region and is encompassed by other volcanic formations such as Kilauea and Mauna Kea.
The Big Island is the largest and most recent of all the Hawaiian islands. This landmass was formed from ancient lava flows which fused naturally into one mighty island over millions of years.
Size and Geological Features
Mauna Loa stands high above the Pacific Ocean, rising almost 13,680 feet above sea level. Below the waves, its flanks descend another 16,400 feet to the seafloor, and the volcano’s enormous mass has depressed the ocean crust beneath it further still. Measured from that depressed base to its summit, Mauna Loa’s total height comes to roughly 56,000 feet.
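The summit elevation and seafloor depth alone add up to only about 30,000 feet, so the famous 56,000-foot figure depends on the depth to which the crust has been bowed downward under the volcano's weight. A quick accounting, where the ~26,000-foot depression is an assumed round number chosen to match the quoted total, not a figure from the text:

```python
# Rough accounting of Mauna Loa's base-to-summit height (figures in feet).
summit_above_sea = 13_680   # summit elevation above sea level (from the text)
sea_to_seafloor  = 16_400   # flank descent from sea level to the seafloor (from the text)
crust_depression = 26_000   # assumed depth the ocean crust is depressed under the load

above_seafloor = summit_above_sea + sea_to_seafloor
total = above_seafloor + crust_depression

print(f"Summit to seafloor: about {above_seafloor:,} feet")
print(f"Total from depressed base: about {total:,} feet")  # close to the quoted 56,000
```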
Its expansive flanks aren’t only monumental in size; they cover more than half of the area of Hawaii Island. Its mass exceeds that of all the other Hawaiian islands combined.
This volcano’s shape is delicate and majestic in equal measure, sculpted by a series of fluid lava flows. Its lava flows are renowned for their length and speed, some of the longest and fastest-moving on Earth. On Mauna Loa’s slopes, there are also calderas, cinder cones, and lava tubes to explore.
Mauna Loa, the world’s largest active volcano, boasts a long and complicated geologic lifespan. Believed to have begun growing between roughly 700,000 and 1 million years ago, it has cycled through periods of growth and decline. In the last 175 years, the volcano has erupted more than thirty times, notably in 1843 and 1984.
Its eruptions have been directed by seismic activity and lava flow, reshaping its landscape over time. Today, Mauna Loa is part of an expansive shield formed by volcanism within Hawaii’s archipelago – an amazing sight to behold. Efforts are now underway to monitor its activities with advanced technology so that reliable forecasts can be made and risks minimized.
Table 1: Mauna Loa List of Eruptions

| Year | Type of Eruption | Impact |
| ---- | ---------------- | ------ |
| 1843 | Effusive | Covered an area of 400 square miles |
| 1852 | Explosive | Sent ash plume 18 miles into the air |
| 1855 | Effusive | The lava flow destroyed homes and farmland |
| 1880 | Explosive | Ashfall reached 25 miles away |
| 1899 | Effusive | The lava flow destroyed a sugar plantation |
| 1935 | Explosive | Ashfall reached 75 miles away |
| 1942 | Effusive | The lava flow destroyed roads and homes |
| 1984 | Effusive | The lava flow destroyed homes and farmland |
Mauna Loa looms high in the sky, a proud reminder of its importance to Hawaiians. The name itself translates to ‘Long Mountain,’ echoing the reverence that culture has for this incredible natural landmark.
It is a site of mythic tales and legends. Here, Pele, the Hawaiian Volcano goddess, resides, granting protection and strength to native peoples. Stories handed down through generations flicker in its shadows and flames, showing us that both nature and culture are resolute forces here on Earth.
Mauna Loa’s beauty is worshipped, and its power is recognized. It is a source of pride and a symbol of Hawaii’s history and ongoing journey, honoring the past while pointing toward a shared future.
Impact on Environment and Communities
Standing tall on the island of Hawaii, Mauna Loa is an impressive geological feature that has significant benefits to its surrounding communities. Agricultural productivity is boosted by its incredibly fertile volcanic soil, while its slopes provide a haven for abundant plants and wildlife.
But Mauna Loa can also cause harm. In 1950, a lava flow left roads and houses in disarray, and in 1975 its violence escalated as lava threatened the outskirts of Hilo. Fears of future activity remain, and USGS scientists monitor the volcano closely to alert nearby communities to danger.
The big picture shows more than this local risk: Mauna Loa releases carbon dioxide, and the long-running measurements collected at its summit help experts track the rising greenhouse gases behind global warming.
Mauna Loa offers unparalleled opportunities for geothermal energy. Its volcanic activity releases abundant heat that can be used to generate electricity with minimal disruption to the environment.
Hawaii has several geothermal power plants harnessing this natural resource and using it to reduce dependence on fossil fuels. These plants have been instrumental in elevating Hawaii’s renewable energy industry and providing clean, renewable energy to its citizens. This innovative approach is not only ecologically sound but also economically beneficial, paving the way for a more sustainable future for generations to come.
Impact on Astronomy
Mauna Loa’s summit is an astronomical hub. The Mauna Loa Observatory, perched atop the mountain, has played a crucial role in advancing our knowledge of atmospheric conditions and climate change.

The observatory’s elevation puts it above much of the atmosphere, where thin, clean air yields clear skies ideal for study. Its measurements have greatly improved our understanding of the Earth’s atmosphere and climate, and its discoveries paved the way for tackling future climatic woes.
Discover majestic natural beauty at Mauna Loa. Hawaii Volcanoes National Park is the perfect place to explore and enjoy stunning views, trails, and more. Climb to the top of the volcano on the Mauna Loa Summit Trail and take in the incredible sights.
Delve into history and culture with a visit to nearby Hilo town. Or explore Punalu’u Beach Park and observe green sea turtles in their natural habitat. Enjoy a range of attractions offering something for everyone when you visit this popular tourist destination.
Mauna Loa is a breathtakingly beautiful volcano, but it could also cause harm to nearby communities. Hazardous conditions, such as lava flows, toxic gases, and ashfall, could arise with limited warning. Caution must be taken: visitors must follow the instructions of park rangers and respect safety barriers and warning signs.
The US Geological Survey works hard to ensure that potential threats are minimized through close monitoring of the volcano. Scientists strive to advance our understanding of Mauna Loa’s behavior in order to gain better predictions. Even though exact forecasting remains elusive due to volcanic activity’s unpredictability, ongoing research provides hope for improved safety in the future.
Mauna Loa’s Impact on Climate Change
Mauna Loa sends about 6 million metric tons of carbon dioxide into the atmosphere yearly, roughly 1% of the total global emissions from volcanoes on Earth. The gases released by eruptions have both cooling and warming effects on the climate.
Sulfur dioxide may form sulfuric acid aerosols when combined with water vapor, which reflects sunlight and cools the Earth’s surface temperature. Research on Mauna Loa’s emissions has helped scientists refine their models to accurately predict how volcanic activity affects the climate.
Mauna Loa’s Impact on the Ocean
Mauna Loa is a significant force of nature with far-reaching consequences in the ocean. Its eruptions send carbon dioxide and other gases into the atmosphere, and their dissolution in the sea causes acidification. This, in turn, affects aquatic life considerably: coral reefs, shellfish, and countless creatures are impacted.
But there’s more – its underwater slopes host some remarkable ecosystems. A blend of corals, sponges, and myriad organisms serve as homes for many fish species and invertebrates, too, valuable habitats that can’t be overlooked.
The Future of Mauna Loa
Mauna Loa’s future remains unclear. But sensors, instruments, and drones keep an eye on seismic activity and ground deformation. They measure gas emissions, too, gauging when an eruption may come. All this is to provide warning to nearby communities of impending danger.
High-resolution images offer glimpses of the volcano’s surface, allowing tracking of changes over time. Knowledge gathered enables us to better predict Mauna Loa’s activity and potentially mitigate its hazardous effects.
Continuous study reveals more about this mysterious peak and offers hope that disruption can be minimized through prudent preparation. Ultimately, technology helps us maintain vigilance in the face of the unknown.
What Type of Volcano is Mauna Loa
Mauna Loa, the world’s largest active volcano, is an exemplary shield volcano. Its gently sloping sides, created by multiple lava flows, distinguish it from other volcano types and make it a geological marvel.
Mauna Loa is a powerful force of nature, at once beautiful and awe-inspiring. A record-setting volcano, it’s the world’s largest active one, and its significance affects more than just the Hawaiian Islands. Its geological history, cultural importance, and potential dangers add up to an experience that leaves both experts and everyday people in awe.
To understand the grandeur of Mauna Loa, it helps to be aware of its location, size, origin story, and potential risks. As an observant visitor or resident of Hawaii, you can appreciate why this natural wonder remains so valued in local culture. With sensible precautions taken for safety reasons, we can continue to learn from this amazing volcano.
Mauna Loa stands as a testament to the raw, primal energy at work on our planet. It commands respect with its sheer power while also providing unique chances to witness the beauty of nature up close. To truly appreciate its magnificence requires us to study and revere this living giant from afar.
What are online training simulations?
Online training simulations enable learners to practice and master their skills in a risk-free environment.
What is the use of simulators in education?
Simulation is a tool for learning and training as well as for the assessment of performance.[10,11] The skills that can be enhanced with the use of simulation include:
- Technical and functional expertise training
- Problem-solving and decision-making skills
How can simulation-based training be used in medicine?
Simulation-based training has opened up a new educational application in medicine. Evidence-based practices can be put into action by means of protocols and algorithms, which can then be practiced via simulation scenarios. The key to success in simulation training is integrating it into traditional education programmes.
What is the role of simulations in eLearning?
Simulation training has always been an important strategy in instructional design to provide an interactive learning experience to learners. The ever-evolving technology behind rapid eLearning authoring tools has made building simulation-based courses easier and less expensive than ever before.
Understanding your child's grade report
When you understand your child's grade report, you'll find a powerful tool that can help you support your child at school and at home. There are two types of report cards used in our district. Standards-based grade reports are used for students in kindergarten through fourth grade, and traditional format grade reports are used to report progress of students beginning in fifth grade until graduation.
First-quarter grade reports are shared at parent-teacher conferences in October. Grade reports for the other grading periods are provided online, through the PowerSchool parent portal.
Standards-based grade reports (grades K-4)
Standards-based grade reporting emphasizes learning over earning. Standards are specific areas of learning -- knowledge, skills and abilities -- that are evaluated by grade level and quarter. Your child typically has many opportunities to practice and demonstrate his or her understanding of specific standards during the year. When standards are revisited in the classroom, your child also is able to develop deeper connections to the content and the application of skills.
Classroom teachers assign the mark for standards-based grade reporting based on your child's progress toward each of the grade level's standards. Teachers use careful observation, student work, discussion, projects, performance tasks, quizzes and summative assessments to assign the mark. This grading system allows teachers to keep track of exactly what your child has already learned, but it also gives you valuable information about the academic and developmental areas where your child would benefit from more help and practice.
Students in third and fourth grade will also receive traditional letter grades in language arts, mathematics, science and social studies, in order to help introduce students and families to the grading systems used at Eudora Middle School and Eudora High School.
Students will be given one of the following marks for each standard area:
- M - Mastery
If marked with an M, your child is working at the mastery level for the standard as required for the grade level in the current quarter. Your child grasps, applies and extends the key concepts, processes, and skills for the grade level.
- P - Progressing
If marked with a P, your child is progressing toward the standards as required for the grade level. Your child demonstrates progress but shows incomplete and/or inconsistent understanding and application of grade level concepts. Your child, with some errors, is beginning to grasp and apply the key concepts, processes and skills for the grade level.
- R - Remediation
If marked with an R, your child shows an emerging awareness of concepts and skills but consistently needs help and support to understand the foundational content, skills and concepts taught. He or she needs remediation and significant practice to reach mastery.
- -- - Not assessed
If marked with --, your child was not assessed on this standard during the quarter.
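The four marks above form a simple lookup from code to meaning. As an illustration only (the function name and condensed descriptions are ours, not the district's), a report-card tool might map them like this:

```python
# Minimal lookup for the standards-based marks described above.
# Descriptions are condensed from the handbook text.
MARKS = {
    "M":  "Mastery: grasps, applies and extends grade-level concepts",
    "P":  "Progressing: incomplete or inconsistent understanding",
    "R":  "Remediation: needs help and significant practice",
    "--": "Not assessed this quarter",
}

def describe_mark(mark: str) -> str:
    """Expand a report-card mark into its handbook description."""
    return MARKS.get(mark, "Unknown mark")

print(describe_mark("P"))
```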
Traditional grade reports (grades 5-12)
A traditional style of grade report is used to show the progress of students beginning in fifth grade, with grades listed by subject area. These grade reports include grades of A, B, C, D and F. They also include attendance and absence information, as well as the GPA, or grade point average.
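A GPA is simply the average of the point values assigned to each letter grade. A minimal sketch, assuming the common unweighted 4.0 scale (the point values are the widely used defaults, not figures from this district's handbook):

```python
# Unweighted GPA on the common 4.0 scale (assumed values, not district policy).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(grades: list[str]) -> float:
    """Average the point values of a list of letter grades."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

print(gpa(["A", "B", "A", "C"]))  # 3.25
```

Districts that weight honors courses or count credit hours would scale each grade before averaging, but the idea is the same.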
The way a dog’s teeth fit together is called occlusion. Dogs, it should surprise no one, are a different species than humans, and how our jaws meet and mesh is accordingly distinct. Where our standard occlusion sees the top row of teeth mirroring those below, the typical occlusion for a dog’s mouth is called the scissor bite. This means that when a dog’s mouth is closed, the canine teeth and the premolars just behind them meet like the blades on a pair of scissors. We’ve all seen dogs whose lower jaw is more pronounced, often leading to the bottom teeth poking out when they are at rest. This type of malocclusion is seen in dogs with underbites.
Upper and lower jaw and teeth alignment can be a bit tricky in dogs. As long as a dog can comfortably chew and eat solid food and their bite is functional, an underbite malocclusion can be considered both normal and healthy. In fact, among certain breeds, usually flat-faced or brachycephalic dogs, the lower dog jaw is typically a little longer than the upper. A dog with an underbite where only the jaw is affected is a skeletal malocclusion. Let’s examine this structural phenomenon with a focus on the following points:
- What constitutes a canine underbite?
- What causes an underbite?
- Dog breeds with underbites
- Is having an underbite bad?
- Can a dog underbite be fixed?
What’s a dog underbite?
The kind of underbite we’re looking at in this article is a Class 3 malocclusion. There are actually a number of technical and medical terms for the condition, including mandibular prognathism and mandibular mesioclusion. All of them refer to the same relative positioning of the teeth in the lower and upper jaws, where the dog’s lower jaw sticks out beyond the upper jaw. The malocclusion occurs because the difference in jaw lengths causes the teeth to meet in atypical ways.
What does an underbite look like? The photos of dogs with underbites scattered throughout this article give you a good idea. The lower jaw may be, or appear, physically longer than the upper, or situated in such a way that the dog’s bottom teeth are visible even when the dog’s mouth is closed. A dog with an underbite whose teeth are themselves jagged or misaligned falls under a different category, called a dental malocclusion.
What causes an underbite?
Most of us have some idea of how the practice of pure-breeding has affected the skeletal structure of dogs through the years. In certain short-muzzled, or brachycephalic, dog breeds, the establishment of breed standards is a major reason why skeletal malocclusion is not only considered normal, but necessary among dogs raised to compete in conformation shows. When you watch dog shows and see judges poking around in a dog’s mouth, it is usually so that they can check the dog’s occlusion (or malocclusion) against what is formally required by the breed standard.
In dogs from breeds and mixes whose muzzles are more pronounced, a slight underbite during the first stages of life may self-correct as the baby teeth are discarded and the adult teeth come in. The rule of thumb is that by the age of 10 months, the alignment of a dog’s teeth is set. Dental malocclusion is another cause of a persistent underbite in dogs of otherwise normal muzzle length. Teeth can become misaligned when a dog’s baby teeth fail to come in and fall out as they should. This problem can affect any dog, but is most frequently seen in small and toy dog breeds.
What dog breeds have underbites?
Among certain breeds, as well as mixes where one of these breeds is a contributor, an underbite is considered normal. The cause for this, then, is genetic: a structural abnormality that has become not only expected, but desired. Breeds for whom a skeletal malocclusion is a common feature include the Boston Terrier, Boxer, Bulldogs (English and French), Cavalier King Charles Spaniel, Lhasa Apso, Pekingese, Pug and the Shih Tzu. Mixed-breed dogs with at least one parent from among that list also have a greater likelihood of developing, or being born with, an underbite.
As mentioned above, for some dogs, an underbite is the result of deciduous teeth that are either impacted, or, when they do erupt, emerge irregularly. This is the most common scenario for underbites in non-brachycephalic dogs, although it happens much more often in breeds such as the Chihuahua, Maltese, Pomeranian and Yorkshire Terrier, especially in their toy and teacup varieties.
Dog underbite problems
Is having an underbite bad? In dogs for whom skeletal malocclusion is hereditary, their slight underbite is no cause for concern. There is usually no appreciable negative impact on the way their mouths function for eating, drinking, self-grooming and other mundane tasks. Cases like these can be monitored throughout a dog’s life during regular exams and checkups.
Dental malocclusion, on the other hand, can lead to serious health issues. When a dog's bite fails to mesh properly because of jagged, awkwardly emerging or impacted teeth, normal functioning can be impaired. An inability to tear and grind food is among the problems a dog with a more severe underbite can suffer. More detrimental things happen when a tooth comes in wrong, doing unintended damage or wounding sensitive parts of the mouth, whether the cheek, gums or the roof of a dog's mouth.
Dog orthodontics: Can an underbite be fixed?
The vast majority of cases of skeletal malocclusion require no treatment. If a misaligned tooth is at no risk of causing damage to a dog’s mouth, usually it can be left alone as well. However, physical traumas from misaligned teeth that make regular and inappropriate contact with the mouth’s interior can lead to complications like infections and formation of cysts, and can even accelerate the process of tooth decay. Baby teeth that do not erupt properly or remain impacted can also lead to pain and suffering.
Can a dog underbite be fixed? If the problem, or the potential for underbite development to cause substantive health issues, exists, canine dental specialists have options in these latter scenarios, though they can be costly. After determining the nature of the problem and the areas affected, solutions can range from extracting errant teeth and dog braces to corrective surgery. |
History of boxing
Boxing is one of our most ancient sports, with evidence that it was enjoyed up to six thousand years ago in Ethiopia. Engravings in stone showing scenes of boxing have been found in Iraq, suggesting that five thousand years ago, its inhabitants (the ancient Sumerians) enjoyed an early ancestor of what we now know as boxing. From these origins, the sport spread to Egypt, where cave paintings thought to be two thousand years old depict boxers.
This pastime then spread to Europe and thrived in Greece, where one version of boxing involved two opponents sitting face to face and beating each other until one was knocked unconscious and the other emerged victorious.
The Romans developed the sport further by introducing the ring. They also began to wrap their fists in leather bands studded with metal to use as weapons. It was not long until the sport became so dangerous that it was banned by the Romans in about 30 BC.
This ban and the surge of Christianity quashed the spread of boxing and it is thought that it was not taken up again until the seventeenth century in Britain. Bare knuckles clashed in the re-emerged sport, making it marginally less brutal than the Roman version.
With its renewed popularity came attempts to regulate boxing. For information on the London Prize Ring Rules and the Queensberry Rules, see the ‘History of boxing rules’ section below.
The boxing craze crossed the Atlantic to become an enormous success in America. Despite a dip in popularity in Britain shortly after World War II, when televisions were becoming more widespread, boxing has since gone from strength to strength, with icons such as Muhammad Ali becoming household names.
21.9: Nucleic Acids
Nucleic acids are the most important macromolecules for the continuity of life. They carry the cell's genetic blueprint and the instructions for its functioning.
DNA and RNA
The two main types of nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). DNA is the genetic material in all living organisms, ranging from single-celled bacteria to multicellular mammals. It is found in the nucleus of eukaryotes and in organelles such as chloroplasts and mitochondria. In prokaryotes, the DNA is not enclosed in a membranous envelope.
The cell's entire genetic content is its genome, and the study of genomes is genomics. In eukaryotic cells but not in prokaryotes, DNA forms a complex with histone proteins to form chromatin, the substance of eukaryotic chromosomes. A chromosome can contain tens of thousands of genes. Many genes contain the information to make proteins. Other genes code for RNA products. DNA controls all of the cellular activities by turning the genes “on” or “off.”
The other type of nucleic acid, RNA, is mostly involved in protein synthesis. The DNA molecules never leave the nucleus but instead use an intermediary to communicate with the rest of the cell. This intermediary is the messenger RNA (mRNA). Other types of RNA—like rRNA, tRNA, and microRNA—are involved in protein synthesis and its regulation.
DNA and RNA consist of monomers called nucleotides. Three components comprise each nucleotide: a nitrogenous base, a pentose (five-carbon) sugar, and a phosphate group. Each nitrogenous base in a nucleotide is attached to a sugar molecule, which is attached to one or more phosphate groups. The nitrogenous bases, important components of nucleotides, are organic molecules and are so named because they contain carbon and nitrogen. They are bases because they contain an amino group that has the potential of binding an extra hydrogen, and thus decreasing the hydrogen ion concentration in its environment, making it more basic. Each nucleotide in DNA contains one of four possible nitrogenous bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Adenine and guanine are classified as purines. The purine's primary structure is two carbon-nitrogen rings. Cytosine, thymine, and uracil are classified as pyrimidines which have a single carbon-nitrogen ring as their primary structure. Each of these basic carbon-nitrogen rings has different functional groups attached to it. In molecular biology shorthand, we know the nitrogenous bases by their symbols A, T, G, C, and U. DNA contains A, T, G, and C; whereas, RNA contains A, U, G, and C.
The pentose sugar in DNA is deoxyribose, and in RNA, the sugar is ribose. The difference between the sugars is the presence of the hydroxyl group on the ribose's second carbon and hydrogen on the deoxyribose's second carbon. The phosphate residue attaches to the hydroxyl group of the 5′ carbon of one sugar and the hydroxyl group of the 3′ carbon of the sugar of the next nucleotide, which forms a 5′–3′ phosphodiester linkage.
DNA Double-Helix Structure
DNA has a double-helix structure. The sugar and phosphate lie on the outside of the helix, forming the DNA's backbone. The nitrogenous bases are stacked in the interior, like pairs of staircase steps. Hydrogen bonds bind the pairs to each other. Every base pair in the double helix is separated from the next base pair by 0.34 nm. The helix's two strands run in opposite directions, meaning that the 5′ carbon end of one strand will face the 3′ carbon end of its matching strand. Only certain types of base pairing are allowed: A pairs with T, and G pairs with C. This is known as the complementary base rule. In other words, the DNA strands are complementary to each other.
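The complementary base rule is simple enough to express as a short piece of code. The following Python sketch (the strand sequence is made up purely for illustration) returns the complement of a DNA strand, reversed so that the result reads in the conventional 5′→3′ direction of the antiparallel partner strand:

```python
# Complementary base rule: A pairs with T, and G pairs with C.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement_strand(strand: str) -> str:
    """Return the complementary DNA strand.

    Because the two strands run in opposite (antiparallel) directions,
    the complement is read in reverse to keep 5'->3' orientation.
    """
    return "".join(PAIRS[base] for base in reversed(strand))

print(complement_strand("ATGC"))  # GCAT
```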
Ribonucleic acid, or RNA, is mainly involved in the process of protein synthesis under the direction of DNA. RNA is usually single-stranded and consists of ribonucleotides that are linked by phosphodiester bonds.
There are four major types of RNA: messenger RNA (mRNA), ribosomal RNA (rRNA), transfer RNA (tRNA), and microRNA (miRNA). The first, mRNA, carries the message from DNA, which controls all of the cellular activities in a cell. If a cell requires a certain protein, the gene for it turns “on” and the messenger RNA is synthesized in the nucleus. The RNA base sequence is complementary to the DNA's coding sequence from which it has been copied. In the cytoplasm, the mRNA interacts with ribosomes and other cellular machinery.
The mRNA is read in sets of three bases known as codons. Each codon codes for a single amino acid. In this way, the mRNA is read and the protein product is made. Ribosomal RNA (rRNA) is a major constituent of ribosomes on which the mRNA binds. The rRNA ensures the proper alignment of the mRNA and the ribosomes. The ribosome's rRNA also has an enzymatic activity (peptidyl transferase) and catalyzes peptide bond formation between two aligned amino acids. Transfer RNA (tRNA) is one of the smallest of the four types of RNA, usually 70–90 nucleotides long. It carries the correct amino acid to the protein synthesis site. It is the base pairing between the tRNA and mRNA that allows for the correct amino acid to insert itself in the polypeptide chain. MicroRNAs are the smallest RNA molecules, and their role involves regulating gene expression by interfering with the expression of certain mRNA messages.
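The codon-by-codon reading described above can be sketched programmatically. This minimal Python example uses just four entries from the standard genetic code (the full table has 64 codons) and stops when it reaches a stop codon:

```python
# A tiny excerpt of the standard genetic code; the full table has 64 codons.
CODON_TABLE = {
    "AUG": "Met",  # methionine; also the start codon
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "UAA": "Stop",
}

def translate(mrna: str) -> list:
    """Read an mRNA sequence three bases (one codon) at a time."""
    amino_acids = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "Stop":
            break
        amino_acids.append(residue)
    return amino_acids

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```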
Even though the RNA is single-stranded, most RNA types show extensive intramolecular base pairing between complementary sequences, creating a predictable three-dimensional structure essential for their function.
This text is adapted from Openstax, Biology 2e, Chapter 3.5: Nucleic Acids.
Millions more people across the globe are relying on our endangered coral reefs than 20 years ago, according to new figures on population growth in coastal areas.
Research from the University of Essex has found that the number of people living in coastal areas near coral reefs has grown to nearly one billion — a rise of 250 million since 2000, with populations in these areas growing faster than the global average.
Worryingly, the areas closest to the coral reefs — where people’s direct livelihoods rely on these valuable ecosystems — have seen a population density boom which is now four times the global average.
This new study, published in the journal Global Change Biology, provides the most up-to-date and extensive statistics on global, regional and nation-level differences in coastal population trends living within 5-100km of coral reefs.
Human populations near ecosystems are used to indicate the dependency on those ecosystems, as well as an estimated threat. Climate change is the greatest threat facing coral reefs but in these coral adjacent coastal areas there is a balance to be found between the important services coral reefs provide to humans, such as protection against storms, food supply and livelihoods, and the potentially damaging human activities that occur there — from overfishing and pollution to destruction of corals for construction. Human activities also play a large role in coral reef health and survival.
Coastal populations are calculated as those living within 100km (60 miles) of coastlines, and the number of people living near coral reefs is often used in regional coral reef management and in the evaluation of risk at regional and global scales.
Dr Amy Sing Wong, from Essex’s School of Life Sciences, who led the research, said: “Coral reefs are at the forefront of our climate catastrophe. They are also subject to many human-made impacts — from pollution and overfishing to tourism and extraction of raw materials.
“Yet, coral reefs are also a lifeline to millions, acting as a primary source of protein to some of the poorest people on Earth. Broadly, more people by coral reefs translates to more impacted coral reefs.
“Our research into understanding changes in human populations close to coral reefs is therefore crucial. We knew that global populations near coral reefs were high, but we did not expect nearly one billion people within 100km of coral reefs in 2020.”
Coral reefs cover less than 0.1% of the world’s oceans, are extremely biodiverse, hosting up to one quarter of all marine fish species and are among the most productive and complex ecosystems found in the world.
However, coral reefs are predicted to decline between 70-90% in the next decade and up to 99% if global warming reaches 2°C above pre-industrial levels.
The scientists are now hoping their findings will help identify areas at higher risk which will prove a powerful management tool to inform policies around coastal protection — something crucial for securing the future of our vulnerable coral reef ecosystems and the people who rely on them.
Dr Michelle Taylor, Senior Lecturer of Marine Biology at the University of Essex, said: “There is a concern regarding high coastal zone human population growth as it has been associated with the degradation of coastal and marine ecosystems.
“By providing country-level coastal population data we hope it can be used in global policy goals such as the Sustainability Development Goals (SDGs). Our study highlights the millions of people that have a potential dependency on coral reefs and are therefore vulnerable to climate-change impacts on these sensitive ecosystems.”
The data from 117 coral reef countries found the Indian Ocean saw a 33% increase in populations within 100km of a coral reef and 71% at 5km. There are 60 countries with 100% of their population within 100km of coral reefs.
Of particular interest are Small Island Developing States (SIDS), where dependency on marine ecosystems is particularly high; SIDS are recognised as a special group of countries that are disproportionately vulnerable to climate change. The study found that 94% of the SIDS population lives within 100km of a coral reef.
Materials provided by University of Essex. Note: Content may be edited for style and length.
Variability in ASD: Autism presents itself in various forms and degrees. The spectrum ranges from individuals with severe impairments—who may be non-verbal and intellectually challenged—to those with less noticeable symptoms who may have average or above-average intelligence. This diversity necessitates a personalized approach to diagnosis and support.
Early Diagnosis and Intervention: Early detection of autism is crucial. Symptoms often appear in the first two years of life. Early intervention can make a significant difference in the child's development. It typically includes speech therapy, occupational therapy, and behavioral interventions.
Supporting Different Types of Autism
Tailored Educational Programs: Education for children with autism should be tailored to their specific needs. Special education services and individualized education programs (IEPs) are common. These include modifications in teaching methods, curriculum adjustments, and the provision of a supportive learning environment.
Behavioral Therapies: Applied Behavior Analysis (ABA) is widely recognized for treating ASD. It involves structured interventions to help improve social skills, communication, and learning skills.
Speech and Language Therapy: Many individuals with ASD struggle with communication. Speech and language therapy can help improve these skills, enhancing the ability to express thoughts and needs.
Occupational Therapy: This therapy helps with everyday skills and promotes independence. It's particularly beneficial for those who have difficulties with motor skills or sensory processing.
Social Skills Training: Programs designed to enhance social skills can be crucial for individuals with ASD, helping them understand social cues and interact more effectively with others.
Community and Family Support
Support Groups and Communities: Joining autism support groups can be beneficial for families and individuals. These communities offer a platform to share experiences, resources, and emotional support.
Family Education and Counseling: Educating family members about autism is vital. Family counseling can help in coping with the challenges of raising or living with someone who has ASD.
Advocacy and Awareness: Raising awareness about autism in the broader community is important. It helps in reducing stigma and encourages inclusivity and acceptance.
The Future of Autism Support
Advancements in research are continually shaping our understanding of autism. Emerging technologies like virtual reality and AI are opening new avenues for therapy and support. However, at the core of autism support is the understanding that every individual with ASD is unique, and so too should be their support and care.
Holistic approaches for treating Autism Spectrum Disorder (ASD), including the use of phytotherapy (herbal medicine), involve considering the whole person - their physical, emotional, social, and environmental needs - rather than just focusing on reducing symptoms of autism. This approach recognizes the interconnectedness of various aspects of health and wellness.
Holistic Treatment Approaches for Autism
Diet and Nutrition: A holistic approach often includes a focus on diet and nutrition. Some individuals with ASD may have food sensitivities or gastrointestinal issues. Tailored diets, potentially excluding gluten or casein, can sometimes alleviate symptoms. Incorporating nutrient-rich foods and avoiding processed foods can also be beneficial.
Phytotherapy (Herbal Medicine)
Use of Specific Herbs: Certain herbs are known for their calming effects and may be used to alleviate symptoms of anxiety and improve sleep quality in individuals with ASD.
Jampha’s Tibetan Pharmacy has created formulas that encompass these concerns, supporting cognitive function, reducing stress and anxiety, and offering neuroprotective properties through a careful selection of botanical compounds.
Jampha Tibetan Pharmacy's botanical pharmacology offers a holistic and natural approach to supporting individuals on the Autism Spectrum. Nootropics like Healing Heart, neurocognitive enhancers like White Elephant, and botanical infusions like Calm SETI, along with our intelligent collection of botanical and mineral infusions, provide a wide array of tools to enhance focus, manage emotions, and promote overall well-being. By embracing the wisdom of Tibetan Medicine, we unlock the potential of nature's treasures to provide valuable support.
Physical Activity and Exercise: Regular physical activity is beneficial for individuals with ASD. It not only improves physical health but also aids in reducing anxiety and improving mood. Activities like yoga and tai chi can also be therapeutic, emphasizing mind-body connection.
Mindfulness and Meditation: Mindfulness practices can help manage stress and improve focus and emotional regulation. Techniques such as guided meditation and deep breathing exercises can be adapted for individuals with ASD.
Art and Music Therapy: These therapies provide a non-verbal outlet for expression and can be particularly effective for those who have difficulty with traditional communication. They can also help in improving motor skills and emotional regulation.
Sensory Integration Therapy: Many individuals with autism have sensory processing issues. Sensory integration therapy, often administered by an occupational therapist, can help them cope with sensory input more effectively.
Family Involvement and Support: A holistic approach includes the family unit. Family therapy and education about autism can empower family members to provide the best possible support at home.
Considerations and Cautions
Professional Guidance: Before starting any holistic or phytotherapy treatments, it's crucial to consult with healthcare professionals who are knowledgeable about autism and holistic health. This ensures that the chosen therapies complement existing treatments and do not pose health risks.
Recognizing the highly individualized nature of Autism Spectrum Disorder (ASD), Jampha Tibetan Pharmacy embraces the philosophy that what benefits one individual might not necessarily aid another. This insight, rooted in both traditional herbal medicine and contemporary pharmacology, is a guiding principle in their creation of holistic formulas. Jampha prioritizes tailoring these treatments to meet the unique needs and responses of each person, ensuring an approach that is as personalized as it is effective. The pharmacy's commitment to individualized care reflects a deep understanding of the diverse and spectrum-based characteristics of ASD, underscoring the importance of customizing therapeutic interventions to achieve optimal health outcomes.
Integrating with Conventional Treatments: Holistic and phytotherapeutic approaches are often most effective when used in conjunction with traditional therapies and interventions for ASD.
Holistic approaches to treating autism, including phytotherapy, emphasize treating the individual as a whole, considering both physical and emotional wellbeing. These approaches aim to provide comprehensive support that goes beyond symptom management, addressing various aspects of life and health that can impact individuals with ASD. As research and understanding of both autism and holistic health care grow, these approaches continue to evolve, offering new opportunities for improving the quality of life for those on the autism spectrum. |
November is upon us, which means that Thanksgiving is right around the corner! Fall has also arrived, and with it comes a world of opportunities to incorporate STEM activities into the classroom that can directly tie in to the season. There are countless fun science, technology, engineering, and math activities to do with your upper elementary kids that will really get their brains going this fall season!
Here are three fun Thanksgiving themed STEM Challenges that your students will love!
The Cornucopia Catch Game!
Students must construct 5 paper cornucopias along with paper fruits and vegetables. The cornucopias must be strong enough to hold, catch, and toss paper vegetables. Students must come up with their own unique game with instructions, rules, and challenges using the cornucopias. When their STEM group has invented the game, they must teach it to the class. The group with the most creative game wins the challenge!
**For more instructions and a list of supplies, scroll to the end of the post!
Crack the Turkey Code Challenge!
Using a list of supplies, students will first create a turkey craft. On the turkey feathers of the craft, students must decide on made-up symbols that represent letters of the alphabet. They must create an answer key of their symbols that correlates with the letters A-Z. Their turkey craft will have a coded message. The message must be a fact about turkeys, and it must contain at least five words. Different colors must be used as a clue for students to know how many letters each word contains in the sentence. (See sample code.) When multiple groups are finished, they can give their turkey codes to each other and try to guess what the other group’s message is. The vowels are automatically given to the group trying to guess the code to make it easier to decipher. **For more directions and a list of supplies, scroll to the end of the post!
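For teachers who want to show older students what their turkey code is doing, here is a small Python sketch of the same idea as a substitution cipher. The symbols and the message are invented purely for this example; a real student key would cover all of A-Z:

```python
# Hypothetical symbol key a student group might invent; only the letters
# needed for the example message are mapped here.
KEY = {"T": "*", "U": "^", "R": "#", "K": "@", "E": "+", "Y": "~",
       "S": "%", "F": "!", "L": "&"}
# Reverse the key so the guessing group can decode.
DECODE = {symbol: letter for letter, symbol in KEY.items()}

def encode(message: str) -> str:
    """Replace each letter with its made-up symbol; leave spaces alone."""
    return "".join(KEY.get(ch, ch) for ch in message.upper())

def decode(ciphertext: str) -> str:
    """Translate the symbols back into letters using the answer key."""
    return "".join(DECODE.get(ch, ch) for ch in ciphertext)

secret = encode("TURKEYS FLY")
print(secret)          # *^#@+~% !&~
print(decode(secret))  # TURKEYS FLY
```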
Toothpick Thanksgiving Table Challenge
Students must construct a Thanksgiving table using only toothpicks and glue. The table must be at least 6 inches long and 4 inches wide, but no larger than 1 ft by 6 inches. The table cannot have more than 4 legs. The challenge is to make the table as strong as possible. At the end of the challenge, students will place weights on top of the table. The STEM group with the table that holds the most weight wins the challenge! Students have three days to complete this challenge (to allow for drying time).
Would you like printable guides and listed materials for these Thanksgiving STEM projects? Click here! |
Updated: Dec 30, 2018
Small intestinal bacterial overgrowth (SIBO) is classified according to the types of gases produced by microflora occupying the upper gut. The two types of gases are hydrogen and methane. However, a new classification has emerged based on relatively recent findings, characterized by excess hydrogen sulfide production. Hydrogen sulfide-dominant SIBO is now becoming recognized as an independent entity, and may be very common in people who have been diagnosed with SIBO. In a previous article, I described the process of hydrogen sulfide metabolism in detail and presented the reasons for why this might occur as a beneficial adaptation in the context of glyphosate poisoning.
Hydrogen sulfide is a gas known for its characteristic sulfur-like odor. People passing gas that smells like rotten egg or cabbage are very familiar with this. This gas is mostly produced as part of the metabolism of a specific set of bacteria referred to as "sulfate-reducing bacteria", although it can be synthesized by many other bacteria and is also part of normal human metabolism in small amounts.
Sulfate-reducing bacteria metabolize hydrogen in conjunction with dietary sulfur sources (sulfite, sulfate, or other organic forms) and convert them into hydrogen sulfide gas. One of these types of bacteria which has been well-studied in the past is Desulfovibrio.
Hydrogen sulfide is established as a potent signalling molecule with both beneficial and detrimental effects on cells, depending on the type of cell and the concentration of the gas. In elevated concentrations, one of the ways hydrogen sulfide exerts its toxic effects is through its interaction with heme-containing proteins, causing severe cytochrome c oxidase deficiency and mitochondrial dysfunction.
Its transport is passive, meaning that the gas can freely travel throughout the blood stream and diffuse directly into cells through the cell membrane. Although hydrogen sulfide can exert pro-inflammatory effects on cells, it can also be fed into the mitochondria, where it is converted into energy (ATP) and into sulfate, the bioactive form of sulfur which cells can actually utilize. Sulfate is needed for a wide variety of things including conjugation and detoxification, blood flow, and structural support. The conversion of hydrogen sulfide into sulfate involves multiple steps performed by enzymes located in the mitochondria.
The initial step is catalyzed by sulfide-quinone oxidoreductase (SQR), which oxidizes hydrogen sulfide by using ubiquinone (the oxidized form of CoQ) as an electron acceptor. CoQ deficiency has been shown to dramatically reduce SQR expression, inhibiting the cell's ability to break down hydrogen sulfide. Animal research shows that CoQ deficiency leads to the intracellular accumulation of hydrogen sulfide, alterations in downstream enzyme activity, and depleted glutathione levels. However, the researchers also found that supplementation with CoQ10 successfully restored SQR levels back to normal.
This suggests that supplementation with CoQ10 may therefore be an unexpectedly useful adjunct in a protocol designed to support healthy sulfur metabolism.
Side note: Many protocols used to treat problems with sulfur metabolism are aimed at restricting sulfur-containing foods, such as the amino acids cysteine and methionine. However, restriction of sulfur amino acids has actually been shown to increase hydrogen sulfide synthesis via upregulated cystathionine γ‐lyase, along with a concomitant decrease in the levels of glutathione. Therefore, dietary sulfur restriction is likely not a beneficial solution in the long-term.
A later step in the process of sulfide metabolism involves the conversion of sulfite, a cytotoxic metabolic intermediate, into sulfate. The enzyme responsible for this is called sulfite oxidase (SUOX). SUOX is dependent on molybdenum as a cofactor, and in the event of reduced sulfite oxidase activity (due to genetics or cofactor deficiency), a high influx of hydrogen sulfide into the cell may result in a buildup of intracellular sulfite.
Elevated sulfite poses many potential risks for the cell, including damage to proteins and lipids and mitochondrial dysfunction. However, the point I would like to focus on next is the effect it has on thiamine.
Thiamine (vitamin B1) is a critical component of glucose, amino acid, and lipid metabolism. It is also needed for the synthesis of NADPH, nucleic acids, and neurotransmitters, along with facilitating nerve transmission. High concentrations are found in pork, organ meats, and fortified grains, although there is a small amount in most other foods.
Thiamine is particularly susceptible to degradation by thiaminase enzymes found in certain foods such as raw fish, by the polyphenols in coffee and tea, and by metabolic byproducts of molds such as Aspergillus. Metabolism of refined sugars and carbohydrates also raises the requirement for thiamine by increasing the flow of glucose through glycolysis.
Although host gut bacteria are responsible for synthesizing a large quantity of thiamine and may contribute significantly toward total thiamine intake, there are also groups of bacteria, including Clostridia and Bacillus, which produce thiaminases and can destroy thiamine. It is therefore possible that severe cases of dysbiosis could have a severe impact on thiamine status. Long-term gastrointestinal malabsorption frequently accompanies dysbiosis and SIBO, and is another factor which has been implicated in thiamine deficiency.
It is also possible that thiamine status is negatively affected in hydrogen-sulfide dominant SIBO. Although the animal research investigating hydrogen sulfide has yielded inconclusive results, it is well established that sulfite (an intermediate in hydrogen sulfide metabolism) can destroy thiamine. The sulfite ion is capable of cleaving thiamine at its methylene bridge, rendering the vitamin ineffective. Hence, because hydrogen sulfide influx into the cell may result in higher sulfite concentrations, this could place intracellular thiamine stores at risk for degradation.
In support of this, in vitro data shows that sulfide reduces the concentration of thiamine in white blood cells. Furthermore, veterinary data shows that excess dietary sulfur coupled with a subsequent rise in hydrogen sulfide production increases the requirement for thiamine in the brain and central nervous system and can eventually lead to secondary thiamine deficiency. Additionally, thiamine supplementation was shown to protect animals against sulfide toxicity, which suggests that excess hydrogen sulfide does indeed negatively impact the integrity of thiamine.
Interestingly, clinical reports have shown that people diagnosed with SIBO are more likely to be deficient in thiamine. Since thiamine is absorbed via the gastrointestinal tract, maldigestion and malabsorption likely contribute to this finding. However, it is also possible that this could relate to the defects in sulfide metabolism described above.
How to test for thiamine
Based on the above information, it may be prudent for anyone experiencing digestive issues to test for functional thiamine status. Unfortunately, this is not as simple as just measuring blood or plasma thiamine. According to Dr Derrick Lonsdale, a world-leading expert on thiamine, blood thiamine concentrations provide an inaccurate measurement of thiamine status inside the cell and are only reflective of recent thiamine intake. Likewise, urinary excretion of thiamine is not a good measure and will not necessarily detect insufficiency.
On the other hand, functional sufficiency can sometimes be detected by measuring erythrocyte transketolase activity. Since transketolase requires thiamine, low transketolase activity can indicate poor intracellular availability of thiamine. However, Dr Lonsdale's experience showed that transketolase testing alone was not always sufficient, and that it could remain somewhat high despite inadequate thiamine levels. As a solution, a "thiamine pyrophosphate effect" test used in conjunction with transketolase activity could provide a lot more information. Thiamine pyrophosphate effect testing measures how readily thiamine is taken up by cells. A high pyrophosphate effect indicates that cells need a lot of thiamine and so are likely deficient, whereas a low pyrophosphate effect suggests that cells already have sufficient amounts.
Transketolase activity testing is provided by a laboratory in the UK called Biolab and can be purchased cheaply, so that is one option. However, the thiamine pyrophosphate effect test is not currently offered by any lab at the time of writing this article. Another method for testing thiamine status also exists with reportedly very high sensitivity. It measures whole-blood thiamine pyrophosphate, which is reported to correlate strongly with intracellular thiamine concentrations and provide a more accurate representation of thiamine status. This test is currently offered by LabCorp.
Functional markers providing an indication of thiamine status can also be picked up on urinary organic acids, and plasma or urinary amino acids. Some of the markers to look out for include:
Elevated urinary pyruvate and/or lactate - Thiamine is a cofactor for the pyruvate dehydrogenase complex (PDHC), an enzyme complex responsible for converting pyruvate into acetyl-CoA. In thiamine deficiency, PDHC may be inhibited, and this can lead to a buildup of pyruvate, which may also be shunted toward lactate production. The result is higher levels of pyruvate and lactate.
Elevated alpha-ketoglutarate/2-oxoglutarate - Thiamine is a cofactor for the alpha-ketoglutarate dehydrogenase complex (KGDH), an enzyme complex in the TCA cycle responsible for converting alpha-ketoglutarate into succinyl-CoA. Low thiamine may reduce the rate of this reaction, resulting in higher levels of alpha-ketoglutarate.
Elevated alanine - Pyruvate can be used to synthesize alanine, and alanine can also be converted back into pyruvate to be used in energy production. Again, the onward metabolism of pyruvate requires adequate PDHC activity, and so thiamine deficiency can result in elevated alanine along with pyruvate.
Hopefully after reading this, the audience can appreciate how SIBO is particularly relevant to thiamine metabolism. The purpose of this article was to describe how SIBO and gut-related issues can potentially lead to thiamine deficiency, and why people suffering from this condition would do well to test their thiamine status.
In the case of long-term deficiency, thiamine repletion may actually go a long way toward fixing the initial problem. This is because thiamine deficiency might not only be a consequence, but also a cause of SIBO in the first place.
In the next article, I will be explaining how chronic thiamine deficiency can be causally linked with SIBO and other gut dysfunction. |
21 Jul Attention Span is the Key to Effective Learning
Concentration is Crucial
A child’s attention span is a very important factor in the learning process. The amount of time a child spends listening and understanding the teacher affects how much he or she has taken from the lesson.
Hyperactivity is one of the biggest enemies of good concentration; the other is the environment. If a child is not in the mood for studying, he or she will sit idly and daydream, or talk and disrupt the rest of the class. A short attention span often has less to do with your child and more to do with their surroundings.
A good learning environment is vital; it creates an atmosphere that places children in the right mind-set to study and improves concentration levels. If a few kids in the class pay attention and respond positively, it catches on to the rest of the class.
The average attention span of a seven-year-old is 14-25 minutes, and it increases by 2-5 minutes every year. It is important that teachers know this, so they can plan each class accordingly and teach the most important part of the lesson first.
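As a rough illustration only (this is a rule of thumb, not a clinical measure), the age-based estimate above can be written as a small calculation:

```python
def attention_span_minutes(age: int):
    """Estimate the attention-span range (in minutes) for a child,
    using the rule of thumb above: 14-25 minutes at age seven,
    growing by roughly 2-5 minutes per year after that."""
    years_past_seven = max(0, age - 7)
    low = 14 + 2 * years_past_seven
    high = 25 + 5 * years_past_seven
    return low, high

print(attention_span_minutes(10))  # (20, 40)
```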
Three Things You Can Try at Home
The development of your child is the most important thing. If you are worried about your child’s attentiveness in the classroom, here are few things you can try at home.
- Set goals for practice. E.g. Read for 30 minutes without a break.
- Play games and activities that require concentration. E.g. Crosswords, word searches.
- Practice deep breathing to improve focus.
The activity section of the ALOHA Newsletter is a great way for you to sit with your kids and help them improve their attention span.
If you haven’t signed up for our newsletter, you can do so at your nearest ALOHA center.
Founded in 1993, ALOHA Mind Math has been guiding children between the ages of five through 12 years to achieve academic excellence. The interactive learning process is proven to enhance a child’s math, reading and writing capabilities. The teachers also assist children in developing skills and abilities such as observation and listening that result in the overall growth of the child.
ALOHA Mind Math is currently training children in over 20 countries with 4200 different centers. For more details on this unique program, please visit alohamindmath.com or search for the center closest to you by using our locator alohamindmath.com/locations/.
RFID (Radio Frequency Identification) is a technology used for the detection, tracking and identification of products and objects. It is based on storing data in an RFID tag and reading that data wirelessly with an RFID reader using radio waves.

The benefit of RFID technology over many other automatic identification methods is that objects can be read remotely and quickly while data protection is maintained. Enclosed tags withstand rough handling and can remain usable for decades. Moreover, tags can carry a large amount of information.

An RFID tag can be integrated into a product at the manufacturing stage or, alternatively, added to the designated object later, e.g. with adhesive tape. The core idea of the system is simple: an RFID tag is attached to the designated object, data is written to and read from the tag with an RFID reader, and the data is utilised by the back-end system.

In several respects, RFID technology is comparable to a bar code: an object is supplied with a tag that tells something about the object. The difference, however, is that identification can take place without direct visual contact, for instance through packages or crates. Furthermore, it is possible to read dozens of RFID tags simultaneously, and their content can be altered in the course of the process. Bar codes, by contrast, can only be read one at a time and cannot be altered after they are printed. RFID tags also withstand dirty industrial conditions better than conventional bar codes.

RFID can be adapted to a wide range of applications. Tags are used, among other things, for monitoring objects and processes, in logistics, movement and access control, retail sales and payment applications, and for the identification and tracking of humans and animals. The potential range of applications is vast, and continued development of the technology only increases it.
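The tag/reader/back-end workflow described above can be sketched in code. The following is a toy simulation rather than a driver for real hardware, and the class names and tag IDs are invented for illustration. It demonstrates the two properties that distinguish RFID from bar codes: many tags can be read in one pass, and tag contents can be rewritten mid-process.

```python
from dataclasses import dataclass, field

@dataclass
class RFIDTag:
    """A simulated tag: a unique ID plus rewritable user memory."""
    uid: str
    memory: dict = field(default_factory=dict)

class RFIDReader:
    """A simulated reader: reads many tags at once, no line of sight needed."""
    def inventory(self, tags_in_field):
        # Unlike a bar-code scanner, a reader can report dozens of tags
        # that happen to be within radio range at the same time.
        return [tag.uid for tag in tags_in_field]

    def write(self, tag, key, value):
        # Tag contents can be altered in the course of the process;
        # a printed bar code cannot.
        tag.memory[key] = value

reader = RFIDReader()
crate = [RFIDTag("E200-0001"), RFIDTag("E200-0002")]
print(reader.inventory(crate))   # ['E200-0001', 'E200-0002']
reader.write(crate[0], "status", "shipped")
print(crate[0].memory)           # {'status': 'shipped'}
```

In a real deployment the `inventory` result would be passed to the back-end system, which matches each UID against its product database.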
The 14 Points Plan by Woodrow Wilson
‘The 14 Points Plan’ discusses the factors that made it necessary for America to enter the war. The writer holds the opinion that for peace to prevail and for people to get their rights back, conflict or war may sometimes be necessary. The document also discusses how to ensure that the world never reaches such extremities again. The fourteen points offer the writer’s perspective on how world peace can be achieved and sustained. The 14 points plan is derived from a speech that the 28th President of the United States, Woodrow Wilson, gave before a Joint Session of Congress on January 8, 1918, during World War I. In the plan, he elaborated specifically on solutions to the problems of European countries such as France, Belgium, Russia, Poland, Turkey, Austria-Hungary, Italy and Germany.
This speech was important because it paved the way for the creation of the United Nations, which aims to promote world peace and cooperation. The first rule of the fourteen points plan is that all agreements between countries should be made transparently. The aim of this rule was to eliminate future secret defense agreements, which were one of the main causes of World War I. In the plan, Wilson also advocated for the removal of trade barriers, freedom of movement, reduction of armaments, self-determination of the liberated Balkan states and the creation of a League of Nations. The 14-point plan was a foreign policy document aimed at restoring and sustaining world peace after World War I (Pestritto & Atto, 2008).
Wilson was a liberal; his document advocated for freedom, independence and justice. At the time, the situation of the world was complicated; many countries were suffering greatly because of the war, and it was therefore almost impossible for them to support his idealistic notions. They wanted revenge against their enemies and compensation for the losses they had suffered. The workability of Wilson’s 14 points plan was therefore questioned because it appeared too idealistic, although it contained positive ideas. He was not realistic in his picture of a perfect world. Another weakness of the plan emerges when he says that there shall be open covenants of peace between nations, “after which there shall be no private international understandings of any kind…” It is not possible for all countries in the world to have no private international understandings with each other, especially in trade. Relations between countries are a complex issue due to factors such as shared history and common goals and objectives. Therefore, it is quite common for some implicit understandings to exist between countries.
The other limitation of the 14-point plan is that it was inconsiderate of the position that France and the United Kingdom had been put in by Germany during the war. They had suffered massively, and there was a high likelihood that Germany would become aggressive again if let off without any reprimand. The plan offered solutions that were too simplistic given the complexity of the war. Another weakness of the 14-point plan was that the ideals it stood for were too optimistic for that period in time, and this made them impractical.

The 14-point plan was opposed to colonialism and stated that colonizing countries should free their colonies. Again, this was too idealistic because at the time almost a quarter of the world’s countries were under colonial rule, and the colonizers were benefiting immensely; immediate withdrawal would therefore not have been possible.

The fourteen-point plan was an indirect cause of World War II, because it only offered solutions for world peace but did not deal with the root cause of the First World War. After World War I, many European countries remained devastated for a long time by the losses they had suffered; therefore, despite having agreed to adopt the 14-point plan, the leaders still wanted vengeance against Germany. Since the Versailles Treaty was derived from the 14-point plan, when the time came to convert the plan into an agreement (treaty), the European leaders who were disgruntled with Germany twisted the plan so that Germany would face punishment and sanctions. This is one of the factors that led to the start of World War II.

Although Wilson’s plan contained positive principles and ideals, it required the creation of a new world. His plan envisioned a world where there would be transparency, democracy and justice. At that time, the world was in the middle of a war that had torn apart many countries and claimed many lives. Therefore, it was almost impossible for these countries to embrace his ideals and change, because they were deeply ingrained in militarism, autocracy and injustice. These principles were only enforced after the mass devastation caused by World War II. It is then that the world saw the need for real change (Ruggiero, 2002).
The 14-point plan had both positive and negative effects. One of the benefits was that it was translated into the Versailles Treaty, which derived four of its points from the plan, including the 14th point advocating the formation of a League of Nations. Thus, the United Nations was ultimately formed as a result of the 14-point plan. I agree with the 14-point plan on the necessity of establishing a League of Nations in order to ensure future world peace. World peace and cooperation are crucial. An example of the consequences that conflict and disagreement between countries bring can be seen in the statistics of World War I: over 25 million people died, and the economies of most European nations collapsed, leading to an extremely harsh financial climate. The effects of the First World War were felt all over the world for a long time. It is therefore important to have a governing body to facilitate peace, cooperation and development among all the countries of the world. This 14th point led to the eventual establishment of the United Nations, which over the years has helped many countries through tumultuous and devastating periods. Wilson’s 14-point plan also facilitated the establishment of international courts of justice where perpetrators of war and violence are prosecuted.
The 14-point plan is still applicable today; many lessons can be learnt from it. One of the key principles of the document is that every country should live and let live if peace is to be achieved. No country should invade another country’s territory, seize its resources, or violate its rights. Another important lesson countries can learn from this document is that they should all take responsibility for upholding world peace, because war affects all countries directly or indirectly. The plan also acts as a guideline on how countries should conduct themselves in order to avoid conflict with other countries, for instance by avoiding secret treaties. Diplomacy should be employed in all aspects of relations with other countries.
The United States government has also adopted some of Wilson’s principles in its foreign policies. Most of Woodrow Wilson’s successors have used international democratization, security and justice as tools for the promotion of world peace. As a result, countries all over the world are more aware of their rights and responsibilities towards each other. For instance, in the world today, there is freedom of movement and trade among the countries of the world. This has helped to facilitate international peace and understanding. Therefore, the 14-point plan can be considered as an important document in history that has facilitated the development and sustenance of world peace and has led to the development of foreign relations among countries in addition to the enhancement of world trade.
Pestritto, R. J., & Atto, W. J. (2008). American Progressivism: A Reader. Oregon: Lexington Books.
Ruggiero, A. (2002). World War I. New York, NY: Marshall Cavendish.
Gaullism (French: Gaullisme) is a French political stance based on the thought and action of World War II French Resistance leader Charles de Gaulle, who would become the founding President of the Fifth French Republic. De Gaulle withdrew French forces from the NATO Command Structure, forced the removal of Allied bases from France, and initiated France's own independent nuclear deterrent programme. His actions were predicated on the view that France would not be subordinate to other nations.
According to Serge Berstein, Gaullism is "neither a doctrine nor a political ideology" and cannot be considered either left or right. Rather, "considering its historical progression, it is a pragmatic exercise of power that is neither free from contradictions nor of concessions to momentary necessity, even if the imperious word of the general gives to the practice of Gaullism the allure of a programme that seems profound and fully realised". Gaullism is "a peculiarly French phenomenon, without doubt the quintessential French political phenomenon of the 20th century".
Lawrence D. Kritzman argues that Gaullism may be seen as a form of French patriotism in the tradition of Jules Michelet. He writes: "Aligned on the political spectrum with the right, Gaullism was committed nevertheless to the republican values of the Revolution, and so distanced itself from the particularist ambitions of the traditional right and its xenophobic causes". Furthermore, "Gaullism saw as its mission the affirmation of national sovereignty and unity, which was diametrically opposed to the divisiveness created by the leftist commitment to class struggle".
Gaullism was nationalistic. In the early post-WWII period, Gaullists advocated for retaining the French Empire. De Gaulle shifted his stance on empire in the mid-1950s, suggesting potential federal arrangements or self-determination and membership in the French Community.
Berstein writes that Gaullism has progressed in multiple stages:
- The first phase (1940–45) occurred during World War II. In this period, Gaullism is identified with those French who rejected the armistice with Nazi Germany and the Vichy collaborators led by Philippe Pétain, and joined with General Charles de Gaulle and the Free French Forces, who sought to put France back in the war on the Allied side.
- In the second phase (1946–1958), Gaullism was a type of opposition to the Fourth French Republic. Gaullists in this period challenged the unstable parliamentary government of the Fourth Republic and advocated its replacement with "a president of the republic with preeminent constitutional powers."
- In the third phase (1958–69), "Gaullism was nothing other than the support given to the general's own politics after he returned to power in 1958 and served as president of the newly formed Fifth Republic from 1959 until his resignation in 1969."
Since 1969, Gaullism has been used to describe those identified as heirs to de Gaulle's ideas. The Cross of Lorraine, used by Free France (1940–1944) during World War II, has served as the symbol of many Gaullist parties and movements, including the Rally of the French People (1947–1955), the Union for the New Republic (1958–1967), and the Rally for the Republic (1976–2002).
The "fundamental principle" of Gaullism is a "certain idea of France" as a strong state. In his War Memoirs, de Gaulle describes France as "an indomitable entity, a 'person' with whom a mystical dialogue was maintained throughout history. The goal of Gaullism, therefore, is to give precedence to its interests, to ensure that the voice is heard, to make it respected, and to assure its survival … to remain worthy of its past, the nation must endow itself with a powerful state." Kritzman writes that "the Gaullist idea of France set out to restore the honor of the nation and affirm its grandeur and independence" with de Gaulle seeking to "construct a messianic vision of France's historic destiny, reaffirm its prestige in the world, and transcend the national humiliations of the past." Accordingly, de Gaulle urged French unity over divisive "partisan quarrels" and emphasized French heritage, including both the Ancien Régime and the Revolution. The French political figures most admired by de Gaulle "were those responsible for national consensus—Louis XIV, Napoleon, Georges Clemenceau—who saw as their goal the creation of political and social unity by a strong state."
In order to strengthen France, Gaullists also emphasize the need for "a strong economy and a stable society." Gaullism believes, according to Berstein, that "it is the imperative of the state, as guardian of the national interest, to give impetus to economic growth and to guide it. Liberal opinion is accepted if it promises more efficiency than planning. As for social justice, so long as its natural distrust of big business can be allayed, it is less a matter of doctrine than a means of upholding stability. To put an end to class struggle, Gaullists hope to make use of participation, a nineteenth-century concept of which the general spoke frequently, but which he allowed his associates to ignore."
As part of a strong state, de Gaulle highlighted the necessity to found state institutions on a strong executive, contrasting with the French republican tradition, which emphasized the role of the elected assembly. During his time in office, de Gaulle sought to establish authority by holding direct universal votes and popular referendums and by directly engaging with the nation (via speeches broadcast over radio, press conferences, and trips to the provinces). Even though he frequently spoke on his respect for democracy, his political opponents perceived in his rule a tendency toward dictatorial power; many feared a Bonapartist revival or a republican monarchy. France remained a democracy, however, and de Gaulle's decision to step down as president following voters' rejection of the April 1969 constitutional referendum showed that his commitment to democratic principles was not merely a rhetorical ploy.
In foreign policy, Gaullists are identified with both realism and French exceptionalism, and de Gaulle sought to impose French influence on the global order. Gaullists supported decolonization, which freed France from the burden of empire. This was reflected in de Gaulle's resolution of the Algeria crisis (1954–1962), which was strongly influenced by de Gaulle's realpolitik, or "keen sense of political expediency." Realizing that decolonization was inevitable, and that a continued crisis and extended Algerian War would harm the French economy and perpetuate national disunity, "de Gaulle felt that it was in France's best interests to grant independence and desist from military engagement," thereby preserving French unity and grandeur.
Gaullists emphasize the need for France to "guarantee its national independence without resorting to allies whose interests might not coincide with those of France." The development of independent French nuclear capability, undertaken at significant effort despite much international criticism, was an outgrowth of this worldview. However, de Gaulle simultaneously initiated one of the first international nonproliferation efforts by quietly unshackling and distancing the French program from a diplomatically troublesome secret involvement with an Israeli junior partner, attempting to demilitarize and open to international oversight the Israeli nuclear arms program.
France under de Gaulle sought to avoid a post-World War II bipolar global political order dominated by the two superpowers of the United States and the Soviet Union, and sought to avoid dependence on the United States. Kritzman writes: "Gaullist foreign policy was motivated by its need to distinguish itself from … the two great superpowers. Paradoxically, [de Gaulle] desired to be part of the Western alliance and be critical of it at the same time on key issues such as defense." Most notably, de Gaulle withdrew France from North Atlantic Treaty Organization (NATO) military operations in 1966, and directed non-French NATO troops to leave France, although France remained a NATO member. Gaullists were also critical of the overseas economic influence of the U.S. and the role of the U.S. dollar in the international monetary system. Under de Gaulle, France established diplomatic relations with China earlier than most other Western nations; imposed an arms embargo against Israel (1967); and denounced American imperialism in the Third World.
De Gaulle and the Gaullists did not support Europe as a supranational entity, but did favour European integration in the form of "a confederation of sovereign states" mutually engaged in a "common policy, autonomous from the superpowers", and significantly influenced by France. De Gaulle's hopes to advance this sort of union largely failed, however, "in the face of the desire of the other European powers to remain closely allied to the United States."
Political legacy after de Gaulle
De Gaulle's political legacy has been profound in France and has gradually influenced the entirety of the political spectrum. His successor as president, Georges Pompidou, consolidated Gaullism during his term from 1969 to 1974. Once-controversial Gaullist ideas have become accepted as part of the French political consensus and "are no longer the focus of political controversy." For instance, the strong presidency was maintained by all of de Gaulle's successors, including the socialist François Mitterrand (1981–1995). French independent nuclear capability and a foreign policy influenced by Gaullism–although expressed "in more flexible terms"–remains "the guiding force of French international relations." During the 2017 presidential election, de Gaulle's legacy was claimed by candidates ranging from the radical left to the radical right, including Jean-Luc Mélenchon, Benoît Hamon, Emmanuel Macron, François Fillon and Marine Le Pen.
According to Berstein, "It is no exaggeration to say that Gaullism has molded post-war France. At the same time, considering that the essence of Gaullist ideas are now accepted by everyone, those who wish to be the legitimate heirs of de Gaulle (e.g., Jacques Chirac of the RPR) now have an identity crisis. It is difficult for them to distinguish themselves from other political perspectives." Not all Gaullist ideas have endured, however. Between the mid-1980s and the early 2000s, there have been several periods of cohabitation (1986–1988, 1993–1995, 1997–2002), in which the president and prime minister have been from different parties, a marked shift from the "imperial presidency" of de Gaulle. De Gaulle's economic policy, based on the idea of dirigisme (state stewardship of the economy), has also weakened. Although the major French banks, as well as insurance, telecommunications, steel, oil and pharmaceutical companies, were state-owned as recently as the mid-1980s, the French government has since then privatized many state assets.
The term "traditional Gaullism" (Gaullisme traditionnel) has been used by scholars to describe the core values of Gaullism embodied by the actions and policies of Charles de Gaulle, generally in distinction with other Gaullist currents such as "social Gaullism" and "neo-Gaullism".
Resistant Gaullism (Gaullisme de Résistance) emphasizes the need for French political and military independence from potentially hostile powers, inspired by de Gaulle's role in the fight against Nazi Germany and Vichy France during World War II. The term "first-generation Chiraquian Gaullism" (Gaullisme chiraquien de première génération) has been used to describe politicians loyal to the populist stance and the opposition to European integration and the free market as initially advocated by Jacques Chirac in the late 1970s. This position was embodied in particular by Charles Pasqua and Philippe Séguin, who came to oppose Chirac's shift to neo-Gaullism during the 1990s.
Social Gaullism (or "left-wing Gaullism") focuses on the social dimensions of Gaullism, and has often been linked by scholars to social democracy. Opposed to the class conflict analysis of Marxism, which was perceived as a threat to national unity, de Gaulle advocated instead a "capital-labour association", that is the need for the direct participation of workers in their company's financial results and management, which he believed was a necessary condition for them to take an interest in its functioning and development. This aspect of Gaullism has been promoted by the Democratic Union of Labour between 1959 and 1967, and by politicians like René Capitant, Jacques Chaban-Delmas, Jean Charbonnel, Léo Hamon, Philippe Dechartre or Jean Mattéoli.
"Neo-Gaullism" has been used in the literature to describe a movement that emerged after the death of de Gaulle in 1970 and drew more influence from economic liberalism. Many aspects of neo-Gaullism, such as support for the Maastricht Treaty (1992) and French rapprochement with NATO under Chirac's presidency, have been described as difficult to reconcile with the historical idea of Gaullism. However, key components of Gaullism have remained, including the concept of a strong, independent state, the unity of the French people and references to de Gaulle's leadership. Neo-Gaullists have also conserved in some aspects the idea that France has a role to play in containing the world's "hyperpowers", as seen in Chirac's refusal to follow the US in the Iraq War in 2003.
Pompidolian Gaullism (Gaullisme pompidolien) highlights the need for France to adapt its economy to an increasingly competitive world that may threaten social peace at home, in the legacy of French president Georges Pompidou (1969–1974). "Second-generation Chiraquian Gaullism" (or "Chiraquian neo-Gaullism"), which emerged in the mid-1980s, has been influenced by neoliberalism and is more open to European integration, in the legacy of French president Jacques Chirac (1995–2007).
Gaullist political parties
The following is a list of Gaullist political parties and their successors:
- 1947–1955: Rally of the French People (RPF)
- 1954–1958: National Centre of Social Republicans (RS)
- 1958–1962: Union for the New Republic (UNR)
- 1958–1962: Democratic Union of Labour (UDT)
- 1962–1967: Union for the New Republic – Democratic Union of Labour (UNR – UDT)
- 1967–1976: Union of Democrats for the Republic (UDR)
- 1974–1980s: Democrats Movement (MDD)
- 1976–2002: Rally for the Republic (RPR)
- 1993–2003: Citizen Movement (MDC)
- 1994–2018: Movement for France (MPF)
- 1999–2011: Rally for France (RPF)
- 2002–2015: Union for a Popular Movement (UMP)
- 2003–present: Citizen and Republican Movement (MRC)
- 2008–2014: Debout la République (DLR)
- 2014–present: Debout la France (DLF)
- 2015–present: The Republicans (LR)
- 2017–present: The Patriots (LP)
- 2018–present: Citizen Movement (MDC)
- Berstein 2001b, pp. 307–308.
- Guntram H. Herb, David H. Kaplan. Nations and Nationalism: A Global Historical Overview. Santa Barbara, California, USA: ABC-CLIO, Inc., 2008. Pp. 1059.
- Kritzman & Reilly 2006, pp. 51–54.
- Kahler, Miles (1984). Decolonization in Britain and France: The Domestic Consequences of International Relations. Princeton University Press. pp. 77–99. ISBN 978-1-4008-5558-2.
- Lachaise, Bernard (1998). "Contestataires et compagnons : les formes de l'engagement gaulliste". Vingtième Siècle. Revue d'histoire. 60 (1): 71–81. doi:10.3406/xxs.1998.2759.
- "Nuclear Weapons - Israel".
- Demossier, Marion; Lees, David; Mondon, Aurélien; Parish, Nina (2019). The Routledge Handbook of French Politics and Culture. Routledge. ISBN 978-1-317-32589-5.
- Henri Astier, French wrestle with De Gaulle's legacy, BBC News (15 April 2002).
- Bréchon, Pierre; Derville, Jacques; Lecomte, Patrick (1987). "L'Univers Idéologique des Cadres RPR: Entre l'héritage gaulliste et la dérive droitière". Revue française de science politique. 37 (5): 675–695. doi:10.3406/rfsp.1987.411575. ISSN 0035-2950. JSTOR 43118723.
- Lavillatte, Bruno (2006). "Un gaullisme intransmissible". Médium. 7 (2): 96–105. doi:10.3917/mediu.007.0096. ISSN 1771-3757.
- Knapp, Andrew; Wright, Vincent (2006). The Government and Politics of France. Routledge. p. 226. ISBN 978-0-415-35732-6.
- Lachaise, Bernard (1994). "Le RPR et le gaullisme. Les infortunes d'une fidélité". Vingtième Siècle. Revue d'histoire. 44 (1): 25–30. doi:10.3406/xxs.1994.3107.
- Pozzi, Jérôme (12 May 2020). "Le gaullisme social : le rendez-vous manqué de la droite française ?". The Conversation.
- Berstein, Serge (2001a). Histoire du gaullisme. Perrin. p. 370. ISBN 2-262-01155-9. OCLC 407137019.
- Tiersky, Ronald (1996). "A Likely Story: Chirac, France-NATO, European Security, and American Hegemony". French Politics and Society. 14 (2): 1–8. ISSN 0882-1267. JSTOR 42844543.
- Jackson, Julian (1999). "General de Gaulle and His Enemies: Anti-Gaullism in France Since 1940". Transactions of the Royal Historical Society. 9: 43–65. doi:10.2307/3679392. ISSN 0080-4401. JSTOR 3679392. S2CID 154467724.
- Miller, John J. (3 January 2005). "Liberté, Egalité, Absurdité". The New York Times.
- Choisel, Francis, Bonapartisme et gaullisme, Paris, Albatros, 1987.
- Choisel, Francis, Comprendre le gaullisme, L'Harmattan, 2016.
- Gordon, Philip H. A Certain Idea of France: French Security Policy and the Gaullist Legacy (1993) online edition
- Grosser, Alfred. French foreign policy under De Gaulle (1977)
- Jackson, Julian. De Gaulle (2018) 887pp; the most recent major biography.
- Kritzman, Lawrence D; Reilly, Brian J (2006). "Gaullism". The Columbia History of Twentieth-century French Thought. Columbia University Press. ISBN 0-231-10791-9.
- Kulski, W. W. De Gaulle and the World: The Foreign Policy of the Fifth French Republic (1966) online free to borrow
- Touchard, Jean, Le gaullisme (1940–1969), Paris, Seuil, coll. Points Histoire.1978.
- Berstein, Serge (2001b). "Gaullism". The Oxford Companion to Politics of the World, 2nd edition, ed. Joel Krieger. Oxford University Press. ISBN 0-195-11739-5.
Play can describe everything from unstructured activities to activities that have full support from an adult. But all play is active and fun!
Free play refers to spontaneous, unstructured play. Children choose what and how to explore. They are actively engaged, rather than watching another person play or teach. A baby might engage in free play by banging blocks together. A toddler might insert blocks into a shape sorter or make tea in her play kitchen.
Guided play is like free play, in that it’s focused on what the child is interested in. But unlike free play, an adult is present to facilitate a playful learning experience. That doesn’t mean that adults take over the play. Instead, adults act as play partners and curious onlookers. They take on a subtle role by asking questions to help children think. When playing pretend store, you could ask, “How many more tomatoes do you need for your sauce?” They also make suggestions about what else they might try. You might suggest a smaller or different shaped block will keep the tower from toppling over.
Both types of play benefit math learning in their own way. Let’s look at some examples of how children learn about math during free and guided play.
- Free play is spontaneous, unstructured play that is child-directed.
- Guided play is like free play in that it's focused on what the child is interested in, but unlike free play, an adult facilitates a playful learning experience.
Conjunctivitis is a term describing inflammation of the superficial tissue (conjunctiva) of the eye. It can be either infectious or non-infectious. Infectious causes can be viral or bacterial. Non-infectious causes include allergies, chemical irritation, physical injury or foreign bodies, or certain systemic diseases. These must be differentiated from infectious causes since the treatments are completely different, and if the wrong treatment is ordered, the condition can worsen.
Viral conjunctivitis is the most common type of infectious conjunctivitis. It is frequently bilateral and transmitted by hand-to-eye touch from infected surfaces. Adenovirus is the most common culprit and is treated with strict hygiene measures and symptomatic treatment. Since it is highly contagious through touch, protect household contacts by not sharing towels, clothing, or bedding, washing hands frequently, and avoiding touching the face. Other more serious viral infections include herpes simplex and varicella-zoster. These infections need specialty consultation and patients often need antiviral medication.
Bacterial conjunctivitis can spread from person to person in many ways. These include hand-to-eye contact, eye contact with contaminated objects, sexual encounters with eye to genital contact, or vertically from mother to baby. Bacteria can also spread by large respiratory tract droplets. Respiratory pathogens such as Streptococcus pneumoniae, Moraxella catarrhalis, and Haemophilus influenzae are common causes of bacterial conjunctivitis. Other potentially serious causes are Escherichia coli, Chlamydia trachomatis, and Neisseria gonorrhoeae.
Diagnosing the type of conjunctivitis can be challenging since the signs and symptoms of causes overlap. The ability to quickly determine the presence of specific pathogens can guide care and expedite specialty referral if necessary.
Benefits of Working with Solaris Diagnostics
Now you can provide patients with the quickest and most accurate diagnosis possible from our high-complexity, CLIA-accredited laboratory. Choose Solaris Diagnostics for:
- Simple collection
- Results by 5:00 pm on the day the lab receives the specimen
- Direct access to experienced scientific staff
- Among the most personal customer service experiences in the industry
Our experts assist clinicians and health care providers in rapidly identifying the pathogens and underlying causes of disease. We know that fast and accurate diagnoses lead to better patient outcomes. |
Electronics is all about IoT, microcontrollers, and microprocessors. Right??
Wrong. Electronics isn't just about coding a microcontroller; that is largely the job of the software folks. I know that skill is important, but it isn't your core stream if you are an electronics hobbyist.
Building a project with an Arduino is child's play; even a sixth-grade student can easily learn how to code an Arduino and build fantastic projects. An electronics person has those skills as well as a lot more.
There are several skills that an electronics hobbyist should know.
Some of them are listed below:
Circuit Designing
The process of circuit design can cover systems ranging from complex electronic systems all the way down to the individual transistors within an integrated circuit. For simple circuits the design process can often be done by one person without needing a planned or structured design process, but for more complex designs, teams of designers following a systematic approach with intelligently guided computer simulation are becoming increasingly common. In integrated circuit design automation, the term “circuit design” often refers to the step of the design cycle which outputs the schematics of the integrated circuit. Typically this is the step between logic design and physical design.
Formal circuit design usually involves a number of stages. Sometimes, a design specification is written after liaising with the customer. A technical proposal may be written to meet the requirements of the customer specification. The next stage involves synthesising on paper a schematic circuit diagram, an abstract electrical or electronic circuit that will meet the specifications. A calculation of the component values to meet the operating specifications under specified conditions should be made. Simulations may be performed to verify the correctness of the design.
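To make the component-value calculation stage concrete, here is a small Python sketch. The part values (a red LED on a 5 V supply, a 1 kΩ/100 nF filter) are hypothetical examples, not values from any particular design:

```python
import math

def led_series_resistor(v_supply, v_led, i_led):
    """Series resistor needed to limit LED current: R = (Vs - Vf) / I."""
    return (v_supply - v_led) / i_led

def rc_lowpass_cutoff(r_ohms, c_farads):
    """-3 dB cutoff of a first-order RC low-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# Example: 5 V supply, red LED (Vf ~ 2.0 V), 10 mA target current.
r = led_series_resistor(5.0, 2.0, 0.010)   # 300 ohms
f = rc_lowpass_cutoff(1_000, 100e-9)       # ~1.59 kHz for 1 kOhm and 100 nF
print(f"Series resistor: {r:.0f} ohms, RC cutoff: {f:.0f} Hz")
```

In practice you would round the computed resistor up to the nearest standard (E12/E24) value and re-check the resulting current against the specification.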
PCB Layout Designing
A printed circuit board (PCB) mechanically supports and electrically connects electronic components or electrical components using conductive tracks, pads and other features etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate. Components are generally soldered onto the PCB to both electrically connect and mechanically fasten them to it.
Printed circuit boards are used in all but the simplest electronic products. They are also used in some electrical products, such as passive switch boxes.
Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Specialized CAD software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout only has to be done once. PCBs can also be made manually in small quantities, with reduced benefits.
Debugging a circuit
Debugging is the process of finding and resolving defects or problems that prevent the correct operation of a system, whether that system is software or a physical circuit. In software, debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling; hardware debugging relies on the same habit of systematic measurement and hypothesis testing.
When a complicated circuit is first built, it is not uncommon for the circuit to be non-functional, due to wiring/connection errors, faulty parts, and/or incorrect equipment settings (e.g. wrong power-supply settings). The process of finding and correcting these problems is called debugging. It is very easy to get overwhelmed by the circuit complexity and to get lost in the myriad of possible sources of error. However, it is not difficult to debug a circuit if you approach the problem systematically. Like a doctor examining a patient, an electronics hobbyist should find the cure for a malfunctioning circuit by observing, measuring, and posing and testing various hypotheses, until the error(s) is (are) identified and corrected. Two of the basic electronic measurement tools, the digital multimeter and the oscilloscope, will be your main instruments for debugging electrical circuits.
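One way to sketch that systematic approach in code: record the voltage you expect at each circuit node, measure the actual values with a multimeter, and flag the nodes that disagree. The node names and readings below are entirely made up for illustration:

```python
def find_suspect_nodes(expected, measured, tolerance=0.1):
    """Return nodes whose measured voltage deviates from the expected
    value by more than `tolerance` volts -- good starting points for probing."""
    return [node for node, v_exp in expected.items()
            if abs(measured.get(node, 0.0) - v_exp) > tolerance]

# Hypothetical multimeter readings, in volts.
expected = {"VCC": 5.0, "Q1_base": 0.7, "Q1_collector": 2.5, "OUT": 2.5}
measured = {"VCC": 4.98, "Q1_base": 0.68, "Q1_collector": 4.95, "OUT": 4.93}
print(find_suspect_nodes(expected, measured))  # ['Q1_collector', 'OUT']
```

Here the collector sitting near the supply rail while the base bias looks correct would point you toward the transistor or its collector load, narrowing the search before you touch the soldering iron.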
Soldering and De-soldering
Soldering is a process in which two or more items (usually metal) are joined together by melting and putting a filler metal (solder) into the joint, the filler metal having a lower melting point than the adjoining metal. Soldering differs from welding in that soldering does not involve melting the workpieces. In brazing, the filler metal melts at a higher temperature, but the workpiece metal does not melt. In the past, nearly all solders contained lead, but environmental and health concerns have increasingly dictated the use of lead-free alloys for electronics and plumbing purposes.
Fixing small circuits in home appliances
This skill is a must if you are an electronics hobbyist: if you can't repair something, it means you don't yet have an understanding of the circuit or its functioning. You should know how to read a resistor, capacitor, and inductor (yes, those also have color codes), as well as commonly used ICs such as the 555 timer and logic gates (TTL/CMOS, etc.).
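Both of those reading skills can be captured in a few lines of Python. This is a hedged sketch: it handles only 4-band resistors (ignoring the tolerance band) and uses the textbook approximation for the 555 astable frequency; the example part values are arbitrary:

```python
# 4-band resistor color code: first two bands are digits,
# the third is a power-of-ten multiplier (tolerance band ignored here).
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "grey", "white"]

def resistor_value(band1, band2, multiplier):
    """Resistance in ohms from the first three color bands."""
    d1, d2 = COLORS.index(band1), COLORS.index(band2)
    return (d1 * 10 + d2) * 10 ** COLORS.index(multiplier)

def ne555_astable_freq(r1, r2, c):
    """Approximate astable frequency of the classic 555 circuit:
    f ~ 1.44 / ((R1 + 2*R2) * C)."""
    return 1.44 / ((r1 + 2 * r2) * c)

print(resistor_value("brown", "black", "red"))      # 1000 -> 1 kOhm
print(resistor_value("yellow", "violet", "orange")) # 47000 -> 47 kOhm
print(ne555_astable_freq(1000, 10000, 100e-9))      # ~686 Hz
```

Being able to do these conversions in your head (or verify them with a quick script) is exactly the kind of fluency that separates repairing a circuit from merely wiring one up.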
Ahh, that sounds like too much for one person? No, that is just the beginning; I haven't listed everything here.
The electronics hobby is not just about coding a microcontroller and connecting a few sensors or modules with some pre-written library. It's about understanding the building blocks of bigger systems in deep detail, enhancing circuits, and making new things.
Go buy yourself a basic kit and start working on electronics, not just CS. Design your own circuit, lay out the PCB for it, solder the components, and make it work. I am sure you will love it.