Facts on File Science Online: Contains science experiments, biographies of scientists, timelines, and more.
Kids Search: Research tool for middle school and elementary age students.
Learning Express Library: Provides practice tests and tutorial course series designed to help succeed with academic or licensing tests.
Searchasaurus: Search tool designed for children with access to topics like animals, history, health, science, and what's in the news.
Student Research Center: Contains thousands of curriculum-targeted primary documents, biographies, topical essays, background information, critical analyses, full-text coverage of over 300 magazines and newspapers, over 20,000 photographs and illustrations, and more than 8 hours of audio and video clips.
TOPICsearch: Allows users to explore social, political and economic issues, scientific discoveries and other popular topics discussed in today's classrooms.
World Book Early World of Learning: Contains three learning environments, each targeting a critical area of the development of young learners in the early elementary grades.
World Book Kids: Encyclopedia for children.
World Book Online Resource Center: Online version of the World Book Encyclopedia.
From SKYbrary Wiki
Motivation can be described in several ways, including the following two:
- the reason, or reasons, one has for acting or behaving in a particular way – these reasons could also be called motivations, or motivating factors or triggers, and
- the general desire or willingness of someone to do something – this definition is probably connected more closely with an attitude (positive or negative) than any specific “motive”.
Other words that can often be used in the same context as motivation, or motive, include: incentive, stimulus and impulse.
Motivation is goal-orientated; sometimes “towards” the specific object of our goal, and sometimes, less specifically, just generally “away” from our current situation. At a basic human level we may be motivated by hunger to go and make a sandwich, or by coldness to put on a jumper. Both of these actions are specifically goal-orientated. However, if we were in an uncomfortable situation, e.g. a noisy environment, or if we felt under-confident within a group, then we might just be happy to be anywhere else – it wouldn’t really matter where. At a basic level these two scenarios can be compared with the “carrot” and the “stick” respectively; we move towards things we like and away from things we don’t.
Aviation and Motivation
Within the aviation environment, motivation can be considered as an on-going process that includes:
- initiating, or activating, motivation – how we get started (often achieved by imagining a successful end result).
- guiding motivation – how we use feedback to measure our progress and ensure we are travelling in the right direction.
- persisting motivation – how we overcome obstacles and cope with setbacks (re-setting the end goals if necessary).
Each of these motivation processes can be modulated by “intensity”. Intensity determines how much energy and urgency we put into achieving our goal, and how many obstacles we are prepared to overcome. Intensity is governed by our evaluation of importance – how important we consider the task to be. It is this last point that organisations and managers (motivators) need to consider when motivating a workforce – persuading employees of the importance of a goal.
In determining the importance of motivation in aviation we need to ask the questions:
- what goals are we trying to achieve, and
- how do we want employees to achieve these goals – i.e. in what manner.
The first of these concerns something specific that can be measured when achieved; the second is less well-defined, but can be “observed”, i.e. we can “see” whether someone is working professionally, or safely, with discipline and focus, care and attention.
Clearly the range of goals is broad, from the simple - fitting a component in accordance with the instructions, to the complex - completing a shift in air traffic control without endangering any aircraft. So, for the purposes of this Article, and purely as an example, it may be useful to consider the following two goals, one specific and the other more general:
- all employees following standard operating procedures (safety rules, task instructions, job-cards etc), and
- all employees communicating, behaving and working safely.
In other words, what motivates employees to follow or break the rules, and to work safely?
Leadership & Motivation
Achieving these goals – employees following standard operating procedures and working safely, all of the time – comes down to leadership, resourcing and trust. In other words it is a culture-based concept and, like culture, motivation can be built on structure within an organisation, from policies, through controlling the workplace environment, to practices.
Using a “stick” as motivation will see employees find a variety of solutions to avoid punishment i.e. there is no control over the outcome; something common to organisations with negative safety and organisational cultures.
Whereas using a “carrot, or carrots” as motivation can be connected to specific outcomes and therefore control of the result is much easier; something common to positive safety and organisational cultures, backed up by a just culture.
The question is what carrots are most effective? The most frequently cited and strongest rewards that people find motivating in the workplace include:
- a sense of achievement
- recognition from colleagues for good work
- enjoying aspects of the job in itself
- a sense of responsibility
- a sense of career advancement, and
- a feeling of personal growth.
This list is completely detached from financial rewards; people will sometimes work relentlessly hard regardless of financial reward (because it is satisfying and the right thing to do). In fact financial rewards, in the form of bonuses, eventually act as de-motivators as they lose their value and create jealousy amongst colleagues.
How are aviation organisations and workplaces structured to facilitate the above carrots? If meetings of senior managers are convened only to wave a stick, then perhaps not at all. Organisations can help managers to motivate by considering the following:
- wherever possible (i.e. safe) give employees the freedom to do the job their way – this requires clear goals, boundaries and standards to be agreed
- this freedom will give employees more responsibility and ownership over their work
- provide feedback to employees so that they have a concept of what is good performance – this will instil pride
- provide perspective to employees so they can understand their role and contribution within the greater organisation and operation, i.e. what difference they are making.
Harvard psychologist Robert White coined a word to sum this up: “effectance”.
The above motivating measures do not refer to either carrot or stick, instead they are about creating a supporting environment, which needs to include resources and training.
The Motivated Individual
Provided the organisation facilitates positive motivation in its workforce by providing the right structure and environment, individual employees are more likely to contribute, cooperate, perform and achieve – and to enjoy their work.
Some professionals are more likely to be self-motivated, irrespective of the company they work for and regardless of the task. This can be through professional pride connected to a certainty of career progression, and to some extent job security.
Other workers, e.g. temporary and unskilled workers, may require more creative motivation techniques from leaders; however, job satisfaction, recognition and responsibility are still key factors. For this group, low reward, few career prospects, and lack of job security can all have a negative impact on individual motivation.
Employees will occasionally omit to follow standard operating procedures; this can be for a variety of reasons:
- confusion (especially poorly written procedures)
- ambiguous instructions
- lack of clarity with goals
- lack of knowledge
- under-resourcing (equipment, manpower, time)
- conflicting interests (commercial, safety and personal)
It is important that an organisation’s Safety Management System is capable of investigating each case and developing strategies that reduce future risk, including motivating tactics to encourage adherence to procedures and professional working practices.
Work nowadays can demand more from employees than ever before, and yet in many societies work plays a less important role in an individual’s life (in terms of satisfaction). Organisations are therefore more likely to motivate employees when they can blend employees’ personal goals with work and organisational goals.
Finally, everyone can become demotivated through routine and excessive demands. Therefore, opportunities for rest and recuperation need to be offered and used. In addition to re-charging our batteries, change and variety at work can be used to motivate.
- Persaud, R. (2005). The Motivated Mind. London: Bantam Press.
- Haidt, J. (2006). The Happiness Hypothesis: Putting Ancient Wisdom and Philosophy to the Test of Modern Science. London: Arrow Books, Random House.
- Gostick, A. & Elton, C. (2007). The Carrot Principle: How the Best Managers Use Recognition to Engage Their People, Retain Talent, and Accelerate Performance. New York: Free Press.
Charter schools have become the modern rival of public schools, but does the reality of charter performance match the hype? According to Change.org, "Charter schools get overwhelmingly positive press and make a lot of claims about their success. But actually, numerous studies confirm that their achievement is indistinguishable from that of traditional public schools. Some are very successful, some are troubled and struggling, and the rest are somewhere in between just like traditional public schools."
In a closer examination, charter schools, as explained by US News and World Report, are publicly funded institutions that operate under their own standards of conduct and curriculum outside the realm of local public school districts. Although these institutions are funded by tax dollars, charter schools are ultimately given the freedom to establish their own methods of operation, similar to how many private schools are able to design their instructional and social practices. According to the National Education Association, although some state statutes, regulations and rules may still apply to charter schools, they are generally outside the bounds of traditional educational oversight by the state and instead are governed by a board of directors. The original impetus for the creation of charter schools was to increase competition for students, thus giving parents more choices in terms of where their children go to school. It was also theorized that increased competition between public and charter schools would lead to better educational programs for all students.
Yet, despite these freedoms, many experts argue that the charter schools are under-performing in comparison to public schools. On the other hand, supporters of charter programs argue that the data used to draw negative attention to charter school scores is misleading, biased, or falsely computed. With staunch supporters on both sides of the debate, charter schools and public schools are continually being thrown into the boxing ring.
Test Scores: Charter Schools vs. Public Schools
In evaluating some of the statistical studies that seek to compare the performance of charter and public schools, recent investigations conducted by the Center for Research on Education Outcomes (CREDO) at Stanford University reveal that students' test scores may prove that public schools are now outperforming charter schools. The Stanford analysts compared reading and math state-based standardized test scores between charter school and public school students in 15 states, as well as scores in the District of Columbia. Experts found that 37 percent of charter schools posted improvements in math scores; however, these improvement rates were significantly below the improvement rates of students in public school classrooms. Furthermore, 46 percent of charter schools experienced math improvements that were "statistically indistinguishable" from the average improvement rates shown by public school students.
Another study, reported by the New York Daily News, found that public schools and charter schools in New York City showed equally “dismal” performance on state assessments aligned to more rigorous standards. Just 25 percent of charter school students achieved proficiency in English, one percent less than public school students. In math, 35 percent of students at charter schools were proficient, as compared to 30 percent of public school students. These most recent scores represent a continuous five-year drop in math and English scores for all schools in New York City.
Yet, in Chicago, charter schools seem to be finding success where public schools are not. According to a story by the Chicago Tribune, charter school students are showing greater gains in both math and English than their public school counterparts. These gains are even more significant for low-income and minority students. Over the last five years, charter school students in Chicago performed as well as or better than public school students in terms of achievement in math and English.
Looking Between the Numbers
While recent reports seem to support the triumphs of public schools in some areas, a deeper assessment of various studies and statistics reveals that students who come from lower-income families, or students who are English language learners, have higher success and performance rates in charter schools than their public school counterparts. Adding to these positive findings, supporters of charter schools also tend to boast that their programs offer significantly more rigorous challenges and requirements than public schools.
In addition, math and reading scores alone may not be a sufficient analysis of the performance of charter schools, as some institutions cultivate students with a particular talent for arts, technology, or music. The innovation and curricular experimentation seen in charter schools benefits not just charter school students, but also public school students, whose schools introduce new programs of their own in order to keep pace with those offered at charter schools.
Conversely, opponents of charter schools argue that although they have more latitude for developing curricula for high-achieving students, charter schools lack extensive special needs programs. Therefore, many believe that charter schools discourage the enrollment of special needs students, or that they simply pick and choose the brightest students without adjusting their programs for accommodating circumstances.
What Does the Future of Charter Schools Hold?
In their most ideal form, charter schools were originally meant to serve the poorest of low-income students. In reality, however, charter schools may accept small percentages of low-income kids, but they generally do not admit extremely high risk, high need, or challenging students.
In addition, charter school enrollments are propelled only by self-initiative. By law, a school leader cannot demand that a student attend a charter program; thus, only parents who are made aware of the benefits of various local charter programs are able to sign their child up for such opportunities. As a result, parents who are unable or unmotivated to take a driven interest in their child's education typically leave their children in traditional public schools. Sadly, it is this same pool of children who typically are the under-performing students.
As Change.org further asserts, educators who are working with unique family circumstances and challenges are forced to deal with "Parents who have been charged with drug possession, prostitution, and other crimes. These are the types of parents who aren't likely to be researching the best charter schools for their children and filling out all the forms."
While the debate between charter and public school programs continues to gain attention, President Obama has declared his strong support for charter school investments. In fact, he allocated a large sum of stimulus money towards the enhancement of charter schools across the country.
Unfortunately, since charter schools have only been in existence in the United States since the 1990s, it may be too soon to tell whether or not these institutions are fairly, justly, and effectively providing students with more rigorous challenges and opportunities than their public school counterparts. Ultimately, the conflicting data from research demonstrates that wide variability is found in the quality of education and the performance of children at both charter and public schools. Thus, the debate about which school is better rages on.
While preparing a workshop on DNA detection in Oaxaca, Mexico, David Quist found a surprise: an alien gene embedded in a sample of native corn, known as criollo. The alien was actually familiar, a type of gene commonly found in the genetically modified crops grown in the United States. Indeed, Quist observed the same DNA signature in a can of American corn he had brought for comparison. But this was an ancient strain of maize cultivated in the remote mountains of southern Mexico. How did the gene get there?
Quist and Ignacio Chapela, both microbial ecologists at the University of California at Berkeley, performed a comprehensive study of criollo in the region. Sampling four fields more than 12 miles from the closest mountain road, the researchers discovered that the native corn had incorporated not one but several genes found in engineered American corn. One sample of criollo even contained the gene for Bt toxin, an insecticide derived from bacteria.
How these genes migrated into Mexican criollo remains a mystery. Pollen grains can carry genes from engineered plants into nearby native strains or even into closely related weeds, but in 1998 Mexico banned the planting of genetically modified crops. Chapela suspects farmers may be planting imported corn originally distributed as food, or that migrant workers may be bringing samples back from the United States. Whatever its source, the genetic pollution could have grave consequences. Corn originated in this region, and it is here that it is most genetically diverse. "If you lose that diversity, you lose the possibility of finding disease-resistance genes in the future," says Chapela. "It's a serious challenge to food security worldwide."
Language Teaching Ideas
contributed by Claire Garbett
writes: 'Here are some teaching ideas. I have used them mainly in Spanish and French, but they would work for most languages. I teach in Key Stage 2, but I have also used most of these ideas with adult learners.
Flash Card Guessing Game
One child has a flash card and the other children have to guess what it is by asking closed questions: Is it an animal? Is it big? Is it a fruit? Is it blue? and so on. The cardholder can only answer "yes", "no" or "more or less". They really enjoy this and it can be built on as vocabulary increases.
Pictionary (or in Spanish I call it Picionario)
Version one - a child draws something on the (interactive) whiteboard and their team/whole class has to guess what it is in the target language. Make sure they draw something you have taught!
Version two - Each child has an A4 piece of paper and folds it into 8. They have coloured pencils.
You then call out 8 objects, linked to a topic or mixed, and they have to draw them. At the end you can instruct them to "Muestra/Montrez" and they show their drawings. They enjoy seeing which ones they got right.
Teaching Food and Drink
I use real items and pass them around the class. If this is not practical, then use flash cards. I also use a worksheet with a list of fruit and vegetables and ask them to split them into fruit or vegetable. Then they can write an idea for a soup, salad and juice. You can encourage horrible combinations like strawberry and onion soup, or garlic and cabbage juice. If they have time to draw their invention, even better!
As a follow up, I give each child a paper plate and they draw and label food on it, e.g. "Tapas", "Verduras" etc.
Describe and Draw
This is a favourite with all ages of children (I did it with a mixed-age class of adults and 7-12 year olds). I have a collection of sheep of all sorts of colours and patterns, but any toy animals could be used. The children sit back to back; one describes the animal and the other has to draw it. Then they all show their efforts to the class. This reinforces colours, body parts and words such as stripes, spots and checks.
If you are aware of your children’s ability in maths, then simple sums can be used to reinforce numbers.
Shops and Restaurants
This may seem obvious to those who employ such strategies in KS1, but turning the classroom into a French restaurant or Spanish market with a few props, and giving children the chance to interact in role (with some props to help), really works wonders for language retention – and they enjoy and remember it too. So get an old tablecloth and start saving boxes and jars!'
Remember when we watched all the videos from Russian dashboard cameras back in February? The house-sized meteor that broke up over Chelyabinsk, Russia, streaking through the sky like so many end-of-the-world type movies? That one event, while not extinction-level by any means, has allowed NASA satellites to track the dust pattern in the atmosphere, but there may be more meteors to come.
NASA, with the use of the NASA-NOAA (National Oceanic and Atmospheric Administration) Suomi National Polar-orbiting Partnership satellite, has been able to track the dust cloud from the Chelyabinsk meteor that entered our atmosphere on February 15 of this year. The meteor broke up 14.5 miles above the Russian city with the force of about 30 atomic bombs similar to the one used on Hiroshima at the end of WWII. The event destroyed some property and left about 1,000 people injured.
The event did leave some debris to fall to the ground, but the more interesting thing, to meteorologists, was the dust cloud that was left behind. The NASA-NOAA satellite began tracking the cloud in earnest in order to have a detailed picture of how meteor dust clouds move across our planet.
A meteor this size had not entered our atmosphere since 1908, when one devastated a forest in Siberia. This has created quite the opportunity to understand, and more accurately model, not only what may have happened to the dinosaurs, but what a larger meteor may do in the future.
The dust cloud spread east and was over the Aleutian Islands, a cluster of islands off the coast of Alaska, within about 24 hours. Four days later the dust cloud had circumnavigated the Northern Hemisphere and returned to Chelyabinsk. Now, three months after the event, a noticeable dust belt still exists around the Northern Hemisphere. This provides scientists with a fairly clear picture of the movement of a large dust cloud and the time it would take to move around the Earth.
Two scientists from the Complutense University of Madrid, Carlos de la Fuente Marcos and Raúl de la Fuente Marcos, published a letter in Monthly Notices of the Royal Astronomical Society: Letters detailing their research into where the meteor came from and whether there are possibly more to come.
The pair ran billions of simulations of possible trajectories for the Chelyabinsk meteor and narrowed them down to the 10 most likely paths it could have taken. From this information, they searched NASA’s catalogue of known asteroids following a similar path, which led them to infer that more meteors may be on their way to Earth.
In the letter they stated, “we assume that the meteoroid responsible for the Chelyabinsk event was the result of a relatively recent asteroid break-up event and use numerical analysis to single out candidates to be the parent body or bodies.”
In fact, their analysis has led them to believe that the parent asteroid may be 2011 EO40, a 200m-wide object. The researchers assume that this asteroid has broken up, creating a cluster of meteoroids on a similar path to Earth. In their estimation there could be a number of smaller meteors, like the one that hit near Chelyabinsk, and possibly two much larger ones as part of the group.
Assuming that their calculations are correct and the gravitational pull of planets and other stars do not alter their course, we may see more meteors entering our atmosphere. With NASA’s tracking and modeling of the Chelyabinsk meteor dust cloud, we will at least be better prepared if more do show up in the future.
By Iam Bloom
As Iowa’s population and economy both grow in the 21st century, challenging questions face the state’s residents about how to best use land to balance the competing, yet interdependent, needs of Iowans, its economy, and its environment. Students watch a video about working landscapes—defined as places where people live and work in a way that minimizes damage to the local environment—to see if they are a possible solution to Iowa’s modern challenges. Students then consider their own lives, including where their families live, work, and go to school, to determine if the three elements of a working landscape (social, ecological, and economic) are in balance, or if one element outweighs the other two.
This lesson is part of "Great States: Iowa | Unit 8: Modern Iowa." In this unit, students will explore the challenges and opportunities facing Iowa in the 21st century.
SS.3.6/SS.4.5/SS.5.6: Identify challenges and opportunities when taking action to address problems, including predicting possible results.
An interactive whiteboard, projector, or another type of screen to show videos to the class
Class set of Working Landscapes handout
- Explain to students that as Iowa’s population and economy both grow in the 21st century, challenging questions face the state’s residents about how best to use land to balance the competing, yet interdependent, needs of Iowans, its economy, and its environment. We all need to utilize the Earth to live, but we cannot overextend our use of the Earth’s resources. The goal is to have our living and working needs met without harming the environment.
- Tell students they will be watching a video about a “working landscape” that shows the healthy balance between humans and the environment. Distribute the Working Landscapes handout, and instruct students to take notes while watching the video.
- Play the video, Working Landscapes – Basics. [3:03]
- Have students complete the handout and review the answers as a class.
- A “working landscape” is a healthy, natural ecosystem that thrives under human influence. [0:02]
- The three areas of a working landscape that must be kept in balance are social, ecological, and economic. [0:40]
- Mutual sustainability is when everyone within the environment balances their own needs with the needs of the ecosystem. [0:30]
- Answers will vary.
- Answers will vary.
“A tragic flaw is an error or defect in the tragic hero that leads to his downfall.” (http://www.bedfordstmartins.com/literature/bedlit/glossary_t.htm) In the history of literature, if the question of who was the most indecisive character were brought up, Hamlet would be a prime candidate. Hamlet had numerous chances to exact revenge for his father’s death but was only able to follow through after the accidental death of his mother. Hamlet’s inability to make a decision ultimately leads to his demise, and that indecision is his tragic flaw.
What makes a tragic hero? Dr. Peter Smith, Associate Professor of English at Kentucky State University, broke the archetypal characteristics of a tragic hero down into six groups. Of the six, four will be discussed, the first being “noble stature.” (http://www.kysu.edu/artsscience/ENG411/tragic%20hero.htm) Smith said that the fall of one with noble stature will affect not only their own life but also the lives of the people who look to them for support. Hamlet is the prince of Denmark; the people of Denmark rely on a strong royal family to rule and support the country. Next, Smith discussed the “tragic flaw” (http://www.kysu.edu/artsscience/ENG411/tragic%20hero.htm) which leads to the decline of the hero. Hamlet’s inability to make a decision led to his death, which will be discussed in more detail further on. Thirdly, Smith says that one must have “free choice. The tragic hero falls because he chooses one course of action over another.” (http://www.kysu.edu/artsscience/ENG411/tragic%20hero.htm) Hamlet is not forced to kill but makes the decision on his own. Finally, Smith says, “the punishment must exceed the crime.” (http://www.kysu.edu/artsscience/ENG411/tragic%20hero.htm) The audience cannot f...
... middle of paper ...
... the one behind the curtain and kills Polonius by mistake without a second thought.
Hamlet is a tragic hero because he follows the guidelines set by Dr. Smith: he has noble stature, he has a tragic flaw, he has free choice, and finally, his punishment exceeds his crime. (http://www.kysu.edu/artsscience/ENG411/tragic%20hero.htm) His downfall was his inability to make a decision. He vowed revenge for his father’s death, yet stalled time and time again; when he finally went through with it, he died himself.
Clark, William George, and Wright, William Aldis. The Unabridged William Shakespeare. Philadelphia: Courage Books, 1997.
“Glossary of Literary Terms.” The Meyer Literature Site. February 7, 2002.
Smith, Peter. “The Characteristics of an ‘Archetypal’ Tragic Hero.” Characteristics of a Tragic Hero. 2002. Kentucky State University. February 7, 2002.
- Hamlet, the titled character of Hamlet, Prince of Denmark, William Shakespeare’s most prominent play, is arguably the most complex, relatable, and deep character created by Shakespeare. His actions and thoughts throughout the play show the audience how fully developed and unpredictable he is with his mixed personalities. What Hamlet goes through in the play defines the adventures encountered by a tragic hero. In this timeless tragedy, despite Hamlet’s great nobility and knowledge, he has a tragic flaw that ultimately leads to his ironic death.... [tags: Hamlet, Shakespeare]
1364 words (3.9 pages)
- Hero’s are defined by the actions they take, but they either live to see there fall or die heroically. One of shakespheres most memorable tragic hero’s Hamlet is the definition of a tragic hero. In the book, Hamlet, Shakespeare’s character hamlet is determined on killing his uncle the king. This goal proves to be challenging to him due to his morals. He often struggles with this throughout the book. This proves to be his downfall for not deciding to kill the king until the very end. A tragic hero has to have a fatal flaw that, combined with fate, brings tragedy.... [tags: classic, shakespeare]
1015 words (2.9 pages)
- The tragedy of Hamlet, Shakespeare’s most popular and greatest tragedy, presents his genius as a playwright and includes many numbers of themes and literary techniques. In all tragedies, the main character, called a tragic hero, suffers and usually dies at the end. Prince Hamlet is a model example of a Shakespearean tragic hero. Every tragedy must have a tragic hero. A tragic hero must own many good traits, but has a flaw that ultimately leads to his downfall. If not for this tragic flaw, the hero would be able to survive at the end of the play.... [tags: Shakespearean Literature]
- An Aristotelian’s tragic hero is a person of nobility who is ill-fated by a defect - seemingly intertwined with attributes that make him/her prosperous - in his/her character. Usually the protagonist, a tragic hero is commended for his/her honorable traits and is depicted to be the victim in most works of literature. In Hamlet, by William Shakespeare, the traditional portrayal of a tragic hero is defied: in lieu of being the victim, the tragic hero becomes the culprit of the play. By instilling the antagonist, King Claudius, with honorable qualities like that of a tragic hero, Shakespeare demonstrates that a person is never at the extreme ends of the moral spectrum but rather at the center:... [tags: hamlet, shakespeare, tragic hero]
- Hamlet as the Tragic Hero Hamlet is the best known tragedy in literature today. Here, Shakespeare exposes Hamlet’s flaws as a heroic character. The tragedy in this play is the result of the main character’s unrealistic ideals and his inability to overcome his weakness of indecisiveness. This fatal attribute led to the death of several people which included his mother and the King of Denmark. Although he is described as being a brave and intelligent person, his tendency to procrastinate prevented him from acting on his father’s murder, his mother’s marriage, and his uncle’s ascension to the throne.... [tags: Shakespeare Hamlet]
- Hamlet as a Tragic Hero William Shakespeare, the greatest playwright of the English language, wrote a total of 37 plays in his lifetime, all of which can be categorized under tragedy, comedy, or history. The Tragedy of Hamlet, Shakespeare's most popular and greatest tragedy, displays his genius as a playwright, as literary critics and academic commentators have found an unusual number of themes and literary techniques present in Hamlet. Hamlet concerns the murder of the king of Denmark and the murdered king's son's quest for revenge.... [tags: Shakespeare Hamlet Essays]
- An Examination of Hamlet as a Tragic Hero Webster’s dictionary defines tragedy as, “a serious drama typically describing a conflict between the protagonist and a superior force (such as destiny) and having a sorrowful or disastrous conclusion that excites pity or terror.” A tragic hero, therefore, is the character who experiences such a conflict and suffers catastrophically as a result of his choices and related actions. The character of Hamlet, therefore, is a clear representation of Shakespeare’s tragic hero.... [tags: Shakespeare Hamlet]
- Hamlet as a Tragic Hero in William Shakespeare's Play According to the Aristotelian view of tragedy, a tragic hero must fall through his own error. This is typically called "the tragic flaw" and can be applied to any characteristic that causes the downfall of a hero. Hamlet can be seen as an Aristotelian tragedy and Hamlet as its tragic hero. Hamlet's flaw, which in accordance with Aristotle's principles of tragedy causes his demise, is his inability to act. This defect of Hamlet's character is displayed throughout the play.... [tags: William Shakespeare Hamlet Essays]
- Hamlet: Shakespeare Tragic Hero In Shakespeare's play, Hamlet, the main character is a classic example of a Shakespearean tragic hero. Hamlet is considered to be a tragic hero because he has a tragic flaw that in the end, is the cause of his downfall. The play is an example of a Shakespearean tragic play because it has all of the characteristics of the tragic play. As defined by Aristotle, a tragic play has a beginning, middle, and end; unity of time and place; a tragic hero; and the concept of catharsis.... [tags: Shakespeare Hamlet Essays]
- Dear Kylie, I noticed your submission to Culture Magazine, regarding Shakespeare’s great play “Hamlet”. Having recently studied “Hamlet” in Year 12 English, I think I can help answer one of your questions. You asked why Hamlet is regarded as a tragic hero and the play a classic tragedy. Before I can answer your question, you must first understand the difference between the meaning of tragedy today and what is meant by tragedy in drama. Whereas a tragedy in life may be considered something such as a death or accident, a tragedy in drama is much more.... [tags: essays research papers]
In calves, the mandible becomes thick and soft, and in the worst cases, calves have difficulty eating. In calves so affected, there can be slobbering, inability to close the mouth and protrusion of the tongue (Craig and Davis, 1943). Joints (particularly the knee and hock) become swollen and stiff, the pastern straight and the back arched. In severe cases, synovial fluid accumulates in the joints (NRC, 1989). Posterior paralysis may also occur as the result of fractured vertebrae. The structural weakness of the bones appears to be related to poor mineralization. The advanced stages of the disease are marked by stiffness of gait, dragging of the hind legs, irritability, tetany, labored and rapid breathing, weakness, anorexia and cessation of growth. Calves born to vitamin D-deficient dams may be born dead, weak or deformed (Rupel et al., 1933).
In older animals with vitamin D deficiency (osteomalacia), bones become weak and fracture easily, and posterior paralysis may accompany vertebral fractures. For dairy cattle, milk production may be decreased and estrus inhibited by inadequate vitamin D (NRC, 1989). Cows fed a vitamin D-deficient diet and kept out of direct sunlight showed definite signs of vitamin D deficiency within six to 10 months (Wallis, 1944). Functions that deplete vitamin D are high milk production and advancing pregnancy, especially during the last few months before calving. The visible signs of vitamin D deficiency in dairy cows are similar to those of rickets in calves. The animal begins to show stiffness in her limbs and joints, which makes it difficult to walk, lie down and get up. The knees, hocks, and other joints become swollen, tender and stiff. The knees often spring forward, the posterior joints straighten, and the animal is tilted forward on her toes. The hair coat becomes coarse and rough and there is an overall appearance of unthriftiness (Wallis, 1944). As the deficiency advances, the spine and back often become stiff, arched and humped. In deficient herds, calving rates are lower, and calves can be born dead or weak. Hypocalcemia, either milk fever (parturient hypocalcemia) or unexplained lactational hypocalcemia and paresis, may also be observed as a result of chronic vitamin D deficiency in dairy cattle. These signs are also produced by calcium, phosphorus or electrolyte deficiency or imbalances and are therefore not specific to vitamin D deficiency.
1. Milk Fever in Dairy Cattle (Parturient Paresis)
Milk fever (parturient paresis) is a metabolic disease characterized by hypocalcemia at or near parturition in dairy cows. Goff et al. (1991b) and Horst et al. (1994) discussed milk fever and calcium metabolism of dairy cattle in detail. In essence, milk fever is a failure of calcium homeostasis in the face of increased metabolic demand for calcium. Causative and risk factors are partly, but not completely, understood (Enevoldsen, 1993; Horst et al., 1994; Liesegang et al., 1998). Milk fever is related to factors such as (a) previous calcium and phosphorus intakes, (b) previous vitamin D intake, (c) previous intakes and dietary ratios of potassium, chloride, magnesium, sulfur and sodium, and (d) age and breed of cow. Cows that develop milk fever are unable to meet the sudden demand for calcium brought about by the initiation of lactation. Milk fever usually occurs within 72 hours after parturition and is manifested by circulatory collapse, generalized paresis, depression and eventually coma and death. The most obvious and consistent clinical sign is acute hypocalcemia in which serum calcium decreases from a normal 8 to 10 mg % to 3 to 7 mg % (average 5 mg %). Initially a cow may exhibit some unsteadiness of gait. More commonly, the cow is observed lying on her sternum with her head turned sharply toward her flank in a characteristic posture. The eyes are dull and staring, and the pupils fixed and dilated. If treatment is delayed, paresis will progress into coma, which becomes progressively deeper, leading to death. Intravenous calcium borogluconate is an extremely effective treatment. Some cows will relapse, sometimes with multiple episodes of paresis that indicate a severe failure of the calcium regulatory system or, in some cases, severe depletion of body calcium stores. Oral calcium pastes and gels are also used both prophylactically and as an adjunct to intravenous calcium treatment.
Aged cows are at the greatest risk of developing milk fever. Heifers rarely develop milk fever, which is borne out by their superior calcium status at parturition (Shappell et al., 1987). Jersey cattle are generally more susceptible than Holsteins. Older animals have a decreased response to dietary calcium stress due to both decreased production of 1,25-(OH)2D and a decreased responsiveness to 1,25-(OH)2D.
In older cows, fewer osteoclasts exist to respond to hormonal stimulation, which delays the bone contribution of calcium to the plasma calcium pool (Goff et al., 1989, 1991b). The aging process is also associated with reduced renal 1-alpha-hydroxylase response to hypocalcemia, therefore, reducing the amount of 1,25-(OH)2D produced from 25-(OH)D (Goff et al., 1991b; Horst et al., 1994). Tissue receptors for 1,25-(OH)2D decline at parturition (Goff et al., 1995), although there was not a significant difference between paretic and nonparetic cows. Osteoblast activity also appears to be decreased during late pregnancy and around parturition (Naito et al., 1990). This may be related to the reduced plasma calcitonin concentrations around parturition and especially in hypocalcemic, aged cows (Shappell et al., 1987). Low magnesium status is also a risk factor for parturient hypocalcemia as well as hypomagnesemia (Van de Braak et al., 1987; Van Mosel et al., 1991). Infection with the common brown stomach worm (Ostertagia) has been strongly implicated as a causative agent of milk fever and displaced abomasum in dairy cows (Axelsson, 1991), apparently due to an anaphylactic reaction at parturition.
Parturient paresis can be prevented effectively by feeding a low-calcium and adequate-phosphorus diet for the last several weeks prepartum, followed by a high-calcium diet after calving (Horst et al., 1994). Feeding low-calcium diets prepartum is associated with increased plasma PTH and 1,25-(OH)2D concentrations during the peripartum period (Kichura et al., 1982; Green et al., 1981). Green et al. (1981) suggested that the increased PTH and 1,25-(OH)2D concentrations resulted in "prepared" and effective intestinal absorption and bone resorption of calcium at parturition that prevents parturient paresis. Phosphorus deficiency did not affect plasma concentrations of vitamin D3 or its 25-OH or 1,25-OH active metabolites, but did elevate plasma calcium and appeared to increase 1,25-(OH)2D3 receptor binding in the duodenum of phosphorus-depleted lactating goats (Schroder et al., 1990).
Prepartal dietary cation-anion balance (DCAD) influences the degree and incidence of milk fever (Ender et al., 1971; Block, 1984; Gaynor et al., 1989; Oetzel et al., 1988; Enevoldsen, 1993). Dietary excess of cations, especially sodium and potassium, relative to anions, primarily chloride and sulfur, tends to induce milk fever, while anionic diets can prevent milk fever. The cation-anion balance of the diet affects acid-base status of the animal, with cationic diets producing a more alkaline state and anionic diets a more acid state of metabolism. Mild metabolic acidosis in turn promotes calcium mobilization and excretion (Lomba et al., 1978; Fredeen et al., 1988; Won et al., 1996). Anionic diets increase the amount of 1,25-(OH)2D produced per unit increase in parathyroid hormone (Goff et al. 1991a). Debate remains as to the mechanisms of action and the relative importance of individual mineral ions (Enevoldsen, 1993; Horst et al., 1994).
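The DCAD arithmetic can be sketched concretely. A commonly used form of the expression is DCAD (mEq/kg DM) = (Na + K) − (Cl + S), with each element's percentage of dietary dry matter converted to milliequivalents via its equivalent weight; the diet compositions below are made-up illustrations, not values from the studies cited.

```python
# Sketch of a common dietary cation-anion difference (DCAD) calculation.
# Assumed formula: DCAD (mEq/kg DM) = (Na + K) - (Cl + S), with each
# element's % of dry matter converted to mEq using its equivalent weight.

EQUIV_WEIGHT = {"Na": 23.0, "K": 39.1, "Cl": 35.5, "S": 16.0}  # g/eq (S taken as divalent: 32/2)

def pct_to_meq_per_kg(element: str, pct_dm: float) -> float:
    """Convert an element's % of dietary dry matter to mEq per kg DM."""
    grams_per_kg = pct_dm * 10.0                            # 1% of 1 kg = 10 g
    return grams_per_kg / EQUIV_WEIGHT[element] * 1000.0    # g -> mEq

def dcad(diet: dict) -> float:
    cations = pct_to_meq_per_kg("Na", diet["Na"]) + pct_to_meq_per_kg("K", diet["K"])
    anions = pct_to_meq_per_kg("Cl", diet["Cl"]) + pct_to_meq_per_kg("S", diet["S"])
    return cations - anions

# Illustrative diets: a potassium-rich dry-cow ration vs. one with added anionic salts.
alkaline_diet = {"Na": 0.10, "K": 1.80, "Cl": 0.30, "S": 0.20}
anionic_diet = {"Na": 0.10, "K": 1.20, "Cl": 0.80, "S": 0.40}
```

A positive (cationic) result is typical of potassium-rich, forage-based dry-cow rations; adding anionic salts drives the value negative, producing the mildly acidifying state described above.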
Used correctly, anionic diets prepare the cow's metabolism for a sudden demand for calcium at calving and reduce the incidence of subclinical hypocalcemia and paresis (Horst et al., 1994). Because most legumes and grasses are high in potassium, typical dry cow rations are alkaline. Addition of anions, usually as anionic salts, to the diet for two to four weeks prepartum has been used successfully to reduce the incidence of milk fever. Goff et al. (1991a) concluded that low calcium diets, anionic diets and PTH administration all increase renal 1-alpha hydroxylase activity, resulting in increased production of 1,25-(OH)2D and prevention of milk fever. Increased plasma 1,25-(OH)2D3 concentration in response to feeding acidified diets prepartum was reported by Phillippo et al. (1994).
Supplemental vitamin D has been used to prevent parturient paresis in dairy cows for a number of years (Hibbs and Conrad, 1976, 1983; Littledike and Horst, 1982). Feeding or injecting massive doses of vitamin D has been an effective preventive of milk fever, but toxicity symptoms and death have occurred as well. In some cows, milk fever has been induced by the treatment. Due to the toxicity of vitamin D3 in pregnant cows and the low margin of safety between vitamin D3 doses that prevent milk fever and those that induce milk fever, Littledike and Horst (1982) concluded that injecting vitamin D3 prepartum is not a practical solution to milk fever. However, more recent reports from the same laboratory have provided data suggesting that injection of 24-F-1,25-(OH)2D (fluoridation at the 24 position) delivered at seven-day intervals prior to parturition can effectively reduce incidence of parturient paresis (Goff et al., 1988). Hodnett et al. (1992) used a combination of 25-(OH)D3 and 1-alpha-hydroxy D3 to reduce parturient paresis in dairy cows fed high dietary calcium. The incidence of the disease was reduced from 33% to 8%.
Feeding high doses of vitamin D has been more successful than parenteral administration in preventing milk fever without inducing toxicity (Hibbs and Conrad, 1976). Feeding 20 million to 30 million IU of vitamin D2 for three to eight days prepartum prevented 80% of expected milk fever cases in aged Jersey cows (Hibbs and Conrad, 1976). However, prolonging the treatment to 20 days prepartum has resulted in toxicity. The same authors fed cows 100,000 to 580,000 IU vitamin D2 per day on a continuous, year-round basis and reported a reduction in milk fever in cows with a history of the disease but not in cows without a history of milk fever. The most practical approach to controlling milk fever today appears to be through optimizing macro-mineral levels in the diet and providing continuous supplementation with vitamin D at normal levels.
How does our body protect us against viruses?
Dr. Greene’s Answer:
Two types of defense against viruses predominate in the bloodstream: humoral immunity and cellular immunity. The humoral (or one might say ‘liquid’) immune system attacks viruses when they are loose in the body, either in the bloodstream or in bodily secretions. The cellular immune system attempts to destroy viruses once they have taken up residence inside the body’s cells.
The humoral response consists of antibodies made to specific viruses. These antibodies remain present in the circulation and secretions, hopefully eliminating the virus and protecting against future infections. The more water soluble a particular virus is, the more effective the humoral response. A good example of this is the poliovirus. Polio vaccines (and other vaccines) work precisely because they so effectively stimulate specific antibody formation. When a person is re-exposed to polio, antibodies destroy the virus before infection sets in.
The cellular response consists of certain white blood cells, such as cytotoxic lymphocytes or natural killer cells, which attack and destroy our own cells that have been altered by viruses. Some viruses, such as herpes, are ‘sneaky’ enough to hide in our cells without changing the way they look to the cellular immune system. These viruses can remain dormant within cells for years, only to re-emerge periodically when our humoral defenses are weak and allow the viruses to get loose in the circulation once again.

Reviewed by: Khanh-Van Le-Bucklin, Stephanie D'Augustine
Last reviewed: February 10, 2009 |
Bastille Day, known in France as “la Fête Nationale”, takes place on 14 July every year. It’s far from a mere bank holiday then – this date is considered the ultimate celebration of French culture and heritage. This special day marks the Storming of the Bastille, a political event that shaped the France we know today. Throughout the country, you’ll hear impassioned cries of “Vive la France! Vive la République!” as people gather in the streets for huge parties. But how did Bastille Day come to be?
Toward the end of the 18th century, the French monarchy was in crisis. The country was on the brink of bankruptcy due to costly involvement in the American Revolution, yet King Louis XVI and his coiffed queen Marie Antoinette continued to spend lavishly while ordinary people starved. On 14 July 1789, Parisian rebels stormed the Bastille, a royal prison that had come to symbolize the cruelty of the monarchy. This event marked the beginning of the French Revolution, a decade of political turmoil in which the king was overthrown by the new radical state and sent to the guillotine. Eventually, of course, the absolute monarchy was replaced by a constitutional government.
Here we give you some brilliant historical ways to mark the most important day in France’s calendar. |
This book about structure and function at the cellular level is part of a thirty-book series that collectively surveys all of the major themes in biology. Rather than just presenting information as a collection of facts, the books treat the reader more like a scientist, which means the data behind the major themes are presented. Reading any of the thirty books by Campbell and Paradise provides readers with biological context and comprehensive perspective, so that readers can learn important information from a single book with the potential to see how the major themes span all size scales: molecular, cellular, organismal, population and ecological systems. The major themes of biology encapsulate the entire discipline: information, evolution, cells, homeostasis and emergent properties.
In the twentieth century, biology was taught with a heavy emphasis on long lists of terms and many specific details. All of these details were presented in a way that obscured a more comprehensive understanding. In this book, readers will learn about what defines and constrains a cell and some of the supporting evidence behind our understanding. The historic and more recent experiments and data will be explored. Instead of believing or simply accepting information, readers of this book will learn about the science behind cellular structure and function the same way professional scientists do—with experimentation and data analysis. In short, data are put back into the teaching of biological sciences.
Readers of this book who wish to see the textbook version of this content can go to www.bio.davidson.edu/icb where they will find pedagogically-designed and interactive Integrating Concepts in Biology for introductory biology college courses or a high school AP Biology course. |
The Celts were a diverse group of people who resided in Europe during the Iron Age. They originated from the Hallstatt and La Tène cultures of central Europe, and later spread across Europe and Britain. The word ‘Celt’ comes from the Greeks, who named this group of ‘barbarians’ Keltoi (‘the hidden people’).
Although we use the term ‘Celtic’ to describe these people, they were not a cohesive group of people or a distinct race. Rather they were a collection of warring tribes loosely tied by similar language, culture, and religion.
Most of what we know about the Celts comes from written accounts by the Romans and Greeks, as the Celts did not preserve their history in writing. Unfortunately, these second-hand accounts are often biased, as the Romans were in constant battle with Celtic tribes and did not see them in a favourable light. Archaeological evidence tells some of the story; however, much of what defined the Celtic world has been lost.
During the Roman invasion of Europe, the remaining Celts were pushed towards the outer regions of Britain, and what remained of Celtic spirituality blended with Christianity. Today, there are six known remaining Celtic regions: Ireland, Wales, Scotland, Brittany, Cornwall, and the Isle of Man.
The Celts were known for their stunning artwork, including metal artifacts like weapons and jewellery. Later Celts produced iconic illuminated manuscripts, such as the Book of Kells. Celtic spirals and knotwork continue to be seen today, and are a symbol of Celtic identity.
© The Celtic Journey (2013) |
problem 1: prepare a program that draws the initials J G P on the form, similar to the figure shown, using straight lines and semicircular curves. The figure cannot have corners: every end is capped with a semicircle and every corner is replaced by a quarter circle. The program uses a Timer so that the figure is drawn one line (or arc) at a time and then erased the same way. The interval between drawing (or erasing) one line and the next is 100 to 200 ms, that is, 0.1 to 0.2 seconds. The figure below shows an example of how the initials might be erased, one segment at a time, each fraction of a second. The intention is to be able to watch how the figure is drawn and how it is erased; it should not all happen at once, as when CLS is used.
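One way to organize the draw/erase timer logic, sketched here in Python rather than the Visual Basic form-and-Timer setup the exercise implies (the function name and representation are hypothetical): keep the figure as an ordered list of segments and let each timer tick either draw or erase the next one.

```python
def tick_action(tick: int, n_segments: int):
    """What the timer should do on a given tick: draw each segment in
    order, then erase them in the same order, then repeat the cycle."""
    phase, index = divmod(tick % (2 * n_segments), n_segments)
    return ("draw" if phase == 0 else "erase"), index
```

With a 100–200 ms timer, each tick calls this function and then renders or clears `segments[index]` on the form.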
problem 2: The figure below is a representation of the solar system. In a basic model, the planets rotate around the Sun in concentric orbits; the closer a planet is to the Sun, the less time it takes to complete a full orbit. While the Earth rotates around the Sun, the Moon revolves around the Earth. Design a program that presents an animation of the solar system with the above characteristics. The colors of the planets should be similar to those presented in the figure, and the size proportions of the planets should be maintained. The orbits of the planets must be far enough apart that they do not collide with each other. Consider that for each trip the Earth makes around the Sun, the Moon makes about 12 trips around the Earth. The animation is controlled using a timer.
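The geometry behind the animation reduces to computing each body's angular position from its orbital period. A minimal sketch, assuming circular orbits; the radii and periods below are illustrative constants made up for the example:

```python
import math

def orbital_position(center, radius, period, t):
    """Position of a body on a circular orbit of the given radius and
    period around `center` at time t (t in the same units as period)."""
    angle = 2.0 * math.pi * (t / period)
    cx, cy = center
    return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))

SUN = (0.0, 0.0)
EARTH_RADIUS, EARTH_PERIOD = 100.0, 365.0    # illustrative units: pixels, days
MOON_RADIUS, MOON_PERIOD = 12.0, 365.0 / 12  # ~12 lunar orbits per Earth year

def moon_position(t):
    """The Moon orbits the moving Earth, which orbits the Sun."""
    earth = orbital_position(SUN, EARTH_RADIUS, EARTH_PERIOD, t)
    return orbital_position(earth, MOON_RADIUS, MOON_PERIOD, t)
```

Each timer tick advances `t` by a fixed step and redraws every body at its new position, which produces the animation.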
problem 3: prepare a program to draw the figure shown below. The program must provide the following:
a) Close the fish's mouth slowly (at least 5 steps from the open position to fully closed)
b) Open the fish's mouth slowly
c) Slowly move the lower fin backward
d) Slowly move the lower fin forward
The fin's movement is made by rotating the points at its tip about a reference point at its base. The mouth is opened and closed by rotating the points lying at the ends of the jaws around the point at their hinge (the red point). The degrees of rotation per step must be small enough that the movement looks smooth. There are no constraints on the total rotation that can be made, so, for example, the fin can rotate completely around its base. Although it will look strange, something similar may be performed with the mouth.
problem 4: prepare a program to draw the truck in the figure below. The program should provide buttons to:
- Increase the size of the figure by 10%
- Reduce the size of the figure by 10%
- Drive the truck to the left.
- Drive the truck to the right.
- Raise the truck's cargo box until it reaches a position of 90 degrees.
- Lower the cargo box.
- Tilt the truck up over its rear axle (a wheelie).
- Return the truck to its normal position.
When the truck drives left, the wheels must rotate in the correct direction; rotating a wheel means rotating the square drawn on it around the wheel's center. The same happens, in reverse, when moving to the right. The truck's cargo box opens by rotating about its lower right corner. Once the box reaches the vertical position it cannot rotate further clockwise; similarly, when lowered it cannot rotate beyond the horizontal position. For the wheelie, the truck rotates over its rear wheels, so the rear wheels remain motionless while the rest of the truck rotates about them. All movements can be combined: for example, you can reduce the size of the truck, raise the box, and then drive left or right with the box open. All movements must be smooth, that is, there should be no abrupt jumps.
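All of the motions in problems 3 and 4 (fin, jaws, cargo box, wheels, wheelie) are rotations of points about a fixed pivot, so a single helper covers them. A sketch, in Python for brevity:

```python
import math

def rotate_about(point, pivot, degrees):
    """Rotate `point` about `pivot` by `degrees` (counterclockwise)."""
    rad = math.radians(degrees)
    px, py = point
    cx, cy = pivot
    dx, dy = px - cx, py - cy
    return (cx + dx * math.cos(rad) - dy * math.sin(rad),
            cy + dx * math.sin(rad) + dy * math.cos(rad))
```

Applying a few degrees per timer tick, and clamping the accumulated angle where the exercise imposes limits (the cargo box between 0° and 90°), yields the required smooth motion.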
Imagine your students becoming independent sight-readers! This book provides a systematic process for improving the following: sight-singing skills, integration of visual, auditory and kinesthetic learning styles in reading music, and overall musicianship. The learning strategies which accompany the melodies presented in this text incorporate all senses through multisensory learning. These ideas give direction to the learner and the teacher in developing the necessary musical skills and confidence to sight-sing in a manner which includes musical accuracy while encouraging good vocal technique. Using this type of consistent, thoughtful organization of concepts, students can learn to be secure in their abilities as music readers, as well as becoming insightful singers. |
NASA's Fermi Gamma-ray Space Telescope discovered the first pulsar that beams only in gamma rays. The pulsar (illustrated, inset) lies in the CTA 1 supernova remnant in Cepheus. The newest major space observatory, the Fermi Gamma-ray Space Telescope, is working to unveil the mysteries of the high-energy universe. Fermi studies the most energetic particles of light, observing physical processes far beyond the capabilities of earthbound laboratories. FGST's main instrument, the Large Area Telescope (LAT), operates more like a particle detector than a conventional telescope. From within its 1.8-meter cube housing, the LAT uses 880,000 silicon strips to detect high-energy gamma rays with unprecedented resolution and sensitivity, pushing new boundaries in particle physics and astrophysics. (Image Credit: NASA, S. Pineault, DRAO)
Spinning at a rate of about three times a second, the rotating corpse of a 10,000-year-old star sweeps a beam of gamma rays toward Earth. This object, known as a pulsar, is the first one known to "blink" at Earth only in gamma rays, and was discovered by an orbiting observatory launched in June 2008 with significant involvement from researchers at Stanford University and the Stanford Linear Accelerator Center (SLAC) National Accelerator Laboratory.
"This is the first example of a new class of pulsars that will give us fundamental insights into how stars work," said Stanford astrophysicist Peter Michelson, the principal investigator for the Large Area Telescope (LAT), which is carried aboard NASA's orbiting observatory, the Fermi Gamma-ray Space Telescope.
Researchers at SLAC get the first peek at the celestial data beamed down from LAT before sending it on to an international collaboration of scientists for analysis.
The gamma-ray-only pulsar lies within a supernova remnant known as CTA 1, about 4,600 light-years away from Earth in the constellation Cepheus. Its lighthouse-like beam of gamma rays sweeps across Earth every 316.86 milliseconds and emits 1,000 times the energy of our sun.
A pulsar is a rapidly spinning neutron star, the crushed core left behind when a massive star explodes. Astronomers have cataloged nearly 1,800 pulsars. Although most were found through their pulses of radio waves, some of these objects also beam energy in other forms, including visible light, X-rays and gamma rays, each of which occupy their own spot on the electromagnetic spectrum.
Unlike previously discovered pulsars, the source in CTA 1 appears to blink only in gamma-ray energies and offers researchers a new way to study the stars in our universe. Scientists think CTA 1 is only the first of a large population of similar objects. "The LAT provides us with a unique probe of the galaxy's pulsar population, revealing objects we would not otherwise even know exist," said Steve Ritz, NASA's project scientist for the Fermi observatory. He is stationed at NASA's Goddard Space Flight Center in Greenbelt, Md.
The pulsar in CTA 1 is not located at the center of the supernova remnant's expanding gaseous shell. Supernova explosions can be asymmetrical, often imparting a "kick" that sends the neutron star careening through space. Based on the remnant's age and the pulsar's distance from its center, astronomers believe the neutron star is moving at about a million miles per hour.
It is possible that the pulsar is emitting radio waves—thus far unseen—in addition to gamma rays. "The radio beam probably never swings toward Earth, so we never see it. But the wider gamma-ray beam does sweep our way," explained NASA's Alice Harding.
The LAT scans the entire sky every three hours and detects photons with energies ranging from 20 million to more than 300 billion times the energy of visible light. The instrument sees about one gamma ray each minute from CTA 1. That's enough for scientists to piece together the neutron star's pulsing behavior, its rotation period and the rate at which it is slowing down.
A pulsar's beams arise because neutron stars possess intense magnetic fields and rotate rapidly. Charged particles stream outward from the star's magnetic poles at nearly the speed of light to create the gamma-ray beams the telescope sees. Because the beams are powered by the neutron star's rotation, they gradually slow the pulsar's spin. In the case of CTA 1, the rotation period is increasing by about one second every 87,000 years.
This measurement is also vital to understanding the dynamics of the pulsar's behavior and can be used to estimate the pulsar's age. From the slowing period, researchers have determined that the pulsar is actually powering all the activity in the nebula where it resides.
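These two numbers give a quick consistency check on the quoted age. A standard characteristic-age estimate for a pulsar is τ = P / (2Ṗ), which assumes the star was born spinning much faster than it does today, so it is only an order-of-magnitude figure; plugging in the 316.86 ms period and a slowdown of one second per 87,000 years lands close to the 10,000-year age quoted earlier.

```python
SECONDS_PER_YEAR = 3.156e7

P = 0.31686                                 # rotation period, seconds
P_dot = 1.0 / (87_000 * SECONDS_PER_YEAR)   # spin-down rate, s/s: one second per 87,000 yr

tau_seconds = P / (2.0 * P_dot)             # characteristic age: tau = P / (2 * P_dot)
tau_years = tau_seconds / SECONDS_PER_YEAR
print(round(tau_years))                     # about 13,800 years
```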
"This observation shows the power of the LAT," Michelson says. "It is so sensitive that we can now discover new types of objects just by observing their gamma-ray emissions."
For more information: Astromart News Archive
One of the ways acids were originally classified was by their characteristic reactions. Did they react with metals and, if they did, were there consistent observations? Did they do anything interesting when an alkali was dropped in them? Over time, folks who liked to watch for these things drew certain conclusions, of which the following are deemed to be particularly important: acids react with reactive metals to give a salt and hydrogen gas; with metal hydroxides and oxides to give a salt and water; and with carbonates, hydrogen carbonates and sulfites to give a salt, water and a gas (carbon dioxide or sulfur dioxide).
You might like to note that the reactions above are generic and ignore the fact that 'oxidising acids', notably concentrated sulfuric and nitric acids, react with metals to give much more interesting products. For example, concentrated nitric acid reacts with copper metal to give a salt, water AND a particularly unfriendly gas, N2O4.
In the above discussion the term salt has occurred several times; it is not, of course, referring solely to NaCl, so-called table salt. In this context the term salt refers to any ionic compound resulting from a reaction involving acids. So if you mix solutions of HCl and NaOH you get aqueous NaCl and HOH (or H2O). NaCl is the salt. On the other hand, if you mix HNO3 and CuO you get Cu(NO3)2 and H2O. In this case Cu(NO3)2 is the salt.
As it turns out, each of the typical reactions listed above follows a pattern, as set out in this table. In all cases the resulting salt is CuCl2(aq).
|metal or compound||example|
|pure metal||Cu(s) + 2HCl(aq) → CuCl2(aq) + H2(g)|
|hydroxide||Cu(OH)2(s) + 2HCl(aq) → CuCl2(aq) + 2H2O(l)|
|oxide||CuO(s) + 2HCl(aq) → CuCl2(aq) + H2O(l)|
|carbonate||CuCO3(s) + 2HCl(aq) → CuCl2(aq) + H2O(l) + CO2(g)|
|hydrogen carbonate||Cu(HCO3)2(s) + 2HCl(aq) → CuCl2(aq) + 2H2O(l) + 2CO2(g)|
|sulfite||CuSO3(s) + 2HCl(aq) → CuCl2(aq) + H2O(l) + SO2(g)|
The key thing about this table is that any metal and any acid can be substituted, with the only caveat being that you must always balance the final equation.
For example, how would you derive the equation for the reaction between potassium hydroxide and sulfuric acid? Well, having been told it involves an acid, refer to the table; in particular the hydroxide row. Substituting potassium for copper and sulfate for chloride you would get, initially:
KOH(s) + H2SO4(aq) → K2SO4(aq) + H2O(l).
This, however, requires balancing, giving
2KOH(s) + H2SO4(aq) → K2SO4(aq) + 2H2O(l).
Only the ratios and the salt, now potassium sulfate, change.
Likewise, consulting the table suggests copper oxide and nitric acid give
CuO(s) + 2HNO3(aq) → Cu(NO3)2(aq) + H2O(l).
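The substitution pattern in the table lends itself to a simple lookup. As an illustration only (the dictionary and function names are my own, not from the original text), here is a minimal Python sketch mapping each compound class to the generic products it yields with a typical acid:

```python
# Hypothetical sketch: generic products formed when each compound class
# from the table reacts with an acid. "salt" stands for the ionic compound
# of the metal and the acid's anion.
REACTION_PRODUCTS = {
    "pure metal": ["salt", "H2"],
    "hydroxide": ["salt", "H2O"],
    "oxide": ["salt", "H2O"],
    "carbonate": ["salt", "H2O", "CO2"],
    "hydrogen carbonate": ["salt", "H2O", "CO2"],
    "sulfite": ["salt", "H2O", "SO2"],
}

def products(compound_class):
    """Return the generic products for a given compound class."""
    return REACTION_PRODUCTS[compound_class]
```

Balancing, as noted above, still has to be done by hand for each specific metal and acid.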
AC Theory
In the case of a battery, electricity flows in one direction, from positive to negative. Everything is straightforward. In the case of a generator, however, things get a bit more complicated. It is possible to generate electricity by spinning a coil within a magnetic field. As the coil moves through the magnetic field, a current is induced in the wire. The electricity exits by way of the brushes and slip rings, but it is not exactly like the electricity which is produced by a battery.
If we look at the current leaving the battery, it is constantly moving in the same direction. We call this DIRECT CURRENT. But if we attach a generator instead of a battery in the same circuit, we notice a major change. The meter would swing back and forth from negative to positive. This seems strange until we examine what is going on inside the generator.
As the coil begins to turn, one side of the coil moves toward the north pole. This end of the wire becomes positive. At the same time, the other side of the coil moves toward the south pole. This side of the coil becomes negative. Current now begins to flow from the positive to the negative, and continues in this direction until it reaches a peak in its cycle. This maximum amount of current flow is reached when the coil is pointing exactly north and south. We call this the 90° point, and say that the signal has reached its positive peak. After it passes this point, the voltage begins to drop, but doesn't reach 0 until the coil is once again positioned directly between the permanent magnets. This is the 180° point.
Now comes the switch. As the coil continues to turn, the end that was positive now moves toward the south pole of the magnet. Because it is passing the south pole, this end of the coil swings negative. At the same time, the side of the coil that was negative is now swinging positive. Thus, the direction of current flow within the wire is reversed. The current continues in this direction until it again reaches a (this time negative) peak at 270°. Finally, as the coil approaches its original position, the voltage swings positive until current flow again reaches 0.
By graphing the current vs. time, we end up with a pattern known as a SINUSOIDAL WAVE, or SINE WAVE for short. We say that the sine wave has positive and negative peaks at 90° and 270° respectively.
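To see those peaks numerically, here is a small illustrative Python sketch that samples one full rotation of the coil at 1° steps and locates the positive and negative peaks:

```python
import math

# Sample one full cycle of the generator output at 1-degree steps.
wave = [math.sin(math.radians(a)) for a in range(360)]

# The largest sample falls at 90 degrees, the smallest at 270 degrees.
positive_peak = max(range(360), key=lambda a: wave[a])
negative_peak = min(range(360), key=lambda a: wave[a])
```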
HPS 0410 | Einstein for Everyone | Spring 2013
Back to main course page
1. Consider a geometry in which Euclid's 5th postulate is replaced by:
Through any point NO straight line can be drawn parallel to a given line.
Show that there is at least one triangle in this geometry whose angles sum
to more than two right angles.
(Hint: On a line PQ, select two points A and B. Construct lines AC and BD perpendicular to PQ. What happens if AC and BD are extended in both directions?)
2. If you had before you a two dimensional surface of constant curvature, how could you determine whether the curvature was positive, negative or zero by measuring
(a) the sum of angles of a triangle;
(b) the circumference of a circle of known radius?
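As an illustration of part (b): on a sphere of radius R (positive curvature), a circle of geodesic radius r has circumference 2πR·sin(r/R), which falls short of the flat-space value 2πr. A hypothetical Python check of that deficit (the numbers are my own choices):

```python
import math

def circumference_sphere(r, R):
    """Circumference of a circle of geodesic radius r on a sphere of radius R."""
    return 2 * math.pi * R * math.sin(r / R)

def circumference_flat(r):
    """Circumference of a circle of radius r in flat (Euclidean) space."""
    return 2 * math.pi * r

# Positive curvature: the measured circumference comes up short of 2*pi*r.
deficit = circumference_flat(1.0) - circumference_sphere(1.0, 10.0)
```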
3. What is the difference between extrinsic and intrinsic curvature?
4. Imagine that you are a two dimensional being trapped in a flat two dimensional surface.
(a) How would you use geodesic deviation to confirm the flatness of your surface?
(b) Imagine that a three dimensional being picks up your surface and bends it into a cylinder, without in any way stretching your surface. (This is just what happens when someone takes a piece of paper and rolls it into a cylinder.) You are still trapped in the surface. If you now use geodesic deviation to determine the curvature of your surface, would you get the same result as in (a)? Explain why.
For discussion in the recitation.
A. In a space with non-zero curvature, a geodesic is the analog of the straight line of ordinary Euclidean geometry. Why is it appropriate to take geodesics as their analogs? If you were in such a space, how would you identify the geodesics?
B. Does it make sense to say that a space has a curved geometry if there is no higher dimensioned space into which the space can curve?
C. In a space with three or more dimensions, the curvature need not be the same in every two dimensional sheet that passes through some point in the space. Of course sometimes things are simple and the curvature does work out the same. Here's an example. Imagine that you are in an ordinary, three dimensional Euclidean space. You slice the space up into the flattest two dimensional sheets you can find, all built out of intersecting straight lines. The first set of sheets run left-right and up-down. The second set of sheets run left-right and front-back. The third set of sheets run up-down and front-back. You use geodesic deviation to determine the curvature of the sheets in each set. What is the curvature of:
(a) The left-right and up-down sheets?
(b) The left-right and front-back sheets?
(c) The up-down and front-back sheets?
(d) Things need not work out so simply. In what space discussed in the chapter would the results be different?
D. The discovery of non-Euclidean geometries eventually precipitated a crisis in our understanding of what has to be and what just might be the case. At one extreme are necessities, such as the truths of logic; they have to be true. At the other extreme are mundane factual matters--contingent statements that may or may not be true. Somewhere in between is a transition. Locating that transition has traditionally been of great importance in philosophy and philosophy of science. For if something is necessarily true, we need harbor no doubt over it. If something is contingent, the mainstream empiricist philosophy says we can only learn it from experience. Sometimes the contingent proposition is very broad. For example, consider the proposition that there never has been and never will be a magnet with only one pole. We may come to believe this proposition with ever greater confidence. But we can never be absolutely certain of it. We never know whether tomorrow will bring the counterexample.
Just where should the transition between necessity and contingency come?
Here is a list of propositions that begins with logical truths and bleeds off into ordinary contingent propositions. Sort them into necessary truths and contingent propositions. How are you deciding which is which?
If A and B are both true, then A is true.
If one of A or B is true and A is false, then B is true.
For any proposition A, either A is true or A is not true.
1 + 1 = 2
7 + 5 = 12
There are an infinity of prime numbers.
Every circle has one center.
The sum of the angles of a triangle is two right angles.
Only the fittest survive.
Every effect has a cause.
Every occurrence has a cause.
No effect comes before its cause.
Improbable events are rare.
Energy is always conserved.
Force equals mass times acceleration.
The earth has one moon.
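The first three propositions on the list can be checked mechanically, which is one way of seeing why they feel necessary. A small Python sketch (my own, purely illustrative) that exhausts every truth assignment:

```python
from itertools import product

# All assignments of True/False to the pair (A, B).
cases = list(product([True, False], repeat=2))

# "If A and B are both true, then A is true."
first = all(A for A, B in cases if A and B)

# "If one of A or B is true and A is false, then B is true."
second = all(B for A, B in cases if (A or B) and not A)

# "For any proposition A, either A is true or A is not true."
third = all(A or (not A) for A in [True, False])
```

No such exhaustive check is available for the contingent propositions further down the list, which is part of what the exercise is probing.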
Antibiotic-resistant bacteria are a growing threat to our continued health and wellness. For many diseases, conventional antibiotics are no longer effective. Scientists at OSU have discovered a way to restore bacterial susceptibility to antibiotics.
You may remember I covered the story of Shu Lam for Edgy Labs, about the doctoral student who developed a technique for combating drug-resistant germs without the use of antibiotics. Now we cover a similar effort, albeit one focused on removing a bacterium’s antibiotic resistance.
The Superbug War of Antibiotic Resistance
Researchers at Oregon State University, led by Bruce Geller, have used a certain molecule to neutralize bacteria’s ability to fend off antibiotics. The OSU scientists were part of an international collaboration attempting to provide solutions for the threat of antibiotic resistance.
The results of the study, published in the Journal of Antimicrobial Chemotherapy, could lead to a viable new strategy for treating resistant infections.
A New Antimicrobial Strategy
In this study, Geller and his colleagues at Oregon State University have designed, synthesized and tested a new molecule that would thwart antibiotic resistance in some bacteria. The synthetic molecule, PPMO, or peptide-conjugated phosphorodiamidate morpholino oligomer, does not kill bacteria, but blocks their ability to counteract antibiotics.
PPMO inhibits the expression of an enzyme called New Delhi metallo-beta-lactamase (NDM-1). The bacteria’s gene that creates the enzyme is responsible for the increased resistance to most antibiotics.
In vitro, the PPMO molecule restored the potency of the antibiotic meropenem against three genera of NDM-1-positive bacteria. The combination of the PPMO and meropenem was also effective in treating mice infected with a strain of E. coli.
Geller notes that this method will be ready for testing on humans in three years. Until then, its effectiveness against antibiotic resistance will have to be tested on many other bacterial strains. The results can also be used to develop new antibiotics, which is crucial at a time when the number of disease strains showing antibiotic resistance is constantly increasing.
History of the Presidio
The Original Presidio of Monterey
The military has played a vital role on the Monterey Peninsula since the area was “discovered” and claimed for Spain by Sebastian Vizcaino in 1602. Vizcaino named the Bay Monterey, in honor of his benefactor, Gaspar de Zuniga y Acevedo, Conde de (count of) Monterrey, then viceroy of New Spain (Mexico).
The Monterey Bay area was colonized by a small Spanish expedition that reached Monterey Bay in May 1770. Captain Don Gaspar de Portola commanded the military component of this expedition, and Franciscan Father Junipero Serra was in charge of the religious element. Portola officially took possession of Alta (Upper) California for Spain, and Serra celebrated a Thanksgiving mass, on June 3, 1770. Portola established a presidio (fort) and mission at the southern end of Monterey Bay the same day, in accordance with his orders to “erect a fort to occupy and defend the port from attacks by the Russians, who are about to invade us.” Portola’s actions were spurred by the Spanish fear that other nations, particularly Russia, had designs upon her New World Empire. Spain then moved to occupy that portion of the western American coast that she had previously neglected. The Monterey Presidio was one of four presidios and 21 missions established by Spain in California.
The original Presidio consisted of a square of adobe buildings located near Lake El Estero in the vicinity of what is now downtown Monterey. The fort’s original mission, the Royal Presidio Chapel, established in 1770, was renovated and reopened in 2008. The original Presidio was protected by a small fort with 11 cannons, called El Castillo. It was built in 1792 on land now part of the present Presidio of Monterey. The original Presidio fell into disrepair, as Mexican rule replaced that of Spain in California in 1822.
Commodore John Drake Sloat, commanding the U.S. Pacific Squadron, seized Monterey in July 1846, during the Mexican War. He landed unopposed with a small force in Monterey and claimed the territory and the Presidio for the United States. He left a small garrison of Marines and seamen who began improving defenses near the former El Castillo, to better protect the town and the harbor. The new defenses were named Fort Mervine in honor of Captain William Mervine, who commanded one of the ships in Sloat’s squadron.
Company F, 3rd Artillery Regiment arrived in Monterey in January 1847, and the U.S. Army then assumed from the Navy responsibility for the continuing construction of Fort Mervine. Two of the artillery lieutenants, William Tecumseh Sherman and E.O.C. Ord, plus Engineer Lieutenant Henry W. Halleck, became prominent generals during the Civil War.
During its early history, this fortification seemed to have many names, including Fort Halleck, Fort Savannah and the Monterey Redoubt. In 1852, the Monterey Redoubt was renamed the Monterey Ordnance Depot and used until 1856 as a military storehouse. From 1856 to the closing months of the Civil War, the fort, then called Ord Barracks, was abandoned. It was manned again in 1865, and abandoned a second time in 1866, although the U.S. Government “reserved” for possible future use a 140-acre military reservation surrounding the redoubt.
The Modern Presidio of Monterey
Near the end of the Philippine Insurrection in 1902, the Army recognized it needed additional forts, particularly on the West Coast. As possible sites were being surveyed, the Army “discovered” that it already owned a large area in Monterey that would be suitable for a military post. In July 1902, the Army announced plans to build a cantonment area and station one infantry regiment at Monterey. The 15th Infantry Regiment, which had fought in China and the Philippines, arrived in Monterey in September 1902 and began building the cantonment area. The 1st Squadron, 9th Cavalry, “Buffalo Soldiers,” arrived shortly thereafter.
In 1902, the name of the cantonment area was the Monterey Military Reservation. It was changed to Ord Barracks on July 13, 1903, and to the Presidio of Monterey on Aug. 30, 1904. Various infantry regiments rotated through the Presidio of Monterey, including the 15th Infantry (1902-1906), 20th Infantry (1906-1909), and 12th Infantry (1909-1917), frequently with supporting cavalry and artillery elements. The Army School of Musketry, the forerunner of the Infantry School, operated at the Presidio of Monterey from 1907 to 1913. In 1917, the U.S. War Department purchased a nearby parcel of 15,609.5 acres of land, called the Gigling Reservation, to use as training areas for Presidio of Monterey troops. This post, supplemented by additional acreage, was renamed Fort Ord on Aug. 15, 1940.
The 11th Cavalry Regiment was posted at the Presidio from 1919 to 1940, and the 2nd Battalion, 76th Field Artillery Regiment, from 1922 to 1940. During the summer months, Presidio soldiers organized and led Civilian Conservation Corps, Citizens’ Military Training Corps and Reserve Officer Training Corps camps in the local area.
In 1940, the Presidio became the temporary headquarters of the III Corps, and served as a reception center until 1944. Declared inactive in late 1944, the Presidio was reopened in 1945 and served as a Civil Affairs Staging and Holding Area for civil affairs soldiers preparing for the occupation of Japan.
In 1946 the Military Intelligence Service Language School was moved to the Presidio of Monterey. It added Russian, Chinese, Korean, Arabic and six other languages to its curriculum, and was renamed the Army Language School (ALS) in 1947. The size of the faculty and student classes and number of languages taught increased throughout the Cold War years. |
Reconfigurable systems make it possible to create extremely high-performance implementations for many different types of applications. While techniques such as logic emulation provide a new tool specifically for logic designers, many other FPGA-based systems serve as high-performance replacements for standard computers for everything from embedded systems to supercomputers.
The creators of these implementations are often software programmers, not hardware designers. However, if these systems hope to be usable by software programmers, they must be able to translate applications described in standard software programming languages into FPGA realizations. Thus, mapping tools that can synthesize hardware implementations from C, C++, Fortran or assembly language descriptions must be developed.
Although there are ways to transform specifications written in hardware-description languages into electronic circuits, translating a standard software program into hardware presents extra challenges. HDLs focus mainly on constructs and semantics that can be efficiently translated into hardware (though even these languages allow the creation of nonsynthesizable specifications).
Software programming languages have no such restrictions. For example, hardware is inherently parallel and HDLs have an execution model that easily expresses concurrency. Most standard software languages normally have a sequential execution model, with instructions executing one after another.
This means that a hardware implementation of a software program is either restricted to sequential operation, yielding an extremely inefficient circuit, or the mapping software must figure out how to make parallel an inherently sequential specification. Also, there are operations commonly found in software programs that are relatively expensive to implement in hardware. This includes multiplication and variable-length shifts, as well as floating-point operations.
Although hardware can be synthesized to support these operations, software that makes extensive use of them will result in extremely large designs. Finally, software algorithms operate on standard-size data values, using 8-, 16-, 32- or 64-bit values even for operations that could easily fit into smaller bit-widths. By using wider-than-necessary operands, circuit data paths must be made wider, increasing the hardware costs. Thus, because we are using a language designed for specifying software programs to create hardware implementations, the translation software faces a mapping process more complex than for standard HDLs.

Code translators
Many research projects have developed methods for translating code in C, C++, Ada, Occam, data parallel C, Smalltalk, assembly and Java, as well as special HDLs, into FPGA realizations. These systems typically take software programs written in a subset of the programming language, translate the data computations into hardware operations and insert multiplexing, latches and control state machines to recreate the control flow.
But software programming languages contain constructs that are difficult to handle efficiently in FPGA logic, and because of this such translation techniques can restrict the language constructs that may appear in the code to be translated. Most do not allow multiplication, division or floating-point operations; some ban the use of structures, pointers or arrays, eliminate recursion, or do not support function calls or control flow constructs such as case, do-while and loops without fixed iteration counts.
Some techniques, which are intended primarily to compile only short code sequences, may restrict data structures to only bit vectors, or not support memory accesses at all. Other techniques extend C++ for use as an HDL.
It is relatively simple to translate straight-line code from software languages into hardware. Expressions in the code have direct implementations in hardware that can compute the correct result (they must, since a processor on which the language is intended to run must have hardware to execute it). Variables could be stored in registers, with the result from each instruction latched immediately and one instruction executing at a time. However, this loses the inherent concurrency of hardware. We can instead combine multiple instructions, latching the results only at the end of the overall computation.
Renaming is a standard compiler technique for making such code parallel, a variant of which is contained in the Transmogrifier C compiler developed at the University of Toronto. The compiler moves sequentially through a straight-line code sequence, remembering the logic used to create the value of each assignment. Variables in an assignment that comes from outside this piece of code draw their values from registers for that variable. Variables that have been the target of an assignment in the code sequence are replaced with the output from the logic that computes its value earlier in the sequence.
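The renaming idea can be sketched in a few lines of Python. This is my own illustrative toy, not the Transmogrifier C algorithm itself: it walks a straight-line sequence of assignments and substitutes each variable use with the expression that last defined it, collapsing the sequence into one combinational expression per live variable.

```python
def rename(statements):
    """Collapse straight-line assignments into combinational expressions.

    statements: list of (target, expr) pairs, where expr is a string.
    Caveat: naive textual substitution; assumes single-letter variable
    names that never appear inside other identifiers.
    """
    defs = {}  # variable -> expression currently defining it
    for target, expr in statements:
        # Replace each previously-defined variable with its defining logic.
        for var, definition in defs.items():
            expr = expr.replace(var, "(" + definition + ")")
        defs[target] = expr
    return defs

# Two sequential assignments become a single expression for b.
combinational = rename([("a", "x + y"), ("b", "a * 2")])
```

Only the final expressions need to be latched, recovering the concurrency that one-instruction-at-a-time execution would have thrown away.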
For complex control flow, such as loops that execute a variable number of times or "if" statements containing function calls, control-state machines must be implemented. The code is broken into blocks of straight-line code. Each of these code segments is represented by a separate state in the control state machine. The straight-line code is converted into logic functions with the series of operations collapsed into a combinational circuit. Variables whose scope spans code segments are held in registers. The input to the register is a mux that combines the logic for assignment statements from different code sequences.
Once the controller for the hardware is constructed, techniques can be used to simplify this state machine. States can be combined sequentially or in parallel, allowing greater concurrency in the hardware as well as minimizing the hardware cost of the controller. However, an even more important matter is the simplification of the data path, something that hasn't yet been dealt with adequately. In the construction given above every operation in the source code generates unique hardware. For simple computations this is fine.
However, complex operations such as multiplication and division will be scattered throughout the source code, implying that a huge amount of hardware will be needed to implement all of these computations. But each multiplier will be used only within the state corresponding to that portion of the source code; otherwise, it will sit idle. The circuit's size could be greatly reduced if those hardware multipliers were reused in different places in the code. A single hardware multiplier can be used for many separate multiplication operations from the source code, as long as each occurs in a different state.
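The sharing idea can be pictured as a multiplexer in front of one multiplier, with the controller state selecting which operands reach it. A hypothetical sketch (state names and operand values are invented for illustration):

```python
def shared_multiply(state, operand_mux):
    """One hardware multiplier serving several source-level multiplications.

    operand_mux: dict mapping controller state -> (a, b) operand pair;
    the current state selects which operands the mux routes to the multiplier.
    """
    a, b = operand_mux[state]
    return a * b

# Two multiplications from different parts of the source code,
# active in different controller states, share the same multiplier.
mux = {"S1": (3, 4), "S2": (5, 6)}
```

Because S1 and S2 are never active at the same time, one physical multiplier suffices where a naive translation would have instantiated two.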
There are several different systems that convert code sequences in software programming languages into hardware realizations. While they all support the translation of control flow and data path operations into circuits, they differ on the amount of code they convert and the model of operation they assume.
Perhaps the most straightforward is the Transmogrifier C system. It takes a complete program written in C and translates it into a circuit that can be implemented directly in the configurable logic blocks of Xilinx FPGAs. Special pragma (compiler directive) statements are included to declare external inputs and outputs, including the assignment of those communications to specific FPGA pins. This yields a system where the resulting hardware implementation is expected to be a complete self-contained system implementing the entire functionality of the desired behavior.
Most of the systems for translating software programs into hardware algorithms assume that only the most time-critical portions of the code are mapped. Those systems use the FPGA, or FPGAs, as a coprocessor to a standard CPU. The processor implements most of the program, handling much of the operations that are necessary to implement complex algorithms, but which contribute little to the total computation time. The truly time-critical portions of the algorithm are translated into hardware, using the FPGA to implement the small fraction of the total code complexity that accounts for most of the overall run-time. In that way the strengths of both FPGAs and standard processors are combined into a single system. Processors can easily implement a large variety of operations by working through a complex series of instructions stored in high-density memory chips.
Mapping all those instructions into FPGA logic means that the complete functionality of the entire program must be available simultaneously, using a huge amount of circuit space. However, we can implement only the most frequently used portions of code inside the FPGA, achieving a significant performance boost with only a small amount of hardware. During the execution of the program, the processor executes the software code until it hits a portion of the code implemented inside the FPGA coprocessor. The processor then transfers the inputs to the function to the FPGA coprocessor and tells it to begin computing the correct subroutine.
Once the FPGA has computed the function, the results are transferred back to the processor, which continues with the rest of the software code. An added benefit of the coprocessor model is that the software-to-hardware compiler does not have to support all operations from the software programming language, since complex functions such as multiplication or memory loads can be handled instead by the host processor.
This does limit the portions of the code that can be translated into hardware. However, a system that converts the complete program into hardware must either convert those operations into FPGA realizations, yielding much larger area requirements, or ban those constructs from the source code, limiting the types of operations and algorithms that can be supported. Systems such as the Nimble compiler developed at Synopsys provide a middle ground, making effective use of both FPGA and CPU components for different portions of a given computation.
Although compiling only the critical regions of a software algorithm can reduce the area requirements and avoid hardware-inefficient operations, it does introduce problems unique to those types of systems. One problem is that some mechanism must be introduced for communicating operands and results between the processor and the coprocessor. For systems like Harvard's Prisc, which view the FPGA as merely a mechanism for increasing the instruction set of the host processor, instructions are restricted to reading from two source registers and writing one result register, just like any other instruction on the processor.
However, other systems have much less tightly coupled FPGAs and require protocols between the two systems. In most of these systems the communication mechanism puts a hard limit on the amount of information that can be communicated and thus the amount of the computation that can be migrated to hardware. For example, if only two input words and a single output word are allowed, there is obviously only so much useful computation that can be performed in most circumstances.
A second important concern with compiling only a portion of a program into hardware is determining which portions of the code to so map. Obviously, the code that gets mapped to the hardware needs to contain a large portion of the run-time of the overall algorithm if the designer hopes to achieve significant performance improvements.
However, it is difficult to determine strictly from the source code where the critical portions of a program are. In general, the solutions are to profile the execution of the software on a sample input set, to find the most frequently executed code sequences, or to have the user pick the code sequences to use by hand.
But simply identifying the most often executed code sequences may not yield the best speedups. Specifically, some code sequences will achieve higher performance improvements than others when mapped to hardware. Also, some code sequences may be too complex to map into hardware, not fitting within the logic resources present in the FPGA. In general, greater automation of this decision process is necessary. |
|This article/section deals with mathematical concepts appropriate for late high school or early college.|
A definite integral is an integral with upper and lower limits.
A definite integral is the area under the curve between two points on the function. In the picture below, the yellow area is "positive" and the blue area is "negative". The integral is evaluated by adding the positive areas together and subtracting the negative areas.
If the function f(x) is real rather than complex, then the definite integral is also known as a Riemann integral.
Solving Definite Integrals
Sometimes approximations, such as Riemann sums or Simpson's rule, are used. These approximations are used when:
- The exact answer is not needed, only a close approximation. (Common in Engineering)
- The rule for integration is very complex. (Such as )
- The rule for integration is simply unknown. (Such as , the Zeta function) |
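As a rough illustration of the approximation route, here are a midpoint Riemann sum and Simpson's rule applied to f(x) = x² on [0, 1], whose exact integral is 1/3 (the function and interval are my own choices):

```python
def riemann_midpoint(f, a, b, n=1000):
    """Midpoint Riemann sum with n equal subintervals."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return total * h / 3

approx_mid = riemann_midpoint(lambda x: x * x, 0.0, 1.0)
approx_simp = simpson(lambda x: x * x, 0.0, 1.0)
```

Both land very close to 1/3; Simpson's rule is exact (up to floating point) for polynomials of degree three or less.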
The play section for the story of the 10 lepers is a great chance to reinforce counting activities or play maths based games.
Group balance – write some large numbers on a set of cards and shuffle them. Gather the children together and explain that the first number is the number of people in a group and the second number is the number of hands or feet to be touching the floor. The group then gathers and works out how to balance with the numbers given. Limiting the amount of room by using a hula hoop can make this exercise harder for older children. Small groups can just use the second number.
Musical chairs – I think more than half of the lessons on this story have suggested this game; it links in so well and it’s a firm favourite. Old favourites are often reassuring to younger or new children. If you have a small number of children, why not get them to share something that happened to them that week when they are the last person standing; make sure you thank them and then return to the game.
What’s the time Master – this is a gentler version of What’s the time, Mr Wolf?, and so the rules may well already be familiar to the children. The Master (wolf) holds a treat, biscuits perhaps, and rather than shouting out ‘dinner time’ and chasing them he shouts out ‘prayer time’ and they all have to kneel down.
Dominoes – make a giant set from foam or card and play as a group; it’s a good thing to leave out as an option for distracted children to go back to as well.
Board games – Play Ludo or a counting board game. The teams of pieces will tie in nicely to the story. |
An acute myocardial infarction, also known as a heart attack, occurs when a blockage in one or more of the blood vessels leading to the heart muscle causes a disruption in the blood flow to the cardiac tissue. Without blood flow, oxygen cannot be delivered to the heart muscle or myocardium, and the tissue begins to die. If the lack of oxygen is prolonged, irreversible tissue death results. A typical heart attack can kill roughly one billion cells and, unfortunately, the heart is unable to replace these dead cells fast enough to recover from the damage. This initial, permanent, cell death is the precursor to the long-term effects caused by a heart attack.
Changes in Structure and Function
The American College of Sports Medicine's "Exercise Management for Persons With Chronic Diseases and Disabilities" provides a thorough explanation of the long-term effects of heart attacks on the structure and function of the heart muscle. Cardiac muscle contractions, or heart beats, are a very systematic and organized event. When a portion of the myocardium dies as a result of a heart attack, the efficiency of the cardiac system deteriorates. Dead tissue does not contract or contribute to the heart beat. The muscle loses its synchronicity, and contractions become disorganized.
The remaining cells of the heart begin to take on a different shape and tend to enlarge; this is known as hypertrophy and is the heart’s attempt to counter the loss of synchronicity and organization to maintain efficiency. The enlarged cells do not contract as forcefully as normal-sized cells, and thus the ability of the heart to generate sufficient force during each contraction is hindered.
The electrical system of the heart that signals for a contraction may also become disturbed as a result of the changes in cell structure. This can lead to irregular heart rhythms, known as arrhythmias. If unable to be resolved through medication or other therapeutic means, arrhythmias generally require permanent pacemaker implantation.
Heart failure is a long-term result of the changes in muscle structure and function. For an indefinite period following a moderate heart attack, the cardiac tissue attempts to compensate for the loss of tissue by changing its structure, as noted above. This process is termed compensatory heart failure. Once these mechanisms fail, however, the heart is unable to keep up with the demands of the body, and decompensated heart failure ensues. Heart failure brings with it additional complications within the cardiovascular system beyond those incurred as a direct result of a heart attack.
The American Heart Association publishes yearly statistical information on heart attacks and heart failure in the research journal "Circulation." The 2010 publication predicts some 935,000 heart attacks in the United States in 2010 alone, which will contribute to the roughly 5.3 million Americans suffering from heart failure. There is a 20 percent one-year mortality rate for heart failure: one in five individuals diagnosed with heart failure dies within one year of the initial diagnosis.
The Role of Exercise
While the tissue damage incurred during a heart attack may not be recoverable, programs such as cardiac rehabilitation, which emphasize exercising the heart muscle, can protect the remaining heart tissue and delay the onset of heart failure. In the February issue of the "Journal of Applied Physiology," Dr. Ben Esch examines the functional and structural benefits of exercise following damage to the heart muscle. Exercise increases blood flow to the heart, increases the synchronicity of contraction and, like exercising skeletal muscle, creates a stronger heart muscle. These factors slow the structural changes that heart attacks tend to cause and allow the viable tissue to remain stronger for longer. |
When students in grades 3–8 have reading skills that are below benchmark, they lose ground more rapidly. As they move up in school, reading becomes all about learning new information and content. Providing access to human-read audiobooks can support reading skill development. Audiobooks allow students to hear explicit sounds of letters and letter patterns that form words. Audiobooks also help students engage in text and gain exposure to more words, ultimately improving vocabulary, comprehension and critical thinking skills. Here are seven reasons why audiobooks are the perfect accommodation for struggling readers: listening to an audiobook accomplishes the following.
1. Increases word exposure and improves vocabulary.
When students are offered the opportunity to have audiobooks in the classroom, their world can finally open up. Having books read aloud helps these struggling readers move beyond the decoding and right into learning. The more words they learn and incorporate into their knowledge-base, the better able they will be to access grade-level materials.
2. Builds background knowledge.
Students in grades 3-8 come to the classroom with differing experiences for sure, but those who’ve also struggled with reading arrive even less prepared. Human-read audiobooks expose students to academic vocabulary and the language of books. This exposure helps build their background knowledge, an essential component for a developing student, and it also helps develop higher-order thinking skills. The ability to build background knowledge quickly through audiobooks cannot be overstated. If students are left to read only materials at their reading level, they lose out on access to content and information that matches their capabilities and intellect. This is not only frustrating and emotionally stressful, but it also limits their learning experiences.
3. Reduces working-memory deficit.
Students who struggle with decoding and the mechanics of reading spend so much time focusing on sounding out the words that it is difficult for them to retain the information they are reading. By eliminating the focus on decoding, they are able to retain, remember, and understand the content. When students begin reading with their ears, they start building their working memory, which helps them respond to questions about the text more readily. The more often this happens, the more confident a student becomes around the one subject that has plagued them: reading. Building working memory helps make other reading tasks easier and improves reading ability.
4. Removes printed word decoding anxiety.
As soon as the pressure to read the written word is gone, students are open to learn and happy to find out they can. Audiobooks allow students to be immersed in the meaning of text. They also remove the lag time of decoding, which becomes increasingly important as texts become more rigorous. Anxiety plays a huge part in a struggling reader’s entire school experience, so the introduction and regular use of audiobooks can actually help students enjoy school more.
5. Increases comprehension.
When students can hear the story or information as a whole, read by a human being, their comprehension increases. Reading books word-by-word doesn’t help create a whole experience. Kids in grades 3-8 who can finally put all the pieces of information together in one sitting begin to make meaning of text.
6. Develops grade-level appropriate content knowledge.
Giving students access to grade-level materials by providing an audiobook accommodation improves their self-esteem and increases their participation in class and peer discussions. They are now able to work alongside their peers and get hours of time back. Just because a student can’t read the words in the same way as their peers doesn’t mean they aren’t developmentally ready to learn this information. Listening to audiobooks brings the information to the student when they are ready for it, not when they can read it.
7. Gives students educational independence.
When students get access to the content and are able to work independently, it gives them the confidence to become successful learners and control their educational outcome. Students who are given the audiobook advantage as an accommodation also have more continuity of learning in the classroom. This means peer relationships can develop normally and students can feel more like insiders.
Learn more about using audiobooks in the classroom to improve reading scores. |
A GLOSSARY OF ANTHROPOLOGICAL
AND GEOLOGICAL TERMS
ANTELIAN - The name of an Upper Paleolithic culture located in the Levant during the Late Pleistocene, identified by distinct fossil remains associated with the Cro-Magnon variation of Modern Man. (Dated: approx. 35,000-14,000 B.C.).
ATERIAN - The name of an Upper Paleolithic culture of North Africa during the Late Pleistocene, identified by distinct fossil remains associated with the Type de Mechta variation of Modern Man. (Dated: approx. 35,000-22,000 B.C.)
ATLATL - A spear thrower, a hooked, hand-held "extension" of the human arm to enhance spear throwing. It made its appearance in North Africa somewhere between 25,000 and 40,000 years ago.
ATLITIAN - The name of an Upper Paleolithic culture located in the Levant during the Late Pleistocene, identified by distinct fossil remains associated with the Cro-Magnon variation of Modern Man. Dates: 14,000-10,000 B.C.
ARCHAEOLOGY - The scientific study of life and culture of ancient peoples as by excavation of ancient cities, relics, artifacts, etc., and the inspection and analysis of anciently inhabited caves, grottos, etc. Sometimes spelled "archeology".
ARCHEOLOGY - The so-called "American spelling," commandeered in the early 1960s by processual archaeologists to distinguish them as espousing the views of the "New Archeology" as opposed to the earlier traditional methodology.
AURIGNACIAN - The name of an Upper Paleolithic culture located in Western Europe during the Late Pleistocene and associated with the Cro-Magnon variation of Modern Man. Dates: 35,000-22,000 B.C.
AZILIAN - The name of a Cro-Magnon Mesolithic culture located in Western Europe and the British Isles, existing during the Mesolithic Age (10,000-7,000 B.C.) but whose tools were more characteristic of Upper Paleolithic cultures.
BEFORE PRESENT - Because the "present" is always changing, authorities decided to make 1950 A.D. the official date to represent the "present"; therefore all B.P. ("before present") dates are routinely calculated from that date.
BRACHYCEPHALY - Literally "round-headed". An anthropological term distinguishing from dolichocephaly ("long-headed").
BRECCIA - A mass of material (e.g., earth, rocks, fossils, sand) which has been solidified by some kind of cementing matrix, such as lime salts from water.
CAPSIAN - The name of an early culture of North Africa, existing during the Mesolithic Age but using Upper Paleolithic-type tools (includes the robust Type de Mechta and the more gracile eastern "Type-A"). Dates: 10,000-7,000 B.C.
CARBON 14 - A radioisotope of carbon, used in estimating dates from carbon preserved during the last 70,000 years. The dating range is limited because the isotope gradually disappears, decaying into stable nitrogen-14.
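As an aside, the arithmetic behind radiocarbon age estimates can be sketched in Python. This is an illustration only, assuming the commonly cited carbon-14 half-life of roughly 5,730 years and a measured fraction of surviving C-14; the function name is hypothetical.

```python
import math

HALF_LIFE_C14 = 5730  # years, approximate

def radiocarbon_age(remaining_fraction):
    """Estimate age in years from the fraction of carbon-14 still present."""
    decay_constant = math.log(2) / HALF_LIFE_C14
    return math.log(1 / remaining_fraction) / decay_constant

# A sample retaining 25% of its original C-14 is about two half-lives old.
print(round(radiocarbon_age(0.25)))  # → 11460
```

At very small remaining fractions the signal becomes too weak to measure reliably, which is why the method has a practical upper age limit.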
CENOZOIC - Literally "Recent Life," the Age of Mammals beginning roughly 70 million years ago.
CHROMOSOMES - The carriers of inheritance, thread-like structures in the nuclei of cells on which genes are located. Exists in pairs, one member of each pair being supplied by each parent.
COMBE-CAPELLE - An "eastern" variation of Upper Paleolithic European Modern Man. Other named "eastern" types are Brunn Man, Predmost Man, and Grimaldi Man, all more gracile than the Cro-Magnoid types. Dates: 38,000-10,000 B.C.
CRO-MAGNOID - A tall, robust, large-brained Modern Man much resembling Cro-Magnon whose remains have been found in portions of the Americas. Diagnostic trait: short-face, long-skull combination known as "disharmonism".
CRO-MAGNON - A tall, robust, large-brained variation of Modern Man dating from 35,000 B.C. to the present occupying Western Europe and several Atlantic islands. Some remains (exhibiting the diagnostic trait known as "disharmonism") have been found in the Levant. Equivalent type in North Africa is known as Type de Mechta.
DENDROCHRONOLOGY - An "absolute dating" method achieved by comparing the ring pattern of anciently felled trees to a master chart.
DISHARMONISM - The seemingly incongruent elements composed of a short, almost "squashed," face, combined with a wide and very long (dolichocephalic) cranium: present only in Cro-Magnon and Cro-Magnoid types of Modern Man.
DOLICHOCEPHALY - Literally "long headed". An anthropological term distinguishing from Brachycephaly ("round headed").
EVOLUTION - The gradual change in species of animals or plants in succeeding generations due to mutations triggered by natural radiation (cosmic rays, nuclear radiation, x-rays, gamma rays) and certain toxic chemical processes.
FISSION-TRACK - A radiometric dating technique based on analyses of the damage tracks left by fission fragments in certain uranium bearing minerals and glasses. It has helped our understanding of the thermal history of continental crust, the timing of volcanic events, and to determine the age of archeological artifacts.
FLUORINE DATING - The process whereby natural fluorine (present in ground water) which has accumulated in fossilized bones is used to determine the age of recovered fossilized material.
FOSSIL - Any evidence of a human, animal, or plant preserved over a long period of time; but most often on this web site referring to mineralized or otherwise preserved bone material.
GENES - The units of inheritance, located on the chromosomes in the nuclei of cells. Genes and chromosomes are paired (received from each parent), and the joint action of these determines the characteristics of individual offspring.
GLACIATION - The forming of ice sheets on land, either by extension of mountain glaciers in glaciated valleys, or by the formation of continental glaciers during a glacial phase.
HOMINID - Pertaining to the family Hominidae, of man, and thus relating to characteristics or members of this family, usually as distinguished from pongid (pertaining to members of the Pongidae, or apes). Distinct in meaning from "hominoid," which refers to both families.
HOMINIDAE - The family of "man," including modern man and fossil man (in the subfamily of Homininae) and the man-apes (Australopithecenae) and likely including the Pliocene form Ramapithecus. Characterized by upright bipedalism and certain traits of dentition.
HOMININAE - A subfamily of the family Hominidae, containing the fossil men of the Pleistocene and living man (Homo erectus to Homo sapiens), but excluding the Australopithecinae.
IBERO-MAURUSIAN - An Upper Paleolithic culture (now known as "Oranian") existing in North Africa during the Late Pleistocene and associated with the human Type de Mechta. (Dated: approx. 22,000-14,000 B.C.)
MAGDALENIAN - An Upper Paleolithic culture of Western Europe existing during the Late Pleistocene and associated with the Cro-Magnon variation of Modern Man. Dates: 14,000-10,000 B.C.
MESOLITHIC - Middle Stone Age generally, but more specifically referring to a group of transitional cultures between the end of Paleolithic culture and the appearance of Neolithic activities (the latter including extensive farming, extensive use of clay pottery, prolific domestication of animals, etc.). Dated traditionally 10,000-7,000 B.C.
MESOZOIC - Middle Life, the Age of Reptiles, the geological period dominated by dinosaurs, but including the earliest development of mammals.
MIOCENE - The fourth division of the Cenozoic or Tertiary era, lasting from about 25-10 million years ago, during which the earliest apes made their appearance.
MOUILLIAN - The name of an Upper Paleolithic culture of North Africa existing in the Late Pleistocene, identified by distinct fossil remains associated with the Type de Mechta variation of Modern Man. (Dated: approx. 14,000-10,000 B.C.)
MOUSTERIAN - A Late Pleistocene complex of stone industries of the Lower Paleolithic, exclusively associated with Neanderthal Man (technically known as Homo sapiens neanderthalensis). Dates: 250,000-35,000 B.C.
MUTATION - A permanent change in any gene in an individual which in turn produces a change that will be passed on to any resulting offspring.
NATUFIAN - The name of a Cro-Magnon Mesolithic culture located in the Levant during the Mesolithic Age (10,000-7,000 B.C.). Diagnostic traits: incipient agriculture, crude pottery, and possibly dog domestication.
NEANDERTHAL - A stocky, powerful, large-brained type of man known technically as Homo sapiens neanderthalensis who lived during the Lower Paleolithic and made rather crude stone tools. They were the first to bury their dead.
NEOLITHIC - New Stone Age, the age before the knowledge of metallurgy, but in which pottery and farming flourished, and certain animals (goats, sheep, etc.) were domesticated. Small villages existed. Dated traditionally from 7,000-3500 B.C.
ORANIAN - The name of an Upper Paleolithic culture (formerly called "Ibero-Maurusian") existing during the Late Pleistocene, located in North Africa and associated with Type de Mechta variation of Modern Man. (Dated approx. 22,000-14,000 B.C.)
PALEOLITHIC - Old Stone Age, divided into Lower Paleolithic (pre-Modern Man) and Upper Paleolithic (Modern Man), the latter beginning about 38,000 B.C. and distinguished from the former by utensils made using a flake-blade technique.
PERIGORDIAN - An Upper Paleolithic wide-spread European culture, usually divided into Lower and Upper, which existed from beginning to end of the Upper Paleolithic Age (38,000-10,000 B.C.). The physical type associated with the Perigordian is known as the "eastern," more gracile variation of Modern Man (sometimes generalized as "Combe Capelle").
PLEISTOCENE - The earlier of the two divisions of the Quaternary era, and last division of geological time before the Recent (Holocene): the Ice Age, lasting from perhaps 3,000,000-12,000 years ago. All fossil humans belong to this period.
PLIOCENE - The fifth division of the Cenozoic or Tertiary era, lasting from about 12 million to 3 million years ago. The ancestors to modern apes belong to this period.
PRIMATES - An order of mammals to which man, apes, monkeys, and prosimians (lemurs, lorises, etc.) belong.
SOLUTREAN - An Upper Paleolithic culture of Western Europe existing during the Late Pleistocene, identified by distinct fossil remains associated with the Cro-Magnon variation of Modern Man. Dates: 22,000-14,000 B.C.
THERMOLUMINESCENCE - (TL) dating by means of measuring the accumulated radiation dose over the time elapsed since a clay artifact was originally fired. As the material is heated during testing, a weak light signal proportional to the radiation dose is produced. The amount of light emitted determines the firing date.
TYPE-DE-MECHTA - A tall, robust, large-brained variation of Modern Man occupying North Africa from 35,000 B.C. to the present. Diagnostic trait: short-face, long-skull combination known as "disharmonism". The equivalent of Cro-Magnon in Europe.
URANIUM-THORIUM - A radiometric dating technique commonly used to determine the age of certain carbonate materials. Age is determined from the degree to which equilibrium has been restored between the radioactive isotope thorium-230 and its radioactive parent uranium-234 within a sample.
Atlantek Software Inc., Version 1.0
Compiled by R. Cedric Leonard
Last update: 15 May 2009. |
Just like us, dogs see three-dimensional objects in our world. This includes people, other animals and inanimate objects with height, width and depth. Questions remain, however, about how well dogs can see television or other two-dimensional objects that lack depth. To understand why, we first need to look at how dogs see things differently than people.
Setting Their Sights
Retinas allow the eyes of both people and pets to take in light. To sort through the light, retinas have what are called rods and cones. Colors are seen through the cones, while rods allow for strong night vision and the ability to see motion. Not surprisingly, dogs have more rods in their retinas than humans. As a result, dogs see better at night and are better at sensing motion. On the other hand, dogs have fewer cones than we do, meaning they do not see colors in the same way or with the same amount of variation.
Dogs Watching Television
Many people wonder if dogs can really watch or understand the two-dimensional images found on television. With today's technology, dogs might be able to see the same things we do on TV. Part of this is due to the increased flicker rate for televisions. The flicker rate refers to the number of images projected on the screen per second. When set high enough, the flicker rate tricks the brain into seeing a film rather than just a blinking picture. Today’s televisions are capable of producing about 70 images per second, a rate that can allow dogs to perceive the images as a film. People, in contrast, are able to perceive a film on television at just 20 to 50 frames per second, according to Sciencenordic.com.
TV Just for Dogs
The notion of dogs not being able to watch television is becoming as outdated as the old, clunky televisions of years past. Some television stations have programming just for dogs. The content takes into account what dogs can see and hear as well as what interests them. Additionally, the programs are tailored to what dogs need. Programs featuring soothing images and sounds are used to calm anxious dogs, while other programs have exciting images and sounds to keep dogs stimulated and alert.
Since dogs are keen at detecting movement, that might explain why they appear to watch television. A dog might jump off of the couch or bark when he sees another dog on TV, for example. One theory holds that dogs can see objects running around on TV, but they really do not understand them. It's simply the movement that is attracting their attention. Even so, the evidence suggests dogs can see both three-dimensional and two-dimensional objects. Other than that, only our dogs really know what they are thinking and seeing.
When it comes to high cholesterol, what you can’t see can hurt you.
High cholesterol has no symptoms, so many people may not know if they are at risk. But approximately 1 in every 6 adults — that’s 17 percent of the U.S. adult population — has high blood cholesterol.
Where cholesterol comes from
Your body makes cholesterol, a waxy, fat-like substance that travels through your bloodstream. Cholesterol has important functions, including helping make hormones, vitamin D and substances to help you digest some foods.
Cholesterol also is in some of the food we eat, including fatty meats, shrimp and dairy products.
If you have high blood cholesterol, over time it can harden and clog your arteries. This in turn can put you at risk of developing heart disease, the top killer of women and men in the United States. In fact, people with high cholesterol have about twice the risk as others of having heart disease.
You can’t control all of your risk factors for high cholesterol. For instance, cholesterol levels tend to go up as we age, and high cholesterol can run in families. But there are lifestyle factors we can control.
When to get tested
The federal Centers for Disease Control says cholesterol is a health indicator that needs to be monitored just like blood pressure. Everyone age 20 and older should have their cholesterol measured at least once every five years, according to the CDC. All children and adolescents should have their cholesterol checked at least once between the ages of 9 and 11, and again between 17 and 21.
A simple blood test called a lipid profile can tell you your cholesterol levels. Here’s what the test will measure:
- Low-density lipoproteins: This is called “bad” cholesterol and commonly referred to as LDL. LDL cholesterol makes up the majority of the body’s cholesterol, and high levels can lead to heart disease, heart attack and stroke.
- High-density lipoproteins: This is called “good” cholesterol and referred to as HDL. Scientists believe that HDL cholesterol absorbs bad cholesterol and carries it to the liver, where it is flushed from the body.
- Triglycerides: This is a different type of fat that doctors usually check as part of a cholesterol test. High levels of triglycerides can raise the risk of heart disease.
Here are the cholesterol and triglyceride levels you want to aim for:
- Total cholesterol: Less than 200 mg/dL
- LDL, or “bad” cholesterol: Less than 100 mg/dL
- HDL, or “good” cholesterol: 60 mg/dL or higher
- Triglycerides: Less than 150 mg/dL
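For readers who like to track their own lab results, the targets above can be sketched as a small Python check. This is a rough illustration, not medical advice, and the field names below are assumptions for the example, not from any standard:

```python
# Target checks for each measurement, in mg/dL, matching the list above.
TARGETS = {
    "total": lambda v: v < 200,          # total cholesterol
    "ldl": lambda v: v < 100,            # "bad" cholesterol
    "hdl": lambda v: v >= 60,            # "good" cholesterol (higher is better)
    "triglycerides": lambda v: v < 150,
}

def out_of_range(profile):
    """Return the names of any measurements that miss their target."""
    return [name for name, ok in TARGETS.items() if not ok(profile[name])]

profile = {"total": 210, "ldl": 95, "hdl": 55, "triglycerides": 120}
print(out_of_range(profile))  # → ['total', 'hdl']
```

Note that HDL is the one value where higher is better, which is why its check is reversed relative to the others.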
If your cholesterol is high, lifestyle changes may be enough to lower it. If not, medicine could be necessary. Your age, gender and blood pressure could all factor into a decision.
Talk to your health care provider about what your numbers mean for you. |
How to Develop a Child’s Interest in Books and Reading
Research suggests that children who enjoy books are more likely to want to learn to read and will keep trying even when they find it hard. Therefore, it is important to keep their interaction around books a positive one.
Books come in all shapes and sizes and are made from various materials.
- Have a few bath books. Even if most of the time your baby or toddler just chews on them, they will be handling a book and possibly turning the pages, giving you the opportunity to talk about the pictures with them.
- Try to have a range of cloth books, hard books and suitable picture books around the house and in your child’s play area so they can pick them up at any time. This way they can explore them for themselves (even if it is to give them a quick chew), not just as an adult sharing activity.
Through Drawing & Writing
Drawing and making your own simple story book can be a great way of getting your child interested in books and reading. Children love to hear stories about themselves, so simple homemade books about them are an excellent introduction to books and reading, and a very effective approach for encouraging reluctant readers.
- Draw simple pictures with, or for, your child and talk through what you are drawing, for example a picture of a house with matchstick people. The pictures could be telling the events of the day, for example going to the park or walking the dog.
- If drawing is not your thing (like me) use fuzzy felt instead to make a picture to share the story of their day.
Through Songs and Nursery Rhymes
Children’s songs and nursery rhymes cover a wide range of concepts. Some work through everyday sequences, such as ‘Here We Go Round the Mulberry Bush’, which uses the phrase ‘This is the way we…’ to order the events of getting up in the morning. Some introduce concepts such as size, numbers, colours and shapes, while others tell stories, for example ‘Baa Baa Black Sheep’ or ‘We’re Going on a Bear Hunt’ by Michael Rosen & Helen Oxenbury.
Sharing books that enable you and your child to sing along with (retell) their favourite songs and nursery rhymes means that the child knows what to expect and that they are going to have fun and enjoy the experience. Over time you will then be able to introduce books with new songs and nursery rhymes, building on your child’s positive experience of other book-sharing sessions with you. Remember, you may have to revert to the old favourites time and time again, but stick with it.
Through Book Sharing
What books should you choose?
- Pick children’s books you enjoy
- Pick books your child enjoys
- Give your child time to choose and look at books (your local library is a great place for this).
- Follow your child’s interests
- Use ‘true’ books and stories (not those specifically written for developing phonics knowledge or rigidly structured reading scheme books for teaching and learning to read, these will come from school and serve a different purpose).
Book Sharing Tips
- Remember that a child’s age, personality, mood and stage of development will affect how they interact with the book.
- Keep the interaction around the book positive and fun; if you are not enjoying it, your child will pick up on this.
- Keep your child involved. Remember, you do not have to read the book word for word; it is the positive sharing experience that is important.
- If your child does not seem interested in reading or sharing books, start slowly by sharing or reading one or two pages at a time. Keep the interaction positive and over time their interest will grow.
- If your child is showing no interest, then try again another time.
- When reading a book with your child that you really like, tell them that you like the book or story. Your child may not agree with you and may insist on their own favourite, which after the 500th reading you may be bored with, but keep with it; there will be another favourite book!
- Try to share books throughout the day, not just at nap and bed times. I found having a couple of books in my bag really useful, as I could then share a book with one of my girls while the other was swimming, or with them both while waiting for the bus or sharing tea and cake in a café.
- Read with your child every day. There are some days where this just seems impossible to manage. Remember, one minute is better than no minutes, and it does not have to be a book you are reading; there is lots of environmental reading material you could use, such as paintings/pictures, posters, advertisements, road signs and maps.
Troctolite is an igneous rock consisting of plagioclase feldspar and olivine. It is a member of the gabbroic rocks family and is compositionally similar to gabbro, in which plagioclase usually predominates over pyroxene.

The main difference is that troctolite contains little or no pyroxene, while pyroxene is a major mineral in gabbro. Troctolite can therefore be described as pyroxene-depleted gabbro.
Electrons are strange and unusual little fellows. Strange things happen when too many or too few of the little fellows get together. Some things may be attracted to other things or some things may push other things away. Occasionally you may see a spark of light and sound. The light and sound may be quite small or may be as large as a bolt of lightning. When electrons gather, strange things happen. Those strange things are static electricity. Now that you’ve spent a few lessons learning about the strange world of the atom (Unit 3 & Unit 8), it’s time to play with them.
A Note about Safety

A lot of folks get nervous around electricity. You can't always 'see' what's going on (will I get a shock when I touch that?), and many people have a certain level of fear around anything electrical in general. I mean, electrons are small and you can't see electricity, but you can certainly see its effects (like with blenders, door bells, and alarm clocks).
Electricity is predictable. The voltages and amperage we're working with in the unit are way below the "caution" limit, and the batteries we recommend won't leak acid if your kids connect them the wrong way. (And you should expect them to short-circuit things - it's part of the learning process.) I am going to help you set up a safe learning environment so your kids are free to experiment without you losing sleep over it.
I'm going to walk you through every step of the way, and leave you to observe the reactions and write down what you notice. We'll learn how to turn on electrical components, like buzzers and motors, and then I'll show you how to connect them together to build robots. It's not enough just to learn about these ideas - you have to use them in a way that's useful (and practical). That's when the learning really sticks to their brain.
One of the best things you can do with this unit is to take notes in a journal as you go. Snap photos of yourself doing the actual experiment and paste them in alongside your drawing of your experimental setup. This is the same way scientists document their own findings, and it's a lot of fun to look back at the splattered pages later on and see how far you've come. I always jot down my questions that didn't get answered with the experiment across the top of the page so I can research it more later. Are you ready to get started? |
Adolf Hitler was the undisputed leader of the National Socialist German Workers Party—known as Nazis—since 1921. In 1923, he was arrested and imprisoned for trying to overthrow the German government. His trial brought him fame and followers. He used the subsequent jail time to dictate his political ideas in a book, Mein Kampf—My Struggle. Hitler’s ideological goals included territorial expansion, consolidation of a racially pure state, and elimination of the European Jews and other perceived enemies of Germany.
After his release from prison in 1924, Hitler began seeking political power through legal means, such as elections, rather than through violent attempts to overthrow the government.
Modern propaganda techniques—including strong images and simple messages—helped propel Austrian-born Hitler from a little known extremist to a leading candidate in Germany’s 1932 presidential elections.
A common misconception about Hitler’s rise to power is that he was voted into office. In January 1933, President Paul von Hindenburg appointed Hitler Chancellor, the head of the German government.
Early Years and World War I
Adolf Hitler (1889–1945) was born on April 20, 1889, in the Upper Austrian border town Braunau am Inn. In 1898, the Hitler family moved to Linz, the capital of Upper Austria. Seeking a career in the visual arts, Hitler fought bitterly with his father, who wanted him to enter the Habsburg civil service.
Hitler lived in Vienna between February 1908 and May 1913, when he left for Munich. There, he drifted and supported himself by painting watercolors and sketches until World War I gave new direction to his life. He joined the army. During the war, he was wounded twice (in 1916 and 1918) and was awarded several medals.
In October 1918, after he was partially blinded in a mustard gas attack near Ypres in Belgium, Hitler was sent to a military hospital in Pasewalk. News of the November 11, 1918, armistice reached him there as he was recuperating. Released from the hospital in November 1918, Hitler returned to Munich.
In 1919, he joined the Information Office of the Bavarian Military Administration. This office gathered intelligence on civilian political parties and provided anti-Communist “political education” for the troops. In August 1919, as a course instructor, Hitler made his first virulent antisemitic speeches. A month later, he first expressed an antisemitic, racist ideology on paper, advocating removal of Jews from Germany.
Leader of the Nazi Party
Hitler joined what would become the Nazi Party in October 1919. He helped devise the party political program in 1920. The program was based on racist antisemitism, expansionist nationalism, and anti-immigrant hostility. By 1921, he was the absolute Führer (Leader) of the Nazi Party. Membership in the Nazi Party swelled in two years to 55,000, supported by more than 4,000 men in the paramilitary SA (Sturmabteilung; Storm Troopers).
Rejecting political participation in Weimar elections, Hitler and the Nazi Party leadership sought to overthrow the government of Bavaria, a state in the Weimar Republic. The Beer Hall Putsch took place on November 9, 1923. After the putsch collapsed, a Munich court tried Hitler and other ringleaders on charges of high treason. Hitler used the trial as a stage to attack the system of parliamentary democracy and promote xenophobic nationalism. He was found guilty but received a light sentence and was released after serving just one year in detention. He used his time in prison to begin writing Mein Kampf (My Struggle), his autobiography, published in 1926. In the book, he unveiled an explicitly race-based nationalist, social Darwinist, and antisemitic vision of human history. He advocated dictatorship at home, military expansion, and seizure of “living space” (Lebensraum) in the East, which he intended to cleanse of its indigenous and “inferior” populations.
After his release from prison, Hitler reorganized and reunified the Nazi Party. He changed its political strategy to incorporate engagement in electoral politics, programs targeting new and alienated voters, and bridge building to overcome traditional conflicts in German society.
Using language fashioned to reflect the fears and hopes of potential voters, the Nazis campaigned for
- Renewing national defense capacity
- Restoring national sovereignty
- Annihilating Communism
- Overturning the Versailles Treaty
- Eliminating foreign and Jewish political and cultural influence in Germany and reversing the moral depravity that it allegedly created
- Generating economic prosperity and creating jobs
Testing this strategy in the national parliamentary elections of 1928, the Nazis received a disappointing 2.6 percent of the vote.
With the onset of the Great Depression in Germany in 1930, Nazi agitation began to have increasing impact on the German population. When the majority coalition government collapsed in March 1930, the three middle-class parties invoked emergency constitutional provisions to hold extraordinary parliamentary elections, hoping to manufacture a governing majority that would permanently exclude the Social Democrats and the political Left from governing. When this maneuver failed, German governments in 1930-1932 resorted to ruling by presidential decree rather than parliamentary consent.
The Nazis made their electoral breakthrough in 1930 by combining modern technology, modern political market research, and intimidation through violence for which the leadership could deny responsibility. The party’s youthful energy untainted by past association with democratic governments also helped them break through electoral barriers. They captured nearly a fifth of the popular vote, attracting new, unemployed, and alienated voters.
Hitler was a powerful and spellbinding speaker who attracted a wide following of Germans desperate for change. The Nazi appeal grew steadily in 1931 and 1932, creating a sense of inevitability that Hitler would come to power and save the country from political paralysis, economic impoverishment, cultural atrophy, and Communism. After running for President of the Republic in spring 1932, Hitler and the Nazis captured 37.3% of the vote in the July 1932 elections. They became the largest political party in Germany. Constant electioneering after 1930, accompanied by politically-motivated street violence, swelled the membership of the Nazi Party to 450,000, the SA to more than 400,000, and the SS to more than 50,000 in 1932.
Chancellor of Germany
The Nazi share of the vote declined to 33.7% in the November 1932 parliamentary elections. The decrease blunted Hitler’s appeal and created a political and financial crisis in the Nazi Party. Former Chancellor (June-November 1932) Franz von Papen rescued Hitler. Von Papen believed that Nazi electoral losses rendered them more susceptible to control by the more experienced but unpopular conservative elites. Willing to risk a Nazi-German nationalist coalition with Hitler as Chancellor, von Papen reached agreement with Hitler and the German Nationalists in early January 1933. He persuaded President Paul von Hindenburg that Germany was out of other options. Reluctantly, von Hindenburg appointed Hitler Chancellor on January 30, 1933.
Following his appointment as chancellor, Adolf Hitler began laying the foundations of the Nazi state. He seized every opportunity to turn Germany into a one-party dictatorship.
German president Paul von Hindenburg died in August 1934. Hitler had secured the support of the army with the Röhm purge of June 30, 1934. He abolished the presidency and proclaimed himself Führer of the German people (Volk). All military personnel and all civil servants swore a new oath of personal loyalty to Hitler as Führer. Hitler also continued to hold the position of Reich Chancellor (head of government).
Series: Adolf Hitler
Series: The Weimar Republic
Critical Thinking Questions
- What qualities and characteristics of leadership did Hitler seem to have and to demonstrate?
- What other societal factors and attitudes contributed to the rise of Hitler?
- How can knowledge of the events in Germany and Europe before the Nazis came to power help citizens today respond to threats of genocide and mass atrocity? |
On this day in 1777, during the American Revolution, the Continental Congress adopts a resolution stating that “the flag of the United States be thirteen alternate stripes red and white” and that “the Union be thirteen stars, white in a blue field, representing a new Constellation.”
The national flag, which became known as the “stars and stripes,” was based on the “Grand Union” flag, a banner carried by the Continental Army in 1776 that also consisted of 13 red and white stripes. According to legend, Philadelphia seamstress Betsy Ross designed the new canton for the flag, which consisted of a circle of 13 stars and a blue background, at the request of General George Washington. Historians have been unable to conclusively prove or disprove this legend.
With the entrance of new states into the United States after independence, new stripes and stars were added to represent new additions to the Union. In 1818, however, Congress enacted a law stipulating that the 13 original stripes be restored and that only stars be added to represent new states.
On June 14, 1877, the first Flag Day observance was held on the 100th anniversary of the adoption of the American flag. As instructed by Congress, the U.S. flag was flown from all public buildings across the country. In the years after the first Flag Day, several states continued to observe the anniversary, and in 1949 Congress officially designated June 14 as Flag Day, a national day of observance. |
Here's a list of over 30 Science Fair ideas to get you started. Then download science experiments, and watch experiment videos to inspire your project.
What causes the volcano to erupt in the volcano science experiment is the reaction between the baking soda and vinegar. Baking soda is a base and vinegar is an acid. When a base and an acid are mixed, the reaction releases carbon dioxide gas, which causes the mixture to foam up and overflow just like a shaken soda bottle.
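For anyone curious about the underlying chemistry, the overall reaction can be written out like this (baking soda is sodium bicarbonate and vinegar is dilute acetic acid):

```text
NaHCO3 (baking soda) + CH3COOH (vinegar) → CH3COONa (sodium acetate) + H2O (water) + CO2 (carbon dioxide gas)
```

The escaping CO2 gas is what drives the foamy overflow.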
How Does the Elephant Toothpaste Science Experiment Work? This experiment shows a very impressive and fast chemical reaction! Hydrogen peroxide is a compound of hydrogen and oxygen (H2O2). In this experiment, an enzyme in yeast (catalase) acts as a catalyst that helps release oxygen molecules from the hydrogen peroxide solution.
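Written as an equation, the decomposition looks like this; the yeast's catalase speeds up the reaction but is not itself consumed:

```text
2 H2O2 (hydrogen peroxide) → 2 H2O (water) + O2 (oxygen gas)
```

The released oxygen gets trapped in the dish soap, producing the signature foam.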
3-2-1 Blast Off! This simple and fun science experiment teaches children about action and reaction. Using everyday household items, children learn how the force of air moving in one direction can propel a balloon in the opposite direction, much like a rocket!
Sep 27, 2022 · Science Objectives The Advanced Colloids Experiment-Temperature-7 (ACE-T-7) experiment involves the design and assembly of complex three-dimensional structures from small particles suspended within a fluid medium. These so-called “self-assembled colloidal structures”, are vital to the design of advanced optical materials and active devices.
Oct 24, 2019 · 5–6-year-old kids can try this experiment by noting the time, and they can learn how much oxygen is required to keep the candle burning. They can learn about smoke and wax. Kids aged 8 and up can learn the chemical equations, how to balance them, and the detailed science behind the candle-and-glass experiment.
Sep 27, 2022 · The Advanced Colloids Experiment-Heated-2 (ACE-H-2) experiments utilize optical microscopy for time- and space-resolved imaging of spherical colloids of various sizes and concentrations. Colloidal assembly and the structure’s stability can be controlled by mediating the particles’ electrostatic charge and surface chemistry. |
Technology is developing increasingly fast. Even as the world faces new challenges, technology offers solutions that can help humanity move forward. Here are ten new technologies that promise to leave a lasting impact on our world.
- Gene Editing
Gene editing allows scientists to change defective genes in a human body. Many human diseases, such as Huntington’s disease and cystic fibrosis, are caused by errors in humans’ gene sequences. Gene editing technologies like CRISPR allow scientists to splice open gene sequences and edit specific, flawed genes. The technology is still evolving—scientists recently discovered how to do this using only electrical fields. Gene editing may soon become a standard medical procedure. Using it, doctors can cure genetic diseases that were once fatal.
- Brain-Controlled Computers
By measuring electrical activity within the brain, some computers can now allow users to type or even play ping pong using only their thoughts. It’s easy to see how this technology could affect people as it develops. It could help disabled people move a wheelchair, surf the internet, or work online using only their thoughts. It could also enable easier multitasking and instantaneous access to computing.
- Desalination
Although our world is covered with water, most of it is saltwater. As the world’s population grows, finding enough clean freshwater for everyone will be difficult. Desalination offers an exciting solution: it converts saltwater into freshwater. While this technology is far from new, recent innovations have made it far more useful. When saltwater is converted into freshwater, a brine full of salt and metals is left over. Recent technological innovation has found a way to extract metals from that brine. There’s a possibility that those extracted metals could be sold, making desalination an affordable solution to our water crisis.
- Lab-Grown Organs
Our society faces a worrying shortage of organs, and people die every day while waiting for transplants. However, scientists have discovered how to grow organs using stem cells. Recently, scientists were able to grow kidney tissue by using human stem cells implanted in mice. Those stem cells became working kidney-like structures. This technology has the potential to save millions of lives every year.
- Wearable Electronics
Already, wearable electronics like Fitbit help people around the world gather data on their health. These electronics measure metrics like heart rate, physical activity, and stress levels. As wearable electronics become more common, the healthcare industry will be able to use this improved health data. More patients will be able to share their electronic data to help doctors diagnose their health issues. This will likely lead to better patient outcomes, as well as helping doctors correctly diagnose patients and avoid medical malpractice lawsuits.
- Blockchain
Although it was originally developed to support Bitcoin, blockchain technology has become influential well beyond cryptocurrency. A blockchain works like a giant online transaction ledger. It records every single transaction that occurs, and past data cannot be altered without invalidating all later records. This makes data entered on the blockchain extremely secure. What’s more, a blockchain is maintained by a vast network of computers rather than being held in a centralized location. This makes it much harder to hack. Blockchain could become the new foundation of online payments, contracts, government data, and business development. It could revolutionize the way we use and understand the Internet.
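The "ledger whose past cannot quietly change" idea can be sketched in a few lines of Python. This is a toy illustration, not any real blockchain's API: the block fields, the `make_chain`/`is_valid` helpers, and the transaction strings are all made up for the example.

```python
import hashlib

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{data}|{prev_hash}".encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a chain where every block stores the previous block's hash."""
    chain, prev = [], "0" * 64  # an all-zero "genesis" previous hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Re-derive every hash; editing an old block breaks all later links."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["index"], block["data"], block["prev_hash"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = make_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(is_valid(ledger))                    # True
ledger[0]["data"] = "Alice pays Bob 5000"  # tamper with an old record
print(is_valid(ledger))                    # False
```

Because each block's hash covers the previous block's hash, rewriting one old entry invalidates every block after it, which is exactly why the ledger's history is so hard to alter.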
- Self-Driving Cars
Once an ambitious dream confined to science fiction, self-driving cars are now a reality. They promise to be safer and more environmentally friendly, offering a solution to motor accidents and drunk driving. They also provide transportation to disabled and elderly people who couldn’t ordinarily drive. As companies such as Tesla fine-tune the technology, self-driving cars promise to become the norm in societies around the globe.
- Improved Artificial Intelligence
As artificial intelligence becomes more and more sophisticated, it has the potential to dramatically change the way we live. Soon, AI may be completely integrated into people’s day-to-day lives. It could track budgets, advise fashion choices, plan menus, and order groceries. Artificial intelligence is making great leaps forward in language processing and social awareness, and it could expand into innumerable areas of human life.
- Understanding Genetic Risk
With scientists’ increasing knowledge of genetics, they are able to accurately predict people’s risk for heart attacks, tobacco addiction, and other problems. They can perform a DNA test that offers a set of predictions about a person and their health. This could reform the healthcare industry. For example, if a woman knows that she is at a very high risk for breast cancer, she could start getting mammograms earlier and more frequently. These tests are somewhat controversial because they could also be used to predict traits such as intelligence. However, they could save numerous lives.
- Instant Language Translation
Google recently developed Pixel Buds—earbuds that, when paired with a phone and the Google Translate app, can immediately translate languages. One person wears the earbuds and speaks, while the other holds a phone which plays the translated message. This allows the participants to quickly converse and maintain eye contact during the conversation. This invention is extremely promising for breaking down cross-cultural communication barriers. Enabling simple, easy communication across language barriers could help people grow closer together in a divisive era.
These technologies have the potential to completely revolutionize the way we live. When looking towards the future, we need to keep our eye on these developments. |
5 Ways to Help Patients Determine the Difference Between Colds and Allergies
Many patients come to your location with symptoms that could be related either to a cold or to allergies, because the two produce very similar symptoms. Congestion, a runny nose, a sore throat and a cough are just a few of the reasons that waiting rooms fill up at certain times of the year.
Educating patients about the ways that a cold might differ from allergies may help to lift some of the burden from a practice that has too many patients waiting for diagnosis and not quite enough medical professionals.
Causes of the Common Cold
A cold is most commonly caused when individuals are exposed to a virus. Over 200 known viruses have been identified as causes of colds. The most common of these are rhinoviruses, which are estimated to cause anywhere between 10 and 14 percent of adult colds. These viruses use the mouth, nose or eyes as an entry point. They travel through the air via droplets produced when someone carrying the virus coughs or sneezes. They can also travel on hands, leaving contamination on anything from doorknobs to toys and phones.
The common cold is referred to by medical professionals as a viral upper respiratory tract infection. Individuals never build a complete resistance to colds as new viruses are continually developing.
Colds occur with the most frequency in cool-weather months. This may be due to the fact that most modern homes and buildings depend on recycled warmed air to maintain comfort instead of introducing fresh (cold) air continually.
Causes of Allergies
There may be as many causes for allergy symptoms as there are types of cold-causing viruses. Some of the largest allergens include:
- Insect stings or bites
- Animal dander
In addition to these well-known allergens, there are some lesser-known items that cause similar symptoms. Some of these are:
- Household chemicals
- Dust mites
With such a wide variety of items that may be causing allergies, it can be hard for an individual to know exactly what they are allergic to. Allergy testing is available that can help pinpoint what allergens individuals should work to avoid.
Such testing might involve a blood test, skin prick test or patch test. Food allergies may require a patient to undergo an elimination diet or a food challenge test.
Comparison of Cold and Allergy Symptoms
Arming your patients with advice about how to figure out if they are suffering from a cold or from allergies may be beneficial for both doctor and patient. Here are 5 things that patients may want to consider:
- Duration – Typically, symptoms of a cold last between 3 and 14 days. Any longer than that and chances are high that the patient is suffering from an ongoing allergic reaction.
- Mucus Color – The typical rule of thumb is that clear mucus tends to be due to allergies while mucus that is yellow or green is more likely to be from a cold.
- Time of Year – Allergies, especially the seasonal variety, tend to be highest in the spring, summer and fall of the year. Patients who experience the same type of symptoms at the same time each year may want to undergo allergy testing.
- Salute – Once a patient is introduced to the “allergic salute”, they may begin to see it everywhere. This telltale allergy sign is common among children who repeatedly push up on their nose using the palm of their hand to make it stop itching. Some may even develop a small crease or line across the bridge of their nose due to the repeated upward pressure. At times, children who have a cold may use this wiping method as well, but a constantly repeated action is far more likely to be because of allergies.
- Fever – Any sign of a fever when an individual has cold or allergy symptoms tends to point directly toward a cold, as the body raises its temperature in an effort to fight off the virus or bacteria. Allergies typically do not produce any rise in body temperature.
Informed Patients are Happy Patients
The more information you can share with your patients, the happier they may be. For instance, if they are able to decide that they most likely have a cold, they will be able to save time driving to the office for advice on how to deal with allergy symptoms. Alternatively, patients suffering from early cold symptoms are sure to benefit when they make an appointment early in the illness. An in-house dispensary can only help to raise patient satisfaction rates should they need any type of medication to offset the symptoms of their cold.
Boost your exam performance with free NCERT Solutions available at Aasoka. Students can clear their doubts about each chapter with the questions and answers available on the platform. Access the NCERT Solutions for Class 11 anytime, anywhere, and study at your own pace. Professionals followed the latest CBSE syllabus and guidelines while preparing these solutions. They meet students' learning requirements, which in turn helps them score the marks they desire.
“Equality” chapter of Class 11 Political Science explains the concepts including different political philosophies such as socialism, Marxism, feminism, and liberalism; the significance of equality, what is equality, various dimensions of equality, etc.
Some people argue that inequality is natural while others maintain that it is equality which is natural and inequalities which we notice around us are created by society. Which view do you support? Give reasons.
Some people argue that inequality is natural because nature has endowed different men with different capacities. One individual is born with the genius of a poet, another with that of a musician, a third with that of an engineer. The vast majority do not possess special aptitude of any kind. But in our opinion equality is natural and inequalities which we notice around us are created by society. The concept of equality implies that all people, as human beings are equal. Hence, they all are entitled to the same rights and opportunities to develop their skill and to achieve their goals. Social inequalities are created by society.
There is a view that absolute economic equality is neither possible nor desirable. It is argued that the most a society can do is to try and reduce the gaps between the richest and poorest members of society. Do you agree?
The popular meaning of equality is that all men are equal, that all should get equal income and equal treatment. We fully agree with the view that absolute economic equality is neither possible nor desirable. Absolute equality of wealth or income has never existed in a society. There is not a single country in the world where absolute economic equality exists. Even in Communist countries like China, North Korea, etc., Communists have not succeeded in establishing absolute economic equality.

Laski defines economic equality in a limited sense as consisting in equal opportunities for everyone to develop his natural faculties and power. With equal opportunities, inequalities may continue to exist between individuals, but there is the possibility of improving one's position in society with sufficient effort and determination. The government must try to reduce great inequalities in wealth. The concentration of property in the hands of a few is fatal to the purposes of the state, and the socialists are right in insisting that either the state must dominate property or property will dominate the state.

Economic equality can exist when all people have reasonable economic opportunities to develop themselves. Adequate scope for employment, reasonable wages, adequate leisure and other economic rights create economic equality. Means of production and distribution should be controlled in such a way that they serve public welfare. In a nutshell, economic equality means that there should not exist wide gaps of income among the members of society. Wealth should not be concentrated in only a few hands. Gross inequalities of wealth should not exist at all. Everybody should have an economic minimum.
Match the following concepts with appropriate instances :
- Affirmative action
- Equality of opportunity
- Equal Rights
1. (b) 2. (c) 3. (a)
A government report on farmers’ problems says that small and marginal farmers cannot get good prices from the market. It recommends that the government should intervene to ensure a better price but only for small and marginal farmers. Is this recommendation consistent with the principle of equality ?
The government report's recommendation that the government should intervene to ensure a better price for small farmers is consistent with the principle of equality, because small farmers cannot compete with big farmers. Hence, government intervention to help small farmers is in accordance with the principle of equality.
Which of the following violates the principle of equality? Why?
- Every child in class will read the text of the play by turn.
- The Government of Canada encouraged white Europeans to migrate to Canada from the end of the Second World War till 1960.
- There is a separate railway reservation counter for senior citizens.
- Access to some forest areas is reserved for certain tribal communities.
Encouragement to white Europeans to migrate to Canada is a violation of the principle of equality because it is a clear-cut case of discrimination on the basis of race and colour.
Here are some arguments in favour of the right to vote for women. Which of these are consistent with the idea of equality ? Give reasons.
- Women are our mothers. We shall not disrespect our mothers by denying them the right to vote.
- Decisions of the government affect women as well as men, therefore, they also should have a say in choosing the rulers.
- Not granting women the right to vote will cause disharmony in the family.
- Women constitute half of humanity. You cannot subjugate them for long by denying them the right to vote.
- Not consistent with principle of equality.
- Consistent with principle of equality.
- Consistent with principle of equality because both women and men should be given right to vote.
- Consistent with principle of equality because it is democratic. If women are not given the right to vote, a large section of the population will remain unrepresented in the government and democracy will not be a success. |
Can you find the centre of the circle with just five lines?
Suppose you have a circle, like the one in the figure below. At your disposal, you have a compass, a straightedge (like a ruler, but without length ticks), and a pencil.
Can you find the centre of the circle with just five lines? (Every time you use the compass counts as one line, and every time you use the straightedge counts as another line.)
Give it some thought!
If you need any clarification whatsoever, feel free to ask in the comment section below.
Congratulations to the ones that solved this problem correctly and, in particular, to the ones who sent me their correct solutions:
Know how to solve this? Join the list of solvers by emailing me your solution!
There are many ways in which the centre of the circle can be found! However, doing that with just 5 lines is the challenge.
Recall that the centre of the circle is the point that is at the same distance from all the points on the circumference. So, if you draw any chord and then draw its perpendicular bisector, you know that bisector will go through the centre of the circle (point A in the figure):
In the figure above, I picked two arbitrary points D and E and drew the chord [DE]. Then, I used D and E to draw two circles:
Then, the line defined by the two intersections of those two circles goes through the centre (A). If we do that once more, the intersection of those two bisectors gives you the centre:
However, this uses a total of 8 lines. We want to do this in just 5... And yet, going down to 6 lines is easy: we just need to realise we don't really care about the chords, only their endpoints... And picking arbitrary points on the circumference doesn't cost any “lines”:
The final step comes from realising that we don't need 4 separate circles! The two bisector lines of the implied chords can be drawn with just 3 circles if we pick the points well enough!
After drawing the first two auxiliary circles, pick one of the circles. That circle will intersect the original circle at a point that you haven't used yet (H in the figure below). Use that point as the centre of the third circle, which you can draw with a radius equal to the other two auxiliary circles:
By making use of those 3 circles we can draw 2 bisectors, which intersect at the centre of the circle. That makes a total of 5 lines.
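If you want to convince yourself numerically of the key fact the construction relies on (a chord's perpendicular bisector passes through the centre), here is a short Python sketch; the circle's centre, radius, and the chord's angles are arbitrary made-up values:

```python
import math

# A circle whose centre the construction is supposed to recover (made-up values).
cx, cy, r = 3.0, -2.0, 5.0

def point_on_circle(theta):
    """Return the point of the circle at angle theta."""
    return (cx + r * math.cos(theta), cy + r * math.sin(theta))

# Pick an arbitrary chord DE on the circle.
D = point_on_circle(0.7)
E = point_on_circle(2.1)

# The perpendicular bisector of DE passes through the chord's midpoint M
# and is perpendicular to DE. The centre lies on it exactly when the
# vector from M to the centre is perpendicular to DE (zero dot product).
mx, my = (D[0] + E[0]) / 2, (D[1] + E[1]) / 2
dx, dy = E[0] - D[0], E[1] - D[1]
dot = (cx - mx) * dx + (cy - my) * dy

print(abs(dot) < 1e-9)  # True: the centre lies on the perpendicular bisector
```

Changing the centre, radius, or chord angles to any other values gives the same result, which is why any two chords' bisectors pin down the centre.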
Here is a GIF of the solution:
Don't forget to subscribe to the newsletter to get bi-weekly problems sent straight to your inbox.
I hope you learned something new! If so, consider following in the footsteps of the readers who bought me a slice of pizza 🍕. Your small contribution helps me keep this project free and clear of annoying ads.
The Sustainable Development Goals (SDGs) are a set of 17 goals set up by the UN, through which 193 countries pledge to make real, impactful change in the world by 2030. The World’s Largest Lesson has taken on the role of bringing these goals and targets to children and young people everywhere and uniting them in action.
They produce free and creative resources for educators to teach lessons, run projects and stimulate action in support of the Goals. At the heart of their resources are animated films written by Sir Ken Robinson, animated by Aardman and introduced by figures students know and respect, like Emma Watson, Serena Williams, Malala Yousafzai, Kolo Touré, Neymar Jr, Hrithik Roshan and Nancy Ajram. The films establish a context for the Goals and inspire students to use their creative powers to support and take action for them. |
Subtitle: Making Friends with Your Emotions

Description: Iggy the multicolored chameleon turns different colors depending on his emotions. While visiting the animals in the jungle, he turns different colors as he feels angry, and more. He sets out to find someone who looks like him, and when he meets a yellow chick, he feels happy! But then he discovers that Little Chick likes to eat worms, and that makes Iggy feel angry, and he turns red. So Iggy sets off to find a friend who is red. At the conclusion of his adventures, Iggy learns that it's important to be himself, and that it's okay to feel the strong emotions that he feels.
- This board book, which features full-color artwork, helps children develop their emotional intelligence by introducing them to different scenarios that generate a variety of responses and emotions.
- It also provides an opportunity for parents or caregivers to explain how to cope with a variety of feelings.
- Children will relate to the adorable animal characters and pick up on the valuable lessons the chameleon learns. And perhaps most importantly, young readers will learn just how important it is to be themselves! |
What is non-melanoma skin cancer?
Non-melanoma skin cancer starts in the cells of the skin. A cancerous (malignant) growth is a group of cancer cells that can grow into and destroy nearby tissue. It can also spread (metastasize) to other parts of the body, but this is rare with non-melanoma skin cancer.
The skin is the body’s largest organ. It covers your entire body and protects you against harmful factors from the environment such as the sun, hot temperatures and germs. The skin controls body temperature, removes waste products from the body through sweat and provides the sense of touch. It also helps make vitamin D.
Cells in the skin sometimes change and no longer grow or behave normally. These changes may lead to non-cancerous (benign) growths such as dermatofibromas, moles, skin tags and warts.
Changes to cells of the skin can also cause precancerous conditions. This means that the abnormal cells are not yet cancer, but there is a chance that they may become cancer if they aren’t treated. A precancerous condition of the skin is actinic keratosis.
But in some cases, changes to skin cells can cause non-melanoma skin cancer. Most often, non-melanoma skin cancer starts in round cells called basal cells found in the top layer of the skin (epidermis). This type of cancer is called basal cell carcinoma (BCC) and makes up about 75%–80% of all skin cancers. Non-melanoma skin cancer can also start in squamous cells of the skin, which are flat cells found in the outer part of the epidermis. This type of cancer is called squamous cell carcinoma (SCC) and makes up about 20% of all skin cancers. BCC and SCC tend to grow slowly and are often found early.
Rare types of non-melanoma skin cancer can also develop. These include Merkel cell carcinoma and cutaneous T-cell lymphoma. |
We still have so much to learn about our cosmic neighborhood. There's no doubt that we've learned a lot about our solar system over the past six and a half decades of spaceflight, but there are still mysteries lurking around every corner. One of the biggest ones is the planet Uranus. Other than a brief flyby by NASA's Voyager 2 spacecraft in 1986, we haven't visited the planet at all.
But that's soon going to change. According to this year's decadal survey by the National Academies of Sciences, Engineering, and Medicine, a flagship orbiter and probe mission to Uranus should be NASA's main planetary science project of the next decade. (A decadal survey is a report, prepared every 10 years, that polls the scientific community on top research priorities.) At a virtual town hall on Aug. 18, Dr. Lori Glaze, director of NASA's Planetary Science Division, announced a very rough timeline for the potential Uranus mission.
"We are working towards initiating ... some studies of a Uranus orbiter probe mission no later than the fiscal year 2024. We will explore a range of complexity and cost options as part of those studies," she said, later adding that the studies could even commence as early as fiscal year 2023.
The Uranus Orbiter and Probe (UOP), as this mission concept is known, would see a spacecraft spend several years orbiting the ice giant, with a potential probe making a dive down through Uranus' atmosphere. Not only would this research enhance our knowledge of the planet itself, but it could also give us insight into the evolution of ice giant systems, which are found throughout our galaxy. Such flagship missions, of course, take plenty of research and planning; the UOP is still just a concept and not a mission quite yet. The kickoff in FY 2024 would simply be the early phases of research to figure out the potential shape of the mission.
Glaze said, "We need to make sure that we are putting in place a mission that can be implemented and executed."
Though there's no formal timeline yet, Glaze did indicate that a launch could happen as soon as the early 2030s, which would place the spacecraft's arrival at Uranus sometime in the 2040s or beyond. It will take anywhere from 12 to 15 years for it to traverse the nearly 2 billion miles (3.2 billion kilometers) between Earth and Uranus.
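As a sanity check, the figures above imply an average cruise speed that is simple arithmetic to compute (the 2-billion-mile distance and 12-to-15-year duration are the article's round numbers, not trajectory data):

```python
# Rough average-speed estimate for the Earth-to-Uranus cruise quoted above.
# The distance and duration are the article's round figures; everything else
# here is plain arithmetic, not mission data.
DISTANCE_MILES = 2.0e9
HOURS_PER_YEAR = 365.25 * 24

for years in (12, 15):
    mph = DISTANCE_MILES / (years * HOURS_PER_YEAR)
    print(f"{years} years -> about {mph:,.0f} mph average")
```

So even at the fast end of that window, the spacecraft would be averaging roughly 19,000 mph over the whole journey.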
Glaze said, "I think it's fantastic now that we have very clear guidance from the decadal survey on the highest priority next flagship, and the fact that they've specifically identified Uranus as the ice giant to visit. We're really excited about this."
Results of the Online Historical Summer School
On June 21–25, 2021, the Online Historical Summer School for the Study of the Holodomor of 1932–1933 took place.
The Summer Historical Online School for the Study of the Holodomor of 1932–1933 is an educational project for high school students, university students, young scholars and teachers interested in studying the topic of the Holodomor genocide.
During the five days, about 500 participants from different parts of Ukraine had the opportunity to attend a video tour, 6 lectures and 3 educational classes, and to learn about different aspects of the Holodomor: the historical background of the genocide, resistance to collectivization and the Holodomor, the national liberation struggle of Ukrainians, the legal assessment of the genocide, the fate of children during the Holodomor, and its psychological consequences. They also considered the Holodomor in the context of world genocides.
The purpose of the school is to draw attention to the topic of the Holodomor-genocide of 1932–1933, to encourage young people to study and research the history of Ukraine; to create a research environment in the public space that will help free Ukrainian society from the post-genocidal syndrome.
Leading specialists in the Holodomor genocide were involved in the summer school. They provided a meaningful course with applied aspects (interactive lectures, video tours, discussions, debates, exercises on working with sources, online games, etc.).
Organizers of the online school “Holodomor: to Know in Order to Be” are the Holodomor Research Institute and Holodomor Museum. |
While women have always played a central role in Jewish life, especially in the home, this has not been the case in the synagogue. Judaism traditionally called on men to assume roles in religious ceremonies taking place in houses of worship. Further segregating women, tradition placed females separately from men in the synagogue, removed from the areas where male-dominated rituals, prayers, and sermons took place. Sometimes women could see these events, but not hear them; and sometimes sight was denied them, but not sounds. Until the 19th century, women’s roles in synagogue life were inevitably limited.
In the Middle Ages in Rhineland synagogues, however, women were given their own prayer spaces adjacent to the main men’s hall. These were substantial autonomous spaces, such as at Worms and Speyer. Since these areas had only minimal connection by sight or sound to the main synagogue, women prayer leaders were required. Elsewhere, women were relegated to a small room on the other side of the wall opposite the ark, as was the case in Sopron, and later at the Remu Synagogue in Krakow.
Woodcuts illustrating early 16th-century German books about Jewish customs show women near the entrance to the main worship space, but separated by a divider (mechitzah). This might have been the norm for smaller synagogues, and women might have attended only on major holidays. In many larger synagogues in Italy and Holland, women were segregated in galleries and hidden behind wooden or metal grilles. By the 17th century, more accommodations were made, especially in new synagogues designed with permanent galleries for women. The examples are many, including the Portuguese Synagogue in Amsterdam, the Great Synagogue in Livorno, and the central synagogues of Paris and Vienna. Sometimes, especially in Polish synagogues, upper galleries were added above the entrance vestibules opposite the ark wall.
While this selection of images illustrates restrictions placed on women by separate seating arrangements, it also shows ways in which females found roles in synagogue life, ranging from regularly performed rituals to extraordinary contributions. Representations of women at the mikveh and in positions of power, financially supporting the building of new synagogues, creating and adding to the liturgy used within them, and contributing to charitable and educational endeavors, demonstrate the responsibilities Jewish women undertook in support of their faith. In addition, the scenes portrayed here confirm that as men went to synagogue, women performed essential religious functions in the home. |
Sometimes images arrive that make it clear that the space age is not a throw-away line, but a reality.
This one was taken by a satellite orbiting Mars, and it shows the Earth and the moon. Kind of remarkable, given that the camera — the High Resolution Imaging Science Experiment (HiRISE) camera on NASA’s Mars Reconnaissance Orbiter — was 127 million miles away.
And HiRISE is not a far-seeing telescope, but rather a camera designed to look down on Mars from 160 to 200 miles away. Its job (among other tasks) is to image the terrain, measure the compounds and minerals below, and keep an eye on Mars dust storms, climate, and the downhill streaks that periodically appear on some inclines and may contain salty surface water.
The image is a composite of Earth and its moon, combining the best Earth image with the best moon image from four sets of images acquired on Nov. 20, 2016 by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA’s Mars Reconnaissance Orbiter.
Each was separately processed prior to combining them so that the moon is bright enough to see. The moon is much darker than Earth and would barely be visible at the same brightness scale as Earth. The combined view retains the correct sizes and positions of the two relative to each other.
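A minimal sketch of that processing idea, stretching each body on its own brightness scale before compositing, could look like the following (illustrative only; this is not the actual HiRISE pipeline, and the pixel values are made-up stand-ins):

```python
# Illustrative sketch: to keep a dim moon visible next to a bright Earth,
# each body is brightness-stretched on its own scale before compositing;
# relative positions and sizes are preserved, only brightness changes.
def stretch(pixels):
    """Scale a list of pixel values so the brightest maps to 1.0."""
    top = max(pixels)
    return [p / top for p in pixels] if top else pixels

earth = [0.9, 0.7, 0.95, 0.8]   # stand-in values: bright target
moon = [0.02, 0.03, 0.01]       # stand-in values: ~30x darker target

# After independent stretching, both fit one displayable brightness range.
earth_view, moon_view = stretch(earth), stretch(moon)
```

On a common scale the moon's values would be nearly invisible next to Earth's; stretching each separately is what makes both legible in one frame.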
This is how JPL described the details:
HiRISE takes images in three wavelength bands: infrared, red, and blue-green. These are displayed here as red, green, and blue, respectively. This is similar to Landsat images in which vegetation appears red. The reddish feature in the middle of the Earth image is Australia. Southeast Asia appears as the reddish area (due to vegetation) near the top; Antarctica is the bright blob at bottom-left. Other bright areas are clouds.
What I find especially intriguing about the image is that it is precisely the kind of “direct imaging” that the exoplanet community hopes to some day do with distant planets. With this kind of imaging, scientists not only can detect the glints of water, the presence of land, the dynamics of clouds and climate, but they can also get better spectrographic measurements of what chemicals are present.
Some exoplanets are being painstakingly imaged directly, but the difficulty factor is high and the result is most likely one or two pixels. And since the planets orbit stars whose light hides any exoplanets present, coronagraphs are needed inside the telescopes to block out the starlight.
A couple of months ago, a group of scientists discovered an ancient shark in the North Atlantic ocean. While they knew that this shark had definitely reached senior age, they didn’t realize until recently that the animal is estimated to be a whopping 512 years old.
At 512 years, that would make this ancient shark the oldest living vertebrate in the world.
Even though over half a millennium seems like a crazy amount of time, Greenland sharks actually tend to outlive most other animals because they are a very slow-growing species. These sharks generally reach a mature age at 150 years old, and some reports have shown that some sharks have lived for almost 400 years. This new discovery of a 512-year-old definitely breaks a record.
512 years would put the birth date of this shark at 1505, before Shakespeare was even born.
The discovery was detailed in a research study published in the journal Science. Marine biologist Julius Nielsen and his team used a technique to measure the amount of radiocarbon in the eye lenses of Greenland sharks, revealing the possible age of this senior animal.
“It definitely tells us that this creature is extraordinary and it should be considered among the absolute oldest animals in the world,” the biologist said.
This research suggests that Greenland sharks can live much longer than professors and scientists initially thought.
No less than 28 Greenland sharks were studied and analyzed for this research paper. The new age determination method for these sharks definitely brings some much-needed accuracy to the field, as older methods have proven to be very unreliable.
Previously, scientists used the size of the animal to estimate its age. Sharks of the Somniosidae family usually grow about 0.4 inches per year. While this method can give a rough estimate of a shark’s age, it’s by no means scientifically accurate, especially once a certain maturity has been reached.
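As a rough illustration of that size-based method (a naive sketch with an assumed length at birth, not the study's radiocarbon technique):

```python
# Back-of-the-envelope version of the old size-based method described above:
# assume roughly linear growth of ~0.4 inches/year. This assumption breaks
# down once the shark matures, which is exactly why the method is unreliable.
GROWTH_INCHES_PER_YEAR = 0.4

def rough_age_from_length(length_inches, length_at_birth_inches=16.0):
    """Estimate age in years from body length; the birth length is a guess."""
    return (length_inches - length_at_birth_inches) / GROWTH_INCHES_PER_YEAR

# A ~16.5 ft (198 in) Greenland shark under this naive model:
print(round(rough_age_from_length(198)))  # -> 455
```

The point of the example is the fragility: the answer swings by decades depending on the assumed birth length and growth rate, which is why radiocarbon dating of the eye lens was such an improvement.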
“Fish biologists have tried to determine the age and longevity of Greenland sharks for decades, but without success. Given that this shark is the apex predator (king of the food chain) in Arctic waters, it is almost unbelievable that we didn’t know whether the shark lives for 20 years, or for 1,000 years,” expert Steven Campana from the University of Iceland stated.
Nielsen has been doing research on Greenland sharks for almost his entire academic career. The animals are known to eat rotting polar bear carcasses, of which the scientist shared a picture earlier. He also says that the species very frequently has to deal with pesky parasites that latch onto their eyes, which is why the eyes usually don’t look that healthy.
Because these sharks tend to live hundreds of years, they usually don’t stick around in the same spot forever. Sharks from all over the world were studied, but the genetic results of practically all of them were similar, suggesting they all originated from one place and then migrated. The reproduction of Greenland sharks is still somewhat of a mystery, although the scientists do know that the cold water of the Arctic is a preferred place for them to stay.
In further research, scientists now want to uncover why the Greenland shark lives so much longer than other vertebrates.
They hope that they can discover more about the shark’s long-life genes and will also try to make the link with life expectancy in several different species.
“This is the longest living vertebrate on the planet,” he said. “Together with colleagues in Denmark, Greenland, USA, and China, we are currently sequencing its whole nuclear genome which will help us discover why the Greenland shark not only lives longer than other shark species but other vertebrates.”
When the research scientist was asked how in the world this shark could possibly reach the age of over 500 years old, he guessed that the cold water combined with a slow metabolism would be responsible. He does admit right after, however, that further research is still needed and that this explanation is just a theory.
“The answer likely has to do with a very slow metabolism and the cold waters that they inhabit. I’m just the messenger on this. I have no idea.”
Source: Sci-Tech Universe, New Yorker, Science Mag
Operating systems have many different software programs that help them run basic processes for the computer. Some of this software users can replace or delete. Other types of software are vital to the operating system and help it function correctly. Likewise, some software is highly complex and multilayered, while other types are simple and take up only a little space. Utilities tend to be smaller, more basic types of software.
An operating system is a conglomeration of software that controls the hardware of the computer and ensures that the computer can perform all its basic functions, which are necessary for all other programs to work. The operating system helps additional programs integrate with the computer so that they can run. Because the operating system is so important, it is usually the first software added to the computer. Operating systems contain a number of utilities.
A utility in an operating system is a computer program that performs a single task, usually very specific and related to only part of the operating system software. These programs work mostly with system resources such as memory and basic data flow. They often help computers organise their memory and set apart memory for applications that are added later in the life of the computer.
Operating systems also use software known as applications, and it can sometimes be difficult to tell what the difference is. In general, utilities are smaller and simpler than applications. Applications are complex and perform many functions instead of only one, often functions that are not directly related to the basic computer structure. Word processors and spreadsheet programs are two of the most common applications.
There are utilities for most components of the operating system. One of the most common types is the disk drive utility, which manages the disk drives that the computer creates. Other utilities manage printers and other basic devices that are linked to the computer and need a direct line to computer memory.
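As a toy illustration of the single-task nature of a disk utility (this is a sketch, not any particular operating system's tool), Python's standard library can report disk usage in a few lines:

```python
# A minimal "disk utility" in the article's sense: one small program,
# one specific task (report disk space), using only the standard library.
import shutil

def disk_report(path="/"):
    """Return a one-line summary of total and free space for a path."""
    total, used, free = shutil.disk_usage(path)
    gib = 1024 ** 3
    return f"{path}: {total / gib:.1f} GiB total, {free / gib:.1f} GiB free"

print(disk_report("/"))
```

Contrast this with an application like a word processor, which bundles hundreds of such functions behind one interface; the utility does exactly one job and stops.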
Some utilities in operating systems may not be completely necessary, and some utilities can even be part of applications. For instance, a simple program within an application that allows it to print to multiple locations may be considered a utility, as can simple tools within an operating system. These tools can often be added to or removed from the operating system as desired.
Sound plays an important role in our life. It is through sound we know that a period in school is over or if someone is approaching you by just listening to the footsteps. Vibrating objects produce sound. Vibration is the to and fro or back and forth movement of an object. Sound needs a medium to travel. Hence, it cannot travel in a vacuum.
Sound In Humans
In human beings, sound is produced by the larynx. Two vocal cords are stretched across the larynx in such a way that they leave a narrow slit between them for the passage of air. When the lungs force air through the slit, the vocal cords vibrate, producing sound. Muscles attached to the vocal cords can make the cords either loose or tight. The quality of the voice differs depending on whether the vocal cords are held firmly or loosely.
Humans hear sound through the ear. When sound enters the human ear, it travels down a canal to the eardrum. The sound vibrations cause the eardrum to vibrate. The eardrum passes these vibrations to the inner ear, which then sends a signal to the brain.
Frequency, Amplitude and Time-Period of a Sound
Frequency, amplitude and time-period are important properties of sound that help us differentiate between various sounds. Their definitions are given below:
- Frequency – The number of oscillations per second is known as frequency.
- Amplitude – The maximum displacement of the vibrating particles of the medium from their mean position is the amplitude of a sound wave
- Time-Period – Time required for a complete cycle of vibration to pass. The time period is the inverse of frequency.
The higher the frequency, the higher the pitch and shrillness of the voice. The loudness of a sound depends on its amplitude: the higher the amplitude, the louder the sound. Sounds that are pleasing to the ear are known as music, while sounds that cause discomfort to the ear are known as noise. For the human ear, the range of audible frequencies varies from 20 Hz to 20,000 Hz. Excessive noise leads to noise pollution and can pose health risks for human beings.
Sound Class 8 Extra Questions
- The sound from a mosquito is produced when it vibrates its wings at an average rate of 500 vibrations per second. What is the time period of the vibration?
- What is the difference between noise and music? Can music become noise sometimes?
- List sources of noise pollution in your surroundings.
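The first extra question can be checked with a few lines of Python, using the relationship stated above (the time period is the inverse of frequency, T = 1/f):

```python
# Time period is the inverse of frequency: T = 1 / f.
def time_period(frequency_hz):
    """Return the time period in seconds for a given frequency in hertz."""
    return 1.0 / frequency_hz

# Extra question 1: mosquito wings vibrating 500 times per second.
print(time_period(500))  # -> 0.002 (seconds)
```

So one complete wing vibration of the mosquito takes 0.002 s, or 2 milliseconds.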
- NCERT Solutions for Class 8 Science Chapter 13
- NCERT Exemplar for Class 8 Science Chapter 13
Learn more about sound and its properties and other related topics including CBSE class 8 science notes, at BYJU’S. |
Asperger syndrome is a form of autism, a developmental disability affecting a person’s communication, behaviour and the way they experience the world. We’ve teamed up with National Autistic Society to explain how Asperger syndrome affects people who have it.
What is Aspergers?
Asperger syndrome is a form of autism, and people who have it find it harder to read the signals that most of us take for granted. This means communicating and interacting with others is more difficult for them, which can lead to high levels of anxiety and confusion.
Asperger syndrome is usually diagnosed later in children than autism, and sometimes may not even be recognised until adulthood. The reason for this is that the condition varies greatly from person to person, which makes diagnosis difficult.
As with autism, the best way to get a diagnosis is to visit a GP, who can refer patients to other health professionals for a formal diagnosis.
About Asperger’s syndrome
While there are similarities with autism, people with Asperger syndrome have fewer problems with speech and are often of average, or above average, intelligence. Learning disabilities that are associated with autism, like dyslexia and dyspraxia, are not usually seen in people with Asperger syndrome.
Although the characteristics of Asperger syndrome vary from one person to another, they are usually divided into three main groups – difficulty with social communication, social interaction and social imagination.
Difficulty with social communication
Emotional and social expression is an area that people with Asperger syndrome can sometimes find difficult. For example, they may:
- struggle to understand gestures, facial expressions or tone of voice
- have difficulty knowing when to start or end a conversation and choosing topics to talk about
- use complex words and phrases without fully understanding what they mean
- be very literal in what they say and can have difficulty understanding jokes, metaphor and sarcasm. For example, a person with Asperger syndrome may be confused by the phrase ‘That’s cool’ when people use it to say something is good
Keeping sentences short and being clear and concise is key to helping someone with Asperger syndrome understand you.
Difficulty with social interaction
Although many people with Asperger syndrome want to be sociable, they may have difficulty initiating and sustaining social relationships, which can make them very anxious. People with the condition may:
- struggle to make and maintain friendships
- not understand the unwritten ‘social rules’ that most of us pick up without thinking. For example, they may stand too close to another person, or start an inappropriate topic of conversation
- find other people unpredictable and confusing
- become withdrawn and seem uninterested in other people, appearing almost aloof
- behave in what may seem an inappropriate manner
Difficulty with social imagination
While people with Asperger syndrome can be imaginative in the conventional use of the word, they can have difficulty with social imagination, which includes interpreting other people’s thoughts, feelings or actions.
Some children with Asperger syndrome may find it difficult to play ‘let’s pretend’ games or prefer subjects rooted in logic and systems, such as mathematics.
As well as different characteristics, people with the condition may have a love of routines, special interests and sensory difficulties.
Love of routines
In an attempt to make the world less confusing, people with Asperger syndrome may have rules and rituals. Young children may, for example, insist on walking the same way to school every day, and in class they may get upset if there is a sudden change to the timetable.
People with Asperger syndrome often prefer to order their day to a set pattern. For example, if they work set hours, an unexpected delay to their journey to or from work can make them anxious or upset.
Special interests

People with Asperger syndrome may develop an intense, even obsessive, interest in a hobby or in collecting. These interests can be lifelong, or one interest can later be replaced by an unconnected one.
For example, a person with Asperger syndrome may focus on learning all there is to know about trains or computers. Some can become exceptionally knowledgeable in their chosen field of interest, and could later study or work in their favourite subjects.
Sensory difficulties

People with Asperger syndrome may also have sensory difficulties. These can occur in one or all of the senses (sight, sound, smell, touch, or taste), and the degree of difficulty can vary.
Most commonly, an individual’s senses are either intensified (over-sensitive) or underdeveloped (under-sensitive). Bright lights or overpowering smells, for instance, can be a cause of anxiety and pain for people with Asperger syndrome.
People with sensory sensitivity may also find it harder to use their body awareness system. This system tells us where our bodies are, so it can be harder for them to navigate rooms avoiding obstructions, and carry out ‘fine motor’ tasks such as tying shoelaces.
Some people with Asperger syndrome may rock or spin to help with balance and posture or to help them deal with stress.
As with autism, people with Asperger syndrome have the opportunity of reaching their full potential if well supported. There are many approaches, therapies and interventions that can improve their quality of life, which can include communication-based interventions, behavioural therapy and dietary changes.
For more information on Asperger syndrome, visit the National Autistic Society website.
- B.A. Taylor University
"Tell me and I forget, teach me and I may remember, involve me and I learn." - Benjamin Franklin
This quote guides my style of teaching. In my Spanish classes we do take notes on vocabulary, grammar and phrases, but it is always either introduced or developed through student involvement, be it a game or a conversation or a listening exercise. For example, before defining new vocabulary I will make small gestures or use the pictures and dialogue provided in the book in order to draw students in and have them determine the meaning of a word or phrase. Students enjoy this ‘guessing game,’ and are quick to respond with what they believe the meaning of a word to be. They are excited by their discovery and, as they are interacting more with the vocabulary in context rather than in simple note taking, it cements the idea of a word more fully in their minds.
Students love games. Students participate readily when we play bingo to review definitions of words. They demonstrate their creativity (and humor) when acting out or drawing a vocabulary word in front of their peers. This is a great game as the students responding to the charades or drawings are doing the more difficult aspect of memorizing vocabulary: recalling the word and its spelling. The nature of playing a game makes this learning engaging rather than rote and dull repetition.
And students love to talk! Since language is our means to relate and communicate with each other, I am able to easily weave this love of theirs into my classroom. Students are so eager to communicate with each other, that they barely notice that they are working through a spoken exercise that repeats the same grammar structure several times. They love watching Telehistorias, mini video clips that use the vocabulary and grammar in context, often repeating bits of the conversation with each other outside of class. The best part, however, of teaching another language is when students suddenly realize that they can have a conversation in another language: that they could actually communicate with a Spanish speaker. Once they recognize this new ability, they are motivated to learn as much as they can. They become so involved with the learning process that they guide and excite the desire to learn in other students. |
Do all Zebras look alike to you?
They are all white with black stripes. But scientists can identify individual zebras by “scanning” their stripes like a barcode.
The ‘Stripespotter’ is a scanning system developed to identify individual zebras from a single picture.
This system is so accurate that it can also be used on other patterned animals, such as tigers and giraffes.
How the Stripespotter Works
Field ecologists take pictures of zebras with their regular cameras. These pictures are then loaded into the Stripespotter database. A portion of the picture, say the hind leg, is highlighted by the scientists. This highlighted area is scanned by the Stripespotter and assigned a ‘stripecode’. Each animal has a unique stripecode.
When other pictures of the animals are loaded and other parts highlighted, the Stripespotter finds the match in the database and returns the result. It also provides feedback as to why the two images are of the same zebra.
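The barcode analogy can be sketched in a few lines. This is a toy illustration only, not the real Stripespotter algorithm: it collapses a strip of pixel intensities into run lengths of dark and light bands, a crude "stripecode" that two photos of the same body region should roughly share.

```python
# Toy barcode-style encoding (NOT the actual Stripespotter method):
# reduce a row of pixel intensities to run lengths of dark/light bands.
def stripecode(row, threshold=0.5):
    """Collapse pixel intensities into (band, width) runs; 1 = light, 0 = dark."""
    bits = [1 if p > threshold else 0 for p in row]
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs

patch = [0.1, 0.1, 0.9, 0.9, 0.9, 0.2, 0.8, 0.8]  # made-up intensities
print(stripecode(patch))  # -> [(0, 2), (1, 3), (0, 1), (1, 2)]
```

Comparing two such run-length codes (rather than raw pixels) is what makes the matching robust to lighting and small changes in camera angle.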
Mercury occurs naturally in the environment and is found in both inorganic (including elemental) and organic forms. It generally occurs in two oxidation states, Hg+1 and Hg+2, and its common forms are:
- elemental mercury,
- mercuric chloride,
- mercuric sulfide (cinnabar ore),
- and methylmercury.
Soil sediments actively adsorb the ionic form of mercury, while iron oxides adsorb mercury ions in neutral soils. Furthermore, organic matter adsorbs mercury ions in acidic environments. However, in the absence of organic matter, mercury becomes mobile and evaporates or can leach into groundwater (1). Pollution of groundwater with mercury results in accumulation and biomagnification in living systems, posing serious risks to humans. It is therefore important to mitigate this pollution, and mercury bioremediation is one of the solutions.
Furthermore, many countries have taken preventive measures to reduce exposure to toxic forms of mercury. Yet persistent pollution of groundwater requires urgent intervention in the form of remediation. Bioremediation, the application of living organisms to remove pollutants from the environment in a safe and sustainable manner, is thus a solution for cleaner groundwater.
The above figure shows the global distribution of mercury emissions, the majority of which originate from East and Southeast Asia. In 2013, there were about 1,960 tonnes of mercury emissions (2). In addition, Figure 2 shows the contribution of major industries towards mercury emissions, most notably the gold mining and coal burning industries. Typically, mercury enters aquatic environments either through the atmosphere or through anthropogenic sources. Subsequently, inorganic forms of mercury in water are converted to methylmercury by the action of microbes and are absorbed by animals and fish. The consumption of contaminated fish by humans therefore results in exposure to toxic levels of mercury (1), causing autoimmune diseases, fatigue, loss of hair, depression, insomnia and memory loss, among others (3,4).
Mercury present in the environment is removed from the cycle when it becomes part of the ocean sediment or lake sediment. This is mainly as a result of its association with mineral compounds in the sediments. Figure 3 shows the complex biogeochemical cycle of mercury, shifting from one form to another in air, water and soil. As the figure shows, the abiotic factors influence the cycle (oceans, soil and air) as well as biotic factors such as microbes and marine life.
Microbes involved in mercury bioremediation
Microbes are able to survive in adverse conditions, especially in environments contaminated with heavy metals. They do so by developing resistance to the toxic substances through metabolic processes, which can involve transformation of the valence state of metal ions, extracellular precipitation, or volatilization (5). Mercury present in soil or water can be detoxified by microbes through reduction. Especially relevant are species such as Pseudomonas, Escherichia, Bacillus, and Clostridium, which are involved in detoxification of mercury through methylation.
As a consequence, they produce volatile methylated compounds that mobilize mercury (6). Similar reduction of mercury has also been exhibited by Shewanella oneidensis (a dissimilatory metal-reducing bacterium), Geobacter sulfurreducens and Geobacter metallireducens (7). In a unique case, the microbial fauna of the Arctic Circle have also been found to possess merA genes, responsible for the reduction of Hg+2 to its volatile elemental form. In addition, these microbial populations included the algae Fucus sp. and Desmarestia sp. and thick photosynthetic microbial masses (8).
Mode of action of microbes in mercury bioremediation
Mercury-resistant bacteria have unique metabolic properties to transform toxic mercury into non-toxic forms. The extent of mercury concentration determines the proportion of mercury-resistant bacteria in contaminated environments (9). In addition, marine bacteria typically eliminate mercury from their surroundings by facilitating the binding of mercury to thiols, which reduces its toxicity. They also inhibit entry of mercury into the cell through the selectively permeable membrane. Another important mechanism of action adopted by mercury-resistant bacteria involves the mer operon. The genes merA and merB are two functionally important genes harbored by the operon, coding for mercuric ion reductase and organomercurial lyase, respectively. These enzymes reduce toxic methylmercury to its non-toxic volatile elemental form and together provide broad-spectrum resistance to mercury (10).
Besides naturally occurring microbes, several genetically engineered microbes have been designed with mercury-resistance properties by introducing the mer operon. For example, Deinococcus geothermalis has been modified with mer genes from E. coli, which facilitate the reduction of Hg2+. Similarly, Cupriavidus metallidurans MSR33 has been supplied with merB and merG genes for regulation of mercury biodegradation, and Pseudomonas has been supplied with the pMR68 plasmid containing mer genes, making the strain resistant to mercury (11). The introduction of mer genes into appropriate microbes thus imparts the ability to bioremediate mercury in the natural environment and proves advantageous for large-scale cleanup.
Mercury bioremediation as the most effective solution
Mercury possesses the ability to form amalgams with other metals, and thus finds wide application in industrial processes. Due to its large-scale use, the risk of exposure to this toxic metal is increasing rapidly, so its elimination is crucial to ensure the health of the environment. Mercury bioremediation using microbes is a suitable method due to its cost effectiveness and its ability to restore the quality of the ecosystem in situ. Mercury-resistant bacteria are a promising solution since they passively release non-toxic forms of Hg into the environment and do not pose the problem of a buildup of contaminated biomass. The successful introduction of the mer operon into several microbial species opens up the possibility of developing new and practical solutions for large-scale remediation. Further research should test these strains for ex-situ bioremediation and develop a standardized remediation plan.
- Otto M, Bajpai S, Martha Otto SB. Treatment technologies for mercury in soil, waste, and water. Remediat J. 2007;18(1):21–8. Available from: http://doi.wiley.com/10.1002/rem.20150.
- United Nations Environment Programme. Global Mercury Assessment 2013: Sources, Emissions, Releases, and Environmental Transport. United Nations Environment Programme. 2013.
- Ainza C, Trevors J, Saier M. Environmental mercury rising. Water Air Soil Pollut. 2010;205:47–8.
- Gulati K, Banerjee B, S. BL, Ray A. Effects of diesel exhaust, heavy metals and pesticides on various organ systems: Possible mechanisms and strategies for prevention and treatment. Indian J Exp Biol. 2010;48:710–21.
- Sinha RK, Valani D, Sinha S, Singh S, Herat S. Bioremediation of Contaminated Sites: A Low-Cost Nature's Biotechnology for Environmental Clean Up by Versatile Microbes, Plants & Earthworms. Solid Waste Management and Environmental Remediation. 2009. 1-72 p.
- Ramasamy K, Kamaludeen, Banu SP. Bioremediation of Metals : Microbial Processes and Techniques. In: Singh N, Tripathi R, editors. Environmental Bioremediation Technologies. Springer Berlin Heidelberg; 2007. p. 173–87.
- Wiatrowski HA, Ward PM, Barkay T. Novel Reduction of Mercury(II) by Mercury-Sensitive Dissimilatory Metal Reducing Bacteria. Environ Sci Technol. 2006;40(21):6690–6.
- Poulain AJ, Ní Chadhain SM, Ariya PA, Amyot M, Garcia E, Campbell PGC, et al. Potential for mercury reduction by microbes in the high Arctic. Appl Environ Microbiol. 2007;73(7):2230–8.
- Dash HR, Das S. Assessment of mercury pollution through mercury resistant marine bacteria in Bhitarkanika mangrove ecosystem, Odisha, India. Indian J Geomarine Sci. 2014;43(6):1109–1121. Available from: http://nopr.niscair.res.in/bitstream/123456789/28981/3/IJMS 43(6) 1109-1121.pdf.
- Dash HR, Das S. Mercury bioremediation and the importance of bacterial mer genes. Int Biodeterior Biodegrad. 2012;75(November 2012):207–13. Available from: http://dx.doi.org/10.1016/j.ibiod.2012.07.023.
- Dixit R, Wasiullah, Malaviya D, Pandiyan K, Singh UB, Sahu A, et al. Bioremediation of heavy metals from soil and aquatic environment: An overview of principles and criteria of fundamental processes. Sustain. 2015;7(2):2189–212.
The Early 20th Century
In 1904, Japan defeated the Russian Empire in the Russo-Japanese War, and in 1910 the Japanese Empire annexed the Korean peninsula. Japan was a member of the Allied powers during the First World War, during which it attacked German possessions in the Pacific. In 1931, Japan invaded Manchuria, and by 1937 the country was at full-scale war with China. Relations with the West, particularly the U.S., were already deteriorating at this point, as Japanese aircraft sank the river gunboat USS Panay on the Yangtze River.
World War II
Japan was a member of the Axis Powers during the Second World War. After the Fall of France to Nazi Germany, Japan took possession of French Indochina, and the Empire was subjected to an oil embargo enacted by the United States. On December 7, 1941, the Japanese attacked the U.S. Pacific Fleet in Pearl Harbor, Hawaii. The Japanese also marched through the Shanghai International Settlement, placing several foreign nationals in captivity. While the attack initially crippled the U.S. Pacific Fleet, it inevitably brought America into WWII. The Japanese then invaded European and American territories in Southeast Asia and the Pacific. While Malaya, Singapore, and the Dutch East Indies fell easily, the Philippines continued to resist until May 7, 1942, with the capture of Corregidor Island. Japan then installed puppet governments in its occupied territories as part of the Greater East Asia Co-Prosperity Sphere. The Empire's fate was sealed when the U.S. Navy sank all four of the Imperial Japanese Navy carriers present (Akagi, Kaga, Soryu, and Hiryu) during the Battle of Midway in June 1942. Lacking carriers, the Japanese were placed on the defensive as the U.S. began an island-hopping campaign from 1942 to 1943, capturing Guadalcanal, Tarawa, Makin Atoll, and Bougainville. By 1944, the U.S. captured Saipan and retook Guam, and soon began the liberation of the Philippines, which was completed in July 1945. On August 6, 1945, the U.S. dropped the first atomic bomb on the city of Hiroshima, and three days later a second on Nagasaki. The same day, the Soviet Union declared war on Japan and came thundering through Imperial Japanese Army positions in Manchuria and Korea. Japan unconditionally surrendered on August 15, 1945, now known as V-J Day. A formal signing of the surrender took place on September 2, 1945 aboard the USS Missouri, formally ending World War II.
After the war, Japan was devastated but quickly recovered from the damage. From 1945 to 1951, the United States military occupied Japan. Under the occupation, Japan could no longer militarize: its constitution was rewritten in 1947 by the United States, and its emperor became only a symbolic head. The Self-Defense Forces were created as a result of the majority of U.S. forces being deployed to South Korea during the Korean War. By the 1960s and 1970s, Japan had become a rich country. In November 1970, a man named Yukio Mishima and his organization, the Tatenokai (Shield Society), tried to overthrow the government in a coup to restore Japanese militarism. Due to lack of support, Mishima's coup failed to materialize, and he was mocked by the SDF soldiers. Mishima committed suicide soon after. The United States, meanwhile, maintained its troops in the islands as a deterrent to the Soviet Union, China, and North Korea. Japan was given major non-NATO ally status by the United States prior to World War III.
World War III
Prior to the war, the JSDF and United States military forces were on constant high alert due to the Soviet presence in the Kuril and Sakhalin islands. Though no direct military engagement occurred with the Soviets, U.S. and Japanese fighter jets would occasionally chase Soviet aircraft out of Japan's airspace. Maritime Self-Defense Force patrol boats and P-3 Orions patrolled the waters around Japan for Soviet submarines.
Upon China's entry into the war in January 1990, the JSDF, USMC and USAF braced for an invasion. The Japan Maritime Self-Defense Force clashed with the PLA in the East China Sea. Subsequently, Japan and South Korea put aside their differences and fought the common enemy in the region. While under attack by China, the JSDF managed to send some of its forces to the Korean peninsula for the first time since 1945. Though this was controversial to the South Koreans due to Japan's bloody history with Korea, any help in defending the Republic of Korea was gladly accepted and welcomed.
China launched air attacks on the U.S. bases in the country. Additionally, cities along the eastern coastline were shelled by the PLAN. The Soviet Union launched its first air attacks on the island of Hokkaido, eventually occupying the island in the process. Upon the PRC's invasion of Taiwan, the Chinese then invaded Okinawa with a force of 4,000 men. A joint USMC-JSDF operation successfully repelled the Chinese invasion, but casualties were high on both sides. In the north, the U.S. and Self-Defense Forces counterattacked the Soviets in Hokkaido. Since Japan and the U.S. were now at war with the Soviet Union, this made the reclamation of the Kuril and Sakhalin islands fair game. Casualties were high on both sides but, unexpectedly, the Soviets suddenly withdrew. This surprised the U.S. and Japanese troops, who promptly recaptured the islands.
Japan's economy boomed again after the Third World War. Post-war Japan was marked by technological advances in robotics, video game consoles, automobiles, and more. It would also become one of the richest countries in Asia. In 2005, the country formally apologized and paid compensation to the nations affected by its wartime past, including the Chinese Federated Union and Korea, both of which were notably anti-Japanese prior to the apology. As a result of this apology, Japan's relations with China and Korea have been cordial and better than in past years.
On March 11, 2011, a 9.0-magnitude earthquake struck off the coast of Tohoku, followed by a tsunami. This caused severe damage to coastal towns as well as to the nearby Tohoku Power Plant, which released radiation along the coast. The international community quickly responded and sent aid to Japan.
In 2017, Japan, Taiwan, and the Chinese Federated Union began negotiating a possible joint administration of the Senkaku (Diaoyu) Islands in the East China Sea.
Government and Politics
Japan's armed forces are called the Japan Self-Defense Force (JSDF). Article 9 of the Japanese constitution states that the SDF cannot be used for conquest and can only fight in a defensive war. However, there is a debate within the Diet over letting the SDF come to the aid of allies, particularly the U.S., the Philippines, and the ANZ, in a form of collective self-defense in the face of a belligerent nation or in a campaign against a terrorist organization.
The Self-Defense Force uses a mix of locally made weapons and equipment from Howa and Mitsubishi and equipment procured from the U.S.
The following are the branches of the Self Defense Force:
- Japan Ground Self Defense Force
- Japan Maritime Self Defense Force
- Japan Air Self Defense Force
Japan has good relations with the United States, the United Kingdom, Australia, New Zealand, Taiwan, the Philippines, Brunei, and numerous other states around the world. Relations with Korea and China have also warmed in recent years since the nation formally apologized, on August 15, 2005, the 60th anniversary of V-J Day, for the war crimes it committed during World War II. Japan promised to provide reparations to the countries it affected during the war, and these payments continue to this day. Nonetheless, some challenges remain in relations with Taiwan, Korea, and China over territorial disputes in the East China Sea. The nations involved are currently discussing how to resolve these issues peacefully.
The recent news of H1N1, or swine flu, has caused panic and uproar of sorts, which is why you need to know the facts. For instance, did you know that the first pandemic of this H1N1 strain began in 2009, in Mexico? Swine flu is a contagious respiratory disease. It usually lasts 3-7 days, though serious illness may take up to 9 days to recover from. As with most contagious diseases, the best way forward is prevention. We hope the information you get here helps you prevent or cope with this disease.
What is Swine Flu?
As the name suggests, swine flu is caused by a virus that infects the respiratory tract of pigs and produces symptoms much like those of human flu. In pigs that survive, the sickness lasts up to two weeks. People who work closely with pigs, for example pig rearers, vets or pork food processors, may come in contact with this virus and develop a swine flu infection. Similarly, when humans with flu come in contact with pigs, the pigs can get infected too.
What Causes it?
Swine flu, or H1N1, is contagious, which means that simple acts of coughing or sneezing send the virus flying into the air; if you inhale or ingest it, you can be infected. One thing must be clear, however: you will never contract the virus by eating cooked pork. If you touch an infected surface and then eat, or touch your eyes and nose, the chances of the virus spreading from saliva and mucus are much higher.
Risk Factors for Swine Flu
From the small outbreaks occurring since 2009 and the recent one in India, it is seen that H1N1 is most common in children ages 5 years and up. However, you can be at a higher risk of contracting this disease if you are:
- A senior over the age of 65
- A pregnant woman
- A young child under 5 years
- A teenager under 19 years on regular aspirin therapy
- A person with a weak immune system
- A person with a chronic illness such as diabetes, asthma or heart disease
Symptoms of H1N1
If you or your loved one ever had the flu, then the symptoms of H1N1 are much the same. Since it is the respiratory tract that is infected, you will have trouble breathing because of the stuffy nose, coughing, chills, and fever. There can be nausea and vomiting, sore throat, body ache, fatigue, and diarrhea too. It is so exhausting that you will have to rest for two weeks to recover.
If you are a person with weak immunity or who easily gets sick, then you may worry; that's understandable. The symptoms make this feel like a terrible disease; however, most patients recover with no problems. High-risk individuals, though, are likely to have a worse outcome. Complications may look like a severe case of pneumonia or bronchitis, including sinus and ear infections.
Prevention of H1N1
H1N1 is contagious, so the easiest way to avoid infection is to keep away from an outbreak in your community. Whether it is school, work or any public gathering, stay at home until you can get a vaccination. If you find yourself surrounded by people affected by the virus, make sure you wash your hands with warm water and soap.
Avoid touching your face, nose or eyes with unwashed hands. If you have any of the symptoms mentioned above, go to your doctor for a diagnosis and take the prescribed medicines. It's best to stay at home and rest if you contract the illness. Use disposable tissues or sneeze into your shoulder to prevent others from being infected.
Home Remedies to Prevent H1N1
- Increase daily intake of citrus fruits, such as amla.
- Eat at least five tulsi leaves daily in the morning to boost immunity.
- Drink warm milk mixed with a pinch of turmeric at nights.
- Swallow two pods of raw garlic, on empty stomach in the morning with warm water.
- Regular Pranayam can help you keep your throat and lungs healthy.
The best way to ease your symptoms of swine flu is to remain hydrated. Drink plenty of fluids such as warm water, soup or juices. You need to rest as much as possible. Lastly, stay informed through local community news to learn about new vaccine availability and other relevant information about H1N1.
Learning by doing helps students perform better in science
Students who physically experience scientific concepts understand them more deeply and score better on science tests, according to a new UChicago-led study.
Brain scans showed that students who took a hands-on approach to learning had activation in sensory and motor-related parts of the brain when they later thought about concepts such as angular momentum and torque. Activation of these brain areas was associated with better quiz performance by college physics students who participated in the research.
The study, published online April 24 in Psychological Science, comes from the Department of Psychology's Human Performance Lab, directed by Prof. Sian Beilock, an internationally known expert on the mind–body connection and author of the book "How the Body Knows Its Mind."
Beilock and her co-authors, Prof. Susan Fischer at DePaul University, UChicago graduate student Carly Kontra and postdoctoral scholar Dan Lyons, explain that hands-on experiences may benefit students more than previously realized, particularly in the world of virtual laboratories and online learning. This may be especially true for the initial stages of learning and in areas of science education that lend themselves to physical experiences.
"This gives new meaning to the idea of learning," said Beilock. "When we're thinking about math or physics, getting students to actually physically experience some of the concepts they're learning about changes how they process the information, which could lead to better performance on a test."
The study included experiments in the laboratory involving student behavior and brain imaging and one randomized trial in a college physics classroom. The hands-on studies used a system of two bicycle wheels that spun independently on a single axle, which allowed students to understand the concept of angular momentum—at work when a moving bicycle appears more stable than a stationary one. To experience angular momentum, students held the wheels by the axle and were instructed to tilt the axle from horizontal to vertical, while attempting to keep a laser pointer on a target line on the wall. When the axle tilted, the students experienced torque—the resistive force that causes objects to rotate.
The students were divided into groups, with some of the students tilting a set of bicycle wheels while the others simply observed. A post-test showed that those who had actively participated in the experiment outperformed the observation group.
The researchers used functional magnetic resonance imaging to see what regions of the brain were activated when students reasoned through the concepts of angular momentum and torque. While in the brain scanner, the students looked at animated pictures of an avatar spinning bicycle wheels—similar to the wheels they spun or watched other students spin. Later students took a quiz on the material.
"When students have a physical experience moving the wheels, they are more likely to activate sensory and motor areas of the brain when they are later thinking about the science concepts they learned about," said Beilock. "These sensory and motor-related brain areas are known to be important for our ability to make sense of forces, angles and trajectories."
A final experiment took place in a college-level physics class, to study whether the benefits of action experience could be seen on quizzes and homework taken days later. Students were randomly assigned to either the action or observation roles. Overall, the action group earned quiz grades that were about 7 percent higher than the observation group, even though they had fairly matched grades on other quizzes during the quarter.
For Beilock, the findings stressed the importance of classroom practices that physically engage students in the learning process, especially for math and science.
"In many situations, when we allow our bodies to become part of the learning process, we understand better," Beilock said. "Reading about a concept in a textbook or even seeing a demonstration in class is not the same as physically experiencing what you are learning about. We need to rethink how we are teaching math and science because our actions matter for how and what we learn." |
©Copyright 2018 GEOSCIENCE RESEARCH INSTITUTE
11060 Campus Street • Loma Linda, California 92350 • 909-558-4548
The stability of organic (carbon-based) molecules is an interesting and challenging topic as there are many different types of functional groups, molecular configurations, and molecular collisions to consider. Research on the stability of ascorbic acid (Vitamin C) and other vitamins demonstrates which factors to consider when it comes to the preservation of carbon-based molecules. Ascorbic acid is a very important but very unstable organic molecule which is characteristic of the class of organic molecules we know as vitamins (Fig. 1).
Vitamin stability has been studied for decades under a variety of storage conditions, and it is interesting to see how chemical manufacturers address long term stability issues. As stated on the website of DSM (a chemical company located in the Netherlands): “The vitamin manufacturing industry has developed products of high purity and quality, with improved stability, high bioavailability and optimum handling and mixing properties…. However, when dealing with complex and reactive compounds such as the vitamins, no product form can offer complete and unlimited protection against destructive conditions, excessive periods of storage or severe manufacturing processes. The individual feed manufacturer must take responsibility for assuring customers that vitamins have been stored, handled and added to feeds in an optimum manner and that vitamin levels are routinely monitored for quality assurance.”
Temperature, water content, pH, oxygen levels, light (type/intensity), catalysts (metals like Fe, Cu, etc.), inhibitors, chemical interactions, energy (heat), and time are all factors that affect the stability of organic molecules. Double bonds and other functional groups are susceptible to rearrangements and reactions that vary with these conditions, which is why organic chemistry textbooks are so thick! Vitamin C is somewhat stable in a dry, powdered form, but dilution in water greatly accelerates the transformation of ascorbic acid into a biologically unusable form. Low pH can slow this degradation, but at neutral to higher pH, dilute solutions of vitamin C can degrade very quickly. Every organic molecule has its own conditions of stability. In general, UV light and oxygen constantly attack these molecules, rearranging their structures into molecular configurations unsuitable for their original purpose, and water speeds the degradation. This is why many vitamins and pharmaceuticals are packaged in thick, dark containers with desiccants.
Eliminating water, oxygen, and energetic radiation (gamma, X-ray, UV, visible) can greatly extend the life of organic molecules, which is why some biomolecules can be preserved for longer periods of time when embedded in crystalline or amorphous solids like amber or stone. Scientists have tried to mimic natural means of preserving biochemical molecules through the use of sugars like trehalose. Trehalose can help enzymes and proteins preserve their activity when lyophilized (freeze-dried) together with it. Other sugars and polyols have been explored as partner chemicals that provide many hydrogen-bonding sites to stabilize the complex 3-D structure of proteins, enzymes, and nucleic acids in the absence of water, but trehalose seems to be one of the best.
Water bears (tardigrades) (Fig. 2) have been in the news lately because new information was recently published about the parts of their genome relating to their ability to survive harsh conditions such as temperatures near absolute zero, the vacuum of space, and the high temperatures around volcanoes.
The November 7, 2016 issue of Chemical & Engineering News featured this recent research as it interests chemists and engineers who are trying to find innovative ways to preserve unstable carbon-based molecules of life: “Although commonly found in moss and lichens, tardigrades are truly aquatic animals, requiring a film of water surrounding their body to take in oxygen and expel carbon dioxide. Without water, they dry out, practically cease metabolism, and curl up into a sturdy desiccated form called a tun. It is the tun state that enables tardigrades to withstand many extremes. And then if they return to water, they bounce right back.” It is believed that tardigrades produce various “dry-tolerant proteins” that “are intrinsically disordered in water but develop secondary structures in the dehydrated state that allow them to stabilize DNA, proteins, and cell membranes.”
Carbon-based chemistry in living systems is constantly under thermodynamic and kinetic distress from heat, light, radiation, oxygen, water and other reactive chemicals, which limits the molecules' longevity. This is to say nothing of the enzymatic attacks from the microbial world, which slices and dices organic chemicals in an effort to recycle them for its own energetic requirements. The same flexibility that allows living systems to constantly recycle and renew carbon-based materials rests on the very mechanisms that inhibit long-term stability.
Ryan T. Hayes is a Ph.D. chemist (Andrews University) studying how to preserve vitamin C and other biomolecules through the use of spherical nanopolymers called dendrimers. |
What are Work and Energy?
Energy is the ability to do work.
Energy is measured by the result of a force applied over a period of time to make a change, i.e., to do work.
Work is done when a force applied to an object changes the object's position or physical state.
Power = Energy / Time. If no force is applied, nothing changes and no work is done.
Power and energy are closely related, although they are not the same.
Power is the rate at which energy is delivered, not an amount of energy itself.
The electrical unit of power is the watt (named after the scientist James Watt):
1 Watt = 1 Joule / Second.
Energy = Power × Time.
Energy exists in various forms: kinetic [Ek = ½mv²], potential [Ep = mgh], thermal (heat), chemical, electrical, electrochemical, magnetic, sound, light, and nuclear.
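As a quick illustration of the first two forms, the bracketed formulas can be evaluated directly. This is a minimal sketch in Python, assuming SI units throughout and g ≈ 9.81 m/s²; the function names are illustrative, not from any standard library:

```python
def kinetic_energy(m, v):
    """Ek = 1/2 * m * v^2, with m in kg and v in m/s -> joules."""
    return 0.5 * m * v ** 2

def potential_energy(m, h, g=9.81):
    """Ep = m * g * h, with m in kg and h in meters -> joules."""
    return m * g * h

# A 2 kg mass moving at 3 m/s carries 9 J of kinetic energy:
print(kinetic_energy(2, 3))    # 9.0
# Lifting the same mass 5 m stores about 98.1 J of potential energy:
print(potential_energy(2, 5))  # ~98.1
```

Both functions return joules, so results from the two forms can be compared or added directly.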
Energy can be converted from one form into another in many ways:
Through gravitational forces: When gravity accelerates a falling object, it converts the object's potential energy to kinetic energy; when an object is lifted, the gravitational field stores the energy exerted by the lifter as potential energy in the earth-object system.
Through electric and magnetic field forces: Electrically charged particles in the presence of an electric field possess potential energy. The fields' forces can accelerate particles, converting this potential energy into kinetic energy. Charged particles can interact via electric and magnetic fields to transfer energy between them; e.g., an electrical current in a conductor transforms electrical energy into heat.
Through frictional forces: The potential and kinetic energy associated with an object's position, orientation, and motion can be converted into thermal energy (heat) whenever the object slides against another object. The sliding causes the molecules on the surfaces of contact to interact via electromagnetic fields and start vibrating.
Through emitting or absorbing photons of light: When photons of light fall on an object, a photon may pass through the object, be reflected by the object, or be absorbed by the atoms making up the object. Depending on the smoothness of the surface at the scale of the photon's wavelength, the reflection may be either diffuse (rough surface) or coherent (smooth surface). If the photon is absorbed, its energy may be converted in one of these ways:
- Photo-thermal effect: The energy absorbed may simply produce thermal energy, or heat, in the object. In this case, the photon's energy is converted into vibrations of the molecules called phonons, which is heat energy.
- Photoelectric effect: The energy absorbed may be converted into kinetic energy of conduction electrons, and hence electrical energy.
- Photochemical effect: The energy may bring about chemical changes that effectively store the energy.
Through nuclear reactions: Nuclear reactions occur when the nuclei of particles combine [fusion reaction] or when nuclei split apart [fission reaction].
In the International System of Units [SI system], the unit of energy is the joule (named after the English physicist James Prescott Joule).
One joule is the amount of energy we expend as work if we exert a force of one newton over a distance of one meter.
It takes about one joule of energy to lift 1 lb about 9 inches.
The unit of force in the International System of Units is the newton (named after the English physicist Isaac Newton).
One newton is the force that accelerates a mass of 1 kilogram (about 2.205 lbs) such that it picks up 1 meter per second of velocity during each second the force is exerted.
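The earlier claim that one joule lifts 1 lb about 9 inches can be checked from Ep = mgh by solving for the height, h = E / (m·g). A small sketch, assuming g = 9.81 m/s² and the usual conversion factors (1 lb ≈ 0.4536 kg, 1 in = 0.0254 m):

```python
G = 9.81             # m/s^2, assumed standard gravity
LB_TO_KG = 0.4536    # kilograms per pound
M_PER_INCH = 0.0254  # meters per inch

# Height that 1 joule can lift 1 lb: from Ep = m*g*h, h = E / (m*g)
height_m = 1.0 / (LB_TO_KG * G)
height_in = height_m / M_PER_INCH
print(round(height_in, 1))  # 8.8 -- i.e. "about 9 inches"
```

The exact figure is closer to 8.8 inches, which is why the text hedges with "about".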
An appliance's rating is how much energy per unit time the appliance draws. This quantity is called the "power". For example, a 100-watt appliance running for one hour uses:
Energy = Power x Time = (100 Joules/Second) × (3600 Seconds) = 360,000 Joules |
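The same arithmetic can be sketched in a few lines of code (the 100 W appliance and one-hour duration are just the example figures from above):

```python
# Energy = power x time, for a hypothetical 100 W appliance running 1 hour.
power_watts = 100.0      # 1 watt = 1 joule per second
time_seconds = 3600.0    # one hour

energy_joules = power_watts * time_seconds
print(energy_joules)  # 360000.0

# Utility bills use kilowatt-hours instead: 1 kWh = 3.6 million joules.
energy_kwh = energy_joules / 3.6e6
print(energy_kwh)  # 0.1
```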
Sponges are generally considered to be among the most primitive of animals. They have no organs, but are rather an assemblage of several kinds of specialized cells. A sponge feeds by moving water in through small pores on its surface into chambers where plankton is trapped by specialized collar cells. The water is then pumped into larger chambers and expelled through large pores. Both of the common sponges in Bartlett Cove are encrusting forms, which means that they spread over the surfaces of rocks in a relatively thin layer. The bread crumb sponge (Halichondria) is greenish in color, up to an inch thick, and has a very distinctive odor. The other common sponge, Haliclona, is thinner and smoother, with conspicuous volcano-like pores and a deep shade of purple or pink.
Last updated: March 16, 2018 |
The history of lupus begins in 1828, when the French dermatologist Biett described the disease. For the next 45 years, studies of the disease produced little more than descriptions that emphasized skin changes. In the mid-1800s, Pierre Cazenave was the first person to give a comprehensive description of lupus. The disease was named for the wolf-bite-shaped rash (the butterfly rash) that appears across the nose and cheeks of many lupus patients; "lupus" is the Latin word for wolf.
In 1873, a dermatologist named Kaposi noted that people with a history of lupus lesions also experienced problems in internal organs. Then, in the 1890s, the famous Canadian physician Sir William Osler discovered that some patients had internal organ involvement but no history of lupus skin problems at all. In 1948, a finding by Dr. Malcolm Hargraves of the Mayo Clinic, showing that patients with SLE had an LE cell in their blood, allowed doctors to develop a simple blood test. This blood test, along with a medical history and family history of lupus, was used to diagnose many more cases of lupus. In the 1950s, scientists discovered antinuclear antibodies (proteins that cause the immune system to attack its own tissues), which led to the development of more sensitive tests for SLE. Studies using mice (murine models) over the last 40 years have also increased our understanding of the disease.
Now, after decades of little action, things are happening. The community is seeing unprecedented interest from biotechnology and pharmaceutical companies that translates into dozens of companies interested in lupus and even more trials for lupus patients. As few as seven or eight years ago, there were just two to three trials in progress. Today there are 12 to 15. |
1. Divided government occurs when Congress and the executive branch are controlled by opposing parties; for example, the executive branch may be controlled by the Democratic Party while Congress is controlled by the Republican Party. Since the President nominates candidates for the Supreme Court and the Senate confirms them, divided government delays the confirmation of federal court nominees: the President will most likely choose a candidate who reflects his own party's views, and that candidate will face opposition from the other party in Congress.
2. Congressional oversight is Congress's supervision of the executive branch and its federal agencies. Congress reviews, monitors, and supervises federal agencies and their actions, for example by holding hearings to review an executive agency's activities.
3. Voting patterns in Congress correlate with members' political party affiliation, since members tend to share the same beliefs as their party and support similar ideas. For example, a Republican member of Congress will most likely vote for a Republican leader.
4. The Constitution does not expressly prohibit sex discrimination in employment. However, the equal protection clause of the 14th Amendment requires that laws apply to everyone equally. By contrast, slavery is expressly prohibited by the 13th Amendment, cruel and unusual punishment by the 8th Amendment, and unreasonable search and seizure by the 4th Amendment.
5. Gerrymandering is the manipulation of district lines to control districts in order to favor one political party over another. Gerrymandering consists of cracking and packing. Cracking spreads the voters of a particular party across districts so they do not form a large voting bloc.
Packing consists of redrawing of district lines to concentrate one political party’s voters into a single district to reduce their influence on other districts.
6. Regulatory agencies make decisions independently; therefore, they do not consult with the regulated industry before making decisions.
7. The Americans with Disabilities Act is an example of a federal mandate: it is a federal law enforced by the federal courts. It is not an example of state supremacy, a term that refers to the supremacy clause, which declares the Constitution "the supreme law of the land." It is not horizontal federalism, which describes state governments interacting with one another to coordinate policies. Nor is it affirmative action, which is a policy of favoring disadvantaged groups that have suffered discrimination. Under dual federalism, the federal and state governments work together with power clearly divided between them; the Americans with Disabilities Act is a federal mandate because it protects disabled Americans from discrimination through enforcement by the federal government.
8. Separation of powers is the idea of vesting the legislative, executive, and judicial powers of government in separate bodies. When independent regulatory agencies make rules, enforce those rules, and adjudicate disputes arising under those rules, they exercise legislative, executive, and judicial powers in a single body, and so risk violating the constitutional concept of separation of powers.
9. Separation of students by race, even in equally good schools, is seen as unconstitutional because the equal protection clause of the 14th Amendment requires that states treat all citizens alike, regardless of race; racially separate schools therefore deny students that equal treatment.
Scientists at the University of Kentucky have brought human regeneration — a concept straight out of science fiction — one step closer to reality by assembling the genome of the axolotl, a salamander that exclusively inhabits a lake near Mexico City and has remarkable regenerative abilities.
“It’s hard to find a body part they can’t regenerate: the limbs, the tail, the spinal cord, the eye, and in some species, the lens, even half of their brain has been shown to regenerate,” said Randal Voss, a professor in the UK Spinal Cord and Brain Injury Research Center and a co-PI on the project.
Humans actually share many of the same genes with the axolotl. However, the axolotl genome is 10 times larger than the human genome, which poses a formidable barrier to genetic analyses.
A genome is like a puzzle. Until it is assembled in the correct order, scientists cannot attempt large scale analyses of genome structure and function for later application in humans. However, Voss along with his partner in the project Jeremiah Smith cleverly adapted a classical genetic approach called linkage mapping to put the axolotl genome together in the correct order. This is the first genome of this size to be assembled to date.
“Just a few years ago, no one thought it was possible to assemble a 30+GB genome,” said Smith. “We have now shown it is possible using a cost-effective and accessible method, which opens up the possibility of routinely sequencing other animals with large genomes.”
Voss and Smith also used the assembled data to rapidly identify a gene that causes a heart defect in an axolotl as proof of concept, thus providing a new model of human disease.
“Biomedical research is increasingly becoming a genetically-driven enterprise,” said Voss. “To understand human disease, you have to be able to study gene functions in other organisms like the axolotl.”
“Now that we have access to genomic information, we can really start to probe axolotl gene functions and learn how they are able to regenerate body parts. Hopefully someday we can translate this information to human therapy, with potential applications for spinal cord injury, stroke, joint-repair…the sky’s the limit, really.” |
These standards are directed toward fostering students’ understanding and working knowledge of concepts of print, the alphabetic principle, and other basic conventions of the English writing system. These foundational skills are not an end in and of themselves; rather, they are necessary and important components of an effective, comprehensive reading program designed to develop proficient readers with the capacity to comprehend texts across a range of types and disciplines. Instruction should be differentiated: good readers will need much less practice with these concepts than struggling readers will. The point is to teach students what they need to learn and not what they already know—to discern when particular children or activities warrant more or less attention.
Note: In kindergarten, children are expected to demonstrate increasing awareness and competence in the areas that follow. |
It might be expected, prima facie, that roughly the same number of surnames in a sample would begin with each letter of the Roman alphabet and that the proportions of surnames categorised by their initial letters would be approximately uniform and equal to 1/26.
However, for many kinds of alphabetic data, the distribution of initials is skewed. A mathematical relationship (known as Benford's law for numeric data) seems to hold when adapted to model alphabetic data.
See http://plus.maths.org/issue9/features/benford/ regarding numeric data.
Using logs with base 27, the expected proportion (P) of surnames beginning with any letter is P = log[(n+1)/n], where 0 < n < 27 is the alphabetic rank of the letter and the cumulative function of P = log[(n+1)/n] is Sum(P) = log(n+1).
This model indicates a probability that 33% of a sample of surnames will begin with either A or B and that 67% of the surnames in that sample can be expected to begin with one of the eight letters from A to H.
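The model's proportions are easy to verify numerically. A short sketch using Python's standard math library; the 33% and 67% figures above are exactly the values being checked:

```python
import math

def letter_proportion(n: int) -> float:
    """Expected proportion of surnames whose initial has alphabetic
    rank n (A=1, ..., Z=26), using logarithms with base 27."""
    return math.log((n + 1) / n, 27)

# A or B: ranks 1 and 2.
p_ab = letter_proportion(1) + letter_proportion(2)

# A through H: the cumulative proportion is log_27(9).
p_a_to_h = sum(letter_proportion(n) for n in range(1, 9))

print(round(p_ab, 2))      # 0.33
print(round(p_a_to_h, 2))  # 0.67

# Sanity check: all 26 letters together give log_27(27) = 1.
total = sum(letter_proportion(n) for n in range(1, 27))
```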
A generalised version of this law would not work for truly random sets of data. It would work best for data that are neither completely random nor overly constrained, but rather lie somewhere in between. These data could be wide ranging and would typically result from several processes with many influences.
Michael Mernagh, Cork, Ireland. February 17, 2011.
In an earlier study Professor Kristian Kristiansen from the University of Gothenburg in Sweden and Lundbeck Foundation Professor Eske Willerslev from the Centre for GeoGenetics at the University of Copenhagen, and their research teams, showed that the large demographic changes during the first part of the Bronze Age happened as a result of massive migrations of Yamnaya people from the Pontic-Caspian steppes into Neolithic Europe. They were also able to show that plague was widespread in both Europe and Central Asia at this time.
Now Professor Kristiansen and Professor Willerslev, with co-authors, reveal a more detailed view of the mechanism behind the emerging culture known as the Corded Ware Culture -- the result of the encounter between the Yamnaya and the Neolithic people. Professor Kristian Kristiansen says: "We are now for the first time able to combine results from genetics, strontium isotopes on mobility and diet, and historical linguistics on language change, to demonstrate how the integration process unfolded on the ground after the Yamnaya migrations from the steppe. In our grand synthesis we argue that Yamnaya migrants were predominantly males, who married women who came from neighbouring Stone Age farming societies." These Stone Age Neolithic societies were based on large farming communities, reflected in their collective burial ritual, often in big stone chambers, so-called megaliths. This was very different from the traditions of the incoming migrants.
The origin of the Yamnaya
The Yamnaya people originated on the Caspian steppes, where they lived as pastoralists and herders, using wagons as mobile homes. From burial pits archaeologists have found extensive use of thick plant mats and felt covers. Their economy was based on meat, dairy products and fish; they were tall and rather healthy, with little caries in their teeth. No agriculture is documented. Barrows were aligned in groups forming lines in the landscape to mark seasonal routes, and the deceased were put into individual graves under small family barrows. Their burial ritual thus embodied a new perception of the individual and of small monogamous family groups as the foundation of society. The continent encountered by the Yamnaya people around 3000 BC had seen a decline in the agrarian Stone Age societies, thereby allowing space for incoming migrants. This decline was probably the result of a widespread plague from Siberia to the Baltic.
"The disease dynamic here may have been comparable to the European colonization process in America after Christopher Columbus", says Kristiansen. "Perhaps Yamnaya brought plague to Europe and caused a massive collapse in the population".
"Black Youth" as migrating males and their marriage to Neolithic women
In the new synthesis article, Kristiansen and colleagues argue for a dominance of males during the early phase after the migrations, which corresponds to the old Indo-European mythology of later times. These sources talk about war bands of youths -- called "Black Youth" -- who were employed in pioneer migrations as a dynamic force. Evidence from strontium isotopic analyses, published in 2016 by Kristiansen together with Douglas Price and Karl-Goran Sjogren, showed that a majority of the women in Corded Ware burials in south Germany were non-locals who had married in from Neolithic societies, since they had had a Neolithic diet in their childhood. These results now form part of the new synthesis. Professor Kristian Kristiansen says: "Existing archaeological evidence of a strong 90% male dominance in the early phase of the Corded Ware/Single Grave Culture settlement in Jutland, Denmark, and elsewhere can now be explained by the old Indo-European tradition of war bands of young males who did not have any inheritance to look forward to. Therefore they were probably more willing to make a career as migrating war bands."
These Neolithic women also brought new knowledge of pottery production, and began to imitate in clay the wooden containers of the Yamnaya migrants. In this way a new pottery culture was created, called Corded Ware because of the cord impressions around the necks of the pots. The pots were made for beer drinking, and the new migrants also learned from the in-married Neolithic women how to grow barley in order to produce beer.
Rapid genetic changeover from Neolithic to Corded Ware cultures after 3000 BC
Eske Willerslev undertook the ancient DNA analyses together with Morten Allentoft and Martin Sikora. Professor Willerslev says:
"In our big Bronze Age study, published in 2015 we were astonished to see how strong and fast the genetic changeover was from the Neolithic to the Corded Ware. There was a heavy reduction of Neolithic DNA in temperate Europe, and a dramatic increase of the new Yamnaya genomic component that was only marginally present in Europe prior to 3000 BC. Moreover, the apparent abruptness with which this change occurred indicates that it was a large-scale migration event, rather than a slow periodic inflow of people".
New words and new Proto-Germanic dialect
The Yamnaya brought the Indo-European languages into Bronze Age Europe, but as herders, they did not have words for crops or cultivation, unlike the Neolithic farmers. As the Corded Ware Culture developed it adopted words related to farming from the indigenous Neolithic people, which they were admixing with. Guus Kroonen, a historical linguist, was able to demonstrate that these new words did not belong to the original Indo-European languages. Therefore it was possible to conclude that the Neolithic people were not speaking an Indo-European language, as did the Yamnaya migrants. Thus, the process of genetic and cultural admixture was accompanied by a process of language admixture, creating the foundations for later Germanic languages, termed Proto-Germanic.
The birth of the Bronze Age
The Yamnaya migrations from the Pontic-Caspian steppe into temperate Europe changed the course of history: they brought not only a new language, but also new ideas about how society was organized around small monogamous families with individual ownership to animals and land. This new society became the foundation for the Bronze Age, and for the way European societies continued to develop to the present.
The paper "Re-theorising mobility and the formation of culture and language among the Corded Ware Culture in Europe" by Kristiansen, Allentoft, Frei, Iversen, Johannsen, Kroonen, Pospiezny, Price, Rasmussen, Sjögren, Sikora and Willerslev is published in the journal Antiquity 4 April 2017. |
Mary Ann Cavanaugh, Grade 4 Teacher
Students will be able to identify erosion and explain the causes of erosion.
Materials: potted plant
disposable aluminum pans
container for water
Related URLs:
- Wind Erosion: abe.www.ecn.purdue.edu/~agen521/epadir/erosion/wind_erosion.html
- Water Erosion: abe.www.ecn.purdue.edu/~agen521/epadir/erosion/water_erosion.html
- Glacial Erosion
Class demonstration (20 minutes):
- Take a potted plant out of the pot, with soil intact. Discuss how the roots of the plant help to hold the soil in place. Ask what would happen if the plant was not in a pot, but in the ground, and water kept running over it. Introduce the term erosion and discuss how wind, water, and ice can cause erosion. Ask students if and where they have ever seen the effects of erosion.
- Explain that the class is going to go out to the playground to examine the effects of erosion on our playground and surrounding school property. Ask students to remember how plants hold soil and to pay special attention to the placement of trees and shrubs on the school grounds. Students will be asked to take a pencil and notebook to write and draw evidence of erosion on the school property.
Outside activity (25 minutes):
- As a class, point out evidence of erosion on the school grounds. Some good examples are often near drains, drain pipes, and at the edges of the blacktop.
- Then have the students pair up with a partner to examine the rest of the area to look for other signs of erosion. Don't forget to set boundaries where students may explore.
- When students find examples of erosion, they are to describe it in their journals and draw a labeled rough sketch of the erosion.
Closing discussion (15 minutes):
- After students are back in the room, ask them to share what they have written in their journals about the effects of erosion on the playground and school property.
- Ask if anyone noticed the placement of trees and shrubs. Ask the students if the trees and shrubs were placed in particular areas to help stop the effects of erosion.
Classroom review (10 minutes):
- Review the term erosion and how plants help stop erosion.
- Discuss the forms of erosion that were witnessed on the playground and school property. Explain that most of the erosion that was witnessed on the playground was caused by water.
Computer Activity (20 minutes):
- Have students view the effects of wind, water, and ice on soil and rocks by going to the sites listed above. Instruct students to read the information and view the pictures.
Follow-Up/Extension Activity (20 minutes):
- Provide each pair of students with a disposable aluminum baking tray, enough soil to fill the tray, water, a small container, newspapers, and some rocks. Cover each working area with newspapers.
- Instruct students to fill their tray with soil, patting it down to firm it in place. Position rocks in the soil so that they cannot move about freely.
- Place the narrow side of the tray filled with soil and rocks on a book, so as to set the tray on a slant.
- Next have one of the students pour small drops of water, starting at the highest part of the tray, so the water can run down the soil.
- Ask students to notice if any changes are taking place in their trays. See if the soil or rocks are moving out of position.
- Direct the other student to pour larger amounts of water at the highest part of the tray. Again, ask the students to describe what changes are taking place in the tray. Are they seeing signs of erosion?
My students love this lesson. They especially enjoy exploring the school grounds for signs of erosion. The hands on activity is another highlight of this lesson. |
Sampling and Statistics

Statistics

We start the discussion in the natural way. We all have a general feeling about what statistics is. In the course of these lecture notes, we will lay out in detail what statistics is and how it is used. For now we give a quick definition.

Suppose we have information on the test scores of students enrolled in a statistics class. In statistical terminology, the whole set of numbers that represents the scores of the students is called a "data set", the name of each student is called an "element", and the score of each student is called an "observation".

Data: information from observations, outcomes, responses, measurements. Examples: a list of the prices of 25 recently sold homes, the scores of 15 students, the ages of all employees of a company.

Statistics is the study of how to collect, organize, analyze, and interpret numerical information from data. Broadly speaking, applied statistics can be divided into two areas: descriptive statistics and inferential statistics.

Descriptive statistics consists of methods for organizing, displaying, and describing data by using tables, graphs, and summary measures. Suppose we have information about the percentage of adults who carry different numbers of plastic cards:

Number of cards     Percentage of adults
1 to 3              50
4 to 6              30
7 to 9              7
10 or more cards    13
Sum                 100

A data set in its original form is usually very large. Consequently, such data are not very helpful in drawing conclusions or making decisions. So we reduce the data to a manageable size by constructing tables, drawing graphs, or calculating summary measures such as the average. The portion of statistics that helps us to do this type of statistical analysis is called descriptive statistics.

Inferential statistics consists of methods that use sample results to help make decisions or predictions about a population. In statistics, the collection of all elements of interest is called a population.
The selection of a few elements from this population is called a sample. A major portion of statistics deals with making decisions, inferences, and predictions about populations based on results obtained from samples. For example, we may make decisions about the political views of all college and university students based on the political views of 1000 students selected from a few colleges and universities. The area of statistics that deals with such decision-making procedures is referred to as inferential statistics.

The collection of information from the elements of a population or sample is called a survey. A survey that includes every element of the target population is called a census. The technique of collecting information from a portion of the population is called a sample survey.

Sampling and Types of Data

Population vs. Sample

Typically, population data is very hard or even impossible to gather. Statisticians and researchers will instead extract data from a sample. There are several types of data that are of interest. We can classify data into two types:

Numerical or quantitative data is data where the observations are numbers. For example: age, height, a rating on a scale from one to ten, distance, number of ...

Categorical or qualitative data is data where the observations are non-numerical. For example: favorite color, choice of politician, ...

Parameter vs. Statistic

A parameter is a numerical summary of the population, such as the mean, median, mode, range, variance, or standard deviation. A statistic is a numerical summary of a sample taken from the population.
More details in chapter 2.

A sample that represents the characteristics of the population as closely as possible is called a representative sample. For example, to find the average income of families living in New York City by conducting a sample survey, the sample must contain families who belong to different income groups in almost the same proportion as they exist in the population.

Random Samples

When we conduct a survey we always attempt to achieve a random sample. A simple random sample of size n is one in which every possible subset of size n has an equal chance of being selected. For example, to choose a random sample of 20 people with phone numbers, we can use a random number generator to randomly select 20 phone numbers.

Caution: a simple random sample is almost always impossible to achieve in the real world. For example, using the phone number generator, we will only be able to collect data from those who have a phone, pick up the phone, and are willing to participate in the phone survey. Because of this, most surveys have inherent flaws. However, a survey with a small flaw is better than no information.

Many surveys are done using convenience sampling. For example, a researcher stands outside a supermarket and interviews anyone eager to respond. One way to overcome the problem of obtaining a random sample is to use stratified sampling (http://ltcconline.net/greenl/courses/201/projects/StratifiedSampling.htm). Stratified sampling ensures that members of each stratum (or type) are included in the survey. For example, we may randomly select 50 Caucasians, 25 Hispanics, and 10 Filipinos from the Lake Tahoe community to ensure that the three main ethnic groups are represented.

One problem with sampling is that often the researcher only gets respondents who are eager to be interviewed. One way to combat this is to use cluster sampling (http://ltcconline.net/greenl/courses/201/projects/cluster_sampling.htm).
This process involves breaking the population into several groups or clusters. Some of the clusters are randomly selected and the researcher makes sure that every individual in the selected clusters is surveyed. This usually involves paying the respondents to take the survey.

A sample may be random or nonrandom. In a random sample, each element of the population has the same chance of being included in the sample. One way to select a random sample is by lottery or draw. A simple example is when a teacher puts each student's name on a slip of paper, places the slips in a hat, and then draws names from the hat without looking.

Variables

A variable is a characteristic under study that assumes different values for different elements. In contrast to a variable, the value of a constant is fixed. Examples of variables are the income of households and the makes of cars owned by people. A variable is often denoted by x, y, or z.

Some variables can be measured numerically, whereas others cannot. A variable that can assume numerical values is called a quantitative variable. The values that a quantitative variable can assume may be countable or non-countable. The key features to describe are the center and the spread (variability) of the data. For example: what is a typical amount of precipitation? Is there much variation from year to year?

For example, we can count the number of cars owned by a family (a discrete variable). However, we cannot count the height of family members (a continuous variable).

Discrete variable: a variable whose values are countable. Examples: number of houses, cars, accidents.

Continuous variable: a variable whose values can assume any numerical value over a certain interval or intervals. Examples: length, age, height, weight, time.

Variables that cannot be measured numerically but can be divided into different categories are called qualitative or categorical variables.
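The "names in a hat" lottery described above can be sketched in a few lines; the roster names here are hypothetical:

```python
import random

# Draw a simple random sample of 3 names from a class roster,
# mimicking a lottery draw from a hat (sampling without replacement).
roster = ["Ana", "Ben", "Cara", "Dev", "Elle", "Finn", "Gia", "Hugo"]

random.seed(0)                      # fixed seed so the draw is repeatable
sample = random.sample(roster, 3)   # 3 distinct names, chosen at random
print(sample)
```

Every subset of 3 names is equally likely, which is exactly the definition of a simple random sample given above.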
Examples: gender (male and female); religious affiliation (Catholic, Jewish, Muslim, other, none). The key features to describe are the relative numbers of observations, for example percentages.

Bar Charts, Frequency Distributions, and Histograms

All of us have heard the saying "a picture is worth a thousand words." A graphical display can reveal at a glance the main characteristics of a data set. The bar graph and the pie chart are the two types of graphs used to display qualitative data.

Frequency Distributions, Bar Graphs, and Circle Graphs (Pie Charts)

The frequency of a particular event is the number of times that the event occurs. The relative frequency is the proportion of observed responses in the category.

Example: We asked the students what country their car is from (or no car) and made a tally of the answers. Then we computed the frequency and relative frequency of each category. The relative frequency is computed by dividing the frequency by the total number of respondents. The following table summarizes:

Country   Frequency   Relative frequency
US        6           0.3
Japan     7           0.35
Europe    2           0.1
Korea     1           0.05
None      4           0.2
Total     20          1

Relative frequency = frequency / total number of respondents. For example: 6/20 = 0.3, 7/20 = 0.35, 2/20 = 0.1, and so on.

[Bar graph of the car data.] In a bar graph the height of each bar represents the frequency. Notice that the widths of the bars are always the same.

Note: a Pareto chart is a special type of bar graph, with categories ordered by their frequency, from the tallest bar to the shortest bar.

We make a circle graph, often called a pie chart, of this data by placing wedges in the circle of size proportional to the frequencies. Below is a circle graph that shows this data.
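The relative-frequency calculation for the car data can be reproduced in a short sketch:

```python
# Frequencies from the car-origin tally of 20 respondents.
freq = {"US": 6, "Japan": 7, "Europe": 2, "Korea": 1, "None": 4}
total = sum(freq.values())  # 20

# Relative frequency = frequency / total number of respondents.
rel_freq = {country: count / total for country, count in freq.items()}
print(rel_freq["US"])     # 0.3
print(rel_freq["Japan"])  # 0.35
```

As a check, the relative frequencies always sum to 1.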
[Circle graph of the car data.]

To find the angle of each of the slices we use the formula:

Angle = (Frequency / Total) x 360

For example, to find the angle for US cars we have Angle = (6/20) x 360 = 108 degrees.

Graphs for Quantitative Variables

Data can be displayed in a histogram, a dot plot, or a stem-and-leaf plot.

Dot Plots

A dot plot shows a dot for each observation, placed just above the value on the number line for the observation. To construct a dot plot: draw a horizontal line; label it with the name of the variable and mark regular values of the variable on it; for each observation, place a dot above its value on the number line. The dot plot portrays the individual observations. The number of dots above a value on the number line represents the frequency of occurrence of that value. From a dot plot, we would be able to reconstruct (at least approximately) all the data in the sample.

Example 1: 2, 3, 3, 6, 7, 7, 7, 7, 8, 8, 8, 8, 8, 9, and 9

                        *
                     *  *
                     *  *
         *           *  *  *
      *  *        *  *  *  *
------------------------------
0  1  2  3  4  5  6  7  8  9

Histograms

Histograms are bar graphs whose vertical coordinate is the frequency count and whose horizontal coordinate corresponds to a numerical interval.

Example: The depth of clarity of Lake Tahoe was measured at several different places, with the results in inches as follows: 15.4, 16.7, 16.9, 17.0, 20.2, 25.3, 28.8, 29.1, 30.4, 34.5, 35.2, 36.7, 39.1, 39.4, 39.6, 39.8, 40.1, 42.3, 43.5, 45.6, 45.9, 48.3, 48.5, 48.7, 49.0, 49.1, 49.3, 49.5, 50.1, 50.2, 52.3

We use a frequency distribution table with class intervals of length 5.
Class Interval   Frequency   Relative Frequency   Cumulative Relative Frequency
15 -< 20         4           0.129                0.129
20 -< 25         1           0.032                0.161
25 -< 30         3           0.097                0.258
30 -< 35         2           0.065                0.323
35 -< 40         6           0.194                0.516
40 -< 45         3           0.097                0.613
45 -< 50         9           0.290                0.903
50 -< 55         3           0.097                1.000
Total            31          1.000

Below is the graph of the histogram.

The Shape of a Histogram

A histogram is unimodal if there is one hump, bimodal if there are two humps, and multimodal if there are many humps. A histogram that is not symmetric is called skewed.

Unimodal, Symmetric, Nonskewed

Non-symmetric, Skewed Right

Bimodal

Descriptive Statistics and Stem and Leaf Diagrams

Stem and Leaf Diagrams

When we want to see how the data look without losing the individual data points, we use a stem and leaf diagram. To construct a stem and leaf diagram, we put the first digit or digits of each value (the stem) on the left and that stem's corresponding list of final digits (the leaves) on the right. If we want to compare two data sets, we can use a comparative stem and leaf diagram: the stems are drawn in the middle, the first set of leaves on the right, and the second set of leaves on the left.

Example

A computer retailer collected data on the number of computers sold during 29 consecutive Saturdays during the year. The results are as follows:

12, 14, 14, 17, 21, 24, 24, 25, 25, 26, 26, 27, 29, 31, 34, 35, 36, 39, 40, 42, 42, 45, 46, 47, 49, 49, 56, 59, 62

We can put this data into a stem and leaf diagram as shown below.
The first digit represents the stem and the second digit represents the leaf. The stem is written on the left hand side (once per value) and each leaf is written on the right hand side next to its corresponding stem.

1 | 2 4 4 7
2 | 1 4 4 5 5 6 6 7 9
3 | 1 4 5 6 9
4 | 0 2 2 5 6 7 9 9
5 | 6 9
6 | 2

It is easy to see the shape of the distribution without losing any of the individual data. To read the stem and leaf diagram: for example, the first row corresponds to all the data from 12 to 17.

Cross-Section vs. Time-Series Data

Cross-section data contain information on different elements of a population or sample for the same period of time.

Example: The following table shows the 1998 earnings of six celebrities.

Celebrity          1998 Earnings (millions of dollars)
Jerry Seinfeld     267
Steven Spielberg   175
Oprah Winfrey      125
Michael Jordan     69
Master P.          56.5
Eddie Murphy       47.5

Time-series data contain information on the same element of a population or sample for different periods of time.

Example: The following table shows the average salaries of all major league baseball players for the years 1995 through 1999.

Year   Average Salary
1995   $1,094,440
1996   $1,101,455
1997   $1,314,420
1998   $1,384,530
1999   $1,567,873

Mean, Mode, Median, and Standard Deviation

The Mean and Mode

The sample mean is the average and is computed as the sum of all the observed outcomes from the sample divided by the total number of events. We use x-bar as the symbol for the sample mean. In math terms,

x-bar = (x1 + x2 + ... + xn) / n

where n is the sample size and the x's correspond to the observed values.

Example

Suppose you randomly sampled six acres in the Desolation Wilderness for a non-indigenous weed and came up with the following counts of this weed in this region:

34, 43, 81, 106, 106 and 115

We compute the sample mean by adding and dividing by the number of samples, 6.
(34 + 43 + 81 + 106 + 106 + 115) / 6 = 80.83

We can say that the sample mean of the non-indigenous weed counts is 80.83.

The mode of a set of data is the number with the highest frequency. In the above example 106 is the mode, since it occurs twice and the rest of the outcomes occur only once.

The population mean is the average of the entire population and is usually impossible to compute. We use the Greek letter mu (μ) for the population mean.

Median

One problem with using the mean is that it often does not depict the typical outcome. If there is one outcome that is very far from the rest of the data, then the mean will be strongly affected by this outcome. Such an outcome is called an outlier. An alternative measure is the median. The median is the middle score. If we have an even number of events we take the average of the two middles. The median is better for describing the typical value. It is often used for income and home prices.

Example

Suppose you randomly selected 10 house prices in the South Lake Tahoe area. You are interested in the typical house price. In units of $100,000 the prices were:

2.7, 2.9, 3.1, 3.4, 3.7, 4.1, 4.3, 4.7, 4.7, 40.8

If we computed the mean, we would say that the average house price is $744,000. Although this number is true, it does not reflect the price for available housing in South Lake Tahoe. A closer look at the data shows that the house valued at 40.8 x $100,000 = $4.08 million skews the data. Instead, we use the median. Since there is an even number of outcomes, we take the average of the middle two:

(3.7 + 4.1) / 2 = 3.9

The median house price is $390,000. This better reflects what house shoppers should expect to spend.

Example: At a ski rental shop data was collected on the number of rentals on each of ten consecutive Saturdays:

44, 50, 38, 96, 42, 47, 40, 39, 46, 50

To find the sample mean, add them and divide by 10:

(44 + 50 + 38 + 96 + 42 + 47 + 40 + 39 + 46 + 50) / 10 = 49.2

Notice that the mean value is not a value of the sample.
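The sample means computed above can be verified with a couple of lines of code:

```python
weeds = [34, 43, 81, 106, 106, 115]        # weed counts per acre
rentals = [44, 50, 38, 96, 42, 47, 40, 39, 46, 50]  # ski rentals per Saturday

def sample_mean(xs):
    # sum of all observed outcomes divided by the number of events
    return sum(xs) / len(xs)

print(round(sample_mean(weeds), 2))   # 80.83
print(sample_mean(rentals))           # 49.2
```

Note that neither mean is itself one of the observed values, which is typical.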
To find the median, first sort the data:

38, 39, 40, 42, 44, 46, 47, 50, 50, 96

Notice that there are two middle numbers, 44 and 46. To find the median we take the average of the two:

Median = (44 + 46) / 2 = 45

Notice also that the mean is larger than all but three of the data points. The mean is influenced by outliers while the median is robust.

Outlier: An outlier is an observation that falls well above or well below the overall bulk of the data.

Example 1: 22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, and 156. Here 156 is an outlier.

Example 2: 5, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, and 99. Here 5 is an outlier.

Range

The range is the difference between the largest and smallest value in a set of data. For example, for the data 1, 3, 4, 5, 5, 6, 7, 11:

Range = 11 - 1 = 10

Variance, Standard Deviation and Coefficient of Variation

The mean, mode, median, and trimmed mean do a nice job of telling where the center of the data set is, but often we are interested in more. For example, suppose a pharmaceutical engineer develops a new drug that regulates sugar in the blood, and she finds out that the average sugar content after taking the medication is the optimal level. This does not mean that the drug is effective. There is a possibility that half of the patients have dangerously low sugar content while the other half have dangerously high content. Instead of the drug being an effective regulator, it is a deadly poison. What the pharmacist needs is a measure of how far the data are spread apart. This is what the variance and standard deviation do.

First we show the formulas for these measurements; then we will go through the steps of using them. We define the sample variance to be

s^2 = [ sum of (x - x-bar)^2 ] / (n - 1)

and the sample standard deviation to be the square root of the variance.

Variance and Standard Deviation: Step by Step

Calculate the mean, x-bar.
Write a table that subtracts the mean from each observed value. Square each of the differences. Add this column. Divide by n - 1, where n is the number of items in the sample. This is the variance. To get the standard deviation, take the square root of the variance.

Example

The owner of the Chez Tahoe restaurant is interested in how much people spend at the restaurant. He examines 10 randomly selected receipts for parties of four and writes down the following data:

44, 50, 38, 96, 42, 47, 40, 39, 46, 50

He calculated the mean by adding and dividing by 10 to get x-bar = 49.2.

Below is the table for getting the standard deviation:

x       x - 49.2    (x - 49.2)^2
44       -5.2          27.04
50        0.8           0.64
38      -11.2         125.44
96       46.8        2190.24
42       -7.2          51.84
47       -2.2           4.84
40       -9.2          84.64
39      -10.2         104.04
46       -3.2          10.24
50        0.8           0.64
Total                2599.60

Now 2599.6 / (10 - 1) = 288.8

Hence the variance is approximately 289 and the standard deviation is the square root of 289, which is 17. What this means is that most of the patrons probably spend between $32.20 and $66.20.

The sample standard deviation will be denoted by s and the population standard deviation will be denoted by the Greek letter sigma (σ). The sample variance will be denoted by s^2 and the population variance will be denoted by σ^2.

The variance and standard deviation describe how spread out the data are. If the data all lie close to the mean, then the standard deviation will be small, while if the data are spread out over a large range of values, s will be large. Having outliers will increase the standard deviation.

One of the flaws of the standard deviation is that it depends on the units that are used.
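The step-by-step calculation for the restaurant receipts can be replicated directly, which also confirms the rounded values of roughly 289 for the variance and 17 for the standard deviation:

```python
import math

receipts = [44, 50, 38, 96, 42, 47, 40, 39, 46, 50]
n = len(receipts)
mean = sum(receipts) / n                     # 49.2

# sum of the squared deviations from the mean (the last table column)
ss = sum((x - mean) ** 2 for x in receipts)

variance = ss / (n - 1)                      # sample variance, about 289
std_dev = math.sqrt(variance)                # about 17

print(round(variance, 1), round(std_dev, 1))
```

Dividing by n - 1 rather than n is what makes this the sample variance rather than the population variance.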
One way of handling this difficulty is the coefficient of variation, which is the standard deviation divided by the mean, times 100%:

CV = (s / x-bar) x 100%

In the above example it is

(17 / 49.2) x 100% = 34.6%

This tells us that the standard deviation of the restaurant bills is 34.6% of the mean.

Chebyshev's Theorem

A mathematician named Chebyshev came up with bounds on how much of the data must lie close to the mean. In particular, for any positive k, the proportion of the data that lies within k standard deviations of the mean is at least

1 - 1/k^2

For example, if k = 2 this number is

1 - 1/2^2 = 0.75

This tells us that at least 75% of the data lies within 2 standard deviations of the mean. In the above example, we can say that at least 75% of the diners spent between 49.2 - 2(17) = 15.2 and 49.2 + 2(17) = 83.2 dollars.

Mean and Standard Deviation for Grouped Data

Calculating the Mean from a Frequency Distribution

Since calculating the mean and standard deviation is tedious, we can save some of this work when we have a frequency distribution. Suppose we were interested in how many siblings are in statistics students' families. We come up with the frequency distribution table below:

Number of Children   1   2   3   4   5   6   7
Frequency            5  12   8   3   0   0   1

Notice that since there are 29 respondents, calculating the mean directly would be very tedious. Instead, we see that there are 5 ones, 12 twos, 8 threes, 3 fours, and 1 seven. Hence the total count of siblings is

1(5) + 2(12) + 3(8) + 4(3) + 7(1) = 72

Now divide by the number of respondents to get the mean:

mean = 72 / 29 = 2.5

Extending the Frequency Distribution Table

Just as with the mean formula, there is an easier way to compute the standard deviation given a frequency distribution table.
We extend the table as follows:

Number of Children (x)   Frequency (f)   xf    x^2·f
1                        5               5      5
2                        12              24     48
3                        8               24     72
4                        3               12     48
5                        0               0      0
6                        0               0      0
7                        1               7      49
Totals                   n = 29          72     222

Next we apply the grouped-data formula

s^2 = [ sum of x^2·f - (sum of xf)^2 / n ] / (n - 1)

to get

s^2 = (222 - 72^2/29) / 28 = (222 - 178.76) / 28 = 1.54

so the standard deviation is s = 1.24.

Weighted Averages

Sometimes instead of the simple mean we want to weight certain outcomes more heavily than others. For example, for your statistics class the following point weights are given:

Homework = 150
Midterm = 450
Project = 100
Final = 300

Suppose that you received an 84% on your homework, a 96% on your midterms, a 98% on your project and a 78% on your final. What is your average for the class? To compute the weighted average, multiply each score by its weight, add, and divide by the total weight:

0.84(150) + 0.96(450) + 0.98(100) + 0.78(300) = 126 + 432 + 98 + 234 = 890

Now divide to get your weighted average:

890 / 1000 = 0.89

You ended up with 89%, just missing an "A".

Percentiles and Box Plots

Percentiles

We saw that the median splits the data so that half lies below the median. Often we are interested in the percent of the data that lies below an observed value. We call the rth percentile the value such that r percent of the data fall at or below that value.

Example: If you score in the 75th percentile, then 75% of the population scored lower than you.

Example: Suppose the test scores were

22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, and 99

If your score was 75, in what percentile did you score?

Solution: There were 14 scores reported and there were 4 scores at or below yours. We divide:

(4 / 14) x 100% = 29%

So you scored in the 29th percentile.

There are special percentiles that deserve recognition.
The second quartile (Q2) is the median, or the 50th percentile. The first quartile (Q1) is the median of the data that fall below the median; this is the 25th percentile. The third quartile (Q3) is the median of the data falling above the median; this is the 75th percentile.

We define the interquartile range as the difference between the first and the third quartiles:

IQR = Q3 - Q1

Range: The range is the difference between the largest and the smallest observations. For the test scores above, Range = 99 - 22 = 77.

Box Plots

Another way of representing data is with a box plot, built from the five-number summary (minimum value, first quartile Q1, median, third quartile Q3, and maximum value). To construct a box plot we do the following: draw a rectangular box whose bottom is the lower quartile (25th percentile) and whose top is the upper quartile (75th percentile); draw a horizontal line segment inside the box to represent the median; then extend horizontal line segments ("whiskers") from each end of the box out to the most extreme observations.

Example: Suppose the test scores were

22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, and 99

Minimum value = 22
First quartile Q1 = 75 (the median of the 7 smallest observations)
Median Q2 = 82 (the median of 14 values is the average of the 7th and 8th observations)
Third quartile Q3 = 90 (the median of the 7 largest observations)
Maximum value = 99
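The five-number summary for the test scores can be computed with the same split-at-the-median convention used above (Q1 and Q3 as medians of the lower and upper halves):

```python
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    # average the two middle values when the count is even
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def five_number_summary(xs):
    xs = sorted(xs)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2:]
    # Q1/Q3 are the medians of the halves below/above the overall median
    return min(xs), median(lower), median(xs), median(upper), max(xs)

scores = [22, 34, 68, 75, 79, 79, 81, 83, 84, 87, 90, 92, 96, 99]
print(five_number_summary(scores))   # (22, 75, 82.0, 90, 99)
```

Other quartile conventions exist (for example, interpolation-based percentiles), so software packages may report slightly different Q1 and Q3 values for the same data.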
NOAA researchers and their international partners in Indonesia, Papua New Guinea, and the Solomon Islands are using satellite transmitter technology to track the endangered leatherback sea turtle across the Pacific Ocean. Transmitters attached to the carapace of the turtle send signals to satellites providing researchers with information on the animals' geographic location, diving behavior, and sea temperatures.
Recently, a female leatherback sea turtle was tracked for 647 days and 12,744 miles during its journey from a nesting beach of Papua, Indonesia to its foraging area off the Pacific coast of the United States of America.
This international collaborative effort allows researchers to learn which migratory routes and foraging habitats are used by these endangered ambassadors of the sea. Understanding sea turtles' movements is critical to identifying the habitat essential to their survival and recovery, and to ensuring their protection as they pass through multiple nations' territories and international waters.
Leatherback populations face threats from egg harvesting, fishery bycatch, ingestion of debris, direct harvest, and habitat loss. Satellite tracking technology is one tool that lets NOAA researchers unlock the secrets of this species' incredible journeys, helping us better understand where the turtles go, what threats they might face at sea, and what management efforts will be required to ensure the species' survival. The technology can be used in all the world's oceans and is also being applied to research on other species, both sea turtles and non-turtles.
What is DNA sequencing?
Finding a single gene amid the vast stretches of DNA that make up the human genome - three billion base-pairs' worth - requires a set of powerful tools. The Human Genome Project (HGP) was devoted to developing new and better tools to make gene hunts faster, cheaper and practical for almost any scientist to accomplish.
These tools include genetic maps, physical maps and DNA sequence - which is a detailed description of the order of the chemical building blocks, or bases, in a given stretch of DNA. Indeed, the monumental achievement of the HGP was its successful sequencing of the entire length of human DNA, also referred to as the human genome.
Scientists need to know the sequence of bases because it tells them the kind of genetic information that is carried in a particular segment of DNA. For example, they can use sequence information to determine which stretches of DNA contain genes, as well as to analyze those genes for changes in sequence, called mutations, that may cause disease.
What sequencing methods were developed?
The first methods for sequencing DNA were developed in the mid-1970s. At that time, scientists could sequence only a few base pairs per year, not nearly enough to sequence a single gene, much less the entire human genome. By the time the HGP began in 1990, only a few laboratories had managed to sequence a mere 100,000 bases, and the cost of sequencing remained very high. Since then, technological improvements and automation have increased speed and lowered cost to the point where individual genes can be sequenced routinely, and some labs can sequence well over 100 million bases per year.
Beginning in the late 1990s, the scientific community witnessed a remarkable climax of accomplishments related to DNA sequencing. In addition to the historic sequencing of the human genome, sequences have now been generated for the genomes of several key model organisms, including the mouse (Mus musculus); the rat (Rattus norvegicus); two fruit flies (Drosophila melanogaster and D. pseudoobscura); two roundworms (Caenorhabditis elegans and C. briggsae); yeast (Saccharomyces cerevisiae) and several other fungi; a malaria-carrying mosquito (Anopheles gambiae) along with a malaria-causing parasite (Plasmodium falciparum); two sea squirts (Ciona savignyi and C. intestinalis); a long list of microbes; and a couple of plants, including mustard weed (Arabidopsis thaliana) and rice (Oryza sativa). Sequencing work is well underway on the honey bee (Apis mellifera), and is just getting started or expected to begin soon on the chimpanzee (Pan troglodytes), the cow (Bos taurus), the dog (Canis familiaris) and the chicken (Gallus gallus). The relative genetic simplicity of many of these model organisms makes them ideal terrain for future technology development.
Although providing a single reference sequence of the human genome is an extraordinary achievement, further advances in sequencing technology are necessary so large amounts of DNA can be manipulated and compared with other genomes quickly and cheaply. Comparing differences among long stretches of DNA - one million bases or more - taken from many individuals should yield an enormous amount of information about the role of inheritance in disease susceptibility, response to environmental influences and even evolution.
What did scientists discover for future research?
The Human Genome Project's (HGP) successful sequencing of the human genome has provided scientists with a virtual blueprint of the human being. However, this accomplishment should be viewed not as an end in itself, but rather as a starting point for even more exciting research. Armed with the human genome sequence, researchers are now trying to unravel some of biology's most complicated processes: how a baby develops from a single cell, how genes coordinate the functions of tissues and organs, how disease predisposition occurs and how the human brain works.
DNA sequence information derived by the HGP laboratories is freely accessible to scientists through GenBank [ncbi.nih.gov], a database run by the National Institutes of Health and the National Library of Medicine's National Center for Biotechnology Information [ncbi.nih.gov].
Last Reviewed: December 27, 2011 |
New method to remove cyanide from industrial waste water
Chemists at the University of Amsterdam (UvA) have discovered a new method for removing cyanide from the waste water of steel mills. The removal of cyanide from such water is expensive but essential. Paula Oulego Blanco, Dr Raveendran Shiju and Prof. Gadi Rothenberg from the UvA’s Sustainable Chemistry research priority area discovered a way to do this faster, cheaper and more efficiently.
Rothenberg: ‘Society needs more and more steel. In 2014, the worldwide production of steel was a staggering 1.6 billion tons. Any improvement in the production process results in a benefit to the environment. Our new catalyst enables a simple, efficient and safe removal of cyanide from the steel waste water.'
A catalyst for cleaner water
The group first tested the new catalyst in the laboratory using a ‘cocktail’ of simulated waste water. When that proved to be a success, the researchers repeated the experiments and the measurements with waste water from a steel mill. Here, too, they found a reduction of 90%.
The invention pertains to a heterogeneous catalyst. This is a solid material that does not dissolve and is not consumed during the process. This means that a small amount of catalyst can be used to purify large amounts of waste water. The precise reactions at the catalyst surface are still unknown, as is the case for many solid-catalysed reactions.
The UvA has filed a patent application that will be made available to the steel-making industry. Rothenberg's research group has patented several different catalysts over the past few years. Some of these are now applied by the chemical industry, while others form the basis for start-up companies or bilateral collaborations.
Environmental effects of steel production
Steel is one of the most widely used materials on earth. Its ubiquity in everyday life makes its absence almost unimaginable. The production of steel has an impact on the environment, and steel-making companies are continuously trying to improve their environmental performance and invest in new technologies to achieve this goal. |
There are many ways to generate a random number based on how many bits you want and how random it has to be.
It is important to remember three things:
- Computers are not random; as such they can only generate pseudo-random numbers (numbers which seem random but eventually repeat or show a pattern).
- User input is pretty random, and is a very useful thing for making numbers more random.
- If you give a random number generator the same seed, you will get the same output. So use user input to seed the generator.
- The r register can be used too, since its value will be fairly random when observed infrequently; but many emulators don't emulate it, and (if emulated correctly, or on hardware) it only provides 7 bits of data.
An easy and effective way to seed a generator is to simply increment (or otherwise modify) its seed once per frame, and/or increment/modify it based on user input. The important part is to then pass these seed data into an algorithm that produces pseudo-randomly-distributed results, which have little correlation to the input data.
Phantasy Star's random number generator
; Uses a 16-bit RAM variable called RandomNumberGeneratorWord
; Returns an 8-bit pseudo-random number in a
    ld hl,(RandomNumberGeneratorWord)
    ld a,h      ; get high byte
    rrca        ; rotate right by 2
    rrca
    xor h       ; xor with original
    rrca        ; rotate right by 1
    xor l       ; xor with low byte
    rrca        ; rotate right by 4
    rrca
    rrca
    rrca
    xor l       ; xor again
    rra         ; rotate right by 1 through carry
    adc hl,hl   ; add RandomNumberGeneratorWord to itself
    jr nz,+     ; if the result was zero then re-seed the
    ld hl,$733c ;   random number generator
+:  ld a,r      ; r = refresh register = semi-random number
    xor l       ; xor with l which is fairly random
    ld (RandomNumberGeneratorWord),hl ; store the updated state
    ret         ; return random number in a |
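For readers without a Z80 handy, here is a Python sketch of the same mixing steps. One liberty is taken: since there is no refresh register to read outside the Z80, the final `ld a,r / xor l` stage is replaced by simply returning the low byte of the state.

```python
RESEED = 0x733C  # constant the routine loads when the state hits zero

def rrca(value, count):
    """Rotate an 8-bit value right by `count` bits (like repeated RRCA)."""
    count %= 8
    return ((value >> count) | (value << (8 - count))) & 0xFF

def make_rng(seed=RESEED):
    state = seed
    def rand8():
        nonlocal state
        h, l = state >> 8, state & 0xFF
        a = rrca(h, 2) ^ h      # ld a,h / rrca x2 / xor h
        a = rrca(a, 1) ^ l      # rrca / xor l
        a = rrca(a, 4) ^ l      # rrca x4 / xor l
        carry = a & 1           # rra pushes bit 0 into the carry flag
        state = ((state << 1) | carry) & 0xFFFF   # adc hl,hl
        if state == 0:
            state = RESEED      # re-seed on the degenerate all-zero state
        return state & 0xFF
    return rand8
```

Because the generator is deterministic, two instances built from the same seed produce identical streams, which is exactly why the article stresses perturbing the seed with user input or a frame counter.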
A collision is an interaction between two objects that have made contact (usually) with each other. As in any interaction, a collision results in a force being applied to the two colliding objects. Newton's laws of motion govern such collisions. In the second unit of The Physics Classroom, Newton's third law of motion was introduced and discussed. It was said that...
... in every interaction, there is a pair of forces acting on the two interacting objects. The size of the force on the first object equals the size of the force on the second object. The direction of the force on the first object is opposite to the direction of the force on the second object. Forces always come in pairs - equal and opposite action-reaction force pairs.
Newton's Laws Applied to Collisions
Newton's third law of motion is naturally applied to collisions between two objects. In a collision between two objects, both objects experience forces that are equal in magnitude and opposite in direction. Such forces often cause one object to speed up (gain momentum) and the other object to slow down (lose momentum). According to Newton's third law, the forces on the two objects are equal in magnitude. While the forces are equal in magnitude and opposite in direction, the accelerations of the objects are not necessarily equal in magnitude. In accord with Newton's second law of motion, the acceleration of an object is dependent upon both force and mass. Thus, if the colliding objects have unequal mass, they will have unequal accelerations as a result of the contact force that results during the collision.
Consider the collision between the club head and the golf ball in the sport of golf. When the club head of a moving golf club collides with a golf ball at rest upon a tee, the force experienced by the club head is equal to the force experienced by the golf ball. Most observers of this collision have difficulty with this concept because they perceive the high speed given to the ball as the result of the collision. They are not observing unequal forces upon the ball and club head, but rather unequal accelerations. Both club head and ball experience equal forces, yet the ball experiences a greater acceleration due to its smaller mass. In a collision, there is a force on both objects that causes an acceleration of both objects. The forces are equal in magnitude and opposite in direction, yet the least massive object receives the greatest acceleration.
Consider the collision between a moving seven ball and an eight ball that is at rest in the sport of table pool. When the seven ball collides with the eight ball, each ball experiences an equal force directed in opposite directions. The rightward moving seven ball experiences a leftward force that causes it to slow down; the eight ball experiences a rightward force that causes it to speed up. Since the two balls have equal masses, they will also experience equal accelerations. In a collision, there is a force on both objects that causes an acceleration of both objects; the forces are equal in magnitude and opposite in direction. For collisions between equal-mass objects, each object experiences the same acceleration.
Consider the interaction between a male and female figure skater in pair figure skating. A woman (m = 45 kg) is kneeling on the shoulders of a man (m = 70 kg); the pair is moving along the ice at 1.5 m/s. The man gracefully tosses the woman forward through the air and onto the ice. The woman receives the forward force and the man receives a backward force. The force on the man is equal in magnitude and opposite in direction to the force on the woman. Yet the acceleration of the woman is greater than the acceleration of the man due to the smaller mass of the woman.
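The skater example can be checked numerically with conservation of momentum and Newton's second law. The 3.5 m/s toss speed and the 100 N force are assumed values chosen for illustration; they are not from the original lesson:

```python
m_woman, m_man = 45.0, 70.0   # masses in kg, from the example
v0 = 1.5                      # m/s, initial speed of the pair

p_total = (m_woman + m_man) * v0          # 172.5 kg*m/s, conserved in the toss

v_woman = 3.5                             # assumed forward speed of the woman after the toss
v_man = (p_total - m_woman * v_woman) / m_man

print(v_man)  # about 0.21 m/s: the man slows down but still moves forward

# Equal and opposite forces imply accelerations inversely proportional to mass:
F = 100.0                                 # arbitrary shared force magnitude, in newtons
print(F / m_woman, F / m_man)             # the lighter woman accelerates more
```

Whatever toss speed is used, the man's change in momentum is equal and opposite to the woman's, which is the content of Newton's third law here.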
Many observers of this interaction have difficulty believing that the man experienced a backward force. "After all," they might argue, "the man did not move backward." Such observers are presuming that forces cause motion. In their minds, a backward force on the male skater would cause a backward motion. This is a common misconception that has been addressed elsewhere in The Physics Classroom. Forces cause acceleration, not motion. The male figure skater experiences a backwards force that causes his backwards acceleration. The male skater slows down while the woman skater speeds up. In every interaction (with no exception), there are forces acting upon the two interacting objects that are equal in magnitude and opposite in direction.
Collisions are governed by Newton's laws. The law of action-reaction (Newton's third law) explains the nature of the forces between the two interacting objects. According to the law, the force exerted by object 1 upon object 2 is equal in magnitude and opposite in direction to the force exerted by object 2 upon object 1.
Check Your Understanding
Express your understanding of Newton's third law by answering the following questions. Click the button to check your answers.
1. While driving down the road, a firefly strikes the windshield of a bus and makes a quite obvious mess in front of the face of the driver. This is a clear case of Newton's third law of motion. The firefly hit the bus and the bus hits the firefly. Which of the two forces is greater: the force on the firefly or the force on the bus?
2. For years, space travel was believed to be impossible because there was nothing that rockets could push off of in space in order to provide the propulsion necessary to accelerate. This inability of a rocket to provide propulsion in space is because ...
a. space is void of air so the rockets have nothing to push off of.
b. gravity is absent in space.
c. space is void of air and so there is no air resistance in space.
d. ... nonsense! Rockets do accelerate in space and have been able to do so for a long time.
3. Many people are familiar with the fact that a rifle recoils when fired. This recoil is the result of action-reaction force pairs. A gunpowder explosion creates hot gases that expand outward allowing the rifle to push forward on the bullet. Consistent with Newton's third law of motion, the bullet pushes backwards upon the rifle. The acceleration of the recoiling rifle is ...
a. greater than the acceleration of the bullet.
b. smaller than the acceleration of the bullet.
c. the same size as the acceleration of the bullet.
4. Kent Swimm, who is taking Physics for the third year in a row (and not because he likes it), has rowed his boat within three feet of the dock. Kent decides to jump onto the dock and turn around and dock his boat. Explain to Kent why this docking strategy is not a good strategy.
5. A clown is on the ice rink with a large medicine ball. If the clown throws the ball forward, then he is set into backwards motion with the same momentum as the ball's forward momentum. What would happen to the clown if he goes through the motion of throwing the ball without actually letting go of it? Explain.
6. Chubby, Tubby and Flubby are astronauts on a spaceship. They each have the same mass and the same strength. Chubby and Tubby decide to play catch with Flubby, intending to throw her back and forth between them. Chubby throws Flubby to Tubby and the game begins. Describe the motion of Chubby, Tubby and Flubby as the game continues. If we assume that each throw involves the same amount of push, then how many throws will the game last? |
Who's Afraid of the Big, Bad Bully? Extension Activities
- Grades: PreK–K, 1–2
About this book
HOW TO HANDLE A BULLY (Critical Thinking)
Hold a group discussion about bullies. Have children had any problems with bullies? How did they handle them? Jot down students' suggestions. Do children think that the kids in the story handled Bertha the right way at the end? Why or why not? When the discussion is over, children might want to compile their suggestions for handling bullies into an advice book, How to Get Along. |
Serfs of Poland and Russia Part I Early History of Serfdom by Robert S. Sherins, M.D.
In 1648, Cossack forces inspired a peasant uprising in southwestern Ukraine against the absentee Polish landlords, local Polish inhabitants, resident Jewish arendars (lessees of village farms and inns), and Jewish managers, who served the landlords to administer the properties and to collect the rents and taxes that had been imposed by the Polish nobility and Boyar proprietors. The resultant riots became undisciplined and deteriorated into deplorable massacres of the town’s Polish inhabitants and clergy, as well as the Jewish communities.
Huge regions of Western Russia had been occupied as a result of the expansion of the Commonwealth of Poland-Lithuania during the 16th and 17th centuries. By 1610-1612, the Polish monarch achieved suzerainty over the Russian throne in Moscow. Roman Catholic clergy persisted in their attempts to convert the local Orthodox populations in the occupied regions, while referring to the Orthodoxy as the “religion of the serfs.” Polish laws were reinstated that restricted the property rights and privileges of the serfs in Russia, further enslaving the serfs to their landlords. All of those factors served to agitate the relationships between the Polish and Russian communities. Ultimately, Hetman Bogdan Chmielnicki gathered the peasants into armed bands of fierce horsemen, who were then joined by disaffected remnants of the Tatars and Ottomans. Together, they first defeated the Russian military in decisive battles and then turned upon the Poles.
Many of us have discovered ancestors, who lived in those regions along the borders between the Commonwealth of Poland-Lithuania and Russia. The basis of the ethnic conflicts had profound effects upon our ancestors. Underlying the conflicts were issues of property rights, civil rights, and the freedom of religious worship. This article was written with the purpose of reviewing some of those issues that were fundamental to the mounting discontent among the peasants (serfs, khlops).
Early Russian Trade History
By the 8th century, Slavic tribes in Eastern Europe began to diversify from their basic agricultural occupations. To defend the small towns, outsiders were hired. In time, these mercenaries assumed increasing roles in the governance of the towns. Among them were the Vikings (also known as the Varangians), who were particularly successful in organizing the communities/tribes. It was Viking princes who created the Kievan Rus, the federation that became the first Russia.
Trade became essential to the economic survival of the towns. Agricultural production was often sufficient to support the small local populations, but trade was required to exchange goods for items not locally available. The river waterways from the Baltic Sea to the Caspian Sea and Black Sea were crucial for transporting those goods. Vikings had a long history of sailing expertise and plied the waters of the Don, Oka, Vistula, and Volga Rivers in their search for trading markets.
The Kievan Rus was established in 880. It was a loose federation of nearly autonomous city-states, which were communities that shared defenses under an umbrella of protection provided by the Princes. In the beginning, there were endless conflicts among the regional towns and city-states and no fixed boundaries. As well, the Princes had no legal rights of hereditary succession, which led to obvious conflicts among the claimants. Nomadic tribes and bands of outsiders from the Central Asian Steppes frequently attempted to take over the region. There was a fascinating cultural conflict between the nomadic peoples, who needed new pastures for their livestock, and the city-states, which wanted to protect their properties with boundaries.
The economic success of the Kievan Rus was based upon the access and trade along the rivers, which enabled them to exchange commercial goods with their neighbors. The Kievan Rus Princes demanded tributes for use of the waterways in the regions that they controlled. In return, they protected their neighbors from the Khazars. Commercial trading prospered, but armed conflicts developed between the Byzantines, Greeks, Khazars and the Rus. The Rus created significant commercial links with the East via the Caspian Sea routes. Trade with the West required extensive travel via the European rivers to Hungary, Poland, Germany, France, and England, as well as the Scandinavian countries of Denmark, Norway, and Sweden. Salt, metals, and jewelry were among the most important products needed by the Kievan Rus.
Early traders exchanged cattle and furs for other products. By the 11th century, monetary currencies were utilized in exchange for goods. Coins have been found that were minted as early as the 11th century. As a result, it was the cities along the waterways that became the urban trade centers of the Kievan Rus. Russians used money in trade, but it was not the first currency of the region. Greek coins were used as early as the 5th-6th century BCE (Before Common Era).
As a direct result of the expansion of trade, new colonies were developed. There were many new towns comprised of “settlers” along the rivers. Initially, “colonization” was required, which later was followed by regional organization and political management.
Maps of Russia from the 12th to 13th centuries already had identified the large regions or provincial territories that they controlled. Those regions were: Volhynia and Halicz in the South; Polotsk, Chernigov, Seversk and Riazan to the West; and to the North were Novgorod and Finland.
Importance of Agriculture
The earliest tribes could sustain themselves from their own agricultural yields. It was the weather, which limited the production from the farms. As the city-states developed there were increasing populations to feed. Ever larger supplies were required from the import/export trading along the rivers. Commercial trade was controlled totally by the Princes and their retinues. Trade was the predominant income earner. Farming was much less profitable and was the principal occupation of the common folks/peasants. Russians traded furs, honey, wax, and slaves for silks, wines, fruits, and weapons.
In order to clear the fields, smaller farms were prepared by slashing the vegetation and burning the forests. Seeds could be spread by broadcast methods. Tree branches were used to rake and spread the soil in the fields. It was an inefficient method, but few farm implements were available. Agricultural science was unknown, so fields were planted for seven or eight years until the yields from the farms declined. Then the fields would be abandoned for several years and the process would be repeated. It was a wasteful process.
In the open Russian Steppes, field grasses were continuously cropped by hand until the grasses were depleted. Then, the same fields would be tilled again until the farm yields became exhausted. There was no regular rotation of fields or crops. The so-called 3-field rotation methods were not introduced until the 14th-15th centuries. When animal husbandry methods became available, farming production improved considerably and larger populations could be fed. As well, agricultural products then could be exported since there were ample food supplies for the local population.
Hand labor relied upon simple farm tools. Commonly used were hand axes, sickles, scythes, and hoes that could be converted into plows. Later on, iron tools were introduced, such as the iron plowshare. It wasn’t until the 17th century that oxen and horse-drawn plowing methods became firmly established.
Long before the development of the Kievan Rus, Slavic tribes began to assemble themselves into groups that were organized for more efficient labor sharing. Family clans developed which were defined and organized by their “blood” relationships. Communes of those gathered families began to appear, which were led by a “Patriarch.” They worked together for the success and betterment of the entire clan. The common labor pool was more efficient. By the 11th century, territorial communes developed. They consisted of much larger populations of shared labor, who were bound by their mutual socio-economic requirements and benefits. Families lived separately, but probably shared their resources of the local pastures and forests, and combined the tasks of tilling and operating the farms. Farming, which was very difficult and labor intensive, required large amounts of manpower that exceeded the capabilities of single families. It can be assumed that the larger territorial communes must have included other obligatory collective functions. In the Dnieper region, a territorial commune was called a Verv; in the region of Novgorod, it was known as a Mir. Importantly, the larger communes had well-defined boundaries. Free peasants, who lived among the Kievan Rus, formed the lowest social group of individuals. They were known as Smerdy (plural) or Smerd (singular).
Competition between the Elite Class and Peasantry
The noble class of the Kievan Rus consisted entirely of the princes and their retinue. In time, the elite were comprised of the landowners, court retinue of advisors and servants, and the bureaucrats. With successive generations, this social order of the privileged elite expanded to ever-larger numbers of individuals and their families. In contrast, the free peasants, who served as the essential farmers, were increasingly separated from the elite status. In the 11th century, Jaroslav, Prince of Kiev, declared that all free men were to be equal under the law. In practice, however, there were major differences in privileges between the classes. The elite were valued much higher as demonstrated by the fines imposed for the injury or murder of an elite member in contrast to a peasant. Distinctive class differences appeared by the late 11th century.
New concepts of land ownership by the governing elite developed in the 12th century. Those landowners governed huge plots of estate lands. The principal incomes of the princes and elite continued to be earned from the extensive trading practices. In time, the payments from the princes to their retinue of servants and bureaucrats became excessive. In order to keep up with the expanding population of the privileged, the princes switched their method of sustaining the elite with currencies to providing them with grants of land. By the 12th-13th centuries, ownership of very large estates of forests and farms passed to individuals other than only to the children of the princes and nobility.
Granting land to individuals was the beginning of private and hereditary property rights in Russia. Previously, the Vikings of Norway had acknowledged the concepts regarding hereditary titles and land ownership. In a similar way, the princes of the Kievan Rus encouraged the Christian Church to help organize and settle new communities by also offering them gifts of parcels of land. Thus, the private land holdings of the Church expanded significantly in Russia.
A larger retinue of servants of the nobility was required to protect the newer communities from marauding nomadic foreign tribes. That is how the Viking mercenaries first established themselves in Kiev. Later on, other groups of people with different ethnicities served the aristocracy. That new “elite” joined the upper class in the Kievan Rus. They were known as “Boyars.” By the 12th century, there was a fusion of the Boyars with the other elements from the social, political, and economic elite. This upper class became quite large and powerful. In time, however, the Boyars challenged the policies of the princes and nobility.
The land holdings of the elite continued to increase as a result of the shift from supporting them monetarily to granting them land for services provided to the monarch. Those servants of the monarchy, Druzhiny, remained loyal to the princes in return for the gifts of land. Thus, the Druzhiny became the principal and outright landowners of Russian properties.
If the Druzhiny were removed from providing the “services to the monarchy,” they were able to still retain their properties. As a result, the wealth and power of the Druzhiny shifted from total dependency upon the princes to self-sufficiency from the incomes derived from their immense land grants. Boyars and Druzhiny became increasingly independent. Sons inherited the properties from their fathers. Daughters only inherited properties if there were no brothers/sons. Inheritance was guided by the instructions of the Wills of the deceased. By the end of the 12th century, almost all of the land was divided among the princes and nobility, the Boyars and Druzhiny, and the Church. It is not known how much of the Russian land was still governed by the large territorial communes of free peasants.
Feudalism in Medieval Europe
Feudalism, as a form of land management in the Middle Ages, appeared in the 9th century with the beginning of the disintegration of the Roman and German Empires and their settlements. Possibly due to the breakdown of the authority of central governments, feudalism became established in Europe. During the era of Roman villas, land was temporarily granted, but could be revoked. Poor tenants had to give back the land to their protectors, which may have been the basis for the development of the "Manorial System" which followed. The Romans and Germans also surrounded themselves with people who offered services and military protection. A vassal of the monarch swore allegiance and by so doing was awarded land (a fief, or fiefdom) and special rights in return for services provided to the monarch. Thus, a system of providing land for services was established. In time, greater services were required and larger land grants were given, which in turn established a basis for demanding from the protectors the rights of inheritable lands, greater justice, and shielding from interference by the monarchy in the affairs of the landlords.
Monarchies offered gifts of land to the church in exchange for assistance in establishing monasteries and churches in newly settled regions. The land granted to the church carried feudal obligations, which were similar to the responsibilities of the secular communities. In determining the policies over church-held lands, the bishops and abbots had enormous power over policies that were implemented in the region.
Feudalism first spread throughout most of Western Europe including England, France, Germany, Italy, Spain, and partly in Scandinavia. After the 10th century, feudalism spread into Eastern Europe. With the rise of powerful monarchies, power and land was increasingly concentrated in the hands of only a few individuals. The exchange of land for services provided to the monarchy led to the emergence of a new class of burgers/boyars in the towns. As a result, conflicts arose between the monarchy and nobility and the burgers/boyars.
Larger estates required significantly different methods of management. An elaborate hierarchy of management was required to control the increasing numbers of peasants. The term, latifundia, refers to the great landed estates with primitive agriculture, which depended upon the labor of the peasants/serfs and slaves. The profits from the agricultural output directly benefited the landowners. An alternative method of selling off leaseholds on the land was financially less desirable for the owners, even though through leaseholds the owners would have been spared the enormous responsibilities of managing their huge work force. In Poland and Romania, however, the Boyars frequently utilized the leasehold methods.
In medieval Europe, a “Manorial System” was recognized, in which all phases of the agricultural community were regulated under the “lord of the land.” The fundamental purpose of the system was the economic benefit to the landowners, but it included the economic, social, local justice, and taxation policies, as well as the laws governing the land tenure of the peasants. Manorial administration was related to feudalism with the exception that there was no connection with the military defenses or political relationships of the region. The serfs held land given by the lord of the estate. In return, the lord was required to provide the serfs with specified services and money.
The prince, lesser noble, or Boyar, could administer a manorial estate. The owner was required to provide military protection, services, and income to the peasants. Land was retained by the lord and was “loaned” to the peasants, who cultivated the farms and produced the agricultural products. The lord retained all rights to the land, but could not redeem the land or increase the dues charged to the peasants for use of the land. Serfs retained their hereditary rights to the use of the land. Servitude or slavery became issues about freedom, but those rights were related to the land rather than the individual.
The “manor” was an administrative unit of the territory, which was presided over by the lord. The lord or his agents (bailiffs and provosts) served as administrator of justice, determined all public policy, and collected the taxes. Parts of the estate could be transferred to others, but the single manor and lord remained in charge. Thus, a manor might serve several subsequent lords. Tenants were required to maintain the land, roads, and bridges, as well as the castle of the lord.
Typically, the lord lived in the manor house on the property. But, the land possessions were divided into arable lands held by the serfs, meadows, woodlands, and wasteland. A serf could hold a parcel of land or a single individual might hold several separate unconnected strips. Meadows were held in common. The lord of the manor most often retained the use of the woodlands and was compensated for the cut wood, animals hunted, or fish caught in the ponds. During poor economic times, the lord was obliged to intercede by supplying money or credits to the serfs to prevent starvation. In normal times, the lord received part of the agricultural yield or woodland products as compensation for providing the land. Other dues to the lord were based upon the rights of justice supplied by the lord, the small industries that developed on the estate, or the use of the lord's mills and ovens. In such cases, dues in the form of cloth, building material and ironware might be paid to the lord. As well, the lord could be compensated with food, lodging, and other services that he required.
The manorial system was probably developed from the earlier estate management concepts of the Romans and the Germans. Later on, other factors contributed to the modification of the manorial system. Economic competition in the form of capitalism and centralized monarchies gradually replaced the subsistence system of the manors, which slowly began to disappear in Spain (after the Moors), England, France (after the Revolution), Italy, Austria, Prussia, and Hungary.
There were regional differences in the rights provided to the serfs. Some tenants were completely bound to the land. That differed from slavery by the fact of their inherited rights to the land, which could be passed on to their sons. The land could not be sold or given away without the serfs. Also, the lords were required to provide certain services to the serfs. In some areas serfs held individual rights; in other regions serfs had group rights or even served without landlords. Sometimes after wars the conquered peoples were reduced to serfdom by the victors, rather than retained as slaves. Tribute/taxes then were paid to the victors.
In Russia, the peasants/serfs were known as "Smerdy." They could become hired laborers either by contract or as indentured tenants. In dire times, some smerdy sold themselves into slavery in order to provide their families with enough food to survive. Hired tenants remained free men, but were dependent upon the proprietors for both economic and legal services. The social status of tenants declined and became more precarious. By the 12th century, independent peasant communes had been established. However, if a peasant died without heirs, the property reverted to the prince. Later on, the individual could bequeath his land holdings to whomever he chose, including female heirs or the church. Unless there was a private special arrangement, smerdy tenants retained their personal freedoms and were able to leave the proprietor. However, in more rural areas, slaves and peasants had no freedoms.
Slaves were a primary source of income for the princes. Slaves had no special rights and were transported and traded during the early years of the Kievan Rus. As early as the 9th century, slaves were sold to the Byzantines. Russians also owned slaves, who were essential to the economy of their agricultural system. There were large rewards for catching runaway slaves. Even monasteries owned slaves, who had been either civilian or military prisoners of war. The children of slaves remained slaves. However, if a child was the issue of a slave and her master, the child slave was freed upon the death of the master. An indentured runaway, if caught, could be enslaved by his lord. Bankrupt merchants, who had used very poor judgment, could be enslaved to settle the debts and expenses that were incurred by the proprietor. Also, the property of the merchant could be sold to defray the expenses to the landowner.
While the serfs were bound to the land and had specified rights and services pledged to them by the proprietor, slaves were merely chattel and without any rights. A man, who married a slave, could remain free only if his wife’s owner granted permission. Proprietors could demand payment for the death of a slave to compensate the owner for the property, as if the slave were a parcel of land or material object. Properties could be purchased, sold, or borrowed by a slave, but always in the name of the owner. Indentured servants could borrow from the proprietor, but the interest on the loan was so great that the debt rarely could be repaid and the servant never achieved his freedom. Slave owners were completely responsible for the consequences of all actions taken by their slaves. Although it was possible for a slave to purchase his freedom, few had the resources to attain their liberty. The value of one slave was equivalent to the cost of one goat or sheep. But, a pig was valued at two slaves; a mare horse was valued at four slaves.
Russian peasants were known as “Zakupy.” They formed the largest percentage of the population. In an attempt to achieve more rights, they staged a revolt in Kiev in the 12th century. Not all Zakupy lived in the rural regions. A few lived in cities where they served their creditors. Zakupy were also known as peons. Zakupy could become enslaved if they stole from their creditors, if the master had to sell a peon to pay for the damages or expenses created by the peon, or was a captured escapee. Russian law afforded few protections for the peons. However, if a peon was abused or if the creditor took the peon’s property, the court might demand the freedom of the peon’s debt.
There were other types of farm workers besides serfs, slaves, and Zakupy. Vdachi were workers who performed labor for the proprietor and received a subsidy for their efforts. As an example, if someone were ruined as a result of a catastrophe, he could seek aid from a landlord. Such money was gifted and the grant was not to be considered a loan. Vdachi worked for a specified time to repay the assistance that was granted. Riadovichi referred to laborers who made a contract to work for a specified amount of money or services and worked for an agreed-upon amount of time. Izgoi were workers who lost their jobs and were unable to locate new employment. Such workers might have been illiterate sons of priests (priests could marry then, so their children inherited the priestly caste), freed slaves, or insolvent merchants who had escaped enslavement. Later on, an "orphaned prince" qualified as an Izgoi. Brothers of a prince who died without issue, and who would have no possibility of ascending to the throne, might have become Izgoi.
Full Heart and Cardiovascular System of the Upper Torso Description
The heart is mostly made of cardiac muscle tissue, which requires its own constant supply of oxygenated blood. The left and right coronary arteries provide this blood supply to feed the heart’s own energy demands. Small blockages in the coronary arteries lead to chest pain called angina pectoris; complete blockages of the coronary arteries lead to myocardial infarctions, better known as heart attacks.
The pulmonary arteries and pulmonary veins provide vital but short distance blood flow between the heart and the lungs. Exiting the heart from the right ventricle, deoxygenated blood flows into the large pulmonary trunk before splitting at the left and right pulmonary arteries. The pulmonary arteries carry blood to smaller arterioles and on to the vast capillary beds of the lungs where carbon dioxide is released and oxygen is obtained from the air in the alveoli of the lungs. These capillaries converge into larger venules, which further converge to form the left and right pulmonary veins. Each pulmonary vein carries blood from a lung back to the heart where it reenters through the left atrium.
Oxygenated blood exits the left ventricle of the heart and enters the aorta, the largest artery in the human body. The ascending aorta extends superiorly from the heart before making a 180-degree turn to the left in a portion called the arch of the aorta. From there it passes posterior to the heart as the thoracic aorta on its way toward the abdomen.
The aorta branches as it passes through the thorax, branching off into several major arteries as well as many minor ones.
- The left and right coronary arteries branch off from the ascending aorta, supplying the heart with its vital blood supply.
- The arch of the aorta branches off into three major arteries – the brachiocephalic trunk, left common carotid artery, and left subclavian artery. These arteries collectively supply the head and arms with oxygenated blood.
- The thoracic aorta continues to branch into many tiny arteries that supply blood to the organs, muscles, and skin of the thorax before entering the abdomen as the abdominal aorta.
- Blood from the abdominal aorta supplies oxygen and nutrients to the vital organs of the abdomen through arteries such as the celiac trunk and common hepatic arteries.
Functioning at the end of the circulatory cycle, the veins of the upper torso carry deoxygenated blood from the tissues of the body back to the heart to be pumped through the body again. Blood returning to the heart from the lower torso and legs enters the upper torso in a large vein called the inferior vena cava. The inferior vena cava picks up deoxygenated blood from the hepatic and phrenic veins before entering the right atrium of the heart. Blood returning from the head enters the torso through the left and right jugular veins while blood returning from the arms enters through the left and right subclavian veins. The jugular and subclavian veins on each side merge to form the left and right brachiocephalic trunks, which go on to merge into the superior vena cava. Several smaller veins carrying blood from the organs, muscles, and skin of the upper torso also merge into the superior vena cava. The superior vena cava carries all of the blood from the arms and head into the right atrium of the heart.
Prepared by Tim Taylor, Anatomy and Physiology Instructor |
Videos to help Algebra I students learn how to recognize and use parent functions for linear, absolute value, quadratic, square root, and cube root functions to perform vertical and horizontal translations. Students identify how the graph of y = f(x) relates to the graphs of y = f(x) + k and y = f(x + k) for any specific values of k, positive or negative; find the constant value k, given the parent function and the translated graph; and write the function representing a translated graph.
New York State Common Core Math Module 4, Algebra I, Lesson 19
Given any function, how does adding a positive or negative value, k, to f(x) or x affect the graph of the parent function?
The value of the constant k shifts the graph of the original function k units up (if k > 0) and k units down (if k < 0) when k is added to f(x), such that the new function is g(x) = f(x) + k.
The value of k shifts the graph of the original function k units to the left (if k > 0) and k units to the right (if k < 0) when k is added to x, such that the new function is g(x) = f(x + k).
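These two translation rules can be checked numerically. The short sketch below uses a quadratic parent function as an illustration (the function and shift amounts are hypothetical, not from the lesson):

```python
# Translations of a parent function f:
#   vertical:   g(x) = f(x) + k   (up for k > 0)
#   horizontal: g(x) = f(x + k)   (left for k > 0)
def translate(f, k_vert=0, k_horiz=0):
    """Return the translated function g(x) = f(x + k_horiz) + k_vert."""
    return lambda x: f(x + k_horiz) + k_vert

f = lambda x: x ** 2             # parent function: y = x^2, vertex at (0, 0)

up3 = translate(f, k_vert=3)     # y = x^2 + 3, vertex moves to (0, 3)
left2 = translate(f, k_horiz=2)  # y = (x + 2)^2, vertex moves to (-2, 0)

print(up3(0))     # 3 -> minimum shifted 3 units up
print(left2(-2))  # 0 -> minimum shifted 2 units to the left
```

Evaluating the translated functions at the old and new vertex locations confirms the direction of each shift.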
During the Civil War, West Virginia is admitted into the Union as the 35th U.S. state, or the 24th state if the secession of the 11 Southern states were taken into account. The same day, Arthur Boreman was inaugurated as West Virginia’s first state governor.
Settlement of the western lands of Virginia came gradually in the 18th century as settlers slowly made their way across the natural Allegheny Plateau barrier. The region became increasingly important to the Virginia state government at Richmond in the 19th century, but the prevalence of small farms and absence of slavery began to estrange it from the east. Because slaves were counted in allotting representation, wealthy eastern planters dominated the Virginia legislature, and demands by western Virginians for lower taxes and infrastructure development were not met.
When Virginia voted to secede after the outbreak of the Civil War, the majority of West Virginians opposed the secession. Delegates met at Wheeling, and on June 11, 1861, nullified the Virginian ordinance of secession and proclaimed “The Restored Government of Virginia,” headed by Francis Pierpont. Confederate forces occupied a portion of West Virginia during the war, but West Virginian statehood was nonetheless approved in a referendum and a state constitution drawn up. In April 1863, U.S. President Abraham Lincoln proclaimed the admission of West Virginia into the Union effective June 20, 1863. |
A Stone Age painting of leopard-spotted horses on the walls of a French cave.
Credit: public domain
Ancient cave paintings that seemed to depict make-believe white-spotted horses might have been drawn from real life, scientists now find.
The cave paintings of the Stone Age are not only among the oldest drawings made by humans, but also serve as evidence of our growing capabilities. Scientists hotly debate how realistic these paintings are — discovering this fact could reveal whether ancient humans tended more toward accuracy or creativity.
The approximately 25,000-year-old paintings "The Dappled Horses of Pech-Merle" depict spotted horses on the walls of a cave in France, with a coat pattern remarkably similar to one known as "leopard" in modern horses such as Appaloosas. Horses were popular among Stone Age artists, appearing in most cave paintings that have recognizable animals in them, commonly in a caricature form that slightly exaggerates the most typical "horsey" features, such as their manes of hair.
Until now, ancient DNA analyses suggested horses during the Stone Age were only black or bay colored with no evidence for white-spotted patterns. This hinted that cave paintings of leopard-patterned horses were fantasy, not accurate portrayals. Some have proposed that drawings of imaginary animals might have had some kind of symbolic or even religious value.
Research now suggests those paintings might actually have been based on the real-life appearance of the animals.
Scientists investigated the differences in genes for coat color of 31 ancient horse fossils from Siberia, Eastern and Western Europe and the Iberian Peninsula. The researchers found that a genetic mutation associated with the presence of white leopard-like spotting patterns on modern horses was present in six of the European horse fossils. Additionally, seven of the fossils had the genetic variation for black coat color, whereas 18 had bay coats.
As such, all the horse colors seen in these drawings have now been found to exist in prehistoric horse populations. The findings suggest that cave paintings of horses may be more realistic and less symbolic or fantastic than supposed. Still, although these horses might not have been imaginary, "we cannot exclude that these horses had a religious value," researcher Arne Ludwig, an evolutionary geneticist at the Leibniz Institute for Zoo and Wildlife Research in Berlin, told LiveScience.
Leopard-spotted patterns in modern horses are sometimes linked with congenital problems such as stationary night blindness, perhaps explaining why wild horses carrying them eventually died out long ago. As to why the pattern was so common in the first place, perhaps it provided camouflage in the snowy environments of the Stone Age, was attractive to mates or simply persisted by random chance.
The scientists detailed their findings online today (Nov. 7) in the Proceedings of the National Academy of Sciences. |
This reptile is one of the largest crocodile species in the world. The irises of its eyes are silvery, while the pupil forms a vertical slit, helping the animal see well in low light. Unlike other crocodile species, these reptiles are not green. The body of the American crocodile is either tan grey or olive grey, covered with darker patches, and the belly is white or yellowish. The back is partially covered with bony armour formed by so-called osteoderms, or plates, which are more scattered in the American crocodile than in other crocodile species. The top jaw has pointed, conical teeth that interlock with the teeth of the bottom jaw. A large fourth tooth on each side of the bottom jaw remains prominent even when the jaw is closed.
American crocodiles are found on Cape Sable as well as along Lake Worth and the southeastern coast of Florida. Their range includes both the Atlantic and Pacific coasts of southern Mexico, stretching to Peru and Venezuela in South America. In addition, this reptile inhabits many Caribbean islands, including Jamaica, Cuba, Hispaniola and Grand Cayman. American crocodiles are aquatic animals, living in freshwater environments such as rivers, reservoirs and lakes as well as estuaries and swamps.
The American crocodile is most active at night. The animal spends the greater part of the evening submerged in water, which cools slowly and so keeps it warm for a long period of time. The American crocodile is not a social animal; it prefers living alone and typically avoids disturbances. However, these crocodiles occasionally socialize, usually at sunset, when their body temperature is low. When facing danger, the animal can be extremely aggressive. During the dry season, the crocodiles become inactive: they go without food, spending their time buried in the mud. Like alligators, these reptiles love sunbathing. While basking, the animals gape, exposing their open mouths to the sun, which helps them regulate their body temperature.
This reptile is a carnivorous animal. The diet of the American crocodile mainly consists of small mammals, fish, frogs, birds and turtles. Hatchlings forage on land, feeding mainly upon insects. Meanwhile, young crocodiles tend to consume aquatic invertebrates and small species of fish.
These reptiles have a polygynous mating system, in which one male mates with a number of females. During the mating season, the animals become very territorial, and males usually compete with each other for mating rights. During the breeding season, which lasts from April to May, females lay about 30-60 eggs, typically in a hole or on an elevated site. As hatching time approaches, the female visits the nesting site more and more frequently until, about 9-10 weeks after being laid, the eggs finally hatch. The female helps the young out of the eggs and later accompanies the hatchlings on their way to the water. Soon the young disperse, leaving the hatching site to live independently. American crocodiles are sexually mature at the age of 8-10 years.
The species is exposed to illegal hunting and poaching for its hide. The American crocodile also suffers from habitat loss as a result of human development.
On the IUCN Red List, the American crocodile is classified as a Vulnerable species. The overall population size is unknown but is presently increasing. The total estimated population in Mexico, Central America and South America varies from 1,000 to 2,000 individuals.
The American crocodile is the top predator of its range. By preying on a wide variety of animals, this reptile controls the populations of those species. In addition, its leftover food is a source of food for other animals of the area.
To determine the input impedance of a device, both the voltage across the device and the current flowing into the device must be known.
The impedance is simply the voltage across the device E, divided by the current flowing into it, I.
This is given by the following equation:

Z = E / I
It should be understood that since the voltage, E, and the current, I, are complex quantities, the impedance, Z, is also complex. That is to say, impedance has a magnitude and an angle associated with it.
When measuring loudspeaker input impedance, it is common today for many measurements to be made at relatively low drive levels. This is necessitated by the method shown in the schematic of Figure 1.
In this setup a relatively high value resistor, say 1 kohm, is used for Rs. As seen from the input of the DUT, it is being driven by a high impedance constant current source. Had it been connected directly to the amplifier/measurement system output, it would in all likelihood be driven by a low impedance constant voltage source.
Figure 1: Schematic of a common method of measuring loudspeaker impedance. (click to enlarge)
In both of these cases constant refers to there being no change in the driving quantity (either voltage or current) as a function of frequency or load.
When Rs is much larger than the impedance of the DUT, the current in the circuit is determined only by Rs. If the voltage at the output of the amplifier, Va, is known, this current is easily calculated with the following equation and is constant:

I = Va / Rs
Now that we know the current flowing in the circuit, all we need to do is measure the voltage across the DUT and we can calculate its input impedance. There is nothing wrong with this method. It is limited, as previously mentioned, however, in that the drive level exciting the DUT will not be very large due to the large value of Rs.
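As an illustrative sketch of this arithmetic (the function name and example values here are hypothetical, not taken from any particular measurement system):

```python
# Constant-current impedance estimate: with Rs much larger than |Z_dut|,
# the loop current is approximately I = Va / Rs, so Z = V_dut / I.
def impedance_constant_current(v_dut, v_amp, r_s):
    """Estimate DUT impedance from the voltage measured across it,
    assuming the series resistor Rs fixes the loop current at Va / Rs."""
    i = v_amp / r_s   # assumed-constant drive current
    return v_dut / i  # Z = E / I

# Example: 1 kohm series resistor, 2 V amplifier output,
# 16 mV measured across the DUT
z = impedance_constant_current(v_dut=0.016, v_amp=2.0, r_s=1000.0)
print(z)  # 8.0 (ohms)
```

Note how small the signal across the DUT is in this example: with 2 V at the amplifier and a 1 kohm series resistor, only 16 mV appears across an 8 ohm load, which is exactly the low-drive-level limitation discussed above.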
For some applications this may be problematic. Loudspeakers are seldom used at the low drive levels to which we are limited using the above method. It may be advantageous to be able to measure the input impedance at drive levels closer to those used in actual operation.
If the current in the circuit can be measured rather than having to be assumed constant this limitation can be avoided. Using a measurement system with at least two inputs, as shown in Figure 2, can do just that.
Figure 2: Schematic of an alternate method of measuring loudspeaker impedance. (click to enlarge)
In this case Rs is made relatively small, say 1 ohm or less. This is called a current sensing resistor. It may also be referred to as a current shunt. Technically this is incorrect for this application, because a current shunt is always in parallel with a component from which current is diverted.
The voltage drop across Rs is measured by input #2 of the measurement system. The current in the circuit is then calculated using the equation:

I = V2 / Rs
The voltage across the DUT is measured by input #1 of the measurement system. We now know both the voltage across and the current flowing into the DUT, so its input impedance can be calculated.
I used EASERA for the measurements. It has facilities for performing all of these calculations as should most dual channel FFT measurement systems.
Referencing Figure 2, channel #1 across the DUT should be set as the measurement channel, while channel #2 should be set as the reference channel. Dual channel FFT systems divide the measurement channel by the reference channel, so we have:

V1 / V2 = (I × Z) / (I × Rs) = Z / Rs
All we have to do is multiply our dual channel FFT measurement by the value of Rs used and we get the correct value for impedance. If Rs is chosen to be 1.0 ohm, this becomes really easy.
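A minimal sketch of that dual-channel calculation, assuming the two channels are available as complex voltage spectra on the same frequency bins (function name and values are illustrative, not EASERA's API):

```python
import numpy as np

# Dual-channel FFT impedance: channel 1 measures the voltage across the
# DUT, channel 2 the voltage across the current-sensing resistor Rs.
# The system reports V1 / V2 = Z / Rs, so multiplying by Rs gives Z.
def impedance_dual_channel(v1, v2, r_s=1.0):
    """v1, v2: complex voltage spectra; returns the complex impedance."""
    transfer = np.asarray(v1) / np.asarray(v2)  # measurement / reference
    return r_s * transfer                       # Z = Rs * (V1 / V2)

# Single-bin example with Rs = 1.0 ohm:
z = impedance_dual_channel([6.4 + 2.4j], [0.8 + 0.0j], r_s=1.0)
print(abs(z[0]), np.angle(z[0], deg=True))  # magnitude and phase angle
```

Because Z is complex, both the magnitude and the phase angle of the impedance fall out of the same division, bin by bin across the whole measured spectrum.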
Nick Fetty | August 19, 2015
If predictions in the Old Farmer’s Almanac are correct, Americans should brace for a cold and snowy winter even in parts of the country that typically see more mild temperatures.
The Old Farmer’s Almanac – which has been in publication since 1792 – predicts that the Midwest will see frigid conditions while the Northeast will experience below-average temperatures. Parts of the South are expected to see icy conditions and the traditionally temperate Pacific Northwest will experience its snowiest weather beginning around the middle of December and possibly continuing through February.
“Just about everybody who gets snow will have a White Christmas in one capacity or another,” Almanac editor Janice Stillman told the Associated Press.
Some meteorologists and other critics question the scientific accuracy of the Almanac’s method for predicting weather patterns. Critics note that the Almanac’s formula fails to “account [for] the finer nuances of meteorology, like pressure systems, cyclical weather patterns, and—of late—climate change.” Meteorologists also suggest that El Niño will likely be a more accurate indicator of winter weather patterns than the Almanac’s formula.
Though the exact formula is a secret, the Almanac’s writers and editors focus on three main factors.
“We employ three scientific disciplines to make our long-range predictions: solar science, the study of sunspots and other solar activity; climatology, the study of prevailing weather patterns; and meteorology, the study of the atmosphere. We predict weather trends and events by comparing solar patterns and historical weather conditions with current solar activity.”
The first day of winter (the winter solstice) begins on December 21. |
E-learning, also known as online learning, is the use of digital tools or resources for learning over the internet. Lectures are attended through electronic devices such as computers, laptops and tablets.
E-learning often takes place in the form of online degrees, courses and the learner earns an online certificate at the end of the course. It is one of the most flexible ways of learning where students can learn from anywhere and anytime. There has been a marked increase in the number of people turning to E-Learning sites and apps. As mentioned, it is mainly because of the convenience and flexibility that it brings to the table. There are a lot of online teaching apps and platforms that have made this mode of learning easier.
E-learning lectures are delivered live, where students can interact with the teacher and clarify their doubts. Or, the online classes can be pre-recorded and sent to the students as a study guide. This helps students to study at their own pace. It is affordable, saves a lot of time, and enhances the learning process.
E-learning helps create a positive and interactive learning environment, and it is more environmentally friendly than paper-based study. It is a growing industry and is here to stay. Not just schools and colleges, but various businesses and companies are using e-learning to train their employees.
Children are prescribed eyeglasses for many reasons. Here are some of them.
- The child may have a refractive error, which prevents the eyes from focusing correctly.
- To correct a squint.
- To relieve symptoms such as headaches and eyestrain.
When the eye is not able to focus light properly, it is said to have a refractive error. This often leads to a loss in visual acuity. Refractive errors are of two types: spherical errors, such as myopia, hyperopia and presbyopia, and cylindrical errors, such as astigmatism.
Children who are seven years old or younger should wear prescription glasses full time, as this is a critical age for visual development. If steps are not taken to correct the child’s refractive errors in this period, or if there is poor compliance with treatment, the child’s vision may be permanently reduced.
Did you know that the muscles that turn the eye inwards are connected to the muscles that cause the eye to focus? In small children, long-sightedness can cause the development of a squint, especially when the child is trying to see clearly at close range. When glasses are worn, the muscles of the eyes relax, and the child’s vision will then develop normally.
If a child’s long-sightedness is corrected in time, the child may be able to do without eyeglasses after the age of 10. Myopia, however, tends to worsen with age, and in that case the child may have to wear glasses continuously. If the child has a squint, glasses may be required to improve it, especially when surgery is ruled out.
Other Common Names: Spider Monkey
Genus & Species: Ateles geoffroyi
IUCN Red List: Endangered.
Northeastern Mexico to the Amazon Basin.
Rain forests, mountain forests.
13-18 pounds; head and body 15-20″ long; tail as long as 35″.
30 plus years in captivity.
Mostly fruit; also eat leaves, buds, flowers, insects, and spiders.
“SPIDERY” describes this monkey: arms, legs and tail very long. The tail is longer than the head and body, and is prehensile. The under surface of the tail tip is bare and ridged to improve the grasp. Often seen hanging by the tail or by the tail and one arm or leg. Thumbs reduced or missing, and the fingers act together as hooks to grasp branches as these agile monkeys move through the tree branches.
Offspring: Usually 1.
Gestation: 139 days.
Parental care: Extensive care by female for about ten months. Babies cling to her belly for several months and then ride on her back. Gives birth at approximately three year intervals. Breeding age is four years for females and five years for males.
- Ecology, Adaptations, Etc:
Very gregarious; family groups often travel with others in large troops (bands). They spend most of their lives in the trees. Males usually dominate females and juveniles, but overt aggression is rare. Spider monkeys in nature sometimes join capuchins and squirrel monkeys in feeding bands.
As with most primates, habitat destruction poses the greatest threat to their survival. Spider monkeys are quite vocal and have several different calls. Unlike most primate species, the dominant males groom others more than others groom them. Social grooming takes up much less time in these monkeys than in most Old World monkeys. Spider monkeys are “New World monkeys.” People often misidentify the gender of spider monkeys because females have an extended clitoris.
Oxidation is any chemical reaction that involves the transfer of electrons; specifically, the substance that gives away electrons is oxidised. When iron reacts with oxygen, the product of the reaction is rust: the iron is oxidised (it has lost electrons) and the oxygen is reduced (it has gained electrons).
When oxygen reacts with the chemicals in the exposed surface of a material, the material starts breaking down through a process known as oxidation, as in the example above of a metal such as iron being broken down and forming rust. Another example of oxidation is a cut apple: the flesh of the apple is exposed to air containing oxygen, which reacts with the flesh and breaks it down, visible as the flesh turns brown.
The driving element in this oxidation process is oxygen, because it follows the octet rule: it “wants” 8 electrons in its outermost shell. Because oxygen has only 6 electrons in its outer shell, it tends to gain 2 more to complete the set. Therefore, it readily combines with electron-donating atoms when it gets the chance, especially with hydrogen, hence water as H2O.
Oxygen-hydrogen bonding is part of the process that allows the human body to change sugars into energy during respiration. But sometimes, up to 2% of the time, oxygen comes out of this process not totally satisfied. Instead of its normal mellow, stable self, it ends up in an agitated, completely rogue state known as a free radical, which will bond with practically anything. It will try to pair up with fats, proteins in the red blood cells, even DNA, and when it does, the free radical changes the chemical structure of those molecules and most of the time causes cell damage.
You may have heard about Planet Nine—a hypothetical planet thought to exist in the outer reaches of the solar system. One possibility is that it’s not a planet at all but a tiny black hole. New research outlines a potential strategy for detecting this supposed black hole, in a search that could begin as early as next year.
Harvard astronomers Avi Loeb and Amir Siraj have proposed a new strategy for detecting a grapefruit-sized black hole in the outer solar system, in a paper that has been accepted for publication in The Astrophysical Journal Letters. Using the Vera C. Rubin Observatory, still under construction in Chile, astronomers could indirectly detect this object by observing it do what black holes do best: gobble up stuff.
The reason for thinking a black hole might be lurking out there has to do with an unexplained set of astronomical observations. Something—we don’t know what—appears to be affecting a group of objects beyond the orbit of Neptune. A possible explanation is an undetected planet, dubbed Planet Nine, with a mass between 5 and 10 Earth masses and in an elongated orbit between 400 and 800 AU from the Sun, in which 1 AU is the average distance from the Earth to the Sun. Recently, scientists proposed another explanation: a primordial black hole of a similar mass.
That we could have an ancient black hole inside our solar system is not as outlandish as it might sound. As Loeb explained to Gizmodo, it’s possible that primordial black holes are responsible for what scientists think is dark matter in the universe. If that’s the case, there should be a tremendous number of black holes out there, so it’s not foolish to think one of them got trapped in our solar system.
“This will obviously be extremely exciting, since we have been searching for the nature of the dark matter for nearly half a century,” wrote Loeb in an email to Gizmodo. “If the black hole is the dark matter, there should be 50 quadrillion like it in the Milky Way alone to make the entire mass of the Milky Way galaxy, which weighs a trillion solar masses.”
A quadrillion, by the way, is a 1 followed by 15 zeros.
Finding an object with an event horizon the size of a grapefruit sounds daunting, but these massively heavy objects can wreak havoc in their local environment. This is exactly what Loeb and Siraj are counting on, as the hypothesized black hole should suck up the occasional Oort cloud object, namely comets.
Caught in the black hole’s clutches and steadily drawing nearer to its doom, a comet should start to melt as it interacts with hot gases accumulating in the area. This process should produce a radiation signature detectable from Earth, which the scientists refer to as an accretion flare.
“Our paper shows that if Planet 9 is a black hole, then comets residing in the outskirts of the solar system—the so-called Oort cloud—would impact it, get destroyed by its strong gravitational tide, and produce a flare as they accrete onto it quickly, within less than a second,” Loeb told Gizmodo.
If the comet is big enough, it should be detectable through the Legacy Survey of Space and Time (LSST), which is set to start next year at the Rubin Observatory. This telescope is ideal for the task owing to its exceptionally large field of view. Astronomers have only a very rough idea of where they should look for Planet Nine or the black hole, but LSST will cover half of the sky and make 824 repeat visits to each spot over a 10-year period.
“If Planet 9 is a black hole, we expected to see at least a few flares about a year after LSST starts surveying the sky,” said Loeb.
This isn’t the first proposal for sniffing out a potential black hole. Earlier this year, Edward Witten, a physicist at the Institute for Advanced Study, devised a proposal in which hundreds of spacecraft would be sent to the outer solar system. Changes to their sensitive clocks would signal the presence of a strong gravitational field produced by a tiny black hole. Sounds cool, but the new proposal from Loeb and Siraj is more practical.
“If indeed it turns out to be a plausible strategy, the idea that Loeb and Siraj are presenting is really nice,” Jakub Scholtz, a postdoc at the Institute for Particle Physics Phenomenology at Durham University in the UK, told Gizmodo. “It would be a game changer for Planet Nine as a primordial black hole scenario.”
Scholtz, along with his colleague James Unwin from the University of Illinois at Chicago, published a paper last year arguing that Planet Nine might actually be a black hole. He said the odds of our solar system capturing a black hole are about 50-50, so if the authors can test this, “we should go ahead and do so.”
Either way, the LSST project will produce meaningful results, as the absence of black hole evidence could point to other possibilities, such as Planet Nine actually being a planet. The mind boggles at how much we still don’t know about our own solar system. |
In geometry, the other name for a regular tetrahedron is the triangular pyramid. For edge length a:

Area of One Face of Regular Tetrahedron Formula: A = (√3/4)a²

Total Surface Area of Regular Tetrahedron Formula: TSA = √3 a²

Slant Height of a Regular Tetrahedron Formula: l = (√3/2)a

Altitude of a Regular Tetrahedron Formula: h = a√(2/3) = (√6/3)a

Volume of a Regular Tetrahedron Formula: V = a³/(6√2)
This is a 3-D shape that can also be defined as a special kind of pyramid, with a flat polygon base and triangular faces that connect the base to a common point. In a tetrahedron the base is itself a triangle, which is why it is also known as the triangular pyramid.
The shape can be folded from a single sheet of paper. For every tetrahedron there exists one sphere on which all four vertices lie, and another sphere tangent to the tetrahedron’s faces. Tetrahedra are further divided into two categories: regular and irregular. In a regular tetrahedron, the faces are all of the same size and shape, and the edges are all of equal length.
Regular tetrahedra cannot fill space by themselves, but arranged alternately with regular octahedra they form the tetrahedral-octahedral honeycomb, which is a tessellation. The regular tetrahedron is self-dual: its dual is another regular tetrahedron. Further, there are special cases based on dimensions and other properties; one such case is the isosceles tetrahedron, in which all four faces are congruent triangles.
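The standard regular-tetrahedron formulas above can be checked numerically; the following is just an illustrative sketch (the function name is mine, not from any textbook):

```python
import math

# Regular tetrahedron with edge length a: face area, total surface
# area, slant height, altitude, and volume.
def regular_tetrahedron(a):
    face_area = (math.sqrt(3) / 4) * a**2   # one equilateral face
    surface_area = math.sqrt(3) * a**2      # 4 * face_area
    slant_height = (math.sqrt(3) / 2) * a   # height of one face triangle
    altitude = a * math.sqrt(2.0 / 3.0)     # vertex to opposite face
    volume = a**3 / (6 * math.sqrt(2))
    return face_area, surface_area, slant_height, altitude, volume

face, total, slant, height, vol = regular_tetrahedron(2.0)
print(round(total, 4), round(vol, 4))  # 6.9282 0.9428
```

Note the internal consistency: the total surface area is exactly four times the face area, and the volume also equals (1/3) × face_area × altitude, the general pyramid volume formula applied to the triangular base.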
Climate Change Hurts Deserts, Too
WHY YOU SHOULD CARE
You know about the impact of climate change on rainforests and oceans. But the picture isn’t complete without the delicate web of life being slowly parched in the world’s deserts and drylands.
By Melissa Pandika
We often picture climate change drying up rainforests, oceans and other hotbeds of biodiversity. But what happens to regions that are already dry? Far from barren, deserts and drylands sustain a surprising variety of animal species, as well as human life — but not for long, if global temperatures continue to rise. Climate change and human activity are disturbing these delicate ecosystems, and new research shows it could have serious environmental, human and economic consequences.
Up to 20%: the amount of the world’s drylands that are degraded
About $40 billion per year: economic losses associated with this drylands degradation
Climate change-induced drought, overgrazing and unsustainable farming practices lead to a loss of vegetation, which in turn further parches arid lands by exposing infertile lower soil layers that are less able to support agriculture and wildlife — a process known as desertification. A U.N.-backed report released by the Economics of Land Degradation (ELD) in September found that up to one-fifth of drylands are degraded, resulting in estimated economic losses of about $40 billion per year.
A Somali man leads his drought-stricken camels to a water point northwest of Somalia’s capital Mogadishu in 2011. (Reuters/Thomas Mukoya)
But as we approach 2014 – the halfway point in the U.N.’s Decade for Deserts and the Fight against Desertification – desertification remains largely invisible in the conservation agenda. It gets little press and is rarely addressed by policymakers.
Despite its low priority, desertification exacts a steep human cost. Drylands, which occupy roughly 40 percent of the Earth’s land area, are home to two billion people, mostly in developing African nations. According to the ELD report, annual global losses of arable land can reach 10 million hectares per year, an area roughly the size of Austria. Poor productivity in arid regions also makes them less attractive to investment, which excludes them from development.
Land degradation: A reduction in the economic value of ecosystem services and goods derived from land as a result of human activity or natural biophysical evolution, according to a U.N. report.
Desertification: Vegetation loss parches arid lands, exposing infertile lower soil layers that can’t support agriculture or wildlife.
Although the poor suffer most from desertification, we may all end up feeling its impact. The decline in arable land renews concerns about the world’s ability to feed a booming population. The U.N.’s Food and Agriculture Organization predicts that the demand for food will increase 60 percent by 2050, which will require an additional 120 million hectares of agricultural land. That’s a farm the size of South Africa.
The good news? Adopting sustainable land management — such as crop rotation, which involves growing different crops in succession in the same field – could increase world crop supplies by an estimated 2.3 billion tons, worth $1.4 trillion. Managed grazing practices – such as letting livestock graze on only one portion of pasture while allowing others to recover – might also help, the report added.
Wildlife in these regions is also on the decline. Last Tuesday, the Wildlife Conservation Society and Zoological Society of London reported that half of the species historically found in the Sahara Desert are approaching extinction, most likely due to desertification and overhunting. Seven out of 14 species historically found in the Sahara, the world’s largest tropical desert, are regionally extinct or confined to 10 percent or less of their historical range. The lion, African wild dog and a type of antelope called the bubal hartebeest have vanished entirely from the region. Other species have fared only slightly better. Only the Nubian ibex still inhabits most of its historical range, but it’s still classified as vulnerable.
Violence and instability across the region contribute to a lack of studies, which makes it hard to pinpoint the exact cause of the wildlife decline, but desertification and overhunting are the most likely culprits, researchers say.
“The Sahara serves as an example of a wider historical neglect of deserts and the human communities who depend on them,” conservation biologist and study leader Sarah Durant said in a statement.
This may be our last opportunity to put deserts back on the radar. The longer we treat them as invisible, the more likely they will actually vanish – and the desert communities and ecosystems along with them.
So who likes worms? Nah, we don’t like ’em here, in either their biological or their virtual form. Like their biological namesakes, some have benefits – the real ones aerate soil for growing vegetation and serve as bait while fishing – while other variants are deadly and do the direct opposite of what we would use them for. True to the name borrowed from the biological look-alike, computer worms are nasty little pieces of code that can do more harm than good when created with bad intentions.
A computer worm is a form of malware: a piece of malicious software that operates as a self-contained application and can move and copy itself from one device to another. Worms are similar in some ways to viruses: they replicate functional copies of themselves and have the potential to cause similar damage. But worms are standalone software; once launched, they do not require a host program or human intervention to propagate. To spread worms, bad actors either exploit a vulnerability on the target system or use some kind of social engineering to trick users into executing them. A worm reproduces itself and spreads over connected networks, running as a stand-alone program in the background without being noticed.
A worm hybrid is a piece of malware that spreads like a worm, but it also modifies program code like a virus or else carries some sort of malicious payload, such as a virus, ransomware or some other type of malware to inflict damage. Although some worms are designed to do nothing more than propagate themselves to new victim systems, most worms are associated with viruses, rootkits or other malicious software.
Computer worms make use of some of the most overlooked and dangerous vulnerabilities in a victim’s computer. Worms often use parts of an operating system that are automated and invisible to the user, which can make them both very difficult to detect and insanely dangerous. They generally target pre-existing vulnerabilities in the operating system of the computers they attempt to infect. Many of the most widespread and destructive forms of malware have been worms. Sometimes the worm’s delivery serves a larger mission beyond the reproduction and propagation of the worm itself.
How WORMS Spread
In order to spread, computer worms use existing vulnerabilities in networks. A worm enters a computer through a vulnerability in the system and takes advantage of file-transport or information-transport features, allowing it to travel without help. Usually, a worm looks for a back door to penetrate the network unnoticed. More advanced worms use encryption, wipers, and ransomware technologies as leverage to harm their targets. In more targeted attempts to get computer worms into circulation, hackers often send phishing e-mails or instant messages with malicious attachments. Cyber criminals try to camouflage the worm so that the recipient is willing to run the program; for this purpose they use double file extensions and/or file names that look harmless or urgent, such as “tax benefits,” “free-vacations,” “free-money” or anything else eye-catching. When the user opens the attachment or link, they immediately download the malware onto the system or are directed to a dangerous website. In this way, the worm finds its way into the user’s system without them noticing. Once installed on a computer, it takes stock of all the other computers its victim has interacted with in the past and figures out how to connect, following known holes in networking and file-transfer protocols to propagate itself further. A worm seeks out new victims in its vicinity on its own, spreading from computer to computer within networks, always looking for a way to replicate and penetrate other systems. One way of doing this, for example, is for the worm to send an email containing replicas of itself to all contacts on the infected computer. Any computer connected to a network is susceptible to computer worms.
The computer worm does not usually infect files on a computer; rather, it infects other computers on the network by replicating itself, and it passes this ability on to each replica, which allows them to infect further systems in the same way. Many worms now carry what is known as a payload: an attachment that the worm brings with it. The worm can, for example, carry ransomware, viruses or other malware, which then cause damage to the infected systems. These can, for example, delete files on the PC or encrypt files in a blackmail attack. A computer worm can also install a back door that can later be exploited by other malware programs. This back door gives the worm’s author control over the infected computer.
Lately we have seen a trend in which bad actors scam people on the internet by alerting them that a virus or malware has infected their system and asking them to download and install certain software that will supposedly remove the infection for free. Unaware of the scam, the worried user becomes a victim of such fraud. If you need security software, always download it from the original source rather than from a middleman.
What WORMS Do
Once it takes root, the worm silently goes to work and infects the machine without the user’s knowledge. Worms can modify and delete files, and they can even inject additional malicious software onto a computer. Sometimes a computer worm’s objective is only to make copies of itself, to the point of depleting system resources, such as hard drive space or bandwidth, by overloading a shared network. In addition to wreaking havoc on a computer’s resources, worms can also steal data, install a back door, and allow a hacker to gain control over a computer and alter its system settings.
In the early days of computing, a worm might not do any damage at all. Worms were sometimes designed as larks or as proofs of concept to exploit security holes, doing nothing more to targeted computers than reproducing themselves in the background. Often, the only way to know something had gone wrong was when the worm made too many copies of itself on a single system and slowed it down. But as OS security improved over time, writing worm code that could crack it got harder and took more and more resources, and this approach reached a dead end. Today, worms almost inevitably include payloads, that is, malicious code used in more targeted attacks. There are many types of computer worms that do all sorts of different kinds of damage to their victims. Some turn computers into “zombies” or “bots” that launch DDoS attacks. Since the worm or its programmer can use the computing power of the infected systems, infected machines are often integrated into a botnet. These are then used by cyber criminals, for example for DDoS attacks or crypto-mining.
Types of WORMS
There are several types of malicious computer WORMS in the wild. Some are harmful and others are not. We have listed most of the types of WORMS based on their characteristics:
Email WORMS

Email worms are usually spread as malicious executable files attached to what appear to be ordinary email messages from a friend or a promotional message. Next time you see someone offering something for free or at a much bigger discount than usual, think twice before jumping on it. Computer worms are mostly spread via email attachments. Such an attachment usually has a double file extension, something like .mp4.exe, .avi.exe or .jpg.exe. This is a tactic used by the bad actor to deceive victims and convince them that these are media files and not malicious computer programs.
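As a purely illustrative sketch (the extension lists and function name below are our own assumptions, not a real security product), a simple check for the double-extension trick described above might look like this:

```python
import os

# Extensions that actually execute code, often hidden behind a fake "media" extension.
EXECUTABLE_EXTENSIONS = {".exe", ".scr", ".bat", ".cmd", ".com", ".js", ".vbs"}
# Extensions that make a file look like a harmless media file or document.
MEDIA_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".mp4", ".avi", ".mp3", ".pdf", ".doc"}

def looks_like_double_extension(filename):
    """Return True if a filename uses a media-looking inner extension
    but actually ends in an executable extension (e.g. 'holiday.jpg.exe')."""
    root, last_ext = os.path.splitext(filename.lower())
    _, inner_ext = os.path.splitext(root)
    return last_ext in EXECUTABLE_EXTENSIONS and inner_ext in MEDIA_EXTENSIONS

for name in ["holiday.jpg.exe", "free-money.mp4.scr", "report.pdf", "song.mp3"]:
    print(name, "->", "SUSPICIOUS" if looks_like_double_extension(name) else "ok")
```

Note that this only catches one specific trick; real mail filters combine many such heuristics with attachment scanning.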
Instant Messaging WORMS
Instant messaging worms are similar to email worms, the only difference being the way they are distributed. Instant messaging, or IM, worms are sent or propagated through instant messaging services and exploit access to contact lists on victim computers. The worms are disguised as attachments or clickable links to a website, which delivers the payload. Often, short messages like “Discounted!”, “Don’t miss the chance!!”, “Only for you!” or “You missed it... last chance!” accompany them to trick the victim into thinking that either they are the lucky recipient of an exclusive offer released to just a few, or a friend sent something interesting to watch. When you see that, run... run far away. Not literally, of course, but do not click on them either, and break the chain.
Internet WORMS

Internet worms are completely independent programs. They use an infected machine to scan the internet for other vulnerable machines. If a vulnerable computer is found, the worm infects it.
File Sharing WORMS
Despite its often illegal nature, file sharing via peer-to-peer (P2P) transfers is used by millions of people across the world. In doing so, they unknowingly expose their devices to the threat of file-sharing worms. Like email and instant messaging worms, these programs are often disguised with double file extensions.
Bot WORMS

A bot worm may be used to infect computers and turn them into zombies or bots, with the intent of using them in coordinated attacks. Such botnets are used for crypto-mining or for sophisticated, coordinated DDoS attacks.
Ethical WORMS

An ethical worm is a computer worm designed to propagate across networks with the sole, benign purpose of delivering patches for known security vulnerabilities. While ethical worms have been described and discussed in academia, actual examples in the wild had not been found until recently (the solar attack in Dec 2020). The belief most likely persists because the potential for unexpected harm done to systems that react unpredictably to such software outweighs the potential for removing vulnerabilities, which in a way had been true until now. In any case, unleashing any piece of software that makes changes to a system without the permission of the system owner opens the publisher to various criminal and civil charges.
Prevention and Removing WORMS
As the old saying goes, prevention is better than cure. But if you are the one who deep-dives into the wild and takes the unknown path, whether out of pure recklessness or to explore, following the steps below can protect you to a certain extent.
Let’s understand, first of all, that the first step in removing a computer worm is detecting its presence, which can be difficult. The best way to detect a computer worm is to be aware of and recognize the symptoms of an infection. Symptoms that may indicate the presence of a worm include:

- Computer performance issues, including degraded system performance or the system freezing or crashing unexpectedly
- Unusual system behavior, including programs that execute or terminate without user interaction
- Unusual sounds, images or messages
- The sudden appearance of unfamiliar files or icons, or the unexpected disappearance of files or icons
- Warning messages from the operating system or antivirus software
- Email messages sent to contacts without user action

Any of these should raise alarms about possible computer worm activity.
To help protect your computer from worms and other online threats, always ensure the below:
- Since software vulnerabilities are major infection vectors for computer worms, make sure your computer’s operating system and applications are up to date with the latest versions. Install updates as soon as they are available and recommended by the vendor, because updates very often include patches for security flaws. Keeping up to date with operating system and application patches will help reduce the risk from newly discovered vulnerabilities.
- Phishing is another popular way for hackers to spread worms, and the preferred one for targeted attacks. Always be extra cautious when opening unsolicited emails, IMs and files, especially those from unknown senders that contain attachments or dubious links.
- Invest in a strong internet security software solution that can help block these threats. A good product should have anti-phishing technology as well as defences against worms, viruses, spyware, ransomware, and other online threats.
- Users should practice good cybersecurity hygiene to protect themselves against being infected with computer worms. Measures that will help prevent the threat of computer worm infections include:
- Using firewalls will help reduce access to systems by malicious software.
- Using antivirus software will help prevent malicious software from running.
- Being careful not to click on attachments or links in email or other messaging applications that may expose systems to malicious software.
- Encrypting data to protect sensitive information stored on computers, servers and mobile devices.
Removing a computer worm can be difficult. In extreme cases, the system may need to be formatted, and all the software reinstalled. Use a known safe computer to download any required updates or programs to an external storage device and then install them on the affected machine. If it is possible to identify the computer worm infecting the system, there may be specific instructions or tools available to remove the infection. The system should be disconnected from the internet or any network, wired or wireless, before attempting to remove the computer worm; removable storage devices should also be removed and scanned separately for infections. Once the system is disconnected from the network, do the following:
- Update all antivirus signatures
- Scan the computer with the up-to-date antivirus software
- Use the antivirus software to remove any malware, including worms, that it finds and to clean infected files
- Confirm that the operating system and all applications are up to date and patched
We hope this was informative. Be safe online: only a little effort and vigilance on your part can save you from a lot of unwanted nuisance and cost. Always be protected. We have more interesting topics in our other posts. Happy learning!
Kathleen O'Toole, News Service (650) 725-1939; e-mail: [email protected]
Unusual origins of Europe's largest volcano explained
Mt. Etna, Europe's highest active volcano, has perplexed geophysicists for years because it sits alone on the east coast of Sicily and spews out lava that is chemically different from that of volcanoes caused by the clashing of Earth's tectonic plates.
Now Amos Nur of Stanford's Geophysics Department and his former student Zohar Gvirtzman of the Institute of Earth Sciences at Hebrew University of Jerusalem propose an explanation for Etna in the Oct. 21 issue of the journal Nature.
Etna's voluminous flows are the consequence of "slab rollback" where a chunk of the Tyrrhenian plate broke off, rapidly opening a narrow basin of magma that is sucked up from under the nearby African plate, they say. This magma, or pool of viscous asthenosphere, is what has erupted periodically from Etna over thousands of years. Mt. Vesuvius on the other side of the Tyrrhenian Sea from Etna may be the same sort of volcano, Nur adds, but that awaits further research.
Most of the Earth's volcanoes are situated over subduction systems, places where one tectonic plate is sliding under another. As the whole system converges, partial melting occurs in the wedge between the plates, and the melt is spewed through faults or cracks in the Earth's crust.
Mt. Etna sits near but not on a subduction zone where the plates of Africa and Europe are converging. Sicily was once part of Corsica and Sardinia but separated, and the Tyrrhenian Sea opened up, geologists believe. The geologic record suggests the opening of a basin between the plates occurred very fast, at centimeters per year, "and such basins have been a puzzle for a long time in plate tectonics," Nur says. "We asked: why does something extend in the middle of convergence?"
Nur and Gvirtzman first calculated the suspected thickness of the solid upper mantle of the earth's crust, known as the lithosphere, based on the observed surface elevations of the region and what they knew about the Earth's crustal structure and buoyancy. Using a three-dimensional mechanical model of the three plates involved and a fair amount of recorded geologic data on Etna, they determined that a localized disturbance could release a narrow part of the subducted plate so that it would sink fast, creating what they call a "back-arc" basin. This basin would be shallow enough to permit a sideways flow of magma, which would be sucked out of the basin as the descending slab migrates into the Earth's mantle, leaving low pressure behind it.
"A plate with some type of topography that doesn't want to subduct could cause that type of tear" in the plate, Nur says. "The tear allows the viscous material underneath to rise from the sides, and you have a passageway to the surface."
Etna's numerous, voluminous eruptions show that it has a large underground fuel tank. Aeschylus wrote about eruptions occurring in 475 B.C. The most devastating eruptions occurred in 1169 and 1669, and the most recent in 1971. The volcano is now about 93 miles in circumference and has 260 lesser craters on its slopes.
Perhaps 15 or 20 other volcanoes in the world have similar origins, Nur says, as they seem to defy the more conventional plate tectonics model. This may be a case of "the exception proving the rule," he says. "I've never really known what that phrase meant, but I'm taking it to mean that one way to learn a lot more about plate tectonics is by understanding these exceptions."
CAPTION FOR FIGURE 1
The figure shows that Mt. Etna is situated at the junction of three fault zones and to the side of the South Tyrrhenian Sea subduction zone where the underlying plates of Africa and Europe are converging. Based on Etna's location, Amos Nur of Stanford and Zohar Gvirtzman of Hebrew University of Jerusalem have proposed an explanation for the unusual volume of magma that has spewed from Etna over thousands of years.
CAPTION FOR FIGURE 3 (to be included with release)
The cross section at left is parallel and the one at right is perpendicular to the direction of subduction underneath the South Tyrrhenian Sea. The dotted line represents the location of the top of the Ionian slab before it decoupled from the larger plate underlying southern Italy. The decoupling not only opened up the Tyrrhenian Sea but created a passageway for viscous asthenosphere to travel to the earth's surface, explaining the voluminous eruptions of Mt. Etna and also the uplifted terrain of the Calabrian Peninsula of Southern Italy, the researchers say.
By Kathleen O'Toole |
The study of and search for animals which fall outside of contemporary zoological catalogs. It consists of two primary fields of research:
- The search for living examples of animals taxonomically identified through fossil records, but which are believed to be extinct.
- The search for animals that fall outside of taxonomic records due to a lack of empirical evidence, but for which anecdotal evidence exists in the form of myths, legends, or undocumented sightings
- Dja Faunal Reserve: Mokele-Mbembe
- Ennedi Massif: Tiger of Ennedi
- Lake District: Bownessie (Windermere Lake)
- Sagarmatha National Park: Yeti
- Tajik National Park: Wildman of the Pamir Mountains
- Tropical Rainforest Sumatra: Orang Pendek (sightings in Kerinci Seblat National Park). "Consensus among witnesses is that the animal is a ground-dwelling, bipedal primate that is covered in short fur and stands between 80 centimetres (31 in) and 150 centimetres (59 in) tall." (Wiki)
Do you know of another WHS we could connect to Cryptozoology?
A connection should:
- Not be "self evident"
- Link at least 3 different sites
- Not duplicate or merely subdivide the "Category" assignment already identified on this site.
- Add some knowledge or insight (whether significant or trivial!) about WHS for the users of this site
- Be explained, with reference to a source |
- A stage in the early embryonic development of mammals in which there is a hollow sphere with an outer layer of cells and inside the hollow sphere, there is a cluster of cells called the inner cell mass. If development continues, the outer layer of cells gives rise to the placenta and other supporting tissues needed for fetal development within the uterus while the inner cell mass cells gives rise to the tissues of the body.
* * *The modified blastula stage of mammalian embryos, consisting of the inner cell mass and a thin trophoblast layer enclosing the blastocele. SYN: blastodermic vesicle. [blasto- + G. kystis, bladder]
* * *blas·to·cyst 'blas-tə-.sist n the modified blastula of a placental mammal
* * * n. an early stage of embryonic development that consists of a hollow ball of cells with a localized thickening (the inner cell mass) that will develop into the actual embryo; the remainder of the blastocyst is composed of trophoblast. At first the blastocyst is unattached, but it soon implants in the wall of the uterus. See also implantation.
* * *blas·to·cyst (blasґto-sist) [blasto- + Gr. kystis bladder] the mammalian conceptus in the postmorula stage; it is like a blastula in having a fluid-filled cavity, but unlike it in having the surface layer not exclusively embryoblast but mainly or entirely trophoblast, in having an eccentric embryoblast, and in not being limited to one germ layer. The human blastocyst consists of an embryoblast (inner cell mass) and a thin trophoblast layer enclosing the blastocyst cavity.
Medical dictionary. 2011. |
What is an expository essay?
An expository essay is a unique literary genre written on a specific topic. The main feature of such a work is its personal authorial voice. The main features of the expository essay are a relatively small volume, the presence of a main topic (the expository essay thesis statement) and its subjectively emphasized interpretation, free composition, internal semantic unity and ease of narration.
Expository writing definition
As the name implies, the student should expose something. It could be an object, experience, situation, emotion, theory, or something else that a professor assigns to them.
This type of essay is intended to teach a student how to generalize obtained information.
Essentially, an expository essay allows an essay writer to tell a story without limits. You have the artistic freedom to say what you want, as you wish. This contributes to the creativity of students and is widespread in the humanities. You should write a piece that captures the imagination of the reader, making the story as simple as possible.
In practice, an expository essay has several varieties: it may expound a theory, generalize the characteristics peculiar to someone or something, or reveal interesting data you found while studying a given phenomenon.
An expository essay is a creative work, as it involves the transmission of sensory perception of a phenomenon (process, etc.) by means of language. This essay uses all expressive and creative means capable of conveying the image of the idea, an object or phenomenon that the author has.
The task of the student is to transmit sensory perception with the help of verbs to convey the properties of the described phenomenon.
However, value judgments should be avoided: good, bad, nice, etc. Since the reader will have to build his own attitude to what is being described, the main difficulty in writing is the selection of synonymous substitutions for evaluative words.
One of the training exercises for preparing this essay can be considered the task of describing the subject of a person who has never seen this subject in his life.
Each expository essay is intended to recreate in the imagination of the reader a certain image. At the same time, the subject of the expository essay has its own point of view on the image and can be free in assessments and methods of presenting the material.
To describe is to indicate, to reveal some important signs, characteristic features, signs by which we can recognize or present the subject of the expository essay.
Particular attention in the expository essay is paid to bright and interesting details and features. At the same time, it is necessary to ensure that these details do not look disparate but form an integral picture where everything is interconnected.
Expository Essay Outline
How to write an expository essay? An expository essay, as a rule, has a three-part composition: it will contain an introduction, the main part, and a conclusion.
Expository Essay Introduction
How do you start an expository essay? Make a plan for your work and carefully consider its structure. Decide for yourself where you will start your story, what you will cover in the main section of the work, and what conclusions you will state in the conclusion. Everything that comes up while preparing to write the essay should be written down on paper. Formulate your topic and basic idea in one sentence, then list the arguments in favor of this statement; usually there are about three arguments for each topic.
If the object is a phenomenon or situation, then in the introduction you can write what they are associated with or what is interesting in the first place.
Think about what attracts attention to the object, what makes it recognizable. This may be qualitative characteristics or actions that occur in it or with it.
Expository Essay Body
Avoid using obvious constructions like “This work focuses on the topic …”.
Try to use the inverted pyramid formula when writing, which implies the use of a sufficiently voluminous description of the topic at the beginning of the work and its subsequent gradual narrowing to a certain thesis. In small essays, it is necessary to write no more than 3-5 sentences, and in voluminous essays no more than one page.
First, describe the most significant signs and characteristics of the object so that it becomes recognizable, and then proceed to the details and features that complement the image, use original definitions and comparisons. Do not give banal characteristics; try to show how attentive and subtle you are. To describe the movement of thought, use verbs with a neutral meaning (one can see, one needs to understand, one starts to notice, etc.)
Expository Essay Conclusion
Write the final part of your work. Here it is necessary to summarize all the arguments and give an example of using your output in a more global sense. The arguments you have given should encourage the reader to draw certain conclusions. In the final part of the work, its main thesis should be mentioned again in order to remind readers what they are reading about.
Thoroughly work out the final, summarizing sentence. If the main function of the title and the introductory part of an expository essay is to encourage the reader to read your work, then the final sentence must make the reader remember you.
Expository Essay Format
Do not underestimate the importance of pagination in an essay, its length and overall presentation of the material. In practice, the requirements for the essay should be taken into account as much as possible so as not to worsen your work at random.
The circle of readers. Analyze whom you want to reach with the essay, and whom and of what you are going to convince. The essay should be written so that it is addressed to a certain circle of persons.
What is a sensory room?
A sensory room is a room which incorporates multiple pieces of equipment to provide sensory input for individuals to calm and organise themselves. We as occupational therapists call the process of calming and organising our sensory system “sensory integration” (Ayres, 1972). Typically, a sensory room would provide input for all of our sensory systems: vision, hearing, touch, movement, deep calming pressure and at times oral input (chewy or breathing tools). For example, a room could have calming music, different lights, jumping and crashing, pulling and swinging activities, and equipment to touch and play with. This room can then provide the sensory input that children are seeking or needing to be able to regulate their sensory system and concentrate.
What are the benefits of utilising a sensory room within a school?
- Calming and organising: A sensory room can assist students to organise the sensations from their body and the environment, providing a calming and organising effect which can make it easier for the child to concentrate and learn within school (Ayres, 1972).
- Concentration and attention: Schools which have implemented sensory rooms have reported increased concentration and decreased undesirable behaviours (Mills & Chapparo, 2017).
- Emotional regulation: Sensory rooms can be a safe space for children to regulate, calm their emotions and prevent sensory/emotional meltdowns. Using the sensory space can assist in preventing the child from increasing to a heightened emotional state.
- Improve body’s feedback: Deep pressure feedback provided within obstacles in the sensory room can improve body awareness which can lead to improvements in gross and fine motor tasks.
- Improve learning: With increased concentration and being in the ‘just right’ state, learning opportunities increase as does retention of information.
- Improve social engagement with peers: When a child is in a “just right” state, they are able to engage and better regulate their emotions when interacting with peers.
If you are interested in working with an occupational therapist to set up a sensory space in your school, please contact Talking Matters on 8255 7137.
Ayres, A. J. (1972). ‘Sensory integration and the child’. Los Angeles: Western Psychological Services
Mills, C. and Chapparo, C. (2017). Listening to teachers: Views on delivery of a classroom based sensory intervention for students with autism. Australian Occupational Therapy Journal, 65(1), pp.15-24.
A Sensory Life!. (2019). Sensory Retreats. [online] Available at: http://asensorylife.com/sensory-retreats.html [Accessed 28 Feb. 2019].
Related Blog Posts
If you liked this post you may also like: |
Pollination is the movement of pollen from one flower to another. Flowers and pollinators have very different goals when it comes to this process. Entomophilic (insect-pollinated) flowers need pollination to occur for the purpose of plant reproduction. Unlike humans, a single flower often possesses both male and female reproductive parts. Pollen is produced by a flower’s male reproductive parts. It contains the genetic material needed to fertilize a flower’s female reproductive parts. If a flower’s female parts are fertilized, the plant will produce seeds that will ideally fall to the ground and begin the process of germinating into a new plant. Many plants need help in moving the pollen from one flower to another – that is where pollinators come in.
Animals that move pollen (in Wyoming, mostly insects) aren’t selflessly volunteering to move pollen around for the plants’ benefit. They are hungry. Some pollinators, such as bees and wasps, collect pollen as food for their young. Other pollinators, such as butterflies, moths and hummingbirds (as well as bees and wasps), drink nectar that is contained deep within a flower and accidentally come in contact with the nearby pollen. Still others, such as beetles, flies and ants, don’t eat pollen or nectar but feed on petals or on other pollinating insects that have landed on the flower, accidentally picking up pollen in the process.
This co-dependency between flowers and pollinators is the fiber of the fabric of life. Both plants and pollinators benefit from pollination, and when successful pollination occurs, humans benefit as well. Honey is a direct by-product of pollination, as are coffee, chocolate, cherries, almonds, squash, tomatoes, perfume, beautiful vistas of flowering woodlands and prairies, and much, much more.
On the plant description pages, you may find icons that point to bees, hummingbirds and butterflies. These icons indicate that those organisms are known to pollinate (and hence, visit) that flower, so planting it may attract them to your yard!
The most important pollinator in Wyoming is not an animal at all - it is the wind! Most of the state is prairie; prairies are mostly composed of grasses, and grasses are wind-pollinated. But when it comes to showy flowers or the vegetables we eat, we most certainly rely on the living, breathing pollinators of Wyoming. They include:
- Bees, including honey bees (not native to the United States, but still important pollinators) and other miscellaneous bees (diggers, squash, etc.)
- Butterflies, such as Parnassian and Swallowtail, and Whites and Sulfurs
- Moths, such as Hooktip and False Owlet
- Beetles, such as Tumbling Flower, Soft-winged Flower, and False Blister
Other pollinators, such as mosquitoes, ants, bats, other birds and mammals are important in other places of the world (especially the tropics) but not in Wyoming. |
When we mention fats, we are referring to lipids, which include a number of different compounds from fatty acids to triacylglycerols. Lipids play many roles in the body and should be included in our diet as an energy source and a source of essential fatty acids. Essential fatty acids cannot be made in the body and include the polyunsaturated fatty acids (PUFAs) linoleic acid, of the omega-6 family, and alpha-linolenic acid, of the omega-3 family.
They are found in abundance in foods such as grains, seeds, flaxseeds and spirulina. A popular source of omega-3 fatty acids is oily fish. There are many health benefits to consuming essential fatty acids, including maintenance of blood clotting and lowering of blood pressure. Fat is also needed in order to absorb the fat-soluble vitamins A, D, K and E, which have a number of benefits for, among other things, eyesight, skin and bones.
Cholesterol is a substance made by our bodies but also found in some foods. It has various functions including making vitamin D and some hormones. However too much of a certain type of cholesterol can increase the risk of heart disease.
There are two types of cholesterol, HDL and LDL. Cholesterol is carried in the blood by lipoproteins of which there are two main forms, low density lipoproteins (LDL) and high density lipoproteins (HDL). LDL is seen as ‘bad cholesterol’ as it can increase the chances of heart disease, whilst HDL cholesterol is seen as ‘good cholesterol’ as it can protect against heart disease.
Exercise is a great way to reach a healthy cholesterol level, but it is also important to keep an eye on diet. Saturated fats increase the amount of cholesterol, while unsaturated fats, found in nuts and oily fish, reduce it. Avocado is a food known to boost the amount of HDL cholesterol in the body.
Properties
Sine waves can be measured too. The shape of a sine wave is given by its amplitude, phase, wavelength and frequency. The speed that the sine wave moves can be measured. The amplitude and wavelength of the sine wave are shown in the picture.
The highest point on a wave is called the crest. The lowest point is called the trough. The crest of a wave and the trough of a wave are always twice the wave's amplitude apart from each other. The part of the wave halfway in between the crest and the trough is called the baseline.
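These quantities can be sketched numerically. The short example below (our own illustration; the function name and numbers are not from the article) samples one full period of a sine wave with amplitude 2 and checks that the crest and trough are one amplitude above and below the baseline, so they sit twice the amplitude apart:

```python
import math

# A sine wave is fully described by its amplitude A, frequency f (in Hz)
# and phase (in radians): y(t) = A * sin(2*pi*f*t + phase).
def sine_wave(t, amplitude=2.0, frequency=1.0, phase=0.0):
    return amplitude * math.sin(2 * math.pi * frequency * t + phase)

# Sample one full period (1 second at 1 Hz) and find the crest and trough.
samples = [sine_wave(i / 1000) for i in range(1000)]
crest, trough = max(samples), min(samples)
print("crest:", round(crest, 3))                      # +amplitude
print("trough:", round(trough, 3))                    # -amplitude
print("crest-to-trough:", round(crest - trough, 3))   # twice the amplitude
```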
Complicated waveforms (like the sound waves of music) can be made by adding up sine waves of different frequencies. This is how MP3 audio files are converted from their compressed form into the music we can hear. Complex waves can be separated into sine waves by Fourier analysis.
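The idea that a complex wave is a sum of sine waves can be demonstrated with a naive discrete Fourier transform. This sketch (our own illustration, not from the article) builds a signal from a 3 Hz sine plus a weaker 7 Hz sine and recovers exactly those two frequencies:

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform: magnitude of each frequency bin."""
    n = len(signal)
    mags = []
    for k in range(n // 2):  # only bins up to the Nyquist frequency are meaningful
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s) / n)
    return mags

# A "complicated" waveform: a 3 Hz sine plus a weaker 7 Hz sine, sampled
# 64 times over one second.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]

mags = dft_magnitudes(signal)
peaks = sorted(range(len(mags)), key=lambda k: mags[k], reverse=True)[:2]
print("strongest frequency bins:", sorted(peaks))  # the 3 Hz and 7 Hz components
```

Real audio software uses the much faster FFT algorithm, but the principle is the same.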
Waves and matter
Some waves can move through matter while others cannot. For instance, some waves can move through empty space, light waves for example. Sound waves, on the other hand, cannot move through empty space. Inherently, all waves carry energy from one place to another when they move. In some applications of technology, waves may carry meaningful information from one place to another, such as news on the radio.
Usually, after a wave moves through matter, the matter is the same as it was before the wave was introduced, though in some cases matter can be affected by waves traveling through it. In 1924, Louis de Broglie proposed that all waves are also particles, and all particles are also waves.
Types
- Transverse wave: the vibrations of particles are perpendicular ⊥ to the direction of travel of the wave. Transverse waves have crests and troughs. Wave crests and troughs move along a travelling transverse wave.
- Longitudinal wave: the vibrations of particles are parallel to the direction of travel of wave. Longitudinal waves have compressions and rarefactions. Compressions and rarefactions move along a travelling longitudinal wave.
- Standing wave: a wave that remains in a constant position.
- Travelling wave: a wave that moves along through space rather than staying in one place, unlike a standing wave.
- Solitary wave: a single bulge of water that travels along without changing shape. Solitary waves are hard to explain. They were first observed in a river channel in 1834, when something set a bulge of water moving up the channel, and the bulge on the surface of the water kept travelling upstream. At first, physicists did not believe the account of the man who observed it.
- Light waves can move through space. Light is different from wind or water because light sometimes acts like waves and sometimes it acts as little bits called "particles." The nature of light is a big part of quantum mechanics. |
Two important groups of plants in the coral reef are seaweeds, also known as macroscopic algae, and sea grasses. Both types of organisms are autotrophs. Along with the one-celled autotrophs, these marine plants support the food chains in the reef.
Compared to other marine ecosystems, the number and diversity of plants in coral reefs are relatively low. Small plant populations may be due to the fact that competition for space on reefs is very high, and corals often outcompete plants for the best reef locations. In addition, many of the reef animals are grazers, and they may hold down the size of seaweed populations; in experiments where grazers are removed from an area of reef, plant density increases dramatically. Another reason may be that coral reef waters are low in nutrients needed to support abundant plant life. Despite all of these hurdles, several species of green, red, and brown macroalgae, as well as grasses, flourish in the reef environment. |
This is a typical Social Studies unit — with a twist. Instead of doing activities related to the usual countries like Mexico or Argentina, students invent their own countries for a brand new planet called "Geos." Ideal for split grade classrooms, the stimulating activities in this unit place an emphasis on creativity and cooperative learning. Students start off by creating their own country: a name and capital city are chosen. Then, students go on to create a map of their country. A range of major projects is included, from creating a coat of arms to inventing a national sport. Our Social Studies lesson provides a teacher and student section with a variety of activities, evaluation and student examples to create a well-rounded lesson plan.
Learning to Talk
9 & 10 Months
Talking to your baby makes a difference. Research shows that when you imitate and respond to your baby's sounds, it helps him understand language.
Parents who respond when baby "talks" help draw his attention to his own sounds. This makes talking more interesting and important to your infant.
Encourage him to practice talking by playing games with him. When baby makes sounds, repeat them back to him. Pause and give baby a chance to answer.
Your imitation excites him and may cause him to repeat the sounds. Keep listening! You may hear certain tones of voice and sentence patterns in your child's babbling.
Baby may have a sound, like "ba," that he uses to mean many different things. These "words" indicate talking isn't far away. Between 9 and 12 months, baby might have a real word or two mixed in with the babbling.
Source: Nebraska Extension NuFacts |