kilt, knee-length skirtlike garment that is worn by men as a major element of the traditional national garb of Scotland. (The other main component of Highland dress, as the traditional male garb of Scotland is called, is the plaid, which is a rectangular length of cloth worn over the left shoulder.) The kilt is a length of woven wool that is permanently pleated except for sections at each end and wrapped around the wearer’s waist in such a way that the pleats are massed at the wearer’s back and the flat, unpleated ends overlap to form a double layer at his front. Both kilt and plaid are usually made of cloth woven with a cross-checked repeating pattern known as a tartan.
The kilt and plaid ensemble developed in 17th-century Scotland from the féile-breacan, a long piece of woolen cloth whose pleated first half was wrapped around the wearer’s waist, while the (unpleated) second half was then wrapped around the upper body, with a loose end thrown over the left shoulder. Subsequently in the 17th century two lengths of cloth began to be worn for these purposes, and the kilt and plaid thus came to be separate garments.
The plaid and kilt form the only national costume in the British Isles that is worn for ordinary purposes, rather than merely for special occasions. Highland dress is also the uniform of Scottish regiments in the British army, and kilts have been worn in battle as recently as World War II.
Monitoring comprehension Workshop 2 Debbie Draper, Julie Fullgrabe & Sue Eden
Overview of the session:
- The inner conversation: hearing the inner voice that assists reading
- Leaving tracks of thinking: ways to demonstrate thinking while reading
- The different types of (human) readers in a class
- Why meaning breaks down and what to do about it: fix-up strategies
- Think-aloud strategies to share thinking with students
Monitoring understanding is essential to engaging with the reading strategies: making connections, questioning, visualising, inference, summarising, synthesis.
Do I Really Have to Teach Reading? Content Comprehension, Grades 6-12, Cris Tovani. Observations about reading, and teaching points:
- Ask yourself: why am I doing this? How will it help students think, read or write more thoughtfully about my content? Good readers use reading, writing and talk to deepen their understanding.
- Reading strategies are options for thinking. One comprehension tool is not more important than another. There is no specific order, sequence or template for introducing strategies. Good readers have a variety of ways to think about text.
- Is the reading authentic? Good readers don't need end-of-chapter summaries or isolated skill sheets. They ask their own questions, based on their own need for a deeper understanding of the text.
- Don't isolate strategies into individual activities. Build on previous learning. Good readers reread and return to build and extend their knowledge or to enhance their enjoyment of reading.
Links to TfEL: 1.1 understand how self and others learn; 1.2 develop deep pedagogical and content knowledge. Understanding how students learn to read through your own experiences.
Learning to learn: using dialogue as a means of sharing understanding.
Part 1: Your own inner voice and how you use it.
Just relax… let your mind go free.
What did your inner voice say to you? When you are a busy person, your mind is always having conversations with you. What went on with you? If it’s appropriate, what did your inner voice say or think about?
Listening to the inner voice: George Costanza does not like what he hears
NOT listening to any voice
You need to hear your inner voice. Without recognising this voice, it will be harder to 'think aloud' with students and share the thoughts you have as a competent reader.
Part 2: Leaving tracks of thinking, ways to demonstrate thinking while reading
Sticky labels were invented to monitor comprehension… They come in all sorts of shapes and sizes, and kids love them! They are a great way to keep track of thoughts and ideas and can be placed in books to refer back to. They can help students to show tracks of their learning without interruption when working independently. They support remembering what you read far better than highlighting.
Discuss this quote. What do you think? Have you ever highlighted to extremes? On highlighting text: 'First of all: throw away the highlighter in favour of a pen or pencil. Highlighting can actually distract from the business of learning and dilute your comprehension. It only seems like an active reading strategy; in actual fact, it can lull you into dangerous passivity.' (Harvard College Library, 2007)
Text coding: R = reminds me of; T-T = text to text; ? = question; ! = surprising. Make it meaningful for your class; create your own codes.
From the text: Teaching Reading Comprehension Strategies, by Sheena Cameron.
Leaving tracks of thinking: margin notes, sticky notes. Many of these approaches will be dealt with further as we explore the strategies in more detail.
Classroom strategies: which of these have you tried?
- Reading aloud
- Lifting texts from sources and sharing with students
- Re-reading for deeper meaning
- Thinking aloud/coding text
Strategies that Work Use some of the previous strategies when you read the 4 pages provided from Strategies that Work to make tracks of your thinking. Share what you have identified as important with someone near you. Is it the same or different?
Part 3: Different types of readers and reading behaviours
Awareness of reading: four levels of metacognitive awareness, and the ways in which readers monitor their thinking about their reading, are described in Strategies That Work:
Types of readers:
- Tacit readers lack awareness of how they think when they read.
- Aware readers may realize when meaning has broken down, but lack strategies to fix the problem or repair confusion.
- Strategic readers use a variety of strategies to enhance understanding and monitor and repair meaning when it is disrupted.
- Reflective readers can apply strategies flexibly depending on their goals for reading. They reflect on their thinking and revise their use of strategies. You can observe this reflective stance when students comment with surprise, amazement, or wonder as they read.
Group chat Think of particular students that you have taught or are teaching that fit into each category of reader. How do you know they were one of these types of readers?
Comprehension Shouldn't Be Silent (Kelley and Clausen-Grace). These authors talk about 'fake' or disengaged readers and mindless reading. What behaviours have you seen 'fake readers' doing? You have probably been one yourself at some time. Y-chart about behaviours of fake reading.
Disengaged reading… looks like, sounds like, feels like.
Part 4: Why meaning breaks down and how to fix it. MONITOR your understanding.
Identifying synergistic regulation involving c-Myc and Sp1 in human tissues. Read the pages silently. Highlight in one colour the text you understand; highlight in another colour the text that is confusing or difficult to understand. What are you thinking about as you embark on this task?
After you have read some of the text… Of the parts of the text you highlighted as being hard to understand, could you not read them well because of a lack of background knowledge? Vocabulary? Writing style? Discuss with someone what you learned about yourself as a reader through the experience, and what you can take back to your work with struggling readers. What was your inner voice doing as you read this?
Was it…
- Thinking about what you need to do at school?
- Panicking?
- Thinking about what to buy on the way home for dinner?
- Making rude comments about the activity?
- Trying to make connections, question, etc.?
The inner conversation: 'The fact is that all readers space out when they read. Kids need to know this or they risk feeling inadequate when it happens to them. Once readers are made aware of their inner conversation, they tend to catch themselves quicker and repair meaning if there is a problem.' (Strategies That Work, page 27)
Checking on monitoring of comprehension: the inconsistent element. An easy and informative technique to see whether students are monitoring their comprehension is to select a passage on a group's instructional level, then retype it, adding an inconsistent element. Introduce the selection as you would normally do when you are getting students ready to read (tapping prior knowledge, setting a purpose for reading). After reading, ask students to comment on what they read. They may summarize or relate the information to a personal experience. See if any student points out the inconsistent element. Text example: Earthquakes.
When meaning has broken down… Reasons for breakdown, and what can be done about it:
- Run into words that are unknown or unusual: vocabulary work (ask, substitute a word, use a dictionary)
- Stopped concentrating: re-read or read aloud
- Reading too fast: slow down and re-read
- Lose the thread of the content: read in smaller chunks; re-read before and after
- Not knowing enough about the topic: find out more; teacher scaffolding; an easier text
- Lose visualisations of the content: try to form mental pictures; look at source material such as the internet
- Can't see the text organisation: know and teach text types so they can be recognised
- Didn't know which strategy to use: explicit teaching of strategies so that students can try appropriate ones
From the text: Teaching Reading Comprehension Strategies, by Sheena Cameron.
Podcast about monitoring reading
Part 5: Think-alouds: strategies to share with students, making the implicit explicit
Think-Alouds have been described as "eavesdropping on someone's thinking." With this strategy, teachers verbalise aloud while reading a selection orally. Their verbalisations include describing things they're doing as they read to monitor their comprehension. The purpose of the think-aloud strategy is to model for students how skilled readers construct meaning from a text
Sentence starters for think-alouds:
- So far, I've learned...
- This made me think of...
- That didn't make sense.
- I think ___ will happen next.
- I reread that part because...
- I was confused by...
- I think the most important part was...
- That is interesting because...
- I wonder why...
- I just thought of...
Reciprocal think alouds In reciprocal think-alouds, students are paired with a partner. Students take turns thinking aloud as they read a difficult text. While the first student is thinking aloud, the second student listens and records what the first student says. Then students change roles so that each partner has a chance to think aloud and to observe the process. Students reflect on the process together, sharing the things they tried and discussing what worked well for them and what didn't. As they write about their findings, they can start a mutual learning log that they can refer back to.
Use the checklist to observe my think-aloud about the text: Smallpox.
Summary: Which strategies to monitor understanding do you think are appropriate for your context? How will you introduce this strategy with your staff? How might you do any of this with your class?
Points of Literature: Main Idea and Details
Lesson 4 of 13
Objective: SWBAT identify the main topic of a text and of specific sections within the text by using the organization of the story elements.
- Tulip Sees America, by Cynthia Rylant & Lisa Desimini
- 'Points of Literature' worksheet
- 'Points of Literature' powerpoint
- Blank paper and crayons/colored pencils for each student
- Text Feature Headers for Literature
- Map of the US
- Set up whiteboard with ‘Points of Literature’ organizer
- Lesson vocabulary words from the Reading/Writing word wall: main idea, key details, literature, summarizing, illustration
This lesson is a follow-up to another lesson, Points of Informational Text, that I taught about story structure in informational text. I used the same organizer for a timeline and I wanted students to see that information can be organized similarly across many books. I also wanted to give them more practice with determining main idea and supporting details, which is very clear in this book because of the limited text and concrete subject matter.
I chose this book because it's a mix of literature and informational text ideas. The topic of geography should be common knowledge for students. We are getting ready to talk about Westward Expansion in Social Studies and I want the students to see and hear some of the names of states. Reading cross-curricular materials (geography) helps my students bridge their learning across the school day and gain exposure to geography material in the genre of literature.
I will warn you that there is a picture and text of the character running naked through the desert. I used the book, read the page, and moved on. The kids giggled, but since I didn't make a big deal out of it, they didn't either.
Let's Get Excited!
Underlined words below are lesson vocabulary words that are emphasized and written on sentence strips for my Reading & Writing word wall. I pull the words off the wall for each lesson, helping students understand that this key 'reading and writing' vocabulary can be generalized across texts and topics. The focus on acquiring and using these words is part of a shift in the Common Core Standards toward building students' academic vocabulary. (My words are color coded: 'pink' for literature, 'blue' for reading strategies, 'orange' for informational text, 'yellow' for writing, and 'green' for all other words.)
Bring students to a common starting point
- “Today we are going to read about a girl who is going on a trip to places in America. If you could go on a trip, where would you go? This is literature, a story about a girl who travels across America.”
- “Let’s look at these pictures and give me some states to 'point' to that look interesting. Has anyone gone to any other states?” Take ideas - encourage students to share what they liked about the state.
- Use the pointer - the idea of 'pointing' is the focus of the lesson
- Show powerpoint to bring students to a similar starting point. States have different landscapes. Students have different ideas where they would want to go. Here's a short video of the students sharing their thoughts.
Explain the task
- "Today we are going to read a book called, Tulip Sees America. She starts her trip in one state and then moves onto other states."
- “As we read this story, we are going to ‘point’ to the kinds of structure in the story.”
- “Here are the story elements we’ve talked about: characters, setting, problem, action and solution.” I referenced the headers to remind kids what they represent.
- “There is another way that stories can be organized to help the reader understand the text. The author has used a pattern to show the illustrations and text. That is what we will point to today.”
Discuss the organization of the story
- “I’m going to read a few pages and I want you to see if you recognize the structure.” (Stop after the ‘Nebraska pages’). Prompt for: 4 pages for each state; states go west across America.
- “What did you notice? Is there a pattern we can see in the story structure? What is the main idea and details?” (The goal is for students to understand that good details make a main idea more clear and help us connect to it.)
Model how to find main idea and details
- “I’m going to show the structure on my ‘points of literature’ organizer. The main idea goes on the top above the arrow. Then the details that support that idea go below the arrows.” Here's an example of the whiteboard at the beginning.
- "The first place she went to was Iowa. Let me read those pages again...What's the main idea? I'll write that 'There's no place like Iowa."
- "Now what are the supporting ideas? Take ideas. Yes, Iowa has rolling hills and is foggy." Write that one the board.
- "Now that I've finished telling about the main idea on the first state, you can continue the next few pages."
- For a few students who were struggling, I continued on to demonstrate 2 states. This is an example of how I completed more on the whiteboard.
As students look at the story structure of this piece of literature, they are evaluating the story as a whole and the way the parts relate to each other. (RL.2.5) The shift in Common Core ELA standards is toward presenting students with strategies to see how text is organized and recognize structure and patterns within the text. Ultimately, students who can determine the structure of a story will be better able to predict, connect, and summarize.
Students Take A Turn
Explain the task
- “Now it’s your turn to write a main idea and supporting details for other states.”
- "Listen to the rest of the story again and choose states that interests you.
- "Listen for the main idea and the supporting details and put them on your organizer.”
- Finish the book and give students time to fill out the rest of the organizer.
- Here is a completed student worksheet.
- Some of my kids were bothered by the open space, so they added lines.
Reflect and share
- "I see lots of great supporting details and I see some different ideas. Books often have several supporting ideas for a main point. That's the evidence that makes the main point stronger. Keep this in mind when you are writing a story. If you make a main point, you should include several supporting ideas that give evidence to the main idea."
- “Who wants to share your ideas?" Take volunteers and prompt them with questions.
- These are a student's thoughts on organizing the worksheet.
- "What was the main idea?"
- "What details supported that idea?"
- "How do these details help us understand?”
Apply What You've Learned!
Explain the task
- “Now that we have a pattern of main ideas and supporting details, and a pattern on the map, you can continue the pattern for a state that you choose.” Pass out the blank pieces of paper.
- “You’ll need to choose a state. Think about the structure of the story. The story moves from east to west, so you’ll have to pick a state along the path.” Refer to the map.
- "The states are all introduced with the words..."The ..... in .....(state name)."
- "Think about what makes your state special. Use descriptive language like they did in the book."
- “Make an illustration for your state.”
- Here's one of my students' completed projects.
Share your work
- Invite students to share their state ideas.
- This is a video of a student reflecting on her project.
Scaffolding and Special Education: This lesson could be easily scaffolded up or down, depending on student ability.
For students with academic challenges, this lesson should be easier because I am reading to the group. A large class map would be helpful, and perhaps post-its to make a bigger visual. They may need a ‘buddy’ to help with the worksheet. For the state page that they make, perhaps some ideas on a desk whiteboard would help them. This is an explanation from one of my students with hearing challenges.
This is a great lesson for students with more academic ability. The geography lesson embedded in this lesson is applicable for any level of student. Challenge them to use higher-level vocabulary (Iowa is agricultural vs. Iowa has corn) as well as deeper supports to the main idea (Nebraska).
The International Day for the Eradication of Poverty was observed across the world on October 17, 2018. This year’s theme of the day is ‘coming together with those furthest behind to build an inclusive world of universal respect for human rights and dignity.’ The year 2018 marks the 25th anniversary of the declaration of the day by the UN General Assembly, in its resolution dated December 22, 1992.
The International Day for the Eradication of Poverty aims to ensure that the active participation of people living in extreme poverty and those furthest behind is a driving force in all efforts made to overcome poverty, including in the design and implementation of programmes and policies which affect them.
• The origins of the International Day for the Eradication of Poverty date back to October 17, 1987.
• On that day, over a hundred thousand people gathered at the Trocadéro in Paris, where the Universal Declaration of Human Rights was proclaimed in 1948, to honour the victims of extreme poverty, violence and hunger.
• They proclaimed that poverty is a violation of human rights and affirmed the need to come together to ensure that these rights are respected.
• Since then, people of all backgrounds, beliefs and social origins have gathered every year on October 17 to renew their commitment and show their solidarity with the poor.
• The day presents an opportunity to acknowledge the effort and struggle of people living in poverty, a chance for them to make their concerns heard and a moment to recognise that poor people are the first ones to fight against poverty.
• The commemoration also reflects the willingness of people living in poverty to use their expertise to contribute to the eradication of poverty.
What to do with this activity?
While you are at the seaside, collect shells and notice how beautiful they are. Some of the shells you find might still have a live animal inside. The shell is their protection. It's like a skeleton, only on the outside. If you find a living creature, have a good look at it and leave it where it is.
Bring home your collection of different types of empty sea shells. Learn to identify the common shells from around the Irish coastline. Find pictures and names of some common shells on the PDF link at the top right of this page. It's from a nature website called Buglife.
If your child shows an interest in marine life (things that live in the sea) make sure to encourage them. Have a look at this quiz from Sherkin Island Marine Station. Don't worry if they don't know all the answers. Use it as an opportunity to do some research with books or online. Don't forget, there are lots of books about every subject in the children's section of your local library.
It’s important to encourage whatever reading your child is doing at this age. Children have their own interests and hobbies, so they will be more inclined to read information about these subjects. Having comics, papers or magazines around the house will make it easier for your child to get into reading. Your child might find it appealing to read online, and you might like that the book can be read by an automated voice. E-books can be looked at when you are on the move, making sure that your child is careful with your computer or phone.
Your child might like to read a section of the newspaper or a magazine – the sports, fashion or cooking sections - depending on their interests. They might like to read a short piece from a newspaper and underline facts with a pen and opinion with a pencil. You can then talk about the difference between fact and opinion (there are good examples in sports writing). Encourage your child to read instructions for mending bikes, building models and playing new games.
1 Welcome to our Key Stage 1 maths evening
Would you like to be able to help more with homework? Are you confused by how we teach the children maths these days? Would you like to know more about the methods we use?
This evening we will…
- Outline the structure of a typical maths lesson
- Show how maths strategies develop across Key Stage 1
- Show you the school's calculations policy and how you can use this to help your child with their maths
- Identify key areas where you can help your child with their maths
- Share useful resources and websites
2 The maths lesson
A daily Key Stage 1 maths lesson usually consists of 3 parts:
- Mental starter
- Teaching input and activity
- Plenary
3 Mental starter
A 10-minute 'warm-up' activity at the start of every lesson. Focuses on counting and the number system, and mental recall of number facts (e.g. number bonds to 10, doubling, halving, times tables).
4 Time to warm up your brains! Maths Pack 1: number paint; Full circle.
5 Teaching input and activity
This part focuses on the teaching and learning of maths concepts and gives the children time to practise and consolidate their mathematical skills. Lasts approximately minutes.
6 Plenary
- Review of the children's learning
- Check children's understanding (Assessment for Learning)
- Give time for children to self-assess
- Next steps
7 Which method should my child be learning? 'Helping your child with maths' guide for parents.
8 Calculations policy for addition
Our school calculations policy shows how the method progresses across the year groups. Some children may be ready for the year-appropriate method; others may be consolidating the previous year's or being extended. Copies available for you to take away today.
9 How would you solve these calculations? Which method did you use?
2 + 5 =
2 + 8 =
7 + 7 =
6 + 7 =
What skills were you using? Does that method work for all of these calculations?
10 How would you solve these calculations?
2 + 5 = (start with the larger number & count on)
2 + 8 = (number bonds to 10)
7 + 7 = (doubling)
6 + 7 = (near doubles; double 6, then add 1 more)
= (add 10, add 1)
= (add 10, subtract 1)
= (could add 20, add 1; or add tens, add units, then total)
= (adding by partitioning)
= (add 10, add 1)
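For parents who like to see a strategy spelled out step by step, here is a small illustrative Python sketch (not part of the presentation) of two of the mental strategies above:

```python
def near_doubles(a, b):
    """'Near doubles': for 6 + 7, double the smaller number, then add the difference."""
    smaller, larger = min(a, b), max(a, b)
    return 2 * smaller + (larger - smaller)

def add_ten_adjust(a, b):
    """'Add 10, subtract 1' style strategy for adding 9 or 11."""
    if b == 9:
        return (a + 10) - 1
    if b == 11:
        return (a + 10) + 1
    return a + b

print(near_doubles(6, 7))     # 13 (double 6, then add 1 more)
print(add_ten_adjust(25, 9))  # 34 (add 10, subtract 1)
```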
11 Language of addition: plus, sum, add, total, count on, altogether, addition, increase, more than.
14 Addition across Key Stage 1
Counting on using a 0 to 20 number line. 6 + 5 = ?
Mr Marshall has 6 sweets in one pocket and 5 in the other. How many sweets does he have in total?
1) Start on 6
2) 5 jumps of 1
3) Answer is 11
15 Addition across key stage 1 This could also be calculated using partitioning.
27 Numbers and the number system
Rapid recall of number pairs to 10 and then to 20. Doubles and halves.
28 Learning about money
- Become familiar with the value and denominations of money
- Pennies in a pound and different amounts that make £1
- Help him/her to add and subtract amounts of money, perhaps within a pocket-money context (work out whether they can afford a particular toy or treat)
- Shop using money and calculate change
29 Learning times tables
Learning multiplication facts is a vital part of any child's mathematical development. Once rapid recall of multiplication facts becomes possible, a whole host of mathematical activities will seem easier. Children need to be able to recall multiplication facts in any order and also to derive associated division facts. The expectations for each year group are set out below:
Year 1: Count on or back in ones, twos, fives and tens and use this knowledge to derive the multiples of 2, 5 and 10.
Year 2: Derive and recall multiplication facts for the 2, 5 and 10 times-tables and the related division facts.
30 Learning times tables (www.teachingtables.co.uk)
Year 3: Derive and recall multiplication facts for the 2, 3, 4, 5, 6 and 10 times-tables and the corresponding division facts.
Year 4: Derive and recall multiplication facts up to 10 × 10, and the corresponding division facts.
Year 5: Recall quickly multiplication facts up to 10 × 10 and derive quickly the corresponding division facts.
Year 6: Use knowledge of place value and multiplication facts to 10 × 10 to derive related multiplication and division facts involving decimals (e.g. if I know 8 × 7 = 56, I can use that to work out 0.8 × 7 = 5.6). Use knowledge of multiplication facts to derive quickly squares of numbers to 12 × 12.
31 What should they be able to do? The aim is that for each times table:
- The children should be able to say the table in order, e.g. 1 times 3 is 3, 2 times 3 is 6.
- They should be able to answer questions in any order, e.g. "What is 4 x 5?" "What is 2 x 7?"
- They should be able to answer: "How many 2's in 18?" "How many 5's in 20?"
- They should also be able to link their tables with division, e.g. 5 x 3 is 15, so 15 ÷ 3 = 5.
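As an aside, the link between a times-table fact and its related division facts (a "fact family") can be spelled out mechanically. This illustrative Python sketch (not from the presentation) uses the slide's own example, 5 x 3 = 15:

```python
def fact_family(a, b):
    """From one known multiplication fact a x b, derive the related
    facts: the commuted product and the two division facts."""
    product = a * b
    return [
        f"{a} x {b} = {product}",
        f"{b} x {a} = {product}",
        f"{product} ÷ {a} = {b}",
        f"{product} ÷ {b} = {a}",
    ]

for fact in fact_family(5, 3):
    print(fact)
# 5 x 3 = 15
# 3 x 5 = 15
# 15 ÷ 5 = 3
# 15 ÷ 3 = 5
```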
32 There are lots of ways you can help your child to learn their times tables. Different activities suit different learning styles. Remember it should be fun!
1) Buy a times table CD or tape. Listening to songs and singing can help children learn their tables in a fun way.
2) If your child likes to write or draw, they can write out their times tables or copy them from a chart. See how quickly they can do it, and can they improve on their time?
3) If your child is always on the move, try saying them as they go up the stairs or when out walking. They can chant them as they skip or bounce a ball.
4) Make up silly rhymes to help with facts they are struggling to remember, e.g. "Eight times eight is sixty-four, close your mouth and shut the door!" "A tree on skates fell on the floor; three times eight is twenty-four."
33 Parents evening
'Maths targets: a booklet for parents', a publication from the Numeracy Strategy. Ideas for games and key learning skills for that year group.
34 Parents evening (copy available to download from our school's website)
How do the rights and responsibilities of high school students with disabilities change as they enter college?
The rights and responsibilities of high school students with disabilities are different from those of college students with disabilities. Understanding the legal and practical issues involved can help students with disabilities successfully transition from high school to college.
All students should be prepared to face an increased level of academic competition and have less contact with instructors as they transition from high school to college. The college learning environment is less supervised and requires that students apply more self-determination skills than they needed in high school. In college, they must make more decisions for themselves and take responsibility for their own actions, learning, successes, and failures.
Students with disabilities face these same changes and must deal with a new and more complex process of external support than ever before. As reported by McGuire (1991), "Often college-bound students with learning disabilities fail to understand that they will face a different set of demands within a postsecondary setting. They soon become overwhelmed by the amount of assigned material as well as the fast pace of instruction. Many lack the skills and strategies that are necessary for managing and self-monitoring their learning in a variety of contexts." It is vital that students equip themselves with a well-thought-out plan and strategies for success long before that first day of class.
It is critical that students with disabilities fully understand the impact of their disabilities and how their disabilities affect their ability to learn and participate in specific college courses. Understanding their rights and, equally important, their responsibilities as college students with disabilities is also critical for success. The disabled student services office at the college can help students reach these goals. This office can play a key role in success and also refer students to other offices on campus where support services are available.
The college student is responsible for making requests for accommodations in advance and must often assume more responsibility for the accommodations themselves (e.g., finding note takers and getting textbooks in advance). The student may also need to interact more extensively with instructors to explain the disability and campus accommodation procedures. The student must advocate for accommodations. The student should keep in mind that it may take longer to get some accommodation issues resolved than it did in high school, so it is important to plan ahead.
Although no two people learn in exactly the same manner and need the same accommodations, the following list may be useful to a student with a disability. It includes tips offered by successful college students with disabilities involved in DO-IT:
- Select an appropriate set of classes, and talk to your academic advisor, disabled student services personnel, faculty members, and other students about classes you are considering.
- Complete classes required for graduation early in your program so you don't get stuck with scheduling conflicts or full classes in your final year.
- Try to get a copy of the class syllabus so you can see exactly what the requirements will be for a specific class.
- Purchase your textbooks early, if possible.
- Be organized and manage your time wisely; keep track of important due dates and exams.
- Schedule a specific time each day for studying (make sure you are "alert" at these times, not sleepy or hungry).
- Remember to take study breaks; avoid marathon study sessions and cramming.
- The environment in which you study is important. Choose a location where you feel comfortable, where it is quiet, and where you will be free from distractions.
For information on how disability services differ between high school and college consult Differences Between High School and College, Transitioning to College, and Accommodation Differences Between High School and College.
For more tips and suggestions for success and information on the rights and responsibilities of high school students entering college, consult College Survival Skills. For a list of similar resources, go to College Preparation Resources for Students.
Last update or review: September 27, 2012
SICARII, name, of Latin origin, used by *Josephus for Jewish patriots who maintained active resistance against the Roman government of Judea, and Jewish collaborators with it, during the period 6–73 C.E. The name derived from the Latin word sica, "curved dagger"; in Roman usage, sicarii, i.e., those armed with such weapons, was a synonym for bandits. According to Josephus, the Jewish Sicarii used short daggers, μικρὰ ξιφίδια (mikra xiphidia), concealed in their clothing, to murder their victims, usually at religious festivals (Wars, 2:254–5, 425; Ant., 20:186–7). The fact that Josephus employs the Latin sicarii, transliterated into Greek as σικάριοι (sikarioi), suggests that he adopted a term used by the Roman occupation forces; his own (Greek) word for "bandit," which he more generally uses to describe the Jewish resistance fighters, is λησταί (lestai). For a full description of their activities, see *Zealots and Sicarii.
[Samuel G.F. Brandon]
Source: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.
So, what do you need help with?
Researching for a History assessment piece can often be the most daunting part of the subject. However, it needn't be. Research is a systematic process that, if followed step-by-step, will become a logical and efficient part of your work. There are nine stages of good research.
All sources, both primary and secondary, are made by people and may be biased (one-sided) and incomplete. Two people can see exactly the same incident and yet remember it differently. So too, modern historians can study the same evidence and reach different conclusions.
As you study History, you will be asked to complete a range of assessment types. Understanding what each kind of assessment task requires will help you to prepare more effectively for it.
Finding good sources can always be difficult and time-consuming. To aid in your research, this part has a collection of online sources, categorised by historical era.
Students observe and demonstrate how to read with expression. They discuss the types of emotions and expressions to use while reading, and identify the appropriate punctuation for a variety of sentences. Students then write a sentence that displays an emotion, and read the book "When Sophie Gets Angry! Really, Really Angry!"
Although the average American may know very little about printed circuit boards, they would find it hard to live without them. PCBs are the workhorses of modern-day electronics. They are found inside of computers, printers, microwave ovens, cell phones, digital clocks, stereos, televisions, and more.
A PCB provides electrical connectivity using conductive tracks, pads, and other components that are etched from copper sheets. These components are laminated onto a non-conductive substrate. PCBs can be very simple and have just one copper layer. They can also be very complex, with capacitors, resistors, and other components on multiple, interconnected layers.
Printed circuit boards were not invented overnight. They are the product of many advances in electronics. Here’s a look at some of the milestones that have led to today’s high-functioning PCBs:
Electronic Circuit Making Equipment (ECME). This was a step in the right direction toward the printed circuit board. The technology could produce three radios per minute. Its inventor, John Sargrove, sprayed metal onto Bakelite plastic board.
Auto-assembly. This process came to consumer electronics in the 1950s thanks to the United States Army. Automated assembly changed the future of electronics circuits.
Point-to-point construction. Before printed circuits, we relied on point-to-point construction. This non-automated construction of electronics circuits used vacuum tubes and large sockets. The results were big, unwieldy circuits put together with screws and wire nuts. The contacts often corroded or became loose, and the circuits often failed. Because these circuits had to be assembled manually, they were more costly and more prone to wiring errors.
Through-hole technology. Printed circuit boards used to be dotted with holes to accommodate the wire leads that came from each electronic component. The leads were sent through the holes and soldered to the PCB trace. This changed due to the above-mentioned auto-assembly process. Through auto-assembly, the leads were inserted into an interconnection of copper foil and dip soldering. This process was the precursor to lamination and etching that is used today.
Surface mount technology (SMT). Through-hole technology is still used for making connections between multi-layer printed circuit boards. However, surface mount technology allows for more efficient component connections and, ultimately, smaller boards with greater functionality and lower production costs. Surface mount technology came into use in the 1980s.
Wave soldering. Clumsy, error-prone manual soldering was replaced by automatic soldering. Referred to as wave soldering, a printed circuit board passes over a wave of molten solder, and the components are automatically soldered.
These are just a few of the advances that brought PCBs to where they are today. The future holds many exciting possibilities as printed circuit boards continue to evolve. As a sneak peek, think 3D-printing of PCBs and “green” printed circuit boards made of recyclable paper.
What happened the last time a vegetated Earth shifted from an extremely cold climate to desert-like conditions? And what does it tell us about climate change today?
John Isbell is on a quest to coax that information from the geology of the southernmost portions of the Earth. It won't be easy, because the last transition from "icehouse to greenhouse" occurred between 335 and 290 million years ago.
An expert in glaciation from the late Paleozoic Era, Isbell is challenging many assumptions about the way drastic climate change naturally unfolds. The research helps form the all-important baseline needed to predict what the added effects of human activity will bring.
Starting from 'deep freeze'
In the late Paleozoic, the modern continents were fused together into two huge land masses, with what is now the Southern Hemisphere, including Antarctica, called Gondwana.
During the span of more than 60 million years, Gondwana shifted from a state of deep freeze into one so hot and dry it supported the appearance of reptiles. The change, however, didn't happen uniformly, Isbell says.
In fact, his research has shaken the common belief that Gondwana was covered by one massive sheet of ice which gradually and steadily melted away as conditions warmed.
Isbell has found that at least 22 individual ice sheets were located in various places over the region. And the state of glaciation during the long warming period was marked by dramatic swings in temperature and atmospheric carbon dioxide (CO2) levels.
"There appears to be a direct association between low CO2 levels and glaciation," he says. "A lot of the changes in greenhouse gases and in a shrinking ice volume then are similar to what we're seeing today."
When the ice finally started disappearing, he says, it did so in the polar regions first and lingered in other parts of Gondwana with higher elevations. He attributes that to different conditions across Gondwana, such as mountain-building events, which would have preserved glaciers longer.
All about the carbon
To get an accurate picture of the range of conditions in the late Paleozoic, Isbell has traveled to Antarctica 16 times and has joined colleagues from around the world as part of an interdisciplinary team funded by the National Science Foundation. They have regularly gone to places where no one has ever walked on the rocks before.
One of his colleagues is paleoecologist Erik Gulbranson, who studies plant communities from the tail end of the Paleozoic and how they evolved in concert with the climatic changes. The information contained in fossil soil and plants, he says, can reveal a lot about carbon cycling, which is so central for applying the work to climate change today.
Documenting the particulars of how the carbon cycle behaved so long ago will allow them to answer questions like, 'What was the main force behind glaciation during the late Paleozoic? Was it mountain-building or climate change?'
Another characteristic of the late Paleozoic shift is that once the climate warmed significantly and atmospheric CO2 levels soared, the Earth's climate remained hot and dry for another 200 million years.
"These natural cycles are very long, and that's an important difference with what we're seeing with the contemporary global climate change," says Gulbranson. "Today, we're seeing change in greenhouse gas concentrations of CO2 on the order of centuries and decades."
Ancient trees and soil
In order to explain today's accelerated warming, Gulbranson's research illustrates that glaciers alone don't tell the whole story.
Many environmental factors leave an imprint on the carbon contained in tree trunks from this period. One of the things Gulbranson hypothesizes from his research in Antarctica is that an increase in deciduous trees occurred in higher latitudes during the late Paleozoic, driven by higher temperatures.
What he doesn't yet know is what the net effect was on the carbon cycle.
While trees soak in CO2 and give off oxygen, there are other environmental processes to consider, says Gulbranson. For example, CO2 emissions also come from soil as microbes speed up their consumption of organic matter with rising temperatures.
"The high latitudes today contain the largest amount of carbon locked up as organic material and permafrost soils on Earth today," he says. "It actually exceeds the amount of carbon you can measure in the rain forests. So what happens to that stockpile of carbon when you warm it and grow a forest over it is completely unknown."
Another unknown is whether the Northern Hemisphere during this time was also glaciated and warming. The pair are about to find out. With UWM backing, they will do field work in northeastern Russia this summer to study glacial deposits from the late Paleozoic.
The two scientists' work is complementary. Dating the rock is essential to pinpointing the rate of change in the carbon cycle, which would be the warning signal we could use today to indicate that nature is becoming dangerously unbalanced.
"If we figure out what happened with the glaciers," says Isbell, "and add it to what we know about other conditions - we will be able to unlock the answers to climate change."
Video available: http://youtu.
Ling 420, Morphology
Be prepared to give brief answers to the following questions.
- What are the significant characteristics of an agglutinating language?
- How does a fusional language differ?
- How are Welsh nouns inflected for possession? What processes are involved? Are they phonological or morphological?
- The instructions in several of our problems that deal with paradigms recommend the use of an operation known as "matrix permutation" as part of arriving at an optimal solution. What is the purpose of this operation, and how does it help the analysis? (See the illustrative sketch following this list.)
- A number of the 3rd Declension Latin nouns we have examined show syncretism between the dative and ablative plural forms in their paradigm, with identical forms both in -ibus. According to markedness theory, what does this tell us about the semantic distinction between dative and ablative? Neuter nouns show syncretism between the nominative and accusative. What does this tell us about the semantic distinction between nominative and accusative?
- Syncretism parallel to that between the dative and ablative often does not take place in the singular. What does the fact that this syncretism occurs in the plural but not in the singular tell us about the plural?
- Historically, English dive was a weak verb that formed its past by regular -ed suffixation. For many speakers today, dive has become a strong verb, with the past tense dove. How might this have come about? (Hint: consider verbs such as connive, drive, derive, strive, thrive, ride, abide, decide, ….)
- In comparing Latin momordi: 'I bit, I have bitten' with mordeo: 'I bite', and ce:pi: 'I seized, I have seized' with capio: 'I seize', etc., what are some of the problems in identifying a 'perfect' morpheme of the Bloomfield or … type?
- Given a copy of the diagram used as a basis for Latin 7, be able to locate within stem, increment, suffix, or some combination of the three, where the exponents of a given feature--such as +future, +past, +perfect, +subjunctive, +passive, +speaker, +addressee, +plural (person)--are located.
- When presented with pairs of present and perfect stems for a given Latin verb, be prepared to identify each of the processes involved in forming the perfect stem from the present stem that should be indicated somewhere in a sort handle, if one were sorting a number of such pairs into types.
- Distinguish between phonological processes and morphological processes. Give examples.
- Give four answers to the question, "What are words?"
- Be able to match the following terms with their definitions, or with language examples. Furthermore, when an example of the phenomenon from a given language is presented for matching, be able to indicate whether the example involves Inflection (I), Compounding (C), Derivation [or another type of word-formation other than compounding] (D), or whether these distinctions are Not Applicable (NA).
- analogical change
- automatic alternation; counterexample of same
- consonant ablaut
- endocentric compound
- equipollent terms
- exocentric compound
- facultative expression
- fixed ordering; counterexample of same
- inalienable possession
- leading form
- marked term
- minimal free form; counterexample of same
- nonrecursion; counterexample of same
- popular etymology
- privative terms
- stress modification
- subjunctive vowel
- syncretism; its relation to markedness
- unmarked term
- vowel ablaut
- vowel reversal
- zero expression
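To make the idea of "matrix permutation" concrete, here is a small hypothetical Python sketch (not from the course materials). It reorders the rows of a Latin 3rd Declension paradigm (rex 'king') so that syncretic cells, such as the dative/ablative plural in -ibus, end up adjacent:

```python
from itertools import permutations

# Hypothetical illustration: rows are cases, columns are singular/plural.
# Matrix permutation reorders the rows so that identical (syncretic)
# forms end up adjacent, making syncretism blocks easy to see.
forms = {
    "nom": ["rex",   "reges"],
    "acc": ["regem", "reges"],
    "dat": ["regi",  "regibus"],
    "abl": ["rege",  "regibus"],
}

def adjacency_score(order):
    """Count vertically adjacent identical cells under a row ordering."""
    return sum(
        upper_form == lower_form
        for upper, lower in zip(order, order[1:])
        for upper_form, lower_form in zip(forms[upper], forms[lower])
    )

best = max(permutations(forms), key=adjacency_score)
for case in best:
    print(f"{case:>4}: {forms[case][0]:<8} {forms[case][1]}")
# The winning orderings place nom next to acc (shared plural reges)
# and dat next to abl (shared plural regibus).
```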
Specifics with respect to the Final Examination
Be prepared to match terms with language data in
which the concept is identified. As an example, the Turkish word for 'my
child', which contains a "soft g," is used by Matthews (1991:151) to
exemplify the concept of fusion. Chapter 11 in Matthews (1991)
gives four main characteristics of words. Be especially prepared to recognize
and identify examples that (if they existed) would constitute counterexamples
to these characteristics.
After you have identified the concepts exemplified
in various sets of language data, be prepared to indicate whether each example
involves Inflection (I), Compounding (C), Derivation (D),
or whether these distinctions are Not Applicable (NA) to the example at
hand. Thus, for example, one would indicate that the Turkish example above involves Inflection (I).
A distinctive feature analysis of Latin verb
inflection identifies the following privative morphosyntactic features as being
marked by the inflections: perfect, subjunctive, future, past
["imperfect"], passive, first person, second person, plural. Be
prepared to indicate whether the exponents of a particular feature or
distinction are located in the stem, the increment, the suffix (as we used
these terms in Latin 7), or in some combination of the three. For example, if
asked where the distinction "perfect
vs. nonperfect in the (present) active indicative" is made, one would
reply "in both the stem and the suffix," because making this
distinction involves substituting the perfect stem for the present stem, and
the -i: set of suffixes for the -o: set of
suffixes. (It does not involve a change in increment, however, because the
increment is zero in both the perfect and the nonperfect.)
Be prepared to identify the ways in which the perfect stem for a given Latin verb
differs from its present stem counterpart. (This is the same kind of operation
we did in determining proper "sort handles" in Latin 7.)
Be prepared to answer several "short answer" questions from those listed above.
While herpesviruses infect most animals – including humans – with incurable disease, Cornell researchers have found a genetic trail to thwart its reproductive powers, cutting its infective powers by a factor of up to 10,000.
The technique involves locking up virus DNA inside its viral carriers, reports a study published in the Journal of Virology in July that opens a much-needed new pathway for antiviral drug development.
About 95 percent of adults in the United States contract a herpesvirus by age 40, according to the Centers for Disease Control and Prevention. Effects can include cold sores, chicken pox, shingles, mononucleosis, blindness, birth defects, encephalitis, cancer and transplant rejection. The virus can be fatal to human babies, animals and people living with HIV, undergoing chemotherapy or relying on organ transplants.
"Giving antiviral medicine is critical to transplant recipients and babies whose mothers have active infections – herpes kills 2,000 babies a year in the U.S. alone," said Joel Baines, the James Law Professor of Virology at Cornell, whose research associate Kui Yang led the study. "Viruses develop drug resistance just like bacteria do. This discovery offers a new tactic in the arms race against herpes."
Yang and Baines discovered how virus particles (virions) assemble themselves to hold and release their DNA. A virus infection is like an army making tanks to invade and hijack an enemy fleet. Inside each virion, viral DNA sits in a sealed compartment called a capsid, waiting for the opportune time to emerge and enter the host cell's nucleus.
However, for DNA to squeeze into the capsid to begin with, it must pass through the portal vertex – a screw-shaped portal protein containing an internal channel through which the DNA passes. Connecting this protein to material making up the capsid's walls is the first step to assembling the whole capsid and, eventually, the virion tank.
"We've been looking at this vertex for ways to stop virion assembly," said Baines. "Our previous work found that a peptide, or mini-protein, binds to the vertex then connects similar proteins to form capsid walls. This study tested the idea that adding the peptide in excess would block capsid assembly. We reduced viral infectiveness up to ten thousandfold– but the reason for it was the opposite of what we expected."
Using electron microscopy, they saw that adding more of the peptides didn't keep virions from assembling. Instead, extra peptides bound to the portal vertex, locking the opening and trapping viral DNA inside the capsid. Normally the peptide exits the capsid once its construction job is complete, but adding it back during early infection plugs up the portal. Without the ability to release DNA, virions could not hijack host cells or spread infection.
"This aspect of viral replication has never been targeted by an antiviral before," said Baines. "It's a basic part of how all herpesviruses work. They all have very similarly structured capsids, portals and peptides. So it's possible that this portal-plugging principle could work on a variety of herpesviruses, providing a new approach to drugs combating the whole spectrum of herpesviruses across species."
The study was titled "A Herpes Simplex Virus Scaffold Peptide That Binds the Portal Vertex Inhibits Early Steps in Viral Replication."
Hello! I shall do part (a) first...
Have you learnt the Cosine Rule?
Basically, the Cosine Rule states that (in the context of this question):

AC² = AB² + BC² - 2 · AB · BC · cos(∠ABC)

Therefore, by applying the Cosine Rule, we can find AC:

AC = √(AB² + BC² - 2 · AB · BC · cos(∠ABC)) = … [rounded off to 5 significant figures] = 11.2 cm [rounded off to 3 significant figures]
Okay now for part (b)...
The formula for the area of a triangle is (in the context of the question):

Area of triangle ABC = ½ · AB · BC · sin(∠ABC)

In general, the formula is Area of triangle = ½ab sin C, where a and b are adjacent sides of a triangle and C is the angle opposite side c (i.e. the angle between sides a and b).
By applying this formula, we can answer the question:

Area of triangle ABC = ½ · AB · BC · sin(∠ABC) = … cm² (rounded off to 3 significant figures)
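Since the question's actual side lengths and angle are not visible in this copy of the post, here is a minimal Python sketch of the same two calculations with made-up placeholder values:

```python
import math

# Placeholder values -- substitute the sides and included angle from the
# actual question (they are not shown in this copy of the post).
AB = 8.0         # cm
BC = 5.0         # cm
angle_B = 105.0  # degrees: the included angle ABC

# Part (a): Cosine Rule, AC^2 = AB^2 + BC^2 - 2*AB*BC*cos(B)
AC = math.sqrt(AB**2 + BC**2 - 2 * AB * BC * math.cos(math.radians(angle_B)))
print(f"AC = {AC:.5g} cm")

# Part (b): Area = (1/2) * AB * BC * sin(B)
area = 0.5 * AB * BC * math.sin(math.radians(angle_B))
print(f"Area = {area:.3g} cm^2")
```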
Hope this helps! ^^
FREQUENCIES AND WAVELENGTHS
Compared to sound waves, the frequency of light waves is very high and the wavelength is very short. To measure these wavelengths more conveniently, a special unit of measure called an ANGSTROM UNIT, or more usually, an ANGSTROM (Å), was devised. Another common unit used to measure these waves is the millimicron (mμ), which is one millionth of a millimeter. One mμ equals ten angstroms. One angstrom equals 10⁻¹⁰ meters.
Q33. What unit is used to measure the different wavelengths of light?
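As a quick check of these unit relationships, here is a short Python sketch (not part of the original manual):

```python
# From the text: 1 angstrom = 1e-10 m, and 1 millimicron (mμ) = 10 angstroms.
ANGSTROM_M = 1e-10
MILLIMICRON_M = 10 * ANGSTROM_M  # 1e-9 m (the modern nanometer)

wavelength_mmu = 700  # red light, in millimicrons (see Figure 1-18)
wavelength_m = wavelength_mmu * MILLIMICRON_M

print(f"{wavelength_mmu} mμ = {wavelength_m:.1e} m "
      f"= {wavelength_m / ANGSTROM_M:.0f} angstroms")
# 700 mμ = 7.0e-07 m = 7000 angstroms
```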
FREQUENCIES AND COLOR
For our discussion of light waves, we will use the millimicron measurement. The wavelength
of a light determines the color of the light. Figure 1-18 indicates that light with a wavelength of 700
millimicrons is red, and that light with a wavelength of 500 millimicrons is blue-green. This illustration
shows approximate wavelengths of the different colors in the visible spectrum. In actual fact, the color of
light depends on its frequency, not its wavelength. However, light is measured in wavelengths.
Figure 1-18. Use of a prism to split white light into different colors.
When the wavelength of 700 millimicrons is measured in a medium such as air, it produces the color
red, but the same wave measured in a different medium will have a different wavelength. When red light
which has been traveling in air enters glass, it loses speed. Its wavelength becomes shorter or compressed,
but it continues to be red. This illustrates that the color of light depends on frequency and not on
wavelength. The color scale in figure 1-18 is based on the wavelengths in air.
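As a rough numeric illustration of this point, assuming the standard relation frequency = speed / wavelength and a typical refractive index for glass of about 1.5 (neither figure is given in the text):

```python
C_AIR = 3.0e8   # approximate speed of light in air, m/s
N_GLASS = 1.5   # assumed typical refractive index of glass

wavelength_air = 700e-9             # red light in air, meters (700 mμ)
frequency = C_AIR / wavelength_air  # set by the source; does not change

# In glass, light slows down, so the wavelength compresses while the
# frequency -- and therefore the perceived color -- stays the same.
wavelength_glass = (C_AIR / N_GLASS) / frequency

print(f"frequency: {frequency:.3e} Hz (same in air and glass)")
print(f"wavelength in air:   {wavelength_air * 1e9:.0f} mμ")
print(f"wavelength in glass: {wavelength_glass * 1e9:.0f} mμ (still red)")
```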
When a beam of white light (sunlight) is passed through a PRISM, as shown in figure 1-18, it is
refracted and dispersed (the phenomenon is known as DISPERSION) into its component wavelengths.
Each of these wavelengths causes a different reaction of the eye, which sees the various colors that
compose the visible spectrum. The visible spectrum is recorded as a mixture of red, orange, yellow,
green, blue, indigo, and violet. White light results when the PRIMARIES (red, green, and blue) are mixed.
Would you think it odd to find a turtle in a flowing stream? If it is a wood turtle (Clemmys insculpta), a stream is the perfect place to be. This fascinating turtle requires freshwater stream habitats, but its home territory will also include the bordering woodlands, meadows and farmlands. The wood turtle’s home range is unusual for a turtle, not only because it is both aquatic and terrestrial, but also because it tends to be linear, following the length of the stream for a mile or more.
Wood turtles are not difficult to identify. A full-grown turtle’s carapace length can reach nine inches, though most found are under eight inches long. The species name insculpta refers to the sculptured appearance of the top shell, or carapace. Each scute on the carapace shows a series of raised ridges (concentric growth rings) that rise like small pyramids. The plastron (bottom shell) is a creamy yellow patterned with irregular black markings along its border, and individuals can be identified by their unique plastron markings. The skin on the wood turtle’s legs and neck is covered in large scales, but it is the strikingly beautiful burnt orange coloring that will take your breath away.
“Ole Redlegs” is an omnivore and eats a variety of foods, including algae, moss, grass, violets, berries, earthworms, slugs, insects, tadpoles, and carrion. It finds these foods in its stream habitat, but also in nearby plowed fields, meadows, and woodlands. The wood turtle is remarkable for its willingness to travel both on land and in the water to find a meal. It is a good climber and, believe it or not, is capable of scaling chain link fencing.
Mating takes place in water in the spring, but the female will lay her eggs in sand or soft soil along gravel beds, roadsides, or meadows. A wood turtle can live for sixty years or more, but will not begin to reproduce until it is at least 14 years old, and a female will lay only 4 to 12 eggs per clutch.
In the past this species was collected for human consumption, but now populations suffer mostly from habitat loss and fragmentation. Fragmentation of habitat leads to turtle populations becoming isolated, which leads to increased incidences of road mortality, predation, and collection for pets as they travel in search of mates and suitable nesting sites. Turtles in hay fields are often killed by mowing equipment. Considered imperiled within much of its range, the wood turtle was placed under the protection of the Convention on International Trade in Endangered Species (CITES) in 1992. It is a species of special concern in Connecticut, which protects it under the Connecticut Endangered Species Act. You can help the wood turtle by protecting its streams and adjacent uplands and by leaving it to wander in its natural habitat.
International and national conservation groups designated 2011 as the Year of the Turtle.
The following is a list from the CT DEP on what you can do to help turtles:
- Leave turtles in the wild. They should never be kept as pets. Whether collected singly or for the pet trade, turtles that are removed from the wild are no longer able to be a reproducing member of a population. Every turtle removed reduces the ability of the population to maintain itself.
- Never release a captive turtle into the wild. It probably would not survive, may not be native to the area, and could introduce diseases to wild populations.
- Do not disturb turtles nesting in yards or gardens.
- As you drive, watch out for turtles crossing the road. Turtles found crossing roads in June and July are often pregnant females and they should be helped on their way and not collected. Without creating a traffic hazard or compromising safety, drivers are encouraged to avoid running over turtles that are crossing roads. Also, still keeping safety precautions in mind, you may elect to pick up turtles from the road and move them to the side of the road toward which they are headed. Never relocate a turtle to another area that is far from where you found it.
- Do not litter. Turtles and other wildlife may accidentally ingest or become entangled in garbage (especially plastic garbage) and die.
- Learn more about turtles and their conservation concerns. Spread the word to others on how they can help Connecticut’s turtle populations.
Story and photos: Cindi Kobak
Students use computers and related technologies to support and enhance their work in various areas of the school. Subject area teachers and computer teachers collaborate to develop uses of computer technology that support the curriculum and culture of the school. The necessary skills and attitudes are taught both in subject area classes and during specified computer periods. Each division has a dedicated computer teacher, and there are three computer labs available for instruction or use by faculty and students. In addition, computers are located in all classrooms, science labs and libraries. All computers are networked and connected to the Internet. Almost all classrooms throughout the building are equipped with SMART Boards.
Technology is used in the homeroom and in the Lower School Computer Lab to enhance the curriculum and for special projects throughout the Lower School. Beginning in Classes I and II, boys begin to work on the computer once a week and focus on the lessons already being taught in their homerooms. In Class III, boys have formal computer classes once a week in half classes for the whole school year. The boys begin the year by learning about the different hardware components of a computer and how information passes through them. Next, the boys continue to learn computer skills while working on projects that are integrated with the homeroom curriculum. They are introduced to word processing, presentation and graphic editing software. The Internet is used for research. In the course of completing their assignments, the boys are introduced to using computers on a local area network.
All Middle School boys meet for computer class once a week in the Middle School Computer Lab. Classes IV and V meet in half groups, while Class VI meets as a whole homeroom. The room and teacher are available for subject area classes during other periods. In computer classes, boys study touch typing, continue to develop their proficiency at using computers on a local area network and learn to use a variety of applications for their work in different subject areas. They use word processing applications, spreadsheets, web page editors, presentation software, multimedia tools and other educational software packages. In addition to using the Internet for research, boys build their own web pages and sites to publish work and share data on the school intranet.
Boys in the Upper School explore computer applications that allow them to develop communication skills emphasizing visual modes. Class VII boys create projects using Microsoft Excel, Adobe Illustrator and Flash. Class VIII boys create their own websites using Adobe Dreamweaver and Photoshop. Integrating the computer and science curricula, Illustrator and Flash are used in Class IX to teach game theory and illustrate principles of physics.
Phonemic symbols can be a valuable tool for improving your students' pronunciation.
- Why use phonemic symbols?
- Is it important for teachers to know the phonemic symbols?
- Is it difficult to learn phonemic symbols?
- What is the best way to learn phonemic symbols?
- Which phonemic symbols are the easiest to learn?
- Don't I need to have a perfect English accent in order to use phonemic symbols?
Why use phonemic symbols?
The alphabet which we use to write English has 26 letters but (British) English has 44 sounds. Inevitably, English spelling is not a reliable guide to pronunciation because:
- Some letters have more than one sound
- Sometimes letters are not pronounced at all
- The same sound may be represented by different letters
- Sometimes syllables indicated by the spelling are not pronounced at all
Here are a few challenging questions to put to your students:
- How do you pronounce gh in 'enough', 'through' and 'ghost'? (like f in fun, not pronounced, like g in got)
- How many syllables are there in 'chocolate'? (2)
The letters of the alphabet can be a poor guide to pronunciation. Phonemic symbols, in contrast, are a totally reliable guide. Each symbol represents one sound consistently. Here are five good reasons why students should know phonemic symbols.
- Students can use dictionaries effectively. The second bit of information in dictionaries for English language learners is the word in phonemic symbols. It comes right after the word itself. Knowing phonemic symbols enables students to get the maximum information from dictionaries.
- Students can become independent learners. They can find out the pronunciation of a word by themselves without asking the teacher. What is more, they can write down the correct pronunciation of a word that they hear. If they cannot use phonemic symbols for this, they will use the sound values of letters in their own language and this will perpetuate pronunciation errors.
- Phonemic symbols are a visual aid. Students can see that two words differ, or are the same, in pronunciation. For example they can see that 'son' and sun' must be pronounced the same because the phonemic symbols are the same. They can use their eyes to help their ears and if they are able to hold and manipulate cards with the symbols on, then they are using the sense of touch as well. The more senses students use, the better they will learn.
- Phonemic symbols, arranged in a chart, are part of every student's armoury of learning resources. Just as they have a dictionary for vocabulary and a grammar book for grammar, so they need reference materials for pronunciation: the phonemic symbols and simple, key words that show the sound of each symbol.
- Although speaking a language is a performance skill, knowledge of how the language works is still of great value. Here is another question to ask students: How many different sounds are there in English? Usually, students do not know. Phonemic symbols on the wall in a classroom remind them that there are 44. Even if they have not mastered all of them, they know what the target is and where the problems are. The chart is a map of English sounds. Even with a map, you can get lost but you are better off with a map than without one.
Is it important for teachers to know the phonemic symbols?
To be frank, yes. Every profession has specialist knowledge that is not widely known outside the profession. If you are a doctor, you will be able to name every bone in the human body, which most people can't do. If you are a language teacher, then you know phonemic symbols, which most people don't. Students can learn these symbols by themselves and one day you might meet a student who asks you to write a word on the board using phonemic symbols. It is best to be prepared.
Is it difficult to learn phonemic symbols?
Absolutely not. 19 of the 44 symbols have the same sound and shape as letters of the alphabet. This means that some words, such as 'pet', look the same whether written with phonemic symbols or letters of the alphabet. That leaves just 25 to learn. Compare that with the hundreds of different pieces of information in a grammar book or the thousands of words in even a small dictionary. It is a very small learning load. Moreover, it is visual and shapes are easy to remember. Anyone who can drive is able to recognise more than 25 symbols giving information about road conditions. Even if we go beyond separate, individual sounds and include linking, elision and assimilation, there is still a limited and clearly defined set of things to learn.
What is the best way to learn phonemic symbols?
Most native-speaker teachers of English learn grammar from the textbooks they use when they first start teaching, because they are unlikely to have been exposed to any formal study of English grammar. They learn by teaching, which is a very effective way of learning. It is possible to learn phonemic symbols in the same way. You just need to keep one symbol ahead of the students.
Which phonemic symbols are the easiest to learn?
The consonants are the easiest, because most of them have the same form as a letter of the alphabet (17 out of 24). Therefore, it is best to start by teaching students a large number of consonant symbols and a small number of easy vowel symbols such as /e/ and /i/. Note, however, that the sound /j/ represents the initial sound of 'yellow', not the initial sound of 'judge'. Experience shows that students are very likely to make mistakes with the symbol /j/, so it needs special attention.
Don't I need to have a perfect English accent in order to use phonemic symbols?
Not at all. It is true that the 44 phonemes in British English are based on the sounds of Received Pronunciation, an accent which is not frequently heard nowadays. Most native-speaker teachers do not have this accent but still use phonemic symbols. When the symbols are arranged in a chart, each one occupies a box. This indicates that the real sound that you actually hear can vary up to certain limits, depending on the influence of other sounds and on individual ways of speaking. There is not just one perfect way to say each sound - there is an acceptable range of pronunciations. Think of the pieces in a game of chess. They can vary considerably in size, shape and appearance but we can always recognise a knight because it behaves like a knight and not like a king. The point is that words such as 'ship', 'sheep', 'sip' and 'seep' should sound different from each other, not that each sound is pronounced exactly like the sounds of RP. Learning phonemic symbols will help students to understand the importance of length and voicing. Simply knowing that the symbol /ː/ indicates a long sound can be very helpful.
There is no end to our study of grammar and vocabulary but phonemic symbols are limited, visual and physical. They may seem challenging at first but it is like learning to swim or ride a bicycle. Once you can do it, it is easy and you never forget.
Alan Stanton, teacher trainer and materials writer
Sebastes is a genus of fish in the scorpionfish family Scorpaenidae, most of which have the common name of rockfish. Most of the world's 102 rockfish species live in the north Pacific, although one species lives in the South Pacific/Atlantic and four species live in the north Atlantic. The coast off southern California is the area of highest rockfish diversity, with 56 species living in the Southern California Bight.
Rockfish range from the intertidal zone to almost 3000 m deep, usually living benthically on various substrates, often (as the name suggests) around rock outcrops. Some rockfish species are very long lived, amongst the longest living fish on earth, with a maximum reported age of 205 years for Sebastes aleutianus (Cailliet et al. 2001).
Rockfish are important sport and commercial fish; many species have been overfished, and fishing seasons are tightly controlled in many areas.
Power from sunlight — a long-time dream of philosophers and inventors — is becoming an engineering reality. Solar heating and cooling is beginning to undergo commercial development. A pilot plant phase has begun for ground-based solar electric plants. The ultimate solar power plant — a power station in space called powersat — is being studied by Boeing. But why a power station in space? Isn’t that prohibitively expensive and impractical? The Boeing Company does not think so, and its findings have been summarized in this report.
Two primary candidates for a means of converting solar power to electrical power in space exist: solar cells (photovoltaic) and thermal engines. Although Boeing is investigating both candidates with equal vigor, this report primarily deals with a thermal-engine concept called powersat.
The powersat will be essentially continuously illuminated by sunlight (no night, no weather) and will collect over six times the solar energy falling on any equivalent size area on Earth. Power beamed from the powersat can be coupled to a converter station sited in any part of the nation — or the world, for that matter — to provide continuous baseload electric power. In contrast, early ground-based solar plants will produce intermediate load (i.e., only daytime) power and only in sunny regions. Continuous illumination at higher intensity offers a potential economic advantage to space-based solar power if the transportation to space can be accomplished at a sufficiently low cost. Boeing’s studies of the system economics indicate that this accomplishment is possible and that the outlook for commercially competitive electric power from satellites is promising.
The powersat envisioned by Boeing would use lightweight mirrors to concentrate sunlight into a cavity and thereby heat the cavity so that it serves as a “boiler.” The heat would then be supplied to turbine generators similar to those in use at conventional powerplants. These machines convert about a third of the input heat energy to electricity; the other two-thirds (the thermal pollution of conventional powerplants) is returned to the environment. The powersat would use space radiators to reradiate this unusable heat to space far from the Earth. The electricity would be converted to a microwave beam for transmission to a receiving antenna on Earth for commercial distribution as electric power.
The powersat would be large — many square kilometers in size — but would produce great amounts of power. A typical design requires 48.6 square kilometers (12 000 acres) of mirrors for 10 000 000 kilowatts of electric output from the ground station. Most of the satellite area consists of thin, reflective plastic film, which minimizes the weight to be transported to space.
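As a rough plausibility check (my arithmetic, not Boeing’s; the solar constant value is an assumption), the quoted mirror area and ground output imply an end-to-end efficiency of about 15 percent:

```python
SOLAR_CONSTANT = 1353.0   # W/m^2 in Earth orbit (assumed 1970s-era value)
MIRROR_AREA = 48.6e6      # m^2 (48.6 square kilometers, from the text)
GROUND_OUTPUT = 1.0e10    # W (10 000 000 kilowatts, from the text)

solar_input = SOLAR_CONSTANT * MIRROR_AREA    # ~6.6e10 W intercepted
overall_efficiency = GROUND_OUTPUT / solar_input

print(f"intercepted sunlight: {solar_input / 1e9:.1f} GW")
print(f"implied end-to-end efficiency: {overall_efficiency:.0%}")
# With ~33% turbine efficiency (per the text), the remaining ~46% would
# cover mirror, microwave-conversion, transmission and rectenna losses.
```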
The powersat illustrated here is the result of conceptual design studies performed at Boeing over the past year. The four power generation modules shown provide a reasonable compromise between the simplicity of a single large module and the practical considerations of transportation and operation.
Each module consists of a mainframe structure formed from fold-out trusses, a spiderweblike fill-in structure to support the plastic film mirrors, 10 000 to 12 000 1011.71-square-meter (0.25 acre) mirrors and their spreader frames, a cavity heat absorber surrounded by twelve 300-megawatt helium turbogenerators, and a heat radiator.
Attached to one of the modules on a rotating joint are the microwave generator and antenna. The electric power produced by the turbogenerators is routed to the microwave generator for conversion and transmission.
The parts of the satellite are designed as subassemblies for transportation by the space freighter. For example, one turbogenerator with its heat exchangers and accessories can be packaged on a pallet for a single-launch delivery; the pallet forms a portion of the wall of the cavity heat absorber. Hexagonal plastic film mirrors can be folded and rolled so that many reflectors can be launched together.
Located in a stationary orbit 35 405.6 kilometers (22 000 miles) above the Earth, the powersats will be illuminated by sunlight more than 99 percent of the time. They will appear to hang motionless in the sky, and a simple fixed-position array of antenna elements (dipoles) will serve as the ground-based converter for the power beam. The converter array will be approximately 8.0 kilometers (5 miles) in diameter. Its construction and appearance will resemble cyclone fencing.
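A quick check of the “stationary orbit” altitude from Kepler’s third law (standard physical constants; not taken from the Boeing report):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6           # equatorial radius, m
T_SIDEREAL = 86164.1        # one sidereal day, s

# From T = 2*pi*sqrt(r^3 / mu), solve for the orbital radius r.
r = (MU_EARTH * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1e3

print(f"geostationary altitude: {altitude_km:,.0f} km")
# ~35,786 km; the report's 35 405.6 km is simply a unit conversion
# of the rounded 22 000-mile figure.
```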
The Victorian artist Walter Crane thought that children could learn from pictures long before they could read or write. His colourful, well-designed nursery books opened parents’ eyes to the educational value of picture book reading. Lesley Delaney, curator of a display of Walter Crane’s picture books at the V&A, explains his revolutionary approach to learning to read.
Walter Crane (1845-1915) was the most prolific and influential children’s book creator of his generation. His pioneering designs for nursery books helped to popularise the idea of visual literacy. Crane radically improved the standard of ‘toy books’ – cheap, mass-market colour picture books, featuring alphabets, nursery rhymes, fairy tales, and modern stories. He also created a novel series of musical rhymes and fables for babies, as well as a set of experimental books that show how reading, writing and arithmetic can be learned through imaginative play.
Crane believed that good art and design could stimulate interest in books and help children to learn to read from a very early age. He recognised that every feature of the book – including covers, end papers, titles, illustration, type, and page layout – can be used to encourage children’s enjoyment of reading.
This visual approach to reading is seen in one of his early toy books, Grammar in Rhyme (1868). Crane uses the text box like a blackboard, placing it within the illustrations of children at play to suggest the idea of learning as an enjoyable everyday activity. To help the child’s understanding, he creates a memorable rhyme that relates the parts of speech to the games and toys shown in the pictures.
‘Bright, frank colours’ and comic touches
Crane uses colour and pattern to attract children’s interest in reading. The exciting effect is shown in the vibrant illustrations for toy books such as Beauty and the Beast (1874), which display the influence of his work as a painter, designer and decorative artist. Crane drew inspiration from a wide range of influences, including Japanese art. This can be seen in the illustrations for This Little Pig Went to Market (1870), in which he uses the bold outlines and flat colours that were typical of Japanese prints. Crane’s designs also reflect his observations of young children. He noticed that they appear to see most things in profile and prefer ‘well-designed forms and bright frank colours’. Children are not concerned with three dimensions, he suggested; they could accept symbolic representations.
To encourage close observation of the pictures, Crane adds comic touches. For example, in This Little Pig he gives the hilarious cartoon character glasses and cloven boots; he places bows on both its curly tail and pigtail wig. Children can also spot the pig displayed on the mantelpiece in the picture on the facing page. The picture panels for Puss in Boots (1874) show Crane as an early exponent of the comic strip form. The design leads the child from one frame to the next in a sequence of detailed pictures that follows the cat’s actions, enabling even pre-readers to understand the story.
Crane introduces visual jokes to help the child’s understanding of reading conventions, such as turning the page. This can be seen in the playful illustration for ‘Hey diddle diddle!’ on the cover of The Baby’s Opera (1877). The three mice featured in the bottom panel appear to be running into the book. They reappear in the following pages engaging in various amusing antics, such as outwitting the cat. Crane wanted to excite children’s curiosity about what they would find on the next page. The square format of the baby books was inspired by designs for nursery tiles and provides a model for baby books even today.
The innovative fantasy series, called ‘The Romance of the Three Rs’ (1885-6), shows how early learning can be turned into imaginative games. The three titles – Little Queen Anne, about reading, Pothooks and Perseverance, about learning to write, and Slateandpencilvania, about counting – represent the first picture stories about the difficulties children face in early learning. The fluid illustrative style and use of heavy punning show similarities with the homemade books that Crane created for his own children.
Crane’s visual approach to learning attracted the interest of leading reading specialists. He collaborated with Professor Meiklejohn to produce The Golden Primer (1884-5) and also with Nellie Dale to create the ‘Walter Crane readers’ (1899). These popular reading schemes were the forerunners of the Ladybird ‘Key Words’ series and the ‘Oxford Reading Tree’.
© Lesley Delaney, UCL and the V&A
‘Walter Crane: Revolutionary picture books for the nursery’ runs from 8 November 2010 until 3 April 2011: Room 85, National Art Library Landing, V&A South Kensington, Cromwell Road, London SW7 2RL (020 7942 2000, www.vam.ac.uk).
Lesley Delaney, University College London and the National Art Library at the V&A, is supported by a Collaborative Doctoral Award from the Arts and Humanities Research Council (AHRC).
Detail of portrait of Walter Crane by George Frederic Watts.
The Scale of Nature: Modeling the Mississippi River
Society requires artifice to survive in a region where nature might reasonably have asked a few more eons to finish a work of creation that was incomplete.
— Albert Cowdrey, quoted in John McPhee, The Control of Nature, 1989
Concrete channel of the Mississippi River Basin Model overtaken by soil and vegetation in 2010. [All photos by the author, except as noted.]
For 27 days in January 1937, rain drenched the northeastern United States. The unusually warm, wet weather thawed the frozen ground and sent torrents of water sheeting into the Ohio River. The effect was dramatic: towns throughout the region reported water levels quickly approaching, then passing, flood level. In some areas the water crested as high as 20 to 28 feet above flood stage. With national reports tallying the displaced at over one million people, the event confirmed the growing national fear that the great rivers that had contributed to the nation’s success might also threaten its future.
The country had already endured what was supposed to be the last of the "Great Floods," only ten years earlier, when the lower Mississippi River Basin suffered the most destructive inundation in U.S. history. In the aftermath of what then Secretary of Commerce Herbert Hoover called "the greatest peace-time calamity in the history of the country," Congress passed the Flood Control Act of 1928. This sweeping legislation called for the immediate implementation of a plan to control the waters of the mighty Mississippi. It was as if the nation had declared war against the river: In the next decade, the Army Corps of Engineers built 29 dams and locks, hundreds of runoff channels, and over a thousand miles of new, higher levees. It appeared that efforts to prevent another Great Flood would be successful.
But as in so many battles, the combatants misread the enemy. The 1928 plan focused on single targets, presuming that the "menace to national welfare" was the Mississippi River itself; the Corps of Engineers failed to see the river as part of a system of interconnected, aggregating threats. When several rivers in the Northeast flooded in the winter of 1936 (in particular the Connecticut, Allegheny and Monongahela), displacing hundreds of thousands of people in Massachusetts, Pennsylvania and New York, and even reaching far enough to evacuate the National Headquarters of the American Red Cross in Washington D.C., the public felt double-crossed. A New York Times editorial called for a more comprehensive approach: "If the floods have taught us anything, it is the need for something more than a dam here and a storage reservoir there. ... We need a kind of protection which considers something more than the exigencies of Johnstown, Pittsburgh and Hartford — considers the social and economic future of a nation and a continent."
Congress obliged the new national consciousness with the Flood Control Act of 1936, which declared flood control a "legitimate federal responsibility" and provided a substantial increase in federal funding for a comprehensive network of levees, dams, reservoirs and dikes. Significantly, it handed complete responsibility for flood control to the Army Corps of Engineers, a division of the War Department (later the Department of Defense), and mandated that the economic benefits of construction outweigh the costs. In essence, the act was driven by commerce but framed as national defense.
Drainage basin of the Mississippi River. [via Wikipedia]
As construction began on control structures throughout the Mississippi River Basin, and as floodwaters rushed into the Ohio River Valley in January 1937, a district engineer in Memphis, Tennessee, Major Eugene Reybold, raised concerns about this approach. Although the scope of flood control had expanded beyond the Mississippi, the work was limited by current field research methods; engineers found it difficult to track what was being done at various points along the river and thus impossible to predict how isolated "solutions" might affect one another. To understand the Mississippi River Basin as a dynamic system of interconnected waterways, the Corps needed new, more sophisticated scientific tools.
Reybold came up with a radical idea: a large-scale hydraulic model that would enable engineers to observe the interactive effects of weather and proposed control measures over time and "develop plans for the coordination of flood-control problems throughout the Mississippi River Basin." Only a physical model of all lands affected by the Mississippi River and its tributaries could meet the three major goals of the Army Corps:
... to determine methods of coordinating the operation of reservoirs to accomplish the maximum flood protection under various combinations of flood flow; to determine undesirable conditions that might result from non-coordinated use of any part of the reservoir system, particularly the untimely release of impounded water; and to determine what general flood control works were necessary (levees, reservoirs, floodways) and what improvements might be desirable at existing flood control works.
Reybold understood that such a project would require a paradigm shift in the Army Corps of Engineers. His colleague John Freeman ran a small hydraulics laboratory, the Waterways Experiment Station, in Vicksburg, Mississippi, but had been denied funding for more comprehensive research. "Field experience," said Secretary of War Dwight Davis, "is undoubtedly of much greater value than laboratory experiments could possibly be." Nevertheless, Freeman’s laboratory drew the attention of young, ambitious engineers who could see the benefit of fluid mechanics modeling. Reybold worked with the Experiment Station to construct a small section of the exceptionally steep Kanawha River as a pilot model. He knew that if he could simulate historic flood events and produce accurate flood hydrographs of the Kanawha, he could build support for a model of the entire Mississippi River Basin. Reybold’s plan worked; in 1943 the Corps of Engineers approved his proposal to build a comprehensive model.
The Mississippi River Basin Model, looking upstream on the Ohio River from Evansville, Indiana. [Courtesy of the U.S. Army Corps of Engineers]
This effigy of Old Man River is expected to make him behave better.
— Popular Science, 1948
What Reybold needed next was a site and a workforce. World War II had commandeered the Army’s stateside labor force and depleted its funding for civilian hiring. So as Reybold surveyed the area near Vicksburg for suitable topography on which to build the basin model, he also negotiated for the transfer of prisoners of war to a new internment camp. He settled on a large area of undeveloped land in Clinton, Mississippi, and under his supervision 3,000 German and Italian POWs began construction on a 200-acre working hydraulic model. The ambitious model would replicate the Mississippi River and its major tributaries — the Tennessee, Arkansas and Missouri Rivers — encompassing 41 percent of the land area of the United States and 15,000 miles of river. It would reflect existing topography and river courses throughout the Mississippi Basin, using the best data drawn from hydrographic and topographic maps, aerial photographs and valley cross-sections.
The prisoners cleared the site of a million cubic yards of dirt and rough-graded the land to match the contours of the Mississippi River Basin. To ensure that topographic shifts would be apparent, the model was built using an exaggerated vertical scale of 1:100 and a much larger horizontal scale of 1:2000. While the existing topography offered a close approximation of the actual Mississippi Basin, some areas required significant earthmoving; the Appalachian Mountains were raised 20 feet above the Gulf of Mexico, the Rockies 50 feet. An existing stream running east-to-west provided the model’s water supply. The streambed was molded to take on the shape and form of the upper reaches of the Mississippi, and a complex system of pipes and pumps distributed water throughout the model; it was regulated by a large sump and control house sited near what would become Chicago, Illinois. To simulate flood events, Reybold needed to introduce large volumes of water over short periods of time, so he designed a collection basin and 500,000-gallon storage tower system at the model’s edge. Small outflow pipes at anticipated data collection points channeled excess water to 16 miles of storm drains.
A 20-acre section in the center of the 200-acre site would be subject to high-intensity tests. Here the engineers installed a "fixed-bed model" that enabled greater precision and control, modeling the river channels and overbank flood areas in concrete. This section represented the areas of the central and lower basin perceived to be most vulnerable to catastrophic floods: the Mississippi River from Hannibal, Missouri, to Baton Rouge, Louisiana; the Atchafalaya River from its confluence with the Mississippi to the Gulf of Mexico; and the lower reaches of key tributaries, the Missouri, Ohio, Cumberland, Tennessee, Arkansas and Ouachita Rivers. Large concrete panels, flat on the underside and uniquely molded on top to reflect particular topographic shifts, were installed over the pipes and held in place with a secondary structural system. Although the fixed-bed model accounted for only 10 percent of the site, it represented a large enough area that the curvature of the earth played a significant role in the design and construction of the concrete panels. Engineers overlaid the traditional grid system with the conical Bonne Projection, skewing the surface of each panel to respond to the topographies of both the model site and the basin itself.
Concrete landforms, metal screens and brass plugs at the basin model in 2010.
The panel surfaces were enhanced with concrete riverbeds, sheer cliffs, flat plains, tributaries and oxbow lakes, as well as railroads, bridges, levees and highways. The engineers faced the significant challenge of achieving an accurate degree of "roughness," the measure of frictional resistance experienced by water as it passes over a particular surface. Because the concrete created an impermeable (fixed) ground, they installed 3/8" metal plugs of varying length, called "parallelepipeds," to create drag in the water flow and simulate scouring. These brass plugs were used in conjunction with brushed and scored concrete and periodic concrete ridges to model channel roughness. To add further surface detail to "overbank phenomena" such as the vegetation observed in aerial photographs, an accordion-folded metal screen was cut to scale and placed (unfixed) at appropriate locations.
"Let the Robots Run It"
The Mississippi Basin Model quickly became the most complicated, expensive and time-consuming research project ever undertaken by the Corps. Early reports predicted that the model would be completed by 1948; later reports implied a delay of five to ten years. As early as 1949, upper sections were opened for testing, but by 1959 the model had been completed only as far south as Memphis, Tennessee. The seemingly straightforward design-and-build phase had been complicated by postwar transition and inefficient bureaucracy.
When Reybold sourced his original labor force, he handpicked POWs with knowledge of engineering and construction, specifically German engineers, whose home country had already embraced the benefits of hydraulics modeling. Repatriated after the war, the prisoners were surprisingly hard to replace. At the time, all river management works were funded by the districts that profited most directly from their development. Thus all funding (design, construction, future operation) for a model projected to visualize 41 percent of the United States as a single landscape had to be equitably divided among 15 districts in proportion to their river frontage. It wasn't until 1957 that direct congressional appropriations for the project were approved. With the new funding, construction moved at a more even pace, and the model was completed in 1966.
Because the budget had fluctuated greatly before Congress assumed fiscal responsibility, Reybold pushed the Corps of Engineers to re-think the model's operation. Administrators had assumed the model would be tested just as real rivers had been tested for years. Field engineers would take manually operated devices to key river bends and, operating largely independent of each other, collect data that could be relayed back to a second team who specialized in data processing. It was time-consuming, tedious and required expertise at each level of analysis — precisely the kind of inefficiency Reybold hoped to eliminate. To run full-capacity manual tests, the Corps would need an experienced staff of 600 engineers trained in field measurements all working at the same time; but if the data collection could be automated, staffing could be limited to control houses, where just a few dozen engineers could turn the model on and off and simultaneously process data.
Top Left: Eugene Reybold. Top Right: The model under construction. Bottom: Inflow, outflow and stage instruments. [Photos courtesy of the U.S. Army Corps of Engineers]
After slowly convincing individual districts that automation could cut operation costs in half, Reybold’s team awarded contracts in 1948 to two companies charged with designing instrumentation specifically for the basin model. They developed 76 inflow and outflow instruments and 160 stage instruments to simulate normative weather and flood events, all tied to a single timing unit capable of synchronizing the various sections of the model to a virtual calendar indicating the day, month and year in "model time." Thick bundles of data transmission lines connected the timing unit, inflow and outflow devices, and stage instruments to six small sheds on the periphery that served as control houses. Each control house was located near one of the major tributaries and contained a switch capable of activating that particular section of the model. The river system could be operated in full or in parts, or be turned off entirely.
When the Fake Clarifies the Real
On April 1, 1952, George Stutts, a Missouri River engineer, conducted his regular field surveys of the levees in Nebraska and reported that northwest Missouri was in "no immediate danger of flooding." Only seven days later, a new survey indicated signs of imminent and severe floods. The mayors of Omaha and Council Bluffs contacted the Army Corps District Office to propose using the basin model to predict flood stages, and the model was called into active duty for the first time.
On April 18, as the Omaha World Herald rolled out the headline "Missouri River Near Crest Here; Anxious Eyes On Soggy Levees," the basin model was halfway through 16 days of continuous 24-hour tests. Engineers issued prototype conditions to the newly installed instruments, generating simulations that forecasted likely events over the next month — crest stages, discharges, levee failure and more. As water poured through the Missouri River section of the model, the resulting data were relayed directly to aid workers in Omaha and Council Bluffs, who were able to respond with brigades of civilians and sandbags to points where levees needed to be raised only slightly; areas predicted to flood dramatically were evacuated. In total the Mississippi River Basin Model prevented an estimated $65 million in damages.
With this impressive victory against the river, Reybold’s project was vindicated. The model had allowed the Mississippi River Basin to become, for the purposes of study, an object, a manageable site. Here engineers, community leaders and civilians could gather to discuss the potential ramifications of particular flood control measures and forecast likely scenarios. Each gallon of water passing through the model was the equivalent of 1.5 million gallons per minute in the real river, meaning one day could be simulated in about five minutes. This allowed for a tremendous capacity to collect data, to use the model as an active tool for communication, and to distribute information about the river as a system. With mayors from cities up and down the river gathering in the observation tower to watch the Mississippi cycle through an entire flood season, it became possible to find edges, limits and centers, to see how and where the river might strike next. The model imbued the river with a reassuring degree of certainty. Policymakers began to adjust to a new scale of thinking.
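The time compression follows from Froude similarity for distorted models, where the time ratio is the horizontal scale divided by the square root of the vertical scale. A hedged sketch (the Corps’ exact operating ratio may have differed):

```python
import math

# Scales given earlier in the essay: 1:2000 horizontal, 1:100 vertical.
HORIZONTAL_SCALE = 2000.0
VERTICAL_SCALE = 100.0

# Distorted-model Froude scaling: T_ratio = L_horizontal / sqrt(L_vertical)
time_ratio = HORIZONTAL_SCALE / math.sqrt(VERTICAL_SCALE)   # 200:1
model_minutes_per_day = 24 * 60 / time_ratio                # ~7.2 minutes

print(f"time ratio: {time_ratio:.0f}:1")
print(f"one prototype day = {model_minutes_per_day:.1f} model minutes")
# Same order of magnitude as the essay's "about five minutes" per day.
```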
In the 1960s, the basin model received 5000 visitors annually. [Vintage postcard via World of Decay]
Most important, the basin model acknowledged the river and its tributaries as the defining features of the landscape. Settlements, highways and railroads were all secondary to the force of moving water. Reybold demonstrated that the Mississippi River system acted continuously on many points in concert, creating a series of interconnected reactions more expansive and powerful than anyone had previously understood. He sparked an ideological shift among his fellow engineers, who had once believed that the river could be pressed into submission in order to maximize available land for human purposes. The basin model underscored the idea that not all landscapes could be transformed for development, an idea which had been lost during the frenzied period of levee building in the early 1900s. By acknowledging the true complexity of the river system, engineers could move beyond the localized approaches that had hindered flood control efforts in the 1920s and ‘30s. One person could take in the entire breadth of the Mississippi Basin in one panoramic view, and what emerged was the understanding that the river is a system, a network of continuous forces that creates unique but interconnected conditions. Each specific condition must be considered in the context of the whole.
For two decades, Reybold’s model was the tool used to extend this line of thinking throughout the Mississippi River Basin, determining flood control strategies from Montana to Louisiana. From 1949 to 1971, engineers completed 79 simulation packages at the basin model, with most requiring a minimum of two weeks and some as long as eight weeks. The tests ranged from altering the course of the river to spot-raising levee heights in vulnerable locations. The Basin Model Testing Record reads like a battle transcript. In February 1962, a series of hypothetical floods was introduced to the Ohio River. In 1967, the effects of roadway construction on the flow of the Mississippi River were tested in Lake County, Tennessee. In 1969, various channel alignments were examined in Baton Rouge, Louisiana, and basin-wide tests were conducted to verify the holding capacities of floodways and reservoirs throughout the lower basin.
But in 1971, operations came to an abrupt halt. The model received a congressional appropriation of $150,000 to conduct a two-year "Computer Application Study," a cautionary response to work emerging from the recently established Army Corps Hydrologic Engineering Center. By 1970, the HEC, like the Waterways Experiment Station before it, had become an outpost for young engineers interested in pushing the current practice of hydrologic research toward computational scripting and planning analysis technologies. As a direct challenge to the validity of the Mississippi Basin Model, the HEC had developed a river hydraulics software package called HEC-2, and the 1971 study set out to compare results of the two competing methodologies.
That was the last scheduled test at the basin model. Although the model continued to be used sporadically over the next decade, it was gradually upstaged by a mainframe computer in Sacramento, California. And then, in the early 1990s, the Army Corps walked away.
Engineers revived the model briefly during the 1973 Mississippi River Flood. [Courtesy of the U.S. Army Corps of Engineers]
The Model Today
Actually a model is nothing but a calculating machine. You don’t get anything out of it unless you put in something to start with.
— Hans Einstein, at an Army Corps of Engineers Consultants Conference, 1952
I live near the Old River Control Center, north of Baton Rouge, an environmental battlefield where the Army Corps has requisitioned $270 million (to date) and deployed 4,000 linear feet of concrete in a decades-long campaign to prevent the Mississippi River from diverting its course westward to the Atchafalaya River. In the spring of 1973, when massive flooding nearly overwhelmed the Control Center, engineers at the Mississippi River Basin Model made the decision to push forward with further structural reinforcements. It was one of the last major policy decisions based on simulations at the basin model and one that has committed southern Louisiana to a protracted strategy of actively attempting to "reverse Mother Nature." As the warm spring weather of 2010 swelled the Mississippi once again, I wanted to see for myself how a concrete control structure could be modeled in a miniature concrete landscape.
My first Google search for "mississippi river basin model" turned up only a few valid hits, one of which led to an aerial image on a blog called Google Sightseeing: Why Bother Seeing The World For Real? I, in fact, did want to see the world’s largest scaled model for real, but the internet was not providing helpful instructions. Was there a visitor’s center? Would they (and who might they be?) allow me to take photographs?
Flying over Clinton, Mississippi, via Google Earth revealed that the model is surrounded by Butts Park, a public park just south of Interstate 20. Its key features include a remote control airplane landing strip, a remote control car racetrack, and children’s soccer fields — a host of miniature spaces appropriate to the site’s lineage. At least now I had an address. I drove to Clinton and pulled into a life-size parking space, my car facing east toward Jackson, Mississippi, in the real world, and north toward the river’s headwaters at Lake Itasca, Minnesota, in the model world. But I couldn’t see the model anywhere.
Butts Park gives no immediate sign that 200 of its 260 acres is (or was) a miniature landscape. The POW barracks, the Army Corps offices, the once-imposing guard gate — they’re all gone. What remains, concealed by invasive vegetative overgrowth, is the model. And it is surprisingly intact and fairly evenly weathered after two decades of abandonment. The overgrowth has created a protective barrier of holly and poison ivy, making it nearly impossible to see from the park and protecting it from misuse. In the brush just off the mowing path of the park maintenance crew, I found a four-foot-high hurricane fence mostly intact. I jumped the fence and trudged through plastic water bottles and long-emptied bags of Doritos until my foot found an edge. As I crossed the threshold into model-space, my feet landing on rough concrete and the surrounding park immediately receded. I was standing on a two-foot-wide finger of concrete and wire mesh, a weak but steady tributary that eventually worked its way to the Cumberland River, then to the Mississippi.
The abandoned basin model surrounded by Butts Park. [Top photo by Jeffrey Carney]
Even after long neglect, the model was impressive. As I stood at its center, it consumed my view. To the left, hills along the Tennessee River gradually rose toward the Appalachian Mountains. Beyond a stand of Chinese tallow and cottonwoods, I saw a mess of metal and plastic: 14 PVC pipes of varying diameters, a metal walkway three feet above the surface, and the much less pronounced topography of the river and its tributaries converging on St. Louis. This was obviously a machine — I had to watch out for the abundant pumps, gauges and pipes as I walked — but after the rhythm of the space became familiar, the machine-ness faded into something more akin to a landscape. I walked the length of the fixed-bed portion of the model from Hannibal, Missouri, to Baton Rouge, Louisiana, in minutes. Labels for cities and towns had long since scattered, but using landforms as a guide, I could identify familiar places. Standing astride the river, with one foot on the plains of Vidalia, Louisiana, and the other on the bluffs of Natchez, Mississippi, my mind was tricked into believing that this could have been a playground and not a complex hydraulic model, an operable toy replete with countless options to alter a small, contained (and fake) universe.
When the Fake Replaces the Real
This is why mapping is never neutral, passive or without consequence; on the contrary, mapping is perhaps the most formative and creative act of any design process, first disclosing and then staging the conditions for the emergence of new realities.
— James Corner, "The Agency of Mapping," 1999
Within minutes of arrival my perspective had shifted. I became consumed with the immediacy of the experience, with the model as a series of spaces that I could occupy. I set aside my questions about its purpose and effect and engrossed myself in the challenge of parsing through the rich layers of space and the abstract simplicity of materials, dissecting it as place, not as representation. The operability of the landscape was absorbing. I passed hours testing the gates and chutes, and attempting to make a golf ball I had found in the brush wash down the river from Cairo, Illinois, to Memphis, without getting hung up in the tight meanders. (I was thwarted by an unforgivably constricted bend at New Madrid.)
Despite knowing I was looking at, standing on and manipulating an object that was no more or less than a point of reference, a miniaturization of the real thing, the size and scope of the simulation sucked me in. I couldn't hold the model in my hand or separate it from the environment surrounding it, and so it became a place in and of itself. I was lost in its depths and found it difficult to understand as merely a representation of a very real river system 30 miles to the west. I'm not suggesting the Army Corps of Engineers confused their workplace with an adult sandbox. But I am struck by the disconnect that can occur when a model becomes the substitute for the "real thing," when the copy, which can never replicate the complexity of its source, becomes the fulcrum around which decisions are made. Beyond the achievement of constructing such a model, what effect has this fake river had on our relationship with the real river it seeks to mimic? In puzzling over this question, three lessons seemed to emerge.
Lesson #1: Materials Matter
At an average thickness of six to eight inches, the constructed ground of the model hardly simulates the complexity and depth of the actual sedimentary profile. The perfectly folded metal screens do not speak to the diverse array of ecosystems and habitats that weave into the river fabric. The basin model endorses (which is to say that it cannot function without) a dangerous abstraction of real material (not the least of which is human occupation) and an unrealistic ability to contain and isolate variables in an infinitely complex natural system. In the real world, river systems cannot be reduced to the dialectic of water-or-land; they are materially ambiguous. To remove slurry from an alluvial landscape, as the model does, is to negate wetlands, to deny the exigencies of an entire ecosystem that thrives on particulate matter caught in-between states. It doesn’t matter how much territory the model covers if it relies on the amputation of inconvenient complexities to be manageable. The simulation becomes thin.
And over the years the model repeatedly expressed its limitations to the engineers. Maintenance costs became increasingly exorbitant. Water poured across the impervious concrete, but inevitably found its way to more susceptible materials, seeping through expansion joints, rusting the metal substructure of the panels and washing out packed clay around the pilings. Panels had to be realigned, rejoined and rebraced every month. Vines crept into the folded metal screens, grass pushed through small seams in the concrete. Because the model was situated in a real open-air environment and exposed to real weather, real material change became just as powerful as the on/off switch in the control house.
Lesson #2: Scale Matters
The basin model was envisioned not only as a highly efficient, technologically advanced machine, but also as a platform for communication. This is truly remarkable: a space designed to visualize environmental change, long before Rhino and 3-D Studio Max were rendering virtual fly-throughs of code-driven spatial simulations. For decades the model was the site of major conversations about the American built and natural environments. Governors, mayors, tourists and engineers gathered to see the river in motion, to discuss possible solutions, likely ramifications and the division of responsibility.
But the officials who stood here were presented with a distorted narrative. In order to build the model at manageable size without sacrificing accuracy in stage-discharge calculations, engineers decoupled the horizontal and vertical scales, exaggerating the elevation by 20 times. Reybold’s team was trained to isolate the x- and y- axes and read values instead of space, so they could separate the data from the physical impression of the model in their calculations. But it’s not likely that the model’s 5,000 annual visitors were able to manage this mind leap. The spaces they encountered were so real and so seemingly certified by sheer size that it would have been impossible to separate the experience of being in the space from what it represented. How many policy decisions were shaped by politicians who misunderstood the lessons of the basin model because of the height of its hills and cliffs?
The lower basin from Baton Rouge to the Gulf of Mexico was never modeled. [Courtesy of the U.S. Army Corps of Engineers]
Lesson #3: Scope Matters
But what’s at work here is more than just the reduction of material complexities: more than just the substitution of concrete for mud and grass, or one inch of elevation for one hundred. The basin model was flawed even in its conception, from the initial design decision that seems easiest (on paper) to justify: how far it should extend. In 1942 the Corps of Engineers decided to fund only a partial simulation:
The proposed model would reproduce all streams in the Mississippi River watershed on which reservoirs for flood control are located or contemplated, together with all dams, levees, dikes, floodwalls and other pertinent works. ... (and) only initially as far as the mouth of Old River (just north of Baton Rouge) for the reason that no inflow takes place below that point.
Thus a supposedly comprehensive model of the Mississippi River Basin stopped at Baton Rouge, Louisiana, excluding the mouth of the Mississippi River and its delta.
On my first visit, I attempted to do what we all do when faced with a map: to locate myself, my home, within the field of abstraction. I wanted to see how Reybold had designed the transition from the temperate northern states into the fragile marsh and swamp ecosystems of South Louisiana, how the hard lines of concrete in Missouri and Illinois could be softened to accept the landscapes I have known since childhood. I wanted to see the Birdfoot Delta and New Orleans (would the Lower Ninth Ward be modeled?). I wanted to see my family's home on Bayou Lafourche (once the east fork of the Mississippi River) and the Gulf of Mexico. Alas, standing ankle-deep in False River, an oxbow lake north of Baton Rouge, I found that I had reached the model's end, and it took the rather unceremonious form of a leaf-and-twig-clogged drainage ditch.
While the decision to exclude the river system below Baton Rouge was driven by the difficulties involved in financing a $17 million project that challenged existing research practices, the fact that all of the Army Corps of Engineers' experiments at the basin model produced data without New Orleans and the Gulf of Mexico inevitably colors the validity of the results and raises questions about how much the model is to blame for the rapidly disintegrating Gulf coastline. Despite best efforts to faithfully build a systems-based approach to flood control, the system was fundamentally incomplete. The 1942 report noted that "provision would be made, however, for adding the remainder of the Mississippi River Basin at any time this might become desirable," but the Army Corps went on to make 25 years of decisions about flood control here, and modeling the outflow of the Mississippi River never "became desirable."
Exploring the model in 2010.
Realness Beyond the Model
Although the Mississippi River Basin Model was never truly comprehensive — never fully systemic — it was nevertheless an incredible feat of design thinking. Ultimately, the model reflects an optimistic moment in our relationship with the greatest and most storied river on the continent. It embodies the ideal of balance and the goal of security. It acknowledges the necessity of human inhabitation and the unpredictable power of a natural system. Though incomplete and unsuccessful, the model helped to shape a larger narrative of two powerful colliding and often incompatible forces: a burgeoning, prosperous and settlement-building nation, and a mighty river, more than 2,000 miles long, with its endlessly complex geomorphology, its watershed encompassing almost half the country. The model was a tool as valuable to specialists as to citizens, demonstrating the power of visualizations to shape policy through design.
Today the basin model endures as a relic of that earlier era, long forgotten, subject to weathering and erosion, like the river system it was designed to control. As I left the model at the end of a long, hot day, it began to rain. In seconds, the river filled with water, small bits of leaves and dirt washing down toward Cape Girardeau, Missouri. The water pooled in places, spinning into eddies when the tributaries reached the main channel. I lifted the gate at what might have been St. Louis, sending a wash of muddy water toward Memphis. I could see the water rising as it moved south, small sticks and gum wrappers kicked up over the edges as the river began twisting toward Louisiana. The straits of Baton Rouge sent the water rushing out with such force that it seemed to leap out of its container and over the concrete banks and into the poison-ivy wilderness.
Using the levees as a footpath, I walked upriver toward my car, stepping out of Hannibal, Missouri, and back into Clinton, Mississippi. And just as quickly as my perspective had shifted earlier when I entered the model, it now refocused on a group of nine-year-olds gathered under a sycamore tree, waiting for the rain to pass. It seemed odd the rain would overwhelm the park, much less the model. This was, after all, a site once dedicated to the management of water. But when I reached the parking lot, I found that the corner where I’d parked had washed out on the eastern edge of the fake Mississippi River. The waters were rising, and I rolled up my pants and waded to the car and drove home.
Decontaminants are chemical compounds that react vigorously with toxic gases and convert them to nontoxic compounds.
The variety of toxic substances has given rise to decontaminants of a variety of chemical properties. Water may be considered a decontaminating agent, decomposing toxic materials at various rates (fastest and most completely, by boiling). It is used only for decontaminating clothing and in certain antigas equipment. Decontaminants with oxidizing-chlorinating action (calcium or sodium salts of hypochlorous acid, the hypochlorites, and various chloramines) are more effective. Under ordinary conditions aqueous solutions and hypochlorite suspensions oxidize mustard gas and decompose toxic organophosphorus compounds, converting them to nontoxic products. The most readily available is calcium hypochlorite, or chlorinated lime. It is used dry or in aqueous suspension to decontaminate roads or terrain. Aqueous suspensions can also be used to decontaminate buildings and transportation facilities. Hypochlorites are not effective for decontamination at temperatures near 0°C; solutions of chloramines (for example, chloramine B) in organic solvents, such as alcohol or dichloroethane, are used instead. Chloramines are also used to prepare gas casualty first aid kits and for the decontamination of war materiel and terrain. Chloramines chlorinate mustard gas to nontoxic products. However, these decontaminants are not satisfactory for all toxic organophosphorus gases. For example, while they are good decontaminants for substances of the V-gas type, they are useless for sarin and soman. Hence, use is made not only of oxidizing-chlorinating decontaminants but also of alkaline agents (caustic alkalies, sodium carbonate, and ammonia). Decontaminants based on certain sodium alkoxides and amines (DS-2, USA) are active against all types of toxic gases. Their decontaminating action is based on the dehydrochlorination of mustard gas and the alcoholysis of toxic organophosphorus gases.
Various organic solvents (motor fuels, alcohols) and detergent solutions may be included among the decontaminants. However, their use only offers physical decontamination (removal of the toxic material through solution or emulsification), which is always incomplete and, in a number of cases, inadequate.
REFERENCE
Aleksandrov, V. N. Otravliaiushchie veshchestva. Moscow, 1969.
I. T. PENZULAEV
There are many theories explaining later prehistoric 'trade' and 'exchange systems' in stone artefacts. Evidence matching the petrographic information of transported implements with the country rock where 'factories' produced flaked stone axes is felt to be compelling. Laboratory implement source provenancing by petrography is felt to have been particularly successful in Britain. Similar source provenancing programmes have been undertaken elsewhere, including one that apparently demonstrates the dispersal of Neolithic jadeite axes from the Alps (see Giligny this issue). Consequently, throughout Europe it is widely believed that the only way 'factory' rock could have reached the places where artefacts have been found was by human carriage.
Long-distance trading and gift-exchanging are at present the explanations most widely accepted to explain the mapped distribution of implements of metamorphic, igneous and sedimentary rocks found far from their primary outcrops within the British Isles and in Continental Europe. Some of the material that follows is updated from documentation that has already appeared in print (particularly Briggs 1991) while other sections derive from unpublished articles (Briggs forthcoming).
The fundamental tenets of belief in human carriage are best summarised as follows:
Neolithic colonists arrived in Britain with sound knowledge of a flint technology. Flint was commonly used in southern Britain, where mines traded it extensively. In northern and western Britain, flint being locally absent, different rocks were selected for use; examples include the early use (between 3,700 and 3,000 bc) of rocks from Graig Lwyd or Borrowdale. About thirty other stone types in Britain, 'grouped' and ascribed primary outcrop sources by petrography, suggest that extensive workshop sites ought to have been in operation by that period. Between about 1930 and 1975, the prevailing theory held that economic need in prehistory was the reason for this transport. Factories having been found, the economic equation was secure. The factory concept then became nuclear to a Neolithic subsistence economy (Clark 1952, 244-54; 1957; 1965). Subsequently, it was explained that the axes were dispersed through 'complex social relationships' (Renfrew 1977), a view championed by Bradley and Edmonds (2005). On occasion, concession has been made to the use of pebbles (Cummins and Moore 1973, 241-2; Fenton 1984). But until recently (Williams-Thorpe et al. 1999; 2003) pebbles of petrographically 'grouped' (i.e. putative 'factory') rocks were not considered to have been utilised in the Neolithic; only 'factory rock' was acceptable. The laboratory provenancing of exotic clasts found in superficial deposits (for example those of Cumbria, N. Wales and Cornwall), although widespread among geologists (Sabine 1949; Williams-Thorpe et al. 1999), is a field of interest virtually ignored by prehistorians, who rarely credit their forbears with an ability to have turned recycled material to advantage.
The object of the discussion that follows is to consider the major outstanding questions in a presentation which falls into three parts.
The vision and skill of the island laird, John Traill, led to the great agricultural improvements that included the building of the sheep dyke. His talent in administration led to the structure for the management of the sheep themselves in the regulations of 1839.
These were carefully worked out between the laird and the crofters and they covered the maintenance of the sheep dyke as well as the management of the flock.
On the one hand, each crofter has the right to keep a certain number of sheep on the shore. On the other, there is a duty to repair and maintain the dyke “in proportion to their allocation of sheep”.
The callout for work on the dyke would come from the Sheepmen, who were appointed by election, two from each of the five Toonships on the island, to administer the various duties and ensure that the regulations were adhered to.
The dyke has been carefully built to reduce the pressure on it from a variety of factors. In parts of the island where sand blows up onto the fields, there are square openings built into the structure to allow sand to pass through and reduce the weight of it that would otherwise build up.
Throughout the twelve-mile extent there are holes between the stones to let the wind blow through, and this reduces the damage from storms, but the sheer force of a winter gale can be very destructive and take down stretches of dyke, as writers in the island record in particularly bad years.
“The gales have done tremendous damage. During one tide, from flood to first ebb, miles and miles of the sea dyke have been torn away. The sea roared in, meeting local water, flooding roads, and producing a scene of such destruction that has not been approached since 1937, when there were far more people to repair the dykes.
“The waves were so huge that they were right over the store at the foot of the pier, and it was quite terrifying to look out of our upstairs windows to see waves coming in over the dyke, and at their height towering up above the roof of the house on the other side of the loch.
“Hundreds of sheep poured in through all the gaps and ran in flocks up the roads, over the fields and round the stack. Their hooves pounded past us as we went to feed the cattle in the byres, sixty or more sheep racing through the yard.”
(Christine Muir, Orkney Days, 1986) |
GFD Lab IV: Construction of parabolic turntable
If our cylindrical tank is filled with water, set turning and left until it comes into solid-body rotation, then the free surface of the water will not be flat: it will be depressed in the middle and rise up slightly to its highest point along the rim of the tank.
The shape of the free surface is given by
h(r) = h(0) + Ω²r²/(2g)
where h is the local depth, r is the distance from the axis of rotation, and
h(0) is the depth at r = 0. Thus the free surface takes on a parabolic shape.
Let's put in some numbers for our tank. We can obtain rotation rates of up to 10 rpm (an
Ω of about 1 rad/s). The radius of the tank is
0.30 m and g = 9.81 m/s², giving
Ω²r²/(2g) ≈ 5 mm,
a small fraction of the depth to which the tank is typically filled.
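As a quick check of this arithmetic, here is a minimal sketch using the tank numbers quoted above:

    import numpy as np

    # Free-surface height in solid-body rotation: h(r) = h(0) + (Omega^2 r^2)/(2 g)
    g = 9.81                          # gravitational acceleration, m/s^2
    omega = 1.0                       # rotation rate, rad/s (roughly 10 rpm)
    r = np.linspace(0.0, 0.30, 7)     # radius from the axis out to the rim, m

    rise = omega**2 * r**2 / (2 * g)  # surface height above the center depth
    for ri, dh in zip(r, rise):
        print(f"r = {ri:.2f} m -> rise = {dh * 1000:.2f} mm")
    # At the rim (r = 0.30 m) the rise is about 4.6 mm, i.e. the ~5 mm above.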
It is very instructive to make the surface of our turntable parabolic. This can readily be achieved by filling a large flat-bottomed pan with resin on a turntable and letting the resin harden while the turntable is left running (10 rpm works well) for several hours. The resulting parabolic surface can then be polished to create a low-friction surface.
Place a ball-bearing on the rotating parabolic surface - make sure that the table is rotating at the same speed as was used to create the parabola! Note that it does not fall into the center, but instead finds a state of rest in which the component of gravitational force resolved along the parabolic surface is exactly balanced by the outward-directed horizontal component of the centrifugal force.
6th Grade Geometry Skills
Prior Standards Implementation
The standards listed below have been replaced by a newer set of standards.
Please go to Current 6th Grade Math Standards for current resources.
Identify parallel, perpendicular, and intersecting lines.
Use ordered pairs to describe given points in Quadrant 1 of a coordinate system.
- Billy Bug - Guide Billy to the coordinates hiding the food. coordinates 0-10 | coordinates 0-5
- Create a Haunted House...if you dare! - Follow directions to sharpen your graphing skills, create a cool haunted house and then morph it using math. Expressions, Equation Solving and Graphing in the Coordinate Plane - The Haunted house and many more coordinate plane practice problems
- Graphing Applet - interactive site which identifies the ordered pairs of points students click on - Advanced version of the same applet
- Graphing Equations and Inequalities Unit Quiz
- Graphing Ordered Pairs - explanation and interactive practice
- Graphing Skills - What's the point? find the point on the grid (choose Easy for Quadrant 1 problems)
- Grid Graph - identify and plot points on a grid
- Reading a Grid - Find Hurkle behind the grid. Type ordered pairs to find him
- Simple Coordinates Game - Students investigate the first quadrant of the Cartesian coordinate system through identifying the coordinates of points, or requesting that a particular point be plotted.
- Simple Maze Game - Students investigate the first Quadrant of the Cartesian coordinate system by directing a robot through a mine field laid out on the plane.
- Spy Guys Interactive - Using Ordered Pairs - Lesson 18
- What's the Point? - Find the x-y point on a grid
- Worksheet Generator - Print your own blank coordinate plane worksheets, you determine the number of grids on a sheet and whether the grids are numbered or not.
- Advanced (all 4 Quadrants used)
- Catch the Fly - [ all 4 quadrants are used] Use the keyboard to enter the x and y values of an ordered pair to help the fly catch a bug. No score is kept, each question is essentially a one question game.
- Coordinate Plane Practice - [ all four quadrants are displayed on each slide ] students type ordered pairs in the blocks on this interactive PowerPoint show
- General Coordinates Game - Students investigate the Cartesian coordinate system through identifying the coordinates of points, or requesting that a particular point be plotted. ( all 4 quadrants utilized )
- Maze Game - Students use their knowledge of points on a graph to move a robot to the target, while avoiding mines. ( all 4 quadrants utilized )
- Stock the Shelves - You are the clerk. Stock the shelves using a special coordinate plane.
Classify two-dimensional geometric figures using properties.
- Identify Polygons - Names of polygons by number of sides and angles
- Identify Shapes - Recognize properties and names of shapes
- Name that Polygon - Identify name of given polygon components
- Pete's Polygons - learn about closed figures called polygons
- Polygon Playground - arrange designs with online movable polygons - hundreds of polygons to drag anywhere you want
- Sam's Similar Shapes - learn about shapes that are different sizes but the same shape
- Triangle Identification - identify the triangle given its angles
- Triangle Identification - identify the triangle given its number of sides
Identify the results of transformations of two-dimensional figures (e.g., slides/translations, flips/reflections, and turns/rotations).
- Cube - Find out which colors will be on opposite faces of a cube whose faces are shown unfolded.
- Flip, Slide, or Turn - six flash movies allow you to watch the path an object takes while it is being transformed
- Flips, Slides, and Turns - a manipulative which allows you to view the effect of applying reflection, translation, and rotation transformations
- Mathematical Movements (Flips, Slides and Turns) - [ designed for 3rd grade ] a 25 page lesson with printables
- Measuring Turns - what would shapes look like if rotated
- Motion in Geometry - a 127 slide PowerPoint show
- Slides, Flips, and Turns - lesson plan with links to four printables
- Spy Guys Interactive - Slides and Flips - Lesson 17
- Transformations - a Flash movie which demonstrates flips, slides, and turns
- Transformations - [ designed for 7th grade ] a 21 slide PowerPoint show produced by Holt, Rinehart and Winston
- Translations - a seven stage lesson on flips, slides and turns
- Wrapping Paper Patterns - decide which figures would result from a flip, a slide, or a turn
Use spatial reasoning to identify the three-dimensional figure created from a two-dimensional representation of that figure (i.e., cube, rectangular prism, pyramid, cone, or cylinder).
- 3-D Object Viewer - Students may explore a variety of 3-D objects and their accompanying 2-D views.
- Building Houses with Side View - student constructs a block figure (dynamic, perspective drawing) to match (10 different figures)
- Coloring 3-D sides - [ UK spelling on this site ] Find the red sides shown in a series of 2-D drawings and click on the right face of the 3-D model to color it red. 20 questions
- Coloring 2-D sides - Use the colored portion of the 3-D object to color the correct side of the 2-D drawing.
- Cube - Find out which colors will be on opposite faces of a cube whose faces are shown unfolded.
- Guess the View - Students are given a 3-D view of an object, and then given a 2-D view of the object. Students must choose which of 6 views is being displayed from a list.
- Plot Plans and Silhouettes - from Shape and Space in Geometry - the student task is to come up with plot plans that could match the given silhouettes. Background information is available at another page.
- Rotating Houses - Students are presented with a 3-D figure created with blocks that can be rotated and flipped using a mouse. The figure must be rotated until it matches a 2-D representation of one of the views.
Classify angles as acute, obtuse, right, and straight.
- Acute One Or An Obtuse One? - [ grades 5-8 ] six page worksheet to print and use in class
- Angle Classification: Acute, Right, Obtuse - explanation and questions to answer
- Angles - Students practice their knowledge of acute, obtuse and alternate angles.
- Angles - Interactive game to help recognize angles.
- Angles - a twenty slide PowerPoint show with a quiz to be used as a whole class activity
- Basic Geometric Figures - a TAKS worksheet to print and use in class
- Classifying Angles - quiz
- Geometry Building Blocks: Classifying Angles - explanation and drawings
- How Do We Measure Angles? - six page worksheet to print and use in class
- Measuring Angles - use a virtual protractor
- Names of Angles - definition and drawings of each type
- Protractor - learn how to position and read a protractor in order to measure an angle, and how to use either scale on the protractor
- Right, Acute, and Obtuse Angles - sort angles into appropriate containers
- Right, Acute, and Obtuse Angles - figures are shown, decide which is the proper angle
- Teacher Tube Videos - links to nine videos on angles
- Triangle Explorer - Students learn about areas of triangles and about the Cartesian coordinate system through experimenting with triangles drawn on a grid.
- Triangles Side-By-Side - use geometric vocabulary to describe properties of triangles
- Types of Angles - definitions, and drawings of each type
- Spy Guys Interactive - Working with Angles - Lesson 16
Classify quadrilaterals using their defining properties.
- Construction of Bicentric Quadrilateral - a mathematical droodle
- Interactive Quadrilateral - drag the dots
- Java Sketchpad to Explore Shapes
- Practice With Quadrilaterals - a twelve question quiz with drawings
- Quad Squad - a quick lesson on classifying and describing quadrilaterals
- Quadrilateral Family - from regents prep
- Quadrilateral Matching - concentration style quiz
- Quadrilateral Quest - interactive lesson, drag and drop activity
- Quadrilateral Sorter - click on a shape to find out about it
- Quadrilaterals - a sixteen slide PowerPoint show
- Quadrilaterals - a twenty slide PowerPoint show
- Quadrilaterals - a twenty-six slide PowerPoint show produced by Holt, Rinehart and Winston
- Quadrilaterals II - a twenty-eight slide PowerPoint show
- Quadrilaterals Explained - from Math is Fun
- Quadrilaterals Quiz - six multiple choice questions
- Quadrilaterals - explanation, examples and questions to answer
- Quadrilaterals - a wikipedia entry with a large number of drawings
- Shapes: Quadrilaterals - explanation and questions to show understanding
- What's My Quadrilateral - three different teaching ideas
National Library of Virtual Manipulatives
Geometry (Grades 6,7,8)
Practice Tests! Released Tests & others
- FCAT Sample Test Book - [ 2008 ] sample questions and test taking tips
- FCAT Sample Answer Book - [ 2008 ] sample questions and test taking tips
- Sixth Grade Math - Read each question and choose the best answer. Then mark the circle next to the letter for the answer you have chosen. ( from Texas )
- Texas end-of-year Math test 2003
- Texas end-of-year Math test 2004
Various Review Aids
- Geometry Jeopardy - Topics covered: Polygons & Transformations, Circles, Symmetry, Lines & Angles, and Triangles & Congruency
- Junior High Math Interactives - includes interactive math activities, print activities, learning strategies, and videos that illustrate how math is used in everyday life.
- Math Olympics - Answer 20 multiple choice questions correctly to win the Math Olympics. Topics range from basic computation and general math knowledge to word problems with percentages, ratios, and fractions.
- Math TV: Video Word Problems - [ Grades 5+ ] Math TV is a project whose goal is to help middle school students learn how to solve challenging word problems. Each of the nineteen math problems comes with step by step video solution, follow up problems, an online calculator, and sketch pad.
- Solve It! - Math Videos provide problem solving practice for students in grades 3 to 6. Each of the thirteen sets contain five multistep word problems with step by step video solutions. Concepts include basic operations, algebraic reasoning, money, fractions, percent, perimeter, area, proportional reasoning, and measurement.
Earth came into existence about 4.6 billion years ago, and about 3.8 billion years ago, the evolution of chemicals began. Scientists estimate that at about 3.5 billion years ago, the first cells were in existence.
Scientists believe that the first cells lived within the organic environment of the earth and used organic foods for their energy. The type of chemistry in those first cells was somewhat similar to fermentation, which uses organic molecules, such as glucose. The energy yield, although minimal, is enough to sustain living things. However, organic material would soon have been used up if this were the sole source of nutrition, so a new process had to develop.
The evolution of a pigment system that could capture energy from sunlight and store it in chemical bonds was an essential breakthrough in the evolution of living things. The organisms that possess these pigments are commonly referred to as cyanobacteria (at one time, they were called blue‐green algae). These single‐celled organisms produce carbohydrates by the process of photosynthesis. In doing so, they produce oxygen as a waste product. For a period of about 1 billion years, photosynthesis provided oxygen to the atmosphere, which gradually changed until it became oxygen rich, as it is today.
Another group of organisms that was present at the same time as the cyanobacteria was a group of bacteria called archaebacteria. Archaebacteria differ from “modern” bacteria (known as eubacteria) in that archaebacteria have a different ribosomal structure, different cell membrane composition, and different cell wall composition. The archaebacteria have been traced to a period of about 3 billion years ago. They are able to multiply at the very high temperatures that were present on the earth then, and their nutritional requirements reflect the composition of the primitive earth. |
Inertial and Satellite Positioning
Inertial navigation system
An Inertial Navigation System (INS) is a navigation aid that uses a computer and motion
sensors (accelerometers) to continuously calculate via dead reckoning the position,
orientation, and velocity (direction and speed of movement) of a moving object without
the need for external references. Other terms used to refer to inertial navigation systems
or closely related devices include inertial guidance system, inertial reference platform,
and many other variations.
Aircraft Inertial Guidance
One example of a popular INS for commercial aircraft was the Delco Carousel, which
provided partial automation of navigation in the days before complete flight management
systems became commonplace. The Carousel allowed pilots to enter a series of
waypoints, and then guided the aircraft from one waypoint to the next using an INS to
determine aircraft position. Some aircraft were equipped with dual Carousels for safety.
Inertial navigation systems in detail
INSs have angular and linear accelerometers (for changes in position); some include a
gyroscopic element (for maintaining an absolute angular reference).
Angular accelerometers measure how the vehicle is rotating in space. Generally, there's at
least one sensor for each of the three axes: pitch (nose up and down), yaw (nose left and
right) and roll (clockwise or counter-clockwise from the cockpit).
Linear accelerometers measure non-gravitational accelerations of the vehicle. Since it can
move in three axes (up & down, left & right, forward & back), there is a linear
accelerometer for each axis.
A computer continually calculates the vehicle's current position. First, for each of the six
degrees of freedom (x,y,z and θx, θy and θz), it integrates over time the sensed amount of
acceleration, together with an estimate of gravity, to calculate the current velocity. Then
it integrates the velocity to figure the current position.
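A one-axis sketch of this double integration (illustrative only; a real INS does this in three axes while tracking orientation and subtracting gravity):

    # Integrate sensed acceleration once to get velocity, again to get position.
    def integrate_axis(accel_samples, dt, v0=0.0, x0=0.0):
        v, x = v0, x0
        for a in accel_samples:
            v += a * dt      # first integration: acceleration -> velocity
            x += v * dt      # second integration: velocity -> position
        return v, x

    # Example: 10 s of a constant 0.5 m/s^2 sensed acceleration at 100 Hz.
    v, x = integrate_axis([0.5] * 1000, dt=0.01)
    print(v, x)   # ~5.0 m/s and ~25 m, matching v = a*t and x = a*t^2/2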
Inertial guidance is difficult without computers. The desire to use inertial guidance in the
Minuteman missile and Project Apollo drove early attempts to miniaturize computers.
Inertial guidance systems are now usually combined with satellite navigation systems
through a digital filtering system. The inertial system provides short term data, while the
satellite system corrects accumulated errors of the inertial system.
An inertial guidance system that will operate near the surface of the earth must
incorporate Schuler tuning so that its platform will continue pointing towards the center
of the earth as a vehicle moves from place to place.
Inertial navigation unit of French IRBM S3.
Gimballed gyrostabilized platforms
Some systems place the linear accelerometers on a gimbaled gyrostabilized platform. The
gimbals are a set of three rings, each with a pair of bearings initially at right angles. They
let the platform twist about any rotational axis (or, rather, they let the platform keep the
same orientation while the vehicle rotates around it). There are two gyroscopes (usually)
on the platform.
Two gyroscopes are used to cancel gyroscopic precession, the tendency of a gyroscope to
twist at right angles to an input force. By mounting a pair of gyroscopes (of the same
rotational inertia and spinning at the same speed) at right angles the precessions are
cancelled, and the platform will resist twisting.
This system allows a vehicle's roll, pitch, and yaw angles to be measured directly at the
bearings of the gimbals. Relatively simple electronic circuits can be used to add up the
linear accelerations, because the directions of the linear accelerometers do not change.
The big disadvantage of this scheme is that it uses many expensive precision mechanical
parts. It also has moving parts that can wear out or jam, and is vulnerable to gimbal lock.
The primary guidance system of the Apollo spacecraft used a three-axis gyrostabilized
platform, feeding data to the Apollo Guidance Computer. Maneuvers had to be carefully
planned to avoid gimbal lock.
Fluid-suspended gyrostabilized platforms
Gimbal lock constrains maneuvering, and it would be beneficial to eliminate the slip
rings and bearings of the gimbals. Therefore, some systems use fluid bearings or a
flotation chamber to mount a gyrostabilized platform. These systems can have very high
precisions (e.g. Advanced Inertial Reference Sphere). Like all gyrostabilized platforms,
this system runs well with relatively slow, low-power computers.
The fluid bearings are pads with holes through which pressurized inert gas (such as
Helium) or oil press against the spherical shell of the platform. The fluid bearings are
very slippery, and the spherical platform can turn freely. There are usually four bearing
pads, mounted in a tetrahedral arrangement to support the platform.
In premium systems, the angular sensors are usually specialized transformer coils made
in a strip on a flexible printed circuit board. Several coil strips are mounted on great
circles around the spherical shell of the gyrostabilized platform. Electronics outside the
platform uses similar strip-shaped transformers to read the varying magnetic fields
produced by the transformers wrapped around the spherical platform. Whenever a
magnetic field changes shape, or moves, it will cut the wires of the coils on the external
transformer strips. The cutting generates an electric current in the external strip-shaped
coils, and electronics can measure that current to derive angles.
Cheap systems sometimes use bar codes to sense orientations, and use solar cells or a
single transformer to power the platform. Some small missiles have powered the platform
with light from a window or optic fibers to the motor. A research topic is to suspend the
platform with pressure from exhaust gases. Data is returned to the outside world via the
transformers, or sometimes LEDs communicating with external photodiodes.
Lightweight digital computers permit the system to eliminate the gimbals, creating
"strapdown" systems, so called because their sensors are simply strapped to the vehicle.
This reduces the cost, eliminates gimbal lock, removes the need for some calibrations,
and increases the reliability by eliminating some of the moving parts. Angular rate
sensors called "rate gyros" measure how the angular velocity of the vehicle changes.
A strapdown system has a dynamic measurement range several hundred times that
required by a gimbaled system. That is, it must integrate the vehicle's attitude changes in
pitch, roll and yaw, as well as gross movements. Gimballed systems could usually do
well with update rates of 50 to 60 updates per second. However, strapdown systems
normally update about 2000 times per second. The higher rate is needed to keep the
maximum angular measurement within a practical range for real rate gyros: about 4
milliradians. Most rate gyros are now laser interferometers.
The data updating algorithms ("direction cosines" or "quaternions") involved are too
complex to be accurately performed except by digital electronics. However, digital
computers are now so inexpensive and fast that rate gyro systems can now be practically
used and mass-produced. The Apollo lunar module used a strapdown system in its
backup Abort Guidance System (AGS).
Strapdown systems are nowadays commonly used in commercial and tactical applications
(aircraft, missiles, etc). However they are still not widespread in applications where
superb accuracy is required (like submarine navigation or strategic ICBM guidance).
The orientation of a gyroscope system can sometimes also be inferred simply from its
position history (e.g., GPS). This is, in particular, the case with planes and cars, where the
velocity vector usually implies the orientation of the vehicle body.
For example, Honeywell's Align in Motion is an initialization process where the
initialization occurs while the aircraft is moving, in the air or on the ground. This is
accomplished using GPS and an inertial reasonableness test, thereby allowing
commercial data integrity requirements to be met. This process has been FAA certified to
recover pure INS performance equivalent to stationary align procedures for civilian flight
times up to 18 hours. It avoids the need for gyroscope batteries on aircraft.
Less-expensive navigation systems, intended for use in automobiles, may use a Vibrating
structure gyroscope to detect changes in heading, and the odometer pickup to measure
distance covered along the vehicle's track. This type of system is much less accurate than
a higher-end INS, but it is adequate for the typical automobile application where GPS is
the primary navigation system, and dead reckoning is only needed to fill gaps in GPS
coverage when buildings or terrain block the satellite signals.
Hemispherical Resonator Gyros ("Brandy Snifter Gyros")
If a standing wave is induced in a globular resonant cavity (e.g. a brandy snifter), and
then the snifter is tilted, the waves tend to continue oscillating in the same plane of
movement - they don't fully tilt with the snifter. This trick is used to measure angles.
Instead of brandy snifters, the system uses hollow globes machined from piezoelectric
materials such as quartz. The electrodes to start and sense the waves are evaporated
directly onto the quartz.
This system has almost no moving parts, and is very accurate. However it is still
relatively expensive due to the cost of the precision-ground and polished hollow quartz resonators.
Although successful systems were constructed, and an HRG's kinematics appear capable
of greater accuracy, they never really caught on. Laser gyros were just
more popular.
The classic system is the Delco 130Y Hemispherical Resonator Gyro, developed about
1986.
Quartz rate sensors
This system is usually integrated on a silicon chip. It has two mass-balanced quartz
tuning forks, arranged "handle-to-handle" so forces cancel. Aluminum electrodes
evaporated onto the forks and the underlying chip both drive and sense the motion. The
system is both manufacturable and inexpensive. Since quartz is dimensionally stable, the
system can be accurate.
As the forks are twisted about the axis of the handle, the vibration of the tines tends to
continue in the same plane of motion. This motion has to be resisted by electrostatic
forces from the electrodes under the tines. By measuring the difference in capacitance
between the two tines of a fork, the system can determine the rate of angular motion.
Current state of the art non-military technology (2005) can build small solid state sensors
that can measure human body movements. These devices have no moving parts, and
weigh about 50 grams.
Solid state devices using the same physical principles are used to stabilize images taken
with small cameras or camcorders. These can be extremely small (≈5 mm) and are built
with MEMS (Microelectromechanical Systems) technologies.
Sensors based on magnetohydrodynamic principles can be used to measure angular velocities.
Laser gyros
Laser gyroscopes were supposed to eliminate the bearings in the gyroscopes, and thus the
last bastion of precision machining and moving parts.
A ring laser gyro splits a beam of laser light into two beams in opposite directions
through narrow tunnels in a closed optical circular path around the perimeter of a
triangular block of temperature-stable cervit glass with reflecting mirrors placed in each
corner. When the gyro is rotating at some angular rate, the distance traveled by each
beam becomes different - the shorter path being opposite to the rotation. The phase-shift
between the two beams can be measured by an interferometer, and is proportional to the
rate of rotation (Sagnac effect).
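For a sense of scale, the Sagnac time difference for a loop of enclosed area A rotating at rate Ω is Δt = 4AΩ/c². A short sketch with assumed geometry:

    import math

    # Sagnac time difference dt = 4*A*Omega/c^2 for a loop of enclosed area A.
    A = 0.01                          # enclosed area, m^2 (assumed small ring)
    c = 299_792_458.0                 # speed of light, m/s
    for rate_deg_hr in (0.01, 15.0):  # nav-grade drift spec vs Earth's rotation
        omega = math.radians(rate_deg_hr) / 3600.0    # deg/hr -> rad/s
        print(f"{rate_deg_hr:6.2f} deg/hr -> dt = {4 * A * omega / c**2:.2e} s")
    # The differences are ~1e-23 s, far too small to time directly, which is
    # why rotation is read out as interference (and why lock-in matters).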
In practice, at low rotation rates the output frequency can drop to zero after the result of
"Back scattering" causing the beams to synchronise and lock together. This is known as a
"lock-in, or laser-lock." The result is that there is no change in the interference pattern,
and therefore no measurement change.
To unlock the counter-rotating light beams, laser gyros either have independent light
paths for the two directions (usually in fiber optic gyros), or the laser gyro is mounted on
a piezo-electric dither motor that rapidly vibrates the laser ring back and forth about its
input axis through the lock-in region to decouple the light waves.
The shaker is the most accurate, because both light beams use exactly the same path.
Thus laser gyros retain moving parts, but they do not move as far.
Accelerometers
The basic, open-loop accelerometer consists of a mass attached to a spring. The mass is
constrained to move only in-line with the spring. Acceleration causes deflection of the
mass and the offset distance is measured. The acceleration is derived from the values of
deflection distance, mass, and the spring constant. The system must also be damped to
avoid oscillation. A closed-loop accelerometer achieves higher performance by using a
feedback loop to cancel the deflection, thus keeping the mass nearly stationary.
Whenever the mass deflects, the feedback loop causes an electric coil to apply an equally
negative force on the mass, cancelling the motion. Acceleration is derived from the
amount of negative force applied. Because the mass barely moves, the non-linearities of
the spring and damping system are greatly reduced. In addition, this accelerometer
provides for increased bandwidth past the natural frequency of the sensing element.
Principle of open loop accelerometer. Acceleration in the upward direction causes the mass to deflect downward.
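As a toy illustration of the open-loop relation (all values below are assumed): in equilibrium the spring force kx balances the inertial force ma, so acceleration is just the measured deflection scaled by k/m.

    # Open-loop accelerometer: m*a = k*x, so a = (k/m) * x.
    k = 200.0      # spring constant, N/m (assumed)
    m = 0.01       # proof mass, kg (assumed)
    x = 0.0005     # measured deflection, m (assumed)

    a = (k / m) * x
    print(f"sensed acceleration = {a:.2f} m/s^2")   # 10.00 m/s^2, about 1 g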
Global Positioning System
The Global Positioning System (GPS) is a global navigation satellite system (GNSS)
developed by the United States Department of Defense and managed by the United States
Air Force 50th Space Wing. It is the only fully functional GNSS in the world, can be
used freely, and is often used by civilians for navigation purposes. It uses a constellation
of between 24 and 32 medium Earth orbit satellites that transmit precise radiowave
signals, which allow GPS receivers to determine their current location, the time, and their
velocity. Its official name is NAVSTAR GPS. Although NAVSTAR is not an acronym, a
few backronyms have been created for it.
Since it became fully operational in 1993, GPS has become a widely used aid to
navigation worldwide, and a useful tool for map-making, land surveying, commerce,
scientific uses, and hobbies such as geocaching. Also, the precise time reference is used
in many applications including the scientific study of earthquakes and as a required time
synchronization method for cellular network protocols such as the IS-95 standard for CDMA.
Artist's conception of GPS Block II-F satellite in orbit.
Civilian GPS receiver ("GPS navigation device") in a marine application.
Basic concept of GPS
A GPS receiver calculates its position by precisely timing the signals sent by the GPS
satellites high above the Earth. Each satellite continually transmits messages containing
the time the message was sent, precise orbital information (the ephemeris), and the
general system health and rough orbits of all GPS satellites (the almanac). The receiver
measures the transit time of each message and computes the distance to each satellite.
Geometric trilateration is used to combine these distances with the location of the
satellites to determine the receiver's location. The position is displayed, perhaps with a
moving map display or latitude and longitude; elevation information may be included.
Many GPS units also show derived information such as direction and speed, calculated
from position changes.
It might seem three satellites are enough to solve for position, since space has three
dimensions. However, even a very small clock error multiplied by the very large speed of
light—the speed at which satellite signals propagate—results in a large positional
error. Therefore receivers use four or more satellites to solve for x, y, z, and t, which is
used to correct the receiver's clock. While most GPS applications use the computed
location only and effectively hide the very accurately computed time, it is used in a few
specialized GPS applications such as time transfer, traffic signal timing, and
synchronization of cell phone base stations.
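The scale of the problem is easy to check by multiplying a clock error by c:

    C = 299_792_458.0                  # speed of light, m/s
    for dt in (1e-9, 1e-6, 1e-3):      # 1 ns, 1 us, 1 ms receiver clock errors
        print(f"clock error {dt:g} s -> {C * dt:,.0f} m of range error")
    # Even microsecond-level errors map to hundreds of meters, so t is solved
    # for along with x, y, z instead of relying on a costly receiver clock.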
Although four satellites are required for normal operation, fewer apply in special cases. If
one variable is already known (for example, a ship or plane may have known elevation),
a receiver can determine its position using only three satellites. Some GPS receivers may
use additional clues or assumptions (such as reusing the last known altitude, dead
reckoning, inertial navigation, or including information from the vehicle computer) to
give a degraded position when fewer than four satellites are visible.
Correcting a GPS receiver's clock
The method of calculating position for the case of no errors has been explained. One of
the most significant error sources is the GPS receiver's clock. Because of the very large
value of the speed of light, c, the estimated distances from the GPS receiver to the
satellites, the pseudoranges, are very sensitive to errors in the GPS receiver clock. This
suggests that an extremely accurate and expensive clock is required for the GPS receiver
to work. On the other hand, manufacturers prefer to build inexpensive GPS receivers for
mass markets. The solution for this dilemma is based on the way sphere surfaces intersect
in the GPS problem.
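One standard way to solve the resulting equations is iterative linearization (Gauss-Newton least squares). The sketch below is illustrative rather than a receiver's actual algorithm, and the satellite geometry is invented:

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def solve_position(sat_pos, pseudoranges, iters=10):
        # Estimate receiver position (m) and clock bias (m, i.e. c*dt).
        est = np.zeros(4)
        for _ in range(iters):
            diff = sat_pos - est[:3]
            geom = np.linalg.norm(diff, axis=1)          # geometric ranges
            residual = pseudoranges - (geom + est[3])    # measured - predicted
            H = np.hstack([-diff / geom[:, None],        # d(range)/d(position)
                           np.ones((len(sat_pos), 1))])  # d(range)/d(bias)
            est = est + np.linalg.lstsq(H, residual, rcond=None)[0]
        return est[:3], est[3]

    # Invented geometry: four satellites at ~26,600 km radius, receiver on
    # the surface with a 1 ms clock error.
    sats = np.array([[26600e3, 0, 0],
                     [0, 26600e3, 0],
                     [0, 0, 26600e3],
                     [15400e3, 15400e3, 15400e3]], dtype=float)
    truth = np.array([6371e3, 0.0, 0.0])
    bias = C * 1e-3
    pr = np.linalg.norm(sats - truth, axis=1) + bias
    pos, b = solve_position(sats, pr)
    print(np.round(pos), b / C)   # recovers the position and the 1 ms offset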
The current GPS consists of three major segments. These are the space segment (SS), a
control segment (CS), and a user segment (US).
The space segment (SS) comprises the orbiting GPS satellites, or Space Vehicles (SV) in
GPS parlance. The GPS design originally called for 24 SVs, eight each in three circular
orbital planes, but this was modified to six planes with four satellites each. The orbital
planes are centered on the Earth, not rotating with respect to the distant stars. The six
planes have approximately 55° inclination (tilt relative to Earth's equator) and are
separated by 60° right ascension of the ascending node (angle along the equator from a
reference point to the orbit's intersection). The orbits are arranged so that at least six
satellites are always within line of sight from almost everywhere on Earth's surface.
Orbiting at an altitude of approximately 20,200 kilometers (12,600 miles or 10,900
nautical miles; orbital radius of 26,600 km (16,500 mi or 14,400 NM)), each SV makes
two complete orbits each sidereal day, and about 10 satellites are visible within
line of sight from any point on the surface. The
ground track of each satellite therefore repeats each (sidereal) day. This was very helpful
during development, since even with just four satellites, correct alignment means all four
are visible from one spot for a few hours each day. For military operations, the ground
track repeat can be used to ensure good coverage in combat zones.
As of March 2008, there are 31 actively broadcasting satellites in the GPS constellation,
and two older satellites, retired from active service, kept in the constellation as orbital
spares. The additional satellites improve the precision of GPS receiver calculations by
providing redundant measurements. With the increased number of satellites, the
constellation was changed to a nonuniform arrangement. Such an arrangement was
shown to improve reliability and availability of the system, relative to a uniform system,
when multiple satellites fail.
The flight paths of the satellites are tracked by US Air Force monitoring stations in
Hawaii, Kwajalein, Ascension Island, Diego Garcia, and Colorado Springs, Colorado,
along with monitor stations operated by the National Geospatial-Intelligence Agency
(NGA). The tracking information is sent to the Air Force Space Command's master
control station at Schriever Air Force Base in Colorado Springs, which is operated by the
2nd Space Operations Squadron (2 SOPS) of the United States Air Force (USAF). Then 2
SOPS contacts each GPS satellite regularly with a navigational update (using the ground
antennas at Ascension Island, Diego Garcia, Kwajalein, and Colorado Springs). These
updates synchronize the atomic clocks on board the satellites to within a few
nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital
model. The updates are created by a Kalman filter which uses inputs from the ground
monitoring stations, space weather information, and various other inputs.
Each GPS satellite continuously broadcasts a Navigation Message at 50 bit/s giving the
time-of-week, GPS week number and satellite health information (all transmitted in the
first part of the message), an ephemeris (transmitted in the second part of the message)
and an almanac (later part of the message). The messages are sent in frames, each taking
30 seconds to transmit 1500 bits.
Transmission of each 30-second frame begins precisely on the minute and half minute, as
indicated by the satellite's atomic clock. Each frame contains 5 subframes, each 6
seconds long and 300 bits. Each subframe contains 10 words of 30 bits, each 0.6
seconds long.
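These figures are self-consistent, as a few lines of arithmetic confirm:

    # Navigation-message arithmetic from the figures above (50 bit/s):
    bit_rate = 50                          # bits per second
    frame_bits = 1500
    subframes = 5
    words = 10
    word_bits = 30

    assert subframes * words * word_bits == frame_bits
    print(frame_bits / bit_rate, "s per frame")                 # 30.0
    print(frame_bits / subframes / bit_rate, "s per subframe")  # 6.0
    print(word_bits / bit_rate, "s per word")                   # 0.6
    print(25 * frame_bits / bit_rate / 60, "min for the 25-frame almanac")  # 12.5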
Words 1 and 2 of every subframe have the same type of data. The first word is the
telemetry word which indicates the beginning of a subframe and is used by the receiver to
synch with the navigation message. The second word is the HOW or handover word and
it contains timing information which enables the receiver to identify the subframe and
provides the time the next subframe was sent.
Words 3 through 10 of subframe 1 contain data describing the satellite clock and its
relationship to GPS time. Words 3 through 10 of subframes 2 and 3, contain the
ephemeris data, giving the satellite's own precise orbit. The ephemeris is updated every 2
hours and is generally valid for 4 hours, with provisions for updates every 6 hours or
longer in non-nominal conditions. The time needed to acquire the ephemeris is becoming
a significant element of the delay to first position fix, because, as the hardware becomes
more capable, the time to lock onto the satellite signals shrinks, but the ephemeris data
requires 30 seconds (worst case) before it is received, due to the low data transmission rate.
The almanac consists of coarse orbit and status information for each satellite in the
constellation, an ionospheric model, and information to relate GPS derived time to
Coordinated Universal Time (UTC). Words 3 through 10 of subframes 4 and 5 contain a
new part of the almanac. Each frame contains 1/25th of the almanac, so 12.5 minutes are
required to receive the entire almanac from a single satellite. The almanac serves several
purposes. The first is to assist in the acquisition of satellites at power-up by allowing the
receiver to generate a list of visible satellites based on stored position and time, while an
ephemeris from each satellite is needed to compute position fixes using that satellite. In
older hardware, lack of an almanac in a new receiver would cause long delays before
providing a valid position, because the search for each satellite was a slow process.
Advances in hardware have made the acquisition process much faster, so not having an
almanac is no longer an issue. The second purpose is for relating time derived from the
GPS (called GPS time) to the international time standard of UTC. Finally, the almanac
allows a single-frequency receiver to correct for ionospheric error by using a global
ionospheric model. The corrections are not as accurate as augmentation systems like
WAAS or dual-frequency receivers. However, it is often better than no correction, since
ionospheric error is the largest error source for a single-frequency GPS receiver. An
important thing to note about navigation data is that each satellite transmits not only its
own ephemeris, but transmits an almanac for all satellites.
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276
GHz (L2 signal). The receiver can distinguish the signals from different satellites because
GPS uses a code division multiple access (CDMA) spread-spectrum technique where the
low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence
that is different for each satellite. The receiver knows the PRN codes for each satellite
and can use this to reconstruct the actual message data. The message data is transmitted at
50 bits per second. Two distinct CDMA encodings are used: the coarse/acquisition (C/A)
code (a so-called Gold code) at 1.023 million chips per second, and the precise (P) code
at 10.23 million chips per second. The L1 carrier is modulated by both the C/A and P
codes, while the L2 carrier is only modulated by the P code. The C/A code is public
and used by civilian GPS receivers, while the P code can be encrypted as a so-called P(Y)
code which is only available to military equipment with a proper decryption key. Both
the C/A and P(Y) codes impart the precise time-of-day to the user.
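The C/A Gold code can be sketched with two 10-stage linear feedback shift registers: the satellite-specific code is G1's output XORed with two tapped stages of G2. The tap pairs below cover only PRNs 1 through 5 (values per the public GPS interface specification); this is an illustration, not receiver-grade code:

    # G2 tap pairs for the first few satellites (PRN -> tapped G2 stages).
    G2_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9), 5: (1, 9)}

    def ca_code(prn):
        g1 = [1] * 10
        g2 = [1] * 10
        t1, t2 = G2_TAPS[prn]
        chips = []
        for _ in range(1023):                      # one full code period
            chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
            # G1 feedback taps: stages 3, 10; G2 taps: 2, 3, 6, 8, 9, 10.
            f1 = g1[2] ^ g1[9]
            f2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
            g1 = [f1] + g1[:9]
            g2 = [f2] + g2[:9]
        return chips

    code = ca_code(1)
    print(len(code), code[:6])   # 1023 chips; PRN 1 begins 1 1 0 0 1 0 ...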
In automotive GPS receivers, metallic features in windshields, such as defrosters, or car
window tinting films can act as a Faraday cage, degrading reception just inside the vehicle.
Man-made EMI (electromagnetic interference) can also disrupt, or jam, GPS signals. In
one well documented case, the entire harbor of Moss Landing, California was unable to
receive GPS signals due to unintentional jamming caused by malfunctioning TV antenna
preamplifiers. Intentional jamming is also possible. Generally, stronger signals can
interfere with GPS receivers when they are within radio range, or line of sight. In 2002, a
detailed description of how to build a short range GPS L1 C/A jammer was published in
the online magazine Phrack.
The U.S. government believes that such jammers were used occasionally during the 2001
war in Afghanistan and the U.S. military claimed to destroy six GPS jammers during the
Iraq War, including one that was destroyed ironically with a GPS-guided bomb. Such a
jammer is relatively easy to detect and locate, making it an attractive target for anti-
radiation missiles. The UK Ministry of Defence tested a jamming system in the UK's
West Country on 7 and 8 June 2007.
Some countries allow the use of GPS repeaters to allow for the reception of GPS signals
indoors and in obscured locations; however, under EU and UK laws, the use of these is
prohibited as the signals can cause interference to other GPS receivers that may receive
data from both GPS satellites and the repeater.
Due to the potential for both natural and man-made noise, numerous techniques continue
to be developed to deal with the interference. The first is to not rely on GPS as a sole
source. According to John Ruley, "IFR pilots should have a fallback plan in case of a
GPS malfunction". Receiver Autonomous Integrity Monitoring (RAIM) is a feature now
included in some receivers, which is designed to provide a warning to the user if jamming
or another problem is detected. The U.S. military has also deployed their Selective
Availability / Anti-Spoofing Module (SAASM) in the Defense Advanced GPS Receiver
(DAGR). In demonstration videos, the DAGR is able to detect jamming and maintain its
lock on the encrypted GPS signals during interference which causes civilian receivers to lose their lock.
The accuracy of a calculation can also be improved through precise monitoring and
measuring of the existing GPS signals in additional or alternate ways.
Since SA was turned off, the largest error in GPS is usually the unpredictable
delay through the ionosphere. The spacecraft broadcast ionospheric model parameters,
but errors remain. This is one reason the GPS spacecraft transmit on at least two
frequencies, L1 and L2. Ionospheric delay is a well-defined function of frequency and the
total electron content (TEC) along the path, so measuring the arrival time difference
between the frequencies determines TEC and thus the precise ionospheric delay at each frequency.
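A sketch of the standard dual-frequency correction (the "ionosphere-free" combination; the 5 m delay below is an invented example):

    # Ionospheric group delay scales as 1/f^2, so two pseudoranges at L1 and
    # L2 let the frequency-dependent delay be solved for and removed.
    f1 = 1575.42e6   # L1, Hz
    f2 = 1227.60e6   # L2, Hz

    def iono_free(p1, p2):
        # Combine pseudoranges p1 (L1) and p2 (L2), both in meters.
        return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

    # A 5 m delay on L1 appears as 5 * (f1/f2)^2 ~ 8.2 m on L2:
    true_range = 20200e3
    p1 = true_range + 5.0
    p2 = true_range + 5.0 * (f1 / f2) ** 2
    print(iono_free(p1, p2) - true_range)   # ~0: the delay cancels exactly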
Receivers with decryption keys can decode the P(Y)-code transmitted on both L1 and L2.
However, these keys are reserved for the military and "authorized" agencies and are not
available to the public. Without keys, it is still possible to use a codeless technique to
compare the P(Y) codes on L1 and L2 to gain much of the same error information.
However, this technique is slow, so it is currently limited to specialized surveying
equipment. In the future, additional civilian codes are expected to be transmitted on the
L2 and L5 frequencies (see GPS modernization, below). Then all users will be able to
perform dual-frequency measurements and directly compute ionospheric delay errors.
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). The
error this corrects arises because the pulse transition of the PRN is not
instantaneous, and thus the correlation (satellite-receiver sequence matching) operation is
imperfect. The CPGPS approach utilizes the L1 carrier wave, which has a period one
one-thousandth of the C/A bit period, to act as an additional clock signal and resolve the
uncertainty. The phase difference error in the normal GPS amounts to between 2 and 3
meters (6 to 10 ft) of ambiguity. CPGPS working to within 1% of perfect transition
reduces this error to 3 centimeters (1 inch) of ambiguity. By eliminating this source of
error, CPGPS coupled with DGPS normally realizes between 20 and 30 centimeters (8 to
12 inches) of absolute accuracy.
Relative Kinematic Positioning (RKP) is another approach for a precise GPS-based
positioning system. In this approach, determination of range signal can be resolved to a
precision of less than 10 centimeters (4 in). This is done by resolving the number of
cycles in which the signal is transmitted and received by the receiver. This can be
accomplished by using a combination of differential GPS (DGPS) correction data,
transmitting GPS signal phase information and ambiguity resolution techniques via
statistical tests—possibly with processing in real-time (real-time kinematic positioning, RTK).
While most clocks are synchronized to Coordinated Universal Time (UTC), the atomic
clocks on the satellites are set to GPS time. The difference is that GPS time is not
corrected to match the rotation of the Earth, so it does not contain leap seconds or other
corrections which are periodically added to UTC. GPS time was set to match Coordinated
Universal Time (UTC) in 1980, but has since diverged. The lack of corrections means
that GPS time remains at a constant offset (TAI - GPS = 19 seconds) with International
Atomic Time (TAI). Periodic corrections are performed on the on-board clocks to correct
relativistic effects and keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC, which
as of 2009 is 15 seconds due to the leap second added to UTC December 31 2008.
Receivers subtract this offset from GPS time to calculate UTC and specific timezone
values. New GPS units may not show the correct UTC time until after receiving the UTC
offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight
bits) which, given the current rate of change of the Earth's rotation (with one leap second
introduced approximately every 18 months), should be sufficient to last until
approximately year 2300.
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is
expressed as a week number and a day-of-week number. The week number is transmitted
as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again
every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI)
on January 6 1980, and the week number became zero again for the first time at 23:59:47
UTC on August 21 1999 (00:00:19 TAI on August 22 1999). To determine the current
Gregorian date, a GPS receiver must be provided with the approximate date (to within
3,584 days) to correctly translate the GPS date signal. To address this concern the
modernized GPS navigation message uses a 13-bit field, which only repeats every 8,192
weeks (157 years), thus lasting until year 2137 (157 years after GPS week zero).
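A sketch of the date resolution the text describes, choosing the rollover count that brings the decoded GPS week closest to the supplied approximate date:

    from datetime import date, timedelta

    GPS_EPOCH = date(1980, 1, 6)    # start of GPS week zero

    def gps_date(week, day_of_week, approx_date):
        # The 10-bit week number repeats every 1,024 weeks, so resolve it
        # against an approximate date (good to within 3,584 days) by picking
        # the closest candidate among possible rollover counts.
        candidates = [GPS_EPOCH + timedelta(weeks=week + 1024 * k, days=day_of_week)
                      for k in range(6)]
        return min(candidates, key=lambda d: abs((d - approx_date).days))

    print(gps_date(0, 0, date(1999, 9, 1)))   # 1999-08-22, the first rollover
    print(gps_date(0, 0, date(1980, 2, 1)))   # 1980-01-06, GPS week zero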
Many civilian applications benefit from GPS signals, using one or more of three basic
components of the GPS: absolute location, relative movement, and time transfer.
The ability to determine the receiver's absolute location allows GPS receivers to perform
as a surveying tool or as an aid to navigation. The capacity to determine relative
movement enables a receiver to calculate local velocity and orientation, useful in vessels
or observations of the Earth. Being able to synchronize clocks to exacting standards
enables time transfer, which is critical in large communication and observation systems.
An example is CDMA digital cellular. Each base station has a GPS timing receiver to
synchronize its spreading codes with other base stations to facilitate inter-cell hand off
and support hybrid GPS/CDMA positioning of mobiles for emergency calls and other
applications. Finally, GPS enables researchers to explore the Earth environment including
the atmosphere, ionosphere and gravity field. GPS survey equipment has revolutionized
tectonics by directly measuring the motion of faults in earthquakes.
The US Government controls the export of some civilian receivers. All GPS receivers
capable of functioning above 18 km (60,000 ft) altitude and 515 m/s (1,000 knots)
are classified as munitions (weapons) for which US State Department export licenses are
required. These parameters are clearly chosen to prevent use of a receiver in a ballistic missile. They would not, however, prevent use in a cruise missile, since cruise-missile altitudes and speeds are similar to those of ordinary aircraft.
This rule applies even to otherwise purely civilian units that only receive the L1
frequency and the C/A code and cannot correct for Selective Availability (SA), etc.
Disabling operation above these limits exempts the receiver from classification as a
munition. Different vendors have interpreted these limitations differently. The rule
specifies operation above 18 km and 515 m/s, but some receivers stop operating at 18 km
even when stationary. This has caused problems with some amateur radio balloon launches, as they regularly reach 100,000 feet (30 km).
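A minimal sketch of the lockout logic, illustrating the "both limits" versus "either limit" readings behind those balloon failures (names and structure are ours, not from any vendor's firmware):

```python
ALTITUDE_LIMIT_M = 18_000  # 60,000 ft
SPEED_LIMIT_MPS = 515      # 1,000 knots

def output_allowed(altitude_m: float, speed_mps: float, strict: bool = False) -> bool:
    """Decide whether the receiver may emit a fix under the export limits.

    strict=False disables output only when BOTH limits are exceeded, which
    keeps a stationary balloon at 30 km working; strict=True disables output
    when EITHER limit is exceeded, the interpretation that fails at 18 km
    even at zero speed, as described above.
    """
    if strict:
        return altitude_m <= ALTITUDE_LIMIT_M and speed_mps <= SPEED_LIMIT_MPS
    return not (altitude_m > ALTITUDE_LIMIT_M and speed_mps > SPEED_LIMIT_MPS)
```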
GPS tours are also an example of civilian use. The GPS is used to determine which content to display. For instance, when approaching a monument, it would tell you about the monument.
GPS functionality has now started to move into mobile phones en masse. The first handsets with integrated GPS were launched in the late 1990s and became available to a broader consumer market on networks such as those run by Nextel, Sprint and Verizon in 2002, in response to US FCC mandates for handset positioning in emergency calls. Access to these features for third-party software developers was slower in coming, with Nextel opening up those APIs to any developer upon launch, Sprint following in 2006, and Verizon soon thereafter.
[Photo caption: This antenna is mounted on the roof of a hut containing a scientific experiment needing precise timing.] |
Misbehavior in preschool should be addressed with logical consequences.
Listening, following directions, sharing and getting along with others are a large part of any preschool curriculum. Many children enter preschool with little or no experience practicing social skills. Parents and teachers can do their best to set preschoolers up for success by providing a clear set of expectations and sticking to consistent consequences. Your bright preschooler will soon learn that grabbing another child's toy is unacceptable and that it will be addressed the same way every time.
1. Be proactive. Dr. Susan Campbell, author of "Behavior Problems in Preschool Children," states that the most important thing parents can do to head off poor behavior is to set children up for success. When a child consistently has problems sharing with others, the parent and teacher could talk to him before school to remind him of the importance of sharing. If your child often resorts to hitting siblings out of frustration, talk to your child's teacher before problems begin to arise at school.
2. Be consistent. Classroom routines and discipline procedures reduce ambiguity for children, letting them feel safe and in control of the day. According to Clinical Psychologist Dr. Laura Markham, a consistent routine helps children develop self-discipline. A discipline procedure might begin with a warning and a clear statement about the unacceptable behavior. If the behavior continues, the next step is a time-out, followed by a discussion about how to replace the negative behavior with an acceptable behavior. Tell your child, "Preschoolers don't throw toys because someone could get hurt. Are there items in the room that are made for throwing? Would you like to find a friend to play catch with?" The time-out should then be followed by a natural consequence, such as removal from the area or toy where the trouble took place.
3. Respond immediately. Children need immediate feedback when they act out. When a child breaks a rule, give a warning to the child, telling her exactly what behavior is unacceptable and why. According to the Northeast Foundation for Children (NEFC)'s "Responsive Classroom" educational approach, sometimes all it takes to redirect a misbehavior is to stand near the child. Proximity is a subtle way to let your child know you are watching, without calling her out in front of peers.
4. Give logical consequences. If a child throws a block, he should be removed from the block area and directed toward another area of play. If a child hits another child, he should play alone for 3 to 5 minutes. Logical consequences make sense and work to teach the child the acceptable behavior. A child who throws blocks on Monday and loses his play privileges will likely think twice before throwing them on Tuesday because he wants to play with the blocks. Use time-outs to clarify why the child was put there and what he could do differently in the future.

Tip: The University of Missouri's Diana Milne, regional specialist in human development, reminds parents of the important difference between discipline and punishment. When you discipline your preschooler, you are teaching him self-control and how to take responsibility for his behavior. Punishing a child makes the parent responsible for the child's behavior. |
These three images of Jupiter, taken through the narrow angle camera of NASA's Cassini spacecraft from a distance of 77.6 million kilometers (48.2 million miles) on October 8, reveal more than is apparent to the naked eye through a telescope.
The image on the left was taken through the blue filter. The one in the middle was taken in the ultraviolet. The one on the right was taken in the near infrared.
The blue-light filter is within the part of the electromagnetic spectrum detectable by the human eye. The appearance of Jupiter in this image is, consequently, very familiar. The Great Red Spot (below and to the right of center) and the planet's well-known banded cloud lanes are obvious. The brighter bands of clouds are called zones and are probably composed of ammonia ice particles. The darker bands are called belts and are made dark by particles of unknown composition intermixed with the ammonia ice.
Jupiter's appearance changes dramatically in the ultraviolet and near infrared images. These images are near negatives of each other and illustrate the way in which observations in different wavelength regions can reveal different physical regimes on the planet.
All gases scatter sunlight efficiently at short wavelengths; this is why the sky appears blue on Earth. The effect is even more pronounced in the ultraviolet. The gases in Jupiter's atmosphere, above the clouds, are no different. They scatter strongly in the ultraviolet, making the deep banded cloud layers invisible in the middle image. Only the very high altitude haze appears dark against the bright background. The contrast is reversed in the near infrared, where methane gas, abundant on Jupiter but not on Earth, is strongly absorbing and therefore appears dark. Again the deep clouds are invisible, but now the high altitude haze appears relatively bright against the dark background. High altitude haze is seen over the poles and the equator.
The Great Red Spot, prominent in all images, is obviously a feature whose influence extends high in the atmosphere. As the Cassini cameras continue to return images of Jupiter, it will be possible to construct a three-dimensional picture of how clouds form and evolve by watching the changing appearance of Jupiter in different spectral regions.
JPL manages the Cassini mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena. |
Unlike most newborn creatures, elephants look geriatric right out of the womb, thanks in large part to their loose-fitting, wrinkly skin. But elephants aren't manipulating the system to collect Social Security early – their cracked skin is a clever evolutionary adaptation that protects the animals from the sun's intense rays.
African bush elephants are pachyderms (based on Greek word that means "having thick skin"), a group of large animals like hippos and rhinoceroses. These enormous warm-blooded animals can weigh around 11 tons (9.8 metric tons) and measure up to 13 feet tall (3.9 meters) at the shoulder. In short, it's a lot of flesh and bone, all baking in and absorbing often brutal African heat. And as it turns out, elephants can't sweat. Are you perspiring with sympathy yet?
Michel Milinkovitch, professor at the Department of Genetics and Evolution in the University of Geneva (UNIGE) Faculty of Science and group leader at the SIB Swiss Institute of Bioinformatics, led a team of researchers that went more than skin deep in their studies of pachyderm epidermis. Using light and electron microscopes, along with intricately detailed computer modeling, the researchers were able to determine the cause of the scaly skin. (Their research was published in the journal Nature Communications on Oct. 2, 2018.)
For starters, the scientists found that the crackled appearance of elephant skin is not a sign of aging or skin shrinkage, as is often the case with other species. Rather, it is a purposeful design resulting from the stress of the skin bending. These cracks allow the skin to retain moisture and dirt, which reduces the harmful effects of the sun and prevents wild swings in body temperature. The barrier also wards off some types of pests and parasites.
Elephant skin, unlike human skin, is resistant to shedding, so the layers – particularly the super-tough top layer, the stratum corneum – stick around longer before sloughing off. It also has a lot more keratin (the stuff that makes up fingernails) than human skin, so it's more durable. As this thick hide is subject to everyday movement, like bending and twisting, it quickly wrinkles, with layer upon layer of wrinkly skin serving as a complex system of channels that capture and hold moisture and dirt.
So when you see elephants basking in sloppy pools, spraying water and mud to and fro, they aren't just doing it for the hilarity. The filthy goo settles into the teensy cracks in their skin, some of which are just a micrometer across, about 50 times smaller than the naked human eye can detect. Continually wetted, the skin remains permeable, helping the animals stay cooler.
Interestingly, elephant skin doesn't just randomly wrinkle — it cracks in geometric shapes that approximate other common sights in our world, from drying mud to heat-shattered asphalt, or even geometrically precise rock breakage like the Giant's Causeway in Northern Ireland. The result is a durable cooling system that keeps these gigantic mammals from cooking in their own thick skin on steamy summer days. |
The problem is:
An object is 50000 parsecs from Earth and has an angular diameter of
30 arcseconds. What is the physical width of the object to the nearest tenth of a parsec?
Can you explain, in detail, how to do this problem and what the correct answer is?
A parsec is a unit of length commonly used in astronomy. It is like any other unit, for example, a mile or a kilometre, just a lot longer. In this problem, the angular diameter of the object has nothing to do with the definition of a parsec. 30 arcseconds is the angle subtended by the object on the sky ...
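For completeness, since the excerpt above is cut off, here is the standard small-angle calculation (textbook arithmetic, not the remainder of the original solution). One radian contains 206265 arcseconds, so

$$d = D\,\theta = 50000\ \mathrm{pc} \times \frac{30}{206265} \approx 7.3\ \mathrm{pc},$$

i.e. the physical width of the object is about 7.3 parsecs.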
Detailed explanations are given about how to relate visible angular and actual linear sizes of objects observed on the sky. The difficulty is in using small angles, such as arcseconds, common in astronomy.
Algebra is the language through which we describe patterns. Think of it as a shorthand, of sorts. As opposed to having to do something over and over again, algebra gives you a simple way to express that repetitive process. It's also seen as a "gatekeeper" subject. Once you achieve an understanding of algebra, the higher-level math subjects become accessible to you. Without it, it's impossible to move forward. It's used by people with lots of different jobs, like carpentry, engineering, and fashion design. In these tutorials, we'll cover a lot of ground. Some of the topics include linear equations, linear inequalities, linear functions, systems of equations, factoring expressions, quadratic expressions, exponents, functions, and ratios.
Introduction to algebra
Videos exploring why algebra was developed and how it helps us explain our world.
We will now equate two algebraic expressions and think about how it might constrain what value the variables can take on. The algebraic manipulation you learn here really is the heart of algebra.
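For instance (a quick example of ours, not from the course listing itself), equating two expressions pins the variable down to a single value:

$$3x + 2 = 11 \;\Rightarrow\; 3x = 9 \;\Rightarrow\; x = 3.$$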
Exploring a world where both sides aren't equal anymore!
Graphing and analyzing linear functions
Use the power of algebra to understand and interpret points and lines (something we typically do in geometry). This will include slope and the equation of a line.
Systems of equations and inequalities
Solving a system of equations or inequalities in two variables by elimination, substitution, and graphing.
Multiplying and factoring expressions
This topic will add a ton of tools to your algebraic toolbox. You'll be able to multiply any expression and learn to factor a bunch as well. This will allow you to solve a broad array of problems in algebra.
In this topic, we'll analyze, graph and solve quadratic equations.
Exponent expressions and equations
Solving exponential and radical expressions and equations. Using scientific notation and significant figures.
Identifying, solving, and graphing various types of functions.
Ratios and proportions
What ratios and proportions are. Using them to solve problems in the real world. |
6th Grade Curriculum
Sixth Grade Math is designed to enhance the developing math skills of the young middle school student. Various concepts and problem-solving strategies will be visited throughout the course. Sixth-grade students will begin with a review of the properties of the different mathematics operations and order of operations. They will then explore decimals and fractions, begin to evaluate expressions and equations and become more adept at solving more complex math problems. During the second semester, students in Math 6 will complete units of study involving ratio and percent, integers and rational numbers, coordinate planes, geometry and measurement and analyzing and graphing data.
The utilization of classroom practice, technology and real-life applications of the many ways these mathematical concepts are consistently used will provide students various opportunities for learning and deepening their math knowledge. Hands-on activities and projects will also be used, helping students expand the use of mathematics skill, build confidence and increase mathematical fluency.
World Geography 6
In Sixth Grade World Geography, students journey through a comprehensive study of people, places and cultures around the world. Students will participate in units of study containing major themes linked to the six essential elements of geography: 1) The world in spatial terms, 2) Places and regions, 3) Physical systems, 4) Human systems, 5) Environment and society and 6) Uses of geography. The topics of study in World Geography 6 will include Earth’s physical geography and map reading. Within these topics, students will explore the past and present physical geography, cultures and lifestyles of The United States, Canada, Latin America, Europe, Australia, Africa and Asia.
World Geography 6 involves active learning through projects, frequent use of technology and amazing virtual field trips that allow students to truly experience the areas of the world that they study. As students complete this course, they will gain a foundational knowledge of geography that they can build upon as they progress through their educational career. They will also have a better understanding of the huge role geography plays in the world in which we live.
Science 6
This course assists students in becoming lifelong learners who grow in their understanding of the world. The nature of science includes these concepts: scientific explanations are based on logical thinking; are subject to rules of evidence; are consistent with observational, inferential, and experimental evidence; are open to rational critique; and are subject to refinement and change with the addition of new scientific evidence. Through the sixth grade science course, students will develop a deeper understanding of science and the scientific process through several topics including: Matter and Its interactions, Motion and Stability, Energy, Waves and Their Applications, Ecosystems, Earth’s Place in the Universe, Earth’s Systems, Earth and Human Activity, and Engineering Design.
The best way to learn science is by actively participating in science. Students will become more fluent in science through reading, writing and speaking. They will also enhance their knowledge and understanding through a variety of field experiences designed to guide students to make observations, ask questions, test solutions, give feedback, and make modifications.
In English 6, students will begin to experience more sophisticated pieces of literature for study and analysis. Students will complete units of study in which they will be reading both informational and literary texts. This course will also guide students to acquire and use new vocabulary. Students will also complete units of study to improve writing skills which will include working on grammar and development of more complex sentences. These skills will be utilized as students begin to produce larger pieces of writing such as narratives, argumentative pieces, informational pieces, and both formal and informal responses to literature. Students will also participate in the Accelerated Reader program to continue to enhance their reading and comprehension skills.
The student will work in both whole and small group settings as well as individually to examine literary works and to create their own pieces. Various projects and activities will be used throughout the course to provide opportunities for all students to gain a deeper understanding and enjoyment of reading and writing.
Introduction to Music 6
This course is offered to students in 6th grade regardless of previous musical experiences. The course allows students to explore and experience the importance of rhythm in all types of music from different cultures, periods and genres. The curriculum emphasizes the basics of reading music and music theory. Students will participate in activities and lessons, allowing them the opportunity to explore different forms of music, including singing, playing instruments, listening to varying types of music and creating musical lyrics and rhythms.
Introduction to Art 6
This elective is a six-week course. The lessons it includes are targeted toward focused study and rigorous participation in making sense of art. An increased understanding of fine arts will be developed through critical and abstract thinking. Students will be guided through a variety of exercises to assist them in expressing themselves creatively and artistically through different mediums. As students participate in the activities and projects provided in this course, they will begin to develop as individual artists through discussion and critique. In this course, students will also complete a unit in which they will briefly examine architecture as art and how it has made a mark throughout history.
Physical Education is a semester-long course that is designed to promote physical activity, teamwork, self-motivation, self-discipline and increased development of body movement and athletic ability. Students are beginning to mature in their ability to perform various athletic tasks and movements. In this course, students will demonstrate more mature motor movement, perform more complex rhythmic skills and apply movement forms to developing motor skills. Along with athletic talent and increased motor skills, Physical Education focuses on good sportsmanship, building students’ confidence in their abilities and their willingness to try new things. Students will improve fitness levels through cardiovascular conditioning, general body strength and flexibility.
Positive communication skills are necessary for student success. This class will focus on key listening, speaking and non-verbal skills that will help to provide a solid, life-long foundation. Special attention will be given to public speaking, as well as giving and receiving feedback. |
Concept and Definition of Inflation
In general, inflation means the increase in general price level or decrease in purchasing power of money (or value of money). Inflation usually refers to a general rise in the level of prices of goods and services over a period of time. This is also referred to as price inflation. The term inflation originally referred to the debasement of currency, and was used to describe increases in the money supply; however, debates regarding cause and effect have led to its primary use today in describing price inflation. Inflation can also be described as a decline in the real value of money. When the general level of prices rises, each monetary unit buys fewer goods and services. Inflation is measured by calculating the inflation rate, which is the percentage rate of change for a price index, such as the consumer price index.
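A minimal sketch of the rate calculation just described (the index numbers are invented for the example):

```python
def inflation_rate(index_prev: float, index_now: float) -> float:
    """Percentage change in a price index, e.g. the CPI, between two periods."""
    return (index_now - index_prev) / index_prev * 100.0

# If the CPI moves from 100.0 to 104.5 over a year, inflation is 4.5%:
print(inflation_rate(100.0, 104.5))  # -> 4.5
```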
Economists generally agree that high rates of inflation and hyperinflation are caused by high growth rates of the money supply. Views on the factors that determine moderate (neither large nor small) rates of inflation are more varied. However, there is general consensus that in the long run, inflation is caused by the money supply increasing faster than the growth rate of the economy.
Inflation originally referred to the debasement of the currency, where gold coins were collected by the government, melted down, mixed with other metals and reissued at the same nominal value. By mixing gold with other metals, the government could increase the total number of coins issued using the same amount of gold. However, this action increased the money supply and lowered the relative value of money. As the real value of each coin had decreased, the consumer had to pay more coins in exchange for goods and services of the same value. In the 19th century, the word inflation started to appear as a direct reference to the action of increasing the amount of currency units by the central bank.
In some schools of economics, and particularly in the United States in the 19th century, inflation originally was used to refer to an increase of the money supply, while deflation meant decreasing it. However, classical political economists from Hume to Ricardo did distinguish between and debate the cause and effect: Bullionists, for example, argued that the Bank of England had over-issued banknotes and caused the depreciation of bank notes (price inflation).
There are many definitions of inflation. By inflation most people understand a sustained and substantial rise in prices. For example, Crowther defines inflation as a state in which the value of money is falling, i.e., prices are “rising”. According to Harry G. Johnson, “we define inflation as substantial increase in prices”. Milton Friedman writes: “By inflation I shall mean a steady and sustained rise in prices.”
According to these definitions, inflation is a process of change in the economy which has the following features:
(1) There is an abnormal rise (more than 8%) in the price level;
(2) The price level rises continuously for over 2-3 years;
(3) Too much money chases too few goods. That is, there is an excessive supply of money in relation to the supply of goods and services demanded by the people.
Types of Inflation
Inflation is not of the same type in all countries. In some countries prices increase very slowly, while in others they increase very rapidly. Therefore, experience tells us that there are many types of inflation. We explain below the various ways in which these types of inflation can be classified. This classification is based on different considerations.
(1) On the basis of rapidity with which prices rise, inflation may be classified into four types:
(a) Creeping Inflation: It is the mildest form of inflation and is not considered to have any dangerous effects on the economy. A slow price increase is called creeping inflation, in which prices rise by not more than 3% per annum.
(b) Walking Inflation: The next stage, from the point of view of rapidity of inflation, is walking inflation. When creeping inflation continues over a decade and prices rise by 30% to 40%, this may be described as walking inflation.
(c) Running Inflation: When the speed of the price rise increases further, walking inflation gets converted into running inflation. Running inflation would record a 100% increase in prices over a period of ten years.
(d) Galloping or Hyper-inflation: In galloping inflation, prices rise every moment. Prices may rise by 100% within a year. Hyper-inflation is an indication of serious disequilibrium in the economic system.
The four stages of inflation given above can be illustrated graphically as follows:
In the figure, the time period is measured along the X-axis and the inflation rate along the Y-axis. ‘C’ represents creeping inflation, in which it takes around 10 years for prices to increase by 10%. ‘W’ represents walking inflation, in which prices increase by 30% to 40% over a period of 10 years. ‘R’ represents running inflation, in which prices increase by 100% in 10 years. ‘H’ represents hyper or galloping inflation, in which prices increase by 100% within one year.
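To make the four stages comparable, the figure's decade-level numbers can be restated as approximate annualized rates; the compounding arithmetic below is our illustration, not part of the original text:

```python
# (stage, total price rise over the period, period length in years)
stages = [("creeping", 0.10, 10), ("walking", 0.40, 10),
          ("running", 1.00, 10), ("hyper", 1.00, 1)]

for name, total_rise, years in stages:
    annual = (1 + total_rise) ** (1 / years) - 1  # compound annual rate
    print(f"{name}: about {annual:.1%} per year")
# creeping ~1.0%, walking ~3.4%, running ~7.2%, hyper 100.0%
```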
(2) On the basis of the processes through which it is induced, inflation may be classified into three types;
(a) Deficit-induced inflation: Deficit-induced inflation is that which is the result of continuous deficit financing by the government.
(b) Wage-induced inflation: Wage-induced inflation is that which is caused by a faster increase in money wages than the productivity increases allow.
(c) Profit-induced inflation: profit-induced inflation is that which is due to a sustained increase in the profits of the manufacturers due to monopoly influences.
(3) On the Basis of Time, inflation may also be classified into three types;
(a) War-time inflation: War-time inflation emerges when the government spends more than its revenue. The government uses a considerable portion of available output. Hence, the supply available to the civilian population shifts downward, giving rise to an inflationary gap.
(b) Post-war inflation: Post-war inflation occurs because the government may withdraw war-time taxes. This would add to the disposable income of the community. Besides this, the excess liquidity accumulated during war-time might manifest itself into excessive demand and therefore, lead to inflation.
(c) Peace-time Inflation: Inflation in normal times is called peace-time inflation. For example, the inflation experienced in under-developed countries due to excessive aggregate expenditures is peace-time inflation.
(4) On the basis of scope, inflation is considered to be of two types: comprehensive inflation and sporadic inflation. When there is a general rise in prices and the prices of all commodities increase, this is comprehensive inflation. Whereas comprehensive inflation is economy-wide, sporadic inflation is sectoral in nature. Sporadic inflation results when aggregate supply is limited by physical conditions and cannot be expanded quickly.
(5) Inflation may also be classified into ‘open’ inflation and ‘suppressed’ inflation. In open inflation, prices rise freely without any government intervention. A suppressed inflation is one in which the government attempts to suppress the manifestation of inflationary pressures by controlling prices, exchange rates and credit creation by banks. The hyperinflations of the 1920s in Germany, Hungary and Russia were examples of open inflation. Inflation in Germany after the Second World War was an example of suppressed inflation.
Causes of Inflation
In the long run inflation is generally believed to be a monetary phenomenon, while in the short and medium term it is influenced by the relative elasticity of wages, prices and interest rates.
A great deal of economic literature concerns the question of what causes inflation and what effect it has. There are different schools of thought as to what causes inflation. However, inflation in the economy will occur due to two factors, i.e., increase in effective demand and increase in production cost. The inflation caused by the increase in demand is known as demand-pull inflation and on the other hand, the inflation caused by the increase in cost of production is known as cost-push inflation. There are many other factors within these two reasons which are explained below.
- Demand-pull Inflation
Demand-pull inflation is caused by an increase in aggregate demand due to increased private and government spending, etc. Demand-pull inflation is conducive to a faster rate of economic growth, since the excess demand and favorable market conditions will stimulate investment and expansion. Demand-pull inflation occurs when aggregate demand exceeds the available supply of goods and services. The process works as follows: when the supply of money increases, the rate of interest falls. This increases investment, which increases money income. As a result, expenditure on consumption goods and investment expenditure increase. Due to this, demand grows faster than supply, and demand-pull inflation occurs. This can be illustrated with the help of the following figure:
In the figure given above, ‘AS’ is the given aggregate supply curve of the economy, and each aggregate demand curve (AD0 to AD4) shows the level of aggregate demand associated with rising levels of money income in the economy. When aggregate demand increases from AD0 to AD2, output as well as prices increase. But as the level of full employment is reached at Qf and the supply curve becomes perfectly inelastic, increases in income beyond AD2 lead to what Keynes termed “true inflation”. The rise in prices up to P2 is called “bottleneck inflation”, which is due to imbalances, shortages and rising costs in the economy as the level of full employment is approached. Beyond the point E, the aggregate supply function is assumed to be vertical and prices rise directly with increases in the level of money income.
According to this concept, the inflation occurs when the demand further increases after reaching the state of full employment. In such situation aggregate demand will be more than the aggregate supply because after the state of full employment supply won’t increase despite the increase in demand. Thus, price of goods will increase. This is known as demand-pull inflation.
Causes of demand-pull inflation
Demand-pull inflation is caused by the following factors:
(i) Increase in the quantity of money:
The demand for goods increases rapidly when the quantity of money increases rapidly in the economy. So, an increase in demand due to an increase in the quantity of money creates demand-pull inflation.
(ii) Increase in public expenditure:
In modern times, the government spends more than its revenue due to the increase in government activities. This creates a fiscal deficit. Some part of the deficit is met by the government by printing new notes. On account of this, the money supply expands and inflation occurs.
(iii) Reduction in taxation:
If the government reduces taxes, households are left with more disposable income in their pockets. This leads to increased consumer spending, thus raising aggregate demand and eventually causing demand-pull inflation.
(iv) Shortage of goods and services:
The price level increases when the supply of goods and services is low in relation to demand. Production and supply decrease due to scarcity of factors of production, hoarding by businessmen, natural calamity, scarcity of raw materials, etc.
(v) Redistribution of income:
Redistribution of income will increase the demand for goods and services. This is because an increase in the income of the people will increase their propensity to consume. Increased consumption will increase demand, which will lead to a rise in prices, and demand-pull inflation will result.
- Cost-Push Inflation
Cost-push inflation is also called “supply shock inflation”, caused by drops in aggregate supply due to increased prices of inputs, for example. Take for instance a sudden decrease in the supply of oil which would increase oil prices. Producers for whom oil is a part of their costs could then pass this on to consumers in the form of increased prices.
Inflation is also caused by an increase in the cost of production. As a result of the increase in the cost of production, aggregate supply declines in relation to the existing demand for goods and services. Thus, the inflation that occurs due to the pressure of costs is called cost-push inflation. Cost-push inflation can be illustrated with the help of the following figure:
In the above diagram, aggregate output is shown on the horizontal axis and the price level on the vertical axis; ‘AS’ is the aggregate supply curve and ‘AD’ is the aggregate demand curve. Let us suppose that the economy is in equilibrium at the full-employment level at the point where output is Q0. The corresponding price level is P0. Now, let us further suppose that there is an upward shift of costs of production from the position AS0 to AS1. If money income remains at the same level, equilibrium output will fall to Q1 and the price level will rise to P1. Similarly, if the supply function assumes the position AS2, output will diminish to Q2 and prices will be pushed up to P2. This rise in the price level is commonly known as cost-push inflation.
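The figure's logic can be checked with a toy linear model (all numbers invented by us): shifting the supply intercept upward lowers equilibrium output and raises the price level, exactly the cost-push pattern described above.

```python
# Toy linear AD/AS model: demand P = 100 - Q; supply P = a + Q.
# Equilibrium: 100 - Q = a + Q  =>  Q = (100 - a) / 2.
def equilibrium(supply_intercept: float) -> tuple[float, float]:
    q = (100 - supply_intercept) / 2
    p = 100 - q
    return q, p

print(equilibrium(20))  # (40.0, 60.0): output and price before the cost shock
print(equilibrium(40))  # (30.0, 70.0): lower output, higher price after
```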
The causes of cost-push inflation are explained as follows:
(1) Wage-induced inflation (wage-push inflation):
Wage-induced inflation is caused by the use of the bargaining power of trade unions to raise per-unit wage costs. Where trade unions have strong bargaining power, they are able to get wage rates pushed up even when workers’ productivity does not rise. Such pushes will lead to an autonomous shift in the cost of production even if aggregate demand and the level of income remain unchanged. When wages are increased without any corresponding rise in productivity, the resultant upward shift in the aggregate supply function will lead to cost-push inflation.
(2) Profit-induced inflation (profit-push inflation):
Another cause of cost-push inflation is the profit-push. It can occur only under imperfectly competitive markets. Where the monopolists and the oligopolists raise prices of their products more than the increase in cost, this may lead to cost-push inflation.
(3) International Reasons (Supply-Shock inflation):
Every country of the world has some kind of business or economic relationship with other countries. Countries like Nepal are dependent on foreign countries for construction materials, raw materials and everything including consumer goods. Hence, if the prices of these goods increase in foreign countries, prices in Nepal automatically increase as well. If inflation occurs due to prices raised by foreign suppliers, such as the price of petroleum set by OPEC, it is known as ‘supply-shock’ inflation.
Effects of Inflation
An increase in the general level of prices implies a decrease in the real value of money. That is, when the general level of prices rises, each monetary unit buys fewer goods and services. Many people are willing to accept inflation if it can provide full employment, because then it seems a small price to pay. But we should not underestimate the socially unacceptable effects of inflation. It is true that unemployment is an economic as well as a social evil, but this should not blind us to the evils of inflation. We explain below how inflation affects the economic, social, political and moral life of the people.
In general, high or unpredictable inflation rates are regarded as bad for following reasons:
Uncertainty about future inflation may discourage investment and saving.
Inflation redistributes income from those on fixed incomes, such as pensioners, and shifts it to those who draw a variable income, for example from wages and current profits, which may keep pace with inflation. The real value of retained profits is eroded at the rate of inflation, since historical cost balances stay fixed just like pensioners’ fixed incomes. Debtors, however, may be helped by inflation through the reduction of the real value of their debt burden (see the sketch after this list).
- International Trade:
Where fixed exchange rates are imposed, higher inflation than in trading partners economies will make exports more expensive and tend toward a weakening balance of trade. A sustained higher level of inflation than in the trading partners economies will also, over the long-run, put upward pressure on the implicit exchange rate making the fix unsustainable and potentially inviting an exchange rate crisis.
- Cost-push inflation:
Rising inflation can prompt trade unions to demand higher wages, to keep up with consumer prices. Rising wages in turn can help fuel inflation. In the case of collective bargaining, wages will be set as a factor of price expectations. This will be higher when inflation has an upward trend. This can cause a wage spiral. In a sense, inflation begets further inflationary expectations.
- Hoarding: People buy consumer durables as stores of wealth, in the absence of viable alternatives, as a means of getting rid of excess cash before it is devalued, creating shortages of the hoarded objects.
- Hyperinflation: If inflation gets totally out of control, it can grossly interfere with the normal working of the economy, hurting its ability to supply goods and services.
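The fixed-income redistribution noted in the list above can be made concrete with a small sketch (numbers invented): at a steady inflation rate, the purchasing power of a fixed nominal payment shrinks geometrically.

```python
def real_value(nominal: float, annual_inflation: float, years: int) -> float:
    """Purchasing power of a fixed nominal amount after `years` of inflation."""
    return nominal / (1 + annual_inflation) ** years

# A fixed pension of 100 units loses about a third of its purchasing
# power after five years of 8% inflation:
print(round(real_value(100.0, 0.08, 5), 1))  # -> 68.1
```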
Some possibly positive effects of inflation include:
Keynesians believe that nominal wages are slow to adjust downwards. This can lead to prolonged disequilibrium and high unemployment in the labour market. Since inflation would lower the real wage if nominal wages are kept constant, Keynesians argue that some inflation is good for the economy, as it would allow labour markets to reach equilibrium faster.
The primary tools for controlling the money supply are the ability to set the discount rate, the rate at which banks can borrow from the central bank, and open market operations, which are the central bank’s interventions in the bond market with the aim of affecting the nominal interest rate. If an economy finds itself in a recession with already low, or even zero, nominal interest rates, then the bank cannot cut these rates further in order to stimulate the economy; this situation is known as a liquidity trap. A moderate level of inflation tends to ensure that nominal interest rates stay sufficiently above zero so that, if the need arises, the bank can cut the nominal rate to stimulate the economy.
A fundamental concept in inflation analysis is the relationship between inflation and unemployment, called the Phillips curve. This model suggests that there is a trade-off between price stability and employment. Therefore, some level of inflation could be considered desirable in order to minimize unemployment.
Remedies or measures to Control Inflation
Today most central banks are tasked with keeping inflation at a low level, normally 2 to 3% per annum, within a targeted low-inflation band that may run from 2 to 6% per annum.
It has been noted that inflation inflicts much suffering on the helpless people and disrupts society economically, socially, morally and politically. Hence, there is need for controlling inflation. There are a number of methods that have been suggested to control inflation. Control of inflation requires an integrated set of measures which may be classified as monetary, fiscal, direct controls and other measures.
- Monetary measures:
The central bank should use both quantitative and qualitative techniques of credit control in order to achieve the objective of controlled expansion of credit. Central banks such as the U.S. Federal Reserve can affect inflation to a significant extent through setting interest rates and through other operations (i.e., using monetary policy). High interest rates and slow growth of the money supply are the traditional ways through which central banks fight or prevent inflation, though different banks take different approaches. For instance, some follow a symmetrical inflation target while others only control inflation when it rises above a target, whether express or implied.
Monetarists emphasize increasing interest rates to fight inflation. Keynesians emphasize reducing demand in general, often through fiscal policy, using increased taxation or reduced government spending to reduce demand, as well as by using monetary policy. Supply-side economists advocate fighting inflation by fixing the exchange rate between the currency and some reference currency such as gold; this would be a return to the gold standard. All of these policies are achieved in practice through a process of open market operations: the bank rate may be raised, securities may be sold in the open market, and even reserve ratios may be increased. These steps would curb credit expansion by the commercial banks and hence control inflationary pressures in the economy.
However, anti-inflationary monetary policy suffers from certain limitations. First, since the marginal efficiency of investment is very high during an inflationary period, investment may become interest-inelastic. Besides this, if the banks have excess reserves, the central banking techniques of monetary management may not be of much use in combating inflation. Similarly, if there is deficit-induced inflation, the Central Bank can hardly do anything to curtail the excessive monetary demand which has arisen due to structural deficiencies.
Notwithstanding its limitations, monetary policy has a role. It can assist in the expansion of productive sectors of the economy and restrict speculative inventory build-ups. In short, monetary measures should be used along with other measures to keep inflation under control, even though their role is a relatively modest one.
- Fiscal Measures
The policy related to public expenditure, public revenue and public debt is known as fiscal policy. The main anti-inflationary fiscal measures are as follows:
(1) Reduction in public expenditure:
Increase in the volume of public expenditure can contribute much to inflation by increasing the disposable income in the hands of the people. Moreover, public expenditure, being autonomous in nature, has a multiplier effect on the levels of income, output and employment of the country. Therefore, reduction in government spending is bound to reduce inflationary pressure.
(2) Increase in Taxes:
Mobilization of additional resources in the form of higher taxation also helps in combating inflation. As more taxes are imposed, the size of disposable income is reduced and thus the inflationary gap is narrowed down. All this effort will help in reducing the inflationary pressures in the country.
(3) Control over Deficit Financing:
Deficit financing has been held to be the root cause of inflation in many countries. Excessive deficit financing and the resultant increase in money supply often lead to inflationary pressures. Therefore, the government should keep its deficits to the minimum possible when the economy is threatened with inflation.
(4) Increase in public Borrowings:
During an inflationary period, the government may launch a campaign to increase savings and thus reduce the extra purchasing power. The government may offer the people bonds which bear attractive interest rates. If the price rise assumes alarming proportions, the government may force the people to save some portion of their incomes compulsorily. Ultimately, these activities of the government control inflation. Though they have some limitations, fiscal measures are important instruments of anti-inflationary strategy. When used in co-ordination with monetary policy, fiscal policy serves a very useful purpose in fighting inflation.
- Other Measures
Apart from monetary and fiscal measures, direct controls and other measures may also be used to control inflation. Direct measures may be both voluntary and compulsory. The people may be persuaded to save more by restraining expenditure on inessential consumer goods. But direct measures may also contain an element of compulsion. The government may make certain compulsory deductions from salaries and wages and credit them to employees’ savings fund accounts. Thus, a part of the purchasing power of the people can be kept blocked as long as the inflationary pressures are not relieved.
Direct measures also include price controls and rationing. Price control and rationing may prove helpful in arresting inflationary pressures only if an efficient public distribution system exists. Shortages of essential consumer goods will encourage black market transactions and the scope of black money will also increase. Therefore, direct controls must be accompanied by an active enforcement mechanism.
During a period of galloping inflation, an appropriate incomes policy may also have to be adopted. Ceilings on wage payments and also on dividend payments may be imposed. These policies will keep down disposable income and narrow the inflationary gap. On the other side, checks on wages and profits keep the cost of production low, and hence cost-push inflation will be checked.
The ultimate remedy against inflation lies in increasing production in all the sectors, for anti-inflation measures are only short-run measures. Intensive farming and full use of industrial capacity can go a long way in controlling inflation. In fact, it is the effective remedy for inflation.
The contribution of non-economic factors in combating inflation should also not be minimized. Peaceful atmosphere, political stability, firm and dynamic leadership, efficient and honest administration, promoting standards of efficiency and sense of responsible citizenship are the prerequisites for any sound policy to combat inflation.
Thus, we find that in order to control inflation we need a multipronged policy co-ordinating short-period and long-period measures. Only then can we hope to keep inflation under control. |
Fact Sheets And Publications
Controlling Backyard Invaders
What are Invasive Plants?
Invasive plants quickly overwhelm and displace existing native plants by reducing the availability of light, water, nutrients and space. They have few, if any, natural controls to keep them in check. Ecologists now rank invasion by exotic plants, animals, and pathogens second only to habitat loss as a major threat to local biodiversity.
Invasive plants may be introduced by accident or intentionally to control erosion, provide wildlife food and habitat, or for ornamental value in gardens. Accidental introductions occur when people and goods travel worldwide. Packing material can harbor seeds or plant parts. Japanese stilt grass, now a widely escaped groundcover in woodland edges, is a prime example.
Invasive plants can be divided into two categories—(1) plants that were introduced either intentionally or accidentally but are no longer sold (e.g., multiflora rose, stilt grass) and (2) ornamental plants still grown and sold.
This brochure focuses primarily on invasive plants no longer sold. The goal is to guide home and property owners in the identification and control of aggressive invasive plants. Not only will control improve the diversity of native and non-invasive plants, but it will also improve habitat and help prevent the spread of invasive plants to neighboring areas.
UD Cooperative Extension |
Why are most craters circular (even craters found on Earth)? By hurling objects together at many miles per second in large laboratories, scientists have shown that only the most oblique impacts (less than 10° from the horizon) produce elliptical craters. The kinetic energy of an impactor behaves much like the energy from a nuclear bomb. The energy is transferred to the target material by a shock wave, and shock waves produced by an impact, whether oblique or head-on, propagate hemispherically. This shape means that energy is being delivered equally in all directions, resulting in a hemispherical void and thus circular craters. However, conditions in nature do not always mirror the laboratory. In fact, some craters are nearly square! A portion of the rim of Lavoisier A crater tells a story of the geology before impact. Lavoisier A is a square-ish crater with a diameter of ~26 km (16 miles) found in the northwestern portion of Oceanus Procellarum.
Much of Lavoisier A's shape is thought to be due to preexisting joints or faults in the target rock. These discontinuities create zones of weakness, affecting how the shock wave travels through the material. We find square craters on other planetary bodies such as on the asteroid Eros and here on Earth! An example of a square crater that has been thoroughly studied is Meteor Crater in Arizona. This crater formed on layers of sedimentary rocks that have orthogonal vertical joints running below where the crater formed. The joints disrupted the shock wave flow in certain directions, preventing the formation of a circular crater. Another indication of weaknesses within the target layers is the appearance of the northeastern portion of the crater rim. It appears as if a layer of rock has been peeled back.
Can you find evidence of pre-impact fracturing (square boundaries) in the full-resolution NAC image? |
Long before the terms Native American or Indian were considered, the tribes were spread throughout the Americas. Before any white man set foot on this territory, it was settled by the forefathers of bands we now call Sioux, or Cherokee, or Iroquois.
For centuries, the American Indian developed its traditions and heritage without interference. And that history is captivating.
From Mayan and Incan ruins, and from the mounds left in the central and southern parts of what is now the U.S., we have learned quite a bit. It’s a story of beautiful arts and crafts and deep spirituality. Archaeologists have unearthed remarkably advanced structures and public works.
While there was inevitable tribal conflict, that was simply a slight blemish in the experience of our forebears. They were at peace with this beautiful continent and deeply plugged into nature.
The European Settler Arrives
When European leaders dispatched the first vessels in our direction, the intention was to discover new resources – but the quality of environment and the bounty of everything from timber to wildlife subsequently changed their tune. As those leaders learned from their explorers, the motivation to colonize spread like wildfire.
The English, French and Spanish raced to carve up the “New World” by shipping over inadequately prepared colonists as fast as they could. In the beginning, they skirmished with the alarmed Indians of America’s eastern seaboard. But that soon gave way to trade, because the Europeans who landed here understood their survival was doubtful without native help.
Thus followed decades of relative peace as the settlers got themselves established on American soil. But the drive to push inland followed soon after. Kings and queens from thousands of miles away were restless to find even more resources, and some colonists came for freedom and opportunity.
They wanted more space. And so began the process of forcing the American Indian out of the way.
It took the shape of cash payments, barter, and famously, treaties that were almost consistently neglected once the Indians were pushed off the territory in question.
The U.S. government’s policies towards Native Americans in the second half of the nineteenth century were motivated by the desire to expand westward into areas occupied by these Native American tribes. By the 1850s nearly all Native American tribes, roughly 360,000 in number, lived to the west of the Mississippi River. These American Indians, some from the Northwestern and Southeastern territories, were confined to Indian Territory situated in present day Oklahoma, while the Kiowa and Comanche Native American tribes shared the land of the Southern Plains.
The Sioux, Crows and Blackfeet dominated the Northern Plains. These Native American groups experienced adversity as the constant stream of European immigrants into northeastern American cities pushed settlers into the western lands already populated by these various groups of Indians.
The early nineteenth century in the United States was marked by its steady expansion to the Mississippi River. However, with the Gadsden Purchase, which led to U.S. control of the borderlands of southern New Mexico and Arizona, in addition to authority over Oregon Country, Texas and California, America’s expansion would not end there. Between 1830 and 1860 the United States roughly doubled the amount of land within its control.
These territorial gains coincided with the arrival of hordes of European and Asian immigrants who wanted to join the surge of American settlers heading west. This, partnered with the discovery of gold in 1849, presented attractive possibilities for those ready to make the huge journey westward. As a result, with the military’s protection and the U.S. government’s assistance, many settlers set about building their homesteads in the Great Plains and other areas of the Native American group-inhabited West.
Native American Tribes
Native American Policy can be defined as the laws and procedures established and adapted in the United States to outline the relationship between Native American tribes and the federal government. When the United States first became a sovereign country, it adopted European policies towards these native peoples, but over two centuries the U.S. developed its own widely varying policies, shaped by evolving perspectives on and requirements of Native American oversight.
In 1824, in order to administer the U.S. government’s Native American policies, Congress created a new agency inside the War Department, referred to as the Bureau of Indian Affairs, which worked directly with the U.S. Army to enforce its policies. At times the federal government recognized the Indians as self-governing, distinct political communities with numerous cultural identities; at other times, however, the government attempted to force the Native American tribes to give up their cultural identity, hand over their land and assimilate into American customs.
With the steady flow of settlers into Indian land, Eastern newspapers circulated sensationalized reports of cruel native tribes carrying out widespread massacres of hundreds of white travelers. Although some settlers lost their lives to American Indian attacks, this was not the norm; in fact, Native American tribes generally helped settlers cross the Plains. Not only did the American Indians offer wild game and other necessities to travelers, but they acted as guides and messengers between wagon trains as well. Despite the genial natures of the American Indians, settlers still anticipated the risk of an attack.
To calm these fears, in 1851 the U.S. government held a conference with several local Indian tribes and established the Treaty of Fort Laramie. Under this treaty, each Native American tribe accepted a bounded territory, allowed the government to construct roads and forts in this territory and pledged not to attack settlers; in return the federal government agreed to honor the boundaries of each tribe’s territory and make annual payments to the Indians. The Native American tribes responded peaceably to the treaty; in fact the Cheyenne, Sioux, Crow, Arapaho, Assiniboine, Mandan, Gros Ventre and Arikara tribes, who entered into the treaty, even consented to end hostilities amongst their tribes in order to accept the terms of the treaty.
This peaceful accord between the U.S. government and the Native American tribes did not hold for long. After hearing tales of fertile land and great mineral wealth in the West, the government soon broke its promises established in the Treaty of Fort Laramie by permitting thousands of non-Indians to flood into the area. With so many newcomers heading west, the federal government established a policy of confining Native Americans to reservations, small areas of land within a group’s territory earmarked exclusively for Indian use, in order to free more land for non-Indian settlers.
In a series of new treaties the U.S. government forced Native Americans to surrender their land and migrate to reservations in exchange for protection from attacks by white settlers. In addition, the Indians were offered a yearly payment that would include money in addition to foodstuffs, livestock, household goods and agricultural equipment. These reservations were created in an effort to pave the way for heightened U.S. growth and administration in the West, as well as to keep the Native Americans isolated from the whites in order to lower the chance for conflict.
These agreements had many problems. Above all, many of the native people did not entirely understand the documents they were signing or the conditions within them; moreover, the treaties did not acknowledge the cultural practices of the Native Americans. In addition, the government agencies responsible for applying these policies were plagued by mismanagement and corruption. In fact, many treaty provisions were never carried out.
The U.S. government rarely honored its side of the accords, even when the Native Americans relocated peacefully to their reservations. Unethical bureau agents often sold the supplies intended for the Indians on reservations to non-Indians. Additionally, as settlers demanded more property in the West, the government frequently cut the size of Indian reservations. By this time, most of the Native American people were unhappy with the treaties and angered by the settlers' constant appetite for land.
Angered by the government's dishonorable and unfair policies, some Native American tribes, including bands of Cheyennes, Arapahos, Comanches and Sioux, fought back. As they struggled to protect their territories and their tribes' survival, over a thousand skirmishes and battles broke out in the West between 1861 and 1891. In an effort to force Native Americans onto the reservations and to end the violence, the U.S. government responded to these uprisings with costly military campaigns. Clearly, the U.S. government's Indian policies required an adjustment.
Native American policy changed dramatically following the Civil War. Reformers believed that the policy of forcing Native Americans onto reservations was far too harsh, while industrialists, who were worried about land and resources, saw assimilation, the cultural absorption of the American Indians into "white America," as the only lasting method of guaranteeing Native American survival. In 1871 the government passed a pivotal law proclaiming that the United States would no longer deal with Native American tribes as sovereign nations.
This legislation signaled a significant change in the government's working relationship with the native peoples: Congress now viewed the Native Americans not as nations outside of its jurisdictional control, but as wards of the government. By making Native Americans wards of the U.S. government, Congress assumed it would be easier to make assimilation a widely accepted part of the cultural mainstream of America.
Many U.S. government representatives considered assimilation the most effective solution to what they viewed as "the Indian problem," and the single lasting strategy for guaranteeing U.S. interests in the West and the survival of the American Indians. In order to accomplish this, the government pressed Native Americans to leave their customary dwellings, move into wooden houses, and become farmers.
The federal government enacted laws that required Native Americans to reject their traditional appearance and way of living. Some laws banned traditional spiritual practices while others required Indian men to cut their long hair. Agents on more than two-thirds of American Indian reservations founded courts to impose federal policies that often banned traditional ethnic and spiritual practices.
To speed the assimilation process, the government established Indian boarding schools that attempted to quickly and forcefully Americanize Indian children. According to the director of the Carlisle Indian School in Pennsylvania, the schools were designed to "kill the Indian and save the man." To achieve this objective, the schools forced enrollees to speak only English, wear proper American attire and replace their Indian names with more "American" ones. These new regulations brought Native Americans closer to the end of their traditional tribal identity and the start of their lives as citizens under the absolute control of the U.S. government.
In 1887, Congress passed the General Allotment Act, the most significant component of the U.S. government's assimilation program, which was intended to "civilize" American Indians by teaching them to become farmers. To accomplish this, Congress planned to create private ownership of Indian land by splitting up reservations, which were collectively held, and providing each family with its own parcel of land.
Additionally, by forcing the Native Americans onto small plots of land, western developers and settlers could purchase the leftover territory. The General Allotment Act, better known as the Dawes Act, required that the Indian lands be surveyed and each family be awarded an allotment of between 80 and 160 acres, while unmarried adults were given between 40 and 80 acres; the rest of the land was to be sold. Congress expected that the Dawes Act would break up Indian tribes and stimulate individual enterprise, while trimming the cost of Indian administration and producing prime property to be purchased by white settlers.
The Dawes Act turned out to be catastrophic for the American Indians; over the next decades they lived under regulations that outlawed their traditional way of life yet did not provide the vital resources needed to support their businesses and families. Dividing the reservations into smaller parcels of land brought about a significant reduction in Indian-owned property. Within thirty years, the tribes had lost over two-thirds of the acreage that they had controlled before the Dawes Act was enacted in 1887; the majority of the remaining land was purchased by white settlers.
Usually, Native Americans were duped out of their allotments or were required to sell their land in order to pay bills and feed their families. Consequently, the Indians were not "Americanized" and were often unable to become self-supporting farmers or ranchers, as the makers of the policy had wished. It also created resentment toward the U.S. government among Indians, as the allotment process often destroyed land that held spiritual and cultural significance for them.
Native American Culture
Between 1850 and 1900, life for Native Americans changed significantly. As a result of U.S. government policies, American Indians were forced from their homes as their native lands were parceled out. The Plains, which they had previously roamed without restriction, were now inhabited by white settlers.
The Upshot of the Indian Wars
Over all these years the Indians were defrauded out of their land, food and lifestyle, as the government's Indian policies coerced them onto reservations and attempted to "Americanize" them. Many American Indian bands did not survive relocation, assimilation and military defeat; by 1890 the Native American population had fallen to fewer than 250,000 people. As a result of generations of discriminatory and ruthless policies instituted by the United States government between 1850 and 1900, life for the American Indians was altered permanently.
What does moral mean in literature?
The moral of a story is the lesson that story teaches about how to behave in the world. Moral comes from the Latin word mores, meaning habits. The moral of a story is supposed to teach you how to be a better person. If moral is used as an adjective, it means good, or ethical.
How do you define morals?
Morals are the prevailing standards of behavior that enable people to live cooperatively in groups. Moral refers to what societies sanction as right and acceptable. But many people use the terms morals and ethics interchangeably when talking about personal beliefs, actions, or principles.
What is moral and examples?
Moral is defined as a principle that governs right and wrong, or the lesson of a fable. An example of a moral is the commandment "Thou shalt not kill." Another example is "Slow and steady wins the race," from "The Tortoise and the Hare."
What is a simple definition of morality?
Morality means beliefs about what is right behavior and what is wrong behavior. It also refers to the degree to which something is right and good: the moral goodness or badness of something.
Is Theme The moral of the story?
In truth, themes are far more general than the moral of the story. The moral is a specific lesson that the author is trying to teach. As such, a moral can be a theme, but the theme doesn’t have to be the moral of the story.
What are 5 moral values?
Compassion: understanding the suffering of others or self and wanting to do something about it. Cooperation: helping your family and friends, returning favors. Courage: willingness to do difficult things. Equality: believing everyone deserves equal rights and to be treated with respect.
What are the 4 moral principles?
The 4 basic ethical principles that apply to forensic activities are respect for autonomy, beneficence, nonmaleficence, and justice.
What are 10 moral values?
10 Moral Values Given To The Children to Lead a Wonderful Life
- Respect. Many parents make the mistake of teaching their children only about respect for elders, but that is wrong.
- Family. Family is an integral part of kids’ lives.
- Adjusting and Compromising.
- Helping Mentality.
- Respecting Religion.
- Never Hurt Anyone.
What is moral in your own words?
Morals are what you believe to be right and wrong. People can have different morals: you might say, “I like his morals” or “I wonder about his morals.” Your morals are your ideas about right and wrong, especially how you should act and treat other people.
What are bad morals?
Moral evil is any morally negative event caused by the intentional action or inaction of an agent, such as a person. An example of a moral evil might be murder, war, or any other evil event for which someone can be held responsible or culpable. The distinction between evil and 'bad' is complex.
Why moral is important?
Among the reasons to be moral and act with integrity, regardless of occupation, is to make society better. When we help make society better, we are rewarded by also making our own lives and the lives of our families and friends better. Without moral conduct, society would be a miserable place.
What are examples of moral values?
Examples of moral values include:
- Being honest and trustworthy.
- Being courageous.
- Never giving up.
- Adding value to the world.
- Being patient.
- Taking personal responsibility.
What is the best definition of morality?
Morality is the belief that some behaviour is right and acceptable and that other behaviour is wrong. A morality is a system of principles and values concerning people’s behaviour, which is generally accepted by a society or by a particular group of people.
Why is morality only for person?
Only Human Beings Can Act Morally. Another reason for giving stronger preference to the interests of human beings is that only human beings can act morally. This is considered to be important because beings that can act morally are required to sacrifice their interests for the sake of others.
What is the difference between ethics and morals?
According to this understanding, “ethics” leans towards decisions based upon individual character, and the more subjective understanding of right and wrong by individuals – whereas “morals” emphasises the widely-shared communal or societal norms about right and wrong. |
Learn to Add Using Number Lines
Imagine your school is having a bake sale. You sold 4 cookies yesterday and 5 today.
How many cookies did you sell in all?
To find the answer, you must add the two numbers together. A number line can help.
What's a Number Line?
A number line has numbers on a straight line.
Numbers go from small on the left (👈) to big on the right (👉).
First, start on the number 4.
To add, move to the right on the number line. From the number 4, jump five places to the right.
The number you stop on is the answer.
So using a number line we figured out that 4 + 5 = 9!
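For teachers or older students who want to connect the lesson to computing, here is a minimal Python sketch (an illustration added here, not part of the original lesson) that treats addition as repeated one-step jumps to the right on the number line:

```python
# Adding on a number line: start at a number, then jump right one step at a time.
def add_on_number_line(start, jumps):
    position = start
    for _ in range(jumps):
        position += 1  # one jump to the right
    return position

print(add_on_number_line(4, 5))  # prints 9, so 4 + 5 = 9
```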
Congratulations! You know how to add numbers with a number line. Start your practice below. |
Thursday, May 3, 2012
Fed Up With Sluggish Neutrinos, Scientists Force Light To Move Faster Than Its Own Speed Limit: The researchers sent ultra-short (200-nanosecond) laser pulses into a cloud of rubidium vapor, according to NIST. Alongside this seed pulse, they pumped in a second laser beam at a different frequency. The rubidium amplified the seed light, so its peak moved forward. While this was happening, photons (because light is a wave and a particle) interacted with the vapor and formed a second pulse, which could also be tuned to travel faster or slower than light normally does. The peaks of these light waves arrived at their targets 50 nanoseconds earlier than they would have if they had been traveling at the constant c. |
The Yarrabubba Crater is our planet’s oldest asteroid impact site.
It measures 70 kilometers (44 miles) in diameter and was created by a giant asteroid that smashed into what is now Australia more than 2.2 billion years ago.
According to the new study, the meteorite impact coincides with a time when the Earth was recovering from the “Snowball Earth.” Is this a coincidence, or could the Yarrabubba impact event be an unexpected cause of global climate change?
Anyhow, Snowball Earth ended at almost the same time as the Yarrabubba impact in the outback of Western Australia, 2.229 billion years ago.
This new age calculation also confirms the crater is the world’s oldest known preserved impact structure.
Nowadays, the Yarrabubba crater is invisible to the naked eye. But based on the local geology and the size of the crater, scientists were able to reconstruct what likely happened back then.
To create such a crater, a 7-kilometer-wide (4.3 miles) asteroid would have had to hit an ice sheet between 2 and 5 kilometers (1.2 to 3.1 miles) thick at 17 kilometers (10.5 miles) per second. The shock could have pushed over 100 billion tonnes of water vapor into the atmosphere. [Nature, Imperial.ac.uk] |
Anchovies are small, silvery-green fish in the family Engraulidae. They are found throughout the Mediterranean and along parts of the coastline of Southern Europe, sometimes ranging as far north as the base of Norway. These fish have been an important source of food for centuries, for both humans and marine life alike. They are available fresh in regions where they are heavily fished, and preserved all over the world. The distinctive flavor of the preserved fish can be tasted in many dishes, especially in Mediterranean cuisine.
Some people confuse anchovies with sardines, another silvery fish in the herring family. Sardines grow larger, range in different waters, and have different physical characteristics. Six anchovy species are widely harvested for food purposes, and all of them have characteristic gaping mouths, along with pointed snouts and green to blue bodies that flash silver underwater. They feed on plankton, and also act as a food source for larger fish. Their role in the food chain makes them an important fish species to preserve.
Like many fish in the herring group, anchovies live in large schools, groups of fish that can contain thousands of individuals. Both humans and birds look for these fish by seeking areas of disturbance on the surface of the water, which indicate a panicked school of fish trying to escape a predator.
Like many heavily fished species, anchovies are potentially at risk for serious decline. Several European nations have cooperated to institute limits on their catch, and to regulate the fishing industry to ensure that the fish are caught sustainably. Many fishing companies use large drag nets, which can pose environmental problems as they stir up the ocean floor. Some of these companies have voluntarily modified their fishing practices to ensure that fisheries will remain healthy.
When fresh, the fish have a mild, slightly oily flavor. They are very popular in both France and Italy, especially grilled. Preserved anchovies, typically packed in salt and oil, are also a staple food in many European countries and around the world. They can be extremely salty, so some consumers soak them in cold water for half an hour before consuming them, to draw out some of the salt. The fish is also available in the form of paste, a thick mixture made from ground fillets, vinegar, sugar, and spices. |
Coding is an increasingly important skill in nearly every industry. With the recent focus on STEM education, elementary schools throughout the country are adding or expanding their coding curriculum.
However, many teachers and parents wonder whether students should really be learning to code this young. After all, elementary school days are already busy with lessons in math, English, social studies, basic science, and the arts. Wouldn’t it make sense to introduce coding later, after kids have mastered those other basics?
The truth is, coding teaches a mental skillset that helps elementary school students with every other aspect of education. Here’s how:
Coding Teaches Vital Critical Thinking Skills
Contrary to what many adults think, coding is about far more than learning how to accurately type lines of code. It is much more about using critical thinking to decide how to approach a problem and develop a solution.
As Steve Jobs once said, “Everybody in this country should learn to program a computer, because it teaches you how to think.”
As coding students work on increasingly complex problems, they learn to think about how to conceptualize a problem, which information is important to the task at hand, and how to analyze and synthesize that information to come up with a solution. Those critical thinking skills lend themselves to virtually every situation they will encounter throughout their lives.
Coding Projects Develop Resilience and Grit
The farther you get in coding, the harder it is to get everything right on the first try. Problems are bound to arise from typos or failed problem-solving. When that happens, though, coders don’t give up: They try and try again until they succeed.
The key to coding isn’t to be a perfect thinker or typist, but rather to identify and learn from your mistakes. Coders figure out where they went wrong, solve the problem, and bounce back.
Kids who learn to code know that initial failure doesn’t have to be final; “debugging” is just part of the process. They’re bound to carry that mindset of resilience and grit to other aspects of their life, which can only help as they grow up and encounter new challenges.
Coding Encourages Creativity
Coding students frequently have the opportunity to design something themselves and come up with their own solutions from a vague project idea. They can experiment with different ideas and different ways of approaching a complex problem.
Best of all, coding students get to see the results of their work, sparking the motivation to keep creating.
Coding Boosts Math Skills
Coding is a very logical and math-based skill. Programming students must learn to organize data and use their calculation skills so they can express their intentions in a way a computer can understand. Younger students can use kid-optimized programming languages or block-based programs to learn the basic concepts; older students can use their math skills to try coding in real-life languages.
Often, coding students improve their math skills without even realizing it, just by continuing to practice solving coding problems.
Coding Helps Children Practice Problem-Solving and Project Design
Coding teaches students how to break down a project into smaller, more manageable components and use logic to solve problems. Those skills are incredibly valuable in nearly every aspect of life, not just in school.
Whether students are applying logic to their math and science lessons, working on a long-term group project, or dealing with real-life problems, problem-solving skills are bound to come in handy. The earlier children learn these skills, the more they will help them along in life.
Coding Teaches Children About Cause and Effect
Tweaking a piece of code is one of the best ways to show a child how cause and effect works. Even a tiny change, such as a missing period, can have a dramatic effect on how or whether your code works.
In learning to code, elementary students come to understand a fundamental principle of how the world works that will help them approach many other aspects of the world.
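To make this concrete, here is a small Python example (a hypothetical classroom-style exercise, not taken from any particular curriculum) in which a single character decides whether the program ever finishes:

```python
# A countdown that works: it prints 3, 2, 1 and then stops.
def count_down(n):
    while n > 0:
        print(n)
        n -= 1  # changing this one line to `n += 1` would make
                # the loop run forever instead of stopping

count_down(3)
```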
Coding Gets Kids Comfortable With Technology
Coding helps kids become familiar and comfortable with technology in a way they otherwise wouldn’t be. Not only are they using computers as they learn, but they are also learning how those computers work.
Learning to code can forever change the way students see technology. The mysterious backend workings of a website or app start to seem much more approachable when a child realizes they already know how they could code some of the features they see.
Best of all, when kids know they are capable of learning how technology works, they are far less likely to be intimidated by any unfamiliar technology that is introduced in the future.
Elementary Students Are Excited to Learn to Code
Most elementary students love coding. It’s challenging but doable, with plenty of opportunities to be creative, see the fruit of your labor, and take pride in the projects you make at every level.
Ask any classroom of elementary coding students whether they like to code, and you’re sure to get a lot of enthusiastic yeses.
A Coding Curriculum Prepares Children for the Future
In the 2020s, computers have taken over virtually every industry. An increasing number of businesses in every sector rely on computers, and usually for far more than just a functional company website.
Children who learn to code will have an advantage over their peers who don’t. They will develop a valuable skillset that opens up more job opportunities for them, no matter which industry they decide to enter.
How to Create an Elementary Coding Curriculum
Yeti Academy is an online resource that can help elementary schools develop a STEM curriculum. Our flagship coding module, Yeti Code, presents a unique learning opportunity for both beginning and advanced students to learn to code through single- and multi-player games. We also offer other project-based STEM modules on computational thinking, Google Suite, Digital Literacy, Science, and more for grades 3-5 and 6-9, with all the teacher resources you could ever need.
Are you interested in trying Yeti Academy? Sign up for a free account today! |
10.13 Understanding the naturalistic fallacy
Many things in our world are natural, but are not necessarily good. For example, arsenic is naturally occurring, but if you ingest this substance you will gravely suffer and you might die. In the same vein, many animals pose a threat to human survival and should not be approached. Assuming that something that is natural is “right” or “good” is referred to as “the naturalistic fallacy”.
We discuss the naturalistic fallacy here because it is important in our discussion of human evolution, specifically when we discuss sexual coercion in humans. The biggest problem with discussing human evolution arises when we begin to think that explaining why a particular behavior evolved amounts to justifying that behavior. For example, a person researching cancer wants to gain a greater understanding of the illness, in an effort to stop the disease's progression and ultimately prevent the disease altogether. No person researching cancer is thereby justifying or promoting cancer.
It is important, however, that we as evolutionary biologists take special care with topics related to human behavior. It is critical that we do not use our science to confirm our pre-existing biases, as we have seen this play out many times in history, in which ideas about evolution have been co-opted to justify inhumane practices like eugenics, racism, sexism, and xenophobia.
The OSI physical layer provides the means to transport the bits that make up a data link layer frame across the network media. This layer accepts a complete frame from the data link layer and encodes it as a series of signals that are transmitted onto the local media. The encoded bits that comprise a frame are received by either an end device or an intermediate device.
The process that data undergoes from a source node to a destination node is as follows (a toy code sketch follows the list):
- The user data is segmented by the transport layer, placed into packets by the network layer, and further encapsulated as frames by the data link layer.
- The physical layer encodes the frames and creates the electrical, optical, or radio wave signals that represent the bits in each frame.
- These signals are then sent on the media one at a time.
- The destination node physical layer retrieves these individual signals from the media, restores them to their bit representations, and passes the bits up to the data link layer as a complete frame. |
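As a rough illustration (added here, not part of the original text), the following Python sketch mimics this layer-by-layer flow; the bracketed tags such as [TCP], [IP], and [ETH] are simplified stand-ins for real protocol headers:

```python
# Toy encapsulation: each layer wraps the data from the layer above.
user_data = "HELLO"

segment = f"[TCP]{user_data}"   # transport layer: segment
packet = f"[IP]{segment}"       # network layer: packet
frame = f"[ETH]{packet}[FCS]"   # data link layer: frame

# Source physical layer: encode the frame as a series of bits that
# would be sent onto the media one signal at a time.
bits = "".join(format(byte, "08b") for byte in frame.encode("ascii"))

# Destination physical layer: restore the signals to their bit
# representations and pass the complete frame up to the data link layer.
received = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("ascii")

assert received == frame
print(received)  # [ETH][IP][TCP]HELLO[FCS]
```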
One way the Constitution limits individual rights is that it
A.)allows the legislative branch to make laws deemed necessary to maintain stability and security
B.)contains a clause stating that all individual rights will be suspended during a national emergency
C.)allows the judicial branch to add statements that restrict rights based on court cases
D.)contains a clause stating that the president has the sole authority to restrict rights when necessary
Normative and Applied Ethics: The Urgency of Scientific and Professional Ethics in Education
Ethical issues are problems related to human existence in all its aspects, both as individuals and as a society, in relation to God, fellow humans, and the natural environment, and they span many fields. In relations between fellow humans especially, conflicts often arise. Such conflicts can result from differences in interests as well as in ideological views. However, humans as intelligent creatures always long for goodness, for themselves, for others, and for the environment. It is this intellect that enables humans to create ethics: the ideal values of interpersonal and social interaction, the highest good, used as a standard for behavior. Rules are made on the principle of goodness so that they serve as a reference not only for one's own behavior but also for the behavior of others. This paper generally describes theoretical ethics and practical ethics. It emphasizes the benefits and usefulness of ethics as applied to the social and state dimensions. Finally, practical social ethics is needed in a plural society to solve the various problems at hand, so that plurality is no longer a threat but becomes a glue that enriches social relations among groups from various backgrounds.
World Hearing Day – March 3, 2021
Protect your ears from loud noises. Use hearing protection such as ear plugs and noise-cancelling earmuffs. If you are concerned about your hearing, please see a doctor. Hearing loss is gradual and can go undetected unless checked.
CDC supports the World Health Organization’s World Hearing Day, on March 3rd. World Hearing Day promotes ear and hearing care across the world and raises awareness of how to prevent deafness and hearing loss.
This year, World Hearing Day is much more than an annual observance, as we mark the release of the first-ever World Health Organization World Report on Hearing. The World Health Organization brought together experts from around the world, including the US Centers for Disease Control and Prevention, to contribute to this first-ever report.
The report describes many ways we can protect our ears and hearing from loud noise:
- Turn down the volume on personal listening devices such as headphones and earbuds.
- Avoid loud noise whenever possible.
- Give your ears a rest and take periodic breaks from noise.
- Use hearing protection such as earplugs or noise-cancelling earmuffs.
Remember, good hearing and communication are important at all stages of life. Hearing loss and related ear diseases can be avoided through preventative actions such as:
- protection against loud sounds, and
- good ear care practices.
Hearing loss and related ear diseases can be addressed when early and appropriate care is sought. People at risk of hearing loss should check their hearing regularly. People with hearing loss or related ear diseases should seek care from a health care provider.
According to the World Health Organization:
- More than 360 million people live with disabling hearing loss.
- More than 1 billion people aged 12-35 years are at risk of hearing loss due to recreational noise exposure.
- Globally, the overall cost of not addressing hearing loss is more than $750 billion.
- World Report on Hearing
- Make Listening Safe
- WHO-ITU global standard for personal audio systems and devices [PDF, 801 KB]
- Ear and Hearing: Care Planning and Monitoring of National Strategies – A Manual [PDF, 650 KB] |
What do trees have to teach us? For the ten students in Erica Marlaine’s class at Chase Elementary School, quite a bit! A 2016-17 microgrant awardee and Earthwatch TeachEarth alum, Erica was challenged to create lesson plans around urban forestry in the hopes of engaging her students, all with special needs, in science:
- Where did the lesson activities take place, and who participated?
Ten students at Chase Street Elementary School (Los Angeles Unified School District), ages 3-4, all with special needs.
- What urban forestry concepts did you teach in the classroom?
I taught them how to tell different tree species apart. We talked about deciduous versus evergreen, the leaf shapes of a variety of trees, what trees need to survive, the importance of being kind and gentle with trees, what they liked about trees, who lived in the trees at school and in other places, and how the students felt about the areas of school where there were trees versus the areas without trees. A variety of methods were used. We looked at different types of trees in books and online, and at the photos from the Operation Resilient Tree packet. Then almost daily, we walked around the school and talked about the different types of trees and which (Chinese Pistache) were on the Resilient Tree list. Our school borders a park, and while we are not permitted to go to the park, we can still see and collect leaves and acorns from the many trees that are on the border fence.
- What hands-on activities did you have your students perform to reinforce these concepts?
They collected leaves from various trees at school (and we brought in others from home) and observed them under our microscope (purchased with the microgrant funds).
I created a project where 4 types of leaves were stapled on the left side of a paper and they had to sort through a pile of leaves to find the matching ones and attach them on the right. These were the same leaves they had previously observed under the microscope. This not only taught about trees but age appropriate concepts of similarities and differences, matching, and fine motor skills.
Interested in using this lesson plan? Download it here: Lesson Plan – Trees-Matching Lesson
Leaves were also used to teach pre-math concepts such as one-to-one correspondence and counting.
A chart was created with the students, and hung on the main board for all to see. The terms (leaves, twigs, branches, trunk, bark and roots) were reviewed and discussed quite often. They were also used in a song that we sang and acted out, “Leaves, Branches, Trunk and Roots,” sung to the tune of “Head, Shoulders, Knees and Toes.”
In addition, we did our best to measure trees, including Chinese Pistache trees, on our campus using the dual sided diameter tape measure provided by Earthwatch. They wrote down numbers as best as they could, and compared how many times the tape measure went around a tree that was big versus small.
Unfortunately, Winter Break started in mid-December and it has been raining almost non-stop since school began again on January 9th, so we were not able to complete much data collection at this time. I still plan to invite parents and students to come measure and collect data together, but it will have to be during the next group of trees (phase 3.0).
Another project involving a microgrant item was planting radishes, onions, and carrots in our root view container. First the students added water to the soil pellets and mixed and chopped until the pellets became usable soil. Then they spooned their soil into the planter, put in a few seeds, spooned in some more soil, and then we watered it. It began to grow very quickly. They are able to observe not just the above-ground growth but the roots as well. Since they all enjoy watering it, I decided to have them water it using eye droppers. They use the droppers to absorb water from a bowl, and then carefully move the dropper and squeeze it so that the water comes out onto the plant. This teaches fine motor skills as well as hand-eye coordination, especially since the root view planter is narrow.
Interested in using this lesson plan? Download it here: Lesson Plan – Planting in Root View Container
- How did you use and/or modify Earthwatch’s Resilient Trees protocol in your hands-on activity to measure trees?
Without parent participation, the only thing we have been able to do as far as data collection is measure trunk diameter, and talk about which tree is bigger and how that corresponds to a bigger number. They are in preschool, so the idea that 10 is a bigger number than 5, and represents more, is something they are still learning. We did a lot with tree/leaf identification, and deciduous versus evergreen, and talked a lot about Chinese Pistache trees as there are many on our school campus.
- What were the best positive outcomes for your students as a result of these urban forestry lesson plans?
There seems to be an increased excitement/interest level about trees and plants. Every morning they want to see how the plants in the root view container are doing, and they want to water them. They talk about trees as we walk around the campus, wondering why some have no leaves, or commenting about the color and shape of the leaves.
- What did your students have the most fun doing?
They enjoyed measuring trees with the measuring tape, or more accurately, seeing how many times the tape measure went around each tree. They loved looking at tree parts under the microscope. They also loved the root view planting kit. It came with soil pellets so they had to add water and mix them up to “create” the soil. They also enjoyed watering the plants in the root viewer, especially with eye droppers. I was also surprised what a hit the leaf matching activity was. They enjoyed sorting through the box to find the matching leaves, and gluing them onto their paper.
- What were some challenges you faced in implementing these lesson plans? Do you have any best practices or techniques you used to overcome these challenges?
Challenges arose out of the fact that half of my students are very young (just turned 3) and many of them have autism. Most are speech delayed. Some are new to school; others have been with me for 1.5 years already. In order to provide differentiated instruction, I follow the principles of Universal Design for Learning (UDL), and provide multiple means of representation, engagement, and expression. Learners differ in how they understand and learn things, and in how they are able to express what they know. I therefore seek to provide them with information in a variety of ways: with my voice, visuals, hands-on, experiential activities, song, dance, or whatever engages them. I also understand that some may demonstrate what they have learned with words while others will create a piece of art, or build something with Legos, or act it out using puppets. I will repeat instructions as needed, and include visual step-by-step instructions for those who need that. I also modify lessons as needed based on each child’s abilities or challenges. For example, some students could rummage through a box of leaves and find the matching one. A few needed to be shown just two different leaves to start. The whole box would have been overwhelming for them to start with.
- What advice do you have for fellow educators wanting to implement a similar lesson plan? Any recommendations for facilitation techniques?
I have very young students (age 3-4), so these suggestions may fit those with young students more. The microscope we got has a large viewing screen. It can also be attached via USB and the image can be projected. This is much better than trying to get young students to look through an eye piece as one does when using a standard microscope. It also allows for group work as several students can see the image at once. Calling them scientists and giving them scientist “tools” such as safety goggles and eye droppers makes each activity feel more important. If you are somewhere with bad weather, bring the tree materials into the classroom for the students to observe and use in projects, and try planting indoors. It is too early to tell if we will actually harvest radishes, carrots, and onions, but the plants are growing quite well indoors.
- Do you personally plan to share this with/recommend these activities to other educators?
I already have. We meet a few times a month as a grade level (which includes 2 special education pre-K classes, one general education preschool class, and a transitional kindergarten class). We all discuss what we are doing and share projects we have done or are planning to do. We do not have to do the same things, but we often get ideas from each other and collaborate on how to extend a lesson or tailor it to fit the needs and abilities of the individual students in our classrooms. I have also spoken to one of the kindergarten teachers about planting around school, and she is interested in having her class join us. Even though her students are only in kindergarten, they seem much older than my students, and can act as great peer models and “planting buddies.”
- If other educators have questions about how you implemented your lesson plans, may they contact you?
Absolutely. Erica Marlaine, [email protected] |
The five main nutrients required by the body for good health include proteins, carbohydrates, vitamins, minerals and fats. Put together, the nutrients form a balanced diet.
Proteins are referred to as the building blocks of the body. They are made up of amino acids and used to perform a variety of bodily functions, such as repairing cells, muscles, hair and tissues, as well as making hormones. The main sources of protein include meat, milk, fish, eggs, cheese, beans, lentils, nuts and soybeans, among others.
Carbohydrates are nutrients that provide the body with energy. Carbs are grouped into two main groups: simple and complex. The main sources of carbohydrate nutrients include grains such as wheat, corn and oats, as well as fruits, vegetables and roots.
Fats provide important insulation to body organs and help in the absorption of vitamins and minerals. Fats are divided into two primary groups: saturated and unsaturated. The main sources of fats are animals and plants.
Vitamins are essential in maintaining the chemical balance in the body. There are 13 types of vitamins, which are divided into two main groups: fat soluble and water soluble. The main sources of vitamins include vegetables, fruits, lean meat, fish and whole grains.
Minerals play an important role of regulating bodily processes. Examples of important minerals are iron, calcium, phosphorus, potassium, magnesium and zinc. Food sources rich in minerals include fruits, meat, fish, beans, nuts and whole grains. |
The fixed point method allows us to solve nonlinear equations. We build an iterative method using a sequence which converges to a fixed point of g; this fixed point is the exact solution of f(x) = 0.
The aim of this method is to solve equations of the type:
$$f(x) = 0. \quad (E)$$
Let $\alpha$ be the solution of (E).
The idea is to bring (E) back to an equation of the type:
$$x = g(x),$$
where $\alpha$ is a fixed point of $g$.
We introduce a sequence $x_{n+1} = g(x_n)$, convergent to the fixed point $\alpha$ of $g$, which is the solution of equation (E).
If $g \in C([a,b])$ and $g(x) \in [a,b]$ for all $x \in [a,b]$, then $g$ has a fixed point in $[a,b]$.
If, moreover, $g$ is differentiable on $[a,b]$ and there exists a constant $k$ in $]0,1[$ such that
$$|g'(x)| \le k \quad \text{for all } x \in [a,b],$$
then:
the fixed point $\alpha$ is unique, and
the iteration $x_{n+1} = g(x_n)$ will converge to the unique fixed point $\alpha$ of $g$ for any $x_0 \in [a,b]$.
Proof of the existence.
We define $h$ over $[a,b]$ as follows:
$$h(x) = g(x) - x.$$
Clearly $h(a) = g(a) - a \ge 0$ and $h(b) = g(b) - b \le 0$, because $g(a)$ and $g(b)$ belong to $[a,b]$. Now we apply the intermediate value theorem to $h$: there exists $\alpha \in [a,b]$ such that $h(\alpha) = 0$, i.e. $g(\alpha) = \alpha$.
Proof of the uniqueness.
We suppose now that there exist two fixed points $\alpha_1 \neq \alpha_2$. We apply the mean value theorem: there exists $\xi$ between $\alpha_1$ and $\alpha_2$ with
$$|\alpha_1 - \alpha_2| = |g(\alpha_1) - g(\alpha_2)| = |g'(\xi)|\,|\alpha_1 - \alpha_2| \le k\,|\alpha_1 - \alpha_2| < |\alpha_1 - \alpha_2|.$$
This is a contradiction! Finally $\alpha_1 = \alpha_2$: the fixed point is unique.
Proof of the convergence.
The sequence $(x_n)$ is well defined because $g(x) \in [a,b]$ for all $x \in [a,b]$, and consequently $x_n \in [a,b]$ for all $n$ provided that $x_0 \in [a,b]$. As previously, we apply the mean value theorem to show the existence of $\xi_n$ between $x_n$ and $\alpha$ with
$$|x_{n+1} - \alpha| = |g(x_n) - g(\alpha)| = |g'(\xi_n)|\,|x_n - \alpha| \le k\,|x_n - \alpha|$$
for all $n$, so that
$$|x_n - \alpha| \le k^n\,|x_0 - \alpha| \longrightarrow 0,$$
since $k$ is in $]0,1[$.
Proof of the Corollary.
The corollary gives the error bounds: for all $n \ge 1$,
$$|x_n - \alpha| \le k^n \max(x_0 - a,\ b - x_0) \quad \text{and} \quad |x_n - \alpha| \le \frac{k^n}{1-k}\,|x_1 - x_0|.$$
The first inequality is obvious. To prove the second inequality, we use $|x_{n+1} - x_n| \le k^n\,|x_1 - x_0|$, so that for $m > n \ge 1$,
$$|x_m - x_n| \le \sum_{i=n}^{m-1} |x_{i+1} - x_i| \le k^n\,\frac{1 - k^{m-n}}{1-k}\,|x_1 - x_0|;$$
when $m$ goes to infinity:
$$|\alpha - x_n| \le \frac{k^n}{1-k}\,|x_1 - x_0|.$$
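As an illustration (added to this article), here is a minimal Python sketch of the fixed-point iteration; it uses math.cos as an example of a function $g$ that satisfies the hypotheses above on $[0, 1]$:

```python
import math

# Fixed-point iteration x_{n+1} = g(x_n), assuming g is a contraction
# on an interval it maps into itself (hypotheses of the theorem above).
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# g(x) = cos(x) maps [0, 1] into itself and |g'(x)| = |sin(x)| <= sin(1) < 1,
# so the iteration converges to the unique solution of x = cos(x).
alpha = fixed_point(math.cos, 0.5)
print(alpha)  # approximately 0.7390851332
```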
We suppose that $(x_n)$ converges to $\alpha$ and set
$$e_n = x_n - \alpha,$$
where $e_n$ is the error between $x_n$ and $\alpha$.
If there exist two constants $p \ge 1$ and $C > 0$ such that
$$\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = C,$$
we say that the sequence $(x_n)$ converges with order $p$, with a rate of convergence $C$.
If p=1 and C<1, we say that the sequence converges linearly.
If p=2, convergence is called quadratic convergence.
If p=3, convergence is called cubic convergence.
Obviously several cases are possible; we can build several functions $g$, and the order of convergence also depends on the nature of $g$.
If $g'(\alpha) \neq 0$: since $e_{n+1} = g(x_n) - g(\alpha) = g'(\xi_n)\,e_n$, the rate of convergence is $C = |g'(\alpha)|$ and the convergence is linear (order 1), with $C \le k < 1$ since $|g'| \le k$ on $[a,b]$.
If $g'(\alpha) = 0$, we must introduce a Taylor development of $g$ around $\alpha$. Don't forget that $e_n$ converges to 0. For example, a Taylor development of order 3 gives:
$$x_{n+1} = g(\alpha + e_n) = g(\alpha) + g'(\alpha)\,e_n + \frac{g''(\alpha)}{2}\,e_n^2 + O(e_n^3),$$
with $g(\alpha) = \alpha$ and $g'(\alpha) = 0$, thus
$$e_{n+1} = \frac{g''(\alpha)}{2}\,e_n^2 + O(e_n^3).$$
The rate of convergence is $C = |g''(\alpha)|/2$ and the convergence is quadratic (order 2). We can generalize this result with the following theorem.
If $g'(\alpha) = g''(\alpha) = \cdots = g^{(k-1)}(\alpha) = 0$ and $g^{(k)}(\alpha) \neq 0$, then the order of convergence of the fixed point method is $k$.
A Taylor development of order $k$ around $\alpha$ gives:
$$x_{n+1} = g(\alpha + e_n) = g(\alpha) + \sum_{j=1}^{k-1} \frac{g^{(j)}(\alpha)}{j!}\,e_n^j + \frac{g^{(k)}(\alpha)}{k!}\,e_n^k + O(e_n^{k+1}),$$
with $g(\alpha) = \alpha$ and $g'(\alpha) = \cdots = g^{(k-1)}(\alpha) = 0$, hence
$$e_{n+1} = \frac{g^{(k)}(\alpha)}{k!}\,e_n^k + O(e_n^{k+1}).$$ |
Between 1830 and 1850 Chilean silver mining grew at an unprecedented pace which transformed mining into one of the country's principal sources of wealth. The rush caused rapid demographic, infrastructural, and economic expansion in the semi-arid Norte Chico mountains where the silver deposits lay. A number of Chileans made large fortunes in the rush and made investments in other areas of the economy of Chile. By the 1850s the rush was in decline and lucrative silver mining definitely ended in the 1870s. At the same time mining activity in Chile reoriented to saltpetre operations.
Placer deposits of gold were exploited by the Spanish following their arrival in the 16th century. However, only after independence in the 19th century did mining once again gain prominence among economic activities in Chile. Following the discovery of silver at Agua Amarga (1811) and Arqueros (1825), the Norte Chico mountains north of La Serena were exhaustively prospected.
In 1832 prospector Juan Godoy found a silver outcrop (reventón) 50 km south of Copiapó in Chañarcillo. Godoy successfully claimed the discovered outcrop in his own name and the names of José Godoy and Miguel Gallo. The finding attracted thousands of people to the place and generated significant wealth. During the heyday of Chañarcillo it produced more than 332 tons of silver ore until the deposits began to be exhausted in 1874. A settlement of 600 people mushroomed in Chañarcillo, leading to the establishment of a surveillance system to avoid disorder and theft of ore. The settlement evolved over time into a town named Juan Godoy, which came to have a plaza, school, market, hospital, theater, a railroad station, a church and a graveyard.
Following the discovery of Chañarcillo, many other ores were found near Copiapó well into the 1840s. The many findings resulted in the court of Copiapó receiving numerous claims (denuncios). In 1848 another large ore deposit was discovered at Tres Puntas, sparking yet another rush.
Copiapó experienced a large demographic and urbanistic growth during the rush. The town became a centre for trade and services of a large mining district. In 1851 Copiapó was connected by railroad to Caldera, its principal port of export. The mining zone slowly grew northwards into the diffuse border with Bolivia. Agriculture in Norte Chico also expanded as a consequence of the rush.
By 1855, Copiapó was already in decline. At the end of the silver rush, rich miners had diversified their assets into banking, agriculture, trade and commerce all over Chile. An example of this is silver mining magnate Matías Cousiño, who initiated coal mining operations in Lota in 1852, rapidly transforming the town from a sparsely populated frontier zone in the mid-19th century into a large industrial hub.
In 1870, 1570 miners worked in the Chañarcillo mines; however the mines were exhausted by 1874 and mining ended in 1888 after the mines became flooded. Despite this, Chañarcillo was the most productive mining district in 19th century Chile.
A last major discovery of silver occurred 1870 in Caracoles in Bolivian territory adjacent to Chile. Apart from being discovered by Chileans, the ore was also extracted with Chilean capital and miners. |
- Innate immunity is also known as native immunity.
- It is a resistance with which a person or lower animal is born and is non-specific.
- This type of immunity is present throughout our life.
- It may be of various types like species immunity, racial immunity or individual immunity.
- Various factors like age, hormones, nutrition, etc. influence the innate immunity in the host.
- Natural or innate immunity has three components, of which physicochemical barriers are one.
- These are the epithelial surfaces such as the skin and mucosae, and the cilia, while bactericidal secretions act as chemical barriers.
- The animal body is a closed system separated from the environment by skin and mucous membranes, which are impermeable to particulate material of the size of bacteria.
- The skin acts as a mechanical barrier to the invading microorganisms and also provides bactericidal secretions.
- The sebaceous secretions, containing unsaturated fatty acids (oleic acid) and free saturated fatty acids, have bactericidal and fungicidal properties.
- The dryness of the skin and the high salt concentration in drying sweat inhibit or are lethal to many bacteria and fungi.
- The skin can be freed of transient flora easily but the resident flora cannot be removed even by washing or by the use of disinfectant.
- The superficial microorganisms of the resident flora may be diminished by vigorous surgical “scrubbing” but they get replenished rapidly from sebaceous and sweat glands.
b) Nose, naso-pharynx and respiratory tract
- The moist surfaces of the mucous membrane lining of the nasal passage arrest various inhaled bacteria and other particulate material.
- Thus, the inspired air is largely freed of bacteria in the upper respiratory passage.
- Though some of the organisms skip this passage, they get trapped in the bronchial mucosa in the larynx and only a few may reach the bronchioles and alveoli.
- The sticky mucous secretions of the respiratory mucosa and the hair-like cilia trap inhaled particles and sweep the secretions containing the foreign particulates towards the oropharynx.
- There the secretions are swallowed or coughed out.
- The cough reflex plays an important role in clearing the respiratory tract.
- The small particulate materials which manage to reach the alveoli are ingested by the phagocytic cells present there.
- The nasal and respiratory secretions contain muco-polysaccharides which can neutralize influenza and some other viruses.
- Inherited or acquired defects in the function of the respiratory cilia, the mucus, or both make the lungs vulnerable to infection.
c) Mouth, stomach and intestinal tract
- Saliva possesses mild bactericidal action.
- The anaerobic colon bacteria produce fatty acids with antibacterial activity.
- Colonization resistance is offered by the predominant normal flora of the intestine.
- The intestinal anaerobic microflora prevents superinfection by coliforms during antibiotic therapy.
- The low pH of gastric acid destroys most of the ingested bacteria.
d) Eyes
- The conjunctiva is freed of bacteria and dust particles by the activity of tears.
- Tears flush the eyes and contain lysozyme, which is bactericidal in action.
e) The genitourinary tract
- The flushing action of the slightly acidic urine maintains the sterility of both the male and female urethra.
- Semen is believed to contain antibacterial substances.
- The acidic reaction of the vaginal secretions in females, due to fermentation of glycogen by Lactobacillus acidophilus (normal flora), is markedly bactericidal towards most pathogenic microorganisms.
Physico-chemical barriers of innate immunity |
Pure Athletic Fuel
An athlete's diet must include foods particularly suited for the athlete's sport. This has been an area of great study in recent years. Researchers are creating diets that provide athletes with the foods that contain high levels of the nutrients needed for each particular sport. For an athlete to have proper nutrition in sports, one must have knowledge of the six classes of nutrients and the four basic food groups. The six classes of nutrients are water, vitamins, minerals, fats, carbohydrates, and proteins. Each class plays a different role in the energy cycle of the body.
Water is essential. An adult male's body weight is about 60 percent water. The average person needs two and a half liters of water a day. It is also known that a person's water mass is replaced every 11 to 13 days under light physical activity (Smith 91). With these facts in mind, it is understandable why an athlete, who undergoes heavy physical exertion, has an extreme need for large quantities of water. This water comes from the intake of all fluids and food during a day.
Vitamins are needed in small amounts. They were the last class of nutrients to be discovered. The class is divided into fat-soluble and water-soluble vitamins. Vitamins help regulate body functions and are not a direct source of energy. They are needed in such small quantities that a person's natural diet will provide most of the necessary amounts (Smith 5-7).
The third class is minerals. This class is divided into the minerals needed in large amounts and the minerals needed in small amounts. The minerals needed in larger quantities include sodium, calcium, potassium, magnesium, sulfur, phosphorus, and various chlorides. Sodium is needed in extremely large amounts by very active athletes. This is because a person's sweat contains sodium; profuse sweating will cause a large drop in the sodium level of the body. Potassium is directly associated with muscle cells and muscle fatigue. It affects the amount of water a person's body will hold. The other minerals needed in large amounts provide for an athlete but have no direct effect on an athlete's performance. The trace minerals number about fourteen. Two trace minerals of concern are zinc and iron. Zinc helps repair tissues and provides for growth. Iron is the largest of these minerals and is important in the energy metabolism of body cells (Smith 5).
Fats provide the largest source of energy. They contain about twice as many calories per unit as proteins or carbohydrates. This means that fats can provide the largest amount of energy in the least amount of food. One setback is that these are the nutrients that take the longest to digest.
Carbohydrates are the best source of food energy. These are broken down into a substance called glucose. This simple sugar provides short-term energy benefits where as fats provide a long-term energy. The body cannot store large amounts of energy in the form of carbohydrates. As such, carbohydrate intake for an athlete is advised to be spread out over the course of a day. This is the class of nutrients that bread, spaghetti, and pancakes fall under (Smith 15). The understanding that pasta is good for an athlete stems from this scientific data.
Proteins are the last class of nutrients. The need for these does not change dramatically with an increase in athletic activity. Proteins are needed in body tissues and have an effect in the growth and repair of the body (Smith 16).
A healthy diet will include the four major food groups. These groups are milk, meat and high-protein fibers, fruits and vegetables, and cereal and grain foods. Each group provides different nutrients to an athlete's body.
The milk group is a rich source of calcium. This is needed in the formation of one's body and bones. As such, it is important in one's growing years. Too much milk provides an athlete with an excess of animal fat, which has a negative effect in large quantities. It is recommended that a person have two servings per day.
Two servings of the meat and high-protein group daily provide adequate nutrients. This group tends to have high levels of saturated fats, which are fats that take longer to digest due to their complex structure. This is why athletes need to watch their intake of red meats and saturated fats.
Four servings are recommended for both the fruit and vegetable group and the grain group. Fruits and vegetables contain few calories and can be consumed in large quantities as a source of vitamins and other nutrients. The grain group is where an athlete's main source of carbohydrates comes from. Proteins, minerals, and vitamins are found in these foods, but it is the large carbohydrate content that fulfills an athlete's high energy needs (Smith 30).
The physical exertion required to participate in sports requires an adequate source of energy. This energy comes from the food one consumes. It is recommended to eat well-balanced meals before physical activities. An extra serving of a certain food group is advised for participation in different sports. During the actual physical activity, an athlete sweats and loses large amounts of water and salt. An athlete should replenish these losses during the activity and afterwards. During the activity, water should be consumed in small amounts at regular intervals. Afterwards, an athlete may drink large amounts of water to help replenish the body's water. Also, salt should be consumed to help replenish the salt lost in sweat.
Basketball is an example of a sport that requires an extended period of physical exertion, which consumes large amounts of energy and costs the athlete large amounts of sweat. In the past, a meal around 2 p.m. before a night game allowed a basketball player to feel energetic during his game. Nowadays, players have become taller and larger, and they require more energy than players in the past: players today can run faster, jump higher, and are stronger than previous players. The 2 p.m. meal can still be eaten, but a snack closer to game time is advised, allowing the athlete to enter competition well equipped for a physically exhausting event. This snack can include water, but a liquid meal is recommended as the pregame meal. A liquid meal provides energy, hydration, some fat, and some protein, allowing the athlete to enter the competition feeling energetic, hydrated, and satiated. Liquid meals also leave the stomach quickly, so they can be taken closer to game time (Smith 120).
A proper diet for a basketball player would be one with multiple meals, producing the well-balanced daily diet that athletes need. Large amounts of water should be consumed to help offset the loss of sweat during physical activity. All general nutrient levels should be maintained, since each nutrient provides an athlete's body with a necessary substance; without proper levels of these nutrients, one may be unable to recover properly from a physical event such as a basketball game. As starting time approaches, carbohydrates are recommended for their easy and relatively quick transformation into energy that the body can use.
Smith, Nathan J. Food for Sport. Palo Alto: Bull Publishing Company, 1976. |
A newly discovered nematode can withstand 500 times the concentration of arsenic that is lethal to humans.
Nematodes, or roundworms, are a type of invertebrate. They look very much like worms, yet they are not related to them, and they can be much, much smaller. Some species are no longer than a fraction of a millimeter in length and are only visible under a microscope.
These tiny creatures live in almost all environments on Earth, and while the exact number of existing species of nematodes remains unknown, estimates suggest that there are at least 40,000 different species around the world.
Recently, Prof. Paul Sternberg — from the California Institute of Technology in Pasadena — and colleagues from different research institutions have found no fewer than eight species of nematodes in Mono Lake, five of which scientists had not previously described.
Mono Lake is a saltwater lake that formed naturally at least 760,000 years ago. While it has always been a saline lake, its level of salinity actually doubled in recent decades as a result of human activity.
The lake’s waters house shrimp and brine flies, and they attract wild birds, such as grebes and gulls. Now, the scientists have found that nematodes are also among the invertebrates that have made Mono Lake their home.
Of the eight species that the team recently isolated in the waters of the lake, one has a special feature: It has a remarkable resistance to arsenic, a toxic substance that is deadly to humans as well as to many other animals.
The scientists report their findings in a study paper that appears in the journal Current Biology.
New species survives extreme environments
Prof. Sternberg and team collected and analyzed various samples from Mono Lake’s three regions: Pristine Beach, Navy Beach, and Old Marina.
At Pristine Beach, they isolated just one nematode species, whereas they found three at Old Marina and seven at Navy Beach. Of a total of eight different species, three were previously known, namely: Mononchoides americanus, Diplogaster rivalis, and Prismatolaimus dolichurus.
The researchers characterized all five of the previously unknown species, but one in particular — in the genus Auanema — caught their eye. Like other species in the same genus, the new Auanema nematode has three sexes: hermaphrodite, male, and female.
And, like other Auanema nematodes — as well as the other nematode species present at Mono Lake — this new species is resistant to arsenic.
However, unlike other nematodes, its resistance to this toxic substance, as the researchers describe it, qualifies as “extreme.” It can survive exposure to a concentration of arsenic that is approximately 500 times as high as the amount that would kill a human. Moreover, it can hold out, as the scientists note, “for a prolonged period.”
It is also remarkable that in addition to surviving — even thriving — in an extreme environment, the new nematode species can also do well under laboratory conditions, which is uncommon for other so-called extremophile species.
“Extremophiles can teach us so much about innovative strategies for dealing with stress,” says first author Pei-Yin Shih. “Our study shows we still have much to learn about how these 1,000-celled animals have mastered survival in extreme environments,” she goes on to add.
Going forward, the researchers are hoping to find out the biochemical and genetic factors that might prime nematodes for survival in such extreme environments.
Moreover, they are planning to map out the genetic makeup of the new Auanema species to see whether they can pinpoint the genes that make these invertebrates so resistant to arsenic.
Arsenic contamination of drinking water poses an important threat to human health as it can lead to cancer and other health problems. Therefore, finding out more about nematodes’ resilience to this toxic substance could help lead to better ways of safeguarding public health.
Study co-author James Siho Lee argues that “[i]t’s tremendously important that we appreciate and develop a curiosity for biodiversity.”
“The next innovation for biotechnology could be out there in the wild. A new biodegradable sunscreen, for example, was discovered from extremophilic bacteria and algae. We have to protect and responsibly utilize wildlife.”
James Siho Lee |
Imagine going to school and being asked to complete a homework assignment that requires you to search for material online and write a brief summary of what you’ve found. However, you only have access to a smartphone to do this work: a smartphone you have to share with your sibling, who also needs it to do school work. According to research that we have done, this is a very real scenario for many students. The “homework gap” widens with limited access to the appropriate technology, and it continues to widen as teachers incorporate technology-based learning into their daily curricula.
To date, most research about the digital divide has focused on the U.S. population generally, with less attention paid to determining whether the divide exists among students in the U.S. education system. We conducted a study asking students numerous questions about their access to and use of technology specifically for educational activities, both at home and in school, including the number and kinds of devices they have access to, the kind and reliability of the internet connection(s) available to them, and how often they used electronic devices for school-related activities.
First, while almost all students have access to technology and the internet at home, the number and type of devices and the quality of the internet connection vary. A total of 14% of students reported having access to only one technological device, and most (85%) of these students were traditionally underserved students. Put differently, annual family income, parents’ education level, and racial composition were all related to technology access. The higher the family income and the parents’ education level, the more likely students were to have access to more than one device, relative to students who came from families with lower incomes and less formal education. White and Asian students were also more likely to have multiple technological devices relative to students who self-identified as Hispanic, Black or African American, or American Indian/Alaska Native.
Second, the number of devices students have access to is related to how often they use those devices for school-related activities. Our research has found that students who have access to more than one device use their devices more frequently than students who have access to only one device, especially students whose only device is a smartphone. This matters because students need to learn how to use technology to solve problems, which can also aid the development of critical thinking skills.
In order to address the digital divide and the homework gap created by lack of technology access, especially for underserved students, we recommend policies that expand device and internet access among the students who lack them. Programs that help to rectify device and internet access imbalances, such as the Wireless Reach initiative or the private-sector Kajeet, can help improve educational opportunity and access for those in greatest need of assistance in preparing for and succeeding in the 21st-century economy. We also recommend that school-related activities be easily accessible for all students via mobile technology, since these are the types of devices that most students have access to at home. |
NASA’s news conference announcing the discovery of Kepler-90i and Kepler-80g was a delightful validation of a principle that has long fascinated me. We have such vast storehouses of astronomical data that finding the time for humans to mine them is deeply problematic. The application of machine learning via neural networks, as performed on Kepler data, shows what can be accomplished in digging out faint signals and hitherto undiscovered phenomena.
Specifically, we had known that Kepler-90 was a multi-planet system already, the existing tools — human analysis coupled with automated selection methods — having determined that there were seven planets there. Kepler-90i emerged as a very weak signal, and one that would not have made the initial cut using existing methods of analysis. When subjected to the machine learning algorithms developed by Google’s Christopher Shallue and Andrew Vanderburg (UT-Austin), the light curve of Kepler-90i as well as that of Kepler-80g could be identified.
Christopher Shallue described the work at the news conference:
“Kepler produced so much data that scientists couldn’t examine it all manually. The method has been to look at the strongest signals, examining them with human eyes and automated tests, not so different from looking for needles in a haystack. Out of 30,000 signals examined, 2500 planets could be confirmed. We chose to search in weaker signals, as if in a much bigger haystack.”
Machine learning shines in such situations: the neural network can identify planets with far weaker signals than would ever have made the initial cut for human analysis. In order to train the network, Shallue and Vanderburg fed it 15,000 Kepler signals that had already been labelled by human scientists, allowing it to learn by example to distinguish the patterns caused by planets. In their test runs, the model identified planets 96 percent of the time.
Shallue described the machine learning system as a neural network made up of layers, each of which performs individual computations and passes the results along to the next layer in the stack. Given enough layers, it becomes possible to recognize complex patterns, as we have seen in language translation, image and object identification, and the detection of tumors. Now these methods are being turned to exoplanet detection, a result that bodes well for future discoveries.
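To make the layered-network idea concrete, here is a minimal sketch (assuming TensorFlow's Keras API) of a small stack of fully connected layers trained to flag box-shaped dips in synthetic light curves. This is a toy illustration, not the authors' published model, which was a convolutional network trained on real, labelled Kepler light curves; all names and parameters here are invented for the example.

```python
# Toy sketch: a small neural network that labels synthetic light curves
# as "transit" or "no transit". Not the authors' actual architecture.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
N, LENGTH = 2000, 201  # number of toy light curves, points per curve

def make_curve(has_transit):
    flux = 1.0 + 0.001 * rng.standard_normal(LENGTH)  # noisy flat baseline
    if has_transit:
        start = rng.integers(20, LENGTH - 40)
        flux[start:start + 20] -= 0.002  # weak box-shaped dip
    return flux

labels = rng.integers(0, 2, N)
curves = np.array([make_curve(y) for y in labels])

# A stack of layers: each transforms its input and passes the result
# to the next layer, as described at the news conference.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(LENGTH,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of a planet
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(curves, labels, epochs=10, validation_split=0.2, verbose=0)
print(model.evaluate(curves, labels, verbose=0))  # [loss, accuracy]
```

The real search lowered the usual detection threshold and relied on the trained network to screen out the resulting flood of false positives, as Vanderburg explains below.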
The two new planets were found through analysis of Kepler data on 670 stars, a major proof of concept for a method that will doubtless continue to improve, and one that will eventually be applied to the entire range of 150,000 stars in the Kepler and K2 dataset. That opens the possibility of numerous new planetary discoveries from the Kepler mission alone, not to mention what we will find with more advanced AI using the TESS and JWST datasets.
Andrew Vanderburg provides a bit more detail on the method at his CfA page:
Once we had built a neural network, we decided to test it out on some new signals. Using traditional transit-search methods (in particular, the same methods I use to search K2 data), we performed a new search of a handful of systems observed by Kepler (in particular, about 670 systems known already to host multiple planets). Importantly, we allowed this search to very sensitively explore weak signals. Usually, when searching Kepler data, a threshold in signal strength is set, below which weak signals are discarded, so as not to overwhelm the searcher with false positive signals. By lowering this threshold in our new search, we suspected that we might find some new planets, at the expense of a large increase in the number of false positives. But because we have a neural network that can efficiently identify real planets and screen out false positives, we could still efficiently identify new planets.
As to the planets themselves, Kepler-90i, orbiting a G-class star somewhat larger and more massive than the Sun some 2500 light years away, is interesting because it turns the Kepler-90 system into the closest thing we have to a Solar System analog, at least in terms of the number of planets. But the resemblance is hardly complete, for these planets exist in a highly compact system. Have a look at the orbital configuration here.
Image: Kepler-90 is a Sun-like star, but all of its eight planets are scrunched into the equivalent distance of Earth to the Sun. The inner planets have extremely tight orbits with a “year” on Kepler-90i lasting only 14.4 days. In comparison, Mercury’s orbit is 88 days. Consequently, Kepler-90i has an average surface temperature of 800 degrees F. Credit: NASA.
The image below shows an artist’s concept of the planets in question, though the distances are obviously not to scale. The planet sizes, however, are.
Image: The Kepler-90 planets have a similar configuration to our solar system with small planets found orbiting close to their star, and the larger planets found farther away. Credit: NASA.
Kepler-80g has an orbital period close to that of Kepler-90i, about 14 days, and is the sixth planet in its system, whose host star is either a late K-dwarf or an early M-dwarf. Here we find the five already discovered planets orbiting in a resonance chain, with mutual gravitational interactions keeping their orbits aligned. As Andrew Vanderburg pointed out, the orbital period of the new planet could have been predicted, to within about two minutes of the actual measure, based on the mathematical relations of this resonance.
It was heartening to hear at the news conference that the training model used in these detections will be made publicly available. According to Google’s Shallue, about two hours suffice to train the model on a desktop computer using open source machine learning software called TensorFlow, which is produced by Google. When the code becomes available, anyone will be able to use the model on the publicly available Kepler data on their own PCs.
The paper is Shallue & Vanderburg, “Identifying Exoplanets with Deep Learning: A Five Planet Resonant Chain around Kepler-80 and an Eighth Planet around Kepler-90,” accepted for publication in The Astronomical Journal, and for now available here. |
Whether children are physically at school or in the comfort of their homes, a good, conducive learning environment is essential. It's safe to assume that the traditional classroom setup has become obsolete and has, possibly for decades, failed to excite young minds.
Learners today require a space that is meticulously designed to support and encourage thinking. Likewise, a learning environment should be a safe space for trial and error, and it should fully embody the philosophies of a good education. Students prefer a space that pushes them to do more and think more critically while providing a sense of achievement. Learning spaces should also be places where students can interact with their peers and instructor.
Importance of Learning Environment
One of the most creative and critical aspects of teaching is creating a positive learning environment for your students. The broad term "learning environment" encompasses multiple layers of teaching. Here’s how you can create a good learning environment for your students.
Supportive and Positive Learning Culture
A learning environment should be able to make students feel connected. Students who have a sense of accomplishment and camaraderie are less likely to break under pressure.
Support systems such as mentorship programs and peer activities are vital in reassuring students. These can also be used as a medium for learners to check up on their fellow students and guide them in the long run.
Any learning environment should be a safe space for learners. Before expecting students to excel academically, they should first feel physically and mentally at peace.
A safe learning environment makes students feel respected, welcomed, supported, and acknowledged. By building a safe environment, you're also developing a positive learning culture.
Addressing Every Learner's Needs
As an instructor, attending to your student's psychological and physical needs is part of your duty. Competence, personal power, security, and belonging are some needs that, when met, can increase learners' progress.
Not only will this type of learning environment make your students happy, but it will also incentivise their good behaviour and focus on learning.
Learners respond better to positive criticism. Although their work might not be right on the money, as an instructor, you should be able to show your appreciation of their effort and motivate them to know more and do better.
One reason why students drop out of a course is due to bullying, teasing, and harassment. If they feel safe and adequately corrected, students will be able to see their areas of growth without the stigma.
Although students need an effective and conducive structure to learn, a good learning environment should offer more. It should pique your students' interest and stimulate their growth.
Learners will always react positively to positivity. When their learning space provides support, acknowledgment, and camaraderie, it boosts their self-esteem and lets them focus on the task at hand. A good learning environment should make students feel physically safe, mentally stimulated, and emotionally supported. Catering to all of these is no easy task, but it remains an essential aspect of teaching. |
The explosion of the Internet as a medium to enjoy and distribute information, beginning in about 1992, eventually resulted in the United States enacting the Digital Millennium Copyright Act ("DMCA") into law in 1998. This Act had an immediate and significant effect on the Internet and several industries, such as the music industry, and how copyrighted information could be reproduced and distributed online.
A Summary of the Digital Millennium Copyright Act
The DMCA of 1998 was a bill meant to control piracy on the Internet. It was supported by a number of media industries including the music and movie industries, as well as computer software manufacturers and other industries in the business of producing copyrighted content or material that could be distributed over the Internet. The bill was opposed by a number of groups concerned about civil liberties and unnecessary restrictions on information, including scientists, educators and librarians.
A Brief History of the Act
During the early part of the 1990s, as Internet technologies advanced, large "file-sharing" networks popped up online. These networks provided Internet users with the ability to "share" files and information directly from their home PCs. The MP3 music format led to an explosion of this data-sharing activity, and the music industry became concerned that the sharing of copyrighted intellectual property would erode not only the music industry, but also any other industry in which artists produce copyrighted material. The most popular file-sharing applications and protocols over the years included:
- Gnutella and Gnutella2
- BitTorrent (isoHunt, The Pirate Bay and others)
The earliest file sharing activities were not limited to music. Users started distributing software licenses, copyrighted movies and more. As the music and movie industries prompted authorities to crack down on file sharing users, a worldwide debate started regarding the rights of copyright holders versus the freedom of access to information. While in some cases the lines were clearly drawn, in other cases issues of copyright protection were gray and even federal regulators were not sure where the line should be drawn.
The Path to the DMCA
In December 1996, the World Intellectual Property Organization ("WIPO"), a specialized agency within the United Nations, held a diplomatic conference that resulted in a treaty outlining copyright protections within the context of computers and the Internet. The WIPO treaty included details about how the "right of distribution," or communication, of computer programs, photos, and data should be regulated or limited.
As a result of this treaty, the U.S. Congress debated for months over the creation of a new Act that would implement the components of that treaty as well as appease the concerned media industries that were heavily lobbying the government for action. Congress finally passed the DMCA in 1998, and President Clinton signed it into law on October 28 of that year.
Digital Millennium Copyright Act Details
The Digital Millennium Copyright Act actually goes above and beyond the original WIPO treaty and includes provisions that:
- Make it illegal to attempt to bypass computer software licenses, including the sale of devices or software that can illegally copy commercial software
- Require Internet Service Providers to expeditiously remove infringing copyrighted materials from websites or storage hosted by the ISP, once notified
- Require online movie or music distributors to pay "licensing fees" to companies that produce and copyright movies or music
In addition to making certain actions illegal, the Act also included some protections:
- It allows programmers to attempt to crack copyright protections if it's part of an effort to provide products that offer better computer security or better interoperability between software products.
- It protects Internet Service Providers from any liability if they've only inadvertently (due to user activity) transmitted copyrighted information, but did not host it.
- It exempts certain organizations and institutions (like libraries and schools) from the anti-circumvention part of the act in some cases.
- It limits the liability of nonprofits and schools that act as Internet service providers when any staff member or student infringes a copyright.
The Act struck a balance between protecting the rights of artists and protecting major institutions from legal liability. In the end, the Act accomplished what it was created to accomplish, and it significantly changed the hosting and transmission of information over the Internet.
The Aftershock of the DMCA
The effects of the DMCA on the Internet were immediately noticeable. Initially, any file-sharing service that depended on a central server for the storage of copyrighted files was shut down by federal authorities. However, software developers and hackers lived up to their anti-authority reputations by producing technology, like BitTorrent, that allowed for the sharing of files through "peer-to-peer" networks. In such a configuration, users host files on their own PCs and share them out to other home PC users, creating a giant network of shared files with so many nodes that it is virtually impossible for the federal government to prosecute everyone. These networks are not in themselves illegal, but it is illegal for users to share out copyrighted files. That still hasn't stopped a significant portion of users from doing so.
Other, more positive effects of the Act included:
- The creation of many very large online "stores" where Internet users can purchase and download movies and music on-demand.
- Important legal protections for writers and graphic designers who create copyrighted content for use on the Internet.
- Important legal protection for schools and non-profit organizations from major lawsuits by the powerful music and movie industries.
- Important legal protection for ISPs that can't easily control what their users do or transmit once connected to the Internet. |
Through gene therapy, scientists at Ariad Pharmaceuticals in Cambridge, Massachusetts, have come up with a way to store insulin in cells that can be released only when a pill is taken. Published in the February issue of Science, the findings hold promise not just for the treatment of diabetes, but for other medical problems which require a timed-release technique.
In the experiment, the researchers inserted insulin-producing genes and a protein into cells. Once inside the cells, the genes and the protein stick together and form clumps that are too big to leave the cell. The cells were then injected into the muscles of diabetic mice. Some of the mice were fed a drug that broke up the protein clumps and released the insulin into the bloodstream. The blood glucose level in these mice dropped as a result. According to lead researcher Tim Clarkson, the amount of insulin released is directly related to the amount of drug given, so a larger dose of the drug allows for a larger release of insulin. When no drug is taken, the insulin remains in the cell, causing no toxicity or adverse effects. |
Suppose there are two numbers, a and b. If a number a divides another number b exactly, we say that a is a factor of b and that b is a multiple of a.
Highest Common Factor (HCF) or Greatest Common Divisor (GCD)
The greatest common divisor (GCD), also known as the greatest common factor (GCF) or highest common factor (HCF), of two or more non-zero integers is the largest positive integer that divides each of the numbers without a remainder. For example, the GCD of 8 and 12 is 4.
The HCF of two or more than two numbers is the greatest number that divides each of them exactly. There are two methods of finding the HCF of a given set of numbers:
1. Factorisation Method
In this method, express each of the given numbers as a product of prime factors. The product of the least powers of the common prime factors gives the HCF.
2. Division Method
Divide the larger number by the smaller one. Now divide the divisor by the remainder. Repeat this process of dividing the preceding divisor by the last remainder obtained, until the remainder is zero. The last divisor is the required HCF.
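The division method is exactly the Euclidean algorithm. A minimal sketch in Python (for illustration; Python's standard library already offers math.gcd):

```python
def hcf(a, b):
    """HCF by the division method (Euclidean algorithm)."""
    while b:
        a, b = b, a % b  # divide, then divide the divisor by the remainder
    return a

def hcf_many(numbers):
    """HCF of more than two numbers: fold pairwise, as described above."""
    result = numbers[0]
    for n in numbers[1:]:
        result = hcf(result, n)
    return result

print(hcf(8, 12))                 # 4
print(hcf_many([110, 154, 242]))  # 22
```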
Finding the HCF of more than two numbers: H.C.F. of [(H.C.F. of any two) and (the third number)] gives the HCF of three given numbers.
Least Common Multiple (LCM)
The lowest common multiple (LCM), also called the least common multiple or smallest common multiple, of two rational numbers a and b is the smallest positive rational number that is an integer multiple of both a and b. The definition can be generalised to more than two numbers.
The least number which is exactly divisible by each one of the given numbers is called their LCM.
1. Factorisation Method of Finding LCM
Resolve each of the given numbers into a product of prime factors. The LCM is then the product of the highest powers of all the prime factors that appear.
2. Common Division Method (Short-cut Method) of Finding LCM
Arrange the given numbers in a row in any order. Divide by a number which divides exactly at least two of the given numbers and carry forward the numbers which are not divisible. Repeat the above process till no two of the numbers are divisible by the same number except 1. The product of the divisors and the undivided numbers is the required LCM of the given numbers.
Product of two numbers = Product of their HCF and LCM
a x b = HCF x LCM
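This identity gives a direct way to compute the LCM once the HCF is known. A quick check in Python, reusing the hcf function from the sketch above:

```python
def lcm(a, b):
    """LCM via the identity a * b = HCF * LCM."""
    return a * b // hcf(a, b)

a, b = 8, 12
assert a * b == hcf(a, b) * lcm(a, b)  # 96 == 4 * 24
print(lcm(a, b))  # 24
```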
Two numbers are said to be co-prime if their HCF is 1. To find the LCM of co-prime numbers, just multiply them; there is no need to find factors.
HCF and LCM of Fractions
HCF = (HCF of Numerators) / (LCM of Denominators)
LCM = (LCM of Numerators) / (HCF of Denominators)
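The fraction formulas can be checked with Python's fractions module; hcf and lcm are the helper functions defined in the sketches above:

```python
from fractions import Fraction

def hcf_fractions(fracs):
    """HCF of fractions = HCF of numerators / LCM of denominators."""
    num = fracs[0].numerator
    den = fracs[0].denominator
    for f in fracs[1:]:
        num = hcf(num, f.numerator)
        den = lcm(den, f.denominator)
    return Fraction(num, den)

print(hcf_fractions([Fraction(2, 3), Fraction(4, 9)]))  # 2/9
```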
Applications of HCF and LCM
1. Find the Greatest Number that will exactly divide x, y, z.
Required number = H.C.F. of x, y, and z (greatest divisor).
2. Find the Greatest Number that will divide x, y and z leaving remainders a, b and c respectively.
Required number (greatest divisor) = H.C.F. of (x – a), (y – b) and (z – c).
3. Find the Least Number which is exactly divisible by x, y and z.
Required number = L.C.M. of x, y and z (least dividend).
4. Find the Least Number which when divided by x, y and z leaves the remainders a, b and c respectively.
Then, it is always observed that (x – a) = (y – b) = (z – c) = K (say).
∴ Required number = (L.C.M. of x, y and z) – K.
5. Find the Least Number which when divided by x, y and z leaves the same remainder ‘r’ each case.
Required number = (L.C.M. of x, y and z) + r.
6. Find the Greatest Number that will divide x, y and z leaving the same remainder in each case.
Required number = H.C.F. of (x – y), (y – z) and (z – x). (A few of these rules are sketched in code below.)
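As promised above, here is a sketch of a few of these rules as Python functions (rules 1, 3 and 4), again reusing hcf and lcm:

```python
from functools import reduce

def greatest_exact_divisor(nums):          # Rule 1
    return reduce(hcf, nums)

def least_exactly_divisible(nums):         # Rule 3
    return reduce(lcm, nums)

def least_with_remainders(nums, rems):     # Rule 4
    k = nums[0] - rems[0]                  # the common difference K
    assert all(n - r == k for n, r in zip(nums, rems))
    return reduce(lcm, nums) - k

print(least_with_remainders([4, 5, 6], [2, 3, 4]))  # 58
```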
Example 1: What is the greatest number which exactly divides 110, 154 and 242?
The required number is the HCF of 110, 154 and 242.
110 = 2 × 5 × 11
154 = 2 × 7 × 11
242 = 2 × 11 × 11
∴ HCF = 2 × 11 = 22
Example 2: What is the greatest number which, when it divides 3 consecutive odd numbers, produces a remainder of 1 in each case?
If x, y, z be 3 consecutive odd numbers, then the required number will be the HCF of x – 1, y – 1 and z – 1.
Since x-1, y-1 and z-1 are 3 consecutive even integers, their HCF will be 2.
So, the answer is 2.
Example 3: What is the highest 3 digit number, which is exactly divisible by 3, 5, 6 and 7?
The least number which is exactly divisible by 3, 5, 6, and 7 is LCM(3, 5, 6, 7) = 210.
So, all the multiples of 210 will be exactly divisible by 3, 5, 6 and 7.
So, the greatest such 3-digit number is 840 (210 × 4).
Example 4: At a farewell party, some students are posing for a photograph. If the students stand 4 per row, 2 students are left over; if they stand 5 per row, 3 are left over; and if they stand 6 per row, 4 are left over. If the total number of students is greater than 100 and less than 150, how many students are there?
If N is the number of students, it is clear from the question that N, when divided by 4, 5, and 6, produces remainders of 2, 3, and 4 respectively.
Since (4 – 2) = (5 – 3) = (6 – 4) = 2, the least possible value of N is LCM(4, 5, 6) – 2 = 60 – 2 = 58.
But, 100 < N < 150.
So, the next possible value is 58 + 60 = 118
Example 5: There are some students in a class. Mr X brought 130 chocolates, distributed them equally to the students, and was left with some chocolates. Mr Y brought 170 chocolates, distributed them equally, and was left with the same number of chocolates as Mr X. Mr Z brought 250 chocolates, did the same thing, and was left with the same number again. What is the maximum possible number of students in the class?
The question can be restated as: what is the greatest number which, when it divides 130, 170 and 250, gives the same remainder in each case? That is the HCF of (170 − 130), (250 − 170) and (250 − 130), i.e. HCF(40, 80, 120) = 40.
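These worked examples can be verified mechanically. A quick sketch, assuming Python 3.9 or later so that math.gcd and math.lcm accept more than two arguments:

```python
import math

# Example 1: greatest number exactly dividing 110, 154 and 242
print(math.gcd(110, 154, 242))                        # 22

# Example 3: highest 3-digit multiple of LCM(3, 5, 6, 7) = 210
step = math.lcm(3, 5, 6, 7)
print((999 // step) * step)                           # 840

# Example 4: the unique N with 100 < N < 150
print([n for n in range(101, 150)
       if n % 4 == 2 and n % 5 == 3 and n % 6 == 4])  # [118]

# Example 5: HCF of the pairwise differences
print(math.gcd(170 - 130, 250 - 170, 250 - 130))      # 40
```

All four checks agree with the worked answers above. |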
Imaging techniques enable neuroscientists to learn about the structure and function of cells in the nervous system. Here, Dr Zoltán Rusznák shares some captivating images of the brain and how they were made.
Neurons are the building block cells of the brain and spinal cord, communicating with each other through synapses to regulate nervous system function. Relating the shape, size, and location of neurons to their function is important in understanding mechanisms in brain health and disease. However, because neurons are small, three-dimensional, and embedded among many other cells in the nervous system, special techniques are required to be able to see them. The following pictures show neurons in the cochlear nucleus, which is the part of the brain that decodes sound information from the ear.
Stacking up to localise sound
What’s in this picture? Globular bushy cells are neurons in the cochlear nucleus that act as sophisticated timing devices. They measure tiny delays in how quickly a sound reaches both ears, which is the basis of how we localise the source of a sound.
How was it made? The picture on the left was taken with a camera attached to a microscope. It shows a round bushy cell body in the middle of a single, very thin slice of brain tissue – a slice only 0.06% of the thickness of a grain of salt! The right-hand picture is a stack of 40 images taken from successive slices of the same piece of brain tissue – imagine a stack of salami coming out of a deli slicer. The image stack results in a view of much greater depth, so that a second bushy cell body becomes visible, as well as a detailed view of the synaptic nerve terminals, indicated by the several bright green structures on the cell bodies.
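For readers who want to experiment, the "stack of salami" idea can be mimicked with a few lines of NumPy. This is a hedged sketch, not the laboratory's actual software: random data stands in for a hypothetical stack of 40 grayscale slice images, and the maximum-intensity projection shown is one common way such stacks are flattened into a single, deeper-looking view.

```python
import numpy as np

# Hypothetical stack: 40 optical slices of 512 x 512 pixels each.
rng = np.random.default_rng(1)
stack = rng.random((40, 512, 512))

# Maximum-intensity projection: for every (x, y) pixel, keep the
# brightest value found anywhere along the depth axis, so structures
# from all 40 slices appear together in one image.
projection = stack.max(axis=0)
print(projection.shape)  # (512, 512)
```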
Putting the puzzle together
What’s in this picture? ‘Giant neurons’ of the cochlear nucleus receive sound information from the ear and help to localise the source of sound from a single ear.
How was it made? These pictures are also made from stacks of single images, like in the previous picture. However, since the giant neurons are so, well, giant, many adjacent image stacks have to be assembled like a puzzle in order to capture the many branching nerve endings. Each square in the pictures corresponds to a single field of view of the microscope.
Merging muscarinic receptors
What’s in this picture? These small round neurons in the cochlear nucleus are called granule cells. Neurons have proteins on their surface called receptors that respond to chemical messengers and transmit signals throughout the neuron. Sometimes we want to know the specific type and location of receptors that a messenger binds to in order to transmit its signal.
How was the picture made? Different fluorescent dyes are used to distinguish the receptor from the rest of the neuron. The left-most image shows green, bead-like dots that indicate the presence of a particular type of receptor called a muscarinic M3 receptor in a slice of brain tissue. The middle picture is taken from the very same piece of brain tissue, but the tissue is instead stained blue to define the nuclei of the granule cells. When the green and blue images are merged (right-hand picture), the green dots are shown to be surrounding the blue cell nuclei. This tells us that the receptors are located on the surface of the granule cells, and suggests that these receptors mediate the effects of certain neurochemical messengers in the cochlear nucleus. This information helps determine how hearing works and what might go wrong in auditory disorders. |
At the time that Franco Modigliani and Merton Miller (M&M) did their analysis there were four schools of thought as to what determines the value of the equity of a corporation:
- the present value of its expected future dividends
- the present value of its expected future earnings
- the present value of its expected future cash flows
- the capitalized value of its current earnings plus the present value of growth opportunities (PVGO)
Modigliani and Miller established that, properly interpreted, each of these approaches will give the same total equity value for the corporation. "Properly interpreted" means that dividends must be taken as "net dividends;" i.e., dividends paid out minus funds raised from the sale of new stock. Earnings must be taken as net earnings, earnings minus an imputed interest on cumulative investment. Cash flow must take into account the cash outflow of investment (free cash flow).
In addition to the four approaches that M&M brought together, there is a fifth, more fundamental approach: the value of a corporation is the sum of the net present values of all its worthwhile projects.
It is important to note that their result is for the total equity in the corporation, as opposed to the value of a single share.
The cash flow of a corporation can go for dividends or for investment. Investment could also be covered by the sale of new stock. This means that

Cash Flow + Sale of New Stock = Dividends + Investment,

and hence

Cash Flow − Investment = Dividends − Sale of New Stock.
Cash Flow minus Investment is called Free Cash Flow, and Dividends - Sale of New Stock is called Net Dividends.
Thus Free Cash Flow is the same as Net Dividends and therefore the value of the equity in a corporation is equal to the present value of future free cash flows and also the present value of net dividends.
M&M established that if one computes the cumulative investment of a corporation and deducts an interest from earnings based on this cumulative investment the result, which they label Net Earnings, is equal to both Net Dividends and Free Cash Flow. Therefore the equity value of a corporation is equal to the present value of all future Net Earnings.
M&M also established that if the present value of growth opportunities is calculated as the present value of the earnings of investment projects which are in excess of the rate of discount r, then the capitalized value of the current earnings plus the present value of the growth opportunities is equal to the other three methods of determining the equity value of a corporation. In the form of an equation, this says that the equity value of the corporation, P, is equal to

P = E/r + PVGO,

where E is the current earnings.
So M&M's analysis revealed that there is no conflict between the four schools of thought on the valuation of the equity in a corporation. The relationship also applies for a single share of the corporation.
The present value of the free cash flows is found by totalling up the cash flows of the separate projects and then computing their present value. Since net dividends are identically equal to free cash flows, it follows that the present value of the net dividends must be equal to the value of the corporation.
There is a fifth view of the valuation of the equity in a corporation; i.e., the sum of the net present values of all of its investments. This is compatible with the other four valuations.
Thus, the total value of the equity in a corporation should be equal to:
- the present value of its net dividends
- the present value of its free cash flows
- the present value of its net earnings
- the capitalized value of its current earnings plus the present value of its growth opportunities
- the sum of the net present values of all of its investment projects
The M&M analysis starts from the proposition that the stock price at any time t, p_t, is equal to the present value of the next dividend payment and the price of the stock at the time of that payment; i.e.,

p_t = (d_{t+1} + p_{t+1}) / (1 + r)    (1)
This is equivalent to the condition that the price of the stock is equal to the present value of all future dividends.
In this analysis a lower case letter will denote the quantity per share and the upper case letter the quantity for the whole corporation. Let N_t be the number of shares outstanding at time t. The value of the equity in the corporation at time t, P_t, is just N_t·p_t. Thus if Equation (1) is multiplied by N_t one obtains

P_t = (D_{t+1} + N_t·p_{t+1}) / (1 + r)    (2)
The quantity N_t·p_{t+1} can be expressed as

N_t·p_{t+1} = N_{t+1}·p_{t+1} − (N_{t+1} − N_t)·p_{t+1} = P_{t+1} − (N_{t+1} − N_t)·p_{t+1}
The term p_{t+1}(N_{t+1} − N_t) is the number of new shares sold between time t and time t+1, valued at the price of the stock at t+1. This may be taken to be the value of new stock sold in year t+1, S_{t+1}. Thus equation (2) reduces to

P_t = (D_{t+1} − S_{t+1} + P_{t+1}) / (1 + r)    (3)
This equation implies also that

P_{t+1} = (D_{t+2} − S_{t+2} + P_{t+2}) / (1 + r)
When this value for P_{t+1} is substituted into the equation for t the result is:

P_t = (D_{t+1} − S_{t+1}) / (1 + r) + (D_{t+2} − S_{t+2}) / (1 + r)^2 + P_{t+2} / (1 + r)^2
If this process is continued the result is

P_t = Σ_{s=1}^{∞} (D_{t+s} − S_{t+s}) / (1 + r)^s
This means that the value of the corporation is equal to the present value of all future Net Dividends, dividends paid out less the funds brought in by the sale of new stock.
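To see the Free Cash Flow / Net Dividends equivalence numerically, here is a minimal sketch in Python with made-up yearly figures. Because the budget identity forces Cash Flow − Investment to equal Dividends − Sale of New Stock in every year, discounting either series gives the same equity value.

```python
# Made-up yearly figures for a toy corporation.
r = 0.10
cash_flow  = [100, 110, 121]
investment = [ 40,  44,  48]
dividends  = [ 70,  77,  84]

# Sale of new stock is whatever is needed to satisfy the budget identity:
# Cash Flow + New Stock = Dividends + Investment.
new_stock = [d + i - c for c, i, d in zip(cash_flow, investment, dividends)]

pv = lambda flows: sum(f / (1 + r) ** (t + 1) for t, f in enumerate(flows))

free_cash_flows = [c - i for c, i in zip(cash_flow, investment)]
net_dividends   = [d - s for d, s in zip(dividends, new_stock)]
print(pv(free_cash_flows), pv(net_dividends))  # identical by construction
```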
So M&M's analysis revealed that there is no conflict between the four schools of thought on the valuation of the equity in a corporation. |
Language is comprised of sounds, words, phrases and sentences. At all levels, language is rule-based. At the sound level, phonology refers to the rules of the sound system and the rules of sound combination. At the word level, morphology refers to the structure and construction of words. Morphology skills require an understanding and use of the appropriate structure of a word, such as word roots, prefixes, and affixes called morphemes.
Syntax refers to the rules of word order and word combination used to form phrases and sentences. Solid syntactic skills require an understanding and use of correct word order and organization in phrases and sentences, and also the ability to use increasingly complex sentences as language develops. At the word level, children with morphological deficits may not correctly use plural forms or verb tenses.
At the phrase or sentence level, children with syntactic deficits might use incorrect word order, leave out words, or use a limited number of complex sentences, such as those that contain prepositional clauses. Children with disorders of motor speech control are likely to have concomitant difficulties with morphology related to impaired speech control.
Children will work on developing an understanding and use of age-appropriate morphemes and syntactic structures during interactive therapy activities. For children with co-occurring disorders of motor speech control, target words and phrases are developed both to improve motor speech control and to improve the use of grammatical morphemes and syntax. For more information on the development of morphology and syntax, please visit Speech Language Therapy.
How does difficulty with morphology and syntax present in a child? A child with morphology and syntax deficits may demonstrate inconsistent or incorrect word order when speaking and may use only a limited number of grammatical markers.

Typical milestones in the development of morphology and syntax include:
- By age twenty-four months: negative phrases such as "Mommy no go" appear, and rising intonation is used to indicate a question.
- By age thirty-six months: overgeneralization of past-tense verb forms is in place, and present-tense auxiliaries have emerged (e.g., "Daddy is eating"; "Bunny does hop").
- By age forty-two months: auxiliary verbs are ordered correctly in questions and negatives (e.g., "What is he doing?"); grammatical markers have emerged; and a variety of early complex sentence types emerge, including compound sentences (e.g., "My shirt is blue and green"), full prepositional clauses in sentences (e.g., "I put away the toys in the toy box"), and simple infinitives ("I want to draw").
- By age forty-eight to sixty months: passive sentences are understood and used.

Reference: Paul, R. Language Disorders from Infancy through Adolescence: Assessment and Intervention, 2nd Edition. |
While many may knock giant wind turbines as eyesores, researchers from Ames National Laboratory and the University of Colorado have conducted a study that indicates wind energy could help with crop development. The team’s preliminary research indicates that wind turbines aid in the growth of certain crops due to measurable effects upon the climate surrounding the fields.
Speaking about their discovery, Ames Laboratory associate and agricultural meteorology expert Gene Takle said, “We’ve finished the first phase of our research, and we’re confident that wind turbines do produce measurable effects on the microclimate near crops.”
Takle, who is also a professor of agricultural meteorology and director of the Climate Science Program at Iowa State University, said that the team’s research revealed that the slow-moving blades of the wind turbine not only generated electricity, but also channelled air downwards. Across farmland, this essentially had the effect of ‘bathing’ crops in a swifter and cooler air current.
While nothing is set in stone, and the study has not definitively established whether wind turbines will increase crop health and yield potential, the findings remain an interesting discovery.
“The turbulence resulting from wind turbines may speed up natural exchange processes between crop plants and the lower atmosphere,” Takle said. “For example, the increased flow of air could help speed up the natural heat exchange and allow the crops to stay slightly cooler on hot days and slightly warmer at night. In this case, we anticipate turbines’ effects are good in the spring and fall because they would keep the crop a little warmer and help prevent a frost. Wind turbines could possibly ward off early fall frosts and extend the growing season.”
The research team also believes there could be other benefits from having wind turbines near farmland. Air from the blades could also reduce moisture levels and in turn, decrease the time in which fungi and toxins can grow on plant leaves. It would also keep crops dryer and reduce the need for artificial drying after the crops are harvested – an energy intensive process.
“We anticipate the impact of wind turbines to be subtle. But in certain years and under certain circumstances the effects could be significant,” said Takle. “When you think about a summer with a string of 105-degree days, extra wind turbulence from wind turbines might be helpful. If turbines can bring the temperature down below 100 degrees that could be a big help for crops.” |
10 Interesting Things About Glaciers
NASA keeps a close eye on glaciers.
NASA satellites and aircraft are constantly above Earth, and they're especially monitoring icy regions. For example, the IceBridge mission uses instruments aboard special airplanes to measure yearly changes in the thickness of glaciers and other ice. And data collected by the GRACE satellite helps scientists understand the relationship between melting glaciers and sea level rise.
Really old snow can form a glacier.
Glaciers are huge, thick masses of ice. They form when lots of snow falls in one location for many years. Over time–decades or centuries–the snow on the bottom gets squished down by the weight of falling new snow. This compressed snow becomes ice, forming a glacier.
Glaciers are really, really big.
Glaciers can grow to be dozens or even hundreds of miles long. The world's largest glacier is the Lambert-Fisher Glacier in Antarctica. It is approximately 250 miles (400 kilometers) long and 60 miles (100 kilometers) wide. Even small glaciers are about the size of a football field!
Glaciers hold a lot of water.
Fresh water is important for lots of living things on Earth. Glaciers and ice sheets–layers of ice that cover a large region of land–hold about 68 percent of the world's fresh water. In fact, the water in your glass right now could have once been inside a glacier!
Glaciers can flow like rivers.
Gravity causes the ice inside glaciers to change shape and move. Glaciers flow from higher ground to lower ground. However, they flow so slowly that if you were standing next to a glacier, you probably wouldn't notice it was moving. In cold and snowy climates, glaciers can flow all the way down to the sea. Sometimes pieces of these glaciers–called icebergs–can break off into the ocean. NASA's OMG (Oceans Melting Greenland) mission studies the many ways that ocean waters are affecting marine glaciers.
Glaciers carry stuff as they move.
Glaciers are usually made of mostly ice, but they also pick up particles as they move. The particles within a glacier can range in size. Some of these particles are massive boulders, while others are tiny grains, called rock flour.
You can tell where a glacier has been.
As glaciers move, they rub against the ground below. Giant glaciers can carve deep grooves in the Earth, creating large valleys. Also, particles within the glacier act as sandpaper scraping against rock. Geologists use these deep valleys and scrapes to tell where glaciers traveled in the past.
Glaciers can have a bluish tint.
Snow is white and ice is often clear, but glaciers are sometimes blue. Why? Snow and ice become packed tightly under the weight of a glacier. Over time, this makes the snow very dense and forces out any air bubbles. This change in the structure of the ice crystals causes the dense ice in the glacier to absorb red light and reflect blue light.
Glaciers aren't just at the North and South Pole.
Glaciers cover about 10 percent of the world's total land area, and they aren't all at the poles. Glaciers can actually be found on every continent except Australia. Some glaciers in Ecuador and Mexico are even found near the equator!
Glaciers help keep us comfortable.
When you wear dark clothing on a hot day, your clothes absorb the sun's heat and make you warmer. Ocean water and land are dark. They become warm as they absorb energy from the sun and trap heat on Earth. Because ice is white or pale blue, it can help to reflect sunlight–and heat–back into space. |
Hegelianism is a tradition of philosophy which takes its defining characteristics from the philosophy of Georg Wilhelm Friedrich Hegel (1770 – 1831), which can be summed up by a favorite motto of Hegel's: "the rational alone is real," meaning that all reality is capable of being expressed in rational categories. All of Hegel’s work was an effort to synthesize the conflicting religious and cultural elements of Christian tradition, Greek classicism, the Enlightenment and the Romantic movement into a meaningful, coherent unity. He did this by replacing Aristotle’s concept of static and constant being with the idea that all being is constantly in motion and constantly developing through a three-stage process popularly known as thesis, antithesis, and synthesis (Fichte and Schelling's formulation; Hegel's own formulation is: "in itself" (An-sich), "out of itself" (Anderssein), and "in and for itself" (An-und-für-sich)). These three stages were found throughout the whole realm of thought and being, from the most abstract logical process up to the most complicated and organized human activity, the historical succession of political and philosophical systems.
Shortly after Hegel’s death, his school diverged into three currents of thought: the conservative Rightist Hegelians who developed his philosophy along lines compatible with Christian teachings and conservative politics; the “Young Hegelians,” or leftists who took up the theory of dialectic and historical progression and developed schools of materialism, socialism, rationalism, and pantheism; and the centrists who concentrated on logic and the philosophical system itself, which they diffused throughout the Western world. In Britain, Hegelianism strongly influenced the rise of British idealism.
Hegel was born in Stuttgart, Germany, in 1770 and died in Berlin, Germany, in 1831. After studying theology at Tübingen he devoted himself successively to the study of contemporary philosophy and to the cultivation of the Greek classics. After about seven years spent as a private tutor in various places, he began his career as a university professor in 1801 at Jena. After an intermission of a year spent as a newspaper editor at Bamberg, and a short term as rector of a gymnasium at Nuremberg, he was made professor of philosophy at Heidelberg in 1816, and at the University of Berlin in 1818. Hegel's principal works are his "Logic" (Wissenschaft der Logik, 1816), his "Phenomenology of Spirit" (Phänomenologie des Geistes, 1807), his "Encyclopedia" (Encyklopädie der philosophischen Wissenschaften, 1817), and the Philosophy of History (Vorlesungen über die Philosophie der Geschichte, 1820). His works were collected and published by Rosenkranz in 19 vols., 1832-1842, second edition 1840-1854.
All of Hegel’s thinking was concerned with the apparent conflicts he observed in religion and politics. As a seminary student, Hegel found the souls of students of theology and philosophy disrupted by the contradictions between rationalism and supernatural religion, skepticism and faith. The political situation generated by the French revolution was in sharp contrast to the tyranny of the German princes, and the democratic beginnings of the British constitution. Hegel was also witness to the conflict between the tradition of orthodox Protestantism and its rationalist critics in Enlightenment Europe. He began his work when classicism predominated in the intellectual world of Europe, and his early political writings described the ideal of a Greek “polis” where politics and religion were combined and individuals participated democratically in both. European culture soon entered into the period of Romanticism, and this too was embraced by Hegel. All of Hegel’s work was an effort to synthesize these conflicting religious and cultural elements of Christian tradition, Greek classicism, the Enlightenment and the Romantic movement into a meaningful, coherent unity. He did this with the radical concept that, contrary to Aristotle’s portrayal of the nature of being as static and constant, all being is constantly in motion and constantly developing through a three-stage process of thesis, antithesis, and synthesis.
This theory of triadic development (Entwicklung) was applied to every aspect of existence, with the hope that philosophy would not contradict experience, but provide an ultimately true explanation for all the data collected through experience. For example, in order to know what liberty is, we take that concept where we first find it, in the unrestrained action of the savage, who does not feel the need to repress any thought, feeling, or tendency to act. Next, we find that, in order to co-exist with other people, the savage has given up this freedom in exchange for its opposite, the restraint of civilization and law, which he now regards as tyranny. Finally, in the citizen under the rule of law, we find the third stage of development, liberty in a higher and a fuller sense than that in which the savage possessed it, the liberty to do and to say and to think many things which were beyond the power of the savage. In this triadic process, the second stage is the direct opposite, the annihilation, or at least the sublation, of the first; and the third stage is the first returned to itself in a higher, truer, richer, and fuller form.
Hegel termed the three stages: being "in itself" (An-sich), being "out of itself" (Anderssein), and being "in and for itself" (An-und-für-sich).
These three stages are found succeeding one another throughout the whole realm of thought and being, from the most abstract logical process up to the most complicated concrete activity of organized mind, the historical succession of political systems or the development of systems of philosophy.
In logic, which Hegel claimed was really metaphysics, the three-stage process of development is applied to reality in its most abstract form. According to Hegel, logic deals with concepts robbed of their empirical content; logic is simply an examination of the process without the contents. Hegel's study of reality begins with the logical concept of being. Hegel declared that being is essentially dynamic, tending by its very nature to pass over into nothing, and then to return to itself in the higher concept of becoming. Aristotle had supposed that there is nothing more certain than that being is identical with itself, that everything is what it is. Hegel added that it is equally certain that being tends to become its opposite, nothing, and that both are united in the concept of becoming. Aristotle saw a table as a table. Hegel saw as the whole truth that the table was once a tree, it is now a table, and one day it "will be" ashes. Thus becoming, not being, is the highest expression of reality. It is also the highest expression of thought, because we attain the fullest knowledge of a thing only when we know what it was, what it is, and what it will be, the history of its development.
At the most basic level "being" and "nothing" develop into the higher concept “becoming;” farther up the scale of development, “life” and “mind” appear as the third steps of the process and are in turn developed into higher forms of themselves. All of these are stages of “becoming.” The only thing always present is the process itself (das Werden). We may call the process by the name of "spirit" (Geist) or "idea" (Begriff). We may even call it God, because at least in the third term of every triadic development the process is God.
In considering the process of spirit, God, or the idea, it becomes clear that the idea must be studied (1) in itself, the subject of logic or metaphysics; (2) out of itself, in nature, the subject of the philosophy of nature; and (3) in and for itself, as mind, the subject of the philosophy of mind (Geistesphilosophie).
Philosophy of nature takes up the study of the “process” or “idea” at the point where its development enters into “otherness” in nature, the point where it enters into the substantial, material world. Hegel referred to nature as “estranged spirit” and saw the whole world process as a process of divine self-estrangement. By “estranged” Hegel did not mean “annihilated” or “altered.” In nature the “idea” has lost itself, because it has lost its unity and is splintered into a thousand material fragments. But the loss of unity is only apparent, because in reality the “idea” has merely concealed its unity. Examined philosophically, nature reveals itself to us in a myriad of successful attempts of the idea to emerge out of the state of otherness, and present itself as a better, fuller, richer idea, namely, “spirit,” or “mind.” Mind is, therefore, the goal of nature and also the truth of nature. Whatever is in nature is realized in a higher form in the mind which emerges from nature.
Hegel expressed the synthesis of the divine and the human in the doctrine of the absolute and the relative “Geist” (“mind” or “spirit”). “Geist” translates to “esprit” in French, “ruach” in Hebrew, “spiritus” in Latin, and “pneuma” in Greek, but in English this word has been more or less lost, partly due to British empiricism and partly to Descartes’ division of man into intellect and body. In English Hegel’s phenomenology of “Geist” has been translated as phenomenology of “mind,” but in this case the word “mind” implies an element of spiritual power, and not simply intellectual movement.
The philosophy of mind begins with the consideration of the individual, or subjective, mind. It is soon perceived, however, that individual, or subjective, mind is only the first stage, the "in-itself" stage, of mind. The next stage is objective mind, or mind objectified in law, morality, and the State. This is mind in the condition of "out-of-itself." There follows the condition of absolute mind, the state in which mind rises above all the limitations of nature and institutions, and is subjected to itself alone in art, religion, and philosophy. The essence of mind is freedom, and its development must consist in breaking away from the restrictions imposed on it in its “otherness” by nature and human institutions.
Hegel's philosophy of the State, his theory of history, and his account of absolute mind are the most interesting portions of his philosophy and the most easily understood. The State, he says, is mind objectified. The individual mind, which, on account of its passions, its prejudices, and its blind impulses, is only partly free, subjects itself to the yoke of necessity, the opposite of freedom, in order to attain a fuller realization of itself in the freedom of the citizen. This yoke of necessity is first met with in the recognition of the rights of others, next in morality, and finally in social morality, of which the primal institution is the family. Aggregates of families form civil society, which, however, is but an imperfect form of organization compared with the State. The State is the perfect social embodiment of the idea, and stands, in this stage of development, for God Himself. The State, studied in itself, furnishes for our consideration constitutional law. In relation to other States it develops international law; and in its general course through historical vicissitudes it passes through what Hegel calls the "Dialectics of History."
Hegel teaches that the constitution is the collective spirit of the nation and that the government is the embodiment of that spirit. Each nation has its own individual spirit, and the greatest of crimes is the act by which the tyrant or the conqueror stifles the spirit of a nation. War, according to Hegel, is an indispensable means of political progress, a crisis in the development of the idea which is embodied in the different States; out of this crisis the better State is certain to emerge victorious. Historical development is, therefore, a rational process, since the State is the embodiment of reason as spirit. All the apparently contingent events of history are, in reality, stages in the logical unfolding of the sovereign reason which is embodied in the State. Passions, impulse, interest, character, personality are all either the expression of reason or the instruments which reason molds for its own use. Historical events should therefore be understood as the stern, reluctant working of reason towards the full realization of itself in perfect freedom. Consequently, we must interpret history in purely rational terms, and sort the succession of events into logical categories.
The widest view of history reveals three important stages of development: Oriental monarchy (the stage of oneness, of suppression of freedom); Greek democracy (the stage of expansion, in which freedom was lost in unstable demagogy); and Christian constitutional monarchy (which represents the reintegration of freedom in constitutional government).
Even in the State, mind is limited by subjection to other minds. There remains the final step in the process of the acquisition of freedom, namely, that by which absolute mind in art, religion, and philosophy subjects itself to itself alone. Art is the mind’s intuitive contemplation of itself as realized in the art material; and the development of the arts has been conditioned by the ever-increasing "docility" with which the art material lends itself to the actualization of mind or the idea.
In religion, mind feels the superiority of itself to the particularizing limitations of finite things. In the philosophy of religion, as in the philosophy of history, there are three great moments: Oriental religion, which exaggerated the idea of the infinite; Greek religion, which gave undue importance to the finite; and Christianity, which represents the union of the infinite and the finite.
Last of all, absolute mind, as philosophy, transcends the limitations imposed on it even in religious feeling, and, discarding representative intuition, attains all truth under the form of reason. Whatever truth there is in art and in religion is contained in philosophy, in a higher form, and free from all limitations. Philosophy is, therefore, "the highest, freest and wisest phase of the union of subjective and objective mind," and the ultimate goal of all development.
No other philosophical school could compete with Hegel’s system in its rigorous formulation, its richness of content and its attempt to explain the totality of culture. For more than thirty years, it brought together the best minds of German philosophy. As its influence spread, Hegel’s thought provoked increasingly lively reactions, and was re-articulated numerous times as it mingled with contrasting philosophical positions.
There are four distinct stages in the historical development of Hegelianism. The first was the immediate crisis of the Hegelian school in Germany from 1827 through 1850, when the school was always involved in polemics against its adversaries, and divided into three currents: the Hegelian Rightists, the Young Hegelians, and the centrists. During the second phase, usually referred to as Neo-Hegelianism, from 1850 to 1904, when Hegelianism diffused into other countries, the ideas of the centrists were predominant and the primary interest was in logic and a reform of the dialectic. The third stage, a renaissance of Hegelianism, began in Germany during the first decade of the twentieth century, after Wilhelm Dilthey discovered unpublished papers from Hegel’s youth. It stressed a critical reconstruction of the genesis of Hegel’s thought, with special attention to the Enlightenment and Romanticist influences and to possible irrationalistic attitudes. This phase was characterized by the publication of original texts and historical studies, and by an interest in philology.
After World War II, the revival of Marxist studies in Europe revived many of the polemical themes of the school’s early years, and brought about renewed interest in Hegel’s influence on Marx’s interpretations of political and social problems.
Early Hegelianism passed through three periods: the polemics during the life of Hegel (1816-1831), religious controversies (1831-1839) and political debates (1840-1844). While Hegel was alive, the polemics stemmed from various objections to Hegelian thought and not from disagreements within the school. The history of Hegelianism began in the period when Hegel taught in Berlin, with the publication of Naturrecht und Staatswissenschaft im Grundrisse (1821; Eng. trans., The Philosophy of Right, 1942). This book was criticized by Johann Herbart for mixing the monism of Spinoza with the transcendentalism of Kant, and the liberal press criticized Hegel for attacking Jakob Fries, a psychologizing Neo-Kantian, in the Introduction. Hegel was also criticized by disciples of Friedrich Schelling, an objective and aesthetic idealist, and of Friedrich Schleiermacher, a seminal thinker of modern theology; and by speculative theists such as Christian Weisse of Leipzig and Immanuel Fichte, the son of Johann Fichte. Some of Hegel's responses to these criticisms made a considerable impact, particularly eight articles in the Jahrbücher für wissenschaftliche Kritik (founded 1827; "Yearbooks for Scientific Critique"), a journal of the Hegelian right. Among Hegel's most loyal disciples and defenders were Hermann Hinrichs, his collaborator, and Karl Rosenkranz.
Soon after Hegel's death, the school divided into three currents of thought. The "Hegelian Rightists," in which Hegel's direct disciples participated, defended Hegel against charges that his philosophy was liberal and pantheistic. They developed his philosophy along lines which they considered to be in accordance with Christian teaching, and sought to uphold its compatibility with the conservative politics of the Restoration which followed the defeat of Napoleon. They included Karl Friedrich Göschel, Johann Philipp Gabler, Johann Karl Friedrich Rosenkranz, and Johann Eduard Erdmann.
Until Feuerbach’s “Thoughts regarding Death and Immortality” (1830), Hegelianism was primarily represented by the “Old Hegelians” who emphasized the Christian and conservative elements in his writings. After Feuerbach and the “Life of Jesus” (1835) of D.F. Strauss, the denial of personal religion became more prominent.
The "Hegelian Leftists" (also referred to as "Young Hegelians") were mostly indirect disciples of Hegel who interpreted Hegelianism in a revolutionary sense, at first pantheistic and later atheistic. They emphasized the dialectic as a "principle of movement" and attempted to develop a rational political and cultural reality, finding in Hegel's dialectic the ammunition to attack the existing bourgeois, religious, monarchical social order, now regarded as only a moment in the forward development of history. The Leftists accentuated the anti-Christian tendencies of Hegel's system and developed schools of materialism, socialism, rationalism, and pantheism. They included Ludwig Andreas Feuerbach, Richter, Karl Marx, Bruno Bauer, and David Friedrich Strauss. Max Stirner socialized with the Left Hegelians but built his own philosophical system largely opposing that of these thinkers.
The centrist Hegelians were more concerned with the philosophical significance of Hegel’s system, its genesis and problems of logic. This current of thought was predominant in Neo-Hegelianism, as Hegelian thought diffused throughout Europe and the United States.
The diffusion of Hegelianism outside of Germany took two directions: Europeans were concerned with addressing political and cultural problems, while those in the United States were more interested in the philosophy of history and in political theory.
The publication of The Secret of Hegel by James Hutchison Stirling in 1865 introduced Hegelianism to Britain where, transmuted into absolute idealism, it became part of the dominant academic philosophy in Britain until challenged by Russell and Moore in Cambridge, and by writers such as John Cook Wilson and H. A. Prichard at Oxford, at the beginning of the twentieth century. In Britain, Hegelianism was represented during the nineteenth century by the British Idealist school of James Hutchison Stirling, Thomas Hill Green, William Wallace, John Caird, Edward Caird, Richard Lewis Nettleship, J. M. E. McTaggart, and Baillie. British interest in Hegel was largely driven by political thought.
In Denmark, Hegelianism was represented by Johan Ludvig Heiberg and Hans Lassen Martensen from the 1820s to the 1850s. Benedetto Croce and Étienne Vacherot were the leading Hegelians towards the end of the nineteenth century in Italy and France, respectively. Pierre-Joseph Proudhon was a French Hegelian Socialist. Among Catholic philosophers who were influenced by Hegel the most prominent were Georg Hermes and Anton Gunther.
In eastern Europe, Hegelianism was represented by philosophers and critics such as the Polish count Augustus Cieszkowski, a religious thinker whose philosophy of action was initially influenced by the left, and the theistic metaphysician Bronislaw Trentowski; and in Russia by the literary critic Vissarion Belinsky, the democratic revolutionary writers Aleksandr Herzen and Nikolay Chernyshevsky, and certain anarchists such as the Russian exile and revolutionist Mikhail Bakunin.
Hegelianism in North America was represented by Thomas Watson and William T. Harris. In its most recent form it seems to take its inspiration from Thomas Hill Green, and whatever influence it exerts is opposed to the prevalent pragmatic tendency. Its two centers, the schools in St. Louis and Cincinnati, seemed to duplicate the German division into a conservative and a revolutionary current. The conservative Hegelians of the St. Louis school included the German Henry Brokmeyer, and William Harris, founders of the St. Louis Philosophical Society, which published an influential organ, The Journal of Speculative Philosophy. They sought a dialectical and speculative foundation for American democracy and a dialectical interpretation of the history of the United States. The Cincinnati group centered around August Willich, a former Prussian officer, and John Bernard Stallo, an organizer of the Republican Party. Willich founded the Cincinnati Republikaner, in which he reviewed Marx's Zur Kritik der politischen Ökonomie (1859) and sought to base the principles of social democracy on Feuerbach’s humanism. Stallo interpreted the democratic community as the realization of the dialectic rationality of the Spirit, with a rigorous separation of church and state.
The far-reaching influence of Hegel is partially due to the vastness of the scheme of philosophical synthesis which he conceived and partly realized. A philosophy which undertook to organize every department of knowledge, from abstract logic up to the philosophy of history, under the single formula of triadic development, had a great deal of attractiveness. But Hegel's influence is due in a still larger measure to two extrinsic circumstances. His philosophy is the highest expression of that spirit of collectivism which characterized the nineteenth century. Hegel especially revolutionized the methods of inquiry in theology. The application of his notion of development to biblical criticism and to historical investigation is obvious when the spirit and purpose of the theological literature of the first half of the nineteenth century is compared to that of contemporary theology. In science, too, and in literature, the substitution of the category of “becoming” for the category of “being” is due to the influence of Hegel's method. In political economy and political science the effect of Hegel's collectivistic conception of the State supplanted to a large extent the individualistic conception which had been handed down from the eighteenth century to the nineteenth.
The evolution of bats, even with ever-emerging research and fossil records, remains a mystery. Most evolutionary scientists agree that bats must have evolved from mammals, but unfortunately they cannot find strong enough evidence as to which common ancestor bats splintered off from.
Scientists now theorize that bats, the only mammals known to have developed true flight, evolved from small rodent-like animals, a group whose modern relatives include rats.
A discovery in 2008 did fill in a piece of this evolutionary puzzle with an exciting find. The oldest fossilized bat, dated to be over 52 million years old, put to rest another long-standing argument in the scientific community as to whether flight and echolocation, the bat's sonar-like navigation system, developed at different times or together. It turns out that this animal was able to fly but could not boast the use of echolocation. Dr. Nancy Simmons of the American Museum of Natural History in New York, who was part of the team behind the find, says, "It's clearly a bat, but unlike any previously known. In many respects it is a missing link between bats and their non-flying ancestors."
This creature is theorized to have been a day flyer until its species was forced to become nocturnal to avoid new flying predators. But still this discovery does not prove from which animal family bats evolved. Even though gliding has evolved in mammals multiple times, it is very clear that true, flapping flight developed only once. And how the excess skin found in gliding mammals developed into short, skinny flapping wings, which give bats almost no resemblance to their original ancestor, still confuses every scientist to date.
The wings of bats are now highly advanced after millions of years of evolution. Bats have developed hyper-sensitive wings with almost two dozen joints just within the wing membrane. This membrane is also suited to give the bat an advantage over its flying counterparts; while birds and insects can change the angle of attack or fold in their wings to increase aerodynamic efficiency, bats have a much more flexible wing. This allows them to curve the bottom of the wing inward during their downstroke, generating greater lift for much less energy. Similarly, bats can fold their wings closer to their bodies on each upstroke, so they experience less drag. In fact, the wing is so flexible that bats can make a 180-degree turn in a matter of half a downstroke. This allows large numbers of bats to fly very fast, in very close proximity to one another, and almost never crash.
Researchers have only recently begun to discover the ways in which bats developed this highly advanced style of flight. The specialized wing strokes do not only make the animal more aerodynamic; they also allow the bat to hover, almost like an insect or hummingbird. The only problem with this is that, theoretically, bats should be too large to hover, because flying animals over a certain weight cannot beat their wings fast enough to maintain a steady state. However, on the downstroke a remarkable phenomenon occurs: a tiny vortex is created at the tip of the bat's wing. The vortex then swirls around the wing on the upstroke, leaving higher-pressure air below the wing and lower-pressure air above it, which gives the creature extra lift.
The result is amazing; "Even as gravity plucks at its heels, the bat's homegrown tornadoes suck it back up toward Oz." The vortexes were discovered and retested by a number of different groups, all of which came to the same conclusion. When researchers put a bat in a wind tunnel and send colored streaks of smoke towards it, they can see plainly that tiny swirls appear at the tip of the bat's wing while it is attempting to hover, and that the bat flaps its wings about three times per second, which is even more remarkable for such a large animal.
Engineers are now trying to design and manufacture a type of flying robot that can mirror the skills that bats possess. However, rather than simply making the robot look like a bat, they want to design something that is more practical for human use while still maintaining bat-like flight.
In psychology, entitativity typically refers to the perception of a group as a pure entity (an entitative group), abstracted from its attendant individuals. It is different from holistic perception. Operationally, entitativity can also be defined as perceiving a collection of social targets (e.g., individuals) as possessing unity and coherence (e.g., a group). Entitativity is highest for intimacy groups, such as the family, lower for task groups, lower yet for social categories (e.g., people of the same religion), and lowest for transitory groups, such as people waiting at the same bus stop (Lickel et al., 2000).
Campbell (1958) coined the term entitativity in order to explain why some groups are considered real groups while others are thought to be mere aggregates of individuals. He suggested that people rely on certain perceptual cues as they intuitively determine which aggregations of individuals are groups, and which are not (e.g., spectators at a football game may seem like a disorganized collection of people, but when they shout the same cheers or express similar emotions, this gives them entitativity) (Forsyth, 2010).
Additionally, Campbell (1958) emphasized three cues that individuals can use to make judgments regarding entitativity: common fate (the extent to which individuals in the aggregate seem to experience interrelated outcomes), similarity (the extent to which the individuals display the same behaviors or resemble one another), and proximity (the distance between individuals in the aggregate). To illustrate how we make those judgments, consider the example of people sharing a table at a library. They could be friends who are studying together, or they may also be strangers happening to share the same table. If you're wondering whether this is an actual group, you would examine their common fate, similarity, and proximity. Common fate may be something like the group all getting up and leaving together while talking or laughing amongst themselves. Similarity could be as simple as noticing that they are all using the same textbooks or notes, or that they happen to be wearing t-shirts from the same organization (e.g., a fraternity or university group). Finally, their physical proximity to one another (i.e., moving to sit closer) would be the final cue for judging that you are witnessing a group with entitativity (Forsyth, 2010).
There are two proposed antecedents for the entitativity perception (Ip, Chiu, & Wan, 2006):
- physical similarity
- goal/behavior similarity.
References
- Campbell, D. T. (1958). Common fate, similarity, and other indices of the status of aggregates of persons as social entities. Behavioral Science, 3, 14–25.
- Ip, G. W. M., Chiu, C. Y., & Wan, C. (2006). Birds of a feather and birds flocking together: Physical versus behavioral cues may lead to trait- versus goal-based group perception. Journal of Personality and Social Psychology, 90, 368-381.
- Forsyth, D. R. (2010). Group Dynamics (5th edition). Belmont, CA: Wadsworth.
- Lickel, B., Hamilton, D. L., & Sherman, S. J. (2001). Elements of a lay theory of groups: Types of groups, relational styles, and the perception of group entitativity. Personality and Social Psychology Review, 5, 129–140.
- Lickel, B., Hamilton, D. L., Wieczorkowska, G., Lewis, A., Sherman, S. J., & Uhles, A. N. (2000). Varieties of groups and the perception of group entitativity. Journal of Personality and Social Psychology, 78(2), 223–246.
A key to surviving the Canadian winter is understanding how we are affected by the wind chill. The following information is from Environment Canada's Weather Office, a great source for current weather listings, weather FAQs, and much more!
What is Wind Chill?
Anyone who has ever waited at a bus stop or taken a walk on a blustery winter day knows that you feel colder when the wind blows. This cooling sensation, caused by the combined effect of temperature and wind, is what is known as wind chill.
On a calm day, our bodies insulate us somewhat from the outside temperature by warming up a thin layer of air close to our skin, known as the boundary layer. When the wind blows, it takes this protective layer away, exposing our skin to the outside air. It takes energy for our bodies to warm up a new layer and, if each layer keeps getting blown away, our skin temperature will drop and we will feel colder.
Wind also makes you feel colder by evaporating any moisture on your skin, a process that draws more heat away from the body. Studies show that when skin is wet, it loses heat much faster than when it is dry.
How does Wind Chill affect you?
Living in a cold country can be hazardous to your health. Each year in Canada, more than 80 people die from over-exposure to the cold, and many more suffer injuries resulting from hypothermia and frostbite. Wind chill can play a major role in such health hazards because it speeds up the rate at which your body loses heat.
How much heat you lose depends not only on the cooling effects of the cold and the wind chill, but on other factors. Good quality clothing with high insulating properties traps air, creating a thicker boundary layer around the body which keeps in the heat. Wet clothing and footwear lose their insulating properties, resulting in body-heat loss nearly equal to that of exposed skin. Your body type also determines how quickly you lose heat; people with a tall, slim build become cold much faster than those who are shorter and heavier.
We can also gain heat by increasing our metabolism or soaking up the sun. Physical activity, such as walking or skiing, increases our metabolism – which generates more body heat. Age and physical condition also play a part. Elderly people and children have less muscle mass and as a result, generate less body heat. Sunshine, even on a cold winter day, can also make a difference. Bright sunshine can make you feel as much as 10 degrees warmer. Over time, our bodies can also adapt to the cold. People who live in a cold climate are often able to withstand cold better than those from warmer climates.
Beating the chill
The best way to avoid the hazards of wind chill is to check the weather forecast before going outside, and to be prepared by dressing warmly. As a guideline, keep in mind that the risk of frostbite increases rapidly when wind chill values go below -27.
A simple way to avoid wind chill is to get out of the wind. Environment Canada’s wind chill forecasts are based on the wind you would experience on open ground; taking shelter from the wind can therefore reduce or even eliminate the wind chill factor.
A recent survey indicated that 82 per cent of Canadians use wind chill information to decide how to dress before going outside in the winter. Many groups and organizations also use the wind chill index to regulate their outdoor activities.
Schools use wind chill information to decide whether it is safe for children to go outdoors at recess; hockey clubs cancel outdoor practices when the wind chill is too cold; people who work outside for a living, such as construction workers and ski-lift operators, are required to take indoor breaks to warm up when the wind chill is very cold.
The Wind Chill Index
The index is expressed in temperature-like units, the format preferred by most Canadians. By equating the current outdoor conditions to an equivalent temperature with no wind, the index represents the degree of "chill" that your skin senses. For example, if the wind chill is -20 while the outside temperature is only -10°C, it means that your face will feel as cold as it would on a calm day (no wind) with a temperature of -20°C. See the numerical chart of estimated wind chill values for more information. The Weather Office also has an accompanying chart that lists wind chill hazards and what to do. Follow the above link to view the chart.
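The exact arithmetic behind the index is not reproduced above, but as an illustration, the wind chill formula adopted in North America in 2001 (the one Environment Canada's charts are based on) is simple to compute. A minimal Python sketch; treat the constants as an assumption if your source differs:

```python
def wind_chill(temp_c, wind_kmh):
    """Wind chill index (temperature-like, no units).

    North American formula adopted in 2001; valid roughly for air
    temperatures of 10 C or less and winds of at least 5 km/h,
    with wind speed measured at the standard 10 m height.
    """
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

# The example above: -10 C air with roughly a 30 km/h wind feels near -20.
print(round(wind_chill(-10, 30)))  # -20
```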
A new study by the University of Pennsylvania suggests elephants are even smarter and more socially complex than previously thought. The conventional wisdom was that elephants lived in small herds that centered around females, while the males wandered independently. The new study shows that the herds are actually interconnected social groups who "track one another over large distances by calling to each other and using their sense of smell," according to Dr. Shermin de Silva.
From the Daily Mail:
'So the "herd" of elephants one sees at any given time is often only a fragment of a much larger social group.
'Our work shows that they are able to recognize their friends and renew these bonds even after being apart for a long time.'
Other fascinating elephant facts:
- Elephants with fewer friends tended to be more loyal to the ones they had.
- Individual elephants don't just "mix randomly" with the population. They have a set of preferred companions.
- A 16 percent minority of elephants changed their "top five" friends over time.
- Social bonds are strongest in the dry season, thought to be a means to help protect food and water supply.
- They indicate they approve of something by tapping the "like" button on each other's Facebook pages with their trunks. Ha ha! Just kidding!
Bring the 'dead famous' back to life! Discover exceptional figures from British history, and explore the society in which they lived.
Was King John really the cruel and terrible king that history paints him as? This biography explores the life of King John, from his accession to the throne, through wars with France and his own English barons, to the signing of the Magna Carta. This Great Charter was a turning point in English political history and heralded the beginning of parliament.
Through the story of King John, readers will learn about the changing nature of the monarchy in Medieval Britain, and the changing roles of the church and state in Medieval society. Readers will learn to draw conclusions from the evidence provided - a great basis for class discussions.
History VIP biographies each look at the life of a famous Briton, telling the stories of these Very Important People with clear, lively text. Amazing facts are added in feature panels, and lively illustrations give visual information about the time and society the VIP lived in. With these key biographies students learn how individual people's actions have shaped the course of history.
Key terms are defined in an easy-to-use glossary encouraging readers to use historical terms in their own work.
In other news - these panels give context and help readers to understand the society and events of the wider world in which the subject lived
True or False - questions lead students to question information and to interact with the facts they are presented with.
What they said - quote features bring the subjects to life using their own words!
WOW! - Boxes add humorous or amazing information to astound the reader and bring out the hilarious side of history
Not too long ago in September, two scientists from Russia and Belarus discovered a new comet approaching the sun and due to pass by the earth late in the current calendar year of 2013. Since the astronomers were both members of the International Scientific Optical Network based in Russia, the comet has been named ISON. The comet is due to pass within 750,000 miles of the sun in November, and if it survives this journey it may brighten the skies of the Northern Hemisphere around the time of Christmas.
Why Comets Elude Early Detection
Comets are very different from asteroids, for they are composed of ice, rock, and organic compounds and can be several miles in diameter. This nebulous nature makes a comet much harder to detect and helps explain why Comet ISON was only detected about a year before it will be visible from planet Earth. On the other hand, because of their dense concentration of matter, asteroids can be detected many years in advance.
Collision With Earth
Both asteroids and comets have collided with the earth many times over during the 4-billion-year history of our planet. Comets hitting the earth's surface are believed to be the source of our carbon-based molecules, so instrumental in the structure of all living substances and organisms. Asteroids, on the other hand, are called meteors when they enter the atmosphere and meteorites when they hit the earth's surface. A direct hit by either an asteroid or a comet has the potential to be an explosive event, much stronger than our most powerful atomic bomb.
Comet PANSTARRS will be visible in the Southern Hemisphere later this month and should remain a bright object in the night sky until March, when it will become visible north of the equator. At that time the comet will start to fade. Of particular interest is the origin of the name of this first of two comets that will pass by earth in 2013.
What Is Pan-STARRS
Discovered in June 2011, Comet PANSTARRS is named for the Panoramic Survey Telescope And Rapid Response System project, a comprehensive survey program based at Mount Haleakala, Hawaii. If you believe that all the recent Hollywood hullabaloo about large asteroids striking the earth's surface and causing catastrophic damage is pure science fiction, you are slightly mistaken. Though extremely rare, such events have occurred in the past and are remotely possible today. As a result, with funding from the U.S. Air Force, a major observatory has been created in Hawaii, whose stated purpose notes: "A major goal of Pan-STARRS is to discover and characterize Earth-approaching objects, both asteroids & comets, that might pose a danger to our planet."
Are We Doomed?
So far, soothsayers and astronomical psychics have correlated the arrival of ISON with the lost planet Nibiru, a dwarf star behind Jupiter, the incorrect calculation of the end of the Mayan calendar, and a possible collision with earth. No matter how you look at it, 2013 will provide plenty of raw material for such fringe areas of intellectual pursuit. And if ISON does strike and destroy the earth, it will have to be renamed Ivan the Terrible.
A typical brushed DC motor consists of an outer stator, made of either permanent magnets or electromagnetic windings, and an inner rotor made of iron laminations with coil windings. A segmented commutator and brushes control the sequence in which the rotor windings are energized, to produce continuous rotation.
Coreless DC motors do away with the laminated iron core in the rotor. Instead, the rotor windings are wound in a skewed, or honeycomb, fashion to form a self-supporting hollow cylinder or "basket." Because there is no iron core to support the windings, they are often held together with epoxy. The stator is made of permanent magnets, such as neodymium, AlNiCo (aluminum-nickel-cobalt), or SmCo (samarium-cobalt), and sits inside the coreless rotor.
Other terms for coreless DC motors include “air core,” “slotless,” and “ironless.”
The brushes used in coreless DC motors can be made of precious metal or graphite. Precious metal brushes (silver, gold, platinum, or palladium) are paired with precious metal commutators. This design has low contact resistance and is often used in low-current applications. When sintered metal graphite brushes are used, the commutator is made of copper. The copper-graphite combination is more suitable for applications requiring higher power and higher current.
The construction of coreless DC motors provides several advantages over traditional, iron core DC motors. First, the elimination of iron significantly reduces the mass and inertia of the rotor, so very rapid acceleration and deceleration rates are possible. And no iron also means no iron losses, giving coreless designs significantly higher efficiencies (up to 90 percent) than traditional DC motors. The coreless design also reduces winding inductance, so sparking between the brushes and commutator is reduced, increasing motor life and reducing electromagnetic interference (EMI).
Motor cogging, which is an issue in traditional DC motors due to the magnetic interaction of the permanent magnets and the iron laminations, is also eliminated, since there are no laminations in the ironless design. And in turn, torque ripple is extremely low, which provides very smooth motor rotation with minimal vibration and noise.
Because these motors are often used for highly dynamic movements (high acceleration and deceleration), the coils in the rotor must be able to withstand high torque and dissipate significant heat generated by peak currents. Because there’s no iron core to act as a heat sink, the motor housing often contains ports to facilitate forced air cooling.
The compact design of coreless DC motors lends itself to applications that require a high power-to-size ratio, with motor sizes typically in the range of 6 mm to 75 mm (although sizes down to 1 mm are available) and power ratings of generally 250 W or less. Coreless designs are an especially good solution for battery-powered devices because they draw extremely low current at no-load conditions.
Coreless DC motors are used extensively in medical applications, including prosthetics, small pumps (such as insulin pumps), laboratory equipment, and X-ray machines. Their ability to handle fast, dynamic moves also makes them ideal for use in robotic applications. |
If you've been doing math for a while, you have probably come across exponents. An exponential expression consists of a number, called the base, followed by another number, called the exponent or the power, usually written in superscript. The exponent tells you how many times to multiply the base by itself. For example, 8^2 means to multiply 8 by itself: 8 • 8 = 64, and 10^3 means 10 • 10 • 10 = 1,000. When you have negative exponents, the negative exponent rule dictates that, instead of multiplying the base the indicated number of times, you divide the base into 1 that number of times. So 8^(-2) = 1/(8 • 8) = 1/64 and 10^(-3) = 1/(10 • 10 • 10) = 1/1,000 = 0.001. It's possible to express a generalized negative exponent definition by writing: x^(-n) = 1/x^n.
TL;DR (Too Long; Didn't Read)
To multiply by a negative exponent, subtract that exponent. To divide by a negative exponent, add that exponent.
Multiplying Negative Exponents
Keeping in mind that you can multiply exponents only if they have the same base, the general rule for multiplying two numbers raised to exponents is to add the exponents. For example, x^5 • x^3 = x^(5+3) = x^8. To see why this is true, note that x^5 means (x • x • x • x • x) and x^3 means (x • x • x). When you multiply these terms, you get (x • x • x • x • x • x • x • x) = x^8.
A negative exponent means to divide the base raised to that power into 1. So x^5 • x^(-3) actually means x^5 • 1/x^3 or (x • x • x • x • x) • 1/(x • x • x). This is a simple division. You can cancel three of the x's, leaving (x • x) or x^2. In other words, when you multiply by a negative exponent, you still add the exponent, but since it's negative, this is equivalent to subtracting it. In general,
x^n • x^(-m) = x^(n - m)
Dividing Negative Exponents
According to the definition of a negative exponent, x^(-n) = 1/x^n. When you divide by a negative exponent, it's equivalent to multiplying by the same exponent, only positive. To see why this is true, consider 1/x^(-n) = 1/(1/x^n) = x^n. For example, the number x^5/x^(-3) is equivalent to x^5 • x^3. You add the exponents to get x^8. The rule is:
x^n / x^(-m) = x^(n + m)
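These two rules are easy to spot-check numerically. A minimal Python sketch (the choice of x = 7 is arbitrary):

```python
x = 7.0
n, m = 5, 3

# Multiplying by a negative exponent subtracts it: x^n * x^(-m) = x^(n-m)
assert abs(x**n * x**-m - x**(n - m)) < 1e-9

# Dividing by a negative exponent adds it: x^n / x^(-m) = x^(n+m)
assert abs(x**n / x**-m - x**(n + m)) < 1e-6

# The definition itself: x^(-n) = 1/x^n
assert abs(x**-n - 1 / x**n) < 1e-12

print("All three identities check out for x =", x)
```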
1. Simplify x^5 y^4 • x^(-2) y^2
Collecting the exponents:
x^(5 - 2) y^(4 + 2) = x^3 y^6
You can only manipulate exponents if they have the same base, so you can't simplify any further.
2. Simplify (x^3 y^(-5)) / (x^2 y^(-3))
Dividing by a negative exponent is equivalent to multiplying by the same positive exponent, so you can rewrite this expression:
[(x^3 y^(-5)) • y^3] / x^2
x^(3 - 2) y^(-5 + 3) = x y^(-2), or x/y^2
3. Simplify x^0 y^2 / (x y^(-3))
Any number raised to an exponent of 0 is 1, so you can rewrite this expression to read:
x^(-1) y^(2 + 3) = y^5/x
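If you want to check worked examples like these symbolically, a computer algebra system will do it. A minimal sketch assuming the third-party SymPy library is installed:

```python
from sympy import symbols, simplify

x, y = symbols('x y', positive=True)

# Example 1: x^5 y^4 * x^(-2) y^2  ->  x^3 y^6
print(simplify(x**5 * y**4 * x**-2 * y**2))       # x**3*y**6

# Example 2: (x^3 y^(-5)) / (x^2 y^(-3))  ->  x / y^2
print(simplify((x**3 * y**-5) / (x**2 * y**-3)))  # x/y**2

# Example 3: x^0 y^2 / (x y^(-3))  ->  y^5 / x
print(simplify(x**0 * y**2 / (x * y**-3)))        # y**5/x
```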
Left ventricular hypertrophy is enlargement (hypertrophy) of the muscle tissue that makes up the wall of your heart’s main pumping chamber (left ventricle).
Left ventricular hypertrophy develops in response to some factor, such as high blood pressure, that requires the left ventricle to work harder. As the workload increases, the walls of the chamber grow thicker, lose elasticity and eventually may fail to pump with as much force as a healthy heart.
The incidence of left ventricular hypertrophy (LVH) increases with age and is more common in people who have high blood pressure or other heart problems.
Left ventricular hypertrophy usually develops gradually. You may experience no signs or symptoms, especially during the early stages of development. When signs or symptoms are present, they may include:
- Shortness of breath
- Chest pain
- Sensation of rapid, fluttering or pounding heartbeats (palpitations)
- Rapid exhaustion with physical activity
Left ventricular hypertrophy occurs as a result of one or more things making your heart work harder than normal to pump blood to your body. For example, if you have high blood pressure, the muscles of the left ventricle must contract more forcefully than normal in order to counter the effect of the elevated blood pressure.
The effect of the stronger contraction on your heart is similar to the response of other muscles to an increased workload. If you add weight to a dumbbell for arm curls, your biceps become larger. Similarly, the work of adapting to high blood pressure may result in larger muscle tissue in the walls of the left ventricle. Unlike weight training, however, the increased workload on the heart is constant with each heartbeat and with little time for the heart muscles to relax. The increase in muscle mass causes the heart to function poorly.
Factors that can cause your heart to work harder include the following:
- High blood pressure (hypertension) is the most common cause of left ventricular hypertrophy. A blood pressure reading is given in a unit of measure called millimeters of mercury (mm Hg). Hypertension is generally defined as systolic pressure greater than 140 mm Hg and a diastolic pressure greater than 90 mm Hg, or 140/90 mm Hg. Systolic pressure is blood pressure while the heart contracts, and diastolic pressure is blood pressure while the heart rests between beats. (A minimal code sketch of this cutoff appears after this list.)
- Aortic valve stenosis is a narrowing of the aortic valve, the flap separating your left ventricle from the aorta, the large blood vessel that delivers oxygen-rich blood to your body. This partial obstruction of blood flow requires the left ventricle to work harder to pump blood into the aorta.
- Aortic valve regurgitation is a condition in which the heart valve separating the left ventricle and the aorta doesn’t close properly, resulting in some blood flowing backward into the left ventricle. This condition increases the volume of blood in the left ventricle and requires more force to pump it out.
- Dilated cardiomyopathy is enlargement of the left ventricle and, in some cases, other chambers of the heart. Because the space inside the left ventricle is large, it fills with more blood and requires the muscle to contract more forcefully when pumping the blood out.
- A heart attack usually causes the loss or scarring of muscle tissue. To compensate for this loss, the surviving muscles may need to pump harder.
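As promised in the first item above, here is a minimal sketch of the 140/90 mm Hg cutoff quoted there. It treats a reading as hypertensive when either number exceeds its threshold, which is the usual reading of "140/90"; real clinical guidelines define several finer-grained categories, so this is only an illustration:

```python
def is_hypertensive(systolic_mm_hg, diastolic_mm_hg):
    """Flag a blood pressure reading under the simple 140/90 mm Hg rule.

    Systolic: pressure while the heart contracts.
    Diastolic: pressure while the heart rests between beats.
    """
    return systolic_mm_hg > 140 or diastolic_mm_hg > 90

print(is_hypertensive(150, 85))  # True: systolic exceeds 140 mm Hg
print(is_hypertensive(118, 76))  # False: within the healthy target of 120/80
```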
Risk factors for left ventricular hypertrophy include the following:
- High blood pressure, a blood pressure reading greater than 140/90 mm Hg, is the greatest risk factor.
- Aortic stenosis, narrowing of the main valve through which blood leaves the heart, may increase the left ventricle’s work load.
- Obesity can cause high blood pressure and increase your body’s demand for oxygen — factors that require the left ventricle to work harder.
- Coronary artery disease is the obstruction of arteries that supply blood to your heart muscle. If your heart muscle isn’t receiving enough blood, your heart responds by pumping more forcefully.
If you have signs and symptoms associated with heart disease — such as shortness of breath, chest pain, palpitations or others — your doctor will examine your heart function and choose the best treatment.
If you have high blood pressure, your doctor may order heart-related tests as a part of the ongoing management of the condition.
For some of the exams, your doctor may refer you to a heart specialist (cardiologist). Screening tests for left ventricular hypertrophy include:
- Electrocardiogram (ECG). An electrocardiogram — also called an ECG or EKG — records electrical signals as they travel through your heart. Your doctor can look for patterns among these signals that indicate abnormal heart function and increased left ventricle muscle tissue.
- Echocardiogram. An echocardiogram uses sound waves to produce live-action images of the heart. This common test enables your doctor to watch your ventricles squeezing and relaxing and valves opening and closing in rhythm with your heartbeat.
The echocardiogram is a primary tool for diagnosing left ventricular hypertrophy. If you have left ventricular hypertrophy, your doctor will be able to see thickening of muscle tissue in the left ventricle. An echocardiogram can also reveal how much blood is pumped from the heart with each beat and how stiff the heart muscle is. It may also show related heart abnormalities, such as aortic valve stenosis.
- Magnetic resonance imaging (MRI). Magnetic resonance imaging is a technique that uses a magnetic field and radio waves to create images of soft tissues in the body. It can be used to produce a thin cross-sectional “slice” of your heart or a three-dimensional image.
Left ventricular hypertrophy changes both the structure and function of the chamber:
- The enlarged muscle loses elasticity and stiffens, preventing the chamber from filling properly and leading to increased pressure in the heart.
- The enlarged muscle tissue compresses its own blood vessels (coronary arteries) and may restrict its own supply of blood.
- The overworked muscle weakens.
Complications that can occur as a result of these problems include:
- Inability of your heart to pump enough blood to your body (heart failure)
- Abnormal heart rhythm (arrhythmia)
- Insufficient supply of oxygen to the heart (ischemic heart disease)
- Interruption of blood supply to the heart (heart attack)
- Sudden, unexpected loss of heart function, breathing and consciousness (sudden cardiac arrest)
Treatment for left ventricular hypertrophy focuses on the underlying cause of the condition. Depending on the cause, treatment may involve medication or surgery.
Treating high blood pressure
Treatment for high blood pressure usually includes both medications and lifestyle changes, such as regular exercise; a low-sodium, low-fat diet; and no smoking.
In addition to lowering blood pressure, some high blood pressure drugs may prevent further enlargement of left ventricle muscle tissue and may even result in shrinking of the hypertrophic muscles. Blood pressure drugs that may reverse muscle growth include the following:
- Thiazide diuretics act on your kidneys to help your body eliminate sodium and water, thereby reducing blood volume. Thiazide diuretics are often the first — but not the only — choice in high blood pressure medications.
- Angiotensin-converting enzyme (ACE) inhibitors are a type of drug that widens, or dilates, blood vessels to lower blood pressure, improve blood flow and decrease the workload on the heart. Examples include enalapril (Vasotec), lisinopril (Prinivil, Zestril) and captopril (Capoten).
ACE inhibitors cause an irritating cough in some people. It may be best to put up with the cough, if you can, to gain the medication’s benefits. Discuss this side effect with your doctor. Switching to another ACE inhibitor or an angiotensin II receptor blocker may help.
- Angiotensin II receptor blockers (ARBs), which include losartan (Cozaar) and valsartan (Diovan), have many of the beneficial effects of ACE inhibitors, but they don’t cause a persistent cough. They may be an alternative for people who can’t tolerate ACE inhibitors.
- Beta blockers slow your heart rate, reduce blood pressure and prevent some of the harmful effects of stress hormones. These drugs include carvedilol (Coreg), metoprolol (Toprol XL) and bisoprolol (Zebeta).
- Calcium channel blockers prevent calcium from entering cells of the heart and blood vessel walls. This lowers blood pressure. These drugs include amlodipine (Norvasc), diltiazem (Cardizem, Dilacor XR), nifedipine (Adalat, Procardia) and verapamil (Calan, Isoptin, Verelan, Covera).
Aortic valve repair or replacement
If left ventricular hypertrophy is caused by aortic valve stenosis, you may have surgery to remove the narrow valve and replace it with either an artificial valve or a tissue valve from a pig, cow or human-cadaver donor. If you have aortic valve regurgitation, the leaky valve may be surgically repaired or replaced.
The best way to help prevent left ventricular hypertrophy is to maintain healthy blood pressure. Here are a few tips to better manage your blood pressure:
- Monitor high blood pressure. If you have high blood pressure, get a home blood pressure measuring device and check your blood pressure frequently. Schedule regular checkups with your doctor. The target for healthy blood pressure is less than 120/80 mm Hg.
- Make time for exercise. Regular exercise helps lower blood pressure. Aim for 30 minutes of moderate activity at least five times a week. Talk to your doctor about whether you need to restrict certain physical activities, such as weightlifting, which may temporarily raise your blood pressure.
- Watch your diet. Avoid foods that are high in fat and salt, and increase your consumption of fruits and vegetables. Avoid alcohol and caffeinated beverages, or drink them in moderation. |
When nature made the blue-bird she wished to propitiate both the sky and the earth, so she gave him the color of the one on his back and the hue of the other on his breast. – John Burroughs
As one might expect from the amazing diversity of colors and patterns exhibited by more than 9,000 bird species found in the world, birds can see color. In fact, they can discriminate a greater variety of colors than humans, as some birds can see into the ultraviolet range.
The colors in the feathers of a bird are formed in two different ways, from either pigments or from light refraction caused by the structure of the feather.
Tiny air pockets in the barbs of feathers can scatter incoming light, resulting in a specific, non-iridescent color. Blue colors in feathers are almost always produced in this manner. Examples include the blue feathers of bluebirds, Indigo Buntings, Blue Jays and Steller's Jays. – All About Birds, Color
One of the oldest pieces in the museum is not stone or porcelain or metal, but an actual human being – someone who walked around and felt the sun on their back nearly 5,500 years ago.
Looking at the desiccated corpse, we can see how wonderfully he is preserved; the skin is like leather, you can see his fingernails, and you can even make out the colour of his hair. He's known as Gebelein Man, from the place in Egypt where he was found, but because of his red hair he was nicknamed 'Ginger'.
We think he was around 18-20 when he died, and when the museum did a 'virtual autopsy' on him in 2012 by putting him through a CT scan (there's a video of this in the museum, and an interactive display of the results), the evidence pointed to him having been murdered. There's a stab wound in his shoulder where a long, thin blade came down and punctured his lung.
His fine state of preservation wasn’t because he was deliberately mummified, but the hot sand he was buried in dried out his body, preserving it. And we believe that it was discovering intact bodies like this that made the ancient Egyptians work on the process of mummification. They felt that the bodies had to be preserved so that they would be available to the spirits in the afterlife – if the spirit (known as the Ka) couldn’t recognise the body that they had left, they wouldn’t be able to repossess it.
Their first attempts weren't particularly successful: bodies were buried in the ground in small coffins to protect them from animals, but they decayed. But over the succeeding centuries, starting in about 2500 BC (so around 1,000 years after Gebelein Man), they evolved a process that resulted in the mummies that you can find in the museum.
First the body was washed and purified, and the organs were removed and packed in a substance called natron – a salt – to dry them out. The dried organs were then placed in vessels called canopic jars. The heart was left in the body as that was thought to be the centre of intelligence and feeling. The brain, on the other hand, was thought to be of no use (they thought all it did was produce snot), so that was hooked out of the head through the nose and thrown away.
The whole body was then packed and covered with natron which absorbed all the moisture out of it. After 40 days the body was washed again, and stuffed with linen or sawdust to preserve its shape. The body was covered in oil to help the skin stay elastic.
Then we get what we all associate with mummies – it was wrapped in hundreds of metres of linen ‘bandages’, then put into a wooden case, then into a sarcophagus which was generally painted or decorated with a likeness of the occupant.
As you can imagine, this was an expensive process and only high-status individuals had their bodies preserved in this way, perhaps 1-2% of the population. Even so, the museum has over 100 mummies – most in storage – but you can see those that are on display in rooms 62 and 63.
The worksheets on this page have various types of practice for multiplying fractions. Included are problems that focus on cross-cancelling, which is a skill that greatly simplifies the process of reducing fractions in the answer step. Cross-cancelling prior to multiplying fractions results in much smaller products which are significantly easier to reduce and turn into proper fractions.
Addition is an operation in which one number is added to another number. When adding a series of numbers, there are some strategies that help simplify the addition process. One thing to remember is to group numbers to make tens. For example, when adding 2 + 7 + 8, you can add the 2 and the 8 first to get ten, and then add the 7 to get 17. Some teachers call these groups of numbers that add to ten "friendly tens," since they make adding easier for the student.
If the answer's numerator is greater than the denominator, then the answer is an improper fraction. This fraction should be turned into a proper fraction by taking wholes out of the numerator until the numerator is less than the denominator.
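Both steps, reducing a product to lowest terms (the result cross-cancelling achieves by hand) and pulling wholes out of an improper fraction, can be illustrated with Python's standard fractions module; a minimal sketch:

```python
from fractions import Fraction

# Fraction reduces automatically, matching what cross-cancelling
# accomplishes before multiplying by hand: (3/4) * (8/9) = 2/3.
product = Fraction(3, 4) * Fraction(8, 9)
print(product)  # 2/3

# Improper fraction to mixed number: take wholes out of the numerator
# until the numerator is less than the denominator.
improper = Fraction(7, 4)
whole, remainder = divmod(improper.numerator, improper.denominator)
print(f"{whole} {remainder}/{improper.denominator}")  # 1 3/4
```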
Use flashcards. Make multiplication cards for each number set. Although this may seem tedious, the process of making the cards will actually help you to learn them. Once you’ve made them, spend some time each day studying until you know them all. Focus on one number set at a time. When you go through the cards, put the ones you get wrong back into the pile so you see them multiple times. |
Scientists have succeeded in growing human stem cells in pig embryos. The approach involves generating stem cells from a patient’s skin, growing the desired new organ in a large animal like a pig, and then harvesting it for transplant into the patient’s body.
Since the organ would be made of a patient’s own cells, there would be little risk of immune rejection.
The human-organ-growing pigs would be examples of chimeras, animals composed of two different genomes. They would be generated by implanting human stem cells into an early pig embryo, resulting in an animal composed of mixed pig and human cells.
One team of biologists, led by Jun Wu and Juan Carlos Izpisua Belmonte at the Salk Institute, has shown for the first time that human stem cells can contribute to forming the tissues of a pig, despite the 90 million years of evolution between the two species.
Another group, headed by Tomoyuki Yamaguchi and Hideyuki Sato of the University of Tokyo, and Hiromitsu Nakauchi of Stanford, has reversed diabetes in mice by inserting pancreas glands composed of mouse cells that were grown in a rat.
Many technical and ethical barriers have yet to be overcome, but the research is advancing alongside the acute need for organs.
Scientists expressed confidence that ethical concerns about chimera research could be addressed.
Chimeras are typically mosaics in which each organ is a mixture of the host and donor cells. But new techniques like the Crispr-Cas gene editing system should allow the human cells in a pig embryo both to be channelled into organs of interest and to be excluded from tissues of concern like the brain and reproductive tissues. |
Probability, also called theory of probability, is the branch of mathematics that deals with measuring or determining quantitatively the likelihood that an event or experiment will have a particular outcome. Probability is based on the study of permutations and combinations and is the necessary foundation for statistics.
The foundation of probability is usually ascribed to the 17th-century French mathematicians Blaise Pascal and Pierre de Fermat, but mathematicians as early as Gerolamo Cardano had made important contributions to its development. Mathematical probability began in an attempt to answer certain questions arising in games of chance, such as how many times a pair of dice must be thrown before the chance that a six will appear is 50-50. Or, in another example, if two players of equal ability, in a match to be won by the first to win ten games, are obliged to suspend play when one player has won five games, and the other seven, how should the stakes be divided?
The probability of an outcome is represented by a number between 0 and 1, inclusive, with "probability 0" indicating certainty that an event will not occur and "probability 1" indicating certainty that it will occur. The simplest problems are concerned with the probability of a specified "favorable" result of an event that has a finite number of equally likely outcomes. If an event has n equally likely outcomes and f of them are termed favorable, the probability, p, of a favorable outcome is f/n. For example, a fair die can be cast in six equally likely ways; therefore, the probability of throwing a 5 or a 6 is 2/6. More involved problems are concerned with events in which the various possible outcomes are not equally likely. For example, in finding the probability of throwing a 5 or 6 with a pair of dice, the various outcomes (2, 3, ... 12) are not all equally likely. Some events may have infinitely many outcomes, such as the probability that a chord drawn at random in a circle will be longer than the radius.
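As a quick illustration of the p = f/n rule, this small Python sketch uses the standard-library Fraction type; the helper name is my own:

```python
from fractions import Fraction

def probability(favorable, total):
    """p = f/n for an event with n equally likely outcomes."""
    return Fraction(favorable, total)

# A fair die: two of the six faces (5 and 6) count as favorable.
print(probability(2, 6))  # prints 1/3, the reduced form of 2/6
```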
Problems involving repeated trials form one of the connections between probability and statistics. To illustrate, what is the probability that exactly five 3s and at least four 6s will occur in 50 tosses of a fair die? Or, a person, tossing a fair coin twice, takes a step to the north, east, south, or west, according to whether the coin falls head, head; head, tail; tail, head; or tail, tail. What is the probability that at the end of 50 steps the person will be within 10 steps of the starting point?
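The second question can be estimated by simulation. The sketch below is one possible reading of the problem: it assumes "within 10 steps" means walking (Manhattan) distance, which may not be what the author intended:

```python
import random

def chance_within_ten(steps=50, trials=100_000):
    """Estimate the chance of ending within 10 steps of the start."""
    moves = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W for HH, HT, TH, TT
    hits = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(steps):
            dx, dy = random.choice(moves)  # two fair coin tosses per step
            x, y = x + dx, y + dy
        if abs(x) + abs(y) <= 10:  # assumption: walking distance, not straight-line
            hits += 1
    return hits / trials

print(chance_within_ten())
```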
In probability problems, two outcomes of an event are mutually exclusive if the probability of their joint occurrence is zero; two outcomes are independent if the probability of their joint occurrence is given as the product of the probability of their separate occurrences. Two outcomes are mutually exclusive if the occurrence of one precludes the occurrence of the other; two outcomes are independent if the occurrence or nonoccurrence of one does not alter the probability that the other will or will not occur. Compound probability is the probability of all outcomes of a certain set occurring jointly; total probability is the probability that at least one of a certain set of outcomes will occur. Conditional probability is the probability of an outcome when it is known that some other outcome has occurred or will occur.
If the probability that an outcome will occur is p, the probability that it will not occur is q = 1 - p. The odds in favor of the occurrence are given by the ratio p:q, and the odds against the occurrence are given by the ratio q:p. If the probabilities of two mutually exclusive outcomes X and Y are p and P, respectively, the odds in favor of X and against Y are p to P. If an event must result in one of the mutually exclusive outcomes O1, O2, ..., On, with probabilities p1, p2, ..., pn, respectively, and if v1, v2, ..., vn are numerical values attached to the respective outcomes, the expectation of the event is E = p1v1 + p2v2 + ... + pnvn. For example, a person throws a die and wins 40 cents if it falls 1, 2, or 3; 30 cents for 4 or 5; but loses $1.20 if it falls 6. The expectation on a single throw is 3/6 × .40 + 2/6 × .30 - 1/6 × 1.20 = .10.
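The expectation example checks out numerically; here is a short Python verification, with the payoffs taken straight from the text:

```python
# E = p1*v1 + p2*v2 + ... + pn*vn for the die game described above.
outcomes = [
    (3/6,  0.40),   # faces 1, 2, 3: win 40 cents
    (2/6,  0.30),   # faces 4, 5: win 30 cents
    (1/6, -1.20),   # face 6: lose $1.20
]
expectation = sum(p * v for p, v in outcomes)
print(round(expectation, 2))  # 0.1, i.e. 10 cents per throw
```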
The most common interpretation of probability is used in statistical analysis. For example, the probability of throwing a 7 in one throw of two dice is 1/6, and this answer is interpreted to mean that if two fair dice are randomly thrown a very large number of times, about one-sixth of the throws will be 7s. This concept is frequently used to statistically determine the probability of an outcome that cannot readily be tested or is impossible to obtain. Thus, if long-range statistics show that out of every 100 people between 20 and 30 years of age, 42 will be alive at age 70, the assumption is that a person between those ages has a 42 percent probability of surviving to the age of 70.
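The frequency interpretation is easy to demonstrate by simulation; this sketch simply throws two virtual dice many times and reports the fraction of 7s:

```python
import random

def fraction_of_sevens(throws=600_000):
    """Throw two fair dice many times; report how often the sum is 7."""
    sevens = sum(
        1
        for _ in range(throws)
        if random.randint(1, 6) + random.randint(1, 6) == 7
    )
    return sevens / throws

print(fraction_of_sevens())  # hovers around 1/6, roughly 0.1667
```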
Mathematical probability is widely used in the physical, biological, and social sciences and in industry and commerce. It is applied in such diverse areas as genetics, quantum mechanics, and insurance. It also involves deep and important theoretical problems in pure mathematics and has strong connections with the theory, known as mathematical analysis, that developed out of calculus.
Outcome: the nature of possibilities that may occur.
e.g. tossing a coin has 2 possible outcomes, which are the appearance of heads or tails
Event: the occurrence of any one of the possible outcomes
e.g. if a coin is tossed 10 times, the event takes place a total of 10 times.
Frequency: the number of times that an outcome occurs
e.g. number of times coin lands with heads, and number of times coin lands with tails.
Tree diagram: lists the possible outcomes of an experiment.
e.g. a tree diagram for a coin toss shows the two possible outcomes, heads and tails, as branches.
Equally likely: outcomes that have the same chance of occurring. e.g. when a coin is tossed, the chances of a head and the chances of a tail showing are the same. |
A 60-minute lesson in which students will explore some commonly used idioms. Unit 6: Idioms, lesson 5 of 5. Objective: students will be able to demonstrate an understanding of word relationships and nuance in word meanings by illustrating their favorite idiom in literal and non-literal translations, as well as using it appropriately in a sentence.
Unit overview and description (ELA L.5.5b): recognize and explain the meaning of common idioms. Idiom unit 1, negotiations: "We met with representatives from the other company for over 4 hours yesterday. Jerry didn't waste any time – he took the bull by the horns." Idiom worksheet 1 is available in RTF and PDF formats, with a preview, an answer key, and an ereading version. Idiom worksheet 2 gives your students more exposure to idiomatic phrases: it contains 15 more idioms, and students determine the meaning of each expression based on the context. The one hundred and fourteen two-page units that follow present the idioms in a variety of ways. The first section, Areas of Metaphor, contains units which practice idioms from subject areas such as "time is money" (unit 1) or "people are liquid" (unit 12). Here are a few great anchor charts on idioms from around the web, such as the one from Book Units Teacher (image only – do you know the origin?). We have an idiom dress-up day when we are finished with this unit.
Recall questions: 1. The word "idiom" comes from the Greek word "idios", which translates to: (a) one's own, (b) foolish, (c) goodbye. 2. Idioms are... This is a unit on idioms that I did with my third grade class for American Education Week; it is 20 pages, and I will briefly explain them below. Reading Wonders grade 4, unit 1 overview, spelling words: sandwich, major, evening, shiny, mold, clamped, display, feline, climb, toll.
200 useful idioms and phrases: learn common English idioms used in work, business and everyday life (rated 4.8 from 20 ratings; idioms and phrases unit 1, 20 lectures, 34:40). Example idioms: "He won't stop jumping down my throat." "The world is your oyster." "He is feeling under the weather."
Unit 6, idioms about money ("money makes the world go round"): it costs an arm and a leg; to be well off; to make ends meet; we live from hand to mouth; our drinks were on the house. Contents: unit 1, idioms from colors; unit 2, idioms from food; unit 3, idioms from numbers; unit 4, idioms from parts of the body; unit 5, idioms from people.
Figurative language: idioms about eggs. Idioms can be tricky for young learners and problematic for ELL students; helping your students understand idioms will increase their vocabularies and enrich their writing. Through class discussions, introduce your students to common idioms. There is also an English test titled "Idioms with Colours" for online English learners at the advanced level. Unit 3, lesson 21, dialect and idioms. Objective: students will compare the varieties of English in texts and videos. Station #1, dialect: discuss and develop a working definition of dialect with your group. "Idioms to Talk About" groups idioms according to the topic area that they are used to talk about; thus, "to be snowed under" [to have an enormous amount of work to do] is included in unit 25, Work.
Idioms for Everyday Use (ISBN 0844207497; OCLC 33601142) is the basic text for learning and communicating with English idioms; the teacher's manual runs 70 pages. Figurative language/idioms, grade 4, unit 4: the teacher should be prepared with a list of idioms (from the list or others) and construction paper so that the students are able to compile the class book. |
To understand viruses, it may help to consider the French emperor Napoleon Bonaparte. In the early 19th century, Bonaparte invaded much of Europe in order to establish French dominance over the continent. He's also known for being somewhat short in stature (however unfair that reputation may be).
Like our idea of Napoleon, viruses are very small -- 100 times smaller than the average bacterium, so small that they can't be seen with an ordinary microscope. Because they're not cellular structures, viruses lack the ability to replicate on their own and can exert influence only by invading a cell. They are merely tiny packets of DNA or RNA genes enfolded in a protein coating, on the hunt for a cell they can dominate.
Viruses can infect every living thing -- from plants and animals down to the smallest bacterium. For this reason, they always have the potential to be dangerous to human life. Still, they don't become truly treacherous until they infect a cell within the body. This infection can happen several ways: by air (thanks to coughing and sneezing), via carrier insects like mosquitoes, or by transmission of body fluids such as saliva, blood or semen.
Once a virus infects a cell, it tries to take over its host completely, much as Napoleon spread the French influence with every country he fought. A virus lodged in a cell replicates and reproduces as much as possible; with each new replication, the host cell produces more viral material than it does normal genetic material. Left unchecked, the virus will cause the death of the host cell. Viruses will also spread to nearby cells and begin the process again.
The human body does have some natural defenses against a virus. A cell can initiate RNA interference when it detects viral infection, which works by decreasing the influence of the virus's genetic material in relation to the cell's usual material. The immune system also kicks into gear when it identifies a virus, producing antibodies that bind to the virus and render it unable to replicate, and releasing T-cells, which work to kill the virus. Antibiotics have no effect on viruses, though vaccinations will provide immunity.
Unfortunately for humans, some viral infections outpace the immune system. Viruses can evolve much more quickly than the immune system can, which gives them a leg up in uninterrupted reproduction. And some viruses, such as HIV, work essentially by tricking the immune system. Viruses cause many diseases, including colds, measles, chicken pox, HPV, herpes, rabies, SARS and the flu. Though they're small, they pack a big punch -- and they can only sometimes be sent into exile. |
Parental jokes about selective hearing aside, discovering that your child can’t hear is no laughing matter. Some of the reasons children experience hearing loss can be corrected with medication or surgery while others result in permanent hearing damage.
Ear health professionals classify hearing loss into two categories: conductive and sensorineural. Conductive hearing loss is often treatable with medicine or surgery while sensorineural hearing loss is typically permanent.
Conductive hearing loss, caused when sound travel is obstructed in the outer and middle ear, is considered the most frequent reason for hearing loss in children. Acquired conductive hearing loss occurs after birth as a result of an abnormality or disease. The common cold or ear infection, which often produces fluid in the ear, is a good example of this condition. Earwax is another well-known culprit. If you suspect your child's hearing is affected as a result of one of these scenarios, consult your pediatrician immediately. Both are typically temporary and can be corrected with medication when identified and treated early.
Congenital conductive hearing loss is caused by an anatomical abnormality of the outer and/or middle ear. Depending upon the nature of the abnormality, your pediatrician may recommend waiting until your child is at least three years of age before attempting to surgically correct the problem.
Sensorineural hearing loss occurs when hair cells in the cochlea or the hearing nerve in the inner ear are damaged. Sensorineural hearing loss can occur during pregnancy or after birth. Although this type of hearing loss is permanent, hearing aids and other devices can help children hear well enough to develop language skills in most cases.
Congenital sensorineural hearing loss occurs during pregnancy. Common causes for this type of impairment include viral infections such as Rubella, genetic problems, premature birth and other complications.
Possible causes of acquired sensorineural hearing loss include childhood diseases or illnesses, such as chicken pox, measles, and encephalitis. Mumps is the most common cause of one-sided deafness in the United States. A traumatic head injury can also cause permanent hearing loss.
One of the newest, and most preventable, forms of sensorineural hearing loss in children is related to loud noise in their environment. The Centers for Disease Control (CDC) estimate that 5.2 million children and adolescents aged 6-19 years have suffered permanent hearing loss as the result of noise induced hearing loss (NIHL).
The Occupational Safety and Health Administration (OSHA) sets 85 decibels (dB) as a safe level for noise, yet maximum sound from an iPod shuffle is 115 dB and levels at rock concerts sometimes reach intensities of more than 120 dB.
Music isn't the only culprit for this growing problem, however. Even prolonged exposure to everyday items such as gas-powered lawn mowers or loud motorcycles can permanently affect a child's hearing. You can protect your child from NIHL by teaching him to wear ear protection in noisy environments, to cover his ears and move away from loud noises, and to listen to his music at safe levels.
More than 24,000 children are born with hearing loss in the United States every year. If you suspect your child is having trouble hearing, experts encourage you to get help immediately. According to the California Ear Institute, children with untreated hearing loss take twice as many trips to the emergency room, are ten times more likely to be held back in grade school, and stand a greater risk of being misdiagnosed with ADHD. |
Anniversaries usually celebrate successes, but for a change of pace, we’ll celebrate a series of failures, which sometimes are better teachers.
The Ranger series of space probes, launched 51 years ago at what would seem to be a straightforward target—the moon—failed six times to perform their mission before finally succeeding on the seventh try. That record caught the attention of Congress, resulted in a sweeping reorganization of the newly created National Aeronautics and Space Administration, and reformed spacecraft design and preparation for flight.
The moon became an early target for space scientists doodling on their calendars because it was the closest and most obvious target as well as the simplest mission to another celestial body to achieve. But when President John F. Kennedy’s 1962 speech made a human landing on the moon before the decade’s end a national goal, the doodles got serious. For one thing, the problem of landing astronauts safely required a thorough scouring of the moon’s rugged surface to find a suitable spot. Earthbound telescopes took pretty pictures, but this would require a much closer look. And that meant a spacecraft.
It’s worth recalling that in the early days of the Space Age, Americans became inured to failure. In 1957, the Vanguard rocket that was intended to put the first U.S. satellite in orbit blew up on the pad while millions watched on television. News cameras perched across from Cape Canaveral filmed supposedly secret tests of rocket boosters, filling home TV screens with vehicles exploding like fireworks.
Ranger began in 1959, before Kennedy’s speech launched the Apollo program, as a way to catch up with the Soviets’ Luna probes; Luna 3 had already orbited the moon and photographed most of its surface. The Jet Propulsion Laboratory in Pasadena, California, had developed the Ranger spacecraft for NASA, and the program was repurposed for Apollo to find a landing zone and to achieve the seemingly simple goal of sending cameras moonward to transmit a lot of images, then hit the surface so a seismic sensor could measure the resiliency of the lunar crust. Ranger 1 established the basic spacecraft configuration: a pair of wing-like solar arrays, a high-gain dish antenna, and a tower holding scientific sensors and instruments.
The opening launch was an omen: The first countdown was delayed, then postponed. A leak stopped the second, and a bad valve aborted the third. On the fourth, the spacecraft, while still on the pad, began to unfold as it would have in orbit. Finally, on August 22, 1961, it was launched into a parking orbit. But its Agena engine failed to restart, and the satellite tumbled in low Earth orbit until on August 30 it burned up on reentry.
Ranger 2 suffered the same fate in November.
Ranger 3 missed the moon by more than 20,000 miles and is still in orbit around the sun.
Ranger 4 hit the far side of the moon in April 1962, but its solar panels had failed to deploy; unpowered, it delivered no images.
Ranger 5 had a power failure and joined Ranger 3 in orbit. A resulting NASA inquiry labeled JPL’s proving practices as “shoot and hope,” and the Ranger design was scrapped. Blame went to the practice of heating the probe to sterilize it; JPL’s management practices were roasted, personnel were fired, and procedures were overhauled.
Two years and many sleepless nights later, the new and improved Ranger 6 hit the moon but sent home zilch. After more investigations and management shakeups, on July 31, 1964, a redesigned television system aboard Ranger 7 delivered extremely expensive images. In the press room, the end of the mission was narrated thus: “One minute to impact…. Excellent…. Excellent…. Signals to the end…. IMPACT!” |
I will be talking to you about the Virginia Plan and why it was important for the delegates to compromise. The Virginia Plan would divide the government into three branches. The Legislative Branch would enact the laws. Another branch was the Executive Branch; its part was to see that the laws were carried out. The third branch was the Judicial Branch, which would see that justice is done under the law. Many opinions were presented on what the government should be, and through compromise the delegates found solutions. That is what the Virginia Plan is. |
An equation is a mathematical statement that has two expressions separated by an equal sign. The expression on the left side of the equal sign has the same value as the expression on the right side.
One or both of the expressions may contain variables. Solving an equation means manipulating the expressions and finding the value of the variables.
An example might be: x = 4 + 8. To solve this equation we would add 4 and 8 and find that x = 12.
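For slightly less trivial equations, a computer algebra system can do the manipulation for us; this sketch assumes the third-party SymPy library is installed:

```python
from sympy import Eq, solve, symbols

x = symbols("x")
print(solve(Eq(x, 4 + 8), x))           # [12]
# The same call handles a variable on both sides of the equal sign:
print(solve(Eq(2 * x + 3, x + 15), x))  # [12]
```

Either way, the value that makes both sides equal is 12. |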
1. Construct a triangle and measure its sides and angles.
2. Construct a second triangle with corresponding sides equal in length.
3. Try to alter the properties of the construction by moving the vertices.
Before the Activity
Each student should have access to a TI-84 Plus, Cabri Jr and a copy of the attached .pdf files.
During the Activity
Explore relationships (including congruence and similarity) among classes of two- and three-dimensional geometric objects, make and test conjectures about them, and solve problems involving them.
After the Activity
Review student results and, as a class, discuss questions that appeared to be more challenging. Re-teach concepts as necessary. |
One of the first lessons that an electronics student learns is that an LED provides light from current flow. But, did you know that an LED put in backwards provides current flow from light? Yes! It’s true.
Don’t believe me?
A multimeter in voltage measurement mode detects voltage in a discrete LED when held close to a light source.
Hook up a high-quality ultra-bright red LED by itself (no battery or other circuitry) to a multimeter in voltage measurement mode. Put the LED against a light source, such as a desk lamp. See the voltage? Now, hide the LED in a dark place. See a decrease in voltage?
An LED (light emitting diode) is a photosensitive semiconductor with a lens. The LED acts as a photodiode.
Photodiodes are used in robots and devices as light sensors. Photodiodes have a spectrum wavelength to which they are most sensitive, usually infrared. But, not surprisingly, a reversed LED is most sensitive to the same color of visible light as it normally emits. For example, if a circuit uses a reversed green LED, the most current will flow from exposure to green light.
Unfortunately, even under the best conditions, photodiodes (and reversed LEDs) don’t provide a lot of current flow. The output of the photodiode needs to be amplified for the light-detection signal to be useful in most circuits. A photodiode amplified by a built-in transistor is called a phototransistor.
You can connect a standalone photodiode to the input of a standalone transistor. But, it isn’t easy to control the gain of a single-transistor amplifier, and there are issues with signal noise and the amount of input current required. Instead, a better method for amplifying low-power signals in a high-quality repeatable way is an op amp chip (operational amplifier).
Putting this all together - a color sensor can be made from a reversed LED and an op amp chip. In fact, TAOS did just that with their TSLR257 (red), TSLG257 (green), and TSLB257 (blue) sensors.
Example schematic for amplifying a photodiode using an op amp.
Sadly, the TAOS TSLx257 family of color sensors has been discontinued. It’s too bad because they were a compact and easy solution.
However, this same type of circuit appears in white papers and technical notes for both National Semiconductor’s and Texas Instrument’s op amps. So, you can build a color sensor circuit using their parts.
Although the circuit will be a lot larger than one integrated into a single component, you'll be able to select specific wavelength sensitivity through your choice of LED color. And, you'll be able to determine the desired amount of signal gain through your choice of feedback resistance.
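As a rough sizing aid, the output of the op amp stage is approximately the photocurrent times the feedback resistance (Vout is roughly Iphoto × Rf). The photocurrent figure in this sketch is an assumed value for illustration only; real reversed-LED currents vary widely with the LED and the light level:

```python
# Rough sizing for the op amp stage: Vout is roughly Iphoto * Rf.
def feedback_resistor(target_vout, photocurrent_amps):
    return target_vout / photocurrent_amps

iphoto = 50e-9                         # assumed ~50 nA from the reversed LED
rf = feedback_resistor(2.5, iphoto)    # aim for a 2.5 V output swing
print(f"Rf = {rf / 1e6:.0f} megohms")  # 50 megohms
```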
On the next page you'll see the complete schematic and solderless breadboard for the reversed LED color sensor. The remainder of the article is devoted to a series of oscilloscope traces showing the photodiode signal in action. These traces tell the story of why certain parts in the circuit improve the accuracy of the digital output and the signal-to-noise ratio on the input. |
Seema Kumar, of Discovery Channel Online, writes that scientists have discovered that the genetic make-up of dolphins is amazingly similar to humans. They’re closer to us than cows, horses, or pigs, despite the fact that they live in the water.
“The extent of the genetic similarity came as a real surprise to us,” says David Busbee of Texas A&M University. He hopes his research will reveal how long ago humans and dolphins branched off the evolutionary tree. There’s been some speculation that dolphins and whales, who breathe air, may have returned to the water AFTER first evolving into land animals.
“Dolphins are marine mammals that swim in the ocean and it was astonishing to learn that we had more in common with the dolphin than with land mammals,” says geneticist Horst Hameister.
Busbee says, “If we can show that humans are similar to dolphins, and anything that endangers dolphins is an equal concern for humans, it may be easier to persuade governments to keep oceans clean.”
There are still many mysteries about the beings who share the earth with us. Humans and dolphins may have much more in common than people think, especially when it comes to genetics.
In a Sea Grant-funded project, Texas A&M University veterinarians are comparing human chromosomes to those of dolphins and are finding that the two share many similarities. The scientists hope to use these similarities to identify and map the genes of dolphins.
Genes are organized into segments along the length of a chromosome – a tightly wound spool of DNA. This spool is made up of two complementary single strands of DNA bound together. Every living thing has a characteristic number of chromosomes, and each chromosome carries different genes. Dolphins have 44 chromosomes, and humans have 46. Dr. David Busbee and his team applied human "paints," fluorescently labeled pieces of human chromosomes, to dolphin chromosomes on microscope slides. Scientists broke open dolphin cells, releasing chromosomes onto slides. The dolphin chromosomes were then treated with labeled human chromosome pieces, providing the opportunity for complementary DNA strands to match up.
When scientists examined the photos taken with a fluorescence microscope, they found dolphin chromosomes fluorescently tagged with the labeled, or “painted,” pieces of human chromosomes and concluded that dolphins hold many of the same chromosomes as humans. “We started looking at these and it became very obvious to us that every human chromosome had a corollary chromosome in the dolphin,” Busbee said. “We’ve found that the dolphin genome and the human genome basically are the same. It’s just that there’s a few chromosomal rearrangements that have changed the way the genetic material is put together.”
Dolphins have been viewed as somehow magical for millennia by humans. They’re one of the only animals that appear to play, leaping out of the water and doing tricks, and the bottlenose dolphin even seems to grin widely at everything. It was inevitable that such a remarkable animal also collected a remarkable mythology that extends through today.
The first documented culture that seems to have mythology associated with the dolphin was the Minoan, a seafaring people in the Mediterranean. They left few written records, but they did leave beautiful murals on the walls of their palaces, murals that show the importance of dolphins in their mythology.
The later Greeks strongly associated dolphins with Poseidon, which probably explains why the sea god was so often depicted surrounded by dolphins. In one myth about Poseidon, dolphin messengers were sent to bring him a nymph he loved, whom he later married. As a reward, he set the dolphin in the sky as a constellation. And he was constantly accompanied by dolphins among other sea creatures.
This wasn’t the last time the Greeks associated dolphins with romance. Aphrodite is often depicted with dolphins, riding them or being accompanied by them. Later, the god Dionysus transformed the way dolphins were perceived in Greek literature. He was set upon while at sea by a band of pirates. Instead of simply destroying the sea raiders, he transformed them into a pod of dolphins, charging them to rescue any distressed sailors in the ocean.
Pirates transforming into dolphins. Drawing from an Etruscan Black Figure Hydria, 510-500 BC
Special thanks to Larry Lawhorn. |
Teenagers, or adolescents, will be able to develop alongside their age mates and reason properly. This helps them become more responsible, accountable and generally reasonable members of society. They know how to relate to others, understand the challenges in society and make realistic decisions. By supporting your child from an early age, you assist the child in developing and growing steadily in cognitive development. This also helps the parents or teacher know more about the child and create an environment where the child feels accepted and understood, so it is easier for the teenager to open up even more (Fieldman and Elliott, 2006).
The adolescent period can be very challenging and confusing if the teenager is not properly advised and prepared for it. Teenagers should be prepared in advance for the challenges ahead of them by their parents and by other people in society, such as the school and church. They should be involved in activities that open their eyes to what they will face as time progresses. These activities and programs should also help them make friends and exchange ideas with other teenagers from different social backgrounds, races and religions, so that they can accept themselves and know how to relate with others in the same age bracket. These activities and programs will also help them recognize who they are and what they want to be in future. |
Asia is a large and seismically active continent. In addition, it has the largest human population of any continent, so it's not surprising that many of Asia's worst natural disasters have claimed more lives than any others in history. Learn here about the most devastating floods, earthquakes, tsunamis, and more that have hit Asia.
Note: Asia has also witnessed some disastrous events that were similar to natural disasters, or began as natural disasters, but were created or exacerbated in large part by government policies or other human actions. Thus, events like the 1959-1961 famine surrounding China's "Great Leap Forward" are not listed here, because they were not truly natural disasters.
1. 1876-79 Famine | North China, 9 million dead
After a protracted drought, a serious famine hit northern China during the late Qing Dynasty years of 1876-79. The provinces of Henan, Shandong, Shaanxi, Hebei, and Shanxi all saw massive crop failures and famine conditions. An estimated 9,000,000 or more people perished due to this drought, which was caused at least in part by the El Niño-Southern Oscillation weather pattern.
2. 1931 Yellow River Floods | Central China, 4 million
In waves of flooding following a three-year drought, an estimated 3,700,000 to 4,000,000 people died along the Yellow River in central China between May and August of 1931. The death toll includes victims of drowning, disease, or famine related to the flooding.
What caused this horrific flooding? Soil in the river basin was baked hard after years of drought, so it could not absorb the run-off from record-setting snows in the mountains. On top of the melt-water, the monsoon rains were heavy that year, and an incredible seven typhoons lashed central China that summer. As a result, more than 20,000,000 acres of farmland along the Yellow River was inundated; the Yangtze River also burst its banks, killing at least 145,000 more people.
3. 1887 Yellow River Flood | Central China, 900,000
Flooding beginning in September of 1887 sent the Yellow River (Huang He) over its dikes, inundating 130,000 sq km (50,000 sq miles) of central China. Historical records indicate that the river broke through in Henan Province, near Zhengzhou city. An estimated 900,000 people died, either by drowning, disease, or starvation in the aftermath of the flood.
4. 1556 Shaanxi Earthquake | Central China, 830,000
Also known as the Jiajing Great Earthquake, the Shaanxi Earthquake of January 23, 1556 was the deadliest earthquake ever recorded. (It is named for the reigning Jiajing Emperor of the Ming Dynasty.) Centered in the Wei River Valley, it impacted parts of Shaanxi, Shanxi, Henan, Gansu, Hebei, Shandong, Anhui, Hunan, and Jiangsu Provinces, and killed around 830,000 people.
Many of the victims lived in underground homes (yaodong), tunneled into the loess; when the earthquake struck, most such homes collapsed onto their occupants. The city of Huaxian lost 100% of its structures to the quake, which also opened vast crevasses in the soft soil and triggered massive landslides. Modern estimates of the Shaanxi Earthquake's magnitude put it at just 7.9 on the Richter Scale - far from the most powerful ever recorded - but the dense populations and unstable soils of central China combined to give it the largest death toll ever.
5. 1970 Bhola Cyclone | Bangladesh, 500,000
On November 12, 1970, the deadliest tropical cyclone ever struck East Pakistan (now known as Bangladesh) and the state of West Bengal in India. In the storm surge that flooded up the Ganges River Delta, some 500,000 to 1 million people would drown.
The Bhola Cyclone was a category 3 storm - the same strength as Hurricane Katrina when it struck New Orleans, Louisiana in 2005. The cyclone produced a storm surge 10 meters (33 feet) high, which moved up the river and flooded surrounding farms. The government of Pakistan, located 3,000 miles away in Karachi, was slow to respond to this disaster in East Pakistan. In part because of this failure, civil war soon followed, and East Pakistan broke away to form the nation of Bangladesh in 1971.
6. 1839 Coringa Cyclone | Andhra Pradesh, India, 300,000
Another November storm, the November 25, 1839 Coringa Cyclone, was the second-most deadly cyclonic storm ever. It struck Andhra Pradesh, on India's central east coast, sending a 40-foot storm surge onto the low-lying region. The port city of Coringa was decimated, along with some 25,000 boats and ships. Approximately 300,000 people died in the storm.
7. 2004 Indian Ocean Tsunami | Indian Ocean Basin, 230,000
On December 26, 2004, a 9.1 magnitude earthquake off the coast of Indonesia triggered a tsunami that rippled across the entire Indian Ocean basin. Indonesia itself saw the most devastation, with an estimated death toll of 168,000, but the wave killed people in thirteen other countries around the ocean rim, some as far away as Somalia.
The total death toll likely was in the range of 230,000 to 260,000. India, Sri Lanka, and Thailand were also hard-hit, and the military junta in Myanmar (Burma) refused to release that country's death toll.
8. 1976 Tangshan Earthquake | Hebei, China, 242,000
A magnitude 7.8 earthquake struck the city of Tangshan, 180 kilometers east of Beijing, on July 28, 1976. According to the Chinese government's official count, about 242,000 people were killed, although the actual death toll may have been closer to 500,000 or even 700,000.
The bustling industrial city of Tangshan, pre-earthquake population 1 million, was built on alluvial soil from the Luanhe River. During the earthquake, this soil liquefied, resulting in the collapse of 85% of Tangshan's buildings. As a result, the Great Tangshan Earthquake was one of the deadliest quakes ever recorded. |
Summary of site:
The five granites of the Mourne Mountains were intruded into their enclosing Silurian sediments (called country rock around major intrusions) between 56 and 51 million years ago by a process called cauldron subsidence. There were two centres of activity: the earlier, in the east, emplaced granites 1 to 3; the later, in the west, granites 4 and 5. Cauldron subsidence is a process in which massive cylindrical blocks, kilometres across, subside relatively quietly into the molten mass below. This locality suggests that the picture may be somewhat more complicated than that explanation implies, and that emplacement in some places could have been more violent.
Downstream of the Spelga Dam, a substantial dyke of the fourth granite can be seen to penetrate the country rock. The granite in the dyke exploits the bedding planes and joints, detaching masses up to several metres long that are now suspended in the granite (xenoliths). The contacts between the xenoliths and the granite are everywhere sharp.
Such destruction of wall and roof rocks during injection of large bodies of molten rock, to create space for them to occupy, is called stoping, considered to be a relatively violent process. It would be helpful if more sections of granite/country rock roof contacts could be seen but in the present state of erosion there is little of these areas left to explore. It is, however, possible from the evidence at this locality, that stoping played a more important role in emplacing the granites than is generally believed.
The structure of the dam covers part of this exposure and the volume of overflow controls access to the rest. |
Graptolites were colonial animals; numerous individuals lived in cup-like structures called thecae attached to a common thread or stipe. The skeleton itself was made up of two layers of chitin. In the earliest graptolites – the Dendroidea – there were two different types of cup and a thin chitinous thread called the stolon ran throughout the colony. The stolon was possibly a similar structure to the notochord which would relate the graptolites to the chordates. Their colonies were at first large, and branched extensively, but later they tended to become simpler.
The other important group, the Graptoloidea, were without chitinous stolons but are thought to have evolved from Dendroid forms. Further, only one type of cup was present, though the shape of this varied enormously within the group. Branching was not so extensive as in the Dendroidea; many forms in fact did not branch at all.
The graptolites made very rapid changes in shape and form as they evolved and are consequently good fossils for dating the rocks. For instance, sediments formed at one point in time will contain a type of graptolite distinct from those found in sediments deposited shortly afterwards. Moreover, most graptolites drifted in open waters. Thus their remains are scattered over a wide area and enable one outcrop of rock to be directly related in age to another, hundreds of kilometers away. Unfortunately for the fossil record, the group became extinct toward the end of the Silurian period. |
Digital scales work with the use of a strain gauge load cell. Whereas analog scales use springs to indicate the weight of an object, digital scales convert the force of a weight to an electric signal. The key components consist of a strain gauge, a device used to measure the strain of an object, and a load cell sensor, an electronic device used to convert a force into an electrical signal. A load cell is also known as a force transducer.
Bending the Load Cell
When an item is placed on the scale, the weight is first evenly distributed. Under the flat tray of a digital scale you might find, for example, four slightly raised pegs in the corners that serve to distribute the force of the weight evenly. The mechanical design of the digital scale then applies the force of the weight to one end of a load cell. As the weight is applied, that end of the load cell bends downwards.
Deforming the Strain Gauge
The force of a weight then deforms the strain gauge. The strain gauge can consist of metal tracks, or foil, bonded to a printed circuit board or other backing. When the metal foil is strained, the backing flexes or stretches.
Conversion to Electric Signal
The strain gauge then converts the deformation to an electrical signal. Because a voltage is applied across the strain gauge, its electrical resistance changes as it flexes, and the resulting small change in resistance becomes an electrical signal. The signal is run through an analog-to-digital converter, and then passes through a microchip that "translates" the data. As a result of this final calculation, numbers indicating the weight of the object appear on the LCD display of the digital scale.
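The "translation" step amounts to a linear calibration: record the ADC counts with nothing on the scale and with a known weight, then interpolate. All numbers in this Python sketch are made up for illustration:

```python
# Linear calibration from raw ADC counts to grams; all numbers invented.
def calibrate(zero_counts, loaded_counts, known_weight_g):
    """Return a function that maps raw ADC counts to grams."""
    counts_per_gram = (loaded_counts - zero_counts) / known_weight_g
    return lambda counts: (counts - zero_counts) / counts_per_gram

to_grams = calibrate(zero_counts=8_400,      # empty scale (tare)
                     loaded_counts=92_400,   # with a known 500 g weight
                     known_weight_g=500.0)
print(to_grams(50_400))  # 250.0
```

With a second known weight you could also check that the sensor responds linearly. |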
SPOTLIGHT ON THE ENDANGERED SPECIES ACT
The Endangered Species Act (ESA) is a United States law, passed in 1973. Its purpose is to conserve threatened and endangered animals and plants and the ecosystems on which they depend. The ESA is regarded as one of the strongest environmental laws in the world.
Species in need of conservation measures are placed on one of two lists: “endangered,” in danger of extinction throughout all or a significant part of its normal range; or “threatened,” likely to become an endangered species in the foreseeable future.
Two federal agencies, the Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS), are in charge of listing species, preparing plans to help them recover, and enforcing the law. The ESA's goal is recovery of all listed species, to the point where they are no longer in need of special protection.
The law prohibits “taking” a listed species. “Take,” as defined in the ESA includes kill, shoot, wound, hunt, capture, harm, and harass. Court decisions have held that destroying habitat which injures or kills a species is also included. The law provides for both civil and criminal penalties for violations.
The ESA has become controversial because it sometimes results in restrictions on commercial development or other economic activities to protect species or their habitat. Congress has been considering amendments to the law since 1992, but political leaders have been unable to agree on what changes should be made.
As of mid-1996, there were 960 domestic species (those occurring in the United States) on the threatened and endangered lists, over half of them plants. |
This post was written by Nicole Emanuel, a recent graduate of Macalester College and a former ABG employee and current volunteer!
Beer and botany—what could be a better pairing? Leading up to our July 13 Beer in the Garden event, we’ll be sharing some information about the biology of plants used by brewers!
First up is the most famous of all plants associated with beer: Humulus lupulus, also known as the common hop plant. It is a climbing plant similar to a vine, which often grows to about 25 feet in height. The female flowers of H. lupulus are the part of the plant that are actually known as “hops,” and that are harvested for use in beverage production. Male and female flowers are wind pollinated and typically grow on separate plants in this species. The hop plant is a perennial that sends up new growth each spring. It is native to North America, Europe, and Asia.
Hops act as flavoring and preservative agents. As most beer enthusiasts know, hops contribute bitterness to the taste and aroma of beer. Over the years, humans have bred different varieties of hops to cultivate desired traits. Some plants are bred to have especially abundant flowers, or to thrive with shorter hours of daylight. Other cultivars are prized for their flavor profiles, which can give a brew notes of “citrusy,” “grassy,” or “earthy” taste. In addition to the crucial role hops play in shaping the savor and smell of beers, they also help to preserve the beverage by killing microorganisms that could cause it to spoil. Brewers have been exploiting hops’ anti-microbial properties for hundreds of years. In fact, prior to the 9th Century, people made beer with a wide range of herbs and flowers (including dandelion and marigold). It is thought that once the preservative effects of hops were noted, the plant gained popularity and replaced other ingredients.
And it’s not just humans who benefit from the power of hops: scientists from the USDA and the Carl Hayden Bee Research Center recently concluded that H. lupulus might be useful in preventing colony collapse in honey bees! They found that hop beta acids fight pests such as varroa mites, which can attack the health of bee populations. Hives treated with acids isolated from hops were protected from mites, without showing any harm to their bee residents. (http://environment-review.yale.edu/beer-hops-beneficial-honey-bees-0)
Beloved by humans and bees alike, Humulus lupulus is a truly wonderful plant. Check out the specimen growing in our lower perennial garden! |